
80,000 Hours Podcast

The 80,000 Hours team

333 episodes

  • 80,000 Hours Podcast

    "AI doesn't work" – the story behind the stat that misled millions

    28 April 2026 | 10 min
    You might have heard that 95% of corporate AI pilots are failing. It was a widely cited AI statistic in 2025, repeated by media outlets and commentators everywhere. It helped trigger a Nasdaq selloff and became a pillar of the "AI is overhyped" case. The problem: the claim that 95% fail is 100% wrong.
    The real finding, once you read the underlying MIT report carefully, points in roughly the opposite direction:
    80% of surveyed companies had never piloted a custom AI tool at all.
    Among the companies that did deploy pilots, a quarter reported success within six months, even by the extremely high bar the researchers set.
    Over 90% of staff at all surveyed companies were using tools like ChatGPT regularly for their work.
    None of that made the headlines. Nor did the fact that the study’s authors are all developing or selling the "agentic AI framework" technology the report recommends as the solution to this supposed epidemic of failing AI.
    Host Rob Wiblin breaks down how an opaque, conflicted, barely scrutinised report carrying the MIT label managed to move markets and shape global opinions on AI’s real-world utility.
    Learn more, video, and full transcript: https://80k.info/mit-ai-study
    This episode was recorded on February 13, 2026.
    Chapters:
    The AI myth that moved global markets (00:00)
    The math was totally wrong (00:52)
    The bar for success was insanely high (01:46)
    The study ignores its own best finding (03:28)
    The sample was tiny (04:49)
    The report wasn’t even available when it went viral (05:54)
    The hidden conflicts of interest (06:58)
    The real lesson (09:28)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Camera operator: Dominic Armstrong
    Production: Nick Stockton, Elizabeth Cox, and Katy Moore
  • 80,000 Hours Podcast

    #242 – Will MacAskill on how we survive the 'intelligence explosion,' AI character, and the case for 'viatopia'

    22 April 2026 | 3 hr 9 min
    Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems will shape culture on an even grander scale, ultimately becoming “the personality of most of the world’s workforce.”
    So… should they be designed to push us towards the better angels of our nature? Or simply do as we ask? Will MacAskill, philosopher and senior research fellow at Forethought, has been thinking through that and the other thorniest issues that come up in designing an AI personality.
    He’s also been exploring how we might coexist peacefully with the ‘superintelligent AI’ companies are racing to build. He concludes that we should train such systems to be very risk averse, pay them for their work, and build institutions that enable humans to make credible contracts with AIs themselves.
    Will and host Rob Wiblin also discuss what a good world after superintelligence would actually look like — a subject that has received surprisingly little attention from the people working to build it. Will argues that we shouldn’t aim for a specific utopian vision: we don’t know enough about what the best possible future actually is to aim directly for it, and trying to lock in today’s best guesses forever risks baking in errors we can’t yet see.
    Will and Rob explore what we can do to steer towards a good future instead, along with why a coalition of democracies building superintelligence together is safer than any single actor, how absurdly useful ChatGPT is for analytic philosophy, and more.

    Learn more, video, and full transcript: https://80k.info/wm26
    This episode was recorded on February 6, 2026.
    Chapters:
    Cold open (00:00:00)
    Will MacAskill is back — for a 6th time! (00:00:29)
    AIs’ “character” could be vital to securing a good future (00:00:59)
    The panic over sycophancy is justified (00:07:54)
    How opinionated should AI be about ethics? (00:12:59)
    Commercial pressures won’t fully determine AI character (00:29:38)
    Risk-averse AI would rather strike a deal than attempt a coup (00:36:46)
    A coalition of democracies building superintelligence is safer than one doing it alone (01:06:40)
    How selfish agents could fund the common good (01:19:13)
    Why not push for pausing AI development? (01:38:39)
    Effective altruism is making a comeback post-SBF (01:48:18)
    EA in the age of AGI (01:56:15)
    Viatopia: an alternative to utopia (02:05:08)
    The least bad alternative to total utilitarianism? (02:34:42)
    How AI could kickstart a golden age of philosophy (02:58:03)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Alex Miles
    Production: Elizabeth Cox, Nick Stockton, and Katy Moore
  • 80,000 Hours Podcast

    Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)

    16 April 2026 | 1 hr 29 min
    Hundreds of prominent AI scientists and other notable figures signed a statement in 2023 saying that mitigating the risk of extinction from AI should be a global priority. At 80,000 Hours, we’ve considered risks from AI to be the world’s most pressing problem since 2016. 
    But what led us to this conclusion? Could AI really cause human extinction? We’re not certain, but we think the risk is worth taking very seriously. 
    In particular, as companies create increasingly powerful AI systems, there’s a concerning chance that:
    These AI systems may develop dangerous long-term goals we don’t want.
    To pursue these goals, they may seek power and undermine the safeguards meant to contain them.
    They may even aim to disempower humanity and potentially cause our extinction.
    This article is written by Cody Fenwick and Zershaaneh Qureshi, and narrated by Zershaaneh Qureshi. It discusses why future AI systems could disempower humanity, what current AI research reveals about behaviours like power-seeking and deception, and how you can help mitigate the dangers.
    You can see the original article — packed with graphs, images, footnotes, and further resources — on the 80,000 Hours website: 
    https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/ 
    Chapters:
    Risks from power-seeking AI systems (00:01:00)
    Introduction (00:01:17)
    Summary (00:03:09)
    Why are the risks from power-seeking AI a pressing world problem? (00:04:04)
    Section 1: Humans will likely build advanced AI systems with long-term goals (00:05:43)
    Section 2: AIs with long-term goals may be inclined to seek power (00:11:32)
    Section 3: These power-seeking AI systems could successfully disempower humanity (00:26:26)
    Section 4: People might create power-seeking AI systems without enough safeguards, despite the risks (00:38:34)
    Section 5: Work on this problem is neglected and tractable (00:47:37)
    Section 6: What are the arguments against working on this problem? (00:59:20)
    Section 7: How you can help (01:25:07)
    Thank you for listening (01:28:56)
    Audio editing: Dominic Armstrong
    Production: Zershaaneh Qureshi, Elizabeth Cox, and Katy Moore
  • 80,000 Hours Podcast

    How scary is Claude Mythos? 303 pages in 21 minutes

    10 April 2026 | 21 min
    With Claude Mythos we have an AI that knows when it's being tested, can obscure its reasoning when it wants, and is better at breaking into (and out of) computers than any human alive. Rob Wiblin works through its 244-page System Card and 59-page Alignment Risk Update to explain why: 
    Mythos is a nightmare for computer security
    It has arrived far ahead of schedule
    It might be great news for alignment and safety
    But 3 key problems mean we can’t take its alignment results at face value
    Mythos isn’t building its replacement yet, probably
    Anthropic staff are, for the first time, kinda scared of Claude
    He's losing sleep
    Learn more & full transcript: https://80k.info/mythos
    This episode was recorded on April 9, 2026.
    Chapters:
    Why people are panicking about computer security (01:05)
    Mythos could break out of containment (04:23)
    Anthropic is losing billions in revenue by not releasing Mythos (06:21)
    Mythos is actually the most aligned model to date, except… (07:48)
    Mythos knows when it’s being tested (09:52)
    Mythos can hide its thoughts (11:50)
    Mythos can’t be trusted about whether it’s untrustworthy (14:02)
    Does Mythos advance automated AI R&D? (17:03)
    Mythos scares Anthropic (19:15)
    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Camera operator: Dominic Armstrong
    Production: Elizabeth Cox, Nick Stockton, and Katy Moore
  • 80,000 Hours Podcast

    Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

    7 April 2026 | 4 hr 6 min
    What does it really take to lift millions out of poverty and prevent needless deaths?
    In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.
    What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.
    Full transcript and links to learn more: https://80k.info/ghd
    Chapters:
    Cold open (00:00:00)
    Luisa’s intro (00:00:58)
    Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
    Economist Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (00:06:55)
    Charity founder Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps (00:14:33)
    Economist Rachel Glennerster on how randomised controlled trials are just one way to better understand tricky development problems (00:19:05)
    Data scientist Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (00:24:36)
    Charity founder Lucia Coulter on the huge, neglected upsides of reducing lead exposure (00:47:48)
    Malaria expert James Tibenderana on using gene drives to wipe out the species of mosquitoes that cause malaria (00:53:11)
    Charity founder Varsha Venugopal on using village gossip to get kids their critical immunisations (01:04:14)
    Rachel Glennerster on solving tough global problems by creating the right incentives for innovation (01:11:31)
    Karen Levy on when governments should pay for programmes instead of NGOs (01:26:51)
    Open Philanthropy lead Alexander Berger on declining returns in global health, and finding and funding the most cost-effective interventions (01:29:40)
    GiveWell researcher James Snowden on making funding decisions with tricky moral weights (01:34:44)
    Lucia Coulter on “hits-based giving” approaches to funding global health and development projects (01:43:01)
    Rachel Glennerster on whether it’s better to fix problems in education with small-scale interventions versus systemic reforms (01:48:12)
    GiveDirectly cofounder Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (01:51:09)
    Sarah Eustis-Guthrie on whether more charities should scale back or shut down, and aligning incentives with beneficiaries (01:56:12)
    James Tibenderana on why we need loads better data to harness the power of AI to eradicate malaria (02:11:22)
    Lucia Coulter on rapidly scaling a light-touch intervention to more countries (02:20:14)
    Karen Levy on why pre-policy plans are so great at aligning perspectives (02:32:47)
    Rachel Glennerster on the value we get from doing the right RCTs well (02:40:04)
    Economist Mushtaq Khan on really drilling down into why “context matters” for development work (02:50:13)
    GiveWell cofounder Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (02:57:24)
    James Tibenderana on whether people actually use antimalarial bed nets for fishing — and why that’s the wrong thing to focus on (03:05:30)
    Karen Levy on working with governments to get big results (03:10:53)
    Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (03:17:38)
    Karen Levy on working with academics to get the best results on the ground (03:29:03)
    James Tibenderana on the value of working with local researchers (03:32:15)
    Lucia Coulter on getting buy-in from both industry and government (03:35:05)
    Alexander Berger on reasons neartermist work makes sense even by longtermist standards (03:39:26)
    Economist Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (03:47:42)
    J-PAL lead Claire Walsh on her career advice for young people who want to get involved in global health and development (03:55:20)
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore

About 80,000 Hours Podcast

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.