Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems will shape culture on an even grander scale, ultimately becoming “the personality of most of the world’s workforce.”
So… should they be designed to push us towards the better angels of our nature? Or simply do as we ask? Will MacAskill, philosopher and senior research fellow at Forethought, has been thinking through that question and the other thorniest issues that come up in designing an AI's personality.
He’s also been exploring how we might coexist peacefully with the ‘superintelligent AI’ companies are racing to build. He concludes that we should train such systems to be very risk averse, pay them for their work, and build institutions that enable humans to make credible contracts with AIs themselves.
Will and host Rob Wiblin also discuss what a good world after superintelligence would actually look like — a subject that has received surprisingly little attention from the people working to make it. Will argues that we shouldn’t aim for a specific utopian vision: we don’t know enough about what the best possible future actually is to aim directly for it, and trying to lock in today’s best guesses forever risks baking in errors we can’t yet see.
Will and Rob explore what we can do to steer towards a good future instead, along with why a coalition of democracies building superintelligence together is safer than any single actor, how absurdly useful ChatGPT is for analytic philosophy, and more.
Learn more, video, and full transcript: https://80k.info/wm26
This episode was recorded on February 6, 2026.
Chapters:
Cold open (00:00:00)
Will MacAskill is back — for a 6th time! (00:00:29)
AIs’ “character” could be vital to securing a good future (00:00:59)
The panic over sycophancy is justified (00:07:54)
How opinionated should AI be about ethics? (00:12:59)
Commercial pressures won’t fully determine AI character (00:29:38)
Risk-averse AI would rather strike a deal than attempt a coup (00:36:46)
A coalition of democracies building superintelligence is safer than one doing it alone (01:06:40)
How selfish agents could fund the common good (01:19:13)
Why not push for pausing AI development? (01:38:39)
Effective altruism is making a comeback post-SBF (01:48:18)
EA in the age of AGI (01:56:15)
Viatopia: an alternative to utopia (02:05:08)
The least bad alternative to total utilitarianism? (02:34:42)
How AI could kickstart a golden age of philosophy (02:58:03)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Alex Miles
Production: Elizabeth Cox, Nick Stockton, and Katy Moore