Learn why AI safety matters
One of the most important things you can do to help with AI alignment and the existential risk (x-risk) that superintelligence poses is to learn about it. Here are some resources to get you started.
On this website
- Risks. A summary of the risks of AI.
- X-risk. Why AI is an existential risk.
- Takeover. How AI could take over the world.
- Quotes. Quotes on AI risks and governance.
- Feasibility of a Pause. The feasibility of a pause in AI development.
- Building the Pause button. What it takes to pause AI.
Websites
- The Compendium. A highly comprehensive bundle of knowledge on why the current AI race is so dangerous, and what we can do about it.
- A Narrow Path. A detailed plan of the steps we need to take to increase our odds of surviving the coming decades.
- AISafety.com & AISafety.info. The landing pages for AI safety. Learn about the risks, communities, events, jobs, courses, ideas for mitigating the risks, and more!
- AISafety.dance. A more fun, friendly, and interactive introduction to catastrophic AI risks!
- AISafety.world. A map of the whole AI safety landscape: organizations, media outlets, forums, blogs, and other actors and resources.
- IncidentDatabase.ai. A database of incidents where AI systems caused harm.
Newsletters
- PauseAI Substack: Our newsletter.
- Transformer News: A comprehensive weekly newsletter on AI safety and governance.
- Don’t Worry About The Vase: A newsletter about AI safety, rationality, and other topics.
Videos
- Introduction to AI Risks is a YouTube playlist we compiled. It features videos from 1 minute to 1 hour long, in various formats and from diverse sources, and requires no prior knowledge.
- Robert Miles’ YouTube videos are a great place to start understanding the fundamentals of AI alignment.
Podcasts
- Future of Life Institute | Connor Leahy on AI Safety and Why the World is Fragile. Interview with Connor about AI safety strategies.
- Lex Fridman | Max Tegmark: The Case for Halting AI Development. Interview that dives into the details of our current dangerous situation.
- Sam Harris | Eliezer Yudkowsky: AI, Racing Toward the Brink. Conversation about the nature of intelligence, different types of AI, the alignment problem, Is vs Ought, and more. One of many episodes Making Sense has on AI safety.
- Connor Leahy, AI Fire Alarm. Talk about the intelligence explosion and why it would be the most important thing that could ever happen.
- The 80,000 Hours Podcast recommended episodes on AI. Not 80k hours long, but a compilation of episodes of The 80,000 Hours Podcast about AI safety.
- Future of Life Institute Podcast episodes on AI. All of the FLI Podcast episodes on the future of artificial intelligence.
Podcasts featuring PauseAI members can be found in the media coverage list.
Articles
- The ‘Don’t Look Up’ Thinking That Could Doom Us With AI (by Max Tegmark)
- Pausing AI Developments Isn’t Enough. We Need to Shut it All Down (by Eliezer Yudkowsky)
- The Case for Slowing Down AI (by Sigal Samuel)
- The AI Revolution: The Road to Superintelligence (by WaitButWhy)
- How rogue AIs may arise (by Yoshua Bengio)
- Reasoning through arguments against taking AI safety seriously (by Yoshua Bengio)
If you want to read what journalists have written about PauseAI, check out the list of media coverage.
Books
- Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World (Darren McKee, 2023). Get it for free!
- The Precipice: Existential Risk and the Future of Humanity (Toby Ord, 2020)
- The Alignment Problem (Brian Christian, 2020)
- Human Compatible: Artificial Intelligence and the Problem of Control (Stuart Russell, 2019)
- Life 3.0: Being Human in the Age of Artificial Intelligence (Max Tegmark, 2017)
- Superintelligence: Paths, Dangers, Strategies (Nick Bostrom, 2014)
- Our Final Invention: Artificial Intelligence and the End of the Human Era (James Barrat, 2013)
Courses
- AGI safety fundamentals (30hrs)
- CHAI Bibliography of Recommended Materials (50hrs+)
- AISafety.training: Overview of training programs, conferences, and other events.
Organizations
- Future of Life Institute, led by Max Tegmark, started the open letter calling for a pause.
- FutureSociety
- Conjecture. Startup working on AI alignment and AI policy, led by Connor Leahy.
- Existential Risk Observatory. Dutch organization that informs the public about x-risks and studies communication strategies.
- Center for AI Safety (CAIS). A research center at the Czech Technical University in Prague.
- Center for Human-Compatible Artificial Intelligence (CHAI), led by Stuart Russell.
- Machine Intelligence Research Institute (MIRI), founded by Eliezer Yudkowsky, doing mathematical research on AI safety.
- Centre for the Governance of AI
- Institute for AI Policy and Strategy (IAPS)
- The AI Policy Institute
- AI Safety Communications Centre
- The Midas Project. Corporate pressure campaigns for AI safety.
- The Human Survival Project
- AI Safety World. An overview of the AI safety landscape.
If you are convinced and want to take action
There are many things that you can do. Writing a letter, going to a protest, donating some money, or joining a community is not that hard, and these actions have a real impact. Even when facing the end of the world, there can still be hope and very rewarding work to do.
Or if you still don’t feel quite sure about it
Learning about the psychology of x-risk could help you.