Communication Strategy
How we communicate
- Defer to experts. We are warning people about a scenario so extreme and scary that a gut-level response is to dismiss it as crazy talk. Show the expert polls and surveys. The top three most-cited AI scientists are all warning about x-risk. Deferring to them is a good way to make our case.
- Use simple language. You can show that you understand the technology and have done your homework, but excessive jargon makes people lose interest. We want to reach as many people as possible, so don't overcomplicate your language. Many of the people we want to reach are non-native English speakers, so consider providing translations.
- Show our emotions. Seeing emotions gives others permission to feel them too. We are worried, we are angry, we are eager to act. Showing how you feel can be scary, but in our case we need to. Our message can only be received if the way we deliver it matches what we are saying.
- Emphasize uncertainty. Don't say that AI will take over, or that we will reach AGI in x years. Nobody can predict the future. There is a significant chance that AI will go wrong soon, and that should be enough to act on. Don't let uncertainty become a reason not to act. Refer to the Precautionary Principle and make the point that we should err on the side of caution.
- Make individuals feel responsible. Nobody wants to feel like they have a strong responsibility to make things go well. Our brains steer us away from this, because we all have a deep desire to believe that someone is in charge, protecting us. But there are no adults in the room right now. You need to be the one to do this. Choose to take responsibility.
- Inspire hope. When hearing about the dangers of AI and the current race to the bottom, many of us feel dread, and dread keeps us from acting. Fatalism is comfortable, because a lack of hope means we don't have to work towards a good outcome. This is why we need to emphasize that our case is not lost: AGI is not inevitable, technology has been successfully banned internationally before, and our proposal has broad public support.
No-gos
- No AI-generated visuals. Using AI models is fine for research, ideation, and iterating on ideas, but don't publish AI-generated images or videos. Even though we are not anti-AI, we can easily be labeled hypocrites if we visibly use AI-generated content.
- No partisan politics. We do not push for any political party or ideology. We don’t have opinions on things outside of AI.
- No tactical self-censorship. Some AI governance organizations choose not to say how worried they are, or not to push for the policies they believe are necessary, because they worry about losing credibility. We cannot adopt the same strategy: if we all do, no one is left to speak the truth.
- No rumors. We don’t promote vague or unverified information. We cannot afford to lose credibility by spreading false information.
Narratives that we push
- AI is not just a tool. AI models are not programmed; they are digital brains. We don't understand how they work, we can't predict what they can do, and we can't properly control their behavior.
- AI does not need to be sentient to be dangerous. Being able to experience the world or feel emotions is not a requirement for AI to take dangerous actions. The only thing that matters is capabilities.
- Global race to the bottom. This is not a race to be won. It's not the US vs. China; it's humanity vs. AI. We cannot expect to wield superintelligent AI as a weapon, because we don't know whether it can be controlled at all.
- Existing AI harms will get worse. Deepfakes, job loss, surveillance, misinformation, polarization… Existing AI is already causing harm and we need to acknowledge that. The harms will only get worse with more powerful AI, and we need to Pause AI to prevent that from happening.
- Superhuman AI is not inevitable. It requires hordes of engineers with million-dollar paychecks. It requires highly specialized hardware, created by a handful of monopolies. It requires all of us to sit back and do nothing.
- International regulation is possible. We have collectively protected the ozone layer by banning CFCs, and we have banned blinding laser weapons globally. The centralized AI chip supply chain makes enforcing compute governance very feasible.
Much of our strategy is derived from our values.