Towards public understanding of AI safety and strong laws to stop big tech from building dangerous and overly powerful AI

This is an archive of the work of the Campaign for AI Safety. We merged with the Existential Risk Observatory in 2024, and ERO continues our work.

See our materials:

What can you do to stop AI doom?

As long as companies are allowed to continue unregulated and unmonitored development toward the misguided goal of overly powerful AI, the danger of human extinction is real and imminent.

We are standing at the brink of the abyss, but we do not have to take a step into it.

Act now

Please get involved. This problem will not go away by itself. Your help is needed.