Towards public understanding of AI safety and strong laws to stop big tech from building dangerous and overly powerful AI
This is an archive of the work of the Campaign for AI Safety. We merged with the Existential Risk Observatory in 2024, and ERO continues our work.
See our materials:
What can you do to stop AI doom?
As long as companies are allowed to carry on unregulated and unmonitored development towards the misguided goal of building overly powerful AI, the danger of human extinction is real and imminent. Therefore,
- We, the whole human civilisation, must exercise extreme caution.
- We must indefinitely pause the development of AI capabilities until such development is proven safe (though not necessarily the use of existing AI systems, as long as they are safe and ethical).
- We need to do a lot of work on AI safety and controllability. In fact, all the smart people working on advancing AI capabilities need to switch to working on safety or switch to narrow AI that is unlikely to be dangerous.
- We should activate existing power structures (via treaties, laws, regulations) to achieve these goals.
We are standing at the brink of the abyss, but we do not have to take a step into it.
Act now
Please get involved. This problem will not go away by itself. Your help is needed.