AI worry isn't anything new. We worry about artificial intelligence taking jobs, about toys that seem too real to our kids, about mass surveillance of our every move. But Anthropic's warning about its own product is bigger than any one of those problems. It is a call from inside the house that disaster is hiding right around the corner.
"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout — for economies, public safety, and national security — could be severe."
Why on Earth would you build something you believed had a 25% chance of wiping out your entire species? Or even a 5% chance? I don't know about you, but to me that sounds like a pretty stupid thing to do!
Like almost everyone in the AI model-making industry, Anthropic's employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later.