Anthropic says its new model is too dangerous to release; there are reasons to be skeptical, but to the extent Anthropic is right, that raises even deeper concerns.
Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout — for economies, public safety, and national security — could be severe.
Why on Earth would you make something that you thought had a 25% chance of wiping out your entire species? Or even a 5% chance? I don't know about you, but to me that sounds like a pretty stupid thing to do!