Thinking about the Dangers of AI
Here are some useful frameworks for thinking about the dangers posed by AI
Many people fear AI, but it’s an amorphous, undifferentiated fear. The best way to banish undifferentiated fear is to be specific: what exactly should we be worried about? One of the ways I do this with a complex, rapidly developing topic is to build some frameworks. The purpose of a framework isn’t to provide you with a definitive answer; it’s to seed your thinking by giving you some structure to work with. So, with this goal in mind, let’s dig in.
The first two frameworks deal with existential threats to humanity; the rest deal with events in the ongoing perma-crisis.
Framework 1: A Technological Singularity
When Musk (and many others) talk about the existential danger posed by AI, they are referring to an AI with superhuman intelligence. Such an AI could easily wipe us out if it chose to.