In this letter, I'd like to explore why some people who are knowledgeable about AI take extreme positions on AI "safety," warning of human extinction and describing scenarios, such as AI deciding to "take over," that are based less on science than on science fiction. As I wrote in last year's Halloween edition, exaggerated fears of AI cause real harm, and I'd like to share my observations on the psychology behind some of the fear mongering.

Consider the incentives at play. Companies that are training large models have pushed governments to place heavy regulatory burdens on competitors, including open source/open weights models. A few enterprising entrepreneurs have used the supposed dangers of their technology to gin up investor interest. After all, if your technology is so powerful that it can destroy the world, it must be worth a lot!
Fear mongering also attracts a lot of attention and is an inexpensive way to get people talking about you or your company, making individuals and companies more visible and seemingly more relevant to conversations about AI. It also lets one play savior: "Unlike the dangerous AI products of my competitors, mine will be safe!" Or, "Unlike all the other legislators who callously ignore the risk that AI could cause human extinction, I will pass laws to protect you!" To be clear, AI has real problems and potentially harmful applications that we should address. But excessive hype about science-fiction dangers is harmful too.