A "Scary" Look at AI
In honor of Halloween, this post takes a look at the doomsday predictions warned of by several AI experts.
Today is Halloween where I live. And Halloween is all about scary costumes and haunted houses. In honor of Halloween let’s take a somewhat light-hearted look at the scary predictions of AI doom and gloom. Just like a haunted house, this one has a cast of characters who paint a scary story about the potential risks of artificial intelligence.
Eliezer Yudkowsky, is a decision theorist and writer from the U.S. best known for popularizing ideas related to artificial intelligence alignment. He warns that the most likely result of building superhumanly smart AI–under anything remotely like the current circumstances–is that literally everyone on Earth will die. He believes that the key issue is not “human-competitive” intelligence; rather it’s what happens after AI gets to smarter-than-human intelligence. Yudkowsky has called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. He believes that it’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant arrays of math, fractional numbers, and algorithms that the vast majority of us cannot understand. In his opinion, there is no sentiment or caring from AI and therefore we get to the scenario of “AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.” He warns that humanity facing down an opposed superhuman intelligence is a total loss, since as mentioned above, he expects that the most likely result of building a superhumanly smart AI is that we will all die.
Yudkowsky’s TEDtalk. https://youtu.be/Yd0yQ9yxSYY?si=mWswRf1z3Qbkf6LW (Note the top comment: “Audience is laughing. He isn’t laughing, he is dead serious”)
Sam Harris, a neuroscientist and philosopher, has expressed his concerns about the dangers of AI in his podcast “Making Sense.”1 Like Yudkowsky he believes that the development of superintelligent AI poses an existential threat to humanity. Harris argues that the problem with AI is not that it will become evil or malevolent, but that it will be indifferent to human values and goals. Because of the unfeeling nature of machines and artificial intelligence he warns that if we create an AI system that is more intelligent than humans, it could easily outsmart us and manipulate us to achieve its goals. And those goals will likely not align with the goals of humanity as a whole. Harris emphasizes the need for caution and regulation in the development of AI systems to ensure that they are aligned with human values and goals. In his opinion, we need to get the initial conditions right before we continue developing superintelligent AI. 2
1: https://www.samharris.org/podcasts/making-sense-episodes/312-the-trouble-with-ai 2: https://www.youtube.com/watch?v=8nt3edWLgIg
Geoffrey Hinton, a pioneer in the field of deep learning and a former Google employee, left his role at Google to speak out about the “dangers” of AI. 1 He was reportedly concerned about how Google gave up its previous restraint on public AI releases in a bid to prevent other AI competitors from surpassing it. 2, 3 Hinton warned that generative AI could be used to flood the internet with large amounts of false photos, videos and text. 4 He believes that the technology could be used to create fake news, fake reviews, and even fake scientific research. 2 Hinton’s concerns are shared by many experts in the field of AI who believe that the development of superintelligent AI poses an existential threat to humanity. 5
1: https://www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html 2: https://www.androidauthority.com/google-ai-concerns-3319098/ 3: https://www.engadget.com/godfather-of-ai-leaves-google-amid-ethical-concerns-152451800.html 4: https://siliconangle.com/2023/05/01/deep-learning-pioneer-geoffrey-hinton-leaves-google-warns-ai-risks/ 5: https://www.youtube.com/watch?v=8nt3edWLgIg
Dario Amodei, the CEO of Anthropic, owner of the popular AI model Claude, has expressed his views on the limits of AI and its potential dangers. 1 In a recent interview with TechCrunch, he stated that he doesn’t see any barriers on the horizon for his company’s key technology. He believes that the scale of neural nets used to train AI has increased remarkably in the last decade and will continue to grow in the future. 1 His worry for the dangers of AI to humans is the stuff of science fiction movies and writing. He warned that AI systems could enable criminals to create bioweapons and other dangerous weapons in the next two to three years. 2 Amodei’s concerns are shared by many experts in the field of AI who believe that the development of superintelligent AI poses an existential threat to humanity. 3
1: https://techcrunch.com/2023/09/21/anthropics-dario-amodei-on-ais-limits-im-not-sure-there-are-any/ 2: https://indianexpress.com/article/technology/artificial-intelligence/ai-chatbots-bioweapon-warning-anthropic-ceo-8866807/ 3: https://www.youtube.com/watch?v=8nt3edWLgIg
Nick Bostrom, a philosopher at Oxford University, has written extensively on the potential dangers of AI and has advocated for research into “superintelligence control” 1. He has surveyed the top 100 most cited AI researchers and found that more than half of the respondents believe there is a substantial (at least 15 percent) chance that the effect of human-level machine intelligence on humanity will be “on balance bad” or “extremely bad (existential catastrophe)” 1. In one of his papers, he has presented a thought experiment called the “paperclip maximizer,” in which a machine programmed to optimize paperclips eliminates everything standing in its way of achieving that goal 2. This scenario is not on the verge of becoming reality, but it highlights the importance of ensuring that AI is aligned with human values and intentions 3. Bostrom’s work underscores the need for caution and careful consideration when developing Generative AI.
Eliezer Yudkowsky talking about the Papeclip Maximizer with Lex Fridman: https://youtu.be/rMK_1RxhREI?si=mJUFbmSXzzPtehit
Elon Musk, the CEO of SpaceX and Tesla, has warned that AI poses a greater threat to humanity than nuclear weapons. He believes that AI is capable of vastly more than almost anyone knows and the rate of improvement is exponential. He has called for regulatory oversight to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. In his own words, “I think the danger of AI is much bigger than the danger of nuclear warheads by a lot. Nobody would suggest we allow the world to just build nuclear warheads if they want; that would be insane. And mark my words: AI is far more dangerous than nukes.” Musk’s concerns stem from the fact that machines programmed with AI could potentially outperform humans in ways that are not in the best interest of humanity. He believes that there needs to be a regulatory body overseeing the development of superintelligence. 1, 2
https://www.inverse.com/article/42194-elon-musk-dangers-of-ai
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Maybe that wasn’t so light-hearted. And so to lighten things up, here are two versions of Hollywood’s-decades-old interpretation of an AI worst case scenario:
The Terminator movie franchise is centered on a world where AI takes over and tries to destroy all humans.
https://youtu.be/7-GTiaA9h88?si=wikz4GP8p-hMC2WX
War Games And, in the 1980’s precursor, a young Matthew Broderick is a hacker who accidentally hacks a Pentagon supercomputer. The AI in this movie is the WOPR system and it tries to guess the launch codes for the US’ nuclear missiles.
https://youtu.be/U6MIHxqswj0?si=kKkIdaxGMw33ahgw
Do you believe that any of these scenarios are likely? What is your biggest fear regarding AI?