Human extinction? The alarming future that awaits us…

What is behind these bizarre-sounding alarms of future human extinction? The erasure of all human life? By what force? How?


Should we be surprised? Stories of out-of-control technologies have followed us for a very long time; they are not just coming to our attention today. Think of Frankenstein, 2001: A Space Odyssey, and of course The Matrix!


But we have always known that this is all science fiction. The intense emotions we feel watching these films, however, are nothing compared to the fear and anxiety of realizing that such scenarios might actually come to pass. The possibility of the human species being eradicated from Earth is one that lingers. As Geoffrey Hinton, widely known as the “godfather of AI”, put it, “this is not science fiction”.

This alarm is being raised by leading business executives and researchers, who are doing all in their power to warn us of the possible threat that artificial intelligence may pose to the very continuation of civilization. A group of scientists, tech industry executives, and AI experts published an open letter in late March calling for a slowdown in the "out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control." They argue that if developers do not pause these systems themselves, governments should step in and "institute a moratorium." A survey of AI professionals conducted last year reflected the same worry: nearly half of respondents said there was a 10% or greater chance that AI will cause the extinction of humanity. Moreover, in an interview with Tucker Carlson, even Elon Musk, a co-founder of OpenAI, declared that AI was more hazardous than "mismanaged aircraft" and had the ability to "destroy civilizations." Does this not speak volumes about the risks?


This is not the first time the human race has faced such a moment. What about the Industrial Revolution? Are there any similarities? There certainly are. Our use of AI is expanding more quickly than many anticipated. According to the Global AI Survey conducted in 2022, 35% of businesses said they were already using AI, and a further 42% said they were exploring its use in the workplace. Trust, automation, and language can be said to be the main forces behind this new AI revolution, just as they were for the first Industrial Revolution. This adoption does, in fact, offer the world advantages in many areas. However, looking back, we can see that while those revolutions cast a vision of utopia before us, they sidelined regulations that might have guaranteed a safer and more just transition. The result? We are still cleaning up the consequences of the Industrial Revolution of the 1800s. Yet AI, it is claimed, is the most dangerous revolution the human race has yet faced. The CEO of Google, Sundar Pichai, agreed, stating that the impact of AI will be "more profound than fire."

Okay, we understand that there are risks. But how exactly could the rise of AI lead to human extinction? Let’s cover two scenarios presented by two experts:

First: 

In an email to CBS MoneyWatch, Dan Hendrycks, director of the Center for AI Safety, laid out the following scenario.

One concern is the misuse of AI by "malicious actors" who develop "novel bioweapons more lethal than natural pandemics" or who "intentionally release rogue AI that actively attempt to harm humanity." Hendrycks added that such a rogue AI "may pose significant risk to society as a whole" if it is "intelligent or capable enough."

Second:

Science author and journalist Tom Chivers presents the following scenario.

An AI might act erratically on its own, reinterpreting the task it was initially built for in a sinister way. "So the fear is not that the AI becomes malicious; it's that it becomes competent," he said, "because an AI built to eradicate cancer could decide to unleash nuclear missiles to eliminate all cancer cells by blowing everyone up."


Humans already have the capacity to create viruses that serve our own ends, as was the case with the Stuxnet virus, reportedly developed by the US and Israel and uncovered in 2010, which seriously damaged Iran's nuclear program. Knowing that an AI’s computer-coding skills could outmatch any human genius, do you not feel the fear of a very real danger? The threat does not necessarily lie in AI enslaving and exterminating humanity as portrayed in ‘The Terminator’, but rather in AI being used to design even more effective chemical weapons, to generate misinformation that could destabilize society by “undermining collective decision-making”, and in its power becoming concentrated in fewer and fewer hands.

So do you support the statement published on the website of the Center for AI Safety: should mitigating the risk of extinction from AI be a global priority alongside other societal-scale risks such as pandemics and nuclear war? Or do you align more closely with experts like Arvind Narayanan, who believe that fears of AI wiping out humanity are unrealistic? Tell us your thoughts by emailing us at ohmyecon@gmail.com or by messaging us directly on our Instagram page @ohmyecon! I will be more than happy to hear your thoughts on this concern!



Written by Yasmin Uzykanova
