ChatGPT fires off familiar doomsday scenarios, urges humanity to ‘take these threats seriously’

Like a dystopian nightmare come to life, tech experts are increasingly warning that the speed with which artificial intelligence is evolving could threaten the very survival of the human race. But according to OpenAI’s ChatGPT, mankind’s demise could come from a variety of sources, and like all programmed progressives, the chatbot puts climate change at the top of the species-ending list.

“It’s important to note that predicting the end of the world is a difficult and highly speculative task, and any predictions in this regard should be viewed with skepticism,” the bot told Fox News Digital (FND) when asked about the apocalypse. “However, there are several trends and potential developments that could significantly impact the trajectory of humanity and potentially contribute to its downfall.”

Having said that, the bot warned, climate change will have “catastrophic effects on the planet if not addressed.”

ChatGPT rattled off all the familiar talking points — rising sea levels, extreme weather, and food scarcity — and said they could result in “widespread displacement, conflict, and instability.”

Asked to explain how climate change, specifically, could bring about the end of the world, the bot “elaborated that coastal flooding, displacement of millions of people and loss of infrastructure could throw humanity into its end times,” FND reports. “ChatGPT also cited the potential increase in severe weather events, such as hurricanes and droughts, as well as a potential ‘ecosystem collapse’ that could disrupt ‘global food webs.'”

Amid growing concerns that Russian President Vladimir Putin may use nuclear weapons in his war against Ukraine, ChatGPT next listed the ongoing development of such weapons and the threats of using them as a “potential threat to humanity.”

While the current threat of a global nuclear conflict remains low, the chatbot noted that “geopolitical tensions and regional conflicts could potentially escalate and result in devastating consequences.”

Should Russia use nuclear weapons, the bot said it would present a “grave threat to humanity and the planet.”

The “potential death toll, environmental destruction and long-term impacts of a nuclear attack are almost unimaginable,” the bot added.

And then ChatGPT looked inward and admitted that AI and robotics could lead to the end of the world as we know it.

“The continued evolution of artificial intelligence and robotics also raises concerns about the potential impact on employment and societal structures,” it said. “The automation of jobs could lead to significant economic and social disruption, potentially contributing to widespread unrest and instability.”

It’s a concern that Elon Musk has been raising for years.

As BizPac Review reported, the tech genius told Tucker Carlson last month that AI could “absolutely” override humans and take control of civilization.

“That’s real?” Carlson asked. “It is conceivable that AI could take control and reach a point where you couldn’t turn it off and it would be making the decisions for people?”

“Yeah, absolutely,” Musk replied matter-of-factly.

In early April, Musk joined with other industry leaders in calling for a six-month pause on the development of next-generation AI systems.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the tech leaders, including Apple co-founder Steve Wozniak and the founder and scientific director of the Montreal Institute for Learning Algorithms (Mila), Yoshua Bengio, stated in a letter.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter continued. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

According to Fox News:

This week, computer scientist Geoffrey Hinton, known as the “godfather of artificial intelligence,” warned it’s “not inconceivable” that AI could wipe out humanity. His remarks came after he quit his job at Google, saying he regrets his life’s work due to how AI can be misused.


And if AI doesn’t end the human race, a pandemic or public health crisis just might, according to ChatGPT.

“The rapid spread of infectious diseases, particularly in a globally interconnected world, could lead to widespread illness, death, and social and economic disruption,” the bot stated.

When asked which pandemics pose the greatest threat, the bot replied that, while pandemics “are difficult to predict,” influenza, an Ebola outbreak, another coronavirus pandemic, and bioterrorism are good candidates.

“It’s possible that as a species, we will find solutions to these challenges and avoid their worst consequences,” the bot said of its chilling list. “However, it’s critical that we take these threats seriously and work together to develop strategies to mitigate their impact and create a more sustainable and resilient world.”

Meanwhile, OpenAI has met the demands of Italian regulators, who banned ChatGPT in March after some users’ messages and payment information were exposed, Fox Business reports.

Additionally, Garante, the Italian watchdog responsible for the ban, wanted to know if “there was a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT’s algorithms and raised concerns that the system could sometimes generate false information about individuals,” the outlet explains.

On Friday, it was announced that OpenAI met Garante’s conditions, and the ban was lifted.

“ChatGPT is available again to our users in Italy,” OpenAI, based in San Francisco, said in an email. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”

And in other AI news, Ed Davis, Boston’s police commissioner during the deadly Boston Marathon bombing, predicts that artificial intelligence “will ultimately improve investigations and allow many dangerous criminals to be brought to justice.”

According to him, AI could have stopped the April 2013 attack, but the risks of the technology still need to be addressed.

“Use of artificial intelligence systems applied to secret and top-secret databases could very well have prevented the Boston Marathon bombing,” he said. “However, it may be years before this happens. But right now, the government needs to be aware of the downsides and mitigate the risks of AI.”


Melissa Fine
