(Video Credit: Future of Life Institute)
Superpowers across the planet are racing to build killer robots that will use artificial intelligence to determine who gets taken out on the battlefield, and experts fear the technology could wipe out humanity if left unchecked.
Technology leaders warned at a UN conference of a doomsday scenario involving the rise of AI Terminator-style “slaughterbots.” They failed to secure a ban on the technology, which is currently being developed by China, Russia, and the United States.
Billions are being spent to develop AI weapons, according to The Sun. And they won’t just be used by militaries in foreign theaters. Cartels are reportedly already looking to use them in their murderous criminal ventures, and other criminals and terrorists are sure to get their hands on them as well. If police decide to deploy the technology, it could be coming to a neighborhood near you in the not-too-distant future.
The weapons use algorithms and facial recognition to determine whom to kill, potentially with no input from human controllers.
The current trajectory toward a world none of us want is clear, and the urgency of changing this course cannot be overstated.
Luckily there is hope for action in 2022 in a separate forum not hostage to a few states’ unwillingness to accept any limitation on slaughterbots. (2/3)
— Future of Life Institute (@FLIxrisk) December 17, 2021
A Turkish-made kamikaze drone took out human targets in Libya last year, according to a UN report.
The technology is advancing at an alarming pace with no oversight or rules to prevent it from slaughtering targets at the whim of whoever buys the weapons. These machines are only as reliable as those who program them, and errors are bound to be commonplace as “bugs” are worked out of the killing machines. They are said to be unpredictable and prone to rapidly spreading errors.
It sounds like science fiction or a bad Star Wars plot, but it’s here, and few realize that the technology is being deployed. Consider the possibility that future weaponry could be armed with biological, chemical, or even nuclear warheads. If that happens, Armageddon may be in the cards for humanity.
“It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities,” warns Prof. James Dawes, who teaches at Macalester College. “The world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.”
MIT professor Max Tegmark, who is the co-founder of the Future of Life Institute, issued a similarly bleak warning this week.
“The technology is developing much faster than the military-political discussion. And we’re heading, by default, to the worst possible outcome,” he told Wired.
It should be noted here that what many are referring to as autonomous artificial intelligence is a misnomer as much of the weaponry is still controlled by humans. But incredibly fast advances in AI are extremely concerning.
Korea’s Demilitarized Zone is said to have self-firing machine guns. At least 14 countries are now said to have suicide AI drones.
Russia’s inventory reportedly includes the new Checkmate stealth fighter, which combines AI systems with a human pilot. They are working on a version that no longer requires a human pilot.
China was first to enter the game over a decade ago when it allegedly began testing a robot submarine that is designed to track and destroy enemy ships autonomously. They also have an anti-submarine drone and truck-launched battlefield drone swarms.
In October, satellite pictures reportedly indicated that China was seen building a robot warship armed with torpedoes.
These weapons can operate with a human deciding whether they should act, but they could also be controlled entirely by a computer, loitering in an area for days and firing on targets.
As more of these weapons are created, the price will fall, allowing unsavory characters to buy them either legitimately or on the black market. There is no doubt they will get hold of them.
“If you can buy slaughterbots for the same price as an AK-47, that’s much preferable for drug cartels, because you’re not going to get caught when you kill someone,” Prof. Tegmark told The Sun. “Even if a judge has lots of bodyguards, you can fly in through a bedroom window while they’re sleeping and kill them.”
“They’ll be small, cheap and light like smartphones, and incredibly versatile and powerful,” he told The Next Web. “It’s clearly not in the national security interest of these countries to legalize super-powerful weapons of mass destruction.”
#CCWUN has decided to keep it legal for rogue states to export cheap slaughterbots to anyone, after wasting tax dollars discussing for for 8(!) years. Thanks in advance to Germany, Norway, Canada & others for moving regulation forward in a less useless venue! #BanSlaughterbots pic.twitter.com/e7E7yaU2bn
— Max Tegmark (@tegmark) December 17, 2021
Militaries will, in the future, likely turn to robots rather than having humans on the battlefield, and that scenario is destined to end badly. It’s hard to fathom who would be held accountable for atrocities and war crimes at that point.
The United States is being forced into developing AI weaponry because China and Russia are racing to deploy it.
The Defense Advanced Research Projects Agency (DARPA) has begun trials involving large numbers of drones and ground vehicles that work together.
The Air Force is looking into replacing human fighter pilots with robots as well.
In April, a Pentagon official confirmed to The Sun that the department is weighing whether humans may eventually need to be removed from the chain of command in situations where they cannot respond fast enough to robotic enemies.
Many are pointing out that “slaughterbots” are an apocalyptically bad idea:
We have decades of sci fi movies on how this is a bad idea
— Finesse.Kawal (@KawalFinesse) December 21, 2021
What could go wrong?
— Lost Viking (@LostViking15) December 21, 2021
What’s terrifying is that they are autonomous and do the will of the better hacker!!!
— GIT (@Gmant055) December 21, 2021
While companies like Boston Dynamics have pivoted to commercial/industrial markets and nod to ethics with "do no harm" terms and conditions, the openness of these devices as platforms means they can be used in harmful ways by criminals/terrorists, but also military/police. pic.twitter.com/FwudYP5ehq
— Geoff Ford (@aGeoffFord) December 6, 2021