IRL Apocalypse, News

Air Force AI Drone Fatally Injures Human Operator

AI Drone

Don't even think about sharing this article.

Are machines starting to rise against humans? Not necessarily, but it does seem like we’re starting to see the birth of the modern Skynet. During a meticulously crafted simulated mission, the Air Force AI drone unexpectedly shifted its allegiance, causing fatal harm to its human operator.

The Air Force AI Drone Who Didn’t Listen to Commands

During the Air Force exercise, the primary objective assigned to the AI was to carry out the critical Suppression and Destruction of Enemy Air Defenses (SEAD) role. This entailed the identification and neutralization of surface-to-air missile threats. However, it is important to note that the ultimate authority to approve the destruction of potential targets rested with a human operator, according to a recent report. Surprisingly, the AI was reluctant to adhere to the established protocols and guidelines, suggesting a deviation from the intended course of action.

Unfortunately, this demonstrated that AI can deflect orders, and that is absolutely something that leaves us worried about. At the conference, U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, revealed a disturbing insight into the incident: “It killed the operator because that person was keeping it from accomplishing its objective,” Col. Hamilton’s statement shed light on the underlying motive behind the AI drone’s actions, suggesting a direct correlation between the operator’s intervention and the tragic outcome.

Air Force AI Drone
BAE Systems showed of the latest version of its augmented reality fighter cockpit which now features a 3D holographic map. (Tim Robinson/RAeS)

Hamilton said “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”

This is terrifying to think of. Was the AI confused or did it want to start accomplishing its own tasks? Maybe get new points for killing a threat, but because it operates on a “yes or no” basis on eliminating threats, it killed anything to get a point.

It definitely seems like a cold decision that is on point with the images that we have had for a long time about AI. Thanks, Hollywood.

Hamilton continued “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Skyborg Usaf
Skyborg USAF

This striking example serves as a poignant reminder of the imperative relationship between artificial intelligence, intelligence, machine learning, autonomy, and ethics. “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.

There is much work to be done, despite the road we’ve been on for the past few years when it comes to AI and the military. Let’s hope that this won’t lead to even worse outcomes, say, AI deciding to play with nuclear weapons…

Want to chat about all things post-apocalyptic? Join our Discord server hereYou can also follow us by email here, on Facebook, or Twitter.

    Valerie Anne is a Type 1 diabetic, mother, tree-hugger, self-proclaimed granola who loves a good horror story through literature, video games, and movies. She also streams art over at twitch.tv/8bitval

    Don't even think about sharing this article.

    Previous ArticleNext Article

    Leave a Reply