40 years later, The Terminator still shapes our view of AI

Countries, including the US, specify the need for human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapon systems. In some instances, operators can visually verify targets before authorizing strikes and can “wave off” attacks if situations change.

AI is already being used to support military targeting. According to some, it’s even a responsible use of the technology since it could reduce collateral damage. This idea evokes Schwarzenegger’s role reversal as the benevolent “machine guardian” in the original film’s sequel, Terminator 2: Judgment Day.

However, AI could also undermine the role human drone operators play in challenging recommendations by machines. Some researchers think that humans have a tendency to trust whatever computers say.

“Loitering munitions”

Militaries engaged in conflicts are increasingly making use of small, cheap aerial drones that can detect and crash into targets. These “loitering munitions” (so named because they are designed to hover over a battlefield) feature varying degrees of autonomy.

As I’ve argued in research co-authored with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exerted by human operators.

Ground-based military robots armed with weapons and designed for use on the battlefield might call to mind the relentless Terminators, and weaponized aerial drones may, in time, come to resemble the franchise’s airborne “hunter-killers.” But these technologies don’t hate us as Skynet does, and neither are they “super-intelligent.”

However, it’s crucially important that human operators continue to exercise agency and meaningful control over machine systems.

Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI. This matters now more than ever, because of how central these technologies have become to the strategic competition for global power and influence between the US, China, and Russia.

The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate—and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator—even if we don’t see time-traveling cyborgs any time soon.

Tom F.A. Watts, Postdoctoral Fellow, Department of Politics, International Relations, and Philosophy, Royal Holloway University of London. This article is republished from The Conversation under a Creative Commons license. Read the original article.

