We Need a New Right to Repair for Artificial Intelligence

There’s a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action in California against Nvidia for allegedly training its AI platform NeMo on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized its new ChatGPT voice was “eerily similar” to hers.

The technology isn’t the problem here. The power dynamic is. People understand that this technology is being built on their data, often without their permission. It’s no wonder that public confidence in AI is declining. A recent study by Pew Research shows that more than half of Americans are more concerned than they are excited about AI, a sentiment echoed by a majority of people from Central and South American, African, and Middle Eastern countries in a World Risk Poll.

In 2025, we will see people demand more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and now common in cybersecurity. In a red teaming exercise, external experts are asked to “infiltrate” or break a system. It acts as a test of where your defenses can go wrong, so you can fix them.

Red teaming is used by major AI companies to find issues in their models, but it isn’t yet widespread as a practice for public use. That will change in 2025.

The law firm DLA Piper, for instance, now uses red teaming with lawyers to test directly whether AI systems are in compliance with legal frameworks. My nonprofit, Humane Intelligence, builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will draw on the lived experience of regular people to evaluate AI models for Islamophobia, and for their capacity to enable online harassment against women.

Overwhelmingly, when I host one of these exercises, the most common question I’m asked is how we can evolve from identifying problems to fixing problems ourselves. In other words, people want a right to repair.

An AI right to repair might look like this: a user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, like ethical hackers, could create patches or fixes for problems that anyone can access. Or you could hire an independent accredited party to evaluate an AI system and customize it for you.

While this is an abstract idea today, we’re setting the stage for a right to repair to become a reality in the future. Overturning the current, dangerous power dynamic will take some work. We’re rapidly being pushed to normalize a world in which AI companies simply put new and untested AI models into real-world systems, with regular people as the collateral damage. A right to repair gives every person the ability to control how AI is used in their lives. 2024 was the year the world woke up to the pervasiveness and impact of AI. 2025 is the year we demand our rights.
