The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms to find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to developers and members of the general public alike as part of NIST’s AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.

“The average person utilizing one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, CEO of the AI governance and online safety group Tech Policy Consulting, which works with Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”

The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use NIST’s AI risk management framework for generative AI, known as AI 600-1, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems’ expected behavior.

“NIST’s ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST’s Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”

Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.

“The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”


