OpenAI Offers a Peek Inside the Guts of ChatGPT

ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.

Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.

The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be scrutinized as easily as those of conventional computer programs. The complex interplay between the layers of “neurons” within an artificial neural network makes it hugely challenging to reverse engineer why a system like ChatGPT came up with a particular response.

“Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyber attacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation lies in making that secondary network, which peers inside the system of interest and identifies the concepts it represents, more efficient.

OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how the words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
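The “additional machine learning model” described above is, in OpenAI’s paper, a sparse autoencoder: a small network trained to re-express a larger model’s internal activations as a wide, mostly-zero vector of features, each of which ideally corresponds to a single concept. The sketch below is purely illustrative and is not OpenAI’s code; the toy sizes, the random (untrained) weights, and the top-k sparsity rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features, k = 8, 32, 4   # toy sizes; real models use far larger values

# Randomly initialized weights stand in for a trained autoencoder.
W_enc = rng.normal(size=(d_model, d_features)) * 0.1
W_dec = rng.normal(size=(d_features, d_model)) * 0.1
b_enc = np.zeros(d_features)
b_dec = np.zeros(d_model)

def encode(x):
    """Map one activation vector to a sparse feature vector (only top-k kept)."""
    acts = np.maximum((x - b_dec) @ W_enc + b_enc, 0.0)  # ReLU
    drop = np.argsort(acts)[:-k]    # indices of everything except the k largest
    acts[drop] = 0.0
    return acts

def decode(f):
    """Reconstruct the original activation from the sparse feature vector."""
    return f @ W_dec + b_dec

x = rng.normal(size=d_model)        # stand-in for one internal model activation
features = encode(x)                # at most k features are nonzero
recon = decode(features)

# "Dialing down" a concept: zero out its feature before reconstructing, so the
# reconstructed activation no longer carries that concept.
steered = features.copy()
steered[features.argmax()] = 0.0
recon_steered = decode(steered)
```

In practice the autoencoder is trained so that `decode(encode(x))` closely approximates `x`; once trained, individual features can be inspected to see which concepts they represent, or suppressed (as in the last lines) to nudge the model away from unwanted behavior.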


