Hackers cracked OpenAI’s internal messaging system last year | Tech Reader

[Image: A concept image of a hacker at work in a dark room. Credit: Microbiz Mag]

A hacker managed to infiltrate OpenAI’s internal messaging system last year and abscond with details about the company’s AI design, according to a report from the New York Times on Thursday. The attack targeted an online forum where OpenAI employees discussed upcoming technologies and features for the popular chatbot; however, the systems where the actual GPT code and user data are stored were not affected.

While the company disclosed the breach to its employees and board members in April 2023, it declined to notify either the public or the FBI, claiming that doing so was unnecessary because no user or partner data was stolen. OpenAI does not consider the attack a national security threat and believes the attacker was a single individual with no ties to foreign powers.

Per the NYT, former OpenAI employee Leopold Aschenbrenner had previously raised concerns in a memo about the state of the company’s security apparatus, warning that its systems could be accessible to the intelligence services of adversaries like China. Aschenbrenner was summarily dismissed by the company, though OpenAI spokesperson Liz Bourgeois told the New York Times that his termination was unrelated to the memo.

This is far from the first security lapse OpenAI has suffered. Since its debut in November 2022, ChatGPT has been repeatedly targeted by malicious actors, often resulting in data leaks. In February of this year, user names and passwords were leaked in a separate hack. In March 2023, OpenAI had to take ChatGPT offline entirely to fix a bug that revealed users’ payment information, including first and last names, email addresses, payment addresses, credit card types, and the last four digits of card numbers, to other active users. Last December, security researchers discovered that they could entice ChatGPT to reveal snippets of its training data simply by instructing the system to endlessly repeat the word “poem.”

“ChatGPT is not secure. Period,” AI researcher Gary Marcus told The Street in January. “If you type something into a chatbot, it is probably safest to assume that (unless they guarantee otherwise), the chatbot company might train on those data; those data could leak to other users.”

Since the attack, OpenAI has taken steps to beef up its security, including installing additional safety guardrails to prevent unauthorized access and misuse of its models and establishing a Safety and Security Committee to address future issues.







