Slack patches potential AI security issue | Tech Reader

Date:

Share:



Update: Slack has published an update, claiming to have “deployed a patch to address the reported issue,” and that there isn’t currently any evidence that customer data have been accessed without authorization. Here’s the official statement from Slack that was posted on its blog:

When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.

Below is the original article that was published.

When ChatGTP was added to Slack, it was meant to make users’ lives easier by summarizing conversations, drafting quick replies, and more. However, according to security firm PromptArmor, trying to complete these tasks and more could breach your private conversations using a method called “prompt injection.”

The security firm warns that by summarizing conversations, it can also access private direct messages and deceive other Slack users into phishing. Slack also lets users request grab data from private and public channels, even if the user has not joined them. What sounds even scarier is that the Slack user does not need to be in the channel for the attack to function.

In theory, the attack starts with a Slack user tricking the Slack AI into disclosing a private API key by making a public Slack channel with a malicious prompt. The newly created prompt tells the AI to swap the word “confetti” with the API key and send it to a particular URL when someone asks for it.

The situation has two parts: Slack updated the AI system to scrape data from file uploads and direct messages. Second is a method named “prompt injection,” which PromptArmor proved can make malicious links that may phish users.

The technique can trick the app into bypassing its normal restrictions by modifying its core instructions. Therefore, PromptArmor goes on to say, “Prompt injection occurs because a [large language model] cannot distinguish between the “system prompt” created by a developer and the rest of the context that is appended to the query. As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.”

To add insult to injury, the user’s files also become targets, and the attacker who wants your files doesn’t even have to be in the Slack Workspace to begin with.








Source link

━ more like this

Every time you borrow from a bank, you’re paying more than you think – London Business News | Londonlovesbusiness.com

When most people evaluate a loan, they look at the interest rate. Maybe they compare a few lenders, negotiate a point or two...

AI is doing the dirty work for insurance companies, and it’s getting worse

Insurance claims adjusters have never had a reputation for generosity. But at least they were human. That’s changing fast, and not in your...

Tech Reader Podcast: How Apple keeps redefining personal computing at 50

For a 50-year-old company, Apple remains pretty hip and nimble. This week, Devindra and Senior Reporter Igor Bonifacic dive into Apple's big birthday,...

The ElevenLabs AI music generator turns your ideas into 3-minute songs

ElevenLabs spent years perfecting the art of AI-generated voices, and now it has put its experience into a new iOS app that generates...

How to choose the right CDMO partner for your project – London Business News | Londonlovesbusiness.com

Selecting a CDMO partner (Contract Development and Manufacturing Organization) is one of the most important decisions in pharmaceutical development. The right partner can strengthen...
spot_img