Generative AI in Security: Risks and Mitigation Strategies

Date:

Share:


Generative AI became tech’s fiercest buzzword seemingly overnight with the release of ChatGPT. Two years later, Microsoft is using OpenAI foundation models and fielding questions from customers about how AI changes the security landscape.

Siva Sundaramoorthy, senior cloud solutions security architect at Microsoft, often answers these questions. The security expert provided an overview of generative AI — including its benefits and security risks — to a crowd of cybersecurity professionals at ISC2 in Las Vegas on Oct. 14.

What security risks can come from using generative AI?

During his speech, Sundaramoorthy discussed concerns about GenAI’s accuracy. He emphasized that the technology functions as a predictor, selecting what it deems the most likely answer — though other answers might also be correct depending on the context.

Cybersecurity professionals should consider AI use cases from three angles: usage, application, and platform.

“You need to understand what use case you are trying to protect,” Sundaramoorthy said.

He added: “A lot of developers and people in companies are going to be in this center bucket [application] where people are creating applications in it. Each company has a bot or a pre-trained AI in their environment.”

SEE: AMD revealed its competitor to NVIDIA’s heavy-duty AI chips last week as the hardware war continues.

Once the usage, application, and platform are identified, AI can be secured similarly to other systems — though not entirely. Certain risks are more likely to emerge with generative AI than with traditional systems. Sundaramoorthy named seven adoption risks, including:

  • Bias.
  • Misinformation.
  • Deception.
  • Lack of accountability.
  • Overreliance.
  • Intellectual property rights.
  • Psychological impact.

AI presents a unique threat map, corresponding to the three angles mentioned above:

  • AI usage in security can lead to disclosure of sensitive information, shadow IT from third-party LLM-based apps or plugins, or insider threat risks.
  • AI applications in security can open doors for prompt injection, data leaks or infiltration, or insider threat risks.
  • AI platforms can introduce security problems through data poisoning, denial-of-service attacks on the model, theft of models, model inversion, or hallucinations.

Attackers can use strategies such as prompt converters — using obfuscation, semantic tricks, or explicitly malicious instructions to get around content filters — or jailbreaking techniques. They could potentially exploit AI systems and poison training data, perform prompt injection, take advantage of insecure plugin design, launch denial-of-service attacks, or force AI models to leak data.

“What happens if the AI is connected to another system, to an API that can execute some type of code in some other systems?” Sundaramoorthy said. “Can you trick the AI to make a backdoor for you?”

Security teams must balance the risks and benefits of AI

Sundaramoorthy uses Microsoft’s Copilot often and finds it valuable for his work. However, “The value proposition is too high for hackers not to target it,” he said.

Other pain points security teams should be aware of around AI include:

  • The integration of new technology or design decisions introduces vulnerabilities.
  • Users must be trained to adapt to new AI capabilities.
  • Sensitive data access and processing with AI systems creates new risks.
  • Transparency and control must be established and maintained throughout the AI’s lifecycle.
  • The AI supply chain can introduce vulnerable or malicious code.
  • The absence of established compliance standards and the rapid evolution of best practices make it unclear how to secure AI effectively.
  • Leaders must establish a trusted pathway to generative AI-integrated applications from the top down.
  • AI introduces unique and poorly understood challenges, such as hallucinations.
  • The ROI of AI has not yet been proven in the real world.

Additionally, Sundaramoorthy explained that generative AI can fail in both malicious and benign ways. A malicious failure might involve an attacker bypassing the AI’s safeguards by posing as a security researcher to extract sensitive information, like passwords. A benign failure could occur when biased content unintentionally enters the AI’s output due to poorly filtered training data.

Trusted ways to secure AI solutions

Despite the uncertainty surrounding AI, there are some tried-and-trusted ways to secure AI solutions in a reasonably thorough manner. Standard organizations such as NIST and OWASP provide risk management frameworks for working with generative AI. MITRE publishes the ATLAS Matrix, a library of known tactics and techniques attackers use against AI.

Furthermore, Microsoft offers governance and evaluation tools that security teams can use to assess AI solutions. Google offers its own version, the Secure AI Framework.

Organizations should ensure user data does not enter training model data through adequate data sanitation and scrubbing. They should apply the principle of least privilege when fine-tuning a model. Strict access control methods should be used when connecting the model to external data sources.

Ultimately, Sundaramoorthy said, “The best practices in cyber are best practices in AI.”

To use AI — or not to use AI

What about not using AI at all? Author and AI researcher Janelle Shane, who spoke at the ISC2 Security Congress opening keynote, noted one option for security teams is not to use AI due to the risks it introduces.

Sundaramoorthy took a different tack. If AI can access documents in an organization that should be insulated from any outside applications, he said, “That is not an AI problem. That is an access control problem.”

Disclaimer: ISC2 paid for my airfare, accommodations, and some meals for the ISC2 Security Congres event held Oct. 13 – 16 in Las Vegas.



Source link

━ more like this

Samsung QN90D 98-inch review: solid and classy, but worth the price tag? | Tech Reader

Samsung QN90D 98-inch QLED TV “The QN90D series is excellent, but the 98-inch model fell apart for us.” Pros Excellent color accuracy Deep blacks, impressive contrast Solid on-board...

Space station crew had an amazing stroke of luck during Starship launch | Tech Reader

NASA astronaut and current space station inhabitant Don Pettit seems to have the luck of the stars. During SpaceX’s sixth test flight of...

3 great dramas on Amazon Prime Video you need to watch in November 2024 | Tech Reader

Never let it be said that Amazon Prime Video isn’t stacked with great dramas. Amazon wasn’t content to only have the great MGM...

Get ready for AI-dubbed YouTube videos | Tech Reader

YouTube has reportedly started rolling out a new AI-empowered translation feature for its content creators, one that will automatically redub a video’s contents...

ChatGPT’s latest model may be a regression in performance | Tech Reader

According to a new report from Artificial Analysis, OpenAI’s flagship large language model for ChatGPT, GPT-4o, has significantly regressed in recent weeks, putting...
spot_img