
Generative AI: A double-edged sword for application security



Generative AI (GenAI) burst into the public consciousness in late 2022. By April 2024, 17% of organizations had already introduced GenAI applications into production, with another 38% making significant investments.  

On one hand, GenAI brings unprecedented opportunities to strengthen and innovate in cybersecurity; on the other, it introduces new risks that require equally cutting-edge solutions. 

Desire for efficiency vs. increased risk 

There is a growing desire across industries to harness AI to improve organizational efficiency and productivity. GenAI can improve cybersecurity processes, such as automated threat detection, code review, and security testing. However, the same technology presents unique security challenges that traditional methods struggle to address.  
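To make the code-review use case above concrete, here is a minimal sketch of LLM-assisted review. The `call_llm` helper is a hypothetical stand-in for whatever model endpoint your organization uses, not a real library call:

```python
# Minimal sketch of LLM-assisted code review, one of the GenAI uses
# mentioned above. call_llm is a hypothetical placeholder, not a real API.

REVIEW_PROMPT = (
    "You are a security reviewer. List any injection, deserialization, "
    "or secrets-handling issues in this diff, one finding per line:\n\n{diff}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a GenAI completion endpoint."""
    raise NotImplementedError("wire this to your model provider")

def review_diff(diff: str) -> list[str]:
    # Ask the model for findings; keep each non-empty line as one issue.
    response = call_llm(REVIEW_PROMPT.format(diff=diff))
    return [line.strip() for line in response.splitlines() if line.strip()]
```

In practice such findings feed into a human review step rather than gating a build on their own, since model output can be wrong or incomplete.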

GenAI models operate as black boxes and exhibit highly dynamic behavior. Traditional security tools often rely on understanding the application's logic to detect anomalies or vulnerabilities, which is challenging with opaque AI models. 

GenAI applications have both a supply chain to secure and their own distinct vulnerabilities. Because they rely on large data sources, pretrained models, and libraries and components whose provenance is often untraceable, organizations need to adopt a new paradigm to mitigate the risks introduced by AI-powered systems. Compromised data sets, model manipulation, and backdoor attacks through open-source components are just a few examples of vulnerabilities common to Generative AI. 
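One basic supply-chain safeguard implied here is integrity pinning: record a cryptographic digest for each pretrained model artifact you depend on, and refuse to load anything that no longer matches. The file name and digest below are illustrative placeholders:

```python
# Sketch of a model-artifact integrity check using only the standard library.
# The pinned digest and file path are placeholders, not real values.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # placeholder: record the real digest at vetting time

def verify_model_artifact(path: Path, expected: str = PINNED_SHA256) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"model artifact {path} failed integrity check: {digest}")

# verify_model_artifact(Path("models/classifier.bin"))  # run before loading
```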

As businesses integrate AI deeper into their operations, they inadvertently expose themselves to new and evolving cyberthreats. With their heavy reliance on large, often sensitive data sets for training, GenAI applications will become prime targets for data breaches. Cyber attackers are exploiting vulnerabilities specific to AI models, such as data poisoning and adversarial attacks, making it clear that AI is both a tool for defense and a target for exploitation. 
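To show what "adversarial attack" means in this context, here is the classic fast gradient sign method (FGSM) in a few lines. It nudges an input in the direction that most increases the model's loss, which is often enough to flip a prediction. This is a generic textbook illustration assuming a PyTorch classifier with inputs scaled to [0, 1]:

```python
# FGSM adversarial example: perturb x by epsilon in the sign of the
# input gradient. Assumes a PyTorch model and inputs in [0, 1].
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)   # loss against the true label
    loss.backward()               # gradient of the loss w.r.t. the input
    # A tiny, often imperceptible step that can change the prediction.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```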

For a more detailed analysis of the security challenges unique to Generative AI, check out the full IDC position paper.

Securing AI while leveraging its power 

Organizations should implement security tools specifically designed to tackle the unique vulnerabilities of GenAI applications. These tools need to identify code patterns that allow malicious inputs to exploit AI models' behavior, and they must also recognize and understand AI and ML libraries and frameworks such as TensorFlow and PyTorch. Furthermore, compliance with AI-related industry standards and regulations, backed by audit trails and documentation, is crucial.  
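As a toy version of that kind of AI-aware pattern check, the sketch below walks a Python source tree and flags calls such as torch.load and pickle.load, which deserialize bytes that an attacker may control and have historically been an entry point for malicious model files. Real tools do far more; this only shows the shape of the idea:

```python
# Toy static scan for risky deserialization calls in AI/ML code,
# using only the standard library. Illustrative, not exhaustive.
import ast
from pathlib import Path

RISKY_CALLS = {("torch", "load"), ("pickle", "load"), ("pickle", "loads")}

def scan_file(path: Path) -> list[str]:
    findings = []
    tree = ast.parse(path.read_text(), filename=str(path))
    for node in ast.walk(tree):
        # Match attribute calls like torch.load(...) on a bare module name.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            base = node.func.value
            if isinstance(base, ast.Name) and (base.id, node.func.attr) in RISKY_CALLS:
                findings.append(f"{path}:{node.lineno} {base.id}.{node.func.attr}")
    return findings

def scan_tree(root: str) -> list[str]:
    return [f for p in Path(root).rglob("*.py") for f in scan_file(p)]
```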

Want to dive deeper into how GenAI is reshaping application security? 

Download the full IDC position paper or register for our upcoming webinar with IDC Research Manager Katie Norton, where you'll learn exactly how to protect your organization from the unique vulnerabilities posed by GenAI! 
