Seeing the unseen: How OpenText is leading the way in detecting AI risk

TL;DR: As AI becomes integral to software development, it’s also creating new and complex security risks. OpenText is leading the charge in AI risk detection, embedding AI-aware analysis into its AppSec platform to identify vulnerabilities in how AI models, APIs, and generative systems are used in code. Unlike traditional tools, OpenText doesn’t just scan for known flaws; it understands how AI behaves, helping enterprises build trust, ensure compliance, and innovate responsibly. The takeaway: in the AI era, secure innovation depends on detecting AI risk before it becomes a business risk.
Navigating the new frontier of application security
Artificial intelligence (AI) is no longer a futuristic concept; it’s embedded in nearly every modern business process and software product. From automating code generation to enabling adaptive digital experiences, AI is redefining how organizations innovate and compete.
But with every leap forward comes a new class of risk. The same models that help organizations accelerate development can inadvertently introduce vulnerabilities, expose sensitive data, or enable insecure behaviors when integrated without proper governance. As AI systems become more deeply woven into the software supply chain, the challenge is no longer just “how fast can we adopt AI?” but rather “how securely can we deploy it?”
This is where OpenText™ Application Security leads the industry: detecting AI risk at the source and turning responsible AI innovation into a business advantage.
The next security challenge: AI-driven software
AI-enabled applications introduce a new layer of trust assumptions. Large Language Models (LLMs) and agentic frameworks are powerful, but they can produce unpredictable or unsafe outputs if not properly validated. The risk compounds when AI systems make autonomous decisions, generate code, or interface directly with sensitive APIs.
Recent research by OpenText Software Security Research (SSR) highlights that many of today’s vulnerabilities stem not from malicious intent but from implicit trust: developers assuming AI responses or generated code are safe by default. OpenText’s security research team addresses this head-on by embedding new detection capabilities that identify weaknesses in how AI models are integrated, used, and validated within applications.
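To make the pattern concrete, consider a minimal sketch of that implicit-trust weakness. This is an illustration only: `call_llm` is a hypothetical stand-in for any AI/ML API client, not a specific vendor SDK.

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM completion API."""
    return "echo build complete"  # in reality, the model could return anything

def run_suggested_command(task: str) -> None:
    # Anti-pattern: the model's response is trusted implicitly and executed
    # as a shell command. A hallucinated or adversarially influenced reply
    # flows straight into an injection sink with no validation in between.
    command = call_llm(f"Suggest a shell command to {task}")
    subprocess.run(command, shell=True)
```

This is the general shape of finding that AI-aware analysis is designed to surface: a taint path from an AI/ML API response to a dangerous sink, with nothing checking the output along the way.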
How OpenText AppSec detects and mitigates AI risk
OpenText AppSec differentiates itself through deeply integrated, AI-aware security testing. Traditional tools focus solely on known code vulnerabilities; OpenText’s SAST and DAST engines now also analyze the context of AI usage: how models, prompts, and APIs interact within the broader application ecosystem.
Some key innovations include:
- AI model trust analysis: Detects vulnerabilities arising from unvalidated or overly trusted responses from AI/ML APIs, ensuring safe integration with LLM frameworks such as Python AutoGen and Google Vertex AI (one possible remediation is sketched after this list).
- Generative framework awareness: OpenText AppSec continuously updates rulepacks to identify emerging risks from agent-based systems, cooperative AI workflows, and AI-generated code injection.
- AI-augmented auditing with SAST Aviator: Using Anthropic’s Claude LLM, OpenText’s Aviator technology enhances code audit accuracy, drastically reducing false positives and providing human-readable explanations for detected issues.
- Continuous research-driven content: The SSR team monitors and models new AI development ecosystems, translating their findings into real-time updates across OpenText AppSec products, empowering customers to stay ahead of evolving AI threats.
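Picking up the trust-analysis bullet above, here is one illustrative way the earlier anti-pattern can be remediated. The allowlist policy and helper names are assumptions made for the sketch, not controls prescribed by OpenText.

```python
import shlex
import subprocess

# Hypothetical policy: only these binaries may ever be launched from an
# AI-suggested command.
ALLOWED_BINARIES = {"ls", "grep", "wc"}

def run_validated_command(command: str) -> None:
    # Treat the model's response as untrusted input: parse it, check it
    # against an explicit allowlist, and skip shell interpretation entirely.
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_BINARIES:
        raise ValueError(f"AI-suggested command rejected: {command!r}")
    subprocess.run(parts, check=True)  # no shell=True, so no metacharacter injection
```

The essential move is the same one applied to any untrusted input: validate before use, and keep the model’s raw output away from dangerous sinks.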
By embedding this intelligence directly into the application security lifecycle, OpenText helps organizations detect and mitigate risks before they impact production systems.
Bridging innovation and responsibility
AI risk is more than a security issue; it’s a business risk. The potential impact spans compliance violations, reputational damage, intellectual property exposure, and loss of customer trust.
OpenText’s approach combines technical rigor with governance insight, helping organizations align their AI development with emerging standards for responsible AI. Business leaders gain clarity on key questions:
- Where is AI being used across our software ecosystem?
- Is the data used by these systems protected and compliant?
- Can we explain, validate, and control what our AI systems produce?
OpenText transforms these unknowns into actionable intelligence, allowing security and business teams to make confident, risk-informed decisions about AI adoption.
The power of research-led innovation
What sets OpenText apart isn’t just its product portfolio; it’s the depth of its security research. With over 1,700 vulnerability categories tracked across 33+ languages and more than one million APIs, the AppSec platform is backed by a global intelligence network that continuously evolves with the threat landscape.
This same expertise now powers OpenText’s AI risk detection capabilities. As generative AI frameworks evolve, the SSR team rapidly translates new findings into updated detection logic, ensuring customers are protected from risks that didn’t even exist six months ago.
Building digital trust in the age of AI
AI is transforming every industry, but innovation without security is unsustainable. Detecting and managing AI risk isn’t about slowing down; it’s about creating the confidence to innovate responsibly.
With OpenText AppSec, organizations can harness the power of AI while maintaining control, transparency, and trust. By proactively detecting AI-related vulnerabilities, OpenText helps leaders transform security into a strategic advantage, empowering them to build faster, smarter, and safer.
In short:
AI may be rewriting the rules of software development, but OpenText AppSec is redefining how we secure it.