A guide to AI AppSec

Application security teams don’t usually struggle to find issues. They struggle to keep up with them.
As application security testing (AST) programs mature, scan coverage expands across more apps, more teams, and more pipelines, and results pile up fast. Each finding still needs validation, context, prioritization, and a path to remediation. At enterprise scale, that “last mile” becomes a monumental bottleneck.
OpenText ran into the same reality inside its own engineering organization. With thousands of applications and more than 7,000 developers, security testing produced hundreds of thousands of issues, and most required manual review. On average, each finding took roughly 10 minutes to assess, a huge operational drag that pulled engineering time away from shipping product.
The takeaway: the challenge isn’t coverage. It’s scalability.
The approach: AI AppSec audits findings before humans do
OpenText built Application Security Aviator as a generative AI capability inside the OpenText Application Security platform. Aviator audits static application security testing (SAST) findings before humans have to, analyzing and enriching results to reduce triage time, improve fix quality, and help organizations scale AppSec coverage without adding headcount.
As an AI code security assistant (ACSA), Aviator embeds into developer workflows and can run inline in CI/CD pipelines or be enabled centrally through Software Security Center (SSC). It provides plain-language explanations and guided remediation, including suggested code and automated fixes that teams can review and apply.
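To make that workflow concrete, here is a minimal sketch of the pattern described above: a post-scan step sends each SAST finding to an AI audit service, records the verdict, plain-language explanation, and suggested fix, and leaves only the undispositioned findings for human review. The endpoint URL, payload shape, and field names below are hypothetical placeholders for illustration, not the actual Aviator or SSC API.

```python
"""Illustrative sketch only: the endpoint, payload, and field names are hypothetical,
not the real Aviator/SSC interface."""
import json
import urllib.request

AUDIT_ENDPOINT = "https://appsec.example.com/api/ai-audit"  # hypothetical AI audit service


def audit_finding(finding: dict) -> dict:
    """Send one SAST finding to the (hypothetical) AI audit service and return its verdict."""
    req = urllib.request.Request(
        AUDIT_ENDPOINT,
        data=json.dumps(finding).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"verdict": ..., "explanation": ..., "suggested_fix": ...}
        return json.load(resp)


def triage(findings: list[dict]) -> list[dict]:
    """Enrich every finding with an AI disposition; return only those still needing a human."""
    needs_human_review = []
    for finding in findings:
        result = audit_finding(finding)
        finding["ai_verdict"] = result.get("verdict")            # e.g. "likely_false_positive"
        finding["ai_explanation"] = result.get("explanation")    # plain-language summary
        finding["ai_suggested_fix"] = result.get("suggested_fix")
        if finding["ai_verdict"] != "likely_false_positive":
            needs_human_review.append(finding)
    return needs_human_review


if __name__ == "__main__":
    with open("sast_findings.json") as f:  # exported scan results (example file name)
        remaining = triage(json.load(f))
    print(f"{len(remaining)} findings still need human review")
```

In OpenText's deployment this step is not custom glue code like the sketch above; it runs inside the Application Security platform itself, either inline in CI/CD or enabled centrally through SSC.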
OpenText AI AppSec outcomes:
OpenText validated Aviator against its own environment first, where any weakness in scale or accuracy would show up quickly. Over the first eight weeks of internal deployment, 1,500 applications were onboarded.
The results were measurable:
- 300,000+ findings analyzed and dispositioned
- Mean time to triage (MTTT) reduced by 70%
- Three million minutes of manual review removed from the pipeline
- Equivalent to 50,000 hours (2,080 days) saved and a productivity gain of 29 full-time employees across engineering
Want the same results for your organization?
We captured how OpenText did this in a new guide, including the rollout approach, how Aviator was operationalized in real developer workflows, and what it took to scale across a large application portfolio. As AI-assisted coding accelerates development and increases code output, development teams need a way to scale review and remediation without becoming the bottleneck.
If your AppSec program is fighting triage fatigue, false positives, or growing review backlogs, this guide lays out the internal blueprint OpenText used and the measurable outcomes they achieved, along with practical takeaways you can apply to your own program.
Read the full guide: AI AppSec, proven at scale: OpenText’s blueprint and outcomes
