From Hype to Practical Enterprise AI

We’re living through a seismic shift in enterprise technology.
The rise of generative AI, powered by large language models (LLMs) trained on 10–15 zettabytes of public data, has captured global attention. But here’s the real game-changer: behind enterprise firewalls lies a treasure trove of proprietary data, 100–150 zettabytes strong. When this data meets AI, the possibilities are transformative.
AI isn’t a distant future. It’s here. And it’s not waiting for us to catch up.
The notion that AI will arrive as a single, all-knowing super-agent is a misconception. The reality is far more nuanced and promising. The next generation of enterprise AI will be built on domain-specific agents: an RCA agent, a false-positive triage agent, a ticket agent, a translation agent, a release-notes agent, a contract-review agent, and more.
These agents won’t just mimic human intelligence. Instead, every human will have an army of agents, and those agents will need an orchestrator to keep them all coordinated. The orchestrator can take on the identity and permission settings of the human – e.g., what data it can access, what applications it is entitled to, and what decisions it can make.
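To make this concrete, here is a minimal Python sketch of that orchestration pattern. The class, field, and permission names are hypothetical illustrations, not an OpenText API: the orchestrator inherits a user's entitlements and grants each domain agent only the subset it both requests and the human actually holds.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entitlements:
    """Data sources, applications, and decisions a principal may use."""
    data_sources: frozenset
    applications: frozenset
    decisions: frozenset

@dataclass
class Orchestrator:
    """Acts on behalf of one human and never exceeds that human's entitlements."""
    user_id: str
    entitlements: Entitlements
    agents: dict = field(default_factory=dict)

    def register(self, name: str, requested: Entitlements) -> None:
        # Each agent receives the intersection of what it asks for and
        # what the human is actually entitled to -- nothing more.
        granted = Entitlements(
            data_sources=requested.data_sources & self.entitlements.data_sources,
            applications=requested.applications & self.entitlements.applications,
            decisions=requested.decisions & self.entitlements.decisions,
        )
        self.agents[name] = granted

    def can(self, agent: str, decision: str) -> bool:
        empty = Entitlements(frozenset(), frozenset(), frozenset())
        return decision in self.agents.get(agent, empty).decisions

# Hypothetical example: a ticket agent may read payroll data on the user's
# behalf, but refund approval was never delegated to it.
user = Entitlements(
    data_sources=frozenset({"hr_system", "payroll_db"}),
    applications=frozenset({"ticketing"}),
    decisions=frozenset({"answer_paystub_question"}),
)
orch = Orchestrator(user_id="jdoe", entitlements=user)
orch.register("TicketAgent", Entitlements(
    data_sources=frozenset({"payroll_db", "crm"}),   # crm is filtered out
    applications=frozenset({"ticketing"}),
    decisions=frozenset({"answer_paystub_question", "approve_refund"}),
))
print(orch.can("TicketAgent", "approve_refund"))  # False -- not delegated
```

The key design choice is that permissions flow downward by intersection, so no agent can ever hold an entitlement its human principal does not have.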
This is a game changer. Have you ever gone to approve a workflow only to realize you needed data from several different sources to make the decision? Agentic AI can now securely automate that critical business process. Or have you ever raised a ticket with a question about your paystub and waited days for an answer? Agentic AI can bring relevant insights and speed to that kind of cross-functional collaboration.
But here’s the catch: AI is only as good as the data it’s trained on. And hallucinations, those moments when AI confidently delivers false information, are real and potentially damaging. For example, a recent incident involving a government agency saw an AI-generated summary cite fictitious judges, leading to reputational harm and eroded trust. In another instance, a customer-service agent issued a customer a refund that was outside the company’s policy.
So, how do we avoid the risk of false confidence? How do we go from agents that give 90% correct answers to 100% when a 10% error rate is too high for a mission-critical process? How do we ensure that an agent accesses only the data it is entitled to? How do we avoid agent sprawl and prevent shadow AI?
The answer to secure AI lies in trust and data. Without data, there is no context. Without context, enterprise AI fails. Organizations must take control of their digital agents: define the knowledge they possess, the policies they follow, and the decisions they’re authorized to make. Security cannot be an afterthought; it must be built in. For years we’ve designed governance, permissions, and lifecycle management for humans; now it is time to do the same for agents.
This is not just an IT challenge. It’s a C-level strategic imperative.
This is where the OpenText AI Data Platform (AIDP) comes in. OpenText is shaping our product roadmap to bring important capabilities to customers:
- Discover – harness proprietary private data across functions (structured and unstructured) to feed AI
- Connect – your OpenText applications as data products to see information in new ways and drive cross-functional insights
- Act – via an AI control plane designed for the secure enterprise, and customize agents to meet your specific needs
- Govern – with compliance-aware services that let you redact and retain data based on your company policies (see the sketch after this list)
- Monitor – model drift and hallucinations, track feedback loops, refine prompts and models, and maintain audit trails
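As one illustration of the Govern pillar, here is a minimal sketch with purely hypothetical policy definitions (it does not reflect the OpenText Retention Service or Risk Guard APIs) of how redaction and retention checks might gate a document before it ever reaches an agent:

```python
import re
from datetime import date, timedelta

# Hypothetical company policies: patterns to redact and per-class retention windows.
REDACTION_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
RETENTION_DAYS = {"contract": 7 * 365, "support_ticket": 2 * 365}

def redact(text: str) -> str:
    """Mask policy-defined sensitive values before the text reaches an agent."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

def is_retained(record_class: str, created: date, today: date | None = None) -> bool:
    """Return True if the record is still inside its retention window."""
    today = today or date.today()
    window = RETENTION_DAYS.get(record_class, 0)
    return created + timedelta(days=window) >= today

# Only documents that pass both checks are handed to the agent.
doc = {"class": "support_ticket", "created": date(2024, 3, 1),
       "body": "Customer jane.doe@example.com, SSN 123-45-6789, asks about her paystub."}
if is_retained(doc["class"], doc["created"]):
    print(redact(doc["body"]))
```

The point of the sketch is the ordering: policy checks run before the agent sees the data, so compliance is enforced by design rather than audited after the fact.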
Our goal is to be the most open AI Data Platform, through an architecture enabled by APIs and strategic partnerships that bring together structured and unstructured data. For 30 years we have been custodians of our customers’ passive data sets, and now it is time to help them bring new life to old data. While we don’t know exactly what innovations will come next with AI, or how quickly, we want to give customers the platform and capabilities to innovate faster. Just as the original mobile app store didn’t know it would become the foundational enabler for future businesses like Uber, we are at the beginning of the next frontier of practical AI.
To Get Started
Today, OpenText is introducing OpenText Knowledge Discovery – a set of tools to help ingest structured and unstructured data and to automate metadata tagging to fuel AI. This solution category includes:
- OpenText Capture and Intelligent Document Processing
- OpenText AI Content Management (Connectors, Knowledge Graphs, etc.)
- OpenText Observability (Applications, Infrastructure, Assets, Networks)
- OpenText Quality Management (Quality Logs, Code Sources, etc.)
- OpenText Metadata Service API
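To illustrate the kind of automated metadata tagging this category is meant to fuel, here is a minimal sketch with hypothetical keyword rules; it is not the OpenText Metadata Service API, which a real deployment would use, and a production pipeline would combine classifiers, entity extraction, and the source system’s own metadata rather than simple keyword matching:

```python
import json
from datetime import datetime, timezone

# Hypothetical keyword-to-tag rules for the sake of illustration.
TAG_RULES = {
    "invoice":  ["invoice", "purchase order", "net 30"],
    "contract": ["agreement", "hereinafter", "indemnify"],
    "hr":       ["paystub", "payroll", "benefits"],
}

def tag_document(doc_id: str, text: str, source: str) -> dict:
    """Attach tags and provenance so downstream AI can filter and ground on the document."""
    lowered = text.lower()
    tags = sorted(tag for tag, keywords in TAG_RULES.items()
                  if any(k in lowered for k in keywords))
    return {
        "doc_id": doc_id,
        "source": source,                   # where the content was ingested from
        "tags": tags or ["unclassified"],   # never leave a document untagged
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = tag_document("doc-001", "Your paystub for March is attached.", source="hr_portal")
print(json.dumps(record, indent=2))
```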
Also, OpenText is introducing OpenText Data Compliance – a suite of services and APIs that make governance proactive and persistent:
- OpenText Advisory Services (Get AI Ready Assessment)
- OpenText Retention Service API (Records Retention Policies)
- OpenText Risk Guard Service API (Risk and Compliance Parameters)
- OpenText Data Privacy and Protection (Voltage)
- OpenText Core Threat Detection and Response
Lastly, OpenText is introducing OpenText Aviator AI Services – a group of professional services experts who help customers from discovery to deployment on their AI journey:
- PLAN: AI Roadmap Definition
  - AI Data Readiness Workshop
  - LLM Discovery Workshop
  - Business Workspace Workshop
- BUILD: AI Adoption Acceleration
  - AI Data Discovery and Preparedness
  - AI Business Value Definition
- RUN: AI Usage Enablement
  - Aviator Learning Services
  - Aviator Studio Agent Accelerator
  - Aviator Model Services
These capabilities abstract the complexity of compliance and enforce it by design, not after the fact.
But governance doesn’t stop at the enterprise’s edge. As AI becomes more embedded in global operations, questions around sovereign data and sovereign AI are rising. Where is your data stored? Where are your models deployed? What borders does your data cross?
Whether your workloads reside on-prem, in the cloud, or in hybrid environments, OpenText gives you control and choice. To support these deployment models, our Aviator AI capabilities are multi-cloud, multi-model, and multi-application.
With the upgrade to release 26.1, the OpenText Aviator entry-tier package will be included in Core Content Management, Core Service Management, and Core Communications Management. Also with release 26.1, Aviator will now be available on-prem for key OpenText applications, including Content, Content Management, Communications, Service Management, Software Delivery, and Application Security.
Why does all this matter?
Because the payoff is exponential. AI isn’t just about efficiency; it’s about new capabilities. From personalized customer experiences to proactive operations and new revenue streams, AI enables enterprises to respond faster to markets, risks, and opportunities.
But speed without security is reckless. Innovation without governance is fragile. That’s why OpenText is building the tools, services, and platforms to help you design the future, secure it, and lead with intelligence.
The future of enterprise AI won’t be dictated by vendors or regulators. It will be designed by your data, your workflows, and your values.
So, buckle up. The journey to intelligent, secure enterprise transformation has begun.
