Building the Ticketless Enterprise: AI-Powered IT Operations

Enterprise IT has become like the Winchester Mystery House—layers of complexity built over decades that now limit innovation. In this must-watch keynote from OpenText World 2025, Travis Greene and Bruno Labruere tackle this complexity head-on, presenting OpenText's roadmap for achieving a "ticketless enterprise" through observability, AI-driven automation, and intelligent self-service. Learn how AI agents can detect situations before they become incidents, and hear from Chris Mesecher of Metropolitan Nashville about saving over $500K through service management automation.
Watch the keynote or scroll down for the complete transcript.
OpenText World Service Management and Observability keynote Video Transcript:
Travis: Good afternoon. Come in and join us. Thanks for visiting this session. My name is Travis Greene, I lead the product marketing team for Observability and Service Management here at OpenText, and this is Bruno Labruere, who leads the product management team for our Observability and Service Management business unit. And I'd like to start this session by making a bit of a bold claim, which is that OpenText is uniquely positioned to deliver on this vision of a ticketless enterprise.
Now, of course, ticketless enterprise is more of a goal than a destination. We may not get there 100%, but we believe that we have the technologies in place to help you achieve this vision through a pathway that we've built out for you. So you might ask yourself, what is a ticketless enterprise? And I would say very simply, it is one where users are able to get the problem resolution and the request fulfillment that they need through self-service. But even better, we are reducing the number of incidents to such a small number that the need for resolution tickets simply doesn't exist anymore.
So this doesn't happen overnight. But it has been the dream, I would say, of many ITOps organizations that I speak with. And so we need to think through what are the technologies, and what are the pathways, that we're going to need in order to build out this vision. And of course, it includes AI, right, Bruno?
Bruno: Yes.
Travis: All right. Well, so as we think about Life as a Highway, that was a good opening song. I want to tell you a story of a guy from California like me. Not from France.
Bruno: No.
Travis: No, because French people know that-- he would not have done that.
Bruno: Yeah. We don't have this truck, by the way.
Travis: Well, that's why you know it's local. So actually, it wasn't a local, it was a Californian driving here locally in the Tennessee area. And he was following his GPS, as we often do when we're in unfamiliar places. And the GPS told him to turn right onto a road, and he wasn't paying very close attention. He wound up on some railroad tracks, facing down an oncoming train. Now, that train wound up smashing his truck and destroying the GPS device as well. But the question is, who was at fault? Was it the fault of the driver? Was it the fault of the GPS device? Or was it bad data that was at the heart of this particular accident?
Well, I'll submit to you that the township that had put together the data set for the navigation company had decided to change the zoning of this road from a regular road to a railroad. And what happened, in fact, is that the data set was really the problem that caused this whole situation, which was quite costly. So Bruno, let's put that into context for IT operations. What does that mean?
Bruno: So I believe it's the same for IT. IT has a data quality problem as well. An organization on average has about 20 monitoring tools and 10 discovery tools, and they still get outages. By the way, this morning Cloudflare got a massive outage. The nice thing about working in IT is that you don't have to find a story. Every day, you hear a story on the news. I was going to repeat the incident from the Amazon US East region two weeks ago, caused by the network. And I didn't have to, because I have a brand new story with Cloudflare this morning and the caching system being down.
So I think, you know, organizations have a lot of tools. Despite that, they can't prevent the outages. I think they have a hard time maintaining their service maps, and a hard time maintaining their knowledge bases. They have been neglecting their knowledge bases for too long and can't keep up with the maintenance. And with the next tool purchase, what happens is that we even increase the complexity.
Travis: So we've never had more data, but it's still not solving all of the challenges of outages, and it's just adding to that complexity.
We do hope, though, that AI is going to help with this. I mean, I think there's some skepticism, rightfully so, about what AI is doing, but how is AI going to help us fix the complexity problem? Because in reality, the complexity has only begun. Does anybody know what this picture represents? I saw this, by the way, at a Forrester conference about two weeks ago, and it was such a good example that I had to share it with you. If you've never been to a Forrester Technology and Innovation conference, I would highly recommend it. A really good way to look at the complexity problem. But does anyone know what this house is?
Yes. It is the Winchester Mystery House. Thank you very-- that's a great, great guess there. The Winchester House, if you don't know, was a home that was purchased by Sarah Winchester, who was the widow of the man who started the Winchester Repeating Arms Company. And she thought she was haunted by the ghosts of the people who were killed by the rifles manufactured by her husband's company. So she went to a medium, and that medium told her that she needed to continuously build onto the house in order to keep these ghosts confused so that they would not haunt her. And so, from 1886 until her death in 1922, she continuously added on to the house.
And I think you see where I'm going with this. This is what the house looks like today. You can go visit it in San Jose. And if you take a look at the layout, the floor plan, this is what it has evolved into. Now, I know you're already thinking ahead of me here. This is a little bit like enterprise IT, isn't it?
Audience: [LAUGHTER]
Travis: I mean, we started with a vision, a goal, an idea, an application, and we've added on to it over time. And we've collected some technical debt, haven't we. We've been trying to think of how are we going to get past all of this complexity, especially when we want to get to Agentic AI, and the complexity has now created limits for us to actually be able to achieve that. In fact, many of the platforms that exist today only add to that complexity. And at OpenText, what we want to do is to help you break through that.
So how are we going to do that? Well, for our IT operations customers, our goal really is to help you go beyond the complexity limits that every one of us has seen if you have been in IT operations for any period of time. And we want to bring clarity and connection to your data. You heard the story this morning in the keynote about how critical data is. And so what we want to do in this session is really focus on what we're doing from the Observability and Service Management business unit to bring that data together so that you can achieve your ticketless enterprise vision.
Now, some of you are very familiar with these categories. These are the products that you've been using, in some cases, for decades. So you'll see us refer back to these over time. However, we want to make sure you understand that our strategy doesn't require you to replace anything you might be using that isn't from OpenText in one of these categories. Rather, our platform is designed to bring that clarity and connection across the tools that you're using today and the tools that you might choose to use tomorrow, so you can achieve that ideal state of the ticketless enterprise. But this is where we are today, Bruno. Where are we going next?
Bruno: That's right, Travis. And our strategy is evolving, with our operations platform that is at the heart of this slide. That operations platform is powered by the AI data platform that Savinay was presenting in the keynote this morning. And with it, we're connecting the data and capabilities of all our products in observability and service management, along with third-party products. And this decreases the complexity, in particular the complexity of normalization and data quality. And with this, we can focus on the outcomes that enable us to deliver the ticketless enterprise.
So the first outcome is taking our discovery and CMDB and service management and observability together, and making it about total asset insight, so you understand everything that you have in your company in terms of assets. The second step is to prevent incidents, obviously by using AI technology and leveraging the capabilities we have been building over the years around both AIOps and automation. And the last one, key to going ticketless, is a self-service experience, enabling all of the users across the company to self-serve. And all of this is possible by baking AI and automation in everywhere.
And this is based on our platform that, as Travis was presenting, is composable. It's not a rip and replace. It's delivering a lower TCO than all our competitors, and security and compliance are built in. With data insight, you have the right data, and you have heard how important it is to have the right data to be able to do AI. Because we have the right data, we can intervene at the right time to prevent incidents. And we have the right workflows to ensure that more tickets are not going to be created.
Travis: So these are the building blocks of that ticketless enterprise. This is how we're going to help you to get there.
However, we should probably address the elephant in the room, which is: what about Agentic AI? I mean, this is the topic that comes up at any tech conference and in many of the conversations that we're having: how does this relate to Agentic AI? So we thought we'd get some definitions clear up front, and then we'll walk through each of those three sections that Bruno just talked about.
Bruno: Yeah, you might be sorry to hear it, but we're not going to do Agentic AI washing in this session. What we're trying to do is help you adopt AI. And for this, we defined a framework to make sure that you can get the value from AI and get the trust. You heard this morning, multiple times, how important it is to be able to trust AI. So think about it as a maturity model where you can start with the first level: conversational. And usually, that's where your users are going to be very familiar, because that's how they got to generative AI, through a chat-based interface. That's what we're providing.
The second level is what we call assistive, which means that your users are going to have a number of AI agents that are helping them answer questions and automate tasks on a day-to-day basis. And the last level is agentic. This level is when the system is capable of autonomous decision making: capable of understanding and reasoning about a big problem, splitting it into multiple tasks, finding out which AI agents can work on each of these tasks, and orchestrating them through the execution of those tasks, including validation of the work being done. And with this, we believe that if you follow this path, you can realize value early on, get your users to trust the AI system, and get used to using it as you move up the stack.
Travis: So I think we all understand conversational. We have all interacted with ChatGPT or the like, and so we understand how AI plays in that role. And we can get a sense of the vision of having autonomous agents, although it's scary. It sounds like handing an intern the keys to your enterprise and seeing what comes out of it. So most people I talk to aren't ready for full agentic. But it's interesting what assistive might be as a stepping stone towards full Agentic AI. Do you have any examples of assistive?
Bruno: Yeah, absolutely. So think about assistive as a way to get a few AI agents that are going to work on a number of tasks for you. An example that we have in our products, already available today, is a knowledge management AI agent. That knowledge management AI agent is going to look at all of your incidents to understand which ones are good incidents to build knowledge from, but that's not where it stops. With this, it will create a draft knowledge article for you, and it will start a workflow to make sure that you have the right people involved in reviewing this knowledge, based on the information that was in the incident and the service that is impacted. And it also helps you with the life cycle of the knowledge, because often we create things and we forget about them. Unfortunately, obsolete knowledge is part of the bad data we referred to, the kind that gets a truck onto the railway tracks. So that's one example.
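To make the assistive pattern described above more concrete, here is a minimal Python sketch of what a knowledge-management agent loop could look like. It is illustrative only, not the product's implementation; the incident fields, the candidate heuristic, and the reviewer-selection hook are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Incident:
    incident_id: str
    service: str
    summary: str
    resolution_notes: str
    reopened: bool = False

@dataclass
class DraftArticle:
    source_incident: str
    service: str
    body: str
    reviewers: list[str]
    review_by: date = field(default_factory=lambda: date.today() + timedelta(days=14))
    expires: date = field(default_factory=lambda: date.today() + timedelta(days=365))

def is_knowledge_candidate(incident: Incident) -> bool:
    # Heuristic: a non-reopened incident with substantial resolution notes
    # is worth turning into knowledge. Real criteria would be much richer.
    return not incident.reopened and len(incident.resolution_notes) > 80

def draft_from_incident(incident: Incident, reviewers_for_service) -> DraftArticle:
    # In a real system an LLM would draft the article text; here we simply
    # assemble it from the incident fields.
    body = f"Symptom: {incident.summary}\nResolution: {incident.resolution_notes}"
    return DraftArticle(
        source_incident=incident.incident_id,
        service=incident.service,
        body=body,
        reviewers=reviewers_for_service(incident.service),
    )

def run_agent(incidents, reviewers_for_service):
    # Select candidates, draft articles, and route them for human review.
    # Each draft carries a review deadline and an expiry date so the article
    # lifecycle (the "obsolete knowledge" problem) is tracked from day one.
    return [
        draft_from_incident(i, reviewers_for_service)
        for i in incidents
        if is_knowledge_candidate(i)
    ]
```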
Travis: Well, in the interest of time, let's move on to the idea of the ticketless enterprise. Where are we investing across these three portfolios? Bruno, can you walk us through that?
Bruno: So I think we're doing a lot of AI innovation on each of these three outcomes, and in the rest of the presentation we're going to spend quite some time on each of them. I think what's important is that in total insight, remember, it's about you having easy access, via visualization, to all of the data in your enterprise, with obviously the richness and the data quality. So we're using AI a lot in the normalization of software and enrichment through the technology catalog, and adding, I would say, the depth of observability data, so we understand the health, the utilization, and the performance of assets. But also, as you will see, we're innovating in helping you create and maintain service maps.
The second one is prevention. We're going to go into much more detail on this one, because it's a pure AI-first innovation we're building to help you detect incidents and remediate them before they occur. And the last one is something we introduced earlier on with the virtual agent, and a number of capabilities to make sure that any employee in your company can be helped by virtual agents and get access to both knowledge services and support, with automation to make sure it's fulfilled quickly.
Travis: On that third one, I was having a conversation earlier. And this idea that, how are we going to get our users to stop wanting to talk to a person at the service desk? Because that's costly and they have to wait in queues. And there's a downside to always wanting to interact with a human. We've got to give them a better experience with the AI, with the agent, than they could possibly get from talking to a human.
Bruno: Absolutely, 24/7, in their own language, for instance.
Travis: Yes, exactly. All right. So we've talked about the three categories now, and I want to drill down into each one of those three. This idea of total asset insight is a critical foundation for anything that we do in IT operations. We really have to get this right if we're going to make any of the other parts productive and something of value for us. It's something that we've been working on for decades. Anybody remember the promises of the CMDB 25 years ago? I certainly worked on that. We were talking about this back then, how important a foundation it was, and it still is. It's just that the technology seemed to not really get us where we needed to go. So I think the case that we have to make now is that the technology is here, we can trust in it, and it's working for people.
Bruno: Absolutely.
Travis: Except that the survey that we did showed that people still struggle with it in a broad sense. These are not OpenText customers, of course. But--
Audience: [LAUGHTER]
Travis: About 50% of the customers surveyed said that they had inconsistent discovery, meaning that things were sometimes discovered and sometimes they weren't, and so it was very confusing. Or maybe they didn't have sufficient data collected to give them the information they needed to support things like incident and change management, and so forth. For those who did find that they were able to discover things, a lot of those jobs, even though they were running, timed out, and so they weren't reliable or accurate enough to rely upon. And then finally, some of them had a lot of duplicate data. So you sometimes see data collected by multiple tools, and there's no reconciliation happening between them. And so you wind up having duplicate entries, and how do you know which one is the right one to deal with?
And those are, I think, even the optimistic numbers from some of the people that I've talked to. So we like to think of the CMDB and our vision for total asset insight as being like the grid that runs your IT operations. And that consists of world-class discovery, a CMDB that maintains accurate service maps, and observability that is then deployed to monitor the status and the flow of information that delivers those services out to the end users. And when we talk about discovery, that maps every connection and includes not just IT but IoT devices, and we're starting to look at operational technology devices. And now we're incorporating more third-party discovery into the CMDB to get to that place of the single source of truth that we've always wanted to get to.
Once we have the CIs collected, then we have to do something with them. And it's really the service maps that are the most valuable and yet the most challenging to build and maintain. But if we have accurate service maps, it dramatically improves our ability to conduct those other disciplines like asset management, change management, and incident management. And then finally, from an observability perspective, we often think of observability as being at the application layer, because that's the messaging that you hear in the marketplace. But we would submit to you that the network and the infrastructure are just as critical as the application, and they need to be equal partners in order to achieve true observability and get to that total asset insight that we're talking about. So Bruno, tell us more about the platform and how we're going to do that.
Bruno: Yeah, so what we're doing is, in the middle, we have our operations platform, which includes a data lake. And what we can do is base this on the breadth of the assets we can discover. As Travis was saying, we can discover pretty much everything, from deep network, to application, to service maps, both in IT but also IoT, industrial IoT, and operational technology. With the service models, all of this data is managed through asset management processes. We even add a layer of security, with information about vulnerabilities or end-of-life and end-of-support status. All of this information is normalized and enriched with AI, integrated and enriched with technology catalogs, along with a number of pieces of information about compliance as well.
So all this breadth of discovery data is put into the data lake along with the depth of observability. And in observability, we're unique in that we have very deep observability on the network side, the infrastructure side, and the application side, which means that we can find everything that is happening in your systems, whether it's in the cloud, on premises, or hybrid, and whether it's in your network or being raised by an OpenTelemetry trace or event in the application layer. And all of that is leveraging AI. In particular, we have intelligence at the edge to be able to capture exactly the information that is needed and bring it, as it is needed, to the central system, so you can have access to all of this data.
Travis: So Bruno, what's interesting is I'm hearing more demand from those outside of IT for the data sets that are inside the CMDB. In particular, I would call out security and compliance. That is a group of people who definitely could benefit from having an understanding of what's in the environment, because if you don't know what's in it, how can you be sure that you're protecting it?
Bruno: You're absolutely right. I think when you have a denial of service or a zero day, knowing which servers are impacted is valuable, but knowing the business services that are impacted and how critical they are is even more so. And with this, you can obviously do remediation. Our system allows you to prioritize remediation by business service instead of by nodes.
Travis: Let's talk about what's new, and how we're going to deliver on this grand vision?
Bruno: Absolutely, absolutely. That's very exciting. So on that, we have a session on Wednesday, tomorrow, at 4:00 on total asset visibility. What we are bringing that is brand new, and you will be able to see a demo this week, is how we're leveraging Service Management Aviator and our AI and generative AI to infer the service model. We know everybody in the industry has been struggling to create these service maps, understanding how all of this infrastructure, application, and middleware is participating together in a business service. Obviously, it's based on our automated discovery, which is both bottom up and top down, with Universal Discovery discovering pretty much everything you have in your infrastructure.
And we're adding Aviator in a conversational mode, so it's very easy, step by step, helping the users of the CMDB system create this service model. What it will do is find clusters of nodes working together, link all of these components together, take the existing relationships, define some of the missing relationships, put that together, group them, and even associate a name with it. And we have a session, Let the Machines Model Your Service, on Thursday at 12. Please go see it. I think the demo is very interesting. And it's something that we're going to release early next year, in 2026.
That's critical. That's critical because we can spend a lot of time modeling services. It's a time-consuming activity, and the challenge is to maintain the models. In the demo, you will see the automated creation of a service model. You will not see the ongoing maintenance. It's not something we have yet, but it's something we are planning on doing with the same technology.
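As an illustration of the kind of service-model inference described above, the sketch below groups discovered CIs into candidate business services by walking the relationship graph that discovery produces. This is a simplification under assumed data shapes, not how Universal Discovery or Aviator actually models services; the CI types and the naming heuristic are invented for the example.

```python
import networkx as nx

def propose_service_models(cis, relationships):
    """Group discovered CIs into candidate business services.

    cis: dict of ci_id -> {"type": ..., "name": ...}
    relationships: list of (ci_id, ci_id) pairs from discovery
    Both shapes are illustrative; a real CMDB model is far richer.
    """
    graph = nx.Graph()
    graph.add_nodes_from(cis)
    graph.add_edges_from(relationships)

    proposals = []
    for component in nx.connected_components(graph):
        members = sorted(component)
        # Name the candidate service after its most "entry-point-like" CI,
        # for example a web tier or load balancer if one is present.
        entry = next(
            (m for m in members if cis[m]["type"] in ("web_server", "load_balancer")),
            members[0],
        )
        proposals.append({
            "suggested_name": f"{cis[entry]['name']} service",
            "members": members,
            # Flag CIs with at most one link as places where relationships
            # may be missing and worth confirming with the user.
            "possibly_incomplete": [m for m in members if graph.degree(m) <= 1],
        })
    return proposals
```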
Travis: All right, so what does the roadmap look like going forward from there?
Bruno: Hey, on total asset insight, there's plenty of roadmap. Obviously, in this session, I'm not going to describe everything we are bringing in the coming 12 to 18 months. But you're going to be able to drill down on this in the breakout sessions that our product management team will do this week. What I can call out here is that what you're seeing is a lot of integration and content expansion. If you want, we want to be the richest CMDB and total asset repository on the market. And I think you heard from our CPO and CTO, Savinay Berry, this morning that we want to embrace all of the third parties of the world. We want to be known as the company that can integrate any third-party repository.
Absolutely true. In observability and service management, we see a lot of third-party discovery systems and third-party controllers that we access to get all of the information on the assets and the properties of the assets. And the second thing we're going to bring in our roadmap is a number of AI agents. I called out the AI-based service modeling coming in 26.2. When I say 26.2, it's the second quarter of 2026. And we're also going to bring AI agents to help you with insight on your logs, as well as alerting on your logs. And what's unique when we say logs is that we don't mean logs in a silo; we're capable, and that's been introduced this year already, of understanding logs across network, infrastructure, and application.
Travis: All right. Thanks, Bruno. Let's move along to incident prevention. So this is the second category that we were talking about. And as a reminder, once we've got that great foundation set up with the CMDB, we can get to a place where we have the right data at the right time, not just to resolve problems, but to prevent incidents before they disrupt normal operations for our end users. And ideally, that would reduce the number of tickets we have, in this goal of getting to that ticketless enterprise. So I want to ask you, what do these numbers represent? Anyone want to guess?
Travis: Number of hours you need to sleep after the event?
Audience: [LAUGHTER]
Travis: Yeah, after a night on Broadway.
Travis: Go on. Somebody give me a guess.
Travis: Time to incident completion? Time to incidents? Completion? Yeah, remediation. Time to resolve. Not quite, but it's pretty close. Mean time to respond. Yeah, those are all really close answers, so you're thinking along the right lines. These are the average downtime per year for Unix, Linux, and Windows servers. I don't know, it seems depressing to me; given it's 2025, you'd think we'd have this a little bit more under control. So I think what that represents is that, yeah, we've got a lot more data. Yeah, we're starting to implement AI and we've got lots of tools and that sort of thing. But the outages just keep coming. And we've seen that from AWS from--
Bruno: Cloudflare.
Travis: --yeah Cloudflare. There's big public ones, but I'm sure you can remember a time when maybe it happened to you. So our proposal here is, how can we stop reacting and start preventing, Bruno?
Bruno: So what we do is bring to the table an army of AI agents. The whole idea is to automate the war room before it happens, so you don't have to have a war room at the end. So the idea here is that we're going to bring a number of AI agents. One, to detect. The idea of this detect agent is that its task is to look at all of the data you have. That's why the quality of the data, the quantity of the data, and the normalization of the data are super important. From this data, it's going to be able to find out that something is going to happen, what we call a situation that is building up. And if we don't react, an incident will be created.
The second step will be this diagnose agent. This diagnose agent, if you want, is going to analyze everything about this situation. And it's going to try to help you with finding the root cause, doing RCA, being able to look at, again, all of the data that are available to help you with finding the RCA. And the last step--
Travis: I was going to say, what's interesting about that is that this is not just about AIOps operating in a silo, or just about observability. You're pulling in change information and service map topologies, incident records and problem records, and service levels, and all of the things that come from the IT service management side too. Those two things should not be separate.
Bruno: Exactly. That convergence of observability and service management is why we call it observability and service management, because we believe it's one thing. Too often, we have seen people having their events in one system and just throwing the event data into an incident management system, where we no longer have all of the observability and discovery data. Well, here, the idea of this system is that it can access all of this data, both from OpenText systems and from third parties, whether it's discovery data, asset management data, service management data, or observability data.
And so the last step, once we have a good idea of the RCA, is a resolve agent that is going to propose a number of remediations. It could be a manual step to follow. It could be some automation. It could even be generating an automation or a full orchestration flow to proceed with remediating the root cause, but also validating that the remediation worked, and obviously closing out the situation. So here, the idea is that it's not a traditional, I would say, event-to-incident, closed-loop incident process that I think we've seen implemented many times. It's a much more real-time system that is constantly inspecting what's happening in your company, with a number of agents capable of detecting issues, raising them, and resolving them before an incident is created.
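The detect, diagnose, and resolve loop described above can be pictured with a short sketch. The Python below is a toy version under assumed data shapes (a health score per service and a list of recent change records); it is not the product's logic, only an illustration of how a situation could be detected, correlated with a recent change, and turned into a proposed remediation that still waits for human approval.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    service: str
    impacted_cis: list
    evidence: list

def detect(metrics, threshold=0.8):
    # Detect agent: flag services whose health score is degrading before
    # users are impacted. 'metrics' maps service -> {"health": float, "cis": [...]}.
    return [
        Situation(service=s, impacted_cis=m["cis"],
                  evidence=[f"health score {m['health']:.2f} below {threshold}"])
        for s, m in metrics.items()
        if m["health"] < threshold
    ]

def diagnose(situation, recent_changes):
    # Diagnose agent: correlate the impacted CIs with recent change records
    # (planned or detected) and rank probable root causes, with a reason.
    suspects = [c for c in recent_changes if c["ci"] in situation.impacted_cis]
    suspects.sort(key=lambda c: c["minutes_ago"])
    if not suspects:
        return None
    top = suspects[0]
    return {
        "probable_cause": top,
        "why": (f"Change {top['id']} touched {top['ci']}, which is in the impacted "
                f"topology, {top['minutes_ago']} minutes before the degradation."),
    }

def resolve(diagnosis):
    # Resolve agent: propose a remediation and keep a human in the loop;
    # nothing is executed without approval.
    if diagnosis is None:
        return {"action": "escalate to a human", "requires_approval": True}
    return {
        "action": f"revert change {diagnosis['probable_cause']['id']}",
        "assignee": diagnosis["probable_cause"].get("owner"),
        "requires_approval": True,
    }
```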
Travis: And we heard this morning about the AI Studio and how that's going to allow you to build your own agents, even though we'll be providing some as well, and to build, maintain, and govern them and make sure that they're secure.
Bruno: Absolutely. And that's going to be important to be able to tune them as well. So when we have a studio, it's both to create new AI agents, but you can also start by configuring the ones we're providing, which is easy. And as you've seen, we're doing that in a codeless manner.
Travis: So let's look at what this looks like.
Travis: Yeah, everybody likes screenshots.
Bruno: Yes. So here, typically, a situation happens. My agent was there inspecting the data, and it's seeing that we have a slowdown in accessing the online banking service, with some metrics pointing to a response time issue on this router. If you look at the left panel, you can see the situation was raised. And the system, the AI, has categorized everything, which means that classification was done not by a human. Up until this screen, nothing was done by humans. The idea is that the system has already understood what could be the potential impact. And as well, the system is smart enough to understand, from all of the information, who could be the right people to involve. Because often we see that when a situation is building up, if we want to avoid the impact to the end user and avoid having an incident, we probably need a few people, but the right people at the right time.
So the system is detecting who could be the right people, based on what they have been working on, including profiling.
Travis: And we've probably all been in those war rooms with 50 people or 100 people. And that's not very efficient. So--
Bruno: Absolutely. And that's the nice thing about this system: if you look at the right side, it's telling you the story in the storyline, but it's also bringing, in the chat, a mix of human agents and AI agents. And you can add more people if the ones that were suggested are not the right ones. But the idea is to keep it to a small number of people, to have basically a human in the loop, but without spending too much time on it.
I think what's interesting, obviously, is that the system is summarizing what's happening. On the next step, I can click on root cause, and the system is telling me what it detected: there was a change to this router. The routing table was updated this morning by someone at, I think, 9:00. Yes, at 9:00.
So I have change information here. And the system is telling me not only that this is the most probable root cause for the issue, but why. And we believe that's very important: if we want you to trust our AI, our AI will tell you why it decided that this is the most probable root cause. And you can see that it's in plain English; you can read why it was the most probable cause, in particular because there is a clear correlation between doing a change on a CI and having an incident, particularly on network devices, and the change was done on the router. And we get a couple of metrics linked to the router having problems, and we're getting a slowdown in accessing-- a performance issue in accessing the router.
Travis: So it's clear that bringing together change records, or even just the detection of changes, with the exact event that has occurred through the monitoring or observability tools has real value that is difficult to expose without having some sort of situation analysis. And I think that's an interesting word to apply to this, because it hasn't really risen to an incident yet.
Bruno: Absolutely. And I think it's important here that speed is of the essence. In this case, the change was created in the ITSM system, so we got access to it. But as well, with our technology, we can even automatically detect unplanned changes with our network capabilities or with our discovery capabilities. So these unplanned changes can be taken into account as well, even if they were not recorded in the ITSM system, making this very robust even when processes are not being followed.
The last agent is resolving the issue, providing me with the best course of action. And I'm the human in the loop; I can accept it or not. But here the idea is the best course of action, so your online banking service is not going to go down, because that would be a major cost impact. I can just create a new change for the exact same person that did the change this morning to revert it. And then the system will constantly evaluate whether the metrics are going back to green. And when they go back to green, it can validate that the situation has been resolved and do the proper handling of the situation. If it's all done in a timely way, and you can see that a lot of that is automation and an AI brain behind the scenes, then we will have dealt with it without an incident being created.
Travis: So the other thing I noticed here, Bruno, is that there's that blue box there in the middle about initiating selected steps. So there's still a human in the loop as we're building trust in these agents to make the final decision.
Bruno: Absolutely, absolutely. That's completely based on the trust comment we made earlier; it's important that we can stay in control. And at some point, once we are very familiar with some pattern, we'll be able to say that this pattern can happen automatically. So, obviously, the Holy Grail is to run autonomous operations.
Travis: So I'm sure everybody's going to ask the question, when are we getting this?
Bruno: Very soon. But obviously, I couldn't do justice to this here. So I think there are two ways for you to learn more about this incident prevention innovation. It's an AI-first innovation. There is a turbo talk, AI in AIOps, on Wednesday. And we have an Innovation Lab on the demo floor where you can see it and play with it. You can take the mouse and the keyboard and play with it.
Travis: And we'd love your feedback.
Bruno: And in terms of deliverables, some of the building blocks for this have already been delivered, in particular the topology piece. I could not describe it here; it's what we call the neighborhood topology, a very interesting component. It's something we've been delivering in 25.4. And the first version of incident prevention with this situation management, very focused on the diagnose agent, is going to be in 26.2, so the second quarter of next year. And then we're going to release more agents as we go over the next 12 months.
Travis: All right. So what does the rest of the roadmap look like?
Bruno: The rest of it?
Travis: Yeah.
Audience: [LAUGHS]
Travis: Is there anything else coming in our incident prevention roadmap we should call out?
Bruno: Well, many things. I think what's probably most important to comment on here is that obviously we have all of these agents that are going to be delivered. But as part of that, the resolve agent works with a lot of automation. And for automation, what we do is that we have the ability in our AI system to call any automation, and we're using, in particular, MCP servers, if you're all familiar with MCP. It's a standard protocol that enables any AI to basically call a system. And all our automation is going to be accessible through MCP servers.
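For readers who have not used MCP, the sketch below shows how an existing automation could be exposed as a tool through an MCP server using the open-source MCP Python SDK (the `mcp` package). The `restart_service` tool and its behavior are hypothetical stand-ins for a real orchestration flow; OpenText's own MCP integration may look different.

```python
# Minimal sketch: exposing an automation as an MCP tool so any MCP-capable
# AI agent can discover and call it. Assumes `pip install mcp`.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("it-automation")

@mcp.tool()
def restart_service(host: str, service: str) -> str:
    """Restart a service on a host and report the outcome."""
    # A real integration would call the orchestration engine here;
    # this sketch only returns a canned confirmation.
    return f"Requested restart of {service} on {host}"

if __name__ == "__main__":
    # Serves over stdio by default, the simplest transport for local agents.
    mcp.run()
```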
Self-service Experience: Customer Success Story - Metropolitan Government of Nashville
Travis: All right. OK, let's move on to our third category, which was self-service experience. And of course, incident prevention is going to make everybody's lives better. It makes lives better for IT, of course, but for the end users too, and even our customers, who aren't impacted so much by these unexpected outages. However, we need to do more for our users, don't we? Because the service experience is the front door to IT, and a lot of the impressions that users have of the IT organization are going to be built from what that experience looks like. So let's focus on that next.
The question I have for you is, how do your users rate your service experience today? What do your NPS scores look like? Could they be improved? I think in a lot of organizations, most people would say, yeah, we'd like to find ways to improve that. So in order to help us tell this story a little bit better, we thought it would be great to bring up a customer who has recently moved to our service management platform. This is Chris Mesecher from the city of Nashville, or the Metropolitan Government of Nashville, as they call it here. So let's welcome Chris to the stage.
Audience: [MUSIC PLAYING]
Travis: All right, Chris, tell us a little bit about your role at the Metropolitan Nashville?
Chris: Yeah, so I'm the ITSM Manager. My team handles the training, development, and rollout of SMAX, answering questions, all those kinds of things. So it's a very, very busy role, for sure. We also handle all the documentation for all the automation that actually interacts with SMAX.
Travis: And you had an interesting pathway to discovering our service management offering. Can you tell us a little bit about the story, about why you chose it and the pathway that got you there?
Chris: Absolutely. So we were using Cherwell actually prior to using OpenText Service Management.
Bruno: That’s OK
Chris: Exactly right. I'm sure some of you are aware it's going end of life. So budgetary reasons really forced our hand, per se, to choose a new product. And we chose SMAX. But we had a very, very tight window.
Travis: So you waited a little while to make the final decision.
Chris: Yes, we decided in May of last year to go ahead and do it. Signed the contract. Gosh, and then in three and a half months we actually put in the tool.
Bruno: Wow, migration in three and a half months?
Chris: In three and a half months.
Audience: [LAUGHTER, APPLAUSE]
Chris: Good job. So I couldn't have done it without OpenText professional services. We had a really fantastic group of folks who worked with us and made it possible. Luckily, I have a great background in consulting for ITSM as well. So between my experience and professional services, we got it done. But I would not recommend anybody trying it that fast. It's just too difficult and too much to get done in that length of time.
Travis: So clearly, you would need to prioritize some things to get that into at least a minimum viable production category. Were there any lessons learned you'd like to share with the audience?
Chris: Ironically, we needed pretty much everything to be live by the end of September, so we did it all. We did knowledge, change, incident, request, and self-service, all in three and a half months. So it was pretty amazing. Again, I don't recommend it. It's just too difficult.
Travis: So in terms of outcomes, have you seen any improvements since you've implemented?
Chris: We have. It's a great question. So we use automation, Power Automate and Orchestrator outside of SMAX, quite extensively in Metro Nashville. That team has a bunch of really smart folks, and they use those tools to pump requests primarily into SMAX. So the last number I saw, they've saved more than half a million dollars by being able to self-generate all those tickets, for lack of a better word. So it's real cost savings. Almost 35% to 40% of our volume actually comes from automation requests.
Travis: Right. Well, I mean, that's a good case where this idea of ticketless is tongue in cheek. I mean, we probably still need tickets as records--
Chris: We do.
Travis: --of what's happening so that we can use that to apply towards continuous improvement.
Chris: Exactly. And that was the whole point of doing it this way. We have a record of all these cases, per se.
Travis: I would suppose that we want to reduce the impact on the humans, right?
Chris: Yes.
Travis: And the end users, what end user likes to open a ticket? Nobody does. I mean, no one wants to fill out all those forms.
Chris: No one. And of course, IT doesn't want to have to document things and close them out and that sort of thing. They want to do even less than the normal customers, actually.
Travis: All right. Well, that sounds great. What is the next thing, the next big thing for your implementation?
Chris: Yeah, it's funny, as you just mentioned on the screen a few minutes ago, the CMDB is a huge project for us right now. We're just taking baby steps. First, hardware. And now, into software. So it's difficult, as most customers find, but we're getting there. And then knowledge management is our next big project as well. We really want to gather all of our knowledge and get it into SMAX so that we can use agents to actually search that knowledge, both internally as well as for our customers.
Travis: That's great. Knowledge management, as we know, is a pretty critical stepping stone to AI and to being able to extend that out to end users. So you're looking towards that.
Chris: Absolutely, we want to leverage agents a lot, for lack of a better word. We're not there yet. We have some agents built, but we need to take more steps. Again, it's baby steps.
Travis: So for a city, I mean, you have constituents, or the citizens, and then you have, I would suppose, the internal IT organization and all the users that make up the city employees.
Chris: Yes, more than 10,000.
Travis: So is there a priority for addressing one of those groups?
Chris: Self-service is becoming more and more important to us because of the number of employees that we have. And obviously there's a limited budget for IT, as everybody has problems with that. So it's really about making sure we're doing self-service the right way to ensure we're engaging with our customers, closing tickets faster, building better knowledge, and empowering them to do more self-service. So it all flows back to self-service.
Travis: So building trust, taking those small steps along the way.
Chris: Exactly.
Travis: All right. Well, final question for you, Chris. You're a local. So what would you recommend this audience do if they were looking for a--
Chris: Absolutely.
Travis: --little fun or some great food tonight.
Chris: You've got to go get barbecue. No matter when you come to Nashville, you have to get barbecue. It's a sin not to. So be sure and do that. There's lots of good places around. Martin's is right down the street, so you can check that out. And Jack's is a little bit of a drive, or Edley's. Those are some good choices for you. Music: be sure and just walk up and down Broadway or all around here. There are music venues absolutely everywhere. Most don't charge covers, so just walk in and out, listen to some music, and move on. And then, of course, hot chicken. You've got to get Nashville hot chicken while you're here.
Travis: All right. We'll be sure to check that out. Thanks, Chris, for joining us and sharing a little bit about your experience.
Audience: [APPLAUSE]
Chris: Thanks.
Bruno: Thank you, Chris.
Travis: All right. Well, self-service doesn't just emerge, as you heard. We've got to take steps, we've got to have great knowledge, we need to test it so that it's trusted. But with every benefit that AI brings to the table, we also have some challenges, don't we, Bruno?
Bruno: That's right, Travis. We think with AI we're going to get a lot of efficiency. But the reality is that we have a data quality challenge. And by chance, I think AI can help here as well, normalizing data and, as I said, being able to create relationships. So we can improve the data quality, we can detect things, but it's something that you need to keep in mind.
Travis: What about speed, Bruno? Everybody thinks AI is going to help us address things faster. We've been talking a lot about that in this presentation.
Bruno: Well, absolutely. I think what we shared about our way of managing situations will go fast. The thing is that in order to go fast, you might need to start slow. And I think we need to be aware that there are implementation costs linked to AI, and obviously legal and security questions.
Travis: Yeah, we have to demonstrate a return on that investment if we're going to justify the cost to our executives.
Bruno: Certainly.
Travis: Yeah, we think in terms of automation, too. So the combination of AI and automation gets us to Agentic AI. That's where everybody wants to be. But what are the challenges there?
Bruno: Well, the challenge is that we're all automating many systems, including very old, multiple-decades-old systems. So we have a number of legacy integrations, and those integrations are maybe going to be harder to reach with AI. Maybe that's why it's important to have that open platform that Savinay was talking about.
Travis: Absolutely. And orchestrators can access any type of integration, including legacy integrations. And we talked earlier about the fact that an improved user experience is only good if it deflects people from calling in to the service desk, because the experience is better there.
Bruno: Absolutely, but we need to watch any security risk it will open. In particular, we need to make sure that AI will be safe, applying the same entitlements and access restrictions as a user. And that's why the integration of our AI agents with the IAM system is very important.
Travis: All right, well, if we could solve those problems, if we could bring some unity to all the technologies that are necessary to do this, to create this ticketless experience, what would that look like?
Travis: Well, building that clarity and connection for data in support of the service experience starts, as we've seen, first with total asset insight and incident prevention. But we can also extend the service experience beyond IT to both internal services and external services. This self-service experience, covering all of the departments of an organization, is how we build the ticketless enterprise. So it's not just about IT anymore. If we're going to invest all of this effort to build out this ticketless experience, shouldn't we share it with HR and finance and even our end customers?
Bruno: Absolutely, absolutely. And that's exactly what we have been doing. We're happy to announce that in 25.4, the release that came out last month, we've been expanding our enterprise service management system beyond IT and human resources into a lot of domains: facilities, finance, legal, supply chain. And also, for external services, we're bringing to the market a brand new application dealing with customer service management. You can manage both your internal employees and the external people working with you.
Bruno: Obviously, we don't do only this. In 25.4, we've been doubling down on our Aviator work. In our Aviator work, we're providing you with private LLMs that we run ourselves to make sure that your data stays very secure. Now, if you also want the strength of a commercial LLM, we can call Google Gemini as an LLM as well. And the last thing I would comment on is that in this release, we're bringing, with AI Studio, the management of AI agents, where you can see all of the AI agents. With one click, you can deactivate an AI agent. It was clear feedback we were getting from you: you want AI, but you want to make sure you're in control. So in one click, you can disable an agent, and you can configure all of these agents, including, as you can see on the screenshot, the orchestration of the tasks being done by the agent across AI tasks, automation tasks, and human-in-the-loop tasks. You can configure all of this in a codeless manner by doing drag and drop.
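To picture what such an orchestration might contain, here is an illustrative, declarative rendering of a flow that mixes AI, automation, and human-in-the-loop tasks. AI Studio itself is configured by drag and drop; the step types and field names below are invented for this sketch and are not AI Studio's actual schema.

```python
# Illustrative only: a declarative view of an agent orchestration mixing
# AI tasks, automation tasks, and a human-in-the-loop review step.
knowledge_review_flow = {
    "agent": "knowledge-management",
    "enabled": True,  # the one-click activate/deactivate toggle
    "steps": [
        {"type": "ai",         "task": "identify_knowledge_candidates"},
        {"type": "ai",         "task": "draft_article"},
        {"type": "human",      "task": "review_draft", "assignees": "service_owners"},
        {"type": "automation", "task": "publish_article"},
        {"type": "automation", "task": "schedule_expiry_review", "after_days": 365},
    ],
}

def run(flow, executors):
    """Walk the steps, dispatching each to an AI, automation, or human executor."""
    if not flow["enabled"]:
        return
    for step in flow["steps"]:
        executors[step["type"]](step)
```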
Travis: So this is the first version of AI Studio built right into service management?
Bruno: Exactly, exactly. What we've been sharing this morning, you can already see; go to the demo booth and we can show that to you. If you want to learn more about what we're bringing in our asset and service management products in the latest release, and our roadmap, you can go to a session on Wednesday, tomorrow, at 2:20.
Travis: All right. So that's what we just delivered. What's coming up next?
Bruno: So what's coming up: we are going to continue to expand to all domains. At OpenText, you will hear tomorrow from our CIO and Chief Digital Officer, Shannon Bell. She's going to talk about how she has been rolling out all of observability and service management in the company, in a program we call OpenText trusts OpenText. And we have been rolling it out not only to support employees with IT; we've been rolling it out across an incredible number of functions. I don't know if we've covered all the functions yet, but it's very impressive. So as we see a need to cover more corporate functions, we're going to add them to our ESM solution.
Travis: So Shannon Bell actually won an award for that OpenText trusts OpenText initiative.
Bruno: Absolutely. Definitely worth checking that out tomorrow morning.
Travis: All right. Well, this is a wrap up of our session. So we want to make sure that you understand that if you're interested in learning more about what we're doing with AI and Aviator, there's a perfect opportunity while you're here to go see that at our playground. So go to the Aviator playgrounds. We have some experts there who will walk you through, and we would love your feedback. This is information flowing both ways. And we really value ways that we can help to improve it to satisfy the requirements that you have in your organization.
Travis: Now, let's go back to the beginning. And the point that we made about complexity, is that, no matter how good any of this sounds, we still go back on Monday to our organizations that are full of this complexity built up over time, lots of layers. So it's going to create limits on our ability to innovate and to deliver against this promise of adopting AI. So Bruno, why don't you take us home here and remind everybody about what's different about our approach?
Bruno: Well, it's all about removing these limits, resolving these complexities that we have. So what do we think you can take away and do differently on Monday? The first thing is: really look at your data foundation. Look at your CMDB, look at how you bring all of this data together in the data lake, because the quality of the data is paramount. The second step: every single customer we talk to wants to do better in knowledge management, but they have been neglecting knowledge management. It's very important to focus on knowledge. That's the food for AI. So we need to make sure that the Aviator AI, which is very hungry, is going to have food.
Bruno: And the last one is that we want you to get the benefit of AI. It's incredible. It's embedded in our products. So what we are announcing this week is that we are going to offer all of you our Aviators as part of your observability and service management subscription. So next week, contact us. You can have access to it for free in production.
Travis: All right. So just to make sure we heard that clearly: we are giving away a limited set of Aviator capabilities so that you can play around with it in your own environment and see if it works for you.
Bruno: Exactly. Functionally, there's a very wide spectrum of functionality you can access, up to a threshold on the number of queries. And after that, there is a charge for those queries.
Travis: All right. Well, I guess what we need to say is that if we're going to build a ticketless enterprise, the best ticket is the one that never actually gets created.
Bruno: Absolutely, Travis.
Travis: All right. Well, thanks, everyone, for joining us for this session. And we hope that you take advantage of all the great sessions that are coming up next.
Audience: [APPLAUSE]
Related Content:
- Stop treating ESM like a tool choice: It’s a business strategy
- Why OpenText Universal Discovery and CMDB Are Essential for Your Backup and Disaster Recovery Strategy
- Unlocking IT clarity: OpenText recognized on the Constellation ShortList™ for Observability
- From IT and HR to customers: One Enterprise Service Management platform to rule them all
- Is your CMDB ready for the demands of today and the future?
- AI Elevates ITSM Automation—If You Can Trust It
