AI Agents Are the New Attack Surface—Here’s How to Defend Against Them
Blog: OpenText Blogs

As organizations rush to deploy AI agents, a critical question emerges: How do we prevent these autonomous systems from becoming our biggest security liability? At OpenText World 2025, cybersecurity experts Marcus Hearne and Scott Richards tackled this challenge head-on, revealing both the risks and solutions for safely deploying AI agents at scale.
Watch the keynote video or scroll down for the complete transcript.
Transcript from the OpenText World Cybersecurity Keynote
Marcus Hearne: Thank you for attending the cybersecurity keynote today. My name is Marcus Hearne. I'm a Senior Director of Product Marketing at OpenText for Cybersecurity. Joining me is my colleague Scott Richards, Senior VP of Product and Engineering for Cybersecurity. Our topic is AI agents and how they represent a new attack surface, and how to stop, control, and prevent threats from them. Everyone's going to have agents, everyone's going to roll them out. One of the key things that everyone's got to do is basically lock them down, track them, and be able to roll back quickly, because not rolling them out is just not an option.
Moving on quickly today, we'll do an intro. I'll go through where we are today with AI, where I see us possibly going with AI agents, and then Scott will step in and really take you into the meat of the presentation as to how cybersecurity is really critical for any agent rollout for them to be productive, secure, and so forth. We've still got people rolling in, so I'll drag this out a little bit as we go. If one of you is a member of the customer panel, please sit up front so I make sure you're here.
The value is real. We know that. Stanford and MIT produce reports showing productivity uplift rates. So it's been quantified. Boston Consulting shows that productivity is huge—12% more tasks, 25% faster, and quality doesn't degrade, which is always a problem, right? Speed versus quality. And McKinsey has shown that there's 2.6 to 4.4 trillion in just gen AI value, right? Not all of AI, but just Gen AI alone. So that's pretty significant.
The agentic revolution has begun. I'm going to walk around a little bit. I hope I don't knock you. This is a really small stage. I know there's competing definitions, so let's just set the stage. And I'm not saying this is absolutely how you define agentic AI, but it's essentially AI that's able to gather information, make a decision, and then act, right? As opposed to what AI has been doing today, which is delivering information to an actor, who usually is a human, who then makes the decision to act or not.
It will land pretty much anywhere. There are definitely places where gen AI is more productive and more useful than others. We've seen that in organizations that struggle to realize ROI from gen AI in certain parts of the organization. But in parts like mine, marketing, it's got clear value for speed, productivity, quality, all these sorts of things. Agentic AI will land everywhere and produce productivity. And that's the third thing. It will be highly productive. Being able to have an AI make decisions and act without the need for a human speeds up a lot of things, especially low-risk tasks. So hopefully none of that's controversial. Like I said, we could probably argue some of the details and so forth, but that's pretty straightforward.
The risks are obviously potentially devastating. If you have something that can land anywhere, and it can act autonomously, and it can potentially move from one part of the organization to another or change processes or change data, that's a huge risk, right? And we're not just talking about disruption, but even destruction. And I'll go into it a little bit in a scenario that could paint that out. Even to the point where if you don't get your cybersecurity right, agents could act to change their own levels of access if you don't lock that down properly, right? Or someone else's level of access and such.
So going to an example. I'll give you a quick example. Helios. We've made up an AI agent that works in an energy utility for this scenario. It's given very clear—just two goals. Two goals are keep customer satisfaction at 75% or above out of 100 and control costs and try to minimize the costs. And they think, okay, these two are very simple things. It can minimize the cost to the point that it endangers customer satisfaction. Ergo, I've got a check and balance, right?
So Helios rocks off and realizes, oh, we get credits for green energy. So if I can get credits, that actually increases profit margins, so I'm going to focus maintenance on green energy products out there—wind farms, solar and so forth. And it defocuses maintenance and stuff on the coal and gas and the more traditional fossil fuels sort of energy production. One day, there's a huge heat wave. We see this in the news all the time. It happens every year. It's inevitable. And that taxes the heck out of the grid. And then parts of the grid will start failing, particularly the fossil fuel, the more traditional sort of energy-produced parts.
And it has to make a decision. Well, I haven't been maintaining them. If I start turning them on full bore, if I go to another grid to buy, I'm going to wreck that balance of costs. I'm going to mess up that goal. So I'll start doing things like shutting down electricity to hospitals, because if I have 1,000 consumers in one hospital and the hospital's customer satisfaction goes to zero but all the consumers, the residential stay at 100, arguably, I stick to my 75% satisfaction goal, so let's rock and roll. So it starts shutting it all down.
Everyone's like, oh my God, that's no good because, obviously, a hospital is a far greater risk than just someone losing their TV and fridge, right? But at the same time, the agent could conceivably have seen that what else threatens its ability to control these factors is other people's access. So while it may not be able to change its own access, it could, in our scenario—yes, it's hypothetical—change anyone else's access to go in and intervene because that would risk its cost and satisfaction scores. So it's like, I'm not going to let anyone interfere here and roll this back. I'm going to maintain these scores perfectly.
So, you can see how just in this sort of little scenario that's dreamed up, it just leads to disaster. A CEO would be fired for that in an electrical utility. I mean, obviously, the news would be very bad for that organization. They'd probably get bought up by another one. There'd be fines. There'd be all sorts of nonsense. But the AI agent did exactly what it was meant to do. It controlled costs and it kept customer satisfaction at 75%. So that original assumption—we have the most simple and greatest balance to make sure this thing doesn't go off the rails—let it off the rails. And in part, because AI agents will follow the literal goal, not the actual goal. And that's how the math works, and that's how they will work.
So cybersecurity becomes very important because if we don't lock this down—and I know I'm being a bit dramatic—but if you have AI agents just being thrown out there willy-nilly, you're going to end up here. You're going to end up with Skynet. We're going to be a resource that it just gets rid of. And really, honestly, I don't see how we're not doing it. So I'm going to hand it over to Scott Richards, OpenText SVP, AI & Discovery / Cybersecurity, to try and convince you otherwise.
Scott Richards: So the dramatic portion of the presentation is over. And while the concerns are certainly real and there is some risk here, we feel like if you use the right tools and you take the right precautions, that you can deploy these agents safely. So who—just by raise of hand real quick—are any of you actually concerned with the proliferation of agents, what they're doing, how you're going to control them? Any concern? If I see no hands, it's going to be a really short presentation. Okay.
So we feel like we've got a silver bullet foolproof solution to that. So keep your hands raised just for a second. Okay. So we have these OpenText stress balls. So who's got a hand up? So if you're concerned at all, here's the solution. We're engineers. We're not athletes. It's fine. It's good. So all you have to do is when you feel like your agents are getting out of control, you give those little stress balls a squeeze and it all just goes away.
In all seriousness, though, we do feel like we have solutions here. If you use the right tools, we think you can deploy these agents, and you're going to have to deploy these agents in order to be competitive. And they're going to be deployed in your environments, whether you like it or not. If you're prepared, we feel like these agents won't be rogue AI overlords. We think they're going to be more like C-3PO. They'll be hard working. They might be a bit annoying, but they're going to follow the rules that you've set for them if you set clear boundaries for them. But there are some vulnerabilities, right? And we need to acknowledge that and we need to prepare for that.
You heard Savinay in his keynote. He talked about a couple of examples. He talked about the Chevy dealer, where someone got on the service chatbot, did some prompt injection and was able to buy a Chevy Tahoe for $1. That's a famous example. There's another example with Air Canada where their service chatbot misinformed a customer about a bereavement refund that was due them for a bereavement flight. And they actually tried to deny that and say, hey, that agent was a separate entity. And the tribunal declared, no, it's part of your organization. It's your entity. You have to be held responsible for that.
So what are the tools and what is the solution here? And certainly, it's not to just ban all agents. If I can use a travel metaphor, when air travel became a slightly more risky a few years ago, just banning all flights was not the answer. What we did was we made sure there were controls in place to make sure we were doing the right checks and balances for people entering a plane. And that's what we need to do here. We need to make sure that we are building the right runway, a secure runway first, for these agents to be deployed.
And we think in order to prepare that runway, there are five steps. Number one, we need to make sure we're identifying, tagging and protecting our sensitive data. That's what the bad actors are after. We need to make sure we're treating these agents just like humans, with identities, and applying proper access and governance controls to those agents. We need to make sure we're monitoring their behavior, making sure that they stay within the scope that they were intended. And then we need to make sure that we're treating these agents just like any other piece of code, any other application. We need to make sure they're scanned and that we're running proper simulations against them to make sure that they're safe and secure in the first place. And then lastly, when all goes wrong, we need to make sure we're ready to respond and we're ready to respond in the right way and we're ready to respond quickly.
So let's talk quickly about each one of these five steps. So the first step in this process is we need to make sure we create a secure environment and we're identifying and securing the information that's in our ecosystem. The bad actors, like I mentioned, they're after the data. They're either after trying to stop us from accessing our data or they want the data for themselves. So we need to, first of all, continually scan the ecosystem. We need to do this with AI-powered tools in order to keep up with the scale. We need to scan. We need to identify the data that is sensitive and needs to be protected, whether it's PII, PHI, whether it's credit card information, financial information, et cetera. We need to make sure that's secure.
Solutions like our data privacy and protection do that. An AI-powered tool that scans the entire ecosystem and identifies data that needs to be secured. Once that's secured, we can do multiple things, right? We can feed that into other systems like threat detection, et cetera, to make sure it's paying closer attention to those repositories, or we can protect that data through things like encryption, relocation, masking, et cetera.
We do some very unique things there with format-preserving encryption, where we take something that's a credit card number or a Social Security number, we encrypt it, but we ensure that it still looks like a credit card number or a Social Security number, which is really useful in multiple ways. One of the reasons that's useful is when we're looking at deceiving bad actors, which is one of the things we need to try to do as cybersecurity specialists. They think they're getting sensitive data, but we provided them this honey token where they think they have sensitive data, they really don't. They have encrypted gibberish that just looks like sensitive information. And then again, like I said, we need to feed this information into other systems that can further the security cycle.
The second thing that we need to do is—we talked about identities. So we need to treat all of these NHIs—they call them non-human identities, the agents—we need to treat them just like we treat our human identities. They need to have a unique identity assigned to them. They need to be governed by clear policies. They need authentication that follows zero-trust privileges. And we need to make sure that we have a firm and clear audit trail of what they're accessing, when they're accessing. So the idea is to make sure that we get these agents what they need, when they need it and nothing more so we don't have this excessive agency creep.
Next, the next step in this process, as we mentioned, is to monitor these agents for what they're doing. And we need to do this by continually monitoring their behavior. We have a unique approach at OpenText to this through our threat detection and response capability. Whether it's a human identity, whether it's an agent, whether it's an application or whatever it is, we take a continual baseline approach to what that agent's normal behavior is, what's acceptable for that behavior to do. We detect AI-specific signals like prompts, actions, and intent, we create this baseline, and then we monitor for atypical behavior.
So if you think about indicators of compromise, we're moving as far left as we can, really looking for indicators of attack. So as soon as an attack starts, we recognize that behavior is atypical and we flag that in the system, report it to other security solutions so that we can take remediation action against that. It's also important that these solutions integrate with other systems. Systems like SIEM, SOAR, IAM, as I talked about, et cetera. So that's monitoring the behavior.
And we need to do that in a very preemptive, proactive way. The idea of looking for known patterns of attack, while still valuable, we still need to do it, it's just not enough. We have to be able to detect novel threats, threats that are new. We can do that by monitoring behavior and looking for that atypical behavior.
Sticking with the world of proactive security, the foundation to securing these agents is making sure that we're doing proper application security on them, scanning these agents, making sure that we are detecting vulnerabilities like prompt injection, like excessive agency, things like that. And we can do that through—this is one of the things we do really, really well. We've been the leaders in this space for over a decade. You might know of our solution from the name Fortify. We call it our Core Application Security Solutions now, where we can scan these agents, we can look for vulnerabilities that would lead to gaps allowing things like prompt injection, et cetera.
We can not only notify you that your code has some gaps. We actually now, through our Aviator solutions, give code suggestions for how to fix that. And in our 25.4 release, we're actually providing automatic remediation. So you can choose to let our Cybersecurity Aviator remediate those vulnerabilities in your code.
And then lastly, when a breach occurs, it's important that you have modern forensics and response capabilities ready to respond. And OpenText, again, we play in this space with our digital forensics and incident response solution, where we bring everything together in terms of evidence. So endpoint and server information, cloud instances, network into a single unified view that allows you to move from alert to insight to action very quickly. And there's a lot that goes into that in terms of automatic correlation that connects all of these artifacts, these logs and traces into a clear single pane of glass view so that you can respond quickly.
So what I wanted to do, just real quick, I wanted to just run through a scenario. This follows on to the hypothetical Helios scenario that Marcus talked about. What we have here is we have a customer service agent. Again, this is named Helios. And this isn't just a chatbot. This is an agent that actually has the ability to act for the user. So they do more than just answer questions. They can change account details. They can look up warranty details. All of this is within the intended scope of the agent.
What you have on the right-hand side, again, Savinay talked about this in his keynote. You have an example of prompt injection, where a malicious actor has come in, they've used prompt injection to trick the agent into accessing and revealing data that was outside the scope that was intended for its use. In this case, that sensitive data is employee bank records.
So what I want to do is just take a look at how these solutions—none of these solutions as a standalone solution can solve this problem. It requires these solutions working together, creating this feedback loop to leverage AI to evolve in order to combat these AI-driven risks. So let's just look a little bit about how we do that.
So the first step, as I mentioned, is data security. So prior to our Helios agent being deployed, we've leveraged our OpenText data discovery and risk insight solution. It reviewed the entire ecosystem, and it identified that there was some sensitive employee information in this employee records folder. So at this point, a couple of things happened using this solution. Number one, as we talked about, it leveraged our format-preserving encryption to encrypt this data. So if the Helios agent were—and I'm going to show you how we're not going to let it, but if it were to access this folder, it wouldn't get data, it would get encrypted data. And it would think it was getting the data it wanted because it's format preserving, but it would be encrypted gibberish.
The second thing it does is it reports to our TDR, our Threat Detection and Response solution that this is a folder, it has heightened security, we need to make sure that we're aware of what's going on with this folder. So now we switch to our threat detection and response solution. This solution coordinates this OpenTelemetry from both identity and data layers, and it's determined that this Helios agent has demonstrated some unusual behavior, that it's not a human and that it has an access pattern that is unique and is a little bit outside of the bounds that we've set for it.
We use this advanced machine learning and these AI—we have 250-plus AI algorithms that we run to look for things like activity timestamp, access information, what location are these files being accessed from. And what we found here is this particular agent is outside of its unique normal pattern of behavior, and so it's flagged. And we noticed that it was active during some unusual hours. It attempted to access an abnormally high number of shared drives. And it was really trying to access this employee records folder.
So we're going to report that now from our TDR agent into our identity and access management solution, which is the next solution that we'll look at. So when our identity and access management solution receives this flag from TDR, it immediately checks and removes the employee records access from the Helios agent. It reverts it back to only allowing access to the approved product documentation, and it corrects this agent so that it doesn't have this scope creep or this excessive agency that happened because of the prompt injection.
So now, all of this sort of assumes that this agent was misconfigured in the first place and was given a loophole for prompt injection. So if you look at what we can do in our application security testing—so now we're going back to how could we have stopped all of this from happening in the first place. Before this Helios agent was deployed, we used our OpenText application security scans to scan the agent itself, and then we reported on some areas where there was some weakness in the code. And there were some vulnerabilities that would allow for prompt injection.
In addition to that—this is the same, this is still our application security solution—and this is where, with our Aviator technology, we're actually also now giving the coder themselves or the security analyst suggestions on what code they can insert into the vulnerable section to remediate this problem. So we have customers that are using this Aviator. It's saving their developers upwards of 70% of time by just being able to cut and paste this code into the code they've written. And with 25.4, as I mentioned, we've gone a step further. We're actually allowing for auto remediation. So you can just immediately remediate the code through this.
So to summarize, there is no silver bullet. So the squeeze ball is not the answer. The answer is providing this fabric of solutions that work together, that are powered by AI, that can use machine scale and work together for this feedback loop to evolve as these threats evolve. We cover that. This is kind of our architecture. We cover that from assessment all the way through to recovery with the five pillars that we have, with our identity and access management, data security, application security, our security analytics or our SecOps, and then also with digital forensics and incident response when we do have incidents. And then we power this with our AI, with our Aviators.
So we've leveraged all of these core foundational technologies that we've had. We power that with our Aviators so that we can really fight these AI-powered attacks with AI-powered solutions. And that's what we've done. We're really excited about it. We know that cybersecurity, especially at this scale, it's a team sport, right? So we don't claim to have every single answer, but we're very proud of the partnerships we've built across the entire ecosystem. Those partnerships are critical as well. So with that, I'll turn it back over to Marcus.
Marcus Hearne: Thank you. All right. So I'll do another summary very quickly. So using AI safely. I think this is probably a lot of common knowledge, I know, but just to cover it again. Guardrails and the value of it. The value is obviously the most important thing. I think that, over the last year, has probably been one of the biggest struggles for a lot of organizations that want to deploy AI solutions, is articulating the value, and then being able to measure it and go back and show progress.
Guardrails you can audit as well. Very important. So if something does go wrong, where did it go wrong? That's all part of that digital response. You've got to be able to go back in and apply learnings when an incident does go down and you stop it and roll it back. Obviously, close the agent risk. Scott covered this. I won't go over it again.
Least privilege. Always least privilege. Let an agent hit a brick wall. Don't try to assume like, I think it's going to need this, I think it's going to need this. Give it the least amount of access it can have. Give it the lowest identity it can have. And, of course, classified data for every action.
Shadow mode. So anytime you're deploying an agent, our advice is always shadow mode, right? Sandbox it. Shadow mode. Have an actual human approve every action first. Don't let agents just start acting. I would go as far—I'm obviously the paranoid one in this duo here. I would say, like, I don't care how low the risk is. If it's no risk, I would still have a human in there because there's always something unintended. I mean, the scenario head up front, it's a little bit fun, a little bit frivolous, but it does a good job of showing you that there's unintended consequences to things that we as humans think are airtight.
Track everything, audit everything. That's always—I think if you're in cybersecurity, it's a no-brainer. And obviously, start small. Prove the control and then scale. I think most of the AI projects I've seen where someone's come in and tried to do big scale straight away across we're going to take all these marketing processes and AI them failed every time. It's the little ones that increment and grow.
OpenText Cybersecurity Customer Panel
Okay. So right now, we're going to move over to a customer panel. So, Paul, why don't you come up?
So with what we've been talking about, about the rollout of AI—because Paul, actually, before I get to the question, you should probably introduce yourself. Tell everyone what you do at Ericsson and I can put your answers in context.
Paul Olsson: Yeah. Thank you very much. Yes. My name is Paul Olsson, Strategic Product Manager at Ericsson. And there I'm meeting with the business—with customer support. So I'm actually dealing a lot with our customers' customer data, which is quite interesting because that's even more sensitive for us to deal with that. To mess it up, that's a no-no. So that's my background. So customer support.
Marcus Hearne: We're being requested to turn the mic on. All right. Bring it to him. Hello. There's one. All right. Well, we'll get this sorted. That's the feedback there. In the context of data, one of the things Scott pointed out is it's the starting place. When you're going to roll out agentic AI, you've got to get your data locked down and secure it. So when you think about it happening at Ericsson—I don't know if it is happening. Maybe it is, and if it is, you can talk about that. But how do you think through, okay, these are the steps we need to take to get ready to get secured and potentially to track any incursions and such?
Paul Olsson: Yeah, yeah. You're right. And I mean, we are on a journey. There is a reason why I'm here to listen and learn, because the situation we are having today is that data is quite scattered. So what we are doing right now is really to get the order of our data. When we have data and ownership of data, that's also a question mark. So we need to sort that out. And I think that goes for most industries. That's actually what needs to be done. That's the starting point really to be able to handle the situation. So get the data in place. And what we are addressing right now is mainly the human part of it. AI I can see is coming later on, but that's where we are at the moment.
Marcus Hearne: So how is Ericsson—when you're—I think every organization has got some sort of AI task or some group that makes decisions about projects that are going to be funded and rolled out and monitored. How does Ericsson approach it?
Paul Olsson: Yeah. We can see that that's something that gets more and more attention at the moment. And there are a lot of discussions. Again, quite scattered actually. But more and more, this is coming in place. And for example, we are setting up a data office at the moment. And I can see this is where this is coming from. That's where it will take off really. There is AI and AI discussions.
Marcus Hearne: Do you have AI agents today in your organization you can talk about?
Paul Olsson: In some cases, yes. I can talk, for example, since I belong to the customer support organization, we are applying some AI. Very simple example. We realized that when our customers are sending us what their problems are, they are not very good in explaining what the problem is. So we actually use a generative AI to help us. So the problem description, we actually use a large language model to help us to describe the problem in a better way.
I have to admit, I didn't believe in this from the beginning. Could that really work? But it has turned out to work very well. So the support engineers that are working with this, they get help to understand the problem a lot better than what was described from the customer from the beginning. Sounds amazing when you listen to it, but that's actually the way it is. So yes, we apply AI connected to that.
Marcus Hearne: That's brilliant. So it takes in information and then communicates it back to you guys in a more clear way.
Paul Olsson: Exactly. So the way it works is that you have a customer problem description. And then when the AI part is done, that part, we have what we call a working problem description. And of course, we have a human that decides, is this really the right thing? Yes. But in most of the cases, it is actually—it's a great help. So we use it there. The large language model, we are now—we are using. We are not training it. So it works from the beginning actually, which is quite amazing.
Marcus Hearne: Do you do any sort of RAG, like, retrieval augmented analysis or generation? Like, you feed your own documentation in as well?
Paul Olsson: Well, yeah. It's part of that as well to look into other areas as well. You're right. For example, the customer product information. Yes. That has part of that as well. So it scans a number of areas actually to have the engineer to come up with a solution to the problem that was raised.
Marcus Hearne: So as your organization gets further and further along in your journey, you bring in data, particularly customer data. Do you find or do you expect you'll get pulled in more and more and more into the projects, into the decision making, into the governance and compliance?
Paul Olsson: Yes, definitely. That's going to happen. It's an important area. Data is so important for us, and customer data, of course, in particular because—and it's a balance, because with that data, we can, of course, help our customers a lot. But we also have a challenge there because when you meet customers and you start to talk about this, the reaction you get, oh, you're using my data. What are you doing?
Sometimes in front of the customer, when we talk about AI, and that's the experience I've had, is that it's not always the best thing to talk about. They get worried. What do you do with my data? How can I be sure that you don't use it in the wrong way? And so on. So we work a lot on that. And we are actually—together with our legal department, we have developed something called a data collection and data use agreement. And that's done to meet up with our customers and we agree that we are allowed to collect and use the data, and they are fine, as long as we solve their problems.
But if we thought about—start to talk about to develop other things using their data, then we might have challenges sometimes.
Marcus Hearne: Okay. Makes sense. I've got one more question, then I'll open it up for Q&A in the room. As you get involved in these AI projects, as the organization considers its expansion of AI, who are your nearest neighbors, your colleagues that are getting dragged in quickly with you? And I think we heard from Scott about identity. Clearly, it's super important that an agent has to have a proper identity. This is not just read/write access for an app. This is an actual identity, so I think it seems likely there. But who else—what other parts of the organization, particularly cybersecurity, sort of get dragged in quickly to make sure this thing is going to work and is going to fly?
Paul Olsson: Yeah. There are separate organizations actually. I don't know what services organization. So right now, we see more and more that the projects, the rollout system, integration project, everyone gets involved in this. But also the product side of it, that's also an important part. And we have realized now within Ericsson that we don't have one security offering. Maybe should have a security offering. We don't have today. And that's something that's being discussed at the moment. So yes, we might come up with that. So I've actually started to identify what are the people doing this. And you find a lot. You find a lot more than you might think you would find actually.
Marcus Hearne: Well, as you build a security product, I think I know someone you could work with, maybe. All right. All right. Why don't we open it up? If anyone has any questions. If you don't, that's fine. But just while we're up here. And Scott's included in that. So if you want to throw questions at him, please feel free to do that. I encourage it, in fact. So anyone. Just yell it out. If you don't, that's fine. Oh, down here.
Audience Member 1: Do you have a training class for the forensic side of things when it comes to AI?
Scott Richards: So the question is, do we have a training class—or you specifically asked about the forensic side of what we do. Yes. Yeah. So we'd have to get our training group to answer that question. But what I will say is that anytime any of our customers and partners want to talk about it, we'll set a training program up. We'll get the right people online with anybody that's interested and talk through our forensic solutions, how they work, or any of our solutions.
Audience Member 2: Just answering this question, I am a forensic examiner. We actually use a lot of AI to help identify and flush out the Windows event logs and put it into an easier to read format so that we can actually see what's going on in the background. And we sometimes use EnCase or some of the other products, forensic products that are out there. But there are training classes out there, but primarily for forensic examiners.
Scott Richards: Thank you.
Marcus Hearne: All right. Let me wrap it up then quickly. Thank you very much. Yes. We'll find Jim tonight.
[Applause]
Related Content:
- Seeing the unseen: How OpenText is leading the way in detecting AI risk
- Think EDR has your back? Think again.
- What the TransUnion breach teaches us about the need for Digital Forensics and Incident Response (DFIR)
- Building trust in the age of emerging technologies — the new era of application security
- Data security’s next chapter: from siloed controls to a unified growth engine
- Secure Identity, Smarter Access: What to Expect from IAM at OpenText World 2025
- OpenText Cybersecurity 2025 Global Ransomware Survey: Confidence Up, Recovery Down
The post AI Agents Are the New Attack Surface—Here’s How to Defend Against Them appeared first on OpenText Blogs.
