Blog Posts Process Management

Securing AI Deployments: Striking the Balance

Blog: OpenText

AI robot looking at a screen with a cyber lock projected in the background

The recent IT leaders CIO MarketPulse survey by Foundry underscores that data must be ready for AI. If it’s not accurate, accessible, and secure, organizations won’t get the desired results. When it comes to the challenges organizations face in implementing AI, respondents listed issues related to data management as the most significant. Since AI relies on data to learn and improve, organizations must ensure their data is accurate, accessible, and secure. They must also build a solid data foundation, including a governance framework, to take full advantage of the benefits of AI. 

CIOs are at the forefront of steering their organizations’ journey into the realm of artificial intelligence. A significant majority, 71%, are deeply involved in formulating AI applications, while an even higher percentage, 80%, are actively researching and evaluating potential AI integrations into their technology stack. 

The OpenText™ Advantage 

For decades, enterprises have turned to OpenText’s information management solutions to organize, connect, govern, and protect all data sets. Artificial Intelligence (AI) requires these foundational data strategies that OpenText is known for. Information management makes data better. And the better the data, the better the AI. From natural language processing to robotics to machine learning, our history of applying AI puts the future within your reach.  

Every day, we see the power that excellent information management brings to our customers by connecting users to content and data and super-charging content-rich processes through intelligence and automation. The launch of OpenText™ Aviator brings the power of large language models (LLMs) and generative AI (GenAI) to information management users. At the same time, we must also protect sensitive information and deliver accurate information to the user. OpenText is widely recognized for bringing trusted and secure information governance to customer processes and increasing customers’ security posture.  

The OWASP Top 10 for Large Language Model LLM Applications identifies three areas in particular that need to be addressed during the planning and operating phases of populating and accessing Large Language Models (LLMs): 

  1. Remove vast amounts of Personally Identifiable Information (PII) in large datasets. 
  1. Implement least-privilege access controls to protect LLMs from nefarious modification. 
  1. Check for existing vulnerabilities in the LLM model or supply chain. 

OpenText™ Voltage Fusion Data Discovery & Protection 

Personally Identifiable Information (PII) refers to any data that can be used to identify an individual. Examples include names, mailing addresses, phone numbers, social insurance numbers, and credit card details. When implementing GenAI and training language models (LLMs), businesses face the challenge of accidentally including PII in the training data. If PII is part of the model’s training dataset, it may generate responses to users, potentially leading to data breaches, privacy compromises, and violations of compliance regulations. OpenText Aviator and Voltage Fusion are equipped to help large industry sectors manage their regulatory landscapes, ensuring compliance with sector-specific standards like GDPR, PCI DSS, FCC, and evolving privacy laws. 

OpenText Voltage Fusion is a cloud-first data security platform that protects sensitive data, including Personally Identifiable Information (PII). Here’s how it works: 

  1. AI-Driven Data Discovery: Voltage Fusion can detect regulated information, such as PII, across unstructured and structured data. This allows it to identify and classify sensitive data across your data estate. Contextually aware, AI-driven grammars reduce false positives and quickly identify high-value assets (e.g., contracts, intellectual property, patents, etc.) and personal or sensitive data types (e.g., PI/ PII, PCI, PHI, etc.). 
  1. Data Classification: It goes beyond simple risk scoring by connecting data discovery and classification to the potential monetary and business impact of a data breach or non-compliance. This helps prioritize risk reduction
  1. Data Protection Technologies: Voltage Fusion replaces manual remediation with automated, privacy-aligned data protection technologies that improve compliance and promote strong data ethics and business growth. It uses privacy-enhancing technologies, including format-preserving encryption, tokenization, hashing, and data masking. Voltage Fusion leverages Voltage SecureData’s industry-leading Format-Preserving Encryption (FPE), Secure Stateless Tokenization (SST) and Format-Preserving Hashing (FPH) technologies. 
  1. Test Data Management: Automates the privacy and protection of sensitive production and PII data, preparing it for testing, training, and LLM pipelines. 
  1. Data Access Governance: Only authorized users with appropriate roles can access data when needed. It also enables change notifications, lifecycle management, security lockdown, and security fencing
  1. Masking, Format-Preserving Encryption and Tokenization: These technologies deidentify data to render it useless to attackers while maintaining its usability and referential integrity for data processes, applications, and services. 

By implementing these measures, OpenText Voltage Fusion helps to prevent the leakage of PII data, thereby enhancing data security and privacy practices. 

OpenText™ NetIQ Identity, Credential, and Access Management (ICAM) 

To address the need for least-privilege access to the LLM application, OpenText NetIQ stands as a cornerstone in our Cybersecurity suite, championing identity-based privacy. It offers comprehensive data protection through risk-based adaptive authentication and seamless integration with OpenText Extended Content Management (xECM). A dynamic policy engine, capable of immediate response to organizational shifts during critical stages such as employee transitions, bolsters this integration. Furthermore, our Interset machine learning technology enhances Identity Governance and Administration, employing anomaly detection to elevate the governance risk score. This advanced approach enables proactive policy enforcement, effectively neutralizing threat actors by revoking access at the pivotal point of identity verification. 

Training Data Stores Protection: 

LLM Models Access Control: 

Sensitive Data Protection: 

AI System and Plugin Access: 

OpenText™ Fortify Application Security ML/AI Risk Detection 

Conducting security scans against large language models (LLMs) and machine learning models is crucial for several reasons: 

  1. Identify Vulnerabilities: Security scans can help identify potential model vulnerabilities. The complexity introduced by machine learning algorithms like language models, which leverage vast volumes of training data, can expose new security vulnerabilities. 
  1. Prevent Misuse: LLMs have significantly transformed the landscape of Natural Language Processing (NLP). However, they also introduce critical security and risk considerations1. Security scans can help prevent potential harm caused by misuse of LLMs. 
  1. Ensure Trustworthiness: As LLM usage expands and these models become more integrated into various applications and platforms, it’s crucial to address the challenges LLMs pose to ensure the trustworthiness and safety of LLM-driven systems. 
  1. Detect Threats: Code scanning will detect threats such as prompt injection, insecure output handling (which can include XSS), and insecure plugin design, which are security issues in AI. 
  1. Compliance with Regulations: Security scans can help ensure that using LLMs complies with data privacy regulations and standards. 
  1. Future-Proofing: Regular security scans can help keep up with the evolving nature of threats in the digital landscape. 

Application Security scans are essential to responsible practices when deploying LLMs and other machine learning models. 

Both Fortify Audit Assistant service and Fortify Static Code Analyzer can identify issues that come with the use of generative AI and large language models (LLMs) that are rapidly changing the solution space of the software industry and presenting new risks. The initial Fortify support covers Python projects that consume OpenAI API, Amazon Web Services (AWS), SageMaker, or LangChain. Fortify detects weaknesses resulting from the implicit trust of responses from AI/ML model APIs, plus some unique features around Cross-Site Scripting for applications developed using large language models (LLMs). 

The Fortify Audit Assistant service also uses machine learning algorithms based on hundreds of millions of anonymous audit decisions. Using Audit Assistant, These decision models can automatically be applied to OpenText Fortify results. Audit Assistant gets better and better over time. The more you audit your vulnerabilities, the more the models learn what’s appropriate for your projects. 

Conclusion 

The integration of AI into business processes presents both opportunities and challenges. Accurate, accessible, and secure data is crucial for effective AI implementation. Organizations must establish a robust data governance framework to leverage AI’s full potential. CISOs/CIOs play a pivotal role in this transformation, with a majority actively involved in developing AI applications and exploring new AI integrations. With its long-standing expertise in information management, OpenText provides solutions that enhance data quality—essential for AI success. Their latest offering, OpenText™ Aviator, incorporates large language models and generative AI, empowering users while maintaining a commitment to data protection and governance. This approach positions OpenText as a leader in secure and trusted information management, ready to meet the demands of the AI-driven future. 

Written by Gary Freeman, Manager Solution Consulting; Roland Kahl, Senior Solution Consultant; Pedro Garcia, Senior Solution Consultant; and Richard Cabana, Senior Solution Consultant

The post Securing AI Deployments: Striking the Balance appeared first on OpenText Blogs.

Leave a Comment

Get the BPI Web Feed

Using the HTML code below, you can display this Business Process Incubator page content with the current filter and sorting inside your web site for FREE.

Copy/Paste this code in your website html code:

<iframe src="https://www.businessprocessincubator.com/content/securing-ai-deployments-striking-the-balance/?feed=html" frameborder="0" scrolling="auto" width="100%" height="700">

Customizing your BPI Web Feed

You can click on the Get the BPI Web Feed link on any of our page to create the best possible feed for your site. Here are a few tips to customize your BPI Web Feed.

Customizing the Content Filter
On any page, you can add filter criteria using the MORE FILTERS interface:

Customizing the Content Filter

Customizing the Content Sorting
Clicking on the sorting options will also change the way your BPI Web Feed will be ordered on your site:

Get the BPI Web Feed

Some integration examples

BPMN.org

XPDL.org

×