
Trustworthy, secure GenAI starts with strong governance

Blog: OpenText


OpenText™ launched Aviator, a generative AI platform, at OpenText World last fall. Aviator’s innovative approach opens new horizons of information exploration, creativity, and productivity. One common theme emerges in conversations with our customers: the importance of trust. The benefits of generative AI are immense, but without trust, users won’t be willing or able to realize its power fully. Trusted AI starts with a solid foundation of purposeful information management and governance: one that keeps us organized, secures our content, and supports change while both the technology and the regulatory environment move rapidly. Generative AI will transform how knowledge workers approach their jobs, but it can only reach its full potential through the careful application of governance principles via information management. 

What can we do as information governance practitioners to help ensure successful generative AI use cases within our organizations? Here are six best practices to consider in your project planning and governance policy-making, each of which will improve the trustworthiness and usefulness of AI: 

1. Curate trusted content 

Generative AI is only as good as the information it has available. Curated content that is purposefully selected can give you quick wins. Launch AI pilot projects with explicitly approved content that is in high demand. Examples include completed contracts, RFPs, FAQs, patent libraries, SOPs, and regulatory content.  
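As a minimal sketch of this idea, the filter below admits only explicitly approved, high-demand content types into a pilot corpus. The document schema, the `approved` flag, and the type names are hypothetical illustrations, not any specific product's API.

```python
# Hypothetical document records: each dict carries a type label and an
# explicit approval flag set by a content curator.
APPROVED_TYPES = {"contract", "rfp", "faq", "patent", "sop", "regulatory"}

def pilot_corpus(documents):
    """Keep only documents that are both explicitly approved and of a
    high-demand type; everything else stays out of the pilot index."""
    return [
        d for d in documents
        if d.get("approved") and d.get("type") in APPROVED_TYPES
    ]

docs = [
    {"id": 1, "type": "contract", "approved": True},
    {"id": 2, "type": "draft_memo", "approved": True},   # wrong type
    {"id": 3, "type": "faq", "approved": False},         # not approved
    {"id": 4, "type": "sop", "approved": True},
]
print([d["id"] for d in pilot_corpus(docs)])  # [1, 4]
```

Starting from an allowlist like this, rather than indexing everything and pruning later, is what makes the quick win possible.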

2. Control content sprawl 

Content sprawl refers to stray convenience copies and abandoned edited versions of documents scattered across email, chat messages, and OneDrive. This content can be useful in the short run, but because it is noisy, it tends to lead AI responses astray. A well-managed information lifecycle naturally improves the accuracy and relevance of the responses generative AI produces. 

3. Label data 

The AI grounding process is more effective when the most useful information can be identified from a prompt. Provide labeled data, or rich and accurate metadata gathered through well-managed content services and automated processes, so that grounding is more precise and the model draws on the right source material. 
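A sketch of why labels help: metadata filters shrink the candidate set before any relevance scoring runs, so grounding pulls from authoritative documents rather than every document that shares a keyword. The schema and scoring below are simplified illustrations, not a specific retrieval engine's API.

```python
def retrieve(documents, query_terms, filters):
    """Score documents by query-term overlap, considering only those whose
    metadata labels match the filters. The labels narrow the search space
    before relevance scoring happens."""
    candidates = [
        d for d in documents
        if all(d["metadata"].get(k) == v for k, v in filters.items())
    ]
    scored = [
        (len(set(query_terms) & set(d["text"].lower().split())), d)
        for d in candidates
    ]
    return [d for score, d in sorted(scored, key=lambda p: -p[0]) if score > 0]

docs = [
    {"id": "a", "text": "vendor contract renewal terms",
     "metadata": {"dept": "legal", "status": "final"}},
    {"id": "b", "text": "contract draft notes",
     "metadata": {"dept": "legal", "status": "draft"}},
]
hits = retrieve(docs, {"contract", "renewal"}, {"status": "final"})
print([d["id"] for d in hits])  # ['a']
```

Without the `status` label, the noisy draft would compete with the final contract; with it, only trusted content is ever a grounding candidate.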

4. Institute better security controls 

Commercial large language models do not automatically understand your processes or what must be secured; they can reveal anything they have access to. Diligently securing content repositories and avoiding shadow IT are essential.  
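Because the model itself enforces no permissions, access control must be applied before content ever reaches the grounding context. A minimal sketch, assuming a hypothetical per-document access-control list:

```python
def ground_for_user(user, documents, acl):
    """Pass only documents the user is entitled to read into the model's
    grounding context. The LLM cannot enforce permissions itself, so this
    filter is the last line of defense before content enters a prompt."""
    return [d for d in documents if user in acl.get(d["id"], set())]

acl = {"doc1": {"alice", "bob"}, "doc2": {"alice"}}
docs = [
    {"id": "doc1", "text": "General HR policy"},
    {"id": "doc2", "text": "Executive compensation plan"},
]
print([d["id"] for d in ground_for_user("bob", docs, acl)])  # ['doc1']
```

Defaulting to an empty set for unlisted documents means anything without an explicit entitlement is denied, which is the safe failure mode here.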

5. Provide context 

Context is critical. Ideally, we want to infer AI grounding context from the user’s present work context. The context window is most valuable if grounding is focused on a single business transaction such as a new client, project file, HR file, or insurance claim.  
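Scoping grounding to one business transaction can be sketched as follows; the document fields and the character budget are hypothetical simplifications of what a real content service would provide.

```python
def transaction_context(documents, transaction_id, max_chars=2000):
    """Build a grounding context from the documents tied to a single
    business transaction (e.g. one insurance claim), truncated to a
    fixed character budget so it fits the model's context window."""
    parts = [d["text"] for d in documents if d["transaction"] == transaction_id]
    return "\n---\n".join(parts)[:max_chars]

docs = [
    {"transaction": "claim-001", "text": "Initial claim report"},
    {"transaction": "claim-002", "text": "Unrelated claim"},
    {"transaction": "claim-001", "text": "Adjuster notes"},
]
print(transaction_context(docs, "claim-001"))
# Initial claim report
# ---
# Adjuster notes
```

Filtering on the transaction first keeps every token in the context window relevant to the user's present work, instead of diluting it with material from unrelated matters.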

6. Incorporate AI governance  

AI governance is a rapidly evolving area of public and organizational policy, legislation, and risk mitigation. The principles of AI governance, a subcategory of information governance, include transparency and explainability, fairness and non-discrimination, privacy and data protection, accountability and oversight, and safety and robustness.  

Generative AI offers a huge leap in usability and productivity, especially across vast information repositories. By following these six suggestions, we can close the gap between users who are overwhelmed by too much information and the higher productivity expected of them. 

Additional resources

Learn about OpenText™ Content Aviator by accessing the resources below: 
