
Avoiding AI Hallucinations

Blog: Decision Management Community

Large language models (LLMs) trained on stale, incomplete information are prone to “hallucinations”: incorrect results that range from slightly off-base to totally incoherent. Hallucinations include wrong answers to questions and false information about people and events. The article “Why knowledge management is foundational to AI success” discusses how providing the right context to AI can improve accuracy and reduce hallucinations. Link
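
To make “providing the right context” concrete, here is a minimal sketch of retrieval-augmented prompting in Python: verified knowledge-base passages are retrieved and placed in the prompt, and the model is instructed to answer only from them. The knowledge_base contents, the keyword-overlap scoring, and the call_llm placeholder are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch: ground an LLM answer in a curated, verified knowledge base.
from datetime import date

knowledge_base = [
    {
        "text": "Invoices over $10,000 require two approvals.",
        "verified_by": "finance-ops",
        "last_reviewed": date(2024, 11, 1),
    },
    {
        "text": "Purchase orders are auto-approved below $500.",
        "verified_by": "finance-ops",
        "last_reviewed": date(2024, 10, 15),
    },
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Crude keyword-overlap scoring; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda entry: len(q_words & set(entry["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Supply only verified passages and tell the model to stay within them."""
    context = "\n".join(f"- {e['text']}" for e in retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# call_llm(build_prompt("When do invoices need two approvals?"))
# call_llm is a placeholder for whatever model API your organization uses.
```

Instructing the model to admit when the supplied context does not contain the answer is the key design choice: it trades an invented answer for an honest “I don’t know.”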

The classic computing rule of “garbage in, garbage out” applies to generative AI, too. Your AI model depends on the training data you provide; if that data is outdated, poorly structured, or full of holes, the AI will start inventing answers that mislead users and create headaches, even chaos, for your organization.

Avoiding hallucinations requires a body of knowledge that is:

  • Accurate and trustworthy, with information quality verified by knowledgeable users
  • Up-to-date and easy to refresh as new data/edge cases emerge
  • Contextual, meaning it captures the context in which solutions are sought and offered
  • Continuously improving and self-sustaining (see the sketch below for one way to capture these properties as metadata)
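
One way to make these four properties operational is to attach them as metadata to every knowledge-base entry. The sketch below is a minimal illustration; the field names and the 180-day freshness threshold are assumptions, not a prescribed schema.

```python
# Minimal sketch: metadata that encodes accuracy, freshness, context, and improvement.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class KnowledgeEntry:
    text: str                      # the answer or rule itself
    verified_by: str               # accurate and trustworthy: who vetted it
    last_reviewed: date            # up-to-date: when it was last checked
    context_tags: list[str]        # contextual: where and when the answer applies
    revision_notes: list[str] = field(default_factory=list)  # continuous improvement

    def is_stale(self, max_age_days: int = 180) -> bool:
        """Flag entries that need a refresh before the AI keeps citing them."""
        return date.today() - self.last_reviewed > timedelta(days=max_age_days)

entry = KnowledgeEntry(
    text="Invoices over $10,000 require two approvals.",
    verified_by="finance-ops",
    last_reviewed=date(2024, 11, 1),
    context_tags=["accounts-payable", "approval-policy"],
)
print(entry.is_stale())
```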

A knowledge management (KM) approach that enables discussion and collaboration improves the quality of your knowledge base: it lets you work with colleagues to vet the AI’s responses and refine prompt structure so that answers get better over time. This interaction acts as a form of reinforcement learning, with humans applying their judgment to the quality and accuracy of the AI-generated output and helping the AI (and their fellow humans) improve.
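
That feedback loop can be as simple as recording each reviewer’s verdict and promoting vetted corrections back into the knowledge base. The sketch below illustrates the idea; the Review structure and record_review helper are hypothetical, not part of any particular KM tool.

```python
# Minimal sketch: reviewers rate AI answers; accepted corrections feed the knowledge base.
from dataclasses import dataclass
from datetime import date

@dataclass
class Review:
    question: str
    ai_answer: str
    reviewer: str
    accurate: bool
    correction: str | None = None  # filled in when the answer was wrong

reviews: list[Review] = []
knowledge_base: list[dict] = []

def record_review(review: Review) -> None:
    """Log the human judgment; promote vetted corrections into the knowledge base."""
    reviews.append(review)
    if not review.accurate and review.correction:
        knowledge_base.append(
            {
                "text": review.correction,
                "verified_by": review.reviewer,
                "last_reviewed": date.today(),
            }
        )

record_review(
    Review(
        question="When do invoices need two approvals?",
        ai_answer="All invoices require two approvals.",
        reviewer="finance-ops",
        accurate=False,
        correction="Only invoices over $10,000 require two approvals.",
    )
)
```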