Is AI BS’ing you? Lessons in AI Hallucinations!
Blog: Agile Adoption Roadmap
User beware! Did you know that your AI query is subject to hallucinations? What is an AI hallucination? It is when AI inadvertently generates false or misleading information that seems plausible but is not rooted in reality. The AI is simply trying to give you an answer. These errors in AI outputs arise from flawed reasoning or inaccurate training data, typically not from malicious intent.
For example, a language model like ChatGPT might generate an article with fake references or make up scientific facts because it is simply predicting what should come next based on patterns in its training data. In fact, when I asked, “What was the duck wearing when it won the Boston Marathon?”, it told me that “the duck was wearing a quacking pair of sneakers and a feather-light singlet when it flapped its way to victory!”
This should not be confused with people deliberately using AI tools to create misinformation, typically to manipulate public opinion or cause harm. In those cases, AI is used to generate highly realistic but fake content, such as fabricated news articles, doctored images, or videos. For example, AI may be used to create deepfake videos that manipulate someone's face and voice to make them appear to say something they never did.
Turning back to actual AI hallucinations, what serious dangers do they inadvertently pose? Generally, hallucinated misinformation, once created and shared, can spread quickly, particularly in news, health, legal, or political contexts. Users who trust AI outputs may unknowingly share false information, amplifying its reach. What are the more specific dangers?
- Generating legal and medical judgments or diagnoses. AI-generated hallucinations in legal documents, medical advice, or financial reports can lead to harmful or even illegal outcomes. This can damage reputations or result in malpractice.
- Misinterpreting security and safety threats. In cybersecurity or military applications, a hallucinated misinterpretation of data in critical systems (e.g., aviation or nuclear control) could trigger wrong decisions with high-stakes consequences.
- Spreading stereotypes and reinforcing bias. Hallucinated outputs might reflect or invent stereotypes or discriminatory patterns that reinforce social biases. This can be especially harmful in generative content involving race, gender, religion, or culture.
- Damaging reputations and polluting research. Fake references or fabricated studies can pollute scientific research, especially if unnoticed in peer review or student submissions. AI hallucinations in education can mislead learners or promote academic dishonesty.
As hallucinations are shared and spread, repeated exposure to hallucinated content undermines trust in AI tools and in technology in general, ultimately leading to an erosion of trust and a hesitancy to adopt AI. The important thing is to be aware that AI tools will inadvertently generate false or misleading information. Don’t accept answers at first blush. Instead, verify the answers, verify the references, fact-check the outputs, and ask the AI to double-check its results.