
About LLM Hallucinations

Blog: For Practitioners by Practitioners!

Andriy Burkov: “I often hear that LLMs hallucinate because they weren’t trained on quality data. This is not what a hallucination is. A hallucination is a situation in which the model generates information that lies close to the fringe of its training domain. So, you cannot fix hallucinations by providing better data, because the fringe of the training set will not go anywhere; it can only change shape.”
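
A toy way to see the point (this is only an illustrative sketch, not Burkov’s experiment, and a polynomial fit stands in for an LLM): fit a model on clean, noise-free data over a bounded input region, then query it outside that region. In-domain answers are accurate, but at and beyond the fringe the model still produces confident-looking outputs that are artifacts of the fit, not the signal. Adding more clean data inside the region does not remove the fringe; it only reshapes it.

```python
import numpy as np
from numpy.polynomial import Polynomial

# "Better data": many noise-free samples of the true function on [-1, 1].
x_train = np.linspace(-1.0, 1.0, 200)
y_train = np.sin(3 * x_train)          # the true signal, no noise at all

# A small learned predictor (stand-in for any trained model).
model = Polynomial.fit(x_train, y_train, deg=9)

# In-domain queries: the model tracks the signal closely.
x_in = np.array([-0.5, 0.0, 0.7])
print("in-domain max error: ", np.abs(model(x_in) - np.sin(3 * x_in)).max())

# Fringe / out-of-domain queries: the model still answers, but the answers
# are fabrications of the fit rather than the signal -- the analogue of a
# hallucination. More clean data on [-1, 1] would not change this behavior.
x_out = np.array([1.5, 2.0, 3.0])
print("out-of-domain max error:", np.abs(model(x_out) - np.sin(3 * x_out)).max())
```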