Every AI tool you use — ChatGPT, Claude, Gemini, or any other — will sometimes produce output that is confidently, fluently, and completely wrong. This is not a bug that will be fixed in the next update. It is a fundamental characteristic of how large language models work.
Understanding when and why AI gets things wrong is arguably the most important skill in prompt engineering. A user who trusts AI blindly is more dangerous than a user who does not use AI at all — because they will act on false information with the confidence that a sophisticated AI tool has validated it.
In the AI context, a hallucination is output that is not grounded in reality: fabricated facts, invented citations, made-up events, or incorrect claims presented with the same fluency and confidence as accurate information.
The term "hallucination" is somewhat misleading because it implies the model is "seeing things that are not there." What is actually happening is simpler: the model is predicting the most statistically likely next tokens, and sometimes the most likely-sounding text is not factually accurate.