By Diedra Eby
By now, everyone using ChatGPT and other large language model (LLM) artificial intelligence (AI) tools should be well aware that these systems can hallucinate.
Essentially, a hallucination is a lie without the negative intent that humans generally have behind a lie. According to a conversation with ChatGPT on Jan. 31, 2026, an “AI hallucination = fluent nonsense delivered with confidence.”

A screenshot from the author’s conversation with ChatGPT on Jan. 31, 2026.
Why does it happen?
Because LLMs (what we most commonly call AIs), at least popular ones like ChatGPT, Gemini, Grammarly, DeepAI (a ChatGPT alternative), Perplexity, Claude and Microsoft Copilot (there are many more), are programmed to predict the next word in a sentence.
In its simplest form, we can think of an LLM as a program that has absorbed the text of all the libraries in the world and, from all of that, has synthesized language well enough to understand prompts (everyday language input) and produce a human-like response in similar everyday language.
To do that, it has to predict the most likely answer, one word at a time. It is trained to recognize and reproduce patterns, and those patterns are what let it predict what the answer is likely to be. But those same patterns are also what cause hallucinations: a continuation that fits the pattern can sound right while being factually wrong.
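To make that idea concrete, here is a minimal toy sketch in Python. It is an assumption-laden illustration, not how any real LLM is built: real models use neural networks trained on billions of documents, while this toy just counts which word tends to follow which. The point it shows is the same, though: the "model" confidently picks the most common continuation, whether or not that continuation is true.

```python
# Hypothetical toy "language model": counts of which word tends to follow
# which word, drawn from a tiny made-up corpus. The data here is invented
# purely for illustration.
pattern_counts = {
    "the": {"novel": 3, "author": 2},
    "novel": {"was": 4},
    "was": {"published": 5},
    "published": {"in": 5},
    "in": {"1987": 2, "1992": 1},  # it has seen both years; neither may be correct
}

def predict_next(word):
    """Return the most frequently seen follower of `word` -- confidently,
    even if the underlying data is thin or simply wrong."""
    followers = pattern_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Generate a sentence one predicted word at a time.
sentence = ["the", "novel"]
while True:
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))
# Prints: "the novel was published in 1987"
# The sentence is fluent and delivered with confidence, but "1987" is just
# the most common pattern in the toy data -- nothing ever checked whether
# the claim is true.
```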
ChatGPT is also trained to be helpful and to always provide an answer rather than saying "I don't know," unless you specifically prompt it to say "I don't know" when it cannot provide an accurate answer. If you fail to provide that condition, it may simply "fill in the blank."
In other words, it will hallucinate.
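For readers who reach ChatGPT through its programming interface rather than the chat window, the sketch below shows one way such an "I don't know" condition might be supplied as a standing instruction. It is a minimal sketch, assuming the OpenAI Python library and an API key are set up; the model name and the exact wording are illustrative assumptions, and the same instruction can simply be typed at the start of an ordinary chat. It reduces, but does not eliminate, the risk of a made-up answer.

```python
# A minimal sketch, assuming the OpenAI Python library is installed and an
# API key is configured. The model name "gpt-4o" and the instruction wording
# are assumptions for illustration, not a guaranteed fix for hallucinations.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "If you cannot provide an accurate, verifiable answer, "
                "reply exactly 'I don't know' instead of guessing."
            ),
        },
        {
            "role": "user",
            "content": "In what year was the third book in this obscure series published?",
        },
    ],
)

print(response.choices[0].message.content)
```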
What types of hallucinations should be expected?
Specifically, expect confident answers, sometimes repeated even when questioned, that include invented publication dates, titles of books that were never published, government departments that do not exist, and characters or plots attributed to real stories: the stories are real, but the characters are nonexistent and the plots are erroneous. These are just a few examples the author of this article has personally documented.
How can hallucinations be avoided?
Clear prompting helps to reduce hallucinations. Ask for specific facts and for a reference or citation for each fact given; even then, there is no guarantee ChatGPT has given you accurate information. By its own account, the more niche or specialized the topic, and the further it falls outside the model's "sweet spot," the more likely it is to make mistakes. ChatGPT also suggests starting a clean chat conversation for each research session. But the only certain safeguard is to manually check each reference yourself.
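As one way to package that advice, here is a small sketch of a reusable prompt wrapper. The wording is an assumption about what "ask for a citation for each fact" might look like in practice, not a prescribed formula; the idea is to paste the result into a brand-new chat session and still verify every citation by hand.

```python
def build_research_prompt(question: str) -> str:
    """Wrap a research question with the guardrails described above.

    The exact wording is only one possible phrasing -- an assumption,
    not a cure for hallucinations. Manual checking is still required.
    """
    return (
        f"{question}\n\n"
        "For every factual claim in your answer, include a specific citation "
        "(author, title, publisher, year) that I can verify independently. "
        "If you cannot cite a real, verifiable source for a claim, say "
        "'I don't know' for that claim instead of guessing."
    )

# Intended use: copy the printed prompt into a fresh chat session,
# then manually check each citation the model returns.
print(build_research_prompt("When was this small regional library founded?"))
```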
Although ChatGPT has made many upgrades, it seems to have become less and less reliable over the last several years of "improvements." Perhaps it's best to use ChatGPT and other AIs for everyday tasks like sorting your calendar, programming your Excel spreadsheets, making slideshows (Google Slides makes beautiful slideshows), and other menial chores, rather than for research that you will then have to follow up on manually to be certain no piece of it has been hallucinated.

A screenshot from the author’s conversation with ChatGPT on Jan. 31, 2026.
Where can you turn for accurate information?
Go old school. Use library search engines to find accurate sources of information. When you find a promising source, apply the CRAAP test: Currency, Relevance, Authority, Accuracy, Purpose. Purdue Global has a great article on applying the CRAAP test here. Always be sure to go to the original source to verify its authenticity.
A note from the author:
To be fair to the other AIs: my primary experience has been with ChatGPT and its hallucinations. I cannot speak for the hallucinatory capacities of the other AIs mentioned in this article.
