Last month, Tiernan Ray wrote a piece titled “Stop saying AI hallucinates – it doesn’t. And the mischaracterization is dangerous.”
Ray argues that AI does not hallucinate, but instead confabulates. He explains the difference between the two terms:
“A hallucination is a conscious sensory perception that is at variance with the stimuli in the environment. A confabulation, on the other hand, is the making of assertions that are at variance with the facts, such as ‘the president of France is François Mitterrand,’ which is currently not the case.
“The former implies conscious perception, the latter may involve consciousness in humans, but it can also encompass utterances that don’t involve consciousness and are merely inaccurate statements.”
The distinction matters because “hallucination” implies a conscious perceiver, and that framing nudges us toward treating bots as sentient. And if we treat bots (such as my Bredebot) as sentient entities, we can get into all sorts of trouble. There are documented cases in which people have died because their bot, their little buddy, told them something false that they believed.

After all, “he” or “she” said it. “It” didn’t say it.
Today, we often treat real people as things. The hundreds of thousands of people laid off by tech companies this year are mere “cost-sucking resources.” Meanwhile, the AI bots that are sometimes called upon to replace those “resources” are treated as “valuable partners.”
Are we endangering ourselves by treating non-person entities as human?
