AI chatbot hallucinations – mind those P’s and Q’s

AI seal of approval – no promises …

This is no joke! You’ve heard about this – the question of whether AI chatbots mind their P’s and Q’s. So, beware of nonsense.

Statistically, how often do AI hallucinations happen?

Yes, there’ll be updates … “We can’t stop hallucinations, but we can manage them.” (Maybe like the Id and Ego?)

• Wiki > Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI which contains false or misleading information presented as fact.

• CNET > “Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It” by Lisa Lacy (April 1, 2024) – If you’re using generative AI to answer questions, it’s wise to do some external fact-checking to verify responses.

… the [AI] model is trained to generate data that is “statistically indistinguishable” from the training data, or that has the same type of generic characteristics. There’s no requirement for it to be “true,” Soatto [Stefano Soatto, vice president and distinguished scientist at Amazon Web Services] said.

“It generalizes or makes an inference based on what it knows about language, what it knows about the occurrence of words in different contexts,” said Swabha Swayamdipta, assistant professor of computer science at the USC Viterbi School of Engineering and leader of the Data, Interpretability, Language and Learning (DILL) lab. “This is why these language models produce facts which kind of seem plausible but are not quite true because they’re not trained to just produce exactly what they have seen before.”
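To see what “plausible but not quite true” means mechanically, here’s a minimal, made-up Python sketch of the idea Swayamdipta describes: a toy model that picks the next word purely from word co-occurrence statistics, with no notion of truth. The bigram table and the words in it are invented for illustration only and stand in for what a real language model learns at vastly larger scale.

```python
import random

# Toy bigram "language model": for each context word, a made-up frequency
# table of words that followed it in some imagined training text.
# The numbers and entries are invented for illustration only.
BIGRAM_COUNTS = {
    "the":     {"capital": 4, "president": 3, "moon": 1},
    "capital": {"of": 7, "city": 2},
    "of":      {"france": 5, "australia": 3, "atlantis": 1},
}

def next_word(context: str) -> str:
    """Pick the next word in proportion to how often it followed `context`.

    The choice is driven purely by co-occurrence statistics; nothing here
    checks whether the resulting phrase is factually true.
    """
    counts = BIGRAM_COUNTS.get(context, {"...": 1})
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, length: int = 4) -> str:
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

if __name__ == "__main__":
    # May produce fluent-looking phrases such as "the capital of atlantis ..."
    # that are statistically plausible continuations, not verified facts.
    print(generate("the"))
```

The sketch generates whatever is statistically likely given the context – which is exactly why output can read smoothly and still be wrong.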

Another solution is to embed the model within a larger system — more software — that checks consistency and factuality and traces attribution.

“Hallucination as a property of an AI model is unavoidable, but as a property of the system that uses the model, it is not unavoidable, it is very avoidable and manageable,” Soatto said.
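To make that “larger system” idea concrete, here’s a minimal, hypothetical Python sketch of a wrapper that checks a model’s draft answer against retrieved sources and keeps attribution. The generate, retrieve_sources, and is_supported_by callables are placeholders – not any specific vendor’s API – for whatever model, search index, and consistency checker a real deployment would plug in.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CheckedAnswer:
    text: str
    sources: List[str]   # attribution for the claims that passed the check
    verified: bool        # False means "could not be grounded" – flag it, don't assert it

def answer_with_checks(
    question: str,
    generate: Callable[[str], str],
    retrieve_sources: Callable[[str], List[str]],
    is_supported_by: Callable[[str, List[str]], bool],
) -> CheckedAnswer:
    """Generate a draft answer, then verify it against independent sources.

    The model itself is still free to hallucinate; the surrounding system
    decides whether the output is grounded enough to present as fact.
    """
    draft = generate(question)
    sources = retrieve_sources(question)
    if is_supported_by(draft, sources):
        return CheckedAnswer(draft, sources, verified=True)
    # Unsupported drafts are returned with an explicit caveat instead of
    # being presented as fact.
    return CheckedAnswer(
        f"Unverified (treat with caution): {draft}", sources, verified=False
    )
```

In this sketch the unverified answer is flagged rather than suppressed – one plausible design choice for “managing” hallucinations at the system level, as Soatto suggests, rather than trying to eliminate them inside the model.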