Press "Enter" to skip to content

Chatbots on Mushrooms

“Some amount of chatbot hallucination is inevitable. But there are ways to minimize it,” writes Lauren Leffer of Scientific American.

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many of the conflicts surrounding AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

Read the full article at Scientific American