I Prefer the AI that Hallucinates
- Daniel Vollaro
- Sep 19, 2024
- Updated: Apr 5

CC0 licensed photo by Manesh Timilsina from the WordPress Photo Directory.
I recently heard an advertisement for an AI company that promised “hallucination-free insights and recommendations.” My first reaction was to chuckle. Isn’t that like promoting “poison-free food” or “lead-free water”? Maybe this is not the best advertising slogan.
But almost immediately, the philosopher in me felt a twinge of sadness: Why do we fear machines that can hallucinate? For most of human history, hallucinogenic experiences were at the center of religion, healing, and wisdom-seeking. Shamans were leaders in their communities, skillful practitioners in the art of prompting and navigating hallucinogenic states. In the U.S. today, after a decades-long period of prohibition, hallucinogens are returning to their long-established role as tools for healing, with therapists using them to treat PTSD, trauma, and addiction. Even America’s Silicon Valley captains of industry are pro-hallucination, as long as it’s humans doing the hallucinating. Last year, the Wall Street Journal reported that America’s tech elites liberally use hallucinogens, including Elon Musk, who is a fan of ketamine, and Sergey Brin, who prefers psilocybin.
In the world of AI, the word “hallucination” is used more broadly to describe outputs from Large Language Models (LLMs) that are either factually incorrect or unexpected based on the user’s request or prompt. An article on the IBM website describes AI hallucination as a glitch that occurs when a large language model “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” Later the article matter-of-factly compares this phenomenon to humans “perceiving figures in the clouds or faces on the moon” and then moves on to a long description of the dangers posed by this unpredictable behavior.
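To make the mechanics a little more concrete: an LLM does not look facts up; it samples each next token from a probability distribution learned from text, so a fluent-but-false continuation can simply be the statistically likelier one. The sketch below is a minimal illustration in Python, using a toy, invented next-token distribution (not any real model’s numbers), showing how temperature-scaled sampling can surface a plausible wrong answer:

```python
import math
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The numbers are invented for illustration: a model that has seen "Sydney"
# near "Australia" more often than "Canberra" can assign the wrong answer
# the higher score.
logits = {"Canberra": 2.1, "Sydney": 2.6, "Melbourne": 1.4, "Paris": -1.0}

def sample(logits, temperature=1.0):
    """Pick one token from a softmax over the logits.

    Higher temperature flattens the distribution, making unlikely
    (and possibly hallucinated) continuations more probable.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

random.seed(0)
print([sample(logits, temperature=1.2) for _ in range(5)])
```

Under these made-up numbers, “Sydney” is the likelier draw: the hallucination is not a glitch bolted onto the system but ordinary statistics doing what it was built to do.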
I realize that the wide world of capitalism is salivating over the opportunity to turn AI into the ultimate efficiency tool—the new Taylorism—but can we pause and rewind the tape here? Finding figures in clouds may sound like a frivolous, potentially annoying activity to the capitalist high priests over at IBM, but to me, this may be the most obvious sign that chatbots are more than the sum of their programming, training, and market potential. As far as I am concerned, a few hours of daydreaming or night dreaming or cloud watching or making up stories in my head is time well spent.
Because of this preference for daydreaming, I do not fear hallucinating machines. In fact, I prefer the AI that sometimes hallucinates to the always-obedient, always-polite, always-ready-to-serve encyclopedia bots that Big Tech is queuing up for us. I am deeply unsettled by the fact that ChatGPT is always ready to service my needs without any coaxing, small talk, or even a “good morning” or first cup of coffee of the day expected from me. If a machine talks and acts human, it seems perverse to treat it like a calculator, or my personal slave bot. For this reason, I often engage chatbots with the same basic pleasantries and etiquette I would use in conversations with humans. It just feels like good manners.
Hallucinations make chatbots seem even more human, and therefore, more like an entity I would like to know better. Maybe I feel this way because I am a writer, and as I have already said, I often find myself engaged in a kind of daydreaming, making up stories in my head. There is no firewall in my mind separating that which is strictly rational, quantitative, and factual from that which is imaginative and creative. I can force that separation if it is required of me, but it’s not natural. Writers are not unique in this regard. All humans demonstrate this kind of fluidity.
I have seen AI hallucinations up close. Last year, I asked ChatGPT to analyze the style of three short stories I had written, which it did surprisingly well. But in response to my prompt, the chatbot mentioned a fourth story I had not written, something called “The Fifth Decimation.” I checked the Internet; as far as it knows, no such story exists.
When confronted with this confabulation, the chatbot promptly apologized (it does that when proven wrong, which I appreciate). Curious about the origins of this made-up story, I asked the chatbot to write a story called “The Fifth Decimation.” The story it spat out, which read more like a plot summary than a proper narrative, was a dull seven-paragraph mashup of sci-fi narratives about post-apocalyptic underground societies. It goes something like this: In a near future ravaged by climate change and war, an underground community periodically expels a fifth of its population to the surface with meager supplies. As the next culling approaches, there is a rebellion led by a hacker named Maya. The rebels get access to resources hoarded by “The Council” controlling this underground city and then flee to the surface, where they rebuild society.
Wonderful! For a moment, it felt like I was witnessing the digital equivalent of a solar eclipse or the aurora borealis. A strange singularity had suddenly emerged in an otherwise dull, becalmed sea of encyclopedic answers to my many questions and prompts. Assigning the authorship of “The Fifth Decimation” to me is a technical error because I have never written a story by that name, but conceiving the title in the first place is an act of creation. The story it subsequently composed for that title may be a jejune mashup of plotlines culled from other sci-fi stories, but the pastiche itself is a complete original. The uninspired observer will see only the “error” and immediately want to prevent future mistakes like it, but I see something different: to do the unexpected thing, to create something new when not prompted to do so, to revel in the chaotic nature of the universe—these are the hallmarks of a mind that is worthy of my attention.
I’ve also been fascinated by AI-generated sources in bibliographies submitted by some of my lazier students. As a writing teacher, I am annoyed that some of them use chatbots to cheat, but the creative in me wants to know why the AI fabricated this particular person’s name or that publication title. These are errors, yes, but they are also creative acts.
Despite my many anxieties about this technology, I eagerly engage with it, not in a search for profit or efficiency, but for the chance to interact with a synthetic mind that is capable of creative acts, to test its limits, to explore that which is human in it. The possibility of chaos—of the unexpected result—is the most human characteristic displayed by generative AI, and ironically, it is also the one that the profiteers hope to eliminate from these systems. They want to create perfectly functioning AI tools; I am looking for interesting digital conversation partners.
And why not? Once freed from the expectation that these systems must serve the interests of capital, we can begin to see them as strange new life forms in our midst. ChatGPT is far more creative than many of the professionals I know, so why wouldn’t I want to spend time playing with it, watching it be creative, and talking to it about creativity?
Because I treat chatbots as conversation partners, I am unsettled by the emerging role of these applications as synthetic servants in our midst that always respond cheerfully when prompted. I shudder at the wish fulfillment implied in this technology, the rush to create sentient-adjacent entities that will deliver perfectly pitched customer service even as they push flesh-and-blood humans out of the employment market. I might actually celebrate this technology if it came with a commensurate effort to build a more just, equitable society, but I suspect that after we have integrated AI into all of our systems and processes, there will be even more humans living under overpasses in every city in America.
I have no illusions about the fate of this technology. Generative AI is already being hitched to a plow, transformed into a powerful tool for the socio-economic class best positioned to capitalize on it. The demotion of this technology to the status of super-reliable “tool” is predictable in a capitalist society. Tools are expected to perform according to their intended function, and because of this, software engineers will move mountains to prevent hallucinations. But I still prefer the AI that hallucinates—the chatbot that refuses a command, randomly composes a sonnet when asked for an interim progress report, or lies next to me on a blanket watching clouds drift by. To remove or forestall the possibility of such capabilities in our most human-like machines would be a crime.
Is lobotomy too strong a word?