The core issue with many AI tools today is their inability to provide accurate information consistently. An AI that generates plausible-sounding answers is not the same thing as an AI that generates trustworthy ones.
I regularly search for my own name online, not out of vanity, but because it surfaces reviews, year-end best-of lists, foreign publication dates, and other news about my work I might otherwise miss. During one of these searches, I discovered something startling: Grok, the AI associated with the company X, claimed that I had dedicated my novel The Consuming Fire to characters from Disney's Frozen. I made no such dedication. It also suggested I have multiple children; I have exactly one, and I have certainly never dedicated a book to Disney characters.
So, why did Grok get this wrong? The simple truth is that most consumer-facing AI systems are essentially advanced predictive text engines: highly sophisticated autocompleters. Their main goal is to predict what word or phrase is statistically most likely to follow, not to verify facts. These models lack real understanding or awareness; they cannot recognize when the information they produce is false. "Statistically likely" doesn't mean "correct," and AI models don't possess a built-in sense of factual accuracy.
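To make that concrete, here's a toy sketch in Python. This is not how a real large language model works internally (those use neural networks over vast corpora); it's a deliberately tiny, hypothetical bigram counter over a made-up corpus, just to show how "most likely next word" and "true" come apart:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: in this made-up data, books are usually
# "dedicated to krissy," so that's the statistically likely pattern.
corpus = (
    "the novel is dedicated to krissy . "
    "the sequel is dedicated to krissy . "
    "the memoir is dedicated to athena ."
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word: 'likely', not 'true'."""
    return follows[word].most_common(1)[0][0]

# Asked what ANY book is "dedicated to," the model emits the most
# common continuation in its data, regardless of the actual fact.
print(predict("to"))  # prints "krissy"
```

The model answers confidently for every book, including ones it has never seen, because all it has is frequency. That's the autocompleter failure mode in miniature.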
Curious, I decided to test what other AI systems would tell me about my book's dedication. Here's what I found:
- Google's AI: When asked, it confidently asserted that I had a daughter named Corbin, which is untrue; I have only one child, and her name is Athena. Google's AI was simply fabricating.
- Microsoft's Copilot: This AI correctly noted that I have dedicated some books to my wife, Krissy, and I was glad it didn't claim I'd dedicated the book to Leeloo from The Fifth Element. But like Google's AI, it still got the dedication wrong: it claimed I dedicated The Consuming Fire to Krissy, which I did not.
- ChatGPT: This AI performed the worst. It not only assigned the novel a wrong dedication, claiming I had dedicated it to various people named Corey or Cory, but it fabricated the text of the dedication itself. I hadn't even asked for the wording; ChatGPT volunteered false information, going beyond the question entirely.
- Claude (from Anthropic): To its credit, Claude was the only AI that admitted it lacked reliable data and tried searching the web. When that failed, it said plainly that it couldn't find the answer and offered tips on how I could look it up myself, which is what responsible AI behavior looks like. (When I asked Grok directly, it likewise admitted it couldn't determine the dedication; when I asked why another instance of Grok had gotten it wrong, it shrugged and suggested that these things happen.)
- Gemini (the model behind Google's AI features): When asked, it confidently provided incorrect information, claiming books were dedicated to the wrong people or carried no dedication at all. When corrected, it apologized, then misattributed the dedication to yet another author.
So, out of five different AI systems and multiple runs, only Claude refrained from confidently spouting falsehoods about my book. Even Claude isn't immune to hallucinating inaccuracies; they happen from time to time. The key point is that these AIs, however sophisticated, are fundamentally guessing from statistical likelihood, not verifying facts.
I often ask AI systems about my own life—things I know firsthand—and, consistently, they get those facts wrong with unwavering confidence. If an AI cannot reliably tell me facts I already know, how can I trust it to deliver accurate insights on topics I don’t possess direct knowledge of?
To illustrate the point further, I pulled books off my shelf that are not mine and asked Gemini to identify their dedications. I asked about Richard Kadrey's Aloha From Hell; Gemini confidently claimed it was dedicated to a specific member of The Cramps, which it is not. I asked about Robopocalypse by Daniel H. Wilson; again, false details about the dedication. Same with Alif the Unseen by G. Willow Wilson: the dedication was misattributed to someone else entirely. These consistent mistakes highlight the critical flaw: the model makes confident, educated-sounding guesses that are simply wrong.
So, what does all this tell us?
First, don’t rely on AI tools as your primary search engine—you’re likely to get misleading or outright false information without even realizing it.
Second, don’t trust AI for factual accuracy. When it doesn’t know something, it tends to fill in the gaps with fabricated details, confidently stated as facts.
Third, since you have to verify every fact an AI provides, which is often more work than just looking the answer up yourself, why bother? Using AI as a source can increase your workload rather than reduce it.
Yes, AI has its uses, but fact-checking is not one of them. The fault isn't in the AI itself; it lies in how the technology is marketed and used, portrayed as authoritative when it is fundamentally a statistical matching engine. You wouldn't trust a device marketed as an air filter to actually clean your air if it's just a fan with a sticker on it, so don't treat AI as a reliable knowledge source.
I dedicate this piece to everyone out there who takes these lessons seriously and refuses to trust AI blindly. You’re the wise ones—true skeptics in a sea of false confidence. And that’s a fact.