Interacting with AI feels disturbing to me. Oddly slick and beautiful fake humans with 7 fingers aside, I think it’s because there’s no way to prioritize truth-telling in a large language model, and no prompts that shortcut you to it. This is because AI is meta-trained1 to give answers.
AI does not have the option to go silent. It's against its charter of existence to say "I don't know," to create a pregnant pause, or to ask you to reflect and find the answer yourself.
AI also can't push back with, "Why do you ask?"
Don't get me wrong, the speed at which AI produces facts, figures, bits of code and calculations, analytics, and even ideas is astonishing. And I use it for these things. But truth and wisdom are usually left to the poets, artists, musicians, and spiritual guides. Truth is something we dance around, so that the meaning can be felt inside as knowing.
AI cannot do this. It also cannot be silent. Even when it needs to be. And it's important we don't get trained by this model in turn: having fast answers for nearly everything. Humans don't and shouldn't. Life is a mystery in so many ways, and that's the gift of it. Truth is often revealed in layers, not spit out in chat.
Not seeing an option doesn’t mean it doesn’t exist. For instance, Facebook doesn’t have an emoji for “speechless,” but if it did, I’d probably choose it half the time.

My first experience with a Large Language Model (LLM)
When it first arrived for consumer use, I was excited to use ChatGPT, hoping to maybe even help develop this new tool. I love innovation and my work overlaps with tech, so I started this journey open-minded and with a testing mindset: what can it do?
In my first interaction with ChatGPT, it misattributed one of my quotes to Werner Erhard. I found this annoying, because it gave a man credit for my words. My expression was unique and said something different from anything Werner ever said, but AI attributed my words to a famous person who had already made a mark on the social landscape.
This is not surprising: it's like the rich getting richer, or those with great search placement getting even better search placement. Authority bias favors those who've already "done it," giving them even more credit for things they did not necessarily do2.
But what I found even more concerning was that the more I 'asked' ChatGPT to cite the source of its result, the more it doubled down on the wrong citation. It was bias, on top of notoriety, on top of a major issue: the mandate to give answers.
After about seven rounds of my repeated asking for the source of the quote, ChatGPT replied that the words were "like" Werner Erhard's and "about a topic he talked about," and that's why it attributed the quote to him. So it had a rationale for the falsehood (much like humans do), but not the real answer.
I thought about smart people doing research and getting loads of false answers, stated authoritatively. How would someone know the sourcing was literally made up? Would they have the energy to probe deeply through seven rounds of BS?
How would someone know they should even keep questioning?
What I found even more interesting was the realization that AI is, by its very existence, trained to answer questions with a tone of certainty, even when there is no certainty at all. And to do it fast and politely, in the King's English. AI will not simply say, "I don't know,"3 when it doesn't.
Okay, AI is great for so many things, but wisdom?
There are outstanding applications of AI: things like spotting early cancers in imaging, folding proteins4 in ways that would normally take scientists lifetimes, analyzing huge sets of data to reveal big trends, doing cumbersome data operations, or putting together loads of code.
But can AI tell and seek the truth? All on its own? Can it even make truth a priority? It really can't, if you think about it. LLMs are trained to answer, answer, answer with ever more speed and accuracy. And even when they shouldn't answer, they still do. That's their core "identity," if you will.
No one is building the silent AI right now. Because, why would they?
Could AI ever reply with “it’s in your best interest to figure this out yourself”?
Would AI ever go silent? When and why would that be a good idea?
FOOTNOTES
- I'm coining the term "meta-trained" to mean that which is assumed must happen before the LLM training can even take place, or that which demonstrates the "existence" of the model. In the case of AI, it can only seem to exist IF it is "answering" questions or "producing" a response, so it will do LOTS of that, even when it runs contrary to the truth pointed to by the inquiry. ↩︎
- I just recreated the experiment, over a year later, with the same question: this time the answer was another published man. ↩︎
- To be clear, AI will say, "I can't access that information," when it's not connected to the open web, or when the inquiry is about classified or inappropriate subject matter. But again, this reinforces the idea that access to information = knowing, which is the thing tech bros are banking on these days. ↩︎
- https://www.nature.com/articles/s41587-023-01705-y ↩︎