A sentence I often hear when talking to companies that tried a chatbot without really stopping to understand what they were buying:
“The chatbot answers… but not exactly what you asked.”
Yes, it happens with most assistants I test. Even chatbots from companies with enough brand and budget not to settle for something that works halfway.
The assistant doesn’t behave like this because of an intrinsic limitation of the technology. The problem is how it was prepared. And that is more subtle, and more subjective, than anything that usually shows up in a project quote.
The problem isn’t what it says, but where it answers from
A customer asks a specific question. The bot has the correct information to answer it. But instead of getting to the point, it does this:
- repeats the question
- re-explains something the customer already knows
- answers something related, but not what was asked
From the outside, it looks like clumsiness. From the inside, from “the chatbot’s mind”, it’s confusion. And that confusion comes from the instructions it was given and from the context it was fed.
It’s not that it lacks the (artificial) intelligence to answer. It’s that it can’t clearly tell which part of the information it has been fed is the answer, which is context, and which is noise.
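To make that concrete, here is a minimal sketch in plain Python. It isn’t any particular chatbot framework and it isn’t the method I use; the helper names and section labels are made up for the illustration. It only shows the difference between handing the model one undifferentiated blob and labeling each part so it can tell the instruction, the reference material, and the actual question apart.

```python
# A minimal sketch, in plain Python, of two ways to assemble the text an
# assistant receives. No real chatbot framework is assumed; the helpers
# and the section labels are hypothetical.

def blob_prompt(website_text: str, old_emails: str, question: str) -> str:
    """Everything concatenated: the model has to guess what is what."""
    return website_text + "\n" + old_emails + "\n" + question

def labeled_prompt(instruction: str, reference: str, question: str) -> str:
    """Each part is labeled, so the model can tell the instruction,
    the background material, and the thing it actually has to answer apart."""
    return (
        "INSTRUCTION:\n" + instruction.strip() + "\n\n"
        "REFERENCE MATERIAL (use only if relevant):\n" + reference.strip() + "\n\n"
        "CUSTOMER QUESTION (answer this, and only this):\n" + question.strip()
    )

if __name__ == "__main__":
    print(labeled_prompt(
        instruction="Answer in two sentences. Do not repeat the question.",
        reference="Opening hours: Mon-Fri 9:00-18:00. Returns accepted within 30 days.",
        question="Can I return an item I bought three weeks ago?",
    ))
```

The model still decides how to answer, but at least it knows where the question ends and the background begins.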
When information gets in the way
Many assistants fail out of an excess of good intentions.
They’re loaded with:
- website texts
- replies from old emails
- explanations meant for training staff
- FAQs that already include the questions
All together. All mixed up. And the result is a chatbot that:
- answers questions no one asked
- goes back to topics that were already closed
- shifts the focus of the conversation without warning
For the user, it’s frustrating. For the business, it’s wasted time (and sometimes lost customers).
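Just to illustrate what “putting things in order” means before anything reaches the assistant, here is another small sketch in plain Python. The document fields and the "audience" labels are hypothetical, not a real tool’s format; the point is simply that each piece of source material carries a note saying what it is and who it was written for, so internal texts never end up in a customer-facing answer.

```python
# A minimal sketch of labeling source material before it ever reaches the
# assistant. The fields and the "audience" values are hypothetical.

documents = [
    {"text": "Our premium plan includes priority support.",
     "source": "website", "audience": "customer"},
    {"text": "Re: re: re: invoice from 2019...",
     "source": "old_email", "audience": "internal"},
    {"text": "When training new staff, explain the refund flow like this...",
     "source": "training_doc", "audience": "internal"},
    {"text": "Q: Do you ship abroad? A: Yes, to the EU only.",
     "source": "faq", "audience": "customer"},
]

def usable_context(docs):
    """Keep only material written for customers; drop internal noise."""
    return [d["text"] for d in docs if d["audience"] == "customer"]

print(usable_context(documents))
# ['Our premium plan includes priority support.',
#  'Q: Do you ship abroad? A: Yes, to the EU only.']
```

Nothing clever happens here, and that’s the point: most of the work is deciding what the assistant should never see.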
It’s not a lack of intelligence, it’s a lack of order
The tool doesn’t decide what matters. It doesn’t know which part of the text is foundational and which is secondary. It doesn’t know which parts of the conversation are still open and which are already behind it. It does what it can with what it was given.
And when you give it everything at once, it answers everything at once.
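The same goes for the conversation itself. As a rough illustration, assuming a plain list of turns and an arbitrary cutoff (neither of which reflects any real product, nor my own setup), this sketch keeps only the most recent exchanges verbatim and folds the already-closed ones into a short note, so the assistant stops digging up topics the customer considers finished.

```python
# A minimal sketch of keeping the conversation in order: only the most
# recent turns are passed back verbatim; older, already-closed turns are
# folded into a one-line note. The turn format and the cutoff of 4 are
# hypothetical choices, not a recommendation.

MAX_VERBATIM_TURNS = 4

def trimmed_history(turns: list[str]) -> list[str]:
    """Return the history the assistant should actually see."""
    if len(turns) <= MAX_VERBATIM_TURNS:
        return turns
    closed = len(turns) - MAX_VERBATIM_TURNS
    note = f"(Earlier in the conversation, {closed} turn(s) were already resolved.)"
    return [note] + turns[-MAX_VERBATIM_TURNS:]

history = [
    "Customer: What are your opening hours?",
    "Bot: Mon-Fri, 9:00 to 18:00.",
    "Customer: And do you ship abroad?",
    "Bot: Yes, to the EU only.",
    "Customer: OK. Can I return an item I bought three weeks ago?",
]
print(trimmed_history(history))
```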
The silent consequence
This kind of failure doesn’t produce obvious errors. It produces something worse:
- customers who get tired
- employees who stop using it
- opportunities that cool off
The assistant works… but it’s annoying. And when it’s annoying, it gets abandoned.
🔵 In the next issue: how I keep assistants from drifting off-topic, repeating what’s unnecessary, or answering things that don’t apply.