© 2025 Parrot Answers
Why website chatbots give wrong answers
If you have built a widget that answers questions from your own content, you may have seen it give a confident but wrong or incomplete answer.
In many cases, the problem is not the answer itself. It is how that answer is located.
Most modern website chatbots use retrieval-augmented generation (RAG), where the answer is generated from a small set of selected sections of your content. Chatbots that rely on retrieval can only work with the information they are given.
This article explains why that selection step fails more often than expected, and how small changes to wording can make content easier to retrieve and reuse. Those same changes often improve search visibility and reduce follow-up questions.
When relevant content is not selected, it does not matter how capable the language model is.
The missing information never reaches it.
Most systems that answer questions from a website follow a similar process.
Content is divided into smaller sections. When a question is asked, the system searches those sections and selects a limited subset that appears relevant. Only that subset is passed to the language model to generate a response.
Each step narrows the available information. By the time the model produces an answer, it is working from a small slice of the original content.
This explains why an answer can be visible on the page but absent from the response. The full page is available to the reader. The system is working from selected fragments.
Diagram: a retrieval-based chatbot selects a small subset of site content before generating an answer. This selection step is where many failures occur.
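The narrowing effect of that selection step can be sketched in a few lines. This is an illustrative model only: it scores sections by keyword overlap, where production systems typically use embedding-based vector search, and the sections and question are invented for the example.

```python
import re

def tokenize(text):
    """Lowercase and split into word-like tokens (keeps $ amounts)."""
    return set(re.findall(r"[a-z0-9$]+", text.lower()))

def retrieve(question, sections, k=2):
    """Keep the k sections sharing the most terms with the question.

    Real systems usually rank by embedding similarity, but the
    narrowing is the same: only the top k sections survive.
    """
    q = tokenize(question)
    ranked = sorted(sections, key=lambda s: len(q & tokenize(s)), reverse=True)
    return ranked[:k]

# Invented site content, divided into small sections.
sections = [
    "Basic is $19 per month, Pro is $49 per month.",
    "You can cancel anytime from your account settings.",
    "Parrot Answers pricing: the Basic plan costs $19 per month.",
]

# Only these selected sections reach the language model;
# everything else on the page is invisible to it.
context = retrieve("How much does Parrot Answers cost per month?", sections)
```

Note that the cancellation section never reaches the model here, no matter how capable the model is: it was filtered out before generation began.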
Many FAQs and help pages assume shared context.
This works when the page is read as a whole. However, it can become unreliable when a single paragraph is evaluated on its own.
Selection systems look for textual overlap between the question and the available content. When key terms are missing or implied, relevance becomes harder to establish. The system may select a different section that appears closer to the query, even if it contains less useful information.
A practical way to find these problems is to mentally remove the surrounding context.
Read an answer as if it were detached from its heading and page title. If the subject is unclear or the scope is ambiguous, the answer is likely to be missed during retrieval.
Making the subject explicit usually resolves this without changing the intent or length of the answer.
Most improvements come from straightforward edits.
These changes reduce uncertainty. They also make answers usable outside their original page, which is how retrieval systems encounter them.
This approach works especially well for a tool like Parrot Answers, which intentionally restricts what the model can answer and focuses on improving how content is written and selected rather than letting the chatbot improvise. It is designed for locked, reviewable answers rather than open-ended conversation.
Original:
Q: How much does it cost?
A: Basic is $19 per month, Pro is $49 per month.
This answer assumes the subject is already known and omits common pricing terms.
Revised:
Q: How much does it cost?
A: Parrot Answers offers subscriptions that are priced by plan. The Basic plan is $19 per month, and the Pro plan is $49 per month.
The revised version names the product once and introduces the pricing concepts directly. The answer can now stand on its own.
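To see why the revision helps, compare how many of a likely question's terms each version shares. This is a minimal lexical sketch, and the question wording is an assumption; real retrievers typically use embeddings, but explicit terms help those rankings too.

```python
import re

def shared_terms(question, answer):
    """Distinct word-like tokens the answer shares with the question."""
    toks = lambda text: set(re.findall(r"[a-z0-9$]+", text.lower()))
    return toks(question) & toks(answer)

question = "What does a Parrot Answers subscription plan cost?"

original = "Basic is $19 per month, Pro is $49 per month."
revised = ("Parrot Answers offers subscriptions that are priced by plan. "
           "The Basic plan is $19 per month, and the Pro plan is $49 per month.")

# The original shares no terms with the question, so a lexical
# retriever has nothing to anchor on. The revised version shares
# "parrot", "answers", and "plan", so it can be ranked as relevant.
print(shared_terms(question, original))  # set()
print(shared_terms(question, revised))   # {'parrot', 'answers', 'plan'}
```

Note that "subscription" still fails to match "subscriptions" in this naive sketch; systems with stemming or embeddings handle such variants, but exact, explicit terms remove the gamble.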
Original:
Q: Can I cancel?
A: Yes, you can cancel anytime. Refunds are handled case by case.
This leaves key details undefined and often leads to follow-up questions.
Revised:
Q: Can I cancel my subscription?
A: You can cancel your subscription at any time from your account settings. Your plan remains active until the end of the current billing period. Refunds are not automatic. If you believe you were billed incorrectly, contact support with your invoice number so the billing details can be reviewed.
The revised answer defines cancellation behaviour and uses the same terms that typically appear in related questions.
Original:
Q: Is there a limit?
A: It depends on your plan.
This does not specify what is limited or how limits are applied.
Revised:
Q: Is there a message limit on the widget?
A: Each plan includes a monthly message limit for the widget. The Basic plan includes 1,000 messages per month and the Pro plan includes 5,000. If you exceed the limit, the widget continues to load but may pause responses until the next billing cycle unless you upgrade.
The scope and behaviour of the limit are clear without adding unnecessary detail.
Original:
Q: Who can see this?
A: Only you and your team.
This answer depends on shared understanding of roles and access.
Revised:
Q: Who can access my Parrot Answers dashboard and content?
A: Access to your dashboard is limited to users invited to your workspace. Each member signs in with their own account. Uploaded FAQ content is not publicly visible in the dashboard. The public widget only shows answers generated from that content.
The revised version defines access, scope, and visibility explicitly.
Clear questions and explicit answers tend to perform better in several contexts: chatbot retrieval, search results, and direct reading.
Rewriting for retrieval does not require expanding answers unnecessarily.
Specific wording with defined boundaries is easier to retrieve and easier to rely on.
Start with the pages that generate the most questions. Pricing, billing, limits, onboarding, and cancellations usually benefit first.
Review each answer on its own. Replace implied references with explicit ones. Name the concepts involved. Clarify behaviour where uncertainty leads to follow-up questions.
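The review step above can be partially automated. Below is a hypothetical sketch that flags answers which open with a vague subject or never name the product or concept involved; the trigger word lists are invented for illustration and would need tuning to your own content.

```python
import re

# Invented trigger lists; a real check would be tuned to your content.
VAGUE_OPENERS = {"it", "this", "that", "they", "yes", "no"}
SUBJECT_TERMS = {"parrot", "answers", "plan", "subscription",
                 "widget", "dashboard"}

def needs_review(answer):
    """Flag an answer that may not stand on its own out of context."""
    words = re.findall(r"[a-z]+", answer.lower())
    opens_vague = bool(words) and words[0] in VAGUE_OPENERS
    names_subject = any(w in SUBJECT_TERMS for w in words)
    return opens_vague or not names_subject

needs_review("It depends on your plan.")                     # flagged
needs_review("Each plan includes a monthly message limit.")  # passes
```

A check like this will not catch every implied reference, but it surfaces the answers most likely to fail when read as a detached fragment, which is exactly how a retrieval system reads them.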
Those changes tend to improve retrieval reliability without altering the overall structure of the content.
If you're working with FAQs or help pages, you may also want to read Why FAQ pages fail, which looks at why many FAQ pages break down when used with chatbots and RAG systems.