
Chat Guardrails Inspection

This document lists every place in the codebase that can prevent or restrict the chat from answering certain questions (e.g. “list down top python frameworks”).


1. Empty-context fallback (primary guardrail for off-topic questions)


File: app/lib/rag.server.js
Lines: 111–121

What it does: If the vector search returns zero chunks for the shop, the app never calls the LLM. It immediately returns a fixed message instead of making the OpenRouter chat request.

Relevant code:

// 8. Empty-context fallback — avoid an LLM call when no knowledge exists.
if (!chunks || chunks.length === 0) {
// ...
return {
reply: "I don't have enough information to answer that. Please contact the store's support team.",
sessionId: session.id,
};
}

When this triggers:

  • The shop has no ingested knowledge (no products, collections, articles, or website knowledge), so there are no rows in embeddings for that shop_id, and searchSimilarChunks returns an empty array.

When it does not trigger:

  • As soon as the shop has any ingested chunks, vector search returns up to 5 chunks (the 5 nearest by embedding). For “list down top python frameworks”, that typically means 5 unrelated store chunks, and the flow continues on to the LLM. The fallback fires only when chunks.length === 0.

Conclusion: This is the guardrail that blocks any answer when the shop has no knowledge. It is the only place that short-circuits the flow without calling the LLM.
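The short-circuit described above can be sketched as follows. This is a minimal illustration, not the exact code: the real rag.server.js also loads the session and logs the exchange, and the function and parameter names here are assumptions.

```javascript
// Sketch of the empty-context fallback: skip the LLM entirely when the
// shop has no ingested knowledge (hypothetical names for illustration).
function answerOrFallback(chunks, session, callLLM) {
  if (!chunks || chunks.length === 0) {
    return {
      reply: "I don't have enough information to answer that. Please contact the store's support team.",
      sessionId: session.id,
    };
  }
  // Only reached when at least one chunk exists for this shop.
  return callLLM(chunks);
}
```

With at least one chunk present, `callLLM` is always invoked, regardless of whether the chunks are actually relevant to the question.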


2. System prompt (LLM-level guardrail)

File: app/lib/prompt-builder.server.js
Lines: 3–5 (constant SYSTEM_PROMPT)

What it does: The system prompt is sent to the LLM on every request and defines how it behaves. A stricter version can instruct the model to use only the provided store context and to say “I’m not sure” when the answer is not in that context.

Current text in codebase:

const SYSTEM_PROMPT = `You are a helpful agent. answer any type of question user asks.
...
`;

With this text, the model is not restricted to store context only; it can answer general questions (e.g. Python frameworks) from its own knowledge.

If you use a stricter prompt (e.g. from an older version or another branch), such as:

  • “Answer customer questions using only the product and store information provided below.”
  • “If the answer is not in the context, say you’re not sure and suggest they contact support.”

then the LLM itself acts as a guardrail: for “list down top python frameworks” it would refuse to use general knowledge and reply with something like “I’m not sure” or “that’s not in the store information.”

Conclusion: The only LLM-level guardrail is the system prompt in prompt-builder.server.js. The current version allows any question; a stricter prompt would restrict answers to store context only.
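A context-restricted replacement for the current permissive prompt could look like the sketch below. The constant text and the `buildMessages` helper are assumptions for illustration; the real prompt-builder.server.js may assemble messages differently.

```javascript
// Hypothetical stricter system prompt (contrast with the permissive
// SYSTEM_PROMPT currently in prompt-builder.server.js).
const STRICT_SYSTEM_PROMPT = `You are a store assistant.
Answer customer questions using only the product and store information provided below.
If the answer is not in the context, say you're not sure and suggest contacting support.`;

// Hypothetical helper: prepend the prompt plus retrieved chunks to the
// user's message, in the standard system/user chat-message shape.
function buildMessages(contextChunks, userMessage) {
  const context = contextChunks.map((c) => c.text).join("\n---\n");
  return [
    { role: "system", content: `${STRICT_SYSTEM_PROMPT}\n\nContext:\n${context}` },
    { role: "user", content: userMessage },
  ];
}
```

With this prompt in place, “list down top python frameworks” would reach the LLM with only store chunks as context, and the instruction steers the model toward an “I’m not sure” style reply.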


3. API route (request validation only)

File: app/routes/api.chat.jsx

What was checked:

  • No content filtering, no topic blocklist, no check for “python” or “frameworks.”
  • Validation is only: shopDomain and message required, message.length <= 2000.
  • No guardrail logic here that would prevent answering “list down top python frameworks.”
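The validation described above amounts to something like the following sketch (the function name and error messages are assumptions, not the exact code in api.chat.jsx):

```javascript
// Sketch of the only checks the API route performs: required fields
// and a 2000-character message limit. No topic or content filtering.
function validateChatRequest(body) {
  if (!body.shopDomain || !body.message) {
    return { ok: false, error: "shopDomain and message are required" };
  }
  if (body.message.length > 2000) {
    return { ok: false, error: "message too long (max 2000 characters)" };
  }
  return { ok: true };
}
```

Note that a question like “list down top python frameworks” passes these checks trivially; nothing at this layer distinguishes it from a store question.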

4. Vector search (no similarity threshold)


File: app/lib/vector-search.server.js

What it does: Returns the top topK (default 5) chunks by cosine similarity. There is no minimum similarity threshold; the code does not filter out low-similarity results.

So as long as the shop has at least one chunk, you get up to 5 chunks (even if they are unrelated to the query). The guardrail is not in the vector search layer.
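If a threshold were desired, it could be applied on top of the top-K results, as in this sketch (the 0.75 cutoff and the result shape are assumptions; the real vector-search.server.js applies no such filter):

```javascript
// Sketch: discard low-similarity chunks after a top-K search. Each
// result is assumed to carry a cosine similarity score in [0, 1].
function filterByThreshold(results, minSimilarity = 0.75) {
  return results.filter((r) => r.similarity >= minSimilarity);
}
```

With such a filter, an off-topic query whose 5 nearest chunks are all dissimilar could yield an empty array, routing the request into the empty-context fallback in rag.server.js instead of the LLM.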


5. Chat widget (no client-side filtering)

File: extensions/appifire-chat/assets/chat-widget.js

What was checked:

  • Sends message in the POST body to /api/chat. No client-side filtering or blocking of questions.
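The widget’s request amounts to a plain POST of the message, roughly as sketched below (field names and the session handling are assumptions; the exact code in chat-widget.js may differ):

```javascript
// Sketch of the widget's unfiltered POST to /api/chat. The payload is
// built as-is from user input; nothing is blocked or rewritten here.
function buildChatPayload(shopDomain, message, sessionId) {
  return JSON.stringify({ shopDomain, message, sessionId });
}

async function sendChatMessage(shopDomain, message, sessionId) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildChatPayload(shopDomain, message, sessionId),
  });
  return res.json();
}
```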

Summary: what blocks “list down top python frameworks”

  • Cause: No knowledge for shop (0 chunks)
    Location: app/lib/rag.server.js, lines 111–121
    Fix: Ensure the shop has ingested products, collections, articles, or website knowledge so vector search returns at least one chunk; or change the logic to allow an LLM call with empty context for general questions.

  • Cause: Strict system prompt (“only use context”)
    Location: app/lib/prompt-builder.server.js, SYSTEM_PROMPT
    Fix: The current prompt allows any question. If you see refusals, ensure the deployed prompt is the one that says “answer any type of question” and has not been reverted to a context-only version.

There are no other guardrails in the codebase (no topic blocklist, no similarity threshold, no API-level content filter). The two mechanisms above are the only ones that can prevent answering that question.
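The alternative fix mentioned above, calling the LLM even with empty context, could be sketched as follows. This is hypothetical, not current code; the prompt strings and function shape are assumptions.

```javascript
// Sketch: instead of returning the fixed fallback message when no
// chunks exist, switch to a general-knowledge prompt and still call
// the LLM (hypothetical alternative to the rag.server.js guard).
function answerWithOptionalContext(chunks, userMessage, callLLM) {
  const hasContext = Array.isArray(chunks) && chunks.length > 0;
  const systemPrompt = hasContext
    ? "Answer using the store context below."
    : "Answer the question from general knowledge.";
  return callLLM({ systemPrompt, chunks: hasContext ? chunks : [], userMessage });
}
```

The trade-off: a shop with no ingested knowledge would then answer general questions like “list down top python frameworks” instead of deflecting to support.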