AI chatbot Next.js template (support, FAQ, embedded widget)
Internal AI tools and customer-facing chatbots share the same underlying machinery but pull it in different directions. An internal tool optimises for power-user flexibility; a chatbot optimises for trust, scope, and graceful refusal. A chatbot that confidently invents an answer outside its corpus is worse than no chatbot at all. SaaSForge AI's chat plus RAG stack gives you the moving parts; the chatbot work is mostly about constraining and embedding them correctly.
Ingest the knowledge base, not the whole internet
A support chatbot grounds its answers in a defined corpus: your help centre articles, product docs, policy pages, or a Notion export. SaaSForge AI's upload pipeline (PDF, Markdown, plain text extraction, chunking, embedding into pgvector) is the same pipeline used for general RAG, just pointed at a curated source instead of arbitrary user uploads.
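The chunking step in that pipeline can be sketched as a fixed-size splitter with overlap. This is a minimal illustration, not the boilerplate's actual implementation; the function name and defaults are assumptions, and the real pipeline runs PDF/Markdown text extraction before this step.

```ts
// Minimal fixed-size chunker with overlap (illustrative defaults).
// Overlap keeps sentences that straddle a boundary retrievable from
// either neighbouring chunk.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
    start += size - overlap;
  }
  return chunks;
}
```

Each chunk is then embedded and written to pgvector alongside its source document ID.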
Keeping the corpus tight matters more than throwing every document at the index. A focused corpus produces relevant retrievals at low top-k; a sprawling corpus produces a noisy retrieval surface and weaker answers. Most teams iterate on what is in the index, what is left out, and how often it refreshes.
Retrieval scoping and the refusal path
On each visitor question, the chatbot embeds the question and retrieves top-k chunks from the corpus by cosine similarity. A similarity threshold filters out weak matches before they hit the prompt, so a question outside the corpus does not get padded with irrelevant context. When no chunk passes the threshold, the chatbot refuses with a documented fallback ('I do not have information on that, here is how to reach support') rather than hallucinating.
Refusal-by-default is the single biggest trust lever for a customer-facing bot. The system prompt instructs the model to cite chunk IDs in its answer; the UI links citations to source documents so visitors can verify before relying on the response.
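A citation-aware system prompt can be assembled by labelling each retrieved chunk with its ID before it enters the context. This is a hedged sketch: the `Chunk` shape, the `[chunk:ID]` marker format, and the builder function are illustrative, not the boilerplate's exact prompt.

```ts
interface Chunk {
  id: string;
  text: string;
}

// Illustrative prompt builder: each chunk is labelled with its ID so the
// model can emit [chunk:ID] citations that the UI links back to sources.
function buildSystemPrompt(chunks: Chunk[]): string {
  const context = chunks
    .map((c) => `[chunk:${c.id}]\n${c.text}`)
    .join("\n\n");
  return [
    "Answer only from the context below.",
    "Cite the chunk IDs you used, e.g. [chunk:abc].",
    "If the context does not cover the question, say you do not know.",
    "",
    context,
  ].join("\n");
}
```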
```ts
const hits = await retrieveChunks({
  workspaceId,
  question: userMessage,
  topK: 5,
  minScore: 0.72,
});

if (hits.length === 0) {
  return streamRefusal({
    message: "I do not have information on that. Here is how to reach our team.",
  });
}

return streamChatWithContext({ question: userMessage, chunks: hits });
```

Embedding the chatbot where customers actually are
A chatbot that lives only on `/chat` is half a product. The shipping pattern is an embeddable widget: a small launcher in the bottom corner of the marketing site or app, opening a chat panel without a full page navigation. SaaSForge AI's chat UI is structured as a self-contained React surface that can be mounted standalone or as a widget; the underlying API routes are the same either way.
For widgets on external sites (your customer's site, embedding your bot as a service), an iframe with origin allowlisting and a per-tenant API key is the common shape. The credit-metering and workspace-scoping primitives in SaaSForge AI map cleanly onto this multi-tenant embed model.
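An origin check for the embed endpoint might look like the following. The function name is illustrative and `tenantOrigins` stands in for whatever allowlist your workspace record stores; comparing parsed hosts rather than raw strings avoids trivial prefix-matching bypasses.

```ts
// Sketch of per-tenant origin allowlisting for an embedded widget's API.
// Rejects non-HTTPS origins and anything not on the tenant's allowlist.
function isAllowedOrigin(origin: string, tenantOrigins: string[]): boolean {
  try {
    const { protocol, host } = new URL(origin);
    return (
      protocol === "https:" &&
      tenantOrigins.some((allowed) => new URL(allowed).host === host)
    );
  } catch {
    return false; // malformed Origin header
  }
}
```

In a Next.js route handler this would gate the request before the per-tenant API key is even checked, so stolen keys are useless from unlisted origins.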
Handoff, escalation, and what a bot should not try to do
Even a good support chatbot should know when to step aside. The boilerplate's chat surface supports escalation hooks: a 'connect to a human' button that pipes the conversation transcript into your existing support tool (Zendesk, Intercom, a Slack channel) via an outbound webhook. Visitors see continuity; agents see the context the bot already gathered.
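The outbound webhook payload can be as simple as a flattened transcript plus a conversation ID. This is a sketch under assumptions: the `Turn` shape, function names, and payload fields are hypothetical, and the receiving end (Zendesk, Intercom, Slack) dictates the real schema.

```ts
interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Illustrative escalation hook: flatten the transcript and POST it to the
// support tool's inbound webhook so the agent sees what the bot gathered.
function buildEscalationPayload(conversationId: string, turns: Turn[]) {
  return {
    conversationId,
    transcript: turns.map((t) => `${t.role}: ${t.content}`).join("\n"),
    escalatedAt: new Date().toISOString(),
  };
}

async function escalate(webhookUrl: string, conversationId: string, turns: Turn[]) {
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildEscalationPayload(conversationId, turns)),
  });
}
```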
Refunds, account-specific actions, and anything legally sensitive are typically out of scope for the bot, even when the model could plausibly answer. The system prompt lists out-of-scope categories explicitly and routes them to the escalation path. The principle is the same as the refusal-by-default rule: a bot that knows its boundaries earns more trust than a bot that tries to do everything.
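One way to enforce those boundaries is a pre-filter that routes clearly out-of-scope topics to escalation before the model sees them. This is an illustrative sketch, not the boilerplate's mechanism; the category patterns are placeholders, and in practice the system prompt repeats the same categories as a second line of defence.

```ts
// Illustrative out-of-scope pre-filter. Pattern matching is crude but
// cheap and deterministic; the system prompt handles the long tail.
const OUT_OF_SCOPE: RegExp[] = [
  /refund/i,
  /delete my account/i,
  /legal|gdpr/i,
];

function routeMessage(message: string): "bot" | "escalate" {
  return OUT_OF_SCOPE.some((re) => re.test(message)) ? "escalate" : "bot";
}
```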