Building an AI Chatbot as a Product Interface, Not a Side Feature
Most AI chatbot integrations are implemented as a separate input box attached to an existing application. The user asks a question, the model returns text, and the rest of the product remains mostly unchanged.
In an analytics product, that approach is too limited. Users do not just need answers — they need the system to understand context, translate intent into backend requests, and return results that connect back to the application.
The Core Problem
Financial analytics queries are rarely simple text questions. A user may ask about headcount, budget variance, department-level spend, historical trends, or comparisons between plan versions.
Behind each question is a structured request involving dimensions, filters, time periods, metrics, and grouping logic.
The challenge is converting natural language into something the backend can execute reliably.
Why a Plain Chatbot Is Not Enough
A chatbot that only produces text creates two problems:
- It cannot reliably query application data without structure.
- It cannot return rich outputs such as tables, charts, or drill-downs.
For business users, the useful answer is often not a paragraph. It may be a table, a variance breakdown, a chart, or a follow-up action.
The Design Approach
At Precanto, I worked on integrating an LLM-powered conversational layer directly into the product workflow.
The chatbot was not treated as a separate assistant. It was designed as a natural language interface over existing backend capabilities.
Structured Request Generation
The first step was translating user intent into a structured request that backend APIs could understand.
Instead of asking the model to directly answer financial questions, the model produced a strict JSON representation of the user’s intent.
This request captured information such as:
- Metric being requested
- Accounting period or relative time range
- Department, location, and GL filters
- Grouping and comparison dimensions
- Requested visualization or output format
The backend then validated this request and executed it using normal application APIs.
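The validation step above can be sketched as a small whitelist check. This is a minimal illustration, not Precanto's actual schema: field names like `metric`, `group_by`, and `output`, and the allowed values, are hypothetical.

```python
import json

# Hypothetical intent schema -- field names and allowed values are illustrative.
ALLOWED_METRICS = {"headcount", "spend", "budget_variance"}
ALLOWED_GROUPINGS = {"department", "location", "gl_account"}

def validate_intent(raw: str) -> dict:
    """Parse the model's JSON output and reject anything outside the schema."""
    intent = json.loads(raw)
    if intent.get("metric") not in ALLOWED_METRICS:
        raise ValueError(f"unknown metric: {intent.get('metric')!r}")
    for dim in intent.get("group_by", []):
        if dim not in ALLOWED_GROUPINGS:
            raise ValueError(f"unknown grouping: {dim!r}")
    # Pass through only whitelisted keys so the model cannot smuggle extra fields.
    return {k: intent[k]
            for k in ("metric", "period", "filters", "group_by", "output")
            if k in intent}

# Example model output for "headcount by department last quarter"
model_output = json.dumps({
    "metric": "headcount",
    "period": {"relative": "last_quarter"},
    "filters": {},
    "group_by": ["department"],
    "output": "table",
})
request = validate_intent(model_output)
```

Only after a request survives this gate does it reach the normal application APIs, so a malformed or hallucinated intent fails loudly instead of producing a wrong answer.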
Keeping the Backend in Control
A key design principle was that the LLM should interpret intent, not own business logic.
The backend remained responsible for:
- Authorization
- Data access
- Validation
- Aggregation
- Computation correctness
This separation made the system safer and easier to reason about. The LLM handled ambiguity and language, while deterministic backend services handled data and computation.
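One way to picture that boundary: the model only produces an intent, and the backend clamps it to what the caller is actually allowed to see. The permission store and field names below are invented for illustration.

```python
# Hypothetical per-user permission map -- in a real system this would come
# from the application's existing authorization layer.
USER_DEPARTMENTS = {"alice": {"engineering"}, "bob": {"engineering", "finance"}}

def authorize(user: str, intent: dict) -> dict:
    """Clamp the request to departments the user may access; never trust the model."""
    allowed = USER_DEPARTMENTS.get(user, set())
    # If the intent names departments, intersect them with the user's grants;
    # if it names none, default to everything the user can see.
    requested = set(intent.get("filters", {}).get("department", allowed))
    granted = requested & allowed
    if not granted:
        raise PermissionError(f"{user} may not query {sorted(requested)}")
    scoped = dict(intent)
    scoped["filters"] = {**intent.get("filters", {}), "department": sorted(granted)}
    return scoped

intent = {"metric": "spend", "filters": {"department": ["finance", "engineering"]}}
scoped = authorize("alice", intent)  # alice's request is narrowed to engineering
```

The important property is that the model's output can only ever narrow toward data the user already had access to; it cannot widen it.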
Embedded Into the Product
The conversational layer was designed to be available from different parts of the application, not just from a single chatbot screen.
A user could begin with a table, chart, or dashboard context and ask a follow-up question. The system could use that context to produce a more relevant structured request.
This made the chatbot feel less like a separate feature and more like an interaction layer over the product.
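Context injection can be as simple as merging what the user is looking at into the structured request, with explicit values from the question winning over page defaults. The shapes below are a sketch under assumed field names, not the product's real context format.

```python
def contextualize(intent: dict, page_context: dict) -> dict:
    """Fill gaps in the model's intent from what the user is currently viewing.
    Explicit values in the intent win; page context only supplies defaults."""
    merged = dict(intent)
    if "period" not in merged:
        merged["period"] = page_context.get("period")
    merged["filters"] = {**page_context.get("filters", {}),
                         **intent.get("filters", {})}
    return merged

# User is viewing a Q3 engineering dashboard and asks "what about travel spend?"
page = {"period": "2024-Q3", "filters": {"department": ["engineering"]}}
intent = {"metric": "spend", "filters": {"gl_account": ["travel"]}}
request = contextualize(intent, page)
# The request now carries Q3 and the engineering filter from the dashboard.
```

Without this merge, the model would have to guess the time period and department, which is exactly the kind of ambiguity that makes chatbots feel detached from the product.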
Rich Responses
The response layer was designed to support more than text.
A useful response could include:
- Explanatory text
- Tables
- Charts
- Variance breakdowns
- Drill-down paths
This required the backend and frontend to exchange structured response objects rather than treating every answer as plain text.
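A response envelope like that might look as follows. This is one plausible shape, assuming a simple typed-block convention; the actual wire format would be defined by the application.

```python
from dataclasses import dataclass, field, asdict

# Illustrative response envelope: text plus a list of typed blocks
# (tables, charts, drill-downs) that the frontend knows how to render.
@dataclass
class ChatResponse:
    text: str
    blocks: list = field(default_factory=list)

def table_block(columns: list, rows: list) -> dict:
    return {"type": "table", "columns": columns, "rows": rows}

def chart_block(kind: str, x: str, series: list) -> dict:
    return {"type": "chart", "kind": kind, "x": x, "series": series}

resp = ChatResponse(
    text="Engineering spend exceeded plan by 8% in Q3.",
    blocks=[
        table_block(["month", "actual", "plan"],
                    [["Jul", 120, 110], ["Aug", 130, 115], ["Sep", 125, 120]]),
        chart_block("line", "month", ["actual", "plan"]),
    ],
)
payload = asdict(resp)  # what the backend would serialize for the frontend
```

Because each block carries a `type`, the frontend dispatches on it to pick a renderer, and new output kinds can be added without changing the envelope.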
Key Design Principles
1. Use the LLM for Interpretation, Not Truth
The model should translate natural language into structured intent. It should not invent numbers, bypass permissions, or compute financial results independently.
2. Keep Outputs Structured
Structured requests and structured responses make the system testable, debuggable, and easier to integrate with existing product workflows.
3. Preserve Application Context
The best user experience comes when the assistant understands where the user is in the product and what data they are currently looking at.
4. Design for Follow-up Questions
Analytical workflows are iterative. Users rarely ask one question and stop. The system needs to support follow-ups while preserving enough context to remain useful.
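One lightweight way to support this is to treat each follow-up as a small delta over the previous structured request, so the model only has to express what changed. A sketch, again with illustrative field names:

```python
def apply_followup(previous: dict, delta: dict) -> dict:
    """Carry the prior structured request forward and overlay only what changed.
    A follow-up like "now split it by location" becomes a small delta."""
    merged = {**previous, **{k: v for k, v in delta.items() if k != "filters"}}
    merged["filters"] = {**previous.get("filters", {}),
                         **delta.get("filters", {})}
    return merged

first = {"metric": "budget_variance", "period": "2024-Q3",
         "filters": {"department": ["engineering"]}, "group_by": ["department"]}
# Follow-up: "now split it by location"
second = apply_followup(first, {"group_by": ["location"]})
# The metric, period, and department filter all carry over unchanged.
```

The merged request then goes through the same validation and authorization path as any first question, so follow-ups never get a shortcut around the backend's checks.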
What I Learned
AI becomes valuable in enterprise software when it is integrated into the workflow, not placed beside it.
The most important design decision is not which model to use. It is where the model sits relative to the system boundary.
In this architecture, the LLM is a translation layer between human intent and backend capability. That keeps the system powerful without giving up correctness, security, or control.