LLM Settings
⚠️ Note: This configuration section is only available in Basic Flow mode.
The LLM Settings panel allows you to configure different AI models for key tasks during the chatbot flow. These tasks include retrieving relevant knowledge (RAG), verifying if a user’s answer is appropriate (Check Fit), and detecting the user's intent (Detect Intent). Proper setup ensures accurate and intelligent chatbot behavior based on user responses.
1. RAG (Retrieval-Augmented Generation)
Purpose:
RAG is used to generate answers based on knowledge ingested into your system. It enables the AI to combine its own language understanding with relevant content from uploaded documents or connected knowledge sources.
This is especially important when users ask questions that rely on domain-specific or organization-specific information.
Settings:
- AI Model: Choose the model that will be used for generating retrieval-augmented responses.
- Temperature: Controls the creativity of the AI's response.
  - 0: More stable and deterministic.
  - Higher values (e.g., 0.7–1): More creative and variable.
- Limit (0–20): The number of document chunks to retrieve and use when generating a response.
  - Recommended: 6, which offers a good balance between context and performance.
  - Adjust based on the complexity and length of your documents.
- Threshold: Similarity threshold (0–1) that determines how closely the retrieved content must match the user's query.
  - Recommended: 0.8, which provides a solid balance between relevance and coverage.
  - Set the Threshold carefully: a higher value makes the AI more selective, while a lower value allows broader matching but may reduce precision.
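To make the interaction between Limit and Threshold concrete, here is a minimal sketch of the selection logic they describe. The function name, chunk texts, and scores are illustrative assumptions, not the product's internal API:

```python
# Hypothetical sketch: Threshold filters out weakly matching chunks,
# then Limit caps how many of the remaining chunks are used.

def select_chunks(scored_chunks, threshold=0.8, limit=6):
    """Keep chunks whose similarity meets the threshold, then take the top `limit`."""
    relevant = [(text, score) for text, score in scored_chunks if score >= threshold]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return relevant[:limit]

# Example: only chunks scoring at or above 0.8 survive the filter.
chunks = [
    ("Refund policy: returns accepted within 30 days.", 0.91),
    ("Office hours: Monday to Friday, 9am-5pm.", 0.62),
    ("Shipping: orders dispatch within 2 business days.", 0.85),
]

print(select_chunks(chunks))
```

Raising the threshold shrinks the candidate pool before the limit is ever applied, which is why a high threshold combined with a low limit can leave the model with little or no retrieved context.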
2. Check Fit
Purpose:
This setting is used to determine whether the user's answer is appropriate or relevant to the chatbot’s previous question. It plays a critical role in Basic Flow, helping decide how the flow should proceed.
Example:
- Bot: "Could you please provide your phone number?"
- User: "I'd rather not." → In this case, Check Fit may judge the answer as unqualified and trigger a re-prompt or an alternative path.
Settings:
- AI Model: Select the model to evaluate user responses.
- Temperature: Controls how strict or flexible the model is when evaluating.
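The phone-number example above can be sketched as a verdict-to-routing step. The prompt wording, verdict labels, and node names below are illustrative assumptions about how such a check might be wired up, not the platform's actual implementation:

```python
# Hypothetical sketch of a Check Fit evaluation and the routing decision it drives.

def build_check_fit_prompt(question, answer):
    """Assemble an evaluation prompt for the Check Fit model (illustrative wording)."""
    return (
        "Decide whether the user's reply actually answers the bot's question.\n"
        f"Question: {question}\n"
        f"Reply: {answer}\n"
        "Respond with exactly 'qualified' or 'unqualified'."
    )

def route(verdict):
    """Map the model's verdict to the next flow step."""
    return "continue_flow" if verdict.strip().lower() == "qualified" else "re_prompt"

# "I'd rather not." would likely be judged unqualified, triggering a re-prompt.
print(route("unqualified"))
```

A low temperature is typically preferable here, since the evaluation is effectively a classification and should be deterministic.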
3. Detect Intent
Purpose:
This setting allows the AI to determine the user’s intent from free-form input, which is essential for routing logic in Basic Flow.
Example:
- User says: "What are your working hours?"
  → Detected intent: inquire_working_hours
Settings:
- AI Model: Select the model used to analyze intent.
- Temperature: Controls how strict or flexible the model is when classifying intent.
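Once an intent label like `inquire_working_hours` has been detected, Basic Flow uses it to choose the next branch. The routing table and node names below are hypothetical, shown only to illustrate the idea:

```python
# Hypothetical routing table keyed by detected intent; intent and node names are examples.

ROUTES = {
    "inquire_working_hours": "working_hours_node",
    "request_support": "support_node",
}

def next_node(detected_intent, fallback="clarify_node"):
    """Return the flow node for a detected intent, falling back when the intent is unknown."""
    return ROUTES.get(detected_intent, fallback)

print(next_node("inquire_working_hours"))
```

The fallback branch matters in practice: free-form input will sometimes produce an intent your flow has no route for, and the bot needs somewhere sensible to go.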