Chains
Conversational Retrieval QA Chain
The Conversational Retrieval QA Chain is an advanced question-answering system that combines document retrieval with conversational context to provide accurate and contextually relevant answers.
Node Details
- Name: conversationalRetrievalQAChain
- Type: ConversationalRetrievalQAChain
- Category: [[Chains]]
- Version: 3.0
Parameters
- Chat Model (Required)
  - Type: BaseChatModel
  - Description: The language model used for generating responses.
- Vector Store Retriever (Required)
  - Type: BaseRetriever
  - Description: The retriever used to fetch relevant documents from a vector store.
- Memory (Optional)
  - Type: BaseMemory
  - Description: The memory component that stores conversation history. If not provided, a default BufferMemory is used.
- Return Source Documents (Optional)
  - Type: boolean
  - Description: Whether to return the source documents used to generate the answer.
- Rephrase Prompt (Optional)
  - Type: string
  - Description: Custom prompt for rephrasing the question based on chat history.
  - Default: A predefined template (REPHRASE_TEMPLATE)
  - Additional Params: true
- Response Prompt (Optional)
  - Type: string
  - Description: Custom prompt for generating the final response.
  - Default: A predefined template (RESPONSE_TEMPLATE)
  - Additional Params: true
- Input Moderation (Optional)
  - Type: Moderation[]
  - Description: Moderation tools to detect and prevent harmful input.
  - Additional Params: true
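Taken together, the parameters above can be pictured as a single configuration object. The sketch below is illustrative only: the `ChainConfig` interface and its field names are assumptions chosen to mirror the parameter list, not the node's actual API, and the model/retriever values are placeholders.

```typescript
// Illustrative shape of the node's inputs (names are assumptions).
interface ChainConfig {
  chatModel: object;              // BaseChatModel instance (required)
  vectorStoreRetriever: object;   // BaseRetriever instance (required)
  memory?: object;                // defaults to a BufferMemory if omitted
  returnSourceDocuments?: boolean;
  rephrasePrompt?: string;        // overrides REPHRASE_TEMPLATE
  responsePrompt?: string;        // overrides RESPONSE_TEMPLATE
  inputModeration?: object[];     // Moderation[] in the real node
}

// Minimal example: only the two required inputs plus one flag are set.
const config: ChainConfig = {
  chatModel: { model: "gpt-4o-mini" },  // placeholder chat model object
  vectorStoreRetriever: { topK: 4 },    // placeholder retriever object
  returnSourceDocuments: true,
};

console.log(Object.keys(config).length); // prints 3
```

Omitting `memory` here illustrates the documented fallback: the chain would create a default BufferMemory on its own.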
Input
- A string containing the user’s question or query.
Output
- If Return Source Documents is false: a string containing the answer to the user's question.
- If Return Source Documents is true: an object containing:
  - text: The answer to the user's question.
  - sourceDocuments: An array of documents used to generate the answer.
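The two output shapes can be written down as TypeScript types. The type names below (`SimpleOutput`, `OutputWithSources`) and the simplified `Document` shape are chosen here for clarity; only the `text` and `sourceDocuments` field names come from the description above.

```typescript
// Shape when Return Source Documents is false: just the answer string.
type SimpleOutput = string;

// Simplified stand-in for a retrieved document (assumed shape).
interface Document {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// Shape when Return Source Documents is true.
interface OutputWithSources {
  text: string;                 // the answer to the user's question
  sourceDocuments: Document[];  // documents used to generate the answer
}

const withSources: OutputWithSources = {
  text: "The chain answers using retrieved context.",
  sourceDocuments: [
    { pageContent: "excerpt from a knowledge-base page", metadata: { source: "kb" } },
  ],
};

console.log(withSources.sourceDocuments.length); // prints 1
```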
How It Works
- The chain receives a user question.
- It uses the chat history and the rephrase prompt to generate a standalone question.
- The vector store retriever fetches relevant documents based on the rephrased question.
- The response prompt is used along with the retrieved documents to generate a final answer.
- The answer (and optionally source documents) is returned as output.
- The conversation history is updated with the new question-answer pair.
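The steps above can be sketched as plain functions. This is a minimal sketch of the rephrase-retrieve-answer loop, not the node's implementation: the `llm` and `retrieve` stubs stand in for a real chat model and vector store retriever, and all helper names are assumptions.

```typescript
type Turn = { question: string; answer: string };

// Stubs standing in for a real chat model and retriever (assumptions).
const llm = (prompt: string): string => `answer based on: ${prompt.slice(0, 40)}`;
const retrieve = (query: string): string[] => [`document relevant to "${query}"`];

function run(question: string, history: Turn[]): string {
  // Steps 1-2: rephrase into a standalone question using chat history
  // (a real chain would apply REPHRASE_TEMPLATE via the LLM).
  const standalone = history.length
    ? `${question} (in the context of: ${history[history.length - 1].question})`
    : question;

  // Step 3: fetch relevant documents for the rephrased question.
  const docs = retrieve(standalone);

  // Step 4: generate the final answer from the documents plus the question
  // (a real chain would apply RESPONSE_TEMPLATE here).
  const answer = llm(`${docs.join("\n")}\nQ: ${standalone}`);

  // Step 6: update the conversation history with the new pair.
  history.push({ question, answer });
  return answer; // step 5: the answer is the chain's output
}

const history: Turn[] = [];
run("What does this chain do?", history);
run("Does it keep context?", history); // follow-up uses the prior turn
console.log(history.length); // prints 2
```

The second call shows why rephrasing matters: a bare follow-up like "Does it keep context?" only becomes a useful retrieval query once the previous question is folded in.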
Use Cases
- Building conversational AI systems with access to large document repositories
- Creating chatbots that can answer questions based on specific knowledge bases
- Implementing virtual assistants for customer support or information retrieval
- Developing educational tools that can answer follow-up questions and maintain context
- Enhancing search functionality with conversational capabilities
Special Features
- Conversational Context: Maintains and utilizes chat history for more coherent interactions.
- Dynamic Question Rephrasing: Reformulates questions based on conversation context.
- Flexible Retrieval: Uses vector store for efficient and relevant document retrieval.
- Customizable Prompts: Allows fine-tuning of question rephrasing and answer generation.
- Source Attribution: Option to return source documents for transparency.
- Input Moderation: Can implement safeguards against inappropriate or harmful inputs.
- Memory Management: Supports various memory types for different use cases.
Notes
- The quality of answers depends on both the underlying language model and the relevance of retrieved documents.
- Custom prompts can significantly impact the chain’s behavior and should be carefully crafted.
- The chain supports streaming responses for real-time interaction in compatible environments.
- Proper error handling and input validation should be implemented for production use.