Node Details

  • Name: conversationalRetrievalQAChain
  • Type: ConversationalRetrievalQAChain
  • Category: [[Chains]]
  • Version: 3.0

Parameters

  1. Chat Model (Required)

    • Type: BaseChatModel
    • Description: The language model used for generating responses.
  2. Vector Store Retriever (Required)

    • Type: BaseRetriever
    • Description: The retriever used to fetch relevant documents from a vector store.
  3. Memory (Optional)

    • Type: BaseMemory
    • Description: The memory component to store conversation history. If not provided, a default BufferMemory will be used.
  4. Return Source Documents (Optional)

    • Type: boolean
    • Description: Whether to return the source documents used to generate the answer.
  5. Rephrase Prompt (Optional)

    • Type: string
    • Description: Custom prompt for rephrasing the question based on chat history.
    • Default: A predefined template (REPHRASE_TEMPLATE)
    • Additional Params: true
  6. Response Prompt (Optional)

    • Type: string
    • Description: Custom prompt for generating the final response.
    • Default: A predefined template (RESPONSE_TEMPLATE)
    • Additional Params: true
  7. Input Moderation (Optional)

    • Type: Moderation[]
    • Description: Moderation tools to detect and prevent harmful input.
    • Additional Params: true
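The parameter set above can be modeled as a small configuration shape. This is an illustrative sketch only: the type and function names (`ChainConfig`, `withDefaults`) are hypothetical and the string fields stand in for the real `BaseChatModel`, `BaseRetriever`, `BaseMemory`, and `Moderation[]` objects; the default for Return Source Documents is assumed to be `false`.

```typescript
// Hypothetical shape mirroring the node's parameters; string fields are
// stand-ins for the real BaseChatModel / BaseRetriever / BaseMemory objects.
interface ChainConfig {
  chatModel: string;               // required
  retriever: string;               // required
  memory?: string;                 // optional; defaults to a BufferMemory
  returnSourceDocuments?: boolean; // optional; assumed default: false
  rephrasePrompt?: string;         // optional; overrides REPHRASE_TEMPLATE
  responsePrompt?: string;         // optional; overrides RESPONSE_TEMPLATE
  inputModeration?: string[];      // optional; stands in for Moderation[]
}

// Apply the documented defaults for the optional parameters.
function withDefaults(cfg: ChainConfig): Required<ChainConfig> {
  return {
    memory: "BufferMemory",
    returnSourceDocuments: false,
    rephrasePrompt: "REPHRASE_TEMPLATE",
    responsePrompt: "RESPONSE_TEMPLATE",
    inputModeration: [],
    ...cfg,
  };
}
```

Only the two required parameters need to be supplied; everything else falls back to the documented defaults.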

Input

  • A string containing the user’s question.

Output

  • If Return Source Documents is false:
    • A string containing the answer to the user’s question.
  • If Return Source Documents is true:
    • An object containing:
      • text: The answer to the user’s question.
      • sourceDocuments: An array of documents used to generate the answer.
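Downstream code therefore has to handle two output shapes. A minimal sketch, assuming the shapes described above (`ChainOutput` and `readOutput` are hypothetical names, and the document shape inside `sourceDocuments` is simplified):

```typescript
// Hedged sketch of the two documented output shapes; names are illustrative.
type ChainOutput =
  | string
  | { text: string; sourceDocuments: Array<{ pageContent: string }> };

// Normalize either shape into a single structure for downstream code.
function readOutput(out: ChainOutput): { answer: string; sources: number } {
  if (typeof out === "string") {
    return { answer: out, sources: 0 }; // Return Source Documents was false
  }
  return { answer: out.text, sources: out.sourceDocuments.length };
}
```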

How It Works

  1. The chain receives a user question.
  2. It uses the chat history and the rephrase prompt to generate a standalone question.
  3. The vector store retriever fetches relevant documents based on the rephrased question.
  4. The response prompt is used along with the retrieved documents to generate a final answer.
  5. The answer (and optionally source documents) is returned as output.
  6. The conversation history is updated with the new question-answer pair.
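The steps above can be sketched end to end. This is a toy approximation, not the actual implementation: `rephrase`, `retrieve`, and `respond` are hypothetical placeholders for the model call with `REPHRASE_TEMPLATE`, the vector store lookup, and the model call with `RESPONSE_TEMPLATE`, respectively.

```typescript
type Turn = { question: string; answer: string };

// Stand-in for step 2: the real chain prompts the chat model with
// REPHRASE_TEMPLATE to produce a standalone question.
function rephrase(question: string, history: Turn[]): string {
  return history.length === 0
    ? question
    : `${question} (in the context of: ${history[history.length - 1].question})`;
}

// Stand-in for step 3: the real chain queries the vector store retriever;
// here we do a crude keyword match over an in-memory corpus.
function retrieve(standalone: string, corpus: string[]): string[] {
  const words = standalone.toLowerCase().split(/\W+/).filter((w) => w.length > 3);
  return corpus.filter((doc) => words.some((w) => doc.toLowerCase().includes(w)));
}

// Stand-in for step 4: the real chain prompts the chat model with
// RESPONSE_TEMPLATE plus the retrieved documents.
function respond(standalone: string, docs: string[]): string {
  return docs.length > 0 ? docs[0] : "I don't know.";
}

// Steps 2-6 wired together.
function runChain(question: string, history: Turn[], corpus: string[]): string {
  const standalone = rephrase(question, history); // step 2
  const docs = retrieve(standalone, corpus);      // step 3
  const answer = respond(standalone, docs);       // step 4
  history.push({ question, answer });             // step 6
  return answer;                                  // step 5
}
```

The point of the standalone-question step is that a follow-up like "How does it work?" becomes retrievable on its own once the prior turn's context is folded in.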

Use Cases

  • Building conversational AI systems with access to large document repositories
  • Creating chatbots that can answer questions based on specific knowledge bases
  • Implementing virtual assistants for customer support or information retrieval
  • Developing educational tools that can answer follow-up questions and maintain context
  • Enhancing search functionality with conversational capabilities

Special Features

  • Conversational Context: Maintains and utilizes chat history for more coherent interactions.
  • Dynamic Question Rephrasing: Reformulates questions based on conversation context.
  • Flexible Retrieval: Uses vector store for efficient and relevant document retrieval.
  • Customizable Prompts: Allows fine-tuning of question rephrasing and answer generation.
  • Source Attribution: Option to return source documents for transparency.
  • Input Moderation: Can implement safeguards against inappropriate or harmful inputs.
  • Memory Management: Supports various memory types for different use cases.

Notes

  • The quality of answers depends on both the underlying language model and the relevance of retrieved documents.
  • Custom prompts can significantly impact the chain’s behavior and should be carefully crafted.
  • The chain supports streaming responses for real-time interaction in compatible environments.
  • Proper error handling and input validation should be implemented for production use.
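As a sketch of the last note, one way to wrap the chain call with basic input validation and error handling; `safeAsk` and `invoke` are hypothetical names, not part of the node's API, and the length limit is an arbitrary example:

```typescript
// Hedged sketch: validate input and contain failures around a chain call.
// invoke() is a placeholder for whatever actually runs the chain.
function safeAsk(input: string, invoke: (q: string) => string): string {
  const trimmed = input.trim();
  if (trimmed.length === 0) throw new Error("Question must not be empty");
  if (trimmed.length > 4000) throw new Error("Question too long");
  try {
    return invoke(trimmed);
  } catch {
    // Surface a safe message instead of leaking internals to the user.
    return "Sorry, something went wrong while answering your question.";
  }
}
```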