
Node Details
- Name: multiPromptChain
- Type: MultiPromptChain
- Category: [[Chains]]
- Version: 2.0
Parameters
- Language Model (Required)
  - Type: BaseLanguageModel
  - Description: The language model to be used for generating responses and selecting prompts.
- Prompt Retriever (Required)
  - Type: PromptRetriever[]
  - Description: An array of prompt retrievers, each containing a prompt template, name, and description.
  - List: true
- Input Moderation (Optional)
  - Type: Moderation[]
  - Description: Moderation tools to detect and prevent harmful input.
  - List: true
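As a point of reference, the sketch below shows how these parameters roughly correspond to LangChain JS's `MultiPromptChain.fromLLMAndPrompts` helper; the node's internal wiring may differ. The model choice, prompt names, descriptions, and templates are invented for illustration, and the optional Input Moderation step runs before the chain is invoked, so it is not shown here.

```typescript
import { OpenAI } from "@langchain/openai";
import { MultiPromptChain } from "langchain/chains";

// Language Model (Required): any BaseLanguageModel implementation.
const llm = new OpenAI({ temperature: 0 });

// Prompt Retriever (Required): each retriever contributes a name, a
// description (used to decide when the prompt applies), and a template.
const promptNames = ["physics", "history"];
const promptDescriptions = [
  "Good for answering questions about physics",
  "Good for answering questions about history",
];
const promptTemplates = [
  "You are a physics professor. Answer concisely.\n\nQuestion: {input}",
  "You are a historian. Give relevant context.\n\nQuestion: {input}",
];

// The chain that routes each input to the best-matching prompt.
const chain = MultiPromptChain.fromLLMAndPrompts(llm, {
  promptNames,
  promptDescriptions,
  promptTemplates,
});
```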
Input
- A string containing the user’s query or input.
Output
- A string containing the response generated using the selected prompt.
How It Works
- The chain receives a user input.
- If input moderation is enabled, the input is first checked for potentially harmful content.
- The language model analyzes the input to determine which prompt from the provided set is most appropriate.
- The selected prompt is then used to format the input for the language model.
- The language model generates a response based on the formatted prompt and input.
- The final response is returned as output.
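The sketch below walks through that flow using a condensed version of the configuration from the earlier example; the prompt set and the question are invented for illustration, and the `call` / `text` convention follows the usual LangChain JS chain interface.

```typescript
import { OpenAI } from "@langchain/openai";
import { MultiPromptChain } from "langchain/chains";

const chain = MultiPromptChain.fromLLMAndPrompts(new OpenAI({ temperature: 0 }), {
  promptNames: ["physics", "history"],
  promptDescriptions: [
    "Good for answering questions about physics",
    "Good for answering questions about history",
  ],
  promptTemplates: [
    "You are a physics professor.\n\nQuestion: {input}",
    "You are a historian.\n\nQuestion: {input}",
  ],
});

// The user input arrives as a plain string; moderation (if configured)
// would run before this call. The router LLM compares the input against
// the prompt descriptions, picks the best match, and formats the input
// with that prompt template before the LLM generates the answer.
const result = await chain.call({ input: "Why is the sky blue?" });

// The generated response is returned under the "text" key.
console.log(result.text);
```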
Use Cases
- Creating versatile chatbots that can handle a wide range of topics
- Building AI assistants capable of adapting to different types of user queries
- Implementing dynamic Q&A systems that can switch between different knowledge domains
- Developing content generation tools that can adapt to various writing styles or formats
- Creating flexible customer support systems that can handle diverse inquiries
Special Features
- Dynamic Prompt Selection: Automatically chooses the most appropriate prompt for each input.
- Multiple Domain Support: Can handle queries across various topics or domains using specialized prompts.
- Flexible Configuration: Allows for easy addition or modification of prompt templates.
- Input Moderation: Optional safeguards against inappropriate or harmful inputs.
- Scalability: Can be expanded to cover new topics or query types by adding new prompt templates.
- Improved Accuracy: By using specialized prompts, it can provide more accurate and relevant responses.
Notes
- The effectiveness of the chain depends on the quality and diversity of the provided prompt templates.
- Careful design of prompt descriptions is crucial for accurate prompt selection (a short illustration follows this list).
- The chain may require more computation time compared to single-prompt chains due to the selection process.
- It’s important to ensure that the language model is capable of accurately selecting between prompts.
- For best results, prompt templates should be distinct and cover a wide range of potential query types.
- Regular analysis of chain performance can help identify areas where new prompts might be needed.
- The chain supports streaming responses for real-time interaction in compatible environments.
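To illustrate the point about description design, the contrast below uses invented descriptions; the exact wording is only an example, not a recommended set.

```typescript
// Overlapping descriptions give the router LLM little to distinguish,
// so prompt selection becomes unreliable.
const overlappingDescriptions = [
  "Good for answering questions",
  "Good for answering user questions",
];

// Distinct, domain-specific descriptions give a clear routing signal.
const distinctDescriptions = [
  "Good for questions about billing, invoices, and refunds",
  "Good for troubleshooting login and account-access problems",
  "Good for questions about product features and pricing plans",
];
```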