ChatHuggingFace
The ChatHuggingFace node is a wrapper around HuggingFace’s large language models, designed for chat-based interactions. It provides a single interface to HuggingFace models, whether they are accessed through the hosted Inference API or through a custom inference endpoint.
Node Details
- Name: ChatHuggingFace
- Type: ChatHuggingFace
- Version: 3.0
- Category: Chat Models
Base Classes
- ChatHuggingFace
- BaseChatModel
- HuggingFaceInference (and its base classes)
Parameters
Credential (Required)
- Type: huggingFaceApi
Inputs
Model (Required)
- Type: string
- Description: The name of the HuggingFace model to use (e.g., “gpt2”)
- Note: Leave blank if using a custom inference endpoint
Cache (Optional)
- Type: BaseCache
- Description: Caching mechanism for the model
Endpoint (Optional)
- Type: string
- Description: Custom inference endpoint URL
- Example: https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2
Temperature (Optional)
- Type: number
- Description: Controls randomness in output generation
- Step: 0.1
Max Tokens (Optional)
- Type: number
- Description: Maximum number of tokens to generate
- Step: 1
Top Probability (Top P) (Optional)
- Type: number
- Description: Nucleus sampling parameter
- Step: 0.1
Top K (Optional)
- Type: number
- Description: Top-K sampling parameter
- Step: 0.1
Frequency Penalty (Optional)
- Type: number
- Description: Penalizes frequent token usage
- Step: 0.1
Stop Sequence (Optional)
- Type: string
- Description: Comma-separated list of sequences at which generation stops (see the parsing sketch below)
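Because these inputs arrive from the node UI as strings, the numeric fields and the comma-separated stop sequence are typically coerced before being passed to the model. The sketch below illustrates that parsing step; the interface and helper name are assumptions for illustration, not the node’s actual code.

```typescript
// Hypothetical shape of the raw UI inputs (all strings) and a helper that
// coerces them into the typed values the model wrapper expects.
interface RawChatHuggingFaceInputs {
  temperature?: string
  maxTokens?: string
  topP?: string
  topK?: string
  frequencyPenalty?: string
  stopSequence?: string // comma-separated, e.g. "Human:,AI:"
}

function parseInputs(raw: RawChatHuggingFaceInputs) {
  return {
    temperature: raw.temperature ? parseFloat(raw.temperature) : undefined,
    maxTokens: raw.maxTokens ? parseInt(raw.maxTokens, 10) : undefined,
    topP: raw.topP ? parseFloat(raw.topP) : undefined,
    topK: raw.topK ? parseFloat(raw.topK) : undefined,
    frequencyPenalty: raw.frequencyPenalty ? parseFloat(raw.frequencyPenalty) : undefined,
    // Split the comma-separated stop sequence into individual stop strings
    stopSequences: raw.stopSequence
      ? raw.stopSequence.split(',').map((s) => s.trim()).filter(Boolean)
      : undefined,
  }
}
```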
Initialization
On initialization, the node parses the configured inputs, retrieves the API key from the attached huggingFaceApi credential, and constructs a HuggingFaceInference instance with those values.
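As a rough approximation of that step, the sketch below constructs LangChain’s HuggingFaceInference wrapper from the documented inputs. Field names follow the LangChain JS community package; the API key is read from an environment variable here rather than from the node’s huggingFaceApi credential, and the concrete values are illustrative only.

```typescript
import { HuggingFaceInference } from '@langchain/community/llms/hf'

// Illustrative initialization mirroring the inputs documented above. In the
// node itself, the API key comes from the attached huggingFaceApi credential;
// here it is read from an environment variable instead.
const model = new HuggingFaceInference({
  apiKey: process.env.HUGGINGFACEHUB_API_KEY,
  model: 'gpt2', // leave unset when pointing at a custom endpoint
  temperature: 0.7,
  maxTokens: 256,
  topP: 0.9,
  topK: 50,
  frequencyPenalty: 0,
  stopSequences: ['Human:'],
  // endpointUrl: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
})
```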
Usage
This node is particularly useful for:
- Integrating HuggingFace models into chat applications
- Customizing language model behavior for specific use cases
- Experimenting with different model parameters
- Using custom-trained or fine-tuned models via custom endpoints
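For example, a minimal chat-style call against the model built in the initialization sketch above (the prompt is illustrative; invoke is LangChain JS’s standard single-call entry point):

```typescript
// Reuses the `model` instance from the initialization sketch above.
async function main() {
  const reply = await model.invoke(
    'Human: What does a custom inference endpoint give me?\nAI:'
  )
  console.log(reply)
}

main().catch(console.error)
```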
Notes
- Some parameters may not apply to all models. Users should check the specific model’s documentation for compatible parameters.
- The node supports both HuggingFace’s hosted models and custom inference endpoints, providing flexibility in deployment options.