Node Details

  • Name: ChatHuggingFace
  • Type: ChatHuggingFace
  • Version: 3.0
  • Category: Chat Models

Base Classes

  • ChatHuggingFace
  • BaseChatModel
  • HuggingFaceInference (and its base classes)

Parameters

Credential (Required)

  • Type: huggingFaceApi

Inputs

  1. Model (Required)
    • Type: string
    • Description: The name of the HuggingFace model to use (e.g., “gpt2”)
    • Note: Leave blank if using a custom inference endpoint
  2. Cache (Optional)
    • Type: BaseCache
    • Description: Caching mechanism for the model
  3. Endpoint (Optional)
    • Type: string
    • Description: URL of a custom inference endpoint, used in place of a hosted model name
  4. Temperature (Optional)
    • Type: number
    • Description: Controls randomness in output generation
    • Step: 0.1
  5. Max Tokens (Optional)
    • Type: number
    • Description: Maximum number of tokens to generate
    • Step: 1
  6. Top Probability (Top P) (Optional)
    • Type: number
    • Description: Nucleus sampling parameter
    • Step: 0.1
  7. Top K (Optional)
    • Type: number
    • Description: Top-K sampling parameter
    • Step: 0.1
  8. Frequency Penalty (Optional)
    • Type: number
    • Description: Penalizes frequent token usage
    • Step: 0.1
  9. Stop Sequence (Optional)
    • Type: string
    • Description: Comma-separated list of sequences at which to stop generation
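
One way to picture the inputs above is as a single options object. The TypeScript interface below is only an illustration: the property names and the BaseCache import path are assumptions for readability, not the node's actual internal names.

    import type { BaseCache } from "@langchain/core/caches";

    // Illustrative only: one possible typing of the inputs listed above.
    interface ChatHuggingFaceOptions {
      model: string;             // e.g. "gpt2"; leave empty when `endpoint` is set
      cache?: BaseCache;         // optional caching mechanism for the model
      endpoint?: string;         // custom inference endpoint URL
      temperature?: number;      // randomness in output generation (step 0.1)
      maxTokens?: number;        // maximum number of tokens to generate (step 1)
      topP?: number;             // nucleus sampling parameter (step 0.1)
      topK?: number;             // top-K sampling parameter (step 0.1)
      frequencyPenalty?: number; // penalizes frequent token usage (step 0.1)
      stopSequence?: string;     // comma-separated stop sequences, e.g. "###,END"
    }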

Initialization

The node initializes a HuggingFaceInference instance from the provided inputs, parsing the numeric parameters and retrieving the API key from the configured huggingFaceApi credential.
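
As a rough sketch of what this amounts to, the snippet below constructs the model the way the node conceptually does, assuming the underlying class accepts the same constructor fields as LangChain's HuggingFaceInference LLM (model, apiKey, endpointUrl, temperature, maxTokens, topP, topK, frequencyPenalty). The rawInputs object, the toNumber helper, and the environment variable standing in for the credential are all illustrative, not Flowise internals.

    import { HuggingFaceInference } from "@langchain/community/llms/hf";

    // Raw values as a user might enter them in the node UI (illustrative only).
    const rawInputs = {
      model: "gpt2",
      endpoint: "",            // set to a URL to route through a custom endpoint
      temperature: "0.7",
      maxTokens: "256",
      topP: "0.9",
      topK: "50",
      frequencyPenalty: "0",
    };

    // UI inputs arrive as strings, so numeric fields are parsed before use.
    const toNumber = (value: string): number | undefined =>
      value !== "" ? Number(value) : undefined;

    const model = new HuggingFaceInference({
      model: rawInputs.model,
      // Retrieved from the huggingFaceApi credential; shown here as an env var.
      apiKey: process.env.HUGGINGFACEHUB_API_KEY,
      endpointUrl: rawInputs.endpoint || undefined,
      temperature: toNumber(rawInputs.temperature),
      maxTokens: toNumber(rawInputs.maxTokens),
      topP: toNumber(rawInputs.topP),
      topK: toNumber(rawInputs.topK),
      frequencyPenalty: toNumber(rawInputs.frequencyPenalty),
    });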

Usage

This node is particularly useful for:

  1. Integrating HuggingFace models into chat applications
  2. Customizing language model behavior for specific use cases
  3. Experimenting with different model parameters
  4. Using custom-trained or fine-tuned models via custom endpoints (see the sketch below)
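
For example, use case 4 (a custom endpoint) might look like the hedged sketch below. The endpoint URL is a placeholder, the model name is left blank as the Model input's note suggests, and passing stop as a call option assumes the backing implementation honours LangChain's standard stop option.

    import { HuggingFaceInference } from "@langchain/community/llms/hf";

    async function main(): Promise<void> {
      // Pointed at a hypothetical custom inference endpoint (use case 4 above).
      const customModel = new HuggingFaceInference({
        model: "",                  // left blank, per the Model input's note
        endpointUrl: "https://example.endpoints.huggingface.cloud", // placeholder
        apiKey: process.env.HUGGINGFACEHUB_API_KEY,
        temperature: 0.5,
        maxTokens: 128,
      });

      // The node's comma-separated Stop Sequence input, split into an array.
      const stop = "###,Observation:".split(",").map((s) => s.trim());

      const reply = await customModel.invoke(
        "Summarize what a custom inference endpoint is in one sentence.",
        { stop }
      );
      console.log(reply);
    }

    main().catch(console.error);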

Notes

  • Some parameters may not apply to all models. Users should check the specific model’s documentation for compatible parameters.
  • The node supports both HuggingFace’s hosted models and custom inference endpoints, providing flexibility in deployment options.