Node Details

  • Name: ChatOllamaFunction
  • Type: ChatOllamaFunction
  • Version: 1.0
  • Category: Chat Models

Parameters

Inputs

  1. Cache (optional)
    • Type: BaseCache
    • Description: Caching mechanism for the model
  2. Base URL
    • Type: string
    • Description: Base URL of the Ollama server (e.g., “http://localhost:11434”)
  3. Model Name
    • Type: string
    • Description: Name of the compatible function-calling model (e.g., “mistral”)
  4. Temperature
    • Type: number
    • Default: 0.9
    • Range: 0 to 1
    • Description: Controls the randomness of the model’s output
  5. Tool System Prompt Template (optional)
    • Type: string
    • Description: Custom system prompt template for tool usage
  6. Top P (optional)
    • Type: number
    • Description: Nucleus sampling parameter
  7. Top K (optional)
    • Type: number
    • Description: Top-K sampling parameter
  8. Mirostat (optional)
    • Type: number
    • Description: Mirostat sampling mode
  9. Mirostat ETA (optional)
    • Type: number
    • Description: Mirostat learning rate
  10. Mirostat TAU (optional)
    • Type: number
    • Description: Mirostat target entropy
  11. Context Window Size (optional)
    • Type: number
    • Description: Size of the context window
  12. Number of GQA groups (optional)
    • Type: number
    • Description: Number of GQA groups in the transformer layer
  13. Number of GPU (optional)
    • Type: number
    • Description: Number of layers to send to GPU(s)
  14. Number of Thread (optional)
    • Type: number
    • Description: Number of threads for computation
  15. Repeat Last N (optional)
    • Type: number
    • Description: Look-back window to prevent repetition
  16. Repeat Penalty (optional)
    • Type: number
    • Description: Penalty for repeated tokens
  17. Stop Sequence (optional)
    • Type: string
    • Description: Sequences to stop generation
  18. Tail Free Sampling (optional)
    • Type: number
    • Description: Parameter for Tail Free Sampling

Inputs/Outputs

  • Input: Receives messages and optional function definitions
  • Output: Generates responses, potentially including function calls
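To make the input side concrete, the sketch below assembles a request body of the kind such a node might send to Ollama's `/api/chat` endpoint. The field names (`model`, `messages`, `stream`, `options`) follow Ollama's HTTP API; the surrounding wiring and the example messages are assumptions for illustration, not taken from this node's implementation.

```python
import json

# Illustrative request body for Ollama's /api/chat endpoint.
# Field names follow Ollama's API; the content is a made-up example.
payload = {
    "model": "mistral",      # Model Name parameter
    "messages": [
        {"role": "system", "content": "You may call tools."},
        {"role": "user", "content": "What is the weather in Paris?"},
    ],
    "stream": False,
    "options": {"temperature": 0.9},  # sampling parameters (see Inputs)
}
body = json.dumps(payload)
```

Function definitions are typically injected into the system message via the tool system prompt template rather than sent as a separate API field.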

Usage

This node is used in workflows where function-calling capabilities of language models are needed. It’s particularly useful for tasks that require the model to choose and use specific tools or functions based on the input.
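Because the output may be either plain chat text or a structured function call, a consumer has to distinguish the two. One common convention, assumed here rather than documented for this node, is that the prompt template asks the model to emit JSON of the form `{"tool": ..., "tool_input": ...}` when it decides to call a function:

```python
import json

# Hedged sketch: distinguishing a JSON tool call from a plain reply.
# The {"tool": ..., "tool_input": ...} shape is an assumption about the
# prompt template, not a documented contract of this node.

def parse_function_call(text):
    """Return (tool_name, arguments) for a JSON tool call,
    or (None, text) for a plain chat answer."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None, text
    if isinstance(data, dict) and "tool" in data:
        return data["tool"], data.get("tool_input", {})
    return None, text

reply = '{"tool": "get_weather", "tool_input": {"city": "Paris"}}'
name, args = parse_function_call(reply)
```

A workflow would then dispatch `name` to the matching tool with `args`, or pass the raw text through when no tool was selected.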

Special Features

  • Supports various Ollama-specific parameters for fine-tuning model behavior
  • Can handle both regular chat interactions and function-calling scenarios
  • Includes a default tool system prompt template, which can be overridden via the Tool System Prompt Template parameter