The ChatOllamaFunction node is a chat model component that integrates Ollama's function-calling-compatible language models into a workflow. It allows open-source LLMs with function-calling capabilities to be run through Ollama.
Name: ChatOllamaFunction
Type: ChatOllamaFunction
Version: 1.0
Category: Chat Models
Cache (optional)
Type: BaseCache
Description: Caching mechanism for the model
Base URL
Type: string
Default: "http://localhost:11434"
Description: The base URL for the Ollama API
Model Name
Type: string
Description: Name of the compatible function-calling model (e.g., "mistral")
Temperature
Type: number
Default: 0.9
Range: 0 to 1
Description: Controls the randomness of the model’s output
Tool System Prompt Template (optional)
Type: string
Description: Custom system prompt template for tool usage
Top P (optional)
Type: number
Description: Nucleus sampling parameter
Top K (optional)
Type: number
Description: Top-K sampling parameter
Mirostat (optional)
Type: number
Description: Mirostat sampling mode (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
Mirostat ETA (optional)
Type: number
Description: Mirostat learning rate
Mirostat TAU (optional)
Type: number
Description: Mirostat target entropy
Context Window Size (optional)
Type: number
Description: Size of the context window
Number of GQA groups (optional)
Type: number
Description: Number of grouped-query attention (GQA) groups in the transformer layer
Number of GPU (optional)
Type: number
Description: Number of layers to send to GPU(s)
Number of Thread (optional)
Type: number
Description: Number of threads for computation
Repeat Last N (optional)
Type: number
Description: Look-back window to prevent repetition
Repeat Penalty (optional)
Type: number
Description: Penalty for repeated tokens
Stop Sequence (optional)
Type: string
Description: One or more sequences at which generation stops
Tail Free Sampling (optional)
Type: number
Description: Tail-free sampling parameter; reduces the impact of less probable tokens (a value of 1.0 disables it)
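Internally, these fields correspond to options in Ollama's REST API. The sketch below shows how the node's configuration might be assembled into a request payload; the node-field key names (`topP`, `contextWindowSize`, etc.) and the `build_payload` helper are hypothetical, while the Ollama option names (`top_p`, `num_ctx`, `tfs_z`, ...) come from the Ollama API.

```python
def build_payload(model_name, prompt, base_url="http://localhost:11434", **params):
    """Assemble a request body for Ollama's /api/generate endpoint (sketch)."""
    # Node fields -> Ollama API option names
    option_map = {
        "temperature": "temperature",
        "topP": "top_p",
        "topK": "top_k",
        "mirostat": "mirostat",
        "mirostatEta": "mirostat_eta",
        "mirostatTau": "mirostat_tau",
        "contextWindowSize": "num_ctx",
        "numGqa": "num_gqa",
        "numGpu": "num_gpu",
        "numThread": "num_thread",
        "repeatLastN": "repeat_last_n",
        "repeatPenalty": "repeat_penalty",
        "stop": "stop",
        "tailFreeSampling": "tfs_z",
    }
    options = {option_map[k]: v for k, v in params.items() if k in option_map}
    return {
        "url": f"{base_url}/api/generate",
        "body": {"model": model_name, "prompt": prompt, "options": options},
    }

payload = build_payload(
    "mistral",
    "What is the weather in Paris?",
    temperature=0.9,
    topK=40,
    contextWindowSize=4096,
)
```

Only the options the user actually sets are forwarded, so Ollama's own defaults apply for everything else.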
Input: Receives messages and optional function definitions
Output: Generates responses, potentially including function calls
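When the model decides to call a function, the tool system prompt typically instructs it to reply with a JSON object naming the tool. The exact schema varies between implementations; the sketch below assumes a `{"tool": ..., "tool_input": ...}` shape, and the `parse_function_call` helper is hypothetical.

```python
import json

def parse_function_call(text):
    """Extract a tool invocation from a model reply, if present.

    Assumes the model was prompted to answer tool calls with JSON of the
    form {"tool": <name>, "tool_input": {...}}; this schema is an assumption.
    """
    try:
        call = json.loads(text.strip())
    except json.JSONDecodeError:
        return None  # plain chat reply, no function call
    if isinstance(call, dict) and "tool" in call:
        return call["tool"], call.get("tool_input", {})
    return None

reply = '{"tool": "get_weather", "tool_input": {"city": "Paris"}}'
parse_function_call(reply)   # -> ("get_weather", {"city": "Paris"})
parse_function_call("Hi!")   # -> None (regular chat response)
```

A downstream node can then dispatch to the named tool, or pass the text through unchanged when no call is detected.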
This node is used in workflows where function-calling capabilities of language models are needed. It’s particularly useful for tasks that require the model to choose and use specific tools or functions based on the input.
Supports various Ollama-specific parameters for fine-tuning model behavior
Can handle both regular chat interactions and function-calling scenarios
Includes a default tool system prompt template that is applied when no custom template is provided
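For reference, a tool system prompt template along these lines is commonly used with function-calling-compatible Ollama models. The wording below is illustrative only, not the node's actual default, and the `{tools}` placeholder stands for the serialized tool definitions:

```text
You have access to the following tools:

{tools}

To use a tool, respond with a JSON object of the following form:
{"tool": <name of the chosen tool>, "tool_input": <arguments for the tool>}
```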