
Node Details
- Name: OllamaEmbedding_Embeddings
- Type: OllamaEmbeddings
- Version: 1.0
- Category: Embeddings
Parameters
Inputs
- Base URL (string)
  - Default: "http://localhost:11434"
  - The base URL for the Ollama API.
- Model Name (string)
  - Placeholder: "llama2"
  - The name of the Ollama model to use for generating embeddings.
- Number of GPU (number, optional)
  - Description: The number of layers to send to the GPU(s). On macOS, it defaults to 1 to enable Metal support; set it to 0 to disable GPU use.
  - Additional parameter
- Number of Thread (number, optional)
  - Description: Sets the number of threads to use during computation. By default, Ollama detects this automatically for optimal performance. It is recommended to set this to the number of physical CPU cores your system has.
  - Additional parameter
- Use MMap (boolean, optional)
  - Default: true
  - Determines whether to use memory mapping (mmap) when loading the model.
  - Additional parameter
Output
The node initializes and returns an instance of the OllamaEmbeddings class, which can be used to generate embeddings for input text.
Usage
This node is particularly useful in workflows that require text embeddings, such as:
- Semantic search
- Text clustering
- Document similarity comparisons
- Feature extraction for machine learning models
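The use cases above all reduce to comparing embedding vectors, most commonly by cosine similarity. The helper below is a self-contained sketch of that comparison step; in practice its inputs would be vectors returned by the embeddings instance rather than the toy vectors shown here.

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means orthogonal (unrelated), -1 means opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors standing in for real embeddings:
const same = cosineSimilarity([1, 0, 0], [2, 0, 0]);     // parallel → 1
const unrelated = cosineSimilarity([1, 0, 0], [0, 1, 0]); // orthogonal → 0
```

For semantic search, one would embed the query with `embedQuery`, embed the documents with `embedDocuments`, and rank documents by this score against the query vector.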
Notes
- The node uses the @langchain/community/embeddings/ollama package, which should be installed in the project.
- Make sure the Ollama service is running and accessible at the specified Base URL.
- The additional parameters (Number of GPU, Number of Thread, and Use MMap) allow for fine-tuning the embedding generation process based on available hardware and specific requirements.