
Node Details
- Name: confluence
- Type: Document
- Version: 1.0
- Category: Document Loaders
Credentials
The node requires one of the following credential types:
- confluenceCloudApi
- confluenceServerDCApi
Input Parameters
The node accepts the following inputs (a configuration sketch follows the list):
- Text Splitter (optional)
  - Type: TextSplitter
  - Description: A text splitter to break down large documents into smaller chunks.
- Base URL (required)
  - Type: string
  - Description: The base URL of your Confluence instance.
  - Example: https://example.atlassian.net/wiki
- Space Key (required)
  - Type: string
  - Description: The key of the Confluence space to load documents from.
  - Example: ~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE
- Limit (optional)
  - Type: number
  - Default: 0
  - Description: The maximum number of pages to load.
- Additional Metadata (optional)
  - Type: JSON
  - Description: Additional metadata to be added to the extracted documents.
- Omit Metadata Keys (optional)
  - Type: string
  - Description: A comma-separated list of metadata keys to omit from the extracted documents. Use * to omit all metadata keys except those specified in Additional Metadata.
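For orientation, here is a minimal sketch of how these inputs map onto LangChain's ConfluencePagesLoader, which loaders of this kind typically wrap. The import path, credential fields, and values shown are assumptions for illustration, not the node's actual source.

```typescript
// Hedged sketch: assumes the node wraps LangChain's ConfluencePagesLoader.
// username/accessToken stand in for the confluenceCloudApi credential.
import { ConfluencePagesLoader } from "@langchain/community/document_loaders/web/confluence";

const loader = new ConfluencePagesLoader({
  baseUrl: "https://example.atlassian.net/wiki",  // Base URL input
  spaceKey: "DOCS",                               // Space Key input (illustrative)
  username: "user@example.com",                   // from credential
  accessToken: process.env.CONFLUENCE_API_TOKEN!, // from credential
  limit: 25,                                      // Limit input
});

const docs = await loader.load(); // array of { pageContent, metadata }
```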
Functionality
- The node authenticates with Confluence using either Cloud or Server/Data Center credentials.
- It loads pages from the specified Confluence space.
- If a text splitter is provided, it splits the loaded documents.
- Additional metadata, if specified, is merged into each document's metadata.
- Metadata keys are omitted based on the "Omit Metadata Keys" input (see the sketch after this list).
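The following hypothetical helper (applyMetadataRules is not part of the node) sketches one way the merge-then-omit behavior, including the * wildcard, could work:

```typescript
// Hypothetical helper illustrating the metadata steps above; not the node's source.
interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}

function applyMetadataRules(
  doc: Doc,
  additional: Record<string, unknown>,
  omitKeys: string // comma-separated list, or "*"
): Doc {
  const keys = omitKeys.split(",").map((k) => k.trim()).filter(Boolean);
  let metadata: Record<string, unknown>;

  if (keys.includes("*")) {
    // "*" drops all original keys; only Additional Metadata survives.
    metadata = { ...additional };
  } else {
    // Merge Additional Metadata, then remove the listed keys.
    metadata = { ...doc.metadata, ...additional };
    for (const k of keys) delete metadata[k];
  }
  return { ...doc, metadata };
}

const cleaned = applyMetadataRules(
  { pageContent: "...", metadata: { source: "confluence", version: 7 } },
  { team: "docs" },
  "version"
);
// cleaned.metadata => { source: "confluence", team: "docs" }
```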
Output
The node outputs an array of document objects, each containing:
- Page content
- Metadata (original or modified based on the inputs above)
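For illustration, a single output document might look like the following; the metadata keys shown (title, url) are assumptions about what the loader attaches, not a guaranteed schema.

```typescript
// Illustrative output shape only.
const exampleDoc = {
  pageContent: "Welcome to the engineering handbook...",
  metadata: {
    title: "Engineering Handbook",
    url: "https://example.atlassian.net/wiki/spaces/DOCS/pages/123456",
  },
};
```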
Use Cases
- Extracting knowledge base content from Confluence for processing in AI models
- Integrating Confluence documentation into chatbots or search systems
- Analyzing or summarizing Confluence content programmatically
Notes
- The node supports both Confluence Cloud and Confluence Server/Data Center APIs.
- Set the Limit parameter conservatively to avoid overloading your Confluence instance or exceeding API rate limits.
- A text splitter is useful for breaking large Confluence pages into more manageable chunks for downstream processing; a sketch follows below.
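As a sketch of that pattern, assuming LangChain's RecursiveCharacterTextSplitter (the chunk sizes are illustrative, not values the node prescribes):

```typescript
// Hedged sketch: splits loaded documents into overlapping chunks.
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { Document } from "langchain/document";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // max characters per chunk (illustrative)
  chunkOverlap: 200, // overlap preserves context across chunk boundaries
});

const docs = [
  new Document({
    pageContent: "A long Confluence page...",
    metadata: { title: "Handbook" },
  }),
];

const chunks = await splitter.splitDocuments(docs); // smaller Document objects
```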