Node Details

  • Name: ChatFireworks
  • Type: ChatFireworks
  • Version: 1.0
  • Category: Chat Models

Base Classes

  • ChatFireworks
  • [Other base classes from ChatFireworks]

Parameters

Credential (Required)

  • Type: credential
  • Name: fireworksApi
  • Description: API key for authenticating with Fireworks AI services
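
Outside the node, the same key is usually read from an environment variable. The sketch below is illustrative only; the FIREWORKS_API_KEY variable name is an assumption based on common LangChain conventions, not something defined by this node, which resolves the key from its fireworksApi credential.

```typescript
// Hypothetical sketch: the node resolves the key from its fireworksApi credential;
// FIREWORKS_API_KEY is an assumed environment variable name for standalone use.
const fireworksApiKey = process.env.FIREWORKS_API_KEY;
if (!fireworksApiKey) {
  throw new Error("Fireworks API key is missing (configure the fireworksApi credential)");
}
```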

Inputs

  1. Cache

    • Type: BaseCache
    • Optional: Yes
    • Description: Caching mechanism for storing and retrieving chat responses
  2. Model

    • Type: string
    • Default: “accounts/fireworks/models/llama-v2-13b-chat”
    • Description: The specific Fireworks AI model to use for chat
  3. Temperature

    • Type: number
    • Default: 0.9
    • Step: 0.1
    • Optional: Yes
    • Description: Controls the randomness of the model’s output. Higher values make the output more random, while lower values make it more deterministic
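
For illustration only, the three inputs above could be described with a structure like the one below. The interface and property names are hypothetical and are not taken from the node's source.

```typescript
// Hypothetical input declarations mirroring the list above.
interface NodeInput {
  label: string;
  name: string;
  type: string;
  default?: string | number;
  step?: number;
  optional?: boolean;
  description?: string;
}

const chatFireworksInputs: NodeInput[] = [
  { label: "Cache", name: "cache", type: "BaseCache", optional: true },
  {
    label: "Model",
    name: "modelName",
    type: "string",
    default: "accounts/fireworks/models/llama-v2-13b-chat",
  },
  {
    label: "Temperature",
    name: "temperature",
    type: "number",
    default: 0.9,
    step: 0.1,
    optional: true,
  },
];
```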

Initialization

The node initializes a ChatFireworks instance with the following settings:

  • Fireworks API Key (from credentials)
  • Model name
  • Temperature
  • Streaming option (default: true)
  • Cache (if provided)
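
As a rough equivalent, here is a minimal sketch using the LangChain JS ChatFireworks class. Exact import paths and constructor field names can vary by library version, and in the node the key comes from the fireworksApi credential rather than an environment variable.

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
import { InMemoryCache } from "@langchain/core/caches";

// Minimal sketch; field names assume a recent @langchain/community release.
const model = new ChatFireworks({
  fireworksApiKey: process.env.FIREWORKS_API_KEY, // supplied by the fireworksApi credential in the node
  modelName: "accounts/fireworks/models/llama-v2-13b-chat", // Model input
  temperature: 0.9, // Temperature input (default)
  streaming: true, // streaming is enabled by default
  cache: new InMemoryCache(), // only set when a Cache input is connected
});
```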

Usage

This node is used to integrate Fireworks AI’s chat capabilities into a workflow. It allows users to interact with various Fireworks chat models, with options to adjust the model’s behavior through temperature settings and caching mechanisms.
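
A hedged usage sketch, reusing the model instance from the initialization example above:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Send a single chat message and print the model's reply.
const response = await model.invoke([
  new HumanMessage("Give me one sentence about the Fireworks AI platform."),
]);
console.log(response.content);
```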

Input/Output

  • Input: The node expects input in the form of chat messages or prompts (handled externally)
  • Output: Generates chat responses using the specified Fireworks AI model

Notes

  • The node supports streaming responses by default (see the streaming sketch after this list)
  • Temperature can be adjusted to control the creativity/randomness of the model’s outputs
  • Caching can be implemented to improve response times for repeated queries
  • The default model is set to “llama-v2-13b-chat”, but this can be changed as needed
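
For illustration, a possible way to consume streamed output (assuming the model instance from the initialization sketch, with streaming left at its default of true):

```typescript
// Stream the response chunk by chunk instead of waiting for the full message.
const stream = await model.stream("Write a short haiku about fireworks.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```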