Node Details

  • Name: awsChatBedrock
  • Type: AWSChatBedrock
  • Category: Chat Models
  • Version: 5.0

Parameters

Credential (Optional)

  • Type: credential
  • Credential Names: awsApi
  • Description: AWS API credentials for authentication.

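A minimal sketch of the two authentication paths, assuming an AWS SDK for JavaScript v3 Bedrock runtime client (the node’s actual internal client is not specified here, so treat the package and field names as an assumption):

```typescript
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";

// Path 1: explicit keys, analogous to attaching the awsApi credential.
const clientWithCredential = new BedrockRuntimeClient({
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? "",
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? "",
    // sessionToken is also accepted when temporary credentials are used.
  },
});

// Path 2: no credential attached — the SDK falls back to the default
// credential provider chain (environment variables, shared config, IAM role).
const clientFromEnvironment = new BedrockRuntimeClient({ region: "us-east-1" });
```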
Inputs

  1. Cache (Optional)

    • Type: BaseCache
    • Description: Caching mechanism to store and retrieve previous responses.
  2. Region (Required)

    • Type: string
    • Description: AWS region where the Bedrock service is hosted.
    • Default: “us-east-1”
  3. Model Name (Required)

    • Type: string
    • Description: The specific Bedrock model to use.
    • Default: “anthropic.claude-3-haiku”
  4. Custom Model Name (Optional)

    • Type: string
    • Description: If provided, this value overrides the model selected in the Model Name option.
  5. Temperature (Optional)

    • Type: number
    • Description: Controls randomness in the output; higher values make responses more random.
    • Default: 0.7
    • Step: 0.1
  6. Max Tokens to Sample (Optional)

    • Type: number
    • Description: The maximum number of tokens to generate in the response.
    • Default: 200
    • Step: 10
  7. Allow Image Uploads (Optional)

    • Type: boolean
    • Description: Enables image input for compatible models (claude-3-* models).
    • Default: false
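
To make the mapping concrete, the sketch below shows how these inputs could translate into a single Bedrock Converse request (assuming the AWS SDK for JavaScript v3; the full model ID, the prompt, and the override logic for Custom Model Name are illustrative rather than the node’s actual implementation):

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

// Illustrative values mirroring the inputs above.
const region = "us-east-1";                                 // Region
const modelName = "anthropic.claude-3-haiku-20240307-v1:0"; // Model Name (full Bedrock model ID)
const customModelName = "";                                 // Custom Model Name (overrides Model Name when set)
const temperature = 0.7;                                    // Temperature
const maxTokens = 200;                                      // Max Tokens to Sample

const client = new BedrockRuntimeClient({ region });

const response = await client.send(
  new ConverseCommand({
    modelId: customModelName || modelName,
    messages: [{ role: "user", content: [{ text: "Summarize Amazon Bedrock in one sentence." }] }],
    inferenceConfig: { temperature, maxTokens },
  })
);
```
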

Input

  • For text-only input: A string containing the user’s message or query.
  • For multi-modal input (if allowed): An object containing both text and image data, as sketched below.

Output

  • A string containing the AI’s response to the input.
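
In Converse-API terms, the two input forms differ only in the content blocks attached to the user message. The sketch below shows both shapes (assuming the AWS SDK for JavaScript v3 Converse types; the model ID and file name are placeholders):

```typescript
import { ConverseCommand } from "@aws-sdk/client-bedrock-runtime";
import { readFileSync } from "node:fs";

const modelId = "anthropic.claude-3-haiku-20240307-v1:0"; // illustrative model ID

// Text-only input: a single text content block.
const textCommand = new ConverseCommand({
  modelId,
  messages: [{ role: "user", content: [{ text: "What is Amazon Bedrock?" }] }],
});

// Multi-modal input (Allow Image Uploads enabled, claude-3-* model):
// text plus raw image bytes in the same user message.
const multiModalCommand = new ConverseCommand({
  modelId,
  messages: [
    {
      role: "user",
      content: [
        { text: "Describe this image." },
        { image: { format: "png", source: { bytes: readFileSync("diagram.png") } } },
      ],
    },
  ],
});
```
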

How It Works

  1. The node authenticates with AWS using provided credentials (or falls back to environment credentials).
  2. It initializes the specified Bedrock model with the given parameters.
  3. When receiving an input:
    • For text-only input, it sends the message directly to the model.
    • For multi-modal input, it formats the input to include both text and image data.
  4. The model processes the input and generates a response.
  5. The response is returned as output.
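
Putting these steps together, a minimal end-to-end sketch might look like the following (again assuming the AWS SDK for JavaScript v3 Converse API as the transport; the node wraps this kind of call behind the inputs described above):

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

async function chat(prompt: string): Promise<string> {
  // Steps 1–2: authenticate (default provider chain here) and target a model.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Steps 3–4: send the text-only message with the configured parameters.
  const response = await client.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-haiku-20240307-v1:0",
      messages: [{ role: "user", content: [{ text: prompt }] }],
      inferenceConfig: { temperature: 0.7, maxTokens: 200 },
    })
  );

  // Step 5: return the generated text as the node’s string output.
  return response.output?.message?.content?.[0]?.text ?? "";
}

console.log(await chat("Give me a one-sentence summary of Amazon Bedrock."));
```
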

Use Cases

  • Building conversational AI applications and chatbots
  • Creating virtual assistants for customer support
  • Developing language translation tools
  • Generating creative content or assisting in writing tasks
  • Analyzing and summarizing text documents
  • Processing and describing images in multi-modal applications

Special Features

  • Multi-Modal Support: Can handle both text and image inputs for compatible models.
  • Flexible Model Selection: Supports various Bedrock models with option for custom model names.
  • Regional Configuration: Allows specifying the AWS region for optimal performance.
  • Caching Integration: Optional caching to improve response times for repeated queries (see the sketch after this list).
  • Temperature Control: Adjustable randomness in model outputs.
  • Token Limit: Configurable maximum token generation for controlled response lengths.
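
The Cache input expects a BaseCache-compatible implementation supplied by another node; purely to illustrate the idea of caching repeated queries (this is not the node’s actual cache interface), a prompt-keyed in-memory wrapper could look like this:

```typescript
// Hypothetical wrapper illustrating prompt-keyed caching; the node’s real
// cache integration comes from the connected Cache input, not this code.
const responseCache = new Map<string, string>();

async function cachedChat(
  prompt: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string> {
  const cached = responseCache.get(prompt);
  if (cached !== undefined) return cached; // repeated query: skip the API call

  const answer = await callModel(prompt);  // e.g. the chat() sketch above
  responseCache.set(prompt, answer);
  return answer;
}
```
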

Notes

  • Requires an AWS account with access to Bedrock services.
  • Performance and capabilities may vary depending on the selected model.
  • Multi-modal support is limited to specific models (e.g., claude-3-* series).
  • Proper error handling should be implemented, especially for potential API failures or token limit issues (see the sketch after these notes).
  • Consider data privacy and compliance requirements when using cloud-based AI services.
  • Costs are associated with using AWS Bedrock services; refer to AWS pricing for details.
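
For the error-handling note above, a hedged sketch of catching a transient Bedrock runtime failure might look like the following (the error name follows AWS SDK for JavaScript v3 conventions; the retry policy is illustrative and not something the node enforces):

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

async function chatWithRetry(prompt: string, attempts = 3): Promise<string> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const response = await client.send(
        new ConverseCommand({
          modelId: "anthropic.claude-3-haiku-20240307-v1:0",
          messages: [{ role: "user", content: [{ text: prompt }] }],
          inferenceConfig: { maxTokens: 200 },
        })
      );
      return response.output?.message?.content?.[0]?.text ?? "";
    } catch (err) {
      // Throttling is usually transient: back off briefly and retry.
      if (err instanceof Error && err.name === "ThrottlingException" && attempt < attempts) {
        await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
        continue;
      }
      // Anything else (validation, access, or model errors) is surfaced to the caller.
      throw err;
    }
  }
  throw new Error("retry attempts exhausted");
}
```
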

The AWS ChatBedrock node provides a powerful interface to the foundation models available through Amazon Bedrock for a wide range of natural language processing tasks. It’s particularly useful for developers and organizations building sophisticated AI-powered applications on AWS’s scalable and secure cloud infrastructure. The node excels in scenarios requiring high-quality language understanding and generation, especially when integration with existing AWS services is desired.