This guide covers how to send AWS Bedrock usage data to Fenra.

Bedrock Runtime API

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const MODEL_ID = 'anthropic.claude-3-5-sonnet-20241022-v2:0';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

async function chat(messages) {
  const command = new InvokeModelCommand({
    modelId: MODEL_ID,
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      anthropic_version: 'bedrock-2023-05-31',
      max_tokens: 1024,
      messages
    })
  });

  const response = await client.send(command);
  const result = JSON.parse(new TextDecoder().decode(response.body));

  // Report token usage to Fenra
  await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'bedrock',
      model: MODEL_ID,
      usage: [{
        type: 'tokens',
        metrics: {
          input_tokens: result.usage.input_tokens,
          output_tokens: result.usage.output_tokens,
          total_tokens: result.usage.input_tokens + result.usage.output_tokens
        }
      }],
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      }
    })
  });

  return result;
}
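If you invoke more than one model, it can help to factor the payload construction into a pure helper so every ingest call sends the same shape. A minimal sketch, assuming the Anthropic response format shown above; the `buildFenraUsage` helper and its parameters are illustrative, not part of any Fenra SDK:

```javascript
// Build a Fenra ingest payload from a Bedrock InvokeModel result.
// Assumes the Anthropic-on-Bedrock response shape:
//   { usage: { input_tokens, output_tokens } }
function buildFenraUsage(modelId, result, billableCustomerId) {
  const { input_tokens, output_tokens } = result.usage;
  return {
    provider: 'bedrock',
    model: modelId,
    usage: [{
      type: 'tokens',
      metrics: {
        input_tokens,
        output_tokens,
        total_tokens: input_tokens + output_tokens
      }
    }],
    context: { billable_customer_id: billableCustomerId }
  };
}
```

With a helper like this, the `fetch` call in `chat` reduces to serializing `buildFenraUsage(MODEL_ID, result, process.env.BILLABLE_CUSTOMER_ID)`.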

Supported Models

Fenra supports all models hosted on AWS Bedrock. Common providers include:
| Provider  | Models                                                     |
|-----------|------------------------------------------------------------|
| Anthropic | All Claude models (claude-3.5-sonnet, claude-3-opus, etc.) |
| Meta      | All Llama models (llama-3.2, llama-3.1, llama-3)           |
| Amazon    | All Titan models (titan-text, titan-embeddings)            |
| Mistral   | All Mistral models (mistral-large, mistral-7b)             |
Use the full Bedrock model ID (e.g., anthropic.claude-3-5-sonnet-20241022-v2:0) for accurate cost calculation.
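Bedrock model IDs encode the provider as a dot-separated prefix, which you can recover if you also want to tag events with the upstream provider. A small sketch (the `bedrockProvider` name is illustrative; Fenra itself only needs the full model ID):

```javascript
// Base Bedrock model IDs look like '<provider>.<model-name>[:version]',
// e.g. 'anthropic.claude-3-5-sonnet-20241022-v2:0'. The first
// dot-separated segment is the provider. Note: cross-region inference
// profile IDs carry a region prefix (e.g. 'us.anthropic....') and would
// need that prefix stripped first.
function bedrockProvider(modelId) {
  return modelId.split('.')[0];
}
```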

Next Steps