This guide covers how to send Anthropic usage data to Fenra.

Messages API

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function chat(messages) {
  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages
  });

  // Send to Fenra
  await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'anthropic',
      model: response.model,
      usage: [{
        type: 'tokens',
        metrics: {
          input_tokens: response.usage.input_tokens,
          output_tokens: response.usage.output_tokens,
          total_tokens: response.usage.input_tokens + response.usage.output_tokens
        }
      }],
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      }
    })
  });

  return response;
}
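If you report usage from several call sites, it can help to build the ingest payload in one place. The sketch below is a minimal example assuming the payload shape shown above; `buildUsagePayload` is a hypothetical helper name, not part of the Fenra or Anthropic SDKs.

```javascript
// Hypothetical helper: builds the Fenra ingest payload from an Anthropic
// Messages API response, so the reporting logic can be reused across calls.
function buildUsagePayload(response, billableCustomerId) {
  const { input_tokens, output_tokens } = response.usage;
  return {
    provider: 'anthropic',
    model: response.model,
    usage: [{
      type: 'tokens',
      metrics: {
        input_tokens,
        output_tokens,
        total_tokens: input_tokens + output_tokens
      }
    }],
    context: {
      billable_customer_id: billableCustomerId
    }
  };
}
```

The `fetch` call in `chat` can then pass `JSON.stringify(buildUsagePayload(response, process.env.BILLABLE_CUSTOMER_ID))` as the body.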

Prompt Caching

When using prompt caching, include the cache tokens:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: response.usage.input_tokens,
    output_tokens: response.usage.output_tokens,
    total_tokens: response.usage.input_tokens + response.usage.output_tokens,
    cache_read_input_tokens: response.usage.cache_read_input_tokens || 0,
    cache_creation_input_tokens: response.usage.cache_creation_input_tokens || 0
  }
}]

Extended Thinking

When using extended thinking, include thinking tokens:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: response.usage.input_tokens,
    output_tokens: response.usage.output_tokens,
    total_tokens: response.usage.input_tokens + response.usage.output_tokens,
    thinking_tokens: response.usage.thinking_tokens || 0
  }
}]
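The caching and thinking snippets above differ only in which optional counters they include. One way to handle both cases is a single metrics builder that adds a field only when the response actually reports it. This is a sketch under the assumption that extra metric keys are accepted as shown above; `buildTokenMetrics` is a hypothetical name.

```javascript
// Hypothetical helper: assembles the metrics object, adding cache and
// thinking counters only when the Anthropic usage object reports them.
function buildTokenMetrics(usage) {
  const metrics = {
    input_tokens: usage.input_tokens,
    output_tokens: usage.output_tokens,
    total_tokens: usage.input_tokens + usage.output_tokens
  };
  if (usage.cache_read_input_tokens !== undefined) {
    metrics.cache_read_input_tokens = usage.cache_read_input_tokens;
  }
  if (usage.cache_creation_input_tokens !== undefined) {
    metrics.cache_creation_input_tokens = usage.cache_creation_input_tokens;
  }
  if (usage.thinking_tokens !== undefined) {
    metrics.thinking_tokens = usage.thinking_tokens;
  }
  return metrics;
}
```

This keeps the reported payload free of zero-valued fields for requests that used neither caching nor extended thinking.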

Streaming

When streaming, wait for the final message to get usage data:
const stream = await anthropic.messages.stream({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages
});

const response = await stream.finalMessage();
// Now send to Fenra with response.usage

Supported Models

Fenra supports all Anthropic Claude models. Common models include:
Model                          Type
claude-3-5-sonnet-20241022     Latest Sonnet
claude-3-5-haiku-20241022      Latest Haiku
claude-3-opus-20240229         Opus
claude-3-sonnet-20240229       Sonnet (legacy)
claude-3-haiku-20240307        Haiku (legacy)

Model Tiers

For batch processing, specify the tier:
{
  provider: 'anthropic',
  model: 'claude-3-5-sonnet-20241022',
  model_tier: 'batch',  // 50% discount
  usage: [...]
}
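Since the tier field should only be set for requests actually submitted through batch processing, one option is to attach it conditionally. The sketch below assumes the payload shape above; `withTier` is a hypothetical helper, not part of any SDK.

```javascript
// Hypothetical helper: adds model_tier: 'batch' only for requests that
// were submitted via batch processing; other payloads pass through untouched.
function withTier(payload, isBatch) {
  return isBatch ? { ...payload, model_tier: 'batch' } : payload;
}
```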

Next Steps