This guide covers how to send OpenAI usage data to Fenra.

Chat Completions (GPT-4, GPT-3.5, o1, o3)

import OpenAI from 'openai';

const openai = new OpenAI();

async function chat(messages) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages
  });

  // Send to Fenra
  await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'openai',
      model: response.model,
      usage: [{
        type: 'tokens',
        metrics: {
          input_tokens: response.usage.prompt_tokens,
          output_tokens: response.usage.completion_tokens,
          total_tokens: response.usage.total_tokens
        }
      }],
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      }
    })
  });

  return response;
}
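
The same POST to https://api.fenra.io/ingest/usage repeats in every example below, so it can help to wrap it once. The following is a minimal sketch; the helper name sendUsageToFenra and its error handling are illustrative, not part of any Fenra SDK:

async function sendUsageToFenra(payload) {
  // POST one usage report to Fenra's ingest endpoint.
  const res = await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'openai',
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      },
      ...payload
    })
  });

  // Surface ingest failures without interrupting the main request path.
  if (!res.ok) {
    console.error(`Fenra ingest failed with status ${res.status}`);
  }
}

The examples below spell the request out in full; in your own code you could instead call sendUsageToFenra({ model: response.model, usage: [...] }).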

Prompt Caching

When the response includes prompt_tokens_details.cached_tokens, add it to the token metrics:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: response.usage.prompt_tokens,
    output_tokens: response.usage.completion_tokens,
    total_tokens: response.usage.total_tokens,
    cached_tokens: response.usage.prompt_tokens_details?.cached_tokens || 0
  }
}]
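
prompt_tokens_details can be absent on some models, and cached_tokens is 0 when nothing was reused, so a quick check like the one below (the logging is illustrative) confirms that cache hits are actually being reported:

const cached = response.usage.prompt_tokens_details?.cached_tokens ?? 0;
if (cached > 0) {
  // Part of the prompt was served from OpenAI's prompt cache.
  console.log(`Cache hit: ${cached} of ${response.usage.prompt_tokens} input tokens were cached`);
}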

Reasoning Models (o1, o3)

OpenAI’s reasoning models report reasoning_tokens under completion_tokens_details. Include them for accurate cost tracking:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: response.usage.prompt_tokens,
    output_tokens: response.usage.completion_tokens,
    total_tokens: response.usage.total_tokens,
    reasoning_tokens: response.usage.completion_tokens_details?.reasoning_tokens || 0
  }
}]
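
If regular, cached, and reasoning responses flow through the same code path, one option is to build the metrics object in a single place. This is a sketch under that assumption; the helper name buildTokenMetrics is hypothetical, and the optional fields are added only when OpenAI returns a non-zero value for them:

function buildTokenMetrics(usage) {
  const metrics = {
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
    total_tokens: usage.total_tokens
  };

  // Present when the request hit OpenAI's prompt cache.
  if (usage.prompt_tokens_details?.cached_tokens) {
    metrics.cached_tokens = usage.prompt_tokens_details.cached_tokens;
  }

  // Present for reasoning models such as o1 and o3.
  if (usage.completion_tokens_details?.reasoning_tokens) {
    metrics.reasoning_tokens = usage.completion_tokens_details.reasoning_tokens;
  }

  return metrics;
}

You would then report usage: [{ type: 'tokens', metrics: buildTokenMetrics(response.usage) }].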

Image Generation (DALL-E)

const response = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A sunset over mountains',
  n: 1,
  size: '1024x1024'
});

await fetch('https://api.fenra.io/ingest/usage', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.FENRA_API_KEY
  },
  body: JSON.stringify({
    provider: 'openai',
    model: 'dall-e-3',
    usage: [{
      type: 'images',
      metrics: {
        generated: response.data.length,
        size_px: 1024
      }
    }],
    context: {
      billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
    }
  })
});
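
size_px in the example above is hard-coded to match the requested size. If you vary sizes, you can derive it from the size string you pass to the API. The snippet below assumes square outputs (e.g. '1024x1024'); for sizes like '1792x1024' you would need to decide which dimension to report:

// Derive the pixel size from the 'WIDTHxHEIGHT' string passed to the API.
const size = '1024x1024';
const sizePx = Number(size.split('x')[0]);

Passing size to openai.images.generate and size_px: sizePx to Fenra keeps the two values in sync.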

Audio (Whisper, TTS)

For Whisper transcription:
usage: [{
  type: 'audio_seconds',
  metrics: {
    input_seconds: audioDurationInSeconds,
    total_seconds: audioDurationInSeconds
  }
}]
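The transcription API does not return token counts, so you need the audio duration yourself. One way, sketched below, is to request response_format: 'verbose_json', which includes a duration field in seconds (the file path is illustrative):

import fs from 'fs';

const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream('meeting.mp3'), // illustrative path
  model: 'whisper-1',
  response_format: 'verbose_json'
});

// verbose_json responses include the audio length in seconds.
const audioDurationInSeconds = transcription.duration;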
For TTS:
usage: [{
  type: 'audio_seconds',
  metrics: {
    output_seconds: estimatedDuration,
    total_seconds: estimatedDuration
  }
}]
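
The TTS response does not include a duration, so estimatedDuration has to come from you. A rough sketch, assuming an average speaking rate of about 150 words per minute (decode the returned audio if you need an exact figure):

// ~150 words per minute is an assumption; tune it for your voice and speed settings.
const words = inputText.trim().split(/\s+/).length;
const estimatedDuration = (words / 150) * 60;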

Supported Models

Fenra supports all OpenAI models. Common models include:
Model                   Type       Usage Type
gpt-4o, gpt-4o-mini     Chat       tokens
gpt-4-turbo, gpt-4      Chat       tokens
gpt-3.5-turbo           Chat       tokens
o1-preview, o1-mini     Reasoning  tokens (with reasoning)
o3, o3-mini             Reasoning  tokens (with reasoning)
dall-e-3, dall-e-2      Image      images
whisper-1               Audio      audio_seconds
tts-1, tts-1-hd         Audio      audio_seconds

Next Steps