This guide covers how to send DeepSeek usage data to Fenra.

DeepSeek API

DeepSeek’s API follows the OpenAI chat completions format, so the response includes a standard usage object you can forward to Fenra:
async function chat(messages) {
  // Call DeepSeek's OpenAI-compatible chat completions endpoint
  const response = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.DEEPSEEK_API_KEY}`
    },
    body: JSON.stringify({
      model: 'deepseek-chat',
      messages
    })
  });

  const result = await response.json();

  // Send to Fenra
  await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'deepseek',
      model: result.model,
      usage: [{
        type: 'tokens',
        metrics: {
          input_tokens: result.usage.prompt_tokens,
          output_tokens: result.usage.completion_tokens,
          total_tokens: result.usage.total_tokens
        }
      }],
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      }
    })
  });

  return result;
}
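
For example, a minimal call (the message shape follows the OpenAI format; the prompt here is only illustrative):
const result = await chat([
  { role: 'user', content: 'Explain streaming vs. polling in one sentence.' }
]);
console.log(result.choices[0].message.content);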

DeepSeek Reasoner

For reasoning models such as deepseek-reasoner, also include the reasoning token count reported under completion_tokens_details:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: result.usage.prompt_tokens,
    output_tokens: result.usage.completion_tokens,
    total_tokens: result.usage.total_tokens,
    reasoning_tokens: result.usage.completion_tokens_details?.reasoning_tokens || 0
  }
}]
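
Putting it together, here is a sketch of the full Fenra ingest call for a deepseek-reasoner response, using the same endpoint, headers, and context fields as the example above:
await fetch('https://api.fenra.io/ingest/usage', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.FENRA_API_KEY
  },
  body: JSON.stringify({
    provider: 'deepseek',
    model: result.model, // e.g. 'deepseek-reasoner'
    usage: [{
      type: 'tokens',
      metrics: {
        input_tokens: result.usage.prompt_tokens,
        output_tokens: result.usage.completion_tokens,
        total_tokens: result.usage.total_tokens,
        // Reasoning models report chain-of-thought tokens here; fall back to 0 if absent
        reasoning_tokens: result.usage.completion_tokens_details?.reasoning_tokens || 0
      }
    }],
    context: {
      billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
    }
  })
});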

Supported Models

Fenra supports all DeepSeek models. Common models include:
Model               Description
deepseek-chat       General chat
deepseek-coder      Code-optimized
deepseek-reasoner   Reasoning with chain-of-thought
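
Any of these names can be substituted for the model field in the request body of the chat() helper above, for example:
body: JSON.stringify({
  model: 'deepseek-coder', // any model from the table works here
  messages
})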

Next Steps