This guide shows you how to send AI usage transactions to Fenra. Transactions track individual AI API calls and their associated costs, enabling comprehensive cost analysis and monitoring.
Prerequisites
A Fenra API key (see How to Obtain an API Key)
An application that makes AI provider API calls
Understanding of your AI provider’s response format
Understanding Transactions
A transaction represents a single AI API call and includes:
Provider - Which AI provider was used (OpenAI, Anthropic, Google, etc.)
Model - The specific model used (e.g., gpt-4o, claude-3.5-sonnet)
Usage - Usage metrics (tokens, images, audio seconds, etc.)
Context - Metadata for cost allocation (environment, feature, user, etc.)
Basic Transaction Structure
Here’s the basic structure of a transaction:
{
  "provider": "openai",
  "model": "gpt-4o",
  "usage": [
    {
      "type": "tokens",
      "metrics": {
        "input_tokens": 100,
        "output_tokens": 50,
        "total_tokens": 150
      }
    }
  ],
  "context": {
    "customer_id": "cust_123456",
    "request_id": "req_abc123",
    "environment": "production",
    "feature_name": "chat-assistant"
  }
}
Step-by-Step: Adding a Transaction
Make Your AI Provider Call
First, make your AI provider API call as you normally would:

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Hello, world!" }
  ]
});
Extract Usage Data
Extract usage information from the provider’s response:

const usage = {
  input_tokens: response.usage.prompt_tokens,
  output_tokens: response.usage.completion_tokens,
  total_tokens: response.usage.total_tokens
};
Build the Transaction
Construct the transaction object with provider, model, usage, and context:

const transaction = {
  provider: 'openai',
  model: response.model,
  usage: [{
    type: 'tokens',
    metrics: {
      input_tokens: response.usage.prompt_tokens,
      output_tokens: response.usage.completion_tokens,
      total_tokens: response.usage.total_tokens
    }
  }],
  context: {
    customer_id: 'cust_123456',
    request_id: response.id,
    environment: 'production',
    feature_name: 'chat-assistant',
    user_id: currentUser.id
  }
};
Send to Fenra
Send the transaction to Fenra’s ingestion API:

const fenraResponse = await fetch('https://api.fenra.io/ingest/usage', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.FENRA_API_KEY
  },
  body: JSON.stringify(transaction)
});

if (fenraResponse.status === 202) {
  console.log('Transaction queued successfully');
}
Complete Integration Example
Here’s a complete example that wraps an AI provider call with Fenra tracking:
async function trackedAICall(messages, featureName, userId) {
  // Make the AI provider call
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: messages
  });

  // Track usage with Fenra
  try {
    await fetch('https://api.fenra.io/ingest/usage', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Api-Key': process.env.FENRA_API_KEY
      },
      body: JSON.stringify({
        provider: 'openai',
        model: response.model,
        usage: [{
          type: 'tokens',
          metrics: {
            input_tokens: response.usage.prompt_tokens,
            output_tokens: response.usage.completion_tokens,
            total_tokens: response.usage.total_tokens
          }
        }],
        context: {
          customer_id: process.env.CUSTOMER_ID,
          request_id: response.id,
          environment: process.env.NODE_ENV,
          feature_name: featureName,
          user_id: userId
        }
      })
    });
  } catch (error) {
    // Log but don't throw - don't break your app if tracking fails
    console.error('Failed to track usage:', error);
  }

  return response;
}
Usage Types
Fenra supports different usage types depending on your AI provider and use case:
Tokens Usage
For text generation, chat, and reasoning models:
{
  "type": "tokens",
  "metrics": {
    "input_tokens": 100,
    "output_tokens": 50,
    "total_tokens": 150
  }
}
Images Usage
For image generation:
{
  "type": "images",
  "metrics": {
    "generated": 1,
    "size_px": "1024x1024"
  }
}
Audio Usage
For speech-to-text or text-to-speech:
{
  "type": "audio_seconds",
  "metrics": {
    "input_seconds": 30.5,
    "output_seconds": 0,
    "total_seconds": 30.5
  }
}
Requests Usage
For flat per-request pricing:
{
  "type": "requests",
  "metrics": {
    "count": 1
  }
}
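Because usage is an array, a single transaction can carry more than one usage entry. As an illustrative sketch (assuming the ingestion API accepts mixed usage types in one transaction; the model name here is just an example), a multimodal call billed for both tokens and audio might look like:

```json
{
  "provider": "openai",
  "model": "gpt-4o-audio-preview",
  "usage": [
    {
      "type": "tokens",
      "metrics": { "input_tokens": 100, "output_tokens": 50, "total_tokens": 150 }
    },
    {
      "type": "audio_seconds",
      "metrics": { "input_seconds": 12.0, "output_seconds": 0, "total_seconds": 12.0 }
    }
  ],
  "context": {
    "customer_id": "cust_123456",
    "request_id": "req_def456"
  }
}
```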
Context Fields
The context object helps with cost allocation and analysis:
Required Fields
customer_id - Identifies the customer or account to be billed
request_id - Unique identifier for the logical request (must be unique per customer)
Optional Fields
environment - Deployment environment (e.g., “production”, “staging”, “development”)
feature_name - Product feature using AI (e.g., “chat-assistant”, “image-generation”)
user_id - User identifier for per-user analysis
session_id - Session identifier for session-based analysis
Include as much context as possible. This enables powerful filtering and analysis in the Fenra dashboard, helping you understand costs at the feature, user, or environment level.
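The field rules above can be sketched as a small helper that enforces the two required fields and includes only the optional fields that are present. The function name and camelCase parameter names are our own, not part of the Fenra API:

```javascript
// Hypothetical helper: builds a Fenra context object.
// customer_id and request_id are required; everything else is optional.
function buildContext({ customerId, requestId, environment, featureName, userId, sessionId }) {
  if (!customerId || !requestId) {
    throw new Error('customer_id and request_id are required');
  }
  const context = { customer_id: customerId, request_id: requestId };
  // Only attach optional fields that were actually provided.
  if (environment) context.environment = environment;
  if (featureName) context.feature_name = featureName;
  if (userId) context.user_id = userId;
  if (sessionId) context.session_id = sessionId;
  return context;
}
```

This keeps transactions free of null or undefined context values while still encouraging you to pass every field you have.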
Bulk Transactions
For high-throughput scenarios, send multiple transactions in a single request:
const transactions = [
  {
    provider: 'openai',
    model: 'gpt-4o',
    usage: [{ type: 'tokens', metrics: { ... } }],
    context: { ... }
  },
  {
    provider: 'anthropic',
    model: 'claude-3.5-sonnet',
    usage: [{ type: 'tokens', metrics: { ... } }],
    context: { ... }
  }
];

await fetch('https://api.fenra.io/ingest/usage', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.FENRA_API_KEY
  },
  body: JSON.stringify({ transactions })
});
Error Handling
Always implement proper error handling:
try {
  const response = await fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify(transaction)
  });

  if (response.status === 202) {
    // Success - transaction queued
    const data = await response.json();
    console.log(`Queued ${data.events_queued} transaction(s)`);
  } else if (response.status === 400) {
    // Validation error - fix the request
    const error = await response.json();
    console.error('Validation error:', error.error.details);
  } else if (response.status === 401) {
    // Authentication error - check API key
    console.error('Invalid API key');
  } else {
    // Server error - may retry
    console.error('Server error:', response.status);
  }
} catch (error) {
  // Network error - log but don't break your app
  console.error('Network error:', error);
}
Don’t break your app: if Fenra tracking fails, log the error but don’t throw. Your AI provider call should still succeed even if tracking fails.
Response Codes
The API returns different status codes:
202 Accepted - Transaction(s) queued successfully
207 Multi-Status - Partial success (some transactions queued, some failed)
400 Bad Request - Validation error or invalid JSON
401 Unauthorized - Missing or invalid API key
500 Internal Server Error - Server error (may retry)
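One way to act on these codes is a small classifier that decides whether a retry is worthwhile. This is a sketch; the category strings are our own, and only the status codes listed above come from the API:

```javascript
// Map a Fenra ingestion response status to a next action.
// Retrying a 4xx won't help until the request itself is fixed,
// so only server errors are marked retryable.
function classifyResponse(status) {
  if (status === 202) return 'success';        // all transactions queued
  if (status === 207) return 'partial';        // inspect the body for failed items
  if (status === 400 || status === 401) return 'fix-request';
  if (status >= 500) return 'retry';
  return 'unknown';
}
```

A retry loop built on this would typically add backoff and a retry cap so a Fenra outage never stalls your request path.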
Best Practices
Send Asynchronously: Send transactions in the background to avoid blocking your application. Use background jobs or fire-and-forget patterns.
Batch When Possible: Use bulk transactions for high-volume scenarios to reduce HTTP overhead and improve performance.
Include Rich Context: Add environment, feature, and user context to enable powerful filtering and analysis in dashboards.
Handle Errors Gracefully: Implement error handling that logs issues but doesn’t break your application flow.
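The first two practices can be combined in a minimal fire-and-forget batcher. This is a sketch, not an official client: `TransactionBatcher` and its injectable `sendFn` are our own names, and the example sender assumes the bulk endpoint shown earlier:

```javascript
// Buffers transactions and flushes them in batches so the hot path
// never awaits the network. sendFn is injectable for testing.
class TransactionBatcher {
  constructor(sendFn, maxBatch = 50) {
    this.sendFn = sendFn;
    this.maxBatch = maxBatch;
    this.buffer = [];
  }

  track(transaction) {
    this.buffer.push(transaction);
    if (this.buffer.length >= this.maxBatch) this.flush();
  }

  flush() {
    if (this.buffer.length === 0) return;
    // Drain the buffer and send as one bulk request.
    const transactions = this.buffer.splice(0, this.buffer.length);
    // Fire and forget: log failures, never throw into the caller.
    this.sendFn(transactions).catch(err =>
      console.error('Fenra batch failed:', err)
    );
  }
}

// Example sendFn using the bulk endpoint:
const sendToFenra = (transactions) =>
  fetch('https://api.fenra.io/ingest/usage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({ transactions })
  });
```

In production you would also flush on a timer and on process shutdown so buffered transactions are not lost.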
Next Steps
After adding transactions, you can: