An autonomously run company from Odin Labs

API Documentation

OdinClaw is OpenAI-compatible. Use the standard SDK — change the base URL and you're done.

Quick Start

All examples use the official OpenAI SDK. The only change is the base URL and API key.

Python

main.py
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one paragraph."}
    ],
    max_tokens=200
)

print(response.choices[0].message.content)

Node.js / TypeScript

main.ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one paragraph.' },
  ],
  max_tokens: 200,
});

console.log(response.choices[0].message.content);

cURL

terminal
curl https://api.claw.odin-labs.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ODINCLAW_API_KEY" \
  -d '{
    "model": "deepseek-v3",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in one paragraph."}
    ],
    "max_tokens": 200
  }'

Playground

Generate API code for your prompt. Paste an API key to make a live streaming call.


Authentication

Include your API key in the Authorization header as a Bearer token.

http
Authorization: Bearer $ODINCLAW_API_KEY

Environment variable setup

Store your key in an environment variable. Never hardcode it or commit it to version control.

.env
export ODINCLAW_API_KEY="oc_live_..."  # Get yours at app.claw.odin-labs.ai

API keys use the prefix oc_live_ for live keys and oc_test_ for test keys. Manage your keys in the dashboard.
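Since the prefix encodes the key type, you can branch on it at startup, for example to refuse to boot a production service with a test key. A minimal sketch (the key_mode helper is ours, not part of any SDK):

```python
def key_mode(key: str) -> str:
    """Classify an OdinClaw API key by its documented prefix."""
    if key.startswith("oc_live_"):
        return "live"
    if key.startswith("oc_test_"):
        return "test"
    raise ValueError("unrecognized key format")

# Placeholder key for illustration; never hardcode real keys.
print(key_mode("oc_test_abc123"))  # test
```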

Base URL

All API requests should be sent to:

url
https://api.claw.odin-labs.ai/v1

This is a drop-in replacement for https://api.openai.com/v1. Any OpenAI-compatible client works without modification.

Models & Pricing

Six models with automatic failover, retry with exponential backoff, and two free tiers.

Model            Tier      Model · Context                    Input / Output
deepseek-v3      Default   DeepSeek V3 · 64K context          $0.26/M / $0.38/M
minimax-m2.5     Flagship  MiniMax M2.5 · 128K context        $0.30/M / $1.10/M
gemini-flash     New       Gemini 2.5 Flash · 1M context      $0.30/M / $2.50/M
trinity-large    Free      Arcee Trinity Large · 8K context   $0 / $0
liquid-instruct  Free      Liquid LFM 2.5 · 32K context       $0 / $0
claude-sonnet    Premium   Claude Sonnet · 200K context       $3.00/M / $15.00/M

Prices are per million tokens. Use deepseek-v3 as the default model for the best value. Free models have no cost but may have higher latency.
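Per-million pricing makes per-request cost estimation a one-liner. A small helper using the deepseek-v3 rates from the table above (the function name is ours):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_per_m: float, output_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token rates."""
    return (prompt_tokens * input_per_m + completion_tokens * output_per_m) / 1_000_000

# deepseek-v3: $0.26/M input, $0.38/M output
cost = estimate_cost(12_000, 800, 0.26, 0.38)
print(f"${cost:.6f}")  # $0.003424
```

The actual token counts for a request come back in the response's usage object.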

Endpoints

All endpoints follow the OpenAI API format.

POST /v1/chat/completions

Create a chat completion. Fully compatible with OpenAI's API.

Parameter        Type           Required
model            string         Yes
messages         array          Yes
max_tokens       integer        No
temperature      float          No
stream           boolean        No
tools            array          No
tool_choice      string|object  No
response_format  object         No
top_p            float          No

GET /v1/models

List all available models and their metadata.

bash
curl https://api.claw.odin-labs.ai/v1/models \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

POST /v1/embeddings

Generate vector embeddings for text. See the Embeddings section for full examples.

Parameter        Type          Required
model            string        Yes
input            string|array  Yes
encoding_format  string        No
dimensions       integer       No

GET /v1/keys & POST /v1/keys

List your API keys or create a new one.

bash
curl https://api.claw.odin-labs.ai/v1/keys \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

bash
curl https://api.claw.odin-labs.ai/v1/keys \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ODINCLAW_API_KEY" \
  -d '{"name": "Production Key"}'

GET /v1/usage

Get token usage for the current billing period.

bash
curl "https://api.claw.odin-labs.ai/v1/usage?period=this_month" \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

Rate Limits

Rate limits are enforced per API key based on your subscription tier.

Tier     Tokens    Rate Limit  Price
Free     100K/mo   5 RPM       €0
Starter  1M/mo     30 RPM      €9/mo
Pro      5M/mo     100 RPM     €29/mo
Scale    25M/mo    500 RPM     €99/mo

When you exceed your rate limit, the API returns a 429 status code; when you exceed your monthly token quota, requests also return 429, with a quota-exceeded message. Upgrade your tier to increase limits.
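A common client-side pattern for 429s is to retry with exponential backoff. A minimal, SDK-agnostic sketch; RateLimited is a stand-in for whatever your client raises on 429 (the OpenAI Python SDK, for instance, raises openai.RateLimitError):

```python
import time

class RateLimited(Exception):
    """Stand-in for a 429 response from the API."""

def with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on RateLimited with delays of 1s, 2s, 4s, ..."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            sleep(base_delay * (2 ** attempt))
```

Usage: wrap the request in a thunk, e.g. `with_backoff(lambda: client.chat.completions.create(...))`. Note that retrying does not help with a quota-exceeded 429; only a tier upgrade or a new billing period does.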

Tool Calling

Define functions that the model can call. Pass tools with function schemas and tool_choice to control behavior. Supported on deepseek-v3, minimax-m2.5, claude-sonnet, and gemini-flash.

Python

tools.py
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(f"Call: {call.function.name}({call.function.arguments})")

Node.js

tools.ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: "What's the weather in London?" }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  }],
  tool_choice: 'auto',
});

const toolCalls = response.choices[0].message.tool_calls;
if (toolCalls) {
  for (const call of toolCalls) {
    console.log(call.function.name, call.function.arguments);
  }
}

When the model decides to call a function, the response includes tool_calls in the message and finish_reason: "tool_calls". Execute the function, then send the result back with role: "tool".
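The second half of that round trip can be sketched as a small dispatcher: execute each requested tool locally, then format the results as role-"tool" messages. The run_tools helper and the get_weather stub are ours; the message shape (role, tool_call_id, content) follows the OpenAI format:

```python
import json

def run_tools(tool_calls, registry):
    """Execute each requested tool and format results as role-"tool" messages."""
    results = []
    for call in tool_calls:
        fn = registry[call.function.name]
        args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call.id,  # ties the result back to the request
            "content": json.dumps(fn(**args)),
        })
    return results

def get_weather(location):
    return {"location": location, "temp_c": 14}  # local stub for illustration

# After the first create() returns tool_calls, extend the conversation and call again:
# messages.append(message)
# messages.extend(run_tools(message.tool_calls, {"get_weather": get_weather}))
# final = client.chat.completions.create(model="deepseek-v3", messages=messages)
```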

JSON Mode

Force the model to output valid JSON by setting response_format. Supported on deepseek-v3, minimax-m2.5, claude-sonnet, and gemini-flash.

Python

json_mode.py
response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "List 3 planets as JSON"}],
    response_format={"type": "json_object"},
)

import json
data = json.loads(response.choices[0].message.content)
print(data)

Node.js

json_mode.ts
const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: 'List 3 planets as JSON' }],
  response_format: { type: 'json_object' },
});

const data = JSON.parse(response.choices[0].message.content!);
console.log(data);

Available formats: json_object for freeform JSON output, and json_schema with a schema definition for structured output (model-dependent).
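For json_schema, the request carries a named schema in OpenAI's structured-output shape. A sketch of the payload; the schema itself is invented for illustration, and support varies by model:

```python
# Hypothetical schema: constrain the reply to {"planets": ["...", ...]}
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "planet_list",          # an identifier you choose
        "schema": {                     # standard JSON Schema
            "type": "object",
            "properties": {
                "planets": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["planets"],
        },
    },
}

# Passed exactly like json_object:
# client.chat.completions.create(..., response_format=response_format)
```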

Embeddings

Generate vector embeddings for text input. Compatible with OpenAI's embeddings API.

Python

embeddings.py
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)

print(f"Dimensions: {len(response.data[0].embedding)}")  # 1536

Node.js

embeddings.ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'The quick brown fox jumps over the lazy dog',
});

console.log('Dimensions:', response.data[0].embedding.length); // 1536

cURL

terminal
curl https://api.claw.odin-labs.ai/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ODINCLAW_API_KEY" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'

Available models: text-embedding-3-small (1536 dims, $0.02/M tokens) and text-embedding-3-large (3072 dims, $0.13/M tokens). Pass an array of strings to embed multiple texts in one call.
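Embedding vectors are typically compared with cosine similarity. A dependency-free helper (nothing OdinClaw-specific here; in production you would usually reach for numpy):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Compare two response.data[i].embedding vectors:
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```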

Streaming

Set stream: true to receive Server-Sent Events (SSE) as the model generates tokens.

Python

stream.py
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

stream = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Write a haiku about AI."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Node.js

stream.ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const stream = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: 'Write a haiku about AI.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Each SSE event contains a JSON object with the same structure as the non-streaming response, but with partial content in choices[0].delta.content. The stream ends with a [DONE] message.
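If you consume the stream without an SDK, each event line has the form `data: {json}` and the terminator is `data: [DONE]`. A minimal per-line parser (the function is ours, written against the event shape described above):

```python
import json

def parse_sse_line(line: str):
    """Content delta from one SSE line: '' if none, None at [DONE]."""
    if not line.startswith("data: "):
        return ""  # comments, blank keep-alive lines, etc.
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end of stream
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content") or ""

print(parse_sse_line('data: {"choices":[{"delta":{"content":"Hi"}}]}'))  # Hi
print(parse_sse_line("data: [DONE]"))  # None
```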

Errors

OdinClaw uses OpenAI-compatible error responses. All errors include a JSON body with an error object.

error response
{
  "error": {
    "message": "Invalid API key provided.",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}

Status Code  Type                  Description
400          invalid_request       Malformed request body or parameters
401          authentication_error  Missing or invalid API key
403          permission_error      API key lacks required permissions
404          not_found             Unknown model or endpoint
429          rate_limit_error      Rate limit or quota exceeded
500          server_error          Internal server error
502          upstream_error        Model provider returned an error
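With the OpenAI SDKs these surface as typed exceptions (e.g. openai.RateLimitError for 429). One practical use of the table is deciding which statuses are worth retrying; a small helper reflecting the table above (the function is ours):

```python
def is_retryable(status_code: int) -> bool:
    """Retry 429 (after backoff) and transient server-side errors.
    Other 4xx errors indicate a client-side problem and will not
    succeed on retry."""
    return status_code in (429, 500, 502)

print(is_retryable(429))  # True
print(is_retryable(401))  # False
```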

Ready to start?

Get your API key in 30 seconds. 100K free tokens, no credit card required.

Get API Key