API Documentation
OdinClaw is OpenAI-compatible. Use the standard SDK — change the base URL and you're done.
Quick Start
All examples use the official OpenAI SDK. The only change is the base URL and API key.
Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one paragraph."}
    ],
    max_tokens=200
)
print(response.choices[0].message.content)

Node.js / TypeScript
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one paragraph.' },
  ],
  max_tokens: 200,
});
console.log(response.choices[0].message.content);

cURL
curl https://api.claw.odin-labs.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ODINCLAW_API_KEY" \
-d '{
"model": "deepseek-v3",
"messages": [
{"role": "user", "content": "Explain quantum computing in one paragraph."}
],
"max_tokens": 200
}'

Authentication
Include your API key in the Authorization header as a Bearer token.
Authorization: Bearer $ODINCLAW_API_KEY

Environment variable setup
Store your key in an environment variable. Never hardcode it or commit it to version control.
export ODINCLAW_API_KEY="oc_live_..." # Get yours at app.claw.odin-labs.ai

API keys use the prefix oc_live_ for live keys and oc_test_ for test keys. Manage your keys in the dashboard.
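It can be worth validating the key once at startup so a missing or malformed key fails fast with a clear message. A small sketch, assuming the prefix convention above (the helper name is illustrative, not part of any SDK):

```python
import os

def load_api_key():
    """Read the key from the environment and sanity-check its prefix."""
    key = os.environ.get("ODINCLAW_API_KEY")
    if not key:
        raise RuntimeError("Set ODINCLAW_API_KEY before calling the API")
    if not key.startswith(("oc_live_", "oc_test_")):
        raise RuntimeError("Unexpected key format: expected oc_live_ or oc_test_ prefix")
    return key
```

Pass the result as `api_key=load_api_key()` when constructing the client.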
Base URL
All API requests should be sent to:
https://api.claw.odin-labs.ai/v1

This is a drop-in replacement for https://api.openai.com/v1. Any OpenAI-compatible client works without modification.
Models & Pricing
Six models with automatic failover, retry with exponential backoff, and two free models.
deepseek-v3      Default
minimax-m2.5     Flagship
gemini-flash     New
trinity-large    Free
liquid-instruct  Free
claude-sonnet    Premium

Prices are per million tokens. Use deepseek-v3 as the default model for the best value. Free models have no cost but may have higher latency.
Endpoints
All endpoints follow the OpenAI API format.
/v1/chat/completions
Create a chat completion. Fully compatible with OpenAI's API.
Parameter        Type            Required
model            string          Yes
messages         array           Yes
max_tokens       integer         No
temperature      float           No
stream           boolean         No
tools            array           No
tool_choice      string|object   No
response_format  object          No
top_p            float           No

/v1/models
List all available models and their metadata.
curl https://api.claw.odin-labs.ai/v1/models \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

/v1/embeddings
Generate vector embeddings for text. See the Embeddings section for full examples.
Parameter        Type            Required
model            string          Yes
input            string|array    Yes
encoding_format  string          No
dimensions       integer         No

GET /v1/keys & POST /v1/keys
List your API keys or create a new one.
curl https://api.claw.odin-labs.ai/v1/keys \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

curl https://api.claw.odin-labs.ai/v1/keys \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ODINCLAW_API_KEY" \
  -d '{"name": "Production Key"}'

/v1/usage
Get token usage for the current billing period.
curl "https://api.claw.odin-labs.ai/v1/usage?period=this_month" \
  -H "Authorization: Bearer $ODINCLAW_API_KEY"

Rate Limits
Rate limits are enforced per API key based on your subscription tier.
When you exceed your rate limit, the API returns a 429 status code. When you exceed your monthly token quota, requests return 429 with a quota exceeded message. Upgrade your tier to increase limits.
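A common client-side pattern for 429 responses is retry with exponential backoff and jitter. A minimal sketch under stated assumptions: the delay schedule (base 1s, doubling, capped at 30s) is illustrative, and `request_fn` stands in for any SDK call that raises an error carrying a `status_code`:

```python
import random
import time

def backoff_delays(max_retries=5, base=1.0, cap=30.0):
    """Exponential backoff schedule: base * 2^attempt, capped at `cap` seconds."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def with_retries(request_fn, max_retries=5):
    """Call request_fn, retrying with jittered backoff on 429 errors."""
    for delay in backoff_delays(max_retries):
        try:
            return request_fn()
        except Exception as err:
            if getattr(err, "status_code", None) != 429:
                raise  # not a rate-limit error; surface it immediately
            time.sleep(delay + random.uniform(0, delay / 2))  # jittered wait
    return request_fn()  # final attempt; errors propagate to the caller
```

For example, `with_retries(lambda: client.chat.completions.create(...))` wraps any call from the Quick Start examples.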
Tool Calling
Define functions that the model can call. Pass tools with function schemas and tool_choice to control behavior. Supported on deepseek-v3, minimax-m2.5, claude-sonnet, and gemini-flash.
Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(f"Call: {call.function.name}({call.function.arguments})")

Node.js
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: "What's the weather in London?" }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  }],
  tool_choice: 'auto',
});

const toolCalls = response.choices[0].message.tool_calls;
if (toolCalls) {
  for (const call of toolCalls) {
    console.log(call.function.name, call.function.arguments);
  }
}

When the model decides to call a function, the response includes tool_calls in the message and finish_reason: "tool_calls". Execute the function, then send the result back with role: "tool".
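Completing that round trip can be sketched as follows: build a follow-up message list containing the assistant's tool-call message plus one tool message per call, then call the API again. The helper name and the `get_weather` implementation are illustrative; only the message shape (role "tool" with a matching tool_call_id) follows the API described above:

```python
import json

def build_tool_followup(messages, assistant_message, tool_results):
    """Append the assistant's tool-call message plus one 'tool' message per call.

    tool_results maps a tool_call id -> the value your function returned.
    """
    followup = list(messages) + [assistant_message]
    for call_id, result in tool_results.items():
        followup.append({
            "role": "tool",
            "tool_call_id": call_id,   # must match the id from tool_calls
            "content": json.dumps(result),
        })
    return followup

# Usage sketch, continuing the Tool Calling example above:
# results = {c.id: get_weather(**json.loads(c.function.arguments))
#            for c in message.tool_calls}
# final = client.chat.completions.create(
#     model="deepseek-v3",
#     messages=build_tool_followup(messages, message, results),
# )
```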
JSON Mode
Force the model to output valid JSON by setting response_format. Supported on deepseek-v3, minimax-m2.5, claude-sonnet, and gemini-flash.
Python
import json

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "List 3 planets as JSON"}],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
print(data)

Node.js
const response = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: 'List 3 planets as JSON' }],
  response_format: { type: 'json_object' },
});

const data = JSON.parse(response.choices[0].message.content!);
console.log(data);

Available formats: json_object for freeform JSON output, and json_schema with a schema definition for structured output (model-dependent).
Embeddings
Generate vector embeddings for text input. Compatible with OpenAI's embeddings API.
Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)

print(f"Dimensions: {len(response.data[0].embedding)}")  # 1536

Node.js
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const response = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'The quick brown fox jumps over the lazy dog',
});

console.log('Dimensions:', response.data[0].embedding.length); // 1536

cURL
curl https://api.claw.odin-labs.ai/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ODINCLAW_API_KEY" \
-d '{
"model": "text-embedding-3-small",
"input": "The quick brown fox jumps over the lazy dog"
}'

Available models: text-embedding-3-small (1536 dims, $0.02/M tokens) and text-embedding-3-large (3072 dims, $0.13/M tokens). Pass an array of strings to embed multiple texts in one call.
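Embeddings are typically compared with cosine similarity. A self-contained sketch (the vectors below are toy values; real ones come from /v1/embeddings as shown above):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Usage sketch with a batch request (client as in the Embeddings example):
# response = client.embeddings.create(
#     model="text-embedding-3-small",
#     input=["a quick brown fox", "a fast auburn fox"],
# )
# vecs = [item.embedding for item in response.data]
# print(cosine_similarity(vecs[0], vecs[1]))
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```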
Streaming
Set stream: true to receive Server-Sent Events (SSE) as the model generates tokens.
Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.claw.odin-labs.ai/v1",
    api_key=os.environ["ODINCLAW_API_KEY"],
)

stream = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Write a haiku about AI."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Node.js
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'https://api.claw.odin-labs.ai/v1',
  apiKey: process.env.ODINCLAW_API_KEY,
});

const stream = await client.chat.completions.create({
  model: 'deepseek-v3',
  messages: [{ role: 'user', content: 'Write a haiku about AI.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Each SSE event contains a JSON object with the same structure as the non-streaming response, but with partial content in choices[0].delta.content. The stream ends with a [DONE] message.
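If you need the full text after streaming finishes, accumulate the deltas as they arrive. A sketch using plain dicts shaped like the chunks described above (the fake chunks are illustrative):

```python
def accumulate_deltas(chunks):
    """Join choices[0].delta.content across chunks, skipping empty deltas."""
    parts = []
    for chunk in chunks:
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# With the SDK stream object from the example above, the same idea is:
# text = "".join(c.choices[0].delta.content or "" for c in stream)
print(accumulate_deltas([
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}}]},  # e.g. a final chunk carrying no content
]))  # Hello, world
```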
Errors
OdinClaw uses OpenAI-compatible error responses. All errors include a JSON body with an error object.
{
  "error": {
    "message": "Invalid API key provided.",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}

Status  Code                  Description
400     invalid_request       Malformed request body or parameters
401     authentication_error  Missing or invalid API key
403     permission_error      API key lacks required permissions
404     not_found             Unknown model or endpoint
429     rate_limit_error      Rate limit or quota exceeded
500     server_error          Internal server error
502     upstream_error        Model provider returned an error

Ready to start?
Get your API key in 30 seconds. 100K free tokens, no credit card required.
Get API Key