# Bring Your Own Key
If you have your own API key from a provider (OpenAI, Anthropic, Google, XAI), you can pass it to the Nebo API. Your key is then used instead of Nebo's pool keys for that request.
## How It Works

Add the `X-Provider-API-Key` header to any chat completion request:

```bash
curl https://janus.neboloop.com/v1/chat/completions \
  -H "Authorization: Bearer $NEBO_TOKEN" \
  -H "X-Provider-API-Key: sk-your-openai-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
The JWT token is still required for authentication and usage tracking. The provider API key only replaces the key used to call the upstream provider.
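To make the two credentials explicit, here is a minimal Python sketch of the same request using only the standard library. It builds the request without sending it; the JWT and provider key values are placeholders.

```python
import json
import urllib.request

def byok_request(jwt: str, provider_key: str, model: str, prompt: str):
    """Build (but do not send) a BYOK chat-completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://janus.neboloop.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {jwt}",    # Nebo auth + usage tracking
            "X-Provider-API-Key": provider_key,  # replaces Nebo's pool key upstream
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = byok_request("<your-jwt-token>", "sk-your-openai-key", "gpt-5.2", "Hello")
# urllib.request.urlopen(req) would send it; omitted here.
```

Both headers travel on every request: remove `Authorization` and the gateway rejects the call; remove `X-Provider-API-Key` and the request falls back to Nebo's pool keys.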
## With the OpenAI SDK

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://janus.neboloop.com/v1",
    api_key="<your-jwt-token>",
    default_headers={
        "X-Provider-API-Key": "sk-your-openai-key"
    }
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello"}]
)
```
## Supported Providers

BYOK works with all providers:

| Provider | Key Format | Example |
|---|---|---|
| OpenAI | `sk-...` | `sk-proj-abc123...` |
| Anthropic | `sk-ant-...` | `sk-ant-api03-abc123...` |
| Google | `AIza...` | |
| XAI | `xai-...` | `xai-abc123...` |
| Bedrock (via Mantle) | Bearer token | AWS bearer token |
The key format must match the provider that serves the model you're requesting. If you send an OpenAI key but request an Anthropic model, the upstream call will fail with an authentication error.
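Since a mismatched key only fails after the upstream call, a client-side prefix check can catch the mistake earlier. The helper below is hypothetical (not part of the Nebo API); the prefixes come from the table above.

```python
# Hypothetical helper: sanity-check that a BYOK key's prefix matches the
# provider expected to serve the requested model, before sending the request.
KEY_PREFIXES = {
    "anthropic": "sk-ant-",  # checked before "openai": "sk-ant-" also starts with "sk-"
    "openai": "sk-",
    "google": "AIza",
    "xai": "xai-",
}

def key_matches_provider(api_key: str, provider: str) -> bool:
    """Return True if the key's prefix matches the given provider."""
    prefix = KEY_PREFIXES.get(provider)
    return prefix is not None and api_key.startswith(prefix)
```

An OpenAI key paired with an Anthropic model, for example, fails the check (`key_matches_provider("sk-proj-abc123", "anthropic")` is false) just as the upstream call would fail with an authentication error.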
## When to Use BYOK
| Scenario | Recommendation |
|---|---|
| You want higher rate limits than your plan allows | Use BYOK to bypass Nebo's pool limits |
| You have negotiated enterprise pricing with a provider | Use BYOK to take advantage of your rates |
| You need guaranteed capacity for production workloads | Use BYOK for dedicated throughput |
| You want to test a specific provider's behavior | Use BYOK with direct model selection |
## What BYOK Changes
| Aspect | Without BYOK | With BYOK |
|---|---|---|
| API key used | Nebo's pool key (round-robin) | Your key |
| Provider rate limits | Shared across Nebo users | Your own limits |
| JWT auth | Required | Still required |
| Usage tracking | Tracked normally | Still tracked |
| Budget deduction | Deducted from your plan | Still deducted from your plan |
| Smart routing | Works normally | Works normally |
| Pool rotation | Round-robin with cooldown | Bypassed (your key used directly) |
**Important:** BYOK requests still consume your Nebo budget. The provider API key only replaces the upstream authentication; billing and usage tracking on the Nebo side remain unchanged.
## Combining BYOK with Direct Model Selection
BYOK pairs naturally with direct model selection. Specify both the model and your key:
```python
response = client.chat.completions.create(
    model="gpt-5.2",  # Direct model selection
    messages=[{"role": "user", "content": "Complex reasoning task..."}],
    extra_headers={
        "X-Provider-API-Key": "sk-your-key"  # Your OpenAI key
    }
)
```
This gives you full control: you choose the model and provide the API key, while Nebo handles the OpenAI-compatible protocol, streaming, and usage tracking.
## Error Handling

If your BYOK key is invalid or rate-limited, the error comes from the upstream provider:

```json
{
  "error": {
    "message": "Incorrect API key provided: sk-your...key.",
    "type": "server_error",
    "code": "provider_error"
  }
}
```
BYOK rate limit errors do not affect Nebo's pool keys. If your key gets rate-limited, only your requests are affected — other Nebo users and your non-BYOK requests continue working normally.
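A client can use the `provider_error` code shown above to tell upstream failures apart from Nebo-side failures. The classifier below is a hedged sketch, not part of any SDK; it assumes error payloads follow the shape on this page.

```python
# Classify an error payload returned by the gateway. A "provider_error" code
# means the failure came from the upstream provider (e.g. an invalid or
# rate-limited BYOK key), so fixing the provider key -- not the JWT -- is the
# right response.
def is_byok_provider_error(payload: dict) -> bool:
    """True when the error originated at the upstream provider."""
    err = payload.get("error", {})
    return err.get("code") == "provider_error"

sample = {
    "error": {
        "message": "Incorrect API key provided: sk-your...key.",
        "type": "server_error",
        "code": "provider_error",
    }
}
assert is_byok_provider_error(sample)
```

On a provider error, retrying with the same key is unlikely to help; either correct the key or drop the `X-Provider-API-Key` header to fall back to Nebo's pool keys.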
## Next Steps
- Chat Completions — full endpoint reference
- Models — choose which model to pair with your key
- Usage & Limits — understand budget tracking with BYOK