API Integration
Use AI models from CoreSynth directly in your favorite tools — Claude Code, Cursor, VS Code, Windsurf, OpenCode, and more. Our API is fully compatible with both OpenAI and Anthropic formats, so it works with any client that supports these protocols.
How It Works
CoreSynth operates an AI API Proxy — a unified endpoint that translates your requests into native formats for each AI provider (OpenAI, Anthropic, Google). The result is one URL and one API key to access all models.
```text
Your tool (Cursor, Claude Code, ...)
        │
        ▼
https://coresynth.io/v1/   ← CoreSynth API Proxy
        │
        ├─► Claude Sonnet / Opus (Anthropic)
        ├─► GPT-4o / o3 (OpenAI)
        └─► Gemini 2.5 Pro (Google)
```
Key Features
| Feature | Description |
|---|---|
| OpenAI Compatible | POST /v1/chat/completions — works with any OpenAI client |
| Anthropic Compatible | POST /v1/messages — native Anthropic SDK support |
| Streaming | SSE streaming support for both formats |
| Models | Claude, GPT, Gemini and more — one key for all |
| Rate Limits | Flexible limits based on your plan |
| Security | Bearer token auth, HTTPS, AES-256 key encryption |
Prerequisites
Before you begin, you need:
- Active Alex Code service on coresynth.io
- An API key in the format `cs-xxxxxxxxxxxxxxxxxxxxxxxx` (generate one in the dashboard)
- An AI tool installed (Claude Code, Cursor, etc.)
:::tip[Don’t have an API key?] Go to API Keys and generate one. It takes 30 seconds. :::
Endpoints
| Endpoint | Format | Description |
|---|---|---|
| https://coresynth.io/v1/chat/completions | OpenAI | Chat completions (standard) |
| https://coresynth.io/v1/messages | Anthropic | Messages API (native) |
| https://coresynth.io/v1/models | OpenAI | List available models |
| https://coresynth.io/v1/health | — | API health check (public) |
Authentication
All requests require an API key. We support two methods:
```text
# OpenAI style (Bearer token)
Authorization: Bearer cs-your-key-here

# Anthropic style (x-api-key header)
x-api-key: cs-your-key-here
```
Claude Code
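Either header works from any plain HTTP client. A minimal Python sketch of building the two styles (the helper name is ours; the header names come from the examples above):

```python
def auth_headers(api_key: str, style: str = "openai") -> dict:
    """Build CoreSynth auth headers in either OpenAI or Anthropic style."""
    if style == "openai":
        # OpenAI-compatible clients send the key as a Bearer token
        return {"Authorization": f"Bearer {api_key}"}
    if style == "anthropic":
        # Anthropic-compatible clients send the key in the x-api-key header
        return {"x-api-key": api_key}
    raise ValueError(f"unknown auth style: {style}")

print(auth_headers("cs-your-key-here"))
# {'Authorization': 'Bearer cs-your-key-here'}
```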
Claude Code is a terminal-based AI assistant by Anthropic. Through the CoreSynth API, you can use it with any model.
Setup
Claude Code supports custom Anthropic-compatible endpoints via environment variables:
```bash
# Set in ~/.bashrc or ~/.zshrc
export ANTHROPIC_API_KEY="cs-your-key-here"
export ANTHROPIC_BASE_URL="https://coresynth.io/v1"
```
Or use the config file `.claude/settings.json`:
```json
{
  "env": {
    "ANTHROPIC_API_KEY": "cs-your-key-here",
    "ANTHROPIC_BASE_URL": "https://coresynth.io/v1"
  }
}
```
Usage
```bash
# Start Claude Code
claude

# With a specific model
claude --model claude-sonnet-4-20250514
```
What You Get
- AI pair programming right in your terminal
- Read and edit files on your server via SSH
- Execute commands with AI assistance
- Complex refactoring and debugging
- All AI models — not just Claude, but GPT-4o, Gemini, and more
Cursor
Cursor is an AI-powered IDE forked from VS Code. It supports custom OpenAI-compatible endpoints.
Setup
- Open Settings (`Ctrl+,` / `Cmd+,`)
- Search for “Models” or “OpenAI API Key”
- Configure:
| Field | Value |
|---|---|
| OpenAI API Key | cs-your-key-here |
| OpenAI Base URL | https://coresynth.io/v1 |
Alternatively, edit ~/.cursor/settings.json:
```json
{
  "cursor.general.openaiApiKey": "cs-your-key-here",
  "cursor.general.openaiBaseUrl": "https://coresynth.io/v1"
}
```
Using Models
In Cursor’s Chat panel, you can select from all available models:
- `claude-sonnet-4-20250514` — fast, affordable
- `claude-opus-4-20250514` — most intelligent
- `gpt-4o` — versatile
- `gemini-2.5-pro` — long context
Benefits
- Tab completion with AI prediction
- Chat with code context — AI sees your entire project
- Inline edits — AI modifies code directly in the editor
- Composer — generates entire functions from descriptions
OpenCode
OpenCode is an open-source AI coding assistant for the terminal.
Setup
Create or edit ~/.opencode/config.json:
```json
{
  "provider": {
    "name": "openai",
    "apiKey": "cs-your-key-here",
    "baseURL": "https://coresynth.io/v1"
  },
  "model": {
    "name": "claude-sonnet-4-20250514"
  }
}
```
Usage
```bash
# Start
opencode

# Switch models in chat
/model gpt-4o
/model claude-opus-4-20250514
```
VS Code + Continue.dev
Continue is an open-source AI assistant for VS Code and JetBrains.
Setup
In ~/.continue/config.json:
```json
{
  "models": [
    {
      "title": "CoreSynth Claude Sonnet",
      "provider": "openai",
      "model": "claude-sonnet-4-20250514",
      "apiKey": "cs-your-key-here",
      "apiBase": "https://coresynth.io/v1"
    },
    {
      "title": "CoreSynth GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "cs-your-key-here",
      "apiBase": "https://coresynth.io/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "CoreSynth Autocomplete",
    "provider": "openai",
    "model": "claude-sonnet-4-20250514",
    "apiKey": "cs-your-key-here",
    "apiBase": "https://coresynth.io/v1"
  }
}
```
Windsurf (Codeium)
Windsurf is an AI IDE by Codeium.
Setup
- Open Settings → AI Providers
- Add a custom OpenAI-compatible provider:
| Field | Value |
|---|---|
| Provider Name | CoreSynth |
| API Key | cs-your-key-here |
| Base URL | https://coresynth.io/v1 |
Or in ~/.windsurf/settings.json:
```json
{
  "aiProviders": {
    "coresynth": {
      "apiKey": "cs-your-key-here",
      "baseUrl": "https://coresynth.io/v1"
    }
  }
}
```
JetBrains AI Assistant
JetBrains IDEs (IntelliJ, PhpStorm, PyCharm, WebStorm, …) support custom AI providers.
Setup
- Open Settings → Tools → AI Assistant
- Enable Custom AI Provider
- Configure:
| Field | Value |
|---|---|
| API URL | https://coresynth.io/v1/chat/completions |
| API Key | cs-your-key-here |
| Model | claude-sonnet-4-20250514 |
Direct API Calls
If you’re building your own application or script, you can call the API directly.
OpenAI Format
```bash
curl -X POST https://coresynth.io/v1/chat/completions \
  -H "Authorization: Bearer cs-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": true
  }'
```
Anthropic Format
```bash
curl -X POST https://coresynth.io/v1/messages \
  -H "x-api-key: cs-your-key-here" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": true
  }'
```
Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    api_key="cs-your-key-here",
    base_url="https://coresynth.io/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Write me a Python backup script."}
    ],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
Python (Anthropic SDK)
```python
from anthropic import Anthropic

client = Anthropic(
    api_key="cs-your-key-here",
    base_url="https://coresynth.io/v1"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Write me a Python backup script."}
    ]
)

print(message.content[0].text)
```
Node.js
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'cs-your-key-here',
  baseURL: 'https://coresynth.io/v1',
});

const stream = await client.chat.completions.create({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Write me an Express.js server.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
PHP
```php
<?php
$apiKey = 'cs-your-key-here';
$url = 'https://coresynth.io/v1/chat/completions';

$data = [
    'model' => 'claude-sonnet-4-20250514',
    'messages' => [
        ['role' => 'user', 'content' => 'Write me a PHP email sending script.']
    ],
    'stream' => false,
];

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer ' . $apiKey,
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode($data),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT => 120,
]);

$response = curl_exec($ch);
curl_close($ch);

$result = json_decode($response, true);
echo $result['choices'][0]['message']['content'];
```
Model List
Get the current list of models via the API:
```bash
curl -H "Authorization: Bearer cs-your-key-here" \
  https://coresynth.io/v1/models
```
Or check the overview at Models & Limits.
Rate Limits
| Parameter | Default |
|---|---|
| Requests per minute | 60 RPM |
| Max tokens | 128,000 |
| Timeout | 300 seconds |
| Max API keys | 5 per service |
When the limit is exceeded, the API returns 429 Too Many Requests with headers:
```text
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712505600
```
Error Codes
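One way to honor these headers is to wait until the reset timestamp before retrying. A minimal sketch, assuming only the header names shown above (the helper name is ours; `X-RateLimit-Reset` is treated as a Unix timestamp):

```python
def seconds_until_reset(headers: dict, now: float) -> float:
    """How long to sleep after a 429, based on the rate-limit headers."""
    if headers.get("X-RateLimit-Remaining") != "0":
        return 0.0  # quota left, no need to wait
    reset = float(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset - now)  # never return a negative sleep

headers = {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1712505600"}
print(seconds_until_reset(headers, now=1712505590.0))
# 10.0
```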
| Code | HTTP | Description |
|---|---|---|
| authentication_error | 401 | Invalid or missing API key |
| permission_error | 403 | Insufficient permissions (model requires Premium) |
| invalid_request_error | 400 | Invalid request format |
| not_found_error | 404 | Model not found or disabled |
| insufficient_quota | 429 | Request limit exceeded |
| upstream_error | 502 | Upstream provider error |
| server_error | 500 | Internal server error |
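As a rule of thumb from this table, 429, 502, and 500 are transient and worth retrying, while the 4xx client errors are not. A hedged sketch of that policy (the function name and the retry set are our choices, not part of the API):

```python
# Transient statuses from the error table: rate limit, upstream, server.
RETRYABLE = {429, 500, 502}

def should_retry(status: int) -> bool:
    """True for transient errors; 4xx client errors need a fixed request instead."""
    return status in RETRYABLE

print([s for s in (400, 401, 403, 404, 429, 500, 502) if should_retry(s)])
# [429, 500, 502]
```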
Tool Comparison
| Tool | Type | OpenAI | Anthropic | Streaming | Autocomplete |
|---|---|---|---|---|---|
| Claude Code | Terminal | Yes | Yes | Yes | — |
| Cursor | IDE | Yes | — | Yes | Yes |
| OpenCode | Terminal | Yes | — | Yes | — |
| VS Code + Continue | IDE | Yes | — | Yes | Yes |
| Windsurf | IDE | Yes | — | Yes | Yes |
| JetBrains | IDE | Yes | — | Yes | Yes |
| Custom App | — | Yes | Yes | Yes | — |
FAQ
Does it work with all models?
Yes. All models allowed on your API key are available through both endpoints (OpenAI and Anthropic format). Some models require an active Premium subscription.
Can I use streaming?
Yes, both endpoints support SSE streaming. Set "stream": true in your request.
What libraries can I use?
Any library compatible with OpenAI or Anthropic APIs:
- Python: `openai`, `anthropic`
- Node.js: `openai`, `@anthropic-ai/sdk`
- PHP: `openai-php/client`
- Go: `sashabaranov/go-openai`
- Ruby: `ruby-openai`
Is the API key secure?
Keys are encrypted with AES-256-CBC, transport is over HTTPS. Never store your key in code — use environment variables.
Next Steps
- API Keys — Generate and manage keys
- Alex Code — Standalone AI service
- Models & Limits — Available models overview
Need help? Open a support ticket.