
API Integration

Use AI models from CoreSynth directly in your favorite tools — Claude Code, Cursor, VS Code, Windsurf, OpenCode, and more. Our API is fully compatible with both OpenAI and Anthropic formats, so it works with any client that supports these protocols.


How It Works

CoreSynth operates an AI API Proxy — a unified endpoint that translates your requests into native formats for each AI provider (OpenAI, Anthropic, Google). The result is one URL and one API key to access all models.

```
Your tool (Cursor, Claude Code, ...)
              │
              ▼
https://coresynth.io/v1/  ←  CoreSynth API Proxy
              ├─► Claude Sonnet / Opus (Anthropic)
              ├─► GPT-4o / o3 (OpenAI)
              └─► Gemini 2.5 Pro (Google)
```

Key Features

| Feature | Description |
|---|---|
| OpenAI Compatible | `POST /v1/chat/completions` — works with any OpenAI client |
| Anthropic Compatible | `POST /v1/messages` — native Anthropic SDK support |
| Streaming | SSE streaming support for both formats |
| Models | Claude, GPT, Gemini and more — one key for all |
| Rate Limits | Flexible limits based on your plan |
| Security | Bearer token auth, HTTPS, AES-256 key encryption |

Prerequisites

Before you begin, you need:

  1. An active Alex Code service on coresynth.io
  2. An API key in the format `cs-xxxxxxxxxxxxxxxxxxxxxxxx` — generate it in the dashboard
  3. An AI tool installed (Claude Code, Cursor, etc.)

:::tip[Don’t have an API key?]
Go to API Keys and generate one. It takes 30 seconds.
:::


Endpoints

| Endpoint | Format | Description |
|---|---|---|
| `https://coresynth.io/v1/chat/completions` | OpenAI | Chat completions (standard) |
| `https://coresynth.io/v1/messages` | Anthropic | Messages API (native) |
| `https://coresynth.io/v1/models` | OpenAI | List available models |
| `https://coresynth.io/v1/health` | — | API health check (public) |

Authentication

All requests require an API key. We support two methods:

```
# OpenAI style (Bearer token)
Authorization: Bearer cs-your-key-here

# Anthropic style (x-api-key header)
x-api-key: cs-your-key-here
```
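Either header style works against the proxy. A minimal Python sketch of building both (the helper names are ours, purely illustrative):

```python
# Build request headers for the two supported auth styles.
# "cs-your-key-here" is a placeholder -- substitute your real key.

def openai_style_headers(api_key: str) -> dict:
    """OpenAI-style auth: Bearer token in the Authorization header."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def anthropic_style_headers(api_key: str) -> dict:
    """Anthropic-style auth: key in the x-api-key header."""
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "Content-Type": "application/json",
    }

print(openai_style_headers("cs-your-key-here")["Authorization"])
```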

Claude Code

Claude Code is a terminal-based AI assistant by Anthropic. With the CoreSynth API you can use it with any model.

Setup

Claude Code supports custom API endpoints via environment variables:

```bash
# Set in ~/.bashrc or ~/.zshrc
export ANTHROPIC_API_KEY="cs-your-key-here"
export ANTHROPIC_BASE_URL="https://coresynth.io/v1"
```

Or use the config file `.claude/settings.json`:

```json
{
  "env": {
    "ANTHROPIC_API_KEY": "cs-your-key-here",
    "ANTHROPIC_BASE_URL": "https://coresynth.io/v1"
  }
}
```
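Before launching, you can sanity-check the configuration. A small Python sketch (the helper is ours; Claude Code reads these variables itself):

```python
import os

# Pre-flight check: confirm both variables are exported before
# launching Claude Code. Purely illustrative.

def missing_claude_vars(env=os.environ) -> list:
    """Return the names of required variables that are unset or empty."""
    required = ("ANTHROPIC_API_KEY", "ANTHROPIC_BASE_URL")
    return [name for name in required if not env.get(name)]

missing = missing_claude_vars()
if missing:
    print("Set these before running claude:", ", ".join(missing))
else:
    print("Environment configured.")
```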

Usage

```bash
# Start Claude Code
claude

# With a specific model
claude --model claude-sonnet-4-20250514
```

What You Get

  • AI pair programming right in your terminal
  • Read and edit files on your server via SSH
  • Execute commands with AI assistance
  • Complex refactoring and debugging
  • All AI models — not just Claude, but GPT-4o, Gemini, and more

Cursor

Cursor is an AI-powered fork of VS Code. It supports custom OpenAI endpoints.

Setup

  1. Open Settings (Ctrl+, / Cmd+,)
  2. Search for “Models” or “OpenAI API Key”
  3. Configure:
| Field | Value |
|---|---|
| OpenAI API Key | `cs-your-key-here` |
| OpenAI Base URL | `https://coresynth.io/v1` |

Alternatively, edit `~/.cursor/settings.json`:

```json
{
  "cursor.general.openaiApiKey": "cs-your-key-here",
  "cursor.general.openaiBaseUrl": "https://coresynth.io/v1"
}
```

Using Models

In Cursor’s Chat panel, you can select from all available models:

  • `claude-sonnet-4-20250514` — fast, affordable
  • `claude-opus-4-20250514` — most intelligent
  • `gpt-4o` — versatile
  • `gemini-2.5-pro` — long context

Benefits

  • Tab completion with AI prediction
  • Chat with code context — AI sees your entire project
  • Inline edits — AI modifies code directly in the editor
  • Composer — generates entire functions from descriptions

OpenCode

OpenCode is an open-source AI coding assistant for the terminal.

Setup

Create or edit `~/.opencode/config.json`:

```json
{
  "provider": {
    "name": "openai",
    "apiKey": "cs-your-key-here",
    "baseURL": "https://coresynth.io/v1"
  },
  "model": {
    "name": "claude-sonnet-4-20250514"
  }
}
```

Usage

```bash
# Start
opencode

# Switch models in chat
/model gpt-4o
/model claude-opus-4-20250514
```

VS Code + Continue.dev

Continue is an open-source AI assistant for VS Code and JetBrains.

Setup

In `~/.continue/config.json`:

```json
{
  "models": [
    {
      "title": "CoreSynth Claude Sonnet",
      "provider": "openai",
      "model": "claude-sonnet-4-20250514",
      "apiKey": "cs-your-key-here",
      "apiBase": "https://coresynth.io/v1"
    },
    {
      "title": "CoreSynth GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "cs-your-key-here",
      "apiBase": "https://coresynth.io/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "CoreSynth Autocomplete",
    "provider": "openai",
    "model": "claude-sonnet-4-20250514",
    "apiKey": "cs-your-key-here",
    "apiBase": "https://coresynth.io/v1"
  }
}
```

Windsurf (Codeium)

Windsurf is an AI IDE by Codeium.

Setup

  1. Open Settings → AI Providers
  2. Add a custom OpenAI-compatible provider:

| Field | Value |
|---|---|
| Provider Name | CoreSynth |
| API Key | `cs-your-key-here` |
| Base URL | `https://coresynth.io/v1` |

Or in `~/.windsurf/settings.json`:

```json
{
  "aiProviders": {
    "coresynth": {
      "apiKey": "cs-your-key-here",
      "baseUrl": "https://coresynth.io/v1"
    }
  }
}
```

JetBrains AI Assistant

JetBrains IDEs (IntelliJ, PhpStorm, PyCharm, WebStorm, …) support custom AI providers.

Setup

  1. Open Settings → Tools → AI Assistant
  2. Enable Custom AI Provider
  3. Configure:

| Field | Value |
|---|---|
| API URL | `https://coresynth.io/v1/chat/completions` |
| API Key | `cs-your-key-here` |
| Model | `claude-sonnet-4-20250514` |

Direct API Calls

If you’re building your own application or script, you can call the API directly.

OpenAI Format

```bash
curl -X POST https://coresynth.io/v1/chat/completions \
  -H "Authorization: Bearer cs-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": true
  }'
```

Anthropic Format

```bash
curl -X POST https://coresynth.io/v1/messages \
  -H "x-api-key: cs-your-key-here" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": true
  }'
```

Python (OpenAI SDK)

```python
from openai import OpenAI

client = OpenAI(
    api_key="cs-your-key-here",
    base_url="https://coresynth.io/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Write me a Python backup script."}
    ],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Python (Anthropic SDK)

```python
from anthropic import Anthropic

client = Anthropic(
    api_key="cs-your-key-here",
    base_url="https://coresynth.io/v1"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Write me a Python backup script."}
    ]
)

print(message.content[0].text)
```

Node.js

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'cs-your-key-here',
  baseURL: 'https://coresynth.io/v1',
});

const stream = await client.chat.completions.create({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Write me an Express.js server.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

PHP

```php
<?php
$apiKey = 'cs-your-key-here';
$url = 'https://coresynth.io/v1/chat/completions';

$data = [
    'model' => 'claude-sonnet-4-20250514',
    'messages' => [
        ['role' => 'user', 'content' => 'Write me a PHP email sending script.']
    ],
    'stream' => false,
];

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer ' . $apiKey,
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode($data),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT => 120,
]);

$response = curl_exec($ch);
curl_close($ch);

$result = json_decode($response, true);
echo $result['choices'][0]['message']['content'];
```

Model List

Get the current list of models via the API:

```bash
curl -H "Authorization: Bearer cs-your-key-here" \
  https://coresynth.io/v1/models
```

Or check the overview at Models & Limits.
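If you consume the endpoint programmatically, the response follows the standard OpenAI list shape. A sketch of extracting the model IDs — the sample payload here is illustrative, not CoreSynth's actual catalog:

```python
import json

# Illustrative /v1/models payload in the standard OpenAI list shape
# ({"object": "list", "data": [{"id": ...}, ...]}); the real catalog
# returned by the API will differ.
sample = json.loads("""
{"object": "list",
 "data": [
   {"id": "claude-sonnet-4-20250514", "object": "model"},
   {"id": "gpt-4o", "object": "model"},
   {"id": "gemini-2.5-pro", "object": "model"}
 ]}
""")

model_ids = [m["id"] for m in sample["data"]]
print(model_ids)
```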


Rate Limits

| Parameter | Default |
|---|---|
| Requests per minute | 60 RPM |
| Max tokens | 128,000 |
| Timeout | 300 seconds |
| Max API keys | 5 per service |

When the limit is exceeded, the API returns 429 Too Many Requests with headers:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712505600
```
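A client can use these headers to wait out the window instead of retrying immediately. A minimal sketch (the helper name is ours, not part of the API):

```python
import time

def seconds_until_reset(headers: dict, now: float = None) -> float:
    """Seconds to sleep before retrying after a 429.

    X-RateLimit-Reset is a Unix timestamp; fall back to 1 second
    if the header is missing."""
    if now is None:
        now = time.time()
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return 1.0
    return max(0.0, float(reset) - now)

# Example: ten seconds left until the window resets.
headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "1712505600",
}
print(f"retry in {seconds_until_reset(headers, now=1712505590.0):.0f}s")
```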

Error Codes

| Code | HTTP | Description |
|---|---|---|
| `authentication_error` | 401 | Invalid or missing API key |
| `permission_error` | 403 | Insufficient permissions (model requires Premium) |
| `invalid_request_error` | 400 | Invalid request format |
| `not_found_error` | 404 | Model not found or disabled |
| `insufficient_quota` | 429 | Request limit exceeded |
| `upstream_error` | 502 | Upstream provider error |
| `server_error` | 500 | Internal server error |
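When handling these errors in a client, a reasonable split (our suggestion, not a prescription from the API) is to retry transient failures with backoff and surface the rest to the user:

```python
# Transient failures worth retrying with backoff; everything else
# indicates a problem with the request or account and should be
# surfaced rather than retried.
RETRYABLE = {"insufficient_quota", "upstream_error", "server_error"}

def should_retry(error_code: str) -> bool:
    return error_code in RETRYABLE

print(should_retry("upstream_error"))        # transient, retry
print(should_retry("authentication_error"))  # fix the key instead
```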

Tool Comparison

| Tool | Type | OpenAI | Anthropic | Streaming | Autocomplete |
|---|---|---|---|---|---|
| Claude Code | Terminal | Yes | Yes | Yes | — |
| Cursor | IDE | Yes | — | Yes | Yes |
| OpenCode | Terminal | Yes | — | Yes | — |
| VS Code + Continue | IDE | Yes | — | Yes | Yes |
| Windsurf | IDE | Yes | — | Yes | Yes |
| JetBrains | IDE | Yes | — | Yes | Yes |
| Custom App | Any | Yes | Yes | Yes | — |

FAQ

Does it work with all models?

Yes. All models allowed on your API key are available through both endpoints (OpenAI and Anthropic format). Some models require an active Premium subscription.

Can I use streaming?

Yes, both endpoints support SSE streaming. Set "stream": true in your request.

What libraries can I use?

Any library compatible with OpenAI or Anthropic APIs:

  • Python: `openai`, `anthropic`
  • Node.js: `openai`, `@anthropic-ai/sdk`
  • PHP: `openai-php/client`
  • Go: `sashabaranov/go-openai`
  • Ruby: `ruby-openai`

Is the API key secure?

Keys are encrypted with AES-256-CBC, and all transport uses HTTPS. Never store your key in code — use environment variables.
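For example, read the key from an environment variable instead of hard-coding it. The variable name `CORESYNTH_API_KEY` below is only an illustration, not something the API requires:

```python
import os

# Load the key from the environment rather than embedding it in code.
# Export it first, e.g.:  export CORESYNTH_API_KEY="cs-your-key-here"
# (CORESYNTH_API_KEY is an illustrative name of our choosing.)

def load_api_key(env_var: str = "CORESYNTH_API_KEY") -> str:
    key = os.environ.get(env_var, "")
    if not key.startswith("cs-"):
        raise RuntimeError(f"{env_var} is unset or does not look like a CoreSynth key")
    return key
```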


Next Steps


Need help? Open a support ticket.
