Base URL: https://api.chipipay.com/v1

When to use each

| | /ai/chat | /ai/chat/stream |
| --- | --- | --- |
| How it works | Send a prompt, get the full response in one JSON object | Send a prompt, get tokens one-by-one via Server-Sent Events |
| Best for | Backend agents, batch processing, serverless functions, any case where you process the full response at once | Chatbots, real-time UIs, any interface where users watch the response appear |
| Latency feel | User waits until the entire response is generated (can be 1-5 seconds) | First token arrives in ~200 ms; text streams as it's generated |
| Cost | Per token | Same per-token cost, debited when the stream completes |
| Complexity | Simple: one HTTP request, one JSON response | Requires SSE parsing (see examples below) |

What you can build

  • Customer support bot: /chat/stream for real-time replies, /chat for classification/routing behind the scenes
  • Code assistant: /chat/stream so developers see code being written line by line
  • Transaction summarizer: /chat to batch-summarize wallet activity on your backend
  • AI-powered search: /chat to rerank results or generate answers from your data
  • DeFi copilot UI: /chat/stream for the conversational layer; pair with /think for portfolio decisions
  • Telegram/Discord bot: /chat for simple Q&A, /chat/stream if the platform supports progressive message edits
  • Content generation: /chat to generate descriptions, emails, or marketing copy

POST /ai/chat

General-purpose AI. Any prompt, any model. Powers chatbots, code assistants, finance apps.
Cost: per token (varies by model; see Models & Pricing).
curl -X POST https://api.chipipay.com/v1/ai/chat \
  -H "Authorization: Bearer sk_prod_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "haiku",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is USDC?"}
    ],
    "max_tokens": 256
  }'
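The curl call above can be wrapped in a small client. A minimal sketch using Python's standard library; the helper names (`build_chat_payload`, `chat`) are mine, not part of an official SDK:

```python
import json
import urllib.request

API_URL = "https://api.chipipay.com/v1/ai/chat"

def build_chat_payload(messages, model="haiku", max_tokens=256):
    """Assemble the JSON request body documented below."""
    return {"model": model, "messages": messages, "max_tokens": max_tokens}

def chat(api_key, messages, model="haiku", max_tokens=256):
    """POST to /ai/chat and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(messages, model, max_tokens)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # blocking network call
        return json.loads(resp.read())
```

Usage: `chat("sk_prod_YOUR_KEY", [{"role": "user", "content": "What is USDC?"}])` returns the response object described under "Response fields".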

Request body

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | "haiku" | Any model from Models & Pricing. |
| messages | array | Required | Array of objects with role ("system", "user", or "assistant") and content (string). |
| max_tokens | number | 4096 | Max output tokens. Capped at the model's limit. |
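A 400 round-trip can be avoided by checking the documented shape client-side first. A sketch based on the table above; the function name is mine:

```python
# Client-side check of the request body shape before sending.
# Role names come from the messages field description above.
VALID_ROLES = {"system", "user", "assistant"}

def validate_chat_request(body: dict) -> list[str]:
    """Return a list of problems; an empty list means the body looks valid."""
    errors = []
    messages = body.get("messages")
    if not isinstance(messages, list) or not messages:
        errors.append("messages is required and must be a non-empty array")
    else:
        for i, m in enumerate(messages):
            if m.get("role") not in VALID_ROLES:
                errors.append(f"messages[{i}].role must be system, user, or assistant")
            if not isinstance(m.get("content"), str):
                errors.append(f"messages[{i}].content must be a string")
    return errors
```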

Response fields

| Field | Type | Description |
| --- | --- | --- |
| text | string | The model's response text. |
| model | string | Model ID used (e.g. "haiku", "gpt-4o-mini"). |
| provider | string | "anthropic", "openai", or "google". |
| usage | object | Token counts: inputTokens, outputTokens, totalTokens. |
| cost | object | charged (USD) and currency. |
| finishReason | string | "stop" (completed), "length" (hit max_tokens), or "unknown". |
| latencyMs | number | Server-side processing time in milliseconds. |
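One practical use of these fields is logging cost and catching truncation. A sketch; the sample values in the test are illustrative, not a real API response:

```python
def summarize_response(resp: dict) -> str:
    """One-line log summary from the documented response fields.

    Flags finishReason == "length", which means the reply was cut off
    at max_tokens and the caller may want to raise the limit.
    """
    usage = resp["usage"]
    note = " (truncated: raise max_tokens)" if resp["finishReason"] == "length" else ""
    return (
        f"{resp['model']} via {resp['provider']}: "
        f"{usage['totalTokens']} tokens, ${resp['cost']['charged']:.6f}{note}"
    )
```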

POST /ai/chat/stream

Streaming version of /ai/chat. Returns Server-Sent Events (SSE) with token-by-token text deltas. Ideal for chatbots and real-time UIs. Same request body, models, and auth as /ai/chat.
Cost: same as /ai/chat, per token, debited after the stream completes.
curl -N -X POST https://api.chipipay.com/v1/ai/chat/stream \
  -H "Authorization: Bearer sk_prod_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "haiku", "messages": [{"role": "user", "content": "Tell me a short joke"}]}'

SSE event format

| Event | Shape | When |
| --- | --- | --- |
| Text delta | data: {"text":"token"} | Each token as it's generated |
| Completion | data: {"done":true,"model":"haiku","provider":"anthropic","usage":{...},"cost":{...}} | Stream finished |
| Error | data: {"error":"message"} | Provider error during stream |

The completion event includes model, provider, usage, and cost — same fields as the synchronous response. Credits are debited when this event fires.
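The three event shapes above can be handled with a small parser. A sketch in Python; it consumes any iterable of decoded text lines (for example, an HTTP response body read line by line) and the helper names are mine:

```python
import json

def iter_sse_events(lines):
    """Yield the JSON payload of each `data: {...}` SSE line.

    Blank keep-alive lines and anything without a data: prefix
    are skipped.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        yield json.loads(line[len("data:"):].strip())

def collect_stream(lines):
    """Accumulate text deltas; return (full_text, completion_event).

    Raises on an error event, mirroring the three shapes in the
    table above.
    """
    text, final = [], None
    for event in iter_sse_events(lines):
        if "text" in event:
            text.append(event["text"])
        elif event.get("done"):
            final = event  # carries model, provider, usage, cost
        elif "error" in event:
            raise RuntimeError(event["error"])
    return "".join(text), final
```

In a real UI you would render each delta as it arrives rather than joining at the end; `collect_stream` just shows the bookkeeping.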

Error handling

| HTTP status | Meaning |
| --- | --- |
| 201 | Success (chat) |
| 200 | Success (stream; SSE follows) |
| 400 | Invalid request (missing messages, unsupported model) |
| 401 | Invalid, inactive, or expired API key |
| 402 | Insufficient credits |
| 429 | Rate limit exceeded |
| 502 | Provider error (model returned an error) |
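A client can route these statuses into a handful of actions. A sketch; treating 429 and 502 as retryable is my assumption, not something the docs prescribe:

```python
# Map the documented status codes to a coarse action for the caller.
RETRYABLE = {429, 502}  # rate limit, provider error: back off and retry

def classify_status(status: int) -> str:
    """Return "ok", "retry", "top_up", or "fail" for a response status."""
    if status in (200, 201):
        return "ok"
    if status in RETRYABLE:
        return "retry"
    if status == 402:
        return "top_up"   # insufficient credits: add funds before retrying
    return "fail"         # 400/401: fix the request or the API key
```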