
Errors by HTTP status code

Cold model timeouts

A model is “cold” when no orchestrator on the network has it loaded in GPU memory. The first request triggers the load, which takes anywhere from 30 seconds to several minutes depending on model size. During that window a request may return a 503 or hang as pending; this is not an error in your code.

The first mitigation is to use warm models for latency-sensitive applications. The following models are kept warm across the network:
  Pipeline          Warm model
  text-to-image     SG161222/RealVisXL_V4.0_Lightning
  image-to-image    timbrooks/instruct-pix2pix
  audio-to-text     openai/whisper-large-v3
  image-to-text     Salesforce/blip-image-captioning-large
  LLM               meta-llama/Meta-Llama-3.1-8B-Instruct
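For programmatic fallback, the table above can be mirrored as a constant map. This is a sketch: the model IDs are as listed, but warm status can change over time, and the `resolveModel` helper is my own naming, not part of any SDK.

```typescript
// Warm model per pipeline, mirroring the table above.
// Treat this as a snapshot, not a guarantee: warm status can change.
const WARM_MODELS: Record<string, string> = {
  "text-to-image": "SG161222/RealVisXL_V4.0_Lightning",
  "image-to-image": "timbrooks/instruct-pix2pix",
  "audio-to-text": "openai/whisper-large-v3",
  "image-to-text": "Salesforce/blip-image-captioning-large",
  "llm": "meta-llama/Meta-Llama-3.1-8B-Instruct",
};

// Fall back to the warm model when the caller does not pin one explicitly.
function resolveModel(pipeline: string, requested?: string): string {
  const model = requested ?? WARM_MODELS[pipeline];
  if (!model) throw new Error(`no model known for pipeline "${pipeline}"`);
  return model;
}
```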
Implement retry with exponential backoff for any request that may target a cold model:
async function callWithRetry<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  let delay = 5000; // initial back-off: 5 seconds
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Retry only on 503 (model still loading); rethrow anything else.
      if (err.statusCode === 503 && attempt < maxRetries) {
        await new Promise((r) => setTimeout(r, delay));
        delay = Math.min(delay * 1.5, 60000); // grow 1.5x per attempt, cap at 60 seconds
      } else {
        throw err;
      }
    }
  }
  throw new Error("unreachable"); // every path above returns or throws
}
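To verify the back-off behaviour without hitting the network, the wrapper can be driven by a stub that fails with a 503 a fixed number of times. The snippet below is a self-contained demo: it repeats the wrapper inline (with the initial delay made a parameter so the demo finishes immediately) and uses a stub in place of a real request.

```typescript
// Self-contained demo of the retry wrapper, with the initial delay
// parameterised so the demo runs instantly.
async function callWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  initialDelay = 5000,
): Promise<T> {
  let delay = initialDelay;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      if (err.statusCode === 503 && attempt < maxRetries) {
        await new Promise((r) => setTimeout(r, delay));
        delay = Math.min(delay * 1.5, 60000);
      } else {
        throw err;
      }
    }
  }
  throw new Error("unreachable");
}

// Stub request: rejects with a 503 a fixed number of times, then resolves.
function makeFlaky(failuresBeforeSuccess: number) {
  let calls = 0;
  return async () => {
    if (calls++ < failuresBeforeSuccess) {
      const err: any = new Error("model loading");
      err.statusCode = 503;
      throw err;
    }
    return { ok: true, calls };
  };
}

callWithRetry(makeFlaky(2), 5, 1).then((r) => console.log(r)); // { ok: true, calls: 3 }
```

In a real integration, `fn` would be your fetch or SDK call; the only requirement is that failures surface an error carrying a `statusCode` field, so adapt that to whatever error shape your client throws.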

Diagnosing a non-responsive job

When a request hangs with no response, check in this order:
  1. Network connectivity — confirm you can reach livepeer.studio:
    curl -I https://livepeer.studio/api/beta/generate/text-to-image
    # Expect: HTTP/2 405. The endpoint only accepts POST, so a 405
    # (Method Not Allowed) confirms it is reachable; a 200 is not expected here.
    
  2. Studio status — check https://status.livepeer.studio for active incidents.
  3. Request construction — use curl to isolate from SDK behaviour:
    curl -v -X POST https://livepeer.studio/api/beta/generate/text-to-image \
      -H "Authorization: Bearer $LIVEPEER_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"prompt": "test", "model_id": "SG161222/RealVisXL_V4.0_Lightning"}'
    
  4. Model availability — some models are not always available on the network. Use a known warm model to confirm the integration works, then switch to your target model.
  5. Gateway availability — if you are using a self-hosted or third-party gateway rather than livepeer.studio, confirm the gateway is running and accepting connections.
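The checklist above can be condensed into a small triage helper that maps an observed outcome to the likeliest next step. This is purely illustrative; the function and its messages are my own naming, not part of any SDK.

```typescript
// Map an observed outcome to the likeliest next diagnostic step from the
// checklist above. `status` is the HTTP status, or null if nothing came back.
function triage(status: number | null): string {
  if (status === null) {
    return "No response at all: check network connectivity and gateway availability.";
  }
  if (status === 503) {
    return "503: likely a cold model; retry with backoff or switch to a warm model.";
  }
  if (status === 422) {
    return "422: request construction problem; inspect the detail field in the body.";
  }
  if (status === 401 || status === 403) {
    return "Auth failure: verify the Authorization header and API key.";
  }
  if (status >= 500) {
    return "Server-side error: check https://status.livepeer.studio for incidents.";
  }
  return `Unexpected status ${status}: reproduce with curl to isolate from SDK behaviour.`;
}
```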

422 validation errors

A 422 response includes a body that identifies the failing field:
{
  "detail": [
    {
      "loc": ["body", "model_id"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
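When handling this programmatically, the detail array can be flattened into readable messages. A minimal sketch, assuming the body shape shown above; the type and function names are my own:

```typescript
// One entry in the 422 body's detail array, as shown above.
interface ValidationItem {
  loc: (string | number)[];
  msg: string;
  type: string;
}

// Turn a 422 body into "body.model_id: field required"-style strings,
// suitable for logging or surfacing to the caller.
function describe422(body: { detail: ValidationItem[] }): string[] {
  return body.detail.map((d) => `${d.loc.join(".")}: ${d.msg}`);
}
```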
Common causes:
  • model_id is missing (required on all pipelines)
  • model_id format is wrong — must be a Hugging Face model ID string, e.g., SG161222/RealVisXL_V4.0_Lightning
  • Image input is sent as JSON instead of multipart/form-data (image-to-image, upscale, segment-anything-2)
  • Dimension values are not integers (use 1024, not "1024")
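For the multipart pipelines, the request body must be built as form data rather than JSON. A sketch using the FormData and Blob globals available in Node 18+; the field names mirror the JSON parameters above, but verify them against the pipeline's parameter reference before relying on this.

```typescript
// Build a multipart body for an image-to-image request (Node 18+ globals).
// Field names here are assumed from the JSON parameters; check the pipeline
// reference for the exact set.
function buildImageToImageForm(image: Blob, prompt: string): FormData {
  const form = new FormData();
  form.append("image", image, "input.png"); // binary part, not base64-in-JSON
  form.append("prompt", prompt);
  form.append("model_id", "timbrooks/instruct-pix2pix");
  form.append("num_inference_steps", "25"); // form values are strings
  return form;
}
```

Pass the form directly as the fetch body and let fetch generate the multipart boundary; setting Content-Type yourself would omit the boundary and break the request.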

Getting help

If the above steps do not resolve the issue:
  • Discord: #builders and #ai-help channels in the Livepeer Discord. Include your request body (redact your API key), the response status, and the response body.
  • Forum: the AI Research category at https://forum.livepeer.org for inference-specific questions.
  • GitHub: File an issue against livepeer/ai-runner for suspected network-level bugs.

Last modified on April 7, 2026