Livepeer supports AI inference, real-time AI video, video transcoding, custom containerised models, and protocol-level contribution. These paths share infrastructure but have different APIs, SDKs, and architectural requirements.
What are you building?
Call AI inference from my application
Path: AI Gateway API via Studio

You send REST requests to a Livepeer gateway and receive processed outputs. No GPU is required; the Studio-managed gateway handles routing to the network.

Nine batch pipelines are available: text-to-image, image-to-image, image-to-video, image-to-text, audio-to-text, text-to-speech, upscale, segment-anything-2, and LLM.
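A minimal sketch of a batch text-to-image call. The endpoint path, parameter names, and default model id here are assumptions based on the public Livepeer AI API at time of writing; check the current API reference before relying on them.

```python
import json

# Assumed gateway endpoint -- verify against the Livepeer AI API reference.
GATEWAY_URL = "https://livepeer.studio/api/beta/generate/text-to-image"

def build_text_to_image_request(api_key: str, prompt: str,
                                model_id: str = "SG161222/RealVisXL_V4.0"):
    """Assemble headers and a JSON body for a batch text-to-image call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model_id": model_id, "prompt": prompt}
    return headers, json.dumps(body)

headers, payload = build_text_to_image_request("YOUR_API_KEY",
                                               "a lighthouse at dusk")
# Send with any HTTP client, e.g.:
#   resp = requests.post(GATEWAY_URL, headers=headers, data=payload)
#   image_url = resp.json()["images"][0]["url"]
```

The same request shape applies to the other batch pipelines; only the endpoint path and pipeline-specific fields change.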
Build real-time AI video (live webcam or stream effects)
Path: ComfyStream

ComfyStream applies ComfyUI workflows to live video streams at up to 25 FPS on an NVIDIA RTX 4090. Build your pipeline as a ComfyUI workflow JSON file; ComfyStream handles WebRTC transport and frame processing.

Requirements: NVIDIA GPU (RTX 3090 or better), ComfyUI, Python 3.12, CUDA 12.5+.

Supported models: StreamDiffusion, ControlNet, IPAdapter, FaceID, LoRA, Whisper (audio), Gemma (video understanding), SuperResolution.
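A workflow JSON in ComfyUI's API format is a graph: each key is a node id, each value names a node class and wires its inputs, with links written as `["node_id", output_slot]`. The node class names below are illustrative, not a tested pipeline; a quick sanity check before handing the file to ComfyStream is that every link points at a node that exists.

```python
# Illustrative two-node workflow graph in ComfyUI API (JSON) format.
workflow = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "frame.png"}},
    "2": {"class_type": "ControlNetApply",
          "inputs": {"image": ["1", 0], "strength": 0.8}},
}

def referenced_nodes(wf):
    """Collect node ids that appear as link inputs (["node_id", slot])."""
    refs = set()
    for node in wf.values():
        for value in node["inputs"].values():
            if isinstance(value, list) and len(value) == 2:
                refs.add(value[0])
    return refs

# Every referenced node must exist in the graph.
assert referenced_nodes(workflow) <= set(workflow)
```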
Deploy a custom AI model or container
Path: BYOC (Bring Your Own Container)

Package your model in Docker, implement PyTrickle's FrameProcessor interface, and register the container as a worker on the Livepeer network. Your container receives and publishes live video frames over the trickle streaming protocol.

Requirements: Docker on Linux with an NVIDIA GPU, Python model code (PyTorch recommended).

BYOC reached production grade in Phase 4 (January 2026). Embody SPE and Streamplace run production BYOC workloads today.
Integrate video transcoding or streaming
Path: Studio API — Video

The Studio REST API and SDKs manage the full lifecycle of livestreams and VOD assets: create streams, get ingest URLs, transcode assets, manage playback, and set access control.

SDKs: npm install livepeer / pip install livepeer / go get github.com/livepeer/livepeer-go

Free tier: 1,000 transcoding minutes per month. Growth: $100/month minimum.
Contribute to the Livepeer protocol codebase
Path: OSS Contribution

Four primary repositories: go-livepeer (protocol node, Go), ai-runner (AI inference runtime, Python), ComfyStream (real-time AI video, Python), protocol (Solidity contracts).

Requirements: Go 1.21+ for go-livepeer; Python 3.12 for ComfyStream and PyTrickle; a local testnet for integration testing.
Evaluate Livepeer for my use case
Path: Evaluation

Start with the developer stack concept page for the three access layers, then the workload fit page to match your use case to a path.

Key questions:
- Batch vs real-time AI — different APIs, latency, and infrastructure
- Studio-managed vs self-hosted gateway — managed is simpler; self-hosted controls cost at scale
- Standard pipelines vs custom model — BYOC for fully custom; gateway API for supported pipelines
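The questions above reduce to a small decision table. This toy helper encodes it; the path names match the sections on this page, but the function itself is illustrative, not part of any Livepeer SDK.

```python
def pick_path(realtime: bool, custom_model: bool) -> str:
    """Map the two pipeline-level questions to a build path.
    A custom model needs BYOC regardless of latency; otherwise
    real-time work goes to ComfyStream and batch work to the gateway."""
    if custom_model:
        return "BYOC"
    if realtime:
        return "ComfyStream"
    return "AI Gateway API"
```

The managed-vs-self-hosted gateway question is orthogonal: it changes who operates the gateway, not which path you build against.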
Build path reference
Related pages
Concepts
Understand the developer stack, AI pipelines, video infrastructure, and the OSS codebase.
Get Started
Quickstarts for AI inference, ComfyStream, video transcoding, and OSS contribution.
Custom AI Workflows
Deep guides for BYOC, ComfyStream, SDK integration, and model support.