Workload Types

Orchestrators can execute four categories of compute workload. Which workloads a node accepts depends on its hardware, the pipelines and models it has loaded, and how it is configured.
An Orchestrator running both video transcoding and AI inference is described as operating in a dual-workload configuration. This is not a separate mode: it is the same Orchestrator process with both pipelines enabled. See the Orchestrator Architecture section for the pipeline internals.

Supported AI Pipelines

Livepeer defines a standard set of AI pipelines that Orchestrators can advertise. Each pipeline maps to a category of inference task and a compatible set of models. An Orchestrator can support any subset of pipelines and models. Each combination of pipeline and model is independently priced and advertised. Gateways discover these via the AIServiceRegistry contract or from the Orchestrator’s capability response during session negotiation.
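Because each pipeline+model combination is priced and advertised independently, a Gateway's lookup key must include both fields. The sketch below illustrates that idea only; the type names, price units, and pipeline/model identifiers are hypothetical, not the protocol's actual wire format.

```go
package main

import "fmt"

// capabilityKey identifies one advertised combination. Both fields are
// needed: the same pipeline can carry different prices per model.
type capabilityKey struct {
	Pipeline string // e.g. "text-to-image" (illustrative)
	ModelID  string // e.g. a model identifier (illustrative)
}

// PriceTable maps each advertised pipeline+model combination to a price
// per unit of work (units here are made up for the example).
type PriceTable map[capabilityKey]int64

// PriceFor returns the advertised price for a combination, and whether
// the Orchestrator advertises that combination at all.
func (t PriceTable) PriceFor(pipeline, model string) (int64, bool) {
	p, ok := t[capabilityKey{Pipeline: pipeline, ModelID: model}]
	return p, ok
}

func main() {
	table := PriceTable{
		{Pipeline: "text-to-image", ModelID: "model-a"}: 1200,
		{Pipeline: "upscale", ModelID: "model-b"}:       800,
	}
	if p, ok := table.PriceFor("text-to-image", "model-a"); ok {
		fmt.Println("advertised price:", p)
	}
}
```

A combination absent from the table is simply not advertised, which is why the lookup returns a boolean alongside the price.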

How Capabilities Are Advertised

When a Gateway wants to route a job, it must find an Orchestrator that can handle it. Orchestrators make themselves discoverable through two mechanisms:

On-chain registration

Orchestrators register their service URI in the ServiceRegistry contract on Arbitrum. AI-capable Orchestrators additionally register with the AIServiceRegistry contract (or use the -aiServiceRegistry flag to connect to the AI subnet). This makes the Orchestrator discoverable to all Gateways that query the registry.
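As a rough sketch of what registration looks like operationally: apart from `-aiServiceRegistry`, which is named above, the flag names below follow common go-livepeer usage and may differ across versions; confirm against `livepeer -help` for your release before relying on them.

```shell
# Sketch only: verify flag names and values for your go-livepeer version.
livepeer \
  -orchestrator \
  -network arbitrum-one-mainnet \
  -serviceAddr <public-host>:8935 \
  -aiServiceRegistry
```

The service URI passed here is what ends up registered on-chain, so it must be reachable by Gateways from the public internet.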

Capability negotiation

When a Gateway establishes a session with an Orchestrator, the Orchestrator returns a capability manifest: the full list of pipelines it supports, the models it has loaded, and its price per unit for each. The Gateway uses this to decide whether to proceed with the session. Capabilities that are advertised but not actually available (e.g. models not yet loaded into VRAM) will result in job failures, so keep declared capabilities in sync with loaded models.
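One way to keep declared capabilities honest is to diff the manifest against the set of actually loaded models before advertising it. The manifest shape below is hypothetical, invented for this sketch; the real structure exchanged during session negotiation is defined by the protocol.

```go
package main

import "fmt"

// Manifest is an illustrative stand-in for a capability manifest:
// each pipeline maps to the models advertised for it.
type Manifest struct {
	Advertised map[string][]string
}

// staleCapabilities returns advertised pipeline/model pairs with no
// corresponding loaded model, i.e. entries that would cause job
// failures if a Gateway selected them.
func staleCapabilities(m Manifest, loaded map[string]bool) []string {
	var stale []string
	for pipeline, models := range m.Advertised {
		for _, model := range models {
			if !loaded[model] {
				stale = append(stale, pipeline+"/"+model)
			}
		}
	}
	return stale
}

func main() {
	m := Manifest{Advertised: map[string][]string{
		"text-to-image": {"model-a", "model-b"},
	}}
	loaded := map[string]bool{"model-a": true} // model-b not in VRAM
	fmt.Println("do not advertise:", staleCapabilities(m, loaded))
}
```

Running a check like this at startup, and again whenever models are loaded or evicted, catches the mismatch before a Gateway does.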

How Gateways Select Orchestrators

Understanding Gateway selection is essential for Orchestrators that want to attract work. Gateways do not randomly assign jobs - they apply a multi-factor ranking algorithm to every session. A Gateway that sends a job and receives an error or timeout will deprioritise your Orchestrator for subsequent sessions. Sustained availability and accurate capability declaration are the strongest signals for consistent job flow. See the Incentive Model section for how to configure competitive prices.
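To make the ranking dynamic concrete, here is a deliberately simplified scoring sketch. The real Gateway algorithm weighs more factors than this, and the field names, weights, and formula below are assumptions made for illustration; the point is only that failures depress a tracked success rate, which depresses future ranking.

```go
package main

import (
	"fmt"
	"sort"
)

// OrchStats is an invented record of what a Gateway might track per
// Orchestrator for a given pipeline+model.
type OrchStats struct {
	URI          string
	SuccessRate  float64 // fraction of recent jobs completed without error/timeout
	PricePerUnit float64 // advertised price for the requested capability
}

// score rewards sustained availability and penalises price. An error or
// timeout lowers SuccessRate, so it lowers every subsequent score.
func score(o OrchStats) float64 {
	return o.SuccessRate / (1 + o.PricePerUnit)
}

// rank orders candidates best-first by score.
func rank(cands []OrchStats) []OrchStats {
	sort.Slice(cands, func(i, j int) bool {
		return score(cands[i]) > score(cands[j])
	})
	return cands
}

func main() {
	ranked := rank([]OrchStats{
		{URI: "https://orch-a:8935", SuccessRate: 0.99, PricePerUnit: 1.0},
		{URI: "https://orch-b:8935", SuccessRate: 0.60, PricePerUnit: 0.5},
	})
	fmt.Println("selected:", ranked[0].URI)
}
```

Note that in this toy formula the cheaper Orchestrator still loses: its lower success rate outweighs its price advantage, which mirrors the advice above that availability matters more than undercutting on price.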

Capability Boundaries

Orchestrators handle compute and payment receipt. They do not handle job routing, application integration, or business-layer concerns. If you want to aggregate application demand and route work across multiple Orchestrators, that is the Gateway role. See the Gateway documentation for that path.

Orchestrator Role

What Orchestrators are and how the role has evolved.

Orchestrator Architecture

Internal components, request flow, and system interactions.

Incentive Model

Revenue streams, cost structure, and earnings potential.

Workloads and AI

Detailed setup guides for video, AI, and BYOC workloads.
Last modified on April 7, 2026