Hobbyist vs Commercial
The Livepeer Orchestrator ecosystem contains two broadly distinct operating models that coexist on the same protocol. Understanding which model applies determines everything from hardware investment to how pricing is set. Neither model is superior - they reflect different operator goals and capabilities. Many operators run a hybrid: inflation rewards provide a base, while service fees from well-served application workloads provide the upside.
Why Service Fees Scale
For commercial operators, ETH service fees - not LPT inflation rewards - are the primary economic lever. The reason is straightforward: fee income scales with workload volume, while inflation rewards scale with stake. An Orchestrator with significant staked LPT earns a fixed percentage of the round’s inflation regardless of how many jobs it processes. An Orchestrator serving high-volume AI inference workloads earns ETH proportional to every pixel processed and every model inference returned. For an Orchestrator actively serving a high-volume Gateway - a streaming platform, an AI product, or a real-time video application - monthly ETH fee income from job processing can exceed LPT inflation income by a substantial margin.
Commercial fee income is variable - it depends on Gateway demand, job mix, and market pricing conditions. Inflation rewards are predictable by stake. Most commercial operators treat inflation as a base and fees as the variable upside.
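The scaling difference can be made concrete with a toy model. All numbers below are hypothetical illustration values, not protocol parameters; real figures depend on round inflation, total network stake, and Gateway pricing.

```python
# Toy model: inflation rewards scale with stake, fees scale with jobs served.
# All constants are hypothetical illustration values, not protocol parameters.

def inflation_reward(stake_lpt: float, round_inflation_rate: float) -> float:
    """LPT minted against this Orchestrator's stake in one round."""
    return stake_lpt * round_inflation_rate

def fee_income(jobs: int, avg_fee_eth_per_job: float) -> float:
    """ETH earned from serving jobs; grows with volume, not with stake."""
    return jobs * avg_fee_eth_per_job

# Doubling stake doubles inflation income...
assert inflation_reward(20_000, 0.0002) == 2 * inflation_reward(10_000, 0.0002)
# ...but leaves fee income untouched; only job volume moves it.
assert fee_income(40_000, 0.00001) == 4 * fee_income(10_000, 0.00001)
```

The asymmetry is the whole argument: stake is capped by capital, while job volume is capped only by hardware and Gateway demand.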
What Commercial Operation Requires
Serving application workloads at commercial scale imposes concrete operational requirements beyond what is needed for passive inflation earning.
Uptime and reliability
A Gateway operator building a product on Livepeer’s network needs the Orchestrators it selects to be consistently available. If an Orchestrator fails mid-session, the Gateway must fail over - introducing latency and a degraded user experience. Repeated failures result in the Orchestrator being deprioritised in the Gateway’s selection algorithm. Commercial Orchestrators target 99%+ uptime. This requires:
- Automated monitoring with immediate alerts on node failure
- Automated restart and recovery
- Stable, redundant connectivity (not shared home broadband)
- Consistent power supply (UPS or colocation)
- Hardware health monitoring (GPU temperatures, VRAM utilisation)
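One common way to satisfy the automated restart requirement is a process supervisor such as systemd. The unit below is a minimal sketch; the binary path, flags, and unit name are illustrative, not a prescribed deployment.

```ini
# /etc/systemd/system/livepeer-orchestrator.service (illustrative)
[Unit]
Description=Livepeer Orchestrator
After=network-online.target
Wants=network-online.target

[Service]
# Path and flags are examples; substitute your own deployment values.
ExecStart=/usr/local/bin/livepeer -orchestrator
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`Restart=always` with a short `RestartSec` gives crash recovery in seconds; alerting (covered later in this page) should still fire so a human investigates the root cause.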
Model warm-up management
For AI inference workloads, cold model starts (loading a model from disk into VRAM on first request) introduce latency that breaks user-facing SLAs. Commercial AI Orchestrators pre-load all advertised models at startup and keep them warm. The practical implication: VRAM requirements for commercial AI operation are determined by the sum of all models that must be simultaneously loaded, not just the largest single model.
Latency targets
Gateways rank Orchestrators by response latency, among other factors. Consistently slow responses - even within acceptable job completion time - reduce long-term selection probability. Low latency depends on:
- Network proximity to high-volume Gateways
- Low GPU scheduling latency (dedicated GPU, not shared)
- Fast storage for model weights (NVMe preferred over SATA)
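The warm-model requirement above has a direct capacity-planning consequence: VRAM must be budgeted for the sum of all loaded models, not the maximum. A minimal sketch, with hypothetical model names and sizes:

```python
# VRAM budgeting for warm models: requirements add up, they don't overlap.
# Model names and per-model sizes below are hypothetical illustration values.

def vram_needed_gb(warm_models: dict[str, float]) -> float:
    """Total VRAM consumed when every advertised model stays loaded."""
    return sum(warm_models.values())

warm = {"image-gen-model": 10.0, "upscaler-model": 6.0, "asr-model": 4.0}

total = vram_needed_gb(warm)   # 20.0 GB: the sum...
largest = max(warm.values())   # 10.0 GB: ...not just the biggest model
assert total == 20.0 and largest == 10.0
```

A 24 GB card that comfortably fits any one of these models can still be over budget once all three must stay resident together.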
Working with Gateways
The standard Orchestrator discovery model is anonymous: a Gateway queries the ServiceRegistry, ranks nodes by capability and price, and selects the best match. Commercial operators take a more active approach.
Per-Gateway pricing
The -pricePerGateway flag allows Orchestrators to set different prices for specific Gateway
addresses. This is the primary tool for commercial Gateway relationships:
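A sketch of how per-Gateway pricing can be wired up. The Gateway address is a placeholder, the filename is hypothetical, and the exact JSON schema should be verified against the go-livepeer documentation for your version:

```json
{
  "gateways": [
    {
      "ethaddress": "0x0000000000000000000000000000000000000000",
      "priceperunit": 1000,
      "pixelsperunit": 1
    }
  ]
}
```

The node would then be started with `-pricePerGateway` pointing at this file; Gateways not listed fall back to the generic -pricePerUnit price.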
Capability signalling
Gateways discover AI capabilities through the capability manifest returned during session negotiation. Commercial Orchestrators ensure their declared capabilities are accurate and stable - advertising a model that is slow to load or frequently unavailable damages the Gateway’s product and the Orchestrator’s selection score. Practical discipline for commercial capability management:
- Declare only models that are loaded and warm at startup
- Remove capability declarations for models that are not being actively served
- Use -aiModels to specify exactly which pipeline/model combinations to load on startup
- Monitor model load times and remove slow-start models from the active set
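The -aiModels flag takes a JSON file describing which pipeline/model pairs to serve. The sketch below uses an illustrative model ID, and the field names should be checked against the go-livepeer AI documentation for your version:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "example-org/example-model",
    "warm": true
  }
]
```

Marking a model as warm keeps it loaded from startup rather than on first request, which is exactly the discipline the list above calls for: anything in this file should be resident and ready, and anything not resident should not be in this file.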
Building Gateway relationships
Active commercial relationships with Gateways typically develop through:
- Consistent performance history visible on the Livepeer Explorer
- Participation in the #orchestrators channel on the Livepeer Discord
- Direct outreach to Gateway SPEs and ecosystem partners
- Demonstrated capability support for pipelines that specific Gateways need
How to Position for Commercial Workloads
The transition from passive inflation earner to active commercial Orchestrator requires changes in both technical setup and operational approach.
Capability selection
Commercial Orchestrators focus on pipelines with active demand rather than loading every available model. Check current network demand at tools.livepeer.cloud/ai/network-capabilities to see which pipelines are being routed and at what prices. Prioritise:
- Pipelines with few available Orchestrators and active demand
- High-VRAM models that exclude commodity GPU competition
- Cascade (real-time AI) pipelines if hardware supports it - higher per-job value
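One way to formalise this prioritisation is a simple supply/demand heuristic. The scoring function and all numbers below are entirely hypothetical; the inputs would come from whatever demand data you collect via the network-capabilities tool:

```python
# Hypothetical heuristic: favour pipelines where demand is shared
# among few competing Orchestrators. Numbers are illustrative only.

def pipeline_score(jobs_per_day: float, orchestrators_serving: int) -> float:
    """Higher score = more demand divided among fewer Orchestrators."""
    return jobs_per_day / max(orchestrators_serving, 1)

pipelines = {
    "crowded-pipeline": pipeline_score(jobs_per_day=5000, orchestrators_serving=50),
    "underserved-pipeline": pipeline_score(jobs_per_day=1000, orchestrators_serving=2),
}

# The underserved pipeline wins despite lower absolute demand.
assert max(pipelines, key=pipelines.get) == "underserved-pipeline"
```

The point of the toy heuristic is the ordering it produces, not the exact formula: a pipeline with a fifth of the demand but a twenty-fifth of the competition is the better target.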
Pricing discipline
Commercial pricing differs from setting a generic -pricePerUnit. It requires:
- Understanding the Gateway’s maxPricePerUnit ceiling for each pipeline
- Setting prices that are competitive but not floor-level (under-pricing signals low quality to some Gateway operators)
- Using -pricePerGateway to offer volume discounts to specific Gateways
- Using -autoAdjustPrice carefully - automatic adjustment can undercut commercial relationships
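The first two constraints above reduce to a band: a quote must clear the bottom of the market but stay under each Gateway's ceiling. A toy check, with all values hypothetical:

```python
# Toy pricing check: a quote should sit above a bottom-of-market floor
# heuristic but at or below the Gateway's maxPricePerUnit ceiling.
# All prices below are hypothetical units, not real network values.

def price_is_viable(quote: int, gateway_ceiling: int, network_floor: int) -> bool:
    """True when the quote avoids the floor but remains payable."""
    return network_floor < quote <= gateway_ceiling

assert price_is_viable(quote=1100, gateway_ceiling=1500, network_floor=800)
assert not price_is_viable(quote=1600, gateway_ceiling=1500, network_floor=800)  # over ceiling
assert not price_is_viable(quote=800, gateway_ceiling=1500, network_floor=800)   # at the floor
```

Where the quote sits inside that band is the commercial judgment call; the check only rules out quotes that are unpayable or that signal low quality.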
Infrastructure investment
Commercial operations typically require infrastructure changes that hobbyist setups do not:
- Colocation or cloud GPU instead of home hardware, for reliability and connectivity
- Dedicated GPUs with no competing workloads (mining rigs sharing GPUs with Livepeer introduce unpredictable latency)
- Redundant connectivity with failover (not a single home ISP connection)
- UPS or colocation power for uptime targets above 99%
Monitoring and alerting
Commercial uptime targets require automated monitoring. go-livepeer exposes a Prometheus
metrics endpoint (port 7935 by default). Connect this to an alerting stack (Grafana,
PagerDuty, or equivalent) to detect:
- Node offline or unreachable
- GPU memory pressure (model eviction)
- Reward call failures
- Unusual session failure rates
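A minimal Prometheus alerting sketch for the first condition in the list, using the standard `up` metric for scrape-target liveness. The scrape job name and thresholds are assumptions; adjust them to your own Prometheus configuration.

```yaml
# Fire when the Orchestrator's metrics endpoint (default :7935) stops
# answering scrapes. The job label "livepeer" is illustrative.
groups:
  - name: livepeer-orchestrator
    rules:
      - alert: OrchestratorDown
        expr: up{job="livepeer"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Livepeer Orchestrator node is offline or unreachable"
```

The remaining conditions (GPU memory pressure, reward call failures, session failure rates) would each get their own rule against the corresponding go-livepeer metric once you have confirmed its name on your node's /metrics output.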
The Commercial Operator Landscape
Several types of operator run commercial-grade Orchestrators on the Livepeer network.
Pool operators manage the Orchestrator registration, on-chain staking, and reward calling for a fleet of GPU workers. Workers register under the pool’s Orchestrator address; the pool earns a margin on their job income. Pool operators are effectively GPU infrastructure businesses, combining the service fee model with a managed-Orchestrator offering.
Enterprise GPU operators run dedicated fleets serving specific AI application workloads. These operators serve Gateways that power user-facing AI products and require SLA-level commitments. Their hardware is typically data-centre grade with redundant connectivity.
Dual-workload operators run both video transcoding and AI inference from the same infrastructure, earning fees from both streams. This is the natural next step for video Orchestrators who invest in high-VRAM GPUs.
The commercial Orchestrator landscape is evolving. The Livepeer Forum and the #orchestrators Discord channel contain the most current information on who is operating commercially and what workloads are available.
Related Pages
Operating Rationale
Financial evaluation - costs, revenue streams, and the decision matrix for choosing your path.
Pricing Strategy
How to configure competitive prices for video and AI workloads, including per-Gateway rates.
Working with Gateways
The technical and operational details of the Gateway-Orchestrator relationship.
Protocol Influence
Why operating an Orchestrator matters beyond earnings - governance weight and network stewardship.