Active Set
Definition: The top 100 orchestrators by total bonded stake, eligible to receive video transcoding work in the current round.
Context: Active set membership is determined at round start by ranking orchestrators by total bonded LPT (self-stake plus delegated stake). AI inference routing does not require active set membership — it prioritises capability and price over stake position.
Status: current
Pages: orchestrators/staking, orchestrators/protocol
AIServiceRegistry
Definition: Smart contract registering AI service capabilities for orchestrators on the Livepeer AI network.
Context: Orchestrators optionally advertise their AI pipelines and models on-chain via this contract, enabling capability-based routing by gateways.
Status: current
Pages: orchestrators/ai, orchestrators/contracts

aiModels.json
Definition: JSON configuration file specifying available AI models, including pipeline type, model ID, pricing, and warm status, for an orchestrator node.
Context: The primary config file for AI orchestrators. Each entry defines which model to load, at what price, and whether it should be pre-warmed on startup.
Status: current
Pages: orchestrators/ai, orchestrators/config
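An illustrative aiModels.json entry is sketched below. The field names follow published AI orchestrator examples but should be checked against your go-livepeer release; the model ID and prices here are placeholders, not recommendations.

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "stabilityai/sd-turbo",
    "price_per_unit": 4700000,
    "pixels_per_unit": 1,
    "warm": true
  }
]
```

Each object configures one model: `pipeline` and `model_id` select what to run, the price fields set the advertised rate, and `warm: true` requests pre-loading into GPU memory at startup.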
AI Runner
Definition: The container process that executes AI model inference jobs; go-livepeer communicates with it via HTTP and it loads models into GPU memory to process requests.
Context: Configured via aiModels.json and the -aiWorker / -aiModels CLI flags. Each AI runner handles one or more pipelines on a dedicated GPU.
Status: current
Pages: orchestrators/ai, orchestrators/setup

aiWorker
Definition: CLI flag starting a go-livepeer node as a dedicated AI worker process that connects to an orchestrator and handles AI inference jobs.
Context: Enables the orchestrator to offload GPU inference work to a separate subprocess. Multiple aiWorker processes can be connected to a single orchestrator for multi-GPU setups.
Status: current
Pages: orchestrators/ai, orchestrators/architecture
BondingManager
Definition: Core smart contract managing bonding, delegation, staking logic, and fund ownership on the Livepeer protocol.
Context: The central contract for all LPT stake operations — bonding, unbonding, delegation, reward distribution, and slash execution.
Status: current
Pages: orchestrators/contracts, orchestrators/staking

BYOC (Bring Your Own Container)
Definition: Deployment pattern enabling orchestrators to run custom Docker containers for AI workloads alongside standard Livepeer pipelines.
Context: BYOC containers must conform to the Livepeer AI worker API specification. Used by teams deploying proprietary or experimental models not available in the standard pipeline set.
Status: current
Pages: orchestrators/compute, orchestrators/ai

Capability Advertisement
Definition: Mechanism by which orchestrators inform gateways of the AI services, pipelines, and models they can process.
Context: Orchestrators broadcast their capabilities either on-chain via the AIServiceRegistry or off-chain via webhook discovery. Gateways use this data to route inference jobs to capable nodes.
Status: current
Pages: orchestrators/discovery, orchestrators/ai

Capability Matching
Definition: Process by which a gateway routes an AI task to an appropriate orchestrator based on advertised capabilities.
Context: The gateway compares the requested pipeline and model against each orchestrator’s advertised capabilities and selects the best match based on price, performance score, and availability.
Status: current
Pages: orchestrators/discovery, orchestrators/routing
Cascade
Definition: Strategic vision for Livepeer to become the leading platform for real-time AI video pipelines, representing the current phase of protocol development.
Context: The Cascade phase introduced AI inference as a first-class network capability, enabling orchestrators to advertise and earn from AI workloads alongside video transcoding.
Status: current
Pages: orchestrators/protocol, orchestrators/upgrades

Clearinghouse
Definition: Contract or system handling settlement of payments between gateways and orchestrators.
Context: The clearinghouse resolves probabilistic micropayment tickets on-chain via the TicketBroker contract, converting winning tickets into ETH for orchestrators.
Status: current
Pages: orchestrators/payments, orchestrators/protocol

ComfyStream
Definition: Livepeer project running ComfyUI workflows as a real-time media processing backend for live streams.
Context: ComfyStream enables orchestrators to expose ComfyUI-based diffusion pipelines as live-video-to-video capabilities on the Livepeer network.
Status: current
Pages: orchestrators/ai

Daydream
Definition: Livepeer’s hosted real-time AI video platform turning live camera input into AI-transformed visuals with sub-second latency.
Context: Daydream is both a Livepeer product and a showcase of AI inference on the network, demonstrating live-video-to-video pipelines for interactive creative use cases.
Status: current
Pages: orchestrators/ai, orchestrators/use-cases
Delegator
Definition: LPT token holder who stakes tokens to an orchestrator to secure the network, participate in governance, and earn a share of rewards.
Context: Delegators do not run infrastructure. They bond LPT to an orchestrator of their choice and receive a proportional share of that orchestrator’s inflation rewards and service fees.
Status: current
Pages: orchestrators/staking, orchestrators/protocol

Dual Mode
Definition: Deployment configuration where a single orchestrator process handles both video transcoding and AI inference simultaneously.
Context: The most common production configuration for operators with capable hardware (24 GB+ VRAM). Dual mode is a workload configuration, not a separate protocol mode — the same go-livepeer binary supports all modes via flag combinations.
Status: current
Pages: orchestrators/modes, orchestrators/config

Gateway
Definition: Node that submits jobs to the network, routes work to orchestrators, manages payment flows, and provides a protocol interface for applications.
Context: Gateways are the demand side of the Livepeer network. They receive streams or AI requests from users or applications and select orchestrators to fulfil the work.
Status: current
Pages: orchestrators/architecture, orchestrators/routing

go-livepeer
Definition: Official Go implementation of the Livepeer protocol containing the Broadcaster, Orchestrator, Transcoder, Gateway, and Worker roles in a single binary.
Context: The canonical node software for running any Livepeer network role. Orchestrators, gateways, and transcoders all run go-livepeer with different flag combinations.
Status: current
Pages: orchestrators/setup, orchestrators/code
GPU Worker
Definition: Subprocess running AI inference on a dedicated GPU, managed by the go-livepeer orchestrator process.
Context: In AI or dual-mode deployments, each GPU in the system runs a dedicated AI runner subprocess (GPU worker). The orchestrator routes inference jobs to available GPU workers.
Status: current
Pages: orchestrators/ai, orchestrators/architecture

Hard Gate
Definition: Strict filter that immediately disqualifies orchestrators failing a required criterion, such as exceeding the gateway’s maximum price threshold.
Context: Unlike soft scoring factors, a hard gate is binary — the orchestrator is excluded from consideration entirely if the condition is not met. MaxPrice is the most common hard gate in practice.
Status: current
Pages: orchestrators/ai, orchestrators/config

LPT (Livepeer Token)
Definition: ERC-20 governance and staking token used to coordinate, incentivise, and secure the Livepeer network; staked LPT determines work allocation and reward share.
Context: LPT is the native utility token of the Livepeer protocol. Orchestrators must bond LPT to enter the active set for video transcoding work. Delegators bond LPT to orchestrators to earn a share of rewards.
Status: current
Pages: orchestrators/staking, orchestrators/protocol

MaxPrice
Definition: CLI flag setting the maximum transcoding price per pixelsPerUnit that a gateway will accept from an orchestrator; orchestrators above this threshold are excluded.
Context: MaxPrice acts as a hard gate in orchestrator selection. Orchestrators set their own pricePerUnit; gateways set their MaxPrice to filter out orchestrators whose prices exceed the budget.
Status: current
Pages: orchestrators/pricing, orchestrators/config

O-T Split
Definition: Architectural separation of the Orchestrator and Transcoder (Worker) processes, typically running on different machines, where the orchestrator handles protocol interaction and the transcoder handles GPU compute.
Context: Enables security isolation and multi-GPU scaling. The orchestrator process uses the -orchestrator flag; the transcoder uses -transcoder. Authentication between them uses the orchSecret shared secret.
Status: current
Pages: orchestrators/architecture, orchestrators/config
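A minimal sketch of the two invocations in an O-T split. The -orchestrator, -transcoder, -orchSecret, and -orchAddr flags are the ones named in this glossary; the binary name, addresses, and any further flags (elided with `...`) are illustrative and should be taken from the official setup guide.

```
# Machine A — orchestrator: protocol interaction, no GPU work
livepeer -orchestrator -orchSecret <shared-secret> ...

# Machine B — transcoder: GPU compute, authenticates with the same secret
livepeer -transcoder -orchSecret <shared-secret> -orchAddr <orchestrator-host:port> ...
```

The shared secret must match exactly on both sides; a mismatch causes the transcoder’s connection to be rejected.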
Orchestrator
Definition: Supply-side network node contributing GPU resources, receiving jobs from gateways, performing transcoding or AI inference, and earning rewards.
Context: The canonical Livepeer compute node. An orchestrator handles protocol interaction, job routing, payment negotiation, and capability advertisement. It may run its own transcoder subprocess or delegate to remote transcoder workers.
Status: current
Pages: orchestrators/index, orchestrators/protocol

OrchestratorInfo
Definition: Data structure advertised by orchestrators containing capabilities, pricing, service URI, and metadata used by gateways for selection decisions.
Context: OrchestratorInfo is exchanged during gateway-orchestrator negotiation. It includes the orchestrator’s pricePerUnit, supported AI capabilities, ticket parameters, and service URI.
Status: current
Pages: orchestrators/code, orchestrators/protocol

orchSecret
Definition: Shared secret used to authenticate communication between an orchestrator process and its standalone transcoder or worker nodes in an O-T split deployment.
Context: Set via the -orchSecret CLI flag on both the orchestrator and transcoder. Must match exactly. Prevents unauthorised nodes from connecting to an orchestrator as transcoders.
Status: current
Pages: orchestrators/config, orchestrators/security

Performance Score
Definition: Composite metric rating an orchestrator’s reliability and speed, calculated as latency score multiplied by success rate, used by gateways in orchestrator selection.
Context: Performance score is tracked per-gateway and influences routing decisions. A low score from failed transcodes or high latency reduces the probability of being selected for future jobs.
Status: current
Pages: orchestrators/discovery, orchestrators/performance
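The composite calculation described above can be sketched as follows. The exact normalisation of the latency score is gateway-specific; the function name and sample values are illustrative.

```python
def performance_score(latency_score: float, success_rate: float) -> float:
    """Composite score = latency score x success rate, both in [0, 1]."""
    return latency_score * success_rate

# A fast but flaky orchestrator (20% failed jobs) scores below
# a slightly slower but fully reliable one.
fast_flaky = performance_score(0.90, 0.80)
steady = performance_score(0.85, 1.00)
```

Because the factors multiply, a poor success rate drags the score down no matter how low the latency is.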
pixelsPerUnit
Definition: CLI parameter defining the number of pixels constituting one billable work unit, allowing granular pricing control.
Context: Used in conjunction with pricePerUnit. Setting a larger pixelsPerUnit value effectively lowers the per-pixel price while keeping the per-unit number manageable. Defaults to 1 pixel per unit.
Status: current
Pages: orchestrators/pricing, orchestrators/config

Pool
Definition: Group of transcoder or worker nodes coordinated under a single orchestrator for increased capacity and redundancy.
Context: A pool allows orchestrators to scale beyond a single machine. The pool operator runs the on-chain orchestrator node and handles staking, reward calling, and ticket redemption. Pool workers contribute GPU compute and receive off-chain payouts from the operator.
Status: current
Pages: orchestrators/architecture, orchestrators/operations

Pool Operator
Definition: Entity running an orchestrator that coordinates a pool of transcoder or worker nodes, managing on-chain operations and distributing earnings to workers.
Context: Pool operators require infrastructure reliability and community trust. They stake LPT to the active set threshold and distribute earnings to pool workers via off-chain agreements.
Status: current
Pages: orchestrators/architecture, orchestrators/operations
Pool Worker
Definition: Individual machine within an orchestrator pool, running go-livepeer in transcoder mode and executing GPU compute jobs delegated by the pool operator’s orchestrator.
Also known as: Pool node
Context: Pool workers do not hold LPT or interact with the protocol directly — the pool operator stakes on their behalf. Workers connect to the orchestrator using the -orchAddr and -orchSecret flags.
Status: current
Pages: orchestrators/architecture, orchestrators/operations

Price Feed
Definition: External data source providing real-time ETH/USD exchange rates used by orchestrators to denominate prices in USD terms.
Context: Orchestrators using USD pricing fetch the current ETH/USD rate from a price feed service to dynamically adjust their wei-denominated pricePerUnit as ETH price fluctuates.
Status: current
Pages: orchestrators/pricing, orchestrators/config
pricePerCapability
Definition: CLI flag setting the price per unit for a specific AI pipeline and model pair, overriding the default pricePerUnit for that capability.
Context: Allows orchestrators to charge different rates for different AI pipelines based on compute intensity. For example, a text-to-image pipeline with a large model can be priced higher than a lightweight audio-to-text pipeline.
Status: current
Pages: orchestrators/pricing, orchestrators/ai

pricePerGateway
Definition: JSON configuration allowing orchestrators to set customised per-gateway-address pricing, enabling different rates for specific gateway partners.
Context: Useful for commercial relationships where specific gateways receive preferential pricing. Configured as a JSON map from gateway Ethereum addresses to price overrides.
Status: current
Pages: orchestrators/pricing, orchestrators/config

pricePerUnit
Definition: CLI flag setting the transcoding price in wei per pixelsPerUnit that an orchestrator advertises to gateways.
Context: The primary pricing parameter for video transcoding. Gateways with -maxPricePerUnit below this value will not route work to the orchestrator. Can be set in wei directly or with a USD target using a price feed.
Status: current
Pages: orchestrators/pricing, orchestrators/config
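To see how pricePerUnit and pixelsPerUnit combine into a job cost, here is a small worked example. The function name is illustrative and the prices are arbitrary, not recommendations.

```python
def segment_cost_wei(width: int, height: int, fps: int, seconds: int,
                     price_per_unit: int, pixels_per_unit: int = 1) -> int:
    """Cost of transcoding one segment: total pixels, priced per work unit."""
    pixels = width * height * fps * seconds
    return pixels * price_per_unit // pixels_per_unit

# A 2-second 1080p30 segment, priced at 1200 wei per million pixels:
cost = segment_cost_wei(1920, 1080, 30, 2,
                        price_per_unit=1200, pixels_per_unit=10**6)
```

Quoting the price per million pixels (a large pixelsPerUnit) keeps the per-unit number readable while the effective per-pixel price stays tiny.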
Redeemer
Definition: Service or entity submitting a winning probabilistic micropayment ticket to the TicketBroker contract to claim its face value in ETH.
Context: In production deployments, orchestrators typically run an automated redeemer process that monitors for winning tickets and submits them on-chain. Redemption costs gas, so batching is common.
Status: current
Pages: orchestrators/payments, orchestrators/protocol
Reward Call
Definition: On-chain transaction (Reward()) that an active orchestrator submits once per round to mint and distribute new LPT inflation rewards.
Context: Missing a reward call permanently forfeits that round’s rewards. Gas cost on Arbitrum is approximately 0.01–0.12 per call. Orchestrators typically automate reward calling via a cron job or dedicated service.
Status: current
Pages: orchestrators/staking, orchestrators/protocol

RoundsManager
Definition: Smart contract tracking round progression and coordinating round-based protocol state, including round initialisation and block counting.
Context: Each protocol round is approximately 22 hours (5,760 Ethereum L1 blocks). The RoundsManager tracks the current round number and must be initialised at the start of each round before reward calls can be submitted.
Status: current
Pages: orchestrators/contracts, orchestrators/protocol

Segment
Definition: Short time-sliced chunk of a video stream (typically around 2 seconds) representing the unit of work for video transcoding in the Livepeer protocol.
Context: Gateways split incoming live streams into segments and distribute them to orchestrators. Orchestrators transcode each segment independently and return the results. Segment-level parallelism enables distributed transcoding at scale.
Status: current
Pages: orchestrators/transcoding, orchestrators/protocol

Service URI
Definition: Public URL registered on-chain that gateways use to discover and connect to an orchestrator node for job submission.
Context: Must be publicly reachable from the internet. Format is typically https://your-domain:8935. Registered via the ServiceRegistry contract. An unreachable or incorrect service URI prevents gateways from routing work to the orchestrator.
Status: current
Pages: orchestrators/config, orchestrators/protocol
ServiceRegistry
Definition: Smart contract where orchestrators register their service URI for gateway discovery.
Context: Orchestrators call the ServiceRegistry when activating or updating their service endpoint. Gateways query this contract to build their list of reachable orchestrators.
Status: current
Pages: orchestrators/contracts, orchestrators/protocol

Session
Definition: Active logical connection between a gateway and an orchestrator during which one or more jobs are processed.
Context: Video sessions are stream-based (one per active stream); AI sessions are job-based (one per inference request or batch). The -maxSessions flag limits concurrent sessions and effectively controls the orchestrator’s maximum throughput.
Status: current
Pages: orchestrators/routing, orchestrators/architecture

Siphon
Definition: Lightweight component directing incoming work to the correct processing path within an orchestrator, or routing a subset of network traffic to specific orchestrators for staged rollout.
Context: In orchestrator architecture, the siphon routes incoming jobs between video transcoding and AI inference paths. It can also describe a minimal transcoder deployment that connects to a remote orchestrator to expose local GPU resources.
Status: current
Pages: orchestrators/architecture, orchestrators/routing
Slashing
Definition: Penalty mechanism that destroys a portion of an orchestrator’s bonded LPT stake for protocol violations such as failed or skipped verifications.
Context: Slashing conditions include failing transcoding verification, skipping required verifications, or sustained underperformance. Both the orchestrator’s self-stake and delegated stake are at risk, which incentivises delegators to select reliable orchestrators.
Status: current
Pages: orchestrators/protocol, orchestrators/staking
Solo Operator
Definition: Orchestrator deployment where a single operator runs a complete orchestrator node with all components on one machine, without pool workers.
Context: The standard deployment for most individual orchestrators. Full control and full responsibility for staking, reward calling, ticket redemption, and compute. Can run in video, AI, or dual mode.
Status: current
Pages: orchestrators/modes, orchestrators/setup

SPE (Special Purpose Entity)
Definition: Treasury-funded organisational unit with a defined scope, budget, and accountability structure for executing ecosystem initiatives.
Context: SPEs are how the Livepeer community funds sustained work. Orchestrator-relevant SPEs include LiveInfra (infrastructure), LISAR (contributions), and AI Video SPE (compute scaling). SPEs are approved via on-chain governance.
Status: current
Pages: orchestrators/governance

TicketBroker
Definition: Smart contract managing the probabilistic micropayment system, holding gateway funds and processing winning ticket redemptions.
Context: The TicketBroker holds gateway deposits and reserves, validates winning ticket signatures, and transfers ETH to orchestrators when tickets are redeemed. It is the on-chain settlement layer for all Livepeer service payments.
Status: current
Pages: orchestrators/payments, orchestrators/contracts

Titan Node
Definition: Community orchestrator group in Western North America providing education, Start Up Grants, and pre-configured hardware for running Livepeer orchestrators.
Context: Titan Node operates as both a community resource and a hardware supply partner. Their pre-configured nodes are designed to lower the barrier to entry for new orchestrator operators.
Status: current
Pages: orchestrators/setup, orchestrators/hardware

Treasury
Definition: On-chain pool of LPT and ETH governed by token holder votes, used for funding public goods and ecosystem development initiatives.
Context: The Livepeer on-chain treasury is funded by a governable percentage of per-round inflation (the treasury reward cut rate). Orchestrators appear in treasury governance pages as stake-weighted voters.
Status: current
Pages: orchestrators/governance, orchestrators/protocol

Webhook Discovery
Definition: Mechanism for orchestrators to dynamically advertise their AI capabilities to gateways via HTTP webhook callbacks rather than only relying on on-chain registration.
Context: Provides a flexible, off-chain channel for capability advertisement. Gateways can call a registered webhook endpoint to retrieve the orchestrator’s current capability set, enabling real-time updates without on-chain transactions.
Status: current
Pages: orchestrators/discovery, orchestrators/config
AI Inference
Definition: Running a trained neural network model on new input data to produce predictions or generated outputs.
External: Inference engine — Wikipedia
Status: current
Pages: orchestrators/ai, orchestrators/pipelines
Audio-to-Text
Definition: AI pipeline converting spoken language audio into written text using deep neural networks.
External: Speech recognition — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Batch AI Inference
Definition: Running a trained model on a group of inputs asynchronously, optimising GPU utilisation through parallelisation.
External: Batch inference — Google Cloud
Status: current
Pages: orchestrators/ai, orchestrators/pipelines

BLIP
Definition: Vision-language model by Salesforce using bootstrapped captioning and filtering for image understanding tasks, including captioning and visual QA.
External: BLIP — Hugging Face
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Cold Model / Cold Start
Definition: Latency incurred when an AI model must be loaded from storage into GPU memory before the first request can be processed, typically adding 5 to 90 seconds of delay.
Context: During the current beta, orchestrators typically support one warm model per GPU. Requests to a cold model trigger a model load before inference can begin. Warm model status is configured in aiModels.json.
Status: current
Pages: orchestrators/ai, orchestrators/performance

ComfyUI
Definition: Open-source node-based graphical interface for building and executing AI image and video generation workflows.
External: ComfyUI — GitHub
Status: current
Pages: orchestrators/ai, orchestrators/pipelines

ControlNet
Definition: Neural network architecture adding spatial conditioning controls such as edge maps, depth, and pose signals to pretrained diffusion models.
External: ControlNet — Hugging Face
Status: current
Pages: orchestrators/pipelines, orchestrators/ai
CUDA (Compute Unified Device Architecture)
Definition: NVIDIA’s parallel computing platform and programming API enabling GPUs to be used for general-purpose processing and deep learning.
External: CUDA — Wikipedia
Status: current
Pages: orchestrators/setup, orchestrators/ai

Diffusion
Definition: Class of generative models that learn to produce data by reversing a gradual noising process applied during training.
External: Diffusion model — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

HuggingFace
Definition: An AI platform and open-source community providing model repositories, datasets, and inference APIs; a primary source for AI models deployed on Livepeer orchestrator nodes.
Also known as: Hugging Face, HF
External: HuggingFace
Status: current
Pages: orchestrators/ai, orchestrators/run-an-orchestrator/requirements/setup
Image-to-Image
Definition: AI pipeline transforming an input image into a modified output image, guided by a text prompt or conditioning signal.
External: Image translation — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Image-to-Text
Definition: AI pipeline generating a textual description from an input image, encompassing captioning and OCR tasks.
External: Image-to-text — Hugging Face
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Image-to-Video
Definition: AI pipeline generating a short video clip conditioned on a single input image, animating a still frame into motion.
External: Image-to-video — Hugging Face
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Live-Video-to-Video
Definition: AI pipeline applying generative models to a continuous video stream frame-by-frame at interactive frame rates.
External: StreamDiffusion — GitHub
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

LLM (Large Language Model)
Definition: Neural network trained on massive text corpora to understand and generate human language for tasks including text generation, reasoning, and conversation.
External: LLM — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Model Warmth
Definition: Status indicating whether an AI model is currently loaded in GPU memory (warm) or must be loaded from storage on demand (cold).
Context: Orchestrators typically support one warm model per GPU during the current beta phase. The warmth status of each model is configured in aiModels.json and determines whether a model can serve requests immediately or incurs a cold-start delay.
Status: current
Pages: orchestrators/ai, orchestrators/performance

Ollama
Definition: Open-source tool for running large language models locally with a CLI and OpenAI-compatible REST API.
External: Ollama — ollama.com
Status: current
Pages: orchestrators/pipelines, orchestrators/ai
PyTorch (Torch)
Definition: Open-source deep learning framework providing GPU-accelerated tensor computation and automatic differentiation, used to build and run AI models on orchestrator nodes.
External: PyTorch — Wikipedia
Status: current
Pages: orchestrators/ai

Segmentation (AI)
Definition: AI task partitioning a digital image into regions by assigning a label to every pixel, identifying and outlining objects or areas.
External: Image segmentation — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Stable Diffusion
Definition: Open-source latent diffusion model for text-to-image generation, operating in a compressed latent space for efficient high-quality image synthesis.
External: Stable Diffusion — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

StreamDiffusion
Definition: Optimised real-time diffusion pipeline using stream batching and stochastic similarity filtering to achieve interactive frame rates for live video transformation.
External: StreamDiffusion — GitHub
Status: current
Pages: orchestrators/pipelines, orchestrators/ai
Text-to-Image
Definition: AI pipeline generating an image from a natural language text prompt using a language encoder and diffusion model.
External: Text-to-image model — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Text-to-Speech
Definition: AI pipeline synthesising spoken audio from written text input via phonetic conversion and neural audio synthesis.
External: Speech synthesis — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Upscale / Upscaling
Definition: AI pipeline increasing the resolution of an image or video frame using neural models that predict high-frequency detail not present in the source.
External: Image scaling — Wikipedia
Status: current
Pages: orchestrators/pipelines, orchestrators/ai

Warm Model
Definition: AI model already loaded into GPU memory and ready to serve inference requests immediately, without cold-start latency.
Context: During the current beta, orchestrators typically support one warm model per GPU. The warm status for each model is declared in aiModels.json. Requests to a warm model are served immediately; requests to a cold model trigger a model load that adds seconds to minutes of latency.
Status: current
Pages: orchestrators/ai, orchestrators/performance

Whisper
Definition: OpenAI’s encoder-decoder transformer model for speech recognition and translation, pretrained on 680,000 hours of multilingual audio.
External: Whisper — Hugging Face
Status: current
Pages: orchestrators/pipelines, orchestrators/ai
Face Value
Definition: The payout amount assigned to a probabilistic micropayment ticket if it is drawn as a winner.
Context: The face value of tickets is set so that, over many tickets, the expected payout matches the fair cost of the work performed. Orchestrators accept lower-probability, higher-face-value tickets to reduce on-chain redemption frequency.
Status: current
Pages: orchestrators/payments, orchestrators/protocol
Fee Cut
Definition: The percentage of ETH service fees that an orchestrator retains before distributing the remainder to delegators.
Context: Set independently from reward cut. A lower fee cut sends more ETH earnings to delegators, which can attract more delegation. Both cuts are configured on-chain and visible in the Livepeer Explorer.
Status: current
Pages: orchestrators/staking, orchestrators/economics

Fee Pool
Definition: Accumulated ETH fees awaiting distribution between an orchestrator and its delegators.
Context: ETH earned from winning tickets flows into the fee pool each round. Orchestrators and delegators claim their respective shares according to the orchestrator’s fee cut setting.
Status: current
Pages: orchestrators/staking, orchestrators/protocol

Inflation
Definition: Dynamic issuance of new LPT tokens each protocol round, distributed to orchestrators and delegators based on participation and stake.
Context: The inflation rate adjusts by 0.00005% per round based on whether total bonded LPT is above or below the 50% target bonding rate. Orchestrators claim their share of inflationary rewards each round via the reward call transaction.
Status: current
Pages: orchestrators/staking, orchestrators/economics
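The adjustment rule described above can be sketched in code. This is a simplified model of the mechanism (the function name is illustrative; rates are expressed as fractions, so the per-round step of 0.00005% becomes 0.0000005):

```python
def next_inflation_rate(current_rate: float, bonding_rate: float,
                        target: float = 0.50, delta: float = 0.0000005) -> float:
    """If less LPT is bonded than the target, raise inflation to reward
    staking; if more, lower it (never below zero)."""
    if bonding_rate < target:
        return current_rate + delta
    if bonding_rate > target:
        return max(0.0, current_rate - delta)
    return current_rate
```

The feedback loop nudges total bonded LPT toward the 50% participation target: under-bonding makes staking more attractive, over-bonding less so.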
Micropayment
Definition: Small-value payment represented as a signed probabilistic ticket with a chance of being a winner redeemable for ETH.
External: Micropayment — Wikipedia
Status: current
Pages: orchestrators/payments, orchestrators/protocol

Overhead
Definition: Additional operational costs beyond direct computation, including gas fees for ticket redemption, bandwidth, and administrative costs.
Context: In Livepeer pricing, overhead specifically refers to the estimated ticket redemption cost divided by the face value, expressed as a percentage. The autoAdjustPrice flag incorporates overhead into automatic price calculations.
Status: current
Pages: orchestrators/performance, orchestrators/economics
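The overhead percentage described above is a one-line calculation; the values below are illustrative, not live gas estimates.

```python
def redemption_overhead_pct(redemption_cost_wei: int, face_value_wei: int) -> float:
    """Estimated ticket redemption cost as a percentage of ticket face value."""
    return 100 * redemption_cost_wei / face_value_wei

# Redeeming a 0.2 ETH ticket at an estimated 0.002 ETH gas cost: 1% overhead.
overhead = redemption_overhead_pct(2 * 10**15, 2 * 10**17)
```

Larger face values (lower win probabilities) shrink this percentage, which is why orchestrators batch and tune redemption.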
Per Pixel (Price Per Pixel)
Definition: Livepeer’s unit-based pricing mechanism where fees are calculated based on the number of pixels processed during a transcoding or AI inference job.
Context: A 4K frame costs more to process than a 720p frame because it contains more pixels; enables pricing that scales with workload complexity.
Status: current
Pages: orchestrators/pricing, orchestrators/transcoding
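The 4K-versus-720p comparison above is easy to make concrete with standard frame dimensions:

```python
pixels_4k = 3840 * 2160    # 8,294,400 pixels per UHD 4K frame
pixels_720p = 1280 * 720   # 921,600 pixels per 720p frame
ratio = pixels_4k / pixels_720p  # a 4K frame carries 9x the pixels
```

Under per-pixel pricing, the same price parameters therefore make a 4K job cost nine times a 720p job of equal duration and frame rate.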
Per Round
Definition: The Livepeer protocol’s fundamental time unit, approximately equal to one day of Ethereum blocks; reward minting, activations, and delegator earnings accrue on a per-round basis.
Context: Key unit for orchestrator reward calculations, delegator stake checkpoints, and LPT inflation scheduling.
Status: current
Pages: orchestrators/staking, orchestrators/protocol

PM (Probabilistic Micropayment)
Definition: Lottery-based payment scheme where gateways send signed tickets to orchestrators and only winning tickets are redeemed on-chain, amortising transaction costs across many payments.
Context: The PM system is the core payment mechanism in Livepeer. Most tickets are non-winning; over time, the expected value of winning tickets equals the fair payment for work performed. Orchestrators batch redemptions to optimise gas costs.
Status: current
Pages: orchestrators/payments, orchestrators/protocol
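The expected-value property described above can be sketched as follows: a fair ticket’s face value is the intended payment divided by the win probability, so that probability times face value recovers the payment. Names and numbers are illustrative.

```python
def fair_face_value(payment_wei: int, win_prob: float) -> int:
    """Face value such that win_prob * face_value equals the intended payment."""
    return round(payment_wei / win_prob)

# Paying 100 wei per ticket with a 1-in-1000 win chance:
face = fair_face_value(100, 1 / 1000)  # each winning ticket redeems 100,000 wei
expected = face * (1 / 1000)           # expected payout per ticket ~= 100 wei
```

Only roughly one ticket in a thousand ever touches the chain, which is how the scheme amortises gas costs across many off-chain payments.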
Reward Cut
Definition: The percentage of inflationary LPT rewards that an orchestrator keeps before distributing the remainder to delegators.
Context: Set by the orchestrator and visible in the Livepeer Explorer. A lower reward cut sends more LPT to delegators, which can attract more delegation and increase active set rank. Separate from fee cut.
Status: current
Pages: orchestrators/staking, orchestrators/economics
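How the cut splits a round’s rewards can be sketched as follows. This is a simplified model — the protocol then distributes the delegator share pro-rata by stake — and the numbers are illustrative.

```python
def split_rewards(minted_lpt: float, reward_cut: float) -> tuple[float, float]:
    """Orchestrator keeps reward_cut (a fraction); delegators share the rest."""
    orchestrator_share = minted_lpt * reward_cut
    delegator_share = minted_lpt - orchestrator_share
    return orchestrator_share, delegator_share

# A 25% reward cut on 200 LPT minted this round:
orch, dele = split_rewards(200.0, 0.25)  # 50 LPT kept, 150 LPT to delegators
```

The same arithmetic applies to the fee cut, only denominated in ETH service fees instead of minted LPT.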
Stake Weight
Definition: An orchestrator’s proportional influence in the network, determined by total bonded LPT (self-stake plus delegated stake), affecting active set rank, reward share, and governance vote weight.Context: Stake weight is the primary factor in active set membership for video transcoding. Higher total bonded LPT means a higher rank and greater share of inflationary rewards.Status: currentPages: orchestrators/staking, orchestrators/protocol
USD Pricing
Definition: Pricing approach where orchestrators denominate work costs in US dollars, using a price feed to dynamically convert to wei as the ETH/USD rate fluctuates.Context: Enables price stability relative to real-world costs. Implemented via the -pricePerUnit flag with a USD value (e.g. 0.50USD) combined with an ETH/USD price feed service.Status: currentPages: orchestrators/pricing, orchestrators/config
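The dynamic conversion amounts to dividing the USD price by the current ETH/USD rate and scaling to wei. A minimal sketch assuming a simple spot conversion (helper name is hypothetical):

```python
WEI_PER_ETH = 10**18

def usd_to_wei(price_usd: float, eth_usd_rate: float) -> int:
    """Convert a USD-denominated price to wei at the current ETH/USD rate."""
    return round(price_usd / eth_usd_rate * WEI_PER_ETH)

# At $2000/ETH, a $0.50 price converts to 0.00025 ETH:
print(usd_to_wei(0.50, 2000))  # → 250000000000000
```

As the ETH/USD rate moves, the same USD price maps to a different wei amount, which is what keeps the effective price stable in real-world terms.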
Win Probability
Definition: The configured likelihood that any given micropayment ticket is a winning ticket; a lower probability means larger face values per winning ticket.Context: Win probability is a parameter negotiated between gateway and orchestrator. Lower win probability reduces on-chain redemption frequency (and gas costs) at the expense of larger, less frequent payouts.Status: currentPages: orchestrators/payments, orchestrators/protocol
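The inverse relationship between win probability and face value follows directly from holding the expected value per ticket fixed. A sketch with illustrative names:

```python
from fractions import Fraction

def face_value_wei(expected_value_wei: int, win_prob: Fraction) -> int:
    """Face value needed so each ticket is worth expected_value_wei on average."""
    return int(expected_value_wei / win_prob)

# Halving the win probability doubles the face value per winning ticket:
print(face_value_wei(10**14, Fraction(1, 100)))  # → 10000000000000000
print(face_value_wei(10**14, Fraction(1, 200)))  # → 20000000000000000
```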
Winning Ticket
Definition: Probabilistic payment ticket whose random outcome meets the configured threshold, entitling the holder to redeem its face value in ETH on-chain.Context: Most tickets sent by gateways are non-winning. The winning ticket mechanism amortises on-chain transaction costs across many off-chain payments while preserving the expected payout value.Status: currentPages: orchestrators/payments, orchestrators/protocol
Yield
Definition: Return earned from staking LPT and performing work, expressed as an annual percentage combining inflationary LPT rewards and ETH service fees.Context: Orchestrator yield depends on active set position, reward cut, fee cut, total stake, and network workload volume. Delegator yield is derived from the orchestrator’s yield minus the cuts retained by the orchestrator.Status: currentPages: orchestrators/staking, orchestrators/economics
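The relationship between orchestrator cuts and delegator earnings can be sketched for a single round (illustrative names and simplified proportional accounting; the protocol's actual earnings bookkeeping is more involved):

```python
def delegator_earnings(delegated_lpt, total_stake_lpt,
                       round_reward_lpt, round_fees_eth,
                       reward_cut_pct, fee_cut_pct):
    """A delegator's per-round share of LPT rewards and ETH fees after cuts."""
    share = delegated_lpt / total_stake_lpt
    lpt = round_reward_lpt * (1 - reward_cut_pct / 100) * share
    eth = round_fees_eth * (1 - fee_cut_pct / 100) * share
    return lpt, eth

# 1000 LPT delegated to a 10000 LPT orchestrator earning 200 LPT rewards
# and 1 ETH in fees this round, with a 25% reward cut and 10% fee cut:
lpt, eth = delegator_earnings(1000, 10000, 200, 1.0, 25, 10)
print(round(lpt, 6), round(eth, 6))  # → 15.0 0.09
```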
HLS (HTTP Live Streaming)
Definition: Streaming protocol by Apple that encodes video into multiple quality levels and delivers them via standard HTTP with an M3U8 index playlist.External: HLS — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/streaming
Output Profile
Definition: Predefined set of encoding parameters (resolution, bitrate, codec, frame rate) defining a single rendition of a transcoded video.External: Video codec — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/config
Pixel
Definition: Single point in a video frame used as the fundamental pricing unit for transcoding work on the Livepeer network.External: Pixel — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/pricing
Rendition
Definition: Single encoded version of a source video at a specific resolution, bitrate, and codec configuration produced by a transcoding job.External: Video rendition — Cloudinary GlossaryStatus: currentPages: orchestrators/transcoding, orchestrators/encoding
RTMP (Real-Time Messaging Protocol)
Definition: Protocol for streaming audio, video, and data over TCP on port 1935, used as the primary ingest protocol for live video submitted to Livepeer orchestrators.External: RTMP — WikipediaStatus: currentPages: orchestrators/streaming
Transcoding
Definition: Direct digital-to-digital conversion of video from one encoding to another, producing multiple adaptive renditions at different resolutions and bitrates.External: Transcoding — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/index
Arbitrum
Definition: A Layer 2 Optimistic Rollup settling to Ethereum, processing transactions off-chain while inheriting Ethereum-grade security.External: Arbitrum — docs.arbitrum.ioStatus: currentPages: orchestrators/protocol, orchestrators/staking
Bonding
Definition: Staking (locking) LPT tokens to an orchestrator in Livepeer’s delegated proof-of-stake system.External: Proof-of-stake — ethereum.orgStatus: currentPages: orchestrators/staking, orchestrators/protocol
ETH
Definition: The native cryptocurrency of Ethereum, used to pay transaction fees (gas) and as the settlement currency for orchestrator service fee payments.External: Ether — ethereum.orgStatus: currentPages: orchestrators/payments, orchestrators/staking
Mainnet
Definition: The primary public production blockchain where actual-value transactions occur on the distributed ledger.External: Mainnet — ethereum.orgStatus: currentPages: orchestrators/protocol
Subgraph
Definition: Custom open API defining how Livepeer on-chain data is indexed and queried via GraphQL, built on The Graph protocol.External: Subgraphs — The GraphStatus: currentPages: orchestrators/protocol, orchestrators/data
Wei
Definition: The smallest denomination of Ether, where 1 ETH equals 10^18 Wei; used in on-chain calculations and Livepeer pricing parameters.External: Wei — ethereum.orgStatus: currentPages: orchestrators/pricing, orchestrators/payments
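Because the conversion is a fixed power of ten, it is a one-liner; Livepeer pricing parameters expect wei values. A minimal sketch (helper name is illustrative):

```python
WEI_PER_ETH = 10**18

def eth_to_wei(eth: float) -> int:
    """Convert an ETH amount to wei (1 ETH = 10^18 wei)."""
    return round(eth * WEI_PER_ETH)

print(eth_to_wei(1))        # → 1000000000000000000
print(eth_to_wei(0.00025))  # a quarter of a milliether, in wei
```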
CPU (Central Processing Unit)
Definition: The primary general-purpose processor in a computer; in Livepeer, the CPU handles node software overhead while the GPU handles intensive transcoding and AI inference workloads.Tags: technical:hardwareExternal: WikipediaStatus: currentPages: orchestrators/run-an-orchestrator/requirements/setup
GB (Gigabyte)
Definition: A unit of digital storage, nominally 10^9 bytes (or 2^30 = 1,073,741,824 bytes in the binary convention); used in Livepeer hardware specifications for RAM, VRAM, and storage requirements.Tags: technical:hardwareExternal: WikipediaStatus: currentPages: orchestrators/run-an-orchestrator/requirements/setup
GeForce
Definition: NVIDIA’s consumer-grade discrete GPU brand, encompassing the GTX and RTX product lines; the most common GPU family used by Livepeer orchestrator operators.Tags: technical:hardwareExternal: NVIDIA GeForceStatus: currentPages: orchestrators/run-an-orchestrator/requirements/setup
GTX (NVIDIA GTX)
Definition: NVIDIA’s previous-generation consumer GPU product line; capable of Livepeer video transcoding but lacks the Tensor cores of the RTX series needed for accelerated AI inference.Also known as: GeForce GTXTags: technical:hardwareExternal: NVIDIA GeForce graphics cardsStatus: currentPages: orchestrators/run-an-orchestrator/requirements/setup
RTX (NVIDIA RTX)
Definition: NVIDIA’s current consumer GPU product line featuring dedicated Tensor cores that accelerate AI/ML inference workloads; RTX GPUs are well-suited for Livepeer AI pipeline tasks.Also known as: GeForce RTXTags: technical:hardwareExternal: NVIDIA GeForce graphics cardsStatus: currentPages: orchestrators/run-an-orchestrator/requirements/setup
CLI (Command-Line Interface)
Definition: Text-based interface for interacting with software by typing commands; in Livepeer, the primary method for configuring and running go-livepeer nodes.External: CLI — WikipediaStatus: currentPages: orchestrators/setup, orchestrators/config
GPU (Graphics Processing Unit)
Definition: Specialised processor designed for parallel computation, used in Livepeer for both video transcoding and AI model inference.External: GPU — WikipediaStatus: currentPages: orchestrators/ai
gRPC
Definition: High-performance remote procedure call framework using HTTP/2 and Protocol Buffers for efficient binary communication between services.External: gRPC — WikipediaStatus: currentPages: orchestrators/architecture, orchestrators/code
NVDEC
Definition: NVIDIA hardware video decoder that offloads video decoding from the CPU to dedicated silicon on NVIDIA GPUs.External: NVDEC — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/setup
NVENC
Definition: NVIDIA hardware video encoder that offloads H.264 and H.265 encoding from the CPU to dedicated silicon on NVIDIA GPUs.External: NVENC — WikipediaStatus: currentPages: orchestrators/transcoding, orchestrators/setup
Remote Signer
Definition: External service that holds private keys securely and signs Ethereum transactions on behalf of a node, allowing the node to operate without direct access to the signing key.External: Remote signing — ethereum.orgStatus: currentPages: orchestrators/security
VRAM (Video RAM)
Definition: Dedicated memory on a GPU used to store graphics data, AI model weights, intermediate tensors, and video frames during processing.External: VRAM — WikipediaStatus: currentPages: orchestrators/ai, orchestrators/hardware
Webhook
Definition: HTTP callback mechanism triggered by an event, sending a POST request to a configured URL to notify external services of state changes.External: Webhook — WikipediaStatus: currentPages: orchestrators/discovery
LIP-89
Definition: Livepeer Improvement Proposal introducing the on-chain LivepeerGovernor governance contract and community treasury.Context: LIP-89 established the on-chain governance infrastructure, including stake-weighted voting, the 10-round voting period, the 33% quorum threshold, and the community treasury funded by inflation.Status: currentPages: orchestrators/protocol, orchestrators/upgrades
LIP-91
Definition: Livepeer Improvement Proposal bundling the treasury establishment mechanism and defining the inflation-funded treasury reward cut rate.Context: LIP-91 activated the community treasury by directing a governable percentage of per-round LPT inflation into the on-chain treasury contract.Status: currentPages: orchestrators/protocol, orchestrators/upgrades
LIP-92
Definition: Livepeer Improvement Proposal defining the AI model registry and capability discovery mechanism for the network.Context: LIP-92 specified how orchestrators register AI capabilities on-chain via the AIServiceRegistry, enabling structured capability advertisement and gateway discovery of AI services.Status: currentPages: orchestrators/protocol, orchestrators/upgrades
Loki
Definition: Horizontally scalable log aggregation system by Grafana Labs, used in Livepeer orchestrator monitoring stacks.External: Loki — Grafana LabsStatus: currentPages: orchestrators/monitoring, orchestrators/operations
Smoke Test
Definition: Preliminary test verifying that an AI pipeline or node configuration is working correctly before deploying to production or accepting live traffic.External: Smoke testing — WikipediaStatus: currentPages: orchestrators/ai, orchestrators/testing
Throughput
Definition: Rate of successful data processing per unit time, measuring the volume of work an orchestrator can complete (segments transcoded per second, or AI requests per minute).External: Throughput — WikipediaStatus: currentPages: orchestrators/performance, orchestrators/benchmarks
Transcode Fail Rate
Definition: Percentage of source segments that an orchestrator fails to transcode successfully, used as a performance and reliability metric by gateways.Context: A high transcode fail rate lowers an orchestrator’s performance score and reduces the probability of being selected for future jobs. Causes include GPU errors, timeouts, software bugs, and capacity overload.Status: currentPages: orchestrators/performance, orchestrators/monitoring
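The metric itself is a simple ratio. A sketch with illustrative names (gateways compute this from their own observed job outcomes):

```python
def fail_rate_pct(failed_segments: int, total_segments: int) -> float:
    """Transcode fail rate: failed segments as a percentage of all segments."""
    if total_segments == 0:
        return 0.0  # no work attempted yet
    return failed_segments / total_segments * 100

# 3 failures out of 600 segments:
print(fail_rate_pct(3, 600))  # → 0.5
```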
Workload
Definition: Total amount of work assigned to an orchestrator — the aggregate of active sessions, concurrent segments, and AI inference requests being processed at a given time.Context: Workload determines whether an orchestrator is at capacity. The -maxSessions flag caps the maximum concurrent workload. Monitoring workload against capacity helps operators tune pricing and hardware scaling decisions.Status: currentPages: orchestrators/performance, orchestrators/operations
Orchestrator Docs
Setup guides, configuration references, and architecture for running an orchestrator node.
Full Glossary
All terms across every Livepeer tab
Orchestrator FAQ
Answers to common questions about running an orchestrator.