By default, go-livepeer runs the Orchestrator and Transcoder as a single combined process on one machine. The split setup separates them: one machine handles protocol operations (on-chain interactions, job routing, reward calling) and one or more machines handle the GPU work. The two connect over the network using a shared secret. This is also the architectural foundation for pool operations - a pool is the O-T split extended to accept connections from external workers. For a comparison of all alternate deployment options, see Alternate Deployments below.

Reasons to Split

Security isolation

The Ethereum keystore lives only on the Orchestrator machine. GPU worker machines have no wallet access. A compromised worker cannot drain funds or perform on-chain actions.

Independent scaling

Add or remove Transcoder machines without touching the Orchestrator. Scale GPU capacity by connecting more Transcoder nodes - each reports its own capacity to the Orchestrator.

Stable reward calling

The Orchestrator machine can be a small stable VPS with no GPU. Reward calls come from this machine, independent of GPU machine availability.

Role-optimised hardware

Optimise the Orchestrator for fast CPU, reliable network, and stable uptime. Optimise Transcoder machines purely for GPU throughput.

Architecture

Data flow:
  1. A Gateway connects to the Orchestrator on port 8935 (the public service URI)
  2. The Orchestrator receives the job and dispatches it to an available connected Transcoder via gRPC
  3. The Transcoder processes the segment and returns results to the Orchestrator
  4. The Orchestrator returns results to the Gateway
The Gateway and Delegators see only the Orchestrator. Transcoders are not visible to the protocol.

Part 1 - Orchestrator Machine

The Orchestrator machine needs: a publicly accessible IP or hostname, an Ethereum keystore, and outbound access to an Arbitrum RPC endpoint. It does not need a GPU.
livepeer \
    -network arbitrum-one-mainnet \
    -ethUrl <ARBITRUM_RPC_URL> \
    -ethAcctAddr <YOUR_ETH_ADDRESS> \
    -orchestrator \
    -orchSecret <ORCH_SECRET> \
    -serviceAddr <YOUR_PUBLIC_HOST>:8935 \
    -pricePerUnit <PRICE_PER_UNIT>
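Before the first launch, it can help to confirm that the RPC URL actually points at Arbitrum One (chain ID 0xa4b1, i.e. 42161). A minimal sketch using the standard eth_chainId JSON-RPC call - parse_chain_id and the ARBITRUM_RPC_URL variable are illustrative helpers, not go-livepeer features:

```shell
# Extract the "result" field from an eth_chainId JSON-RPC response.
# parse_chain_id is a hypothetical helper for this sketch.
parse_chain_id() {
  sed -n 's/.*"result" *: *"\([^"]*\)".*/\1/p'
}

# Only probe the endpoint if a URL was provided.
if [ -n "${ARBITRUM_RPC_URL:-}" ]; then
  CHAIN_ID=$(curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
    "$ARBITRUM_RPC_URL" | parse_chain_id)
  if [ "$CHAIN_ID" = "0xa4b1" ]; then
    echo "RPC endpoint OK (Arbitrum One)"
  else
    echo "unexpected chain id: $CHAIN_ID" >&2
  fi
fi
```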
Key flags for the Orchestrator-only process:
  • -orchestrator - run in Orchestrator mode
  • -orchSecret - the shared secret Transcoders must present to connect
  • -serviceAddr - the publicly reachable host:port that Gateways and Transcoders connect to
  • -pricePerUnit - the price charged per unit of work
Without -transcoder, go-livepeer runs in standalone Orchestrator mode - it routes jobs to connected Transcoders but performs no local transcoding. It will refuse job assignments until at least one Transcoder connects.
Pass -orchSecret as a file path for production setups - secrets passed as plaintext values are visible in the process list via ps aux.
echo "my-secret-value" > /etc/livepeer/orchsecret.txt
chmod 600 /etc/livepeer/orchsecret.txt
# then: -orchSecret /etc/livepeer/orchsecret.txt

Part 2 - Transcoder Machines

Each Transcoder machine needs: an NVIDIA GPU with drivers installed, and network connectivity to the Orchestrator on port 8935. It does not need an Ethereum account, LPT stake, or Arbitrum RPC.
livepeer \
    -transcoder \
    -nvidia <GPU_IDs> \
    -orchSecret <ORCH_SECRET> \
    -orchAddr <ORCHESTRATOR_HOST>:8935 \
    -maxSessions <MAX_SESSIONS>
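To fill in <GPU_IDs>, the indices reported by nvidia-smi can be joined into the comma-separated form the flag expects. A sketch assuming nvidia-smi is installed - join_ids is an illustrative helper, not part of go-livepeer:

```shell
# Join newline-separated GPU indices into a comma-separated list.
# join_ids is a hypothetical helper for this sketch.
join_ids() {
  paste -sd, -
}

# Only query GPUs if the NVIDIA tooling is present on this machine.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_IDS=$(nvidia-smi --query-gpu=index --format=csv,noheader | join_ids)
  echo "use: -nvidia $GPU_IDS"
fi
```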
Key flags for the Transcoder-only process:
  • -transcoder - run in Transcoder mode (no on-chain role)
  • -nvidia - comma-separated GPU IDs to use for transcoding
  • -orchSecret - must match the Orchestrator's secret exactly
  • -orchAddr - the Orchestrator's host:port to connect to
  • -maxSessions - the maximum concurrent sessions this machine reports as capacity

Verifying the connection

When the Transcoder connects successfully, the Orchestrator logs show:
Got a RegisterTranscoder request from transcoder=10.3.27.1 capacity=10
The capacity field reflects the Transcoder’s -maxSessions value. Once this line appears, the Orchestrator begins routing jobs to the connected Transcoder.

Connecting Multiple Transcoders

Any number of Transcoders can connect to a single Orchestrator using the same -orchSecret. Each connection appears in Orchestrator logs:
Got a RegisterTranscoder request from transcoder=10.3.27.1 capacity=10
Got a RegisterTranscoder request from transcoder=10.3.27.2 capacity=8
Got a RegisterTranscoder request from transcoder=10.3.27.3 capacity=12
The Orchestrator distributes incoming job segments across all connected Transcoders automatically. The effective session capacity is the sum of all connected Transcoder capacities - in the example above, 30 concurrent sessions. New Transcoders can be added at any time; the Orchestrator begins routing to them immediately.
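The effective capacity can be read straight from the logs. A small sketch that sums the capacity= values across RegisterTranscoder lines - sum_capacity and the log path are illustrative:

```shell
# Sum the capacity= values over all RegisterTranscoder lines in a log file.
# sum_capacity is a hypothetical helper for this sketch.
sum_capacity() {
  grep 'RegisterTranscoder' "$1" | awk -F'capacity=' '{sum += $2} END {print sum+0}'
}

# Example using the three log lines shown above:
cat > /tmp/orch.log <<'EOF'
Got a RegisterTranscoder request from transcoder=10.3.27.1 capacity=10
Got a RegisterTranscoder request from transcoder=10.3.27.2 capacity=8
Got a RegisterTranscoder request from transcoder=10.3.27.3 capacity=12
EOF
sum_capacity /tmp/orch.log  # prints 30
```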

Relationship to Pool Operations

The O-T split and a worker pool are the same architecture; the difference is operational scope. For pool operations - accepting external worker connections and managing off-chain fee distribution - see Run a Pool below.

Security Considerations

The orchSecret is the only authentication between Orchestrator and Transcoder. Any node with this secret can connect as a Transcoder and receive job assignments. Keep it private: do not embed it in public Docker images, public configuration files, or version control. Use file-based secrets with restricted permissions.
In a correctly configured split setup, Transcoder machines do not have the Ethereum keystore and are not passed -ethUrl or -ethAcctAddr. This is intentional: Transcoders have no ability to submit on-chain transactions. Keep it this way - do not copy keystores to GPU worker machines.
Port 8935 must be publicly accessible for both Gateway and Transcoder connections. Gateways connect inbound to route jobs; Transcoders connect inbound to register and receive work. Open port 8935 for all inbound TCP if behind a firewall.
If the -orchSecret is compromised: generate a new secret, update the Orchestrator launch command, communicate the new secret to all Transcoder operators, then restart the Orchestrator. All existing Transcoder connections drop; they reconnect automatically with the new secret. There is no zero-downtime rotation mechanism.
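The rotation steps above can be sketched with a small helper - rotate_secret is hypothetical and the path is illustrative:

```shell
# Write a fresh random secret to the given file with tight permissions.
# rotate_secret is a hypothetical helper, not part of go-livepeer.
rotate_secret() {
  openssl rand -hex 32 > "$1"
  chmod 600 "$1"
}

# 1. Generate the new secret on the Orchestrator machine:
#      rotate_secret /etc/livepeer/orchsecret.txt
# 2. Restart the Orchestrator with: -orchSecret /etc/livepeer/orchsecret.txt
# 3. Distribute the new secret to Transcoder operators and restart their
#    nodes; they reconnect automatically once both sides agree.
```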

Troubleshooting

Check in order:
  1. Verify port 8935 is reachable from the Transcoder: curl -vk https://<orchestrator-host>:8935/status (the -k flag skips certificate verification, since the Orchestrator typically serves a self-signed certificate)
  2. Confirm -orchSecret matches exactly on both sides (case-sensitive)
  3. Check for a TLS certificate issue if the Orchestrator uses HTTPS - the Transcoder will fail if the cert is self-signed and not trusted
  4. Check Transcoder startup logs for the GPU test result - a GPU test failure causes the process to exit before connecting
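Step 2 is the most common stumbling block: secret files that differ only by an invisible trailing newline. A sketch for comparing two secret files - secret_check is a hypothetical helper, and since how strictly whitespace is compared depends on how the secret is loaded, treat any reported difference as suspect:

```shell
# Compare two secret files, flagging whitespace-only differences separately.
# secret_check is a hypothetical helper for this sketch.
secret_check() {
  if cmp -s "$1" "$2"; then
    echo "identical"
  elif [ "$(tr -d '[:space:]' < "$1")" = "$(tr -d '[:space:]' < "$2")" ]; then
    echo "differ only in whitespace (check trailing newlines)"
  else
    echo "differ"
  fi
}
```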
Once Got a RegisterTranscoder request appears in Orchestrator logs, the Transcoder is connected and will receive jobs as they arrive. If jobs arrive at the Orchestrator but the Transcoder is idle:
  • Check whether the Transcoder’s -maxSessions capacity is already reported as fully used
  • Verify the Orchestrator is receiving jobs from Gateways (check session metrics at http://localhost:7935/metrics)
  • If the Orchestrator itself is idle, the issue is Gateway routing, not the O-T connection
If the Transcoder exits immediately on startup, its GPU startup test failed - typically because the NVENC session cap has been reached on that GPU. See the GPU and memory errors troubleshooting section.

Alternate Deployments

Overview of all three alternate deployment options and how to choose between them.

Siphon Setup

Combine the split architecture with OrchestratorSiphon for keystore isolation and reward safety.

Run a Pool

Extend this architecture to accept external worker connections.

Large-Scale Operations

Fleet architecture and multi-Orchestrator operations.
Last modified on April 7, 2026