Prerequisites
- A supported host OS: Linux, macOS, or Windows. For production GPU transcoding, plan on Linux.
- `curl` and `tar` for Linux or macOS binary installs, or PowerShell plus `Expand-Archive` for Windows.
- Docker Engine if you want the container install path. GPU containers also need the NVIDIA Container Toolkit.
- An NVIDIA driver that satisfies CUDA 12.0.0 if you will use the Linux GPU binary or Docker GPU runtime. The CUDA 12.0 release notes list Linux `525.60.13` and Windows `527.41` as the minimum supported driver versions.
- CUDA 12.0.0 on the host if you will run the Linux GPU binary directly or build from source. The official Docker image already includes CUDA 12.0.0.
- Go 1.25.0 if you will build from source.
- Write access to a directory on your `PATH`, such as `/usr/local/bin`, if you want to run `livepeer` from anywhere.
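The prerequisite checks above can be sketched in a few shell commands. This is a hedged helper, not part of the official tooling: it assumes `nvidia-smi` and `go` are on your `PATH` when the corresponding components are installed, and uses `sort -V` for the version comparison.

```shell
# Minimum Linux driver version required by CUDA 12.0.0, per the CUDA 12.0
# release notes cited above.
MIN_DRIVER="525.60.13"

# version_ge A B: succeeds if A >= B under version-sort ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Driver check (skipped entirely if nvidia-smi is not installed).
if command -v nvidia-smi >/dev/null 2>&1; then
  DRIVER="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)"
  version_ge "$DRIVER" "$MIN_DRIVER" && echo "driver $DRIVER OK" || echo "driver $DRIVER too old"
fi

# Go toolchain check, only needed for source builds.
command -v go >/dev/null 2>&1 && go version

# Confirm the target install directory is writable.
[ -w /usr/local/bin ] && echo "/usr/local/bin writable" || echo "need sudo for /usr/local/bin"
```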
Pin your install to a release tag. If you want a community-maintained helper for upgrades, see Bash script to update Livepeer.
macOS binaries are useful for local development and CLI access, but upstream GPU documentation only covers NVIDIA workflows on Linux and Windows, and the current Docker GPU path is Linux-first. Use Linux for production transcoding and AI workloads.
Install go-livepeer
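A pinned Linux x86_64 binary install can be sketched as below. The release tag and the exact asset name are assumptions here; confirm both against the go-livepeer releases page before running, and substitute the tag you intend to pin.

```shell
# Example tag only -- replace with the release you are pinning.
VERSION="0.7.9"
URL="https://github.com/livepeer/go-livepeer/releases/download/v${VERSION}/livepeer-linux-amd64.tar.gz"

# Download and extract; the archive is assumed to unpack into a
# livepeer-linux-amd64/ directory containing the livepeer binary.
curl -fsSL "$URL" | tar -xz

# Move the binary onto your PATH (may require sudo for /usr/local/bin).
mv livepeer-linux-amd64/livepeer /usr/local/bin/
```

The same pattern applies to macOS by swapping the asset name for the darwin build; Windows users extract the zip with `Expand-Archive` instead.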
Verify your installation
Run the version check first. The architecture and operating system lines vary by platform, but the release line should match the tag you installed. For a go-livepeer GPU check on a native install, start the binary in transcoder mode with the NVIDIA device flag. The `-testTranscoder` startup check is enabled by default, so you can stop the process after the device lines appear.
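The two native checks look like this; the device ID `0` is an example, and `-nvidia` accepts a comma-separated list if you have multiple GPUs:

```shell
# Confirm the installed release matches the tag you pinned.
livepeer -version

# Start transcoder mode against NVIDIA device 0. The default -testTranscoder
# capability check runs at startup, so stop the process (Ctrl-C) once the
# device lines appear.
DEVICE_IDS="0"
livepeer -transcoder -nvidia "$DEVICE_IDS"
```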
If you are using Docker, the equivalent runtime check is:
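A hedged sketch of that runtime check follows. It assumes the NVIDIA Container Toolkit is installed and that the image's entrypoint is the `livepeer` binary; the tag is an example, so pin it to match the release you verified above.

```shell
# Example tag only -- pin to your release.
TAG="0.7.9"

# --gpus all exposes the host GPUs via the NVIDIA Container Toolkit;
# the trailing flags are passed through to the livepeer binary.
docker run --rm --gpus all livepeer/go-livepeer:"$TAG" -transcoder -nvidia 0
```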
Next steps
Configure your orchestrator
Set the flags your node needs to connect, price jobs, and accept work.
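As a rough sketch of that configuration step, an orchestrator startup might look like the following. The address and price are placeholders, and a production node needs additional network and wallet flags beyond what is shown; treat this as an illustration of the flag style, not a complete configuration.

```shell
# Placeholders -- substitute your publicly reachable address and your price.
SERVICE_ADDR="203.0.113.10:8935"   # host:port other nodes use to reach you
PRICE_PER_UNIT="70"                # price in wei per pixel (example value)

livepeer -orchestrator -transcoder \
  -serviceAddr "$SERVICE_ADDR" \
  -pricePerUnit "$PRICE_PER_UNIT"
```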
Transcoding quickstart
Continue with the end-to-end orchestrator flow after the binary and GPU checks pass.