By the end of this guide, you will have go-livepeer installed on your machine and your GPU detected and ready to configure.

Prerequisites

  • A supported host OS: Linux, macOS, or Windows. For production GPU transcoding, plan on Linux.
  • curl and tar for Linux or macOS binary installs, or PowerShell plus Expand-Archive for Windows.
  • Docker Engine if you want the container install path. GPU containers also need the NVIDIA Container Toolkit.
  • An NVIDIA driver that satisfies CUDA 12.0.0 if you will use the Linux GPU binary or Docker GPU runtime. The CUDA 12.0 release notes list Linux 525.60.13 and Windows 527.41 as the minimum supported driver versions.
  • CUDA 12.0.0 on the host if you will run the Linux GPU binary directly or build from source. The official Docker image already includes CUDA 12.0.0.
  • Go 1.25.0 if you will build from source.
  • Write access to a directory on your PATH, such as /usr/local/bin, if you want to run livepeer from anywhere.
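As a quick sanity check before installing, the prerequisites above can be probed from a shell. This is an illustrative helper, not part of go-livepeer; the tool list is an assumption drawn from the items above, and only the tools for your chosen install path actually need to be present.

```shell
#!/bin/sh
# Illustrative prerequisite probe (not part of go-livepeer).
# Reports which install-path tools from the list above are present.
checked=0
for tool in curl tar docker go; do
  checked=$((checked + 1))
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool (only needed for that install path)"
  fi
done
# If an NVIDIA driver is installed, print its version to compare against
# the CUDA 12.0 minimums (Linux 525.60.13 / Windows 527.41).
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
fi
echo "checked $checked tools"
```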

Pin your install to a release tag. If you want a community-maintained helper for upgrades, see Bash script to update Livepeer.
macOS binaries are useful for local development and CLI access, but upstream GPU documentation only covers NVIDIA workflows on Linux and Windows, and the current Docker GPU path is Linux-first. Use Linux for production transcoding and AI workloads.

Install go-livepeer
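The per-platform install tabs from the upstream docs are not reproduced here. As one example, a Linux binary install can be sketched as below. The release tag and tarball name are assumptions; check the go-livepeer releases page on GitHub for the current tag and exact asset names before pinning. The network steps are commented out so the sketch can be dry-run safely.

```shell
#!/bin/sh
set -eu
# Assumed values; verify against https://github.com/livepeer/go-livepeer/releases
VERSION="v0.8.1"                       # assumption: pin to a specific release tag
TARBALL="livepeer-linux-amd64.tar.gz"  # assumption: asset name for Linux x86_64
URL="https://github.com/livepeer/go-livepeer/releases/download/${VERSION}/${TARBALL}"

echo "would fetch: $URL"
# Uncomment to download, unpack, and place the binary on your PATH:
# curl -fsSL -o "$TARBALL" "$URL"
# tar -xzf "$TARBALL"
# install -m 0755 livepeer-linux-amd64/livepeer /usr/local/bin/livepeer
```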

Verify your installation

Run the version check first. The architecture and operating system lines vary by platform, but the release line should match the tag you installed. For a GPU check on a native install, start the binary in transcoder mode with the NVIDIA device flag; the -testTranscoder startup check is enabled by default, so you can stop the process after the device lines appear. If you are using Docker, the equivalent runtime check is to run nvidia-smi inside a GPU-enabled container.
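Concretely, the checks described above might look like the following. The livepeer flags come from the text; the CUDA image tag in the Docker check is an assumption, so substitute whichever CUDA 12.0 base image you have available. The version check is guarded so the sketch degrades gracefully on a machine where the binary is not yet on PATH.

```shell
#!/bin/sh
# 1. Version check: the release line should match the tag you installed.
if command -v livepeer >/dev/null 2>&1; then
  livepeer -version
  version_checked="yes"
else
  echo "livepeer not on PATH yet"
  version_checked="no"
fi

# 2. Native GPU check: starts transcoder mode with the NVIDIA device flag.
#    -testTranscoder is on by default; stop with Ctrl-C once the
#    "Nvidia devices:" lines appear.
# livepeer -transcoder -nvidia all

# 3. Docker runtime check (image tag is an assumption; any CUDA 12.0 base works):
# docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```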
Do not move on to configuration until the version check succeeds and your GPU check shows at least one device. If livepeer -transcoder -nvidia all does not print a "Nvidia devices:" line, or if the Docker runtime cannot run nvidia-smi, stop here and work through the FAQ and troubleshooting guide.

Next steps

Configure your orchestrator

Set the flags your node needs to connect, price jobs, and accept work.

Transcoding quickstart

Continue with the end-to-end orchestrator flow after the binary and GPU checks pass.
Last modified on April 7, 2026