
This tutorial is adapted from the Livepeer Agent SPE guide on Mirror.xyz, published as part of the Agent SPE incentive programme. The original tutorial was written by the Agent SPE team and made available under an open licence.
Eliza is an open-source multi-agent framework from ai16z that supports pluggable model providers. The Livepeer plugin routes Eliza's LLM inference requests to the Livepeer AI gateway, where they run on decentralised GPU infrastructure rather than on centralised cloud providers.

What you will build: an Eliza agent that uses Livepeer as its LLM backend, running meta-llama/Meta-Llama-3.1-8B-Instruct (or any Ollama-compatible model) on the network.

Prerequisites

  • Node.js 22 or later
  • pnpm (npm install -g pnpm)
  • A Livepeer Studio API key from livepeer.studio — free tier covers this tutorial

Build your agent
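
The core of the build step is a character file that points Eliza at the Livepeer provider. The sketch below assumes Eliza's standard character-file schema; the "livepeer" modelProvider value and the file name are assumptions to verify against the Livepeer plugin's README for your Eliza version.

```json
{
  "name": "LivepeerAgent",
  "modelProvider": "livepeer",
  "clients": [],
  "settings": {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
  },
  "bio": ["A demo agent whose LLM calls are served by the Livepeer network."]
}
```

Save this as a character file (for example characters/livepeer-agent.character.json), put your Livepeer Studio API key in .env, and launch with Eliza's usual start command, e.g. pnpm start --character="characters/livepeer-agent.character.json".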

What to build next

With a working agent, four natural extensions exist:

  • Switch the model: change settings.model in your character file to any Ollama-compatible model available on the Livepeer network. The LLM pipeline accepts arbitrary model IDs; see the model support page for supported variants.
  • Add memory and knowledge: Eliza supports retrieval-augmented generation (RAG) via knowledge files and vector stores. Add entries to the knowledge array in your character file to give the agent domain-specific context.
  • Integrate with Slack or Discord: the Eliza framework includes client connectors for Slack, Discord, and Twitter. Add "slack" or "discord" to the clients array and configure the respective credentials in .env.
  • Build a multi-agent swarm: the SwarmZero framework integrates with Livepeer's inference APIs and supports multi-agent orchestration. See the SwarmZero Livepeer example for a working YouTube video generator swarm.
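
As a concrete illustration of the client and knowledge extensions, the relevant character-file additions might look like the excerpt below. Field names follow Eliza's character-file conventions; the exact knowledge entry formats and credential variable names should be verified in the Eliza documentation.

```json
{
  "clients": ["discord"],
  "knowledge": [
    "Livepeer's LLM pipeline runs Ollama-compatible models on decentralised GPUs.",
    "Warm models respond immediately; cold models incur a startup delay."
  ]
}
```

With "discord" in the clients array, Eliza expects bot credentials in .env (e.g. DISCORD_API_TOKEN; check the framework's docs for the full set). Knowledge entries are embedded and retrieved as context when the agent composes a response.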

Supported LLM models

The Livepeer LLM pipeline uses an Ollama-based runner. Any Ollama-compatible model works. Warm models respond immediately; others cold-start on the first request.
Model                                   Warm on network   VRAM required
meta-llama/Meta-Llama-3.1-8B-Instruct   Yes               8 GB
mistralai/Mistral-7B-Instruct-v0.3      Check network     8 GB
google/gemma-2-9b-it                    Check network     10 GB
Qwen/Qwen2.5-7B-Instruct                Check network     8 GB
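
To check whether a given model is warm before wiring it into the agent, you can call the LLM pipeline directly. This is a hedged sketch: the gateway host and the OpenAI-style request shape are assumptions to confirm against the AI Quickstart page.

```shell
# Sanity-check the LLM pipeline with a direct gateway call (no framework).
# NOTE: the gateway host and request schema below are assumptions based on
# OpenAI-style chat APIs -- verify both against the AI Quickstart page.
GATEWAY_URL="https://dream-gateway.livepeer.cloud"   # assumed public gateway

PAYLOAD=$(cat <<'EOF'
{
  "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
  "messages": [
    { "role": "user", "content": "Reply with one short sentence." }
  ]
}
EOF
)

# Only fire the request when an API key is configured in the environment.
if [ -n "${LIVEPEER_API_KEY:-}" ]; then
  curl -s --max-time 60 -X POST "$GATEWAY_URL/llm" \
    -H "Authorization: Bearer $LIVEPEER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```

A fast response indicates the model is warm; a long first-request delay is the cold-start described above, covered further on the AI Troubleshooting page.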

AI Quickstart

Make a direct API call to any AI pipeline without a framework.

Model Support

All supported pipelines, warm models, and VRAM requirements.

BYOC

Deploy a custom model container for workloads not covered by standard pipelines.

AI Troubleshooting

Diagnose cold model timeouts, auth errors, and validation failures.
Last modified on April 7, 2026