
AI-first documentation is a term that risks being interpreted superficially - bolting on an AI chatbot, or writing in shorter sentences. The v2 engagement implemented AI-first at every layer of the system: structure, content, discovery, tooling, and operations.

Interpretation

  • Discoverability - Docs can be found, indexed, and cited by AI systems without human mediation. Every page is optimised for answer engines, not just search engines.
  • Parseability - Docs can be read and understood by machines with the same reliability as humans. Semantic structure, consistent metadata, and machine-readable formats are first-class requirements.
  • Executability - Docs can be acted on by AI agents, not merely read. Instructions are structured as explicit, verifiable steps an agent can follow to completion.
  • Native Integration - AI tooling is embedded in the docs surface itself - assistants, agent runbooks, and repository guidance for AI coding tools.

Research & Platform Selection

A comprehensive evaluation of 14 documentation platforms was conducted with AI compatibility as a first-class criterion. GitBook’s AI Search, GitBook Assistant, and MCP connectivity for published docs were fully documented and evaluated. Mintlify was ultimately selected for its superior MDX component system and built-in AI assistant integration. The Mintlify team held a direct meeting to validate AI feature roadmap alignment. An AI feature roadmap was produced proposing 11 progressive AI documentation features with difficulty ratings - from an embedded assistant (done) through to MCP server exposure and agent-native quickstarts (future roadmap).

Implementation: How v2 Delivers AI-First

  • Mintlify AI Assistant (“Ask AI”) - Integrated and live in v2 navigation, with a test surface at v2/pages/00_home/test.mdx. Trained on structured v2 content to answer natural-language queries across the full docs.
  • llms.txt File - Emerging standard (analogous to robots.txt) for LLM discoverability, maintained at tools/ai-rules/llms.txt.information.md with structured guidance for LLM parsing.
  • “Get AI to Set Up the Gateway” - Novel documentation pattern (v2/pages/04_gateways/quickstart/AI-prompt.mdx) written for AI agent execution, with explicit preconditions, step invariants, and verification criteria.
  • OpenAPI Spec Integration - Six API specs integrated (gateway.openapi.yaml, ai-worker.yaml, studio.yaml, openapi.json/yaml). An SDK auto-generation workflow (sdk_generation.yaml) keeps the API reference current via fetch-openapi-specs.sh and generate-api-docs.sh.
  • Semantic Heading Hierarchies - Enforced site-wide: H1 titles, H2 sections, and consistent frontmatter (description, keywords, og:image) for reliable LLM parsing.
  • SEO/AEO Automation - generate-seo.js automates metadata enforcement. Glossary scripts (generate-glossary.js, terminology-search.js, glossary-terms.json) provide LLM-parseable terminology assets.
  • Repository AI Guidance - AGENTS.md plus the native adapters under .github/, .claude/, .cursor/, and .windsurf/ ensure AI coding tools operate within repository conventions.
  • Machine-Readable Architecture Maps - docs-guide/features/feature-map.mdx and docs-guide/features/architecture-map.mdx provide Mermaid diagrams that give AI systems a view of system structure.
  • Diátaxis Structure - Content separated into tutorials, how-to guides, explanations, and reference - a typology that is intrinsically machine-legible for LLM classification and retrieval.
  • n8n Automation Layer - Platform-independent automation, with a parallel GitHub Actions + n8n architecture for AI pipeline integration.
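To make the metadata-enforcement idea concrete, here is a minimal sketch in the spirit of generate-seo.js. The function names (`parseFrontmatter`, `missingKeys`) and the required-key list are illustrative assumptions, not the actual script's API:

```javascript
// Sketch: flag pages whose frontmatter lacks required SEO/AEO fields.
// REQUIRED_KEYS mirrors the fields named in the docs (an assumption).
const REQUIRED_KEYS = ["title", "description", "keywords", "og:image"];

// Parse a simple `key: value` frontmatter block from an MDX source string.
// Returns null if the page has no frontmatter at all.
function parseFrontmatter(mdx) {
  const match = mdx.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const fields = {};
  for (const line of match[1].split("\n")) {
    // Lazy key match so compound keys like `og:image` still parse.
    const m = line.match(/^(.+?):\s+(.*)$/);
    if (!m) continue;
    fields[m[1].trim()] = m[2].trim();
  }
  return fields;
}

// Report which required metadata keys a page is missing.
function missingKeys(mdx) {
  const fields = parseFrontmatter(mdx) || {};
  return REQUIRED_KEYS.filter((key) => !(key in fields));
}

// Hypothetical sample page, deliberately missing og:image.
const page = `---
title: Gateway Quickstart
description: Set up a Livepeer gateway in under ten minutes.
keywords: gateway, quickstart, livepeer
---

# Gateway Quickstart
`;

console.log(missingKeys(page)); // only "og:image" is absent in this sample
```

Run over every file in v2/pages/, a check like this turns "consistent frontmatter" from a convention into an enforceable build step.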

Discoverability - Machine-Readable Parallel Formats

Documentation discoverability for AI systems requires more than well-structured HTML. LLM crawlers, retrieval-augmented generation (RAG) systems, and AI search engines require clean, parseable content without the navigation chrome, JavaScript rendering, and layout noise of a production documentation site. v2 addresses this at two levels: platform-native and system-level.
  • [Technical] Mintlify natively serves all MDX pages as clean, parseable content accessible at predictable URLs. The MDX source files in v2/pages/ are structured to be fetched and read directly - clean semantic markup, no extraneous UI scaffolding - providing a de facto parallel readable format for bot consumers alongside the rendered human UI.
  • [Scripts] Pages index generator (operations/scripts/generate-pages-index.js) produces and validates section-level index.mdx files for all v2/pages/ folders plus a root aggregate index. This index is machine-readable and provides AI systems with a flat, navigable inventory of all documentation surfaces without requiring a site crawl.
  • [Scripts] generate-seo.js produces structured metadata (title, description, keywords, og:image) across all pages in a consistent schema - making frontmatter reliably parseable as structured data. AI systems extracting page-level metadata receive a consistent, complete signal instead of ad-hoc per-page variations.
  • [AI] llms.txt file at the documentation root provides the emerging standard entry point for LLM agent discovery - analogous to robots.txt for search engines. Structured at tools/ai-rules/llms.txt.information.md, this file directs AI systems to the most important content surfaces, canonical URL patterns, and any consumption guidance specific to Livepeer documentation.
  • [Technical] Repository AI guidance files (AGENTS.md plus the native adapter paths) make the documentation repository itself legible to AI coding assistants - enabling developers using AI tools to query and navigate the docs repo without human instruction.
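For readers unfamiliar with the emerging llms.txt convention, an entry point along these lines is typical: an H1 site name, a blockquote summary, and H2 sections of curated links. The titles and URLs below are placeholders, not the contents of the actual Livepeer file:

```text
# Livepeer Documentation

> Developer documentation for the Livepeer network. Pages are served as
> clean MDX at predictable URLs for direct machine consumption.

## Quickstarts
- [Gateway Quickstart](https://docs.example.com/gateways/quickstart): run a gateway end to end
- [Get AI to Set Up the Gateway](https://docs.example.com/gateways/quickstart/ai-prompt): agent-executable setup

## Reference
- [Gateway API](https://docs.example.com/api/gateway): OpenAPI-backed endpoint reference

## Optional
- [Glossary](https://docs.example.com/glossary): canonical terminology for retrieval
```

The "Optional" section signals lower-priority surfaces an agent can skip when context is tight.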

Readability - Clear Journeys & Implementation Items for AI Consumers

Beyond discoverability, AI-first documentation provides structured, actionable journeys explicitly designed for AI agent execution. v2 introduces multiple novel documentation patterns in this category.
  • “Get AI to Set Up the Gateway” - Dedicated quickstart (v2/pages/04_gateways/quickstart/AI-prompt.mdx) written as a structured prompt for AI agent consumption. Developers copy and paste the prompt into their AI assistant to set up a Livepeer gateway. Establishes a “copy to AI” quickstart pattern extendable across the library.
  • Agent-Oriented Structure - All quickstart and how-to pages follow Diátaxis typing for reliable agent task decomposition. Tutorials provide explicit preconditions, step sequences, verification criteria, and failure modes - critical for AI agent execution reliability.
  • Mintlify AI Assistant Context - Trained on full v2 documentation with consistent semantic structure (H1/H2/H3 hierarchies, frontmatter descriptions, component-driven MDX). High-quality, well-scoped chunks enable precise assistant retrieval. Test surface at v2/pages/00_home/test.mdx validates response quality.
  • AGENTS.md & Native Adapters - System-level context for AI coding tools. Provides repository structure, naming conventions, and governance rules automatically - improving AI consumer accuracy without human onboarding.
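The "copy to AI" quickstart pattern described above could be skeletoned as follows. Section names, commands, and thresholds here are illustrative of the pattern (preconditions, invariants, verification, failure modes), not the actual contents of AI-prompt.mdx:

```mdx
---
title: Get AI to Set Up the Gateway
description: Copy this prompt into your AI assistant to install and verify a gateway.
---

Copy everything below into your AI assistant.

## Preconditions
- Docker is installed and the daemon is running.
- An RPC endpoint URL and a funded wallet are available.

## Steps
1. Pull the gateway container image. Invariant: the image digest matches the documented release.
2. Start the gateway with the provided flags. Invariant: the process stays up for 60 seconds.

## Verification
- The health endpoint returns HTTP 200.
- Startup logs contain no ERROR-level entries.

## Failure Modes
- If the health check fails, report the last 50 log lines and stop; do not retry blindly.
```

The key property is that every step is checkable: an agent can verify its own progress instead of guessing whether a command succeeded.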

Roadmap

AI summary pages and structured FAQ: dedicated per-section AI summary pages (dot-point format, machine-optimised) and a structured Q&A FAQ surface are designed and on the documentation roadmap. These represent the next generation of AI-first features beyond the current structural foundation. The architecture to support them (consistent page structure, glossary data, automation pipelines) is already in place.

Constraints

Full AEO (Answer Engine Optimization) beyond semantic structure has not been explicitly audited. Structured data markup (JSON-LD) has not been verified across all pages. Analytics tracking at anchor level was not implemented, meaning there is no feedback signal yet for which sections LLMs are retrieving most frequently. These represent the next-generation AEO layer instead of a failure of the current implementation.
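A JSON-LD audit of the kind described above would typically verify that each page emits structured data of roughly this shape (the values are placeholders; the type and property names follow schema.org's TechArticle vocabulary):

```json
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Gateway Quickstart",
  "description": "Set up a Livepeer gateway in under ten minutes.",
  "keywords": "gateway, quickstart, livepeer",
  "dateModified": "2026-04-07"
}
```

Because the frontmatter schema already carries these fields, generating JSON-LD from it would be a mechanical step once the audit defines the target markup.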
Last modified on April 7, 2026