Why AI Agents Acting Onchain Need an Indexer
- HyperIndex is Envio's multichain blockchain indexing framework for EVM chains. It is the right data layer for AI agents acting onchain because it ships reorg-safe data, structured GraphQL output, an MCP server that exposes the docs to any agent, and a .claude/skills/ directory that auto-discovers for Cursor, Claude Code, and Codex.
- The published agentic demo documents an end-to-end flow where an agent scaffolded, configured, pushed to GitHub, and deployed a wstETH indexer on Monad Mainnet from a single prompt: 400,000 events indexed in approximately 20 seconds.
- The Envio docs MCP server exposes two tools (docs_search and docs_fetch) over Streamable HTTP at https://docs.envio.dev/mcp. It can be configured into Claude Code, Cursor, or VS Code with one command.
- HyperIndex projects scaffold a .claude/skills/ directory pre-populated with 14 skills covering config, schema, handler syntax, factory patterns, filters, multichain, performance, traces, transactions, wildcard, blocks, external calls (the Effect API), testing, and subgraph migration.
The agentic-onchain conversation in 2026 has settled into two camps. One says agents need a reconciled SQL warehouse to make sense of raw blockchain data. The other says agents need a programmable indexer that lets them act, not just analyse. Both are right about the diagnosis. Raw RPC is unworkable for an agent. The disagreement is about what replaces it.
This blog is the case for indexers, not warehouses. A SQL warehouse lets an agent ask questions. An indexing framework lets an agent build, deploy, and own new data pipelines mid-session. The first is a query tool, the second is infrastructure. Agents acting onchain need the second. HyperIndex ships it today.
Why Raw Blockchain Data Breaks Agents
An agent reading from RPC directly hits four problems within minutes.
1. Reorgs. A recent block can be reorged. An agent that wrote a record based on an unfinalized block has to either lag the chain head (and miss real-time signals) or roll its own rollback logic (and get it wrong on the next edge case). Neither is acceptable for an agent running in a production environment.
2. Schema. RPC returns logs and transactions. It does not return entities, relationships, or aggregations. The agent has to assemble the schema in memory on every query. Cross-contract state, factory pattern instances, and anything time-windowed have to be rebuilt from scratch.
3. Throughput. An agent that wants to know the last 1,000 trades on a market has to issue 1,000 eth_getLogs calls or hand-tune a paginated request. A historical sweep across a year of activity can take hours to query from RPC.
4. Multichain. Most agents that matter operate across at least two chains. Each chain is a separate RPC, separate quirks, separate rate limits. The application code that joins those RPCs is exactly the indexing code an indexer would write for you.
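To make the throughput problem concrete: before an agent can issue those 1,000 eth_getLogs calls, it has to hand-roll the pagination itself. A minimal sketch in TypeScript, assuming an illustrative 2,000-block-per-request provider limit (the helper name and the limit are ours for illustration, not any library's):

```typescript
// Hypothetical helper: split a block range into windows small enough
// that each one fits a typical provider's per-request eth_getLogs limit.
// The agent still issues one RPC call per window and stitches the
// results together itself, on every chain it cares about.
function logWindows(
  fromBlock: number,
  toBlock: number,
  maxSpan = 2000, // illustrative provider limit; varies by endpoint
): Array<[number, number]> {
  const windows: Array<[number, number]> = [];
  for (let start = fromBlock; start <= toBlock; start += maxSpan) {
    windows.push([start, Math.min(start + maxSpan - 1, toBlock)]);
  }
  return windows;
}

console.log(logWindows(0, 4999).length); // → 3 requests for 5,000 blocks
```

Multiply that by every contract, every chain, every rate limit, and every retry, and the appeal of a framework that owns this loop becomes obvious.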
The standard response to "raw RPC is unworkable for agents" is to put a SQL warehouse in front of it. That works for read-only analytical queries. It does not work for an agent that needs to spin up a new product on top of the data within a session.
What HyperIndex Provides Instead
HyperIndex addresses all four problems by being a blockchain indexing framework rather than a query layer.
- Reorg safety at the framework level. Entity state history, automatic rollback, no reorg logic required in handlers. Learn more in Indexing and Reorgs.
- Structured GraphQL output. Entities, relationships, aggregations, time-windowed views, all queryable from one endpoint. Agents read GraphQL, not raw logs.
- HyperSync historical throughput. Up to 2,000x faster than RPC. The Polymarket reference indexer synced 4,000,000,000 events in 6 days.
- Multichain in one config. 87+ chains have native HyperSync coverage, any EVM chain is accessible via standard RPC, and all of it lives in a single config.yaml.
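To make the "agents read GraphQL, not raw logs" point concrete, here is a sketch of the kind of query an agent issues against a HyperIndex endpoint. It assumes an ERC20-style schema with a Transfer entity; the entity and field names are illustrative, not the exact generated schema:

```graphql
# Illustrative query: the latest large transfers, one request,
# no log decoding. Entity and field names are assumed here.
query RecentLargeTransfers {
  Transfer(
    limit: 10
    order_by: { blockNumber: desc }
    where: { value: { _gt: "1000000000000000000" } }
  ) {
    from
    to
    value
    blockNumber
  }
}
```

One request replaces the decode-and-join work the agent would otherwise redo on every query.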
That is the read side. The act side is what makes HyperIndex an agent's infrastructure, not just an agent's data layer.
The Three Things That Make It Programmable for Agents
1. The Envio Docs MCP Server
The docs MCP server exposes the entire Envio docs site as two MCP tools:
- docs_search for semantic search across the docs
- docs_fetch to retrieve a docs page by ID
Endpoint: https://docs.envio.dev/mcp. Transport: Streamable HTTP.
Setup is one command for Claude Code:
claude mcp add --transport http envio-docs https://docs.envio.dev/mcp
Cursor and VS Code use the JSON config form on the same page. Once added, every agent session in that workspace grounds its answers about HyperIndex in the live docs rather than stale training data.
This matters because agents writing indexer code typically hallucinate APIs that do not exist. The MCP server gives the agent a fresh source of truth on every request, so it cites real HyperIndex syntax instead of guessing.
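For Cursor and VS Code, the equivalent JSON config is a small mcpServers entry. This sketch assumes the common MCP JSON shape; the exact form is on the docs page:

```json
{
  "mcpServers": {
    "envio-docs": {
      "url": "https://docs.envio.dev/mcp"
    }
  }
}
```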
2. Auto-Discovered Skills in .claude/skills/
When a HyperIndex project is initialised, it scaffolds a .claude/skills/ directory pre-populated with skill definitions. Cursor, Claude Code, and Codex all auto-discover skills from this directory at session start. The descriptions load up front, full skill content loads on demand. Confirmed in the public Polymarket reference repo's CLAUDE.md:
Skills in .claude/skills/ are auto-discovered — descriptions load at startup, full content on demand.
HyperIndex projects scaffolded with the v3 release candidate ship 14 skill definitions:
.claude/skills/
indexer-blocks/
indexer-configuration/
indexer-external-calls/ # Effect API for fetch / RPC / async I/O
indexer-factory/ # Dynamic contract registration
indexer-filters/
indexer-handlers/
indexer-multichain/
indexer-performance/
indexer-schema/
indexer-testing/ # Vitest patterns for handler tests
indexer-traces/
indexer-transactions/
indexer-wildcard/
migrate-from-subgraph/ # AssemblyScript-to-TypeScript conversion
The canonical skill set lives at github.com/enviodev/hyperindex/tree/main/packages/cli/templates/static/shared/.claude/skills and ships into every new HyperIndex project.
These skills encode the patterns that make a HyperIndex project work. A developer running Claude Code, Cursor, or Codex in a HyperIndex project does not need to teach the agent what HyperIndex is. The skills do that, scoped to the actual conventions the framework expects.
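Each skill directory holds a skill definition whose short description loads at session start. As an illustration of the shape (the frontmatter fields shown follow the general Claude skills convention; the body text here is ours, not copied from the templates):

```markdown
---
name: indexer-factory
description: Patterns for registering dynamically created contracts (factory pattern) in HyperIndex config and handlers.
---

When a factory contract deploys children at runtime, register each child
contract from the factory's creation event so its events are indexed from
the moment it exists. See the HyperIndex docs for the registration API.
```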
3. The envio-cloud CLI
The envio-cloud CLI is the GitHub-native deploy surface for HyperIndex indexers running on Envio Cloud. The three core agent-facing commands are:
- envio-cloud login to authenticate via GitHub
- envio-cloud indexer add to register a new indexer
- envio-cloud deployment status to check sync state
Every command supports -o json for parseable output. Install with npm install -g envio-cloud.
The full CLI reference is in the docs.
The deploy model is GitHub-native. An agent commits the indexer code to a GitHub repo, pushes to the envio branch (the default deploy branch), and registers the indexer with envio-cloud indexer add. The Envio GitHub App handles deployments from there. No deploy button, no dashboard step.
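Because every command supports -o json, an agent can gate its next action on parsed output instead of scraping human-readable text. A hedged sketch, assuming a deployment-status payload with a status field (the real field names are defined by the CLI, not here):

```typescript
// Hypothetical shape of `envio-cloud deployment status -o json` output.
// Field names are illustrative; consult the CLI reference for the
// actual schema.
interface DeploymentStatus {
  status: string; // e.g. "syncing" | "synced" | "failed"
  progress?: number;
}

// Decide the agent's next step from parsed CLI output.
function nextAction(raw: string): "wait" | "query" | "debug" {
  const s: DeploymentStatus = JSON.parse(raw);
  if (s.status === "synced") return "query";
  if (s.status === "failed") return "debug";
  return "wait";
}

console.log(nextAction('{"status":"synced"}')); // → query
```

The point is not this particular helper; it is that structured CLI output makes the deploy loop something an agent can reason over.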
The published agentic demo did exactly that for a wstETH indexer on Monad Mainnet: 400,000 events indexed in approximately 20 seconds. The agent read the contract, scaffolded the project from the ERC20 template, configured config.yaml for Monad, ran codegen and a type check, pushed to GitHub, and registered the indexer. End to end, no human in the loop after the first prompt. Loom walkthrough. Live deployment.
That is the act in "programmable infrastructure for agents that need to act, not just query."
A Concrete Example: The Published wstETH-on-Monad Demo
The agentic indexing blog documents the full end-to-end flow an agent ran from scaffold to live deployment. This is not a hypothetical. The live deployment and the Loom walkthrough are both public.
The commands the agent ran (from the published blog):
Step 1: Scaffold from the ERC20 template
pnpx envio@3.0.0-rc.0 init template -t erc20 -l typescript -d ./my-indexer --api-token ""
The --api-token "" makes the init non-interactive. No token is needed at scaffold time; auth is handled at deploy.
Step 2: Configure for the target chain
The agent edits config.yaml to target the wstETH contract on Monad Mainnet. From the published blog: chain ID 143, contract 0x10Aeaf63194db8d453d4D85a06E5eFE1dd0b5417, start_block: 0.
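A sketch of what that edit amounts to. The chain ID, contract address, and start_block are the values from the published blog; the surrounding keys follow the usual HyperIndex config.yaml layout, and the event signature assumes the ERC20 template:

```yaml
# Sketch of the edited config.yaml section. Only the chain ID, address,
# and start_block are from the published demo; other keys are the
# standard HyperIndex config shape.
networks:
  - id: 143 # Monad Mainnet
    start_block: 0
    contracts:
      - name: ERC20
        address:
          - 0x10Aeaf63194db8d453d4D85a06E5eFE1dd0b5417
        handler: src/EventHandlers.ts
        events:
          - event: Transfer(address indexed from, address indexed to, uint256 value)
```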
Then runs codegen and a type check:
pnpm codegen
pnpm tsc --noEmit
Step 3: Push to GitHub on the deploy branch
Envio Cloud deploys from the envio branch by default:
gh repo create wsteth-monad-indexer-demo --public
git init && git add . && git commit -m "init"
git push -u origin main
git checkout -b envio && git push -u origin envio
Step 4: Connect the Envio GitHub App to the repo
A one-time install at github.com/apps/envio-deployments. The app handles the actual deployment when commits land on the envio branch.
Step 5: Register and deploy
pnpx envio-cloud login
pnpx envio-cloud indexer add \
--name wsteth-monad-indexer-demo \
--repo wsteth-monad-indexer-demo \
--description "wstETH ERC20 indexer on Monad" \
--branch envio \
--skip-repo-check \
--yes
Step 6: Verify
pnpx envio-cloud indexer get wsteth-monad-indexer-demo {org}
pnpx envio-cloud deployment status wsteth-monad-indexer-demo <commit-hash> {org}
Once synced, the indexer is at https://envio.dev/app/{org}/{indexer-name}/{commit-hash}.
Result: 400,000 events indexed in ~20 seconds.
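Once synced, the same agent (or a human) can sanity-check the deployment with a GraphQL query against the hosted endpoint. A hedged sketch assuming the generated schema exposes an aggregate over the ERC20 template's transfer entity (the field names are illustrative):

```graphql
# Illustrative sanity check: count indexed transfer entities.
# The aggregate field name depends on the generated schema.
query TransferCount {
  Transfer_aggregate {
    aggregate {
      count
    }
  }
}
```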
Every command above is taken directly from the published blog. Every flag exists. The flow is what this whole blog is arguing for: an agent that scaffolds, configures, deploys, and verifies an indexer end to end, with no human stepping in. The Polymarket reference indexer is the production-scale reference for what this stack produces at full scale. The wstETH demo is the documented one-prompt run.
Why This Beats a SQL Warehouse for Agentic Workflows
A SQL warehouse fronted by an LLM is excellent for analysts. The agent reads a question, writes SQL, returns a number. The agent does not change the warehouse, does not deploy new ingestion, does not branch the schema.
An agent acting onchain needs the opposite. It needs to:
- Add a new contract to its data ingestion mid-session
- Branch the schema to add a new entity type for a workflow it is exploring
- Spin up a brand-new indexer for an opportunity it just discovered
- Deploy to a hosted runtime and stream results back
HyperIndex gives the agent that ability. The indexer is a project the agent owns, not a warehouse it queries. The same agent can have ten indexers running at any time, each tracking a different market. None of that is possible if the only interface is read-only SQL.
For analysts: SQL warehouses are the right tool. For agents acting on the data: an indexing framework is the right tool. Both can coexist. The case here is for the agent side, which is the side under-served by the current SQL-warehouse-plus-LLM consensus.
Get Started
- HyperIndex Quickstart with AI
- Envio docs MCP server
- Agentic indexing case (400k events, 20s)
- envio-cloud CLI reference
- Polymarket production reference
Frequently Asked Questions
Why do AI agents acting onchain need an indexer at all?
Raw RPC has four problems for an agent: reorgs, no schema, low throughput, and per-chain quirks at multichain scale. An indexer addresses all four. HyperIndex addresses them at the framework level, with reorg-safe storage, structured GraphQL output, HyperSync throughput, and a single multichain config.
What is the Envio docs MCP server?
A Model Context Protocol server at https://docs.envio.dev/mcp that exposes the Envio docs as two tools, docs_search and docs_fetch. It can be configured into Claude Code, Cursor, or VS Code with one setup command. Announcement blog: Introducing the Envio Docs MCP Server.
What skills ship with HyperIndex?
HyperIndex projects scaffold a .claude/skills/ directory that auto-discovers for Cursor, Claude Code, and Codex. The canonical templates ship 14 skill definitions covering indexer-configuration, indexer-schema, indexer-handlers, indexer-factory (dynamic contracts), indexer-external-calls (Effect API), indexer-multichain, indexer-performance, indexer-blocks, indexer-transactions, indexer-traces, indexer-filters, indexer-wildcard, indexer-testing, and migrate-from-subgraph. The full list lives at the canonical skills directory.
How fast can an AI agent deploy a HyperIndex indexer?
The agentic indexing blog documents a single-prompt flow that scaffolds, deploys, and runs an indexer covering 400,000 events on Monad in roughly 20 seconds.
Does the envio-cloud CLI support agent-driven deploys?
Yes. The CLI surface includes envio-cloud login, envio-cloud indexer add, envio-cloud deployment status, envio-cloud deployment metrics, and envio-cloud deployment promote, with -o json on any command for parseable output. Deployments are GitHub-native: an agent commits to the envio branch and the registered indexer deploys automatically.
How does HyperIndex differ from a SQL warehouse for agents?
A SQL warehouse is a read-only query layer. HyperIndex is a programmable indexer the agent can own, branch, and deploy. Both have a place. SQL warehouses suit analytical workflows. Indexers suit agents that need to act, not just query.
What chains does HyperIndex support for AI agents?
Any EVM chain. 87+ chains have native HyperSync coverage for maximum speed; any EVM chain without native HyperSync is accessible via standard RPC.
How does HyperIndex handle reorgs for agents?
At the framework level. Entity state history is persisted for unfinalized blocks. The framework rolls back automatically on reorg. No handler code is required.
Build With Envio
Envio is the fastest independently benchmarked EVM blockchain indexer for querying real-time and historical data. If you are building onchain and need indexing that keeps up with your chain, check out the docs, run the benchmarks yourself, and come talk to us about your data needs.
Stay tuned for more updates by subscribing to our newsletter, following us on X, or hopping into our Discord.
Website | X | Discord | Telegram | GitHub | YouTube | Reddit
Jordyn Laurier