HyperIndex Complete Documentation
This document contains all HyperIndex documentation consolidated into a single file for LLM consumption.
Overview
File: overview.md
HyperIndex: Fast Multichain Indexer
HyperIndex is a blazing-fast, developer-friendly multichain indexer, optimized for both local development and reliable hosted deployment. It empowers developers to effortlessly build robust backends for blockchain applications.
HyperIndex is Envio's full-featured blockchain indexing framework that transforms on-chain events into structured, queryable databases with GraphQL APIs.
HyperSync is the high-performance data engine that powers HyperIndex. It provides the raw blockchain data access layer, delivering up to 2000x faster performance than traditional RPC endpoints.
While HyperIndex gives you a complete indexing solution with schema management and event handling, HyperSync can be used directly for custom data pipelines and specialized applications.
HyperSync API Token Requirements
Starting from 21 May 2025, HyperSync (the data engine powering HyperIndex) will implement rate limits for requests without API tokens. Here's what you need to know:
- Local Development: No API token is required for local development, though requests will be rate limited.
- Self-Hosted Deployments: API tokens are required for unlimited HyperSync access in self-hosted deployments. The token can be set via the `ENVIO_API_TOKEN` environment variable in your indexer configuration, which can be read from the `.env` file in the root of your HyperIndex project.
- Hosted Service: Indexers deployed to our hosted service will have special access that doesn't require a custom API token.
- Free Usage: The service remains free to use until mid-June 2025.
- Future Pricing: From mid-June 2025 onwards, we will introduce tiered packages based on usage. Credits are calculated based on comprehensive metrics including data bandwidth, disk read operations, and other resource utilization factors. For preferred introductory pricing based on your specific use case, reach out to us on Discord.
For more details about API tokens, including how to generate and implement them, see our API Tokens documentation.
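For example, a self-hosted deployment can supply the token via a `.env` file in the project root (the value below is a placeholder, not a real token):

```
# .env
ENVIO_API_TOKEN=<your-api-token>
```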
Quick Links

- GitHub Repository
- Join our Discord Community
Getting Started
File: getting-started.md
Indexer Initialization

Prerequisites
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is required only if you plan to run your indexer locally. You can skip installing Docker if you'll only be using Envio's hosted service.
Additionally for Windows Users:

- WSL (Windows Subsystem for Linux)
Essential Files
After initialization, your indexer will contain three main files that are essential for its operation:
- `config.yaml`: Defines indexing settings such as blockchain endpoints, events to index, and advanced behaviors.
- `schema.graphql`: Defines the GraphQL schema for indexed data and its structure for efficient querying.
- `src/EventHandlers.*`: Contains the logic for processing blockchain events.

Note: The file extension for event handlers (`*.ts`, `*.js`, or `*.res`) depends on the programming language chosen (TypeScript, JavaScript, or ReScript).
You can customize your indexer by modifying these files to meet your specific requirements.
For a complete walkthrough of the process, refer to the Quickstart guide.
Contract Import
File: contract-import.md
The Quickstart enables you to instantly autogenerate a powerful indexer and start querying blockchain data in minutes. This is the fastest and easiest way to begin using HyperIndex.
Example: Autogenerate an indexer for the EigenLayer contract and index its entire history in less than 5 minutes by simply running `pnpx envio init` and providing the contract address from Etherscan.
Video Tutorials

Contract Import Methods
There are two convenient methods to import your contract:
- Block Explorer (verified contracts on supported explorers like Etherscan and Blockscout)
- Local ABI (custom or unverified contracts)
1. Block Explorer Import
This method uses a verified contract's address from a supported blockchain explorer (Etherscan, Routescan, etc.) to automatically fetch the ABI.
Steps:
a. Select the blockchain
? Which blockchain would you like to import a contract from?
> ethereum-mainnet
goerli
optimism
base
bsc
gnosis
polygon
[↑↓ to move, enter to select]
HyperIndex supports all EVM-compatible chains. If your desired chain is not listed, you can import via the local ABI method or manually adjust the `config.yaml` file after initialization.
b. Enter the contract address
? What is the address of the contract?
[Use proxy address if ABI is for a proxy implementation]
If using a proxy contract, always specify the proxy address, not the implementation address.
c. Select events to index
? Which events would you like to index?
> [x] ClaimRewards(address indexed from, address indexed reward, uint256 amount)
[x] Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)
[x] NotifyReward(address indexed from, address indexed reward, uint256 indexed epoch, uint256 amount)
[x] Withdraw(address indexed from, uint256 indexed tokenId, uint256 amount)
[space to select, → to select all, ← to deselect all]
d. Finish or add more contracts
You'll be prompted to continue adding more contracts or to complete the setup:
? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new network for same contract
Add a new contract (with a different ABI)
Generated Files & Configuration
The Quickstart automatically generates key files:
1. `config.yaml`
Automatically configured parameters include:
- Network ID
- Start Block
- Contract Name
- Contract Address
- Event Signatures
By default, all selected events are included, but you can manually adjust the file if needed. See the detailed guide on `config.yaml`.
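As an illustration, a generated `config.yaml` typically takes roughly this shape (all names and values below are placeholders, not output from a real run):

```yaml
name: my-indexer
networks:
  - id: 1                   # Network ID
    start_block: 19000000   # Start Block
    contracts:
      - name: MyContract    # Contract Name
        address: 0x0000000000000000000000000000000000000000  # Contract Address
        handler: src/EventHandlers.ts
        events:
          - event: Deposit(address indexed from, uint256 amount)  # Event Signature
```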
2. GraphQL Schema
- Entities are automatically generated for each selected event.
- Fields match the event parameters emitted.
See more details in the schema file guide.
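For example, selecting a `Deposit(address from, uint256 amount)` event would yield an entity along these lines (the entity name and fields are illustrative):

```graphql
type MyContract_Deposit {
  id: ID!
  from: String!
  amount: BigInt!
}
```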
3. Event Handlers
- Handlers are autogenerated for each event.
- Handlers create event-specific entities.
Learn more in the event handlers guide.
HyperIndex Performance Benchmarks
File: benchmarks.md
Overview
HyperIndex delivers industry-leading performance for blockchain data indexing. Independent benchmarks have consistently shown Envio's HyperIndex to be the fastest indexing solution available, with dramatic performance advantages over competitive offerings.
Recent Independent Benchmarks
The most comprehensive and up-to-date benchmarks were conducted by Sentio in April 2025 and are available in the sentio-benchmark repository. These benchmarks compare Envio's HyperIndex against other popular indexers across multiple real-world scenarios:
Key Performance Highlights
| Case | Description | Envio | Nearest Competitor | TheGraph | Ponder |
|---|---|---|---|---|---|
| LBTC Token Transfers | Event handling, No RPC calls, Write-only | 3m | 8m - 2.6x slower (Sentio) | 3h9m - 3780x slower | 1h40m - 2000x slower |
| LBTC Token with RPC calls | Event handling, RPC calls, Read-after-write | 1m | 6m - 6x slower (Sentio) | 1h3m - 63x slower | 45m - 45x slower |
| Ethereum Block Processing | 100K blocks with Metadata extraction | 7.9s | 1m - 7.5x slower (Subsquid) | 10m - 75x slower | 33m - 250x slower |
| Ethereum Transaction Gas Usage | Transaction handling, Gas calculations | 1m 26s | 7m - 4.8x slower (Subsquid) | N/A | 33m - 23x slower |
| Uniswap V2 Swap Trace Analysis | Transaction trace handling, Swap decoding | 41s | 2m - 3x slower (Subsquid) | 8m - 11x slower | N/A |
| Uniswap V2 Factory | Event handling, Pair and swap analysis | 8s | 2m - 15x slower (Subsquid) | 19m - 142x slower | 21m - 157x slower |
The independent benchmark results demonstrate that HyperIndex consistently outperforms all competitors across every tested scenario, including the most realistic real-world indexing scenario, LBTC Token with RPC calls, where HyperIndex was up to 6x faster than the nearest competitor and over 63x faster than TheGraph.
Historical Benchmarking Results
Our internal benchmarking from October 2023 showed similar performance advantages. When indexing the Uniswap V3 ETH-USDC pool contract on Ethereum Mainnet, HyperIndex achieved:
- 2.1x faster indexing than the nearest competitor
- Over 100x faster indexing than some popular alternatives
You can read the full details in our Indexer Benchmarking Results blog post.
Verify For Yourself
We encourage developers to run their own benchmarks. You can use the templates provided in the Sentio benchmark repository or our sample indexer implementations for various scenarios.
Migrate from TheGraph to HyperIndex
File: migration-guide.md
Please reach out to our team on Discord for personalized migration assistance.
Introduction
Migrating from a subgraph to HyperIndex is designed to be a developer-friendly process. HyperIndex draws strong inspiration from TheGraph's subgraph architecture, which makes the migration simple, especially with the help of coding assistants like Cursor and AI tools (don't forget to use our AI-friendly docs).
The process is simple but requires a good understanding of the underlying concepts. If you are new to HyperIndex, we recommend starting with the Getting Started guide.
Why Migrate to HyperIndex?
- Superior Performance: Up to 100x faster indexing speeds
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Advanced Features: Access to capabilities not available in other indexing solutions
- Seamless Integration: Easy integration with existing GraphQL APIs and applications
Subgraph to HyperIndex Migration Overview
Migration consists of three major steps:
1. subgraph.yaml migration
2. Schema migration (a near copy-paste)
3. Event handler migration

At any point in the migration, run `pnpm envio codegen` to verify that the `config.yaml` and `schema.graphql` files are valid, or run `pnpm dev` to verify that the indexer is running and indexing correctly.
0.5 Use npx envio init to generate a boilerplate

As a first step, we recommend using `npx envio init` to generate a boilerplate for your project. This will handle the creation of the `config.yaml` file and a basic `schema.graphql` file with generic handler functions.
1. subgraph.yaml → config.yaml

`npx envio init` will generate this for you. It's a simple configuration file conversion, effectively specifying which contracts to index, which networks to index (multiple networks can be specified with Envio), and which events from those contracts to index.
Take the following conversion as an example, where the `subgraph.yaml` file is converted to `config.yaml`. The comparison below is for the Uniswap v4 position manager subgraph.
theGraph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
  file: ./schema.graphql
features:
  - nonFatalErrors
  - grafting
dataSources:
  - kind: ethereum/contract
    name: PositionManager
    network: mainnet
    source:
      abi: PositionManager
      address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
      startBlock: 21689089
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mappings/index.ts
      entities:
        - Position
      abis:
        - name: PositionManager
          file: ./abis/PositionManager.json
      eventHandlers:
        - event: Subscription(indexed uint256,indexed address)
          handler: handleSubscription
        - event: Unsubscription(indexed uint256,indexed address)
          handler: handleUnsubscription
        - event: Transfer(indexed address,indexed address,indexed uint256)
          handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
  - id: 1
    start_block: 21689089
    contracts:
      - name: PositionManager
        address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
        handler: src/EventHandlers.ts
        events:
          - event: Subscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Transfer(address indexed from, address indexed to, uint256 indexed id)
For any potential hurdles, please refer to the Configuration File documentation.
2. Schema migration

Copy and paste the schema from the subgraph into the HyperIndex `schema.graphql` file.
Small nuanced differences:
- You can remove the `@entity` directive
- Enums
- BigDecimals
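A minimal sketch of the `@entity` change (the entity and fields below are illustrative):

```graphql
# TheGraph schema
type User @entity {
  id: ID!
  balance: BigInt!
}

# HyperIndex schema: the @entity directive is removed
type User {
  id: ID!
  balance: BigInt!
}
```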
3. Event handler migration

This consists of two parts:
- Converting AssemblyScript to TypeScript
- Converting the subgraph syntax to HyperIndex syntax
3.1 Converting AssemblyScript to TypeScript

The subgraph uses AssemblyScript to write event handlers, while HyperIndex handlers are usually written in TypeScript. Since AssemblyScript is a subset of TypeScript, it's quite simple to copy and paste the code, especially for pure functions.
3.2 Converting the subgraph syntax to HyperIndex syntax

There are some subtle differences between subgraph and HyperIndex syntax, including but not limited to the following:
- Replace `Entity.save()` with `context.Entity.set()`
- Convert handlers to async functions
- Use `await` when loading entities: `const x = await context.Entity.get(id)`
- Use dynamic contract registration to register contracts
The below code snippets can give you a basic idea of what this difference might look like.
theGraph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
const subscription = new Subscribe(event.transaction.hash.toHexString() + "-" + event.logIndex.toString());
subscription.tokenId = event.params.tokenId;
subscription.address = event.params.subscriber.toHexString();
subscription.logIndex = event.logIndex;
subscription.blockNumber = event.block.number;
subscription.position = event.params.tokenId;
subscription.save();
}
HyperIndex - eventHandler.ts
PoolManager.Subscription.handler(async ({ event, context }) => {
const entity = {
id: `${event.transaction.hash}_${event.logIndex}`,
tokenId: event.params.tokenId,
address: event.params.subscriber,
blockNumber: event.block.number,
logIndex: event.logIndex,
position: event.params.tokenId
}
context.Subscription.set(entity);
})
Extra tipsβ
HyperIndex is a powerful tool that can be used to index any contract. Some of its features go beyond what subgraphs offer, so in some cases you may want to optimize your migration further to take advantage of them. Here are some useful tips:
- Use the `field_selection` option to add additional fields to your index. Doc here: field selection
- Use the `unordered_multichain_mode` option to enable unordered multichain mode. This is the most common need for multichain indexing, but it comes with tradeoffs worth understanding. Doc here: unordered multichain mode
- Use wildcard indexing to index by event signatures rather than by contract address.
- HyperIndex uses the standard GraphQL query language, whereas the subgraph uses a custom query language. You can read about the slight nuances here. (We are working on a basic tool to help with backwards compatibility; please check in with us on Discord for its current status.)
- Loaders are a powerful feature to optimize historical sync performance. You can read more about them here.
- HyperIndex is very flexible and can also be used to index offchain data, or to send messages to a queue for fetching external data. You can further optimize that fetching by using the Effect API.
Share Your Learnings

If you discover helpful tips during your migration, we'd love contributions! Open a PR to this guide and help future developers.
Getting Help
Join Our Discord: The fastest way to get personalized help is through our Discord community.
---
## Configuration File
**File:** `Guides/configuration-file.mdx`
The `config.yaml` file defines your indexer's behavior, including which blockchain events to index, contract addresses, which networks to index, and various advanced indexing options. It is a crucial step in configuring your HyperIndex setup.
After any changes to your `config.yaml` and the schema, run:
```bash
pnpm codegen
```

This command generates necessary types and code for your event handlers.
Key Configuration Options

Contract Addresses
Set the address of the smart contract you're indexing.
Addresses can be provided in checksum format or in lowercase. Envio accepts both and normalizes them internally.
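The equivalence between the two spellings can be sketched as follows (a simple illustration, not Envio's internal normalization code):

```typescript
// Checksummed and lowercase forms of the same address.
const checksummed = "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e";
const lowercase = checksummed.toLowerCase();
// Envio normalizes internally, so either form is accepted in config.yaml.
console.log(lowercase === "0xbd216513d74c8cf14cf4747e6aaa6420ff64ee9e"); // true
```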
Single address:
address: 0xContractAddress
Multiple addresses for the same contract:
contracts:
  - name: MyContract
    address:
      - 0xAddress1
      - 0xAddress2
If using a proxy contract, always use the proxy address, not the implementation address.
Global definitions:
You can also avoid repeating addresses by using global contract definitions:
contracts:
  - name: Greeter
    abi: greeter.json
networks:
  - id: ethereum-mainnet
    contracts:
      - name: Greeter
        address: 0xProxyAddressHere
Events Selection
Define specific events to index in a human-readable format:
events:
- event: "NewGreeting(address user, string greeting)"
- event: "ClearGreeting(address user)"
By default, all events defined in the contract are indexed, but you can selectively disable them by removing them from this list.
Custom Event Names

You can assign custom names to events in `config.yaml`. This is handy when two events share the same name but have different signatures, or when you want a more descriptive name in your Envio project.
events:
  - event: Assigned(address indexed recipientId, uint256 amount, address token)
  - event: Assigned(address indexed recipientId, uint256 amount, address token, address sender)
    name: AssignedWithSender
Field Selection

To improve indexing performance and reduce credits usage, the `block` and `transaction` fields on events contain only a subset of the fields available on the blockchain. To access fields that are not provided by default, specify them using the `field_selection` option for your event:
events:
  - event: "Assigned(address indexed user, uint256 amount)"
    field_selection:
      transaction_fields:
        - transactionIndex
      block_fields:
        - timestamp
See all possible options in the Config File Reference, or rely on your IDE's autocomplete.
Global Field Selection
You can also specify fields globally for all events in the root of the config file:
field_selection:
  transaction_fields:
    - hash
    - gasUsed
  block_fields:
    - parentHash
Try to use this option sparingly, as it can cause redundant data source calls and increased credits usage.
Field Selection per Event is available from `envio@2.11.0` and above. Please upgrade your indexer to access this feature.
Rollback on Reorg

HyperIndex automatically handles blockchain reorganizations by default. To disable or customize this behavior, set the `rollback_on_reorg` flag in your `config.yaml`:
rollback_on_reorg: true # default is true
See detailed configuration options here.
Environment Variables

Since `envio@2.9.0`, environment variable interpolation is supported for flexibility and security:
networks:
  - id: ${ENVIO_CHAIN_ID:-ethereum-mainnet}
    contracts:
      - name: Greeter
        address: ${ENVIO_GREETER_ADDRESS}
Run your indexer with custom environment variables:
ENVIO_CHAIN_ID=optimism ENVIO_GREETER_ADDRESS=0xYourContractAddress pnpm dev
Interpolation syntax:
- `${ENVIO_VAR}` uses the value of `ENVIO_VAR`
- `${ENVIO_VAR:-default}` uses `ENVIO_VAR` if set, otherwise `default`
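The interpolation semantics can be sketched in TypeScript. This is only an illustration of the `${VAR}` / `${VAR:-default}` behavior, not Envio's actual implementation:

```typescript
// Resolve ${NAME} and ${NAME:-default} placeholders against an env map.
function interpolate(value: string, env: Record<string, string>): string {
  return value.replace(
    /\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}/g,
    (_, name: string, fallback: string | undefined) =>
      env[name] ?? fallback ?? ""
  );
}

// With ENVIO_CHAIN_ID unset, the default after ":-" is used.
const id = interpolate("${ENVIO_CHAIN_ID:-ethereum-mainnet}", {});
// → "ethereum-mainnet"

// With the variable set, its value wins.
const addr = interpolate("${ENVIO_GREETER_ADDRESS}", {
  ENVIO_GREETER_ADDRESS: "0xYourContractAddress",
});
// → "0xYourContractAddress"
```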
For more detailed information about environment variables, see our Environment Variables Guide.
Output Directory Path

You can customize the path where the generated directory will be placed using the `output` option:
output: ./custom/generated/path
By default, the generated directory is placed in `generated` relative to the current working directory. If set, it will be a path relative to the config file location.
This is an advanced configuration option. When using a custom output directory, you'll need to manually adjust your `.gitignore` file and project structure to match the new configuration.
Configuration Schema Reference
Explore detailed configuration schema parameters here:
- See the full, deep-linkable reference: Config Schema Reference
Schema File

File: Guides/schema-file.md

The `schema.graphql` file defines the data model for your HyperIndex indexer. Each entity type defined in this schema corresponds directly to a database table, with your event handlers responsible for creating and updating the records. HyperIndex automatically generates a GraphQL API based on these entity types, allowing easy access to the indexed data.
Scalar Types
Scalar types represent basic data types and map directly to JavaScript, TypeScript, or ReScript types.
| GraphQL Scalar | Description | JavaScript/TypeScript | ReScript |
|---|---|---|---|
| ID | Unique identifier | string | string |
| String | UTF-8 character sequence | string | string |
| Int | Signed 32-bit integer | number | int |
| Float | Signed floating-point number | number | float |
| Boolean | true or false | boolean | bool |
| Bytes | UTF-8 character sequence (hex prefixed with 0x) | string | string |
| BigInt | Signed integer (int256 in Solidity) | bigint | bigint |
| BigDecimal | Arbitrary-size floating-point | BigDecimal (imported) | BigDecimal.t |
| Timestamp | Timestamp with timezone | Date | Js.Date.t |
| Json | JSON object (from envio@2.20) | Json | Js.Json.t |
Learn more about GraphQL scalars here.
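For instance, an entity combining several of these scalars might look like this (the entity and fields are illustrative):

```graphql
type Transfer {
  id: ID!
  txHash: Bytes!
  amount: BigInt!
  priceUsd: BigDecimal!
  blockNumber: Int!
  success: Boolean!
}
```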
Enum Types
Enums allow fields to accept only a predefined set of values.
Example:
enum AccountType {
ADMIN
USER
}
type User {
id: ID!
balance: Int!
accountType: AccountType!
}
Enums translate to string unions (TypeScript/JavaScript) or polymorphic variants (ReScript):
TypeScript Example:
let user = {
id: event.params.id,
balance: event.params.balance,
accountType: "USER", // enum as string
};
ReScript Example:
let user: Types.userEntity = {
id: event.params.id,
balance: event.params.balance,
accountType: #USER, // polymorphic variant
};
Field Indexing (@index)
Add an index to a field for optimized queries and loader performance:
type Token {
id: ID!
tokenId: BigInt!
collection: NftCollection!
owner: User! @index
}
- All `id` fields and fields referenced via `@derivedFrom` are indexed automatically.
Generating Types

Once you've defined your schema, run this command to generate the entity types that can be accessed in your event handlers:
pnpm envio codegen
You're now ready to define powerful schemas and efficiently query your indexed data with HyperIndex!
Event Handlers
File: Guides/event-handlers.mdx
Registration

A handler is a function that receives blockchain data, processes it, and inserts it into the database. You can register handlers in the file defined in the `handler` field of your `config.yaml` file. By default this is the `src/EventHandlers.*` file.
// TypeScript (contract and event names below are placeholders)
import { MyContract } from "generated";
MyContract.MyEvent.handler(async ({ event, context }) => {
  // Your logic here
});

// JavaScript
const { MyContract } = require("generated");
MyContract.MyEvent.handler(async ({ event, context }) => {
  // Your logic here
});

// ReScript
Handlers.MyContract.MyEvent.handler(async ({event, context}) => {
  // Your logic here
})
The `generated` module contains code and types based on the `config.yaml` and `schema.graphql` files. Update it by running the `pnpm codegen` command whenever you change these files.
Basic Example

Here's a handler example for the `NewGreeting` event. It belongs to the `Greeter` contract from our beginners Greeter Tutorial:
import { Greeter, User } from "generated";
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity: User = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
const { Greeter } = require("generated");
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
open Types
// Handler for the NewGreeting event
Handlers.Greeter.NewGreeting.handler(async ({event, context}) => {
let userId = event.params.user->Address.toString // The id for the User entity
let latestGreeting = event.params.greeting // The greeting string that was added
let maybeCurrentUserEntity = await context.user.get(userId) // Optional User entity that may already exist
// Update or create a new User entity
let userEntity: Entities.User.t = switch maybeCurrentUserEntity {
| Some(existingUserEntity) => {
id: userId,
latestGreeting,
numberOfGreetings: existingUserEntity.numberOfGreetings + 1,
greetings: existingUserEntity.greetings->Belt.Array.concat([latestGreeting]),
}
| None => {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
}
}
context.user.set(userEntity) // Set the User entity in the DB
})
Preload Optimization

Important! Preload optimization makes your handlers run twice.

Starting from `envio@2.27`, all new indexers are created with preload optimization enabled by default.
This optimization enables HyperIndex to efficiently preload entities used by handlers through batched database queries, while ensuring events are processed synchronously in their original order. When combined with the Effect API for external calls, this feature delivers performance improvements of multiple orders of magnitude compared to other indexing solutions.
Read more in the dedicated guides:
- How Preload Optimization Works
- Double-Run Footgun
- Effect API
- Migrating from Loaders (recommended)
Advanced Use Cases

HyperIndex provides many features to help you build more powerful and efficient indexers. There's likely one for your use case:
- Handle Factory Contracts with Dynamic Contract Registration (with nested factories support)
- Perform external calls to decide which contract address to register using Async Contract Register
- Index all ERC20 token transfers with Wildcard Indexing
- Use Topic Filtering to ignore irrelevant events
- With multiple filters for a single event
- With different filters per network
- With filters by dynamically registered contract addresses (e.g. index all ERC20 transfers to/from your contract)
- Access Contract State directly from handlers
- Perform external calls from handlers by following the IPFS Integration guide
Context Object

The handler `context` provides methods to interact with entities stored in the database.

Retrieving Entities

Retrieve entities from the database using `context.Entity.get`, where `Entity` is the name of the entity you want to retrieve, as defined in your `schema.graphql` file.
await context.Entity.get(entityId);
It returns the `Entity` object, or `undefined` if the entity doesn't exist.
Starting from `envio@2.22.0`, you can use `context.Entity.getOrThrow` to conveniently throw an error if the entity doesn't exist:
const pool = await context.Pool.getOrThrow(poolId);
// Will throw: Entity 'Pool' with ID '...' is expected to exist.
// Or you can pass a custom message as a second argument:
const pool = await context.Pool.getOrThrow(
poolId,
`Pool with ID ${poolId} is expected.`
);
Or use `context.Entity.getOrCreate` to automatically create an entity with default values if it doesn't exist:
const pool = await context.Pool.getOrCreate({
id: poolId,
totalValueLockedETH: 0n,
});
// Which is equivalent to:
let pool = await context.Pool.get(poolId);
if (!pool) {
pool = {
id: poolId,
totalValueLockedETH: 0n,
};
context.Pool.set(pool);
}
Retrieving Entities by Field
ERC20.Approval.handler(async ({ event, context }) => {
// Find all approvals for this specific owner
const currentOwnerApprovals = await context.Approval.getWhere.owner_id.eq(
event.params.owner
);
// Process all the owner's approvals efficiently
for (const approval of currentOwnerApprovals) {
// Process each approval
}
});
You can also use `context.Entity.getWhere.field.gt` to get all entities where the field value is greater than the given value.
Important:
- This feature requires Preload Optimization to be enabled:
  - Either by setting `preload_handlers: true` in your `config.yaml` file
  - Or by using Loaders (deprecated)
- Works with any field that:
  - Is used in a relationship with the `@derivedFrom` directive
  - Has an `@index` directive
- Potential memory issues: very large `getWhere` queries might cause memory overflows.
- Tip: put the `getWhere` query at the top of the handler to make sure it's preloaded. Read more about how Preload Optimization works.
Modifying Entities

Use `context.Entity.set` to create or update an entity:
context.Entity.set({
id: entityId,
...otherEntityFields,
});
Both the `context.Entity.set` and `context.Entity.deleteUnsafe` methods use the In-Memory Storage under the hood and don't require `await` in front of them.
Referencing Linked Entities

When your schema defines a field that links to another entity type, set the relationship using a `_id` field containing the referenced entity's `id`. You are storing the ID, not the full entity object.
type A {
id: ID!
b: B!
}
type B {
id: ID!
}
context.A.set({
id: aId,
b_id: bId, // ID of the linked B entity
});
HyperIndex automatically resolves `A.b` based on the stored `b_id` when querying the API.
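Assuming the default Hasura-backed GraphQL API, querying `A` with its linked `B` resolved might look like this (the query shape and ID are illustrative):

```graphql
query {
  A_by_pk(id: "some-a-id") {
    id
    b {
      id
    }
  }
}
```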
Deleting Entities (Unsafe)
To delete an entity:
context.Entity.deleteUnsafe(entityId);
The `deleteUnsafe` method is experimental and unsafe. You need to manually handle all entity references after deletion to maintain database consistency.
Updating Specific Entity Fields
Use the following approach to update specific fields in an existing entity:
const pool = await context.Pool.get(poolId);
if (pool) {
context.Pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
}
let pool = await context.pool.get(poolId);
pool->Option.forEach(pool => {
context.pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
});
context.log

The context object also provides a logger that you can use to log messages to the console. Compared to `console.log` calls, these logs will be displayed on our Hosted Service runtime logs page.
Read more in the Logging Guide.
context.isPreload

If you need to skip the preload phase for CPU-intensive operations, or to perform certain actions only once per event, you can use `context.isPreload`.
```typescript
ERC20.Transfer.handler(async ({ event, context }) => {
  // Load existing data efficiently
  const [sender, receiver] = await Promise.all([
    context.Account.getOrThrow(event.params.from),
    context.Account.getOrThrow(event.params.to),
  ]);

  // Skip expensive operations during preload
  if (context.isPreload) {
    return;
  }

  // CPU-intensive calculations only happen once
  const complexCalculation = performExpensiveOperation(event.params.value); // Placeholder function for demonstration

  // Create or update sender account
  context.Account.set({
    id: event.params.from,
    balance: sender.balance - event.params.value,
    computedValue: complexCalculation,
  });

  // Create or update receiver account
  context.Account.set({
    id: event.params.to,
    balance: receiver.balance + event.params.value,
  });
});
```
Note: While `context.isPreload` can be useful for bypassing double execution, it's recommended to use the Effect API for external calls instead, as it provides automatic batching and memoization benefits.
External Calls

The Envio indexer runs on the Node.js runtime, which means you can use `fetch` or any other library, such as `viem`, to perform external calls from your handlers.

Note that with Preload Optimization, all handlers run twice. With the Effect API, this behavior makes your external calls run in parallel while keeping the processed data consistent.
Check out our IPFS Integration, Accessing Contract State and Effect API guides for more information.
context.effect

Define an effect and use it in your handler with `context.effect`:
```typescript
// Define an effect that will be called from the handler.
const getMetadata = experimental_createEffect(
  {
    name: "getMetadata",
    input: S.string,
    output: {
      description: S.string,
      value: S.bigint,
    },
    cache: true, // Optionally persist the results in the database
  },
  async ({ input }) => {
    const response = await fetch(`https://api.example.com/metadata/${input}`);
    const data = await response.json();
    return {
      description: data.description,
      value: data.value,
    };
  }
);
```
```typescript
ERC20.Transfer.handler(async ({ event, context }) => {
  // Load metadata for the token.
  // This will be executed in parallel for all events in the batch.
  // The call is automatically memoized, so you don't need to worry about duplicate requests.
  const sender = await context.effect(getMetadata, event.params.from);

  // Process the transfer with the pre-loaded data
});
```
Performance Considerations
For performance optimization and best practices, refer to:
- Benchmarking
- Preload Optimization
These guides offer detailed recommendations on optimizing entity loading and indexing performance.
Block Handlers (new in v2.29)

File: Guides/block-handlers.md

Run logic on every block or on a fixed interval.
Multichain Indexing
File: Advanced/multichain-indexing.mdx
Understanding Multichain Indexing
Multichain indexing allows you to monitor and process events from contracts deployed across multiple blockchain networks within a single indexer instance. This capability is essential for applications that:
- Track the same contract deployed across multiple networks
- Need to aggregate data from different chains into a unified view
- Monitor cross-chain interactions or state
How It Works
With multichain indexing, events from contracts deployed on multiple chains can be used to create and update entities defined in your schema file. Your indexer will process events from all configured networks, maintaining proper synchronization across chains.
Configuration Requirements

To implement multichain indexing, you need to:

- Populate the `networks` section in your `config.yaml` file for each chain
- Specify the contracts to index from each network
- Create event handlers for the specified contracts
Real-World Example: Uniswap V4 Multichain Indexer
For a comprehensive, production-ready example of multichain indexing, we recommend exploring our Uniswap V4 Multichain Indexer. This official reference implementation:
- Indexes Uniswap V4 deployments across 10 different blockchain networks
- Powers the official v4.xyz interface with real-time data
- Demonstrates best practices for high-performance multichain indexing
- Provides a complete, production-grade implementation you can study and adapt
The Uniswap V4 indexer showcases how to effectively structure a multichain indexer for a complex DeFi protocol, handling high volumes of data across multiple networks while maintaining performance and reliability.
Config File Structure for Multichain Indexing

The `config.yaml` file for multichain indexing contains three key sections:
- Global contract definitions - Define contracts, ABIs, and events once
- Network-specific configurations - Specify chain IDs and starting blocks
- Contract instances - Reference global contracts with network-specific addresses
```yaml
# Example structure (simplified)
contracts:
  - name: ExampleContract
    abi_file_path: ./abis/example-abi.json
    handler: ./src/EventHandlers.js
    events:
      - event: ExampleEvent
networks:
  - id: 1 # Ethereum Mainnet
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x1234..."
  - id: 137 # Polygon
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x5678..."
```
Key Configuration Concepts

- The global `contracts` section defines the contract interface, ABI, handlers, and events once
- The `networks` section lists each blockchain network you want to index
- Each network entry references the global contract and provides the network-specific address
- This structure allows you to reuse the same handler functions and event definitions across networks
Best Practice: When developing multichain indexers, append the chain ID to entity IDs to avoid collisions. For example: `user-1` for Ethereum and `user-137` for Polygon.
Multichain Event Ordering
When indexing multiple chains, you have two approaches for handling event ordering:
Unordered Multichain Mode
Unordered mode is recommended for most applications.
The indexer processes events as soon as they're available from each chain, without waiting for other chains. This "Unordered Multichain Mode" provides better performance and lower latency.
- Events will still be processed in order within each individual chain
- Events across different chains may be processed out of order
- Processing happens as soon as events are emitted, reducing latency
- You avoid waiting for the slowest chain's block time
This mode is ideal for most applications, especially when:
- Operations on your entities are commutative (order doesn't matter)
- Entities from different networks never interact with each other
- Processing speed is more important than guaranteed cross-chain ordering
How to Enable Unordered Mode

In your config.yaml:

```yaml
unordered_multichain_mode: true
networks: ...
```
Ordered Mode

Ordered mode is currently the default, but unordered mode will become the default in a future release. If you don't need strict deterministic ordering of events across all chains, use unordered mode.
If your application requires strict deterministic ordering of events across all chains, you can enable "Ordered Mode". In this mode, the indexer synchronizes event processing across all chains, ensuring that events are processed in the exact same order in every indexer run, regardless of which chain they came from.
When to Use Ordered Mode
Use ordered mode only when:
- The exact ordering of operations across different chains is critical to your application logic
- You need guaranteed deterministic results across all indexer runs
- You're willing to accept higher latency for cross-chain consistency
Cross-chain ordering is particularly important for applications like:
- Bridge applications: Where messages or assets must be processed on one chain before being processed on another chain
- Cross-chain governance: Where decisions made on one chain affect operations on another chain
- Multi-chain financial applications: Where the sequence of transactions across chains affects accounting or risk calculations
- Data consistency systems: Where the state must be consistent across multiple chains in a specific order
Technical Details
With ordered mode enabled:
- The indexer must wait for new blocks from every network before advancing
- There is increased latency between when an event is emitted and when it's processed
- Processing speed is limited by the block interval of the slowest network
- Events are guaranteed to be processed in the same order in every indexer run
Cross-Chain Ordering Preservation
Ordered mode ensures that the temporal relationship between events on different chains is preserved. This is achieved by:
- Global timestamp ordering: Events are ordered based on their block timestamps across all chains
- Deterministic processing: The same sequence of events will be processed in the same order every time
The primary trade-off is increased latency at the head of the chain. Since the indexer must wait for blocks from all chains to determine the correct ordering, the processing of recent events is delayed by the slowest chain's block time. For example, if Chain A has 2-second blocks and Chain B has 15-second blocks, the indexer will process events at the slower 15-second rate to maintain proper ordering.
This latency is acceptable for applications where correct cross-chain ordering is more important than real-time updates. For bridge applications in particular, this ordering preservation can be critical for security and correctness, as it ensures that deposit events on one chain are always processed before the corresponding withdrawal events on another chain.
Best Practices for Multichain Indexing

1. Entity ID Namespacing
Always namespace your entity IDs with the chain ID to prevent collisions between networks. This ensures that entities from different networks remain distinct.
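A minimal helper for building namespaced IDs might look like the sketch below. The helper name is illustrative, not part of the generated code; HyperIndex handlers can read the source network's chain ID from the event they receive.

```typescript
// Build a collision-free entity ID by prefixing the chain ID.
// makeEntityId is a hypothetical helper, not part of the generated API.
function makeEntityId(chainId: number, address: string): string {
  return `${chainId}-${address.toLowerCase()}`;
}

// The same address on two networks yields two distinct entity IDs:
const mainnetId = makeEntityId(1, "0xAbC123"); // "1-0xabc123"
const polygonId = makeEntityId(137, "0xAbC123"); // "137-0xabc123"
```

Lower-casing the address also guards against the same account appearing under two IDs due to checksum casing differences.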
2. Error Handling
Implement robust error handling for network-specific issues. A failure on one chain shouldn't prevent indexing from continuing on other chains.
3. Testing
- Test your indexer with realistic scenarios across all networks
- Use testnet deployments for initial validation
- Verify entity updates work correctly across chains
4. Performance Considerations
- Use unordered mode when appropriate for better performance
- Consider your indexing frequency based on the block times of each chain
- Monitor resource usage, as indexing multiple chains increases load
Troubleshooting Common Issues
- Different Network Speeds: If one network is significantly slower than others, consider using unordered mode to prevent bottlenecks.
- Entity Conflicts: If you see unexpected entity updates, verify that your entity IDs are properly namespaced with chain IDs.
- Memory Usage: If your indexer uses excessive memory, consider optimizing your entity structure and implementing pagination in your queries.
Next Steps
- Explore our Uniswap V4 Multichain Indexer for a complete implementation
- Review performance optimization techniques for your indexer
Testing

File: Guides/testing.mdx

Introduction
Envio comes with a built-in testing library that enables developers to thoroughly validate their indexer behavior without requiring deployment or interaction with actual blockchains. This library is specifically crafted to:
- Mock database states: Create and manipulate in-memory representations of your database
- Simulate blockchain events: Generate test events that mimic real blockchain activity
- Assert event handler logic: Verify that your handlers correctly process events and update entities
- Test complete workflows: Validate the entire process from event creation to database updates
The testing library provides helper functions that integrate with any JavaScript-based testing framework (like Mocha, Jest, or others), giving you flexibility in how you structure and run your tests.
Learn by doing

If you prefer to explore by example, the Greeter template includes complete tests that demonstrate best practices:

- Generate the `greeter` template in TypeScript using the Envio CLI:

```shell
pnpx envio init template -l typescript -d greeter -t greeter -n greeter
```

- Run the tests:

```shell
pnpm test
```

- See the `test/test.ts` file to understand how the tests are written.
Writing tests

Test Library Design
The testing library follows key design principles that make it effective for testing HyperIndex indexers:
- Immutable database: The mock database is immutable, with each operation returning a new instance. This makes it robust and easy to test against previous states.
- Chainable operations: Operations can be chained together to build complex test scenarios.
- Realistic simulations: Mock events closely mirror real blockchain events, allowing you to test your handlers in conditions similar to production.
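The immutability principle can be illustrated with a standalone sketch. This is not the actual TestHelpers API, just a toy model of its behavior: every write returns a new store instance, so earlier states remain available for assertions.

```typescript
// A toy immutable store mirroring MockDb's design: set() returns a
// new instance and never mutates the state it was called on.
type Entity = { id: string; [key: string]: unknown };

class ImmutableStore {
  constructor(
    private readonly entities: ReadonlyMap<string, Entity> = new Map()
  ) {}

  set(entity: Entity): ImmutableStore {
    const next = new Map(this.entities);
    next.set(entity.id, entity);
    return new ImmutableStore(next); // new instance; old state preserved
  }

  get(id: string): Entity | undefined {
    return this.entities.get(id);
  }
}

const db0 = new ImmutableStore();
const db1 = db0.set({ id: "user-1", greetings: 1 });
// db0 still has no "user-1"; db1 does. A test can assert against both.
```

Because each operation yields a fresh instance, chained operations naturally build a sequence of snapshots you can compare before and after processing an event.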
Typical Test Flow
Most tests will follow this general pattern:
- Initialize the mock database (empty or with predefined entities)
- Create a mock event with test parameters
- Process the mock event through your handler(s)
- Assert that the resulting database state matches your expectations
This flow allows you to verify that your event handlers correctly create, update, or modify entities in response to blockchain events.
Assertions
The testing library works with any JavaScript assertion library. In the examples, we use Node.js's built-in assert module, but you can also use popular alternatives like chai or expect.
Common assertion patterns include:
- `assert.deepEqual(expectedEntity, actualEntity)` - Check that entire entities match
- `assert.equal(expectedValue, actualEntity.property)` - Verify specific property values
- `assert.ok(updatedMockDb.entities.Entity.get(id))` - Ensure an entity exists
Troubleshooting

If you encounter issues with your tests, check the following:

Environment and Setup
- Verify your Envio version: The testing library is available in versions `v0.0.26` and above.

```shell
pnpm envio -v
```

- Ensure you've generated testing code: Always run codegen after updating your schema or config.

```shell
pnpm codegen
```

- Check your imports: Make sure you're importing the correct files.

```javascript
const assert = require("assert");
const { UserEntity, TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;
```

In ReScript:

```rescript
open RescriptMocha
open Mocha
open Belt
```
Common Issues and Solutions
- "Cannot read properties of undefined": This usually means an entity wasn't found in the database. Verify your IDs match exactly and that the entity exists before accessing it.

- "Type mismatch": Ensure that your entity structure matches what's defined in your schema. Type issues are common when working with numeric types (like `BigInt` vs `number`).

- ReScript-specific setup: If using ReScript, remember to update your `rescript.json` file:

```json
{
  "sources": [
    { "dir": "src", "subdirs": true },
    { "dir": "test", "subdirs": true }
  ],
  "bs-dependencies": ["rescript-mocha"]
}
```

- Debug database state: If you're having trouble with assertions, add a debug log to see the exact state of your entities:

```javascript
console.log(
  JSON.stringify(updatedMockDb.entities.User.get(userAddress), null, 2)
);
```
If you encounter any issues or have questions, please reach out to us on Discord.
Navigating Hasura

File: Guides/navigating-hasura.md

This page is only relevant when testing on a local machine or using a self-hosted version of Envio that uses Hasura.

Introduction
Hasura is a GraphQL engine that provides a web interface for interacting with your indexed blockchain data. When running HyperIndex locally, Hasura serves as your primary tool for:
- Querying indexed data via GraphQL
- Visualizing database tables and relationships
- Testing API endpoints before integration with your frontend
- Monitoring the indexing process
This guide explains how to navigate the Hasura dashboard to effectively work with your indexed data.
Accessing Hasura Console
When running HyperIndex locally, Hasura Console is automatically available at:
http://localhost:8080
You can access this URL in any web browser to open the Hasura console.
When prompted for authentication, use the password: testing
Key Dashboard Areas

The Hasura dashboard has several tabs, but we'll focus on the two most important ones for HyperIndex developers:

API Tab
The API tab lets you execute GraphQL queries and mutations on indexed data. It serves as a GraphQL playground for testing your API calls.
Features

- Explorer Panel: The left panel shows all available entities defined in your `schema.graphql` file
- Query Builder: The center area is where you write and execute GraphQL queries
- Results Panel: The right panel displays query results in JSON format
Available Entities

By default, you'll see:

- All entities defined in your `schema.graphql` file
- `dynamic_contracts` (for dynamically added contracts)
- `raw_events` table (Note: This table is no longer populated by default to improve performance. To enable storage of raw events, add `raw_events: true` to your `config.yaml` file as described in the Raw Events Storage section)
Example Query

Try a simple query to test your indexer:

```graphql
query MyQuery {
  User(limit: 5) {
    id
    latestGreeting
    numberOfGreetings
  }
}
```
Click the "Play" button to execute the query and see the results.
For more advanced GraphQL query options, see Hasura's quickstart guide.
Data Tab

The Data tab provides direct access to your database tables and relationships, allowing you to view the actual indexed data.

Features
- Schema Browser: View all tables in the database (left panel)
- Table Data: Examine and browse data within each table
- Relationship Viewer: See how different entities are connected
Working with Tables
- Select any table from the "public" schema to view its contents
- Use the "Browse Rows" tab to see all data in that table
- Check the "Insert Row" tab to manually add data (useful for testing)
- View the "Modify" tab to see the table structure
Verifying Indexed Data

To confirm your indexer is working correctly:

- Check entity tables to ensure they contain the expected data
- Look at the `db_write_timestamp` column values to confirm when data was last updated
- Newer timestamps indicate fresh data; older timestamps might indicate stale data from previous runs
Common Tasks

Checking Indexing Status

To verify your indexer is actively processing new blocks:

- Go to the Data tab
- Select any entity table
- Check the latest `db_write_timestamp` values
- Monitor these values over time to ensure they're updating

(Note: the TUI is also an easy way to monitor this.)
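If you'd rather script this freshness check than eyeball timestamps, a minimal sketch could look like this (the function and threshold are illustrative; tune the window to your chains' block times):

```typescript
// Decide whether indexed data looks stale by comparing the latest
// db_write_timestamp against a freshness window.
function isStale(lastWriteMs: number, nowMs: number, thresholdMs: number): boolean {
  return nowMs - lastWriteMs > thresholdMs;
}

// Example: a write 10 minutes ago fails a 5-minute freshness window.
const stale = isStale(Date.now() - 10 * 60_000, Date.now(), 5 * 60_000); // true
```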
Troubleshooting Missing Data

If expected data isn't appearing:

- Check if you've enabled raw events storage (`raw_events: true` in `config.yaml`) and then examine the `raw_events` table to confirm events were captured
- Verify your event handlers are correctly processing these events
- Examine your GraphQL queries to ensure they match your schema structure
- Check console logs for any processing errors
Resetting Indexed Data
When testing, you may need to reset your database:
- Stop your indexer
- Reset your database (refer to the development guide for commands)
- Restart your indexer to begin processing from the configured start block
Best Practices
- Regular Verification: Periodically check both the API and Data tabs to ensure your indexer is functioning correctly
- Query Testing: Test complex queries in the API tab before implementing them in your application
- Schema Validation: Use the Data tab to verify that relationships between entities are correctly established
- Performance Monitoring: Watch for tables that grow unusually large, which might indicate inefficient indexing
Aggregations: local vs hosted (avoid the foot-gun)
When developing locally with Hasura, you may notice that GraphQL aggregate helpers (for example, count/sum-style aggregations) are available. On the hosted service, these aggregate endpoints are intentionally not exposed. Aggregations over large datasets can be very slow and unpredictable in production.
The recommended approach is to compute and store aggregates at indexing time, not at query time. In practice this means maintaining counters, sums, and other rollups in entities as part of your event handlers, and then querying those precomputed values.
Example: indexing-time aggregation

schema.graphql:

```graphql
# singleton; you hardcode the id and load it in and out
type GlobalState {
  id: ID! # "global-state"
  count: Int!
}

type Token {
  id: ID! # incremental number
  description: String!
}
```
EventHandler.ts:

```typescript
const globalStateId = "global-state";

NftContract.Mint.handler(async ({ event, context }) => {
  const globalState = await context.GlobalState.get(globalStateId);
  if (!globalState) {
    context.log.error("global state doesn't exist");
    return;
  }
  const incrementedTokenId = globalState.count + 1;
  context.Token.set({
    id: incrementedTokenId.toString(), // entity IDs are strings
    description: event.params.description,
  });
  context.GlobalState.set({
    ...globalState,
    count: incrementedTokenId,
  });
});
```
This pattern scales: you can keep per-entity counters, rolling windows (daily/hourly entities keyed by date), and top-N caches by updating entities as events arrive. Your queries then read these precomputed values directly, avoiding expensive runtime aggregations.
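For the rolling-window case mentioned above, the usual trick is to derive a stable bucket ID from the block timestamp; a sketch (the entity and helper names are illustrative):

```typescript
// Derive a daily bucket ID from a block timestamp in seconds.
// A rollup entity (e.g. a hypothetical "DailyVolume") would use this
// as its primary key, so every event on the same UTC day updates
// the same row instead of triggering a query-time aggregation.
function dailyBucketId(blockTimestamp: number): string {
  const SECONDS_PER_DAY = 86_400;
  const dayIndex = Math.floor(blockTimestamp / SECONDS_PER_DAY);
  return `day-${dayIndex}`;
}

// Two timestamps on the same UTC day map to the same entity ID:
dailyBucketId(1_700_000_000); // "day-19675"
dailyBucketId(1_700_000_100); // "day-19675"
```

The same idea gives hourly buckets (divide by 3600) or per-pool daily buckets (prefix the pool ID).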
Exceptional cases
If runtime aggregate queries are a hard requirement for your use case, please reach out and we can evaluate options for your project on the hosted service. Contact us on Discord.
Disable Hasura for Self-Hosted Indexers

Starting from `envio@2.26.0`, it's possible to disable the Hasura integration for self-hosted indexers. To do so, set the `ENVIO_HASURA` environment variable to `false`.
Environment Variables
File: Guides/environment-variables.md
Environment variables are a crucial part of configuring your Envio indexer. They allow you to manage sensitive information and configuration settings without hardcoding them in your codebase.
Naming Convention

All environment variables used by Envio must be prefixed with `ENVIO_`. This naming convention:
- Prevents conflicts with other environment variables
- Makes it clear which variables are used by the Envio indexer
- Ensures consistency across different environments
Envio API Token (required for HyperSync)

To ensure continued access to HyperSync, set an Envio API token in your environment.

- Use `ENVIO_API_TOKEN` to provide your token at runtime
- See the API Tokens guide for how to generate a token: API Tokens
Envio-specific environment variables

The following variables are used by HyperIndex:

- `ENVIO_API_TOKEN`: API token for HyperSync access (required for continued access in self-hosted deployments)
- `ENVIO_HASURA`: Set to `false` to disable Hasura integration for self-hosted indexers
- `MAX_BATCH_SIZE`: Size of the in-memory batch before writing to the database. Default: `5000`. Set to `1` to help isolate which event or data save is causing Postgres write errors.
- `ENVIO_PG_PORT`: Port for the Postgres service used by HyperIndex during local development
- `ENVIO_PG_PASSWORD`: Postgres password (self-hosted)
- `ENVIO_PG_USER`: Postgres username (self-hosted)
- `ENVIO_PG_DATABASE`: Postgres database name (self-hosted)
- `ENVIO_PG_PUBLIC_SCHEMA`: Postgres schema name override for the generated/public schema
Example Environment Variables

Here are some commonly used environment variables:

```shell
# Envio API Token (required for continued HyperSync access)
ENVIO_API_TOKEN=your-secret-token

# Blockchain RPC URL
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key

# Starting block number for indexing
ENVIO_START_BLOCK=12345678

# Coingecko API key
ENVIO_COINGECKO_API_KEY=api-key

# In-memory batch size (default 5000)
MAX_BATCH_SIZE=1
```
Setting Environment Variables

Local Development

For local development, you can set environment variables in several ways:

- Using a `.env` file in your project root:

```shell
# .env
ENVIO_API_TOKEN=your-secret-token
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
ENVIO_START_BLOCK=12345678
```

- Directly in your terminal:

```shell
export ENVIO_API_TOKEN=your-secret-token
export ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
```
Hosted Service

When using the Envio Hosted Service, you can configure environment variables through the Envio platform's dashboard. Remember that all variables must still be prefixed with `ENVIO_`.
For more information about environment variables in the hosted service, see the Hosted Service documentation.
Configuration File

For use of environment variables in your configuration file, read the docs here: Configuration File.

Best Practices
- Never commit sensitive values: Always use environment variables for sensitive information like API keys and database credentials
- Never commit or use private keys: Never commit or use private keys in your codebase
- Use descriptive names: Make your environment variable names clear and descriptive
- Document your variables: Keep a list of required environment variables in your project's README
- Use different values: Use different environment variables for development, staging, and production environments
- Validate required variables: Check that all required environment variables are set before starting your indexer
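The validation step can be as simple as the following sketch, run before starting the indexer (the variable list and helper name are examples; adapt them to your project):

```typescript
// Fail fast if required environment variables are missing or empty.
// In a real indexer you would pass process.env as the second argument.
function assertRequiredEnv(
  required: string[],
  env: Record<string, string | undefined>
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(", ")}`
    );
  }
}

// Example with a stubbed environment (does not throw):
assertRequiredEnv(["ENVIO_API_TOKEN"], { ENVIO_API_TOKEN: "example-token" });
```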
Troubleshooting

If you encounter issues with environment variables:

- Verify that all required variables are set
- Check that variables are prefixed with `ENVIO_`
- Ensure there are no typos in variable names
- Confirm that the values are correctly formatted
- Confirm that the values are correctly formatted
For more help, see our Troubleshooting Guide.
Uniswap V4 Multi-chain Indexer
File: Examples/example-uniswap-v4.md
The following indexer example is a reference implementation and can serve as a starting point for applications with similar logic.
This official Uniswap V4 indexer is a comprehensive implementation for the Uniswap V4 protocol using Envio HyperIndex. This is the same indexer that powers the v4.xyz website, providing real-time data for the Uniswap V4 interface.
Key Features
- Multi-chain Support: Indexes Uniswap V4 deployments across 10 different blockchain networks in real-time
- Complete Pool Metrics: Tracks pool statistics including volume, TVL, fees, and other critical metrics
- Swap Analysis: Monitors swap events and liquidity changes with high precision
- Hook Integration: In-progress support for Uniswap V4 hooks and their events
- Production Ready: Powers the official v4.xyz interface with production-grade reliability
- Ultra-Fast Syncing: Processes massive amounts of blockchain data significantly faster than alternative indexing solutions, reducing sync times from days to minutes
Technical Overview
This indexer is built using TypeScript and provides a unified GraphQL API for accessing Uniswap V4 data across all supported networks. The architecture is designed to handle high throughput and maintain consistency across different blockchain networks.
Performance Advantages
The Envio-powered Uniswap V4 indexer offers extraordinary performance benefits:
- 10-100x Faster Sync Times: Leveraging Envio's HyperSync technology, this indexer can process historical blockchain data orders of magnitude faster than traditional solutions
- Real-time Updates: Maintains low latency for new blocks while efficiently managing historical data
Use Cases
- Power analytics dashboards and trading interfaces
- Monitor DeFi positions and protocol health
- Track historical performance of Uniswap V4 pools
- Build custom notifications and alerts
- Analyze hook interactions and their impact
Getting Started
To use this indexer, you can:
- Clone the repository
- Follow the installation instructions in the README
- Run the indexer locally or deploy it to a production environment
- Access indexed data through the GraphQL API
Contribution
The Uniswap V4 indexer is actively maintained and welcomes contributions from the community. If you'd like to contribute or report issues, please visit the GitHub repository.
This is an official reference implementation that powers the v4.xyz website. While extensively tested in production, remember to validate the data for your specific use case. The indexer is continuously updated to support the latest Uniswap V4 features and optimizations.
Sablier Protocol Indexers

File: Examples/example-sablier.md

The following indexers serve as exceptional reference implementations for the Sablier protocol, showcasing professional development practices and efficient multi-chain data processing.

Overview
Sablier is a token streaming protocol that enables real-time finance on the blockchain, allowing tokens to be streamed continuously over time. These official Sablier indexers track streaming activity across 18 different EVM-compatible chains, providing comprehensive data through a unified GraphQL API.
Professional Indexer Suite
Sablier maintains three specialized indexers, each targeting a specific part of their protocol:
1. Lockup Indexer
Tracks the core Sablier lockup contracts, which handle the streaming of tokens with fixed durations and amounts. This indexer provides data about stream creation, cancellation, and withdrawal events. Used primarily for the vesting functionality of Sablier.
2. Flow Indexer
Monitors Sablier's advanced streaming functionality, allowing for dynamic flow rates and more complex streaming scenarios. This indexer captures stream modifications, batch operations, and other flow-specific events. Powers the payments side of the Sablier application.
3. Merkle Indexer
Tracks Sablier's Merkle distribution system, which enables efficient batch stream creation using cryptographic proofs. This indexer provides data about batch creations, claims, and related activities. Used for both Airstreams and Instant Airdrops functionality.
Key Features
- Comprehensive Multi-chain Support: Indexes data across 18 different EVM chains
- Professionally Maintained: Used in production by the Sablier team and their partners
- Extensive Test Coverage: Includes comprehensive testing to ensure data accuracy
- Optimized Performance: Implements efficient data processing techniques
- Well-Documented: Clear code structure with extensive comments
- Backward Compatibility: Carefully manages schema evolution and contract upgrades
- Cross-chain Architecture: Envio promotes efficient cross-chain indexing where all networks share the same indexer endpoint
Best Practices Showcase
These indexers demonstrate several development best practices:
- Modular Code Structure: Well-organized code with clear separation of concerns
- Consistent Naming Conventions: Professional and consistent naming throughout
- Efficient Event Handling: Optimized processing of blockchain events
- Comprehensive Entity Relationships: Well-designed data model with proper relationships
- Thorough Input Validation: Robust error handling and input validation
- Detailed Changelogs: Documentation of breaking changes and migrations
- Handler/Loader Pattern: Envio indexers use an optimized pattern with loaders to pre-fetch entities and handlers to process them
Getting Started
To use these indexers as a reference for your own development:
- Clone the specific repository based on your needs:
- Lockup Indexer
- Flow Indexer
- Merkle Indexer
- Review the file structure and implementation patterns
- Examine the event handlers for efficient data processing techniques
- Study the schema design for effective entity modeling
For complete API documentation and usage examples, see:
- Sablier API Overview
- Implementation Caveats
These are official indexers maintained by the Sablier team and represent production-quality implementations. They serve as excellent examples of professional indexer development and are regularly updated to support the latest protocol features.
Envio Hosted Service

File: Hosted_Service/hosted-service.md

Envio offers a fully managed hosting solution for your indexers, providing all the infrastructure, scaling, and monitoring needed to run production-grade indexers without operational overhead.

Key Features
- Git-based Deployments: Similar to Vercel, deploy your indexer by simply pushing to a designated deployment branch
- Zero Infrastructure Management: We handle all the servers, databases, and scaling for you
- Version Management: Switch between different deployed versions of your indexer with one click
- Built-in Monitoring: Track logs and sync status
- Alerting: Get email alerts when indexing errors occur
- GraphQL API: Access your indexed data through a performant GraphQL endpoint
- Multi-chain Support: Deploy indexers that track multiple networks from a single codebase
Deployment Model
The Envio Hosted Service connects directly to your GitHub repository:
- Connect your GitHub repository to the Envio platform
- Configure your deployment settings (branch, config file location, etc.)
- Push changes to your deployment branch to trigger automatic deployments
- View deployment logs and status in real-time
- Switch between versions or rollback if needed
You can view and manage your hosted indexers in the Envio Explorer.
Deployment Options
Envio provides flexibility in how you deploy and host your indexers:
- Fully Managed Hosted Service: Let Envio handle everything (recommended for most users)
- Self-Hosting: Run your indexer on your own infrastructure with our Docker container
For self-hosting information and instructions, see our Self-Hosting Guide. For a complete list of CLI commands to control your indexer, see the CLI Commands documentation.
Deploying Your Indexer
File: Hosted_Service/hosted-service-deployment.md
The Envio Hosted Service provides a seamless git-based deployment workflow, similar to modern platforms like Vercel. This enables you to easily deploy, update, and manage your indexers through your normal development workflow.
Initial Setup
- Log in with GitHub: Visit the Envio App and authenticate with your GitHub account
- Select an Organization: Choose your personal account or any organization you have access to
- Install the Envio Deployments GitHub App: Grant access to the repositories you want to deploy
Configuring Your Indexer
- Add a New Indexer: Click "Add Indexer" in the dashboard
- Connect to Repository: Select the repository containing your indexer code
- Configure Deployment Settings:
- Specify the config file location
- Set the root directory (important for monorepos)
- Choose the deployment branch
Multiple Indexers Per Repository
You can deploy multiple indexers from a single repository by configuring them with different:
- Config file paths
- Root directories
- Deployment branches
Monorepo Configuration
If you're working in a monorepo, ensure all your imports are contained within your indexer directory to avoid deployment issues.
Deployment Workflow
- Create a Deployment Branch: Set up the branch you specified during configuration
- Deploy via Git: Push your code to the deployment branch
- Monitor Deployment: Track the progress of your deployment in the Envio dashboard
- Version Management: Once deployed, you can:
  - View detailed logs
  - Switch between different deployed versions
  - Rollback to previous versions if needed
Continuous Deployment Best Practices
For a robust deployment workflow, we recommend:
- Protected Branches: Set up branch protection rules for your deployment branch
- Pull Request Workflow: Instead of pushing directly to the deployment branch, use pull requests from feature branches
- CI Integration: Add tests to your CI pipeline to validate indexer functionality before merging to the deployment branch
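A minimal CI job for the pull-request workflow above could look like the following GitHub Actions sketch. The workflow file name, the `deployment` branch name, and the `pnpm test` script are assumptions — adapt them to your repository:

```yaml
# .github/workflows/indexer-ci.yml (illustrative)
name: Indexer CI
on:
  pull_request:
    branches: [deployment] # assumes "deployment" is your deployment branch
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: pnpm install
      - run: pnpm codegen # regenerate indexer code from config.yaml
      - run: pnpm test    # run your indexer's tests before merging
```

With branch protection requiring this check, only validated indexer code reaches the deployment branch.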
Version Management
Each deployment creates a new version of your indexer that you can access through the dashboard. You can:
- Compare different versions
- Switch the active version with one click
- Maintain multiple versions for testing or staging purposes
Deployment Limits
These can vary depending on the plan you select. In general, development plans are allowed:
- 3 indexers per organization
- 3 deployments per indexer
Need to free up space? You can delete old deployments through the Envio dashboard.
Hosted Service Billing
File: Hosted_Service/hosted-service-billing.mdx
Pricing & Billing
Envio offers flexible pricing options to meet the needs of projects at different stages of development.
Pricing Structure
We have both development tiers and production tiers to suit a variety of users:
- Development Tier: Our development tier is completely free and designed to be user-friendly, making it easy to get started with Envio without any cost barriers.
- Production Tiers: For projects ready for production, we offer scalable options that grow with your needs.
For detailed pricing information and plan comparisons, please visit the Envio Pricing Page.
Self-Hosting Option
For users who prefer to manage their own infrastructure, we support self-hosting your indexer as well. For your convenience, there is a Dockerfile in the root of the generated folder.
For more information on self-hosting, see our Self-Hosting Guide.
Not sure which option is right for your project? Book a call with our team to discuss your specific needs.
Self-Hosting Your Envio Indexer
File: Hosted_Service/self-hosting.md
This documentation page is actively being improved. Check back regularly for updates and additional information.
While Envio offers a fully managed Hosted Service, you may prefer to run your indexer on your own infrastructure. This guide covers everything you need to know about self-hosting Envio indexers.
We deeply appreciate users who choose our hosted service, as it directly supports our team and helps us continue developing and improving Envio's technology. If your use case allows for it, please consider the hosted option.
Why Self-Host?
Self-hosting gives you:
- Complete Control: Manage your own infrastructure and configurations
- Data Sovereignty: Keep all indexed data within your own systems
Prerequisites
Before self-hosting, ensure you have:
- Docker installed on your host machine
- Sufficient storage for blockchain data and the indexer database
- Adequate CPU and memory resources (requirements vary based on chains and indexing complexity)
- Required HyperSync and/or RPC endpoints
- Envio API token (`ENVIO_API_TOKEN`) for HyperSync access, required for continued access. See API Tokens.
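The token can be supplied via a `.env` file in the root of your HyperIndex project. A minimal sketch (the value below is a placeholder — use your own token from the Envio dashboard):

```
# .env in the root of your HyperIndex project
ENVIO_API_TOKEN=your-api-token-here
```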
Getting Started
In general, if you want to self-host, you will likely use a Docker setup.
For a working example, check out the local-docker-example repository.
It contains a minimal `Dockerfile` and `docker-compose.yaml` that configure the Envio indexer together with PostgreSQL and Hasura.
Configuration Explained
The compose file in that repository sets up three main services:
- PostgreSQL Database (`envio-postgres`): Stores your indexed data
- Hasura GraphQL Engine (`graphql-engine`): Provides the GraphQL API for querying your data
- Envio Indexer (`envio-indexer`): The core indexing service that processes blockchain data
Environment Variables
The configuration uses environment variables with sensible defaults. For production, you should customize:
- Envio API token (`ENVIO_API_TOKEN`)
- Database credentials (`ENVIO_PG_PASSWORD`, `ENVIO_PG_USER`, etc.)
- Hasura admin secret (`HASURA_GRAPHQL_ADMIN_SECRET`)
- Resource limits based on your workload requirements
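As a sketch, these variables might be wired into the compose services like this. The exact service names and variable wiring follow the local-docker-example repository; the values here are placeholders, not recommendations:

```yaml
# Illustrative compose override — adapt to the example repository's layout
services:
  envio-postgres:
    environment:
      POSTGRES_PASSWORD: ${ENVIO_PG_PASSWORD:?set-a-strong-password}
  envio-indexer:
    environment:
      ENVIO_API_TOKEN: ${ENVIO_API_TOKEN}
      ENVIO_PG_PASSWORD: ${ENVIO_PG_PASSWORD}
  graphql-engine:
    environment:
      HASURA_GRAPHQL_ADMIN_SECRET: ${HASURA_GRAPHQL_ADMIN_SECRET:?set-a-secret}
```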
Getting Help
If you encounter issues with self-hosting:
- Check the Envio GitHub repository for known issues
- Join the Envio Discord community for community support
For most production use cases, we recommend using the Envio Hosted Service to benefit from automatic scaling, monitoring, and maintenance.
Indexing Optimism Bridge Deposits
File: Tutorials/tutorial-op-bridge-deposits.md
Introduction
This tutorial will guide you through indexing Optimism Standard Bridge deposits in under 5 minutes using Envio HyperIndex's no-code contract import feature.
The Optimism Standard Bridge enables the movement of ETH and ERC-20 tokens between Ethereum and Optimism. We'll index bridge deposit events by extracting the `DepositFinalized` logs emitted by the bridge contracts on both networks.
Prerequisites
Before starting, ensure you have the following installed:
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.
Step 1: Initialize Your Indexer
- Open your terminal in an empty directory and run:

```bash
pnpx envio init
```
- Name your indexer (we'll use "optimism-bridge-indexer" in this example)
- Choose your preferred language (TypeScript, JavaScript, or ReScript)
Step 2: Import the Optimism Bridge Contract
- Select Contract Import → Block Explorer → Optimism
- Enter the Optimism bridge contract address: `0x4200000000000000000000000000000000000010` (View on Optimistic Etherscan)
- Select the `DepositFinalized` event:
  - Navigate using arrow keys (↑↓)
  - Press spacebar to select the event
Tip: You can select multiple events to index simultaneously.
Step 3: Add the Ethereum Mainnet Bridge Contract
- When prompted, select Add a new contract
- Choose Block Explorer → Ethereum Mainnet
- Enter the Ethereum Mainnet gateway contract address: `0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1` (View on Etherscan)
- Select the `ETHDepositInitiated` event
- When finished adding contracts, select I'm finished
Step 4: Start Your Indexer
- If you have any running indexers, stop them first:

```bash
pnpm envio stop
```

- Start your new indexer:

```bash
pnpm dev
```
This command:
- Starts the required Docker containers
- Sets up your database
- Launches the indexing process
- Opens the Hasura GraphQL interface
Step 5: Understanding the Generated Code
Let's examine the key files that Envio generated:
1. `config.yaml`
This configuration file defines:
- Networks to index (Optimism and Ethereum Mainnet)
- Starting blocks for each network
- Contract addresses and ABIs
- Events to track
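For this tutorial's two networks, the generated file should look roughly like the sketch below. Treat it as illustrative: the contract names, start blocks, and full event signatures will match whatever the importer actually generated from the explorer ABIs:

```yaml
# config.yaml (abridged, illustrative)
name: optimism-bridge-indexer
networks:
  - id: 10 # Optimism
    start_block: 0
    contracts:
      - name: L2StandardBridge
        address: 0x4200000000000000000000000000000000000010
        handler: src/EventHandlers.ts
        events:
          - event: DepositFinalized(...) # full signature generated from the ABI
  - id: 1 # Ethereum Mainnet
    start_block: 0
    contracts:
      - name: L1StandardBridge
        address: 0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1
        handler: src/EventHandlers.ts
        events:
          - event: ETHDepositInitiated(...)
```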
2. `schema.graphql`
This schema defines the data structures for our selected events:
- Entity types based on event data
- Field types matching the event parameters
- Relationships between entities (if applicable)
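As a sketch, an entity generated for the `DepositFinalized` event might look like the following. The entity and field names are illustrative — the importer derives them from the contract name and the event's parameters:

```graphql
# schema.graphql (illustrative)
type L2StandardBridge_DepositFinalized {
  id: ID!
  l1Token: String!
  l2Token: String!
  from: String!
  to: String!
  amount: BigInt!
}
```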
3. `src/EventHandlers.ts`
This file contains the business logic for processing events:
- Functions that execute when events are detected
- Data transformation and storage logic
- Entity creation and relationship management
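The generated handlers follow a register-and-set pattern: register a function per event, then persist entities through the handler context. A non-runnable sketch of that shape — the contract, entity, and field names are illustrative and depend on your generated code:

```typescript
// Illustrative sketch — names depend on what the importer generated
import { L2StandardBridge } from "generated";

L2StandardBridge.DepositFinalized.handler(async ({ event, context }) => {
  // Persist one entity per event occurrence, keyed by a unique event id
  context.L2StandardBridge_DepositFinalized.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    from: event.params.from,
    to: event.params.to,
    amount: event.params.amount,
  });
});
```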
Step 6: Exploring Your Indexed Data
Now you can interact with your indexed data:
Accessing Hasura
- Open Hasura at http://localhost:8080
- When prompted, enter the admin password: `testing`
Monitoring Indexing Progress
- Click the Data tab in the top navigation
- Find the `_events_sync_state` table to check indexing progress
- Observe which blocks are currently being processed
Note: Thanks to Envio's HyperSync, indexing happens significantly faster than with standard RPC methods.
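Once rows appear, you can also query them from Hasura's API tab. A sample query, assuming the import generated an entity named `L2StandardBridge_DepositFinalized` (the actual name depends on the contract name you chose during import):

```graphql
query RecentDeposits {
  L2StandardBridge_DepositFinalized(limit: 5, order_by: { amount: desc }) {
    id
    from
    to
    amount
  }
}
```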