HyperIndex Complete Documentation
This document contains all HyperIndex documentation consolidated into a single file for LLM consumption.
Overview
File: overview.md
HyperIndex: Fast Multichain Indexer
HyperIndex is a blazing-fast, developer-friendly multichain indexer, optimized for both local development and reliable hosted deployment. It empowers developers to effortlessly build robust backends for blockchain applications.
HyperIndex is Envio's full-featured blockchain indexing framework that transforms on-chain events into structured, queryable databases with GraphQL APIs.
HyperSync is the high-performance data engine that powers HyperIndex. It provides the raw blockchain data access layer, delivering up to 2000x faster performance than traditional RPC endpoints.
While HyperIndex gives you a complete indexing solution with schema management and event handling, HyperSync can be used directly for custom data pipelines and specialized applications.
HyperSync API Token Requirements
Starting from 21 May 2025, HyperSync (the data engine powering HyperIndex) will implement rate limits for requests without API tokens. Here's what you need to know:
- Local Development: No API token is required for local development, though requests will be rate limited.
- Self-Hosted Deployments: API tokens are required for unlimited HyperSync access in self-hosted deployments. The token can be set via the `ENVIO_API_TOKEN` environment variable in your indexer configuration, which can be read from the `.env` file in the root of your HyperIndex project.
- Hosted Service: Indexers deployed to our hosted service will have special access that doesn't require a custom API token.
- Free Usage: The service remains free to use until mid-June 2025.
- Future Pricing: From mid-June 2025 onwards, we will introduce tiered packages based on usage. Credits are calculated based on comprehensive metrics including data bandwidth, disk read operations, and other resource utilization factors. For preferred introductory pricing based on your specific use case, reach out to us on Discord.
For more details about API tokens, including how to generate and implement them, see our API Tokens documentation.
Quick Links
- GitHub Repository
- Join our Discord Community
Getting Started
File: getting-started.md
Indexer Initialization
Prerequisites
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is required only if you plan to run your indexer locally. You can skip installing Docker if you'll only be using Envio's hosted service.
Additionally for Windows users:
- WSL (Windows Subsystem for Linux)
Essential Files
After initialization, your indexer will contain three main files that are essential for its operation:
- `config.yaml`: Defines indexing settings such as blockchain endpoints, events to index, and advanced behaviors.
- `schema.graphql`: Defines the GraphQL schema for indexed data and its structure for efficient querying.
- `src/EventHandlers.*`: Contains the logic for processing blockchain events.
Note: The file extension for event handlers (`*.ts`, `*.js`, or `*.res`) depends on the programming language chosen (TypeScript, JavaScript, or ReScript).
You can customize your indexer by modifying these files to meet your specific requirements.
For a complete walkthrough of the process, refer to the Quickstart guide.
Contract Import
File: contract-import.md
The Quickstart enables you to instantly autogenerate a powerful indexer and start querying blockchain data in minutes. This is the fastest and easiest way to begin using HyperIndex.
Example: Autogenerate an indexer for the Eigenlayer contract and index its entire history in less than 5 minutes by simply running `pnpx envio init` and providing the contract address from Etherscan.
Contract Import Methods
There are two convenient methods to import your contract:
- Block Explorer (verified contracts on supported explorers like Etherscan and Blockscout)
- Local ABI (custom or unverified contracts)
1. Block Explorer Import
This method uses a verified contract's address from a supported blockchain explorer (Etherscan, Routescan, etc.) to automatically fetch the ABI.
Steps:
a. Select the blockchain
? Which blockchain would you like to import a contract from?
> ethereum-mainnet
goerli
optimism
base
bsc
gnosis
polygon
[↑↓ to move, enter to select]
HyperIndex supports all EVM-compatible chains. If your desired chain is not listed, you can import via the local ABI method or manually adjust the `config.yaml` file after initialization.
b. Enter the contract address
? What is the address of the contract?
[Use proxy address if ABI is for a proxy implementation]
If using a proxy contract, always specify the proxy address, not the implementation address.
c. Select events to index
? Which events would you like to index?
> [x] ClaimRewards(address indexed from, address indexed reward, uint256 amount)
[x] Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)
[x] NotifyReward(address indexed from, address indexed reward, uint256 indexed epoch, uint256 amount)
[x] Withdraw(address indexed from, uint256 indexed tokenId, uint256 amount)
[space to select, → to select all, ← to deselect all]
d. Finish or add more contracts
You'll be prompted to continue adding more contracts or to complete the setup:
? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new network for same contract
Add a new contract (with a different ABI)
Generated Files & Configuration
The Quickstart automatically generates key files:
1. config.yaml
Automatically configured parameters include:
- Network ID
- Start Block
- Contract Name
- Contract Address
- Event Signatures
By default, all selected events are included, but you can manually adjust the file if needed. See the detailed guide on `config.yaml`.
2. GraphQL Schema
- Entities are automatically generated for each selected event.
- Fields match the event parameters emitted.
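For example, a sketch of what the autogenerated entity for the `Deposit` event above might look like (the exact entity naming can differ by version, so treat this as illustrative):
```graphql
type Pool_Deposit {
  id: ID!
  from: String!
  tokenId: BigInt!
  amount: BigInt!
}
```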
See more details in the schema file guide.
3. Event Handlers
- Handlers are autogenerated for each event.
- Handlers create event-specific entities.
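An autogenerated handler typically looks like the sketch below (the `Pool` contract name and entity shape are assumptions to match the event above):
```typescript
Pool.Deposit.handler(async ({ event, context }) => {
  // One entity per event occurrence, keyed to be unique across chains and blocks
  context.Pool_Deposit.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    from: event.params.from,
    tokenId: event.params.tokenId,
    amount: event.params.amount,
  });
});
```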
Learn more in the event handlers guide.
HyperIndex Performance Benchmarks
File: benchmarks.md
Overview
HyperIndex delivers industry-leading performance for blockchain data indexing. Independent benchmarks have consistently shown Envio's HyperIndex to be the fastest indexing solution available, with dramatic performance advantages over competitive offerings.
Recent Independent Benchmarks
The most comprehensive and up-to-date benchmarks were conducted by Sentio in April 2025 and are available in the sentio-benchmark repository. These benchmarks compare Envio's HyperIndex against other popular indexers across multiple real-world scenarios:
Key Performance Highlights
| Case | Description | Envio | Nearest Competitor | TheGraph | Ponder |
|---|---|---|---|---|---|
| LBTC Token Transfers | Event handling, no RPC calls, write-only | 3m | 8m - 2.6x slower (Sentio) | 3h9m - 63x slower | 1h40m - 33x slower |
| LBTC Token with RPC calls | Event handling, RPC calls, read-after-write | 1m | 6m - 6x slower (Sentio) | 1h3m - 63x slower | 45m - 45x slower |
| Ethereum Block Processing | 100K blocks with metadata extraction | 7.9s | 1m - 7.5x slower (Subsquid) | 10m - 75x slower | 33m - 250x slower |
| Ethereum Transaction Gas Usage | Transaction handling, gas calculations | 1m26s | 7m - 4.8x slower (Subsquid) | N/A | 33m - 23x slower |
| Uniswap V2 Swap Trace Analysis | Transaction trace handling, swap decoding | 41s | 2m - 3x slower (Subsquid) | 8m - 11x slower | N/A |
| Uniswap V2 Factory | Event handling, pair and swap analysis | 8s | 2m - 15x slower (Subsquid) | 19m - 142x slower | 21m - 157x slower |
The independent benchmark results demonstrate that HyperIndex consistently outperforms all competitors across every tested scenario. This includes the most realistic real-world indexing scenario (LBTC Token with RPC calls), where HyperIndex was 6x faster than the nearest competitor and over 63x faster than TheGraph.
Historical Benchmarking Results
Our internal benchmarking from October 2023 showed similar performance advantages. When indexing the Uniswap V3 ETH-USDC pool contract on Ethereum Mainnet, HyperIndex achieved:
- 2.1x faster indexing than the nearest competitor
- Over 100x faster indexing than some popular alternatives
You can read the full details in our Indexer Benchmarking Results blog post.
Verify For Yourself
We encourage developers to run their own benchmarks. You can use the templates provided in the Sentio benchmark repository or our sample indexer implementations for various scenarios.
Migrate from TheGraph to HyperIndex
File: migration-guide.md
Please reach out to our team on Discord for personalized migration assistance.
Introduction
Migrating from a subgraph to HyperIndex is designed to be a developer-friendly process. HyperIndex draws strong inspiration from TheGraph's subgraph architecture, which makes the migration simple, especially with the help of coding assistants like Cursor and AI tools (don't forget to use our AI-friendly docs).
The process is simple but requires a good understanding of the underlying concepts. If you are new to HyperIndex, we recommend starting with the Getting Started guide.
Why Migrate to HyperIndex?
- Superior Performance: Up to 100x faster indexing speeds
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Advanced Features: Access to capabilities not available in other indexing solutions
- Seamless Integration: Easy integration with existing GraphQL APIs and applications
Subgraph to HyperIndex Migration Overview
Migration consists of three major steps:
- Subgraph.yaml migration
- Schema migration - near copy paste
- Event handler migration
At any point in the migration, run `pnpm envio codegen` to verify that the `config.yaml` and `schema.graphql` files are valid, or run `pnpm dev` to verify that the indexer is running and indexing correctly.
0.5 Use `npx envio init` to generate a boilerplate
As a first step, we recommend using `npx envio init` to generate a boilerplate for your project. This will handle the creation of the `config.yaml` file and a basic `schema.graphql` file with generic handler functions.
1. `subgraph.yaml` → `config.yaml`
`npx envio init` will generate this for you. It's a simple configuration file conversion: effectively specifying which contracts to index, which networks to index (multiple networks can be specified with Envio), and which events from those contracts to index.
Take the following conversion as an example, where the `subgraph.yaml` file is converted to `config.yaml`. The comparison below is for the Uniswap v4 subgraph.
theGraph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
  file: ./schema.graphql
features:
  - nonFatalErrors
  - grafting
dataSources:
  - kind: ethereum/contract
    name: PositionManager
    network: mainnet
    source:
      abi: PositionManager
      address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
      startBlock: 21689089
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mappings/index.ts
      entities:
        - Position
      abis:
        - name: PositionManager
          file: ./abis/PositionManager.json
      eventHandlers:
        - event: Subscription(indexed uint256,indexed address)
          handler: handleSubscription
        - event: Unsubscription(indexed uint256,indexed address)
          handler: handleUnsubscription
        - event: Transfer(indexed address,indexed address,indexed uint256)
          handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
  - id: 1
    start_block: 21689089
    contracts:
      - name: PositionManager
        address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
        handler: src/EventHandlers.ts
        events:
          - event: Subscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Transfer(address indexed from, address indexed to, uint256 indexed id)
For any potential hurdles, please refer to the Configuration File documentation.
2. Schema migration
Copy and paste the schema from the subgraph into the HyperIndex `schema.graphql` file.
Small nuance differences:
- You can remove the `@entity` directive (see the sketch below)
- Enums
- BigDecimals
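For illustration, a minimal before/after sketch (the `Position` entity is hypothetical):
```graphql
# Subgraph schema
type Position @entity {
  id: ID!
  owner: Bytes!
}

# HyperIndex schema: the same entity without the @entity directive
type Position {
  id: ID!
  owner: Bytes!
}
```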
3. Event handler migration
This consists of two parts:
- Converting AssemblyScript to TypeScript
- Converting the subgraph syntax to HyperIndex syntax
3.1 Converting AssemblyScript to TypeScript
The subgraph uses AssemblyScript to write event handlers, while HyperIndex handlers are usually written in TypeScript. Since AssemblyScript is a subset of TypeScript, copying the code over is quite simple, especially for pure functions.
3.2 Converting the subgraph syntax to HyperIndex syntax
There are some subtle differences between the subgraph and HyperIndex syntax, including but not limited to the following:
- Replace `Entity.save()` with `context.Entity.set()`
- Convert handlers to async functions
- Use `await` when loading entities: `const x = await context.Entity.get(id)`
- Use dynamic contract registration to register contracts
The below code snippets can give you a basic idea of what this difference might look like.
theGraph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
const subscription = new Subscribe(event.transaction.hash.toHexString() + event.logIndex.toString());
subscription.tokenId = event.params.tokenId;
subscription.address = event.params.subscriber.toHexString();
subscription.logIndex = event.logIndex;
subscription.blockNumber = event.block.number;
subscription.position = event.params.tokenId;
subscription.save();
}
HyperIndex - eventHandler.ts
PoolManager.Subscription.handler(async ({ event, context }) => {
  const entity = {
    id: `${event.transaction.hash}_${event.logIndex}`,
    tokenId: event.params.tokenId,
    address: event.params.subscriber,
    blockNumber: event.block.number,
    logIndex: event.logIndex,
    position: event.params.tokenId,
  };
  context.Subscription.set(entity);
});
Extra tips
HyperIndex is a powerful tool that can be used to index any contract. Some of its features go beyond what subgraphs offer, so in some cases you may want to optimize your migration further to take advantage of them. Here are some useful tips:
- Use the `field_selection` option to add additional fields to your index. Docs: field selection.
- Use the `unordered_multichain_mode` option to enable unordered multichain mode; this is the most common need for multichain indexing, but it comes with tradeoffs worth understanding. Docs: unordered multichain mode.
- Use wildcard indexing to index by event signatures rather than by contract address.
- HyperIndex uses the standard GraphQL query language, whereas the subgraph uses a custom query language. You can read about the slight nuances here. (We are working on a basic tool to help with backwards compatibility; please check in with us on Discord for its current status.)
- Loaders are a powerful feature to optimize historical sync performance. You can read more about them here.
- HyperIndex is very flexible and can be used to index offchain data too, or to send messages to a queue for fetching external data; you can further optimize that fetching by using the Effect API.
Share Your Learnings
If you discover helpful tips during your migration, we'd love contributions! Open a PR to this guide and help future developers.
Getting Help
Join our Discord: the fastest way to get personalized help is through our Discord community.
---
## Configuration File
**File:** `Guides/configuration-file.mdx`
The `config.yaml` file defines your indexer's behavior, including which blockchain events to index, contract addresses, which networks to index, and various advanced indexing options. It is a crucial step in configuring your HyperIndex setup.
After any changes to your `config.yaml` and the schema, run:
```bash
pnpm codegen
```
This command generates necessary types and code for your event handlers.
Key Configuration Options
Contract Addresses
Set the address of the smart contract you're indexing.
Addresses can be provided in checksum format or in lowercase. Envio accepts both and normalizes them internally.
Single address:
address: 0xContractAddress
Multiple addresses for the same contract:
contracts:
  - name: MyContract
    address:
      - 0xAddress1
      - 0xAddress2
If using a proxy contract, always use the proxy address, not the implementation address.
Global definitions:
You can also avoid repeating addresses by using global contract definitions:
contracts:
  - name: Greeter
    abi: greeter.json
networks:
  - id: ethereum-mainnet
    contracts:
      - name: Greeter
        address: 0xProxyAddressHere
Events Selection
Define specific events to index in a human-readable format:
events:
  - event: "NewGreeting(address user, string greeting)"
  - event: "ClearGreeting(address user)"
By default, all events defined in the contract are indexed, but you can selectively disable them by removing them from this list.
Custom Event Names
You can assign custom names to events in `config.yaml`. This is handy when two events share the same name but have different signatures, or when you want a more descriptive name in your Envio project.
events:
  - event: Assigned(address indexed recipientId, uint256 amount, address token)
  - event: Assigned(address indexed recipientId, uint256 amount, address token, address sender)
    name: AssignedWithSender
Field Selection
To improve indexing performance and reduce credit usage, the `block` and `transaction` fields on events contain only a subset of the fields available on the blockchain.
To access fields that are not provided by default, specify them using the `field_selection` option for your event:
events:
  - event: "Assigned(address indexed user, uint256 amount)"
    field_selection:
      transaction_fields:
        - transactionIndex
      block_fields:
        - timestamp
See all possible options in the Interactive Schema Explorer, or use your IDE's autocomplete.
Global Field Selection
You can also specify fields globally for all events in the root of the config file:
field_selection:
  transaction_fields:
    - hash
    - gasUsed
  block_fields:
    - parentHash
Use this option sparingly, as it can cause redundant data-source calls and increased credit usage.
Per-event Field Selection is available from `envio@2.11.0` and above. Please upgrade your indexer to access this feature.
Rollback on Reorg
HyperIndex automatically handles blockchain reorganizations by default. To disable or customize this behavior, set the `rollback_on_reorg` flag in your `config.yaml`:
rollback_on_reorg: true # default is true
See detailed configuration options here.
Environment Variables
Since `envio@2.9.0`, environment variable interpolation is supported for flexibility and security:
networks:
  - id: ${ENVIO_CHAIN_ID:-ethereum-mainnet}
    contracts:
      - name: Greeter
        address: ${ENVIO_GREETER_ADDRESS}
Run your indexer with custom environment variables:
ENVIO_CHAIN_ID=optimism ENVIO_GREETER_ADDRESS=0xYourContractAddress pnpm dev
Interpolation syntax:
- `${ENVIO_VAR}`: use the value of `ENVIO_VAR`
- `${ENVIO_VAR:-default}`: use `ENVIO_VAR` if set, otherwise use `default`
For more detailed information about environment variables, see our Environment Variables Guide.
Output Directory Path
You can customize the path where the generated directory will be placed using the `output` option:
output: ./custom/generated/path
By default, the generated directory is placed in `generated` relative to the current working directory. If set, it will be a path relative to the config file location.
This is an advanced configuration option. When using a custom output directory, you'll need to manually adjust your `.gitignore` file and project structure to match the new configuration.
Configuration Schema Reference
Explore the detailed configuration schema parameters here:
- See the full, deep-linkable reference: Config Schema Reference
Schema File
File: Guides/schema-file.md
The `schema.graphql` file defines the data model for your HyperIndex indexer. Each entity type defined in this schema corresponds directly to a database table, with your event handlers responsible for creating and updating the records. HyperIndex automatically generates a GraphQL API based on these entity types, allowing easy access to the indexed data.
Scalar Types
Scalar types represent basic data types and map directly to JavaScript, TypeScript, or ReScript types.
| GraphQL Scalar | Description | JavaScript/TypeScript | ReScript |
|---|---|---|---|
| ID | Unique identifier | string | string |
| String | UTF-8 character sequence | string | string |
| Int | Signed 32-bit integer | number | int |
| Float | Signed floating-point number | number | float |
| Boolean | true or false | boolean | bool |
| Bytes | UTF-8 character sequence (hex prefixed with `0x`) | string | string |
| BigInt | Signed integer (int256 in Solidity) | bigint | bigint |
| BigDecimal | Arbitrary-size floating-point | BigDecimal (imported) | BigDecimal.t |
| Timestamp | Timestamp with timezone | Date | Js.Date.t |
| Json | JSON object (from envio@2.20) | Json | Js.Json.t |
Learn more about GraphQL scalars here.
Enum Types
Enums allow fields to accept only a predefined set of values.
Example:
enum AccountType {
  ADMIN
  USER
}
type User {
  id: ID!
  balance: Int!
  accountType: AccountType!
}
Enums translate to string unions (TypeScript/JavaScript) or polymorphic variants (ReScript):
TypeScript Example:
let user = {
  id: event.params.id,
  balance: event.params.balance,
  accountType: "USER", // enum as string
};
ReScript Example:
let user: Types.userEntity = {
  id: event.params.id,
  balance: event.params.balance,
  accountType: #USER, // polymorphic variant
};
Field Indexing (`@index`)
Add an index to a field for optimized queries and loader performance:
type Token {
  id: ID!
  tokenId: BigInt!
  collection: NftCollection!
  owner: User! @index
}
- All `id` fields and fields referenced via `@derivedFrom` are indexed automatically.
Generating Types
Once you've defined your schema, run this command to generate entity types that can be accessed in your event handlers:
pnpm envio codegen
You're now ready to define powerful schemas and efficiently query your indexed data with HyperIndex!
Event Handlers
File: Guides/event-handlers.mdx
Registration
A handler is a function that receives blockchain data, processes it, and inserts it into the database. You register handlers in the file referenced by the `handler` field in your `config.yaml` file; by default this is the `src/EventHandlers.*` file.
TypeScript:
import { ContractName } from "generated";
ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});
JavaScript:
const { ContractName } = require("generated");
ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});
ReScript:
Handlers.ContractName.EventName.handler(async ({event, context}) => {
  // Your logic here
});
The `generated` module contains code and types based on your `config.yaml` and `schema.graphql` files. Update it by running the `pnpm codegen` command whenever you change these files.
Basic Example
Here's a handler example for the `NewGreeting` event. It belongs to the `Greeter` contract from our beginner Greeter Tutorial:
TypeScript:
import { Greeter, User } from "generated";
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity: User = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
JavaScript:
const { Greeter } = require("generated");
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
ReScript:
open Types
// Handler for the NewGreeting event
Handlers.Greeter.NewGreeting.handler(async ({event, context}) => {
let userId = event.params.user->Address.toString // The id for the User entity
let latestGreeting = event.params.greeting // The greeting string that was added
let maybeCurrentUserEntity = await context.user.get(userId) // Optional User entity that may already exist
// Update or create a new User entity
let userEntity: Entities.User.t = switch maybeCurrentUserEntity {
| Some(existingUserEntity) => {
id: userId,
latestGreeting,
numberOfGreetings: existingUserEntity.numberOfGreetings + 1,
greetings: existingUserEntity.greetings->Belt.Array.concat([latestGreeting]),
}
| None => {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
}
}
context.user.set(userEntity) // Set the User entity in the DB
})
Preload Optimization
Important! Preload optimization makes your handlers run twice.
Starting from `envio@2.27`, all new indexers are created with preload optimization pre-configured by default.
This optimization enables HyperIndex to efficiently preload entities used by handlers through batched database queries, while ensuring events are processed synchronously in their original order. When combined with the Effect API for external calls, this feature delivers performance improvements of multiple orders of magnitude compared to other indexing solutions.
Read more in the dedicated guides:
- How Preload Optimization Works
- Double-Run Footgun
- Effect API
- Migrating from Loaders (recommended)
Advanced Use Cases
HyperIndex provides many features to help you build more powerful and efficient indexers:
- Handle factory contracts with Dynamic Contract Registration (with nested factories support)
- Perform external calls to decide which contract address to register using the Async Contract Register
- Index all ERC20 token transfers with Wildcard Indexing
- Use Topic Filtering to ignore irrelevant events
  - With multiple filters for a single event
  - With different filters per network
  - With filters by dynamically registered contract addresses (e.g. index all ERC20 transfers to/from your contract)
- Access Contract State directly from handlers
- Perform external calls from handlers by following the IPFS Integration guide
Context Object
The handler `context` provides methods to interact with entities stored in the database.
Retrieving Entities
Retrieve entities from the database using `context.Entity.get`, where `Entity` is the name of the entity you want to retrieve, as defined in your schema.graphql file.
await context.Entity.get(entityId);
It returns the `Entity` object, or `undefined` if the entity doesn't exist.
Starting from `envio@2.22.0` you can use `context.Entity.getOrThrow` to conveniently throw an error if the entity doesn't exist:
const pool = await context.Pool.getOrThrow(poolId);
// Will throw: Entity 'Pool' with ID '...' is expected to exist.
// Or you can pass a custom message as a second argument:
const pool = await context.Pool.getOrThrow(
poolId,
`Pool with ID ${poolId} is expected.`
);
Or use `context.Entity.getOrCreate` to automatically create an entity with default values if it doesn't exist:
const pool = await context.Pool.getOrCreate({
id: poolId,
totalValueLockedETH: 0n,
});
// Which is equivalent to:
let pool = await context.Pool.get(poolId);
if (!pool) {
pool = {
id: poolId,
totalValueLockedETH: 0n,
};
context.Pool.set(pool);
}
Retrieving Entities by Field
ERC20.Approval.handler(async ({ event, context }) => {
// Find all approvals for this specific owner
const currentOwnerApprovals = await context.Approval.getWhere.owner_id.eq(
event.params.owner
);
// Process all the owner's approvals efficiently
for (const approval of currentOwnerApprovals) {
// Process each approval
}
});
You can also use `context.Entity.getWhere.field.gt` to get all entities where the field value is greater than the given value.
Important:
- This feature requires Preload Optimization to be enabled, either by setting `preload_handlers: true` in your `config.yaml` file, or by using Loaders (deprecated)
- It works with any field that is used in a relationship with the `@derivedFrom` directive or has an `@index` directive
- Potential memory issues: very large `getWhere` queries might cause memory overflows
- Tip: put the `getWhere` query at the top of the handler to make sure it's preloaded. Read more about how Preload Optimization works.
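For example, a minimal sketch using `gt` (the `Account` entity and its indexed `balance` field are hypothetical):
```typescript
ERC20.Transfer.handler(async ({ event, context }) => {
  // Fetch all accounts whose balance exceeds a threshold.
  // `balance` is assumed to carry an @index directive in the schema.
  const largeHolders = await context.Account.getWhere.balance.gt(1_000_000n);
  for (const account of largeHolders) {
    // Process each matching account
  }
});
```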
Modifying Entities
Use `context.Entity.set` to create or update an entity:
context.Entity.set({
  id: entityId,
  ...otherEntityFields,
});
Both `context.Entity.set` and `context.Entity.deleteUnsafe` use In-Memory Storage under the hood and don't require `await` in front of them.
Referencing Linked Entities
When your schema defines a field that links to another entity type, set the relationship using `_id` with the referenced entity's `id`. You are storing the ID, not the full entity object.
type A {
  id: ID!
  b: B!
}
type B {
  id: ID!
}
context.A.set({
  id: aId,
  b_id: bId, // ID of the linked B entity
});
HyperIndex automatically resolves `A.b` based on the stored `b_id` when querying the API.
Deleting Entities (Unsafe)
To delete an entity:
context.Entity.deleteUnsafe(entityId);
The `deleteUnsafe` method is experimental and unsafe. You need to manually handle all entity references after deletion to maintain database consistency.
Updating Specific Entity Fields
Use the following approach to update specific fields in an existing entity:
TypeScript / JavaScript:
const pool = await context.Pool.get(poolId);
if (pool) {
  context.Pool.set({
    ...pool,
    totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
  });
}
ReScript:
let pool = await context.pool.get(poolId);
pool->Option.forEach(pool => {
context.pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
});
context.log
The context object also provides a logger that you can use to log messages to the console. Compared to `console.log` calls, these logs will be displayed on our Hosted Service runtime logs page.
Read more in the Logging Guide.
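A minimal sketch of logging from inside a handler (reusing the `Greeter` contract from the example above; the log levels shown are the usual ones):
```typescript
Greeter.NewGreeting.handler(async ({ event, context }) => {
  // Informational message, visible in the Hosted Service runtime logs
  context.log.info(`New greeting from ${event.params.user}`);
  if (event.params.greeting.length === 0) {
    // Warn without interrupting event processing
    context.log.warn("Received an empty greeting");
  }
});
```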
context.isPreload
If you need to skip the preload phase for CPU-intensive operations, or to perform certain actions only once per event, you can use `context.isPreload`:
ERC20.Transfer.handler(async ({ event, context }) => {
// Load existing data efficiently
const [sender, receiver] = await Promise.all([
context.Account.getOrThrow(event.params.from),
context.Account.getOrThrow(event.params.to),
]);
// Skip expensive operations during preload
if (context.isPreload) {
return;
}
// CPU-intensive calculations only happen once
const complexCalculation = performExpensiveOperation(event.params.value); // Placeholder function for demonstration
// Create or update sender account
context.Account.set({
id: event.params.from,
balance: sender.balance - event.params.value,
computedValue: complexCalculation,
});
// Create or update receiver account
context.Account.set({
id: event.params.to,
balance: receiver.balance + event.params.value,
});
});
Note: While `context.isPreload` can be useful for bypassing double execution, it's recommended to use the Effect API for external calls instead, as it provides automatic batching and memoization benefits.
External Calls
The Envio indexer runs on the Node.js runtime, so you can use `fetch` or any other library like `viem` to perform external calls from your handlers.
Note that with Preload Optimization all handlers run twice. With the Effect API, this behavior makes your external calls run in parallel while keeping the processed data consistent.
Check out our IPFS Integration, Accessing Contract State, and Effect API guides for more information.
context.effect
Define an effect and use it in your handler with `context.effect`:
import { experimental_createEffect, S } from "envio";

// Define an effect that will be called from the handler.
const getMetadata = experimental_createEffect(
  {
    name: "getMetadata",
    input: S.string,
    output: {
      description: S.string,
      value: S.bigint,
    },
    cache: true, // Optionally persist the results in the database
  },
  async ({ input }) => {
    const response = await fetch(`https://api.example.com/metadata/${input}`);
    const data = await response.json();
    return {
      description: data.description,
      value: data.value,
    };
  }
);
ERC20.Transfer.handler(async ({ event, context }) => {
// Load metadata for the token.
// This will be executed in parallel for all events in the batch.
// The call is automatically memoized, so you don't need to worry about duplicate requests.
const sender = await context.effect(getMetadata, event.params.from);
// Process the transfer with the pre-loaded data
});
Performance Considerations
For performance optimization and best practices, refer to:
- Benchmarking
- Preload Optimization
These guides offer detailed recommendations on optimizing entity loading and indexing performance.
Multichain Indexing
File: Advanced/multichain-indexing.mdx
Understanding Multichain Indexing
Multichain indexing allows you to monitor and process events from contracts deployed across multiple blockchain networks within a single indexer instance. This capability is essential for applications that:
- Track the same contract deployed across multiple networks
- Need to aggregate data from different chains into a unified view
- Monitor cross-chain interactions or state
How It Works
With multichain indexing, events from contracts deployed on multiple chains can be used to create and update entities defined in your schema file. Your indexer will process events from all configured networks, maintaining proper synchronization across chains.
Configuration Requirements
To implement multichain indexing, you need to:
- Populate the `networks` section in your `config.yaml` file for each chain
- Specify the contracts to index from each network
- Create event handlers for the specified contracts
Real-World Example: Uniswap V4 Multichain Indexer
For a comprehensive, production-ready example of multichain indexing, we recommend exploring our Uniswap V4 Multichain Indexer. This official reference implementation:
- Indexes Uniswap V4 deployments across 10 different blockchain networks
- Powers the official v4.xyz interface with real-time data
- Demonstrates best practices for high-performance multichain indexing
- Provides a complete, production-grade implementation you can study and adapt
The Uniswap V4 indexer showcases how to effectively structure a multichain indexer for a complex DeFi protocol, handling high volumes of data across multiple networks while maintaining performance and reliability.
Config File Structure for Multichain Indexing
The `config.yaml` file for multichain indexing contains three key sections:
- Global contract definitions - Define contracts, ABIs, and events once
- Network-specific configurations - Specify chain IDs and starting blocks
- Contract instances - Reference global contracts with network-specific addresses
# Example structure (simplified)
contracts:
  - name: ExampleContract
    abi_file_path: ./abis/example-abi.json
    handler: ./src/EventHandlers.js
    events:
      - event: ExampleEvent
networks:
  - id: 1 # Ethereum Mainnet
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x1234..."
  - id: 137 # Polygon
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x5678..."
Key Configuration Concepts
- The global `contracts` section defines the contract interface, ABI, handlers, and events once
- The `networks` section lists each blockchain network you want to index
- Each network entry references the global contract and provides the network-specific address
- This structure allows you to reuse the same handler functions and event definitions across networks
Best Practice: When developing multichain indexers, append the chain ID to entity IDs to avoid collisions, for example `user-1` for Ethereum and `user-137` for Polygon. A sketch of this pattern follows below.
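A minimal sketch of chain-ID namespacing (the contract, event, and one-field `User` entity here are hypothetical):
```typescript
ExampleContract.ExampleEvent.handler(async ({ event, context }) => {
  // event.chainId identifies the network the event came from,
  // so the same address on two chains maps to two distinct entities.
  context.User.set({
    id: `user-${event.chainId}-${event.params.user}`,
  });
});
```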
Multichain Event Ordering
When indexing multiple chains, you have two approaches for handling event ordering:
Unordered Multichain Mode
Unordered mode is recommended for most applications.
The indexer processes events as soon as they're available from each chain, without waiting for other chains. This "Unordered Multichain Mode" provides better performance and lower latency.
- Events will still be processed in order within each individual chain
- Events across different chains may be processed out of order
- Processing happens as soon as events are emitted, reducing latency
- You avoid waiting for the slowest chain's block time
This mode is ideal for most applications, especially when:
- Operations on your entities are commutative (order doesn't matter)
- Entities from different networks never interact with each other
- Processing speed is more important than guaranteed cross-chain ordering
How to Enable Unordered Mode
In your config.yaml:
unordered_multichain_mode: true
networks: ...
Ordered Mode
Ordered mode is currently the default, but this will change to unordered mode in the future. If you don't need strict deterministic ordering of events across all chains, we recommend using unordered mode.
If your application requires strict deterministic ordering of events across all chains, you can enable "Ordered Mode". In this mode, the indexer synchronizes event processing across all chains, ensuring that events are processed in the exact same order in every indexer run, regardless of which chain they came from.
When to Use Ordered Mode
Use ordered mode only when:
- The exact ordering of operations across different chains is critical to your application logic
- You need guaranteed deterministic results across all indexer runs
- You're willing to accept higher latency for cross-chain consistency
Cross-chain ordering is particularly important for applications like:
- Bridge applications: Where messages or assets must be processed on one chain before being processed on another chain
- Cross-chain governance: Where decisions made on one chain affect operations on another chain
- Multi-chain financial applications: Where the sequence of transactions across chains affects accounting or risk calculations
- Data consistency systems: Where the state must be consistent across multiple chains in a specific order
Technical Details
With ordered mode enabled:
- The indexer needs to wait for all blocks to increment from each network
- There is increased latency between when an event is emitted and when it's processed
- Processing speed is limited by the block interval of the slowest network
- Events are guaranteed to be processed in the same order in every indexer run
Cross-Chain Ordering Preservation
Ordered mode ensures that the temporal relationship between events on different chains is preserved. This is achieved by:
- Global timestamp ordering: Events are ordered based on their block timestamps across all chains
- Deterministic processing: The same sequence of events will be processed in the same order every time
The primary trade-off is increased latency at the head of the chain. Since the indexer must wait for blocks from all chains to determine the correct ordering, the processing of recent events is delayed by the slowest chain's block time. For example, if Chain A has 2-second blocks and Chain B has 15-second blocks, the indexer will process events at the slower 15-second rate to maintain proper ordering.
This latency is acceptable for applications where correct cross-chain ordering is more important than real-time updates. For bridge applications in particular, this ordering preservation can be critical for security and correctness, as it ensures that deposit events on one chain are always processed before the corresponding withdrawal events on another chain.
Best Practices for Multichain Indexing
1. Entity ID Namespacing
Always namespace your entity IDs with the chain ID to prevent collisions between networks. This ensures that entities from different networks remain distinct.
2. Error Handling
Implement robust error handling for network-specific issues. A failure on one chain shouldn't prevent indexing from continuing on other chains.
3. Testing
- Test your indexer with realistic scenarios across all networks
- Use testnet deployments for initial validation
- Verify entity updates work correctly across chains
4. Performance Considerations
- Use unordered mode when appropriate for better performance
- Consider your indexing frequency based on the block times of each chain
- Monitor resource usage, as indexing multiple chains increases load
Troubleshooting Common Issues
-
Different Network Speeds: If one network is significantly slower than others, consider using unordered mode to prevent bottlenecks.
-
Entity Conflicts: If you see unexpected entity updates, verify that your entity IDs are properly namespaced with chain IDs.
-
Memory Usage: If your indexer uses excessive memory, consider optimizing your entity structure and implementing pagination in your queries.
Next Steps
- Explore our Uniswap V4 Multichain Indexer for a complete implementation
- Review performance optimization techniques for your indexer
Testing
File: Guides/testing.mdx
Introduction
Envio comes with a built-in testing library that enables developers to thoroughly validate their indexer behavior without requiring deployment or interaction with actual blockchains. This library is specifically crafted to:
- Mock database states: Create and manipulate in-memory representations of your database
- Simulate blockchain events: Generate test events that mimic real blockchain activity
- Assert event handler logic: Verify that your handlers correctly process events and update entities
- Test complete workflows: Validate the entire process from event creation to database updates
The testing library provides helper functions that integrate with any JavaScript-based testing framework (like Mocha, Jest, or others), giving you flexibility in how you structure and run your tests.
Learn by doing
If you prefer to explore by example, the Greeter template includes complete tests that demonstrate best practices:
- Generate the `greeter` template in TypeScript using the Envio CLI:
pnpx envio init template -l typescript -d greeter -t greeter -n greeter
- Run the tests:
pnpm test
- See the `test/test.ts` file to understand how the tests are written.
Writing tests
Test Library Design
The testing library follows key design principles that make it effective for testing HyperIndex indexers:
- Immutable database: The mock database is immutable, with each operation returning a new instance. This makes it robust and easy to test against previous states.
- Chainable operations: Operations can be chained together to build complex test scenarios.
- Realistic simulations: Mock events closely mirror real blockchain events, allowing you to test your handlers in conditions similar to production.
Typical Test Flow
Most tests will follow this general pattern:
- Initialize the mock database (empty or with predefined entities)
- Create a mock event with test parameters
- Process the mock event through your handler(s)
- Assert that the resulting database state matches your expectations
This flow allows you to verify that your event handlers correctly create, update, or modify entities in response to blockchain events.
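A minimal sketch of this flow for the Greeter template (the exact assertion values are assumptions; see `test/test.ts` in the template for the canonical version):
```typescript
import assert from "assert";
import { TestHelpers } from "generated";
const { MockDb, Greeter, Addresses } = TestHelpers;

it("creates a User entity from a NewGreeting event", async () => {
  // 1. Initialize an empty mock database
  const mockDb = MockDb.createMockDb();
  // 2. Create a mock NewGreeting event with test parameters
  const userAddress = Addresses.defaultAddress;
  const event = Greeter.NewGreeting.createMockEvent({
    user: userAddress,
    greeting: "gm",
  });
  // 3. Process the mock event through the handler
  const updatedMockDb = await Greeter.NewGreeting.processEvent({
    event,
    mockDb,
  });
  // 4. Assert the resulting database state
  const user = updatedMockDb.entities.User.get(userAddress);
  assert.equal(user?.latestGreeting, "gm");
});
```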
Assertions
The testing library works with any JavaScript assertion library. In the examples we use Node.js's built-in assert module, but you can also use popular alternatives like chai or expect.
Common assertion patterns include:
- `assert.deepEqual(expectedEntity, actualEntity)`: check that entire entities match
- `assert.equal(expectedValue, actualEntity.property)`: verify specific property values
- `assert.ok(updatedMockDb.entities.Entity.get(id))`: ensure an entity exists
Troubleshooting
If you encounter issues with your tests, check the following:
Environment and Setup
- Verify your Envio version: the testing library is available in versions `v0.0.26` and above:
pnpm envio -v
- Ensure you've generated testing code: always run codegen after updating your schema or config:
pnpm codegen
- Check your imports: make sure you're importing the correct files:
const { UserEntity, TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;
const assert = require("assert");
For ReScript:
open RescriptMocha
open Mocha
open Belt
Common Issues and Solutions
- "Cannot read properties of undefined": this usually means an entity wasn't found in the database. Verify your IDs match exactly and that the entity exists before accessing it.
- "Type mismatch": ensure that your entity structure matches what's defined in your schema. Type issues are common when working with numeric types (like `BigInt` vs `number`).
- ReScript-specific setup: if using ReScript, remember to update your `rescript.json` file:
{
  "sources": [
    { "dir": "src", "subdirs": true },
    { "dir": "test", "subdirs": true }
  ],
  "bs-dependencies": ["rescript-mocha"]
}
- Debug database state: if you're having trouble with assertions, add a debug log to see the exact state of your entities:
console.log(
JSON.stringify(updatedMockDb.entities.User.get(userAddress), null, 2)
);
If you encounter any issues or have questions, please reach out to us on Discord.
Navigating Hasura
File: Guides/navigating-hasura.md
This page is only relevant when testing on a local machine or using a self-hosted version of Envio that uses Hasura.
Introduction
Hasura is a GraphQL engine that provides a web interface for interacting with your indexed blockchain data. When running HyperIndex locally, Hasura serves as your primary tool for:
- Querying indexed data via GraphQL
- Visualizing database tables and relationships
- Testing API endpoints before integration with your frontend
- Monitoring the indexing process
This guide explains how to navigate the Hasura dashboard to effectively work with your indexed data.
Accessing Hasura Console
When running HyperIndex locally, Hasura Console is automatically available at:
http://localhost:8080
You can access this URL in any web browser to open the Hasura console.
When prompted for authentication, use the password: testing
Key Dashboard Areas
The Hasura dashboard has several tabs, but we'll focus on the two most important ones for HyperIndex developers:
API Tab
The API tab lets you execute GraphQL queries and mutations on indexed data. It serves as a GraphQL playground for testing your API calls.
Features
- Explorer Panel: the left panel shows all available entities defined in your `schema.graphql` file
- Query Builder: the center area is where you write and execute GraphQL queries
- Results Panel: the right panel displays query results in JSON format
Available Entities
By default, you'll see:
- All entities defined in your `schema.graphql` file
- `dynamic_contracts` (for dynamically added contracts)
- The `raw_events` table (Note: this table is no longer populated by default to improve performance. To enable storage of raw events, add `raw_events: true` to your `config.yaml` file as described in the Raw Events Storage section)
Example Query
Try a simple query to test your indexer:
query MyQuery {
  User(limit: 5) {
    id
    latestGreeting
    numberOfGreetings
  }
}
Click the "Play" button to execute the query and see the results.
For more advanced GraphQL query options, see Hasura's quickstart guide.
Data Tab
The Data tab provides direct access to your database tables and relationships, allowing you to view the actual indexed data.
Features
- Schema Browser: View all tables in the database (left panel)
- Table Data: Examine and browse data within each table
- Relationship Viewer: See how different entities are connected
Working with Tables
- Select any table from the "public" schema to view its contents
- Use the "Browse Rows" tab to see all data in that table
- Check the "Insert Row" tab to manually add data (useful for testing)
- View the "Modify" tab to see the table structure
Verifying Indexed Data
To confirm your indexer is working correctly:
- Check entity tables to ensure they contain the expected data
- Look at the `db_write_timestamp` column values to confirm when data was last updated
- Newer timestamps indicate fresh data; older timestamps might indicate stale data from previous runs
Common Tasks
Checking Indexing Status
To verify your indexer is actively processing new blocks:
- Go to the Data tab
- Select any entity table
- Check the latest `db_write_timestamp` values
- Monitor these values over time to ensure they're updating
(Note: the TUI is also an easy way to monitor this.)
Troubleshooting Missing Data
If expected data isn't appearing:
- Check if you've enabled raw events storage (`raw_events: true` in `config.yaml`), then examine the `raw_events` table to confirm events were captured
- Verify your event handlers are correctly processing these events
- Examine your GraphQL queries to ensure they match your schema structure
- Check console logs for any processing errors
Resetting Indexed Data
When testing, you may need to reset your database:
- Stop your indexer
- Reset your database (refer to the development guide for commands)
- Restart your indexer to begin processing from the configured start block
Best Practices
- Regular Verification: Periodically check both the API and Data tabs to ensure your indexer is functioning correctly
- Query Testing: Test complex queries in the API tab before implementing them in your application
- Schema Validation: Use the Data tab to verify that relationships between entities are correctly established
- Performance Monitoring: Watch for tables that grow unusually large, which might indicate inefficient indexing
Aggregations: local vs hosted (avoid the foot-gun)
When developing locally with Hasura, you may notice that GraphQL aggregate helpers (for example, count/sum-style aggregations) are available. On the hosted service, these aggregate endpoints are intentionally not exposed. Aggregations over large datasets can be very slow and unpredictable in production.
The recommended approach is to compute and store aggregates at indexing time, not at query time. In practice this means maintaining counters, sums, and other rollups in entities as part of your event handlers, and then querying those precomputed values.
Example: indexing-time aggregation
schema.graphql
# singleton; you hardcode the id and load it in and out
type GlobalState {
  id: ID! # "global-state"
  count: Int!
}
type Token {
  id: ID! # incremental number
  description: String!
}
EventHandler.ts
const globalStateId = "global-state";
NftContract.Mint.handler(async ({ event, context }) => {
  const globalState = await context.GlobalState.get(globalStateId);
  if (!globalState) {
    context.log.error("global state doesn't exist");
    return;
  }
  const incrementedTokenId = globalState.count + 1;
  context.Token.set({
    id: incrementedTokenId.toString(), // ID! fields are strings
    description: event.params.description,
  });
  context.GlobalState.set({
    ...globalState,
    count: incrementedTokenId,
  });
});
This pattern scales: you can keep per-entity counters, rolling windows (daily/hourly entities keyed by date), and top-N caches by updating entities as events arrive. Your queries then read these precomputed values directly, avoiding expensive runtime aggregations.
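For instance, a rolling daily window can be kept by keying a rollup entity on the UTC date (a sketch; the `DailyStats` entity and `Pool.Swap` event are hypothetical):
```typescript
Pool.Swap.handler(async ({ event, context }) => {
  // Key the rollup entity by UTC day so each day gets its own row
  const day = new Date(event.block.timestamp * 1000)
    .toISOString()
    .slice(0, 10); // e.g. "2025-01-31"
  const id = `daily-${day}`;
  const stats = (await context.DailyStats.get(id)) ?? { id, swapCount: 0 };
  context.DailyStats.set({ ...stats, swapCount: stats.swapCount + 1 });
});
```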
Exceptional cases
If runtime aggregate queries are a hard requirement for your use case, please reach out and we can evaluate options for your project on the hosted service. Contact us on Discord.
Disable Hasura for Self-Hosted Indexers
Starting from `envio@2.26.0` it's possible to disable the Hasura integration for self-hosted indexers. To do so, set the `ENVIO_HASURA` environment variable to `false`.
Environment Variables
File: Guides/environment-variables.md
Environment variables are a crucial part of configuring your Envio indexer. They allow you to manage sensitive information and configuration settings without hardcoding them in your codebase.
Naming Convention
All environment variables used by Envio must be prefixed with `ENVIO_`. This naming convention:
- Prevents conflicts with other environment variables
- Makes it clear which variables are used by the Envio indexer
- Ensures consistency across different environments
Envio API Token (required for HyperSync)
To ensure continued access to HyperSync, set an Envio API token in your environment.
- Use `ENVIO_API_TOKEN` to provide your token at runtime
- See the API Tokens guide for how to generate a token: API Tokens
Envio-specific environment variables
The following variables are used by HyperIndex:
- `ENVIO_API_TOKEN`: API token for HyperSync access (required for continued access in self-hosted deployments)
- `ENVIO_HASURA`: Set to `false` to disable the Hasura integration for self-hosted indexers
- `ENVIO_PG_PORT`: Port for the Postgres service used by HyperIndex during local development
- `ENVIO_PG_PASSWORD`: Postgres password (self-hosted)
- `ENVIO_PG_USER`: Postgres username (self-hosted)
- `ENVIO_PG_DATABASE`: Postgres database name (self-hosted)
- `ENVIO_PG_PUBLIC_SCHEMA`: Postgres schema name override for the generated/public schema
Example Environment Variables
Here are some commonly used environment variables:
# Envio API Token (required for continued HyperSync access)
ENVIO_API_TOKEN=your-secret-token
# Blockchain RPC URL
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
# Starting block number for indexing
ENVIO_START_BLOCK=12345678
# Coingecko API key
ENVIO_COINGECKO_API_KEY=api-key
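Inside handler code, these are available like any other Node.js environment variable. For example, a minimal sketch that validates the hypothetical `ENVIO_COINGECKO_API_KEY` from above:
```typescript
// Fail fast if a required variable is missing
const coingeckoApiKey = process.env.ENVIO_COINGECKO_API_KEY;
if (!coingeckoApiKey) {
  throw new Error("Missing required environment variable: ENVIO_COINGECKO_API_KEY");
}
```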
Setting Environment Variables
Local Development
For local development, you can set environment variables in several ways:
- Using a `.env` file in your project root:
# .env
ENVIO_API_TOKEN=your-secret-token
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
ENVIO_START_BLOCK=12345678
- Directly in your terminal:
export ENVIO_API_TOKEN=your-secret-token
export ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
Hosted Service
When using the Envio Hosted Service, you can configure environment variables through the Envio platform's dashboard. Remember that all variables must still be prefixed with `ENVIO_`.
For more information about environment variables in the hosted service, see the Hosted Service documentation.
Configuration File
For use of environment variables in your configuration file, read the docs here: Configuration File.
Best Practices
- Never commit sensitive values: Always use environment variables for sensitive information like API keys and database credentials
- Never commit or use private keys: Never commit or use private keys in your codebase
- Use descriptive names: Make your environment variable names clear and descriptive
- Document your variables: Keep a list of required environment variables in your project's README
- Use different values: Use different environment variables for development, staging, and production environments
- Validate required variables: Check that all required environment variables are set before starting your indexer
Troubleshooting
If you encounter issues with environment variables:
- Verify that all required variables are set
- Check that variables are prefixed with `ENVIO_`
- Ensure there are no typos in variable names
- Confirm that the values are correctly formatted
For more help, see our Troubleshooting Guide.
Uniswap V4 Multi-chain Indexer
File: Examples/example-uniswap-v4.md
The following indexer example is a reference implementation and can serve as a starting point for applications with similar logic.
This official Uniswap V4 indexer is a comprehensive implementation for the Uniswap V4 protocol using Envio HyperIndex. This is the same indexer that powers the v4.xyz website, providing real-time data for the Uniswap V4 interface.
Key Features
- Multi-chain Support: Indexes Uniswap V4 deployments across 10 different blockchain networks in real-time
- Complete Pool Metrics: Tracks pool statistics including volume, TVL, fees, and other critical metrics
- Swap Analysis: Monitors swap events and liquidity changes with high precision
- Hook Integration: In-progress support for Uniswap V4 hooks and their events
- Production Ready: Powers the official v4.xyz interface with production-grade reliability
- Ultra-Fast Syncing: Processes massive amounts of blockchain data significantly faster than alternative indexing solutions, reducing sync times from days to minutes
!V4 gif
Technical Overviewβ
This indexer is built using TypeScript and provides a unified GraphQL API for accessing Uniswap V4 data across all supported networks. The architecture is designed to handle high throughput and maintain consistency across different blockchain networks.
Performance Advantagesβ
The Envio-powered Uniswap V4 indexer offers extraordinary performance benefits:
- 10-100x Faster Sync Times: Leveraging Envio's HyperSync technology, this indexer can process historical blockchain data orders of magnitude faster than traditional solutions
- Real-time Updates: Maintains low latency for new blocks while efficiently managing historical data
Use Casesβ
- Power analytics dashboards and trading interfaces
- Monitor DeFi positions and protocol health
- Track historical performance of Uniswap V4 pools
- Build custom notifications and alerts
- Analyze hook interactions and their impact
Getting Startedβ
To use this indexer, you can:
- Clone the repository
- Follow the installation instructions in the README
- Run the indexer locally or deploy it to a production environment
- Access indexed data through the GraphQL API
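As a minimal sketch of the last step, you can query the GraphQL API over HTTP from TypeScript. The endpoint and the entity/field names below are illustrative assumptions; check the repository's schema and README for the real ones:
// query-pools.ts - a sketch; endpoint and entity/field names are illustrative
async function main() {
  const query = `
    query TopPools {
      Pool(limit: 5, order_by: { totalValueLockedUSD: desc }) {
        id
        totalValueLockedUSD
      }
    }
  `;
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  console.log(data);
}
main().catch(console.error);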
Contributionβ
The Uniswap V4 indexer is actively maintained and welcomes contributions from the community. If you'd like to contribute or report issues, please visit the GitHub repository.
This is an official reference implementation that powers the v4.xyz website. While extensively tested in production, remember to validate the data for your specific use case. The indexer is continuously updated to support the latest Uniswap V4 features and optimizations.
Sablier Protocol Indexersβ
File: Examples/example-sablier.md
The following indexers serve as exceptional reference implementations for the Sablier protocol, showcasing professional development practices and efficient multi-chain data processing.
Overviewβ
Sablier is a token streaming protocol that enables real-time finance on the blockchain, allowing tokens to be streamed continuously over time. These official Sablier indexers track streaming activity across 18 different EVM-compatible chains, providing comprehensive data through a unified GraphQL API.
Professional Indexer Suiteβ
Sablier maintains three specialized indexers, each targeting a specific part of their protocol:
1. Lockup Indexerβ
Tracks the core Sablier lockup contracts, which handle the streaming of tokens with fixed durations and amounts. This indexer provides data about stream creation, cancellation, and withdrawal events. Used primarily for the vesting functionality of Sablier.
2. Flow Indexerβ
Monitors Sablier's advanced streaming functionality, allowing for dynamic flow rates and more complex streaming scenarios. This indexer captures stream modifications, batch operations, and other flow-specific events. Powers the payments side of the Sablier application.
3. Merkle Indexerβ
Tracks Sablier's Merkle distribution system, which enables efficient batch stream creation using cryptographic proofs. This indexer provides data about batch creations, claims, and related activities. Used for both Airstreams and Instant Airdrops functionality.
Key Featuresβ
- Comprehensive Multi-chain Support: Indexes data across 18 different EVM chains
- Professionally Maintained: Used in production by the Sablier team and their partners
- Extensive Test Coverage: Includes comprehensive testing to ensure data accuracy
- Optimized Performance: Implements efficient data processing techniques
- Well-Documented: Clear code structure with extensive comments
- Backward Compatibility: Carefully manages schema evolution and contract upgrades
- Cross-chain Architecture: Envio promotes efficient cross-chain indexing where all networks share the same indexer endpoint
Best Practices Showcaseβ
These indexers demonstrate several development best practices:
- Modular Code Structure: Well-organized code with clear separation of concerns
- Consistent Naming Conventions: Professional and consistent naming throughout
- Efficient Event Handling: Optimized processing of blockchain events
- Comprehensive Entity Relationships: Well-designed data model with proper relationships
- Thorough Input Validation: Robust error handling and input validation
- Detailed Changelogs: Documentation of breaking changes and migrations
- Handler/Loader Pattern: Envio indexers use an optimized pattern with loaders to pre-fetch entities and handlers to process them
Getting Startedβ
To use these indexers as a reference for your own development:
- Clone the specific repository based on your needs:
- Lockup Indexer
- Flow Indexer
- Merkle Indexer
- Review the file structure and implementation patterns
- Examine the event handlers for efficient data processing techniques
- Study the schema design for effective entity modeling
For complete API documentation and usage examples, see:
- Sablier API Overview
- Implementation Caveats
These are official indexers maintained by the Sablier team and represent production-quality implementations. They serve as excellent examples of professional indexer development and are regularly updated to support the latest protocol features.
Envio Hosted Serviceβ
File: Hosted_Service/hosted-service.md
Envio offers a fully managed hosting solution for your indexers, providing all the infrastructure, scaling, and monitoring needed to run production-grade indexers without operational overhead.
Key Featuresβ
- Git-based Deployments: Similar to Vercel, deploy your indexer by simply pushing to a designated deployment branch
- Zero Infrastructure Management: We handle all the servers, databases, and scaling for you
- Version Management: Switch between different deployed versions of your indexer with one click
- Built-in Monitoring: Track logs and sync status
- Alerting: Get email alerts when indexing errors occur
- GraphQL API: Access your indexed data through a performant GraphQL endpoint
- Multi-chain Support: Deploy indexers that track multiple networks from a single codebase
Deployment Modelβ
The Envio Hosted Service connects directly to your GitHub repository:
- Connect your GitHub repository to the Envio platform
- Configure your deployment settings (branch, config file location, etc.)
- Push changes to your deployment branch to trigger automatic deployments
- View deployment logs and status in real-time
- Switch between versions or rollback if needed
You can view and manage your hosted indexers in the Envio Explorer.
Deployment Optionsβ
Envio provides flexibility in how you deploy and host your indexers:
- Fully Managed Hosted Service: Let Envio handle everything (recommended for most users)
- Self-Hosting: Run your indexer on your own infrastructure with our Docker container
For self-hosting information and instructions, see our Self-Hosting Guide. For a complete list of CLI commands to control your indexer, see the CLI Commands documentation.
Deploying Your Indexerβ
File: Hosted_Service/hosted-service-deployment.md
The Envio Hosted Service provides a seamless git-based deployment workflow, similar to modern platforms like Vercel. This enables you to easily deploy, update, and manage your indexers through your normal development workflow.
Initial Setupβ
- Log in with GitHub: Visit the Envio App and authenticate with your GitHub account
- Select an Organization: Choose your personal account or any organization you have access to !Select organisation
- Install the Envio Deployments GitHub App: Grant access to the repositories you want to deploy !Install GitHub App
Configuring Your Indexerβ
- Add a New Indexer: Click "Add Indexer" in the dashboard !Add indexer
- Connect to Repository: Select the repository containing your indexer code !Connect indexer
- Configure Deployment Settings:
- Specify the config file location
- Set the root directory (important for monorepos)
- Choose the deployment branch !Configure indexer !Add org
Multiple Indexers Per Repository
You can deploy multiple indexers from a single repository by configuring them with different:
- Config file paths
- Root directories
- Deployment branches
Monorepo Configuration
If you're working in a monorepo, ensure all your imports are contained within your indexer directory to avoid deployment issues.
Deployment Workflowβ
- Create a Deployment Branch: Set up the branch you specified during configuration !Create branch
- Deploy via Git: Push your code to the deployment branch !Push code
- Monitor Deployment: Track the progress of your deployment in the Envio dashboard
- Version Management: Once deployed, you can:
  - View detailed logs
  - Switch between different deployed versions
  - Rollback to previous versions if needed
Continuous Deployment Best Practicesβ
For a robust deployment workflow, we recommend:
- Protected Branches: Set up branch protection rules for your deployment branch
- Pull Request Workflow: Instead of pushing directly to the deployment branch, use pull requests from feature branches
- CI Integration: Add tests to your CI pipeline to validate indexer functionality before merging to the deployment branch
Version Managementβ
Each deployment creates a new version of your indexer that you can access through the dashboard. You can:
- Compare different versions
- Switch the active version with one click
- Maintain multiple versions for testing or staging purposes
Deployment Limitsβ
Limits vary depending on the plan you select. In general, development plans allow:
- 3 indexers per organization
- 3 deployments per indexer
Need to free up space? You can delete old deployments through the Envio dashboard.
Hosted Service Billingβ
File: Hosted_Service/hosted-service-billing.mdx
Envio offers flexible pricing options to meet the needs of projects at different stages of development.
Pricing Structureβ
We have both development tiers and production tiers to suit a variety of users:
- Development Tier: Our development tier is completely free and designed to be user-friendly, making it easy to get started with Envio without any cost barriers.
- Production Tiers: For projects ready for production, we offer scalable options that grow with your needs.
For detailed pricing information and plan comparisons, please visit the Envio Pricing Page.
Self-Hosting Optionβ
For users who prefer to manage their own infrastructure, we support self-hosting your indexer as well. For your convenience, there is a Dockerfile in the root of the generated folder.
For more information on self-hosting, see our Self-Hosting Guide.
Not sure which option is right for your project? Book a call with our team to discuss your specific needs.
Self-Hosting Your Envio Indexerβ
File: Hosted_Service/self-hosting.md
This documentation page is actively being improved. Check back regularly for updates and additional information.
While Envio offers a fully managed Hosted Service, you may prefer to run your indexer on your own infrastructure. This guide covers everything you need to know about self-hosting Envio indexers.
We deeply appreciate users who choose our hosted service, as it directly supports our team and helps us continue developing and improving Envio's technology. If your use case allows for it, please consider the hosted option.
Why Self-Host?β
Self-hosting gives you:
- Complete Control: Manage your own infrastructure and configurations
- Data Sovereignty: Keep all indexed data within your own systems
Prerequisitesβ
Before self-hosting, ensure you have:
- Docker installed on your host machine
- Sufficient storage for blockchain data and the indexer database
- Adequate CPU and memory resources (requirements vary based on chains and indexing complexity)
- Required HyperSync and/or RPC endpoints
- Envio API token for HyperSync access (ENVIO_API_TOKEN), required for continued access. See API Tokens.
Getting Startedβ
In general, if you want to self-host, you will likely use a Docker setup.
For a working example, check out the local-docker-example repository.
It contains a minimal Dockerfile and docker-compose.yaml that configure the Envio indexer together with PostgreSQL and Hasura.
Configuration Explainedβ
The compose file in that repository sets up three main services:
- PostgreSQL Database (envio-postgres): Stores your indexed data
- Hasura GraphQL Engine (graphql-engine): Provides the GraphQL API for querying your data
- Envio Indexer (envio-indexer): The core indexing service that processes blockchain data
Environment Variablesβ
The configuration uses environment variables with sensible defaults. For production, you should customize:
- Envio API token (ENVIO_API_TOKEN)
- Database credentials (ENVIO_PG_PASSWORD, ENVIO_PG_USER, etc.)
- Hasura admin secret (HASURA_GRAPHQL_ADMIN_SECRET)
- Resource limits based on your workload requirements
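As an illustration, a production .env for the compose setup might look like this; all values are placeholders:
# .env - placeholder values only
ENVIO_API_TOKEN=your-secret-token
ENVIO_PG_USER=envio
ENVIO_PG_PASSWORD=a-strong-password
ENVIO_PG_DATABASE=envio
HASURA_GRAPHQL_ADMIN_SECRET=another-strong-secret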
Getting Helpβ
If you encounter issues with self-hosting:
- Check the Envio GitHub repository for known issues
- Join the Envio Discord community for community support
For most production use cases, we recommend using the Envio Hosted Service to benefit from automatic scaling, monitoring, and maintenance.
Indexing Optimism Bridge Depositsβ
File: Tutorials/tutorial-op-bridge-deposits.md
Introductionβ
This tutorial will guide you through indexing Optimism Standard Bridge deposits in under 5 minutes using Envio HyperIndex's no-code contract import feature.
The Optimism Standard Bridge enables the movement of ETH and ERC-20 tokens between Ethereum and Optimism. We'll index bridge deposit events by extracting the DepositFinalized logs emitted by the bridge contracts on both networks.
Prerequisitesβ
Before starting, ensure you have the following installed:
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.
Step 1: Initialize Your Indexerβ
- Open your terminal in an empty directory and run:
pnpx envio init
- Name your indexer (we'll use "optimism-bridge-indexer" in this example)
- Choose your preferred language (TypeScript, JavaScript, or ReScript)
Step 2: Import the Optimism Bridge Contractβ
- Select Contract Import → Block Explorer → Optimism
- Enter the Optimism bridge contract address: 0x4200000000000000000000000000000000000010 (View on Optimistic Etherscan)
- Select the DepositFinalized event:
  - Navigate using arrow keys (↑↓)
  - Press spacebar to select the event
Tip: You can select multiple events to index simultaneously.
Step 3: Add the Ethereum Mainnet Bridge Contractβ
- When prompted, select Add a new contract
- Choose Block Explorer → Ethereum Mainnet
- Enter the Ethereum Mainnet gateway contract address: 0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1 (View on Etherscan)
- Select the ETHDepositInitiated event
- When finished adding contracts, select I'm finished
Step 4: Start Your Indexerβ
- If you have any running indexers, stop them first:
pnpm envio stop
- Start your new indexer:
pnpm dev
This command:
- Starts the required Docker containers
- Sets up your database
- Launches the indexing process
- Opens the Hasura GraphQL interface
Step 5: Understanding the Generated Codeβ
Let's examine the key files that Envio generated:
1. config.yamlβ
This configuration file defines:
- Networks to index (Optimism and Ethereum Mainnet)
- Starting blocks for each network
- Contract addresses and ABIs
- Events to track
2. schema.graphqlβ
This schema defines the data structures for our selected events:
- Entity types based on event data
- Field types matching the event parameters
- Relationships between entities (if applicable)
3. src/EventHandlers.tsβ
This file contains the business logic for processing events:
- Functions that execute when events are detected
- Data transformation and storage logic
- Entity creation and relationship management
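As a rough sketch, a generated DepositFinalized handler has this shape; the contract name (here L2StandardBridge) and the exact generated identifiers depend on the import, so check your own src/EventHandlers.ts:
import { L2StandardBridge } from "generated";

// Store each DepositFinalized event as an entity (a sketch; field names
// mirror the example query in Step 6)
L2StandardBridge.DepositFinalized.handler(async ({ event, context }) => {
  context.DepositFinalized.set({
    id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
    l1Token: event.params.l1Token,
    l2Token: event.params.l2Token,
    from: event.params.from,
    to: event.params.to,
    amount: event.params.amount,
    blockTimestamp: event.block.timestamp,
  });
});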
Step 6: Exploring Your Indexed Dataβ
Now you can interact with your indexed data:
Accessing Hasuraβ
- Open Hasura at http://localhost:8080
- When prompted, enter the admin password: testing
Monitoring Indexing Progressβ
- Click the Data tab in the top navigation
- Find the _events_sync_state table to check indexing progress
- Observe which blocks are currently being processed
Note: Thanks to Envio's HyperSync, indexing happens significantly faster than with standard RPC methods.
Querying Indexed Eventsβ
- Click the API tab
- Construct a GraphQL query to explore your data
Here's an example query to fetch the 10 largest bridge deposits:
query LargestDeposits {
DepositFinalized(limit: 10, order_by: { amount: desc }) {
l1Token
l2Token
from
to
amount
blockTimestamp
}
}
- Click the Play button to execute your query
Conclusionβ
Congratulations! You've successfully created an indexer for Optimism Bridge deposits across both Ethereum and Optimism networks.
What You've Learnedβ
- How to initialize a multi-network indexer using Envio
- How to import contracts from different blockchains
- How to query and explore indexed blockchain data
Next Stepsβ
- Try customizing the event handlers to add additional logic
- Create relationships between events on different networks
- Deploy your indexer to Envio's hosted service
For more tutorials and advanced features, check out our documentation or watch our video walkthroughs on YouTube.
Indexing ERC20 Token Transfers on Baseβ
File: Tutorials/tutorial-erc20-token-transfers.md
Introductionβ
In this tutorial, you'll learn how to index ERC20 token transfers on the Base network using Envio HyperIndex. By leveraging the no-code contract import feature, you'll be able to quickly analyze USDC transfer activity, including identifying the largest transfers.
We'll create an indexer that tracks all USDC token transfers on Base by extracting the Transfer events emitted by the USDC contract. The entire process takes less than 5 minutes to set up and start querying data.
Prerequisitesβ
Before starting, ensure you have the following installed:
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.
Step 1: Initialize Your Indexerβ
- Open your terminal in an empty directory and run:
pnpx envio init
- Name your indexer (we'll use "usdc-base-transfer-indexer" in this example)
- Choose your preferred language (TypeScript, JavaScript, or ReScript)
Step 2: Import the USDC Token Contractβ
- Select Contract Import → Block Explorer → Base
- Enter the USDC token contract address on Base: 0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913 (View on BaseScan)
- Select the Transfer event:
  - Navigate using arrow keys (↑↓)
  - Press spacebar to select the event
Tip: You can select multiple events to index simultaneously if needed.
- When finished adding contracts, select I'm finished
Step 3: Start Your Indexerβ
- If you have any running indexers, stop them first:
pnpm envio stop
Note: You can skip this step if this is your first time running an indexer.
- Start your new indexer:
pnpm dev
This command:
- Starts the required Docker containers
- Sets up your database
- Launches the indexing process
- Opens the Hasura GraphQL interface
Step 4: Understanding the Generated Codeβ
Let's examine the key files that Envio generated:
1. config.yamlβ
This configuration file defines:
- Network to index (Base)
- Starting block for indexing
- Contract address and ABI details
- Events to track (Transfer)
2. schema.graphqlβ
This schema defines the data structures for the Transfer event:
- Entity types based on event data
- Field types for sender, receiver, and amount
- Any relationships between entities
3. src/EventHandlers.*β
This file contains the business logic for processing events:
- Functions that execute when Transfer events are detected
- Data transformation and storage logic
- Entity creation and relationship management
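As a rough sketch, the generated Transfer handler has this shape; the contract name (here FiatTokenV2_2, matching the entity name used in the query below) and the exact identifiers depend on the import, so check your own generated code:
import { FiatTokenV2_2 } from "generated";

// Store each Transfer event as an entity (a sketch; field names mirror
// the example query in Step 5)
FiatTokenV2_2.Transfer.handler(async ({ event, context }) => {
  context.FiatTokenV2_2_Transfer.set({
    id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
    from: event.params.from,
    to: event.params.to,
    value: event.params.value,
    blockTimestamp: event.block.timestamp,
  });
});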
Step 5: Exploring Your Indexed Dataβ
Now you can interact with your indexed USDC transfer data:
Accessing Hasuraβ
- Open Hasura at http://localhost:8080
- When prompted, enter the admin password: testing
Monitoring Indexing Progressβ
- Click the Data tab in the top navigation
- Find the _events_sync_state table to check indexing progress
- Observe which blocks are currently being processed
Note: Thanks to Envio's HyperSync, you can index millions of USDC transfers in just minutes rather than hours or days with traditional methods.
Querying Indexed Eventsβ
- Click the API tab
- Construct a GraphQL query to explore your data
Here's an example query to fetch the 10 largest USDC transfers:
query LargestTransfers {
FiatTokenV2_2_Transfer(limit: 10, order_by: { value: desc }) {
from
to
value
blockTimestamp
}
}
- Click the Play button to execute your query
Conclusionβ
Congratulations! You've successfully created an indexer for USDC token transfers on Base. In just a few minutes, you've indexed over 3.6 million transfer events and can now query this data in real-time.
What You've Learnedβ
- How to initialize an indexer using Envio's contract import feature
- How to index ERC20 token transfers on the Base network
- How to query and analyze token transfer data using GraphQL
Next Stepsβ
- Try customizing the event handlers to add additional logic
- Create aggregated statistics about token transfers
- Add more tokens or events to your indexer
- Deploy your indexer to Envio's hosted service
For more tutorials and advanced features, check out our documentation or watch our video walkthrough on YouTube.
Indexing Sway Farm on the Fuel Networkβ
File: Tutorials/tutorial-indexing-fuel.md
Until recently, HyperIndex was available only on EVM-compatible blockchains; we have now extended support to the Fuel Network.
Indexers are vital to the success of any dApp. In this tutorial, we will create an Envio indexer for the Fuel dApp Sway Farm step by step.
Sway Farm is a simple farming game, and for the sake of a real-world example, let's create an indexer for a leaderboard of all farmers π§βπΎ
About Fuelβ
Fuel is an operating system purpose-built for Ethereum rollups. Fuel's unique architecture allows rollups to solve for PSI (parallelization, state minimized execution, interoperability). Powered by the FuelVM, Fuel aims to expand Ethereum's capability set without compromising security or decentralization.
Website | X | Discord
Prerequisitesβ
Environment toolingβ
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.
Initialize the projectβ
Now that you have installed the prerequisite packages let's begin the practical steps of setting up the indexer.
Open your terminal in an empty directory and initialize a new indexer by running the command:
pnpx envio init
In the following prompt, choose the directory where you want to set up your project. The default is the current directory, but in the tutorial, I'll use the indexer name:
? Specify a folder name (ENTER to skip): sway-farm-indexer
Then, choose a language of your choice for the event handlers. TypeScript is the most popular one, so we'll stick with it:
? Which language would you like to use?
JavaScript
> TypeScript
ReScript
[↑↓ to move, enter to select, type to filter]
Next, we have the new prompt for a blockchain ecosystem. Previously Envio supported only EVM, but now it's possible to choose between Evm, Fuel, and other VMs in the future:
? Choose blockchain ecosystem
Evm
> Fuel
[↑↓ to move, enter to select, type to filter]
In the following prompt, you can choose an initialization option. There's a Greeter template for Fuel, which is an excellent way to learn more about HyperIndex. But since we have an existing contract, the Contract Import option is the best way to create an indexer:
? Choose an initialization option
Template
> Contract Import
[↑↓ to move, enter to select, type to filter]
A separate Tutorial page provides more details about the Greeter template.
Next it'll ask us for an ABI file. You can find it in the ./out/debug directory after building your Sway contract with forc build:
? What is the path to your json abi file? ./sway-farm/contract/out/debug/contract-abi.json
After the ABI file is provided, Envio parses all possible events you can use for indexing:
? Which events would you like to index?
> [x] NewPlayer
[x] PlantSeed
[x] SellItem
[x] InvalidError
[x] Harvest
[x] BuySeeds
[x] LevelUp
[↑↓ to move, space to select one, → to all, ← to none, type to filter]
Let's select the events we want to index. I opened the code of the contract file and realized that for a leaderboard we need only events which update player information. Hence, I left only NewPlayer, LevelUp, and SellItem selected in the list. We'd want to index more events in real life, but this is enough for the tutorial.
? Which events would you like to index?
> [x] NewPlayer
[ ] PlantSeed
[x] SellItem
[ ] InvalidError
[ ] Harvest
[ ] BuySeeds
[x] LevelUp
[↑↓ to move, space to select one, → to all, ← to none, type to filter]
π For the tutorial we only need to index LOG_DATA receipts, but you can also index Mint, Burn, Transfer and Call receipts. Read more about Supported Event Types.
Just a few simple questions left. Let's call our contract SwayFarm:
? What is the name of this contract? SwayFarm
Set an address for the deployed contract:
? What is the address of the contract? 0xf5b08689ada97df7fd2fbd67bee7dea6d219f117c1dc9345245da16fe4e99111
[Use the proxy address if your abi is a proxy implementation]
Finish the initialization process:
? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new contract (with a different ABI)
[Current contract: SwayFarm, on network: Fuel]
If you see the following line, it means we are already halfway through π
Please run `cd sway-farm-indexer` to run the rest of the envio commands
Let's open the indexer in an IDE and start adjusting it for our farm π
Walk through initialized indexerβ
At this point, we should already have a working indexer. You can start it by running pnpm dev, which we cover in more detail later in the tutorial.
Everything is configured by modifying the 3 files below. Let's walk through each of them.
- config.yaml (Guide)
- schema.graphql (Guide)
- EventHandlers.* (Guide)
(* depending on the language chosen for the indexer)
config.yamlβ
The config.yaml outlines the specifications for the indexer, including details such as network and contract specifications and the event information to be used in the indexing process.
name: sway-farm-indexer
ecosystem: fuel
networks:
- id: 0
start_block: 0
contracts:
- name: SwayFarm
address:
- 0xf5b08689ada97df7fd2fbd67bee7dea6d219f117c1dc9345245da16fe4e99111
abi_file_path: abis/swayfarm-abi.json
handler: src/EventHandlers.ts
events:
- name: SellItem
logId: "11192939610819626128"
- name: LevelUp
logId: "9956391856148830557"
- name: NewPlayer
logId: "169340015036328252"
In the tutorial, we don't need to adjust it in any way. But later you can modify the file and add more events for indexing.
As a nice-to-have, you can use a Sway struct name without specifying a logId, like this:
- name: SellItem
- name: LevelUp
- name: NewPlayer
schema.graphqlβ
The schema.graphql file serves as a representation of your application's data model. It defines entity types that directly correspond to database tables, and the event handlers you create are responsible for creating and updating records within those tables. Additionally, the GraphQL API is automatically generated based on the entity types specified in the schema.graphql file, to allow access to the indexed data.
π§ A separate Guide page provides more details about the schema.graphql file.
For the leaderboard, we need only one entity representing the player. Let's create it:
type Player {
id: ID!
farmingSkill: BigInt!
totalValueSold: BigInt!
}
We will use the user address as an ID. The fields farmingSkill and totalValueSold are u64 in Sway, so to safely map them to JavaScript values, we'll use BigInt.
EventHandlers.tsβ
The event handlers generated by contract import are quite simple and only add an entity to a DB when a related event is indexed.
/*
* Please refer to https://docs.envio.dev for a thorough guide on all Envio indexer features
*/
SwayFarmContract.SellItem.handler(async ({ event, context }) => {
const entity: SwayFarm_SellItemEntity = {
id: `${event.chainId}_${event.block.height}_${event.logIndex}`,
};
context.SwayFarm_SellItem.set(entity);
});
Let's modify the handlers to update the Player entity instead. But before we start, we need to run pnpm codegen to generate utility code and types for the Player entity we've added.
pnpm codegen
It's time for a little bit of coding. The indexer is very simple; it requires us only to pass event data to an entity.
/**
Registers a handler that processes NewPlayer event
on the SwayFarm contract and stores the players in the DB
*/
SwayFarmContract.NewPlayer.handler(async ({ event, context }) => {
// Set the Player entity in the DB with the initial values
context.Player.set({
// The address in Sway is a union type of user Address and ContractID. Envio supports most of the Sway types, and the address value was decoded as a discriminated union 100% typesafe
id: event.params.address.payload.bits,
// Initial values taken from the contract logic
farmingSkill: 1n,
totalValueSold: 0n,
});
});
SwayFarmContract.LevelUp.handler(async ({ event, context }) => {
const playerInfo = event.params.player_info;
context.Player.set({
id: event.params.address.payload.bits,
farmingSkill: playerInfo.farming_skill,
totalValueSold: playerInfo.total_value_sold,
});
});
SwayFarmContract.SellItem.handler(async ({ event, context }) => {
const playerInfo = event.params.player_info;
context.Player.set({
id: event.params.address.payload.bits,
farmingSkill: playerInfo.farming_skill,
totalValueSold: playerInfo.total_value_sold,
});
});
Without overengineering, we simply set the player data in the database. What's nice is that whenever your ABI or entities in schema.graphql change, Envio regenerates types and surfaces compilation errors.
π§ You can find the indexer repo created during the tutorial on GitHub.
Starting the Indexerβ
π’ Make sure Docker is running
The following command starts the Docker containers and creates the databases for indexed data. Make sure to re-run pnpm dev if you've made any changes.
pnpm dev
Nice, we indexed 1,721,352 blocks containing 58,784 events in 10 seconds, and they continue coming in.
View the indexed resultsβ
Let's check indexed players on the local Hasura server.
open http://localhost:8080
The Hasura admin-secret / password is testing, and the tables can be viewed in the Data tab or queried from the playground.
Now, we can easily get the top 5 players, the number of inactive and active players, and the average sold value. What's left is a nice UI for the Sway Farm leaderboard, but that's not the tutorial's topic.
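For instance, a top-5 query against the Player entity defined earlier could look like this:
query TopPlayers {
  Player(limit: 5, order_by: { totalValueSold: desc }) {
    id
    farmingSkill
    totalValueSold
  }
}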
π§ A separate Guide page provides more details about navigating Hasura.
Deploy the indexer onto the hosted serviceβ
Once you have verified that the indexer is working for your contracts, you are ready to deploy it onto our hosted service.
Deploying an indexer onto the hosted service allows you to serve the indexed data via GraphQL queries to your front-end or back-end applications.
Navigate to the hosted service to start deploying your indexer and refer to this documentation for more information on deploying your indexer.
What next?β
Once you have successfully finished the tutorial, you are ready to become a blockchain indexing wizard!
Join our Discord channel to make sure you catch all new releases.
Indexing a Greeter Contractβ
File: Tutorials/greeter-tutorial.md
Introductionβ
This tutorial provides a step-by-step guide to indexing a simple Greeter smart contract deployed on multiple blockchains. You'll learn how to set up and run a multi-chain indexer using Envio's template system.
What is the Greeter Contract?β
The Greeter contract is a straightforward smart contract that allows users to store greeting messages on the blockchain. For this tutorial, we'll be indexing instances of this contract deployed on both Polygon and Linea networks.
What You'll Buildβ
By the end of this tutorial, you'll have:
- A functioning multi-chain indexer that tracks greeting events
- The ability to query these events through a GraphQL endpoint
- Experience with Envio's core indexing functionality
Prerequisitesβ
Before starting, ensure you have the following installed:
- Node.js (v22 or newer recommended)
- pnpm (v8 or newer)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.
Step 1: Initialize Your Projectβ
First, let's create a new project using Envio's Greeter template:
- Open your terminal and run:
pnpx envio init
- When prompted for a directory, you can press Enter to use the current directory or specify another path:
? Set the directory: (.) .
- Choose your preferred programming language for event handlers:
? Which language would you like to use?
> JavaScript
TypeScript
ReScript
- Select the Template initialization option:
? Choose an initialization option
> Template
Contract Import
- Choose the Greeter template:
? Which template would you like to use?
> Greeter
Erc20
After completing these steps, Envio will generate all the necessary files for your indexer project.
Step 2: Understanding the Generated Filesβ
Let's examine the key files that were created:
config.yamlβ
This configuration file defines which networks and contracts to index:
# Partial example
envio_node:
networks:
- name: polygon
# ... Polygon network settings
contracts:
- name: Greeter
address: "0x9D02A17dE4E68545d3a58D3a20BbBE0399E05c9c"
# ... contract settings
- name: linea
# ... Linea network settings
contracts:
- name: Greeter
address: "0xdEe21B97AB77a16B4b236F952e586cf8408CF32A"
# ... contract settings
schema.graphqlβ
This schema defines the data structures for the indexed events:
type Greeting {
id: ID!
user: String!
greeting: String!
blockNumber: Int!
blockTimestamp: Int!
transactionHash: String!
}
type User {
id: ID!
latestGreeting: String!
numberOfGreetings: Int!
greetings: [String!]!
}
src/EventHandlers.js (or .ts/.res)β
This file contains the logic to process events emitted by the Greeter contract.
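As a rough sketch (the template's actual implementation may differ), a NewGreeting handler that maintains both entities could look like this; it assumes the template selects the transaction hash field:
// Sketch of a NewGreeting handler; check the template's generated code
// for the exact identifiers
Greeter.NewGreeting.handler(async ({ event, context }) => {
  const userId = event.params.user;

  // Record the greeting event itself
  context.Greeting.set({
    id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
    user: userId,
    greeting: event.params.greeting,
    blockNumber: event.block.number,
    blockTimestamp: event.block.timestamp,
    transactionHash: event.transaction.hash, // assumes hash is in field_selection
  });

  // Upsert the user's aggregate stats
  const existing = await context.User.get(userId);
  context.User.set({
    id: userId,
    latestGreeting: event.params.greeting,
    numberOfGreetings: (existing?.numberOfGreetings ?? 0) + 1,
    greetings: [...(existing?.greetings ?? []), event.params.greeting],
  });
});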
Step 3: Start Your Indexerβ
Important: Make sure Docker Desktop is running before proceeding.
- Start the indexer with:
pnpm dev
This command:
- Launches Docker containers for the database and Hasura
- Sets up your local development environment
- Begins indexing data from the specified contracts
- Opens a terminal UI to monitor indexing progress
The indexer will retrieve data from both the Polygon and Linea blockchains, starting from the blocks specified in your config.yaml file.
Step 4: Interact with the Contractsβ
To see your indexer in action, you can write new greetings to the blockchain:
For Polygon:β
- Visit the contract on Polygonscan
- Connect your wallet
- Use the setGreeting function to write a new greeting
- Submit the transaction
For Linea:β
- Visit the contract on Lineascan
- Connect your wallet
- Use the setGreeting function to write a new greeting
- Submit the transaction
Since this is a multi-chain example, you can interact with both contracts to see how Envio handles data from different blockchains simultaneously.
Step 5: Query the Indexed Dataβ
Now you can explore the data your indexer has captured:
- Open Hasura at http://localhost:8080
- When prompted for authentication, use the password: testing
- Navigate to the Data tab to browse the database tables
- Or use the API tab to write GraphQL queries
Example Queryβ
Try this query to see the latest greetings:
query GetGreetings {
Greeting(limit: 10, order_by: { blockTimestamp: desc }) {
id
user
greeting
blockNumber
blockTimestamp
transactionHash
}
}
Step 6: Deploy to Production (Optional)β
When you're ready to move from local development to production:
- Visit the Envio Hosted Service
- Follow the steps to deploy your indexer
- Get a production GraphQL endpoint for your application
For detailed deployment instructions, see the Hosted Service documentation.
What You've Learnedβ
By completing this tutorial, you've learned:
- How to initialize an Envio project from a template
- How indexers process data from multiple blockchains
- How to query indexed data using GraphQL
- The basic structure of an Envio indexing project
Next Stepsβ
Now that you've mastered the basics, you can:
- Try the Contract Import feature to index any deployed contract
- Customize the event handlers to implement more complex indexing logic
- Add relationships between entities in your schema
- Explore the Advanced Querying features
- Create aggregated statistics from your indexed data
For more tutorials and examples, visit the Envio Documentation or join our Discord community for support.
Getting Price Data in Your Indexerβ
File: Tutorials/price-data.md
Introductionβ
Many blockchain applications require price data to calculate values such as:
- Historical token transfer values in USD
- Total value locked (TVL) in DeFi protocols over time
- Portfolio valuations at specific points in time
This tutorial explores three different approaches to incorporating price data into your Envio indexer, using a real-world example of tracking ETH deposits into a Uniswap V3 liquidity pool on the Blast blockchain.
TL;DR: The complete code for this tutorial is available in this GitHub repository.
What You'll Learnβ
In this tutorial, you'll:
- Compare three different methods for accessing token price data
- Analyze the tradeoffs between accuracy, decentralization, and performance
- Implement a multi-source price feed in an Envio indexer
- Build a practical example indexing Uniswap V3 liquidity events with price context
Price Data Methods Comparedβ
There are three primary methods to access price data within your indexer:
| Method | Description | Speed | Accuracy | Decentralization |
| --- | --- | --- | --- | --- |
| Oracles | On-chain price feeds (e.g., API3, Chainlink) | Fast | Medium | Medium |
| DEX Pools | Swap events from decentralized exchanges | Fast | Medium-High | High |
| Off-chain APIs | External services (e.g., CoinGecko) | Slow | High | Low |
Let's explore each method in detail.
Method 1: Using Oracle Price Feedsβ
Oracle networks provide on-chain price data through specialized smart contracts. For this tutorial, we'll use API3 price feeds on Blast.
How Oracles Workβ
Oracle services like API3 maintain a network of data providers that push price updates to on-chain contracts. These updates typically occur:
- At regular time intervals
- When price deviations exceed a predefined threshold (e.g., 1%)
- When manually triggered by network participants
Finding the Right Oracle Feedβ
To locate the ETH/USD price feed using API3 on Blast:
- Identify the API3 contract address: 0x709944a48cAf83535e43471680fDA4905FB3920a
- Find the data feed ID for ETH/USD:
  - The dAPI name "ETH/USD" as bytes32: 0x4554482f55534400000000000000000000000000000000000000000000000000
  - Using the dapiNameToDataFeedId function, this maps to 0x3efb3990846102448c3ee2e47d22f1e5433cd45fa56901abe7ab3ffa054f70b5
- Monitor the UpdatedBeaconSetWithBeacons events with this data feed ID to get price updates
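You can reproduce the bytes32 dAPI name yourself; a small sketch using viem:
import { stringToHex } from "viem";

// "ETH/USD" right-padded to 32 bytes gives the dAPI name above
const dapiName = stringToHex("ETH/USD", { size: 32 });
console.log(dapiName);
// 0x4554482f55534400000000000000000000000000000000000000000000000000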
Oracle Advantages and Limitationsβ
Advantages:
- Fast indexing (no external API calls required)
- Moderate decentralization
- Generally reliable data
Limitations:
- Updates only on significant price changes
- Limited token coverage (mainly high-liquidity pairs)
- Minor accuracy tradeoffs
Method 2: Using DEX Pool Swap Eventsβ
Decentralized exchanges like Uniswap provide price data through swap events. We'll use the USDB/WETH pool on Blast to derive ETH pricing.
Locating the Right DEX Poolβ
First, we need to find the specific Uniswap V3 pool for USDB/WETH:
import { createPublicClient, getContract, http, parseAbi } from "viem";
import { blast } from "viem/chains";

const usdb = "0x4300000000000000000000000000000000000003";
const weth = "0x4300000000000000000000000000000000000004";
const factoryAddress = "0x792edAdE80af5fC680d96a2eD80A44247D2Cf6Fd";
const factoryAbi = parseAbi([
"function getPool( address tokenA, address tokenB, uint24 fee ) external view returns (address pool)",
]);
const providerUrl = "https://rpc.ankr.com/blast";
const poolBips = 3000; // 0.3%. This is measured in hundredths of a bip
const client = createPublicClient({
chain: blast,
transport: http(providerUrl),
});
const factoryContract = getContract({
abi: factoryAbi,
address: factoryAddress,
client: client,
});
(async () => {
const poolAddress = await factoryContract.read.getPool([
usdb,
weth,
poolBips,
]);
console.log(poolAddress);
})();
Tip: You can also manually find the pool address using the getPool function on a block explorer.
Running this code reveals the USDB/WETH pool is at 0xf52B4b69123CbcF07798AE8265642793b2E8990C.
Getting Price Data From Swap Eventsβ
Uniswap V3 emits Swap events containing price information in the sqrtPriceX96 field. To convert this to a price, we'll use a formula in our event handler.
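Concretely, the conversion used later in this tutorial's handler is 2^192 divided by sqrtPriceX96 squared (both USDB and WETH use 18 decimals, so no decimal adjustment is needed):
// Derive the ETH price in USDB from a Uniswap V3 sqrtPriceX96 value;
// this mirrors the Swap handler implementation below
function sqrtPriceX96ToEthUsd(sqrtPriceX96: bigint): number {
  return Number(2n ** 192n / (sqrtPriceX96 * sqrtPriceX96));
}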
DEX Advantages and Limitationsβ
Advantages:
- Very decentralized
- High update frequency
- Wide token coverage
Limitations:
- Susceptible to price impact and manipulation (especially in low-liquidity pools)
- Requires extra calculations to derive prices
- May require multiple pools for cross-pair calculations
Method 3: Using Off-chain APIsβ
External price APIs like CoinGecko provide comprehensive token price data but require HTTP calls from your indexer.
Making API Requestsβ
Here's a simple function to fetch historical ETH prices from CoinGecko:
const COIN_GECKO_API_KEY = process.env.COIN_GECKO_API_KEY;
async function fetchEthPriceFromUnix(
unix: number,
token = "ethereum"
): Promise<number> {
// convert unix to date dd-mm-yyyy
const _date = new Date(unix * 1000);
const date = _date.toISOString().slice(0, 10).split("-").reverse().join("-");
return fetchEthPrice(date.slice(0, 10), token);
}
async function fetchEthPrice(
date: string,
token = "ethereum"
): Promise<number> {
const options = {
method: "GET",
headers: {
accept: "application/json",
"x-cg-demo-api-key": COIN_GECKO_API_KEY,
},
};
return fetch(
`https://api.coingecko.com/api/v3/coins/${token}/history?date=${date}&localization=false`,
options as any
)
.then((res) => res.json())
.then((res: any) => {
const usdPrice = res.market_data.current_price.usd;
console.log(`ETH price on ${date}: ${usdPrice}`);
return usdPrice;
})
.catch((err) => {
      console.error(err);
      throw err;
    });
}
export default fetchEthPriceFromUnix;
Note: The free CoinGecko API only provides daily price data (at 00:00 UTC), not block-by-block precision. For production use, consider a paid API with more granular historical data.
Off-chain API Advantages and Limitationsβ
Advantages:
- Highest accuracy (with paid APIs)
- Most comprehensive token coverage
- No susceptibility to on-chain manipulation
Limitations:
- Significantly slows indexing speed due to API calls
- Centralized data source
- May require paid subscriptions for full functionality
Building a Multi-Source Price Feed Indexerβ
Now let's build an indexer that compares all three methods when tracking Uniswap V3 liquidity pool deposits.
Step 1: Initialize Your Indexerβ
Create a new Envio indexer project:
pnpx envio init
Step 2: Configure Your Indexerβ
Edit your config.yaml file to track both the API3 oracle and the Uniswap V3 pool:
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: envio-indexer
preload_handlers: true
networks:
- id: 81457
start_block: 11000000
contracts:
- name: Api3ServerV1
address:
- 0x709944a48cAf83535e43471680fDA4905FB3920a
handler: src/EventHandlers.ts
events:
- event: UpdatedBeaconSetWithBeacons(bytes32 indexed beaconSetId, int224 value, uint32 timestamp)
- name: UniswapV3Pool
address:
- 0xf52B4b69123CbcF07798AE8265642793b2E8990C
handler: src/EventHandlers.ts
events:
- event: Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick)
- event: Mint(address sender, address indexed owner, int24 indexed tickLower, int24 indexed tickUpper, uint128 amount, uint256 amount0, uint256 amount1)
field_selection:
transaction_fields:
- "hash"
Important: The field_selection section is needed to include transaction hashes in your indexed data.
Step 3: Define Your Schemaβ
Create a schema that captures price data from all three sources:
type OraclePoolPrice {
id: ID!
value: BigInt!
timestamp: BigInt!
block: Int!
}
type UniswapV3PoolPrice {
id: ID!
sqrtPriceX96: BigInt!
timestamp: Int!
block: Int!
}
type EthDeposited {
id: ID!
timestamp: Int!
block: Int!
oraclePrice: Float!
poolPrice: Float!
offChainPrice: Float!
offchainOracleDiff: Float!
depositedPool: Float!
depositedOffchain: Float!
depositedOrcale: Float!
txHash: String!
}
Step 4: Implement Event Handlersβ
Create event handlers to process data from all three sources:
import {
Api3ServerV1,
OraclePoolPrice,
UniswapV3Pool,
UniswapV3PoolPrice,
EthDeposited,
} from "generated";
let latestOraclePrice = 0;
let latestPoolPrice = 0;
Api3ServerV1.UpdatedBeaconSetWithBeacons.handler(async ({ event, context }) => {
// Filter out the beacon set for the ETH/USD price
if (
event.params.beaconSetId !=
"0x3efb3990846102448c3ee2e47d22f1e5433cd45fa56901abe7ab3ffa054f70b5"
) {
return;
}
const entity: OraclePoolPrice = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
value: event.params.value,
timestamp: event.params.timestamp,
block: event.block.number,
};
latestOraclePrice = Number(event.params.value) / Number(10 ** 18);
context.OraclePoolPrice.set(entity);
});
UniswapV3Pool.Swap.handler(async ({ event, context }) => {
const entity: UniswapV3PoolPrice = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
sqrtPriceX96: event.params.sqrtPriceX96,
timestamp: event.block.timestamp,
block: event.block.number,
};
latestPoolPrice = Number(
BigInt(2 ** 192) /
(BigInt(event.params.sqrtPriceX96) * BigInt(event.params.sqrtPriceX96))
);
context.UniswapV3PoolPrice.set(entity);
});
UniswapV3Pool.Mint.handler(async ({ event, context }) => {
const offChainPrice = await fetchEthPriceFromUnix(event.block.timestamp);
const ethDepositedUsdPool =
(latestPoolPrice * Number(event.params.amount1)) / 10 ** 18;
const ethDepositedUsdOffchain =
(offChainPrice * Number(event.params.amount1)) / 10 ** 18;
const ethDepositedUsdOrcale =
(latestOraclePrice * Number(event.params.amount1)) / 10 ** 18;
const EthDeposited: EthDeposited = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
timestamp: event.block.timestamp,
block: event.block.number,
oraclePrice: round(latestOraclePrice),
poolPrice: round(latestPoolPrice),
offChainPrice: round(offChainPrice),
depositedPool: round(ethDepositedUsdPool),
depositedOffchain: round(ethDepositedUsdOffchain),
depositedOrcale: round(ethDepositedUsdOrcale),
offchainOracleDiff: round(
((ethDepositedUsdOffchain - ethDepositedUsdOrcale) /
ethDepositedUsdOffchain) *
100
),
txHash: event.transaction.hash,
};
context.EthDeposited.set(EthDeposited);
});
function round(value: number) {
return Math.round(value * 100) / 100;
}
Step 5: Run Your Indexerβ
Start your indexer with:
pnpm dev
This will begin indexing data from block 11,000,000 on Blast.
Step 6: Analyze the Resultsβ
After running your indexer, you can query the data in Hasura to compare the three price data sources:
query ComparePrices {
EthDeposited(order_by: { block: desc }, limit: 10) {
block
timestamp
oraclePrice
poolPrice
offChainPrice
depositedPool
depositedOffchain
depositedOrcale
offchainOracleDiff
txHash
}
}
Results Analysisβ
When comparing our three price data sources, we found:
!Table of indexer results
Looking at the offchainOracleDiff column, we can see that oracle and off-chain prices typically align closely but can deviate by as much as 17.98% in some cases.
For the highlighted transaction (0xe7e79ddf29ed2f0ea8cb5bb4ffdab1ea23d0a3a0a57cacfa875f0d15768ba37d), we can compare our calculated values:
- Actual value (from block explorer): $2,358.27
- DEX pool value (depositedPool): $2,117.07
- Off-chain API value (depositedOffchain): $2,156.15
This demonstrates that even the most accurate methods have limitations.
Conclusion: Choosing the Right Methodβ
Based on our analysis, here are some recommendations for choosing a price data method:
Use Oracle or DEX Pools when:β
- Indexing speed is critical
- Absolute precision isn't required
- You're working with high-liquidity tokens
Use Off-chain APIs when:β
- Price accuracy is paramount
- Indexing speed is less important
- You can implement effective caching
For maximum accuracy while maintaining performance:β
- Combine multiple methods and aggregate results
- Use high-volume DEX pools on major networks
- Cache API results to avoid redundant calls
Next Stepsβ
To further enhance your price data indexing:
- Implement caching for off-chain API calls (see the sketch after this list)
- Cross-reference multiple DEX pools for better accuracy
- Consider time-weighted average prices (TWAP) instead of spot prices
- Use multi-chain indexing to access higher-liquidity pools on major networks
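As a tiny illustration of the first point, an in-memory cache around the fetchEthPrice helper from earlier could look like this sketch (a persistent cache, e.g. a database table, would survive restarts):
// Cache daily prices so repeated handler calls for the same date
// don't hit the CoinGecko API again
const priceCache = new Map<string, Promise<number>>();

function fetchEthPriceCached(date: string, token = "ethereum"): Promise<number> {
  const key = `${token}:${date}`;
  const cached = priceCache.get(key);
  if (cached) return cached;
  const result = fetchEthPrice(date, token);
  priceCache.set(key, result);
  return result;
}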
By carefully choosing and implementing the right price data strategy, you can build robust indexers that provide accurate financial data for your blockchain applications.
Dynamic Contracts / Factoriesβ
File: Advanced/dynamic-contracts.md
Introductionβ
Many blockchain systems use factory patterns where new contracts are created dynamically. Common examples include:
- DEXes like Uniswap where each trading pair creates a new contract
- NFT platforms that deploy new collection contracts
- Lending protocols that create new markets as isolated contracts
When indexing these systems, you need a way to discover and track these dynamically created contracts. Envio provides powerful tools to handle this use case.
Contract Registration Handlerβ
Instead of a template-based approach, we've introduced a contractRegister handler that can be added to any event.
This allows you to easily:
- Register contracts from any event handler.
- Use conditions and any logic you want to register contracts.
- Have nested factories which are registered by other factories.
// context.add<ContractName>(address) registers a dynamically discovered contract
YourContract.YourEvent.contractRegister(({ event, context }) => {
  context.addYourChildContract(event.params.childAddress);
});
Example: NFT Factory Patternβ
Let's look at a complete example using an NFT factory pattern.
Scenarioβ
- NftFactory contract creates new SimpleNft contracts
- We want to index events from all NFTs created by this factory
- Each time a new NFT is created, the factory emits a SimpleNftCreated event
1. Configure Your Contracts in config.yamlβ
name: nftindexer
description: NFT Factory
networks:
- id: 1337
start_block: 0
contracts:
- name: NftFactory
abi_file_path: abis/NftFactory.json
address: 0x4675a6B115329294e0518A2B7cC12B70987895C4 # Factory address is known
handler: src/EventHandlers.ts
events:
- event: SimpleNftCreated (string name, string symbol, uint256 maxSupply, address contractAddress)
- name: SimpleNft
abi_file_path: abis/SimpleNft.json
# No address field - we'll discover these addresses from events
handler: src/EventHandlers.ts
events:
- event: Transfer (address from, address to, uint256 tokenId)
Note that:
- The NftFactory contract has a known address specified in the config
- The SimpleNft contract has no address, as we'll register instances dynamically
2. Create the Contract Registration Handlerβ
In your src/EventHandlers.ts file:
// Register SimpleNft contracts whenever they're created by the factory
NftFactory.SimpleNftCreated.contractRegister(({ event, context }) => {
// Register the new NFT contract using its address from the event
context.addSimpleNft(event.params.contractAddress);
context.log.info(
`Registered new SimpleNft at ${event.params.contractAddress}`
);
});
// Handle Transfer events from all SimpleNft contracts
SimpleNft.Transfer.handler(async ({ event, context }) => {
// Your event handling logic here
context.log.info(
`NFT Transfer at ${event.srcAddress} - Token ID: ${event.params.tokenId}`
);
// Example: Store transfer information in the database
// ...
});
Async Contract Registerβ
As of version 2.21, you can use async contract registration.
This is a unique feature of Envio that allows you to perform an external call to determine the address of the contract to register.
NftFactory.SimpleNftCreated.contractRegister(async ({ event, context }) => {
const version = await getContractVersion(event.params.contractAddress);
if (version === "v2") {
context.addSimpleNftV2(event.params.contractAddress);
} else {
context.addSimpleNft(event.params.contractAddress);
}
});
When to Use Dynamic Contract Registrationβ
Use dynamic contract registration when:
- Your system includes factory contracts that deploy new contracts over time
- You want to index events from all instances of a particular contract type
- The addresses of these contracts aren't known at the time you create your indexer
Important Notesβ
- Block Coverage: When a dynamic contract is registered, Envio will index all events from that contract in the same block where it was created, even if those events happened in transactions before the registration event. This is particularly useful for contracts that emit events during their construction.
- Handler Organization: You can register contracts from any event handler. For example, you might register a token contract when you see it being added to a registry, not just when it's created.
- Pre-registration: Pre-registration was previously a recommended mode to optimize performance, but starting from version 2.19 the option was removed in favor of the default behavior, which became even faster.
Debugging Tipsβ
- Use logging in your contractRegister function to confirm contracts are being registered.
- If you're not seeing events from your dynamic contracts, verify they're being properly registered in the database.
For more information on writing event handlers, see the Event Handlers Guide.
Config Schema Referenceβ
File: Advanced/config-schema-reference.md
Static, deep-linkable reference for the config.yaml JSON Schema.
Tip: Use the Table of Contents to jump to a field or definition.
Top-level Propertiesβ
- description
- name (required)
- ecosystem
- schema
- output
- contracts
- networks (required)
- unordered_multichain_mode
- event_decoder
- rollback_on_reorg
- save_full_history
- field_selection
- raw_events
descriptionβ
Description of the project
- type: string | null
Example (config.yaml):
description: Greeter indexer
nameβ
Name of the project
- type: string
Example (config.yaml):
name: MyIndexer
ecosystemβ
Ecosystem of the project.
- type: anyOf(unknown | null)
Variants:
1: EcosystemTag
2: null
Example (config.yaml):
ecosystem: evm
schemaβ
Custom path to schema.graphql file
- type: string | null
Example (config.yaml):
schema: ./schema.graphql
outputβ
Path where the generated directory will be placed. By default it's 'generated' relative to the current working directory. If set, it'll be a path relative to the config file location.
- type: string | null
Example (config.yaml):
output: ./generated
contractsβ
Global contract definitions that must contain all definitions except addresses. You can share a single handler/abi/event definitions for contracts across multiple chains.
- type: array | null
Example (config.yaml):
contracts:
- name: Greeter
handler: src/EventHandlers.ts
events:
- event: "NewGreeting(address user, string greeting)"
networksβ
Configuration of the blockchain networks that the project is deployed on.
- type: array<unknown>
- items: unknown
- items ref: Network
Example (config.yaml):
networks:
- id: 1
start_block: 0
contracts:
- name: Greeter
address: 0x9D02A17dE4E68545d3a58D3a20BbBE0399E05c9c
unordered_multichain_modeβ
A flag to indicate if the indexer should use a single queue for all chains or a queue per chain (default: false)
- type: boolean | null
Example (config.yaml):
unordered_multichain_mode: true
event_decoderβ
The event decoder to use for the indexer (default: hypersync-client)
- type: anyOf(unknown | null)
Variants:
1: EventDecoder
2: null
Example (config.yaml):
event_decoder: hypersync-client
rollback_on_reorgβ
A flag to indicate if the indexer should rollback to the last known valid block on a reorg. This currently incurs a performance hit on historical sync and is recommended to turn this off while developing (default: true)
- type: boolean | null
Example (config.yaml):
rollback_on_reorg: true
save_full_historyβ
A flag to indicate if the indexer should save the full history of events. This is useful for debugging but will increase the size of the database (default: false)
- type:
boolean | null
Example (config.yaml):
save_full_history: false
field_selectionβ
Select the block and transaction fields to include in all events globally
- type:
anyOf(unknown | null)
Variants:
1: FieldSelection
2: null
Example (config.yaml):
field_selection:
transaction_fields:
- hash
block_fields:
- timestamp
raw_eventsβ
If true, the indexer will store the raw event data in the database. This is useful for debugging, but will increase the size of the database and the amount of time it takes to process events (default: false)
- type:
boolean | null
Example (config.yaml):
raw_events: true
Definitionsβ
EcosystemTagβ
- type: enum (1 value)
- allowed: evm
Example (config.yaml):
ecosystem: evm
GlobalContract_for_ContractConfigβ
- type: object
- required: name, handler, events
Properties:
- name: string – A unique project-wide name for this contract (no spaces)
- abi_file_path: string | null – Relative path (from config) to a json abi. If this is used then each configured event should simply be referenced by its name
- handler: string – The relative path to a file where handlers are registered for the given contract
- events: array<unknown> – A list of events that should be indexed on this contract
Example (config.yaml):
contracts:
- name: Greeter
handler: src/EventHandlers.ts
events:
- event: "NewGreeting(address user, string greeting)"
EventConfigβ
- type: object
- required: event
Properties:
- event: string – The human readable signature of an event, e.g. 'Transfer(address indexed from, address indexed to, uint256 value)', OR a reference to the name of an event in a json ABI file defined in your contract config. A provided signature will take precedence over what is defined in the json ABI
- name: string | null – Name of the event in the HyperIndex generated code. When omitted, the event field will be used. Should be unique per contract
- field_selection: anyOf(unknown | null) – Select the block and transaction fields to include in the specific event
Example (config.yaml):
contracts:
- name: Greeter
handler: src/EventHandlers.ts
events:
- event: "Assigned(address indexed recipientId, uint256 amount, address token)"
name: Assigned
field_selection:
transaction_fields:
- transactionIndex
FieldSelectionβ
- type: object
Properties:
- transaction_fields: array | null – The transaction fields to include in the event, or in all events if applied globally
- block_fields: array | null – The block fields to include in the event, or in all events if applied globally
Example (config.yaml):
events:
- event: "Assigned(address indexed user, uint256 amount)"
field_selection:
transaction_fields:
- transactionIndex
block_fields:
- timestamp
TransactionFieldβ
- type: enum (33 values)
- allowed: transactionIndex, hash, from, to, gas, gasPrice, maxPriorityFeePerGas, maxFeePerGas, cumulativeGasUsed, effectiveGasPrice, gasUsed, input, nonce, value, v, r, s, contractAddress, logsBloom, root, status, yParity, chainId, accessList, maxFeePerBlobGas, blobVersionedHashes, kind, l1Fee, l1GasPrice, l1GasUsed, l1FeeScalar, gasUsedForL1, authorizationList
BlockFieldβ
- type: enum (24 values)
- allowed: parentHash, nonce, sha3Uncles, logsBloom, transactionsRoot, stateRoot, receiptsRoot, miner, difficulty, totalDifficulty, extraData, size, gasLimit, gasUsed, uncles, baseFeePerGas, blobGasUsed, excessBlobGas, parentBeaconBlockRoot, withdrawalsRoot, l1BlockNumber, sendCount, sendRoot, mixHash
Networkβ
- type: object
- required: id, start_block, contracts
Properties:
- id: integer – The public blockchain network ID
- rpc_config: anyOf(unknown | null) – RPC configuration for utilizing as the network's data-source. Typically optional for chains with HyperSync support, which is highly recommended. HyperSync dramatically enhances performance, providing up to a 1000x speed boost over traditional RPC
- rpc: anyOf(unknown | null) – RPC configuration for your indexer. If not specified otherwise, for networks supported by HyperSync, RPC serves as a fallback for added reliability. For others, it acts as the primary data-source. HyperSync offers significant performance improvements, up to 1000x faster than traditional RPC
- hypersync_config: anyOf(unknown | null) – Optional HyperSync config for additional fine-tuning
- confirmed_block_threshold: integer | null – The number of blocks from the head that the indexer should account for in case of reorgs
- start_block: integer – The block at which the indexer should start ingesting data
- end_block: integer | null – The block at which the indexer should terminate
- contracts: array<unknown> – All the contracts that should be indexed on the given network
Example (config.yaml):
networks:
- id: 1
start_block: 0
end_block: 19000000
contracts:
- name: Greeter
address: 0x1111111111111111111111111111111111111111
RpcConfigβ
- type: object
- required: url
Properties:
- url: anyOf(string | array<string>) – URL of the RPC endpoint. Can be a single URL or an array of URLs. If multiple URLs are provided, the first one will be used as the primary RPC endpoint and the rest will be used as fallbacks
- initial_block_interval: integer | null – The starting interval in range of blocks per query
- backoff_multiplicative: number | null – After an RPC error, how much to scale back the number of blocks requested at once
- acceleration_additive: integer | null – Without RPC errors or timeouts, how much to increase the number of blocks requested by for the next batch
- interval_ceiling: integer | null – Do not further increase the block interval past this limit
- backoff_millis: integer | null – After an error, how long to wait before retrying
- fallback_stall_timeout: integer | null – If a fallback RPC is provided, the amount of time in ms to wait before kicking off the next provider
- query_timeout_millis: integer | null – How long to wait before cancelling an RPC request
Example (config.yaml):
networks:
- id: 1
rpc_config:
url: https://eth.llamarpc.com
initial_block_interval: 1000
NetworkRpcβ
- type: anyOf(string | unknown | array<unknown>)
Variants:
1: string
2: Rpc
3: array<unknown>
Example (config.yaml):
networks:
- id: 1
rpc: https://eth.llamarpc.com
Rpcβ
- type: object
- required: url, for
Properties:
- url: string – The RPC endpoint URL
- for: unknown – Determines if this RPC is for historical sync, real-time chain indexing, or as a fallback
- initial_block_interval: integer | null – The starting interval in range of blocks per query
- backoff_multiplicative: number | null – After an RPC error, how much to scale back the number of blocks requested at once
- acceleration_additive: integer | null – Without RPC errors or timeouts, how much to increase the number of blocks requested by for the next batch
- interval_ceiling: integer | null – Do not further increase the block interval past this limit
- backoff_millis: integer | null – After an error, how long to wait before retrying
- fallback_stall_timeout: integer | null – If a fallback RPC is provided, the amount of time in ms to wait before kicking off the next provider
- query_timeout_millis: integer | null – How long to wait before cancelling an RPC request
Example (config.yaml):
networks:
- id: 1
rpc:
- url: https://eth.llamarpc.com
for: sync
Forβ
- type: oneOf(const sync | const fallback)
Variants:
1: const sync
2: const fallback
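Example (config.yaml, a sketch showing both variants; the fallback URL is a placeholder):
networks:
  - id: 1
    rpc:
      - url: https://eth.llamarpc.com
        for: sync
      - url: https://fallback-rpc.example.com
        for: fallback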
HypersyncConfigβ
- type: object
- required: url
Properties:
- url: string – URL of the HyperSync endpoint (default: the most performant HyperSync endpoint for the network)
Example (config.yaml):
networks:
- id: 1
hypersync_config:
url: https://eth.hypersync.xyz
NetworkContract_for_ContractConfigβ
- type: object
- required: name
Properties:
- name: string – A unique project-wide name for this contract if events and handler are defined, OR a reference to the name of a contract defined globally at the top level
- address: unknown – A single address or a list of addresses to be indexed. This can be left as null in the case where this contract's addresses will be registered dynamically
- abi_file_path: string | null – Relative path (from config) to a json abi. If this is used then each configured event should simply be referenced by its name
- handler: string – The relative path to a file where handlers are registered for the given contract
- events: array<unknown> – A list of events that should be indexed on this contract
Example (config.yaml):
networks:
- id: 1
start_block: 0
contracts:
- name: Greeter
address:
- 0x1111111111111111111111111111111111111111
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
Addressesβ
- type: anyOf(anyOf(string | integer) | array<anyOf(string | integer)>)
Variants:
1: anyOf(string | integer)
2: array<anyOf(string | integer)>
Example (config.yaml):
networks:
- id: 1
contracts:
- name: Greeter
address:
- 0x1111111111111111111111111111111111111111
- 0x2222222222222222222222222222222222222222
EventDecoderβ
- type: enum (2 values)
- allowed: viem, hypersync-client
Example (config.yaml):
event_decoder: hypersync-client
Wildcard Indexingβ
File: Advanced/wildcard-indexing.mdx
Wildcard indexing is a feature that allows you to index all events matching a specified event signature without requiring the contract address from which the event was emitted. This is useful in cases such as indexing contracts deployed through factories, where the factory contract does not emit any events upon contract creation. It also enables indexing events from all contracts implementing a standard (e.g. all ERC20 transfers).
Wildcard Indexing is supported for HyperSync & HyperFuel data sources starting from v2.3.0. For the RPC data source, support was added in the v2.12.0 release.
Index all ERC20 transfersβ
As an example, let's say we want to index all ERC20 Transfer
events. Start with a config.yaml
file:
name: transfer-indexer
networks:
- id: 1
start_block: 0
contracts:
- name: ERC20
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
Let's also define some entities in schema.graphql
file, so our handlers can store the processed data:
type Transfer {
id: ID!
from: String!
to: String!
}
And the last bit is to register an event handler in src/EventHandlers.ts. Note how we pass the wildcard: true option to enable wildcard indexing:
import { ERC20 } from "generated";

ERC20.Transfer.handler(
  async ({ event, context }) => {
    context.Transfer.set({
      id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
      from: event.params.from,
      to: event.params.to,
    });
  },
  { wildcard: true }
);
Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
})
},
~eventConfig={wildcard: true},
)
After running your indexer with pnpm dev
you will have all ERC20 Transfer
events indexed, regardless of the contract address from which the event was emitted.
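Once indexed, you can query the data from the local GraphQL endpoint; a sketch against the Transfer entity defined above:
query {
  Transfer(limit: 10) {
    id
    from
    to
  }
}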
Topic Filteringβ
Indexing all ERC20 Transfer events means a huge volume of data, so ideally you should narrow it down to only the events you truly need using the Topic Filtering feature.
When you register an event handler or a contract register, you can provide the eventFilters option. You can filter by each indexed parameter on the given event.
Let's say you only want to index mints, i.e. Transfer events where the from address is equal to ZERO_ADDRESS:
import { ERC20 } from "generated";

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

ERC20.Transfer.handler(
  async ({ event, context }) => {
    //... your handler logic
  },
  { wildcard: true, eventFilters: { from: ZERO_ADDRESS } }
);
open Types.SingleOrMultiple
let zeroAddress = Address.unsafeFromString("0x0000000000000000000000000000000000000000")
Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
~eventConfig={
wildcard: true,
eventFilters: Single({from: single(zeroAddress)}),
},
)
Multiple Filtersβ
If you want to index both mints and burns, you can provide multiple filters as an array. Also, every parameter can accept an array to filter by multiple possible values. We'll use this to filter by a group of whitelisted addresses in the example below:
import { ERC20 } from "generated";

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
const WHITELISTED_ADDRESSES = [
  "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
  "0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
  "0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
];

ERC20.Transfer.handler(
  async ({ event, context }) => {
    //... your handler logic
  },
  {
    wildcard: true,
    eventFilters: [
      { from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES },
      { from: WHITELISTED_ADDRESSES, to: ZERO_ADDRESS },
    ],
  }
);
open Types.SingleOrMultiple
let zeroAddress = Address.unsafeFromString("0x0000000000000000000000000000000000000000")
let whitelistedAddresses = [
Address.unsafeFromString("0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"),
Address.unsafeFromString("0x70997970C51812dc3A010C7d01b50e0d17dc79C8"),
Address.unsafeFromString("0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC")
]
Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
~eventConfig={
wildcard: true,
eventFilters: Multiple([
{ from: single(zeroAddress), to: multiple(whitelistedAddresses) },
{ from: multiple(whitelistedAddresses), to: single(zeroAddress) }
]),
},
)
Different Filters per Networkβ
For Multichain Indexers you can pass a function to eventFilters
and use chainId
to filter by different values per network:
import { ERC20 } from "generated";

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
const WHITELISTED_ADDRESSES = {
  1: ["0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"],
  137: [
    "0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
    "0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
  ],
};

ERC20.Transfer.handler(
  async ({ event, context }) => {
    //... your handler logic
  },
  {
    wildcard: true,
    eventFilters: ({ chainId }) => [
      { from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES[chainId] },
      { from: WHITELISTED_ADDRESSES[chainId], to: ZERO_ADDRESS },
    ],
  }
);
Index all ERC20 transfers to your Contractβ
Besides chainId, you can also access the addresses value to filter by.
For example, if you have a Safe contract, you can index all ERC20 transfers sent specifically to/from your Safe contracts. The event filter function receives the addresses registered for the contract, so we need to define the Transfer event on the Safe contract:
name: locker
networks:
- id: 1
start_block: 0
contracts:
- name: Safe
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
addresses:
- 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
- 0x70997970C51812dc3A010C7d01b50e0d17dc79C8
- 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC
import { Safe } from "generated";

Safe.Transfer.handler(async ({ event, context }) => {}, {
  wildcard: true,
  eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});
This example is not much different from using a WHITELISTED_ADDRESSES
constant, but this becomes much more powerful when the Safe
contract addresses are registered dynamically by a factory contract:
name: locker
networks:
- id: 1
start_block: 0
contracts:
- name: SafeRegistry
handler: ./src/EventHandlers.ts
events:
- event: NewSafe(address safe)
addresses:
- 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
- name: Safe
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
import { SafeRegistry, Safe } from "generated";

SafeRegistry.NewSafe.contractRegister(async ({ event, context }) => {
  context.addSafe(event.params.safe);
});

Safe.Transfer.handler(async ({ event, context }) => {}, {
  wildcard: true,
  eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});
Assert ERC20 Transfers in Handlerβ
Once you have all the ERC20 transfers relevant to your contracts, you can filter them further in the handler. For example, to keep only USDC transfers:
import { Safe } from "generated";

const USDC_ADDRESS = {
  84532: "0x036CbD53842c5426634e7929541eC2318f3dCF7e",
  11155111: "0x1c7D4B196Cb0C7B01d743Fbc6116a902379C7238",
};

Safe.Transfer.handler(
  async ({ event, context }) => {
    // Filter and store only the USDC transfers that involve a Safe address
    if (event.srcAddress === USDC_ADDRESS[event.chainId]) {
      context.Transfer.set({
        id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
        from: event.params.from,
        to: event.params.to,
      });
    }
  },
  {
    wildcard: true,
    eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
  }
);
Contract Register Exampleβ
The same eventFilters can be applied to the contractRegister and handlerWithLoader APIs. Here is an example where we only register Uniswap pools that contain the DAI token:
import { UniV3Factory } from "generated";

const DAI_ADDRESS = "0x6B175474E89094C44Da98b954EedeAC495271d0F";

UniV3Factory.PoolCreated.contractRegister(
  async ({ event, context }) => {
    const poolAddress = event.params.pool;
    context.UniV3Pool.add(poolAddress);
  },
  { eventFilters: [{ token0: DAI_ADDRESS }, { token1: DAI_ADDRESS }] }
);
open Types.SingleOrMultiple
let daiAddress = Address.unsafeFromString("0x6B175474E89094C44Da98b954EedeAC495271d0F")
Handlers.UniV3Factory.PoolCreated.contractRegister(
async ({ event, context }) => {
let poolAddress = event.params.pool
context.UniV3Pool.add(poolAddress)
},
~eventConfig={
eventFilters: Multiple([
{ token0: single(daiAddress) },
{ token1: single(daiAddress) }
])
},
)
Handler With Loader Exampleβ
For the handlerWithLoader API, simply add the wildcard or eventFilters options to the single argument object:
ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {},
handler: async ({ event, context }) => {},
wildcard: ...,
eventFilters: ...,
});
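For instance, a filled-in sketch (assuming an Account entity exists in your schema, and reusing ZERO_ADDRESS from the earlier examples):
import { ERC20 } from "generated";

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

ERC20.Transfer.handlerWithLoader({
  // Preload the receiving account before the handler runs
  loader: async ({ event, context }) => context.Account.get(event.params.to),
  handler: async ({ event, context, loaderReturn }) => {
    const receiver = loaderReturn;
    // ... your handler logic using the preloaded receiver
  },
  wildcard: true,
  eventFilters: { from: ZERO_ADDRESS },
});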
Limitationsβ
- For any given network, only one event of a given signature can be indexed using wildcard indexing. This means that if you have multiple contract definitions in your config that contain the same event signature, only one of them is allowed to be set to wildcard: true.
- Either the contractRegister or the handler function can take an event config object (with wildcard/eventFilters fields), but not both.
- The RPC data source currently supports Topic Filtering only when applied to a single wildcard event.
Preload Optimizationβ
File: Advanced/preload-optimization.md
Important! Preload optimization makes your handlers run twice.
Starting from envio@2.27
all new indexers are created with preload optimization pre-configured by default.
This optimization enables HyperIndex to efficiently preload entities used by handlers through batched database queries, while ensuring events are processed synchronously in their original order. When combined with the Effect API for external calls, this feature delivers performance improvements of multiple orders of magnitude compared to other indexing solutions.
Configureβ
For existing indexers, you currently need to explicitly enable preload optimization in your config.yaml file. In the future, it will be enabled by default:
preload_handlers: true
Why Preload?β
To ensure reliable data, HyperIndex guarantees that all events will be processed in the same order as they occurred on-chain.
This guarantee is crucial as it allows you to build indexers that depend on the sequential order of events.
However, this leads to a challenge: Handlers must run one at a time, sequentially for each event. Any asynchronous operations will block the entire process.
To solve this, we introduced Preload Optimization.
It combines in-memory storage, batching, deduplication, and the Effect API to parallelize asynchronous operations across batches of events.
How It Works?β
With Preload Optimization handlers run twice per event:
- First Run (Preload Phase): All event handlers run concurrently for the whole batch of events. During this phase, all DB write operations are skipped; only DB read operations and external calls are performed.
- Second Run (Processing Phase): Each event handler runs sequentially in the on-chain order. During this phase, it gets data from the in-memory store, reflecting changes made by previously processed events.
This double execution pattern ensures that entities created by earlier events in the batch are available to later events.
The Database I/O Problemβ
Consider this common pattern of getting entities in event handlers:
ERC20.Transfer.handler(async ({ event, context }) => {
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);
// Process the transfer...
});
Without Preload Optimization: If you're processing 5,000 transfer events, each with unique from and to addresses, this results in 10,000 total database roundtrips, one for each sender and receiver lookup (2 per event × 5,000 events). This creates a significant bottleneck that slows down your entire indexing process.
With Preload Optimization: During the Preload Phase, all 5,000 events are processed in parallel. HyperIndex batches database reads that occur simultaneously into single database queries - one query for sender lookups and one for receiver lookups. The loaded accounts are cached in memory. After the Preload Phase completes, the second processing phase begins. This phase runs handlers sequentially in on-chain order, but instead of making database calls, it retrieves the data from the in-memory cache.
For our example of 5,000 transfer events, this optimization reduces database roundtrips from 10,000 calls to just 2!
Optimizing for Concurrencyβ
You can further optimize performance by requesting multiple entities concurrently:
ERC20.Transfer.handler(async ({ event, context }) => {
// Request sender and receiver concurrently for maximum efficiency
const [sender, receiver] = await Promise.all([
context.Account.get(event.params.from),
context.Account.get(event.params.to),
]);
// Process the transfer...
});
This approach can reduce the database roundtrips to just 1 for the entire batch of events!
The External Calls Problemβ
Let's say you want to populate your indexer with offchain data:
ERC20.Transfer.handler(async ({ event, context }) => {
// Without Preload: Blocking external calls
const metadata = await fetch(
`https://api.example.com/metadata/${event.params.from}`
);
// Process the transfer...
});
Without Preload Optimization: If you're processing 5,000 transfer events, each with an external call, this results in 5,000 sequential external calls, each waiting for the previous one to complete. This can turn a fast indexing process into a slow, sequential crawl.
With Preload Optimization: Since handlers run twice for each event, making direct external calls can be problematic. The Effect API provides a solution. During the Preload Phase, it batches all external calls and runs them in parallel. Then during the Processing Phase, it runs the handlers sequentially, retrieving the already requested data from the in-memory store.
import { experimental_createEffect, S } from "envio";

const fetchMetadata = experimental_createEffect(
  {
    name: "fetchMetadata",
    input: {
      from: S.string,
    },
    output: {
      decimals: S.number,
      symbol: S.string,
    },
  },
  async ({ input }) => {
    const response = await fetch(
      `https://api.example.com/metadata/${input.from}`
    );
    // Parse the JSON body to match the declared output shape
    return await response.json();
  }
);
ERC20.Transfer.handler(async ({ event, context }) => {
// With Preload: Performs the call in parallel
const metadata = await context.effect(fetchMetadata, {
from: event.params.from,
});
// Process the transfer...
});
Assuming an average call takes 200ms, this optimization reduces the total processing time for 5,000 events from ~16 minutes to ~200 milliseconds - making it 5,000 times faster!
Learn more about the Effect API in our dedicated guide.
Preload Phase Behaviorβ
The Preload Phase is a special phase that runs before the actual event processing. It's designed to preload data that will be used during event processing.
Key characteristics of the Preload Phase:
- It runs in parallel for all events in the batch
- Exceptions won't crash the indexer but will silently abort the Preload Phase for that specific event (starting from envio@2.23)
- All storage updates are ignored
- All context.log calls are ignored
During the second run (Processing Phase), all operations become fully enabled:
- Exceptions will crash the indexer if not handled
- Entity setting operations will persist to the database
- Logging will output to the console
This two-phase design allows the Preload Phase to optimistically attempt loading data that may not exist yet, while ensuring data consistency during the Processing Phase when all operations are executed normally.
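As a sketch of how the two phases interact (assuming an Account entity with a BigInt balance field):
ERC20.Transfer.handler(async ({ event, context }) => {
  // Preload Phase: this read is batched with reads from other events and
  // may return undefined if the account is only created by an earlier
  // event in the same batch.
  const sender = await context.Account.get(event.params.from);
  // Processing Phase: the same read is served from the in-memory store,
  // reflecting earlier events in the batch; the set below is only
  // persisted in this phase.
  context.Account.set({
    id: event.params.from,
    balance: (sender?.balance ?? 0n) - event.params.value,
  });
});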
If you're using an earlier version of envio
, we strongly recommend upgrading to the latest version using pnpm install envio@latest
to benefit from this improved Preload Phase behavior.
Double-Run Footgunβ
As mentioned above, the Preload Phase brings big benefits to event processing, but it also means you must be aware of its double-run nature:
- Never call fetch or make other external calls directly in the handler.
  - Use the Effect API instead.
  - Or use context.isPreload to guarantee that the code will run only once.
Due to the optimistic nature of the Preload Phase, the Effect API may occasionally execute with stale data, leading to redundant external calls. If you need to ensure that external calls are made with the most up-to-date data, you can use the context.isPreload
check to restrict execution to only the processing phase.
Note: This will disable the Preload Optimization for the external calls.
ERC20.Transfer.handler(async ({ event, context }) => {
const sender = await context.Account.get(event.params.from);
if (context.isPreload) {
return;
}
const metadata = await fetch(
`https://api.example.com/metadata/${sender.metadataId}`
);
});
Best Practicesβ
- Use Promise.all to load multiple entities concurrently for better performance
- Place database reads and external calls at the beginning of your handler to maximize the benefits of Preload Optimization
- Consider using context.isPreload to exit early from the Preload Phase after loading required data
Migrating from Loadersβ
The Preload Optimization for handlers grew out of an earlier concept called Loaders. If you're using loaders, we recommend migrating to preload optimization by enabling it in the config and moving all your code into the handler.
// Before:
ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
// Load sender and receiver accounts efficiently
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);
// Return the loaded data to the handler
return {
sender,
receiver,
};
},
handler: async ({ event, context, loaderReturn }) => {
const { sender, receiver } = loaderReturn;
// Process the transfer with the pre-loaded data
// No database lookups needed here!
},
});
// After:
ERC20.Transfer.handler(async ({ event, context }) => {
// Load sender and receiver accounts efficiently
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);
// To imitate the behavior of the loader,
// we can use `context.isPreload` to make next code run only once.
// Note: This is not required, but might be useful for CPU-intensive operations.
if (context.isPreload) {
return;
}
// Process the transfer with the pre-loaded data
});
Effect APIβ
File: Advanced/effect-api.md
The Effect API provides a powerful and convenient way to perform external calls from your handlers. It's especially effective when used with Preload Optimization:
- Automatic batching: Calls of the same kind are automatically batched together
- Intelligent memoization: Calls are memoized, so you don't need to worry about the handler function being called multiple times
- Deduplication: Calls with the same arguments are deduplicated to prevent overfetching
- Persistence: Built-in support for result persistence for indexer reruns (opt-in via cache: true)
- Future enhancements: We're working on automatic retry logic and enhanced caching workflows
To use the Effect API, you first need to define an effect using the experimental_createEffect function from the envio package:
import { experimental_createEffect, S } from "envio";

export const getMetadata = experimental_createEffect(
  {
    name: "getMetadata",
    input: S.string,
    output: {
      description: S.string,
      value: S.bigint,
    },
    cache: true,
  },
  async ({ input, context }) => {
    const response = await fetch(`https://api.example.com/metadata/${input}`);
    const data = await response.json();
    context.log.info(`Fetched metadata for ${input}`);
    return {
      description: data.description,
      value: data.value,
    };
  }
);
The first argument is an options object that describes the effect:
- name (required) – the name of the effect, used for debugging and logging
- input (required) – the input type of the effect
- output (required) – the output type of the effect
- cache (optional) – save effect results in the database to prevent duplicate calls (starting from envio@2.26.0)
The second argument is a function that will be called with the effect's input.
Note: For type definitions, you should use S from the envio package, which uses the Sury library under the hood.
After defining an effect, you can use context.effect
to call it from your handler, loader, or another effect.
The context.effect
function accepts an effect as the first argument and the effect's input as the second argument:
ERC20.Transfer.handler(async ({ event, context }) => {
const metadata = await context.effect(getMetadata, event.params.from);
// Process the event with the metadata
});
Viem Transport Batchingβ
You can use viem
or any other blockchain client inside your effect functions. When doing so, it's highly recommended to enable the batch
option to group all effect calls into fewer RPC requests:
import { createPublicClient, getContract, http, erc20Abi } from "viem";
import { mainnet } from "viem/chains";
import { experimental_createEffect, S } from "envio";

// Create a public client to interact with the blockchain
// (rpcUrl is assumed to be defined elsewhere, e.g. read from your environment)
const client = createPublicClient({
  chain: mainnet,
  // Enable batching to group calls into fewer RPC requests
  transport: http(rpcUrl, { batch: true }),
});
// Get the contract instance for your contract
const lbtcContract = getContract({
abi: erc20Abi,
address: "0x8236a87084f8B84306f72007F36F2618A5634494",
client: client,
});
// Effect to get the balance of a specific address at a specific block
export const getBalance = experimental_createEffect(
{
name: "getBalance",
input: {
address: S.string,
blockNumber: S.optional(S.bigint),
},
output: S.bigint,
cache: true,
},
async ({ input, context }) => {
try {
// If blockNumber is provided, use it to get balance at that specific block
const options = input.blockNumber
? { blockNumber: input.blockNumber }
: undefined;
const balance = await lbtcContract.read.balanceOf(
[input.address as `0x${string}`],
options
);
return balance;
} catch (error) {
context.log.error(`Error getting balance for ${input.address}: ${error}`);
// Return 0 on error to prevent processing failures
return BigInt(0);
}
}
);
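You could then call this effect from a handler; a minimal sketch (the ERC20 Transfer handler setup is assumed from the earlier examples):
ERC20.Transfer.handler(async ({ event, context }) => {
  // Calls are batched and deduplicated during the Preload Phase
  const balance = await context.effect(getBalance, {
    address: event.params.to,
    blockNumber: BigInt(event.block.number),
  });
  // ... use the balance
});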