
HyperIndex Complete Documentation

This document contains all HyperIndex documentation consolidated into a single file for LLM consumption.


Overview​

File: overview.md

HyperIndex: Fast Multichain Indexer

HyperIndex is a blazing-fast, developer-friendly multichain indexer, optimized for both local development and reliable hosted deployment. It empowers developers to effortlessly build robust backends for blockchain applications.


HyperIndex & HyperSync

HyperIndex is Envio's full-featured blockchain indexing framework that transforms on-chain events into structured, queryable databases with GraphQL APIs.

HyperSync is the high-performance data engine that powers HyperIndex. It provides the raw blockchain data access layer, delivering up to 2000x faster performance than traditional RPC endpoints.

While HyperIndex gives you a complete indexing solution with schema management and event handling, HyperSync can be used directly for custom data pipelines and specialized applications.

HyperSync API Token Requirements

Starting from 21 May 2025, HyperSync (the data engine powering HyperIndex) will implement rate limits for requests without API tokens. Here's what you need to know:

  • Local Development: No API token is required for local development, though requests will be rate limited.
  • Self-Hosted Deployments: API tokens are required for unlimited HyperSync access in self-hosted deployments. The token can be set via the ENVIO_API_TOKEN environment variable in your indexer configuration. This can be read from the .env file in the root of your HyperIndex project.
  • Hosted Service: Indexers deployed to our hosted service will have special access that doesn't require a custom API token.
  • Free Usage: The service remains free to use until mid-June 2025.
  • Future Pricing: From mid-June 2025 onwards, we will introduce tiered packages based on usage. Credits are calculated based on comprehensive metrics including data bandwidth, disk read operations, and other resource utilization factors. For preferred introductory pricing based on your specific use case, reach out to us on Discord.

For more details about API tokens, including how to generate and implement them, see our API Tokens documentation.
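For example, a self-hosted deployment can provide the token through the .env file in the root of your HyperIndex project (the value below is a placeholder):

# .env
ENVIO_API_TOKEN=your-api-token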

  • GitHub Repository ⭐
  • Join our Discord Community

Getting Started​

File: getting-started.md

Indexer Initialization​

Prerequisites​

  • Node.js (v18 or newer recommended)
  • pnpm (v8 or newer)
  • Docker Desktop (required to run the Envio indexer locally)

Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.

Essential Files​

After initialization, your indexer will contain three main files that are essential for its operation:

  1. config.yaml – Defines indexing settings such as blockchain endpoints, events to index, and advanced behaviors.
  2. schema.graphql – Defines the GraphQL schema for indexed data and its structure for efficient querying.
  3. src/EventHandlers.* – Contains the logic for processing blockchain events.

Note: The file extension for Event Handlers (*.ts, *.js, or *.res) depends on the programming language chosen (TypeScript, JavaScript, or ReScript).

You can customize your indexer by modifying these files to meet your specific requirements.

For a complete walkthrough of the process, refer to the Quickstart guide.


Contract Import​

File: contract-import.md

The Quickstart enables you to instantly autogenerate a powerful indexer and start querying blockchain data in minutes. This is the fastest and easiest way to begin using HyperIndex.

Example: Autogenerate an indexer for the Eigenlayer contract and index its entire history in less than 5 minutes by simply running pnpx envio init and providing the contract address from Etherscan.

Video Tutorials​

Contract Import Methods​

There are two convenient methods to import your contract:

  • Block Explorer (verified contracts on supported explorers like Etherscan and Blockscout)
  • Local ABI (custom or unverified contracts)

1. Block Explorer Import​

This method uses a verified contract's address from a supported blockchain explorer (Etherscan, Routescan, etc.) to automatically fetch the ABI.

Steps:​

a. Select the blockchain

? Which blockchain would you like to import a contract from?
> ethereum-mainnet
goerli
optimism
base
bsc
gnosis
polygon
[↑↓ to move, enter to select]
note

HyperIndex supports all EVM-compatible chains. If your desired chain is not listed, you can import via the local ABI method or manually adjust the config.yaml file after initialization.

b. Enter the contract address

? What is the address of the contract?
[Use proxy address if ABI is for a proxy implementation]
tip

If using a proxy contract, always specify the proxy address, not the implementation address.

c. Select events to index

? Which events would you like to index?
> [x] ClaimRewards(address indexed from, address indexed reward, uint256 amount)
[x] Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)
[x] NotifyReward(address indexed from, address indexed reward, uint256 indexed epoch, uint256 amount)
[x] Withdraw(address indexed from, uint256 indexed tokenId, uint256 amount)
[space to select, → to select all, ← to deselect all]

d. Finish or add more contracts

You'll be prompted to continue adding more contracts or to complete the setup:

? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new network for same contract
Add a new contract (with a different ABI)

Generated Files & Configuration​

The Quickstart automatically generates key files:

1. config.yaml​

Automatically configured parameters include:

  • Network ID
  • Start Block
  • Contract Name
  • Contract Address
  • Event Signatures

By default, all selected events are included, but you can manually adjust the file if needed. See the detailed guide on config.yaml.
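As a rough sketch, the generated config.yaml typically looks something like the following (the indexer name, contract name, address, and start block below are placeholders, not values produced by the Quickstart):

name: my-indexer
networks:
  - id: 1 # Network ID
    start_block: 19000000 # Start Block
    contracts:
      - name: MyContract # Contract Name
        address: 0x0000000000000000000000000000000000000000 # Contract Address
        handler: src/EventHandlers.ts
        events: # Event Signatures
          - event: Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)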

2. GraphQL Schema​

  • Entities are automatically generated for each selected event.
  • Fields match the event parameters emitted.

See more details in the schema file guide.
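For instance, for the Deposit event selected above, the generated schema would contain an entity along these lines (treat the exact entity name as illustrative):

type MyContract_Deposit {
  id: ID!
  from: String!
  tokenId: BigInt!
  amount: BigInt!
}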

3. Event Handlers​

  • Handlers are autogenerated for each event.
  • Handlers create event-specific entities.

Learn more in the event handlers guide.
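A generated handler for that same Deposit event would look roughly like this sketch (the entity name and ID format are illustrative):

MyContract.Deposit.handler(async ({ event, context }) => {
  // Store one entity per emitted event
  context.MyContract_Deposit.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    from: event.params.from,
    tokenId: event.params.tokenId,
    amount: event.params.amount,
  });
});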


HyperIndex Performance Benchmarks​

File: benchmarks.md

Overview​

HyperIndex delivers industry-leading performance for blockchain data indexing. Independent benchmarks have consistently shown Envio's HyperIndex to be the fastest indexing solution available, with dramatic performance advantages over competitive offerings.

Recent Independent Benchmarks​

The most comprehensive and up-to-date benchmarks were conducted by Sentio in April 2025 and are available in the sentio-benchmark repository. These benchmarks compare Envio's HyperIndex against other popular indexers across multiple real-world scenarios:

Key Performance Highlights​

| Case | Description | Envio | Nearest Competitor | TheGraph | Ponder |
| --- | --- | --- | --- | --- | --- |
| LBTC Token Transfers | Event handling, No RPC calls, Write-only | 3m | 8m - 2.6x slower (Sentio) | 3h9m - 3780x slower | 1h40m - 2000x slower |
| LBTC Token with RPC calls | Event handling, RPC calls, Read-after-write | 1m | 6m - 6x slower (Sentio) | 1h3m - 63x slower | 45m - 45x slower |
| Ethereum Block Processing | 100K blocks with Metadata extraction | 7.9s | 1m - 7.5x slower (Subsquid) | 10m - 75x slower | 33m - 250x slower |
| Ethereum Transaction Gas Usage | Transaction handling, Gas calculations | 1m 26s | 7m - 4.8x slower (Subsquid) | N/A | 33m - 23x slower |
| Uniswap V2 Swap Trace Analysis | Transaction trace handling, Swap decoding | 41s | 2m - 3x slower (Subsquid) | 8m - 11x slower | N/A |
| Uniswap V2 Factory | Event handling, Pair and swap analysis | 8s | 2m - 15x slower (Subsquid) | 19m - 142x slower | 21m - 157x slower |

The independent benchmark results demonstrate that HyperIndex consistently outperforms all competitors across every tested scenario. This includes the most realistic real-world scenario (LBTC Token with RPC calls), where HyperIndex was up to 6x faster than the nearest competitor and over 63x faster than TheGraph.

Historical Benchmarking Results​

Our internal benchmarking from October 2023 showed similar performance advantages. When indexing the Uniswap V3 ETH-USDC pool contract on Ethereum Mainnet, HyperIndex achieved:

  • 2.1x faster indexing than the nearest competitor
  • Over 100x faster indexing than some popular alternatives

You can read the full details in our Indexer Benchmarking Results blog post.

Verify For Yourself​

We encourage developers to run their own benchmarks. You can use the templates provided in the Sentio benchmark repository or our sample indexer implementations for various scenarios.


Migrate from TheGraph to HyperIndex​

File: migration-guide.md

info

Please reach out to our team on Discord for personalized migration assistance.

Introduction​

Migrating from a subgraph to HyperIndex is designed to be a developer-friendly process. HyperIndex draws strong inspiration from TheGraph's subgraph architecture, which makes the migration simple, especially with the help of coding assistants like Cursor and other AI tools (don't forget to use our AI-friendly docs).

The process is simple but requires a good understanding of the underlying concepts. If you are new to HyperIndex, we recommend starting with the Getting Started guide.

Why Migrate to HyperIndex?​

  • Superior Performance: Up to 100x faster indexing speeds
  • Lower Costs: Reduced infrastructure requirements and operational expenses
  • Better Developer Experience: Simplified configuration and deployment
  • Advanced Features: Access to capabilities not available in other indexing solutions
  • Seamless Integration: Easy integration with existing GraphQL APIs and applications

Subgraph to HyperIndex Migration Overview​

Migration consists of three major steps:

  1. Subgraph.yaml migration
  2. Schema migration - near copy paste
  3. Event handler migration

At any point in the migration run

pnpm envio codegen

to verify the config.yaml and schema.graphql files are valid.

or run

pnpm dev

to verify the indexer is running and indexing correctly.

0.5 Use npx envio init to generate a boilerplate​

As a first step, we recommend using npx envio init to generate a boilerplate for your project. This will handle the creation of the config.yaml file and a basic schema.graphql file with generic handler functions.

1. subgraph.yaml β†’ config.yaml​

npx envio init will generate this for you. It's a straightforward configuration file conversion: you specify which contracts to index, which networks to index them on (multiple networks can be specified with Envio), and which events from those contracts to index.

Take the following conversion as an example, where a subgraph.yaml file is converted to config.yaml; the comparison below is for the Uniswap v4 pool manager subgraph.

theGraph - subgraph.yaml

specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
  file: ./schema.graphql
features:
  - nonFatalErrors
  - grafting
dataSources:
  - kind: ethereum/contract
    name: PositionManager
    network: mainnet
    source:
      abi: PositionManager
      address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
      startBlock: 21689089
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mappings/index.ts
      entities:
        - Position
      abis:
        - name: PositionManager
          file: ./abis/PositionManager.json
      eventHandlers:
        - event: Subscription(indexed uint256,indexed address)
          handler: handleSubscription
        - event: Unsubscription(indexed uint256,indexed address)
          handler: handleUnsubscription
        - event: Transfer(indexed address,indexed address,indexed uint256)
          handler: handleTransfer

HyperIndex - config.yaml

# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
  - id: 1
    start_block: 21689089
    contracts:
      - name: PositionManager
        address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
        handler: src/EventHandlers.ts
        events:
          - event: Subscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Transfer(address indexed from, address indexed to, uint256 indexed id)

For any potential hurdles, please refer to the Configuration File documentation.

2. Schema migration​

Copy and paste the schema from your subgraph into the HyperIndex schema.graphql file.

Small nuances to watch for:

  • You can remove the @entity directive
  • Enums are declared the same way but are represented as string unions (TypeScript/JavaScript) or polymorphic variants (ReScript) in handlers
  • BigDecimal fields are supported via the BigDecimal scalar
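As a small illustration with a hypothetical Position entity, often only the @entity directive needs to be dropped when moving a type across:

# subgraph schema.graphql
type Position @entity {
  id: ID!
  owner: Bytes!
  liquidity: BigInt!
}

# HyperIndex schema.graphql
type Position {
  id: ID!
  owner: Bytes!
  liquidity: BigInt!
}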

3. Event handler migration​

This consists of two parts

  1. Converting AssemblyScript to TypeScript
  2. Converting the subgraph syntax to HyperIndex syntax

3.1 Converting AssemblyScript to TypeScript

Subgraphs use AssemblyScript for event handlers, while HyperIndex handlers are typically written in TypeScript. Since AssemblyScript is a subset of TypeScript, copying the code across is usually straightforward, especially for pure functions.

3.2 Converting the subgraph syntax to HyperIndex syntax​

There are some subtle differences in the syntax of the subgraph and HyperIndex. Including but not limited to the following:

  • Replace Entity.save() with context.Entity.set()
  • Convert to async handler functions
  • Use await for loading entities const x = await context.Entity.get(id)
  • Use dynamic contract registration to register contracts

The below code snippets can give you a basic idea of what this difference might look like.

theGraph - eventHandler.ts

export function handleSubscription(event: SubscriptionEvent): void {
  const subscription = new Subscribe(event.transaction.hash + event.logIndex)

  subscription.tokenId = event.params.tokenId
  subscription.address = event.params.subscriber.toHexString()
  subscription.logIndex = event.logIndex
  subscription.blockNumber = event.block.number
  subscription.position = event.params.tokenId

  subscription.save()
}

HyperIndex - eventHandler.ts

PoolManager.Subscription.handler(async ({ event, context }) => {
  const entity = {
    id: event.transaction.hash + event.logIndex,
    tokenId: event.params.tokenId,
    address: event.params.subscriber,
    blockNumber: event.block.number,
    logIndex: event.logIndex,
    position: event.params.tokenId,
  };

  context.Subscription.set(entity);
});

Extra tips​

HyperIndex is a powerful tool that can index any contract. Some of its features go beyond what subgraphs offer, so in some cases you may want to optimise your migration further to take advantage of them. Here are some useful tips:

  • Use the field_selection option to add additional fields to your index. Doc here: field selection
  • Use the unordered_multichain_mode option to enable unordered multichain mode; this is the most common need for multichain indexing, but it comes with tradeoffs worth understanding. Doc here: unordered multichain mode
  • Use wildcard indexing to index by event signatures rather than by contract address.
  • HyperIndex uses the standard GraphQL query language, whereas the subgraph uses a custom query language. You can read about the slight nuances here. (We are working on a basic tool to help with backwards compatibility; please check in with us on Discord for its current status.)
  • Loaders are a powerful feature to optimize historical sync performance. You can read more about them here.
  • HyperIndex is very flexible: it can also index off-chain data or send messages to a queue for fetching external data, and you can further optimise such fetching with the Effect API.

Share Your Learnings​

If you discover helpful tips during your migration, we’d love contributions! Open a PR to this guide and help future developers.

Getting Help​

Join Our Discord: The fastest way to get personalized help is through our Discord community.


Configuration File​

File: Guides/configuration-file.mdx

The config.yaml file defines your indexer's behavior, including which blockchain events to index, contract addresses, which networks to index, and various advanced indexing options. It is a crucial step in configuring your HyperIndex setup.

After any changes to your config.yaml and the schema, run:

pnpm codegen

This command generates necessary types and code for your event handlers.

Key Configuration Options​

Contract Addresses​

Set the address of the smart contract you're indexing.

note

Addresses can be provided in checksum format or in lowercase. Envio accepts both and normalizes them internally.

Single address:

address: 0xContractAddress

Multiple addresses for the same contract:

contracts:
  - name: MyContract
    address:
      - 0xAddress1
      - 0xAddress2
tip

If using a proxy contract, always use the proxy address, not the implementation address.

Global definitions:
You can also avoid repeating addresses by using global contract definitions:

contracts:
  - name: Greeter
    abi: greeter.json

networks:
  - id: ethereum-mainnet
    contracts:
      - name: Greeter
        address: 0xProxyAddressHere

Raw Events Storage​

By default, HyperIndex doesn't store raw event data in the database to optimize performance and reduce storage requirements. However, you can enable this feature for debugging purposes or if you need to access the original event data.

To enable storage of raw events, add the following to your config.yaml:

raw_events: true

When enabled, all indexed events will be stored in the raw_events table in the database, which you can view through the Hasura interface. This is particularly useful for:

  • Debugging event processing issues
  • Verifying that events are being captured correctly
  • Creating custom queries against raw blockchain data

Note that enabling this option will increase database storage requirements and may slightly impact indexing performance.

Unordered Multichain Mode​

To improve indexing performance when indexing multiple blockchains, you can enable Unordered Multichain Mode. This setting allows parallel processing of events from different networks without strictly maintaining cross-chain event ordering. Useful for minimizing indexing latency when indexing at the head.

Activate this by adding to your config.yaml:

unordered_multichain_mode: true

Learn more about when and how to use this feature here.

Environment Variables​

Since envio@2.9.0, environment variable interpolation is supported for flexibility and security:

networks:
  - id: ${ENVIO_CHAIN_ID:-ethereum-mainnet}
    contracts:
      - name: Greeter
        address: ${ENVIO_GREETER_ADDRESS}

Run your indexer with custom environment variables:

ENVIO_CHAIN_ID=optimism ENVIO_GREETER_ADDRESS=0xYourContractAddress pnpm dev

Interpolation syntax:

  • ${ENVIO_VAR} – Use the value of ENVIO_VAR
  • ${ENVIO_VAR:-default} – Use ENVIO_VAR if set, otherwise use default

For more detailed information about environment variables, see our Environment Variables Guide.

Output Directory Path​

You can customize the path where the generated directory will be placed using the output option:

output: ./custom/generated/path

By default, the generated directory is placed in generated relative to the current working directory. If set, it will be a path relative to the config file location.

Advanced Configuration

This is an advanced configuration option. When using a custom output directory, you'll need to manually adjust your .gitignore file and project structure to match the new configuration.

Interactive Schema Explorer​

Explore detailed configuration schema parameters here:


Schema File​

File: Guides/schema-file.md

The schema.graphql file defines the data model for your HyperIndex indexer. Each entity type defined in this schema corresponds directly to a database table, with your event handlers responsible for creating and updating the records. HyperIndex automatically generates a GraphQL API based on these entity types, allowing easy access to the indexed data.

Scalar Types​

Scalar types represent basic data types and map directly to JavaScript, TypeScript, or ReScript types.

| GraphQL Scalar | Description | JavaScript/TypeScript | ReScript |
| --- | --- | --- | --- |
| ID | Unique identifier | string | string |
| String | UTF-8 character sequence | string | string |
| Int | Signed 32-bit integer | number | int |
| Float | Signed floating-point number | number | float |
| Boolean | true or false | boolean | bool |
| Bytes | UTF-8 character sequence (hex prefixed 0x) | string | string |
| BigInt | Signed integer (int256 in Solidity) | bigint | bigint |
| BigDecimal | Arbitrary-size floating-point | BigDecimal (imported) | BigDecimal.t |
| Timestamp | Timestamp with timezone | Date | Js.Date.t |
| Json | JSON object (from envio@2.20) | Json | Js.Json.t |

Learn more about GraphQL scalars here.
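For example, a hypothetical entity combining several of these scalars:

type Trade {
  id: ID!
  maker: Bytes!
  amount: BigInt!
  price: BigDecimal!
  executedAt: Timestamp!
  metadata: Json
}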

Enum Types​

Enums allow fields to accept only a predefined set of values.

Example:

enum AccountType {
  ADMIN
  USER
}

type User {
  id: ID!
  balance: Int!
  accountType: AccountType!
}

Enums translate to string unions (TypeScript/JavaScript) or polymorphic variants (ReScript):

TypeScript Example:


let user = {
  id: event.params.id,
  balance: event.params.balance,
  accountType: "USER", // enum as string
};

ReScript Example:

let user: Types.userEntity = {
  id: event.params.id,
  balance: event.params.balance,
  accountType: #USER, // polymorphic variant
};

Field Indexing (@index)​

Add an index to a field for optimized queries and loader performance:

type Token {
  id: ID!
  tokenId: BigInt!
  collection: NftCollection!
  owner: User! @index
}

  • All id fields and fields referenced via @derivedFrom are indexed automatically.

Generating Types​

Once you've defined your schema, run this command to generate these entity types that can be accessed in your event handlers:

pnpm envio codegen

You're now ready to define powerful schemas and efficiently query your indexed data with HyperIndex!


Event Handlers​

File: Guides/event-handlers.mdx


Registration​

A handler is a function that receives blockchain data, processes it, and inserts it into the database. You register handlers in the file defined in the handler field of your config.yaml file. By default, this is the src/EventHandlers.* file.


// TypeScript (src/EventHandlers.ts)
import { ContractName } from "generated";

ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});

// JavaScript (src/EventHandlers.js)
const { ContractName } = require("generated");

ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});

// ReScript (src/EventHandlers.res)
Handlers.ContractName.EventName.handler(async ({event, context}) => {
  // Your logic here
})
note

The generated module contains code and types based on config.yaml and schema.graphql files. Update it by running pnpm codegen command whenever you change these files.

Basic Example​

Here's a handler example for the NewGreeting event. It comes from the Greeter contract used in our beginner Greeter Tutorial:


TypeScript (src/EventHandlers.ts):

import { Greeter, User } from "generated";

// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
  const userId = event.params.user; // The id for the User entity
  const latestGreeting = event.params.greeting; // The greeting string that was added
  const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist

  // Update or create a new User entity
  const userEntity: User = currentUserEntity
    ? {
        id: userId,
        latestGreeting,
        numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
        greetings: [...currentUserEntity.greetings, latestGreeting],
      }
    : {
        id: userId,
        latestGreeting,
        numberOfGreetings: 1,
        greetings: [latestGreeting],
      };

  context.User.set(userEntity); // Set the User entity in the DB
});

In JavaScript, the handler body is identical; replace the import with const { Greeter } = require("generated");.
ReScript (src/EventHandlers.res):

open Types

// Handler for the NewGreeting event
Handlers.Greeter.NewGreeting.handler(async ({event, context}) => {
  let userId = event.params.user->Address.toString // The id for the User entity
  let latestGreeting = event.params.greeting // The greeting string that was added
  let maybeCurrentUserEntity = await context.user.get(userId) // Optional User entity that may already exist

  // Update or create a new User entity
  let userEntity: Entities.User.t = switch maybeCurrentUserEntity {
  | Some(existingUserEntity) => {
      id: userId,
      latestGreeting,
      numberOfGreetings: existingUserEntity.numberOfGreetings + 1,
      greetings: existingUserEntity.greetings->Belt.Array.concat([latestGreeting]),
    }
  | None => {
      id: userId,
      latestGreeting,
      numberOfGreetings: 1,
      greetings: [latestGreeting],
    }
  }

  context.user.set(userEntity) // Set the User entity in the DB
})

Advanced Use Cases​

HyperIndex provides many features to help you build more powerful and efficient indexers. Read more about these on separate pages:

  • Handle Factory Contracts with Dynamic Contract Registration (with nested factories support)
  • Perform external calls to decide which contract address to register using Async Contract Register
  • Index all ERC20 token transfers with Wildcard Indexing
  • Use Topic Filtering to ignore irrelevant events
    • With multiple filters for single event
    • With different filters per network
    • With filters by dynamically registered contract addresses (e.g. index all ERC20 transfers to/from your contract)
  • Access Contract State directly from handlers
  • Perform external calls from handlers by following the IPFS Integration guide
  • Optimise database access with Loaders

Context Object​

The handler context provides methods to interact with entities stored in the database.

Retrieving Entities​

Retrieve entities from the database using context.Entity.get where Entity is the name of the entity you want to retrieve, which is defined in your schema.graphql file.

await context.Entity.get(entityId);

It returns the entity object, or undefined if the entity doesn't exist.

Starting from envio@2.22.0 you can use context.Entity.getOrThrow to conveniently throw an error if the entity doesn't exist:

const pool = await context.Pool.getOrThrow(poolId);
// Will throw: Entity 'Pool' with ID '...' is expected to exist.

// Or you can pass a custom message as a second argument:
const pool = await context.Pool.getOrThrow(
  poolId,
  `Pool with ID ${poolId} is expected.`
);

Or use context.Entity.getOrCreate to automatically create an entity with default values if it doesn't exist:

const pool = await context.Pool.getOrCreate({
  id: poolId,
  totalValueLockedETH: 0n,
});

// Which is equivalent to:
let pool = await context.Pool.get(poolId);
if (!pool) {
  pool = {
    id: poolId,
    totalValueLockedETH: 0n,
  };
  context.Pool.set(pool);
}

Modifying Entities​

Use context.Entity.set to create or update an entity:

context.Entity.set({
  id: entityId,
  ...otherEntityFields,
});
note

Both context.Entity.set and context.Entity.deleteUnsafe methods use the In-Memory Storage under the hood and don't require await in front of them.

Deleting Entities (Unsafe)​

To delete an entity:

context.Entity.deleteUnsafe(entityId);
warning

The deleteUnsafe method is experimental and unsafe. You need to manually handle all entity references after deletion to maintain database consistency.

Updating Specific Entity Fields​

Use the following approach to update specific fields in an existing entity:

const pool = await context.Pool.get(poolId);
if (pool) {
  context.Pool.set({
    ...pool,
    totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
  });
}

ReScript:

let pool = await context.pool.get(poolId);
pool->Option.forEach(pool => {
  context.pool.set({
    ...pool,
    totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
  });
});

context.log​

The context object also provides a logger that you can use to log messages to the console. Unlike console.log calls, these logs also appear on our Hosted Service runtime logs page.

Read more in the Logging Guide.

External Calls​

Envio indexer runs using Node.js runtime. This means that you can use fetch or any other library like viem to perform external calls from your handlers.

Check out our IPFS Integration and Accessing Contract State guides for more information.
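For example, a handler could fetch off-chain data directly (a minimal sketch; the URL, response shape, and entities are hypothetical):

MyContract.PriceRequested.handler(async ({ event, context }) => {
  // Plain fetch call from within a handler (Node.js runtime)
  const response = await fetch(
    `https://api.example.com/price/${event.params.token}`
  );
  const { usd } = await response.json();

  context.TokenPrice.set({
    id: event.params.token,
    usd,
  });
});

Keep in mind that such calls run for every event and can slow down historical sync; see the next section for a faster approach.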

context.effect (Experimental)​

To ensure consistent and reliable data, all handlers are executed synchronously, in on-chain order. This means that external calls can significantly slow down processing.

To avoid this, you can use Loaders together with Effect API to parallelize external calls and make the indexing process more efficient.



// Imports shown here follow the Effect API guide; adjust to your setup.
import { experimental_createEffect, S } from "envio";
import { ERC20 } from "generated";

// Define an effect that will be called from the handler.
const getMetadata = experimental_createEffect(
  {
    name: "getMetadata",
    input: S.string,
    output: {
      description: S.string,
      value: S.bigint,
    },
  },
  async ({ input }) => {
    const response = await fetch(`https://api.example.com/metadata/${input}`);
    const data = await response.json();
    return {
      description: data.description,
      value: data.value,
    };
  }
);

ERC20.Transfer.handlerWithLoader({
  loader: async ({ event, context }) => {
    // Load metadata for the token.
    // This will be executed in parallel for all events in the batch.
    // The call is automatically memoized, so you don't need to worry about duplicate requests.
    const sender = await context.effect(getMetadata, event.params.from);

    // Return the loaded data to the handler
    return {
      sender,
    };
  },
  handler: async ({ event, context, loaderReturn }) => {
    const { sender } = loaderReturn;

    // Process the transfer with the pre-loaded data
  },
});

Read more about the Effect API and Loaders in the dedicated guides.

Performance Considerations​

For performance optimization and best practices, refer to:

  • Benchmarking
  • Loaders

These guides offer detailed recommendations on optimizing entity loading and indexing performance.


Multichain Indexing​

File: Advanced/multichain-indexing.mdx

Understanding Multichain Indexing

Multichain indexing allows you to monitor and process events from contracts deployed across multiple blockchain networks within a single indexer instance. This capability is essential for applications that:

  • Track the same contract deployed across multiple networks
  • Need to aggregate data from different chains into a unified view
  • Monitor cross-chain interactions or state

How It Works​

With multichain indexing, events from contracts deployed on multiple chains can be used to create and update entities defined in your schema file. Your indexer will process events from all configured networks, maintaining proper synchronization across chains.

Configuration Requirements​

To implement multichain indexing, you need to:

  1. Populate the networks section in your config.yaml file for each chain
  2. Specify contracts to index from each network
  3. Create event handlers for the specified contracts

Real-World Example: Uniswap V4 Multichain Indexer​

For a comprehensive, production-ready example of multichain indexing, we recommend exploring our Uniswap V4 Multichain Indexer. This official reference implementation:

  • Indexes Uniswap V4 deployments across 10 different blockchain networks
  • Powers the official v4.xyz interface with real-time data
  • Demonstrates best practices for high-performance multichain indexing
  • Provides a complete, production-grade implementation you can study and adapt


The Uniswap V4 indexer showcases how to effectively structure a multichain indexer for a complex DeFi protocol, handling high volumes of data across multiple networks while maintaining performance and reliability.

Config File Structure for Multichain Indexing​

The config.yaml file for multichain indexing contains three key sections:

  1. Global contract definitions - Define contracts, ABIs, and events once
  2. Network-specific configurations - Specify chain IDs and starting blocks
  3. Contract instances - Reference global contracts with network-specific addresses
# Example structure (simplified)
contracts:
  - name: ExampleContract
    abi_file_path: ./abis/example-abi.json
    handler: ./src/EventHandlers.js
    events:
      - event: ExampleEvent

networks:
  - id: 1 # Ethereum Mainnet
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x1234..."
  - id: 137 # Polygon
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x5678..."

Key Configuration Concepts​

  • The global contracts section defines the contract interface, ABI, handlers, and events once
  • The networks section lists each blockchain network you want to index
  • Each network entry references the global contract and provides the network-specific address
  • This structure allows you to reuse the same handler functions and event definitions across networks

📢 Best Practice: When developing multichain indexers, append the chain ID to entity IDs to avoid collisions. For example: user-1 for Ethereum and user-137 for Polygon.

Multichain Event Ordering​

When indexing multiple chains, you have two approaches for handling event ordering:

Unordered Multichain Mode​

note

Unordered mode is recommended for most applications.

The indexer processes events as soon as they're available from each chain, without waiting for other chains. This "Unordered Multichain Mode" provides better performance and lower latency.

  • Events will still be processed in order within each individual chain
  • Events across different chains may be processed out of order
  • Processing happens as soon as events are emitted, reducing latency
  • You avoid waiting for the slowest chain's block time

This mode is ideal for most applications, especially when:

  • Operations on your entities are commutative (order doesn't matter)
  • Entities from different networks never interact with each other
  • Processing speed is more important than guaranteed cross-chain ordering

How to Enable Unordered Mode​

In your config.yaml:

unordered_multichain_mode: true
networks: ...

Ordered Mode​

note

Ordered mode is currently the default, but unordered mode will become the default in a future release. If you don't need strict deterministic ordering of events across all chains, we recommend using unordered mode.

If your application requires strict deterministic ordering of events across all chains, you can enable "Ordered Mode". In this mode, the indexer synchronizes event processing across all chains, ensuring that events are processed in the exact same order in every indexer run, regardless of which chain they came from.

When to Use Ordered Mode​

Use ordered mode only when:

  • The exact ordering of operations across different chains is critical to your application logic
  • You need guaranteed deterministic results across all indexer runs
  • You're willing to accept higher latency for cross-chain consistency

Cross-chain ordering is particularly important for applications like:

  • Bridge applications: Where messages or assets must be processed on one chain before being processed on another chain
  • Cross-chain governance: Where decisions made on one chain affect operations on another chain
  • Multi-chain financial applications: Where the sequence of transactions across chains affects accounting or risk calculations
  • Data consistency systems: Where the state must be consistent across multiple chains in a specific order

Technical Details​

With ordered mode enabled:

  • The indexer must wait for new blocks from every network before advancing
  • There is increased latency between when an event is emitted and when it's processed
  • Processing speed is limited by the block interval of the slowest network
  • Events are guaranteed to be processed in the same order in every indexer run

Cross-Chain Ordering Preservation​

Ordered mode ensures that the temporal relationship between events on different chains is preserved. This is achieved by:

  1. Global timestamp ordering: Events are ordered based on their block timestamps across all chains
  2. Deterministic processing: The same sequence of events will be processed in the same order every time

The primary trade-off is increased latency at the head of the chain. Since the indexer must wait for blocks from all chains to determine the correct ordering, the processing of recent events is delayed by the slowest chain's block time. For example, if Chain A has 2-second blocks and Chain B has 15-second blocks, the indexer will process events at the slower 15-second rate to maintain proper ordering.

This latency is acceptable for applications where correct cross-chain ordering is more important than real-time updates. For bridge applications in particular, this ordering preservation can be critical for security and correctness, as it ensures that deposit events on one chain are always processed before the corresponding withdrawal events on another chain.

Best Practices for Multichain Indexing​

1. Entity ID Namespacing​

Always namespace your entity IDs with the chain ID to prevent collisions between networks. This ensures that entities from different networks remain distinct.
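A minimal sketch of this pattern inside a handler (the event, entity, and fields are illustrative):

MyContract.Deposit.handler(async ({ event, context }) => {
  // Prefix the entity ID with the chain ID, e.g. "1-0xabc..." on Ethereum and "137-0xabc..." on Polygon
  const userId = `${event.chainId}-${event.params.user}`;

  context.User.set({
    id: userId,
    address: event.params.user,
  });
});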

2. Error Handling​

Implement robust error handling for network-specific issues. A failure on one chain shouldn't prevent indexing from continuing on other chains.

3. Testing​

  • Test your indexer with realistic scenarios across all networks
  • Use testnet deployments for initial validation
  • Verify entity updates work correctly across chains

4. Performance Considerations​

  • Use unordered mode when appropriate for better performance
  • Consider your indexing frequency based on the block times of each chain
  • Monitor resource usage, as indexing multiple chains increases load

Troubleshooting Common Issues​

  1. Different Network Speeds: If one network is significantly slower than others, consider using unordered mode to prevent bottlenecks.

  2. Entity Conflicts: If you see unexpected entity updates, verify that your entity IDs are properly namespaced with chain IDs.

  3. Memory Usage: If your indexer uses excessive memory, consider optimizing your entity structure and implementing pagination in your queries.

Next Steps​

  • Explore our Uniswap V4 Multichain Indexer for a complete implementation
  • Review performance optimization techniques for your indexer

Testing​

File: Guides/testing.mdx

Introduction​

Envio comes with a built-in testing library that enables developers to thoroughly validate their indexer behavior without requiring deployment or interaction with actual blockchains. This library is specifically crafted to:

  • Mock database states: Create and manipulate in-memory representations of your database
  • Simulate blockchain events: Generate test events that mimic real blockchain activity
  • Assert event handler logic: Verify that your handlers correctly process events and update entities
  • Test complete workflows: Validate the entire process from event creation to database updates

The testing library provides helper functions that integrate with any JavaScript-based testing framework (like Mocha, Jest, or others), giving you flexibility in how you structure and run your tests.

Learn by doing​

If you prefer to explore by example, the Greeter template includes complete tests that demonstrate best practices:

  1. Generate the Greeter template in TypeScript using the Envio CLI:

     pnpx envio init template -l typescript -d greeter -t greeter -n greeter

  2. Run the tests:

     pnpm test

  3. See the test/test.ts file to understand how the tests are written.

Writing tests​

Test Library Design​

The testing library follows key design principles that make it effective for testing HyperIndex indexers:

  • Immutable database: The mock database is immutable, with each operation returning a new instance. This makes it robust and easy to test against previous states.
  • Chainable operations: Operations can be chained together to build complex test scenarios.
  • Realistic simulations: Mock events closely mirror real blockchain events, allowing you to test your handlers in conditions similar to production.

Typical Test Flow​

Most tests will follow this general pattern:

  1. Initialize the mock database (empty or with predefined entities)
  2. Create a mock event with test parameters
  3. Process the mock event through your handler(s)
  4. Assert that the resulting database state matches your expectations

This flow allows you to verify that your event handlers correctly create, update, or modify entities in response to blockchain events.
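Here is a minimal sketch of that flow for the Greeter template's NewGreeting event, using Mocha and Node's assert (the helper names follow the generated TestHelpers module; treat the exact entity fields as illustrative):

const assert = require("assert");
const { TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;

it("creates a User entity when NewGreeting is emitted", async () => {
  // 1. Initialize the mock database (empty in this case)
  const mockDb = MockDb.createMockDb();

  // 2. Create a mock event with test parameters
  const userAddress = Addresses.defaultAddress;
  const event = Greeter.NewGreeting.createMockEvent({
    user: userAddress,
    greeting: "gm",
  });

  // 3. Process the mock event through the handler
  const updatedMockDb = await Greeter.NewGreeting.processEvent({
    event,
    mockDb,
  });

  // 4. Assert that the resulting database state matches expectations
  const user = updatedMockDb.entities.User.get(userAddress);
  assert.equal(user.latestGreeting, "gm");
  assert.equal(user.numberOfGreetings, 1);
});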

Assertions​

The testing library works with any JavaScript assertion library. In the examples, we use Node.js's built-in assert module, but you can also use popular alternatives like chai or expect.

Common assertion patterns include:

  • assert.deepEqual(expectedEntity, actualEntity) - Check that entire entities match
  • assert.equal(expectedValue, actualEntity.property) - Verify specific property values
  • assert.ok(updatedMockDb.entities.Entity.get(id)) - Ensure an entity exists

Troubleshooting​

If you encounter issues with your tests, check the following:

Environment and Setup​

  1. Verify your Envio version: The testing library is available in versions v0.0.26 and above

    pnpm envio -v
  2. Ensure you've generated testing code: Always run codegen after updating your schema or config

    pnpm codegen
  3. Check your imports: Make sure you're importing the correct files



// TypeScript / JavaScript
const assert = require("assert");
const { UserEntity, TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;

// ReScript
open RescriptMocha
open Mocha
open Belt

Common Issues and Solutions​

  • "Cannot read properties of undefined": This usually means an entity wasn't found in the database. Verify your IDs match exactly and that the entity exists before accessing it.

  • "Type mismatch": Ensure that your entity structure matches what's defined in your schema. Type issues are common when working with numeric types (like BigInt vs number).

  • ReScript specific setup: If using ReScript, remember to update your rescript.json file:

    {
      "sources": [
        { "dir": "src", "subdirs": true },
        { "dir": "test", "subdirs": true }
      ],
      "bs-dependencies": ["rescript-mocha"]
    }
  • Debug database state: If you're having trouble with assertions, add a debug log to see the exact state of your entities:

    console.log(
      JSON.stringify(updatedMockDb.entities.User.get(userAddress), null, 2)
    );

If you encounter any issues or have questions, please reach out to us on Discord.


Navigating Hasura

File: Guides/navigating-hasura.md

This page is only relevant when testing on a local machine or using a self-hosted version of Envio that uses Hasura.

Introduction​

Hasura is a GraphQL engine that provides a web interface for interacting with your indexed blockchain data. When running HyperIndex locally, Hasura serves as your primary tool for:

  • Querying indexed data via GraphQL
  • Visualizing database tables and relationships
  • Testing API endpoints before integration with your frontend
  • Monitoring the indexing process

This guide explains how to navigate the Hasura dashboard to effectively work with your indexed data.

Accessing Hasura​

When running HyperIndex locally, Hasura is automatically available at:

http://localhost:8080

You can access this URL in any web browser to open the Hasura console.

note

When prompted for authentication, use the password: testing

Key Dashboard Areas​

The Hasura dashboard has several tabs, but we'll focus on the two most important ones for HyperIndex developers:

API Tab​

The API tab lets you execute GraphQL queries and mutations on indexed data. It serves as a GraphQL playground for testing your API calls.

Features​

  • Explorer Panel: The left panel shows all available entities defined in your schema.graphql file
  • Query Builder: The center area is where you write and execute GraphQL queries
  • Results Panel: The right panel displays query results in JSON format

Available Entities​

By default, you'll see:

  • All entities defined in your schema.graphql file
  • dynamic_contracts (for dynamically added contracts)
  • raw_events table (Note: This table is no longer populated by default to improve performance. To enable storage of raw events, add raw_events: true to your config.yaml file as described in the Raw Events Storage section)

Example Query​

Try a simple query to test your indexer:

query MyQuery {
  User(limit: 5) {
    id
    latestGreeting
    numberOfGreetings
  }
}

Click the "Play" button to execute the query and see the results.

For more advanced GraphQL query options, see Hasura's quickstart guide.

Data Tab​

The Data tab provides direct access to your database tables and relationships, allowing you to view the actual indexed data.

Features​

  • Schema Browser: View all tables in the database (left panel)
  • Table Data: Examine and browse data within each table
  • Relationship Viewer: See how different entities are connected

Working with Tables​

  1. Select any table from the "public" schema to view its contents
  2. Use the "Browse Rows" tab to see all data in that table
  3. Check the "Insert Row" tab to manually add data (useful for testing)
  4. View the "Modify" tab to see the table structure

Verifying Indexed Data​

To confirm your indexer is working correctly:

  1. Check entity tables to ensure they contain the expected data
  2. Look at the db_write_timestamp column values to confirm when data was last updated
  3. Newer timestamps indicate fresh data; older timestamps might indicate stale data from previous runs

Common Tasks​

Checking Indexing Status​

To verify your indexer is actively processing new blocks:

  1. Go to the Data tab
  2. Select any entity table
  3. Check the latest db_write_timestamp values
  4. Monitor these values over time to ensure they're updating

(Note: the TUI is also an easy way to monitor this.)
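Alternatively, you can check the freshest write from the API tab with a query along these lines (assuming the User entity from the earlier example; db_write_timestamp is a column on every entity table and is exposed through Hasura by default):

query LatestWrite {
  User(order_by: { db_write_timestamp: desc }, limit: 1) {
    id
    db_write_timestamp
  }
}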

Troubleshooting Missing Data​

If expected data isn't appearing:

  1. Check if you've enabled raw events storage (raw_events: true in config.yaml) and then examine the raw_events table to confirm events were captured
  2. Verify your event handlers are correctly processing these events
  3. Examine your GraphQL queries to ensure they match your schema structure
  4. Check console logs for any processing errors

Resetting Indexed Data​

When testing, you may need to reset your database:

  1. Stop your indexer
  2. Reset your database (refer to the development guide for commands)
  3. Restart your indexer to begin processing from the configured start block

Best Practices​

  • Regular Verification: Periodically check both the API and Data tabs to ensure your indexer is functioning correctly
  • Query Testing: Test complex queries in the API tab before implementing them in your application
  • Schema Validation: Use the Data tab to verify that relationships between entities are correctly established
  • Performance Monitoring: Watch for tables that grow unusually large, which might indicate inefficient indexing

Environment Variables​

File: Guides/environment-variables.md

Environment variables are a crucial part of configuring your Envio indexer. They allow you to manage sensitive information and configuration settings without hardcoding them in your codebase.

Naming Convention​

All environment variables used by Envio must be prefixed with ENVIO_. This naming convention:

  • Prevents conflicts with other environment variables
  • Makes it clear which variables are used by the Envio indexer
  • Ensures consistency across different environments

Example Environment Variables​

Here are some commonly used environment variables:

# Blockchain RPC URL
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key

# Starting block number for indexing
ENVIO_START_BLOCK=12345678

# Coingecko API key
ENVIO_COINGECKO_API_KEY=api-key

Setting Environment Variables​

Local Development​

For local development, you can set environment variables in several ways:

  1. Using a .env file in your project root:

     # .env
     ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
     ENVIO_START_BLOCK=12345678

  2. Directly in your terminal:

     export ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key

Hosted Service​

When using the Envio Hosted Service, you can configure environment variables through the Envio platform's dashboard. Remember that all variables must still be prefixed with ENVIO_.

For more information about environment variables in the hosted service, see the Hosted Service documentation.

Configuration File​

For use of environment variables in your configuration file, read the docs here: Configuration File.

Best Practices​

  1. Never commit sensitive values: Always use environment variables for sensitive information like API keys and database credentials
  2. Never commit or use private keys: Never commit or use private keys in your codebase
  3. Use descriptive names: Make your environment variable names clear and descriptive
  4. Document your variables: Keep a list of required environment variables in your project's README
  5. Use different values: Use different environment variables for development, staging, and production environments
  6. Validate required variables: Check that all required environment variables are set before starting your indexer
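For the last point, a small startup check along these lines can catch missing configuration early (a sketch; the variable names and file name are examples):

// validateEnv.ts
const required = ["ENVIO_RPC_URL", "ENVIO_COINGECKO_API_KEY"];

for (const name of required) {
  // Fail fast with a clear message if a required variable is missing
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}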

Troubleshooting​

If you encounter issues with environment variables:

  1. Verify that all required variables are set
  2. Check that variables are prefixed with ENVIO_
  3. Ensure there are no typos in variable names
  4. Confirm that the values are correctly formatted

For more help, see our Troubleshooting Guide.


Uniswap V4 Multi-chain Indexer​

File: Examples/example-uniswap-v4.md

The following indexer example is a reference implementation and can serve as a starting point for applications with similar logic.

This official Uniswap V4 indexer is a comprehensive implementation for the Uniswap V4 protocol using Envio HyperIndex. This is the same indexer that powers the v4.xyz website, providing real-time data for the Uniswap V4 interface.

Key Features​

  • Multi-chain Support: Indexes Uniswap V4 deployments across 10 different blockchain networks in real-time
  • Complete Pool Metrics: Tracks pool statistics including volume, TVL, fees, and other critical metrics
  • Swap Analysis: Monitors swap events and liquidity changes with high precision
  • Hook Integration: In-progress support for Uniswap V4 hooks and their events
  • Production Ready: Powers the official v4.xyz interface with production-grade reliability
  • Ultra-Fast Syncing: Processes massive amounts of blockchain data significantly faster than alternative indexing solutions, reducing sync times from days to minutes


Technical Overview​

This indexer is built using TypeScript and provides a unified GraphQL API for accessing Uniswap V4 data across all supported networks. The architecture is designed to handle high throughput and maintain consistency across different blockchain networks.

Performance Advantages​

The Envio-powered Uniswap V4 indexer offers extraordinary performance benefits:

  • 10-100x Faster Sync Times: Leveraging Envio's HyperSync technology, this indexer can process historical blockchain data orders of magnitude faster than traditional solutions
  • Real-time Updates: Maintains low latency for new blocks while efficiently managing historical data

Use Cases​

  • Power analytics dashboards and trading interfaces
  • Monitor DeFi positions and protocol health
  • Track historical performance of Uniswap V4 pools
  • Build custom notifications and alerts
  • Analyze hook interactions and their impact

Getting Started​

To use this indexer, you can:

  1. Clone the repository
  2. Follow the installation instructions in the README
  3. Run the indexer locally or deploy it to a production environment
  4. Access indexed data through the GraphQL API

Contribution​

The Uniswap V4 indexer is actively maintained and welcomes contributions from the community. If you'd like to contribute or report issues, please visit the GitHub repository.

note

This is an official reference implementation that powers the v4.xyz website. While extensively tested in production, remember to validate the data for your specific use case. The indexer is continuously updated to support the latest Uniswap V4 features and optimizations.


Sablier Protocol Indexers​

File: Examples/example-sablier.md

The following indexers serve as exceptional reference implementations for the Sablier protocol, showcasing professional development practices and efficient multi-chain data processing.

Overview​

Sablier is a token streaming protocol that enables real-time finance on the blockchain, allowing tokens to be streamed continuously over time. These official Sablier indexers track streaming activity across 18 different EVM-compatible chains, providing comprehensive data through a unified GraphQL API.

Professional Indexer Suite​

Sablier maintains three specialized indexers, each targeting a specific part of their protocol:

1. Lockup Indexer​

Tracks the core Sablier lockup contracts, which handle the streaming of tokens with fixed durations and amounts. This indexer provides data about stream creation, cancellation, and withdrawal events. Used primarily for the vesting functionality of Sablier.

2. Flow Indexer​

Monitors Sablier's advanced streaming functionality, allowing for dynamic flow rates and more complex streaming scenarios. This indexer captures stream modifications, batch operations, and other flow-specific events. Powers the payments side of the Sablier application.

3. Merkle Indexer​

Tracks Sablier's Merkle distribution system, which enables efficient batch stream creation using cryptographic proofs. This indexer provides data about batch creations, claims, and related activities. Used for both Airstreams and Instant Airdrops functionality.

Key Features​

  • Comprehensive Multi-chain Support: Indexes data across 18 different EVM chains
  • Professionally Maintained: Used in production by the Sablier team and their partners
  • Extensive Test Coverage: Includes comprehensive testing to ensure data accuracy
  • Optimized Performance: Implements efficient data processing techniques
  • Well-Documented: Clear code structure with extensive comments
  • Backward Compatibility: Carefully manages schema evolution and contract upgrades
  • Cross-chain Architecture: Envio promotes efficient cross-chain indexing where all networks share the same indexer endpoint

Best Practices Showcase​

These indexers demonstrate several development best practices:

  • Modular Code Structure: Well-organized code with clear separation of concerns
  • Consistent Naming Conventions: Professional and consistent naming throughout
  • Efficient Event Handling: Optimized processing of blockchain events
  • Comprehensive Entity Relationships: Well-designed data model with proper relationships
  • Thorough Input Validation: Robust error handling and input validation
  • Detailed Changelogs: Documentation of breaking changes and migrations
  • Handler/Loader Pattern: Envio indexers use an optimized pattern with loaders to pre-fetch entities and handlers to process them

Getting Started​

To use these indexers as a reference for your own development:

  1. Clone the specific repository based on your needs:
    • Lockup Indexer
    • Flow Indexer
    • Merkle Indexer
  2. Review the file structure and implementation patterns
  3. Examine the event handlers for efficient data processing techniques
  4. Study the schema design for effective entity modeling

For complete API documentation and usage examples, see:

  • Sablier API Overview
  • Implementation Caveats
note

These are official indexers maintained by the Sablier team and represent production-quality implementations. They serve as excellent examples of professional indexer development and are regularly updated to support the latest protocol features.


Envio Hosted Service​

File: Hosted_Service/hosted-service.md

Envio offers a fully managed hosting solution for your indexers, providing all the infrastructure, scaling, and monitoring needed to run production-grade indexers without operational overhead.

Key Features​

  • Git-based Deployments: Similar to Vercel, deploy your indexer by simply pushing to a designated deployment branch
  • Zero Infrastructure Management: We handle all the servers, databases, and scaling for you
  • Version Management: Switch between different deployed versions of your indexer with one click
  • Built-in Monitoring: Track logs and sync status
  • Alerting: Get email alerts when indexing errors occur
  • GraphQL API: Access your indexed data through a performant GraphQL endpoint
  • Multi-chain Support: Deploy indexers that track multiple networks from a single codebase

Deployment Model​

The Envio Hosted Service connects directly to your GitHub repository:

  1. Connect your GitHub repository to the Envio platform
  2. Configure your deployment settings (branch, config file location, etc.)
  3. Push changes to your deployment branch to trigger automatic deployments
  4. View deployment logs and status in real-time
  5. Switch between versions or rollback if needed

You can view and manage your hosted indexers in the Envio Explorer.

Deployment Options​

Envio provides flexibility in how you deploy and host your indexers:

  • Fully Managed Hosted Service: Let Envio handle everything (recommended for most users)
  • Self-Hosting: Run your indexer on your own infrastructure with our Docker container
Info: For self-hosting information and instructions, see our Self-Hosting Guide. For a complete list of CLI commands to control your indexer, see the CLI Commands documentation.


Deploying Your Indexer​

File: Hosted_Service/hosted-service-deployment.md

The Envio Hosted Service provides a seamless git-based deployment workflow, similar to modern platforms like Vercel. This enables you to easily deploy, update, and manage your indexers through your normal development workflow.

Initial Setup​

  1. Log in with GitHub: Visit the Envio App and authenticate with your GitHub account
  2. Select an Organization: Choose your personal account or any organization you have access to !Select organisation
  3. Install the Envio Deployments GitHub App: Grant access to the repositories you want to deploy !Install GitHub App

Configuring Your Indexer​

  1. Add a New Indexer: Click "Add Indexer" in the dashboard !Add indexer
  2. Connect to Repository: Select the repository containing your indexer code !Connect indexer
  3. Configure Deployment Settings:
    • Specify the config file location
    • Set the root directory (important for monorepos)
    • Choose the deployment branch !Configure indexer !Add org
Tip: Multiple Indexers Per Repository

You can deploy multiple indexers from a single repository by configuring them with different:

  • Config file paths
  • Root directories
  • Deployment branches
Warning: Monorepo Configuration

If you're working in a monorepo, ensure all your imports are contained within your indexer directory to avoid deployment issues.

Deployment Workflow​

  1. Create a Deployment Branch: Set up the branch you specified during configuration !Create branch

  2. Deploy via Git: Push your code to the deployment branch !Push code

  3. Monitor Deployment: Track the progress of your deployment in the Envio dashboard

  4. Version Management: Once deployed, you can:

    • View detailed logs
    • Switch between different deployed versions
    • Rollback to previous versions if needed

Continuous Deployment Best Practices​

For a robust deployment workflow, we recommend:

  1. Protected Branches: Set up branch protection rules for your deployment branch
  2. Pull Request Workflow: Instead of pushing directly to the deployment branch, use pull requests from feature branches
  3. CI Integration: Add tests to your CI pipeline to validate indexer functionality before merging to the deployment branch

Version Management​

Each deployment creates a new version of your indexer that you can access through the dashboard. You can:

  • Compare different versions
  • Switch the active version with one click
  • Maintain multiple versions for testing or staging purposes

Deployment Limits​

Limits vary depending on the plan you select. In general, development plans allow:

  • 3 indexers per organization
  • 3 deployments per indexer

Need to free up space? You can delete old deployments through the Envio dashboard.


Hosted Service Billing​

File: Hosted_Service/hosted-service-billing.mdx

Pricing & Billing

Envio offers flexible pricing options to meet the needs of projects at different stages of development.

Pricing Structure​

We have both development tiers and production tiers to suit a variety of users:

  • Development Tier: Our development tier is completely free and designed to be user-friendly, making it easy to get started with Envio without any cost barriers.

  • Production Tiers: For projects ready for production, we offer scalable options that grow with your needs.

Info: For detailed pricing information and plan comparisons, please visit the Envio Pricing Page.

Self-Hosting Option​

For users who prefer to manage their own infrastructure, we also support self-hosting your indexer. For your convenience, there is a Dockerfile in the root of the generated folder.

For more information on self-hosting, see our Self-Hosting Guide.

Tip: Not sure which option is right for your project? Book a call with our team to discuss your specific needs.


Self-Hosting Your Envio Indexer​

File: Hosted_Service/self-hosting.md

Info: This documentation page is actively being improved. Check back regularly for updates and additional information.

While Envio offers a fully managed Hosted Service, you may prefer to run your indexer on your own infrastructure. This guide covers everything you need to know about self-hosting Envio indexers.

Note: We deeply appreciate users who choose our hosted service, as it directly supports our team and helps us continue developing and improving Envio's technology. If your use case allows for it, please consider the hosted option.

Why Self-Host?​

Self-hosting gives you:

  • Complete Control: Manage your own infrastructure and configurations
  • Data Sovereignty: Keep all indexed data within your own systems

Prerequisites​

Before self-hosting, ensure you have:

  • Docker installed on your host machine
  • Sufficient storage for blockchain data and the indexer database
  • Adequate CPU and memory resources (requirements vary based on chains and indexing complexity)
  • Required HyperSync and/or RPC endpoints

Getting Started​

In general, if you want to self-host, you will likely use a Docker setup. For a working example, check out the local-docker-example repository. It contains a minimal Dockerfile and docker-compose.yaml that configure the Envio indexer together with PostgreSQL and Hasura.

Configuration Explained​

The compose file in that repository sets up three main services:

  1. PostgreSQL Database (envio-postgres): Stores your indexed data
  2. Hasura GraphQL Engine (graphql-engine): Provides the GraphQL API for querying your data
  3. Envio Indexer (envio-indexer): The core indexing service that processes blockchain data

Environment Variables​

The configuration uses environment variables with sensible defaults. For production, you should customize:

  • Database credentials (ENVIO_POSTGRES_PASSWORD, ENVIO_PG_USER, etc.)
  • Hasura admin secret (HASURA_GRAPHQL_ADMIN_SECRET)
  • Resource limits based on your workload requirements
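
As an illustration only, overriding these defaults could look like the following .env entries (variable names as listed above; the values are placeholders you should replace):

ENVIO_PG_USER=envio
ENVIO_POSTGRES_PASSWORD=replace-with-a-strong-password
HASURA_GRAPHQL_ADMIN_SECRET=replace-with-a-strong-secret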

Getting Help​

If you encounter issues with self-hosting:

  • Check the Envio GitHub repository for known issues
  • Join the Envio Discord community for community support
Tip: For most production use cases, we recommend using the Envio Hosted Service to benefit from automatic scaling, monitoring, and maintenance.


Indexing Optimism Bridge Deposits​

File: Tutorials/tutorial-op-bridge-deposits.md

Introduction​

This tutorial will guide you through indexing Optimism Standard Bridge deposits in under 5 minutes using Envio HyperIndex's no-code contract import feature.

The Optimism Standard Bridge enables the movement of ETH and ERC-20 tokens between Ethereum and Optimism. We'll index bridge deposit events by extracting the DepositFinalized logs emitted by the bridge contracts on both networks.

Prerequisites​

Before starting, ensure you have the following installed:

  • Node.js (v18 or newer recommended)
  • pnpm (v8 or newer)
  • Docker Desktop (required to run the Envio indexer locally)

Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.

Step 1: Initialize Your Indexer​

  1. Open your terminal in an empty directory and run:
pnpx envio init
  2. Name your indexer (we'll use "optimism-bridge-indexer" in this example):

  3. Choose your preferred language (TypeScript, JavaScript, or ReScript):

Step 2: Import the Optimism Bridge Contract​

  1. Select Contract Import → Block Explorer → Optimism

  2. Enter the Optimism bridge contract address:

    0x4200000000000000000000000000000000000010

    View on Optimistic Etherscan

  3. Select the DepositFinalized event:

    • Navigate using arrow keys (↑↓)
    • Press spacebar to select the event

Tip: You can select multiple events to index simultaneously.

Step 3: Add the Ethereum Mainnet Bridge Contract​

  1. When prompted, select Add a new contract

  2. Choose Block Explorer → Ethereum Mainnet

  3. Enter the Ethereum Mainnet gateway contract address:

    0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1

    View on Etherscan

  4. Select the ETHDepositInitiated event

  5. When finished adding contracts, select I'm finished

Step 4: Start Your Indexer​

  1. If you have any running indexers, stop them first:
pnpm envio stop
  2. Start your new indexer:
pnpm dev

This command:

  • Starts the required Docker containers
  • Sets up your database
  • Launches the indexing process
  • Opens the Hasura GraphQL interface

Step 5: Understanding the Generated Code​

Let's examine the key files that Envio generated:

1. config.yaml​

This configuration file defines:

  • Networks to index (Optimism and Ethereum Mainnet)
  • Starting blocks for each network
  • Contract addresses and ABIs
  • Events to track

2. schema.graphql​

This schema defines the data structures for our selected events:

  • Entity types based on event data
  • Field types matching the event parameters
  • Relationships between entities (if applicable)

3. src/EventHandlers.ts​

This file contains the business logic for processing events:

  • Functions that execute when events are detected
  • Data transformation and storage logic
  • Entity creation and relationship management
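
To make this concrete, a handler for DepositFinalized typically looks something like the sketch below. The L2StandardBridge contract handle and the entity name are illustrative assumptions; your generated file will use whatever names the contract import step produced:

import { L2StandardBridge } from "generated";

L2StandardBridge.DepositFinalized.handler(async ({ event, context }) => {
  // Store one entity per DepositFinalized log
  context.DepositFinalized.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    l1Token: event.params.l1Token,
    l2Token: event.params.l2Token,
    from: event.params.from,
    to: event.params.to,
    amount: event.params.amount,
    blockTimestamp: event.block.timestamp,
  });
});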

Step 6: Exploring Your Indexed Data​

Now you can interact with your indexed data:

Accessing Hasura​

  1. Open Hasura at http://localhost:8080
  2. When prompted, enter the admin password: testing

Monitoring Indexing Progress​

  1. Click the Data tab in the top navigation
  2. Find the _events_sync_state table to check indexing progress
  3. Observe which blocks are currently being processed

Note: Thanks to Envio's HyperSync, indexing happens significantly faster than with standard RPC methods.

Querying Indexed Events​

  1. Click the API tab
  2. Construct a GraphQL query to explore your data

Here's an example query to fetch the 10 largest bridge deposits:

query LargestDeposits {
DepositFinalized(limit: 10, order_by: { amount: desc }) {
l1Token
l2Token
from
to
amount
blockTimestamp
}
}
  3. Click the Play button to execute your query

Conclusion​

Congratulations! You've successfully created an indexer for Optimism Bridge deposits across both Ethereum and Optimism networks.

What You've Learned​

  • How to initialize a multi-network indexer using Envio
  • How to import contracts from different blockchains
  • How to query and explore indexed blockchain data

Next Steps​

  • Try customizing the event handlers to add additional logic
  • Create relationships between events on different networks
  • Deploy your indexer to Envio's hosted service

For more tutorials and advanced features, check out our documentation or watch our video walkthroughs on YouTube.


Indexing ERC20 Token Transfers on Base​

File: Tutorials/tutorial-erc20-token-transfers.md

Introduction​

In this tutorial, you'll learn how to index ERC20 token transfers on the Base network using Envio HyperIndex. By leveraging the no-code contract import feature, you'll be able to quickly analyze USDC transfer activity, including identifying the largest transfers.

We'll create an indexer that tracks all USDC token transfers on Base by extracting the Transfer events emitted by the USDC contract. The entire process takes less than 5 minutes to set up and start querying data.

Prerequisites​

Before starting, ensure you have the following installed:

  • Node.js (v18 or newer recommended)
  • pnpm (v8 or newer)
  • Docker Desktop (required to run the Envio indexer locally)

Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.

Step 1: Initialize Your Indexer​

  1. Open your terminal in an empty directory and run:
pnpx envio init
  2. Name your indexer (we'll use "usdc-base-transfer-indexer" in this example):

  3. Choose your preferred language (TypeScript, JavaScript, or ReScript):

Step 2: Import the USDC Token Contract​

  1. Select Contract Import → Block Explorer → Base

  2. Enter the USDC token contract address on Base:

    0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913

    View on BaseScan

  3. Select the Transfer event:

    • Navigate using arrow keys (↑↓)
    • Press spacebar to select the event

Tip: You can select multiple events to index simultaneously if needed.

  4. When finished adding contracts, select I'm finished

Step 3: Start Your Indexer​

  1. If you have any running indexers, stop them first:
pnpm envio stop

Note: You can skip this step if this is your first time running an indexer.

  2. Start your new indexer:
pnpm dev

This command:

  • Starts the required Docker containers
  • Sets up your database
  • Launches the indexing process
  • Opens the Hasura GraphQL interface

Step 4: Understanding the Generated Code​

Let's examine the key files that Envio generated:

1. config.yaml​

This configuration file defines:

  • Network to index (Base)
  • Starting block for indexing
  • Contract address and ABI details
  • Events to track (Transfer)

2. schema.graphql​

This schema defines the data structures for the Transfer event:

  • Entity types based on event data
  • Field types for sender, receiver, and amount
  • Any relationships between entities

3. src/EventHandlers.*​

This file contains the business logic for processing events:

  • Functions that execute when Transfer events are detected
  • Data transformation and storage logic
  • Entity creation and relationship management
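
As a rough sketch, the generated handler has roughly the following shape. The FiatTokenV2_2 contract handle and entity name below are assumptions based on the query shown in the next step; your generated file may differ:

import { FiatTokenV2_2 } from "generated";

FiatTokenV2_2.Transfer.handler(async ({ event, context }) => {
  // Store one entity per Transfer log
  context.FiatTokenV2_2_Transfer.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    from: event.params.from,
    to: event.params.to,
    value: event.params.value,
    blockTimestamp: event.block.timestamp,
  });
});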

Step 5: Exploring Your Indexed Data​

Now you can interact with your indexed USDC transfer data:

Accessing Hasura​

  1. Open Hasura at http://localhost:8080
  2. When prompted, enter the admin password: testing

Monitoring Indexing Progress​

  1. Click the Data tab in the top navigation
  2. Find the _events_sync_state table to check indexing progress
  3. Observe which blocks are currently being processed

Note: Thanks to Envio's HyperSync, you can index millions of USDC transfers in just minutes rather than hours or days with traditional methods.

Querying Indexed Events​

  1. Click the API tab
  2. Construct a GraphQL query to explore your data

Here's an example query to fetch the 10 largest USDC transfers:

query LargestTransfers {
FiatTokenV2_2_Transfer(limit: 10, order_by: { value: desc }) {
from
to
value
blockTimestamp
}
}
  3. Click the Play button to execute your query

Conclusion​

Congratulations! You've successfully created an indexer for USDC token transfers on Base. In just a few minutes, you've indexed over 3.6 million transfer events and can now query this data in real-time.

What You've Learned​

  • How to initialize an indexer using Envio's contract import feature
  • How to index ERC20 token transfers on the Base network
  • How to query and analyze token transfer data using GraphQL

Next Steps​

  • Try customizing the event handlers to add additional logic
  • Create aggregated statistics about token transfers
  • Add more tokens or events to your indexer
  • Deploy your indexer to Envio's hosted service

For more tutorials and advanced features, check out our documentation or watch our video walkthrough on YouTube.


Indexing Sway Farm on the Fuel Network​

File: Tutorials/tutorial-indexing-fuel.md

Until recently, HyperIndex was only available on EVM-compatible blockchains; we have now extended support to the Fuel Network.

Indexers are vital to the success of any dApp. In this tutorial, we will create an Envio indexer for the Fuel dApp Sway Farm step by step.

Sway Farm is a simple farming game, and for the sake of a real-world example, let's create an indexer for a leaderboard of all farmers 🧑‍🌾

About Fuel​

Fuel is an operating system purpose-built for Ethereum rollups. Fuel's unique architecture allows rollups to solve for PSI (parallelization, state minimized execution, interoperability). Powered by the FuelVM, Fuel aims to expand Ethereum's capability set without compromising security or decentralization.

Website | X | Discord

Prerequisites​

Environment tooling​

  • Node.js (v18 or newer recommended)
  • pnpm (v8 or newer)
  • Docker Desktop (required to run the Envio indexer locally)

Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.

Initialize the project​

Now that you have installed the prerequisite packages, let's begin the practical steps of setting up the indexer.

Open your terminal in an empty directory and initialize a new indexer by running the command:

pnpx envio init

In the following prompt, choose the directory where you want to set up your project. The default is the current directory, but in the tutorial, I'll use the indexer name:

? Specify a folder name (ENTER to skip): sway-farm-indexer

Then, choose the language you'd like to use for the event handlers. TypeScript is the most popular choice, so we'll stick with it:

? Which language would you like to use?
JavaScript
> TypeScript
ReScript
[↑↓ to move, enter to select, type to filter]

Next, we have the new prompt for a blockchain ecosystem. Previously, Envio supported only EVM; now you can choose between Evm and Fuel, with more VMs to come in the future:

? Choose blockchain ecosystem
Evm
> Fuel
[↑↓ to move, enter to select, type to filter]

In the following prompt, you can choose an initialization option. There's a Greeter template for Fuel, which is an excellent way to learn more about HyperIndex. But since we have an existing contract, the Contract Import option is the best way to create an indexer:

? Choose an initialization option
Template
> Contract Import
[↑↓ to move, enter to select, type to filter]

A separate Tutorial page provides more details about the Greeter template.

Next, it will ask for an ABI file. You can find it in the ./out/debug directory after building your Sway contract with forc build:

? What is the path to your json abi file? ./sway-farm/contract/out/debug/contract-abi.json

After the ABI file is provided, Envio parses all possible events you can use for indexing:

? Which events would you like to index?
> [x] NewPlayer
[x] PlantSeed
[x] SellItem
[x] InvalidError
[x] Harvest
[x] BuySeeds
[x] LevelUp
[↑↓ to move, space to select one, → to all, ← to none, type to filter]

Let's select the events we want to index. Looking at the contract code, a leaderboard needs only the events that update player information, so I left just NewPlayer, LevelUp, and SellItem selected in the list. In a real-world indexer we would likely index more events, but this is enough for the tutorial.

? Which events would you like to index?
> [x] NewPlayer
[ ] PlantSeed
[x] SellItem
[ ] InvalidError
[ ] Harvest
[ ] BuySeeds
[x] LevelUp
[↑↓ to move, space to select one, → to all, ← to none, type to filter]

📖 For the tutorial, we only need to index LOG_DATA receipts, but you can also index Mint, Burn, Transfer, and Call receipts. Read more about Supported Event Types.

Just a few simple questions left. Let's call our contract SwayFarm:

? What is the name of this contract? SwayFarm

Set an address for the deployed contract:

? What is the address of the contract? 0xf5b08689ada97df7fd2fbd67bee7dea6d219f117c1dc9345245da16fe4e99111
[Use the proxy address if your abi is a proxy implementation]

Finish the initialization process:

? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new contract (with a different ABI)
[Current contract: SwayFarm, on network: Fuel]

If you see the following line, it means we are already halfway through 🙌

Please run `cd sway-farm-indexer` to run the rest of the envio commands

Let's open the indexer in an IDE and start adjusting it for our farm 🍅

Walk through initialized indexer​

At this point, we should already have a working indexer. You can start it by running pnpm dev, which we cover in more detail later in the tutorial.

Everything is configured by modifying the 3 files below. Let's walk through each of them.

  • config.yaml Guide
  • schema.graphql Guide
  • EventHandlers.* Guide

(* depending on the language chosen for the indexer)

config.yaml​

The config.yaml outlines the specifications for the indexer, including details such as network and contract specifications and the event information to be used in the indexing process.

name: sway-farm-indexer
ecosystem: fuel
networks:
  - id: 0
    start_block: 0
    contracts:
      - name: SwayFarm
        address:
          - 0xf5b08689ada97df7fd2fbd67bee7dea6d219f117c1dc9345245da16fe4e99111
        abi_file_path: abis/swayfarm-abi.json
        handler: src/EventHandlers.ts
        events:
          - name: SellItem
            logId: "11192939610819626128"
          - name: LevelUp
            logId: "9956391856148830557"
          - name: NewPlayer
            logId: "169340015036328252"

In the tutorial, we don't need to adjust it in any way. But later you can modify the file and add more events for indexing.

As a nice-to-have, you can use a Sway struct name without specifying a logId, like this:

- name: SellItem
- name: LevelUp
- name: NewPlayer

schema.graphql​

The schema.graphql file serves as a representation of your application's data model. It defines entity types that directly correspond to database tables, and the event handlers you create are responsible for creating and updating records within those tables. Additionally, the GraphQL API is automatically generated based on the entity types specified in the schema.graphql file, to allow access to the indexed data.

🧠 A separate Guide page provides more details about the schema.graphql file.

For the leaderboard, we need only one entity representing the player. Let's create it:

type Player {
id: ID!
farmingSkill: BigInt!
totalValueSold: BigInt!
}

We will use the user address as an ID. The fields farmingSkill and totalValueSold are u64 in Sway, so to safely map them to JavaScript values, we'll use BigInt.

EventHandlers.ts​

The event handlers generated by contract import are quite simple and only add an entity to a DB when a related event is indexed.

/*
* Please refer to https://docs.envio.dev for a thorough guide on all Envio indexer features
*/

SwayFarmContract.SellItem.handler(async ({ event, context }) => {
const entity: SwayFarm_SellItemEntity = {
id: `${event.chainId}_${event.block.height}_${event.logIndex}`,
};

context.SwayFarm_SellItem.set(entity);
});

Let's modify the handlers to update the Player entity instead. But before we start, we need to run pnpm codegen to generate utility code and types for the Player entity we've added.

pnpm codegen

It's time for a little bit of coding. The indexer is very simple; it requires us only to pass event data to an entity.


/**
Registers a handler that processes NewPlayer event
on the SwayFarm contract and stores the players in the DB
*/
SwayFarmContract.NewPlayer.handler(async ({ event, context }) => {
// Set the Player entity in the DB with the initial values
context.Player.set({
// The address in Sway is a union type of user Address and ContractID. Envio supports most of the Sway types, and the address value is decoded as a discriminated union, 100% type-safe
id: event.params.address.payload.bits,
// Initial values taken from the contract logic
farmingSkill: 1n,
totalValueSold: 0n,
});
});

SwayFarmContract.LevelUp.handler(async ({ event, context }) => {
const playerInfo = event.params.player_info;
context.Player.set({
id: event.params.address.payload.bits,
farmingSkill: playerInfo.farming_skill,
totalValueSold: playerInfo.total_value_sold,
});
});

SwayFarmContract.SellItem.handler(async ({ event, context }) => {
const playerInfo = event.params.player_info;
context.Player.set({
id: event.params.address.payload.bits,
farmingSkill: playerInfo.farming_skill,
totalValueSold: playerInfo.total_value_sold,
});
});

Without overengineering, we simply write the player data into the database. What's nice is that whenever your ABI or the entities in schema.graphql change, Envio regenerates the types and surfaces compilation errors.

🧠 You can find the indexer repo created during the tutorial on GitHub.

Starting the Indexer​

Make sure you have Docker open.

The following command will start Docker containers and create the databases for indexed data. Make sure to re-run pnpm dev if you've made changes.

pnpm dev

Nice, we indexed 1,721,352 blocks containing 58,784 events in 10 seconds, and they continue coming in.

View the indexed results​

Let's check indexed players on the local Hasura server.

open http://localhost:8080

The Hasura admin-secret / password is testing, and the tables can be viewed in the data tab or queried from the playground.

Now, we can easily get the top 5 players, the number of inactive and active players, and the average sold value. What's left is a nice UI for the Sway Farm leaderboard, but that's not the tutorial's topic.

🧠 A separate Guide page provides more details about navigating Hasura.

Deploy the indexer onto the hosted service​

Once you have verified that the indexer is working for your contracts, you are ready to deploy it onto our hosted service.

Deploying an indexer onto the hosted service allows you to pull the indexed data into your front-end or back-end application via GraphQL queries.

Navigate to the hosted service to start deploying your indexer and refer to this documentation for more information on deploying your indexer.

What next?​

Once you have successfully finished the tutorial, you are ready to become a blockchain indexing wizard!

Join our Discord channel to make sure you catch all new releases.


Indexing a Greeter Contract​

File: Tutorials/greeter-tutorial.md

Introduction​

This tutorial provides a step-by-step guide to indexing a simple Greeter smart contract deployed on multiple blockchains. You'll learn how to set up and run a multi-chain indexer using Envio's template system.

What is the Greeter Contract?​

The Greeter contract is a straightforward smart contract that allows users to store greeting messages on the blockchain. For this tutorial, we'll be indexing instances of this contract deployed on both Polygon and Linea networks.

What You'll Build​

By the end of this tutorial, you'll have:

  • A functioning multi-chain indexer that tracks greeting events
  • The ability to query these events through a GraphQL endpoint
  • Experience with Envio's core indexing functionality

Prerequisites​

Before starting, ensure you have the following installed:

  • Node.js (v18 or newer recommended)
  • pnpm (v8 or newer)
  • Docker Desktop (required to run the Envio indexer locally)

Note: Docker is specifically required to run your indexer locally. You can skip Docker installation if you plan only to use Envio's hosted service.

Step 1: Initialize Your Project​

First, let's create a new project using Envio's Greeter template:

  1. Open your terminal and run:
pnpx envio init
  2. When prompted for a directory, you can press Enter to use the current directory or specify another path:
? Set the directory: (.) .
  3. Choose your preferred programming language for event handlers:
? Which language would you like to use?
> JavaScript
TypeScript
ReScript
  4. Select the Template initialization option:
? Choose an initialization option
> Template
Contract Import
  5. Choose the Greeter template:
? Which template would you like to use?
> Greeter
Erc20

After completing these steps, Envio will generate all the necessary files for your indexer project.

Step 2: Understanding the Generated Files​

Let's examine the key files that were created:

config.yaml​

This configuration file defines which networks and contracts to index:

# Partial example
envio_node:
  networks:
    - name: polygon
      # ... Polygon network settings
      contracts:
        - name: Greeter
          address: "0x9D02A17dE4E68545d3a58D3a20BbBE0399E05c9c"
          # ... contract settings
    - name: linea
      # ... Linea network settings
      contracts:
        - name: Greeter
          address: "0xdEe21B97AB77a16B4b236F952e586cf8408CF32A"
          # ... contract settings

schema.graphql​

This schema defines the data structures for the indexed events:

type Greeting {
id: ID!
user: String!
greeting: String!
blockNumber: Int!
blockTimestamp: Int!
transactionHash: String!
}

type User {
id: ID!
latestGreeting: String!
numberOfGreetings: Int!
greetings: [String!]!
}

src/EventHandlers.js (or .ts/.res)​

This file contains the logic to process events emitted by the Greeter contract.
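
A simplified sketch of such a handler, written in TypeScript against the Greeting and User entities above, is shown below. The NewGreeting event name and its parameters are assumptions made for this example; check your generated bindings for the actual names:

import { Greeter } from "generated";

Greeter.NewGreeting.handler(async ({ event, context }) => {
  const userId = event.params.user;
  const existingUser = await context.User.get(userId);

  // Record the greeting itself
  context.Greeting.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    user: userId,
    greeting: event.params.greeting,
    blockNumber: event.block.number,
    blockTimestamp: event.block.timestamp,
    transactionHash: event.transaction.hash, // may require transaction field selection in config.yaml
  });

  // Update (or create) the per-user aggregate
  context.User.set({
    id: userId,
    latestGreeting: event.params.greeting,
    numberOfGreetings: (existingUser?.numberOfGreetings ?? 0) + 1,
    greetings: [...(existingUser?.greetings ?? []), event.params.greeting],
  });
});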

Step 3: Start Your Indexer​

Important: Make sure Docker Desktop is running before proceeding.

  1. Start the indexer with:
pnpm dev

This command:

  • Launches Docker containers for the database and Hasura
  • Sets up your local development environment
  • Begins indexing data from the specified contracts
  • Opens a terminal UI to monitor indexing progress

The indexer will retrieve data from both Polygon and Linea blockchains, starting from the blocks specified in your config.yaml file.

Step 4: Interact with the Contracts​

To see your indexer in action, you can write new greetings to the blockchain:

For Polygon:​

  1. Visit the contract on Polygonscan
  2. Connect your wallet
  3. Use the setGreeting function to write a new greeting
  4. Submit the transaction

For Linea:​

  1. Visit the contract on Lineascan
  2. Connect your wallet
  3. Use the setGreeting function to write a new greeting
  4. Submit the transaction

Since this is a multi-chain example, you can interact with both contracts to see how Envio handles data from different blockchains simultaneously.

Step 5: Query the Indexed Data​

Now you can explore the data your indexer has captured:

  1. Open Hasura at http://localhost:8080
  2. When prompted for authentication, use the password: testing
  3. Navigate to the Data tab to browse the database tables
  4. Or use the API tab to write GraphQL queries

Example Query​

Try this query to see the latest greetings:

query GetGreetings {
Greeting(limit: 10, order_by: { blockTimestamp: desc }) {
id
user
greeting
blockNumber
blockTimestamp
transactionHash
}
}

Step 6: Deploy to Production (Optional)​

When you're ready to move from local development to production:

  1. Visit the Envio Hosted Service
  2. Follow the steps to deploy your indexer
  3. Get a production GraphQL endpoint for your application

For detailed deployment instructions, see the Hosted Service documentation.

What You've Learned​

By completing this tutorial, you've learned:

  • How to initialize an Envio project from a template
  • How indexers process data from multiple blockchains
  • How to query indexed data using GraphQL
  • The basic structure of an Envio indexing project

Next Steps​

Now that you've mastered the basics, you can:

  • Try the Contract Import feature to index any deployed contract
  • Customize the event handlers to implement more complex indexing logic
  • Add relationships between entities in your schema
  • Explore the Advanced Querying features
  • Create aggregated statistics from your indexed data

For more tutorials and examples, visit the Envio Documentation or join our Discord community for support.


Getting Price Data in Your Indexer​

File: Tutorials/price-data.md

Introduction​

Many blockchain applications require price data to calculate values such as:

  • Historical token transfer values in USD
  • Total value locked (TVL) in DeFi protocols over time
  • Portfolio valuations at specific points in time

This tutorial explores three different approaches to incorporating price data into your Envio indexer, using a real-world example of tracking ETH deposits into a Uniswap V3 liquidity pool on the Blast blockchain.

TL;DR: The complete code for this tutorial is available in this GitHub repository.

What You'll Learn​

In this tutorial, you'll:

  • Compare three different methods for accessing token price data
  • Analyze the tradeoffs between accuracy, decentralization, and performance
  • Implement a multi-source price feed in an Envio indexer
  • Build a practical example indexing Uniswap V3 liquidity events with price context

Price Data Methods Compared​

There are three primary methods to access price data within your indexer:

| Method | Description | Speed | Accuracy | Decentralization |
| --- | --- | --- | --- | --- |
| Oracles | On-chain price feeds (e.g., API3, Chainlink) | Fast | Medium | Medium |
| DEX Pools | Swap events from decentralized exchanges | Fast | Medium-High | High |
| Off-chain APIs | External services (e.g., CoinGecko) | Slow | High | Low |

Let's explore each method in detail.

Method 1: Using Oracle Price Feeds​

Oracle networks provide on-chain price data through specialized smart contracts. For this tutorial, we'll use API3 price feeds on Blast.

How Oracles Work​

Oracle services like API3 maintain a network of data providers that push price updates to on-chain contracts. These updates typically occur:

  • At regular time intervals
  • When price deviations exceed a predefined threshold (e.g., 1%)
  • When manually triggered by network participants

Finding the Right Oracle Feed​

To locate the ETH/USD price feed using API3 on Blast:

  1. Identify the API3 contract address: 0x709944a48cAf83535e43471680fDA4905FB3920a

  2. Find the data feed ID for ETH/USD:

    • The dAPI name "ETH/USD" as bytes32: 0x4554482f55534400000000000000000000000000000000000000000000000000
    • Using the dapiNameToDataFeedId function, this maps to 0x3efb3990846102448c3ee2e47d22f1e5433cd45fa56901abe7ab3ffa054f70b5
  3. Monitor the UpdatedBeaconSetWithBeacons events with this data feed ID to get price updates
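
If you want to verify this mapping yourself, you can read it straight from the contract, for example with viem (a sketch; the public Blast RPC endpoint is the same one used later in this tutorial):

import { createPublicClient, http, parseAbi, stringToHex } from "viem";
import { blast } from "viem/chains";

const client = createPublicClient({ chain: blast, transport: http("https://rpc.ankr.com/blast") });

(async () => {
  const dataFeedId = await client.readContract({
    address: "0x709944a48cAf83535e43471680fDA4905FB3920a", // API3 Api3ServerV1 on Blast
    abi: parseAbi([
      "function dapiNameToDataFeedId(bytes32 dapiName) view returns (bytes32)",
    ]),
    functionName: "dapiNameToDataFeedId",
    // "ETH/USD" encoded as a right-padded bytes32 value
    args: [stringToHex("ETH/USD", { size: 32 })],
  });
  console.log(dataFeedId); // expected: 0x3efb3990846102448c3ee2e47d22f1e5433cd45fa56901abe7ab3ffa054f70b5
})();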

Oracle Advantages and Limitations​

Advantages:

  • Fast indexing (no external API calls required)
  • Moderate decentralization
  • Generally reliable data

Limitations:

  • Updates only on significant price changes
  • Limited token coverage (mainly high-liquidity pairs)
  • Minor accuracy tradeoffs

Method 2: Using DEX Pool Swap Events​

Decentralized exchanges like Uniswap provide price data through swap events. We'll use the USDB/WETH pool on Blast to derive ETH pricing.

Locating the Right DEX Pool​

First, we need to find the specific Uniswap V3 pool for USDB/WETH:



import { createPublicClient, getContract, http, parseAbi } from "viem";
import { blast } from "viem/chains";

const usdb = "0x4300000000000000000000000000000000000003";
const weth = "0x4300000000000000000000000000000000000004";
const factoryAddress = "0x792edAdE80af5fC680d96a2eD80A44247D2Cf6Fd";
const factoryAbi = parseAbi([
"function getPool( address tokenA, address tokenB, uint24 fee ) external view returns (address pool)",
]);

const providerUrl = "https://rpc.ankr.com/blast";
const poolBips = 3000; // 0.3%. This is measured in hundredths of a bip

const client = createPublicClient({
chain: blast,
transport: http(providerUrl),
});

const factoryContract = getContract({
abi: factoryAbi,
address: factoryAddress,
client: client,
});

(async () => {
const poolAddress = await factoryContract.read.getPool([
usdb,
weth,
poolBips,
]);
console.log(poolAddress);
})();

Tip: You can also manually find the pool address using the getPool function on a block explorer.

Running this code reveals the USDB/WETH pool is at 0xf52B4b69123CbcF07798AE8265642793b2E8990C.

Getting Price Data From Swap Events​

Uniswap V3 emits Swap events containing price information in the sqrtPriceX96 field. To convert this to a price, we'll use a formula in our event handler.
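
Concretely, the conversion boils down to a single expression. This helper (a sketch mirroring the Swap handler shown in Step 4 below) returns the price of ETH expressed in USDB, i.e. token0 per token1:

// price = 2^192 / sqrtPriceX96^2 (integer division, matching the handler's calculation)
const sqrtPriceX96ToPrice = (sqrtPriceX96: bigint): number =>
  Number(2n ** 192n / (sqrtPriceX96 * sqrtPriceX96));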

DEX Advantages and Limitations​

Advantages:

  • Very decentralized
  • High update frequency
  • Wide token coverage

Limitations:

  • Susceptible to price impact and manipulation (especially in low-liquidity pools)
  • Requires extra calculations to derive prices
  • May require multiple pools for cross-pair calculations

Method 3: Using Off-chain APIs​

External price APIs like CoinGecko provide comprehensive token price data but require HTTP calls from your indexer.

Making API Requests​

Here's a simple function to fetch historical ETH prices from CoinGecko:

const COIN_GECKO_API_KEY = process.env.COIN_GECKO_API_KEY;

async function fetchEthPriceFromUnix(
unix: number,
token = "ethereum"
): Promise<number> {
// convert unix to date dd-mm-yyyy
const _date = new Date(unix * 1000);
const date = _date.toISOString().slice(0, 10).split("-").reverse().join("-");
return fetchEthPrice(date.slice(0, 10), token);
}

async function fetchEthPrice(
date: string,
token = "ethereum"
): Promise<number> {
const options = {
method: "GET",
headers: {
accept: "application/json",
"x-cg-demo-api-key": COIN_GECKO_API_KEY,
},
};

return fetch(
`https://api.coingecko.com/api/v3/coins/${token}/history?date=${date}&localization=false`,
options as any
)
.then((res) => res.json())
.then((res: any) => {
const usdPrice = res.market_data.current_price.usd;
console.log(`ETH price on ${date}: ${usdPrice}`);
return usdPrice;
})
.catch((err) => console.error(err));
}

export default fetchEthPriceFromUnix;

Note: The free CoinGecko API only provides daily price data (at 00:00 UTC), not block-by-block precision. For production use, consider a paid API with more granular historical data.

Off-chain API Advantages and Limitations​

Advantages:

  • Highest accuracy (with paid APIs)
  • Most comprehensive token coverage
  • No susceptibility to on-chain manipulation

Limitations:

  • Significantly slows indexing speed due to API calls
  • Centralized data source
  • May require paid subscriptions for full functionality

Building a Multi-Source Price Feed Indexer​

Now let's build an indexer that compares all three methods when tracking Uniswap V3 liquidity pool deposits.

Step 1: Initialize Your Indexer​

Create a new Envio indexer project:

pnpx envio init

Step 2: Configure Your Indexer​

Edit your config.yaml file to track both the API3 oracle and the Uniswap V3 pool:

# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: envio-indexer
rollback_on_reorg: false
networks:
  - id: 81457
    start_block: 11000000
    contracts:
      - name: Api3ServerV1
        address:
          - 0x709944a48cAf83535e43471680fDA4905FB3920a
        handler: src/EventHandlers.ts
        events:
          - event: UpdatedBeaconSetWithBeacons(bytes32 indexed beaconSetId, int224 value, uint32 timestamp)
      - name: UniswapV3Pool
        address:
          - 0xf52B4b69123CbcF07798AE8265642793b2E8990C
        handler: src/EventHandlers.ts
        events:
          - event: Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick)
          - event: Mint(address sender, address indexed owner, int24 indexed tickLower, int24 indexed tickUpper, uint128 amount, uint256 amount0, uint256 amount1)
field_selection:
  transaction_fields:
    - "hash"

Important: The field_selection section is needed to include transaction hashes in your indexed data.

Step 3: Define Your Schema​

Create a schema that captures price data from all three sources:

type OraclePoolPrice {
id: ID!
value: BigInt!
timestamp: BigInt!
block: Int!
}

type UniswapV3PoolPrice {
id: ID!
sqrtPriceX96: BigInt!
timestamp: Int!
block: Int!
}

type EthDeposited {
id: ID!
timestamp: Int!
block: Int!
oraclePrice: Float!
poolPrice: Float!
offChainPrice: Float!
offchainOracleDiff: Float!
depositedPool: Float!
depositedOffchain: Float!
depositedOrcale: Float!
txHash: String!
}

Step 4: Implement Event Handlers​

Create event handlers to process data from all three sources:

import {
Api3ServerV1,
OraclePoolPrice,
UniswapV3Pool,
UniswapV3PoolPrice,
EthDeposited,
} from "generated";

let latestOraclePrice = 0;
let latestPoolPrice = 0;

Api3ServerV1.UpdatedBeaconSetWithBeacons.handler(async ({ event, context }) => {
// Filter out the beacon set for the ETH/USD price
if (
event.params.beaconSetId !=
"0x3efb3990846102448c3ee2e47d22f1e5433cd45fa56901abe7ab3ffa054f70b5"
) {
return;
}

const entity: OraclePoolPrice = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
value: event.params.value,
timestamp: event.params.timestamp,
block: event.block.number,
};

latestOraclePrice = Number(event.params.value) / Number(10 ** 18);

context.OraclePoolPrice.set(entity);
});

UniswapV3Pool.Swap.handler(async ({ event, context }) => {
const entity: UniswapV3PoolPrice = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
sqrtPriceX96: event.params.sqrtPriceX96,
timestamp: event.block.timestamp,
block: event.block.number,
};

latestPoolPrice = Number(
BigInt(2 ** 192) /
(BigInt(event.params.sqrtPriceX96) * BigInt(event.params.sqrtPriceX96))
);

context.UniswapV3PoolPrice.set(entity);
});

UniswapV3Pool.Mint.handler(async ({ event, context }) => {
const offChainPrice = await fetchEthPriceFromUnix(event.block.timestamp);

const ethDepositedUsdPool =
(latestPoolPrice * Number(event.params.amount1)) / 10 ** 18;
const ethDepositedUsdOffchain =
(offChainPrice * Number(event.params.amount1)) / 10 ** 18;
const ethDepositedUsdOrcale =
(latestOraclePrice * Number(event.params.amount1)) / 10 ** 18;

const EthDeposited: EthDeposited = {
id: `${event.chainId}-${event.block.number}-${event.logIndex}`,
timestamp: event.block.timestamp,
block: event.block.number,
oraclePrice: round(latestOraclePrice),
poolPrice: round(latestPoolPrice),
offChainPrice: round(offChainPrice),
depositedPool: round(ethDepositedUsdPool),
depositedOffchain: round(ethDepositedUsdOffchain),
depositedOrcale: round(ethDepositedUsdOrcale),
offchainOracleDiff: round(
((ethDepositedUsdOffchain - ethDepositedUsdOrcale) /
ethDepositedUsdOffchain) *
100
),
txHash: event.transaction.hash,
};

context.EthDeposited.set(EthDeposited);
});

function round(value: number) {
return Math.round(value * 100) / 100;
}

Step 5: Run Your Indexer​

Start your indexer with:

pnpm dev

This will begin indexing data from block 11,000,000 on Blast.

Step 6: Analyze the Results​

After running your indexer, you can query the data in Hasura to compare the three price data sources:

query ComparePrices {
EthDeposited(order_by: { block: desc }, limit: 10) {
block
timestamp
oraclePrice
poolPrice
offChainPrice
depositedPool
depositedOffchain
depositedOrcale
offchainOracleDiff
txHash
}
}

Results Analysis​

When comparing our three price data sources, we found:

!Table of indexer results

Looking at the offchainOracleDiff column, we can see that oracle and off-chain prices typically align closely but can deviate by as much as 17.98% in some cases.

For the highlighted transaction (0xe7e79ddf29ed2f0ea8cb5bb4ffdab1ea23d0a3a0a57cacfa875f0d15768ba37d), we can compare our calculated values:

  • Actual value (from block explorer): $2,358.27
  • DEX pool value (depositedPool): $2,117.07
  • Off-chain API value (depositedOffchain): $2,156.15

This demonstrates that even the most accurate methods have limitations.

Conclusion: Choosing the Right Method​

Based on our analysis, here are some recommendations for choosing a price data method:

Use Oracle or DEX Pools when:​

  • Indexing speed is critical
  • Absolute precision isn't required
  • You're working with high-liquidity tokens

Use Off-chain APIs when:​

  • Price accuracy is paramount
  • Indexing speed is less important
  • You can implement effective caching

For maximum accuracy while maintaining performance:​

  • Combine multiple methods and aggregate results
  • Use high-volume DEX pools on major networks
  • Cache API results to avoid redundant calls

Next Steps​

To further enhance your price data indexing:

  1. Implement caching for off-chain API calls
  2. Cross-reference multiple DEX pools for better accuracy
  3. Consider time-weighted average prices (TWAP) instead of spot prices
  4. Use multi-chain indexing to access higher-liquidity pools on major networks

By carefully choosing and implementing the right price data strategy, you can build robust indexers that provide accurate financial data for your blockchain applications.


Dynamic Contracts / Factories​

File: Advanced/dynamic-contracts.md

Introduction​

Many blockchain systems use factory patterns where new contracts are created dynamically. Common examples include:

  • DEXes like Uniswap where each trading pair creates a new contract
  • NFT platforms that deploy new collection contracts
  • Lending protocols that create new markets as isolated contracts

When indexing these systems, you need a way to discover and track these dynamically created contracts. Envio provides powerful tools to handle this use case.

Contract Registration Handler​

Instead of a template based approach, we've introduced a contractRegister handler that can be added to any event.

This allows you to easily:

  • Register contracts from any event handler.
  • Use conditions and any logic you want to register contracts.
  • Have nested factories which are registered by other factories.
<ContractName>.<EventName>.contractRegister(({ event, context }) => {
  context.add<ContractToRegister>(<address>);
});

Example: NFT Factory Pattern​

Let's look at a complete example using an NFT factory pattern.

Scenario​

  • NftFactory contract creates new SimpleNft contracts
  • We want to index events from all NFTs created by this factory
  • Each time a new NFT is created, the factory emits a SimpleNftCreated event

1. Configure Your Contracts in config.yaml​

name: nftindexer
description: NFT Factory
networks:
  - id: 1337
    start_block: 0
    contracts:
      - name: NftFactory
        abi_file_path: abis/NftFactory.json
        address: 0x4675a6B115329294e0518A2B7cC12B70987895C4 # Factory address is known
        handler: src/EventHandlers.ts
        events:
          - event: SimpleNftCreated (string name, string symbol, uint256 maxSupply, address contractAddress)

      - name: SimpleNft
        abi_file_path: abis/SimpleNft.json
        # No address field - we'll discover these addresses from events
        handler: src/EventHandlers.ts
        events:
          - event: Transfer (address from, address to, uint256 tokenId)

Note that:

  • The NftFactory contract has a known address specified in the config
  • The SimpleNft contract has no address, as we'll register instances dynamically

2. Create the Contract Registration Handler​

In your src/EventHandlers.ts file:

// Register SimpleNft contracts whenever they're created by the factory
NftFactory.SimpleNftCreated.contractRegister(({ event, context }) => {
// Register the new NFT contract using its address from the event
context.addSimpleNft(event.params.contractAddress);

context.log.info(
`Registered new SimpleNft at ${event.params.contractAddress}`
);
});

// Handle Transfer events from all SimpleNft contracts
SimpleNft.Transfer.handler(async ({ event, context }) => {
// Your event handling logic here
context.log.info(
`NFT Transfer at ${event.srcAddress} - Token ID: ${event.params.tokenId}`
);

// Example: Store transfer information in the database
// ...
});

Async Contract Register​

As of version 2.21, you can use async contract registration.

This is a unique feature of Envio that allows you to perform an external call to determine the address of the contract to register.

NftFactory.SimpleNftCreated.contractRegister(async ({ event, context }) => {
const version = await getContractVersion(event.params.contractAddress);
if (version === "v2") {
context.addSimpleNftV2(event.params.contractAddress);
} else {
context.addSimpleNft(event.params.contractAddress);
}
});

When to Use Dynamic Contract Registration​

Use dynamic contract registration when:

  • Your system includes factory contracts that deploy new contracts over time
  • You want to index events from all instances of a particular contract type
  • The addresses of these contracts aren't known at the time you create your indexer

Important Notes​

  • Block Coverage: When a dynamic contract is registered, Envio will index all events from that contract in the same block where it was created, even if those events happened in transactions before the registration event. This is particularly useful for contracts that emit events during their construction.

  • Handler Organization: You can register contracts from any event handler. For example, you might register a token contract when you see it being added to a registry, not just when it's created.

  • Pre-registration: Pre-registration used to be a recommended mode for optimizing performance, but starting from version 2.19 the option has been removed in favor of the default behavior, which is now even faster.

Debugging Tips​

  • Use logging in your contractRegister function to confirm contracts are being registered.
  • If you're not seeing events from your dynamic contracts, verify that they're being properly registered in the database.

For more information on writing event handlers, see the Event Handlers Guide.


Wildcard Indexing​

File: Advanced/wildcard-indexing.mdx

Wildcard indexing is a feature that allows you to index all events matching a specified event signature without requiring the contract address from which the event was emitted. This is useful in cases such as indexing contracts deployed through factories, where the factory contract does not emit any events upon contract creation. It also enables indexing events from all contracts implementing a standard (e.g. all ERC20 transfers).

Note: Wildcard Indexing is supported for HyperSync & HyperFuel data sources starting from v2.3.0. Support for the RPC data source was added in the v2.12.0 release.

Index all ERC20 transfers​

As an example, let's say we want to index all ERC20 Transfer events. Start with a config.yaml file:

name: transfer-indexer
networks:
  - id: 1
    start_block: 0
    contracts:
      - name: ERC20
        handler: ./src/EventHandlers.ts
        events:
          - event: Transfer(address indexed from, address indexed to, uint256 value)

Let's also define some entities in schema.graphql file, so our handlers can store the processed data:

type Transfer {
id: ID!
from: String!
to: String!
}

The last step is to register an event handler in src/EventHandlers.ts. Note how we pass the wildcard: true option to enable wildcard indexing:


import { ERC20 } from "generated";

ERC20.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
});
},
{ wildcard: true }
);
const { ERC20 } = require("generated");

ERC20.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
});
},
{ wildcard: true }
);
Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
})
},
~eventConfig={wildcard: true},
)

After running your indexer with pnpm dev you will have all ERC20 Transfer events indexed, regardless of the contract address from which the event was emitted.

Topic Filtering​

Indexing all ERC20 Transfer events produces a lot of data, so ideally you should narrow it down to only the events you truly need using the Topic Filtering feature.

When you register an event handler or a contract register, you can provide the eventFilters option. You can filter by each indexed parameter of the given event.

Let's say you only want to index mints, i.e. Transfer events where the from address equals ZERO_ADDRESS:


const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{ wildcard: true, eventFilters: { from: ZERO_ADDRESS } }
);
const { ERC20 } = require("generated");

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{ wildcard: true, eventFilters: { from: ZERO_ADDRESS } }
);
open Types.SingleOrMultiple

let zeroAddress = Address.unsafeFromString("0x0000000000000000000000000000000000000000")

Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
~eventConfig={
wildcard: true,
eventFilters: Single({from: single(zeroAddress)}),
},
)

Multiple Filters​

If you want to index both Mint and Burn events, you can provide multiple filters as an array. Also, every parameter can accept an array to filter by multiple possible values. We'll use it to filter by a group of whitelisted addresses in the example below:


const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

const WHITELISTED_ADDRESSES = [
"0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
"0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
"0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
];

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{
wildcard: true,
eventFilters: [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES },
{ from: WHITELISTED_ADDRESSES, to: ZERO_ADDRESS },
],
}
);
const { ERC20 } = require("generated");

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

const WHITELISTED_ADDRESSES = [
"0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
"0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
"0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
];

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{
wildcard: true,
eventFilters: [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES },
{ from: WHITELISTED_ADDRESSES, to: ZERO_ADDRESS },
],
}
);
open Types.SingleOrMultiple

let zeroAddress = Address.unsafeFromString("0x0000000000000000000000000000000000000000")

let whitelistedAddresses = [
Address.unsafeFromString("0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"),
Address.unsafeFromString("0x70997970C51812dc3A010C7d01b50e0d17dc79C8"),
Address.unsafeFromString("0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC")
]

Handlers.ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
~eventConfig={
wildcard: true,
eventFilters: Multiple([
{ from: single(zeroAddress), to: multiple(whitelistedAddresses) },
{ from: multiple(whitelistedAddresses), to: single(zeroAddress) }
]),
},
)

Different Filters per Network​

For Multichain Indexers, you can pass a function to eventFilters and use chainId to filter by different values per network:


const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

const WHITELISTED_ADDRESSES = {
1: ["0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"],
137: [
"0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
"0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
],
};

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{
wildcard: true,
eventFilters: ({ chainId }) => [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES[chainId] },
{ from: WHITELISTED_ADDRESSES[chainId], to: ZERO_ADDRESS },
],
}
);
const { ERC20 } = require("generated");

const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

const WHITELISTED_ADDRESSES = {
1: ["0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"],
137: [
"0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
"0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
],
};

ERC20.Transfer.handler(
async ({ event, context }) => {
//... your handler logic
},
{
wildcard: true,
eventFilters: ({ chainId }) => [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES[chainId] },
{ from: WHITELISTED_ADDRESSES[chainId], to: ZERO_ADDRESS },
],
}
);

Index all ERC20 transfers to your Contract​

Besides chainId you can also access the addresses value to filter by.

For example, if you have a Safe contract, you can index all ERC20 transfers sent specifically to/from your Safe contracts. The event filter gets addresses belonging to the contract, so we need to define the Transfer event on the Safe contract:

name: locker
networks:
  - id: 1
    start_block: 0
    contracts:
      - name: Safe
        handler: ./src/EventHandlers.ts
        events:
          - event: Transfer(address indexed from, address indexed to, uint256 value)
        addresses:
          - 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
          - 0x70997970C51812dc3A010C7d01b50e0d17dc79C8
          - 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC

Safe.Transfer.handler(async ({ event, context }) => {}, {
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});
const { Safe } = require("generated");

Safe.Transfer.handler(async ({ event, context }) => {}, {
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});

This example is not much different from using a WHITELISTED_ADDRESSES constant, but this becomes much more powerful when the Safe contract addresses are registered dynamically by a factory contract:

name: locker
networks:
- id: 1
start_block: 0
contracts:
- name: SafeRegistry
handler: ./src/EventHandlers.ts
events:
- event: NewSafe(address safe)
addresses:
- 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
- name: Safe
handler: ./src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)

SafeRegistry.NewSafe.contractRegister(async ({ event, context }) => {
context.addSafe(event.params.safe);
});

Safe.Transfer.handler(async ({ event, context }) => {}, {
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});
const { SafeRegistry, Safe } = require("generated");

SafeRegistry.NewSafe.contractRegister(async ({ event, context }) => {
context.addSafe(event.params.safe);
});

Safe.Transfer.handler(async ({ event, context }) => {}, {
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
});

Assert ERC20 Transfers in Handler​

Once you have all the ERC20 Transfers relevant to your contracts, you can filter them further in the handler. For example, to keep only USDC transfers:


const USDC_ADDRESS = {
84532: "0x036CbD53842c5426634e7929541eC2318f3dCF7e",
11155111: "0x1c7D4B196Cb0C7B01d743Fbc6116a902379C7238",
};

Safe.Transfer.handler(
async ({ event, context }) => {
// Filter and store only the USDC transfers that involve a Safe address
if (event.srcAddress === USDC_ADDRESS[event.chainId]) {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
});
}
},
{
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
}
);
const { Safe } = require("generated");

const USDC_ADDRESS = {
84532: "0x036CbD53842c5426634e7929541eC2318f3dCF7e",
11155111: "0x1c7D4B196Cb0C7B01d743Fbc6116a902379C7238",
};

Safe.Transfer.handler(
async ({ event, context }) => {
// Filter and store only the USDC transfers that involve a Safe address
if (event.srcAddress === USDC_ADDRESS[event.chainId]) {
context.Transfer.set({
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
from: event.params.from,
to: event.params.to,
});
}
},
{
wildcard: true,
eventFilters: ({ addresses }) => [{ from: addresses }, { to: addresses }],
}
);

Contract Register Example​

The same eventFilters can be applied to the contractRegister and handlerWithLoader APIs. Here is an example where we only register Uniswap pools that contain the DAI token:


const DAI_ADDRESS = "0x6B175474E89094C44Da98b954EedeAC495271d0F";

UniV3Factory.PoolCreated.contractRegister(
async ({ event, context }) => {
const poolAddress = event.params.pool;
context.UniV3Pool.add(poolAddress);
},
{ eventFilters: [{ token0: DAI_ADDRESS }, { token1: DAI_ADDRESS }] }
);
const { UniV3Factory } = require("generated");

const DAI_ADDRESS = "0x6B175474E89094C44Da98b954EedeAC495271d0F";

UniV3Factory.PoolCreated.contractRegister(
async ({ event, context }) => {
const poolAddress = event.params.pool;
context.UniV3Pool.add(poolAddress);
},
{ eventFilters: [{ token0: DAI_ADDRESS }, { token1: DAI_ADDRESS }] }
);
open Types.SingleOrMultiple

let daiAddress = Address.unsafeFromString("0x6B175474E89094C44Da98b954EedeAC495271d0F")

Handlers.UniV3Factory.PoolCreated.contractRegister(
async ({ event, context }) => {
let poolAddress = event.params.pool
context.UniV3Pool.add(poolAddress)
},
~eventConfig={
eventFilters: Multiple([
{ token0: single(daiAddress) },
{ token1: single(daiAddress) }
])
},
)

Handler With Loader Example​

For handlerWithLoader API simply add wildcard or eventFilters options to the single argument object:

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {},
handler: async ({ event, context }) => {},
wildcard: ...,
eventFilters: ...,
});

Limitations​

  • For any given network, only one event of a given signature can be indexed using wildcard indexing. This means that if multiple contract definitions in your config contain the same event signature, only one of them may be set to wildcard: true.

  • Either the contractRegister or the handler function can take an event config object (with wildcard/eventFilters fields) but not both.

  • The RPC data source currently supports Topic Filtering only when it is applied to a single wildcard event.


Accessing Contract State in Event Handlers​

File: Guides/contract-state.md

Example Repository: The complete code for this guide can be found here

Introduction​

This guide demonstrates how to access on-chain contract state from your event handlers. You'll learn how to:

  1. Make RPC calls to external contracts within your event handlers
  2. Batch multiple calls using multicall for efficiency
  3. Implement caching to reduce redundant RPC requests
  4. Handle common edge cases that arise when accessing token contract data

The Challenge: Token Data from Pool Creation Events​

Scenario​

We want to track token information (name, symbol, decimals) for every token involved in a Uniswap V3 pool creation event.

Problem​

The Uniswap V3 factory PoolCreated event only provides token addresses, not their metadata:

PoolCreated(address indexed token0, address indexed token1, uint24 indexed fee, int24 tickSpacing, address pool)

To get the token name, symbol, and decimals, we need to:

  1. Extract the token addresses from the event
  2. Make RPC calls to each token's contract
  3. Store this data alongside our pool information

Prerequisites​

This guide assumes:

  • Basic familiarity with Envio indexing
  • Understanding of the viem library for making contract calls
  • Access to an Ethereum RPC endpoint (dRPC recommended)

For a gentle introduction to viem with a similar example, check out this medium article.
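Since the helper in Step 4 reads the RPC endpoint from an RPC_URL environment variable, you can keep it in a .env file at the project root. A minimal sketch (the URL is a placeholder you replace with your own endpoint):

# .env
RPC_URL=https://your-ethereum-rpc-endpoint.example.com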

Implementation Steps​

Step 1: Setup the Indexer Configuration​

First, create a new indexer:

pnpx envio init

When prompted, enter the Ethereum mainnet Uniswap V3 Factory address: 0x1F98431c8aD98523631AE4a59f267346ea31F984

Then modify your configuration to focus only on the PoolCreated event:

# config.yaml
name: uniswap-v3-factory-token-indexer
networks:
- id: 1
start_block: 21000000 # Starting at a higher block to speed up initial sync
contracts:
- name: UniswapV3Factory
address:
- 0x1F98431c8aD98523631AE4a59f267346ea31F984
handler: src/EventHandlers.ts
events:
- event: PoolCreated(address indexed token0, address indexed token1, uint24 indexed fee, int24 tickSpacing, address pool)
rollback_on_reorg: false

Note: We're starting at block 21,000,000 to reduce initial sync time, as RPC calls can be slow. See the Rate Limiting section for optimization strategies.

Step 2: Define the Schema​

Create a schema that captures both pool and token information:

# schema.graphql
type Token {
id: ID! # token address
name: String!
symbol: String!
decimals: Int!
}

type Pool {
id: ID! # unique identifier
token0: Token!
token1: Token!
fee: BigInt!
tickSpacing: BigInt!
pool: String! # pool address
}

Step 3: Implement the Event Handler​

The event handler needs to:

  1. Create a Pool entity from the event data
  2. Make RPC calls to fetch token information for both token0 and token1
  3. Create Token entities with the retrieved data
// src/EventHandlers.ts
import { UniswapV3Factory, Pool } from "generated";
import { getTokenDetails } from "./tokenDetails";
UniswapV3Factory.PoolCreated.handler(async ({ event, context }) => {
// Create Pool entity
const entity: Pool = {
id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
token0_id: event.params.token0,
token1_id: event.params.token1,
fee: event.params.fee,
tickSpacing: event.params.tickSpacing,
pool: event.params.pool,
};
context.Pool.set(entity);

// Fetch and store token0 information
try {
const {
name: name0,
symbol: symbol0,
decimals: decimals0,
} = await getTokenDetails(
event.params.token0,
event.chainId,
event.params.pool
);
context.Token.set({
id: event.params.token0,
name: name0,
symbol: symbol0,
decimals: decimals0,
});
} catch (error) {
console.log("failed token0 with address", event.params.token0);
return;
}

// Fetch and store token1 information
try {
const {
name: name1,
symbol: symbol1,
decimals: decimals1,
} = await getTokenDetails(
event.params.token1,
event.chainId,
event.params.pool
);
context.Token.set({
id: event.params.token1,
name: name1,
symbol: symbol1,
decimals: decimals1,
});
} catch (error) {
console.log("failed token1 with address", event.params.token1);
return;
}
});

Step 4: Create the Token Details Helper​

This is where the magic happens. We need to:

  1. Make RPC calls to token contracts
  2. Use multicall to batch multiple calls for efficiency
  3. Handle edge cases like non-standard ERC20 implementations
  4. Cache results to avoid redundant calls
// src/tokenDetails.ts
import { createPublicClient, http, hexToString } from "viem";
import { mainnet } from "viem/chains";
import { Cache, CacheCategory } from "./cache";
// getERC20Contract and getERC20BytesContract (defined elsewhere in the example repository)
// return the { address, abi } pair for the standard and bytes32 ERC20 ABI variants.

const RPC_URL = process.env.RPC_URL;

const client = createPublicClient({
chain: mainnet,
transport: http(RPC_URL),
batch: { multicall: true }, // Enable multicall batching for efficiency
});

export async function getTokenDetails(
contractAddress: string,
chainId: number,
pool: string
): Promise<{ name: string; symbol: string; decimals: number }> {
// Check cache first to avoid redundant RPC calls
const cache = await Cache.init(CacheCategory.Token, chainId);
const token = await cache.read(contractAddress.toLowerCase());

if (token) {
return token;
}

// Prepare contract instances for different token standard variations
const erc20 = getERC20Contract(contractAddress as `0x${string}`);
const erc20Bytes = getERC20BytesContract(contractAddress as `0x${string}`);

let results: [number, string, string];
try {
// Try standard ERC20 interface first (most common)
results = await client.multicall({
allowFailure: false,
contracts: [
{
...erc20,
functionName: "decimals",
},
{
...erc20,
functionName: "name",
},
{
...erc20,
functionName: "symbol",
},
],
});
} catch (error) {
console.log("First multicall failed, trying alternate method");
try {
// Some tokens use bytes32 for name/symbol instead of string
const alternateResults = await client.multicall({
allowFailure: false,
contracts: [
{
...erc20Bytes,
functionName: "decimals",
},
{
...erc20Bytes,
functionName: "name",
},
{
...erc20Bytes,
functionName: "symbol",
},
],
});
results = [
alternateResults[0],
hexToString(alternateResults[1]).replace(/\u0000/g, ""), // Remove null byte padding
hexToString(alternateResults[2]).replace(/\u0000/g, ""), // Remove null byte padding
];
} catch (alternateError) {
console.error(`Alternate method failed for pool ${pool}:`);
results = [0, "unknown", "unknown"]; // Fallback for completely non-standard tokens
}
}

const [decimals, name, symbol] = results;

console.log(
`Got token details for ${contractAddress}: ${name} (${symbol}) with ${decimals} decimals`
);

// Prepare and cache the result
const entry = {
name,
symbol,
decimals,
} as const;

cache.add({ [contractAddress.toLowerCase()]: entry as any });
return entry;
}

Important: bytes32 name/symbol values are zero-padded, so the string produced by Viem's hexToString contains trailing null characters. We strip this padding with replace(/\u0000/g, '') to avoid errors when writing to the database.

Step 5: Implement Caching​

Caching is crucial for efficiency. Many pools share common tokens (like USDC, WETH, etc.), and we don't want to make redundant RPC calls for the same token addresses.

// src/cache.ts
import * as fs from "fs";
import * as path from "path";

export const CacheCategory = {
Token: "token",
} as const;

export type CacheCategory = (typeof CacheCategory)[keyof typeof CacheCategory];

type Address = string;

// Cache shapes (the generic type parameters were lost in extraction and are reconstructed here)
type Shape = Record<string, Record<string, string | number>>;
type ShapeRoot = Shape & Record<Address, Record<string, string>>;

type ShapeToken = Shape &
Record<Address, { decimals: number; name: string; symbol: string }>;

export class Cache {
static init<C extends CacheCategory>(
category: C,
chainId: number | string | bigint
) {
if (!Object.values(CacheCategory).find((c) => c === category)) {
throw new Error("Unsupported cache category");
}

type S = C extends "token" ? ShapeToken : ShapeRoot;
const entry = new Entry<S>(`${category}-${chainId.toString()}`);
return entry;
}
}

export class Entry<T extends Shape> {
private memory: Shape = {};

static encoding = "utf8" as const;
static folder = "./.cache" as const;

public readonly key: string;
public readonly file: string;

constructor(key: string) {
this.key = key;
this.file = Entry.resolve(key);

this.preflight();
this.load();
}

public read(key: string) {
const memory = this.memory || {};
return memory[key] as T[typeof key];
}

public load() {
try {
const data = fs.readFileSync(this.file, Entry.encoding);
this.memory = JSON.parse(data) as T;
} catch (error) {
console.error(error);
this.memory = {};
}
}

public add<N extends Shape>(fields: N) {
if (!this.memory || Object.values(this.memory).length === 0) {
this.memory = fields;
} else {
Object.keys(fields).forEach((key) => {
if (!this.memory[key]) {
this.memory[key] = {};
}
Object.keys(fields[key]).forEach((nested) => {
this.memory[key][nested] = fields[key][nested];
});
});
}

this.publish();
}

private preflight() {
/** Ensure cache folder exists */
if (!fs.existsSync(Entry.folder)) {
fs.mkdirSync(Entry.folder);
}
if (!fs.existsSync(this.file)) {
fs.writeFileSync(this.file, JSON.stringify({}));
}
}

private publish() {
const prepared = JSON.stringify(this.memory);
try {
fs.writeFileSync(this.file, prepared);
} catch (error) {
console.error(error);
}
}

static resolve(key: string) {
return path.join(Entry.folder, key.toLowerCase().concat(".json"));
}
}

Note: The hosted service supports basic JSON file caching in beta. Speak to the team if you want to discuss caching options.

Step 6: Improve Performance with Loaders​

As written, this solution performs the external calls for each event one by one. This is not efficient and can be improved with loaders. Read more about the Effect API and Loaders in the dedicated guides.

Key Considerations​

Understanding Current vs. Historical State​

Standard RPC requests return the current state of a contract, not the state at a specific historical block. For token metadata (name, symbol, decimals), this isn't typically an issue since these values rarely change.

However, if you need historical state (like an account balance at a specific block), you need an archive node and an RPC call that accepts a block parameter, such as eth_getBalance with an explicit block number.
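As an illustration, viem lets you pass an explicit blockNumber to getBalance. This requires an archive node behind your RPC endpoint, and the address and block number below are purely illustrative:

import { createPublicClient, http } from "viem";
import { mainnet } from "viem/chains";

const client = createPublicClient({ chain: mainnet, transport: http(process.env.RPC_URL) });

// Balance of an example address as of block 17,000,000 (archive node required)
const historicalBalance = await client.getBalance({
  address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045",
  blockNumber: 17000000n,
});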

Handling Rate Limiting​

RPC providers often limit the number of requests per time period. To avoid hitting rate limits:

  1. Use multicall (as shown in our example) to batch multiple contract calls into a single RPC request
  2. Implement caching to avoid redundant requests
  3. Use a paid, unthrottled RPC provider for production indexers
  4. Implement request throttling to space out requests when needed (see the sketch after this list)
  5. Use multiple RPC providers and rotate between them for high-volume indexing
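As a sketch of point 4, here is one way to enforce a minimum delay between outgoing RPC calls. The helper and its usage are hypothetical and not part of this guide's repository:

const MIN_INTERVAL_MS = 50; // roughly 20 requests per second; tune to your provider's limits
let nextAvailableSlot = 0;

async function throttled<T>(call: () => Promise<T>): Promise<T> {
  // Reserve the next time slot, wait until it arrives, then fire the call
  const now = Date.now();
  const wait = Math.max(0, nextAvailableSlot - now);
  nextAvailableSlot = now + wait + MIN_INTERVAL_MS;
  await new Promise((resolve) => setTimeout(resolve, wait));
  return call();
}

// Hypothetical usage: const details = await throttled(() => getTokenDetails(token0, chainId, pool));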

Conclusion​

Accessing contract state from your event handlers opens up powerful possibilities for enriching your indexed data. By following the patterns in this guide, you can efficiently retrieve and store contract state while maintaining good performance.

For more advanced techniques, explore:

  • Implementing retry logic for failed RPC calls (see the sketch below)
  • Handling complex contract interactions beyond basic ERC20 tokens
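As a starting point for the retry idea above, here is a minimal retry-with-backoff sketch. The helper name and its usage are hypothetical and not taken from the example repository:

async function withRetry<T>(call: () => Promise<T>, retries = 3, baseDelayMs = 500): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (error) {
      if (attempt >= retries) throw error; // give up after the configured number of attempts
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Hypothetical usage: const details = await withRetry(() => getTokenDetails(token0, chainId, pool));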

Using HyperSync as Your Indexing Data Source​

File: Advanced/hypersync.md

"Beam me up, Scotty!" πŸ–– β€” Just like the Star Trek transporter, HyperSync delivers your blockchain data at warp speed.

What is HyperSync?​

HyperSync is a purpose-built data node that helps power the exceptional performance of HyperIndex. It's a specialized data source optimized for indexing that provides:

  • 100x faster sync speeds compared to traditional RPC methods
  • Cost-effective data retrieval with optimized resource usage
  • Flexibility with the ability to fetch multiple data points in a single round trip with more complex filtering

How HyperSync Powers Your Indexers​

The Performance Advantage​

Traditional blockchain indexing relies on RPC (Remote Procedure Call) endpoints to query blockchain data. While functional, RPCs become highly inefficient when:

  • Indexing millions of events
  • Processing historical blockchain data
  • Extracting data across multiple networks
  • Working with thousands of contracts

HyperSync addresses these limitations by providing a streamlined data access layer that dramatically reduces sync times from days to minutes.

Default Enablement​

HyperSync is used by default as the data source for all HyperIndex networks. This means:

  • No additional configuration is required to benefit from its speed
  • No need to worry about RPC rate limiting
  • No management of multiple RPC providers
  • No costs for external RPC services

Using HyperSync in Your Projects​

Configuration​

To use HyperSync (the default), simply don't set an RPC for historical sync in your config. HyperIndex will automatically use HyperSync for supported networks:

name: Greeter
description: Greeter indexer
networks:
- id: 137 # Polygon
start_block: 0 # With HyperSync, you can use 0 regardless of contract deployment time
contracts:
- name: PolygonGreeter
abi_file_path: abis/greeter-abi.json
address: 0x9D02A17dE4E68545d3a58D3a20BbBE0399E05c9c
handler: ./src/EventHandlers.bs.js
events:
- event: NewGreeting
- event: ClearGreeting

Smart Block Detection​

When using HyperSync, you can specify start_block: 0 in your configuration. HyperSync will automatically:

  1. Detect the first block where your contract was deployed
  2. Begin indexing from that block
  3. Skip unnecessary processing of earlier blocks

This feature eliminates the need to manually determine the deployment block of your contract, saving setup time and reducing configuration errors.

Availability and Support​

HyperSync is maintained and hosted by Envio for all supported networks. We handle the infrastructure, allowing you to focus on building your indexer logic.

Improving resilience with RPC fallback​

HyperIndex allows you to configure additional RPC providers as fallback data sources. This redundancy is recommended for production deployments to ensure continuous operation of your indexer. If HyperSync experiences any interruption, your indexer will automatically switch to the fallback RPC provider.

Adding an RPC fallback provides these benefits:

  • High availability: Your indexer continues to function even during temporary HyperSync outages
  • Automatic failover: The system detects issues and switches to fallback RPC without manual intervention
  • Operational control: You can specify which RPC providers to use as fallbacks based on your requirements

Configure a fallback RPC by adding the rpc field to your network configuration:

name: Greeter
description: Greeter indexer
networks:
- id: 137 # Polygon
+ # Short and simple
+ rpc: https://eth-mainnet.your-rpc-provider.com?API_KEY={ENVIO_MAINNET_API_KEY}
+ # Or provide multiple RPC endpoints with more flexibility
+ rpc:
+ - url: https://eth-mainnet.your-rpc-provider.com?API_KEY={ENVIO_MAINNET_API_KEY}
+ for: fallback
+ - url: https://eth-mainnet.your-free-rpc-provider.com
+ for: fallback
+ initial_block_interval: 1000
start_block: 0 # With HyperSync, you can use 0 regardless of contract deployment time
contracts:
- name: PolygonGreeter
abi_file_path: abis/greeter-abi.json
address: 0x9D02A17dE4E68545d3a58D3a20BbBE0399E05c9c
handler: ./src/EventHandlers.bs.js
events:
- event: NewGreeting
- event: ClearGreeting
Info: This feature is available starting from version 2.14.0. The fallback RPC is activated only when the primary data source doesn't receive a new block for more than 20 seconds.

Supported Networks​

HyperSync supports numerous EVM networks including Ethereum, Unichain, Arbitrum, Optimism, and more. For a complete and up-to-date list of supported networks, see the HyperSync Supported Networks documentation.

Alternatives​

The HyperSync data source does not lock you in. While HyperSync is recommended for optimal performance, you can switch to RPC at any time without changing your indexer code. For information on configuring RPC-based indexing, visit the RPC Data Source documentation.

Performance Comparison​

| Metric | Traditional RPC | HyperSync |
| --- | --- | --- |
| Indexing 1M Events | Hours to days | Minutes |
| Resource Usage | High | Optimized |
| Network Calls | Many individual calls | Batched for efficiency |
| Rate Limiting | Common issue | Not applicable |
| Cost | Pay per API call | Included with Hosted Service |

Summary​

HyperSync provides a significant competitive advantage for Envio indexers by dramatically reducing sync times, lowering costs, and simplifying configuration. By using HyperSync as your default data source, you'll experience:

  • Faster indexing performance
  • Support for previously impossible indexing cases
  • Enhanced reliability
  • Reduced operational complexity

To learn more about HyperSync's underlying technology, visit the HyperSync documentation.


Using RPC as Your Indexing Data Source​

File: Advanced/rpc-sync.md

HyperIndex supports indexing any EVM blockchain using RPC (Remote Procedure Call) as the data source. This page explains when and how to use RPC for your indexing needs.

When to Use RPC​

While HyperSync is the recommended and default data source for optimal performance, there are scenarios where you might need to use RPC instead:

  1. Unsupported Networks: When indexing a blockchain network that isn't yet supported by HyperSync
  2. Custom Requirements: When you need specific RPC functionality not available in HyperSync
  3. Private Chains: When working with private or development EVM chains

Note: For networks that HyperSync supports, we strongly recommend using HyperSync rather than RPC. HyperSync provides significantly faster indexing performance (up to 100x) and doesn't require managing RPC endpoints or worrying about rate limits.

Configuring RPC in Your Indexer​

Basic Configuration​

To use RPC as your data source, you need to add an rpc_config section to your network configuration in the config.yaml file:

networks:
- id: 1 # Ethereum Mainnet
rpc_config:
url: https://eth-mainnet.your-rpc-provider.com # Your RPC endpoint
start_block: 15000000
contracts:
- name: MyContract
address: "0x1234..."
# Additional contract configuration...

The presence of the rpc_config section tells HyperIndex to use RPC instead of HyperSync for this network.

Advanced RPC Configuration​

For more control over how your indexer interacts with the RPC endpoint, you can configure additional parameters:

networks:
- id: 1
rpc_config:
url: https://eth-mainnet.your-rpc-provider.com
initial_block_interval: 10000 # Initial number of blocks to fetch in each request
backoff_multiplicative: 0.8 # Factor to scale back block request size after errors
acceleration_additive: 2000 # How many more blocks to request when successful
interval_ceiling: 10000 # Maximum blocks to request in a single call
backoff_millis: 5000 # Milliseconds to wait after an error
query_timeout_millis: 20000 # Milliseconds before timing out a request
start_block: 15000000
# Additional network configuration...

Configuration Parameters Explained​

| Parameter | Description | Recommended Value |
| --- | --- | --- |
| url | Your RPC endpoint URL | Depends on provider |
| initial_block_interval | Starting block batch size | 1,000 - 10,000 |
| backoff_multiplicative | How much to reduce batch size after errors | 0.5 - 0.9 |
| acceleration_additive | How much to increase batch size on success | 500 - 2,000 |
| interval_ceiling | Maximum blocks per request | 5,000 - 10,000 |
| backoff_millis | Wait time after errors (ms) | 1,000 - 10,000 |
| query_timeout_millis | Request timeout (ms) | 10,000 - 30,000 |

The optimal values depend on your RPC provider's performance and limits, as well as the complexity of your contracts and the data being indexed.

RPC Best Practices​

Selecting an RPC Provider​

When choosing an RPC provider, consider:

  • Rate limits: Most providers have limits on requests per second/minute
  • Node performance: Some providers offer faster nodes for premium tiers
  • Archive nodes: Required if you need historical state (e.g., balances at past blocks)
  • Geographic location: Choose nodes closest to your indexer deployment

Performance Optimization​

To get the best performance when using RPC:

  1. Start from a recent block if possible, rather than indexing from genesis
  2. Tune batch parameters based on your provider's capabilities
  3. Use a paid service for better reliability and higher rate limits
  4. Consider multiple fallback RPCs for redundancy

Enhanced RPC with eRPC​

For more robust RPC usage, you can implement eRPC - a fault-tolerant EVM RPC proxy with advanced features like caching and failover.

What eRPC Provides​

  • Permanent caching: Stores historical responses to reduce redundant requests
  • Auto failover: Automatically switches between multiple RPC providers
  • Re-org awareness: Properly handles blockchain reorganizations
  • Auto-batching: Optimizes requests to minimize network overhead
  • Load balancing: Distributes requests across multiple providers

Setting Up eRPC​

  1. Create your eRPC configuration file (erpc.yaml):
logLevel: debug
projects:
- id: main
upstreams:
# Add HyperRPC as primary source
- endpoint: evm+envio://rpc.hypersync.xyz
# Add fallback RPC endpoints
- endpoint: https://eth-mainnet-provider1.com
- endpoint: https://eth-mainnet-provider2.com
- endpoint: https://eth-mainnet-provider3.com
  2. Run eRPC using Docker:
docker run -v $(pwd)/erpc.yaml:/root/erpc.yaml -p 4000:4000 -p 4001:4001 ghcr.io/erpc/erpc:latest

Or add it to your existing Docker Compose setup:

services:
# Your existing services...

erpc:
image: ghcr.io/erpc/erpc:latest
platform: linux/amd64
volumes:
- "${PWD}/erpc.yaml:/root/erpc.yaml"
ports:
- 4000:4000
- 4001:4001
restart: always
  3. Configure HyperIndex to use eRPC in your config.yaml:
networks:
- id: 1
rpc_config:
url: http://erpc:4000/main/evm/1 # eRPC endpoint for Ethereum Mainnet
start_block: 15000000
# Additional network configuration...

For more detailed configuration options, refer to the eRPC documentation.

Comparing HyperSync and RPC​

| Feature | HyperSync | RPC |
| --- | --- | --- |
| Speed | 10-100x faster | Baseline |
| Configuration | Minimal | Requires tuning |
| Rate Limits | None | Depends on provider |
| Cost | Included with Hosted Service | Pay per request/subscription |
| Network Support | Supported networks | Any EVM network |
| Maintenance | Managed by Envio | Self-managed |

Summary​

While RPC provides the flexibility to index any EVM blockchain, it comes with performance limitations and configuration complexity. For supported networks, we recommend using HyperSync as your data source for optimal performance.

If you must use RPC:

  • Choose a reliable provider
  • Configure your indexer for optimal performance
  • Consider implementing eRPC for enhanced reliability and performance
  • Start from recent blocks when possible to reduce indexing time

For any questions about using RPC with HyperIndex, please contact the Envio team.


Indexing IPFS Data with Envio​

File: Guides/ipfs.md

Example Repository: The complete code for this guide can be found here

Important Note: The example repository contains a SQLite caching implementation which is not supported on the Envio hosted service. This guide has been updated to exclude the SQLite implementation, focusing only on approaches compatible with the hosted environment.

Introduction​

This guide demonstrates how to fetch and index data stored on IPFS within your Envio indexer. We'll use the Bored Ape Yacht Club NFT collection as a practical example, showing you how to retrieve and store token metadata from IPFS.

IPFS (InterPlanetary File System) is commonly used in blockchain applications to store larger data like images and metadata that would be prohibitively expensive to store on-chain. By integrating IPFS fetching capabilities into your indexers, you can provide a more complete data model that combines on-chain events with off-chain metadata.

Implementation Overview​

Our implementation will follow these steps:

  1. Create a basic indexer for Bored Ape Yacht Club NFT transfers
  2. Extend the indexer to fetch and store metadata from IPFS
  3. Handle IPFS connection issues with fallback gateways

Step 1: Setting Up the Basic NFT Indexer​

First, let's create a basic indexer that tracks NFT ownership:

Initialize the Indexer​

pnpx envio init

When prompted, enter the Bored Ape Yacht Club contract address: 0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D

Configure the Indexer​

Modify the configuration to focus on the Transfer events:

# config.yaml
name: bored-ape-yacht-club-nft-indexer
networks:
- id: 1
start_block: 0
end_block: 12299114 # Optional: limit blocks for development
contracts:
- name: BoredApeYachtClub
address:
- 0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 indexed tokenId)

Define the Schema​

Create a schema to store NFT ownership data:

# schema.graphql
type Nft {
id: ID! # tokenId
owner: String!
}

Implement the Event Handler​

Track ownership changes by handling Transfer events:

// src/EventHandlers.ts
import { BoredApeYachtClub, Nft } from "generated";
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

BoredApeYachtClub.Transfer.handler(async ({ event, context }) => {
if (event.params.from === ZERO_ADDRESS) {
// mint
const nft: Nft = {
id: event.params.tokenId.toString(),
owner: event.params.to,
};
context.Nft.set(nft);
} else {
// transfer
let nft = await context.Nft.get(event.params.tokenId.toString());
if (!nft) {
throw new Error("Can't transfer non-existing NFT");
}
nft = { ...nft, owner: event.params.to };
context.Nft.set(nft);
}
});

Run your indexer with pnpm dev and visit http://localhost:8080 to see the ownership data:

!Basic NFT ownership data

Step 2: Fetching IPFS Metadata​

Now, let's enhance our indexer to fetch metadata from IPFS:

Update the Schema​

Extend the schema to include metadata fields:

# schema.graphql
type Nft {
id: ID! # tokenId
owner: String!
image: String!
attributes: String! # JSON string of attributes
}

Create IPFS Utility Functions​

Create a new file to handle IPFS requests with fallbacks to multiple gateways:

// src/utils/ipfs.ts
import type { handlerContext } from "generated";
type NftMetadata = {
image: string;
attributes: Array<any>; // attribute objects from the token's JSON metadata
};

// unique identifier for the BoredApeYachtClub IPFS tokenURI
const BASE_URI_UID = "QmeSjSinHpPnmXmspMjwiXyN6zS4E9zccariGR3jxcaWtq";

async function fetchFromEndpoint(
endpoint: string,
tokenId: string,
context: handlerContext
): Promise<NftMetadata | null> {
try {
const response = await fetch(`${endpoint}/${BASE_URI_UID}/${tokenId}`);
if (response.ok) {
const metadata: any = await response.json();
context.log.info(metadata);
return { attributes: metadata.attributes, image: metadata.image };
} else {
throw new Error("Unable to fetch from endpoint");
}
} catch (e) {
context.log.warn(`Unable to fetch from ${endpoint}`);
}
return null;
}

export async function tryFetchIpfsFile(
tokenId: string,
context: handlerContext
): Promise<NftMetadata> {
const endpoints = [
// Try multiple endpoints to ensure data availability
process.env.PINATA_IPFS_GATEWAY || "", // Optional paid gateway (set in .env)
"https://cloudflare-ipfs.com/ipfs",
"https://ipfs.io/ipfs",
];

for (const endpoint of endpoints) {
if (!endpoint) continue; // Skip empty endpoints

const metadata = await fetchFromEndpoint(endpoint, tokenId, context);
if (metadata) {
return metadata;
}
}

context.log.error("Unable to fetch from all endpoints");
return { attributes: ["unknown"], image: "unknown" };
}

Update the Event Handler​

Modify the event handler to fetch and store metadata:

// src/EventHandlers.ts
import { BoredApeYachtClub, Nft } from "generated";
import { tryFetchIpfsFile } from "./utils/ipfs";
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";

BoredApeYachtClub.Transfer.handler(async ({ event, context }) => {
if (event.params.from === ZERO_ADDRESS) {
// mint
let metadata = await tryFetchIpfsFile(
event.params.tokenId.toString(),
context
);

const nft: Nft = {
id: event.params.tokenId.toString(),
owner: event.params.to,
image: metadata.image,
attributes: JSON.stringify(metadata.attributes),
};
context.Nft.set(nft);
} else {
// transfer
let nft = await context.Nft.get(event.params.tokenId.toString());
if (!nft) {
throw new Error("Can't transfer non-existing NFT");
}
nft = { ...nft, owner: event.params.to };
context.Nft.set(nft);
}
});

When you run the indexer now, it will populate both ownership data and token metadata:

!NFT ownership and metadata

Best Practices for IPFS Integration​

When working with IPFS in your indexers, consider these best practices:

1. Use Multiple Gateways​

IPFS gateways can be unreliable, so always implement multiple fallback options:

const endpoints = [
process.env.PAID_IPFS_GATEWAY || "",
"https://cloudflare-ipfs.com/ipfs",
"https://ipfs.io/ipfs",
"https://gateway.pinata.cloud/ipfs",
];

2. Handle Failures Gracefully​

Always include error handling and provide fallback values:

try {
// IPFS fetch logic
} catch (error) {
context.log.error(`Failed to fetch from IPFS: ${error.message}`);
return { attributes: [], image: "default-image-url" };
}

3. Implement Local Caching (For Local Development)​

For local development, consider implementing in-memory caching to avoid repeatedly fetching the same data:

// Simple in-memory cache for local development
const metadataCache = new Map();

async function getMetadata(tokenId: string) {
// Check cache first
if (metadataCache.has(tokenId)) {
return metadataCache.get(tokenId);
}

// Fetch from IPFS
const metadata = await fetchFromIPFS(tokenId);

// Store in cache
metadataCache.set(tokenId, metadata);

return metadata;
}

Note: For production indexers, the Envio hosted service automatically handles optimizations. You should not implement persistent file-based caching mechanisms as they are not supported in the hosted environment. Please discuss with the team your options regarding caching.

Important: While the example repository includes SQLite-based caching, this approach is not compatible with the Envio hosted service and should not be used for production deployments.

4. Improve Performance with Loaders​

As written, this solution performs the external calls for each event one by one. This is not efficient and can be improved with loaders. Read more about the Effect API and Loaders in the dedicated guides.

Understanding IPFS​

What is IPFS?​

IPFS (InterPlanetary File System) is a distributed system for storing and accessing files, websites, applications, and data. It works by:

  1. Splitting files into chunks
  2. Creating content-addressed identifiers (CIDs) based on the content itself
  3. Distributing these chunks across a network of nodes
  4. Retrieving data based on its CID rather than its location
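As a small illustration of content addressing, here is a hypothetical helper that turns an ipfs:// URI (or a bare CID path) into a gateway URL of the kind used earlier in this guide:

function toGatewayUrl(uri: string, gateway = "https://ipfs.io/ipfs"): string {
  // Strip the ipfs:// scheme if present and prefix the chosen gateway
  const cidPath = uri.replace(/^ipfs:\/\//, "");
  return `${gateway}/${cidPath}`;
}

// toGatewayUrl("ipfs://QmeSjSinHpPnmXmspMjwiXyN6zS4E9zccariGR3jxcaWtq/0")
// => "https://ipfs.io/ipfs/QmeSjSinHpPnmXmspMjwiXyN6zS4E9zccariGR3jxcaWtq/0"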

Common Use Cases with Smart Contracts​

IPFS is frequently used alongside smart contracts for:

  • NFTs: Storing images, videos, and metadata while the contract manages ownership
  • Decentralized Identity Systems: Storing credential documents and personal information
  • DAOs: Maintaining governance documents, proposals, and organizational assets
  • dApps: Hosting front-end interfaces and application assets

IPFS Challenges​

IPFS integration comes with several challenges:

  1. Slow Retrieval Times: IPFS data can be slow to retrieve, especially for less widely replicated content
  2. Gateway Reliability: Public gateways can be inconsistent in their availability
  3. Data Persistence: Content may become unavailable if nodes stop hosting it

To mitigate these issues:

  • Use pinning services like Pinata or Infura to ensure data persistence
  • Implement multiple gateway fallbacks
  • Consider paid gateways for production applications

Envio Command Line Interface​

File: Guides/cli-commands.md

This comprehensive reference guide covers all available commands and options in the Envio CLI tool. Use this documentation to explore the full capabilities of the envio command and its subcommands for managing your indexing projects.

Getting Started​

The Envio CLI provides a powerful set of tools for creating, developing, and managing your indexers. Whether you're starting a new project, running a development server, or deploying to production, the CLI offers commands to simplify and automate your workflow.

Command Overview​

The commands are organized into the following categories:

Initialization Commands​

  • envio init - Create new indexer projects
  • envio init contract-import - Import from existing contracts
  • envio init template - Use pre-built templates

Development Commands​

  • envio dev - Run in development mode with hot reloading
  • envio codegen - Generate code from configuration files
  • envio start - Start the indexer without code generation

Environment Management​

  • envio stop - Stop running processes
  • envio local - Manage local environment
  • envio local docker - Control Docker containers
  • envio local db-migrate - Manage database schema

Analysis Tools​

  • envio benchmark-summary - View performance data

Global Command​

envio​

The base command that provides access to all Envio functionality.

Usage: envio [OPTIONS] <COMMAND>

Options:​
  • -d, --directory – The directory of the project. Defaults to current dir ("./")
  • -o, --output-directory – The directory within the project that generated code should output to (Default: generated)
  • --config – The file in the project containing config (Default: config.yaml)

Initialization Commands​

These commands help you create and set up new indexing projects quickly.

envio init​

Initialize an indexer with one of the initialization options.

Usage: envio init [OPTIONS] [COMMAND]

Subcommands:​
  • contract-import – Initialize Evm indexer by importing config from a contract for a given chain
  • template – Initialize Evm indexer from an example template
  • fuel – Initialization option for creating Fuel indexer
Options:​
  • -n, --name – The name of your project
  • -l, --language – The language used to write handlers (Options: javascript, typescript, rescript)
  • --api-token – The hypersync API key to be initialized in your template's .env file

envio init contract-import​

Initialize Evm indexer by importing config from a contract for a given chain.

Usage: envio init contract-import [OPTIONS] [COMMAND]

Subcommands:​
  • explorer – Initialize by pulling the contract ABI from a block explorer
  • local – Initialize from a local json ABI file
Options:​
  • -c, --contract-address – Contract address to generate the config from
  • --single-contract – If selected, prompt will not ask for additional contracts/addresses/networks
  • --all-events – If selected, prompt will not ask to confirm selection of events on a contract

envio init contract-import explorer​

Initialize by pulling the contract ABI from a block explorer.

Usage: envio init contract-import explorer [OPTIONS]

Options:​
  • -b, --blockchain – Network to import the contract from (Options include ethereum-mainnet, polygon, arbitrum-one, etc.)

envio init contract-import local​

Initialize from a local json ABI file.

Usage: envio init contract-import local [OPTIONS]

Options:​
  • -a, --abi-file – The path to a json abi file
  • --contract-name – The name of the contract
  • -b, --blockchain – Name or ID of the contract network
  • -r, --rpc-url – The rpc url to use if the network id used is unsupported by our hypersync
  • -s, --start-block – The start block to use on this network

envio init template​

Initialize Evm indexer from an example template.

Usage: envio init template [OPTIONS]

Options:​
  • -t, --template – Template to use (Options: greeter, erc20)

envio init fuel​

Initialization option for creating Fuel indexer.

Usage: envio init fuel [COMMAND]

Subcommands:​
  • contract-import – Initialize Fuel indexer by importing config from a contract
  • template – Initialize Fuel indexer from an example template

envio init fuel contract-import​

Initialize Fuel indexer by importing config from a contract for a given chain.

Usage: envio init fuel contract-import [OPTIONS] [COMMAND]

Subcommands:​
  • local – Initialize from a local json ABI file
Options:​
  • -c, --contract-address – Contract address to generate the config from
  • --single-contract – If selected, prompt will not ask for additional contracts/addresses/networks
  • --all-events – If selected, prompt will not ask to confirm selection of events on a contract

envio init fuel contract-import local​

Initialize from a local json ABI file.

Usage: envio init fuel contract-import local [OPTIONS]

Options:​
  • -a, --abi-file – The path to a json abi file
  • --contract-name – The name of the contract

envio init fuel template​

Initialize Fuel indexer from an example template.

Usage: envio init fuel template [OPTIONS]

Options:​
  • -t, --template – Name of the template (Options: greeter)

Development Commands​

These commands help you develop, test, and run your indexers locally.

envio dev​

Development commands for starting, stopping, and restarting the indexer with automatic codegen for any changed files.

Usage: envio dev

envio stop​

Stop the local environment - delete the database and stop all processes (including Docker) for the current directory.

Usage: envio stop

envio codegen​

Generate indexing code from user-defined configuration & schema files.

Usage: envio codegen

envio start​

Start the indexer without any automatic codegen.

Usage: envio start [OPTIONS]

Options:​
  • -r, --restart – Clear your database and restart indexing from scratch
  • -b, --bench – Saves benchmark data to a file during indexing

Environment Management Commands​

These commands help you manage your local development environment.

envio local​

Prepare local environment for envio testing.

Usage: envio local

Subcommands:​
  • docker – Local Envio and ganache environment commands
  • db-migrate – Local Envio database commands

envio local docker​

Local Envio and ganache environment commands.

Usage: envio local docker

Subcommands:​
  • up – Create docker images required for local environment
  • down – Delete existing docker images on local environment

envio local docker up​

Create docker images required for local environment.

Usage: envio local docker up

envio local docker down​

Delete existing docker images on local environment.

Usage: envio local docker down

envio local db-migrate​

Local Envio database commands.

Usage: envio local db-migrate

Subcommands:​
  • up – Migrate latest schema to database
  • down – Drop database schema
  • setup – Setup database by dropping schema and then running migrations

envio local db-migrate up​

Migrate latest schema to database.

Usage: envio local db-migrate up

envio local db-migrate down​

Drop database schema.

Usage: envio local db-migrate down

envio local db-migrate setup​

Setup database by dropping schema and then running migrations.

Usage: envio local db-migrate setup

Analysis Tools​

These commands help you analyze and optimize your indexer's performance.

envio benchmark-summary​

Prints a summary of the benchmark data after running the indexer with envio start --bench flag or setting 'ENVIO_SAVE_BENCHMARK_DATA=true'.

Usage: envio benchmark-summary

Command Reference Table​

| Command | Description | Common Use Case |
| --- | --- | --- |
| envio init | Create new indexer | Starting a new project |
| envio dev | Run in development mode | Local development with hot reload |
| envio start | Start indexer | Production or testing runs |
| envio stop | Stop all processes | Cleaning up environment |
| envio codegen | Generate code | After changing config or schema |
| envio local docker up | Start Docker containers | Setting up environment |
| envio local db-migrate setup | Initialize database | Before first run |

Advanced Usage Examples​

Creating a New Indexer from an Existing Contract​

envio init contract-import explorer -b ethereum-mainnet -c 0x1234...

Starting the Indexer in Development Mode​

envio dev

Running Benchmarks​

envio start --bench
envio benchmark-summary

Migration Guide: HyperIndex v1 to v2​

File: migration-guide-v1-v2.md

Introduction​

Welcome to HyperIndex v2 - a major upgrade that significantly enhances your indexing experience! This new version introduces asynchronous processing, streamlined workflows, and improved flexibility for your indexers. With v2, you'll benefit from faster development, better performance, and a more intuitive API.

While the full release changes can be found in the v2.0.0 release notes, here are some key highlights before we dive into the comprehensive migration guide:

  • Handlers are now asynchronous, with loaders becoming an optional tool for additional optimizations.
  • Async-mode has been removed as it's no longer needed in v2.
  • Loaders (when used) are more expressive and directly connected to the handler context via their return type.
    • In v2, you can access loader fields in the handler the same way you do in the loader, using an async 'get' function.
    • The return type of the loader can be used to directly access loaded data in the handler via the context.
  • Indexing parameters with names that are reserved words in ReScript have been fixed.
  • Validation and autocompletion for config.yaml is now available. Enable it by adding # yaml-language-server: $schema=./node_modules/envio/evm.schema.json at the top of your config.yaml file.

These changes simplify the development process and provide a more consistent and powerful indexing experience. The following sections will guide you through the necessary steps to migrate your existing v1 indexers to v2.

Changes to Make​

Handlers​

  • Handlers are now asynchronous - add the async keyword and rename handlerAsync to handler.
  • You can use handlerWithLoader if you need a loader, otherwise, use handler directly.
  • The 'get' function is now asynchronous, so add an await before those functions.
  • No labeled entities.

Loaders​

  • Loaders are merged into the handlers using handlerWithLoader.
  • Loading linked entities is done directly with promises in the loader.
  • Loaders are completely optional - only use them if you care about high-throughput indexing.
  • Loaders return the required entities which are then used in the handler.
  • Dynamic contract registration moved from loaders to its own contractRegister handler (e.g. Greeter.NewGreeterCreated.contractRegister).
  • The return type of the loader is used directly in the handler to access the loaded data. No need to re-'get' it again in the handler.

Configuration​

  • There is no async-mode anymore, so you can remove isAsync: true from each of the events in your config.yaml.
  • There is no more 'required_entities' in the config file, including sub-fields such as label and arrayLabels, so remove entity labels and required entities as well.

- isAsync: true
- required_entities:
- - name: User

Field Selection and Event Parameter Changes​

In v2, the structure of the event parameter has changed significantly. Some fields have been moved or renamed, and new fields are available through the field_selection configuration.

Field selection allows you to add additional data points to each event that gets passed to your handlers. This feature enhances the flexibility and efficiency of your indexer, as by default you don't fetch data that isn't required.

To use field selection, add a field_selection section to your config.yaml file. For example:

field_selection:
transaction_fields:
- "from"
- "to"
- "hash"
- "transactionIndex"
block_fields:
# Not required for migration, but more fields can be added here
- "parentHash"

For an exhaustive list of fields that can be added and more detailed information about field selection, please refer to the Field Selection section in the Configuration File guide.

Note: By default, number, hash, and timestamp are already selected for block_fields and do not need to be configured.

'event' Parameter Changes​

The structure of the event parameter has changed in v2. This affects loaders, handlers, and dynamic contract registration. Here are the key changes:

  1. Block and transaction fields are now scoped under event.block and event.transaction respectively.
  2. Some field names have changed:
    • event.txOrigin is now event.transaction.from (requires adding to config)
    • event.txTo is now event.transaction.to (requires adding to config)
    • event.txHash is now event.transaction.hash (requires adding to config)
    • event.blockTimestamp is now event.block.timestamp (no config change)
    • event.blockNumber is now event.block.number (no config change)
    • event.blockHash is now event.block.hash (no config change)

Miscellaneous breaking changes and deprecations​

  • The context.Entity.load function is deprecated and should be replaced with direct calls to context.Entity.get in the loader.
  • The context.ParentEntity.loadField functions are deprecated and should be replaced with direct calls to context.ChildEntity.get.
  • Remove the Contract and Entity suffixes from the generated code.
  • For JavaScript/TypeScript users:
    • The event param names are not uncapitalized anymore. So you might need to change event.params.capitalizedParamName to event.params.CapitalizedParamName.
  • For ReScript users:
    • We moved to the built-in bigint type instead of the Ethers.BigInt.t.
    • We migrated to ReScript 11 uncurried mode. Curried mode is not supported anymore. So you need to remove uncurried: false from your rescript.json file. Also, we vendored RescriptMocha bindings to support uncurried mode. Please use it instead of rescript-mocha.
  • The config parsing is more strict, unknown fields will result in an error.
    • You can add # yaml-language-server: $schema=./node_modules/envio/evm.schema.json at the top of your 'config.yaml' file to get autocomplete and validation for the config file.
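For example, the schema comment sits on the very first line of config.yaml, with the rest of the file unchanged (my-indexer is a placeholder name):

# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: my-indexer
# ...rest of your existing config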

Migration Steps​

1. Update Imports​

Replace the old import statements with the new ones.

Before:

import {
GreeterContract_NewGreeting_handler,
// or you aren't using these `_` versions of the imports
GreeterContract,
// ...
} from "../generated/src/Handlers.gen"; // Not all imports still look like this, but on old indexers they do.

import {
GreetingEntity,
UserEntity,
// ... other entities
} from "../generated/src/Types.gen";

After:

import {
Greeter, // the Greeter Contract
// ...
Greeting, // the Greeting Entity
User, // The User Entity
// ... other entities
} from "generated"; // Note this requires adding the 'generated' folder to your 'optionalDependencies' in your package.json

2. Update Handler Definitions​

Before:

/// or if your indexer is very old: GreeterContract_Event1_loader
GreeterContract.Event1.loader(({ event, context }) => {
// Loader code
});
GreeterContract.Event1.handler(({ event, context }) => {
// Handler code
});

After:

Greeter.Event1.handlerWithLoader({
loader: async ({ event, context }) => {
// Loader code
return {
/* loaded data, this data is available in the "handler" via the `loaderReturn` parameter */
};
},
handler: async ({ event, context, loaderReturn }) => {
// Handler code using loaderReturn
},
});

Or without a loader:​

Before:

GreeterContract.Event1.handler(({ event, context }) => {
// Handler code
});

After:

Greeter.Event1.handler(async ({ event, context }) => {
// Handler code
});

3. Dynamic Contract Registration​

Use contractRegister for dynamic contract registration. Assuming there is an event called NewGreeterCreated that creates a contract called Greeter that has the address of the newGreeter as a field.

Before:

GreeterContract.NewGreeterCreated.loader(({ event, context }) => {
context.contractRegistration.addGreeter(event.params.newGreeter);
});

After:

Greeter.NewGreeterCreated.contractRegister(({ event, context }) => {
context.addGreeter(event.params.newGreeter);
});

4. Handling Entities​

Before

const greetingInstance: GreetingEntity = {
...currentGreeting,
// ...loaderReturn
};
context.Greeting.set(greetingInstance);

After

const greetingInstance: Greeting = {
...currentGreeting,
// ...
};
context.Greeting.set(greetingInstance);

The only change is in the TypeScript/ReScript type for the entity 💪

5. Accessing Loaded Data​

Access data via asynchronous get functions:​

Before:

let currentEntity = context.Entity.get(event.srcAddress.toString());

After:

let currentEntity = await context.Entity.get(event.srcAddress.toString());

Access loaded data through the loaderReturn if you are using loaders:​

Before:

let currentEntity = context.Entity.get(event.srcAddress.toString());

After:

const { currentEntity } = loaderReturn;

6. Loading Linked Entities​

Before:

GreeterContract.Event1.loader(({ event, context }) => {
context.Entity.load(event.srcAddress.toString(), {
loadField1: true,
loadField2: true,
});
});

After:

Greeter.Event1.handlerWithLoader({
loader: async ({ event, context }) => {
const currentEntity = await context.Entity.get(event.srcAddress.toString());
if (currentEntity == undefined) return null;

const field1Instance = await context.Entity.getField1(
currentEntity.field1_id
);
const field2Instance = await context.Entity.getField2(
currentEntity.field2_id
);

return { currentEntity, field1Instance, field2Instance };
},
});

7. Config File Changes​

Before:

contracts:
- name: Greeter
sameRandomFieldThatIsntPartOfSchema: true
handler: src/EventHandlers.ts
events:
- event: Greet(address indexed recipient, string greeting)
isAsync: true
requiredEntities:
- name: User
label: recipient
- name: Greetings
arrayLabels: previousGreetings

After:

contracts:
- name: Greeter
handler: src/EventHandlers.ts
events:
- event: Greet(address indexed recipient, string greeting)

8. Event Fields​

Before (v1):

GreeterContract.Event1.handler(({ event, context }) => {
console.log(
"The event timestamp and block number",
event.txOrigin,
event.txTo,
event.transactionHash,
event.transactionIndex,
event.blockNumber,
event.blockTimestamp,
event.blockHash
);
});

After (v2):

Greeter.Event1.handler(async ({ event, context }) => {
// NOTE: these fields are in the loader and the contractRegister function too
console.log(
"The event timestamp and block number",
event.transaction.from,
event.transaction.to,
event.transaction.hash,
event.transaction.transactionIndex,
event.block.number,
event.block.timestamp,
event.block.hash
);
});

And in your config.yaml file:

field_selection:
transaction_fields:
- "from"
- "to"
- "hash"
- "transactionIndex"

Examples​

As we upgrade public repos on GitHub, we'll add the commits of the upgrade to this page for reference:

  • Velodrome Indexer Upgrade Commit

Additional Tips​

  • Make sure to thoroughly test your migrated code to catch any issues that might arise from the asynchronous nature of the new handlers.

  • If performance isn't a massive concern, you can simply use the handler function without a loader.


Understanding and Handling Chain Reorganizations​

File: Advanced/reorgs-support.md

What Are Chain Reorganizations?​

Chain reorganizations (reorgs) occur when the blockchain temporarily forks and then resolves to a single chain, causing some previously confirmed blocks to be replaced by different blocks. This is a normal part of blockchain consensus mechanisms, especially in proof-of-work chains.

When a reorg happens:

  • Transactions that were previously considered confirmed may be dropped
  • New transactions may be added to the blockchain
  • The order of transactions might change

For indexers, this presents a challenge: data that was previously indexed may no longer be valid, requiring a rollback and reprocessing of the affected blocks.

Automatic Reorg Handling in HyperIndex​

HyperIndex includes built-in support for handling chain reorganizations, ensuring your indexed data remains consistent with the blockchain's canonical state. This feature is enabled by default to protect your data integrity.

Configuration Options​

Enabling or Disabling Reorg Support​

You can control reorg handling through the rollback_on_reorg flag in your config.yaml file:

# Enable reorg handling (default)
rollback_on_reorg: true
networks:
# network configurations...

# OR

# Disable reorg handling (not recommended for production)
rollback_on_reorg: false
networks:
# network configurations...

Configuring Confirmation Thresholds​

You can customize the number of blocks required before considering a block "confirmed" and no longer subject to reorgs:

rollback_on_reorg: true
networks:
- id: 137 # Polygon
confirmed_block_threshold: 150
- id: 1 # Ethereum
# Using default threshold

The confirmed_block_threshold defines how many blocks below the chain head are considered safe from reorganizations. Any reorg deeper than this threshold won't trigger a rollback in your indexer.

Default Confirmation Thresholds​

Currently, all chains default to a threshold of 200 blocks. In future releases, these thresholds will be tailored per chain based on their specific characteristics and historical reorg depths.

| Network Type | Default Threshold | Notes |
| --- | --- | --- |
| All Networks | 200 blocks | Will be customized per chain in future releases |

Technical Details and Limitations​

Guaranteed Detection​

Reorg detection is guaranteed when using HyperSync as your data source. HyperSync's architecture ensures that any reorganization in the blockchain will be properly detected and handled.

RPC Limitations​

When using a custom RPC endpoint as your data source, there are some edge cases where reorgs might go undetected, depending on the RPC provider's implementation and your indexing pattern.

Scope of Rollbacks​

During a reorg-triggered rollback:

✅ What is rolled back:

  • All entities defined in your schema
  • All data that your handlers read or write to the database

❌ What is not rolled back:

  • Side effects in your handler code (API calls, external services)
  • Custom caching mechanisms outside of HyperIndex
  • Logs or external files written by your handlers

Best Practices​

  1. Keep reorg support enabled for production indexers
  2. Use HyperSync when possible for guaranteed reorg detection
  3. Avoid external side effects in your handlers that cannot be rolled back
  4. Consider higher thresholds for high-value applications or networks with historically deep reorgs

Example Configuration​

Here's a complete example showing reorg handling configuration for multiple networks:

rollback_on_reorg: true
networks:
- id: 1 # Ethereum Mainnet
  confirmed_block_threshold: 250 # Higher threshold for Ethereum
  # other network config...

- id: 137 # Polygon
  confirmed_block_threshold: 150 # Lower threshold for Polygon
  # other network config...

- id: 42161 # Arbitrum One
  # Using default threshold (200)
  # other network config...

By properly configuring reorg support, you ensure that your indexed data remains consistent with the blockchain, even when the chain reorganizes.


Understanding Generated Indexing Files​

File: Advanced/generated-files.md

Overview​

The /generated directory contains files automatically created by Envio's code generation system. These files form the backbone of your indexer's runtime operations, translating your configuration, schema, and event handlers into executable code that processes blockchain data.

Important: Generated files should never be manually edited. Any changes will be overwritten the next time code generation runs.

Purpose of Generated Files​

Generated files serve several critical functions:

  1. Type-Safe Data Access - They provide strongly-typed interfaces to interact with your defined entities
  2. Event Processing - They contain the logic to decode and process contract events
  3. Database Interactions - They manage database operations for storing and retrieving indexed data
  4. Runtime Orchestration - They coordinate the indexing workflow

Real-World Example: Uniswap V4 Indexer​

Let's examine how specific elements from a real Uniswap V4 indexer translate into generated files:

From Schema to Generated Types​

For a schema entity like this:

type Pool {
id: ID!
chainId: BigInt!
currency0: String!
currency1: String!
fee: BigInt!
tickSpacing: BigInt!
hooks: String!
numberOfSwaps: BigInt! @index
createdAtTimestamp: BigInt!
createdAtBlockNumber: BigInt!
}

The codegen process generates:

  1. Type Definition in EntityModels.res:

    type pool = {
    id: string,
    chainId: BigInt.t,
    currency0: string,
    currency1: string,
    fee: BigInt.t,
    tickSpacing: BigInt.t,
    hooks: string,
    numberOfSwaps: BigInt.t,
    createdAtTimestamp: BigInt.t,
    createdAtBlockNumber: BigInt.t,
    }
  2. Constructor Function:

    let makePool = (
    ~id: string,
    ~chainId: BigInt.t,
    ~currency0: string,
    ~currency1: string,
    ~fee: BigInt.t,
    ~tickSpacing: BigInt.t,
    ~hooks: string,
    ~numberOfSwaps: BigInt.t,
    ~createdAtTimestamp: BigInt.t,
    ~createdAtBlockNumber: BigInt.t,
    ) => {
    {
    id,
    chainId,
    currency0,
    currency1,
    fee,
    tickSpacing,
    hooks,
    numberOfSwaps,
    createdAtTimestamp,
    createdAtBlockNumber,
    }
    }
  3. Database Functions in Queries.res:

    let getPoolById = (id: string): option<EntityModels.pool> => {
    // Database retrieval logic
    }

    let savePool = (entity: EntityModels.pool): unit => {
    // Database save logic
    }

From Config to Generated Event Handlers​

Given a contract event in config.yaml:

contracts:
- name: PoolManager
  handler: src/EventHandlers.ts
  events:
    - event: Swap(bytes32 indexed id, address indexed sender, int128 amount0, int128 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick, uint24 fee)

The codegen process generates handler wrappers like:

// In Handlers.res
let handlePoolManager_Swap = (
~blockHeader: Types.blockHeader,
~txHash: string,
~logIndex: int,
~id: string,
~sender: string,
~amount0: BigInt.t,
~amount1: BigInt.t,
~sqrtPriceX96: BigInt.t,
~liquidity: BigInt.t,
~tick: BigInt.t,
~fee: BigInt.t,
): unit => {
// Call the user-defined handler
let context = makeEventContext(
~blockHeader,
~txHash,
~logIndex,
~contractName="PoolManager",
~eventName="Swap"
)
UserHandlers.handlePoolManager_Swap(
~context,
~id,
~sender,
~amount0,
~amount1,
~sqrtPriceX96,
~liquidity,
~tick,
~fee,
)
}

From Multi-Network Config to Generated Network Handlers​

Your config has multiple networks:

networks:
- id: 1 # Ethereum Mainnet
# ...
- id: 10 # Optimism
# ...
- id: 42161 # Arbitrum
# ...

The generated code will include configuration parsing that handles all three networks:

// In Config.res
let networks = [
{
id: 1,
name: "ethereum-mainnet",
startBlock: 0,
contracts: [
{
name: "PositionManager",
addresses: ["0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"],
// ...
},
{
name: "PoolManager",
addresses: ["0x000000000004444c5dc75cB358380D2e3dE08A90"],
// ...
},
],
},
{
id: 10,
name: "optimism",
startBlock: 0,
// Similar contract configuration for Optimism
},
{
id: 42161,
name: "arbitrum-one",
startBlock: 0,
// Similar contract configuration for Arbitrum
},
]

When to Run Code Generation​

Run code generation using the Envio CLI:

pnpm envio codegen

Codegen should be run after:

  1. Modifying your config.yaml file
  2. Changing your GraphQL schema
  3. Adding or updating event handlers
  4. Switching to a new contract or ABI
  5. Pulling changes from version control

Troubleshooting Generation Errors​

When code generation fails, the errors typically point to issues in your setup files. Here are common error patterns and their solutions:

Configuration Errors​

Error messages containing Config validation failed typically mean there's an issue in your config.yaml file:

  • Check for syntax errors in YAML formatting
  • Verify that all required fields are present
  • Ensure contract addresses are in the correct format
  • Confirm that referenced networks are valid

For example, if you see an error about invalid network IDs, check that all network IDs in your config are valid:

networks:
- id: 1 # Valid Ethereum mainnet
- id: 10 # Valid Optimism
- id: 999 # Might be invalid if this network ID isn't recognized

Schema Errors​

Errors mentioning Schema parsing error point to issues in your GraphQL schema:

  • Check for invalid GraphQL syntax
  • Ensure entity names match those referenced in handlers
  • Verify that relationships between entities are properly defined
  • Check for unsupported types or directives

For example, if you're using the @index directive as in your Pool entity's numberOfSwaps field, make sure it's correctly placed:

type Pool {
id: ID!
numberOfSwaps: BigInt! @index # Correct placement of @index directive
}

Handler Errors​

If you see Handler validation failed errors:

  • Check that handler function signatures match expected patterns
  • Ensure all referenced entities exist in your schema
  • Verify proper import syntax for entities and contract events
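
For reference, a minimal TypeScript handler whose signature matches the expected pattern is sketched below. The ERC20 contract and Account entity are illustrative and must correspond to names declared in your config.yaml and schema.graphql, and the import path assumes the standard generated code module:

import { ERC20 } from "generated";

ERC20.Transfer.handler(async ({ event, context }) => {
  // The entity accessed on `context` must exist in schema.graphql
  context.Account.set({
    id: event.params.to,
    balance: event.params.value,
  });
});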

Relationship with Setup Files​

The generated files directly reflect the structure defined in your setup files:

  • config.yaml β†’ Determines which networks, contracts, and events are indexed
  • schema.graphql β†’ Defines the entities and relationships that are generated
  • EventHandlers β†’ Provides the business logic that the generated code wraps

Best Practices​

  1. Never modify generated files directly - Always change the source files
  2. Run codegen before starting your indexer - Ensure all files are up to date
  3. Check error messages carefully - They often pinpoint issues in your setup files

Summary​

Generated files form the critical bridge between your indexing specifications and the actual runtime execution. While you shouldn't modify them directly, understanding their structure and purpose can help you debug issues and optimize your indexing process.

If you encounter persistent errors in generated files, ensure your configuration, schema, and handlers follow Envio's best practices, or contact support for assistance.


HyperIndex Terminology & Key Concepts​

File: Advanced/terminology.md

This comprehensive glossary explains the key terms and concepts used throughout the Envio documentation and ecosystem. Terms are organized by category for easier reference.

Table of Contents​

  • Blockchain Fundamentals
  • Smart Contract Concepts
  • Indexing & Data
  • Development Tools
  • Programming Languages
  • Envio Platform
  • Mathematical Concepts

Blockchain Fundamentals​

Address​

A unique identifier representing an account or entity within a blockchain network. Addresses are typically represented as hexadecimal strings (e.g., 0x1234...abcd) and used to send, receive, or interact with blockchain resources.

Block​

A collection of data containing a set of transactions that are bundled together and added to the blockchain. Blocks are linked together chronologically to form the blockchain.

EVM​

Ethereum Virtual Machine (EVM) is a runtime environment that executes smart contracts on the Ethereum blockchain. It provides a sandboxed and deterministic execution environment for smart contract code.

EVM Compatible​

The ability for a blockchain to run the EVM and execute Ethereum smart contracts. In the context of Envio, it's the ability to deploy a unified API to retrieve data from multiple EVM-compatible blockchains (e.g., Ethereum, BSC, Arbitrum, Polygon, Avalanche, Optimism, Fantom, Cronos, etc.).

Node​

A device or computer that participates in a blockchain network, maintaining a copy of the blockchain and validating transactions.

Transaction​

An action or set of actions recorded on the blockchain, typically involving the transfer of assets, execution of smart contracts, or other network interactions. Once confirmed, transactions become a permanent part of the blockchain.

Smart Contract Concepts​

Event​

A specific occurrence or action within a blockchain system that is specified in smart contracts and used to emit data from the blockchain. Smart contracts can emit events to essentially communicate that something has happened on the blockchain.

Web applications or any kind of application (e.g., mobile app, backend job, etc.) can listen to events and take actions when they occur. Event data is typically not stored in contract state, since doing so would be considerably more expensive.

Example:

Declaring an event:

event Deposit(address indexed _from, bytes32 indexed _id, uint _value);

Emitting an event:

emit Deposit(msg.sender, _id, msg.value);

Event Handler​

A function that listens for a specific event from a smart contract and either updates or inserts new data into your Envio API. Event handlers define the business logic for processing blockchain events.

Smart Contract​

A self-executing program with the terms of an agreement directly written into code that runs on the blockchain. Smart contracts are not controlled by a user but are deployed to the network and run as programmed. User accounts can interact with smart contracts by submitting transactions that execute defined functions.

Tokens​

Digital representations of assets or utilities within a blockchain system that follow a specific standard. Common token standards include:

  • ERC-20: Standard for fungible tokens (identical and interchangeable)
  • ERC-721: Standard for non-fungible tokens (unique and non-interchangeable)
  • ERC-1155: Multi-token standard supporting both fungible and non-fungible tokens

Indexing & Data​

API​

Application Programming Interface is a set of protocols and tools for building software applications. APIs define how different software components should interact with each other.

Endpoint​

A URL that can be used to query an Envio custom API. Endpoints provide a structured way to request specific data from the indexer.

GraphQL​

A query language for interacting with APIs, commonly used in blockchain systems for retrieving specific data from blockchain platforms. As an alternative to REST, GraphQL lets developers construct requests that pull data from multiple data sources in a single API call.

GraphQL API​

The data presentation part of an Envio indexer. Typically, it's a GraphQL API auto-generated from the schema file, allowing flexible and efficient data queries.

Indexer​

A specialized database management system (DBMS) that indexes and organizes blockchain data, making it easier for developers to efficiently query, retrieve, and utilize on-chain data.

Web2 apps usually rely on indexers like Google to pre-sort information into indices for data retrieval and filtering. In blockchain and Web3, applications need indexers to achieve similar data retrieval capabilities.

Query​

A request for data. In the context of Envio, a query is a request for data from an Envio API that will be answered by an Envio Indexer.

Schema File​

A file that defines entities based on events emitted from smart contracts and specifies the data types for these entities. The schema serves as the blueprint for your indexed data structure.

Development Tools​

Codegen​

The process of automatically generating code based on a given input. In blockchain development, codegen is often used for generating client libraries, interfaces, or type-safe data access layers from schemas or specifications.

Envio CLI​

A command line interface tool for building and deploying Envio indexers. The CLI provides commands for initializing, developing, and managing your indexer projects.

SDK​

Software Development Kit is a collection of tools, libraries, and documentation that facilitates the development of applications for a specific platform or system.

Programming Languages​

JavaScript​

A high-level, interpreted programming language primarily used for client-side scripting in web browsers. It is the de facto language for web development, enabling developers to create interactive and dynamic web applications.

ReScript​

A robustly typed language that compiles to efficient and human-readable JavaScript. ReScript aims to bring the power and expressiveness of functional programming to JavaScript development. It offers seamless integration with JavaScript and provides features like static typing, pattern matching, and immutable data structures.

TypeScript​

A superset of JavaScript that adds static typing and other advanced features to the language. It compiles down to plain JavaScript, making it compatible with existing JavaScript codebases. TypeScript helps developers catch errors during development by providing type-checking and improved tooling support. It enhances JavaScript by adding features like interfaces, classes, modules, and generics.

Envio Platform​

Hosted Service​

A managed service platform for building, hosting, and querying Envio's Indexers with guaranteed uptime and performance service level agreements. The Hosted Service removes the operational burden of running indexers.

Ploffen​

Ploffen (meaning "Pop" in Dutch) is a fun game based on an ERC20 token contract, where users can deposit a game token (i.e., make a contribution) into a savings pool.

The last user to add a contribution to the savings pool has a chance of winning the entire pool if no other user deposits a contribution within 1 hour of the previous contribution. For example, if 30 persons play the game, and each person contributes a small amount, the last person can win the total contributions made by all 30 persons in the savings pool.

The Ploffen project demonstrates a Hardhat framework example. It includes a sample contract, a test for that contract, a deployment script, and the Envio integration to index emitted events from the Ploffen smart contract.

Mathematical Concepts​

Commutative Property​

A fundamental property of certain binary operations in mathematics. An operation is said to be commutative if the order in which you apply the operation to two operands does not affect the result. In other words, for a commutative operation:

a + b = b + a

Examples of commutative operations:

  1. Addition: 2 + 3 = 3 + 2
  2. Multiplication: 2 × 3 = 3 × 2

Examples of non-commutative operations:

  1. Subtraction: 5 - 3 β‰  3 - 5
  2. Division: 8 / 4 β‰  4 / 8
  3. String Concatenation: "Hello" + "World" β‰  "World" + "Hello"

The commutative property is a property of the operation itself, not necessarily the numbers involved. If an operation is commutative, you can switch the order of the operands without changing the result.


Optimizing Database Access with Loaders​

File: Advanced/loaders.md

What Are Loaders?​

Loaders are specialized functions that dramatically optimize how your event handlers fetch data from the database. They provide a powerful mechanism to:

  • Batch multiple database requests into single operations
  • Cache database results in memory for instant access
  • Reduce I/O operations, which are typically the primary performance bottleneck in indexing

By using loaders, you can reduce database roundtrips from thousands to just a handful, especially when processing large batches of events.

Why Use Loaders?​

The Database I/O Problem​

Consider this common pattern in event handlers:

// Without loaders: Inefficient database access
ERC20.Transfer.handler(async ({ event, context }) => {
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);

// Process the transfer...
});

The Performance Challenge: If you're processing 5,000 transfer events, each with unique from and to addresses, this results in 10,000 total database roundtrips, one for each sender and receiver lookup (2 per event × 5,000 events). This creates a significant bottleneck that slows down your entire indexing process.

The External Calls Problem​

To ensure consistent and reliable data, all handlers execute synchronously in on-chain order. This means external calls can dramatically increase processing time:

// Without loaders: Blocking external calls
ERC20.Transfer.handler(async ({ event, context }) => {
const metadata = await fetch(
`https://api.example.com/metadata/${event.params.from}`
);

// Process the transfer...
});

The Performance Challenge: If you're processing 5,000 transfer events, each with an external call, this results in 5,000 sequential external calls, each waiting for the previous one to complete. This can turn a fast indexing process into a slow, sequential crawl.

How Loaders Solve This​

Loaders address these performance bottlenecks through intelligent optimization:

  1. Collect all database and Effect requests before processing events
  2. Batch similar requests into single I/O operations
  3. Cache results in memory for efficient reuse

This approach reduces thousands of database calls to just a handful per batch, dramatically improving indexing performance. When combined with the Effect API, you can also parallelize external calls for even greater efficiency.

How to Implement Loaders​

Basic Structure​

Loaders use the handlerWithLoader pattern, which elegantly separates data loading from event processing:

ContractName.EventName.handlerWithLoader({
// The loader function runs before event processing starts
loader: async ({ event, context }) => {
// Load all required data from the database
// Return the data needed for event processing
},

// The handler function processes each event with pre-loaded data
handler: async ({ event, context, loaderReturn }) => {
// Process the event using the data returned by the loader
},
});

Basic Example: Converting a Simple Handler​

Let's convert our previous inefficient example to use loaders:

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
// Load sender and receiver accounts efficiently
const sender = await context.Account.get(event.params.from);
const receiver = await context.Account.get(event.params.to);

// Return the loaded data to the handler
return {
sender,
receiver,
};
},

handler: async ({ event, context, loaderReturn }) => {
const { sender, receiver } = loaderReturn;

// Process the transfer with the pre-loaded data
// No database lookups needed here!
},
});

How Batching Works​

The batching process follows these three key steps:

  1. Batch Creation: HyperIndex creates an ordered batch of events from the event buffer accumulated in memory

  2. Preload Phase: All loader functions run concurrently for the entire batch. This is called the Preload phase.

    Note: During the preload phase, some entities being loaded may not exist yet since the handlers haven't been executed. This is expected behavior - the loader runs twice per event to ensure data consistency.

  3. Sequential Processing: For each event in the batch, its loader runs a second time and then passes the result to the handler. This step is sequential to maintain order.

For our 5,000 transfer events example, this reduces database roundtrips from 10,000 individual calls to just 2: all of the sender lookups in the batch are combined into one query, and all of the receiver lookups into another.

Advanced Loader Techniques​

Optimizing for Concurrency​

You can further optimize performance by requesting multiple entities concurrently:

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
// Request sender and receiver concurrently for maximum efficiency
const [sender, receiver] = await Promise.all([
context.Account.get(event.params.from),
context.Account.get(event.params.to),
]);

return { sender, receiver };
},

handler: async ({ event, context, loaderReturn }) => {
const { sender, receiver } = loaderReturn;
// Process with pre-loaded data
},
});

This approach can reduce the database roundtrips to just 1 for the entire batch of events!

Querying by Field Values​

Loaders also support complex queries using the getWhere method, which allows you to retrieve arrays of entities based on field values:

ERC20.Approval.handlerWithLoader({
loader: async ({ event, context }) => {
// Find all approvals for this specific owner
const currentOwnerApprovals = await context.Approval.getWhere.owner_id.eq(
event.params.owner
);

return { currentOwnerApprovals };
},

handler: async ({ event, context, loaderReturn }) => {
const { currentOwnerApprovals } = loaderReturn;

// Process all the owner's approvals efficiently
for (const approval of currentOwnerApprovals) {
// Process each approval
}
},
});

This technique works with any entity field that:

  • Is used in a relationship with the @derivedFrom directive
  • Has an @index directive

Effect API (experimental)

The Effect API provides a powerful and convenient way to perform external calls from your handlers. It's especially effective when used with loaders:

  • Automatic batching: Calls of the same kind are automatically batched together
  • Intelligent memoization: Calls are memoized, so you don't need to worry about the handler function being called multiple times
  • Deduplication: Calls with the same arguments are deduplicated to prevent overfetching
  • Future enhancements: We're working on automatic retry logic and result persistence for indexer reruns

To use the Effect API, you first need to define an effect using experimental_createEffect function from the envio package:


import { experimental_createEffect, S } from "envio";

export const getMetadata = experimental_createEffect(
{
name: "getMetadata",
input: S.string,
output: {
description: S.string,
value: S.bigint,
},
},
async ({ input, context }) => {
const response = await fetch(`https://api.example.com/metadata/${input}`);
const data = await response.json();
context.log.info(`Fetched metadata for ${input}`);
return {
description: data.description,
value: data.value,
};
}
);

The first argument is an options object that describes the effect:

  • name (required) - the name of the effect used for debugging and logging
  • input (required) - the input type of the effect
  • output (required) - the output type of the effect

The second argument is a function that will be called with the effect's input.

Note: For type definitions, you should use S from the envio package, which uses Sury library under the hood.

After defining an effect, you can use context.effect to call it from your handler, loader, or another effect.

The context.effect function accepts an effect as the first argument and the effect's input as the second argument:

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
const metadata = await context.effect(getMetadata, event.params.from);
return { metadata };
},

handler: async ({ event, context, loaderReturn }) => {
const { metadata } = loaderReturn;
// Process the event with the metadata
},
});

This way, for our example of 5,000 transfer events, all external calls can run in parallel instead of executing one by one.

Viem Pro Tip​

You can use viem or any other blockchain client inside your effect functions. When doing so, it's highly recommended to enable the batch option to group all effect calls into fewer RPC requests:

import { createPublicClient, getContract, http, erc20Abi } from "viem";
import { mainnet } from "viem/chains";
import { experimental_createEffect, S } from "envio";

// RPC endpoint used by the client (point this at your provider)
const rpcUrl = process.env.RPC_URL;

// Create a public client to interact with the blockchain
const client = createPublicClient({
chain: mainnet,
// Enable batching to group calls into fewer RPC requests
transport: http(rpcUrl, { batch: true }),
});

// Get the contract instance for your contract
const lbtcContract = getContract({
abi: erc20Abi,
address: "0x8236a87084f8B84306f72007F36F2618A5634494",
client: client,
});

// Effect to get the balance of a specific address at a specific block
export const getBalance = experimental_createEffect(
{
name: "getBalance",
input: {
address: S.string,
blockNumber: S.optional(S.bigint),
},
output: S.bigint,
},
async ({ input, context }) => {
try {
// If blockNumber is provided, use it to get balance at that specific block
const options = input.blockNumber
? { blockNumber: input.blockNumber }
: undefined;
const balance = await lbtcContract.read.balanceOf(
[input.address as `0x${string}`],
options
);

return balance;
} catch (error) {
context.log.error(`Error getting balance for ${input.address}: ${error}`);
// Return 0 on error to prevent processing failures
return BigInt(0);
}
}
);

Why Experimental?​

The Effect API is currently marked as experimental, but we don't expect breaking changes in the future. This designation simply means we're actively iterating on the feature and may add new capabilities that could subtly change indexer behavior. We plan to remove the experimental tag soon, and your feedback is invaluable in this process!

Best Practices​

When to Use Loaders​

Loaders provide the most significant benefits when:

  • Processing large batches of events that require similar database lookups
  • Reading the same entities multiple times across different events
  • Performing relationship queries that affect multiple entities
  • Building high-performance indexers that need to handle millions of events

Performance Considerations​

When using loaders, keep these performance factors in mind:

  • Memory Usage: All loaded entities are stored in memory during batch processing
  • Query Size: Very large getWhere queries might cause memory issues
  • Complexity: Balance the benefits of batching against code complexity
  • Batch Size: Larger batches provide better performance but use more memory

Rules of Thumb​

Follow these guidelines for optimal loader usage:

  1. Use loaders for large-scale indexing: Implement loaders if you're indexing more than 1 million events
  2. Centralize database operations: Put all database operations in the loader function
  3. Wrap external calls in effects: Use the Effect API for external calls and implement them in loaders
  4. Keep handlers focused: Reserve handler functions for business logic, not data fetching
  5. Optimize with concurrency: Use concurrent requests when loading multiple unrelated entities
  6. Monitor memory usage: Be mindful of memory consumption with large batches

Understanding Double Run Behavior​

Loaders are designed to run twice per event to ensure data consistency across the batch. This is intentional and expected behavior:

  1. First Run (Preload Phase): All loaders run concurrently at the start of batch processing
  2. Second Run (Event Processing): Each loader runs again sequentially before its corresponding handler

This double execution pattern ensures that:

  • Entities created by earlier events in the batch are available to later events
  • Data consistency is maintained across the entire batch
  • The benefits of batching are preserved while ensuring accurate data access

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
// This loader will run twice per event
// First run: May not find the sender if it's created by an earlier event in this batch
// Second run: Will find the sender if it was created by an earlier event
const sender = await context.Account.getOrThrow(event.params.from);

return {
sender,
};
},

handler: async ({ event, context, loaderReturn }) => {
const { sender } = loaderReturn;
// Process the event with the loaded sender data
},
});

The indexer will only crash if the sender entity was actually not set in an event preceding the one being processed, which indicates a genuine data consistency issue.
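
If an entity may legitimately not exist yet (for example, the first transfer involving an address), use get instead of getOrThrow and fall back to a default. A minimal sketch, reusing the illustrative Account entity from the examples above:

ERC20.Transfer.handlerWithLoader({
  loader: async ({ event, context }) => {
    // Returns undefined instead of throwing when the account is missing
    const sender = await context.Account.get(event.params.from);
    return { sender };
  },

  handler: async ({ event, context, loaderReturn }) => {
    const sender = loaderReturn.sender ?? { id: event.params.from, balance: 0n };
    // Process the event with either the stored sender or the default
  },
});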

Preload Phase Behavior​

Starting from envio@2.23.0, the preload phase has been enhanced to prevent batch processing failures. During the first run (preload phase), the following operations are silently ignored:

  • Thrown exceptions: Any errors thrown during the preload phase are caught and ignored
  • Entity setting: Calls to context.Entity.set() and other entity operations are ignored during preload
  • Logging: Calls to context.log.*() are ignored during preload

Only during the second run (actual event processing) are all operations fully enabled:

  • Exceptions will crash the indexer if not handled
  • Entity setting operations will persist to the database
  • Logging will output to the console

This design ensures that the preload phase can safely attempt to load data that may not exist yet, while the actual processing phase handles all operations normally.

If you're using an earlier version of envio, we strongly recommend upgrading to the latest version with pnpm install envio@latest to take advantage of this improved preload phase behavior.

Going All-In with Loaders​

Starting from envio@2.23.0, loaders support setting entities directly, making handlers optional. You can now process all your events entirely within the loader function:

ERC20.Transfer.handlerWithLoader({
loader: async ({ event, context }) => {
// Load existing data efficiently
const [sender, receiver] = await Promise.all([
context.Account.getOrThrow(event.params.from),
context.Account.getOrThrow(event.params.to),
]);

// Skip expensive operations during preload
if (context.isPreload) {
return;
}

// CPU-intensive calculations only happen once
const complexCalculation = performExpensiveOperation(event.params.value); // Placeholder function for demonstration

// Create or update sender account
context.Account.set({
id: event.params.from,
balance: sender.balance - event.params.value,
computedValue: complexCalculation,
});

// Create or update receiver account
context.Account.set({
id: event.params.to,
balance: receiver.balance + event.params.value,
});

// No need to return anything - all work is done in the loader
},
handler: async ({ event, context, loaderReturn }) => {
// Handler can be empty - all logic is handled in the loader
},
});

For power users, we recommend moving all event processing to the loader function to take advantage of this enhanced functionality. This approach will be the most compatible with future Envio versions.

If you need to skip the preload phase for CPU-intensive operations or to perform certain actions only once, you can use context.isPreload. This allows you to replicate the traditional handler behavior within a loader.

Note: While context.isPreload can be useful for bypassing double execution, it's recommended to use the Effect API for external calls instead, as it provides automatic batching and memoization benefits.

Future: Version 3.0 Unified Handler Behavior​

In Envio V3, the separation between handlers and loaders will be removed entirely. All handlers will behave like loaders by default, running twice per event to ensure data consistency. This change will:

  • Make many indexers faster by default without requiring explicit configuration
  • Simplify the mental model (similar to React's double-render pattern)
  • Provide the benefits of loaders without requiring explicit configuration

The double execution pattern should be familiar to JavaScript/React developers and will be thoroughly documented to help users understand this new behavior.

Summary​

Loaders provide a powerful and efficient way to optimize database access in your Envio indexers by:

  • Dramatically reducing database roundtrips from thousands to just a few per batch
  • Automatically batching similar requests for maximum efficiency
  • Caching results in memory for instant access during processing

By elegantly separating data loading from event processing, loaders enable you to build more efficient and performant indexers while maintaining clean, readable code. Whether you're processing thousands or millions of events, loaders can transform your indexing performance from slow and sequential to fast and parallel.


Optimizing Database Performance in HyperIndex​

File: Advanced/performance/database-performance-optimization.md

Introduction​

As your indexed data grows, database performance becomes critical to maintaining responsive queries and efficient operations. This guide explains how to optimize your HyperIndex database through strategic indexing and schema design to ensure your applications remain fast even as data volume increases.

Understanding Database Indices​

Database indices are special data structures that improve the speed of data retrieval operations. Think of them like the index at the back of a book: rather than scanning every page (row) to find what you're looking for, indices allow the database to quickly locate the relevant data.

Why Indices Matter​

Without proper indices, your database must perform "full table scans" when searching for data, examining every row to find matches. As your data grows, this becomes increasingly inefficient:

| Data Size          | Without Indices | With Proper Indices |
| ------------------ | --------------- | ------------------- |
| 1,000 records      | ~10ms           | ~1ms                |
| 100,000 records    | ~500ms          | ~2ms                |
| 1,000,000+ records | 5+ seconds      | ~5ms                |

Note: Actual performance varies based on hardware, query complexity, and database load.

Creating Custom Indices in Your Schema​

HyperIndex provides several ways to define indices in your GraphQL schema, giving you control over database performance.

Single-Column Indices​

The simplest form of indexing is on individual fields using the @index directive:

type Transaction {
id: ID!
userAddress: String! @index
tokenAddress: String! @index
amount: BigInt!
timestamp: BigInt! @index
}

In this example:

  • Queries filtering on userAddress (e.g., finding all transactions for a user)
  • Queries filtering on tokenAddress (e.g., finding all transactions for a token)
  • Queries filtering on timestamp (e.g., finding transactions in a date range)

All become significantly faster because the database can use the indices to quickly locate matching records.

Composite Indices for Multi-Field Queries​

When you frequently query using multiple fields together, composite indices provide better performance:

type Transfer @index(fields: ["from", "to", "tokenId"]) {
id: ID!
from: String! @index
to: String! @index
tokenId: BigInt!
value: BigInt!
timestamp: BigInt!
}

This creates:

  1. Individual indices on from and to fields
  2. A composite index on the combination of from, to, and tokenId

Composite indices are particularly valuable for complex queries that filter on multiple columns simultaneously, such as "find all transfers from address X to address Y for token Z."

Automatic Indices​

HyperIndex automatically creates indices for:

  • All ID fields
  • All fields marked with @derivedFrom

There's no need to manually add indices for these fields.

Strategic Indexing: When to Use Each Type​

When to Use Single-Column Indices​

Use single-column indices when:

  • You frequently filter by a specific field
  • You sort results by a specific field
  • The field has high "cardinality" (many different values)

Example use case: Indexing userAddress in a transaction table when users frequently look up their transaction history.

When to Use Composite Indices​

Use composite indices when:

  • You frequently query using multiple fields together
  • Your queries consistently filter on the same combination of fields
  • You need to optimize complex queries with multiple conditions

Example use case: Indexing (tokenAddress, timestamp) together when users frequently view token transaction history within specific time ranges.
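
Continuing that example, the composite index can be declared directly on the entity using the same @index(fields: [...]) syntax shown earlier (the entity and field names here are illustrative):

type TokenTransaction @index(fields: ["tokenAddress", "timestamp"]) {
  id: ID!
  tokenAddress: String! @index
  timestamp: BigInt! @index
  amount: BigInt!
}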

Performance Tradeoffs​

While indices improve query performance, they come with tradeoffs:

Write Performance Impact​

Each index requires additional updates when data is inserted or modified:

  • No indices: Fastest write performance, but slow reads
  • Few targeted indices: Slight write slowdown (5-10%), much faster reads
  • Many indices: Noticeable write slowdown (15%+), fastest possible reads

For most applications, the read performance benefits outweigh the write performance costs, especially since blockchain data is primarily read-intensive.

Storage Considerations​

Indices increase database storage requirements:

  • Each index typically requires 2-10 bytes per row
  • For large datasets (millions of records), this can add up
  • Consider storage requirements when designing indices for very large tables

Practical Examples​

Optimizing Token Transfer Queries​

Consider a token transfer entity:

type TokenTransfer {
id: ID!
token: Token! @index
from: String! @index
to: String! @index
amount: BigInt!
blockNumber: BigInt! @index
timestamp: BigInt! @index
}

With this schema, the following queries will be optimized:

  • Find all transfers for a specific token
  • Find all transfers from a specific address
  • Find all transfers to a specific address
  • Find transfers within a specific block range
  • Find transfers within a specific time range

Optimizing Complex NFT Marketplace Queries​

For an NFT marketplace with listings and sales:

type NFTListing
@index(fields: ["collection", "status", "price"])
@index(fields: ["seller", "status"]) {
id: ID!
collection: String! @index
tokenId: BigInt!
seller: String! @index
price: BigInt!
status: String! @index # "active", "sold", "cancelled"
createdAt: BigInt! @index
}

This schema efficiently supports:

  • Finding all active listings for a collection, sorted by price
  • Finding all listings by a specific seller with a specific status
  • Finding recently created listings across all collections

Optimizing GraphQL Queries​

Beyond schema design, how you write your GraphQL queries affects performance:

Fetch Only What You Need​

Request only the fields you actually need:

# Good
query {
tokenTransfers(first: 10, where: { token: "0x123" }) {
id
amount
}
}

# Bad - fetches unnecessary fields
query {
tokenTransfers(first: 10, where: { token: "0x123" }) {
id
amount
from
to
timestamp
blockNumber
transactionHash
# other fields you don't need
}
}

Use Pagination for Large Result Sets​

Always paginate large result sets:

query {
tokenTransfers(
first: 20
skip: 40 # Skip first 40 results (page 3 with 20 items per page)
where: { token: "0x123" }
) {
id
amount
}
}

Use Timestamps for Efficient Polling​

When building applications that poll for updates, use timestamps to fetch only new data:

query getUpdatedTransfers($lastFetched: BigInt!) {
tokenTransfers(where: { timestamp_gt: $lastFetched }) {
id
from
to
amount
}
}
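
As a usage sketch, a client can keep a timestamp cursor and pass it as the $lastFetched variable on each poll. The endpoint URL below is a placeholder, and the timestamp field is added to the selection so the cursor can be advanced:

const ENDPOINT = "https://your-indexer.example.com/v1/graphql"; // placeholder URL

let lastFetched = 0;

async function pollTransfers() {
  const query = `
    query getUpdatedTransfers($lastFetched: BigInt!) {
      tokenTransfers(where: { timestamp_gt: $lastFetched }) {
        id
        from
        to
        amount
        timestamp
      }
    }`;

  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { lastFetched } }),
  });
  const { data } = await response.json();

  // Advance the cursor so the next poll only returns newer transfers
  for (const transfer of data.tokenTransfers) {
    lastFetched = Math.max(lastFetched, Number(transfer.timestamp));
  }

  return data.tokenTransfers;
}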

Summary​

Proper database indexing is essential for maintaining performance as your indexed data grows. By strategically placing indices on frequently queried fields and field combinations, you can ensure fast query responses even with large datasets.

Key takeaways:

  • Use @index for frequently filtered or sorted individual fields
  • Use composite indices for multi-field query patterns
  • Consider performance tradeoffs for write-heavy applications
  • Design your schema and queries with performance in mind from the start
  • Always use pagination for large result sets

Understanding Chain Head Latency​

File: Advanced/performance/latency-at-head.md

Maintaining low latency at the chain head is crucial for ensuring timely data updates in your indexed data. This page explains how HyperSync handles this important aspect of blockchain indexing.

HyperSync Block Retrieval​

  • Efficient Processing: We pull new blocks from HyperSync using a highly efficient process, ensuring your indexer stays up-to-date with minimal delay.
  • Reliable Operation: This process typically runs smoothly without significant issues.
  • Redundancy Plans: We're developing a system to sync new blocks from both RPC and HyperSync simultaneously, improving robustness if one source experiences issues.

Network-Specific Performance​

Optimized Major Networks​

  • Priority Networks: We've dedicated significant resources to maintaining extremely low latency on popular networks including Ethereum, Optimism, and Arbitrum.
  • User Experience: Users should experience seamless, near real-time data updates on these networks.

Smaller Chain Networks​

  • Standard Performance: On smaller chains, latency might be slightly higher as these networks have received less optimization.
  • Improvement Process: Your feedback helps us prioritize which chains to optimize next. Please let us know in Discord if low latency on specific smaller chains is important for your use case.

Special Configuration Options​

Multi-Chain Indexing​

  • Unordered Multi-Chain Mode: For applications indexing multiple chains, our unordered multi-chain mode allows each chain to continue syncing independently (see the configuration sketch below).
  • Resilient Design: With this configuration, even if one chain experiences latency, your other chains will continue syncing normally.
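
A minimal config.yaml sketch for enabling this mode is shown below; it assumes the flag is named unordered_multichain_mode, so confirm the exact key against the configuration reference before relying on it:

unordered_multichain_mode: true
networks:
- id: 1 # Ethereum Mainnet
  # ...
- id: 10 # Optimism
  # ...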

Chain Reorganization Support​

  • Reorg Handling: Our reorg support system ensures data consistency even when chains reorganize.
  • Documentation: Contact our team on Discord if you have concerns about reorg support while we finalize documentation.

Hosted Service Performance​

Our hosted service offers reliable performance with ongoing improvements:

  • Continuous Enhancement: We're actively improving sync and build times on our hosted service.
  • Relative Performance: Currently, indexers may run slightly slower on the hosted service compared to high-performance local machines.
  • Enterprise Solutions: For applications requiring exceptional performance, contact us on Discord to discuss our enterprise hosting plans.

By leveraging these features and providing feedback on your specific needs, you can help us continually improve the HyperIndex head latency performance.


Benchmarking Your Indexer Performance​

File: Advanced/performance/benchmarking.md

Why Benchmark Your Indexer?​

Benchmarking is a critical tool for understanding and optimizing your indexer's performance. By collecting and analyzing performance metrics, you can:

  • Identify bottlenecks in your indexing pipeline
  • Determine if performance issues are due to data fetching, processing, or database operations
  • Measure the impact of code optimizations
  • Set realistic expectations for indexing speed
  • Plan infrastructure requirements for production deployments

Running Benchmarks​

Capturing Benchmark Data​

To collect performance metrics while your indexer is running:

pnpm envio start --bench

Note: Benchmarking adds some memory and processing overhead. It should not be enabled in production environments, as it holds benchmark data points in memory and periodically writes them to disk.

Viewing Benchmark Results​

After running your indexer with benchmarking enabled, you can generate a performance summary:

pnpm envio benchmark-summary

This command processes the collected benchmark data and displays a comprehensive performance report.

Understanding Benchmark Output​

The benchmark output is divided into several sections, each providing insights into different aspects of your indexer's performance:

Time Breakdown​

Time breakdown
| (index)                                 | seconds |
| --------------------------------------- | ------- |
| Total Runtime                           | 45      |
| Total Time Fetching Chain 1 Partition 0 | 44      |
| Total Time Processing                   | 9       |

What This Tells You:

  • Total Runtime: Overall time the indexer has been running
  • Total Time Fetching: Time spent retrieving data from the blockchain
  • Total Time Processing: Time spent in event handlers and database operations

How to Interpret:

  • If fetching time dominates (as in this example), your bottleneck is data retrieval, not processing
  • If processing time is high relative to fetching, your handlers may need optimization
  • Note that fetching and processing can overlap, so the sum may exceed total runtime

General Performance Metrics​

General
| (index)             | Values  |
| ------------------- | ------- |
| batch sizes sum     | 158205  |
| total runtime (sec) | 45.801  |
| events per second   | 3454.18 |

What This Tells You:

  • Batch Sizes Sum: Total number of events processed
  • Total Runtime: Precise runtime in seconds
  • Events Per Second: Overall processing throughput

How to Interpret:

  • Events per second is your key performance indicator
  • Over 10,000 events/second represents excellent performance
  • 1,000-5,000 events/second indicates good performance
  • Under 500 events/second may indicate optimization opportunities

Block Fetching Performance​

BlockRangeFetched Summary for Chain 1 Root Register
| (index)                   | n  | mean      | std-dev    | min  | max      | sum      |
| ------------------------- | -- | --------- | ---------- | ---- | -------- | -------- |
| Total Time Elapsed (ms)   | 12 | 3675.17   | 1147.69    | 2329 | 5972     | 44102    |
| Parsing Time Elapsed (ms) | 12 | 142.17    | 40.15      | 80   | 235      | 1706     |
| Page Fetch Time (ms)      | 12 | 3481.58   | 1042.93    | 2249 | 5737     | 41779    |
| Num Events                | 12 | 13183.75  | 3858.92    | 7579 | 22426    | 158205   |
| Block Range Size          | 12 | 906593.17 | 3006127.15 | 149  | 10876789 | 10879118 |

What This Tells You:

  • Total Time Elapsed: Time spent fetching and parsing each batch of blocks
  • Parsing Time: Time spent decoding and preparing event data
  • Page Fetch Time: Time spent retrieving data from the blockchain
  • Num Events: Number of events in each batch
  • Block Range Size: Number of blocks in each fetch operation

How to Interpret:

  • Compare Page Fetch Time to Total Time to see if data retrieval is your bottleneck
  • Large standard deviations (std-dev) indicate inconsistent performance
  • If Block Range Size varies significantly, your indexer may be adjusting batch sizes dynamically

Event Processing Performance​

EventProcessing Summary
| (index)                         | n  | mean    | std-dev | min | max  | sum    |
| ------------------------------- | -- | ------- | ------- | --- | ---- | ------ |
| Batch Size                      | 38 | 4163.29 | 1424.85 | 89  | 5000 | 158205 |
| Contract Register Duration (ms) | 38 | 0.11    | 0.38    | 0   | 2    | 4      |
| Load Duration (ms)              | 38 | 80.79   | 32.58   | 5   | 149  | 3070   |
| Handler Duration (ms)           | 38 | 22.18   | 9.07    | 0   | 47   | 843    |
| DB Write Duration (ms)          | 38 | 135.92  | 52.09   | 8   | 220  | 5165   |
| Total Time Elapsed (ms)         | 38 | 239     | 83.24   | 13  | 370  | 9082   |

What This Tells You:

  • Batch Size: Number of events in each processing batch
  • Contract Register Duration: Time spent preparing contract data
  • Load Duration: Time spent loading entities from the database
  • Handler Duration: Time spent executing your event handler logic
  • DB Write Duration: Time spent writing updated entities to the database
  • Total Time Elapsed: Overall time for the processing phase

How to Interpret:

  • Compare Load, Handler, and DB Write durations to identify bottlenecks
  • In this example, DB Write (135ms) and Load (80ms) operations dominate processing time
  • If Load Duration is high, consider implementing entity loaders
  • If DB Write Duration is high, check if you're updating too many entities per event

Per-Handler Performance​

Handlers Per Event
| (index)                     | n      | mean   | std-dev | min    | max    | sum      |
| --------------------------- | ------ | ------ | ------- | ------ | ------ | -------- |
| ERC20 Transfer Handler (ms) | 158205 | 0.0021 | 0.0364  | 0.0007 | 4.6752 | 329.7264 |

What This Tells You:

  • Detailed timing for each specific event handler
  • Shows average and total execution time across all events

How to Interpret:

  • Compare different handlers to identify which ones are most expensive
  • Look for handlers with high maximum values (max column), which may indicate inconsistent performance
  • Handlers averaging above 1ms per event may benefit from optimization

Interpreting Results and Taking Action​

Identifying Your Bottleneck​

Based on the benchmark data, determine your primary performance bottleneck:

  1. Data Fetching Bottleneck

    • Symptoms: Most time spent in "Total Time Fetching"
    • Solutions:
      • Use HyperSync if available for your network
      • If using RPC, consider a more performant provider
      • Adjust block batch sizes in your configuration
  2. Data Loading Bottleneck

    • Symptoms: High "Load Duration" in Event Processing
    • Solutions:
      • Implement entity loaders to batch database operations
      • Add appropriate database indices for frequently queried fields
      • Optimize your entity relationships to reduce join complexity
  3. Handler Logic Bottleneck

    • Symptoms: High "Handler Duration" relative to other metrics
    • Solutions:
      • Simplify complex calculations in your handlers
      • Move complex operations to a post-processing step
      • Consider caching frequently accessed values
  4. Database Write Bottleneck

    • Symptoms: High "DB Write Duration"
    • Solutions:
      • Reduce the number of entities updated per event
      • Batch related updates where possible
      • Check if you're updating the same entity multiple times unnecessarily

Benchmarking Best Practices​

  1. Benchmark Before and After Optimizations

    • Run benchmarks before making changes to establish a baseline
    • Run again after each optimization to measure impact
  2. Focus on the Largest Bottleneck First

    • Prioritize optimizations based on where time is being spent
    • Small improvements to the critical path yield the greatest results
  3. Watch for Memory Usage

    • Monitor memory consumption alongside performance metrics
    • High memory usage can lead to degraded performance over time
  4. Consider Real-World Conditions

    • Test with realistic data volumes and event patterns
    • Include periods of high activity in your benchmark tests

Advanced Performance Tuning​

For cases where standard optimizations aren't sufficient:

  1. Custom Database Indices

    • Create indices tailored to your specific query patterns
    • Add composite indices for multi-field filters
  2. Handler Specialization

    • Create specialized handlers for high-volume events
    • Simplify logic for the most common paths
  3. Speak to the Envio Team

    • We can help!

By regularly benchmarking your indexer and methodically addressing performance bottlenecks, you can achieve significant improvements in indexing speed and efficiency.


Logging​

File: Troubleshoot/logging.mdx

Logging in Envio HyperIndex

Effective logging is essential for monitoring your indexer's performance, diagnosing issues, and gathering insights. The Envio indexer uses pino, a high-performance logging library for JavaScript. These logs can be integrated with analytics tools such as Kibana to generate metrics and visualizations.

Table of Contents​

  • User Logging - How to implement logging in your event handlers
  • Configuration & Output Formats - Configuring log output and formats
  • Log Levels - Understanding available log levels
  • Troubleshooting - Common issues and solutions

Users​

When implementing handlers for your indexer, use the logging functions provided in the context object. These functions allow you to record events and errors at different severity levels.

Available Logging Methods​

  • context.log.debug - For detailed debugging information
  • context.log.info - For general information about application flow
  • context.log.warn - For potentially problematic situations
  • context.log.error - For error events that might still allow the application to continue

Examples by Language​

JavaScript / TypeScript:

// Inside your handler
context.log.debug(
`Processing event with block hash: ${event.blockHash} (debug)`
);
context.log.info(`Processing event with block hash: ${event.blockHash} (info)`);
context.log.warn(`Potential issue with event: ${event.blockHash} (warn)`);
context.log.error(`Failed to process event: ${event.blockHash} (error)`);

// With exception:
context.log.error(
`Failed to process event: ${event.blockHash}`,
new Error("Error processing event")
);

// You can also provide an object as the second argument for structured logging:
context.log.info("Processing blockchain event", {
type: "info",
extra: "Additional debugging context",
data: { blockHash: event.blockHash },
});
// Inside your handler
exception ExampleException(string) // Example of an exception

// Basic string logging
context.log.debug(`Processing event with block hash: ${event.blockHash} (debug)`)
context.log.info(`Processing event with block hash: ${event.blockHash} (info)`)
context.log.warn(`Potential issue with event: ${event.blockHash} (warn)`)
context.log.error(`Failed to process event: ${event.blockHash} (error)`)

// With exception:
context.log.errorWithExn(
`Failed to process event: ${event.blockHash}`,
ExampleException("Error processing event")
)

// You can also provide an object as the second argument for structured logging:
context.log.info("Processing blockchain event", ~params={
"type": "info",
"extra": "Additional debugging context",
"data": { "blockHash": event.blockHash },
});

Metrics, Debugging, and Troubleshooting​

The Envio indexer provides flexible logging configurations to suit different environments and use cases.

Output Formats​

The default output format is human-readable with color-coded log levels (console-pretty). You can modify the output format using environment variables:

# Available log strategies
LOG_STRATEGY="console-pretty" # Default: Human-readable logs with colors in terminal
LOG_STRATEGY="ecs-file" # ECS format logs to file (standard for Elastic Stack)
LOG_STRATEGY="ecs-console" # ECS format logs to console
LOG_STRATEGY="file-only" # Logs to file in Pino format (most efficient)
LOG_STRATEGY="console-raw" # Raw Pino format logs to console
LOG_STRATEGY="both-prettyconsole" # Pretty logs to console, Pino format to file

# Specify log file location when using file output
LOG_FILE=""

For production environments or detailed analytics, consider integrating logs with Kibana or similar tools. We're developing Kibana dashboards for self-hosting and UI dashboards for our hosting solution. If you have specific dashboard requirements, please contact us on Discord.

Developer Logging​

Log Levels​

The indexer supports the following log levels in ascending order of severity:

- trace  # Most verbose level, detailed tracing information
- debug  # Debugging information for developers
- info   # General information about system operation
- udebug # User-level debug logs
- uinfo  # User-level info logs
- uwarn  # User-level warning logs
- uerror # User-level error logs
- warn   # System warning logs
- error  # System error logs
- fatal  # Critical errors causing system shutdown

Note: Log levels prefixed with u (like udebug) are user-level logs emitted from the context for handlers or loaders.

Configuring Log Levels​

Set log levels using these environment variables:

LOG_LEVEL="info"      # Controls log level for console output (default: "info")
FILE_LOG_LEVEL="trace" # Controls log level for file output (default: "trace")

Example:

export LOG_LEVEL="trace"  # Set console log level to the most verbose option

Troubleshooting​

When debugging issues in development:

  1. Terminal UI Issues: The Terminal UI may sometimes hide errors. To disable it:

    export TUI_OFF="true"  # Or use --tui-off flag when starting
  2. Log Visibility: To maintain the Terminal UI while capturing detailed logs:

    export LOG_STRATEGY="both-prettyconsole"
    export LOG_FILE="./debug.log"

This approach allows you to view essential information in the UI while capturing comprehensive logs for troubleshooting.
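
For example, with the strategy above you can follow the log file from a second terminal while the Terminal UI runs (the path matches the LOG_FILE value set above):

tail -f ./debug.log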


Common Issues and Troubleshooting​

File: Troubleshoot/common-issues.md

This guide helps you identify and resolve common issues you might encounter when working with Envio HyperIndex. If you don't find a solution to your problem here, please join our Discord community for additional support.

Table of Contents​

  • Setup and Configuration Issues
    • Module Not Found Errors
    • Smart Contract Updates
    • Node.js Version Compatibility
    • PNPM Version Compatibility
  • Runtime Issues
    • Indexer Start Block Issues
    • Tables Not Registered in Hasura
    • RPC-Related Issues
  • Infrastructure Conflicts
    • Local Postgres Conflicts

Setup and Configuration Issues​

Cannot find module errors on pnpm start​

Problem: Errors like Cannot find module when starting your indexer indicate missing generated files.

Cause: The indexer cannot find necessary files, typically because the code generation step was skipped after cloning the repository.

Solution:

  1. Delete the generated folder if it exists
  2. Run the code generation command:
pnpm codegen

Important: Always run pnpm codegen immediately after cloning an indexer repository using Envio.

Smart contract updated after the initial codegen​

Problem: Changes to smart contracts aren't reflected in your indexer.

Cause: When smart contracts are modified after initial setup, the ABIs need to be regenerated and the indexer needs to be updated.

Solution:

  1. Re-export smart contract ABIs (example using Hardhat):
cd contracts/
pnpm hardhat export-abi
  2. Verify that the ABI directory in config.yaml points to the correct location where ABIs were freshly generated
  3. Run codegen again:
pnpm codegen

Using the correct version of Node.js​

Problem: Compatibility issues or unexpected errors when running the indexer.

Solution: Envio requires Node.js v18 or newer. If you're running an older version, please update:

# Using nvm (recommended)
nvm install 18
nvm use 18

# Or download directly from https://nodejs.org/

Using the correct version of PNPM​

Problem: Package management issues or build failures.

Solution: Envio requires pnpm v8 or newer. If you're running an older version, please update:

# Update pnpm
npm install -g pnpm@latest

# Verify version
pnpm --version

Runtime Issues​

Indexer not starting at the specified start block​

Problem: The indexer runs but doesn't start from the start_block defined in your configuration.

Cause: This typically happens when the indexer's state is persisted from a previous run.

Solution: Stop the indexer completely before restarting:

# First stop the indexer
pnpm envio stop

# Then restart it
pnpm dev

Tables for entities are not registered on Hasura​

Problem: Entity tables defined in your schema don't appear in Hasura.

Cause: Database schema might be out of sync with your entity definitions.

Solution: Reset the indexer environment to recreate the necessary tables:

# Stop the indexer
pnpm envio stop

# Restart it (this will recreate tables)
pnpm dev

RPC-Related Issues​

Problem: The indexer shows warnings such as:

  • Error getting events, will retry after backoff time
  • Failed Combined Query Filter from block
  • Issue while running fetching batch of events from the RPC. Will wait ()ms and try again.

Cause: Issues connecting to or retrieving data from the blockchain RPC endpoint.

Solutions:

  1. Recommended: Use HyperSync if your network is supported, as it provides better performance and reliability

  2. If HyperSync isn't an option, try:

    • Using a different RPC endpoint in your config.yaml
    • Verifying your RPC endpoint is stable and has archive data if needed
    • Checking if your RPC provider has rate limits you're exceeding
# Example of updating RPC in config.yaml
network:
  # Replace with a more reliable RPC
  rpc_url: "https://mainnet.infura.io/v3/YOUR-API-KEY"

Infrastructure Conflicts​

Postgres running locally​

Problem: Conflicts when Postgres is already running on port 5432.

Cause: The default Postgres port (5432) is already in use by another instance.

Solution: Configure Envio to use a different port by setting environment variables:

# Option 1: Set variables inline
ENVIO_PG_PORT=5433 pnpm codegen
ENVIO_PG_PORT=5433 pnpm dev

# Option 2: Export variables for the session
export ENVIO_PG_PORT=5433
pnpm codegen
pnpm dev

You can further customize your Postgres connection with these additional environment variables (a sample .env sketch follows the list):

  • ENVIO_POSTGRES_PASSWORD: Set a custom password
  • ENVIO_PG_USER: Set a custom username
  • ENVIO_PG_DATABASE: Set a custom database name
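
For example, a minimal .env sketch using the variables above (values are placeholders; this assumes your setup reads environment variables from the .env file in the root of your HyperIndex project, as described for ENVIO_API_TOKEN):

# .env (placeholder values)
ENVIO_PG_PORT=5433
ENVIO_PG_USER=my_user
ENVIO_PG_DATABASE=my_indexer_db
ENVIO_POSTGRES_PASSWORD=change-me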

Envio Error Codes​

File: Troubleshoot/error-codes.md

This guide provides a comprehensive list of error codes you may encounter when using Envio HyperIndex. Each error includes an explanation and recommended actions to resolve the issue.

How to Use This Guide​

When encountering an error in Envio, you'll receive an error code (like EE101). Use this guide to:

  1. Locate the error code by category or by searching for the specific code
  2. Read the explanation to understand what caused the error
  3. Follow the recommended steps to resolve the issue

If you can't resolve an error after following the suggestions, please reach out for support on our Discord community.

Error Categories​

Envio error codes are categorized by their first digits:

Error Code Range    Category                    Description
EE100 - EE199       Configuration File          Issues with the configuration file format, parameters, or values
EE200 - EE299       Schema File                 Problems with GraphQL schema definition
EE300 - EE399       ABI File                    Issues with smart contract ABI files or event definitions
EE400 - EE499       Initialization Arguments    Problems with initialization parameters or directories
EE500 - EE599       Event Handling              Issues with event handler files or functions
EE600 - EE699       Event Syncing               Problems with event synchronization process
EE700 - EE799       Database Functions          Issues with database operations
EE800 - EE899       Database Migrations         Problems with database schema migrations or tracking
EE900 - EE999       Contract Interface          Issues related to smart contract interfaces
EE1000 - EE1099     Chain Manager               Problems with blockchain network connections
EE1100 - EE1199     Lazy Loader                 General errors related to the loading process

Configuration File Errors (EE100-EE111)​

EE100: Invalid Addresses​

Issue: The configuration file contains invalid smart contract addresses.

Solution: Verify all contract addresses in your configuration file. Ensure they:

  • Match the correct format for the blockchain (0x-prefixed for EVM chains)
  • Are valid addresses for the specified network
  • Have the correct length (42 characters including '0x' for EVM)

EE101: Non-Unique Contract Names​

Issue: The configuration file contains duplicate contract names.

Solution: Each contract in your configuration must have a unique name. Review your config.yaml and ensure all contract names are unique.

EE102: Reserved Words in Configuration File​

Issue: Your configuration uses reserved programming words that conflict with Envio's code generation.

Solution:

  • Review the reserved words list for JavaScript, TypeScript, and ReScript
  • Rename any contract or event names that use reserved words
  • Choose descriptive names that don't conflict with programming languages

EE103: Parse Event Error​

Issue: Envio couldn't parse event signatures in your configuration.

Solution:

  • Check your event signatures in the configuration file
  • Ensure they match the format in your ABI
  • Refer to the configuration guide for correct event definition syntax (see the sketch below)
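
For reference, a minimal sketch of an event definition using a human-readable signature in config.yaml (the contract name, handler path, and signature are illustrative):

contracts:
  - name: MyContract
    handler: ./src/EventHandlers.ts
    events:
      - event: "Transfer(address indexed from, address indexed to, uint256 value)"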

EE104: Resolve Config Path​

Issue: Envio couldn't find your configuration file at the specified path.

Solution:

  • Verify that your configuration file exists in the correct directory
  • Ensure the file is named correctly (usually config.yaml)
  • Check for file permission issues

EE105: Deserialize Config​

Issue: Your configuration file contains invalid YAML syntax.

Solution:

  • Check your YAML file for syntax errors
  • Ensure proper indentation and structure
  • Validate your YAML using a linter or validator

EE106: Undefined Network Config​

Issue: No hypersync_config or rpc_config defined for the network specified in your configuration.

Solution:

  • Add either a HyperSync or RPC configuration for your network
  • See the HyperSync Data Source or RPC Data Source documentation
  • Example:
    network:
      network_id: 1
      rpc_config:
        rpc_url: "https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY"

EE108: Invalid Postgres Database Name​

Issue: The Postgres database name provided doesn't meet requirements.

Solution: Provide a database name that:

  • Begins with a letter or underscore
  • Contains only letters, numbers, and underscores (no spaces)
  • Has a maximum length of 63 characters

EE109: Incorrect RPC URL Format​

Issue: The RPC URL in your configuration has an invalid format.

Solution:

  • Ensure all RPC URLs start with either http:// or https://
  • Verify the URL is correctly formatted and accessible
  • Example: https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY

EE110: End Block Not Greater Than Start Block​

Issue: Your configuration specifies an end block that is less than or equal to the start block.

Solution: If providing an end block, ensure it's greater than the start block:

start_block: 10000000
end_block: 20000000 # Must be greater than start_block

EE111: Invalid Characters in Contract or Event Names​

Issue: Contract or event names contain invalid characters.

Solution: Use only alphanumeric characters and underscores in contract and event names.

Schema File Errors (EE200-EE217)​

EE200: Schema File Read Error​

Issue: Envio couldn't read the schema file.

Solution:

  • Ensure the schema file exists at the expected location
  • Check file permissions
  • Verify the file isn't corrupted

EE201: Schema Parse Error​

Issue: The schema file contains syntax errors.

Solution:

  • Check for GraphQL syntax errors in your schema.graphql file
  • Ensure all entities and fields are properly defined
  • Validate your GraphQL schema with a schema validator

EE202: Multiple @derivedFrom Directives​

Issue: An entity field has more than one @derivedFrom directive.

Solution: Use only one @derivedFrom directive per entity. Review your schema and remove duplicate directives.

EE203: Missing Field Argument for @derivedFrom​

Issue: A @derivedFrom directive is missing the required field argument.

Solution: Add the field argument to your @derivedFrom directive:

type User {
  id: ID!
  orders: [Order!]! @derivedFrom(field: "user")
}
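
For context, the field argument must name a field on the referenced entity; a minimal sketch of the other side of the relationship (entity and field names are illustrative):

type Order {
  id: ID!
  user: User! # the field named in @derivedFrom(field: "user")
}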

EE204: Invalid @derivedFrom Argument​

Issue: The field argument in @derivedFrom has an invalid value.

Solution: Ensure the field argument contains a valid string value that matches a field name in the referenced entity.

EE207: Undefined Type​

Issue: The schema contains an undefined type.

Solution: Use only supported scalar types or defined entity types (a short example follows this list):

  • ID
  • String
  • Int
  • Float
  • Boolean
  • Bytes
  • BigInt
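
A short illustrative entity using only the supported types listed above (entity and field names are hypothetical):

type Token {
  id: ID!
  name: String!
  decimals: Int!
  priceUsd: Float
  totalSupply: BigInt!
  owner: Bytes!
  isActive: Boolean!
}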

EE208: Unsupported Nullable Scalars​

Issue: The schema contains nullable scalar types inside lists.

Solution: Use non-nullable scalars in lists by adding ! after the type:

# Incorrect
items: [String]

# Correct
items: [String!]!

EE209: Unsupported Multidimensional Lists​

Issue: The schema contains nullable multidimensional list types.

Solution: Ensure inner list types are non-nullable:

# Incorrect
matrix: [[Int]]

# Correct
matrix: [[Int!]!]!

EE210: Reserved Words in Schema File​

Issue: The schema uses reserved programming words.

Solution:

  • Check the reserved words list
  • Rename any entities or fields using reserved words
  • Choose alternative descriptive names

EE211: Unsupported Arrays of Entities​

Issue: The schema uses unsupported array syntax for entity relationships.

Solution: Use one of the supported methods for entity references as outlined in the schema documentation.
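
One commonly supported pattern is to store the reference on the child entity and derive the list on the parent, rather than storing an array of entities directly; a minimal sketch (entity and field names are illustrative):

type Collection {
  id: ID!
  tokens: [Token!]! @derivedFrom(field: "collection") # derived, not stored
}

type Token {
  id: ID!
  collection: Collection!
}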

EE212: Reserved Enum Names​

Issue: The schema uses enum names that conflict with Envio's internal enums.

Solution: Check the internal reserved types list and rename conflicting enums.

EE213: Duplicate Enum Values​

Issue: An enum in the schema contains duplicate values.

Solution: Ensure all values within each enum type are unique.
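
For example, a small enum where every value is unique (names are illustrative):

enum OrderStatus {
  PENDING
  FILLED
  CANCELLED
}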

EE214: Naming Conflicts Between Enums and Entities​

Issue: The schema has enums and entities with the same names.

Solution: Ensure all enum and entity names are unique within the schema.

EE215: Incorrectly Placed Directive​

Issue: A directive is used in an incorrect location in the schema.

Solution: Ensure directives are placed on appropriate schema elements according to GraphQL specifications.

EE216: Incorrect Directive Parameters​

Issue: A directive has incorrect parameter labels or count.

Solution: Verify that all directive parameters match the expected format and count.

EE217: Incorrect Directive Parameter Type​

Issue: A directive parameter has an invalid type.

Solution: Ensure parameter values match the expected types for each directive.

ABI File Errors (EE300-EE305)​

EE300: Event ABI Parse Error​

Issue: Cannot parse the ABI for specified contract events.

Solution:

  • Verify the ABI file contains valid JSON
  • Ensure the ABI includes all events referenced in your configuration
  • Check for syntax errors in your ABI file

EE301: Missing ABI File Path​

Issue: No ABI file path specified for a contract.

Solution: Add the abi_file_path property in your configuration for each contract:

contracts:
  - name: MyContract
    abi_file_path: ./abis/MyContract.json

EE302: Invalid ABI File Path​

Issue: The specified ABI file path is invalid or inaccessible.

Solution:

  • Verify the ABI file exists at the specified path
  • Ensure the path is relative to your project directory
  • Check file permissions

EE303: Missing Event in ABI​

Issue: An event referenced in your configuration doesn't exist in the ABI.

Solution:

  • Ensure the event name matches exactly what's in the ABI
  • Verify the ABI includes all events you want to track
  • If using a human-readable ABI, check the event signature formatting (see the sketch below)
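
For reference, a minimal sketch of what an event entry looks like in a standard EVM ABI JSON file (the event name and parameters are illustrative):

[
  {
    "type": "event",
    "name": "Transfer",
    "anonymous": false,
    "inputs": [
      { "indexed": true, "name": "from", "type": "address" },
      { "indexed": true, "name": "to", "type": "address" },
      { "indexed": false, "name": "value", "type": "uint256" }
    ]
  }
]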

EE304: Mismatched Event Signature​

Issue: Event signature in configuration doesn't match the ABI.

Solution: Ensure event signatures in your configuration match exactly what's in the ABI file.

EE305: ABI Config Mismatch​

Issue: Event parameters in configuration don't match ABI definition.

Solution: Verify that event parameters in your configuration match the types and order defined in the ABI.

Initialization Arguments Errors (EE400-EE402)​

EE400: Invalid Directory Name​

Issue: A specified directory name contains invalid characters.

Solution: Use directory names without special characters such as /, \, :, *, ?, ", <, >, or |.

EE401: Directory Already Exists​

Issue: Trying to create a directory that already exists.

Solution: Use a different directory name or remove the existing directory if appropriate.

EE402: Invalid Subgraph ID​

Issue: The subgraph ID for migration is invalid.

Solution: Provide a valid subgraph ID that starts with "Qm".

Event Handling Errors (EE500)​

EE500: Event Handler File Not Found​

Issue: Envio couldn't find or import the event handler file.

Solution:

  • Ensure the handler file exists in the correct directory
  • Verify the file path in your configuration
  • Make sure the handler file is compiled correctly
  • Refer to the event handlers documentation for proper setup

Event Syncing Errors (EE600)​

EE600: Top Level Error During Event Processing​

Issue: An unexpected error occurred while processing events.

Solution:

  • Check your event handler logic for errors
  • Review recent changes to your indexer
  • If unable to resolve, contact support through Discord with error details

For database-related errors (EE700-EE808), you can often resolve issues by resetting the database migration:

pnpm envio local db-migrate setup

Database Function Errors (EE700)​

EE700: Database Row Parse Error​

Issue: Unable to parse rows from the database.

Solution:

  • Check entity definitions in your schema
  • Verify data types match between schema and database
  • Reset database migrations using the command above

Database Migration Errors (EE800-EE808)​

EE800: Raw Table Creation Error​

Issue: Error creating raw events table in database.

Solution: Reset database migrations using the command above.

EE801: Dynamic Contracts Table Creation Error​

Issue: Error creating dynamic contracts table.

Solution: Reset database migrations using the command above.

EE802: Entity Tables Creation Error​

Issue: Error creating entity tables.

Solution:

  • Check your schema for invalid entity definitions
  • Reset database migrations

EE803: Tracking Tables Error​

Issue: Error tracking tables in database.

Solution: Reset database migrations using the command above.

EE804: Drop Entity Tables Error​

Issue: Error dropping entity tables.

Solution:

  • Check if any other processes are using the database
  • Reset database migrations

EE805: Drop Tables Except Raw Error​

Issue: Error dropping all tables except raw events table.

Solution: Reset database migrations using the command above.

EE806: Clear Metadata Error​

Issue: Error clearing metadata.

Solution:

  • Reset database migrations
  • Note: Indexing may still work, but you might have issues querying data in Hasura

EE807: Table Tracking Error​

Issue: Error tracking a table in Hasura.

Solution:

  • Reset database migrations
  • Note: Indexing may still work, but you might have issues querying data in Hasura

EE808: View Permissions Error​

Issue: Error setting up view permissions.

Solution:

  • Reset database migrations
  • Note: Indexing may still work, but you might have issues querying data in Hasura

EE900: Undefined Contract​

Issue: Referencing a contract that isn't defined in configuration.

Solution:

  • Verify all contract names in your handlers match those in the configuration file
  • Check for typos in contract names

EE901: Interface Mapping Error​

Issue: Contract name not found in interface mapping (unexpected internal error).

Solution: Contact support through Discord for assistance.

EE1000: Undefined Chain​

Issue: Using a chain ID that isn't defined or supported.

Solution:

  • Use a valid chain ID in your configuration file
  • Check if the network is supported by Envio
  • Verify chain ID matches the intended network

General Errors​

EE1100: Promise Timeout​

Issue: A long-running operation timed out.

Solution:

  • Check network connectivity
  • Verify RPC endpoint performance
  • Consider increasing timeouts if possible
  • If persists, contact support through Discord

Reserved Words in Envio​

File: Troubleshoot/reserved-words.md

Overview​

When creating your Envio indexer, certain words cannot be used in entity names, field names, contract names, or event names because they are reserved by the underlying programming languages or by Envio itself. Using these reserved words will trigger validation errors (such as EE102 for configuration files or EE210 for schema files).

Reserved words in Envio are taken from JavaScript, TypeScript, and ReScript because Envio generates code in these languages to power your indexer. Using these reserved words would create syntax conflicts in the generated code.

Why This Matters​

When you define names in your:

  • config.yaml file (for contracts and events)
  • schema.graphql file (for entities and fields)

Envio automatically generates code based on these names. If you use reserved words, the generated code will contain syntax errors and will not compile, causing your indexer to fail.

Common Error Scenarios​

If you use reserved words, you'll encounter these errors:

  • Error EE102: Reserved words in the configuration file
  • Error EE210: Reserved words in the schema file
  • Error EE212: Reserved enum names that conflict with Envio internal types

How to Fix Reserved Word Errors​

When you encounter these errors, you need to rename the offending identifiers:

  1. Identify which names in your configuration or schema are using reserved words
  2. Choose alternative names that aren't reserved
  3. Update all references to these names in your code
  4. Run codegen again to regenerate the code

Example Problem​

# In config.yaml
contracts:
  - name: class # Error: 'class' is a reserved word in JavaScript
    abi_file_path: ./abis/MyContract.json

# In schema.graphql
type interface { # Error: 'interface' is a reserved word in JavaScript and TypeScript
  id: ID!
  name: String!
}

Example Solution​

# Fixed config.yaml
contracts:
  - name: ClassContract # Good: Not a reserved word
    abi_file_path: ./abis/MyContract.json

# Fixed schema.graphql
type UserInterface { # Good: Not a reserved word
  id: ID!
  name: String!
}

Tips for Avoiding Reserved Word Conflicts​

  • Use camelCase or PascalCase for naming (e.g., userAccount instead of class)
  • Add a prefix or suffix to potentially conflicting names (e.g., userInterface instead of interface)
  • Use domain-specific terms that are less likely to be programming keywords
  • When in doubt, check against the lists below before finalizing your schema or configuration

Complete List of Reserved Words​

JavaScript Reserved Words​

These keywords cannot be used as identifiers in your Envio configuration or schema:

abstract, arguments, await, boolean, break, byte, case, catch, char,
class, const, continue, debugger, default, delete, do, double, else,
enum, eval, export, extends, false, final, finally, float, for, function,
goto, if, implements, import, in, instanceof, int, interface, let, long,
native, new, null, package, private, protected, public, return, short,
static, super, switch, synchronized, this, throw, throws, transient, true,
try, typeof, var, void, volatile, while, with, yield

TypeScript Reserved Words​

In addition to JavaScript keywords, these TypeScript-specific keywords are also reserved:

any, as, boolean, break, case, catch, class, const, constructor, continue,
declare, default, delete, do, else, enum, export, extends, false, finally,
for, from, function, get, if, implements, import, in, instanceof, interface,
let, module, new, null, number, of, package, private, protected, public,
require, return, set, static, string, super, switch, symbol, this, throw,
true, try, type, typeof, var, void, while, with, yield

ReScript Reserved Words​

These ReScript-specific keywords are also reserved:

and, as, assert, constraint, else, exception, external, false, for, if, in,
include, lazy, let, module, mutable, of, open, rec, switch, true, try, type,
when, while, with

Envio Internal Reserved Types​

These types are used internally by Envio and cannot be used as enum or entity names:

EVENT_TYPE
CONTRACT_TYPE

Best Practices​

  1. Use descriptive names that are unlikely to be programming keywords
  2. Check these lists before finalizing your schema design
  3. Run validation early with pnpm codegen to catch issues before spending time on implementation
  4. Use prefixes for domain entities (e.g., TokenTransfer instead of Transfer)

If you encounter persistent issues with reserved words or need help refactoring your schema to avoid them, please reach out for support on our Discord community.


Any EVM with RPC 🌐​

File: supported-networks/any-evm-with-rpc.md


Local network - Anvil​

File: supported-networks/local-anvil.md


Local network - Hardhat​

File: supported-networks/local-hardhat.md


Arbitrum​

File: supported-networks/arbitrum.md

Indexing Arbitrum Data with Envio​

Field                     Value
Arbitrum Chain ID         42161
HyperSync URL Endpoint    https://arbitrum.hypersync.xyz or https://42161.hypersync.xyz
HyperRPC URL Endpoint     https://arbitrum.rpc.hypersync.xyz or https://42161.rpc.hypersync.xyz

Defining Network Configurations​

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: 42161 # Arbitrum
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress1"
          - "0xYourContractAddress2"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event
          - event: Event

With these steps completed, your application will be set to efficiently index Arbitrum data using Envio's blockchain indexer.

For more information on how to set up your config, define a schema, and write event handlers, refer to the guides section in our documentation.

Support​

Can't find what you're looking for or need support? Reach out to us on Discord; we're always happy to help!


Eth​

File: supported-networks/eth.md

Indexing Eth Data with Envio​

Field                     Value
Eth Chain ID              1
HyperSync URL Endpoint    https://eth.hypersync.xyz or https://1.hypersync.xyz
HyperRPC URL Endpoint     https://eth.rpc.hypersync.xyz or https://1.rpc.hypersync.xyz

Defining Network Configurations​

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: 1 # Eth
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress1"
          - "0xYourContractAddress2"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event
          - event: Event

With these steps completed, your application will be set to efficiently index Eth data using Envio's blockchain indexer.

For more information on how to set up your config, define a schema, and write event handlers, refer to the guides section in our documentation.

Support​

Can't find what you're looking for or need support? Reach out to us on Discord; we're always happy to help!


Optimism​

File: supported-networks/optimism.md

Indexing Optimism Data with Envio​

Field                     Value
Optimism Chain ID         10
HyperSync URL Endpoint    https://optimism.hypersync.xyz or https://10.hypersync.xyz
HyperRPC URL Endpoint     https://optimism.rpc.hypersync.xyz or https://10.rpc.hypersync.xyz

Defining Network Configurations​

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: 10 # Optimism
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress1"
          - "0xYourContractAddress2"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event
          - event: Event

With these steps completed, your application will be set to efficiently index Optimism data using Envio's blockchain indexer.

For more information on how to set up your config, define a schema, and write event handlers, refer to the guides section in our documentation.

Support​

Can't find what you're looking for or need support? Reach out to us on Discord; we're always happy to help!


Polygon​

File: supported-networks/polygon.md

Indexing Polygon Data with Envio​

Field                     Value
Polygon Chain ID          137
HyperSync URL Endpoint    https://polygon.hypersync.xyz or https://137.hypersync.xyz
HyperRPC URL Endpoint     https://polygon.rpc.hypersync.xyz or https://137.rpc.hypersync.xyz

Defining Network Configurations​

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: 137 # Polygon
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress1"
          - "0xYourContractAddress2"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event
          - event: Event

With these steps completed, your application will be set to efficiently index Polygon data using Envio's blockchain indexer.

For more information on how to set up your config, define a schema, and write event handlers, refer to the guides section in our documentation.

Support​

Can't find what you're looking for or need support? Reach out to us on Discord; we're always happy to help!


Indexing on Fuel Network​

File: fuel/fuel.md

Introduction​

Envio has expanded its indexing capabilities beyond EVM-compatible blockchains to now fully support the Fuel Network (both mainnet and testnet). This documentation covers how to use Envio's products with Fuel's unique architecture and features. ⛽⚡

Fuel offers several advantages as a modular execution layer including:

  • Parallel transaction execution
  • State-minimized design
  • UTXO-based architecture
  • Advanced FuelVM capabilities

HyperIndex for Fuel​

HyperIndex enables developers to easily index and query real-time and historical data on Fuel Network with the same powerful features available for EVM chains.

Getting Started with Fuel Indexing​

You can start indexing Fuel contracts in two ways:

  1. Quick Start (5-minute tutorial): Follow our step-by-step tutorial to create your first Fuel indexer quickly.

  2. No-Code Contract Import: Use our Contract Import tool to automatically generate configuration and schema files for your Fuel contracts.

Example Fuel Indexers​

Looking for inspiration? Check out these indexers built by projects in the Fuel ecosystem:

Project     Type                GitHub Repository
Spark       Orderbook DEX       github
Mira        AMM DEX             github
Thunder     NFT Marketplace     github
Swaylend    Lending Protocol    github
Greeter     Tutorial            github

Features Supported on Fuel​

HyperIndex for Fuel supports all the core features available in the EVM version:

  • βœ… No-code Contract Import
  • βœ… Dynamic Contracts / Factory Tracking
  • βœ… Testing Framework
  • βœ… Hosted Service
  • βœ… Wildcard Indexing

Fuel-Specific Event Types​

Understanding Fuel's Event Model​

Fuel's event model differs significantly from EVM. Instead of predefined events, Fuel uses a more flexible approach with various receipt types that can be indexed.

LOG_DATA Receipts (Primary Event Type)​

The most common event type in Fuel is the LOG_DATA receipt, created by the log instruction in Sway contracts.

Unlike Solidity's emit which requires predefined event structures, Sway's log function allows passing any data, providing greater flexibility.

Configuration Example:​

ecosystem: fuel
network:
  name: "fuel_testnet"

contracts:
  - name: SwayContract
    abi_file_path: "./abis/SwayContract.json"
    start_block: 1
    address: "0x123..."
    events:
      - name: NewGreeting
        logId: "8500535089865083573"

The logId is a unique identifier for the logged struct, which you can find in your contract's ABI file.

Auto-detection of logId:​

If your event name matches the logged struct name in Sway, you can omit the logId:

events:
  - name: NewGreeting # Will automatically detect logId if it matches the struct name

Tip: Instead of manually configuring events, use the Contract Import tool which automatically detects events and generates the proper configuration.

Additional Fuel Event Types​

Fuel allows indexing several additional receipt types not available in EVM:

Event Type    Description                                         Example Configuration
Mint          Triggered when a contract mints tokens              - name: Mint
Burn          Triggered when a contract burns tokens              - name: Burn
Transfer      Combines TRANSFER and TRANSFER_OUT receipts         - name: Transfer
Call          Triggered when a contract calls another contract    - name: Call

Using Custom Names:​

You can rename these events while maintaining their type:

events:
  - name: MintMyNft # Custom name
    type: mint # Actual event type

Note: All event types can be used with Wildcard Indexing.

Transfer Event Specifics​

The Transfer event type combines two Fuel receipt types:

  • TRANSFER: Emitted when a contract transfers tokens to another contract
  • TRANSFER_OUT: Emitted when a contract transfers tokens to a wallet

Important: Transfers between wallets are not included in the Transfer event type.

Event Object Structure in Handlers​

When handling Fuel events, the event object structure differs from EVM:

// Example Fuel event handler
SwayContract.NewGreeting.handler(async ({ event, context }) => {
  // Access event parameters
  const message = event.params.message;

  // Access block information
  const blockHeight = event.block.height;
  const blockTime = event.block.time;
  const blockId = event.block.id;

  // Access transaction information
  const txId = event.transaction.id;

  // Access source contract address
  const sourceContract = event.srcAddress;

  // Access log position
  const logIndex = event.logIndex;

  // Store data
  context.Greeting.set({
    id: event.transaction.id,
    message: message,
    timestamp: blockTime,
  });
});

Migration Guide from v2.x.x-fuel​

Starting with V2.3, the Fuel indexer has been integrated into the main envio package. If you were using the Fuel-specific version (envio@2.x.x-fuel), follow these steps to migrate:

1. Update Package Version​

# Update local dependency
pnpm i envio@latest

# If installed globally
pnpm i -g envio@latest

2. Update Configuration​

Add the ecosystem: fuel field to your config.yaml:

ecosystem: fuel # Required for Fuel indexers
network:
  name: "fuel_testnet"
  # other network config...

3. Update Event Handler Code​

Several field names have changed in the event object:

Old Field              New Field
event.data.x           event.params.x
event.time             event.block.time
event.blockHeight      event.block.height
(none)                 event.block.id
event.transactionId    event.transaction.id
event.contractId       event.srcAddress
event.receiptIndex     event.logIndex
event.receiptType      (removed)

Example migration:

SwayContract.NewGreeting.handler(async ({ event, context }) => {
  context.Greeting.set({
-   id: event.data.id,
-   message: event.data.message,
-   createdAt: event.time,
-   blockHeight: event.blockHeight,
+   id: event.params.id,
+   message: event.params.message,
+   createdAt: event.block.time,
+   blockHeight: event.block.height,
    transaction: event.transaction.id,
  });
});

Note: If you use loaders, also follow the v1 to v2 migration guide for loader-specific changes.

HyperFuel​

HyperFuel is Envio's low-level data API for the Fuel Network (equivalent to HyperSync for EVM chains).

HyperFuel provides:

  • High-performance data access
  • Flexible query capabilities
  • Multiple data formats (Parquet, Arrow, typed data)
  • Complete historical data

Available Clients​

Access HyperFuel data using any of these clients:

  • Rust: hyperfuel-client-rust
  • Python: hyperfuel-client-python
  • Node.js: hyperfuel-client-node
  • JSON API: hyperfuel-json-api

HyperFuel Endpoints​

For detailed information, see the HyperFuel documentation.

About Fuel Network​

Fuel is an operating system purpose-built for Ethereum rollups with unique architecture focused on:

  • Parallelization: Execute transactions concurrently for higher throughput
  • State-minimized execution: Efficient storage and computation model
  • Interoperability: Seamless integration with other blockchain systems

Powered by the FuelVM, Fuel expands Ethereum's capabilities without compromising security or decentralization.

Resources​

  • Website
  • Twitter
  • Discord
  • Documentation

Need Help?​

If you encounter any issues with Fuel indexing, please:

  1. Check our Troubleshooting guides
  2. Join our Discord for community support
  3. Create an issue in our GitHub repository

Licensing​

File: licensing.md

TL;DR​

  • Envio's licensing reflects open source ethos but is not OSI recognized.
  • Developers can use Envio's services without vendor lock-in, either by self-hosting or specifying an RPC URL.
  • The generated code is open and public.
  • Our license allows self-hosting but restricts third-party competition with Envio's hosted service.
  • Envio may consider open-sourcing in the future but prioritizes stakeholder interests and market traction.

Our position​

We're devs and we value open-source ethos too, which is why our licensing mirrors many of the benefits of open-source licensing. However, Envio and its products do not use an open-source license recognized by the OSI; we are nevertheless public and open, and our licensing reflects this.

Our future business model lies in our hosted service and HyperSync requests, so we are protecting these. To ensure continuity and avoid vendor lock-in, however, developers can run and develop their indexers without either: by self-hosting, which our license permits, or by specifying an RPC URL in their indexer configuration and thus bypassing HyperSync.

Envio is in its formative stages, and though we may look to open-source the software in the future, we are dedicated to ensuring the best interests of all stakeholders. Going open source is somewhat of a one-way function: it is easier to go open source than to proverbially go "closed source". Once we have gained more market traction, we will review our position on going open source.

HyperIndex End-User License Agreement (EULA)​

This agreement describes the users' rights and the conditions upon which the Software and Generated Code may be used. The user should review the entire agreement, including any supplemental license terms that accompany the Software since all of the terms are important and together create this agreement that applies to them.

1. Definitions​

Software: HyperIndex, a copyrightable work created by Envio and licensed under this End User License Agreement ("EULA").

Generated Code: In the context of this license agreement, the term "generated code" refers to computer programming code that is produced automatically by the Software based on input provided by the user.

Licensed Material: The Software and Generated Code defined here will be collectively referred to as "Licensed Material".

2. Installation and User Rights​

License: The Software is provided under this EULA. By agreeing to the EULA terms, you are granted the right to install and operate one instance of the Software on your device (referred to as the licensed device), for the use of one individual at a time, on the condition that you adhere to all terms outlined in this agreement. The licensor provides you with a non-exclusive, royalty-free, worldwide license that is non-sublicensable and non-transferable. This license allows you to use the Software subject to the limitations and conditions outlined in this EULA. With one license, the user can only use the Software on a single device.

Device: In this agreement, "device" refers to a hardware system, whether physical or virtual, equipped with an internal storage device capable of executing the Software. This includes hardware partitions, which are considered individual devices for the purposes of this agreement. Updates may be provided to the Software, and these updates may alter the minimum hardware requirements necessary for the Software. It is the responsibility of users to comply with any changing hardware requirements.

Updates: The Software may be updated automatically. With each update, the EULA may be amended, and it is the users' responsibility to comply with the amendments.

Limitations: Envio reserves all rights, including those under intellectual property laws, not expressly granted in this agreement. For instance, this license does not confer upon you the right to, and you are prohibited from:

(i) Publishing, copying (other than the permitted backup copy), renting, leasing, or lending the Software;

(ii) Transferring the Software (except as permitted by this agreement);

(iii) Circumventing any technical restrictions or limitations in the Software;

(iv) Using the Software as server Software, for commercial hosting, making the Software available for simultaneous use by multiple users over a network, installing the Software on a server and allowing users to access it remotely, or installing the Software on a device solely for remote user use;

(v) Reverse engineering, decompiling, or disassembling the Software, or attempting to do so, except and only to the extent that the foregoing restriction is (a) permitted by applicable law; (b) permitted by licensing terms governing the use of open-source components that may be included with the Software; and

(vi) When using the Software, you may not use any features in any manner that could interfere with anyone else's use of them, or attempt to gain unauthorized access to or use of any service, data, account, or network.

These limitations apply specifically to the Software and do not extend to the Generated Code. Details regarding the use of the Generated Code, including associated limitations, are provided below.

3. Use of the Generated Code​

Limitations: Users can use, copy, distribute, make available, and create derivative works of the Generated Code freely, subject to the limitations and conditions specified below.

(i) The user is prohibited from offering the Generated Code or any software that includes the Generated Code to third parties as a hosted or managed service, where the service grants users access to a significant portion of the Software's features or functionality.

(ii) The user is not permitted to tamper with, alter, disable, or bypass the functionality of the license key in the Software. Additionally, the user may not eliminate or conceal any functionality within the Software that is safeguarded by the license key.

(iii) Any modification, removal, or concealment of licensing, copyright, or other notices belonging to the licensor in the Software is strictly forbidden. The use of the licensor's trademarks is subject to relevant laws.

Credit: If the user utilizes the Generated Code to develop and release new software, product, or service, the license agreement for said software, product, or service must include proper credit to HyperIndex.

Liability: Envio does not provide any assurance that the Generated Code functions correctly, nor does it assume any responsibility in this regard.

Additionally, it will be the responsibility of the user to assess whether the Generated Code is suitable for the products and services provided by the user. Envio will not bear any responsibility if the Generated Code is found unsuitable for the products and services provided by the user.

4. Additional Terms​

Disclaimer of Warranties and Limitation of Liability:

(i) Unless expressly undertaken by the Licensor separately, the Licensed Material is provided on an as-is, as-available basis, and the Licensor makes no representations or warranties of any kind regarding the Licensed Material, whether express, implied, statutory, or otherwise. This encompasses, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether known or discoverable. If disclaimers of warranties are not permitted in whole or in part, this disclaimer may not apply to You.

(ii) To the fullest extent permitted by law, under no circumstances shall the Licensor be liable to You under any legal theory (including, but not limited to, negligence) for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising from the use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. If limitations of liability are not permitted in whole or in part, this limitation may not apply to You.

(iii) The disclaimers of warranties and limitations of liability outlined above shall be construed in a manner that most closely approximates an absolute disclaimer and waiver of all liability, to the fullest extent permitted by law.

Applicable Law and Competent Courts: This EULA shall be governed by and construed in accordance with the laws of England. The courts of England shall have exclusive jurisdiction to settle any dispute arising out of or in connection with this EULA.

Additional Agreements: If the user chooses to use the Software, it may be required to agree to additional terms or agreements outside of this EULA.


Terms of Service​

File: terms-of-service.md

Last updated: Feb 06, 2025

The fine print: Please note these terms are intended to protect us; where adverse outcomes arise, we will do our best to use generally accepted reason and logic as our first port of call.

  1. Introduction and Acceptance of Terms

    Welcome to Envio! By accessing or using our website and services, you agree to abide by these Terms of Service. If you do not agree with any part of these terms, you may not use our services.

  2. Description of Services

    Envio provides the following services:

    • A development framework named HyperIndex.
    • A hosted service for deploying and hosting HyperIndex indexers.
    • Low-level read-only APIs, accessible as HyperSync and HyperRPC.
  3. User Accounts

    Users may create accounts to access and utilize our services. By creating an account, you agree to provide accurate and up-to-date information. You are responsible for maintaining the security of your account credentials.

  4. Payments and Refunds

    Our services are provided on a paid basis. By subscribing or making a payment, you agree to the applicable fees and billing terms.

    Billing & Payment: Fees are charged upfront based on the selected plan. Any additional unit fees exceeding the included base usage will be billed separately at the end of the monthly billing cycle. Should the additional unit fees exceed a significant threshold, we reserve the right to charge for accrued units at a point in time before the end of the billing cycle. Payments must be made via the payment methods we support.

    No Refunds: All payments are non-refundable. We do not provide refunds or credits for any unused service, partial subscription periods, downgrades, or cancellations.

    Pricing Changes: We reserve the right to modify our pricing at any time. Any changes will be communicated in advance and will take effect at the start of the next billing cycle.

    Cancellation: Subscriptions can be canceled at any time, but cancellations take effect at the end of the current monthly billing cycle for monthly subscriptions. For annual subscriptions, cancellations will take effect at the end of the current annual billing cycle. Previously paid fees will not be refunded.

    Failed Payments & Account Suspension: If a payment fails, we may retry the charge or suspend access to our services until the outstanding amount is settled. Continued failure to make payment may result in account termination.

    By using our services, you acknowledge and agree to these payment terms.

  5. Termination

    Envio reserves the right to terminate user accounts or suspend access to our services at our discretion. Users will be notified in advance of any such actions unless deemed necessary for security or legal reasons.

  6. Governing Law and Dispute Resolution

    These terms and any disputes arising from or related to them shall be governed by and construed in accordance with the laws of the United Kingdom. Any disputes shall be resolved through arbitration in the jurisdiction of the United Kingdom.

  7. Changes to Terms

    Envio reserves the right to update or modify these terms at any time. Changes will be effective upon posting on our website. It is your responsibility to review these terms periodically for any updates. Your continued use of our services after the posting of changes constitutes your acceptance of such changes.

  8. Prohibited Conduct

    Users are prohibited from engaging in the following conduct while using Envio's website and services:

    • Violating any applicable laws or regulations.
    • Transmitting any content that is unlawful, harmful, threatening, abusive, harassing, defamatory, vulgar, obscene, or otherwise objectionable.
    • Attempting to gain unauthorized access to other users' accounts or to any part of Envio's systems.
    • Interfering with or disrupting the operation of Envio's website or services.
    • Engaging in any activity that could harm, disable, overburden, or impair Envio's servers or networks.
    • Uploading or transmitting any viruses, worms, or other malicious code.
    • Violating the intellectual property rights of Envio or any third party.
  9. Privacy Policy

    Envio is committed to protecting the privacy and security of our users' personal information. Our Privacy Policy outlines how we collect, use, and safeguard user data. By using our website and services, you agree to the terms of our Privacy Policy. Please review our Privacy Policy carefully to understand how we handle your information.

  10. Disclaimer of Warranties

    Envio's products are provided "as is" and without warranties of any kind. We make no guarantees regarding the reliability, availability, or performance of our services. Users utilize our services at their own risk.

  11. Limitation of Liability

    Envio shall not be liable for any damages arising from the use or inability to use our services, including but not limited to direct, indirect, incidental, consequential, or punitive damages.

  12. Indemnification

    Users agree to indemnify and hold harmless Envio, its affiliates, and their respective officers, directors, employees, and agents from any claims, damages, losses, or liabilities arising out of their use of our services or violation of these terms.


Privacy Policy​

File: privacy-policy.md

Last updated: February 06, 2024

The fine print: Please note this privacy policy is intended to protect us; we have no intention of using your data for any malicious purposes.

This Privacy Policy describes Our policies and procedures on the collection, use, and disclosure of Your information when You use the Service and tells You about Your privacy rights and how the law protects You.

We use Your Personal data to provide and improve the Service. By using the Service, You agree to the collection and use of information in accordance with this Privacy Policy.

Interpretation and Definitions​

Interpretation​

The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.

Definitions​

For the purposes of this Privacy Policy:

  • Account means a unique account created for You to access our Service or parts of our Service.

  • Affiliate means an entity that controls, is controlled by or is under common control with a party, where "control" means ownership of 50% or more of the shares, equity interest, or other securities entitled to vote for the election of directors or other managing authority.

  • Company (referred to as either "the Company", "We", "Us" or "Our" in this Agreement) refers to Envio.

  • Cookies are small files that are placed on Your computer, mobile device, or any other device by a website, containing the details of Your browsing history on that website among its many uses.

  • Country refers to: Cayman Islands

  • Device means any device that can access the Service such as a computer, a cellphone, or a digital tablet.

  • Personal Data is any information that relates to an identified or identifiable individual.

  • Service refers to the Website.

  • Service Provider means any natural or legal person who processes the data on behalf of the Company. It refers to third-party companies or individuals employed by the Company to facilitate the Service, to provide the Service on behalf of the Company, to perform services related to the Service, or to assist the Company in analyzing how the Service is used.

  • Third-party Social Media Service refers to any website or any social network website through which a User can log in or create an account to use the Service.

  • Usage Data refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit).

  • Website refers to Envio, accessible from https://envio.dev

  • You means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable.

Collecting and Using Your Personal Data​

Types of Data Collected​

Personal Data​

While using Our Service, We may ask You to provide Us with certain personally identifiable information that can be used to contact or identify You. Personally identifiable information may include, but is not limited to:

  • Email address

  • First name and last name

  • Usage Data

Usage Data​

Usage Data is collected automatically when using the Service.

Usage Data may include information such as Your Device's Internet Protocol address (e.g. IP address), browser type, browser version, the pages of our Service that You visit, the time and date of Your visit, the time spent on those pages, unique device identifiers and other diagnostic data.

When You access the Service by or through a mobile device, We may collect certain information automatically, including, but not limited to, the type of mobile device You use, Your mobile device's unique ID, the IP address of Your mobile device, Your mobile operating system, the type of mobile Internet browser You use, unique device identifiers and other diagnostic data.

We may also collect information that Your browser sends whenever You visit our Service or when You access the Service by or through a mobile device.

Information from Third-Party Social Media Services​

The Company allows You to create an account and log in to use the Service through the following Third-party Social Media Services:

Github

If You decide to register through or otherwise grant us access to a Third-Party Social Media Service, We may collect Personal data that is already associated with Your Third-Party Social Media Service's account, such as Your name and Your email address.

You may also have the option of sharing additional information with the Company through Your Third-Party Social Media Service's account. If You choose to provide such information and Personal Data, during registration or otherwise, You are giving the Company permission to use, share, and store it in a manner consistent with this Privacy Policy.

Tracking Technologies and Cookies​

We use Cookies and similar tracking technologies to track the activity on Our Service and store certain information. Tracking technologies used are beacons, tags, and scripts to collect and track information and to improve and analyze Our Service. The technologies We use may include:

  • Cookies or Browser Cookies. A cookie is a small file placed on Your Device. You can instruct Your browser to refuse all Cookies or to indicate when a Cookie is being sent. However, if You do not accept Cookies, You may not be able to use some parts of our Service. Unless You have adjusted Your browser setting so that it will refuse Cookies, our Service may use Cookies.

  • Web Beacons. Certain sections of our Service and our emails may contain small electronic files known as web beacons (also referred to as clear gifs, pixel tags, and single-pixel gifs) that permit the Company, for example, to count users who have visited those pages or opened an email and for other related website statistics (for example, recording the popularity of a certain section and verifying system and server integrity).

Cookies can be "Persistent" or "Session" Cookies. Persistent Cookies remain on Your personal computer or mobile device when You go offline, while Session Cookies are deleted as soon as You close Your web browser.

We use both Session and Persistent Cookies for the purposes set out below:

  • Necessary / Essential Cookies

    Type: Session Cookies

    Administered by: Us

    Purpose: These Cookies are essential to provide You with services available through the Website and to enable You to use some of its features. They help to authenticate users and prevent fraudulent use of user accounts. Without these Cookies, the services that You have asked for cannot be provided, and We only use these Cookies to provide You with those services.

  • Cookies Policy / Notice Acceptance Cookies

    Type: Persistent Cookies

    Administered by: Us

    Purpose: These Cookies identify if users have accepted the use of cookies on the Website.

  • Functionality Cookies

    Type: Persistent Cookies

    Administered by: Us

    Purpose: These Cookies allow us to remember choices You make when You use the Website, such as remembering your login details or language preference. The purpose of these cookies is to provide You with a more personal experience and to avoid having to re-enter your preferences every time You use the Website.

    For more information about the cookies we use and your choices regarding cookies, please visit our Cookies Policy or the Cookies section of our Privacy Policy.

By visiting this site, You consent to the use of Cookies.

Use of Your Personal Data​

The Company may use Personal Data for the following purposes:

  • To provide and maintain our Service, including monitoring the usage of our Service.

  • To manage Your Account: to manage Your registration as a user of the Service. The Personal Data You provide can give You access to different functionalities of the Service that are available to You as a registered user.

  • For the performance of a contract: the development, compliance, and undertaking of the purchase contract for the products, items, or services You have purchased or of any other contract with Us through the Service.

  • To contact You: To contact You by email, telephone calls, SMS, or other equivalent forms of electronic communication, such as a mobile application's push notifications regarding updates or informative communications related to the functionalities, products, or contracted services, including the security updates, when necessary or reasonable for their implementation.

  • To provide You with news, special offers, and general information about other goods, services, and events that we offer that are similar to those that you have already purchased or enquired about, unless You have opted not to receive such information.

  • To manage Your requests: To attend and manage Your requests to Us.

  • For business transfers: We may use Your information to evaluate or conduct a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of Our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which Personal Data held by Us about our Service users is among the assets transferred.

  • For other purposes: We may use Your information for other purposes, such as data analysis, identifying usage trends, determining the effectiveness of our promotional campaigns, and evaluating and improving our Service, products, services, marketing, and your experience.

We may share Your personal information in the following situations:

  • With Service Providers: We may share Your personal information with Service Providers to monitor and analyze the use of our Service, to contact You.

  • For business transfers: We may share or transfer Your personal information in connection with, or during negotiations of, any merger, sale of Company assets, financing, or acquisition of all or a portion of Our business to another company.

  • With Affiliates: We may share Your information with Our affiliates, in which case we will require those affiliates to honor this Privacy Policy. Affiliates include Our parent company and any other subsidiaries, joint venture partners, or other companies that We control or that are under common control with Us.

  • With business partners: We may share Your information with Our business partners to offer You certain products, services, or promotions.

  • With other users: When You share personal information or otherwise interact in public areas with other users, such information may be viewed by all users and may be publicly distributed outside. If You interact with other users or register through a Third-Party Social Media Service, Your contacts on the Third-Party Social Media Service may see Your name, profile, pictures and description of Your activity. Similarly, other users will be able to view descriptions of Your activity, communicate with You, and view Your profile.

  • With Your consent: We may disclose Your personal information for any other purpose with Your consent.

Retention of Your Personal Data​

The Company will retain Your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use Your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies.

The Company will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of Our Service, or We are legally obligated to retain this data for longer time periods.

Transfer of Your Personal Data​

Your information, including Personal Data, is processed at the Company's operating offices and in any other places where the parties involved in the processing are located. It means that this information may be transferred to, and maintained on, computers located outside of Your state, province, country, or other governmental jurisdiction where the data protection laws may differ from those of Your jurisdiction.

Your consent to this Privacy Policy followed by Your submission of such information represents Your agreement to that transfer.

The Company will take all steps reasonably necessary to ensure that Your data is treated securely and in accordance with this Privacy Policy and no transfer of Your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of Your data and other personal information.

Delete Your Personal Data​

You have the right to delete or request that We assist in deleting the Personal Data that We have collected about You.

Our Service may give You the ability to delete certain information about You from within the Service.

You may also contact Us to request access to, correct, or delete any personal information that You have provided to Us.

Please note, however, that We may need to retain certain information when we have a legal obligation or lawful basis to do so.

Disclosure of Your Personal Data​

Business Transactions​

If the Company is involved in a merger, acquisition, or asset sale, Your Personal Data may be transferred. We will provide notice before Your Personal Data is transferred and becomes subject to a different Privacy Policy.

Law enforcement​

Under certain circumstances, the Company may be required to disclose Your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. a court or a government agency).

The Company may disclose Your Personal Data in the good faith belief that such action is necessary to:

  • Protect and defend the rights or property of the Company

  • Prevent or investigate possible wrongdoing in connection with the Service

  • Protect the personal safety of Users of the Service or the public

  • Protect against legal liability

Security of Your Personal Data​

The security of Your Personal Data is important to Us, but remember that no method of transmission over the Internet, or method of electronic storage, is 100% secure. While We strive to use commercially acceptable means to protect Your Personal Data, We cannot guarantee its absolute security.

Children's Privacy​

Our Service does not address anyone under the age of 13. We do not knowingly collect personally identifiable information from anyone under the age of 13. If You are a parent or guardian and You are aware that Your child has provided Us with Personal Data, please contact Us. If We become aware that We have collected Personal Data from anyone under the age of 13 without verification of parental consent, We take steps to remove that information from Our servers.

Links to Other Websites​

Our Service may contain links to other websites that are not operated by Us. If You click on a third-party link, You will be directed to that third-party's site. We strongly advise You to review the Privacy Policy of every site You visit.

We have no control over and assume no responsibility for the content, privacy policies, or practices of any third party sites or services.

Changes to this Privacy Policy​

We may update Our Privacy Policy from time to time. When We do, the updated Privacy Policy will be posted on this page.

You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.

Contact Us​

If You have any questions about this Privacy Policy, You can contact Us by email at hello@envio.dev or on our Discord.


0G Newton Testnet​

File: supported-networks/0g-newton-testnet.md

Indexing 0G Newton Testnet Data with Envio via RPC​

warning

RPC as a source is not as fast as HyperSync. It is important in production to source RPC data from reliable sources. We recommend our partners at drpc.org. Below, we have provided a set of free endpoints sourced from chainlist.org. We don't recommend using these in production as they may be rate limited. We recommend tweaking the RPC config to accommodate potential rate limiting.

We suggest getting the latest from chainlist.org.

Overview​

Envio supports 0G Newton Testnet through an RPC-based indexing approach. This method allows you to ingest blockchain data via an RPC endpoint by setting the RPC configuration.
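
To make this concrete, here is a minimal sketch of what the network entry in config.yaml could look like when sourcing data over RPC. The rpc_config field reflects Envio's standard RPC configuration as we understand it, and the placeholders (CHAIN_ID, START_BLOCK_NUMBER, the endpoint URL, contract name, and events) are illustrative values you should replace with the details from chainlist.org and your own contracts.

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: CHAIN_ID # 0G Newton Testnet chain ID (see chainlist.org)
    rpc_config:
      url: https://YOUR_RPC_ENDPOINT # Use a reliable provider (e.g. drpc.org) in production
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event

Once the RPC endpoint is set, the rest of the workflow (defining a schema and writing event handlers) is the same as for any other network.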


Abstract​

File: supported-networks/abstract.md

Indexing Abstract Data with Envio​

| Field | Value |
| --- | --- |
| Abstract Chain ID | 2741 |
| HyperSync URL Endpoint | https://abstract.hypersync.xyz or https://2741.hypersync.xyz |
| HyperRPC URL Endpoint | https://abstract.rpc.hypersync.xyz or https://2741.rpc.hypersync.xyz |

Defining Network Configurations​

name: IndexerName # Specify indexer name
description: Indexer Description # Include indexer description
networks:
  - id: 2741 # Abstract
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress1"
          - "0xYourContractAddress2"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event
          - event: Event

With these steps completed, your application will be set to efficiently index Abstract data using Envio's blockchain indexer.

For more information on how to set up your config, define a schema, and write event handlers, refer to the guides section in our documentation.

Support​

Can't find what you're looking for or need support? Reach out to us on Discord; we're always happy to help!


Aleph Zero EVM​

File: supported-networks/aleph-zero-evm.md

Indexing Aleph Zero EVM Data with Envio via RPC​

warning

RPC as a source is not as fast as HyperSync. It is important in production to source RPC data from reliable sources. We recommend our partners at drpc.org. Below, we have provided a set of free endpoints sourced from chainlist.org. We don't recommend using these in production as they may be rate limited. We recommend tweaking the RPC config to accommodate potential rate limiting.

We suggest getting the latest from chainlist.org.

Overview​

Envio supports Aleph Zero EVM through an RPC-based indexing approach. This method allows you to ingest blockchain data via an RPC endpoint by setting the RPC configuration.
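
Because the free endpoints listed on chainlist.org may be rate limited, it can help to tune how aggressively the indexer queries the RPC endpoint. The sketch below illustrates the idea; the tuning fields under rpc_config (initial_block_interval, backoff_millis, interval_ceiling) are taken from Envio's RPC configuration options as we recall them, and both they and the placeholder values should be verified against the configuration reference before use.

networks:
  - id: CHAIN_ID # Aleph Zero EVM chain ID (see chainlist.org)
    rpc_config:
      url: https://YOUR_RPC_ENDPOINT # Prefer a dedicated provider such as drpc.org in production
      initial_block_interval: 1000 # Assumed field: blocks requested per query at the start
      backoff_millis: 5000 # Assumed field: wait this long after a rate-limited or failed request
      interval_ceiling: 5000 # Assumed field: upper bound on the block range per query
    start_block: START_BLOCK_NUMBER # Specify the starting block
    contracts:
      - name: ContractName
        address:
          - "0xYourContractAddress"
        handler: ./src/EventHandlers.ts
        events:
          - event: Event # Specify event

Smaller block intervals and longer backoffs trade indexing speed for fewer rate-limit errors, so adjust them to match what your endpoint tolerates.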


Altlayer OP Demo Testnet​

File: supported-networks/altlayer-op-demo-testnet.md

Indexing Altlayer OP Demo Testnet Data with Envio via RPC​

warning

RPC as a source is not as fast as HyperSync. It is important in production to source RPC data from reliable sources. We recommend our partners at drpc.org. Below, we have provided a set of free endpoints sourced from chainlist.org. We don't recommend using these in production as they may be rate limited. We recommend tweaking the RPC config to accommodate potential rate limiting.

We suggest getting the latest from chainlist.org.

Overview​

Envio supports Altlayer OP Demo Testnet through an RPC-based indexing approach. This method allows you to ingest blockchain data via an RPC endpoint by setting the RPC configuration.


Ancient8​

File: supported-networks/ancient8.md

Indexing Ancient8 Data with Envio via RPC​

warning

RPC as a source is not as fast as HyperSync. It is important in production to source RPC data from reliable sources. We recommend our partners at drpc.org. Below, we have provided a set of free endpoints sourced from chainlist.org. We don't recommend using these in production as they may be rate limited. We recommend tweaking the RPC config to accommodate potential rate limiting.

We suggest getting the latest from chainlist.org.

Overview​

Envio supports Ancient8 through an RPC-based indexing approach. This method allows you to ingest blockchain data via an RPC endpoint by setting the RPC configuration.