
Custom Indexer

You can build custom indexers using the IOTA micro-data ingestion framework. To create an indexer, you subscribe to a checkpoint stream with full checkpoint content. This stream can be one of the publicly available streams from IOTA, one that you set up in your local environment, or a combination of the two.

Establishing a custom indexer helps improve latency, allows you to prune the data of your IOTA full node, and enables efficient assembly of checkpoint data.

Interface and Data Format

To use the framework, implement a basic interface:

#[async_trait]
trait Worker: Send + Sync {
    type Error: Debug + Display;
    type Message: Send + Sync;

    async fn process_checkpoint(&self, checkpoint: Arc<CheckpointData>) -> Result<Self::Message, Self::Error>;
}

In this example, the CheckpointData struct represents the full checkpoint content. The struct contains the checkpoint summary and contents, as well as detailed information about each individual transaction.

Checkpoint Stream Sources

Data ingestion for your indexer supports several checkpoint stream sources.

Remote Reader

The most straightforward option is to subscribe to a remote store of checkpoint contents. The IOTA Foundation provides the following endpoints for downloading checkpoint data:

Historical Checkpoint Data

For syncing historical data up to the tip of the network:

  • Devnet: https://checkpoints.devnet.iota.cafe/ingestion/historical
  • Testnet: https://checkpoints.testnet.iota.cafe/ingestion/historical
  • Mainnet: https://checkpoints.mainnet.iota.cafe/ingestion/historical

Live Checkpoint Streaming (Optional)

For real-time streaming of current epoch checkpoints only:

  • Devnet: https://checkpoints.devnet.iota.cafe/ingestion/live
  • Testnet: https://checkpoints.testnet.iota.cafe/ingestion/live
  • Mainnet: https://checkpoints.mainnet.iota.cafe/ingestion/live

Historical vs Live Endpoints

| Feature | Historical Endpoint | Live Endpoint |
| --- | --- | --- |
| Format | Batches multiple checkpoints into single files | Individual checkpoint files |
| Best for | Initial sync from genesis to near the network tip | Real-time ingestion at the network tip |
| Latency behavior | Near-zero latency during historical sync (processes batches as fast as possible); variable latency at the network tip (waits for batch completion, typically ~1000 checkpoints or an epoch change) | Minimal latency (checkpoints are published immediately) |
| Data coverage | Complete data coverage from genesis | Current epoch only (older checkpoints are automatically purged) |
| Optimization | Throughput-focused for bulk ingestion | Low-latency processing |

  1. Use the historical endpoint as the primary source to guarantee complete data coverage from genesis.
  2. The live endpoint is optional, for applications that need real-time access to the very latest checkpoints.
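The data-coverage rule above can be reasoned about with a short sketch. This is illustrative only, not framework code, and the function name is hypothetical: because the live store keeps only current-epoch checkpoints, any checkpoint from an earlier epoch can only be served by the historical store.

```rust
/// Illustrative only: decide which endpoint could serve checkpoint `seq`,
/// given the first checkpoint sequence number of the current epoch.
fn endpoint_for(seq: u64, current_epoch_start: u64) -> &'static str {
    if seq >= current_epoch_start {
        // Present in both stores; the live endpoint offers lower latency at the tip.
        "live"
    } else {
        // Already purged from the live store; only the batched historical store has it.
        "historical"
    }
}

fn main() {
    // Epoch started at checkpoint 100: older checkpoints must come from the
    // historical store, current-epoch ones can stream from the live endpoint.
    assert_eq!(endpoint_for(10, 100), "historical");
    assert_eq!(endpoint_for(150, 100), "live");
}
```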

In this example, we'll explore a simple custom indexer that can ingest data from multiple sources: the historical and live endpoints, or directly from a local fullnode REST API. The RemoteUrl enum provides flexible configuration options for these different data sources, including hybrid configurations that combine multiple sources.

// Copyright (c) Mysten Labs, Inc.
// Modifications Copyright (c) 2024 IOTA Stiftung
// SPDX-License-Identifier: Apache-2.0

use std::{env, sync::Arc};

use anyhow::Result;
use async_trait::async_trait;
use iota_data_ingestion_core::{
    DataIngestionMetrics, FileProgressStore, IndexerExecutor, ReaderOptions, Worker, WorkerPool,
    reader::v2::{CheckpointReaderConfig, RemoteUrl},
};
use iota_types::full_checkpoint_content::CheckpointData;
use prometheus::Registry;

struct CustomWorker;

#[async_trait]
impl Worker for CustomWorker {
    type Message = ();
    type Error = anyhow::Error;

    async fn process_checkpoint(&self, checkpoint: Arc<CheckpointData>) -> Result<Self::Message> {
        // custom processing logic
        println!(
            "Processing checkpoint: {}",
            checkpoint.checkpoint_summary.to_string()
        );
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Number of workers to process checkpoints in parallel.
    let concurrency = 5;
    let metrics = DataIngestionMetrics::new(&Registry::new());
    let progress_file_path =
        env::var("PROGRESS_FILE_PATH").unwrap_or("/tmp/remote_reader_progress".to_string());
    // Save the last processed checkpoint to a file.
    let progress_store = FileProgressStore::new(progress_file_path).await?;

    let mut executor = IndexerExecutor::new(
        progress_store,
        1, // should match the total number of registered workers
        metrics,
        Default::default(),
    );
    let worker_pool = WorkerPool::new(
        CustomWorker,
        "remote_reader".to_string(),
        concurrency,
        Default::default(),
    );

    executor.register(worker_pool).await?;

    let config = CheckpointReaderConfig {
        // It's also possible to start a fullnode locally and use the REST API to sync checkpoint
        // data:
        //
        // remote_store_url: Some(RemoteUrl::Fullnode("http://127.0.0.1:9000/api/v1".to_string())),
        remote_store_url: Some(RemoteUrl::HybridHistoricalStore {
            historical_url: "https://checkpoints.mainnet.iota.cafe/ingestion/historical".into(),
            live_url: Some("https://checkpoints.mainnet.iota.cafe/ingestion/live".into()),
        }),
        reader_options: ReaderOptions::default(),
        ..Default::default()
    };
    executor.run_with_config(config).await?;
    Ok(())
}

Local Reader

Colocate the data ingestion daemon with a full node and enable checkpoint dumping on the latter to set up a local stream source. Once enabled, the full node starts dumping executed checkpoints as files to a local directory, and the data ingestion daemon subscribes to changes in that directory through an inotify-like mechanism. This approach minimizes ingestion latency (checkpoints are processed immediately after execution on the full node) and removes the dependency on an externally managed bucket.
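To make the directory-subscription idea concrete, here is a simplified, self-contained sketch of what such a reader does conceptually. The real framework uses an inotify-like watcher rather than polling, and the `<sequence>.chk` file naming below is an assumption for illustration only:

```rust
use std::{collections::BTreeSet, fs, path::Path};

/// Illustrative sketch: scan the checkpoint dump directory and return the
/// sequence numbers of files newer than the last processed watermark.
/// (The actual framework reacts to filesystem events instead of polling.)
fn new_checkpoints(dir: &Path, watermark: u64) -> std::io::Result<BTreeSet<u64>> {
    let mut found = BTreeSet::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path();
        // Assume files are named `<sequence>.chk`, e.g. `42.chk` (hypothetical).
        if let Some(stem) = path.file_stem().and_then(|s| s.to_str()) {
            if let Ok(seq) = stem.parse::<u64>() {
                if seq > watermark {
                    found.insert(seq);
                }
            }
        }
    }
    Ok(found)
}

fn main() -> std::io::Result<()> {
    // Simulate a full node dumping three checkpoints into a local directory.
    let dir = std::env::temp_dir().join("chk_sketch");
    fs::create_dir_all(&dir)?;
    for seq in [40u64, 41, 42] {
        fs::write(dir.join(format!("{seq}.chk")), b"")?;
    }
    // With a watermark of 40, only 41 and 42 are picked up as new.
    let fresh = new_checkpoints(&dir, 40)?;
    println!("{fresh:?}");
    Ok(())
}
```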

To enable, add the following to your full node configuration file:

checkpoint-executor-config:
  checkpoint-execution-max-concurrency: 200
  local-execution-timeout-sec: 30
  data-ingestion-dir: <path to a local directory>

// Copyright (c) Mysten Labs, Inc.
// Modifications Copyright (c) 2024 IOTA Stiftung
// SPDX-License-Identifier: Apache-2.0

use std::{env, path::PathBuf, sync::Arc};

use anyhow::Result;
use async_trait::async_trait;
use iota_data_ingestion_core::{
    DataIngestionMetrics, FileProgressStore, IndexerExecutor, ReaderOptions, Worker, WorkerPool,
    reader::v2::CheckpointReaderConfig,
};
use iota_types::full_checkpoint_content::CheckpointData;
use prometheus::Registry;

struct CustomWorker;

#[async_trait]
impl Worker for CustomWorker {
    type Message = ();
    type Error = anyhow::Error;

    async fn process_checkpoint(&self, checkpoint: Arc<CheckpointData>) -> Result<Self::Message> {
        // custom processing logic
        println!(
            "Processing Local checkpoint: {}",
            checkpoint.checkpoint_summary.to_string()
        );
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Number of workers to process checkpoints in parallel.
    let concurrency = 5;
    let metrics = DataIngestionMetrics::new(&Registry::new());
    let progress_file_path =
        env::var("PROGRESS_FILE_PATH").unwrap_or("/tmp/local_reader_progress".to_string());
    // Save the last processed checkpoint to a file.
    let progress_store = FileProgressStore::new(progress_file_path).await?;
    let mut executor = IndexerExecutor::new(
        progress_store,
        1, // should match the total number of registered workers
        metrics,
        Default::default(),
    );
    let worker_pool = WorkerPool::new(
        CustomWorker,
        "local_reader".to_string(),
        concurrency,
        Default::default(),
    );

    executor.register(worker_pool).await?;

    let config = CheckpointReaderConfig {
        ingestion_path: Some(PathBuf::from("./chk")),
        reader_options: ReaderOptions::default(),
        ..Default::default()
    };

    executor.run_with_config(config).await?;
    Ok(())
}

Let's highlight a couple of lines of code:

let worker_pool = WorkerPool::new(
    CustomWorker,
    "local_reader".to_string(),
    concurrency,
    Default::default(),
);
executor.register(worker_pool).await?;

The data ingestion executor can run multiple workflows simultaneously. For each workflow, you need to create a separate worker pool and register it with the executor. The WorkerPool requires an instance of the Worker trait, the name of the workflow (used for tracking the progress of the flow in the progress store and in metrics), the concurrency, and a pool configuration (left at its default in these examples).

The concurrency parameter specifies how many threads the workflow uses. Having a concurrency value greater than 1 is helpful when tasks are idempotent and can be processed in parallel and out of order. The executor only updates the progress/watermark to a certain checkpoint when all preceding checkpoints are processed.
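The watermark rule described above can be sketched in a few lines of self-contained Rust. This is an illustration of the described behavior, not the framework's actual implementation, and `advance_watermark` is a hypothetical name:

```rust
use std::collections::BTreeSet;

/// Illustrative sketch: with concurrency > 1, checkpoints may finish out of
/// order, but the saved watermark only advances over a contiguous prefix of
/// processed checkpoints.
fn advance_watermark(watermark: u64, done: &mut BTreeSet<u64>) -> u64 {
    let mut next = watermark;
    // Consume processed checkpoints while they form an unbroken sequence.
    while done.remove(&(next + 1)) {
        next += 1;
    }
    next
}

fn main() {
    // Checkpoints 2 and 4 finished before 1 and 3 (out-of-order workers).
    let mut done: BTreeSet<u64> = [2, 4].into_iter().collect();
    // Checkpoint 1 is still missing, so the watermark cannot move.
    assert_eq!(advance_watermark(0, &mut done), 0);
    // Once 1 completes, the watermark jumps over the contiguous run 1..=2,
    // while 4 still waits for 3.
    done.insert(1);
    assert_eq!(advance_watermark(0, &mut done), 2);
}
```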

Hybrid Mode

Specify both a local and a remote store as a fallback to ensure a constant data flow. The framework always prioritizes locally available checkpoint data over remote data. This is useful when you want to start using your own full node for data ingestion but still need to backfill historical data, or simply want a failover.

// Copyright (c) Mysten Labs, Inc.
// Modifications Copyright (c) 2024 IOTA Stiftung
// SPDX-License-Identifier: Apache-2.0

use std::{env, path::PathBuf, sync::Arc};

use anyhow::Result;
use async_trait::async_trait;
use iota_data_ingestion_core::{
    DataIngestionMetrics, FileProgressStore, IndexerExecutor, ReaderOptions, Worker, WorkerPool,
    reader::v2::{CheckpointReaderConfig, RemoteUrl},
};
use iota_types::full_checkpoint_content::CheckpointData;
use prometheus::Registry;

struct CustomWorker;

#[async_trait]
impl Worker for CustomWorker {
    type Message = ();
    type Error = anyhow::Error;

    async fn process_checkpoint(&self, checkpoint: Arc<CheckpointData>) -> Result<Self::Message> {
        // custom processing logic
        println!(
            "Processing checkpoint: {}",
            checkpoint.checkpoint_summary.to_string()
        );
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Number of workers to process checkpoints in parallel.
    let concurrency = 5;
    let metrics = DataIngestionMetrics::new(&Registry::new());
    let progress_file_path =
        env::var("PROGRESS_FILE_PATH").unwrap_or("/tmp/remote_reader_progress".to_string());
    // Save the last processed checkpoint to a file.
    let progress_store = FileProgressStore::new(progress_file_path).await?;

    let mut executor = IndexerExecutor::new(
        progress_store,
        1, // should match the total number of registered workers
        metrics,
        Default::default(),
    );
    let worker_pool = WorkerPool::new(
        CustomWorker,
        "hybrid_reader".to_string(),
        concurrency,
        Default::default(),
    );

    executor.register(worker_pool).await?;

    let config = CheckpointReaderConfig {
        ingestion_path: Some(PathBuf::from("./chk")),
        remote_store_url: Some(RemoteUrl::HybridHistoricalStore {
            historical_url: "https://checkpoints.mainnet.iota.cafe/ingestion/historical".into(),
            live_url: Some("https://checkpoints.mainnet.iota.cafe/ingestion/live".into()),
        }),
        reader_options: ReaderOptions::default(),
        ..Default::default()
    };
    executor.run_with_config(config).await?;
    Ok(())
}

Manifest

The Cargo.toml manifest file for the custom indexer:

[package]
name = "custom-indexer"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"

[dependencies]
# external dependencies
anyhow = "1.0"
async-trait = "0.1"
prometheus = "0.14"
tokio = "1.46.1"
tokio-util = "0.7"

# internal dependencies
iota-data-ingestion-core = { git = "https://github.com/iotaledger/iota", package = "iota-data-ingestion-core" }
iota-types = { git = "https://github.com/iotaledger/iota", package = "iota-types" }

[[bin]]
name = "local_reader"
path = "local_reader.rs"

[[bin]]
name = "remote_reader"
path = "remote_reader.rs"

[[bin]]
name = "hybrid_reader"
path = "hybrid_reader.rs"

Source Code

You can find the full source code in the IOTA repo.