NAME
nebu - Stellar indexing primitives as Unix-composable pipelines
SYNOPSIS
nebu list
nebu install <processor>
nebu fetch [--mode rpc|archive] <start-ledger> <end-ledger>
DESCRIPTION
nebu packages Stellar's modern indexing primitives — RPC-backed ledger access, the ingest SDK, XDR-native extraction — into a stable Go contract, standalone processor binaries, and Unix-composable pipelines.
The nebu binary is intentionally minimal. It discovers processors via a registry.yaml, installs them with go install, and can fetch raw ledger XDR. Actual event extraction happens in standalone processor binaries that speak newline-delimited JSON over stdout.
Pipelines are composed with the shell's own pipe. There is no daemon, no orchestrator, no YAML pipeline file.
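Because every stage speaks NDJSON over stdin/stdout, a pipeline can be dry-run with hand-written events before pointing it at a live origin. A minimal sketch (the event shape here is illustrative, not the real token-transfer schema):

```shell
# Simulate a two-stage pipeline: two hand-written NDJSON events stand in
# for an origin's output; jq filters one stage, wc counts the survivors.
printf '%s\n' \
  '{"transfer":{"assetCode":"USDC"}}' \
  '{"transfer":{"assetCode":"XLM"}}' \
  | jq -c 'select(.transfer.assetCode == "USDC")' \
  | wc -l
# → 1
```

Swap the printf for a real origin binary and the pipeline behaves identically.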
COMMANDS
list
Show all processors known to the embedded registry and the community registry. Lists name, type, location, and description.
$ nebu list
install <processor>
Resolve the processor in the registry and install its binary to $GOPATH/bin via go install. Works for built-in examples and any community processor whose Go module installs cleanly.
$ nebu install token-transfer
fetch <start-ledger> <end-ledger>
Fetch raw LedgerCloseMeta XDR for a ledger range and write it to stdout. Decouples fetching from processing — capture once, replay through many processors.
Supports two modes:
- --mode rpc · (default) stream from a Stellar RPC endpoint
- --mode archive · read from a GCS/S3 ledger archive (e.g. the public aws-public-blockchain bucket)
$ nebu fetch 60200000 60200100 > ledgers.xdr
$ cat ledgers.xdr | token-transfer | jq
<processor> [flags]
Run an installed processor. Origins read ledgers (from RPC or stdin) and emit typed events as NDJSON on stdout. Transforms and sinks read NDJSON from stdin.
$ token-transfer --start-ledger 60200000 --end-ledger 60200001
$ token-transfer --start-ledger 60200000 --follow
<processor> --describe-json
Emit a JSON-Schema-valid manifest of the processor's flags, inputs, outputs, and schema IDs. This is the agent-legible contract: tooling and LLMs can discover a processor's capabilities without reading its source.
$ token-transfer --describe-json | jq .outputs
OPTIONS
ENVIRONMENT
NEBU_RPC_URL — default Stellar RPC endpoint. Default: https://archive-rpc.lightsail.network.
NEBU_NETWORK — mainnet, testnet, or passphrase. Default: mainnet.
NEBU_RPC_AUTH — authorization header value for premium RPC endpoints (e.g. Api-Key YOUR_KEY).
NEBU_MODE — rpc or archive.
NEBU_DATASTORE_TYPE — GCS or S3 (archive mode).
NEBU_BUCKET_PATH — bucket path to ledger archive (archive mode).
NEBU_REGION — S3 region (archive mode, S3 only).
NEBU_BUFFER_SIZE — archive fetch cache size. Default: 100.
NEBU_NUM_WORKERS — archive parallel fetch workers. Default: 10.
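For repeated archive-mode runs, the variables above can be exported once per shell instead of repeating flags on every fetch. A hypothetical configuration, with values mirroring the S3 example under EXAMPLES:

```shell
# Illustrative archive-mode environment; bucket path and region match the
# public aws-public-blockchain example elsewhere on this page.
export NEBU_MODE=archive
export NEBU_DATASTORE_TYPE=S3
export NEBU_BUCKET_PATH="aws-public-blockchain/v1.1/stellar/ledgers/pubnet"
export NEBU_REGION=us-east-2
export NEBU_NUM_WORKERS=20    # widen the parallel fetch pool for large backfills
```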
FILES
registry.yaml — processor registry (repo root or embedded in the CLI).
description.yml — per-processor manifest consumed by nebu install.
$GOPATH/bin/ — install target for processor binaries.
EXIT STATUS
0 — success.
1 — general error (see stderr).
2 — bad arguments or unknown processor.
3 — RPC / archive fetch failed.
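The distinct exit codes make retry logic straightforward in wrapper scripts. A sketch, with a stub function standing in for the real nebu binary so the branch logic runs on its own (drop the stub to wrap the actual CLI):

```shell
# Exit-status handling sketch. The stub pretends the RPC/archive fetch failed.
nebu() { return 3; }
nebu fetch 60200000 60200100 > /dev/null
case $? in
  0) echo "fetched ok" ;;
  1) echo "general error, see stderr" ;;
  2) echo "bad arguments or unknown processor" ;;
  3) echo "fetch failed, safe to retry" ;;
esac
# → fetch failed, safe to retry
```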
EXAMPLES
$ token-transfer --start-ledger 60200000 --end-ledger 60200100
$ token-transfer --start-ledger 60200000 --follow \
| jq -c 'select(.transfer.assetCode == "USDC")' \
| dedup --key meta.txHash \
| json-file-sink --out usdc.jsonl
$ token-transfer --start-ledger 60200000 --end-ledger 60200100 \
| duckdb -c "
SELECT
json_extract_string(transfer, '\$.assetCode') AS asset,
COUNT(*) AS transfers,
SUM(CAST(json_extract_string(transfer, '\$.amount') AS DOUBLE)) AS volume
FROM read_json('/dev/stdin')
WHERE transfer IS NOT NULL
GROUP BY asset
ORDER BY volume DESC
"
$ token-transfer --start-ledger 60200000 --follow \
| tee >(nats-sink --subject "stellar.live" --jetstream) \
| tee >(json-file-sink --out archive.jsonl) \
| jq -r '"Ledger \(.meta.ledgerSequence): \(.transfer.amount)"'
No AWS credentials required — the SDK falls back to anonymous access. Pubnet only; testnet uses a different (non-Galexie) layout in this bucket.
$ nebu fetch --mode archive \
--datastore-type S3 \
--bucket-path "aws-public-blockchain/v1.1/stellar/ledgers/pubnet" \
--region us-east-2 \
62080000 62081000 \
| gzip > historical.xdr.gz
SCHEMA
Every event carries two self-describing fields:
_schema — schema identifier (e.g. nebu.token_transfer.v1).
_nebu_version — nebu version that produced the event.
Breaking changes bump the schema version. Non-breaking additions (new fields, new event types) do not.
Filter by _schema in your downstream queries to stay compatible across version rollouts.
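For example, a consumer can pin itself to one schema version so that events carrying a bumped _schema are dropped rather than misparsed (sample events below are illustrative):

```shell
# Keep only events whose _schema matches the version this consumer expects.
printf '%s\n' \
  '{"_schema":"nebu.token_transfer.v1","transfer":{"amount":"5"}}' \
  '{"_schema":"nebu.token_transfer.v2","transfer":{"amount":"7"}}' \
  | jq -c 'select(._schema == "nebu.token_transfer.v1")'
# → only the v1 event survives
```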
SEE ALSO
nebu(7) — overview, architecture, quickstart.
token-transfer(1), contract-events(1), json-file-sink(1), nats-sink(1), postgres-sink(1)
flowctl(1) — production orchestrator for nebu-style pipelines.
jq(1), duckdb(1), tee(1), pipe(7)