# QUICKSTART
From install to a live JSON stream of Stellar events — in two minutes.
This is the "I want to query Stellar data" path. No Go required after install. If you want to build a processor, head to BUILD →.
## 01_INSTALL
Pick the path that matches your environment.
```
go install github.com/withObsrvr/nebu/cmd/nebu@latest
export PATH="$HOME/go/bin:$PATH"
nebu --version
```
Requires Go >= 1.22. The canonical path.
```
curl -sSL https://nebu.withobsrvr.com/install.sh | sh
```
Pulls a pre-built binary from the latest GitHub release. No toolchain required.
```
docker run --rm withobsrvr/nebu:latest \
  token-transfer --start-ledger 60200000 --end-ledger 60200001
```
Ephemeral — tire-kicking only. Not recommended for pipelines.
## 02_FIRST_STREAM
Install a processor, then run it. Each processor is its own binary — nebu install uses go install under the hood and drops the binary into $GOPATH/bin.
```
nebu install token-transfer
token-transfer --start-ledger 60200000 --end-ledger 60200001
```
You'll see newline-delimited JSON on stdout. One object per transfer event:
```json
{
  "_schema": "nebu.token_transfer.v1",
  "_nebu_version": "v0.6.7",
  "meta": {
    "ledgerSequence": 60200000,
    "txHash": "abc...",
    ...
  },
  "transfer": {
    "from": "GA...",
    "to": "GB...",
    "assetCode": "USDC",
    "amount": "1000000"
  }
}
```
The _schema, _nebu_version, and meta fields are nebu metadata (schema, version, ledger info); transfer is the payload data (the transfer itself).
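If you'd rather consume the stream from code than from jq, the NDJSON framing makes that straightforward: one json.loads per line. A minimal Python sketch, using sample values modeled on the event above (the field names are taken from that sample; everything else here is illustrative):

```python
import json

# One NDJSON event as emitted by token-transfer (sample values).
line = ('{"_schema": "nebu.token_transfer.v1", "_nebu_version": "v0.6.7", '
        '"meta": {"ledgerSequence": 60200000, "txHash": "abc"}, '
        '"transfer": {"from": "GA", "to": "GB", "assetCode": "USDC", "amount": "1000000"}}')

event = json.loads(line)
# Check the schema tag before trusting field layout.
assert event["_schema"].startswith("nebu.token_transfer.")

ledger = event["meta"]["ledgerSequence"]   # nebu metadata
transfer = event["transfer"]               # payload data
print(ledger, transfer["assetCode"], transfer["amount"])  # → 60200000 USDC 1000000
```

The meta/payload split is the only structure a consumer needs to know; everything under transfer is processor-specific.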
## 03_FILTER_WITH_JQ
Pipe the stream through jq to keep only the events you care about. Exactly the same ergonomics as any other NDJSON source.
```
token-transfer --start-ledger 60200000 --end-ledger 60200100 \
  | jq -c 'select(.transfer.assetCode == "USDC")'
```
Swap the predicate for any jq expression — filter by issuer, amount threshold, contract address, whatever the schema exposes.
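The same predicate also fits in a few lines of plain Python for environments without jq. A sketch (the filter_transfers helper and the sample lines are illustrative, not part of nebu):

```python
import json

def filter_transfers(lines, asset_code="USDC"):
    """Keep events matching jq's select(.transfer.assetCode == "USDC"),
    re-serialized compactly: one object per line, like jq -c."""
    kept = []
    for line in lines:
        event = json.loads(line)
        if event.get("transfer", {}).get("assetCode") == asset_code:
            kept.append(json.dumps(event, separators=(",", ":")))
    return kept

sample = [
    '{"transfer": {"assetCode": "USDC", "amount": "5"}}',
    '{"transfer": {"assetCode": "XLM", "amount": "9"}}',
]
print(filter_transfers(sample))  # only the USDC event survives
```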
## 04_PERSIST
Write to a file with json-file-sink, or a NATS subject with nats-sink, or both at once with tee.
```
nebu install json-file-sink
token-transfer --start-ledger 60200000 --end-ledger 60200100 \
  | jq -c 'select(.transfer.assetCode == "USDC")' \
  | json-file-sink --out usdc.jsonl
```
```
token-transfer --start-ledger 60200000 --follow \
  | tee >(nats-sink --subject "stellar.live" --jetstream) \
  | tee >(json-file-sink --out archive.jsonl) \
  | jq -r '"L\(.meta.ledgerSequence): \(.transfer.amount)"'
```
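The tee fan-out pattern is just one reader, several writers. A rough Python equivalent for orientation (fan_out, archive_sink, and summary_sink are hypothetical stand-ins for the sink binaries and the jq summary branch):

```python
import json

def fan_out(lines, sinks):
    """Deliver each NDJSON line to every sink, like chained tee >(...) branches."""
    for line in lines:
        for sink in sinks:
            sink(line)

archive = []    # stands in for json-file-sink
summaries = []  # stands in for the jq '"L\(...)"' branch

def archive_sink(line):
    archive.append(line)

def summary_sink(line):
    event = json.loads(line)
    summaries.append(f"L{event['meta']['ledgerSequence']}: {event['transfer']['amount']}")

sample = ['{"meta": {"ledgerSequence": 60200000}, "transfer": {"amount": "1000000"}}']
fan_out(sample, [archive_sink, summary_sink])
print(summaries)  # → ['L60200000: 1000000']
```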
## 05_ANALYZE
duckdb reads NDJSON directly from stdin — no database, no schema, no ETL job. SQL across the stream in seconds.
```
token-transfer --start-ledger 60200000 --end-ledger 60200100 \
  | duckdb -c "
      SELECT
        transfer.assetCode AS asset,
        COUNT(*) AS transfers,
        SUM(CAST(transfer.amount AS DOUBLE)) AS volume
      FROM read_json('/dev/stdin')
      WHERE transfer IS NOT NULL
      GROUP BY asset
      ORDER BY volume DESC"
```
See docs/DUCKDB_COOKBOOK.md for deeper recipes — windowing, joins, exports to Parquet.
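If DuckDB isn't available, the same group-by is a few lines of stdlib Python. A sketch of the equivalent aggregation (volume_by_asset and the sample lines are illustrative; field names follow the event sample above):

```python
import json
from collections import defaultdict

def volume_by_asset(lines):
    """GROUP BY assetCode with COUNT(*) and SUM(amount), ordered by volume DESC."""
    totals = defaultdict(lambda: {"transfers": 0, "volume": 0.0})
    for line in lines:
        transfer = json.loads(line).get("transfer")
        if transfer is None:
            continue  # WHERE transfer IS NOT NULL
        row = totals[transfer["assetCode"]]
        row["transfers"] += 1
        row["volume"] += float(transfer["amount"])
    return sorted(totals.items(), key=lambda kv: kv[1]["volume"], reverse=True)

sample = [
    '{"transfer": {"assetCode": "USDC", "amount": "3"}}',
    '{"transfer": {"assetCode": "XLM", "amount": "10"}}',
    '{"transfer": {"assetCode": "USDC", "amount": "4"}}',
]
print(volume_by_asset(sample))
```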
## 06_FOLLOW_LIVE
Omit --end-ledger or pass --follow to stream continuously — like tail -f for the chain.
```
token-transfer --start-ledger 60200000 --follow \
  | jq -c 'select(.transfer.assetCode == "USDC" and (.transfer.amount | tonumber) > 1000000)'
```
## 07_FETCH_AND_PROCESS_SEPARATELY
Capture raw XDR once with nebu fetch, replay it through as many processors as you like — no repeated RPC calls.
```
nebu fetch 60200000 60200100 > ledgers.xdr

cat ledgers.xdr | token-transfer | jq 'select(.transfer)'
cat ledgers.xdr | contract-events | grep -i swap
cat ledgers.xdr | token-transfer | duckdb -c "SELECT COUNT(*) FROM read_json('/dev/stdin')"
```
For deep history, switch to archive mode against the public aws-public-blockchain S3 bucket — no AWS account required, the AWS SDK falls back to anonymous access automatically:
```
nebu fetch --mode archive \
  --datastore-type S3 \
  --bucket-path "aws-public-blockchain/v1.1/stellar/ledgers/pubnet" \
  --region us-east-2 \
  62080000 62080100 > ledgers.xdr

cat ledgers.xdr | token-transfer | jq 'select(.transfer)'
```
See nebu(1) › COMMANDS → for the full flag set.
## 08_TROUBLESHOOT
"command not found" after install: your $GOPATH/bin isn't on $PATH. Add export PATH="$HOME/go/bin:$PATH" to your shell rc.

Rate limits or slow ingestion: the default endpoint is a shared archive RPC. For sustained ingestion, use an authenticated endpoint: export NEBU_RPC_AUTH="Api-Key YOUR_KEY", then pass --rpc-url.

Old ledgers missing: RPC only serves recent ledgers. For deep history, switch to nebu fetch --mode archive against a GCS/S3 archive.

Downstream JSON parse errors: use jq -c (compact output) in pipelines — the default pretty-printed output breaks NDJSON framing for downstream consumers.
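Why the jq -c rule matters: NDJSON consumers parse one object per line, and pretty-printed output splits a single object across many lines. A quick Python demonstration of a line-oriented consumer hitting each form:

```python
import json

event = {"transfer": {"assetCode": "USDC"}}

compact = json.dumps(event)           # what jq -c emits: one object, one line
pretty = json.dumps(event, indent=2)  # what default jq emits: one object, many lines

# A line-oriented consumer sees exactly one parseable line in the compact form...
assert [json.loads(line) for line in compact.splitlines()] == [event]

# ...but the pretty form's first line alone ("{") is not valid JSON.
try:
    json.loads(pretty.splitlines()[0])
    broken = False
except json.JSONDecodeError:
    broken = True
print(broken)  # → True
```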