# PROCESSORS
Reference processors shipped in this repo. Exemplars — build your own against the stable contract.
[ NOTE ] These are not "the real nebu"; the real nebu is the contract in `pkg/processor`. Anyone can ship a processor as its own module and register it. Browse community-contributed processors at the community registry.
## ORIGINS
Origins consume ledger XDR and emit typed events as NDJSON. Each accepts `--start-ledger`, an optional `--end-ledger` or `--follow`, and the standard env chain (`NEBU_RPC_URL`, `NEBU_NETWORK`, `NEBU_RPC_AUTH`).
### token-transfer

[ ORIGIN ] Stream token transfer events from Stellar ledgers. Wraps Stellar's official `token_transfer.EventsProcessor` to emit structured events for transfers, mints, burns, clawbacks, and fees.

Schema: `nebu.token_transfer.v1`

```shell
nebu install token-transfer
token-transfer --start-ledger 60200000 --end-ledger 60200001 | jq
```
### contract-events

[ ORIGIN ] Soroban diagnostic events emitted by smart contracts during execution. Supports per-contract filtering via the payload's contractAddress field.

Schema: `nebu.contract_events.v1`

```shell
nebu install contract-events
contract-events --start-ledger 60200000 --follow \
  | jq 'select(.event.contractAddress == "CAQ...")'
```
### contract-invocation

[ ORIGIN ] Surface every Soroban contract invocation with its function name, arguments, and sub-invocation tree. One event per InvokeHostFunction operation.

Schema: `nebu.contract_invocation.v1`

```shell
nebu install contract-invocation
contract-invocation --start-ledger 60200000 --end-ledger 60200100 | jq .function
```
### contract-invocation-extractor

[ ORIGIN ] Project-specific contract invocations: extracts named functions with typed arguments based on a user-supplied contract spec. Useful when you know the protocol and want strongly typed events instead of raw XDR SCVals.

Schema: `nebu.contract_invocation.extracted.v1`
### transaction-stats

[ ORIGIN · EDUCATIONAL ] Per-ledger transaction metrics: count, success/fail split, operation counts, fees charged. A reference implementation for anyone writing their own stats processor.

```shell
nebu install transaction-stats
transaction-stats --start-ledger 60200000 --end-ledger 60200100 \
  | duckdb -c "SELECT AVG(tx_count) FROM read_json('/dev/stdin')"
```
### ledger-change-stats

[ ORIGIN · EDUCATIONAL ] Per-ledger counts of account creations, trustline additions/removals, and data entries: a tour of the LedgerEntryChanges surface.
## TRANSFORMS
Transforms read NDJSON from stdin and emit NDJSON to stdout, passing `_schema` and `_nebu_version` through unchanged.
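Because the contract is just stdin/stdout NDJSON, a custom transform can be a few lines of shell. A minimal sketch, assuming only the contract above; the `transfer.amount` field path is illustrative, borrowed from the token-transfer examples in this document, not part of the stable contract:

```shell
#!/bin/sh
# my-filter: minimal nebu transform sketch.
# Reads NDJSON on stdin, writes NDJSON on stdout. jq copies every field
# it does not touch, so _schema and _nebu_version pass through unchanged.
# Drops events whose transfer.amount is missing or "0".
jq -c 'select((.transfer.amount // "0") != "0")'
```

Any language works the same way; the pipeline neither knows nor cares that this one is jq.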
### usdc-filter

[ TRANSFORM ] Keep only events where transfer.assetCode is USDC and the issuer matches Circle's canonical issuer. Sugar over a jq predicate.

```shell
token-transfer --start-ledger 60200000 --end-ledger 60200100 | usdc-filter
```
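The predicate it sugars over looks roughly like this. The field paths are assumptions based on the event shapes shown in this document, and the account is Circle's widely published USDC issuer on Stellar mainnet; verify both against the actual processor before relying on them:

```shell
# Approximate hand-rolled equivalent of usdc-filter (field paths assumed).
jq -c 'select(.transfer.assetCode == "USDC" and .transfer.assetIssuer == "GA5ZSEJYB37JRC5AVCIA5MOP4RHTM335X2KGX3IHOJAPP5RE34K4KZVN")'
```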
### amount-filter

[ TRANSFORM ] Drop events below a minimum or above a maximum amount threshold. Flags: `--min`, `--max`, `--field` (dot path, default transfer.amount).

```shell
token-transfer --follow --start-ledger 60200000 | amount-filter --min 1000000
```
### dedup

[ TRANSFORM ] Keyed deduplication over a sliding window. Flags: `--key` (dot path into the event), `--window` (number of events or a duration).

```shell
token-transfer --follow --start-ledger 60200000 | dedup --key meta.txHash
```
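For a quick one-off without the processor, the classic awk idiom approximates this. Unlike dedup it keys on the entire line rather than a dot path, and it never evicts, so there is no `--window` equivalent and memory grows without bound on an infinite `--follow` stream:

```shell
# Unbounded whole-line dedup: print each distinct NDJSON line once.
awk '!seen[$0]++'
```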
### time-window

[ TRANSFORM ] Bucket events into fixed-duration windows and emit one aggregate per window close. Flags: `--window`, `--group-by`, `--agg` (count/sum/avg).
## SINKS
Sinks read NDJSON from stdin and produce side effects. They are terminal: nothing reads from their stdout.
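A sink can likewise be a small script. A sketch assuming only the stdin contract above; the argument handling and default output path are illustrative, not part of any real sink:

```shell
#!/bin/sh
# my-sink: minimal nebu sink sketch. Reads NDJSON on stdin, appends each
# event to a file, and writes nothing to stdout (sinks are terminal).
out="${1:-events.jsonl}"
count=0
while IFS= read -r line; do
  printf '%s\n' "$line" >> "$out"
  count=$((count + 1))
done
# Progress goes to stderr so stdout stays silent for any accidental reader.
echo "wrote $count events to $out" >&2
```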
### json-file-sink

[ SINK ] Append each event as a JSON line to a file. The simplest sink; useful for archiving and offline analysis.

Flags:
- `--out`: path to the .jsonl file
- `--append`: keep the existing file (default: false)

```shell
token-transfer --start-ledger 60200000 --end-ledger 60200100 \
  | json-file-sink --out events.jsonl
```
### nats-sink

[ SINK ] Publish each event to a NATS subject, with optional JetStream persistence. Bridges a local pipeline into a distributed real-time firehose for your API.

Flags:
- `--url`: nats:// URL (default: $NATS_URL)
- `--subject`: target subject
- `--jetstream`: persist to a stream

```shell
token-transfer --follow --start-ledger 60200000 \
  | nats-sink --subject "stellar.transfers" --jetstream
```
### postgres-sink

[ SINK ] Upsert events into a Postgres table. Uses COPY for batched writes and falls back to row-by-row INSERT ... ON CONFLICT for streaming.

Flags:
- `--dsn`: postgres://... (or $DATABASE_URL)
- `--table`: target table (auto-created if missing)
- `--on-conflict`: primary key for upserts

```shell
token-transfer --start-ledger 60200000 --end-ledger 60200100 \
  | postgres-sink --table transfers --on-conflict meta.txHash
```
## YOUR PROCESSOR BELONGS HERE

Build against `pkg/processor`, ship it as a module, drop a description.yml in any Git repo, and `nebu install your-processor` just works. Submit it to the community registry to see it appear in `nebu list` for everyone.
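The description.yml fields are not documented in this section, so the sketch below is entirely hypothetical; every key name is an illustration, and the real schema lives with the contract in `pkg/processor`:

```yaml
# HYPOTHETICAL description.yml sketch: field names are invented for
# illustration and must be checked against pkg/processor.
name: your-processor
kind: transform            # e.g. origin | transform | sink
schema: your.schema.v1     # the _schema value your events carry
module: github.com/you/your-processor
```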