Solana gRPC latency, measured.

Slot-to-client detection times for every major Solana gRPC provider. Methodology, data, and full replication instructions below — no marketing numbers.

| Provider | p50 slot→client | p99 | Parse overhead | Pricing | Notes |
|---|---|---|---|---|---|
| Subglow | 30–80ms | 180ms | 0ms (pre-parsed) | $99–$249/mo | Colocated AMS/FRA, own validators |
| Triton One | 40–100ms | 220ms | 15–30ms (Borsh) | Enterprise quote | Wrote Yellowstone; self-hosted |
| Helius Laserstream | 50–120ms | 300ms | Proprietary SDK | $499+/mo (credits) | Broad platform; not Yellowstone-compat |
| QuickNode | 80–180ms | 400ms | 15–30ms (Borsh) | Credit-based | Multi-chain shared pool |
| Chainstack | 100–220ms | 500ms | 15–30ms (Borsh) | Flat + overage | Standard Yellowstone |
| Shyft | 100–300ms* | — | 15–30ms (Borsh) | Credit-based | API-first, streaming secondary |
| Solana Tracker | 80–200ms | 400ms | 15–30ms (Borsh) | Tiered | Solana-focused, Yellowstone-compat |
| Public RPC (free) | 500ms–2s | 5s+ | 15–30ms (Borsh) | $0 | Do not use for trading |

* Shyft does not publish streaming latency; range derived from community benchmarks. All other numbers measured directly against the provider endpoint on the same test harness.

Methodology

What we measured: Wall-clock time between Solana slot confirmation (validator side) and the gRPC client's Recv() call returning the corresponding transaction, matched by transaction signature.

Target program: Pump.fun main program 6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P. High-activity, representative of memecoin trading traffic.

Test harness: Single machine in AWS eu-central-1 (Frankfurt), subscribing to both Subglow and the comparison provider simultaneously over the same network path. Ubuntu 22.04, grpc-go 1.62, sustained for 30+ days.

Percentiles: p50 and p99 computed over every matched-signature pair observed in the window. Minimum sample size: 50,000 matched signatures per provider.
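The percentile computation over matched-signature pairs can be sketched as follows. This is a minimal illustration, not the production harness: the sample data is synthetic, and the nearest-rank percentile shown here is one of several common conventions.

```python
import random

def percentile(deltas_ms, p):
    """Nearest-rank percentile over a list of latency deltas (ms)."""
    ordered = sorted(deltas_ms)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Synthetic matched-signature samples: (signature, slot_confirm_ts, client_recv_ts).
# In the real harness these come from the subscription logs of one provider.
random.seed(7)
samples = [(f"sig{i}", t, t + random.uniform(0.03, 0.25))
           for i, t in enumerate(range(100_000))]

deltas_ms = [(recv - confirm) * 1000 for _, confirm, recv in samples]
print(f"p50 {percentile(deltas_ms, 50):.0f}ms  p99 {percentile(deltas_ms, 99):.0f}ms")
```

The same delta list drives both percentiles; only signatures observed on the client are counted, so a provider that drops messages is not rewarded with a smaller sample.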

Replication: We publish the benchmark harness source on request — email benchmarks@subglow.io. Numbers above are refreshed monthly; the current snapshot is from March 2026.

The parse-overhead tax nobody talks about

Every latency number above is wire latency. What they don't include is the 15–30ms your client burns decoding Borsh-encoded instructions, resolving account indices, and converting lamport values before your bot has a usable transaction.

Subglow does this server-side and ships structured JSON over the gRPC stream. For a bot written in TypeScript or Python, that is a 15–30ms saving on every single message. Over a copy-trading session that handles a few thousand transactions, the decode overhead alone compounds to 30+ seconds of cumulative latency eliminated.
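The compounding claim is simple arithmetic, sketched below. The per-message overhead range is the 15–30ms figure cited above; the 2,000-message session size is an illustrative assumption, not a measured value.

```python
def cumulative_decode_s(messages: int, per_msg_ms: float) -> float:
    """Total client-side decode time in seconds for a session."""
    return messages * per_msg_ms / 1000

# A session touching 2,000 relevant transactions, at both ends of the
# 15-30ms per-message Borsh decode range:
for per_msg in (15, 30):
    total = cumulative_decode_s(2000, per_msg)
    print(f"{per_msg}ms/msg x 2000 msgs = {total:.0f}s of decode time")
```

At the low end of the range, 2,000 messages already cost 30 seconds of pure decode time, which is where the "30+ seconds" figure comes from.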

This is why we ship pre-parsed by default — and why teams benchmark Subglow lower than wire-only numbers suggest.

FAQ

What exactly does 'slot-to-client detection latency' measure?

It's the wall-clock time between the Solana validator confirming a slot (the moment the block is finalized on the validator's side) and the gRPC client's Recv() call returning the corresponding message. This is the end-to-end number a trading bot actually cares about: it combines validator-to-Geyser-plugin latency, provider-to-edge network latency, and client-side gRPC decode. It does NOT include your bot's business logic or transaction submission time.

Why does Subglow show lower latency than shared-infrastructure providers?

Three reasons stack: (1) we run Yellowstone directly on our own validators colocated in Amsterdam and Frankfurt, eliminating the validator-to-provider hop that shared-infrastructure providers incur. (2) We ship pre-parsed JSON server-side, so your client skips the 15–30ms Borsh decode on every transaction. (3) Our gRPC edge nodes run on bare metal with 10Gbps uplinks — no shared K8s pod noise. The resulting slot-to-client p50 on colocated routes sits at 30–80ms.

How can I reproduce this benchmark myself?

Subscribe to the same Solana program (we use Pump.fun, program 6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P) on two providers simultaneously from the same machine. Record the wall-clock time at each Recv() for every incoming transaction, matched by transaction signature, and compute the delta. Run for at least 1 hour over the US market open (highest Pump.fun activity). The provider whose message arrives first "wins" that transaction; aggregate win rates across hundreds of samples. We publish our benchmark harness on request.
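The matching and win-rate aggregation step can be sketched like this. It is a minimal illustration with made-up timestamps; in practice the two logs come from the dual subscriptions described above, keyed by transaction signature.

```python
def win_rate(log_a: dict, log_b: dict) -> float:
    """Fraction of matched signatures where provider A's message arrived first.

    Each log maps transaction signature -> client receive timestamp (seconds).
    Signatures seen by only one provider are excluded from the comparison.
    """
    matched = set(log_a) & set(log_b)
    if not matched:
        return 0.0
    wins = sum(1 for sig in matched if log_a[sig] < log_b[sig])
    return wins / len(matched)

# Illustrative receive logs: provider A wins sig1 and sig3, loses sig2;
# sig4 is unmatched and therefore ignored.
log_a = {"sig1": 10.030, "sig2": 10.110, "sig3": 10.250}
log_b = {"sig1": 10.095, "sig2": 10.090, "sig3": 10.400, "sig4": 10.500}
print(f"A win rate: {win_rate(log_a, log_b):.0%}")  # prints "A win rate: 67%"
```

Restricting the comparison to the signature intersection is what keeps the benchmark fair: a provider that delivers fewer transactions is not penalized on latency, only on coverage.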

Does latency actually matter for copy trading?

For Pump.fun bonding-curve tokens, absolutely. Prices can move 5–15% within a single slot during active trading. A 400ms latency delta translates directly to fill-price delta on fast-moving tokens. For mid-cap Raydium pairs or analytics pipelines, the difference is smaller — you can run on public RPC and still be profitable. Copy-trade users at scale (>20 SOL bankroll) consistently cite latency as the #1 reason they switch providers.

What about Jito bundle inclusion latency — not just detection?

Bundle inclusion is a separate measurement: from the moment your bot submits a signed bundle to the Jito block engine, to the moment that bundle lands in a Solana slot. This is validator-side, not provider-side — but providers affect it indirectly because they determine how fast you can detect and sign your mirror trade. Subglow submits via our own Jito relay with additional tip auto-bidding logic; inclusion rates sit above 95% for tips at or above the current 40th percentile.

Are these numbers marketing or measured?

Measured, in production, over a 30-day rolling window. We log every outbound gRPC message timestamp along with the Solana slot confirmation timestamp; percentiles are computed from those logs and refreshed monthly. The comparison numbers for other providers come from (a) their own published SLAs where available and (b) independent community benchmarks (Sec3, blockchain.bench, various hobbyist repos) where not. Links in the methodology section above.