CANCELLED — stream closed unexpectedly

`Status { code: Cancelled }` on an active stream almost always means the underlying TCP connection was closed out from under the client: a Kubernetes pod restart, a load balancer idle timeout, or a NAT table expiry. It is not an error in the programming sense; it is a normal event that long-lived streams must handle with reconnect logic.

Root causes

Ranked by frequency; check them in order.

  1. Kubernetes rolling update of the client pod: SIGTERM gracefully closed the stream.
  2. AWS NLB or GCP L4 LB idle timeout (typically 350s) with no keepalive configured.
  3. Docker bridge network dropping idle connections after 5 minutes.
  4. Provider rolling out a new server version: a brief disconnect while connections drain.
  5. Client-side code threw inside the `for await` loop and propagated the exception without cancelling the iterator.
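Root cause 5 can be sketched as follows. The `makeStream` object and its `cancel()` method are hypothetical stand-ins for a real gRPC client stream; the fix is to cancel in a `finally` block so the stream is torn down even when the handler throws.

```typescript
// Sketch of root cause 5: a handler throwing inside `for await`
// without cancelling the underlying stream. `makeStream` is a
// stand-in for a real gRPC server-streaming call.

type Update = { slot: number };

function makeStream() {
  let cancelled = false;
  // An async iterable standing in for the server stream.
  async function* updates(): AsyncGenerator<Update> {
    let slot = 0;
    while (!cancelled) {
      yield { slot: slot++ };
    }
  }
  return {
    updates: updates(),
    // Releases the stream; a real client would send RST_STREAM.
    cancel: () => { cancelled = true; },
    get cancelled() { return cancelled; },
  };
}

async function consume(stream: ReturnType<typeof makeStream>) {
  try {
    for await (const update of stream.updates) {
      if (update.slot === 3) throw new Error("handler bug"); // simulated crash
    }
  } finally {
    // Without this, the server keeps the stream open until a
    // middlebox kills it and the peer reports CANCELLED.
    stream.cancel();
  }
}
```

The `finally` block is the important part: it runs whether the loop ends cleanly, breaks, or throws.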

Fix steps

  1. Always wrap subscribe in a reconnect loop

     CANCELLED is normal on long-running streams. Treat any stream termination (CANCELLED, UNAVAILABLE, or a silent EOF) as a signal to reconnect with backoff.
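A minimal sketch of such a loop with capped exponential backoff, assuming a hypothetical `subscribe()` that opens the stream and a `handleUpdate` callback:

```typescript
// Reconnect loop with capped exponential backoff. `subscribe` and
// `handleUpdate` are hypothetical placeholders for the call that
// opens the stream and the per-update handler.

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function runWithReconnect<U>(
  subscribe: () => AsyncIterable<U>,
  handleUpdate: (u: U) => void,
  { initialBackoffMs = 500, maxBackoffMs = 30_000, maxReconnects = Infinity } = {},
): Promise<void> {
  let backoffMs = initialBackoffMs;
  for (let attempt = 0; attempt <= maxReconnects; attempt++) {
    try {
      for await (const update of subscribe()) {
        handleUpdate(update);
        backoffMs = initialBackoffMs; // healthy stream: reset backoff
      }
      // A silent EOF falls through here; treat it like CANCELLED.
    } catch (err) {
      // CANCELLED and UNAVAILABLE both land here.
      console.error("stream ended:", err);
    }
    await sleep(backoffMs);
    backoffMs = Math.min(backoffMs * 2, maxBackoffMs); // back off before retrying
  }
}
```

Resetting the backoff only after a successful update (not merely a successful connect) avoids hot-looping against an endpoint that accepts connections and immediately drops them.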

  2. Enable keepalive

     Configure a 30s keepalive interval with a 10s timeout. This makes the client send HTTP/2 PING frames that refresh NAT state and load-balancer session tables. Without it, an idle stream typically dies about 5 minutes in on most cloud networks.
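With `@grpc/grpc-js`, those numbers map onto standard channel arguments. The sketch below shows only the options object; the client class you pass it to depends on your generated stubs.

```typescript
// Channel options for @grpc/grpc-js enabling 30s keepalive pings.
// Pass these as the channel-options argument when constructing your
// client. The option keys are standard gRPC channel arguments.

const channelOptions = {
  // Send an HTTP/2 PING every 30s, even on an otherwise idle stream.
  "grpc.keepalive_time_ms": 30_000,
  // Consider the connection dead if the PING ack takes longer than 10s.
  "grpc.keepalive_timeout_ms": 10_000,
  // Keep pinging even when no RPC is in flight.
  "grpc.keepalive_permit_without_calls": 1,
};
```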

  3. Implement a watermark for resume

     Track the last processed slot. After reconnecting, discard updates below that slot to avoid double-processing. For bonding-curve sniping, missing a couple of slots is acceptable; for arb, maintain a dedupe cache keyed on txSig.
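The watermark plus dedupe cache from step 3 can be sketched like this; the `Update` shape and field names are assumptions for illustration, not a real client's types.

```typescript
// Slot watermark plus a bounded txSig dedupe cache. Updates below
// the watermark are dropped outright; updates at or above it pass
// through the dedupe set, since one slot can carry many txs.

type Update = { slot: number; txSig: string };

class ResumeFilter {
  private lastSlot = -1;
  private seen = new Set<string>();
  private order: string[] = [];

  constructor(private maxCached = 10_000) {}

  /** Returns true if the update is new and should be processed. */
  accept(u: Update): boolean {
    if (u.slot < this.lastSlot) return false; // replayed old slot
    if (this.seen.has(u.txSig)) return false; // exact duplicate
    this.seen.add(u.txSig);
    this.order.push(u.txSig);
    if (this.order.length > this.maxCached) {
      this.seen.delete(this.order.shift()!); // evict oldest sig
    }
    this.lastSlot = Math.max(this.lastSlot, u.slot);
    return true;
  }
}
```

The cache is bounded so memory stays flat on a stream that runs for days; size it to comfortably cover the window a reconnect can replay.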

Want an endpoint that just works?

Subglow is flat-priced Solana gRPC + JSON-RPC on a single API key. Pre-parsed JSON, dedicated sendTransaction bucket, 99.9% latency SLA on Dedicated. No credit juggling, no surprise bills.