RESOURCE_EXHAUSTED — rate limit on gRPC
`Status { code: ResourceExhausted }` means the server refused to serve your call because you hit a limit — concurrent streams, messages per second, or filter complexity. Unlike an HTTP 429, which you can simply retry, this is a hard rejection: the stream is closed, and you must open a new connection rather than retry on the same channel.
Root causes
Ranked by frequency — check them in order.
1. Hit the plan's concurrent-stream cap (Subglow Sniper = 2, Pro = 10; Helius Developer = 3).
2. Opened a second subscription to the same program IDs — some providers count each filter group separately.
3. Exceeded the per-stream messages/sec cap. Typical default: 5000 msg/s on Pro-tier plans; shared-infrastructure customers can be throttled lower during peak load.
4. Filter complexity limit — e.g., more than 100 account_include entries in a single filter.
Fix steps
1. Consolidate streams. Multiple programs can share a single stream. Instead of opening one subscribe per program (Pump.fun, Raydium, Jupiter), combine them in a single SubscribeRequest with three `transactions_filter` entries. One open stream, three filters — not three streams.
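A minimal sketch of the consolidation, using plain dicts shaped like the SubscribeRequest (a real client builds protobuf messages, and exact field names vary by provider). The program IDs are placeholders, not real addresses:

```python
# Placeholders -- substitute the real program addresses you subscribe to.
PUMP_FUN_PROGRAM = "<pump.fun program id>"
RAYDIUM_PROGRAM = "<raydium program id>"
JUPITER_PROGRAM = "<jupiter program id>"

def consolidated_request(programs: dict) -> dict:
    """One SubscribeRequest with one transactions filter group per program.

    `programs` maps a filter-group label to a program ID. The result is a
    dict in the shape of the proto: one stream, N filter groups.
    """
    return {
        "transactions": {
            label: {
                "account_include": [program_id],
                "vote": False,    # skip vote transactions
                "failed": False,  # skip failed transactions
            }
            for label, program_id in programs.items()
        }
    }

request = consolidated_request({
    "pumpfun": PUMP_FUN_PROGRAM,
    "raydium": RAYDIUM_PROGRAM,
    "jupiter": JUPITER_PROGRAM,
})
# One request, three filter groups: counts as a single concurrent stream.
```

The point is that the stream cap counts connections, not filters, so packing filters into one request is free capacity.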
2. Upgrade the plan or go dedicated. If you genuinely need >10 parallel streams, Subglow Dedicated offers unlimited streams on a private gRPC endpoint. On credit-metered providers (Helius, QuickNode), concurrent streams scale with plan price — check your dashboard.
3. Add exponential backoff before reconnecting. If you hammer reconnects on RESOURCE_EXHAUSTED, the provider may treat it as a DoS attempt and rate-limit you harder or block your key. Back off at least 30 seconds between retries on this specific error code.
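One way to sketch that schedule: exponential growth with full jitter, floored at the 30-second minimum from the guidance above. The 300-second cap and the function name are our choices, not anything provider-mandated:

```python
import random

def backoff_delays(attempts: int = 5, base: float = 30.0, cap: float = 300.0) -> list:
    """Delays (in seconds) before each reconnect attempt.

    The ceiling doubles every attempt up to `cap`; the actual delay is
    jittered between `base` and the ceiling so a fleet of clients does
    not reconnect in lockstep. Never goes below the 30 s floor.
    """
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(base, ceiling))
    return delays

# In a reconnect loop you would time.sleep(delay) before each dial,
# and reset the attempt counter once a stream survives for a while.
```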
4. Check filter complexity. Subglow caps a single filter at 100 account_include entries and 25 owner entries. If you need more, split them into multiple filter groups inside one SubscribeRequest — the provider merges the groups server-side.
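A sketch of the split, again with plain dicts standing in for the proto (group labels like `accounts_0` are arbitrary names we invented):

```python
MAX_ACCOUNT_INCLUDE = 100  # Subglow's per-filter cap, per the step above

def split_filters(accounts: list, cap: int = MAX_ACCOUNT_INCLUDE) -> dict:
    """Chunk an oversized account list into several filter groups
    inside one SubscribeRequest-shaped dict, each group under the cap."""
    groups = {}
    for i in range(0, len(accounts), cap):
        groups[f"accounts_{i // cap}"] = {"account_include": accounts[i:i + cap]}
    return {"transactions": groups}

# 250 accounts -> 3 groups (100 + 100 + 50), still one stream.
request = split_filters([f"addr{i}" for i in range(250)])
```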
Provider-specific notes
- Subglow: Returns RESOURCE_EXHAUSTED only on new connection attempts once the concurrent-stream cap is hit — existing streams are never interrupted mid-flow.
- Helius: On Laserstream, RESOURCE_EXHAUSTED is the common symptom of running out of message credits for the month. Upgrade the credit pool or wait for the monthly reset.
Related errors
- CANCELLED — stream closed unexpectedly. `Status { code: Cancelled }` on an active stream almost always means the TCP connection was closed by a middlebox (Kubernetes pod restart, load balancer, NAT timeout). It's not an error in the programmer sense — it's a normal event to handle with reconnect logic.
- DEADLINE_EXCEEDED on Yellowstone subscribe. `Status { code: DeadlineExceeded }` means the RPC didn't produce a response within the deadline you (or the default) set. On a streaming subscribe this usually means the initial handshake took too long, or the client's deadline was accidentally set on the whole stream instead of the initial call.
- INVALID_ARGUMENT — bad subscribe filter. `Status { code: InvalidArgument }` is the server rejecting your SubscribeRequest before streaming starts. Almost always a malformed filter: base58-encoded where base64 was expected, a misspelled commitment level, or an empty filter group.
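These codes call for different reactions, which is worth centralizing in one place. A sketch, using the standard gRPC numeric status codes; the strategy labels and function name are ours:

```python
# Standard gRPC numeric status codes for the errors discussed above.
CANCELLED, INVALID_ARGUMENT, DEADLINE_EXCEEDED, RESOURCE_EXHAUSTED = 1, 3, 4, 8

def on_stream_error(code: int) -> str:
    """Route a terminal stream status to a recovery strategy."""
    if code == CANCELLED:
        return "reconnect"               # normal middlebox close; reconnect promptly
    if code == DEADLINE_EXCEEDED:
        return "fix-deadline"            # deadline likely set on the whole stream
    if code == INVALID_ARGUMENT:
        return "fix-filter"              # the same request will be rejected again
    if code == RESOURCE_EXHAUSTED:
        return "backoff-then-reconnect"  # wait >= 30 s, reduce load, new channel
    return "log-and-reconnect"
```

The key asymmetry: CANCELLED is safe to retry as-is, while INVALID_ARGUMENT and RESOURCE_EXHAUSTED both require changing something before the next attempt.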
Want an endpoint that just works?
Subglow is flat-priced Solana gRPC + JSON-RPC on a single API key. Pre-parsed JSON, dedicated sendTransaction bucket, 99.9% latency SLA on Dedicated. No credit juggling, no surprise bills.