DEADLINE_EXCEEDED on Yellowstone subscribe
`Status { code: DeadlineExceeded }` means the RPC didn't produce a response within the deadline you (or the default) set. On a streaming subscribe this usually means the initial handshake took too long, or the client's deadline was accidentally set on the whole stream instead of the initial call.
Root causes
Ranked by frequency. First cause is the one to check first.
1. You set a deadline on `.subscribe()` itself. Streaming RPCs should not carry short deadlines; only the initial method invocation should.
2. The default 20s deadline expired during the HTTP/2 handshake over a high-latency link (e.g., client in Singapore, provider in Frankfurt).
3. The provider is backpressuring because your subscription filter is too broad: every Solana transaction is queued up waiting for your slow consumer.
4. The client is behind a layer-7 proxy (Cloudflare Worker, NGINX with default `proxy_read_timeout 60s`) that closes the stream when no bytes flow for a minute.
Fix steps
**1. Remove stream-level deadlines.** In the TS client, never pass `{ deadline }` to `.subscribe()`; set a connection deadline at channel creation only, not on the streaming call. In Rust, call `.send_timeout(None)` on the streaming channel.
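The same rule in the Python client: bound only the wait for channel readiness, then open the stream with no deadline at all. A minimal sketch of the pattern (`fake_ready` is a stand-in for a real `channel.channel_ready()` awaitable, so it runs without a live endpoint):

```python
import asyncio

async def wait_until_ready(ready_awaitable, connect_timeout_s: float = 20.0) -> None:
    """Bound only the initial connection attempt, never the stream.

    `ready_awaitable` is whatever signals channel readiness, e.g.
    `channel.channel_ready()` on a grpc.aio channel.
    """
    await asyncio.wait_for(ready_awaitable, timeout=connect_timeout_s)
    # After this point, open the stream with NO deadline:
    #   stream = stub.Subscribe(request, metadata=metadata)  # no timeout kwarg

async def demo() -> str:
    async def fake_ready():
        await asyncio.sleep(0.01)  # stand-in for the TLS + HTTP/2 handshake

    await wait_until_ready(fake_ready(), connect_timeout_s=1.0)
    return "connected"

print(asyncio.run(demo()))  # → connected
```

If the handshake exceeds `connect_timeout_s`, `asyncio.wait_for` raises `TimeoutError` and you retry the connection; the stream itself stays open indefinitely.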
**2. Enable keepalive pings.** On all three clients, enable gRPC keepalive: 30s interval, 10s timeout, `permit_without_calls = true`. This prevents intermediate proxies from idle-closing the stream.
**3. Narrow your filter.** If your `SubscribeRequest` filter matches every Solana transaction (empty `accountInclude`, empty `owner` list), the provider is delivering 3000+ messages/sec. A slow consumer backs up, fills the HTTP/2 flow-control window, and the server eventually gives up. Always include at least one program or account filter in production.
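Narrowing the filter helps on the server side; on the client side, keeping the read loop fast prevents the same backlog. A sketch of that pattern in plain `asyncio` (no gRPC dependency; `fake_stream` stands in for the subscribe stream): drain messages into a bounded queue and drop the oldest when full, rather than stalling the stream.

```python
import asyncio

async def pump(stream, queue: asyncio.Queue) -> None:
    # Drain the stream as fast as possible so the HTTP/2 flow-control
    # window never fills; shed load locally instead of stalling the server.
    async for msg in stream:
        if queue.full():
            queue.get_nowait()   # drop the oldest message
            queue.task_done()    # account for the dropped item
        queue.put_nowait(msg)

async def consume(queue: asyncio.Queue, handle) -> None:
    while True:
        msg = await queue.get()
        handle(msg)              # slow work happens here, off the read loop
        queue.task_done()

async def demo() -> list:
    async def fake_stream(n):    # stand-in for stub.Subscribe(...)
        for i in range(n):
            await asyncio.sleep(0)  # simulate the network yielding control
            yield i

    queue: asyncio.Queue = asyncio.Queue(maxsize=8)
    seen: list = []
    consumer = asyncio.create_task(consume(queue, seen.append))
    await pump(fake_stream(100), queue)
    await queue.join()           # let the consumer flush what survived
    consumer.cancel()
    return seen

seen = asyncio.run(demo())
assert seen == sorted(seen)      # messages stay in order even if some are dropped
```

The drop policy is a choice: for price feeds, dropping stale messages is usually right; for fills you care about, use a larger queue and alert when it saturates instead.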
**4. Check the region.** Colocate your bot in the same region as the provider. Subglow's primary region is Frankfurt; Helius is Chicago + Frankfurt; Triton is US-East + EU-West. A bot in AWS Singapore talking to a Frankfurt gRPC endpoint adds 200ms+ to every round trip.
Code example

The Python client below enables keepalive and reconnects on stream errors. The `SubscribeRequest` construction is illustrative; field names follow the Yellowstone proto, so check them against your generated `geyser_pb2` stubs.

```python
import asyncio

import grpc
from yellowstone_grpc_client import geyser_pb2, geyser_pb2_grpc

channel = grpc.aio.secure_channel(
    "grpc.subglow.io:443",
    grpc.ssl_channel_credentials(),
    options=[
        ("grpc.keepalive_time_ms", 30_000),
        ("grpc.keepalive_timeout_ms", 10_000),
        ("grpc.keepalive_permit_without_calls", 1),
        ("grpc.http2.max_pings_without_data", 0),
    ],
)
stub = geyser_pb2_grpc.GeyserStub(channel)
metadata = (("x-api-key", "YOUR_API_KEY"),)

# Narrow filter (fix step 3): match only transactions touching one program.
request = geyser_pb2.SubscribeRequest(
    transactions={
        "txns": geyser_pb2.SubscribeRequestFilterTransactions(
            account_include=["YOUR_PROGRAM_ID"],
            vote=False,
            failed=False,
        )
    },
)

def handle(msg):
    ...  # your processing logic

async def subscribe_forever():
    while True:
        try:
            async for msg in stub.Subscribe(request, metadata=metadata):
                handle(msg)
        except grpc.aio.AioRpcError as e:
            print(f"reconnecting after {e.code()}")
            await asyncio.sleep(1)
```

Related errors
- UNAVAILABLE: connection refused. Your gRPC client got `Status { code: Unavailable }` with `connection refused` (or `transport is closing`). The TCP handshake never completed: either you're hitting the wrong port, TLS is misconfigured, or the endpoint is genuinely down.
- RESOURCE_EXHAUSTED: rate limit on gRPC. `Status { code: ResourceExhausted }` means the server refused to serve your call because you hit a limit: concurrent streams, messages-per-second, or filter complexity. Unlike HTTP 429, this is a hard rejection: the stream is closed and must be reconnected, not retried on the same channel.
- CANCELLED: stream closed unexpectedly. `Status { code: Cancelled }` on an active stream almost always means the TCP connection was closed by a middlebox (Kubernetes pod restart, load balancer, NAT timeout). It's not an error in the programmer sense; it's a normal event to handle with reconnect logic.
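All of these codes end the same way: reconnect. One way to make the per-code handling explicit is a small backoff policy. This sketch uses status-code strings (so it runs without grpcio installed) and illustrative delays, not provider-mandated values:

```python
import random

def reconnect_delay(code: str, attempt: int,
                    base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with jitter, tuned per status code."""
    if code == "RESOURCE_EXHAUSTED":
        base *= 4        # the server explicitly told us to slow down
    elif code == "CANCELLED":
        base = 0.1       # middlebox blip: reconnect quickly
    delay = min(cap, base * (2 ** attempt))
    # Jitter keeps a fleet of bots from reconnecting in lockstep.
    return delay * (0.5 + random.random() / 2)

# e.g. reconnect_delay("DEADLINE_EXCEEDED", attempt=0) falls in [0.25, 0.5)
```

Reset `attempt` to zero once a stream has been healthy for a while, otherwise one bad hour leaves you permanently backed off to the cap.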
Want an endpoint that just works?
Subglow is flat-priced Solana gRPC + JSON-RPC on a single API key. Pre-parsed JSON, dedicated sendTransaction bucket, 99.9% latency SLA on Dedicated. No credit juggling, no surprise bills.