// Rust + Solana gRPC

Solana gRPC streaming with Rust.

Rust is the native language of Solana. The yellowstone-grpc-client crate gives you direct, type-safe access to real-time blockchain data over gRPC with zero garbage collection overhead. This tutorial walks through setup, connection, subscriptions, stream handling, parsing, and production-ready error recovery — everything you need to build a Solana streaming bot in Rust.

Why Rust for Solana gRPC?

Rust is the language Solana itself is written in. Every on-chain program, the validator client, the runtime, and the core SDK — all Rust. When you build your gRPC streaming infrastructure in Rust, you're working with the same type system, the same memory model, and the same toolchain that powers the entire Solana ecosystem. There's no impedance mismatch between your data pipeline and the chain it's consuming. Types from the Solana SDK like Pubkey, Signature, and Transaction are native to your code — no serialization boundaries, no foreign function interfaces, no wrappers.

Performance is the primary reason serious MEV and trading teams choose Rust. The language compiles to native machine code with no runtime overhead: no garbage collector pauses, no JIT warmup, no interpreter overhead. For gRPC streaming, this translates to the lowest possible execution latency between receiving a protobuf message and acting on it. When you're competing on microsecond timescales — submitting a swap transaction before other bots react to the same price feed — Rust's deterministic performance is not a luxury, it's a requirement.

The yellowstone-grpc-client crate is the official Rust client for the Yellowstone gRPC protocol. Built on top of tonic (the de facto Rust gRPC framework) and tokio (the standard async runtime), it handles connection management, protobuf serialization, TLS negotiation, and stream lifecycle. The companion crate yellowstone-grpc-proto provides all the generated protobuf types: SubscribeRequest, SubscribeUpdate, transaction filters, commitment levels, and every nested message type defined in the Yellowstone specification.

The client works with any Yellowstone (Dragon's Mouth) compatible endpoint. Whether you're connecting to a validator running Geyser plugins directly, a third-party RPC provider, or Subglow, the same crate handles the connection. You change the endpoint URL and authentication token — everything else stays identical. This portability means you can develop against a local validator, test against a staging endpoint, and deploy to production against Subglow without rewriting your streaming logic.

Prerequisites

You need the Rust toolchain installed via rustup. If you don't have it yet, install it from rustup.rs — the installer sets up rustc, cargo, and the standard library in one step. Make sure you're on the stable channel; nightly is not required for any of the crates we'll use. You also need protoc (the Protocol Buffers compiler) installed on your system, since yellowstone-grpc-proto builds protobuf definitions at compile time.

Create a new project with cargo new solana-grpc-bot and add the following dependencies to your Cargo.toml. The key crates are yellowstone-grpc-client for the connection and subscription API, yellowstone-grpc-proto for the protobuf types, tokio as the async runtime, and futures for the StreamExt trait that lets you iterate over the gRPC stream.

terminal
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ cargo new solana-grpc-bot
$ cd solana-grpc-bot
Cargo.toml
[package]
name = "solana-grpc-bot"
version = "0.1.0"
edition = "2021"

[dependencies]
yellowstone-grpc-client = "2.1"
yellowstone-grpc-proto = "2.1"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
tonic = "0.12"
serde_json = "1"
anyhow = "1"
bs58 = "0.5"

yellowstone-grpc-client provides the GeyserGrpcClient builder for establishing connections. yellowstone-grpc-proto contains all the protobuf message types — you'll import SubscribeRequest, SubscribeRequestFilterTransactions, and CommitmentLevel from here. tokio with the full feature flag enables the multi-threaded scheduler, I/O drivers, timers, and macros — everything you need for a production async application. futures gives you StreamExt for ergonomic stream iteration. bs58 is for encoding binary Solana addresses into their base58 string representation.

Connecting to Subglow

The GeyserGrpcClient uses a builder pattern for connection configuration. You call build_from_shared with the endpoint URL, chain .x_token to set your authentication token, and finally .connect() to establish the gRPC channel. The x_token method sets the x-token header on every request — this is the standard authentication mechanism for Yellowstone (Dragon's Mouth) compatible endpoints, including Subglow.

The connect() call is async and returns a GeyserGrpcClient instance. Under the hood, tonic establishes an HTTP/2 connection with TLS to the endpoint. The client handles connection pooling, keepalives, and automatic HTTP/2 flow control. Once connected, you can call subscribe to open a bidirectional streaming RPC — you send subscription requests on one half and receive events on the other.

Subglow uses flat monthly pricing with no credit metering. You get an API key when you register, pass it as the x_token, and stream as much data as your plan allows. There are no per-request charges, no rate limit tiers based on credits, and no surprise bills at the end of the month.

connect.rs
use yellowstone_grpc_client::GeyserGrpcClient;

const ENDPOINT: &str = "https://grpc.subglow.io";
const API_KEY: &str = "YOUR_SUBGLOW_KEY";

let mut client = GeyserGrpcClient::build_from_shared(ENDPOINT)?
    .x_token(Some(API_KEY.to_string()))?
    .connect()
    .await?;

println!("Connected to Subglow gRPC endpoint");

The ? operator propagates errors at each step of the builder chain. If the endpoint URL is malformed, build_from_shared returns an error. If the token format is invalid, x_token returns an error. If the server is unreachable or rejects the TLS handshake, connect returns an error. Rust's error handling ensures you handle every failure case explicitly — no silent connection failures, no null pointer exceptions, no uncaught promises.

Subscribing to transactions

Transaction subscriptions are the foundation of Solana trading bots. You build a SubscribeRequest with a transactions filter that specifies which programs you want to track. The account_include field accepts program IDs as strings — every transaction that touches one of these programs will stream to your client. For DeFi trading, the most common targets are Pump.fun (6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P), Raydium (675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8), and Jupiter (JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4).

Set vote: Some(false) and failed: Some(false) on your filter to eliminate noise. Validator vote transactions make up roughly half of Solana's throughput and are never relevant for DeFi applications. Failed transactions add volume without actionable data. Excluding both reduces your processing load dramatically and lets your bot focus on confirmed, successful trades.

The subscribe_with_request method on the client returns a tuple: a SubscribeRequest sender (an mpsc channel) and a Streaming receiver. Your initial subscription configuration is sent as part of the call; after that, you iterate over the receiver to consume incoming events. This bidirectional design lets you dynamically update your filters without reconnecting — send a new SubscribeRequest through the sender at any time to change what data you receive.

subscribe.rs
use std::collections::HashMap;
use yellowstone_grpc_proto::prelude::*;

const PUMP_FUN: &str = "6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P";
const RAYDIUM: &str = "675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8";
const JUPITER: &str = "JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4";

let txn_filter = SubscribeRequestFilterTransactions {
    vote: Some(false),
    failed: Some(false),
    account_include: vec![
        PUMP_FUN.to_string(),
        RAYDIUM.to_string(),
        JUPITER.to_string(),
    ],
    ..Default::default()
};

let mut transactions = HashMap::new();
transactions.insert("defi_txns".to_string(), txn_filter);

let request = SubscribeRequest {
    transactions,
    commitment: Some(CommitmentLevel::Confirmed as i32),
    ..Default::default()
};

let (subscribe_tx, mut stream) =
    client.subscribe_with_request(Some(request)).await?;

The HashMap keys for the transactions map are arbitrary labels — they let you name your filters for debugging. The values are the actual filter configurations. You can have multiple named filters in a single request: one for Pump.fun, one for Raydium, one for a specific wallet. All matching events stream through the same connection.

The commitment field controls at what finality level you receive events. Use Confirmed for trading bots where speed matters — confirmed transactions are available within roughly 400ms of the slot. Use Finalized for high-value operations where you need the strongest guarantee that the transaction won't be rolled back.

Handling the stream

The subscription returns a tonic::Streaming<SubscribeUpdate> which implements futures::Stream. You consume events using StreamExt::next() in a while let loop. Each SubscribeUpdate has an update_oneof field — a Rust enum that discriminates between transaction updates, account updates, slot notifications, block metadata, pings, and pongs. Rust's match expression handles this naturally: the compiler ensures you address every variant, and pattern matching extracts the inner data with zero overhead.

For transaction updates, the key fields are transaction (the actual transaction data including signature, message, and account keys), slot (the slot number), and signature (the transaction signature as bytes). The message field inside the transaction contains account_keys, instructions (compiled instructions with program_id_index, accounts, and data), and recent_blockhash.

Pings are keepalive messages sent by the server. You must respond with a pong to keep the connection alive. If the server doesn't receive a pong within the timeout window, it will close the stream. The code below handles pings automatically by sending a pong through the subscription sender channel.

handle_stream.rs
use futures::StreamExt;
use yellowstone_grpc_proto::prelude::subscribe_update::UpdateOneof;

while let Some(msg) = stream.next().await {
    let msg = msg?;
    match msg.update_oneof {
        Some(UpdateOneof::Transaction(tx_update)) => {
            let slot = tx_update.slot;
            let sig = bs58::encode(&tx_update.transaction
                .as_ref().unwrap().signature).into_string();
            println!("TX slot={slot} sig={sig}");
        }
        Some(UpdateOneof::Account(acct)) => {
            let pubkey = bs58::encode(&acct.account
                .as_ref().unwrap().pubkey).into_string();
            println!("ACCT update: {pubkey}");
        }
        Some(UpdateOneof::Ping(_)) => {
            subscribe_tx.send(SubscribeRequest {
                ping: Some(SubscribeRequestPing { id: 1 }),
                ..Default::default()
            }).await?;
        }
        _ => {}
    }
}

The match expression is exhaustive by default in Rust. The wildcard _ arm catches slot updates, block metadata, pongs, and any other update types you don't need to handle. This is safer than the if-else chains typical in TypeScript or Python — the compiler forces you to account for every variant, so an update type added in a future protocol version can never slip through unmatched; at worst it lands in your explicit catch-all.

The bs58::encode call converts raw bytes into the familiar base58 string format used by Solana explorers and wallets. This is a lightweight operation — base58 encoding a 64-byte signature takes single-digit microseconds.

Parsing transaction data

With raw Yellowstone gRPC, every transaction arrives as a protobuf SubscribeUpdateTransaction containing compiled instructions. Each CompiledInstruction has a program_id_index (an index into the account keys array, not the program ID itself), an accounts field (indices into the same array, not the actual pubkeys), and a data field containing Borsh-encoded instruction data. To make sense of a transaction, you need to: resolve the program ID from the index, map each account index to its pubkey, determine which instruction variant was called (the first 8 bytes are typically the Anchor discriminator), and Borsh-deserialize the remaining bytes using the program's IDL schema.

This is where Rust actually shines over other languages for raw Yellowstone — Borsh deserialization is a native Rust operation, since Borsh was designed Rust-first. The borsh crate gives you zero-copy deserialization when combined with the right data structures. But it's still a significant amount of code: you need a struct definition for every instruction variant of every program you want to parse, plus the account resolution logic, plus inner instruction handling for CPI calls.

parse_raw.rs
// Raw Yellowstone: manual parsing required
let tx = tx_update.transaction.unwrap();
let msg = tx.message.unwrap();
let account_keys: Vec<String> = msg.account_keys
    .iter()
    .map(|k| bs58::encode(k).into_string())
    .collect();

for ix in &msg.instructions {
    let program_id = &account_keys[ix.program_id_index as usize];
    let ix_accounts: Vec<&str> = ix.accounts
        .iter()
        .map(|&i| account_keys[i as usize].as_str())
        .collect();
    // Discriminator: first 8 bytes of ix.data
    let disc = &ix.data[..8];
    // Remaining bytes: Borsh-encoded payload
    let payload = &ix.data[8..];
    // You need a struct for each instruction
    // variant to deserialize the payload...
}

With Subglow, all of that parsing complexity disappears. Subglow is Yellowstone (Dragon's Mouth) compatible, so you use the exact same yellowstone-grpc-client — but responses include a parsed field containing pre-parsed JSON. The server-side parsing pipeline, written in Rust, handles account resolution, instruction discrimination, Borsh deserialization, and inner instruction extraction. Pre-parsed JSON eliminates 15–30ms of parsing overhead that would otherwise happen in your client.

parse_subglow.rs
// Subglow: pre-parsed JSON — zero deserialization
use serde_json::Value;

Some(UpdateOneof::Transaction(tx_update)) => {
    if let Some(ref parsed) = tx_update.parsed {
        let data: Value = serde_json::from_str(parsed)?;
        let program = data["program"].as_str().unwrap_or_default();
        let event_type = data["type"].as_str().unwrap_or_default();
        let sig = data["signature"].as_str().unwrap_or_default();
        println!("{program} {event_type} {sig}");
    }
}

The JSON contains everything your bot needs — token amounts, buyer and seller addresses, bonding curve percentages, pool reserves, price impact — already extracted and labeled. You parse it with serde_json and feed it directly into your trading logic. No Borsh structs, no account index mapping, no discriminator lookup tables. For typed deserialization, define a Rust struct with #[derive(Deserialize)] and serde_json::from_str gives you compile-time type safety on the parsed output.

Error handling & reconnection

gRPC streams are long-lived connections, and every long-lived connection will eventually disconnect. Network partitions, load balancer rotations, server deployments, and transient DNS failures all cause stream interruptions. A production Rust bot that doesn't implement reconnection logic will silently stop receiving data after the first interruption — and in trading, silent data gaps are catastrophic.

The standard approach is a reconnection loop with exponential backoff. When the stream returns None (stream ended) or an Err (gRPC error), you wait a short interval, then attempt to reconnect. If the reconnection fails, double the wait interval up to a configurable ceiling. This prevents your client from flooding the server during an outage.

Different gRPC status codes require different handling. UNAVAILABLE (code 14) is transient — the server is temporarily unreachable, retry with backoff. RESOURCE_EXHAUSTED (code 8) means you've hit a rate limit — back off longer before retrying. UNAUTHENTICATED (code 16) is permanent — your API key is invalid and retrying won't help. Distinguishing between retryable and non-retryable errors prevents your bot from wasting CPU cycles on reconnections that will never succeed.

reconnect.rs
use std::time::Duration;
use tokio::time::sleep;
use tonic::Code;

const MAX_BACKOFF: Duration = Duration::from_secs(30);

async fn stream_with_reconnect() -> anyhow::Result<()> {
    let mut backoff = Duration::from_secs(1);
    let mut last_slot: u64 = 0;
    loop {
        // connect_and_stream(): your connect + subscribe + consume loop
        match connect_and_stream(&mut last_slot).await {
            Ok(()) => {
                eprintln!("Stream ended cleanly — reconnecting");
                backoff = Duration::from_secs(1);
            }
            Err(e) => {
                if let Some(status) = e.downcast_ref::<tonic::Status>() {
                    match status.code() {
                        Code::Unauthenticated => {
                            eprintln!("Invalid API key — exiting");
                            return Err(e);
                        }
                        Code::ResourceExhausted => {
                            backoff = MAX_BACKOFF;
                        }
                        _ => {}
                    }
                }
                eprintln!("Error: {e} — retry in {backoff:?}");
            }
        }
        sleep(backoff).await;
        backoff = (backoff * 2).min(MAX_BACKOFF);
    }
}

Reset backoff on success

Set backoff to 1s when the stream ends cleanly so transient hiccups recover instantly.

Track last processed slot

After reconnection, compare incoming slots to last_slot to detect and log data gaps.

Distinguish error codes

UNAVAILABLE = retry. RESOURCE_EXHAUSTED = long backoff. UNAUTHENTICATED = exit immediately.

Cap the backoff ceiling

Use Duration::min() to cap exponential backoff at 30 seconds during extended outages.

Performance tips

Rust gives you control over performance that no garbage-collected language can match. Here are the patterns that matter most for Solana gRPC streaming bots where low execution latency is the goal.

01

Use tokio::select! for multiple streams

If you need to monitor multiple data sources — say, a transaction stream and an account stream, or streams from different endpoints for redundancy — use tokio::select! to poll them concurrently. The macro resolves whichever future completes first, letting you react to the fastest data source without blocking on the slower one. This is more efficient than spawning separate tasks when the streams share state.

02

Channel-based architecture

Separate your stream consumer from your processing logic using tokio::sync::mpsc channels. One task reads from the gRPC stream and sends events into a channel; another task receives events and executes your trading logic. This decoupling means a slow trade execution (waiting for a transaction to confirm) never blocks your stream consumer from processing the next event. The channel provides natural backpressure — if the processing task falls behind, the channel buffers events up to your configured capacity.

03

Zero-copy where possible

Avoid cloning protobuf messages. Use references and as_ref() to access nested fields without allocating new memory. When you need to pass transaction data to a processing function, pass a reference instead of moving or cloning the entire struct. For Borsh deserialization, consider using borsh::from_slice which deserializes directly from a byte slice without intermediate allocations. Every allocation saved is microseconds reclaimed.

04

Pre-allocate collections

Use Vec::with_capacity() and HashMap::with_capacity() when you know the approximate size of your data. Solana transactions typically have 5–20 account keys and 1–10 instructions. Pre-allocating avoids the repeated reallocations that occur when a Vec grows beyond its capacity — each reallocation copies the entire buffer to a new location.

channel_pipeline.rs
use tokio::sync::mpsc;

let (tx, mut rx) = mpsc::channel(1024);

// Producer: reads the gRPC stream, sends events into the channel
tokio::spawn(async move {
    while let Some(Ok(msg)) = stream.next().await {
        let _ = tx.send(msg).await;
    }
});

// Consumer: processes events independently of stream consumption
while let Some(update) = rx.recv().await {
    match update.update_oneof {
        // process_tx: your trading logic (not shown)
        Some(UpdateOneof::Transaction(tx)) => process_tx(tx),
        _ => {}
    }
}

Complete example

Here's a full, working main.rs that ties everything together: connecting to Subglow, subscribing to Pump.fun and Raydium transactions, handling the stream with proper ping/pong keepalives, parsing pre-parsed JSON events, and reconnecting on failure with exponential backoff. Copy this into your project, replace YOUR_SUBGLOW_KEY with your actual API key, and run with cargo run.

main.rs
use std::collections::HashMap;
use std::time::Duration;

use anyhow::Result;
use futures::StreamExt;
use tokio::time::sleep;
use tonic::Code;
use yellowstone_grpc_client::GeyserGrpcClient;
use yellowstone_grpc_proto::prelude::*;
use yellowstone_grpc_proto::prelude::subscribe_update::UpdateOneof;

const ENDPOINT: &str = "https://grpc.subglow.io";
const API_KEY: &str = "YOUR_SUBGLOW_KEY";
const PUMP_FUN: &str = "6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P";
const RAYDIUM: &str = "675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8";
const MAX_BACKOFF: Duration = Duration::from_secs(30);

fn build_request() -> SubscribeRequest {
    let txn_filter = SubscribeRequestFilterTransactions {
        vote: Some(false),
        failed: Some(false),
        account_include: vec![
            PUMP_FUN.to_string(),
            RAYDIUM.to_string(),
        ],
        ..Default::default()
    };
    let mut transactions = HashMap::new();
    transactions.insert("defi".to_string(), txn_filter);
    SubscribeRequest {
        transactions,
        commitment: Some(CommitmentLevel::Confirmed as i32),
        ..Default::default()
    }
}

async fn run_stream(last_slot: &mut u64) -> Result<()> {
    let mut client = GeyserGrpcClient::build_from_shared(ENDPOINT)?
        .x_token(Some(API_KEY.to_string()))?
        .connect()
        .await?;
    let request = build_request();
    let (subscribe_tx, mut stream) =
        client.subscribe_with_request(Some(request)).await?;
    println!("Connected — streaming events...");

    while let Some(msg) = stream.next().await {
        let msg = msg?;
        match msg.update_oneof {
            Some(UpdateOneof::Transaction(tx_update)) => {
                *last_slot = tx_update.slot;
                let sig = bs58::encode(
                    &tx_update.transaction.as_ref().unwrap().signature
                ).into_string();
                if let Some(ref parsed) = tx_update.parsed {
                    let data: serde_json::Value = serde_json::from_str(parsed)?;
                    println!(
                        "[slot {}] {} {} {}",
                        tx_update.slot,
                        data["program"].as_str().unwrap_or("?"),
                        data["type"].as_str().unwrap_or("?"),
                        &sig[..16],
                    );
                } else {
                    println!("[slot {}] raw tx — {}", tx_update.slot, &sig[..16]);
                }
            }
            Some(UpdateOneof::Ping(_)) => {
                subscribe_tx.send(SubscribeRequest {
                    ping: Some(SubscribeRequestPing { id: 1 }),
                    ..Default::default()
                }).await?;
            }
            _ => {}
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut backoff = Duration::from_secs(1);
    let mut last_slot: u64 = 0;
    loop {
        match run_stream(&mut last_slot).await {
            Ok(()) => {
                eprintln!("Stream ended — reconnecting");
                backoff = Duration::from_secs(1);
            }
            Err(e) => {
                if let Some(s) = e.downcast_ref::<tonic::Status>() {
                    if s.code() == Code::Unauthenticated {
                        eprintln!("Invalid API key");
                        return Err(e);
                    }
                }
                eprintln!("Error: {e} — retry in {backoff:?}");
            }
        }
        sleep(backoff).await;
        backoff = (backoff * 2).min(MAX_BACKOFF);
    }
}

This example is production-ready in structure. It handles authentication errors as non-retryable, uses exponential backoff with a 30-second ceiling for transient failures, tracks the last processed slot for gap detection, responds to ping keepalives, and cleanly separates connection logic from event processing. The only thing you need to add is your actual trading logic inside the transaction handler — the infrastructure for receiving and routing events is complete.

Run the example with cargo run --release for optimized performance. The --release flag enables compiler optimizations that significantly reduce protobuf deserialization time and improve throughput for high-volume streams.

Frequently asked questions

Which crate should I use for Yellowstone gRPC in Rust?

yellowstone-grpc-client is the official Rust client maintained by the Yellowstone team. Add it to your Cargo.toml alongside yellowstone-grpc-proto for the protobuf type definitions. Both crates are published on crates.io.

Is Rust necessary for Solana gRPC bots?

Not necessary, but optimal. Rust gives you zero-cost abstractions, no garbage collector pauses, and the lowest execution latency of any language. If you're building MEV bots, arbitrage systems, or high-frequency trading infrastructure where microseconds matter, Rust is the standard choice.

How do I handle gRPC stream disconnections in Rust?

Wrap your subscription in a loop with exponential backoff using tokio::time::sleep. Match on tonic::Status error codes — UNAVAILABLE and RESOURCE_EXHAUSTED are transient and should be retried, while UNAUTHENTICATED is permanent. Track the last processed slot to detect data gaps after reconnection.

Can I subscribe to multiple programs in one connection?

Yes. Add multiple program IDs to the account_include field in your SubscribeRequestFilterTransactions. All matching transactions stream through a single gRPC connection. You can also combine transaction and account filters in the same SubscribeRequest.

How does Subglow work with the Rust Yellowstone client?

Subglow is Yellowstone (Dragon's Mouth) compatible. You use the same yellowstone-grpc-client crate — just change the endpoint to grpc.subglow.io and add your API key via x_token. Responses include a pre-parsed JSON field that eliminates 15–30ms of Borsh deserialization overhead.

What async runtime does yellowstone-grpc-client require?

The client is built on tonic, which requires tokio. Use tokio with the full feature flag enabled. The subscribe method returns a tonic::Streaming that implements futures::Stream, so you consume events with StreamExt::next().await.

Rust + real-time Solana data.

Same Yellowstone client. Pre-parsed JSON output. Zero Borsh deserialization. Flat monthly pricing. cargo add and go.