## Validator Modes

The validator switches between two operational modes:

- Leader Mode (TPU) - Produces blocks when scheduled as the cluster leader
- Validator Mode (TVU) - Validates and votes on blocks produced by other leaders
## Pipelining Architecture

Validators use pipelining extensively to maximize throughput. Like a CPU pipeline or an assembly line, different stages of transaction processing occur simultaneously on different hardware resources.

Think of pipelining like a laundry process: while one load is being washed, another can be dried, and a third can be folded. Each stage uses different hardware and can operate independently.

The stages map onto distinct hardware resources:
- Network I/O - QUIC endpoints for receiving/sending data
- GPU - Signature verification (when available)
- CPU cores - Transaction execution, banking, consensus
- Disk I/O - Blockstore writes and reads
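The stage-per-thread idea can be sketched with plain channels. This is a minimal illustration using std `mpsc`, not the actual Agave implementation; the stage names and the toy "verify" rule are invented for the example:

```rust
use std::sync::mpsc;
use std::thread;

// Toy three-stage pipeline: fetch -> verify -> execute, connected by
// channels. Each stage runs on its own thread, so all three can make
// progress simultaneously, like the validator's pipeline stages.
fn run_pipeline(packets: Vec<u64>) -> Vec<u64> {
    let (fetch_tx, verify_rx) = mpsc::channel();
    let (verify_tx, exec_rx) = mpsc::channel();

    // Stage 1: "fetch" feeds raw packets into the pipeline.
    let fetcher = thread::spawn(move || {
        for p in packets {
            fetch_tx.send(p).unwrap();
        }
        // Dropping fetch_tx closes the channel, letting stage 2 finish.
    });

    // Stage 2: "verify" filters packets (here: drop odd values as "invalid").
    let verifier = thread::spawn(move || {
        for p in verify_rx {
            if p % 2 == 0 {
                verify_tx.send(p).unwrap();
            }
        }
    });

    // Stage 3: "execute" consumes verified packets on the calling thread.
    let executed: Vec<u64> = exec_rx.iter().map(|p| p + 1).collect();
    fetcher.join().unwrap();
    verifier.join().unwrap();
    executed
}
```

Because every stage owns only its channel endpoints, a slow stage backs up its input channel without stalling the stages downstream of it.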
## Transaction Processing Unit (TPU)

The TPU implements the block production pipeline. It consists of several interconnected stages.

### QUIC Streamers

Three separate QUIC servers handle different types of traffic:

- TPU QUIC - Regular transaction ingestion
- TPU Forwards QUIC - Forwarded transactions from other validators
- TPU Vote QUIC - Vote transactions (higher priority)

These endpoints share common protections:

- Stake-weighted Quality of Service (QoS)
- Connection limits based on sender stake
- Rate limiting to prevent spam
- Stream multiplexing for efficiency
core/src/tpu.rs:211
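The stake-weighting idea can be sketched as follows. The constants and the linear formula here are hypothetical, chosen only to illustrate the concept; Agave's actual limits differ:

```rust
// Illustrative stake-weighted stream allocation: a sender's share of total
// stake determines how many concurrent QUIC streams it may open, clamped
// between a floor for unstaked peers and a global ceiling.
const MIN_STREAMS: u64 = 2; // floor granted even to unstaked senders (assumed)
const MAX_STREAMS: u64 = 512; // ceiling for the largest stakeholders (assumed)

fn max_streams_for_stake(stake: u64, total_stake: u64) -> u64 {
    if total_stake == 0 || stake == 0 {
        return MIN_STREAMS;
    }
    // Scale the ceiling by the sender's fraction of total stake.
    let share = (stake as u128 * MAX_STREAMS as u128 / total_stake as u128) as u64;
    share.clamp(MIN_STREAMS, MAX_STREAMS)
}
```

The effect is that a spammer with no stake is confined to the floor, while connection capacity tracks economic weight in the cluster.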
### Fetch Stage

Receives UDP vote packets and manages the packet ingress pipeline:

- Allocates packet memory
- Reads from vote sockets
- Applies initial packet coalescing
- Forwards to signature verification
core/src/tpu.rs:165
### SigVerify Stage

Parallel signature verification stage:

- Uses a Rayon thread pool for parallel verification
- Deduplicates packets before verification
- Applies load shedding under high load
- Sets discard flag on invalid signatures
- Separates vote and non-vote transactions
core/src/tpu.rs:266
Signature verification is one of the most CPU-intensive operations in transaction processing. Parallelizing it across multiple cores is critical for high throughput.
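The dedup-then-verify-in-parallel flow can be sketched with scoped threads. This is a toy model: the `Packet` type is invented, and the "signature" check is a stand-in checksum rather than real ed25519 verification:

```rust
use std::collections::HashSet;
use std::thread;

struct Packet {
    data: Vec<u8>,
    discard: bool,
}

// Stand-in "signature check": the last byte must equal the wrapping sum of
// the preceding bytes. A real validator verifies ed25519 signatures here.
fn verify(data: &[u8]) -> bool {
    match data.split_last() {
        Some((sig, body)) => *sig == body.iter().fold(0u8, |a, b| a.wrapping_add(*b)),
        None => false,
    }
}

fn sigverify(mut packets: Vec<Packet>) -> Vec<Packet> {
    // Deduplicate before verifying: identical payloads are flagged up front
    // so the expensive check runs at most once per unique packet.
    let mut seen = HashSet::new();
    for p in packets.iter_mut() {
        if !seen.insert(p.data.clone()) {
            p.discard = true;
        }
    }
    // Verify the remaining packets in parallel chunks, one thread per chunk,
    // mirroring how the real stage fans work out across a thread pool.
    thread::scope(|s| {
        for chunk in packets.chunks_mut(2) {
            s.spawn(move || {
                for p in chunk {
                    if !p.discard && !verify(&p.data) {
                        p.discard = true; // flag, don't drop: later stages skip these
                    }
                }
            });
        }
    });
    packets
}
```

Note that invalid packets are flagged rather than removed, matching the discard-flag behavior described above.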
### Banking Stage

The heart of transaction processing:

- Buffers transactions when approaching leader slot
- Executes transactions against the Bank (account state)
- Locks accounts to prevent conflicts
- Parallelizes non-conflicting transactions
- Records transactions in Proof of History
- Handles transaction scheduling and prioritization

Two scheduler implementations are available:

- Unified Scheduler (default) - Advanced parallel scheduler
- Central Scheduler - Centralized transaction coordination
core/src/tpu.rs:305
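The account-locking idea is the key to safe parallelism: transactions touching disjoint accounts can run at the same time. A minimal sketch, with an invented `Tx` type and a greedy batching rule that is not the actual Agave scheduler:

```rust
use std::collections::HashSet;

// A toy transaction naming the accounts it reads or writes.
struct Tx {
    id: u32,
    accounts: Vec<&'static str>,
}

// Greedily pick a batch of transactions whose account sets don't overlap.
// Transactions sharing an account conflict and wait for a later batch;
// everything selected here can execute in parallel.
fn schedule_batch(queue: &[Tx]) -> Vec<u32> {
    let mut locked: HashSet<&str> = HashSet::new();
    let mut batch = Vec::new();
    for tx in queue {
        if tx.accounts.iter().all(|a| !locked.contains(a)) {
            locked.extend(tx.accounts.iter().copied()); // take the locks
            batch.push(tx.id);
        }
    }
    batch
}
```

In the sketch a transaction is skipped as soon as any of its accounts is locked, which is why declaring account access up front is what makes this scheduling decision cheap.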
### Forwarding Stage

Forwards transactions to upcoming leaders:

- Determines next leader from the schedule
- Prioritizes transactions for forwarding
- Forwards votes unconditionally
- Forwards non-vote transactions based on configuration
- Uses QUIC for efficient forwarding
core/src/tpu.rs:329
### Broadcast Stage

Disseminates blocks to the network:

- Receives entries from Banking Stage
- Serializes entries into shreds (fragments)
- Signs shreds with validator identity
- Generates erasure codes for fault tolerance
- Broadcasts via Turbine tree structure
core/src/tpu.rs:354
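Shredding plus erasure coding can be sketched as follows. Real shreds carry headers, merkle proofs, and signatures, and Agave uses Reed-Solomon coding; the single XOR parity shred below is a simplified stand-in that can still recover any one missing data shred:

```rust
// Split a serialized entry batch into fixed-size data shreds and append one
// XOR parity shred. SHRED_SIZE is tiny here for readability.
const SHRED_SIZE: usize = 4;

fn shred(payload: &[u8]) -> Vec<Vec<u8>> {
    let mut shreds: Vec<Vec<u8>> = payload
        .chunks(SHRED_SIZE)
        .map(|c| {
            let mut s = c.to_vec();
            s.resize(SHRED_SIZE, 0); // zero-pad the final shred
            s
        })
        .collect();
    // Parity shred: byte-wise XOR of all data shreds.
    let mut parity = vec![0u8; SHRED_SIZE];
    for s in &shreds {
        for (p, b) in parity.iter_mut().zip(s) {
            *p ^= b;
        }
    }
    shreds.push(parity);
    shreds
}

// Rebuild one missing data shred by XOR-ing the survivors with the parity.
fn recover(survivors: &[Vec<u8>]) -> Vec<u8> {
    let mut missing = vec![0u8; SHRED_SIZE];
    for s in survivors {
        for (m, b) in missing.iter_mut().zip(s) {
            *m ^= b;
        }
    }
    missing
}
```

The payoff is that a receiver does not need every packet: with enough coding shreds, lost data shreds are reconstructed locally instead of being re-requested.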
## Transaction Validation Unit (TVU)

The TVU implements the block validation pipeline.

### Shred Fetch Stage

Receives block fragments from the network:

- Listens on multiple UDP sockets for shreds
- Receives repair responses via QUIC
- Distributes work across threads
- Filters shreds by version
- Handles retransmitted shreds
core/src/tvu.rs:321
### Shred SigVerify

Verifies shred signatures in parallel:

- Validates leader signatures on shreds
- Checks that shreds are from the expected leader
- Filters invalid shreds before further processing
- Uses multiple verification threads
core/src/tvu.rs:339
### Window Service

Manages shred assembly and repair:

- Collects shreds into complete blocks
- Detects missing shreds
- Initiates repair requests for gaps
- Handles ancestor hashes for fork validation
- Manages duplicate slot detection
core/src/tvu.rs:368
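Gap detection, the step that drives repair requests, can be sketched in a few lines. The function name and signature are illustrative, not Agave's:

```rust
use std::collections::HashSet;

// Given the shred indices received for a slot and the slot's last index,
// return the indices still missing. The window service turns these gaps
// into repair requests sent to other validators.
fn missing_indices(received: &[u64], last_index: u64) -> Vec<u64> {
    let have: HashSet<u64> = received.iter().copied().collect();
    (0..=last_index).filter(|i| !have.contains(i)).collect()
}
```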
### Replay Stage

Executes and validates blocks:

- Replays transactions from assembled blocks
- Maintains fork state and bank hierarchy
- Implements fork choice logic
- Generates votes on valid blocks
- Handles rollback on invalid forks
- Coordinates with consensus (Tower)
core/src/tvu.rs:557
### Retransmit Stage

Propagates shreds to other validators:

- Implements Turbine protocol for block propagation
- Uses erasure coding for fault tolerance
- Organizes validators into a tree structure
- Forwards shreds to designated neighbors
- Optimizes for network topology
core/src/tvu.rs:349
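The tree structure can be sketched as a simple fan-out computation. Real Turbine also shuffles the validator list per shred and weights it by stake; those details are omitted here:

```rust
// Turbine-style fan-out tree: with fan-out F, the node at position i in the
// retransmit order forwards to positions i*F + 1 ..= i*F + F. Every shred
// reaches all n validators in O(log_F n) hops.
fn children(index: usize, fanout: usize, n: usize) -> Vec<usize> {
    (1..=fanout)
        .map(|k| index * fanout + k)
        .filter(|&c| c < n)
        .collect()
}
```

Each validator forwards only to its own children, so the leader's egress bandwidth stays constant no matter how large the cluster grows.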
## Supporting Services

### Cluster Info Vote Listener

Processes votes from gossip:

- Receives votes via gossip network
- Verifies vote signatures
- Tracks voting patterns
- Feeds votes into consensus
core/src/tpu.rs:289
### Voting Service

Sends validator votes to the network:

- Creates vote transactions
- Signs votes with validator key
- Submits to RPC or broadcasts directly
- Manages vote account state
core/src/tvu.rs:528
### Staked Nodes Updater

Maintains current stake distribution:

- Updates stake weights from epoch changes
- Provides stake info to QUIC QoS
- Supports stake-weighted operations
core/src/tpu.rs:175
### Cost Update Service

Tracks transaction costs for the fee market:

- Updates cost model parameters
- Feeds prioritization fee cache
- Supports dynamic fee adjustments
core/src/tvu.rs:553
## Process Lifecycle

The validator follows this lifecycle:

1. Initialization - Load configuration, keypairs, and genesis
2. Blockstore Setup - Initialize or load the existing ledger
3. Service Start - Launch all pipeline stages
4. Gossip Join - Connect to the cluster via gossip
5. Sync - Replay the ledger and catch up to the cluster
6. Active Operation - Process transactions and vote on blocks
7. Graceful Shutdown - Clean exit on signal
core/src/validator.rs:1
## Resource Management

Validators carefully manage system resources:

- File Descriptors - Adjusted for high connection counts
- Memory - Bounded channels prevent memory exhaustion
- CPU - Thread pools sized for available cores
- Disk - Automatic ledger pruning to manage storage
- Network - Port ranges for multiple services
core/src/validator.rs:26
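The bounded-channel point can be made concrete with std's `sync_channel`. This sketch (the function and capacity are invented for illustration) shows how a full buffer turns into load shedding instead of unbounded memory growth:

```rust
use std::sync::mpsc;

// Bounded channels give the pipeline backpressure: once the buffer is full,
// producers either block or shed load rather than growing memory without
// limit. Returns (packets accepted, packets shed).
fn demo_backpressure() -> (usize, usize) {
    let (tx, rx) = mpsc::sync_channel::<u64>(2); // capacity: 2 in-flight items
    let mut accepted = 0;
    let mut shed = 0;
    for p in 0..5 {
        match tx.try_send(p) {
            Ok(()) => accepted += 1,
            Err(mpsc::TrySendError::Full(_)) => shed += 1, // load shedding
            Err(e) => panic!("receiver gone: {e}"),
        }
    }
    drop(tx);
    // The receiver drains exactly what fit in the buffer.
    assert_eq!(rx.iter().count(), accepted);
    (accepted, shed)
}
```

Whether a stage blocks (`send`) or sheds (`try_send`) on a full channel is a per-stage policy choice: vote paths tend to favor blocking, bulk transaction ingestion favors shedding.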
## Next Steps

- Learn about Tower BFT consensus
- Understand the Sealevel runtime
- Explore cluster participation