The Application layer is the heart of Rippled's architecture: the central orchestrator that initializes, coordinates, and manages all subsystems. Understanding the Application layer is essential for grasping how Rippled operates as a cohesive system, in which consensus, networking, transaction processing, and ledger management all work together.
At its core, the Application class acts as a dependency injection container and service locator, providing every component access to the resources it needs while maintaining clean separation of concerns. Whether you're debugging a startup issue, optimizing system performance, or implementing a new feature, you'll inevitably interact with the Application layer.
The Application Class Architecture
Design Philosophy
The Application class follows several key design principles that make Rippled maintainable and extensible:
Single Point of Coordination: Instead of components directly creating and managing their dependencies, everything flows through the Application. This centralization makes it easy to understand system initialization and component relationships.
Dependency Injection: Components receive their dependencies through constructor parameters rather than creating them internally. This makes testing easier and dependencies explicit.
Interface-Based Design: The Application class implements the Application interface, allowing for different implementations (production, test, mock) without changing dependent code.
Lifetime Management: The Application controls the creation, initialization, and destruction of all major subsystems, ensuring proper startup/shutdown sequences.
Application Interface
The Application interface is defined in src/ripple/app/main/Application.h.
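A hedged sketch of its shape follows. The accessor names mirror those used throughout this chapter, but this is abridged: the real header declares many more members, and exact signatures vary between rippled versions.

class Application
{
public:
    virtual ~Application() = default;

    // Subsystem accessors (abridged)
    virtual Config& config() = 0;
    virtual Logs& logs() = 0;
    virtual JobQueue& getJobQueue() = 0;
    virtual LedgerMaster& getLedgerMaster() = 0;
    virtual NetworkOPs& getOPs() = 0;
    virtual Overlay& overlay() = 0;
    virtual TxQ& getTxQ() = 0;
    virtual NodeStore::Database& getNodeStore() = 0;
    virtual RelationalDatabase& getRelationalDatabase() = 0;
    virtual RCLValidations& getValidations() = 0;
};

Because dependent code is written against this interface, a production, test, or mock implementation can be substituted without changing the components that consume it.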
Run Phase (ApplicationImp::run()):

void ApplicationImp::run()
{
    JLOG(j_.info()) << "Application starting";

    // Start processing jobs
    jobQueue_->start();

    // Enter main loop
    {
        std::unique_lock<std::mutex> lock(mutex_);

        // Wait until shutdown signal
        while (!isShutdown_)
        {
            cv_.wait(lock);
        }
    }

    JLOG(j_.info()) << "Application stopping";
}

All work happens in background threads managed by the various subsystems. The main thread simply waits for a shutdown signal.
Phase 5: Shutdown
Graceful Shutdown (ApplicationImp::signalStop()):

// Invoked when a stop is requested (for example, by a signal handler
// or the "stop" admin command):
app->signalStop();

void ApplicationImp::signalStop()
{
    JLOG(j_.info()) << "Shutdown requested";

    // 1. Set shutdown flag (under the mutex so the main thread
    //    cannot miss the notification)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        isShutdown_ = true;
    }

    // 2. Stop accepting new work
    overlay_->stop();
    rpcHandler_->stop();

    // 3. Complete in-flight operations
    jobQueue_->finish();

    // 4. Stop subsystems (reverse order of creation)
    networkOPs_->stop();
    ledgerMaster_->stop();

    // 5. Close databases
    nodeStore_->close();
    relationalDB_->close();

    // 6. Wake up main thread
    cv_.notify_all();

    JLOG(j_.info()) << "Shutdown complete";
}
Shutdown Order: Components are stopped in reverse order of their creation to ensure dependencies are still available when each component shuts down.
Complete Lifecycle Diagram

Program Start
    ↓
Load Configuration (rippled.cfg)
    ↓
Create Application Instance
    ↓
Construct Subsystems
  • Databases
  • Networking
  • Ledger Management
  • Transaction Processing
  • Consensus
    ↓
Setup Phase
  • Load Last Ledger
  • Initialize Components
  • Start Network
    ↓
Run Phase (Main Loop)
  • Process Jobs
  • Handle Consensus
  • Process Transactions
  • Serve RPC Requests
    ↓
Shutdown Signal Received
    ↓
Graceful Shutdown
  • Stop Accepting Work
  • Complete In-Flight Operations
  • Stop Subsystems
  • Close Databases
    ↓
Program Exit
Subsystem Coordination
The Service Locator Pattern
The Application acts as a service locator, allowing any component to access any other component through the app reference:

class SomeComponent
{
public:
    explicit SomeComponent(Application& app)
        : app_(app)
    {
        // Components store the app reference
    }

    void doWork()
    {
        // Access other components through app
        auto& ledgerMaster = app_.getLedgerMaster();
        auto& overlay = app_.overlay();
        auto& jobs = app_.getJobQueue();

        // Use the components...
    }

private:
    Application& app_;
};
Major Subsystems
LedgerMaster
Purpose: Manages the chain of validated ledgers and coordinates ledger progression.
Key Responsibilities:
Track current validated ledger
Build candidate ledgers for consensus
Synchronize ledger history
Maintain ledger cache
Coordinate with consensus engine
Access: app.getLedgerMaster()
Important Methods:

// Get current validated ledger
std::shared_ptr<Ledger const> getValidatedLedger();

// Get closed ledger (not yet validated)
std::shared_ptr<Ledger const> getClosedLedger();

// Advance to new ledger
void advanceLedger();

// Fetch missing ledgers
void fetchLedger(LedgerHash const& hash);
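For example, a component might read the most recent validated ledger as in the following sketch. It assumes an Application& app and a beast::Journal j in scope, and allows for getValidatedLedger() returning null while the server is still syncing.

// Sketch: inspect the most recent validated ledger, if any.
auto& ledgerMaster = app.getLedgerMaster();

if (auto const validated = ledgerMaster.getValidatedLedger())
{
    JLOG(j.info()) << "Validated ledger sequence: "
                   << validated->info().seq;
}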
NetworkOPs
Purpose: Coordinates network operations and transaction processing.
Key Responsibilities:
Process submitted transactions
Manage transaction queue
Coordinate consensus participation
Track network state
Publish ledger close events
Access: app.getOPs()
Important Methods:

// Submit transaction
void submitTransaction(std::shared_ptr<STTx const> const& tx);

// Process transaction
void processTransaction(
    std::shared_ptr<Transaction>& transaction,
    bool trusted,
    bool local);

// Get network state
OperatingMode getOperatingMode();
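A sketch of the local submission path, using the signatures listed above; the surrounding parsing and error handling are omitted, and the flag values shown are illustrative.

// Sketch: hand a locally submitted transaction to NetworkOPs.
// 'transaction' is a std::shared_ptr<Transaction> built from the
// client's signed blob.
app.getOPs().processTransaction(
    transaction,
    false,  // trusted: did not come from a trusted source
    true);  // local: submitted by a local client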
Overlay
Purpose: Manages the peer-to-peer networking layer.
Key Responsibilities:
Peer discovery and connection
Message routing
Network topology maintenance
Bandwidth management
Access: app.overlay()
Important Methods:

// Send message to all peers
void broadcast(std::shared_ptr<Message> const& message);

// Get active peer count
std::size_t size() const;

// Connect to specific peer
void connect(std::string const& ip);
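A usage sketch built on the methods above; the peer address and the connection threshold are illustrative only, and a beast::Journal j is assumed in scope.

// Sketch: monitor connectivity and dial out when under-connected.
auto& overlay = app.overlay();

JLOG(j.info()) << "Active peers: " << overlay.size();

if (overlay.size() < 10)
    overlay.connect("198.51.100.7:51235");  // 51235 is the default peer port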
TxQ (Transaction Queue)
Purpose: Manages transaction queuing when the network is busy.
Key Responsibilities:
Queue transactions during high load
Fee-based prioritization
Account-based queuing limits
Transaction expiration
Access: app.getTxQ()
Important Methods:

// Check if transaction can be added
std::pair<TER, bool>
apply(Application& app, OpenView& view, STTx const& tx);

// Get queue status
Json::Value getJson();
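A sketch of how the queue is consulted during transaction application, using the simplified apply() signature listed above; 'view' is the open ledger being built and 'tx' a parsed STTx.

// Sketch: attempt to apply; the queue may hold the transaction
// instead of applying it to the open ledger immediately.
auto const [ter, applied] = app.getTxQ().apply(app, view, tx);

if (!applied)
{
    // Not applied now: either queued for a later ledger or
    // rejected outright -- inspect 'ter' to tell which.
}

// Queue metrics as exposed to RPC clients.
Json::Value const queueInfo = app.getTxQ().getJson();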
NodeStore
Purpose: Persistent storage for ledger data.
Key Responsibilities:
Store ledger state nodes
Provide efficient retrieval
Cache frequently accessed data
Support different backend databases (RocksDB, NuDB)
Access: app.getNodeStore()
Important Methods:
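A sketch of the core access surface, modeled on NodeStore::Database in rippled; the exact parameters vary across versions and should be treated as assumptions.

// Store a serialized node under its content hash.
void store(NodeObjectType type, Blob&& data,
    uint256 const& hash, std::uint32_t ledgerSeq);

// Fetch a node object by hash, consulting the cache first.
std::shared_ptr<NodeObject> fetchNodeObject(
    uint256 const& hash, std::uint32_t ledgerSeq = 0);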
RelationalDatabase
Purpose: SQL database for indexed data and historical queries.
Key Responsibilities:
Store transaction metadata
Maintain account transaction history
Support RPC queries (account_tx, tx)
Store ledger headers
Access: app.getRelationalDatabase()
Database Types:
SQLite (default, embedded)
PostgreSQL (production deployments)
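A read-side sketch follows; the method names (getMaxLedgerSeq, getLedgerInfoByIndex) are modeled on rippled's RelationalDatabase interface and are assumptions that may differ by version.

// Sketch: find the newest ledger recorded in the SQL store.
auto& rdb = app.getRelationalDatabase();

if (auto const maxSeq = rdb.getMaxLedgerSeq())
{
    if (auto const info = rdb.getLedgerInfoByIndex(*maxSeq))
    {
        // info->seq, info->hash, info->closeTime, ...
    }
}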
Validations
Purpose: Manages validator signatures on ledger closes.
Key Responsibilities:
Collect validations from validators
Track validator key rotations (manifests)
Determine ledger validation quorum
Publish validation stream
Access: app.getValidations()
Important Methods:

// Add validation
void addValidation(STValidation const& val);

// Get validations for ledger
std::vector<std::shared_ptr<STValidation>>
getValidations(LedgerHash const& hash);

// Check if ledger is validated
bool hasQuorum(LedgerHash const& hash);
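A consumer-side sketch using the calls above; ledgerHash is assumed to identify a recently closed ledger.

// Sketch: check whether a ledger has reached validation quorum.
if (app.getValidations().hasQuorum(ledgerHash))
{
    // Enough trusted validators have signed; the ledger can be
    // treated as fully validated.
}

// Examine the individual validations collected for that ledger.
for (auto const& val : app.getValidations().getValidations(ledgerHash))
{
    // e.g. record the signer and signing time of each validation
}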
Job Queue System
Purpose and Design
The job queue is Rippled's work scheduling system. Instead of each subsystem creating its own threads, work is submitted as jobs to a centralized queue processed by a thread pool. This provides:
Centralized thread management: Easier to control thread count and CPU usage
Priority-based scheduling: Critical jobs are processed before low-priority ones

Job priorities are defined by the JobType enumeration:

enum JobType
{
    // Special job types
    jtINVALID = -1,
    jtPACK,             // Job queue work pack

    // High priority - consensus critical
    jtPUBOLDLEDGER,     // Publish old ledger
    jtVALIDATION_ut,    // Process validation (untrusted)
    jtPROPOSAL_ut,      // Process consensus proposal
    jtLEDGER_DATA,      // Process ledger data

    // Medium priority
    jtTRANSACTION,      // Process transaction
    jtADVANCE,          // Advance ledger
    jtPUBLEDGER,        // Publish ledger
    jtTXN_DATA,         // Transaction data retrieval

    // Low priority
    jtUPDATE_PF,        // Update path finding
    jtCLIENT,           // Handle client request
    jtRPC,              // Process RPC
    jtTRANSACTION_l,    // Process transaction (low priority)

    // Lowest priority
    jtPEER,             // Peer message
    jtDISK,             // Disk operations
    jtADMIN,            // Administrative operations
};

Work is submitted as a job type, a name for logging, and a function to execute:

// Get job queue reference
JobQueue& jobs = app.getJobQueue();

// Submit a job
jobs.addJob(
    jtTRANSACTION,      // Job type
    "processTx",        // Job name (for logging)
    [this, tx](Job&)    // Job function
    {
        // Do work here
        processTransaction(tx);
    });

The worker thread count is derived from the node_size setting in rippled.cfg:

[node_size]
# Affects worker thread count; choose one value
tiny    # 1 thread
small   # 2 threads
medium  # 4 threads (default)
large   # 8 threads
huge    # 16 threads