Application Layer: Central Orchestration and Coordination
Introduction
The Application layer is the heart of Rippled's architecture—the central orchestrator that initializes, coordinates, and manages all subsystems. Understanding the Application layer is essential for grasping how Rippled operates as a cohesive system, where consensus, networking, transaction processing, and ledger management all work together seamlessly.
At its core, the Application class acts as a dependency injection container and service locator, providing every component access to the resources it needs while maintaining clean separation of concerns. Whether you're debugging a startup issue, optimizing system performance, or implementing a new feature, you'll inevitably interact with the Application layer.
The Application Class Architecture
Design Philosophy
The Application class follows several key design principles that make Rippled maintainable and extensible:
Single Point of Coordination: Instead of components directly creating and managing their dependencies, everything flows through the Application. This centralization makes it easy to understand system initialization and component relationships.
Dependency Injection: Components receive their dependencies through constructor parameters rather than creating them internally. This makes testing easier and dependencies explicit.
Interface-Based Design: Components depend on the abstract Application interface, which ApplicationImp implements, allowing different implementations (production, test, mock) to be substituted without changing dependent code (see the sketch after this list).
Lifetime Management: The Application controls the creation, initialization, and destruction of all major subsystems, ensuring proper startup/shutdown sequences.
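The interface-based and dependency-injection principles are easiest to see in miniature. The sketch below uses invented stand-in names (AppInterface, TestApp, ReportBuilder are illustrative, not rippled classes) to show how a component written against an abstract interface can be driven by a lightweight test double just as easily as by the production application:
// A minimal sketch (hypothetical names, not real rippled classes).
struct FakeLedgerService {};                       // stand-in for a real subsystem

struct AppInterface                                // plays the role of Application
{
    virtual ~AppInterface() = default;
    virtual FakeLedgerService& ledgerService() = 0;
};

struct TestApp : AppInterface                      // lightweight test double
{
    FakeLedgerService svc_;
    FakeLedgerService& ledgerService() override { return svc_; }
};

// A component written against the interface is testable with TestApp and
// runs unchanged against the production implementation.
struct ReportBuilder
{
    explicit ReportBuilder(AppInterface& app) : app_(app) {}
    void build() { auto& svc = app_.ledgerService(); (void)svc; /* ... */ }
    AppInterface& app_;
};
The real Application interface plays exactly this role for ApplicationImp and for test builds.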
Application Interface
The Application interface is defined in src/ripple/app/main/Application.h:
class Application : public beast::PropertyStream::Source
{
public:
    // Core services
    virtual Logs& logs() = 0;
    virtual Config const& config() const = 0;
    
    // Networking
    virtual Overlay& overlay() = 0;
    virtual JobQueue& getJobQueue() = 0;
    
    // Ledger management
    virtual LedgerMaster& getLedgerMaster() = 0;
    virtual OpenLedger& openLedger() = 0;
    
    // Transaction processing
    virtual NetworkOPs& getOPs() = 0;
    virtual TxQ& getTxQ() = 0;
    
    // Consensus
    virtual Validations& getValidations() = 0;
    
    // Storage
    virtual NodeStore::Database& getNodeStore() = 0;
    virtual RelationalDatabase& getRelationalDatabase() = 0;
    
    // RPC and subscriptions
    virtual RPCHandler& getRPCHandler() = 0;
    
    // Lifecycle
    virtual void setup() = 0;
    virtual void run() = 0;
    virtual void signalStop() = 0;
    
    // Utility
    virtual bool isShutdown() = 0;
    virtual LedgerIndex getMaxDisallowedLedger() = 0;
    
protected:
    Application() = default;
};
ApplicationImp Implementation
The concrete implementation ApplicationImp is in src/ripple/app/main/ApplicationImp.cpp. This class:
Implements all interface methods
Owns all major subsystem objects
Manages initialization order
Coordinates shutdown
Provides cross-cutting services
Key Member Variables:
class ApplicationImp : public Application
{
private:
    // Configuration and logging
    std::unique_ptr<Logs> logs_;
    std::unique_ptr<Config> config_;
    std::unique_ptr<TimeKeeper> timeKeeper_;
    
    // Core services
    std::unique_ptr<JobQueue> jobQueue_;
    std::unique_ptr<NodeStore::Database> nodeStore_;
    std::unique_ptr<RelationalDatabase> relationalDB_;
    
    // Networking
    std::unique_ptr<Overlay> overlay_;
    
    // Ledger management
    std::unique_ptr<LedgerMaster> ledgerMaster_;
    std::unique_ptr<OpenLedger> openLedger_;
    
    // Transaction processing
    std::unique_ptr<NetworkOPs> networkOPs_;
    std::unique_ptr<TxQ> txQ_;
    
    // Consensus
    std::unique_ptr<Validations> validations_;
    
    // RPC
    std::unique_ptr<RPCHandler> rpcHandler_;
    
    // State
    std::atomic<bool> isShutdown_{false};
    std::condition_variable cv_;
    std::mutex mutex_;
};
Initialization and Lifecycle
Startup Sequence
Understanding the startup sequence is crucial for debugging initialization issues and for grasping component dependencies.
Phase 1: Configuration Loading
// In main.cpp
auto config = std::make_unique<Config>();
if (!config->setup(configFile, quiet))
{
    // Configuration failed
    return -1;
}
What Happens:
Parse the rippled.cfg configuration file
Load validator list configuration
Set up logging configuration
Validate configuration parameters
Apply defaults for unspecified options
Configuration Sections:
[server] - Server ports and interfaces
[node_db] - NodeStore database configuration
[node_size] - Performance tuning parameters
[validation_seed] - Validator key configuration
[ips_fixed] - Fixed peer connections
[features] - Amendment votes
Phase 2: Application Construction
// Create the application instance
auto app = make_Application(
    std::move(config),
    std::move(logs),
    std::move(timeKeeper));
Constructor Sequence (ApplicationImp::ApplicationImp()):
ApplicationImp::ApplicationImp(
    std::unique_ptr<Config> config,
    std::unique_ptr<Logs> logs,
    std::unique_ptr<TimeKeeper> timeKeeper)
    : config_(std::move(config))
    , logs_(std::move(logs))
    , timeKeeper_(std::move(timeKeeper))
{
    // 1. Create basic services
    jobQueue_ = std::make_unique<JobQueue>(
        *logs_,
        config_->WORKERS);
    
    // 2. Initialize databases
    nodeStore_ = NodeStore::Manager::make(
        "NodeStore.main",
        scheduler,
        *logs_,
        config_->section("node_db"));
    
    relationalDB_ = makeRelationalDatabase(
        *config_,
        *logs_);
    
    // 3. Create ledger management
    ledgerMaster_ = std::make_unique<LedgerMaster>(
        *this,
        stopwatch(),
        *logs_);
    
    // 4. Create networking
    overlay_ = std::make_unique<OverlayImpl>(
        *this,
        config_->section("overlay"),
        *logs_);
    
    // 5. Create transaction processing
    networkOPs_ = std::make_unique<NetworkOPsImp>(
        *this,
        *logs_);
    
    txQ_ = std::make_unique<TxQ>(
        *config_,
        *logs_);
    
    // 6. Create consensus components
    validations_ = std::make_unique<Validations>(
        *this);
    
    // 7. Create RPC handler
    rpcHandler_ = std::make_unique<RPCHandler>(
        *this,
        *logs_);
    
    // Note: Order matters! Components may depend on earlier ones
}
Phase 3: Setup
app->setup();
What Happens (ApplicationImp::setup()):
void ApplicationImp::setup()
{
    // 1. Load existing ledger state
    auto initLedger = getLastFullLedger();
    
    // 2. Initialize ledger master
    ledgerMaster_->setLastFullLedger(initLedger);
    
    // 3. Start open ledger
    openLedger_->accept(
        initLedger,
        orderTx,
        consensusParms,
        {}); // Empty transaction set for new ledger
    
    // 4. Initialize overlay network
    overlay_->start();
    
    // 5. Start RPC servers
    rpcHandler_->setup();
    
    // 6. Additional subsystem initialization
    // ...
    
    JLOG(j_.info()) << "Application setup complete";
}
Phase 4: Run
app->run();
Main Event Loop (ApplicationImp::run()):
void ApplicationImp::run()
{
    JLOG(j_.info()) << "Application starting";
    
    // Start processing jobs
    jobQueue_->start();
    
    // Enter main loop
    {
        std::unique_lock<std::mutex> lock(mutex_);
        
        // Wait until shutdown signal
        while (!isShutdown_)
        {
            cv_.wait(lock);
        }
    }
    
    JLOG(j_.info()) << "Application stopping";
}
What Runs:
Job queue processes queued work
Overlay network handles peer connections
Consensus engine processes rounds
NetworkOPs coordinates operations
RPC handlers process client requests
All work happens in background threads managed by various subsystems. The main thread simply waits for a shutdown signal.
Phase 5: Shutdown
app->signalStop();
Graceful Shutdown (ApplicationImp::signalStop()):
void ApplicationImp::signalStop()
{
    JLOG(j_.info()) << "Shutdown requested";
    
    // 1. Set shutdown flag
    isShutdown_ = true;
    
    // 2. Stop accepting new work
    overlay_->stop();
    rpcHandler_->stop();
    
    // 3. Complete in-flight operations
    jobQueue_->finish();
    
    // 4. Stop subsystems (reverse order of creation)
    networkOPs_->stop();
    ledgerMaster_->stop();
    
    // 5. Close databases
    nodeStore_->close();
    relationalDB_->close();
    
    // 6. Wake up main thread
    cv_.notify_all();
    
    JLOG(j_.info()) << "Shutdown complete";
}
Shutdown Order: Components are stopped in reverse order of their creation to ensure dependencies are still available when each component shuts down.
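This mirrors a plain C++ guarantee: data members are destroyed in reverse declaration order. A minimal sketch with hypothetical stand-in types (NodeStoreLike, LedgerMasterLike, and AppSketch are not real rippled classes) shows why declaring less-dependent subsystems first keeps them alive while their dependents are torn down:
#include <memory>

struct NodeStoreLike {};                          // stand-in for NodeStore::Database

struct LedgerMasterLike                           // stand-in for LedgerMaster
{
    explicit LedgerMasterLike(NodeStoreLike& db) : db_(db) {}
    ~LedgerMasterLike() { /* may still read from db_ safely */ }
    NodeStoreLike& db_;
};

struct AppSketch
{
    // Declared (and created) first, so it is destroyed last.
    std::unique_ptr<NodeStoreLike> nodeStore_ = std::make_unique<NodeStoreLike>();
    // Declared second, destroyed first; its NodeStore reference stays valid.
    std::unique_ptr<LedgerMasterLike> ledgerMaster_ =
        std::make_unique<LedgerMasterLike>(*nodeStore_);
};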
Complete Lifecycle Diagram
Program Start
     ↓
Load Configuration (rippled.cfg)
     ↓
Create Application Instance
     ↓
Construct Subsystems
  • Databases
  • Networking
  • Ledger Management
  • Transaction Processing
  • Consensus
     ↓
Setup Phase
  • Load Last Ledger
  • Initialize Components
  • Start Network
     ↓
Run Phase (Main Loop)
  • Process Jobs
  • Handle Consensus
  • Process Transactions
  • Serve RPC Requests
     ↓
Shutdown Signal Received
     ↓
Graceful Shutdown
  • Stop Accepting Work
  • Complete In-Flight Operations
  • Stop Subsystems
  • Close Databases
     ↓
Program Exit
Subsystem Coordination
The Service Locator Pattern
The Application acts as a service locator, allowing any component to access any other component through the app reference:
class SomeComponent
{
public:
    SomeComponent(Application& app)
        : app_(app)
    {
        // Components store app reference
    }
    
    void doWork()
    {
        // Access other components through app
        auto& ledgerMaster = app_.getLedgerMaster();
        auto& overlay = app_.overlay();
        auto& jobs = app_.getJobQueue();
        
        // Use the components...
    }
    
private:
    Application& app_;
};
Major Subsystems
LedgerMaster
Purpose: Manages the chain of validated ledgers and coordinates ledger progression.
Key Responsibilities:
Track current validated ledger
Build candidate ledgers for consensus
Synchronize ledger history
Maintain ledger cache
Coordinate with consensus engine
Access: app.getLedgerMaster()
Important Methods:
// Get current validated ledger
std::shared_ptr<Ledger const> getValidatedLedger();
// Get closed ledger (not yet validated)
std::shared_ptr<Ledger const> getClosedLedger();
// Advance to new ledger
void advanceLedger();
// Fetch missing ledgers
void fetchLedger(LedgerHash const& hash);
NetworkOPs
Purpose: Coordinates network operations and transaction processing.
Key Responsibilities:
Process submitted transactions
Manage transaction queue
Coordinate consensus participation
Track network state
Publish ledger close events
Access: app.getOPs()
Important Methods:
// Submit transaction
void submitTransaction(std::shared_ptr<STTx const> const& tx);
// Process transaction
void processTransaction(
    std::shared_ptr<Transaction>& transaction,
    bool trusted,
    bool local);
// Get network state
OperatingMode getOperatingMode();
Overlay
Purpose: Manages peer-to-peer networking layer.
Key Responsibilities:
Peer discovery and connection
Message routing
Network topology maintenance
Bandwidth management
Access: app.overlay()
Important Methods:
// Send message to all peers
void broadcast(std::shared_ptr<Message> const& message);
// Get active peer count
std::size_t size() const;
// Connect to specific peer
void connect(std::string const& ip);
TxQ (Transaction Queue)
Purpose: Manages transaction queuing when network is busy.
Key Responsibilities:
Queue transactions during high load
Fee-based prioritization
Account-based queuing limits
Transaction expiration
Access: app.getTxQ()
Important Methods:
// Check if transaction can be added
std::pair<TER, bool> 
apply(Application& app, OpenView& view, STTx const& tx);
// Get queue status
Json::Value getJson();
NodeStore
Purpose: Persistent storage for ledger data.
Key Responsibilities:
Store ledger state nodes
Provide efficient retrieval
Cache frequently accessed data
Support different backend databases (RocksDB, NuDB)
Access: app.getNodeStore()
Important Methods:
// Store ledger node
void store(
    NodeObjectType type,
    Blob const& data,
    uint256 const& hash);
// Fetch ledger node
std::shared_ptr<NodeObject> 
fetch(uint256 const& hash);
RelationalDatabase
Purpose: SQL database for indexed data and historical queries.
Key Responsibilities:
Store transaction metadata
Maintain account transaction history
Support RPC queries (account_tx, tx)
Ledger header storage
Access: app.getRelationalDatabase()
Database Types:
SQLite (default, embedded)
PostgreSQL (production deployments)
Validations
Purpose: Manages validator signatures on ledger closes.
Key Responsibilities:
Collect validations from validators
Track validator key rotations (manifests)
Determine ledger validation quorum
Publish validation stream
Access: app.getValidations()
Important Methods:
// Add validation
void addValidation(STValidation const& val);
// Get validation for ledger
std::vector<std::shared_ptr<STValidation>>
getValidations(LedgerHash const& hash);
// Check if ledger is validated
bool hasQuorum(LedgerHash const& hash);
Job Queue System
Purpose and Design
The job queue is Rippled's work scheduling system. Instead of each subsystem creating its own threads, work is submitted as jobs to a centralized queue processed by a thread pool. This provides:
Centralized thread management: Easier to control thread count and CPU usage
Priority-based scheduling: Critical jobs processed before low-priority ones
Visibility: Easy to monitor what work is queued
Deadlock prevention: Structured concurrency patterns
Job Types
Jobs are categorized by type, which determines priority:
enum JobType
{
    // Special job types
    jtINVALID = -1,
    jtPACK,             // Job queue work pack
    
    // High priority - consensus critical
    jtPUBOLDLEDGER,     // Publish old ledger
    jtVALIDATION_ut,    // Process validation (untrusted)
    jtPROPOSAL_ut,      // Process consensus proposal
    jtLEDGER_DATA,      // Process ledger data
    
    // Medium priority
    jtTRANSACTION,      // Process transaction
    jtADVANCE,          // Advance ledger
    jtPUBLEDGER,        // Publish ledger
    jtTXN_DATA,         // Transaction data retrieval
    
    // Low priority
    jtUPDATE_PF,        // Update path finding
    jtCLIENT,           // Handle client request
    jtRPC,              // Process RPC
    jtTRANSACTION_l,    // Process transaction (low priority)
    
    // Lowest priority
    jtPEER,             // Peer message
    jtDISK,             // Disk operations
    jtADMIN,            // Administrative operations
};
Submitting Jobs
Components submit work to the job queue:
// Get job queue reference
JobQueue& jobs = app.getJobQueue();
// Submit a job
jobs.addJob(
    jtTRANSACTION,  // Job type
    "processTx",     // Job name (for logging)
    [this, tx](Job&) // Job function
    {
        // Do work here
        processTransaction(tx);
    });
Job Priority and Scheduling
Priority Levels:
Critical: Consensus, validations (must not be delayed)
High: Transaction processing, ledger advancement
Medium: RPC requests, client operations
Low: Maintenance, administrative tasks
Scheduling Algorithm (a simplified sketch follows this list):
Jobs sorted by priority and submission time
Worker threads pick highest priority job
Long-running jobs can be split into chunks
System monitors queue depth and adjusts behavior
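As a rough illustration of this ordering (a simplified sketch, not the actual JobQueue implementation; PendingJob, ByPriorityThenAge, and runOne are invented names), queued work can be modeled as a heap keyed by priority first and submission time second:
#include <chrono>
#include <functional>
#include <queue>
#include <vector>

struct PendingJob
{
    int priority;                                  // higher value runs first
    std::chrono::steady_clock::time_point queued;  // earlier submission wins ties
    std::function<void()> work;
};

struct ByPriorityThenAge
{
    bool operator()(PendingJob const& a, PendingJob const& b) const
    {
        if (a.priority != b.priority)
            return a.priority < b.priority;        // lower priority sinks toward the bottom
        return a.queued > b.queued;                // within a level, older jobs come out first
    }
};

using JobHeap =
    std::priority_queue<PendingJob, std::vector<PendingJob>, ByPriorityThenAge>;

// A worker thread would repeatedly pop the highest-priority job and run it.
inline bool runOne(JobHeap& heap)
{
    if (heap.empty())
        return false;
    PendingJob job = heap.top();
    heap.pop();
    job.work();
    return true;
}
The real scheduler layers more on top of this (per-type job limits, long-running jobs split into chunks, queue-depth monitoring), but the ordering rule is the core idea.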
Job Queue Configuration
In rippled.cfg:
[node_size]
# Affects worker thread count
tiny      # 1 thread
small     # 2 threads  
medium    # 4 threads (default)
large     # 8 threads
huge      # 16 threads
Thread count is also influenced by CPU core count:
// Typically: max(2, std::thread::hardware_concurrency() - 1)
Configuration Management
Configuration File Structure
The rippled.cfg file controls all aspects of server behavior. The Application loads and provides access to this configuration.
Example Configuration
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[node_size]
medium
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
open_files=512
cache_mb=256
filter_bits=12
compression=1
[database_path]
/var/lib/rippled/db
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
[ips_fixed]
r.ripple.com 51235
[validators_file]
validators.txt
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[features]
# Vote for or against amendments
# AmendmentName
Accessing Configuration
Components access configuration through the Application:
void SomeComponent::configure()
{
    // Get config reference
    Config const& config = app_.config();
    
    // Access specific sections
    auto const& nodeDB = config.section("node_db");
    auto const type = get<std::string>(nodeDB, "type");
    auto const path = get<std::string>(nodeDB, "path");
    
    // Access node size
    auto nodeSize = config.NODE_SIZE;
    
    // Access ports
    for (auto const& port : config.ports)
    {
        // Configure port...
    }
}
Runtime Configuration
Some settings can be adjusted at runtime via RPC:
# Change log verbosity
rippled log_level partition severity
# Connect to peer
rippled connect ip:port
# Get server info
rippled server_info
Component Interaction Patterns
Pattern 1: Direct Method Calls
The most common pattern: components call each other's methods directly:
void NetworkOPs::submitTransaction(STTx const& tx)
{
    // Validate transaction
    auto result = Transactor::preflight(tx);
    if (!isTesSuccess(result))
        return;
    
    // Apply to open ledger
    auto& openLedger = app_.openLedger();
    openLedger.modify([&](OpenView& view)
    {
        Transactor::apply(app_, view, tx);
    });
    
    // Broadcast to network
    auto& overlay = app_.overlay();
    overlay.broadcast(makeTransactionMessage(tx));
}
Pattern 2: Job Queue for Asynchronous Work
For work that should not block the caller:
void LedgerMaster::fetchLedger(LedgerHash const& hash)
{
    // Submit fetch job
    app_.getJobQueue().addJob(
        jtLEDGER_DATA,
        "fetchLedger",
        [this, hash](Job&)
        {
            // Request from peers
            app_.overlay().sendRequest(hash);
            
            // Wait for response
            // Process received data
            // ...
        });
}
Pattern 3: Event Publication
Components publish events that others subscribe to:
// Publisher (LedgerMaster)
void LedgerMaster::newLedgerValidated()
{
    // Notify subscribers
    for (auto& subscriber : subscribers_)
    {
        subscriber->onLedgerValidated(currentLedger_);
    }
}
// Subscriber (NetworkOPs)
void NetworkOPs::onLedgerValidated(
    std::shared_ptr<Ledger const> const& ledger)
{
    // React to new ledger
    updateSubscribers(ledger);
    processQueuedTransactions();
}
Pattern 4: Callback Registration
Components register callbacks for specific events:
// Register callback
app_.getLedgerMaster().onConsensusReached(
    [this](std::shared_ptr<Ledger const> const& ledger)
    {
        handleConsensusLedger(ledger);
    });
Codebase Deep Dive
Key Files and Directories
Application Core:
src/ripple/app/main/Application.h - Application interface
src/ripple/app/main/ApplicationImp.h - Implementation header
src/ripple/app/main/ApplicationImp.cpp - Implementation
src/ripple/app/main/main.cpp - Entry point, creates Application
Job Queue:
src/ripple/core/JobQueue.h - Job queue interface
src/ripple/core/impl/JobQueue.cpp - Implementation
src/ripple/core/Job.h - Job definition
Configuration:
src/ripple/core/Config.h - Config class
src/ripple/core/ConfigSections.h - Section definitions
Subsystem Implementations:
src/ripple/app/ledger/LedgerMaster.h
src/ripple/app/misc/NetworkOPs.h
src/ripple/overlay/Overlay.h
src/ripple/app/tx/TxQ.h
Code Navigation Tips
Finding Application Creation
Start in main.cpp:
int main(int argc, char** argv)
{
    // Parse command line
    // Load configuration
    // Create logs
    
    // Create application
    auto app = make_Application(
        std::move(config),
        std::move(logs),
        std::move(timeKeeper));
    
    // Setup and run
    app->setup();
    app->run();
    
    return 0;
}
Tracing Component Access
Follow how components access each other:
// In any component
void MyComponent::work()
{
    // Access through app_
    auto& ledgerMaster = app_.getLedgerMaster();  // → ApplicationImp::getLedgerMaster()
                                                   // → return *ledgerMaster_;
}
Understanding Job Submission
Find job submissions:
# Search for addJob calls
grep -r "addJob" src/ripple/app/Example:
app_.getJobQueue().addJob(jtTRANSACTION, "processTx", [&](Job&) {
    // Job code
});
Hands-On Exercise
Exercise: Trace Application Startup and Analyze Job Queue
Objective: Understand the application initialization sequence and monitor job queue activity.
Part 1: Code Exploration
Step 1: Navigate to application source
cd rippled/src/ripple/app/main/
Step 2: Read the main entry point
Open main.cpp and trace:
Command-line argument parsing
Configuration loading
Application creation
Setup call
Run call
Step 3: Follow ApplicationImp construction
Open ApplicationImp.cpp and identify:
The order subsystems are created (constructor)
Dependencies between components
What happens in setup()
What happens in run()
Questions to Answer:
Why is NodeStore created before LedgerMaster?
What does LedgerMaster need from Application?
Which components are created first and why?
Part 2: Monitor Job Queue Activity
Step 1: Enable detailed job queue logging
Edit rippled.cfg:
[rpc_startup]
{ "command": "log_level", "partition": "JobQueue", "severity": "trace" }Step 2: Start rippled in standalone mode
rippled --conf=rippled.cfg --standalone
Step 3: Watch the startup logs
Observe jobs during startup:
What job types execute first?
How many worker threads are created?
What's the initial job queue depth?
Step 4: Submit transactions and observe
# Submit a payment
rippled submit '{
  "TransactionType": "Payment",
  "Account": "...",
  "Destination": "...",
  "Amount": "1000000"
}'
Watch the logs for:
jtTRANSACTION jobs being queued
Job processing time
Queue depth changes
Step 5: Manually close a ledger
rippled ledger_accept
Observe jobs related to ledger close:
jtADVANCE - Advance to next ledger
jtPUBLEDGER - Publish ledger
jtUPDATE_PF - Update path finding
Part 3: Add Custom Logging
Step 1: Modify ApplicationImp.cpp
Add logging to track component initialization:
ApplicationImp::ApplicationImp(/* ... */)
{
    JLOG(j_.info()) << "Creating JobQueue...";
    jobQueue_ = std::make_unique<JobQueue>(/* ... */);
    JLOG(j_.info()) << "JobQueue created";
    
    JLOG(j_.info()) << "Creating NodeStore...";
    nodeStore_ = NodeStore::Manager::make(/* ... */);
    JLOG(j_.info()) << "NodeStore created";
    
    // Add similar logs for other components
}
Step 2: Recompile
cd rippled/build
cmake --build . --target rippled
Step 3: Run and observe
./rippled --conf=rippled.cfg --standalone
You should see your custom log messages showing component creation order.
Analysis Questions
Answer these based on your exploration:
What's the first subsystem created?
Why does it need to be first?
How does the job queue decide which job to process next?
What factors influence priority?
What happens if a job throws an exception?
Find the exception handling code
How many jobs are queued during a typical ledger close?
Count from your logs
What's the relationship between Application and ApplicationImp?
Why use an interface?
How would you add a new subsystem?
What's the process?
Where would you add it?
Key Takeaways
Core Concepts
✅ Central Orchestration: Application class coordinates all subsystems and manages their lifecycle
✅ Dependency Injection: Components receive dependencies through Application reference, not by creating them
✅ Service Locator: Application provides access to all major services (getLedgerMaster(), overlay(), etc.)
✅ Initialization Order: Subsystems are created in dependency order during construction
✅ Job Queue: Centralized work scheduling with priority-based execution
✅ Configuration: All server behavior controlled through rippled.cfg
Development Skills
✅ Codebase Location: Application implementation in src/ripple/app/main/
✅ Adding Components: Create in constructor, expose through interface method
✅ Job Submission: Use app.getJobQueue().addJob() for asynchronous work
✅ Debugging Startup: Add logging in ApplicationImp constructor to trace initialization
✅ Configuration Access: Use app.config() to read configuration values
Common Patterns and Best Practices
Pattern 1: Accessing Subsystems
Always access subsystems through the Application:
// Good - through Application
void doWork(Application& app)
{
    auto& ledgerMaster = app.getLedgerMaster();
    ledgerMaster.getCurrentLedger();
}
// Bad - storing subsystem reference
class BadComponent
{
    LedgerMaster& ledgerMaster_;  // Don't do this
    
    BadComponent(LedgerMaster& lm) 
        : ledgerMaster_(lm) {}  // Tight coupling
};
// Good - storing Application reference
class GoodComponent
{
    Application& app_;
    
    GoodComponent(Application& app) 
        : app_(app) {}  // Loose coupling
        
    void work()
    {
        // Access when needed
        auto& lm = app_.getLedgerMaster();
    }
};
Pattern 2: Asynchronous Work
Use job queue for work that shouldn't block:
// Don't block the caller
void expensiveOperation(Application& app)
{
    app.getJobQueue().addJob(
        jtCLIENT,
        "expensiveWork",
        [&app](Job&)
        {
            // Long-running work here
            performExpensiveCalculation();
            
            // Access other subsystems as needed
            app.getLedgerMaster().doSomething();
        });
}
Pattern 3: Lifecycle Management
Let Application manage component lifetime:
// In ApplicationImp constructor
myComponent_ = std::make_unique<MyComponent>(*this);
// In ApplicationImp::setup()
myComponent_->initialize();
// In ApplicationImp::signalStop()
myComponent_->shutdown();
// Destructor automatically cleans up
// (unique_ptr handles deletion)
Additional Resources
Official Documentation
XRP Ledger Dev Portal: xrpl.org/docs
Rippled Setup: xrpl.org/install-rippled
Configuration Reference: xrpl.org/rippled-server-configuration
Codebase References
src/ripple/app/main/ - Application layer implementation
src/ripple/core/JobQueue.h - Job queue system
src/ripple/core/Config.h - Configuration management
src/ripple/app/main/main.cpp - Program entry point
Related Topics
Transactors - How transactions are processed
Consensus Engine - How consensus integrates with Application
Codebase Navigation - Finding your way around the code