Anatomy of Rippled: NodeStore
Overview
The NodeStore serves as the persistent storage foundation for the XRP Ledger, providing a critical abstraction layer between the SHAMap data structures and the underlying database systems. As discussed in the SHAMap documentation, the SHAMap maintains the blockchain state through a Merkle-Patricia trie structure—the NodeStore is what makes this state persistent, recoverable, and accessible across application restarts and network synchronization.
Core Purpose:
Persist all ledger data as NodeObjects between application launches
Provide consistent storage interface independent of backend database choice
Manage efficient caching to minimize database I/O
Enable data retrieval by cryptographic hash
Support the complete lifecycle of ledger data from creation to archival
Architectural Position:
The NodeStore sits at a critical junction in the XRPL architecture:
SHAMap (In-Memory State)
    ↓
NodeStore Interface
    ↓
Cache Layer (TaggedCache)
    ↓
Backend Abstraction
    ↓
Database Implementation (RocksDB, NuDB, etc.)
Why NodeStore Matters:
Without the NodeStore, every aspect of XRPL operation would fail:
Consensus Protocol: Cannot store or retrieve validated ledgers
Transaction Processing: Cannot query current state or persist new transactions
Peer Synchronization: No source of truth for syncing with the network
API Services: Cannot serve historical data to clients
The NodeStore transforms the SHAMap's cryptographically-verified state into durable, disk-backed storage while maintaining the performance characteristics necessary for a high-throughput blockchain.
NodeObject: The Fundamental Storage Unit
Structure
A NodeObject is the atomic unit of storage in XRPL, encapsulating everything needed to persist and retrieve a piece of ledger data:
Core Components:
Type (mType): Enumeration identifying the content category
Hash (mHash): 256-bit unique identifier (the lookup key)
Data (mData): Variable-length binary payload containing serialized content
Key Characteristics:
Immutable once created
Uniquely identified by hash
Self-describing through type field
Optimized for both storage and network transmission
NodeObject Types
The NodeStore handles four distinct categories of blockchain data:
1. Ledger Headers (hotLEDGER)
Complete metadata for each ledger version
Includes sequence numbers, timestamps, parent references
Contains consensus information and validation data
Essential for ledger chain verification
2. Account State Nodes (hotACCOUNT_NODE)
Nodes from the account state SHAMap tree
Represents current account information: balances, settings, owned objects
Forms the leaves and inner nodes of the state tree
Updated with each transaction that affects accounts
3. Transaction Tree Nodes (hotTRANSACTION_NODE)
Nodes from the transaction SHAMap tree
Structural elements organizing transaction history
Enables efficient transaction lookups and proofs
Forms both leaves (containing transactions) and inner nodes (organizing structure)
4. Unknown Type (hotUNKNOWN)
Fallback category for unrecognized types
Provides forward compatibility
Allows graceful handling of future extensions
Type Enum Values:
hotLEDGER = 1: Ledger headers
hotACCOUNT_NODE = 3: Account state tree nodes
hotTRANSACTION_NODE = 4: Transaction tree nodes
hotUNKNOWN = 0: Unrecognized types
hotDUMMY = 512: Cache marker for missing entries
Note: Type value 2 is intentionally unused in the current implementation.
Creation and Lifecycle
Factory Pattern:
NodeObjects are created through a static factory method: createObject(type, data, hash)
All instances are heap-allocated and managed by std::shared_ptr
Once created, NodeObjects are immutable
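The sketch below illustrates the shape described here: the type enumeration with the values listed earlier, an immutable payload, and a static factory returning a std::shared_ptr. Member names mirror the description (mType, mHash, mData), but Blob and uint256 are simplified stand-ins and the code is an illustration, not rippled's actual declaration.
```cpp
#include <array>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

using Blob = std::vector<std::uint8_t>;        // stand-in for rippled's Blob
using uint256 = std::array<std::uint8_t, 32>;  // stand-in for rippled's uint256

enum NodeObjectType : std::uint32_t {
    hotUNKNOWN = 0,           // unrecognized types (forward compatibility)
    hotLEDGER = 1,            // ledger headers
    // value 2 intentionally unused
    hotACCOUNT_NODE = 3,      // account state tree nodes
    hotTRANSACTION_NODE = 4,  // transaction tree nodes
    hotDUMMY = 512            // cache marker for known-missing entries
};

class NodeObject {
public:
    // Static factory: the only way to create an instance.
    static std::shared_ptr<NodeObject>
    createObject(NodeObjectType type, Blob&& data, uint256 const& hash)
    {
        return std::shared_ptr<NodeObject>(
            new NodeObject(type, std::move(data), hash));
    }

    NodeObjectType getType() const { return mType; }
    uint256 const& getHash() const { return mHash; }
    Blob const& getData() const { return mData; }

private:
    NodeObject(NodeObjectType type, Blob&& data, uint256 const& hash)
        : mType(type), mHash(hash), mData(std::move(data))
    {
    }

    // Set once at construction; the object is immutable thereafter.
    NodeObjectType const mType;
    uint256 const mHash;
    Blob const mData;
};
```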
Lifecycle Stages:
Creation: Generated during transaction processing or ledger validation
Storage: Persisted to database through Backend interface
Caching: Kept in memory for fast access to frequently-used data
Retrieval: Fetched by hash when needed by SHAMap or other components
Archival: Moved to archive storage or deleted based on retention policy
Integration with SHAMap
The relationship between NodeStore and SHAMap is fundamental to understanding XRPL's architecture. They form a complementary pair where SHAMap provides the logical structure and cryptographic verification, while NodeStore provides the physical persistence.
Conceptual Relationship
SHAMap's Role:
Maintains the Merkle-Patricia trie structure in memory
Provides O(1) comparison through root hash verification
Enables efficient synchronization through hash-based difference detection
Implements cryptographic proofs of data inclusion
NodeStore's Role:
Persists SHAMap nodes to survive application restarts
Provides on-demand loading of nodes not currently in memory
Manages efficient caching to minimize I/O overhead
Handles backend database complexity and rotation
The Division of Labor:
SHAMap: Structure + Verification + Algorithms
NodeStore: Persistence + Caching + Storage Management
Storage of SHAMap Nodes
When a SHAMap needs to persist its state:
Inner Nodes:
SHAMapInnerNode is serialized (compressed or full format)
NodeObject created with type hotACCOUNT_NODE or hotTRANSACTION_NODE
Hash computed from serialized content
NodeStore persists the NodeObject to backend database
NodeObject cached in memory for future access
Leaf Nodes:
SHAMapLeafNode and its SHAMapItem are serialized
Type prefix added based on leaf specialization
NodeObject created with appropriate type
Persisted and cached through NodeStore
Key Insight: Each node in a SHAMap—whether inner node organizing structure or leaf node containing data—becomes exactly one NodeObject in the NodeStore.
Retrieval Flow
When a SHAMap needs to access a node not currently in memory:
Request Path:
SHAMap identifies needed node by hash
Calls NodeStore's fetchNodeObject(hash)
NodeStore checks TaggedCache first (L1 cache)
If cache miss, NodeStore queries backend database
Backend retrieves serialized data by hash key
NodeObject deserialized and validated
NodeObject cached in TaggedCache
NodeObject returned to SHAMap
SHAMap deserializes into appropriate node type (inner or leaf)
Performance Optimization:
The cache layer is critical to performance:
Cache Hit: Returns in microseconds from memory
Cache Miss: Requires database query (milliseconds)
Typical Hit Rate: 90%+ for well-tuned systems
Asynchronous Fetching:
For bulk operations, NodeStore supports asynchronous fetching:
Multiple nodes requested in parallel
Background threads handle database queries
Reduces latency for operations needing many nodes
Used during synchronization and proof generation
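The sketch below shows one way bulk asynchronous fetching can be expressed: issue several hash lookups in parallel and collect the results. std::async stands in for rippled's background read threads, and fetchOne is a hypothetical blocking single-object fetch supplied by the caller.
```cpp
#include <functional>
#include <future>
#include <memory>
#include <vector>

using Hash = std::vector<unsigned char>;                     // simplified 256-bit key
using ObjPtr = std::shared_ptr<std::vector<unsigned char>>;  // simplified NodeObject

std::vector<ObjPtr> fetchMany(
    std::vector<Hash> const& hashes,
    std::function<ObjPtr(Hash const&)> fetchOne)  // blocking single fetch
{
    std::vector<std::future<ObjPtr>> pending;
    pending.reserve(hashes.size());
    for (auto const& h : hashes)                  // issue requests in parallel
        pending.push_back(std::async(std::launch::async, fetchOne, h));

    std::vector<ObjPtr> results;
    results.reserve(pending.size());
    for (auto& f : pending)                       // gather as each completes
        results.push_back(f.get());
    return results;
}
```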
Two Trees, One Storage
XRPL maintains two distinct SHAMaps, both backed by the same NodeStore:
Account State Tree:
Root hash represents current state of all accounts
Each modification creates new NodeObjects for changed paths
Old NodeObjects retained for historical ledgers
NodeStore contains both current and historical state nodes
Transaction Tree:
Root hash represents complete transaction history
Ensures unique, verifiable state transition path
Prevents multiple histories producing same state
NodeStore preserves full transaction provenance
Shared Storage Benefits:
Single database for all ledger data
Unified caching strategy
Consistent backup and recovery procedures
Simplified operational management
NodeObject Type Distinction:
While both trees share the NodeStore:
Account state nodes: hotACCOUNT_NODE
Transaction nodes: hotTRANSACTION_NODE
Type field enables logical separation within physical storage
Supports different retention policies if needed
Synchronization Scenario
The interplay between SHAMap and NodeStore is most visible during node synchronization:
Initial State:
Syncing node has only the root hash of a remote ledger
Needs to reconstruct entire SHAMap structure
Synchronization Process:
Request Root: Fetch NodeObject for root hash from peers
Store Root: Persist to local NodeStore, cache in memory
Deserialize: Convert NodeObject to SHAMapInnerNode
Identify Missing Children: Root has 16 potential children
Request Children: For each present child, fetch NodeObject by hash
Recursive Process: Each fetched inner node repeats the process
Reach Leaves: Eventually fetch leaf NodeObjects
Completion: When all NodeObjects fetched, SHAMap is complete
NodeStore's Critical Role:
Prevents duplicate fetches (cache hit on already-retrieved nodes)
Persists each node immediately (survive crashes during sync)
Manages concurrent requests efficiently (background threads)
Marks missing nodes with dummy objects (avoid repeated failed requests)
SHAMap's Critical Role:
Verifies each node's hash before accepting
Detects missing nodes through hash references
Constructs logical tree structure from flat NodeObject storage
Validates root hash matches expected value when complete
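A minimal sketch of that synchronization loop follows. The peer fetch, local store, child enumeration, and already-have check are placeholder callables, not rippled APIs; the point is the breadth-first walk from the root hash with duplicate-fetch suppression via the local NodeStore.
```cpp
#include <deque>
#include <functional>
#include <vector>

struct Hash256 { unsigned char bytes[32]; };
using Blob = std::vector<unsigned char>;

using FetchFromPeers = std::function<Blob(Hash256 const&)>;
using StoreLocally   = std::function<void(Hash256 const&, Blob const&)>;
using ChildHashesOf  = std::function<std::vector<Hash256>(Blob const&)>;
using AlreadyHave    = std::function<bool(Hash256 const&)>;

void syncFromRoot(
    Hash256 const& rootHash,
    FetchFromPeers fetchFromPeers,  // ask the network for a node by hash
    StoreLocally   storeLocally,    // persist and cache through the NodeStore
    ChildHashesOf  childHashesOf,   // decode an inner node, list child hashes
    AlreadyHave    alreadyHave)     // cache/backend hit means skip re-fetch
{
    std::deque<Hash256> work{rootHash};
    while (!work.empty())
    {
        Hash256 const hash = work.front();
        work.pop_front();

        if (alreadyHave(hash))                   // duplicate-fetch prevention
            continue;

        Blob const blob = fetchFromPeers(hash);  // request NodeObject from peers
        storeLocally(hash, blob);                // persist immediately (crash-safe)

        for (auto const& child : childHashesOf(blob))  // up to 16 children
            work.push_back(child);                     // leaves return none
    }
    // When the queue drains, every node reachable from rootHash is stored and
    // the SHAMap can be rebuilt and verified against rootHash.
}
```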
Backend Abstraction
One of NodeStore's key design principles is complete abstraction from the underlying database technology. This enables XRPL to support multiple storage backends without impacting upper layers.
Backend Interface
The Backend class defines a minimal, essential interface:
Core Operations:
store(NodeObject): Persist a single object
fetch(hash): Retrieve an object by hash
storeBatch(batch): Persist multiple objects atomically
fetchBatch(hashes): Retrieve multiple objects efficiently
Lifecycle Operations:
open(): Initialize database connection
close(): Cleanly shut down database
fdRequired(): Report file descriptor requirements for resource planning
Status Reporting:
Return status codes: ok, notFound, dataCorrupt, backendError
Enable appropriate error handling at higher layers
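A condensed C++ sketch of this interface is shown below. Method names follow the lists above; the actual Backend class in rippled has more methods and somewhat different parameter types, so treat this as illustrative only.
```cpp
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

using uint256 = std::array<std::uint8_t, 32>;  // stand-in for the 256-bit hash key
class NodeObject;                              // as sketched earlier

enum class Status { ok, notFound, dataCorrupt, backendError };

class Backend {
public:
    virtual ~Backend() = default;

    // Lifecycle
    virtual void open() = 0;             // initialize the database connection
    virtual void close() = 0;            // cleanly shut down the database
    virtual int fdRequired() const = 0;  // file descriptors this backend needs

    // Core operations, keyed by the NodeObject hash
    virtual Status fetch(
        uint256 const& hash, std::shared_ptr<NodeObject>* result) = 0;
    virtual void store(std::shared_ptr<NodeObject> const& object) = 0;
    virtual void storeBatch(
        std::vector<std::shared_ptr<NodeObject>> const& batch) = 0;
    virtual Status fetchBatch(
        std::vector<uint256> const& hashes,
        std::vector<std::shared_ptr<NodeObject>>& results) = 0;
};
```
A concrete backend (RocksDB, NuDB, in-memory) implements these virtuals; everything above the Backend interface is unaware of which one is in use.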
Supported Backends
XRPL supports multiple database backends, allowing operators to choose based on their specific requirements:
RocksDB (Preferred):
Modern key-value store from Facebook
Excellent performance for XRPL workloads
Built-in compression support
Active maintenance and optimization
NuDB (Preferred):
Purpose-built for XRPL by Ripple
Append-only design optimized for SSD
Very high write throughput
Efficient space utilization
Legacy Options:
LevelDB, HyperLevelDB: Earlier generation key-value stores (deprecated)
SQLite: Relational database option for specific use cases
Testing Backends:
Memory: In-memory storage for testing
None: No-op backend for specific test scenarios
Backend Selection:
The choice of backend affects:
Write throughput and latency
Read performance characteristics
Storage space efficiency
Memory usage patterns
File descriptor requirements
However, the abstraction ensures that application logic remains identical regardless of backend choice.
Data Encoding Format
To enable backend independence, NodeStore uses a standardized encoding format:
Storage Format:
Bytes 0-7: Unused, reserved (set to zero)
Byte 8: Type, the NodeObjectType enumeration value
Bytes 9 to end: Data, the serialized object payload
Key as Hash:
The 256-bit hash serves as the database key
Separate from the encoded blob
Backend stores key → encoded blob mapping
Encoding Process:
Serialize NodeObject data (SHAMap node content)
Prepend type byte
Add reserved padding
Use NodeObject hash as database key
Store key-value pair in backend
Decoding Process:
Fetch blob from backend using hash key
Extract type from byte 8
Validate type is recognized
Extract data payload from bytes 9+
Construct NodeObject with type, hash, and data
Benefits:
Backend-agnostic format
Self-describing data (type embedded)
Efficient serialization (minimal overhead)
Forward compatibility (unknown types gracefully handled)
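The encode and decode steps above can be sketched directly from the byte layout: eight reserved bytes, one type byte, then the payload, with the hash kept separately as the database key. This mirrors the layout described above rather than rippled's exact codec.
```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

using Blob = std::vector<std::uint8_t>;

// Bytes 0-7 reserved (zero), byte 8 = type, bytes 9+ = payload.
Blob encodeNodeObject(std::uint8_t type, Blob const& payload)
{
    Blob out(9 + payload.size());                 // zero-initialized prefix
    out[8] = type;                                // byte 8: type
    std::copy(payload.begin(), payload.end(), out.begin() + 9);  // bytes 9+
    return out;
}

// Returns {type, payload}, or nullopt if the blob is too short to be valid.
std::optional<std::pair<std::uint8_t, Blob>> decodeNodeObject(Blob const& blob)
{
    if (blob.size() < 9)
        return std::nullopt;
    std::uint8_t const type = blob[8];
    Blob payload(blob.begin() + 9, blob.end());
    return std::make_pair(type, std::move(payload));
}
```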
Cache Layer
The cache layer is NodeStore's first line of defense against database I/O, dramatically improving performance for hot data.
TaggedCache Architecture
Purpose:
Keep frequently accessed NodeObjects in memory
Minimize expensive database queries
Accelerate SHAMap operations
Smooth performance spikes
Cache Structure:
Key: NodeObject hash (uint256)
Value: shared_ptr to NodeObject
Thread-safe concurrent access
LRU (Least Recently Used) eviction policy
Cache Tiers:
The NodeStore implements a tiered caching strategy:
L1: TaggedCache (In-Memory)
Recently and frequently accessed NodeObjects
Configurable size limit (number of objects)
Configurable age limit (time-based eviction)
First check for all fetch operations
L2: Backend Database (Persistent)
Complete set of all stored NodeObjects
Accessed on cache miss
Significantly slower than L1 cache
Durable across restarts
Cache Management
Insertion:
Every fetched NodeObject is cached after backend retrieval
Newly stored NodeObjects are cached immediately
Dummy objects (hotDUMMY) mark failed fetches to avoid repeated attempts
Eviction:
Two eviction triggers:
Size-Based: When cache exceeds configured capacity
LRU algorithm selects victim
Least recently accessed objects evicted first
Maintains working set in memory
Age-Based: Objects older than configured threshold
Periodic sweep removes stale entries
Prevents cache bloat from one-time access patterns
Configurable age limit (typically minutes)
Configuration:
cache_size: Maximum number of cached objects
cache_age: Maximum age in minutes before eviction
Dummy Objects:
Special marker objects prevent repeated failed lookups:
Type: hotDUMMY
Represents "known to be missing"
Cached after failed fetch attempts
Prevents network requests for non-existent data
Particularly important during synchronization
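A simplified sketch of this behavior is shown below: a mutex-guarded, hash-keyed map where a null entry plays the role of the hotDUMMY marker. The real TaggedCache also tracks access times for LRU and age-based eviction, which this sketch omits.
```cpp
#include <array>
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <vector>

using Hash = std::array<std::uint8_t, 32>;
using Blob = std::vector<std::uint8_t>;

class NodeCacheSketch {
public:
    enum class Result { hit, knownMissing, miss };

    Result lookup(Hash const& hash, std::shared_ptr<Blob>& out)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto const it = map_.find(hash);
        if (it == map_.end())
            return Result::miss;          // not cached: caller queries backend
        if (!it->second)
            return Result::knownMissing;  // dummy entry: do not re-query
        out = it->second;
        return Result::hit;               // served from memory (microseconds)
    }

    // Cache a fetched object, or a null pointer as the "dummy" marker after a
    // failed fetch.
    void insert(Hash const& hash, std::shared_ptr<Blob> obj)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        map_[hash] = std::move(obj);
    }

private:
    std::mutex mutex_;
    std::map<Hash, std::shared_ptr<Blob>> map_;
};
```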
Cache Performance Impact
Metrics:
The impact of caching is dramatic:
Cache Hit Latency: ~1-10 microseconds
Cache Miss Latency: ~1-10 milliseconds (1000x slower)
Typical Hit Rate: 90-95% in production systems
Optimization Strategies:
Predictive Loading:
During synchronization, prefetch child nodes when parent is accessed
Batch fetch operations populate cache efficiently
Anticipate access patterns based on SHAMap traversal
Working Set Management:
Recent ledgers kept in cache
Historical ledgers evicted naturally through LRU
Optimize cache size for ledger close cycle (3-5 seconds typically)
Performance Bottlenecks:
Without effective caching:
Every SHAMap traversal requires database queries
Synchronization becomes I/O bound
Transaction processing slowed by state lookups
API queries suffer high latency
Proper cache tuning is critical to XRPL performance.
Database Abstraction and Implementations
The Database class sits above the Backend interface, providing higher-level operations and managing the full NodeStore lifecycle.
Database Interface
Core Responsibilities:
Orchestrate storage and retrieval operations
Manage cache layer interaction
Coordinate background threads for async operations
Track and report operational metrics
Handle batch operations efficiently
Key Methods:
Synchronous Operations:
fetchNodeObject(hash): Retrieve single object, checking cache first
store(NodeObject): Persist single object, update cache
storeBatch(batch): Persist multiple objects atomically
Asynchronous Operations:
asyncFetch(hash, callback): Initiate background fetch
Background thread pool executes queries
Callbacks invoked when results available
Management Operations:
import(otherDatabase): Bulk import from another database
for_each(callback): Iterate all stored objects
getCountsJson(): Retrieve operational statistics
DatabaseNodeImp: Single Backend Database
The standard implementation for most XRPL nodes:
Architecture:
Single backend for all storage
Integrated TaggedCache for performance
Background thread pool for async fetches
Operation:
Store Flow:
Receive NodeObject from upper layer
Cache immediately in TaggedCache
Encode NodeObject to wire format
Call backend's store() method
Update metrics (bytes written, write count)
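A sketch of this store path, with the cache, encoder, and backend modeled as caller-supplied callables (hypothetical helpers, not the actual DatabaseNodeImp code):
```cpp
#include <cstdint>
#include <functional>
#include <vector>

using Blob = std::vector<std::uint8_t>;

struct StoreHooks
{
    std::function<void(Blob const& key, Blob const& obj)> cacheInsert;  // L1 cache
    std::function<Blob(Blob const& obj)> encode;  // reserved bytes + type + data
    std::function<void(Blob const& key, Blob const& encoded)> backendStore;
};

struct StoreMetrics
{
    std::uint64_t nodeWrites = 0;
    std::uint64_t nodeWrittenBytes = 0;
};

void storeNodeObject(
    Blob const& hashKey, Blob const& object, StoreHooks& hooks, StoreMetrics& m)
{
    hooks.cacheInsert(hashKey, object);         // 1. cache immediately
    Blob const encoded = hooks.encode(object);  // 2. encode to storage format
    hooks.backendStore(hashKey, encoded);       // 3. persist via the backend
    ++m.nodeWrites;                             // 4. update metrics
    m.nodeWrittenBytes += encoded.size();
}
```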
Fetch Flow:
Check TaggedCache for hash
If cache hit: return immediately (metrics updated)
If cache miss: query backend
Decode retrieved blob to NodeObject
Cache result (or dummy if not found)
Return to caller
Update metrics (bytes read, miss count, latency)
Thread Safety:
All cache operations protected by locks
Backend operations may have internal locking
Metrics updated atomically
DatabaseRotatingImp: Advanced Rotation and Archival
For production systems requiring online deletion and continuous operation:
Architecture:
Writable Backend: Current active database
Archive Backend: Previous database, read-only
Rotation Mechanism: Seamless switchover between databases
Purpose:
Addresses the fundamental problem of unbounded growth:
Without deletion, NodeStore grows indefinitely
Eventually exhausts disk space
Manual pruning requires downtime
Rotation enables online deletion without interruption
Rotation Process:
Trigger: Based on ledger count or operator command
Archive Transition: Current writable → archive
New Writable: Fresh backend created and activated
Old Archive Deletion: Previous archive deleted from disk
Continuation: System continues without downtime
Fetch Behavior:
With rotation, fetches check both databases:
Try writable backend first (most recent data)
If not found, try archive backend
Optional duplication: copy from archive to writable if found
Duplication Strategy:
The duplicate parameter controls whether objects found in the archive are copied forward:
True: Copy to writable, ensuring availability after next rotation
False: Leave in archive only, eventual deletion
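A sketch of this two-backend fetch is shown below, with the backends modeled as callables and the duplicate flag deciding whether archive hits are copied forward (hypothetical names, not DatabaseRotatingImp itself).
```cpp
#include <functional>
#include <optional>
#include <vector>

using Blob = std::vector<unsigned char>;
using FetchFn = std::function<std::optional<Blob>(Blob const& key)>;
using StoreFn = std::function<void(Blob const& key, Blob const& value)>;

std::optional<Blob> rotatingFetch(
    Blob const& key,
    FetchFn fetchWritable,  // current, writable backend
    FetchFn fetchArchive,   // previous, read-only backend
    StoreFn storeWritable,  // used only when duplicating forward
    bool duplicate)         // copy archive hits into the writable backend?
{
    if (auto found = fetchWritable(key))  // most recent data first
        return found;

    auto found = fetchArchive(key);       // fall back to the archive
    if (found && duplicate)
        storeWritable(key, *found);       // ensures it survives the next rotation
    return found;                         // nullopt if missing from both
}
```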
Benefits:
Online Deletion: Remove old data without downtime
Retention Policy: Keep N ledgers, delete older
Disk Space Management: Prevent unbounded growth
Zero Downtime: Rotation is transparent to operations
Trade-offs:
Slightly more complex fetch logic
Requires coordination with SHAMapStore
Increased resource usage during rotation
Metrics and Monitoring
Both implementations track comprehensive operational metrics:
Storage Metrics:
node_writes: Total objects written
node_written_bytes: Total bytes persisted
Write latency and throughput
Retrieval Metrics:
node_reads_total: Total fetch attempts
node_reads_hit: Cache hits
node_reads_duration_us: Fetch latency in microseconds
node_read_bytes: Total bytes read from backend
Threading Metrics:
Background thread count
Queue depths for async operations
Thread utilization statistics
Cache Metrics:
Cache size (current object count)
Hit rate (hits / total reads)
Eviction count and rate
These metrics are exposed via the getCountsJson() method and used for:
Performance monitoring and alerting
Capacity planning
Troubleshooting synchronization issues
Optimizing cache configuration
Operational Lifecycle
Understanding NodeStore's role in the complete application lifecycle clarifies its integration with other XRPL components.
Initialization
Startup Sequence:
Configuration Loading: Parse the [node_db] section
Backend Selection: Choose backend type (RocksDB, NuDB, etc.)
Backend Creation: Instantiate backend with configuration
Database Initialization: Create Database (Node or Rotating)
Cache Allocation: Initialize TaggedCache with configured limits
Import (Optional): Import data from another database if configured
Ready State: NodeStore available to SHAMap and other components
Import Process:
If configured with the import_db parameter:
Open both source and destination databases
Iterate all NodeObjects in source
Store each in destination
Verify counts match
Close source database
Continue with destination as primary
Integration with SHAMapStore:
The SHAMapStore class orchestrates the NodeStore lifecycle:
Manages online deletion schedule
Coordinates rotation timing
Handles cache warming on startup
Integrates with ledger validation
Runtime Operations
Transaction Processing:
Transaction validated and applied to in-memory SHAMap
Modified SHAMap nodes identified
Nodes serialized to NodeObjects
NodeStore persists NodeObjects
Cache updated with new objects
Ledger close completes
Ledger Validation:
Validator signs ledger
Consensus achieved on ledger hash
All nodes in validated ledger must be retrievable
NodeStore ensures persistence
Old ledger retained in archive (if using rotation)
Synchronization:
Node behind network, needs to catch up
Request missing ledgers from peers
For each ledger, fetch all NodeObjects
NodeStore persists and caches each object
SHAMap reconstructed from NodeObjects
Verify ledger root hash matches
Continue until synchronized
API Queries:
Client requests historical data (account state, transaction)
API layer identifies required ledger
Requests SHAMap for that ledger
SHAMap requests nodes from NodeStore
NodeStore fetches from cache or backend
Data returned to client
Shutdown
Graceful Shutdown Sequence:
Stop accepting new requests
Complete in-flight async fetches
Flush any pending writes
Close cache (no persistence needed, regenerated on restart)
Close backend database cleanly
Release resources (file descriptors, threads)
Crash Recovery:
NodeStore is designed for crash resilience:
All writes durable before ledger marked validated
No in-memory state required for correctness
Cache rebuilt on restart from access patterns
Backend databases handle their own crash recovery
Advanced Topics
fetchNodeObject Deep Dive
The fetchNodeObject method is the workhorse of NodeStore, worth examining in detail:
Signature:
std::shared_ptr<NodeObject> fetchNodeObject(
uint256 const& hash,
std::uint32_t ledgerSeq = 0,
FetchType fetchType = FetchType::synchronous,
bool duplicate = false
);
Parameters:
hash: The 256-bit identifier of the object to retrieve
ledgerSeq: Associated ledger sequence (for metrics/logging)
fetchType: Synchronous (blocking) or asynchronous (background thread)
duplicate: Copy from archive to writable if found (rotation-specific)
Return Value:
shared_ptr<NodeObject>: The object, if found
nullptr: If not found or marked as missing (dummy)
Execution Flow:
Phase 1: Cache Check
Acquire cache lock
Look up hash in TaggedCache
If found and not dummy: return immediately
If found and is dummy: return nullptr (known missing)
If not found: proceed to backend fetch
Phase 2: Backend Fetch
Release cache lock (allow concurrent access)
Call backend's fetch(hash)
Handle return status:
ok: NodeObject retrieved successfully
notFound: Object doesn't exist
dataCorrupt: Fatal error, log and throw
Other: Log warning, treat as not found
Phase 3: Cache Update
Acquire cache lock
If found: cache the NodeObject
If not found: cache a dummy marker
Release cache lock
Phase 4: Rotation Handling (DatabaseRotatingImp only)
If not found in writable, try archive backend
If found in archive and duplicate == true: store in writable
This ensures the object survives the next rotation
Phase 5: Metrics and Return
Update fetch statistics (hits, misses, latency)
Report fetch event to scheduler (for monitoring)
Return NodeObject or nullptr
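The phases above can be condensed into the following sketch, where the cache and backend are caller-supplied callables and a null cache entry stands in for the dummy marker; rotation handling (Phase 4) and scheduler reporting are omitted. This follows the flow as described, not rippled's exact implementation.
```cpp
#include <functional>
#include <memory>
#include <optional>
#include <stdexcept>
#include <vector>

using Blob = std::vector<unsigned char>;
using ObjPtr = std::shared_ptr<Blob>;  // stand-in for shared_ptr<NodeObject>

enum class Status { ok, notFound, dataCorrupt, backendError };

struct FetchHooks
{
    // Engaged + non-null: cached object. Engaged + null: dummy (known missing).
    // Disengaged: not in the cache at all.
    std::function<std::optional<ObjPtr>(Blob const& hash)> cacheLookup;
    std::function<void(Blob const& hash, ObjPtr const&)> cacheInsert;
    std::function<Status(Blob const& hash, ObjPtr& out)> backendFetch;
    std::function<void(bool hit)> recordMetrics;
};

ObjPtr fetchNodeObjectSketch(Blob const& hash, FetchHooks& h)
{
    // Phase 1: cache check
    if (auto cached = h.cacheLookup(hash))
    {
        h.recordMetrics(true);
        return *cached;               // the object, or nullptr if dummy
    }

    // Phase 2: backend fetch
    ObjPtr obj;
    Status const st = h.backendFetch(hash, obj);
    if (st == Status::dataCorrupt)
        throw std::runtime_error("nodestore: data corrupt");  // fatal
    if (st != Status::ok)
        obj = nullptr;                // treat other failures as not found

    // Phase 3: cache update (real object, or null as the dummy marker)
    h.cacheInsert(hash, obj);

    // Phase 5: metrics and return
    h.recordMetrics(false);
    return obj;
}
```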
Thread Safety:
Cache protected by mutex
Backend operations may block
Multiple threads can fetch simultaneously
Dummy objects prevent cache thrashing on missing data
Error Handling:
Data corruption: Fatal log, exception thrown
Backend errors: Logged, treated as not found
Exceptions during fetch: Logged and rethrown
Missing objects: Graceful nullptr return
Performance Considerations:
Cache hit: ~1-10 microseconds (lock + pointer return)
Cache miss: ~1-10 milliseconds (backend query + deserialization)
Dummy hit: ~1-10 microseconds (prevents repeated backend queries)
Batch Operations
For efficiency, NodeStore supports batch operations:
Batch Store:
Write multiple NodeObjects in single transaction
Reduces database overhead
Maintains atomicity (all or nothing)
Used during ledger close (many nodes updated together)
Batch Fetch:
Request multiple NodeObjects simultaneously
Parallel backend queries
Reduces total latency
Used during synchronization
Batch Size Limits:
batchWritePreallocationSize = 256: Optimize for small batches
batchWriteLimitSize = 65536: Maximum batch size
Prevents memory exhaustion and timeout issues
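As an illustration of how such limits might be applied, the sketch below chunks a large write into batches using the two constants quoted above; the helper itself is hypothetical, since the real batching lives inside the Database and backend implementations.
```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t batchWritePreallocationSize = 256;
constexpr std::size_t batchWriteLimitSize = 65536;

template <typename Object, typename StoreBatchFn>
void storeInBatches(std::vector<Object> const& objects, StoreBatchFn storeBatch)
{
    std::vector<Object> batch;
    batch.reserve(batchWritePreallocationSize);   // optimize the common small case

    for (auto const& obj : objects)
    {
        batch.push_back(obj);
        if (batch.size() >= batchWriteLimitSize)  // cap memory per backend call
        {
            storeBatch(batch);                    // one atomic backend write
            batch.clear();
        }
    }
    if (!batch.empty())
        storeBatch(batch);                        // flush the remainder
}
```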
State Reconstruction Guarantee
The combination of SHAMap and NodeStore provides a critical guarantee:
Problem Statement:
Without transaction history, many different transaction sequences could produce the same final account state. This would undermine blockchain verifiability.
Solution:
The transaction tree, persisted in NodeStore, ensures unique history:
Current state sₙ = f(sₙ₋₁, txₙ) where f is deterministic
Full history preserved: s₀ → s₁ → ... → sₙ
Given any state root, can verify the exact transactions that produced it
NodeStore's Role:
Persists both state tree nodes and transaction tree nodes
Enables reconstruction of complete history
Proves blockchain integrity cryptographically
Supports auditing and compliance
Verification Process:
Retrieve ledger header (contains both tree roots)
Fetch all nodes in state tree
Fetch all nodes in transaction tree
Verify state root matches computed hash
Verify transaction root matches computed hash
Confirm transactions in tree produced the state
This guarantee depends entirely on NodeStore's reliable persistence of both trees.
Resource Management
File Descriptors:
Backends require varying numbers of file descriptors:
RocksDB: ~20-100 depending on configuration
NuDB: Typically fewer, optimized for append-only
SQLite: Usually 1-2
The fdRequired() method allows the system to:
Plan file descriptor allocation
Prevent exhaustion
Adjust ulimits appropriately
Monitor resource usage
Memory Usage: NodeStore memory footprint includes:
TaggedCache: Configurable, typically 100s of MB to several GB
Backend buffers: Database-specific, varies by backend
Thread pools: Background fetch threads (10-50 MB typical)
Working memory: Batch operations, encoding/decoding
Disk Space: Without online deletion:
Growth rate: ~1-5 GB per million ledgers (varies by transaction volume)
Unbounded growth over time
Requires periodic manual pruning
With rotating database:
Controlled growth: maintain last N ledgers
Automatic deletion of old data
Predictable disk usage
Performance Characteristics
Lookup Complexity:
Cache hit: O(1) hash table lookup
Cache miss: O(1) database key-value lookup
Overall: O(1) expected time
Write Throughput:
Depends heavily on backend choice
RocksDB: ~10,000-50,000 objects/second
NuDB: ~50,000-200,000 objects/second
Limited by disk I/O and batch size
Read Throughput:
Cache hit: ~1,000,000+ objects/second (memory bound)
Cache miss: ~10,000-100,000 objects/second (disk bound)
Critical importance of cache hit rate
Latency:
Cache hit: 1-10 microseconds
Cache miss: 1-10 milliseconds (SSD), 10-100 milliseconds (HDD)
Async fetch: Hide latency through pipelining
Summary
The NodeStore represents a masterful balance between simplicity and sophistication:
Core Achievements:
Persistence: Transforms ephemeral in-memory SHAMap into durable blockchain state
Abstraction: Clean interface isolates application logic from database implementation
Performance: Multi-tier caching achieves microsecond latency for hot data
Scalability: Rotating database enables unbounded operation with bounded resources
Reliability: Crash-resistant design ensures data integrity
Integration with SHAMap:
The relationship between SHAMap and NodeStore is what makes XRPL viable:
SHAMap: Provides structure, verification, and efficient algorithms
NodeStore: Provides persistence, caching, and practical storage
Together they enable:
Cryptographically verified state with durable persistence
Efficient synchronization with minimal network overhead
Fast API queries while maintaining complete history
Scalable operations that balance memory and disk usage
Operational Excellence: NodeStore's design supports the demanding requirements of a production blockchain:
Online deletion without downtime
Flexible backend choice for optimization
Comprehensive metrics for monitoring
Graceful degradation under resource constraints
The NodeStore is not merely a storage layer—it is the foundation that makes XRPL's sophisticated state management practical, performant, and reliable at scale.