Request and Response Flow
Tracing a Request Through Rippled's RPC Pipeline
Introduction
Understanding the complete lifecycle of an RPC request—from the moment it arrives at the server to when the response is sent back to the client—is essential for building robust custom handlers. This knowledge helps you anticipate edge cases, implement proper error handling, and optimize performance.
In this section, we'll trace the journey of a request through Rippled's RPC system, examining each stage of processing and the components involved.
The Complete Request Journey
Client → Transport Layer → Parser → Validator → Auth → Dispatcher → Handler → Response Builder → Client
Let's break down each stage in detail.
Stage 1: Request Reception
HTTP Entry Point
For HTTP requests, the entry point is the HTTP server configured in rippled.cfg:
Source Location: src/xrpld/rpc/detail/RPCCall.cpp
The HTTP server receives the raw request:
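As a concrete illustration, the body of an HTTP POST to the JSON-RPC endpoint uses the method/params convention from the public XRPL API; the account_info call and the address below are example values only:

```json
{
  "method": "account_info",
  "params": [
    {
      "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "ledger_index": "validated"
    }
  ]
}
```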
WebSocket Entry Point
For WebSocket connections, clients establish a persistent connection:
Source Location: src/xrpld/rpc/detail/RPCHandler.cpp
WebSocket messages use a slightly different format:
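In the WebSocket API the command name and its parameters sit at the top level of a single JSON object, along with a client-chosen id used to match the eventual response to the request:

```json
{
  "id": 1,
  "command": "account_info",
  "account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
  "ledger_index": "validated"
}
```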
gRPC Entry Point
For gRPC, requests arrive as Protocol Buffer messages:
Source Location: src/xrpld/app/main/GRPCServer.cpp
Stage 2: Request Parsing
The raw request is parsed into a structured format.
JSON Parsing
Field Extraction
The parser extracts key fields:
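Below is a minimal, self-contained sketch of this step using a jsoncpp-style API similar to the JSON library rippled bundles; the extractCommand helper and its fallback logic are illustrative rather than the actual parsing code, and they also show the normalization of the two transport forms described next:

```cpp
#include <json/json.h>   // jsoncpp-style JSON library (assumed available)
#include <iostream>
#include <string>

// Pull the command name and parameter object out of a parsed request,
// accepting both the HTTP form ("method" + "params" array) and the
// WebSocket form ("command" + top-level fields).
static bool extractCommand(Json::Value const& request,
                           std::string& command,
                           Json::Value& params)
{
    if (request.isMember("method"))          // HTTP JSON-RPC form
    {
        command = request["method"].asString();
        params = request.isMember("params") ? request["params"][0u]
                                            : Json::Value(Json::objectValue);
        return true;
    }
    if (request.isMember("command"))         // WebSocket form
    {
        command = request["command"].asString();
        params = request;                    // parameters sit at top level
        return true;
    }
    return false;                            // neither field present
}

int main()
{
    std::string const body =
        R"({"method":"account_info","params":[{"account":"rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh"}]})";

    Json::Value request;
    Json::Reader reader;
    if (!reader.parse(body, request))
    {
        std::cerr << "malformed JSON\n";     // would become an invalid-request error
        return 1;
    }

    std::string command;
    Json::Value params;
    if (extractCommand(request, command, params))
        std::cout << "command: " << command << '\n';
}
```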
Protocol Normalization
Different transports use different formats, which are normalized:
HTTP/WebSocket:
method or command field
params array or direct parameters
gRPC:
Protobuf message fields
Converted to JSON internally
Stage 3: Role Determination
Before processing the request, the system determines the caller's role based on the connection:
IP-Based Role Assignment
Source Location: src/xrpld/core/detail/Role.cpp
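A simplified, self-contained sketch of IP-based role assignment; the PortConfig struct and this requestRole signature are assumptions for illustration and do not mirror the actual code in Role.cpp:

```cpp
#include <string>
#include <vector>

// Roles ordered from least to most privileged; FORBID is handled separately.
enum class Role { FORBID, GUEST, USER, IDENTIFIED, ADMIN };

// Illustrative per-port configuration: the port a connection arrives on knows
// which remote addresses it trusts.
struct PortConfig
{
    std::vector<std::string> adminIps;   // addresses granted ADMIN
    std::vector<std::string> secureIps;  // addresses treated as IDENTIFIED
};

static Role requestRole(PortConfig const& port, std::string const& remoteIp)
{
    for (auto const& ip : port.adminIps)
        if (ip == remoteIp)
            return Role::ADMIN;

    for (auto const& ip : port.secureIps)
        if (ip == remoteIp)
            return Role::IDENTIFIED;

    return Role::GUEST;   // everything else is unauthenticated public access
}

int main()
{
    PortConfig const port{{"127.0.0.1"}, {}};
    return requestRole(port, "127.0.0.1") == Role::ADMIN ? 0 : 1;
}
```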
Role Hierarchy
Role descriptions:
FORBID: Blacklisted client (blocked)
GUEST: Unauthenticated public access (limited commands)
USER: Authenticated client (most read operations)
IDENTIFIED: Trusted gateway (write operations)
ADMIN: Full administrative access (all commands)
Configuration Example
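A typical rippled.cfg arrangement grants ADMIN only to local connections while exposing a public, unauthenticated WebSocket port; the stanza names follow the stock example configuration, and the values are illustrative:

```
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http

[port_ws_public]
port = 6006
ip = 0.0.0.0
protocol = ws
```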
Stage 4: Handler Lookup
The dispatcher searches the handler table for the requested command:
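In outline, the dispatcher keeps a table keyed by command name, and a miss becomes an unknown-command error; the table layout and getHandler helper below are assumptions, not the actual handler-table code:

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Illustrative handler-table entry: a real entry would also record the
// required role, condition flags, and supported API versions.
struct Handler
{
    std::string name;
    std::function<int()> invoke;   // stand-in for the real handler signature
};

// Illustrative table; a real table would be populated from a static list of
// handler descriptors at startup.
static std::unordered_map<std::string, Handler> const handlerTable = {
    {"account_info", Handler{"account_info", [] { return 0; }}},
    {"server_info",  Handler{"server_info",  [] { return 0; }}},
};

static Handler const* getHandler(std::string const& command)
{
    auto const it = handlerTable.find(command);
    return it == handlerTable.end() ? nullptr : &it->second;
}

int main()
{
    // A miss here would be reported to the client as an unknown-command error.
    return getHandler("account_info") ? 0 : 1;
}
```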
Version Matching
If API versioning is in use:
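A sketch of the version window check, assuming each table entry records the range of API versions it supports (the field names are illustrative):

```cpp
#include <cstdint>

struct VersionedHandler
{
    std::uint32_t minApiVersion;
    std::uint32_t maxApiVersion;
};

// Reject the call if the caller's requested api_version falls outside the
// window the handler declares; the client would see an unsupported-version error.
static bool versionOk(VersionedHandler const& h, std::uint32_t requested)
{
    return requested >= h.minApiVersion && requested <= h.maxApiVersion;
}

int main()
{
    VersionedHandler const accountInfo{1, 2};
    return versionOk(accountInfo, 2) ? 0 : 1;
}
```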
Stage 5: Permission Verification
The system checks if the caller has sufficient permissions:
Example: A GUEST client attempting to call submit (requires USER role) would be rejected here.
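A sketch of the check itself, treating the role enum from Stage 3 as ordered by increasing privilege (an illustrative simplification):

```cpp
enum class Role { FORBID, GUEST, USER, IDENTIFIED, ADMIN };

// A caller may invoke a handler only if its role is at least the role the
// handler declares; FORBID is rejected outright.
static bool hasPermission(Role caller, Role required)
{
    if (caller == Role::FORBID)
        return false;
    return static_cast<int>(caller) >= static_cast<int>(required);
}

int main()
{
    // GUEST calling a USER-level command like submit fails here; the
    // dispatcher would return a noPermission error to the client.
    return hasPermission(Role::GUEST, Role::USER) ? 1 : 0;
}
```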
Stage 6: Condition Validation
Handlers may require specific runtime conditions:
Ledger Availability Check
Network Connectivity Check
Closed Ledger Check
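A combined sketch of these three checks. The condition flag names and error tokens are modeled on the ones rippled uses (NEEDS_NETWORK_CONNECTION, NEEDS_CURRENT_LEDGER, NEEDS_CLOSED_LEDGER and noNetwork/noCurrent/noClosed); the ServerState struct and checkCondition helper are simplified assumptions:

```cpp
#include <optional>
#include <string>

// Condition flags a handler can declare.
enum Condition : unsigned
{
    NO_CONDITION             = 0,
    NEEDS_NETWORK_CONNECTION = 1,
    NEEDS_CURRENT_LEDGER     = 2,
    NEEDS_CLOSED_LEDGER      = 4,
};

// Illustrative snapshot of server state consulted by the checks.
struct ServerState
{
    bool networkConnected;
    bool haveCurrentLedger;
    bool haveClosedLedger;
};

// Returns the error token to send back, or nothing if all conditions hold.
static std::optional<std::string>
checkCondition(unsigned required, ServerState const& state)
{
    if ((required & NEEDS_NETWORK_CONNECTION) && !state.networkConnected)
        return "noNetwork";
    if ((required & NEEDS_CURRENT_LEDGER) && !state.haveCurrentLedger)
        return "noCurrent";
    if ((required & NEEDS_CLOSED_LEDGER) && !state.haveClosedLedger)
        return "noClosed";
    return std::nullopt;
}

int main()
{
    ServerState const state{true, true, false};
    auto const err = checkCondition(NEEDS_CURRENT_LEDGER, state);
    return err ? 1 : 0;   // no error: the handler may run
}
```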
Stage 7: Context Construction
A JsonContext object is built with all necessary information:
Context provides:
Request parameters (params)
Application services (app)
Resource tracking (consumer)
Permission level (role)
Ledger access (ledger, ledgerMaster)
Network operations (netOps)
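A stripped-down sketch of what such a context carries; the real JsonContext holds references to live application services, so the placeholder types below are purely illustrative:

```cpp
// Placeholder types standing in for rippled's real services and JSON type.
struct Application {};
struct LedgerMaster {};
struct NetworkOPs {};
struct Consumer {};
struct JsonValue {};
enum class Role { FORBID, GUEST, USER, IDENTIFIED, ADMIN };

// Everything a handler needs, gathered in one place before it is invoked.
struct JsonContext
{
    JsonValue     params;        // request parameters
    Application*  app;           // application services
    Consumer*     consumer;      // per-client resource tracking
    Role          role;          // permission level from Stage 3
    LedgerMaster* ledgerMaster;  // ledger access
    NetworkOPs*   netOps;        // network operations
    unsigned      apiVersion;    // negotiated API version
};

int main()
{
    Application app;
    JsonContext const ctx{JsonValue{}, &app, nullptr, Role::USER, nullptr, nullptr, 1};
    return ctx.role == Role::USER ? 0 : 1;
}
```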
Stage 8: Resource Charging
The system tracks API usage to prevent abuse:
Resource limits are configured per client and prevent DoS attacks.
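A toy version of that accounting; rippled's actual resource manager uses configurable fee schedules and decaying balances, so the Consumer class and fixed thresholds below are assumptions made for illustration:

```cpp
#include <cstdint>

// Toy per-client usage meter: every request adds a charge; crossing the
// warning threshold flags the client, crossing the drop threshold cuts it off.
class Consumer
{
public:
    enum class Disposition { ok, warn, drop };

    Disposition charge(std::uint64_t fee)
    {
        balance_ += fee;
        if (balance_ >= dropThreshold_)
            return Disposition::drop;   // disconnect / reject further requests
        if (balance_ >= warnThreshold_)
            return Disposition::warn;   // include a warning in responses
        return Disposition::ok;
    }

private:
    std::uint64_t balance_ = 0;
    std::uint64_t warnThreshold_ = 1000;   // illustrative values
    std::uint64_t dropThreshold_ = 5000;
};

int main()
{
    Consumer client;
    // An expensive call would be charged more than a cheap one.
    return client.charge(50) == Consumer::Disposition::ok ? 0 : 1;
}
```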
Stage 9: Handler Invocation
The handler function is called with the constructed context:
Error handling: Any uncaught exceptions are converted to rpcINTERNAL errors.
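In outline the dispatcher does something like the following; the type names are placeholders, but the try/catch translation of escaped exceptions into an internal error matches the behaviour described above:

```cpp
#include <functional>
#include <stdexcept>
#include <string>

struct JsonContext {};                    // placeholder for the real context
struct Response { std::string error; };   // minimal result carrier

using HandlerFn = std::function<Response(JsonContext&)>;

// Invoke the handler, converting any escaped exception into an internal error
// so the client always receives a well-formed response.
static Response invokeHandler(HandlerFn const& handler, JsonContext& ctx)
{
    try
    {
        return handler(ctx);
    }
    catch (std::exception const&)
    {
        return Response{"internal"};   // reported to the client as rpcINTERNAL
    }
}

int main()
{
    JsonContext ctx;
    HandlerFn broken = [](JsonContext&) -> Response {
        throw std::runtime_error("handler bug");
    };
    return invokeHandler(broken, ctx).error == "internal" ? 0 : 1;
}
```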
Stage 10: Response Construction
Success Response
For successful requests:
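Over HTTP, a successful result is wrapped in a result object whose status field is "success"; the account_data contents below are illustrative:

```json
{
  "result": {
    "account_data": {
      "Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "Balance": "9999999999999",
      "Sequence": 1
    },
    "ledger_index": 90000000,
    "status": "success",
    "validated": true
  }
}
```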
Error Response
For failed requests:
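Errors use the same result wrapper, carrying an error token, a human-readable message, and (over HTTP) an echo of the offending request; the numeric error_code differs per error and the values shown are illustrative:

```json
{
  "result": {
    "error": "actNotFound",
    "error_code": 19,
    "error_message": "Account not found.",
    "request": {
      "command": "account_info",
      "account": "rrrrrrrrrrrrrrrrrrrrrhoLvTp"
    },
    "status": "error"
  }
}
```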
Stage 11: Response Serialization
The JSON response is serialized back to the client's format:
HTTP Response
WebSocket Response
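For WebSocket clients the result is placed in an envelope that echoes the request id and marks the message type, so responses can be distinguished from subscription streams:

```json
{
  "id": 1,
  "type": "response",
  "status": "success",
  "result": {
    "account_data": { "Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh" },
    "validated": true
  }
}
```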
gRPC Response
Stage 12: Response Delivery
The response is sent back to the client over the same transport:
HTTP: Single request-response cycle completes
WebSocket: Response is pushed to the persistent connection
gRPC: Streamed response or unary response returned
Timing and Performance
Each stage has associated latency:
Reception: < 1 ms (network overhead)
Parsing: < 1 ms (JSON parsing)
Lookup: < 0.1 ms (hash table lookup)
Permission Check: < 0.1 ms (simple comparison)
Condition Check: < 1 ms (ledger availability)
Handler Execution: 1-100 ms (varies by handler)
Serialization: < 1 ms (JSON encoding)
Delivery: < 1 ms (network overhead)
Total typical latency: 5-105 ms
Error Handling at Each Stage
Different errors can occur at each stage:
Parsing Errors
Lookup Errors
Permission Errors
Condition Errors
Handler Errors
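To make these stages concrete, some representative error tokens from the public XRPL API error list: parsing failures surface as malformed-request or invalidParams errors; a failed lookup returns unknownCmd; a permission failure returns noPermission; unmet conditions return noNetwork, noCurrent, or noClosed; and handler-stage failures return command-specific errors such as actNotFound or lgrNotFound, with internal covering uncaught exceptions.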
Real-World Example: Tracing an account_info Request
Let's trace a complete account_info request through each stage (a sketch of the final response follows step 10 below):
1. Client Request
2. Reception (HTTP)
3. Parsing
4. Role Determination
5. Handler Lookup
6. Permission Check
7. Condition Check
8. Context Construction
9. Handler Invocation
10. Response
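Assuming the example request from Stage 1, the final payload returned to the HTTP client would look roughly like this (field values are illustrative):

```json
{
  "result": {
    "account_data": {
      "Account": "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh",
      "Balance": "9999999999999",
      "Flags": 0,
      "LedgerEntryType": "AccountRoot",
      "OwnerCount": 0,
      "Sequence": 1
    },
    "ledger_index": 90000000,
    "status": "success",
    "validated": true
  }
}
```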
Conclusion
The RPC request-response flow demonstrates Rippled's carefully orchestrated pipeline for handling API calls. From initial reception across multiple transport protocols, through parsing, role determination, permission checks, and condition validation, to handler invocation and response formatting—each stage serves a specific purpose. This multi-stage design enables early rejection of invalid requests, consistent error handling, proper resource management, and transport-agnostic processing. Mastering this flow is crucial for debugging RPC issues and understanding how custom handlers integrate into the system.