This is the home of the XRPL Commons technical trainings. This space aims to showcase all content related to training on the XRPL.
The following links are the key tools in your journey on the XRP ledger.
Ecosystem map: https://map.xrpl-commons.org
Block explorer: https://testnet.xrpl.org
Faucets: https://xrpl.org/resources/dev-tools/xrp-faucets/
Transactions references: https://xrpl.org/docs/references/protocol/transactions/types/
Training live sync: https://trainings.xrpl.at/training (password is training-april-2024)
Main XRPL Documentation:
Learning Portal:
Check out our events and opportunities at XRPL Commons:
Start here :)
First, you will want to set up your coding environment.
Any JavaScript editor will work, and feel free to use your own Node environment to follow along with JavaScript. Most of the code we will use can be run client side as well.
For a hassle-free setup, use Replit. Accounts are free. Create a new Repl with the TypeScript defaults, or use the Node defaults if you prefer to avoid TypeScript.
If you use another language, you can follow along with your own script, but you will need the appropriate SDK, which may differ from xrpl.js.
You will often need to refer to the main API documentation for transactions here: https://xrpl.org/docs/references/protocol/transactions/types/
Before we get into the heart of the matter, let's set up a few wallets.
Let's add some wallet creation logic. We might as well create two wallets for some fun.
At this point, we should have two wallets with balances of 100 XRP.
We will save these in a more convenient way to reuse them as we progress through this tutorial.
Collect the seed values from the logs for both accounts, and let's create wallets from those seeds from now on. We'll need an issuer and a receiver so here we go:
First, we set the seeds in the code:
First, we need to get the public key of the person we want to chat with. To do so, we can go on the block explorer and look at the transactions the address has sent. From there, by looking at the raw transaction data, we can find their public key. Once we have their public key, we can use it to encrypt messages that only they can decrypt.
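If clicking through the explorer feels tedious, the lookup can also be done programmatically. Here is a minimal sketch using xrpl.js and the account_tx request (the testnet URL and the fetchPublicKey helper name are assumptions, not part of the original tutorial):

import { Client } from "xrpl";

// Scan an account's recent transactions for its signing public key.
// Only transactions sent BY the account expose its SigningPubKey;
// multi-signed transactions carry their keys in Signers instead.
async function fetchPublicKey(address: string): Promise<string | undefined> {
  const client = new Client("wss://s.altnet.rippletest.net:51233");
  await client.connect();
  const response = await client.request({
    command: "account_tx",
    account: address,
    limit: 10,
  });
  await client.disconnect();
  for (const entry of response.result.transactions) {
    // xrpl.js v2 exposes the transaction as `tx`, v3 as `tx_json`
    const tx: any = (entry as any).tx ?? (entry as any).tx_json;
    if (tx?.Account === address && tx?.SigningPubKey) {
      return tx.SigningPubKey;
    }
  }
  return undefined;
}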
Then we need to set up our wallet to encrypt the message and send the transaction.
While faucets are accessible programmatically, you can also create a test account: https://xrpl.org/resources/dev-tools/xrp-faucets/
This tutorial uses the Xaman wallet, which you can download to your mobile phone from https://xumm.app. Once installed, you can import a wallet via "family seed" (the address secret) for quicker setup.
However, most examples in this tutorial sign transactions programmatically.
To get started on Replit, create an account (free accounts are fine).
Go to developer frameworks, hit Create, and search for TypeScript.
Open the right-hand panel to browse files, edit index.ts, and run with the play button at the top left.
You can close the AI panel to the left for more screen real estate.
Explore an additional transaction type (e.g., OfferCreate, TrustSet)
Implement a custom log filter for specific transaction types
Create a detailed flowchart of the Transactor framework
Send encrypted messages over XRPL
This session will explore how to use the memo field present on every transaction on the XRP Ledger to send encrypted messages, effectively creating a chat engine.
While no encryption lasts forever, we propose this activity to explore the basic principles of cryptography and the use of the memo field.
This session will cover token issuance and providing liquidity for your tokens using the Automated Market Maker (AMM) built into the XRPL.
In this section we will cover some of the primary functionality of the XRP Ledger.
The XRPL Core Dev Bootcamp – Online Edition is a program designed for intermediate to advanced C++ developers looking to deepen their skills in the core of the XRP Ledger. This online edition lets you follow the program remotely.
import {Wallet} from "xrpl"
const receiverPubKey = "the public key from the explorer here"
const myWallet = Wallet.fromSeed("your seed here")
console.log('lets fund 2 accounts...')
const { wallet: wallet1, balance: balance1 } = await client.fundWallet()
const { wallet: wallet2, balance: balance2 } = await client.fundWallet()
console.log('wallet1', wallet1)
console.log('wallet2', wallet2)
console.log({
balance1,
address1: wallet1.address, //wallet1.seed
balance2,
address2: wallet2.address
})
const issuerSeed = "s...";
const receiverSeed = "s...";
const issuer = Wallet.fromSeed(issuerSeed);
const receiver = Wallet.fromSeed(receiverSeed);
https://opensource.ripple.com/docs/evm-sidechain/connect-metamask-to-xrpl-evm-sidechain/
When you receive a 1-drop transaction, it might contain a hidden message for you. By checking the raw transaction data in your block explorer and looking at the memo field, you can find an encrypted message. You'll see a scrambled string of characters - this is your encrypted message. Since this message was encrypted using your public (outside) key, only your private (inside) key can decrypt it. By copying this encrypted text and using your private key, you can transform this unreadable data back into the original message that was meant for your eyes only.
import tweetnacl from 'tweetnacl';
import { Buffer } from "buffer";
import { Wallet } from "xrpl";
import {
edwardsToMontgomeryPub,
edwardsToMontgomeryPriv,
} from "@noble/curves/ed25519";
const { box } = tweetnacl;
/**
 * Decrypts a message produced by encryptMessage
 *
 * @param messageWithNonce - JSON string containing the base64 encoded ciphertext and nonce
 * @param senderPublicKey - Sender's Ed25519 public key (hex encoded)
 * @param recipientSecretKey - Recipient's Ed25519 private key (hex encoded)
 * @returns The decrypted plain text message
 */
export function decryptMessage(
messageWithNonce: string,
senderPublicKey: string,
recipientSecretKey: string,
): string {
const pubKeyBytes = Buffer.from(senderPublicKey.slice(2), "hex");
const secretKeyBytes = Buffer.from(recipientSecretKey.slice(2), "hex");
const pubKeyCurve = edwardsToMontgomeryPub(pubKeyBytes);
const privKeyCurve = edwardsToMontgomeryPriv(secretKeyBytes);
const { encrypted, nonce } = JSON.parse(messageWithNonce);
const messageBytes = Buffer.from(encrypted,"base64");
const nonceBytes = Buffer.from(nonce,"base64");
const decryptedMessage = box.open(
messageBytes,
nonceBytes,
pubKeyCurve,
privKeyCurve,
);
if (!decryptedMessage) {
throw new Error("Failed to decrypt message");
}
return new TextDecoder().decode(decryptedMessage);
}
function main() {
const cypherMessage =
"Your cypher message goes here";
const senderPubKey =
"The sender public key goes here";
const mySeed = "your seed goes here";
const myPrivateKey = Wallet.fromSeed(mySeed).privateKey;
const clearMessage = decryptMessage(
Buffer.from(cypherMessage, "hex").toString(),
senderPubKey,
myPrivateKey,
);
console.log("==============CLEAR MESSAGE==============");
console.log(clearMessage);
console.log("all done");
}
main();
In this workshop, we will learn how to create a full-stack app using Hardhat.
First off we will clone the repo or import it into replit depending on your configuration.
https://github.com/XRPL-Commons/evm-banking-kryptoshpere-2024
From the root directory, run npm i to install common dependencies.
Navigate to the lending-contract folder and follow the instructions in the readme to deploy the contract:
install dependencies with npm i
add your private key to the .env file
fix the hardhat config
review the contract
You can verify your contract has been deployed using the XRPL EVM Explorer
Navigate to the lending-frontend folder and follow the instructions in the readme to run the front end:
install dependencies with npm i
run the frontend with npm run dev and notice the Connect Wallet button does not work
We need to set up the Web3 context. To do this, navigate to the .shared folder and fix the web3-context.ts file; your web app should now look like this.
connect your account (the same EVM account you deployed the contract to)
try to deposit
fix the deposit function
try to withdraw
Congratulations, you have just completed a full stack app using the XRPL EVM Sidechain.
Description of key libraries
To build our encrypted messaging system, we'll be using these key JavaScript libraries:
xrpl: The official XRPL client library to interact with the XRP Ledger
tweetnacl: For encryption/decryption operations using the NaCl cryptographic library
Network Name: XRPL EVM Sidechain
New RPC URL: https://rpc-evm-sidechain.xrpl.org
Chain ID: 1440002
Currency Symbol: XRP
Block Explorer: https://evm-sidechain.xrpl.org
compile the contract using npm run compile
deploy the contract using npm run deploy
try to lend
fix the lend function
try to repay
fix the repay function


You’ll learn to launch Rippled locally and explore it with XRPL Explorer and Playground to simulate end-to-end XRPL workflows.
In stand-alone mode, the server operates without connecting to the network or participating in the consensus process. Without consensus, you have to manually advance the ledger, and no distinction is made between "closed" and "validated" ledgers. However, the server still provides API access and processes transactions the same way.
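For example, after submitting a transaction to a stand-alone server, you close the ledger yourself with the admin-only ledger_accept method. A minimal sketch, assuming the server exposes its admin WebSocket on the default local port 6006:

import { Client } from "xrpl";

// Manually advance the ledger on a stand-alone rippled server.
// ledger_accept is admin-only and is rejected outside stand-alone mode.
async function advanceLedger() {
  const client = new Client("ws://localhost:6006");
  await client.connect();
  const before = await client.request({ command: "ledger_current" });
  console.log("open ledger:", before.result.ledger_current_index);
  // The TypeScript request types don't cover admin commands, hence the cast
  await client.request({ command: "ledger_accept" } as any);
  const after = await client.request({ command: "ledger_current" });
  console.log("after ledger_accept:", after.result.ledger_current_index);
  await client.disconnect();
}

advanceLedger();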
Option with genesis ledger:
Check the logs to confirm that the server is running correctly.
Install dependencies:
Launch the explorer:
Navigate to http://localhost:8080/ to view transactions and the ledger.
Test commands such as server_info, ledger_current, account_info from the interface or via cURL/WebSocket.
Create and fund test accounts to interact with the local ledger.
Verify transactions in the explorer and via the interface/command.
Create a config directory and copy the example configuration files.
Configure rippled.cfg to connect to the mainnet.
Start synchronization:
Identify your node on the network using server_info and check its visibility on the XRPL explorer.
Analyze the logs to observe: startup, peer connections, ledger acquisition, and participation in consensus.
Document your experience and any errors encountered.
Compare your node with other nodes on the network
Monitor connectivity and latency relative to the global network
@noble/curves: For converting Ed25519 keys to their X25519 (Montgomery) form
buffer: To handle binary data manipulation
If you are running your own environment, install everything by running:
npm install xrpl tweetnacl @noble/curves buffer
cd ~/projects/rippled/build
cp -r ../config ./config
./rippled -a --conf ./config/rippled.cfg
./rippled -a --conf ./config/rippled.cfg --ledgerfile ./config/genesis.json
cd ~/core-dev-bootcamp-2025/explorer
npm install
npm run serve
git clone https://github.com/XRPL-Commons/core-dev-bootcamp-2025/tree/main/playground
cd core-dev-bootcamp-2025/playground
yarn install
ts-node src/connect.ts
ts-node src/fund.ts
./rippled --conf ./config/rippled.cfg
In this session, we'll demonstrate how to coordinate using the XRPL.
The XRPL is a public ledger, meaning that when you write on-chain, anyone can read it. In this session, we will demonstrate how to subscribe to events as they happen on-chain.
First, we will create a pair of accounts:
Check the logs to find the seed for both accounts. We will use these later to reuse the accounts deterministically.
Here is how to generate a payment transaction between the two accounts.
Insert the seeds from the previous section in the top section.
One way to retrieve transactions for a given account is to create a request for transactions. Here is an example
Finally, we will demonstrate how to listen for transactions affecting a given account to catch all transactions.
You could use this to build a notification bot that alerts you when any activity occurs in your account.
You can subscribe to an account to trigger on each transaction, for instance. This is how you would listen to a specific account:
How to compile, deploy and interact with a solidity smart contract on chain
Open Remix at https://remix.ethereum.org.
By default you can find the HelloWorld.sol contract in the file explorer under contracts.
Compile it using the third tab.
You can deploy it using the fourth tab. You will need to select the Injected Provider environment and make sure your MetaMask is connected to the right network and account.
Deploy the contract; it should open a window for signature (notice the fees in XRP).
Once deployed successfully you can interact with it by scrolling down tab 4.
Create the following contract in remix, give it the name counter.sol
Compile the new contract using the third tab.
Deploy the contract using the fourth tab.
You can now interact with the counter smart contract:
send a transaction to increment it; this will trigger a transaction to sign in MetaMask
read the current value (notice how this is free)
To go further, you can explore the code behind the simple banking contract we deploy in the banking app.
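If you want to drive the same Counter contract from a script instead of the Remix UI, here is a minimal sketch using ethers.js v6 (the library choice, private key, and contract address are assumptions, not part of the workshop):

import { ethers } from "ethers";

// Human-readable ABI for the two functions of the Counter contract above
const abi = [
  "function incrementAndGet() returns (uint256)",
  "function getCounter() view returns (uint256)",
];

async function main() {
  const provider = new ethers.JsonRpcProvider("https://rpc-evm-sidechain.xrpl.org");
  const signer = new ethers.Wallet("your EVM private key here", provider);
  const counter = new ethers.Contract("deployed Counter address here", abi, signer);

  // Reading is a free eth_call: no transaction, no fee
  console.log("current value:", await counter.getCounter());

  // Writing mutates state, so it is a signed transaction paid in XRP
  const tx = await counter.incrementAndGet();
  await tx.wait(); // wait until the transaction is mined
  console.log("new value:", await counter.getCounter());
}

main();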
Efficient message propagation is essential for a decentralized ledger. Transactions must reach validators quickly, proposals must spread to enable consensus, and validations must propagate to finalize ledgers. The overlay network's message relaying system ensures information flows efficiently while preventing network overload through intelligent squelching.
This lesson explores how messages propagate through the network and how Rippled optimizes this process to handle high-throughput scenarios.
OverlayImpl::relay(protocol::TMProposeSet& m, uint256 const& uid, PublicKey const& validator):
Calls app_.getHashRouter().shouldRelay(uid) to determine if the proposal should be relayed.
If not, returns an empty set.
If yes:
Creates a shared pointer to a Message object containing the proposal.
Slot::update():
Tracks peer activity for a validator, incrementing message counts and considering peers for selection.
When enough peers reach the message threshold, randomly selects a subset to be "Selected" and squelches the rest (temporarily mutes them).
Squelched peers are unsquelched after expiration.
Handles all state transitions, logging, and squelch/unsquelch notifications via the SquelchHandler interface.
OverlayImpl::unsquelch():
Looks up the peer by short ID.
If found, constructs a TMSquelch message with squelch=false for the validator.
Sends the message to the peer, instructing it to stop squelching messages from the validator.
Configure logging and analyze Rippled's internal behavior.
Format: Written report (PDF or Markdown) with screenshots/code snippets
Enable Detailed Logging
Enable trace-level logging for the Transaction partition
Submit a transaction
Capture the relevant log output
A report containing:
Commands used to enable logging
Log excerpts (formatted properly)
Analysis of 5 log messages with explanations
Now that we have our encrypted message, we can send it through the XRPL network. We'll create a transaction with a minimal amount (1 drop) and include our encrypted message in the transaction's memo field. The memo field is perfect for this as it can store arbitrary data, making it an ideal place for our encrypted message. Once the transaction is validated, our secret message will be securely stored on the blockchain - visible to everyone but readable only by the intended recipient who holds the matching private key.
In this session, we will learn how to create digital art using Non-Fungible Tokens (NFTs) with their metadata and specific attributes.
In this section, we will create fungible tokens, or IOUs, and transfer them between accounts.
Token codes can be three letters or a padded hex string. Here is a convenience function to enable codes longer than three letters. This needs to be done to work with our training app, so you will want to include this in your code. This could easily be done via an import for instance, or placed in the index.ts file directly.
Now we will create an AMM Pool to provide some liquidity for our new token.
Now that the receiver has tokens, we can use the receiver's account to create an AMM. Note that this is the usual architecture where the issuer account solely issues the token, and other proprietary accounts hold the token and can create liquidity.
For this example, we will use pools that have XRP as one side.
Here is the createAMM.ts file:
The final main function index.ts should now look like this:
import xrpl from "xrpl";
const serverURL = "wss://s.altnet.rippletest.net:51233"; // For testnet
const main = async () => {
const client = new xrpl.Client(serverURL)
await client.connect()
// do something useful
console.log("lets fund 2 accounts...")
const { wallet: wallet1, balance: balance1 } = await client.fundWallet()
const { wallet: wallet2, balance: balance2 } = await client.fundWallet()
console.log({ wallet1, balance1 })
console.log({ wallet2, balance2 })
// end
client.disconnect()
};
main()
Log Analysis
Identify and explain 5 key log messages related to transaction processing
For each log message, explain:
What subsystem generated it
What phase of processing it represents
What information it provides to developers
Iterates over all active peers.
For each peer not in the skip set, sends the proposal message.
Returns the set of peer IDs that were skipped.
Now that we have their public key, we can encrypt our secret message. Remember our "outside-in" principle: we use THEIR public (outside) key to encrypt the message, which ensures that only THEIR private (inside) key can decrypt it. The message gets transformed into a scrambled format that is completely unreadable to anyone who might intercept it - only the owner of the matching private key can convert it back to the original text. Each encrypted message is unique, even if you encrypt the same text multiple times, adding an extra layer of security.
import tweetnacl from 'tweetnacl';
import { Buffer } from "buffer";
import { Wallet } from "xrpl";
import {
edwardsToMontgomeryPub,
edwardsToMontgomeryPriv,
} from "@noble/curves/ed25519";
const { box, randomBytes } = tweetnacl;
/**
* Encrypts a message using X25519 (Montgomery curve) for Diffie-Hellman key exchange
*
* @param message - Plain text message to encrypt
* @param recipientPublicKey - Recipient's Ed25519 public key (hex encoded)
* @param senderSecretKey - Sender's Ed25519 private key (hex encoded)
* @returns JSON string containing base64 encoded encrypted message and nonce
*
* Steps:
* 1. Convert hex keys to byte arrays
* 2. Generate random nonce for uniqueness
* 3. Convert message to bytes
* 4. Convert Ed25519 keys to X25519 (Montgomery) format for encryption
* 5. Encrypt using NaCl box with converted keys
* 6. Return encrypted message and nonce as base64 JSON
*/
export function encryptMessage(
message: string,
recipientPublicKey: string,
senderSecretKey: string,
): string {
const pubKeyBytes = Buffer.from(recipientPublicKey.slice(2), "hex");
const secretKeyBytes = Buffer.from(senderSecretKey.slice(2), "hex");
const nonce = randomBytes(box.nonceLength);
const messageUint8 = Buffer.from(message);
const pubKeyCurve = edwardsToMontgomeryPub(pubKeyBytes);
const privKeyCurve = edwardsToMontgomeryPriv(secretKeyBytes);
const encryptedMessage = box(messageUint8, nonce, pubKeyCurve, privKeyCurve);
return JSON.stringify({
encrypted: Buffer.from(encryptedMessage).toString("base64"),
nonce: Buffer.from(nonce).toString("base64"),
});
}
function main() {
const recipientPublicKey ="Recipient pubkey goes here";
const myWallet = Wallet.fromSeed("your seed goes here");
const cypheredMessage = encryptMessage(
"Hello World",
recipientPublicKey,
myWallet.privateKey,
);
console.log(cypheredMessage);
}
main();
First, to create digital art, you need to create an NFT using NFTokenMint. You need one available wallet for that. For an NFT to host metadata and have specific attributes, we usually provide a JSON file for the metadata. You can gain a better understanding by reviewing the OpenSea metadata standards (https://docs.opensea.io/docs/metadata-standards). When creating the NFT, you will use a URI field to include the URI to the metadata of the NFT you want to create.
At this point, you should have one account with an NFT created.
Once you have your NFT, you may want to put it on sale and allow people to bid on it. Thanks to XRPL built-in functions, the NFT marketplace is already provided within the ledger. We only need to use NFTokenCreateOffer to put our NFT up for sale.
At this point, your NFT should be on sale, and it will be available for someone specific or anyone to buy.
Canceling your offer
You can cancel your NFT offer using the NFTokenCancelOffer transaction (https://xrpl.org/docs/references/protocol/transactions/types/nftokencanceloffer/).
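A minimal sketch, assuming client, wallet, and the offerId from the previous step are still in scope:

import { NFTokenCancelOffer } from 'xrpl';

const nftCancelOfferTx: NFTokenCancelOffer = {
  TransactionType: "NFTokenCancelOffer",
  Account: wallet.address,
  NFTokenOffers: [offerId], // one or more offer IDs to cancel
};

const prepared = await client.autofill(nftCancelOfferTx);
const signed = wallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log("Offer " + offerId + " cancelled");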
Burning your NFT
You can burn your NFT using the NFTokenBurn transaction (https://xrpl.org/docs/references/protocol/transactions/types/nftokenburn/).
Here is the function code:
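A minimal sketch, assuming client, wallet, and the nftId minted earlier are still in scope:

import { NFTokenBurn } from 'xrpl';

const nftBurnTx: NFTokenBurn = {
  TransactionType: "NFTokenBurn",
  Account: wallet.address, // the owner (or the issuer, if minted with tfBurnable)
  NFTokenID: nftId,
};

const prepared = await client.autofill(nftBurnTx);
const signed = wallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log("NFT " + nftId + " burned");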
For AMMs and the whole process to work, we need to enable rippling on the issuer account. To enable rippling, we use the AccountSet transaction with the appropriate flag. Here is the function.
And in the main function of index.ts, we can now add the following to trigger this:
To create an issued token, the receiver first needs to add a trust line to the issuer. It's straightforward: we create a TrustSet transaction and sign it with the receiver account.
Once the trustline is set, we send an amount from the issuer to the receiver that is less than the trust line maximum (500M tokens in this case).
Here is the whole file for createToken.ts
We can now call this function from the main function inside index.ts, remembering to wrap our token currency code with the convertStringToHexPadded function.
We can check in the explorer that the issuer and receiver have balances in the new token at this point.
// enable rippling
await enableRippling({ wallet: issuer, client });
This section will guide you through:
Cloning the official Rippled repository from the XRPL Foundation’s GitHub.
Setting up Conan profiles for macOS and Ubuntu.
Configuring the CMake build system.
Compiling the Rippled executable in Debug mode.
By following these steps, you’ll have a local, fully compiled version of Rippled — ready for testing, development, and contributing to the XRPL Core.
Importing the default Conan profile.
You can check your Conan profile by running
If the default profile does not work for you and you do not yet have a Conan profile, you can create one by running:
The recipes in Conan Center occasionally need to be patched for compatibility with the latest version of rippled.
To ensure our patched recipes are used, you must add our Conan remote at a higher index than the default Conan Center remote, so it is consulted first. You can do this by running:
⚠️ Compilation may take 30 to 60 minutes depending on your machine.
Once compilation completes successfully, confirm that the Rippled binary has been created:
Then, check the version to ensure the binary runs correctly:
Don't forget to import enableRippling, createToken, createAMM and convertStringToHexPadded if needed.
You can now go to the training website at https://trainings.xrpl.at/ (password is training-april-2024) and interact with other pools.
You will need to set up Xaman with the receiver account you created above. You can use the family seed import route. If you are new to Xaman, don't forget to enable developer mode in the advanced settings and then switch to Testnet from the home page of the Xaman app.
Using the training app, connect and add your public address to the list. When you click "view tokens" next to an address you can see that account's available tokens.
You can create Trustlines for tokens you have not interacted with yet. You can swap XRP for other tokens where you have trustlines using the pool view's swap feature.
You may need to mint more XRP for your receiver account. That's where the print money function might come in handy.
Become a liquidity provider for other pools
You can become a liquidity provider for other pools by using the AMMDeposit transaction (https://xrpl.org/docs/references/protocol/transactions/types/ammdeposit/).
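A minimal single-asset deposit sketch, assuming client and wallet are in scope and that tokenCode and issuerAddress (illustrative names) identify the pool:

import { AMMDeposit, AMMDepositFlags } from "xrpl";

const ammDeposit: AMMDeposit = {
  TransactionType: "AMMDeposit",
  Account: wallet.address,
  // Asset and Asset2 identify the pool, here an XRP/token pool
  Asset: { currency: "XRP" },
  Asset2: { currency: tokenCode, issuer: issuerAddress },
  Amount: "10000000", // deposit 10 XRP (in drops) on the XRP side only
  Flags: AMMDepositFlags.tfSingleAsset,
};

const prepared = await client.autofill(ammDeposit);
const signed = wallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log("AMMDeposit tx:", result.result.hash);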
Withdraw from pools
You can also withdraw from pools using your LP tokens by utilizing the AMMWithdraw transaction (https://xrpl.org/docs/references/protocol/transactions/types/ammwithdraw/).
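And the matching withdrawal, redeeming all of your LP tokens (same assumptions as the deposit sketch above):

import { AMMWithdraw, AMMWithdrawFlags } from "xrpl";

const ammWithdraw: AMMWithdraw = {
  TransactionType: "AMMWithdraw",
  Account: wallet.address,
  Asset: { currency: "XRP" },
  Asset2: { currency: tokenCode, issuer: issuerAddress },
  Flags: AMMWithdrawFlags.tfWithdrawAll, // burn all LP tokens and take both assets
};

const prepared = await client.autofill(ammWithdraw);
const signed = wallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log("AMMWithdraw tx:", result.result.hash);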
import xrpl from "xrpl";
const serverURL = "wss://s.altnet.rippletest.net:51233"; // For testnet
const wallet1 = xrpl.Wallet.fromSeed("s...");
const wallet2 = xrpl.Wallet.fromSeed("s...");
const main = async () => {
const client = new xrpl.Client(serverURL);
await client.connect();
const tx = {
TransactionType: "Payment",
Account: wallet1.classicAddress,
Destination: wallet2.classicAddress,
Amount: "1234" // drops
}
const result = await client.submitAndWait(tx, {
autofill: true,
wallet: wallet1,
});
console.log(result)
// end
client.disconnect()
};
main()
// ... define wallet1 before this point
const request = {
command: 'account_tx',
account: wallet1.classicAddress,
ledger_index_min: -1, // To get transactions from all ledgers
ledger_index_max: -1, // To get transactions up to the most recent ledger
limit: 10, // Limit the number of transactions (optional)
}
const response = await client.request(request)
console.log('Account Transactions:', response.result.transactions)
import xrpl from 'xrpl';
const serverURL = 'wss://s.altnet.rippletest.net:51233'; // For testnet
const walletAddress = 'r...' // address to watch
const main = async () => {
const client = new xrpl.Client(serverURL)
await client.connect()
// do something useful
const subscriptionRequest = {
command: 'subscribe',
accounts: [walletAddress]
};
await client.request(subscriptionRequest)
console.log(`Subscribed to transactions for account: ${walletAddress}`)
// Event listener for incoming transactions
client.on('transaction', (transaction) => {
console.log('Transaction:', transaction);
})
// Event listener for errors
client.on('error', (error) => {
console.error('Error:', error);
})
// end
// keep open
console.log('all done')
}
main()
setInterval(() => {
console.log('One second has passed!');
}, 1000)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract Counter {
uint256 private counter;
constructor() {
counter = 0; // Initialize counter to 0
}
function incrementAndGet() public returns (uint256) {
counter += 1; // Increment the counter
return counter; // Return the updated counter value
}
function getCounter() public view returns (uint256) {
return counter; // Return the current counter value
}
}
import tweetnacl from 'tweetnacl';
import { Buffer } from "buffer";
import { Client, Wallet, deriveAddress, Payment } from "xrpl";
import {
edwardsToMontgomeryPub,
edwardsToMontgomeryPriv,
} from "@noble/curves/ed25519";
const { box, randomBytes } = tweetnacl;
/**
* Encrypts a message using X25519 (Montgomery curve) for Diffie-Hellman key exchange
*
* @param message - Plain text message to encrypt
* @param recipientPublicKey - Recipient's Ed25519 public key (hex encoded)
* @param senderSecretKey - Sender's Ed25519 private key (hex encoded)
* @returns JSON string containing base64 encoded encrypted message and nonce
*
* Steps:
* 1. Convert hex keys to byte arrays
* 2. Generate random nonce for uniqueness
* 3. Convert message to bytes
* 4. Convert Ed25519 keys to X25519 (Montgomery) format for encryption
* 5. Encrypt using NaCl box with converted keys
* 6. Return encrypted message and nonce as base64 JSON
*/
export function encryptMessage(
message: string,
recipientPublicKey: string,
senderSecretKey: string,
): string {
const pubKeyBytes = Buffer.from(recipientPublicKey.slice(2), "hex");
const secretKeyBytes = Buffer.from(senderSecretKey.slice(2), "hex");
const nonce = randomBytes(box.nonceLength);
const messageUint8 = Buffer.from(message);
const pubKeyCurve = edwardsToMontgomeryPub(pubKeyBytes);
const privKeyCurve = edwardsToMontgomeryPriv(secretKeyBytes);
const encryptedMessage = box(messageUint8, nonce, pubKeyCurve, privKeyCurve);
return JSON.stringify({
encrypted: Buffer.from(encryptedMessage).toString("base64"),
nonce: Buffer.from(nonce).toString("base64"),
});
}
/**
* Sends an XRPL transaction containing an encrypted message in its memo field
*
* @param cypherMessage - The encrypted message to send
* @param myWallet - Sender's XRPL wallet for signing the transaction
* @param receiverPubKey - Recipient's public key to derive their XRPL address
* @returns Transaction result or undefined if error occurs
*
* Flow:
* 1. Connect to XRPL testnet
* 2. Create Payment transaction:
* - Minimal amount (1 drop)
* - Include encrypted message in memo field
* - Derive recipient's address from their public key
* 3. Prepare, sign and submit transaction
* 4. Wait for validation and return result
* 5. Always disconnect client when done
*
* Note: MemoData carries the hex encoded encrypted message
*/
async function sendTX(
cypherMessage: string,
myWallet: Wallet,
recipientPublicKey: string,
) {
const client = new Client("wss://clio.altnet.rippletest.net:51233/");
try {
await client.connect();
const receiverAddress = deriveAddress(recipientPublicKey);
const tx: Payment = {
TransactionType: "Payment",
Account: myWallet.classicAddress,
Destination: receiverAddress,
Amount: "10000000",
Memos: [
{
Memo: {
MemoData: Buffer.from(cypherMessage).toString("hex"),
},
},
],
};
const prepared = await client.autofill(tx);
const signed = myWallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
return result;
} catch (error) {
console.log(error);
} finally {
await client.disconnect();
}
}
async function main(): Promise<void> {
const recipientPublicKey = "Recipient public key goes here";
const myWallet = Wallet.fromSeed("Your seed goes here");
const cypherMessage = encryptMessage(
"Hello World",
recipientPublicKey,
myWallet.privateKey,
);
console.log("==============CYPHERED MESSAGE==============");
console.log(cypherMessage);
const tx = await sendTX(cypherMessage, myWallet, recipientPublicKey);
console.log("==============TRANSACTION==============");
console.log(tx);
console.log("all done");
}
main();
import { Client, Wallet } from "xrpl"
const client = new Client("wss://s.altnet.rippletest.net:51233")
const main = async () => {
console.log("lets get started...");
await client.connect();
// do something interesting here
await client.disconnect();
console.log("all done!");
};
main();
import { convertStringToHex, NFTokenMint, NFTokenMintFlags, NFTokenMintMetadata } from 'xrpl';
const { wallet, balance } = await client.fundWallet()
console.log('wallet', wallet)
const uri = "My super NFT URI"
const nftMintTx: NFTokenMint = {
TransactionType: "NFTokenMint",
Account: wallet.address,
URI: convertStringToHex(uri),
Flags: NFTokenMintFlags.tfBurnable + NFTokenMintFlags.tfTransferable, // Burnable in case no one is buying it
NFTokenTaxon: 0, // Unique identifier for the NFT type
}
const prepared = await client.autofill(nftMintTx)
const signed = wallet.sign(prepared)
const result = await client.submitAndWait(signed.tx_blob)
const nftId = (result.result.meta as NFTokenMintMetadata)?.nftoken_id as string
console.log("NFT ID " + nftId + " created")
import { NFTokenCreateOffer, NFTokenCreateOfferMetadata } from 'xrpl';
const nftCreateOfferTx: NFTokenCreateOffer = {
TransactionType: "NFTokenCreateOffer",
Account: wallet.address,
Destination: destination, // Optional: if specified, the offer is restricted to that address
NFTokenID: nftId,
Amount: "0", // 0 would represent a gift to someone
Flags: 1 // Sell offer
};
const prepared = await client.autofill(nftCreateOfferTx)
const signed = wallet.sign(prepared)
const result = await client.submitAndWait(signed.tx_blob)
const offerId = (result.result.meta as NFTokenCreateOfferMetadata)?.offer_id as string
console.log("Offer ID " + offerId + " created")function convertStringToHexPadded(str: string): string {
// Convert string to hexadecimal
let hex: string = "";
for (let i = 0; i < str.length; i++) {
const hexChar: string = str.charCodeAt(i).toString(16);
hex += hexChar;
}
// Pad with zeros to ensure it's 40 characters long
const paddedHex: string = hex.padEnd(40, "0");
return paddedHex.toUpperCase(); // Typically, hex is handled in uppercase
}
import { AccountSet, AccountSetAsfFlags } from "xrpl";
async function enableRippling({ wallet, client }: any) {
const accountSet: AccountSet = {
TransactionType: "AccountSet",
Account: wallet.address,
SetFlag: AccountSetAsfFlags.asfDefaultRipple,
};
const prepared = await client.autofill(accountSet);
const signed = wallet.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log(result);
console.log("Enable rippling tx: ", result.result.hash);
return;
}
import { TrustSet, TrustSetFlags, Payment } from "xrpl";
async function createToken({ issuer, receiver, client, tokenCode }: any) {
// Create the trust line to send the token
const trustSet: TrustSet = {
TransactionType: "TrustSet",
Account: receiver.address,
LimitAmount: {
currency: tokenCode,
issuer: issuer.address,
value: "500000000", // 500M tokens
},
Flags: TrustSetFlags.tfClearNoRipple,
};
console.log(trustSet);
// Receiver opening trust lines
const preparedTrust = await client.autofill(trustSet);
const signedTrust = receiver.sign(preparedTrust);
const resultTrust = await client.submitAndWait(signedTrust.tx_blob);
console.log(resultTrust);
console.log("Trust line issuance tx result: ", resultTrust.result.hash);
// Send the token to the receiver
const sendPayment: Payment = {
TransactionType: "Payment",
Account: issuer.address,
Destination: receiver.address,
Amount: {
currency: tokenCode,
issuer: issuer.address,
value: "200000000", // 200M tokens
},
};
console.log(sendPayment);
const preparedPayment = await client.autofill(sendPayment);
const signedPayment = issuer.sign(preparedPayment);
const resultPayment = await client.submitAndWait(signedPayment.tx_blob);
console.log(resultPayment);
console.log("Transfer issuance tx result: ", resultPayment.result.hash);
return;
}
export default createToken;
// ... previous code
// create Token
await createToken({
issuer,
receiver,
client,
tokenCode: convertStringToHexPadded("LUC"),
});
mkdir -p ~/projects
cd ~/projects
git clone https://github.com/XRPLF/rippled.git
cd rippled
git checkout develop
conan config install conan/profiles/ -tf $(conan config home)/profiles/
conan profile show
conan profile detect
conan remote add --index 0 xrplf https://conan.ripplex.io
rippled/
├── bin/ # Compiled executables
├── build/ # Build artifacts and temporary files
├── src/ # Source code
├── CMakeLists.txt # CMake configuration
└── README.md
# Create the build directory
mkdir -p build && cd build
# Install dependencies via Conan
conan install .. --output-folder . --build missing --settings build_type=Debug
# Pass the CMake variable CMAKE_BUILD_TYPE and make sure it matches the one of the build_type settings you chose in the previous step
cmake -DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -Dxrpld=ON -Dtests=ON ..
# Build Rippled
cmake --build . --parallel 10
ls build/rippled
./build/rippled --version
import { AMMCreate, AMMDeposit, AMMDepositFlags } from "xrpl";
import { OfferCreate, OfferCreateFlags } from "xrpl";
async function createAMM({ issuer, receiver, client, tokenCode }: any) {
console.log("create AMM", { issuer, receiver, tokenCode });
let createAmm: AMMCreate = {
TransactionType: "AMMCreate",
Account: receiver.address,
TradingFee: 600,
Amount: {
currency: tokenCode,
issuer: issuer.classicAddress,
value: "2000000", // 2M tokens
},
Amount2: "50000000", // 50 XRP in drops
};
console.log(createAmm);
const prepared = await client.autofill(createAmm);
const signed = receiver.sign(prepared);
const result = await client.submitAndWait(signed.tx_blob);
console.log(result);
console.log("Create amm tx: ", result.result.hash);
return;
}
export default createAMM;
const main = async () => {
console.log("lets get started...");
await client.connect();
// retrieve wallets
const issuer = Wallet.fromSeed(issuerSeed);
const receiver = Wallet.fromSeed(receiverSeed);
// enable rippling
await enableRippling({ wallet: issuer, client });
// create Token
await createToken({
issuer,
receiver,
client,
tokenCode: convertStringToHexPadded("LUC"),
});
// create AMM
await createAMM({
issuer,
receiver,
client,
tokenCode: convertStringToHexPadded("LUC"),
});
await client.disconnect();
console.log("all done!");
};
Getting started with the EVM sidechain:
EVM Docs
Peersyst Docs
Faucet & Bridge
Explorer
Network Name: XRPL EVM Sidechain
New RPC URL: https://rpc-evm-sidechain.xrpl.org
Chain ID: 1440002
Currency Symbol: XRP
Block Explorer: https://evm-sidechain.xrpl.org
You will need to use the bridge tool to fund the EVM sidechain account https://bridge.devnet.xrpl.org/
There are many ways, here are a few...
https://github.com/XRPL-Commons/CyprusJan2024EVM
https://gist.github.com/lucbocahut/4f9f6c38e7140e3ecd7399703bfb03d5 https://wizard.openzeppelin.com/#erc20 -> then link into Remix and publish with MetaMask
https://github.com/XRPL-Commons/xrpl-commons-january-2024/tree/main/apps/lending-contract
https://github.com/XRPL-Commons/xrpl-commons-january-2024/tree/main/apps/lending-frontend
https://github.com/XRPL-Commons/Jan2024_web3
This workshop is based on Florent Uzio's Coding on the XRPL Ledger series.
Locate the Code
Find Payment.cpp in the codebase
Identify the class declaration
Analyze the Three Phases
Document what checks occur in preflight()
Document what checks occur in preclaim()
Document what state changes occur in doApply()
A technical document with:
File path and line numbers
Description of each phase
Code snippets with explanations
At least one example of an error condition that would cause the transaction to fail
Run Rippled in standalone mode and trace a Payment transaction from submission to ledger closure.
Setup
Start Rippled in standalone mode
Submit a Payment transaction using RPC
Manually close the ledger
Documentation
Document the complete transaction lifecycle with timestamps
Capture and explain the transaction JSON
Take a screenshot of the transaction result
Identify which files in the codebase handle each phase:
A written report with:
Commands used
Transaction details (hash, account, destination, amount)
Screenshots of the transaction submission and result
List of relevant source files with brief explanations of their roles
The first module of the XRPL Core Dev Bootcamp is dedicated to setting up the development environment and compiling Rippled, the software at the core of the XRP Ledger.
This module is your hands-on guide to transforming the open-source code into a functional, running server. You will gain a practical understanding of the client's internal structure by mastering the compilation, configuration, and launch of Rippled in standalone mode.
Setup: Configure a C++ development environment (macOS/Ubuntu).
Tools: Master Conan and CMake for the build process.
Build: Successfully compile the rippled binary from source.
Launch: Run and configure the server in standalone mode.
Preparing Your Development Machine and Mastering the C++ Toolchain
Set up all the essential prerequisites for your development environment from installing compilers and dependencies to mastering modern C++ build tools.
You’ll configure Clang/G++, Homebrew or apt, and install Python and Node.js, before diving into Conan for dependency management and CMake for build configuration, ensuring full compliance with the C++20 standard.
Generating the Rippled Binary
Step-by-step guide to the compilation: dependency installation, CMake configuration, and running the final build command to create the executable.
Running Rippled in Standalone Mode and Interacting Locally
Run Rippled in standalone mode for local testing without connecting to the XRPL network. Configure your environment, launch the server, and interact with it via the API, WebSocket, or tools like XRPL Explorer and Playground to test transactions and ledger operations in isolation.
Key Topics: Standalone mode, configuration, API (RPC/WebSocket), local testing, XRPL Explorer, Playground
We often learn best through hands-on practice, which is why we offer an exercise that involves configuring and launching Rippled on the XRPL mainnet, then analyzing the logs to understand the node's behavior during initial synchronization.
If you have any questions about the homework or would like us to review your work, feel free to contact us.
➡️ Next Module: Rippled II -
When two nodes in the XRP Ledger overlay network establish a connection, they must verify each other's identity, agree on communication protocols, and establish mutual trust. This process, called the handshake, is critical for network security and interoperability.
The handshake prevents unauthorized nodes from joining the network, ensures protocol compatibility between peers, and establishes the cryptographic foundation for secure communication. Understanding this process is essential for debugging connection issues and implementing protocol upgrades.
The handshake accomplishes several essential goals:
Authentication: Each node proves its identity using cryptographic signatures. This prevents impersonation attacks where a malicious node pretends to be a trusted validator.
Protocol Negotiation: Nodes agree on the protocol version and features they will use for communication. This enables the network to evolve while maintaining backward compatibility.
Trust Establishment: Both parties verify that the other is a legitimate participant running compatible software. This ensures network integrity.
Capability Exchange: Nodes share information about their supported features, enabling peers to optimize their communication strategies.
Outbound peer initiates a TLS connection, then sends an HTTP/1.1 request with URI "/" and uses the HTTP/1.1 Upgrade mechanism with custom headers.
Both sides verify the provided signature against the session's unique fingerprint.
If signature check fails, the link is dropped.
PeerImp::run():
Ensures execution on the correct strand for thread safety.
Parses handshake headers ("Closed-Ledger", "Previous-Ledger").
Stores parsed ledger hashes in peer state.
If inbound, calls doAccept(). If outbound, calls doProtocolStart().
PeerImp::doAccept():
Asserts read buffer is empty.
Logs the accept event.
Generates shared value for session.
Logs protocol and public key.
The handshake protocol establishes secure, authenticated connections between XRP Ledger nodes. Through TLS encryption, cryptographic signatures, and careful protocol negotiation, it ensures that only legitimate nodes can participate in the network while maintaining compatibility across different software versions. Understanding this process is essential for diagnosing connection issues and implementing protocol enhancements.
Before diving into Rippled’s architecture or contributing to the codebase, it’s essential to prepare a clean and consistent development environment. Rippled is a high-performance C++ application with multiple system dependencies from compiler toolchains to build systems and scripting utilities that must be properly configured to ensure smooth compilation and runtime behavior.
This section provides a step-by-step setup guide for macOS, focusing on the tools and configurations required to compile Rippled from source. You’ll install Node.js for build tooling, configure your compiler (Clang) and Xcode environment, and prepare Python dependencies used in the build process.
By the end of this section, your environment will be fully ready to build and run Rippled locally, following the same structure used by production and continuous integration setups.
Install Node.js via nvm for easy version management:
macOS with administrator rights
Apple account (to download certain versions of Xcode)
Stable internet connection for dependencies
Download Xcode from Apple Developer Downloads
Extract the .xip file and rename it (e.g., Xcode_16.2.app)
Move it to /Applications and set it as the default toolchain:
Verify the installation:
You should see something like this for clang:
Apple clang version 16.0.0 (clang-1600.0.26.3)
Ubuntu with administrator rights
Stable internet connection
In this section we will create your first payment transaction.
This chapter explains the NodeStore storage abstraction and available backends in Rippled. You will learn how to choose between RocksDB, NuDB, and testing backends, configure them for optimal performance, and understand the standardized encoding format that ensures backend interoperability. Practical guidance on tuning and migration is also provided to help maintain reliable and efficient ledger storage.
The overlay network is the communication backbone of the XRP Ledger: without it validators couldn't share proposals, transactions wouldn't propagate, and consensus would be impossible.
In this module you dissect how rippled maintains a resilient mesh of peer connections (architecture, lifecycle, handshake, relaying, discovery) and then apply that knowledge in a practical cryptography exercise: implementing a quantum‑resistant (Dilithium) signature amendment. By the end you will both understand the networking substrate and extend the protocol with post‑quantum signing capability.
This homework deepens your understanding of how Rippled verifies transaction signatures. You will explore the codebase to trace the signature verification pipeline and analyze cryptographic operations step by step.
Format: Written report (PDF or Markdown) including diagrams, code snippets, and explanations.
Line numbers where the function is defined
2-3 specific validation checks performed
Code snippet (5-10 lines) showing a key validation
Submission
Validation (Preflight/Preclaim)
Application (DoApply)
Ledger closure
This deep dive is organized into focused topics, each exploring a critical component of the Overlay architecture. Click on any topic below to dive deeper into the concepts, codebase structure, and practical implementations.
By completing this module, you will be able to:
Understand the overlay network architecture and its role in the XRP Ledger
Trace the complete lifecycle of a peer connection from discovery to disconnection
Comprehend handshake protocols and secure connection establishment
Analyze message relaying mechanisms and squelching optimizations
Navigate the Overlay codebase and locate key networking components
Understand thread safety patterns in concurrent network operations
Debug peer connectivity issues using monitoring and logging tools
These skills are essential for understanding network resilience, optimizing peer selection algorithms, and contributing to the networking layer of Rippled.
Understanding the Peer-to-Peer Network Foundation
Learn how the overlay network forms a connected graph of Rippled nodes, enabling distributed communication independent of the underlying physical network infrastructure. Understand the relationship between Overlay, OverlayImpl, and the Peer abstraction.
Key Topics: Network topology, overlay design principles, connection graphs, network resilience
Codebase: src/xrpld/overlay/
From Discovery to Disconnection
Master the complete journey of a peer connection, from initial discovery through establishment, activation, maintenance, and graceful termination. Understand how resources are allocated and cleaned up at each stage.
Key Topics: Connection states, lifecycle management, resource allocation, cleanup procedures
Codebase: src/xrpld/overlay/detail/
Secure Connection Establishment
Discover how nodes authenticate each other, negotiate protocol versions, and establish trust during the connection handshake. Learn about the HTTP upgrade mechanism and cryptographic verification.
Key Topics: TLS handshake, HTTP upgrade, protocol negotiation, signature verification
Codebase: src/xrpld/overlay/detail/ConnectAttempt.cpp, src/xrpld/overlay/detail/PeerImp.cpp
Propagating Information Across the Network
Understand how messages propagate through the overlay network and how squelching prevents network overload. Learn about the HashRouter's role in preventing duplicate message processing.
Key Topics: Message broadcast, relay optimization, squelching mechanism, HashRouter integration
Codebase: src/xrpld/overlay/detail/OverlayImpl.cpp, src/xrpld/overlay/Slot.h
Finding and Connecting to the Network
Explore how nodes discover other peers and bootstrap their connections when joining the network. Learn about PeerFinder's role in managing connection slots, cache hierarchies, and endpoint quality assessment.
Key Topics: Bootstrapping stages, Livecache vs Bootcache, slot allocation, fixed peers, endpoint exchange
Codebase: src/xrpld/peerfinder/, src/xrpld/overlay/detail/OverlayImpl.cpp
Read the draft amendment proposal before coding: Quantum-Resistant Signatures XLS →
Implement the Dilithium (post-quantum) signature support and related features: Homework: Building Quantum-Resistant Signatures →
(Optional) Summarize the XLS in your report before showing implementation steps.
Review and Reinforce Your Understanding
Take a few minutes to review key concepts from this module.
From key generation and transaction signing to hash functions and secure memory practices, this short quiz will help you confirm your understanding of XRPL’s cryptographic foundations.
If you have any questions about the homework or would like us to review your work, feel free to contact us.
➡️ Next Module: Communication I - Understanding XRPL(d) RPC Architecture →
Test: Interact with your local ledger via API and test tools.
Calls overlay_.activate(shared_from_this()) to register the peer as active.
Prepares and sends handshake response.
On successful write, calls doProtocolStart().
Let's add some wallet creation logic; we might as well create two wallets to have some fun.
At this point, we should have two wallets with balances of 100 XRP.
Of course, we will want to transfer some XRP from one account to another, because that's why we're here.
Let's log it to ensure everything is correct, and then submit it to the ledger:
When you run this part you should get the following log output:
The most important is TransactionResult: 'tesSUCCESS'.
You can verify the transaction on the ledger, as it is readable by anyone at this point. Go to https://testnet.xrpl.org and paste the hash value (in my example, CC8241E3C4B57ED9183D6031F4E370AC13B6CE9E2332BD7AF77C25BD6ADFA4F6); you should see it. Alternatively, check the wallet addresses as well (remember you are using the public key).
I prefer to check the balances programmatically to ensure everything has gone according to plan. Adding this code will do that:
Here is the entire index.ts file at this point:
Don't you find it limiting that the faucet only gives out 100 XRP? Why not create a function to print money to an address?
You have all the necessary building blocks:
You will need to create and fund new wallets with await client.fundWallet()
Remember, you can only transfer XRP in excess of the reserve, so a maximum of 90 XRP at a time for brand-new accounts, unless you are brave enough to use a new transaction type, AccountDelete (https://xrpl.org/docs/references/protocol/transactions/types/accountdelete/). However, be warned that you will have to wait a block or two before the account can be deleted.
The final function signature should look something like this: `await printMoney({ destinationWallet, client })`.
I can't wait to see who writes this faster than ChatGPT!
Here is the popular printMoney function
When to use:
General-purpose validators
Balanced performance and simplicity
Standard configurations
Configuration:
Performance:
Write throughput: 10,000-50,000 objects/sec
Compression: 50-70% disk space savings
Good for: Most deployments
When to use:
High-transaction-volume networks
Maximum write throughput needed
Modern SSD storage
Configuration:
Performance:
Write throughput: 50,000-200,000 objects/sec
Optimized for: Sequential writes
Good for: Archive nodes, heavily-used validators
Memory: In-memory, non-persistent (testing only)
Null: No-op backend (unit tests)
The standardized encoding format enables backend independence:
Structure (from Chapter 6):
This format is handled transparently by the Database layer, but understanding it is important (see the sketch after this list) for:
Backend implementation
Data corruption diagnosis
Migration between backends
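As a rough illustration, the layout can be decoded in a few lines of TypeScript (a sketch; the actual NodeObjectType enumeration lives in the rippled sources and is not reproduced here):

// Decode the standardized NodeStore encoding described above:
// bytes 0-7 are reserved, byte 8 holds the type, bytes 9+ the payload.
function parseNodeObject(blob: Buffer): { type: number; payload: Buffer } {
  if (blob.length < 9) {
    throw new Error("blob too short to contain a NodeStore header");
  }
  const type = blob.readUInt8(8); // NodeObjectType enumeration value
  const payload = blob.subarray(9); // serialized data payload
  return { type, payload };
}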
To migrate from one backend to another:
See the following sections of Chapter 6:
"Backend Abstraction" - Interface design
"Supported Backends" - Feature comparison
"Data Encoding Format" - Serialization details
For implementation details, consult:
rippled/src/xrpld/nodestore/Backend.h
rippled/src/xrpld/nodestore/Database.h
rippled/src/xrpld/nodestore/backend/*Factory.cpp
Create this file in your project and import it as needed when you are processing transaction data.
Here is an example of a full-fledged transaction listener that can parse the memos from the transaction we created above.
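A minimal sketch, written against xrpl.js v2 (v3 delivers the transaction as tx_json rather than transaction), with the address to watch left as a placeholder:

import xrpl from "xrpl";
import { Buffer } from "buffer";

const serverURL = "wss://s.altnet.rippletest.net:51233"; // For testnet
const walletAddress = "r..."; // address to watch

const main = async () => {
  const client = new xrpl.Client(serverURL);
  await client.connect();
  await client.request({ command: "subscribe", accounts: [walletAddress] });

  client.on("transaction", (event: any) => {
    const tx = event.transaction ?? event.tx_json; // v2 vs v3 payload shape
    for (const { Memo } of tx?.Memos ?? []) {
      const decoded = Buffer.from(Memo.MemoData, "hex").toString("utf8");
      try {
        console.log("memo:", JSON.parse(decoded)); // the JSON memo from above
      } catch {
        console.log("memo (not JSON):", decoded);
      }
    }
  });
};

main();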
// Create the JSON object you want to include in the memo
const memoData = {
invoice_id: '12345',
description: 'Coffee and Milk',
date: '2024-07-02'
};
// Convert JSON object to a string and then to hex
const memoJson = JSON.stringify(memoData);
const memoHex = Buffer.from(memoJson, 'utf8').toString('hex');
// Create the Memo field
const memo = {
Memo: {
MemoData: memoHex
}
};
const tx = {
TransactionType: "Payment",
Account: wallet1.classicAddress,
Destination: wallet2.classicAddress,
Amount: "1234",
Memos: [memo]
};
const result = await client.submitAndWait(tx, {
autofill: true,
wallet: wallet1,
});
console.log(result)
Balance and Loan Tracking: Illustrates keeping track of user balances and loan amounts.
Events: Shows how to log contract actions for transparency using events.
Modifiers: Demonstrates using function modifiers for access control and logical checks.
These contracts can be used to teach the basic functionality associated with each concept in the original SimpleBank contract.
A contract where only the owner can execute certain functions.
A simple contract where users can deposit Ether, and only the owner can withdraw it.
A contract that tracks balances and loans for users.
A contract that demonstrates emitting events to log actions such as deposits and withdrawals.
A contract that demonstrates the use of function modifiers for access control.
# Install nvm (if not already installed)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Install and use the latest LTS version of Node.js
nvm install --lts
nvm use --lts
clang --version
sudo mv Xcode_16.2.app /Applications/
sudo xcode-select -s /Applications/Xcode_16.2.app/Contents/Developer
export DEVELOPER_DIR=/Applications/Xcode_16.2.app/Contents/Developer
clang --version
git --version
# Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Essential dependencies
brew update
brew install xz pyenv
# Install Python 3.11 via pyenv
pyenv install 3.11.13
pyenv global 3.11.13
eval "$(pyenv init -)"
# Install Conan and CMake
pip install 'conan>2.16'
pip install 'cmake>3.21'
apt update
apt install --yes curl git libssl-dev pipx python3.11 python3-pip make g++-11 libprotobuf-dev protobuf-compiler
# Install CMake
curl -LO "https://github.com/Kitware/CMake/releases/download/v3.25.1/cmake-3.25.1.tar.gz"
tar -xzf cmake-3.25.1.tar.gz
cd cmake-3.25.1
./bootstrap --parallel=$(nproc)
make --jobs $(nproc)
make install
cd ..
# Install Conan
pipx install 'conan>2.16'
pipx ensurepath
import { Client, Wallet } from "xrpl"
const client = new Client("wss://s.altnet.rippletest.net:51233")
const main = async () => {
console.log("lets get started...");
await client.connect();
// do something interesting here
await client.disconnect();
console.log("all done!");
};
main();
console.log('lets fund 2 accounts...')
const { wallet: wallet1, balance: balance1 } = await client.fundWallet()
const { wallet: wallet2, balance: balance2 } = await client.fundWallet()
console.log('wallet1', wallet1)
console.log({
balance1,
address1: wallet1.address, //wallet1.seed
balance2,
address2: wallet2.address
})
const tx:xrpl.Payment = {
TransactionType: "Payment",
Account: wallet1.classicAddress,
Destination: wallet2.classicAddress,
Amount: xrpl.xrpToDrops("13")
};

console.log('submitting the payment transaction... ', tx)
const result = await client.submitAndWait(tx, {
autofill: true,
wallet: wallet1,
});
console.log(result)

{
id: 28,
result: {
Account: 'rGGJ71dbSY5yF9BJUDSHsDPSKDhVGGWzpY',
Amount: '13000000',
DeliverMax: '13000000',
Destination: 'rBAbErfjkwFWFerLfAHmbi3qgbRfuFWxEN',
Fee: '12',
Flags: 0,
LastLedgerSequence: 232639,
Sequence: 232617,
SigningPubKey: 'ED13EBC7F89545435E82DC19B2C38AF5ECF39CE099C8FA647280C71CD6FA5BEF3B',
TransactionType: 'Payment',
TxnSignature: '66D9338B3C93D212F89BB4F6731E7F38A01052F8ED94452D7D2A9BB0B0C6130A5708390E9B6233A98A3B28A1922E57E37317609727409B3D289C456BB3250E08',
ctid: 'C0038CAD00000001',
date: 767648991,
hash: 'CC8241E3C4B57ED9183D6031F4E370AC13B6CE9E2332BD7AF77C25BD6ADFA4F6',
inLedger: 232621,
ledger_index: 232621,
meta: {
AffectedNodes: [Array],
TransactionIndex: 0,
TransactionResult: 'tesSUCCESS',
delivered_amount: '13000000'
},
validated: true
},
type: 'response'
}

console.log({
'balance 1': await client.getBalances(wallet1.classicAddress),
'balance 2': await client.getBalances(wallet2.classicAddress)
})

import xrpl from "xrpl"
const client = new xrpl.Client("wss://s.altnet.rippletest.net:51233")
const main = async () => {
console.log("lets get started...");
await client.connect();
// do something interesting here
console.log('lets fund 2 accounts...')
const { wallet: wallet1, balance: balance1 } = await client.fundWallet();
const { wallet: wallet2, balance: balance2 } = await client.fundWallet();
console.log('wallet1', wallet1)
console.log({
balance1,
address1: wallet1.address, //wallet1.seed
balance2,
address2: wallet2.address
});
const tx:xrpl.Payment = {
TransactionType: "Payment",
Account: wallet1.classicAddress,
Destination: wallet2.classicAddress,
Amount: xrpl.xrpToDrops("13")
};
console.log('submitting the payment transaction... ', tx)
const result = await client.submitAndWait(tx, {
autofill: true,
wallet: wallet1,
});
console.log(result)
console.log({
'balance 1': await client.getBalances(wallet1.classicAddress),
'balance 2': await client.getBalances(wallet2.classicAddress)
})
await client.disconnect();
console.log("all done!");
};
main();

import xrpl from "xrpl";
const printMoney = async ({ destinationWallet, client }: any) => {
const { wallet: wallet1, balance: balance1 } = await client.fundWallet();
console.log("wallet1", wallet1);
const tx: xrpl.Payment = {
TransactionType: "Payment",
Account: wallet1.classicAddress,
Destination: destinationWallet.classicAddress,
Amount: xrpl.xrpToDrops("90"),
};
console.log("submitting the payment transaction... ", tx);
const result = await client.submitAndWait(tx, {
autofill: true,
wallet: wallet1,
});
console.log(result);
console.log({
"balance 2": await client.getBalances(destinationWallet.classicAddress),
});
};
export default printMoney;

[node_db]
type = RocksDB
path = /data/rippled.db
cache_size = 256

[node_db]
type = NuDB
path = /data/nudb

Bytes 0-7: Reserved (set to zero)
Byte 8: Type (NodeObjectType enumeration)
Bytes 9+: Serialized data payload

[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
cache_size = 256 # Cache size in MB
cache_age = 60 # Age limit in seconds
# Performance tuning
compression = true # Enable compression
block_cache_size = 256 # Block cache in MB
write_buffer_size = 64 # Write buffer in MB
max_open_files = 100

[node_db]
type = NuDB
path = /var/lib/rippled/db/nudb
# NuDB specific
key_size = 32 # Key size (always 32 for SHA256)
block_size = 4096 # Block size for writes

# 1. Stop the server
systemctl stop rippled
# 2. Export from current backend
rippled --export current_db export.json
# 3. Update configuration
# Change [node_db] type in rippled.cfg
# 4. Import to new backend
rippled --import export.json --ledger-db new_db
# 5. Restart server
systemctl start rippled

export const parseMemo = (memos) => {
let report = {};
if (memos && memos.length > 0) {
for (const memo of memos) {
if (memo.Memo.MemoData) {
// Decode the hexadecimal memo data to a string
const memoDataHex = memo.Memo.MemoData;
const memoDataJson = Buffer.from(memoDataHex, "hex").toString("utf8");
// Parse the JSON string into a JavaScript object
const memoDataObject = JSON.parse(memoDataJson);
// console.log('Decoded memo data:', memoDataObject)
report = { ...report, ...memoDataObject };
} else {
console.log("No MemoData found.");
}
}
return report;
} else {
console.log("No memos found in the transaction.");
}
};
export default parseMemo;

import xrpl from "xrpl";
import { parseMemo } from "./parseMemo";
const serverURL = "wss://s.altnet.rippletest.net:51233"; // For testnet
const walletAddress = "r...";
const main = async () => {
const client = new xrpl.Client(serverURL);
await client.connect();
// do something useful
const subscriptionRequest = {
command: "subscribe",
accounts: [walletAddress],
};
await client.request(subscriptionRequest);
console.log(`Subscribed to transactions for account: ${walletAddress}`);
// Event listener for incoming transactions
client.on("transaction", (transaction) => {
console.log("Transaction:", transaction);
const parsedMemos = parseMemo(transaction.transaction.Memos);
console.log("Parsed memo:", parsedMemos);
});
// Event listener for errors
client.on("error", (error) => {
console.error("Error:", error);
});
// end
// keep open
console.log("listening...");
};
main()

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract OwnerContract {
address public owner;
constructor() {
owner = msg.sender; // Set the owner as the account that deploys the contract
}
modifier onlyOwner() {
require(msg.sender == owner, "Only the owner can call this function");
_;
}
function changeOwner(address newOwner) public onlyOwner {
owner = newOwner;
}
function ownerOnlyFunction() public onlyOwner {
// Only owner can execute this function
}
}

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract DepositWithdraw {
address public owner;
mapping(address => uint256) public balances;
constructor() {
owner = msg.sender;
}
function deposit() public payable {
balances[msg.sender] += msg.value;
}
function withdraw() public {
require(balances[msg.sender] > 0, "No balance to withdraw");
uint256 amount = balances[msg.sender];
balances[msg.sender] = 0;
payable(msg.sender).transfer(amount);
}
function ownerWithdraw() public {
require(msg.sender == owner, "Only the owner can withdraw all funds");
payable(owner).transfer(address(this).balance);
}
}

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract LoanTracker {
mapping(address => uint256) public balances;
mapping(address => uint256) public loans;
function deposit() public payable {
balances[msg.sender] += msg.value;
}
function takeLoan(uint256 amount) public {
require(amount > 0, "Loan must be greater than 0");
loans[msg.sender] += amount;
}
function repayLoan() public payable {
require(loans[msg.sender] > 0, "No outstanding loan");
require(msg.value == loans[msg.sender], "Must repay exact loan amount");
loans[msg.sender] = 0;
}
function getLoanAmount() public view returns (uint256) {
return loans[msg.sender];
}
}

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract EventLogger {
event DepositMade(address indexed user, uint256 amount);
event WithdrawalMade(address indexed user, uint256 amount);
mapping(address => uint256) public balances;
function deposit() public payable {
balances[msg.sender] += msg.value;
emit DepositMade(msg.sender, msg.value);
}
function withdraw(uint256 amount) public {
require(balances[msg.sender] >= amount, "Insufficient balance");
balances[msg.sender] -= amount;
payable(msg.sender).transfer(amount);
emit WithdrawalMade(msg.sender, amount);
}
}

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract ModifierExample {
address public owner;
constructor() {
owner = msg.sender;
}
modifier onlyOwner() {
require(msg.sender == owner, "You are not the owner");
_;
}
function restrictedFunction() public onlyOwner {
// Only the owner can call this function
}
function openFunction() public {
// Anyone can call this function
}
}

Each peer maintains multiple outgoing and optional incoming connections to other peers, forming a connected directed graph of nodes (vertices: rippled instances, edges: persistent TCP/IP connections).
This network of peers is layered on top of the public and private Internet, forming an overlay network.
Each connection is represented by a Peer object. The Overlay manager establishes, receives, and maintains connections to peers. Protocol messages are exchanged between peers and serialized using Google Protocol Buffers.
The OverlayImpl class manages the peer-to-peer overlay network, handling peer connections, peer discovery, message relaying, and network health monitoring (OverlayImpl.cpp).
The Overlay interface defines the contract for peer-to-peer network management. It abstracts away the complexity of connection handling, allowing other subsystems to focus on their responsibilities without understanding networking details.
This design follows the interface segregation principle: consumers of the Overlay interact with a minimal, focused API rather than the full complexity of the networking implementation.
The OverlayImpl class provides the actual implementation of overlay functionality. It manages the complete lifecycle of peer connections and coordinates with multiple subsystems including the Resource Manager, PeerFinder, and HashRouter.
The class maintains several data structures for efficient peer management. The m_peers map tracks peers by their connection slots, while ids_ provides fast lookup by peer ID. The list_ container tracks all active connection attempts and established connections as "child" objects.
Why use a recursive mutex? Networking operations often involve callbacks that may trigger additional operations requiring the same lock. A recursive mutex allows the same thread to acquire the lock multiple times, preventing deadlocks in these scenarios.
Peer (Peer.h):
Abstract base class representing a network peer.
Specifies pure virtual methods for peer communication, transaction queue management, resource charging, feature support, ledger and transaction set queries, and status reporting.
Methods include send, getRemoteAddress, id, cluster, getNodePublic, json, supportsFeature, and more.
PeerImp:
Implements the core logic for a peer connection.
Manages state, communication, protocol handling, message sending/receiving, resource usage, protocol versioning, compression, transaction and ledger synchronization, and feature negotiation.
Tracks peer metrics, manages transaction and ledger queues, and handles protocol-specific messages.
OverlayImpl (OverlayImpl.h, OverlayImpl.cpp):
Main implementation of the Overlay interface.
Manages peer connections, message broadcasting and relaying, peer discovery, resource management, and network metrics.
Handles the lifecycle of peer objects, tracks network traffic, manages timers and asynchronous operations, and provides JSON-based status and metrics reporting.
Supports squelching (rate-limiting) of validators, manages manifests, and integrates with the server handler and resource manager.
The overlay architecture provides a robust foundation for peer-to-peer communication in the XRP Ledger. By abstracting network complexity behind clean interfaces and managing connections through a centralized Overlay manager, the system achieves both flexibility and reliability. Understanding this architecture is essential for anyone working on network optimization, debugging connectivity issues, or implementing new peer-to-peer features.
Perform a hands-on exploration of Rippled’s transaction processing code and trace how a transaction’s signature is verified from submission to final cryptographic validation.
Setup
Optionally run Rippled in standalone mode for testing
Select a transaction to analyze (e.g., Payment, OfferCreate)
Trace the transaction through the signature verification process in the codebase
Code Exploration
Identify the entry point: Transactor::apply()
Document the call to preflight() and its parameters
Follow the verification chain to checkSign() functions
Documentation
Create a call chain diagram from Transactor::apply() to the final verify() function
Prepare a function analysis table including:
A written report containing:
Call chain diagram
Function analysis table
Explanation of algorithm handling (secp256k1 vs ed25519)
Error handling details
Code snippets illustrating the verification process
Use grep or your IDE to locate function definitions
Follow #include statements to understand dependencies
Check return types to understand success/failure patterns
Use git log to examine history of cryptographic changes
By completing this homework, you should be able to:
Navigate Rippled’s transaction processing code
Understand the signature verification pipeline
Identify where cryptographic operations occur
Explain the difference between permission checks and signature verification
Pathfinding on the XRP Ledger (XRPL) enables efficient payment routing by discovering the optimal path for transferring value between accounts. This feature facilitates cross-currency transactions, ensuring payments can be made even when the sender and receiver use different currencies.
Pathfinding is a feature on the XRPL that identifies the most cost-effective route for payments involving multiple currencies. It leverages the decentralized exchange (DEX) on the XRPL, which supports cross-currency transactions by converting currencies seamlessly using order books.
Pathfinding ensures that payments are completed with minimal fees and optimal exchange rates, even if the sender and receiver operate in entirely different currencies. This makes XRPL a robust platform for international remittances and currency exchanges.
When initiating a cross-currency transaction, XRPL scans its order books and liquidity pools to identify the best combination of offers to complete the payment. This process includes:
Finding Direct Offers: Direct currency pairs between the sender and receiver.
Building Complex Paths: Identifying intermediate steps involving multiple currencies if no direct offer is available.
Efficient Cross-Currency Payments Pathfinding automatically converts currencies using the best available rates, eliminating the need for intermediaries.
Optimized Transaction Costs By leveraging competitive offers on the XRPL’s DEX, pathfinding minimizes fees associated with currency conversion.
Global Reach Enables seamless payments across borders, even in currencies that are not natively linked.
Automated Exchange
To use the pathfinding feature, ensure the following:
Both the sender and receiver accounts are active on XRPL.
Sufficient XRP balance is maintained to cover transaction fees.
The sender has trustlines established for any non-XRP currencies involved in the payment.
The currencies involved in the transaction must be active on the DEX with some trading activity. If you are using a testnet, you may need to create an active market using bots to simulate trading activity.
Begin by installing the xrpl library: npm i xrpl
To initiate a cross-currency payment, start by retrieving available paths with the path_find request.
Alternative Example 2: Specified Destination Amount
Use this in place of the previous request to see what you need to spend in order for the destination address to receive a specified amount. This requires a list of source currencies.
Once a suitable path is identified, construct the Payment transaction.
For non-XRP currencies, both the sender and receiver must establish trustlines with the respective issuers.
Pathfinding depends on the liquidity of the currency pairs in the XRPL’s order books. Ensure sufficient market depth for transactions.
Each path incurs minimal fees, determined by the order book offers and XRPL’s transaction costs.
The conversion rate depends on the offers available on the DEX. Using higher-liquidity pairs generally results in better rates.
Pathfinding is a cornerstone of XRPL’s utility in financial ecosystems that require:
International Remittances Facilitates seamless currency exchanges for cross-border payments.
Decentralized Finance (DeFi) Powers complex financial workflows by automating currency conversions.
Multi-Currency Wallets Enhances user experience by enabling payments in any supported currency.
Pathfinding on XRPL underscores the ledger’s commitment to efficiency and accessibility in global financial transactions. By leveraging its DEX and automated routing, XRPL ensures that payments are seamless, cost-effective, and scalable across diverse currencies.
Ticket Feature on XRPL
Tickets on the XRPL allow you to reserve sequence numbers for transactions, enabling asynchronous transaction submission. This feature is particularly useful in multi-signature setups, where sequence management can become challenging due to independent signers.
Tickets are a feature on the XRP Ledger that allow you to preallocate sequence numbers for future transactions. This helps avoid sequence number conflicts, especially in scenarios where multiple parties (like signers in a MultiSignature setup) need to sign a transaction.
In standard transactions, the Sequence field determines the order of execution. When signers asynchronously sign transactions, the sequence number may become outdated. Tickets provide a solution by reserving sequence numbers in advance, ensuring transactions can proceed without conflict.
Tickets are created using the TicketCreate transaction. Once created, these tickets can be used in place of the Sequence field in any subsequent transaction.
Avoiding Sequence Conflicts
Tickets prevent sequence number mismatches in multi-signature transactions, where signers may not act in a synchronized manner.
Streamlining MultiSignature
By reserving sequence numbers, Tickets simplify the signing and submission process in MultiSignature environments.
Flexibility for Transaction Management
Tickets can be used to prioritize certain transactions or reserve future transaction slots, offering better control over transaction flow.
To create a Ticket on the XRPL, your account must hold sufficient XRP to meet the reserve requirements for the associated account object.
You can query your account to retrieve active tickets.
Once you have created tickets, you can use them in subsequent transactions by specifying the TicketSequence field instead of Sequence.
Deleting a Ticket is sometimes required in order to free up the associated account object. The way to do this is to generate a no-op transaction.
Ticket Creation
The TicketCreate transaction reserves sequence numbers for future transactions.
Specify the number of tickets you want to create using the TicketCount field.
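For illustration, a TicketCreate transaction with xrpl.js could look like the following sketch (it assumes a connected client and a funded wallet, as in the examples below; the TicketCount of 5 is arbitrary):

const ticketCreateTx = {
  TransactionType: "TicketCreate",
  Account: wallet.classicAddress,
  TicketCount: 5 // Reserve 5 sequence numbers as tickets
};
try {
  const result = await client.submitAndWait(ticketCreateTx, { autofill: true, wallet });
  console.log('TicketCreate Result:', result);
} catch (error) {
  console.error('Failed to create tickets:', error);
}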
Querying Tickets
Each ticket costs 2 XRP to reserve. Ensure you have sufficient XRP balance in your account before creating tickets.
Tickets remain valid until used or until the account’s Sequence surpasses the ticket’s TicketSequence.
Efficiently manage tickets to avoid unnecessary fees and ensure smooth transaction workflows.
Tickets are particularly effective in MultiSignature setups, preventing sequence mismatches and allowing independent signers to collaborate seamlessly.
With the Ticket feature, XRPL provides a powerful mechanism to manage transaction order and avoid sequence conflicts, especially in asynchronous environments like MultiSignature. By leveraging Tickets, you can ensure smooth and conflict-free transaction workflows.
This Module focuses on the backbone of XRPL's state management: the SHAMap, which maintains cryptographically-verified ledger state in memory, and the NodeStore, which persists this state to disk.
Every transaction on the XRP Ledger generates a new snapshot of the ledger state. Without proper storage and retrieval, this state would be lost on a crash. Even worse, without efficient algorithms for comparison and synchronization, the network could stall while sharing state across thousands of nodes.
SHAMap and NodeStore solve these problems by providing:
The second module of the XRPL Core Dev Bootcamp Online Edition builds upon your foundational knowledge to explore the architecture of Rippled. Now that you can compile and run the software, it's time to understand how it works internally, from transaction processing to consensus mechanisms and peer-to-peer networking.
This module transforms you from someone who can operate Rippled into a developer who understands its inner workings. You'll learn to navigate the complex C++ codebase, trace transaction flows through the system, and comprehend the architectural decisions that make the XRP Ledger fast, secure, and decentralized.
In this session, we will create an escrow.
An escrow on the XRP Ledger allows users to lock XRP and release it under specific conditions. This feature enables conditional payments, ensuring funds are only transferred when criteria are met. If conditions aren't fulfilled, the escrow can be canceled, returning the funds to the sender.
Create a new file or edit index.ts
Distinguish between Transactor::checkSign() and STTx::checkSign()
Identify which function performs actual cryptographic verification
Explore STTx::checkSign() and trace the call to verify()
Determine how the signature algorithm (secp256k1 or ed25519) is detected
Document differences in the verification flow for each algorithm
File location
Purpose
Key parameters
Return values
Explain error handling for failed signature verification
Include relevant code snippets (5–10 lines) for:
Signature verification calls
Algorithm detection logic
Final cryptographic verification
Cryptographically committed state changes
Persistent historical ledger states
Fast network synchronization for new nodes
Microsecond access latency for hot data
This module covers the full architecture of SHAMap and NodeStore, from conceptual foundations to production-ready implementations. You will explore each component in detail:
SHAMap – Understand the Merkle-Patricia trie structure, node hierarchy, hashing, traversal, and synchronization. Learn how state changes propagate and how nodes efficiently compare ledger states.
NodeStore – Learn how persistent storage is abstracted, including backend choices, caching strategies, and database lifecycle management.
Integration – See how SHAMap and NodeStore work together to maintain ledger integrity and enable fast node synchronization.
Advanced Topics – Explore cryptographic proofs, state reconstruction guarantees, resource management, and real-world performance optimization.
Appendices – Gain guidance on navigating the codebase, debugging, and configuring NodeStore for production workloads.
Through these topics, you will gain both conceptual understanding and hands-on knowledge of the systems that ensure XRPL’s ledger state remains consistent, verifiable, and performant across thousands of nodes.
By completing this module, you will be able to:
Navigate SHAMap and NodeStore code confidently
Trace the lifecycle of ledger state from creation to persistent storage
Explain why Merkle-Patricia tries are optimal for blockchain state
Understand caching strategies and backend choices for NodeStore
Optimize synchronization performance and troubleshoot issues
Implement efficient state queries and cryptographic proofs
Appreciate the engineering elegance enabling distributed consistency across thousands of nodes
Apply concepts in real code exploration and practical exercises
Understanding the Challenges
Explore why simple approaches fail for blockchain state management and why XRPL’s design choices are necessary to maintain consistency, performance, and synchronization across thousands of nodes.
Key Topics: State snapshots, crash recovery, network synchronization challenges Codebase: Conceptual overview
Foundations for State Integrity
Learn the mathematical and cryptographic foundations behind SHAMap and NodeStore, including trees, hashing, and cryptographic commitments.
Key Topics: Merkle trees, cryptographic hashes, commitment schemes Codebase: Conceptual overview
Inner and Leaf Nodes Structure
Dive into the SHAMap architecture, understanding how inner and leaf nodes are organized, and how the tree structure supports efficient state storage.
Key Topics: Node hierarchy, tree structure, leaf vs inner nodes
Codebase: src/xrpld/app/ledger/
Traversing and Hashing the Tree
Learn how to traverse SHAMap, compute hashes for each node, and leverage Merkle properties for cryptographic verification.
Key Topics: Tree traversal, node hashing, Merkle proofs
Codebase: src/xrpld/app/ledger/
Efficient State Comparison Across Nodes
Understand how nodes synchronize efficiently using hash comparisons, avoiding large data transfers and enabling fast state alignment.
Key Topics: Iteration, traversal, synchronization algorithms
Codebase: src/xrpld/app/ledger/
Persistent Storage Layer
Discover how NodeStore abstracts database complexity while providing persistence and performance. Learn why caching is critical, how the rotating database allows online deletion, and backend differences (RocksDB vs NuDB).
Key Topics: Storage abstraction, caching strategies, backend independence
Codebase: src/xrpld/core/
Database Flexibility
Learn how NodeStore separates the logical interface from backend implementation, supporting multiple storage engines while maintaining performance.
Key Topics: Backend abstraction, RocksDB, NuDB, pluggable storage
Codebase: src/xrpld/core/
Critical Role of Caching
Understand multi-tier caching strategies, why hot data access is crucial, and how to optimize NodeStore performance.
Key Topics: Cache layers, cache eviction policies, performance tuning
Codebase: src/xrpld/core/
Managing Persistent Storage
Explore NodeStore lifecycle operations, including initialization, rotation, online deletion, and shutdown, ensuring data integrity.
Key Topics: Database lifecycle, initialization, maintenance, shutdown procedures
Codebase: src/xrpld/core/
Ensuring Ledger Integrity
Learn how cryptographic proofs guarantee unique ledger history, enable state reconstruction, and verify correctness without transferring entire datasets.
Key Topics: Merkle proofs, state reconstruction, historical verification
Codebase: src/xrpld/app/ledger/
Optimizing for Production
Understand NodeStore and SHAMap resource usage, latency characteristics, and practical optimization strategies in real-world XRPL deployments.
Key Topics: Memory management, disk I/O, throughput, latency
Codebase: src/xrpld/core/
Rippled Repository: github.com/XRPLF/rippled
Core Dev Bootcamp Docs: docs.xrpl-commons.org/core-dev-bootcamp
SHAMap Code: src/ripple/app/ledger/
NodeStore Code: src/ripple/core/
Review and Reinforce Your Understanding
Before moving on, take a few minutes to review key concepts from this module. From SHAMap architecture and node hierarchies to NodeStore caching and cryptographic proofs, this short quiz will help you confirm your understanding of XRPL’s ledger state management.
If you have any questions about the homework or would like us to review your work, feel free to contact us.
➡️ Next Module: Cryptography I: Blockchain Security and Cryptographic Foundations →
This deep dive is organized into focused topics, each exploring a critical component of the Rippled architecture. Click on any topic below to dive deeper into the concepts, codebase structure, and practical implementations.
By completing this module, you will be able to:
Navigate the Rippled codebase and locate key components efficiently
Understand the architectural layers: Application, Transaction Processing, Consensus, and Networking
Trace transaction flows from submission to ledger inclusion
Comprehend the Transactor framework and how different transaction types are implemented
Explore peer-to-peer networking, message protocols, and peer discovery mechanisms
Use debugging tools, logging systems, and standalone mode for development
Read and interpret Rippled configuration files and runtime parameters
Understand protocol messages, RPC interfaces, and WebSocket subscriptions
These skills are fundamental for contributing to the XRPL core development, implementing protocol amendments, and building sophisticated blockchain applications.
Communication and Interoperability in Distributed Systems
Learn how distributed nodes communicate, synchronize, and maintain consensus through peer-to-peer networking and protocol messages. Understand the overlay network, message types, and how Rippled nodes discover and interact with each other.
Key Topics: Peer-to-peer networking, Protocol Buffers, message propagation, connection lifecycle
Codebase: src/xrpld/overlay/
Transaction Processing Framework
Understand the transaction processing framework and the three-phase validation process that ensures ledger integrity. Learn how different transaction types (Payment, Offer, Escrow) are implemented and how to create custom transactors.
Key Topics: Preflight, Preclaim, DoApply phases, transaction types, custom transactor creation
Codebase: src/xrpld/app/tx/detail/
Central Orchestration and Coordination
Explore how the Application class orchestrates all subsystems and manages the server lifecycle. Understand initialization sequences, job queue management, and how components interact.
Key Topics: Application class, subsystem coordination, job queues, initialization
Codebase: src/xrpld/app/main/
XRP Ledger Consensus Protocol
Discover how validators reach agreement on transaction sets and ledger state without proof-of-work. Learn about the consensus rounds, proposals, validations, and dispute resolution mechanisms.
Key Topics: Consensus algorithm, validator coordination, UNL management, ledger close
Codebase: src/xrpld/consensus/
Peer-to-Peer Networking Layer
Master peer discovery, connection management, and message propagation in the decentralized network. Understand network topology, peer quality assessment, and network resilience.
Key Topics: Network topology, peer discovery, connection management, message broadcasting
Codebase: src/xrpld/overlay/detail/
Complete Transaction Journey
Trace the complete journey of a transaction from submission to ledger inclusion. Understand each phase: submission, validation, consensus, application, and finalization.
Key Topics: Submission methods, validation phases, consensus inclusion, canonical application
Codebase: Multiple locations across src/xrpld/
Efficiently Navigating the Rippled Source
Learn to efficiently navigate the Rippled source code and locate key components. Understand directory structure, naming conventions, code patterns, and how to find specific functionality.
Key Topics: Directory structure, naming conventions, code patterns (Keylets, Views), IDE usage
Codebase: src/xrpld/ - all directories
Development and Debugging Techniques
Master logging systems, standalone mode, and debugging techniques for Rippled development. Learn to interpret logs, use debuggers, and troubleshoot common issues.
Key Topics: Logging system, standalone mode, GDB usage, log interpretation, testing
Codebase: Development tools and techniques
Trace a Payment transaction from submission through ledger closure while exploring the Payment transactor code, including preflight, preclaim, and doApply.
Analyze detailed logs to understand key messages and system behavior, and create a diagram illustrating how Rippled components interact during transaction processing.
Rippled Repository: github.com/XRPLF/rippled
Module Materials: docs.xrpl-commons.org/core-dev-bootcamp
Code Navigation Tips: Look in src/ripple/app/tx/impl/ for transaction implementations
Use Grep: Search the codebase with grep -r "preflight" src/ripple/app/tx/
Take Notes: Document your findings as you explore
Ask Questions: Use the feedback form if you're stuck
Be Specific: Provide file paths, line numbers, and concrete examples
Review and Reinforce Your Understanding
Before moving on, take a few minutes to review key concepts from this module. From consensus mechanisms to transaction flows, this short quiz will help you confirm your understanding of Rippled’s architecture.
If you have any questions about the homework or would like us to review your work, feel free to contact us.
➡️ Next Module: Data Architecture - SHAMap and NodeStore
Supports features like transaction reduce relay, ledger replay, and squelching.
Inherits from Peer and OverlayImpl::Child, and is tightly integrated with the application's overlay and resource management subsystems.
Use the account_objects command to fetch active tickets linked to your account.
Using Tickets
Include the TicketSequence field in your transaction to use a ticket instead of a regular sequence number.
Price Oracles are on-chain ledger objects that store external price information (e.g., XRP/USD). They are created and updated via the OracleSet transaction, and can be removed using the OracleDelete transaction. dApps can access this data to trigger smart contract logic, perform asset conversions, or maintain stablecoins.
Reliable Data Feeds: Provide verified, real-time pricing data from trusted providers.
Decentralized Trust: Price data is recorded on the immutable XRPL ledger.
Data Aggregation: Multiple oracle instances can be aggregated to calculate mean, median, or trimmed mean values, reducing the impact of outliers.
The PriceOracle object includes fields such as Owner, Provider, PriceDataSeries, and LastUpdateTime. You use an OracleSet transaction to create or update an oracle instance, while the OracleDelete transaction removes it from the ledger. Additionally, the get_aggregate_price API allows for combining data from multiple Price Oracle objects to generate reliable price statistics.
Real-World Data Integration Seamlessly integrate external pricing data into on-chain applications.
Enhanced Transparency and Security Data stored on XRPL is verifiable and tamper-resistant.
Robust Data Aggregation Aggregating data from several oracles minimizes anomalies and improves reliability.
Dynamic Data Management Easily update or remove price feeds with dedicated transactions.
Before working with Price Oracles, ensure you have:
An active Owner account on the XRPL that meets the XRP reserve requirements.
Sufficient XRP to cover transaction fees.
Trust in your external data provider (e.g., a reputable API or on-chain oracle service).
Begin by installing the XRPL library:
Then, initialize your XRPL client and wallet:
Submit an OracleSet transaction to create or update a Price Oracle instance:
Use the ledger_entry API to fetch the on-chain Price Oracle object:
To get a decimal representation of the AssetPrice convert it from HEX to a Number:
Aggregate data from multiple oracles with the get_aggregate_price API:
To delete the Oracle Object from your Account perform an OracleDelete Transaction.
Ensure your account meets the reserve requirement, especially when including more than five token pairs in PriceDataSeries.
Validate that the LastUpdateTime reflects current pricing data.
Maintain consistency for the Provider and AssetClass fields across updates.
All OracleSet and OracleDelete transactions must be signed by the owner account or its authorized multi-signers.
Price Oracles empower the XRPL ecosystem by enabling:
Decentralized Finance (DeFi) Utilize real-world price data to support lending, derivatives, and other DeFi products.
Stablecoin Management Maintain stablecoin peg values with aggregated price feeds.
Efficient Trading and DEX Integration Provide reliable market data for order matching and automated market making.
By integrating Price Oracles, developers can build robust, data-driven applications that benefit from transparent and up-to-date market insights.
Remember to disconnect your client after operations:
Happy building with Price Oracles on XRPL!
In order to create an escrow on the XRP Ledger, you need to specify the amount of XRP to lock, the destination account, and the conditions for release, such as a time-based or cryptographic condition. Additionally, you must ensure the transaction is properly signed and submitted to the network, and verify its success to confirm the escrow is active.
Create a helpers.ts file and add the generateConditionAndFulfillment and escrowTransaction functions:
To finish an escrow on the XRP Ledger, you must wait until the specified conditions are met, such as a time-based or cryptographic condition. Then, submit an "EscrowFinish" transaction, providing the necessary details like the condition, fulfillment, and sequence number, to release the locked funds to the designated recipient.
After this step, the escrow transaction has been successfully submitted to the XRP Ledger, signaling the completion of the escrow process. The funds should now be released to the designated recipient, provided all conditions have been met. The client is then disconnected from the network, indicating the end of the transaction session.
If the conditions aren't met, you can cancel the escrow with the EscrowCancel transaction type:
You can use XRP Ledger escrows as smart contracts that release XRP after a certain time has passed or after a cryptographic condition has been fulfilled.
import { Client, Wallet } from "xrpl";
// To test pathfinding, the best way is to use Mainnet, as we need active trading on the currencies. You can do this on testnet, but that would require making an active market between Currency 1 <> XRP <> Currency 2
const client = new Client("wss://xrplcluster.com");
await client.connect();
// The source and destination address can be the same; you will then effectively swap between a currency pair
const sourceWallet = Wallet.fromSeed("s...");
const destinationWallet = Wallet.fromSeed("s...");

client.on('path_find', (path_find) => {
console.log('New path: ', path_find);
});
const pathfindRequest = {
command: "path_find",
subcommand: "create",
source_account: sourceWallet.classicAddress, // Sender's account
destination_account: destinationWallet.classicAddress, // Receiver's account
send_max: "1000000", // Maximum amount the sender is willing to spend (optional)
destination_amount: {
currency: "USD",
value: "-1", // Use "-1" to indicate an unspecified destination amount
issuer: "rIssuerAddress" // Issuer of the destination currency
}
};
const paths = await client.request(pathfindRequest);
console.log('Available Paths:', paths.result.alternatives);

const pathfindRequest = {
command: "path_find",
subcommand: "create",
source_account: sourceWallet.classicAddress, // Sender's account
destination_account: destinationWallet.classicAddress, // Receiver's account
destination_amount: {
currency: "USD",
value: "100", // Specify the exact amount the destination should receive
issuer: "rIssuerAddress" // Issuer of the destination currency
},
source_currencies: [{
currency: "524C555344000000000000000000000000000000", // RLUSD
issuer: "rMxCKbEDwqr76QuheSUMdEGf4B9xJ8m5De"
}]
};const paymentTransaction = {
TransactionType: "Payment",
Account: "rSenderAddress",
Destination: "rReceiverAddress",
Amount: {
currency: "USD",
value: "100",
issuer: "rIssuerAddress"
},
SendMax: {
currency: "EUR",
value: "90",
issuer: "rIssuerAddress" // Specify the maximum amount in source currency
},
Paths: paths.result.alternatives[0].paths_computed // Include computed path
};
try {
const response = await client.submitAndWait(paymentTransaction, { autofill: true, wallet: sourceWallet });
console.log('Transaction Result:', response);
} catch (error) {
console.error('Failed to submit payment:', error);
}
// Don't forget to close it!
await client.request({
command: 'path_find',
subcommand: 'close'
});

import { Client, Wallet } from "xrpl";
const client = new Client("wss://s.altnet.rippletest.net:51233");
await client.connect();
const wallet = Wallet.fromSeed("s...");

const accountObjects = await client.request({
command: "account_objects",
account: wallet.classicAddress,
type: "ticket"
});
console.log('Active Tickets:', accountObjects.result.account_objects);
// tickets [
// {
// Account: 'rHH6GByFtaKXXAEeue4myzFuq4ftAZW9un',
// TicketSequence: 2856829
// }
// ]

const ticketSequence = accountObjects.result.account_objects[0].TicketSequence;
const transaction = {
TransactionType: "Payment",
Account: wallet.classicAddress,
Destination: "rrrrrrrrrrrrrrrrrNAMEtxvNvQ",
Amount: '1000000', // 1 XRP in drops
Sequence: 0,
TicketSequence: ticketSequence // Use the ticket's sequence number
};
try {
const result = await client.submit(transaction, { autofill: true, failHard: true, wallet });
console.log('Transaction Submit Result:', result);
} catch (error) {
console.error('Failed to submit transaction:', error);
}

const ticketToDelete = accountObjects.result.account_objects[0].TicketSequence;
const noopTransaction = {
TransactionType: "AccountSet",
Account: wallet.classicAddress,
TicketSequence: ticketToDelete
};
try {
const result = await client.submit(noopTransaction, { autofill: true, failHard: true, wallet });
console.log('No-Op Transaction Result:', result);
} catch (error) {
console.error('Failed to delete ticket:', error);
}

npm install xrpl

import { Client, Wallet } from "xrpl";
const client = new Client("wss://xrplcluster.com");
await client.connect();
// Initialize your owner wallet (replace with your actual secret)
const ownerWallet = Wallet.fromSeed("sXXXXXXXXXXXXXXXXXXXXXXXXXXXX");

const oracleSetTx = {
TransactionType: "OracleSet",
Account: ownerWallet.classicAddress,
OracleDocumentID: 34, // Unique identifier for this Price Oracle instance
Provider: "70726F7669646572", // Hex-encoded provider identifier (e.g., "provider")
AssetClass: "63757272656E6379", // Hex-encoded asset class (e.g., "currency")
LastUpdateTime: Math.floor(Date.now() / 1000), // Current Unix time
PriceDataSeries: [
{
PriceData: {
BaseAsset: "XRP",
QuoteAsset: "USD",
AssetPrice: 740, // Example: represents 7.40 with a Scale of 2
Scale: 2
}
}
]
};
try {
const response = await client.submitAndWait(oracleSetTx, { autofill: true, wallet: ownerWallet });
console.log("OracleSet Transaction Result:", response);
} catch (error) {
console.error("Failed to submit OracleSet transaction:", error);
}

const ledgerEntryRequest = {
method: "ledger_entry",
oracle: {
account: ownerWallet.classicAddress,
oracle_document_id: 34
},
ledger_index: "validated"
};
const ledgerEntryResponse = await client.request(ledgerEntryRequest);
console.log("Retrieved Price Oracle:", ledgerEntryResponse.result.node);const AssetPriceHex = '0x' + '2e4'
const price = Number(AssetPriceHex)const aggregatePriceRequest = {
method: "get_aggregate_price",
ledger_index: "current",
base_asset: "XRP",
quote_asset: "USD",
trim: 20, // Trim 20% of outlier data
oracles: [
{
account: ownerWallet.classicAddress,
oracle_document_id: 34
}
// Include additional oracle objects as needed
]
};
const aggregatePriceResponse = await client.request(aggregatePriceRequest);
console.log("Aggregated Price Data:", aggregatePriceResponse.result);const oracleDeleteTx = {
TransactionType: "OracleDelete",
Account: ownerWallet.classicAddress,
OracleDocumentID: 34
};
try {
const response = await client.submitAndWait(oracleDeleteTx, { autofill: true, wallet: ownerWallet });
console.log("OracleDelete Transaction Result:", response);
} catch (error) {
console.error("Failed to submit OracleDelete transaction:", error);
}

await client.disconnect();

import dayjs from 'dayjs';
import { Client, isoTimeToRippleTime, xrpToDrops } from 'xrpl';
import { generateConditionAndFulfillment, escrowTransaction } from './helpers';
const main = async () => {
console.log('lets get started...');
// Connect the client to the network
const client = new Client('wss://s.altnet.rippletest.net:51233');
await client.connect();
const { wallet: walletOne } = await client.fundWallet();
const { wallet: walletTwo } = await client.fundWallet();
console.log({ walletOne, walletTwo });
};
main();

// Time after which the destination user can claim the funds
const WAITING_TIME = 10; // seconds
// Define the time from when the Destination wallet can claim the money in the escrow. So here it would be 10 seconds after the escrow creation.
const finishAfter = dayjs().add(WAITING_TIME, 'seconds').toISOString();
// Generate the condition and fulfillment
const { condition, fulfillment } = generateConditionAndFulfillment();
const escrowCreateResponse = await escrowTransaction({
txn: {
Account: walletOne.address,
TransactionType: 'EscrowCreate',
Amount: xrpToDrops('1'),
Destination: walletTwo.address,
FinishAfter: isoTimeToRippleTime(finishAfter),
Condition: condition,
},
client,
wallet: walletOne,
});
// We need the sequence to finish an escrow, if it is not there, stop the function
if (!escrowCreateResponse.result.Sequence) {
await client.disconnect();
return;
}

import crypto from 'crypto';
import {
Client,
EscrowCreate,
EscrowFinish,
EscrowCancel,
Wallet,
Transaction,
} from 'xrpl';
// @ts-expect-error no types available
import cc from 'five-bells-condition';
export const generateConditionAndFulfillment = () => {
console.log(
"******* LET'S GENERATE A CRYPTO CONDITION AND FULFILLMENT *******"
);
console.log();
// use cryptographically secure random bytes generation
const preimage = crypto.randomBytes(32);
const fulfillment = new cc.PreimageSha256();
fulfillment.setPreimage(preimage);
const condition = fulfillment
.getConditionBinary()
.toString('hex')
.toUpperCase();
console.log('Condition:', condition);
// Keep secret until you want to finish the escrow
const fulfillment_hex = fulfillment
.serializeBinary()
.toString('hex')
.toUpperCase();
console.log(
'Fulfillment (keep secret until you want to finish the escrow):',
fulfillment_hex
);
console.log();
return {
condition,
fulfillment: fulfillment_hex,
};
};
export type TransactionProps<T extends Transaction> = {
txn: T;
client: Client;
wallet: Wallet;
};
export const escrowTransaction = async <T extends Transaction>({
txn,
client,
wallet,
}: TransactionProps<T>) => {
const escrowResponse = await client.submitAndWait(txn, {
autofill: true,
wallet,
});
console.log(JSON.stringify(escrowResponse, null, 2));
return escrowResponse;
};

// Wait "WAITING_TIME" seconds before finishing the escrow
console.log(`Waiting ${WAITING_TIME} seconds`);
const sleep = (ms: number) => {
return new Promise((resolve) => setTimeout(resolve, ms));
};
await sleep(WAITING_TIME * 1000);
await escrowTransaction({
txn: {
Account: walletTwo.address,
TransactionType: 'EscrowFinish',
Condition: condition,
Fulfillment: fulfillment,
OfferSequence: escrowCreateResponse.result.Sequence,
Owner: walletOne.address,
},
client,
wallet: walletTwo, // Make sure this is the wallet which was in the "Destination" field during the escrow creation
});
console.log('Escrow transaction sent successfully');
await client.disconnect();

await escrowTransaction({
txn: {
Account: walletOne.address, // The account submitting the cancel request
TransactionType: 'EscrowCancel',
Owner: walletOne.address, // The account that created the escrow
OfferSequence: escrowCreateResponse.result.Sequence, // The sequence number of the EscrowCreate transaction
},
client,
wallet: walletOne, // The wallet of the account that created the escrow
});

The RPC (Remote Procedure Call) layer is the primary interface through which applications, wallets, and developers interact with the XRP Ledger: without it, querying balances, submitting transactions, or monitoring network state would be impossible.
In this module you dissect how rippled handles RPC requests (handler architecture, request flow, authentication, error handling) and gain the foundational knowledge needed to implement custom RPC handlers in the next module. By the end you will understand how API requests traverse the system from entry point to response.
This deep dive is organized into focused topics, each exploring a critical component of the RPC architecture. Click on any topic below to dive deeper into the concepts, codebase structure, and practical implementations.
By completing this module, you will be able to:
Understand RPC handler architecture and how handlers are registered, discovered, and dispatched
Trace the complete journey of an RPC request from entry point through processing to response
Comprehend authentication and authorization with role-based access control (ADMIN, USER, IDENTIFIED, PROXY, FORBID)
Analyze error handling patterns and how Rippled validates input and returns proper error responses
These skills are essential for building applications on XRPL, debugging RPC issues, and contributing to the Rippled codebase.
Understanding Handler Registration and Dispatch
Learn how RPC handlers are registered in Rippled's central handler table, how requests are routed to the appropriate handler, and how versioning enables protocol evolution.
Key Topics: Handler registration, central dispatcher, request routing, handler table management
Codebase: src/xrpld/rpc/detail/Handler.cpp
From Entry Point to Response
Master the complete journey of an RPC request, from HTTP/WebSocket/gRPC entry points through parsing, validation, context construction, processing, and response serialization.
Key Topics: Entry points, request parsing, context objects, response formatting, request lifecycle
Codebase: src/xrpld/rpc/
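To make the entry points concrete, here is a minimal sketch of a JSON-RPC request over HTTP, assuming a local rippled listening on the conventional JSON-RPC port 5005:

// JSON-RPC over HTTP: the body is { method, params: [ { ...request fields } ] }
const response = await fetch("http://127.0.0.1:5005/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ method: "server_info", params: [{}] })
});
console.log(await response.json());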
Role-Based Access Control
Discover how Rippled determines user roles, enforces permissions, applies resource limits, and implements IP-based restrictions to secure the RPC interface.
Key Topics: Role determination, permission enforcement, resource charging, IP restrictions, security patterns
Codebase: src/xrpld/core/Config.h, src/xrpld/rpc/
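For reference, the role a request receives is tied to the port it arrives on. The stanza below shows the conventional rippled.cfg shape for an admin-only local port (the values mirror common defaults and are an assumption, not a requirement):

[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http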
Robust Error Management
Understand how Rippled handles errors comprehensively, maps errors to HTTP status codes, sanitizes input, masks sensitive data, and formats error responses.
Key Topics: Error codes, HTTP status mapping, input sanitization, data masking, error response formats
Codebase: src/xrpld/rpc/detail/RPCErrors.h
Review and Reinforce Your Understanding
Take a few minutes to review key concepts from this module.
From handler registration and request flow to authentication patterns and error handling, this short quiz will help you confirm your understanding of XRPL's RPC architecture.
If you have any questions about the module or would like us to review your work, feel free to contact us.
➡️ Next Module: Communication II -
Welcome to the Cryptography I module of the XRPL Core Dev Bootcamp. This course takes you deep into the mathematical and computational foundations that secure every transaction, every account, and every interaction on the XRP Ledger.
This module transforms you from someone who knows that transactions are secure into a developer who understands how that security is mathematically guaranteed. You'll learn to trace the journey of a key from its random birth to the creation of an unforgeable digital signature, and explore how rippled transforms abstract mathematical concepts into concrete security guarantees.
Navigate the RPC codebase and locate key components within the RPC system
This deep dive is organized into focused topics, each exploring a critical component of XRPL's security and cryptographic implementation. Click on any topic below to dive deeper into the concepts, codebase structure, and practical implementations.
By completing this module, you will be able to:
Navigate and understand rippled's cryptographic codebase (C++).
Explain the role of keys and signatures in securing the XRP Ledger.
Trace the key generation and transaction signing/verification process through the code.
Comprehend how hash functions ensure data integrity across the network.
Understand the security trade-offs and implementation choices made in the XRPL protocol.
Debug and troubleshoot common signature-related issues in applications.
Apply a security-first mindset when contributing to the core infrastructure.
These skills are fundamental for implementing new cryptographic standards, assessing security vulnerabilities, and building robust, trustless applications on the XRPL.
The Pillars of Digital Security
Understand the core principles of cryptographic security: confidentiality, integrity, authentication, and non-repudiation. Learn how keys establish digital identity and how the lifecycle of a cryptographic key secures the network.
Key Topics: Private/public keys, elliptic curves, security principles, key lifecycle management
Codebase: include/xrpl/protocol/PublicKey.h, include/xrpl/protocol/SecretKey.h
From Randomness to Account Identity
Follow the complete lifecycle of a cryptographic key in rippled, from its secure generation using cryptographic randomness or a deterministic seed, through derivation of the public key, to the creation of an account ID and human-readable address. Learn how secure memory handling and RAII patterns ensure keys remain protected throughout their use.
Key Topics: Random vs deterministic generation, secret → public derivation, account ID calculation, secure cleanup
Codebase: src/libxrpl/protocol/SecretKey.cpp, include/xrpl/protocol/SecretKey.h
Secure Random Numbers for Keys, Nonces, and Sessions
Dive into how Rippled ensures cryptographically secure randomness. From hardware and OS entropy sources to CSPRNG implementation and defensive mixing, this process underpins the security of secret keys, transaction nonces, and session tokens.
Key Topics: Random number generation, entropy collection, CSPRNG design, thread safety, error handling
Codebase: src/libxrpl/crypto/csprng.cpp, include/xrpl/crypto/csprng.h
Randomness, Derivation, and Protection
Trace the complete key generation process: from sourcing cryptographic randomness (entropy) to the final derivation of an account's secret and public keys. Understand how the system protects these critical assets.
Key Topics: Random number generation, entropy, key derivation, in-memory protection
Codebase: src/libxrpl/protocol/SecretKey.cpp
Creating and Verifying Digital Signatures
Dive into the core function of the ledger: signing transactions. Learn the implementation of ECDSA (for Secp256k1) and EdDSA (for Ed25519), and trace the code that verifies a signature in milliseconds.
Key Topics: Signature algorithms (ECDSA, EdDSA), transaction hashing, verification process
Codebase: src/libxrpl/protocol/SecretKey.cpp, src/libxrpl/protocol/PublicKey.cpp
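For a client-side view of the same flow, xrpl.js can sign a transaction offline; a minimal sketch follows (all field values are placeholders):

import { Wallet } from "xrpl";

// Wallet.generate() creates an ed25519 key pair by default
const wallet = Wallet.generate();

// sign() serializes, hashes, and signs the transaction locally,
// returning the signed blob and its hash; no network is involved
const signed = wallet.sign({
  TransactionType: "Payment",
  Account: wallet.classicAddress,
  Destination: "rrrrrrrrrrrrrrrrrNAMEtxvNvQ", // placeholder destination
  Amount: "1000000", // 1 XRP in drops
  Fee: "12",
  Sequence: 1 // placeholder; normally filled by autofill
});
console.log(signed.tx_blob, signed.hash);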
Integrity and Data Representation
Explore the various cryptographic hash functions (e.g., SHA-512, SHA-256) used in XRPL. Understand how they ensure the integrity of data (transactions, ledgers) and how they're used to create unique IDs.
Key Topics: Hash functions, collisions, data integrity, transaction/ledger ID
Codebase: src/libxrpl/protocol/digest.cpp, include/xrpl/protocol/digest.h
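As a small illustration, XRPL's characteristic "SHA-512Half" digest is simply the first 256 bits of a SHA-512 hash, which can be sketched in a few lines of Node.js:

import crypto from "crypto";

// SHA-512Half: compute SHA-512, keep the first 32 bytes (256 bits)
const sha512Half = (data: Buffer): Buffer =>
  crypto.createHash("sha512").update(data).digest().subarray(0, 32);

console.log(sha512Half(Buffer.from("hello")).toString("hex").toUpperCase());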
Base58Check and Human-Readable Formats
Deconstruct the process of converting complex cryptographic keys and account IDs into the familiar, human-readable Base58Check format used for XRPL addresses.
Key Topics: Base58Check, address encoding, checksums, human-readable formats
Codebase: src/libxrpl/protocol/tokens.cpp, include/xrpl/protocol/tokens.h
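On the client side, the same encoding is available through the ripple-address-codec package; a short round-trip sketch (the address is the example account seen earlier in this guide):

import { decodeAccountID, encodeAccountID } from "ripple-address-codec";

// decodeAccountID verifies the checksum and returns the raw 20-byte account ID
const accountId = decodeAccountID("rGGJ71dbSY5yF9BJUDSHsDPSKDhVGGWzpY");
console.log(accountId.length); // 20

// encodeAccountID re-applies Base58Check to recover the r-address
console.log(encodeAccountID(accountId));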
Secure Communication and Handshakes
Understand how cryptography secures the peer-to-peer network. Explore the handshake protocol and the use of TLS (Transport Layer Security) to ensure nodes can prove their identities and communicate confidentially.
Key Topics: TLS, handshake protocol, confidential communication, peer authentication
Codebase: src/xrpld/overlay/detail/Handshake.cpp, src/libxrpl/basics/make_SSLContext.cpp
Protecting Sensitive Data in Runtime
Learn critical techniques for handling highly sensitive data (like secret keys) in memory. Understand concepts like zeroization and using specialized containers to prevent secrets from being exposed via memory dumps or swap files.
Key Topics: Secure memory handling, zeroization (memory wiping), secure containers, protection against leaks
Codebase: src/libxrpl/crypto/secure_erase.cpp, include/xrpl/crypto/secure_erase.h
Avoiding Catastrophic Security Mistakes
Explore the most frequent cryptographic mistakes—from weak randomness and memory leaks to signature malleability and key reuse—and learn the correct practices to keep systems secure.
Key Topics: Weak RNG, memory handling, signature canonicality, key management, constant-time operations, error checking
Codebase: src/libxrpl/crypto/csprng.cpp, src/libxrpl/protocol/SecretKey.cpp
Balancing Security and Speed in Rippled
Understand the computational cost of cryptography in XRPL and how to optimize performance without compromising security. Compare signature algorithms like secp256k1 and ed25519, measure hashing throughput, and implement caching, batching, and parallel processing to achieve higher throughput in validation and consensus.
Key Topics: Signature and hash performance, caching strategies, batch verification, parallelism, profiling, optimization guidelines
Codebase: src/libxrpl/protocol/, src/xrpld/app/tx/, src/xrpld/shamap/
Consolidate Your Cryptography I Knowledge
Revisit the key concepts from the Cryptography I module in this concise review.
Trace the flow of a transaction’s signature verification from submission through final cryptographic validation. Explore the Rippled codebase, including Transactor::apply(), preflight(), checkSign(), and STTx::verify().
Review and Reinforce Your Understanding
Take a few minutes to review key concepts from this module.
From key generation and transaction signing to hash functions and secure memory practices, this short quiz will help you confirm your understanding of XRPL’s cryptographic foundations.
If you have any questions about the homework or would like us to review your work, feel free to contact us.
➡️ Next Module: Cryptography II - Protocol Extensions and Quantum Signatures →
In this session, we will work with multiple signatures.
MultiSignature (or MultiSig) on the XRPL allows multiple users to collectively authorize transactions from a shared account. This ensures enhanced security and reduces risks associated with a single point of control. Think of it like a group-controlled vault where multiple stakeholders must approve access.
MultiSignature starts with a shared account on the XRPL. This account is configured with specific rules, defining how many and which signers are required for transaction approval. Each signer is assigned a "weight," reflecting their influence in the decision-making process.
The quorum is the minimum number of signers required to authorize a transaction. For example, a quorum of three means at least three pre-approved parties must sign off for the transaction to proceed.
When a transaction is initiated, it must receive the necessary number of approvals as specified in the MultiSignature configuration. For instance, if the quorum requires two out of three signers, the transaction will execute only after two parties provide their signatures.
MultiSignature ensures that even if one signer's private key is compromised, unauthorized transactions cannot occur without meeting the quorum. This reduces risks associated with fraud or hacking.
MultiSignature is highly adaptable, allowing configurations to be updated over time. For example, fewer signers can be required for everyday transactions, but unanimous approval might be enforced for high-value transfers.
MultiSignature decentralizes decision-making power by requiring consensus from multiple signers. This ensures no single individual has complete control, making it ideal for businesses, joint ventures, and collaborative setups.
By requiring multiple approvals, MultiSignature significantly reduces the risk of unauthorized transactions, even if one key is compromised. This layered security is a robust defense against hacking and fraud.
MultiSignature enhances trust in shared financial operations by ensuring transparency and consensus among all parties involved.
MultiSignature mitigates risks such as insider threats and key compromises by spreading responsibility across multiple signers.
Configurations can be tailored to operational requirements, such as setting different approval thresholds for routine versus high-value transactions.
Wallet Initialization Create wallets for the main account and the signers using their seeds.
Setting Up the Signer List Configure the signer list with their respective weights and set the quorum.
Submitting the SignerListSet Transaction Send the signer list configuration to the XRPL.
Creating the Payment Transaction Define the payment transaction that will be multi-signed.
Accounts
For this example you need a total of four accounts:
One main account
Three signers
The signers do not require funds but must enable the master key.
SignerQuorum: The minimum weight sum required for valid transactions.
SignerEntries: A list with at least one member and no more than 32 members.
Generate Wallets instead
Alternatively you can create Wallets dynamically:
With the accounts we just created, we want to configure our signer list and submit it to the ledger. We do this by creating a SignerListSet transaction.
For creating transactions in a multi-sign environment, you have two options: use the account's Sequence number or a Ticket. Tickets are used to reserve sequence numbers. Since MultiSign transactions are asynchronous, the sequence number the transactions were based on might be outdated. In this example, we do not use Tickets.
In this example, we prepare a simple payment transaction to be later signed using MultiSignature. The transaction sends 1 XRP (in drops) from the main account to a specified destination address.
The Fee and Sequence need to be set in advance. The fee is calculated as follows: Base Fee + (Incremental Fee × Number of Signers). We can use autofill(transaction, 3) for this, where 3 is the number of signers.
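As a quick worked example (a sketch; the incremental fee is assumed equal to the base fee, which is what autofill uses):

// Sketch of the multi-sign fee formula; in practice client.autofill(transaction, 3) does this for you.
const baseFee = 10;     // drops; the live value comes from a fee or server_info request
const signerCount = 3;
const fee = baseFee + baseFee * signerCount; // 10 + (10 × 3) = 40 drops
console.log(`${fee} drops`);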
Each signer will sign the same transaction
Combine all signatures into one final transaction
Submit the multi-signed transaction
Signer Weights and Quorum
The sum of the signer weights must meet or exceed the SignerQuorum for a transaction to be valid.
In the example, Signer 1 has a weight of 2, while Signers 2 and 3 have a weight of 1 each.
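As an illustration (a sketch with hypothetical helper names, using the weights above), you can check whether a set of signers clears the quorum by summing their weights:

const quorum = 2;
const weights: Record<string, number> = { signer_1: 2, signer_2: 1, signer_3: 1 };

function meetsQuorum(signers: string[]): boolean {
  // A transaction is valid when the summed signer weights reach the quorum
  return signers.reduce((sum, s) => sum + (weights[s] ?? 0), 0) >= quorum;
}

console.log(meetsQuorum(["signer_1"]));             // true: weight 2 alone meets the quorum
console.log(meetsQuorum(["signer_2"]));             // false: weight 1 is below the quorum
console.log(meetsQuorum(["signer_2", "signer_3"])); // true: 1 + 1 = 2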
With MultiSignature, you can ensure robust security and reliable transaction governance on the XRPL.
This proposal introduces quantum-resistant digital signatures to the XRP Ledger (XRPL) using the Dilithium post-quantum cryptographic algorithm. The amendment provides accounts with the ability to use quantum-resistant signatures for enhanced security against future quantum computing threats while maintaining backward compatibility with existing signature schemes.
Autofilling Transaction Details
Use autofill to complete transaction fields like Sequence and Fee.
Signing the Transaction by Each Signer Each signer independently signs the transaction.
Combining the Signatures
Use multisign to combine the individual signatures into one transaction.
Submitting the MultiSigned Transaction Submit the combined transaction to the XRPL.
Multi-signed transactions require higher fees due to their increased complexity.
Ensure the fee is sufficient to cover the multi-signature. In this example we used autofill(transaction, 3) specifying 3 signers for the Fee calculation.
Transaction Sequence or Tickets
Multi-signing is asynchronous, so using Tickets can help manage sequence numbers. More information here: https://xrpl.org/docs/concepts/accounts/tickets
In this example, we use the Sequence number provided by autofill.
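For reference, reserving Tickets looks roughly like this (a sketch, not used in this tutorial; it reuses the client and mainWallet set up earlier):

// Reserve Ticket sequence numbers that later multi-signed transactions can consume
const ticketTx = {
  TransactionType: "TicketCreate",
  Account: mainWallet.classicAddress,
  TicketCount: 3 // reserve three Tickets
};
const preparedTicketTx = await client.autofill(ticketTx);
await client.submitAndWait(preparedTicketTx, { wallet: mainWallet });
// A later transaction can then set TicketSequence instead of Sequence.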
Asynchronous Signing
Each signer can sign the transaction independently and at different times.
Security Considerations
MultiSignature enhances security by requiring multiple keys to authorize a transaction.
Even if one key is compromised, the attacker cannot perform unauthorized transactions without additional signatures, if the signer list is correctly set.
Retrieve state quickly (show me Alice's balance)
Detect changes efficiently (what changed since yesterday?)
Synchronize across branches (all offices must agree on state)
Recover from crashes (no data loss)
A traditional database handles all this. But blockchain adds a critical constraint: no trusted central authority. Every node must independently verify that state is correct, and thousands of nodes must reach consensus on a single shared state.
This chapter explores why naive approaches fail and what makes XRPL's SHAMap and NodeStore necessary.
Let's consider what happens if you store blockchain state like a simple key-value database:
State Storage:
Transaction Processing:
Validator receives transaction: send 10 XRP from Alice to Bob
Validator checks Alice's balance (100 XRP available)
Validator updates Alice's balance to 90 XRP
Validator updates Bob's balance to 60 XRP
State is now modified
The Problem: No Verification
Without a cryptographic commitment to the state, any node can claim any state is correct:
The Problem: Expensive Synchronization
A new node joining the network needs to learn the current state. With naive storage:
New node requests: "Send me all account state"
Network sends millions of accounts, gigabytes of data
New node has no way to verify this data is correct
Process takes hours or days
Blockchain state needs a cryptographic commitment: a single value that guarantees:
Authenticity: The value commits to the actual state (not a forgery)
Completeness: All accounts are included (not cherry-picked)
Uniqueness: Only one commitment can represent a given state
This is what cryptographic hashes provide:
Now a validator can broadcast: "The current state root is 0xAB12EF..."
Other nodes can verify this is correct by computing the same hash. If someone tries to cheat with different state, the hash will be different.
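As a toy illustration (a sketch using Node's built-in crypto and hypothetical balances, not XRPL's actual scheme), hashing the serialized state yields such a commitment; changing any balance changes the digest:

import { createHash } from "crypto";

const accounts = { rAlice: 90, rBob: 60 }; // hypothetical balances
const stateRoot = createHash("sha256")
  .update(JSON.stringify(accounts))
  .digest("hex");
console.log("State root:", stateRoot); // completely different if any balance changes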
The Trade-off Problem:
But hashing all state from scratch has a terrible cost:
Every account lookup requires computing the hash of all million accounts
Synchronizing with a peer requires hashing millions of accounts multiple times
A single account change requires rehashing everything
Performance becomes prohibitive
XRPL solves the cryptographic commitment problem with a Merkle tree:
Instead of hashing all accounts together, organize them in a tree structure:
Key Insight: Each node's hash depends only on its descendants, not the entire tree:
Change Account0 → rehash 4 nodes (path from leaf to root)
Not millions of nodes
Synchronization Benefit:
When syncing with a peer:
Compare root hashes
If they match: entire state is identical (no need to compare anything else)
If they differ: identify which subtree diverges
Only synchronize the different parts
Recursive process: compare child hashes, descend into differences
A tree of 1 million accounts becomes synchronizable in a few thousand comparisons instead of millions.
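A minimal sketch of the idea, using a binary tree and SHA-256 for brevity (XRPL's real tree is radix-16 and uses SHA-512-Half):

import { createHash } from "crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Compute a Merkle root over account leaves, pairing hashes level by level.
function merkleRoot(leaves: string[]): string {
  let level = leaves.map(sha256);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2)
      next.push(sha256(level[i] + (level[i + 1] ?? level[i]))); // duplicate an odd tail
    level = next;
  }
  return level[0];
}

console.log(merkleRoot(["Alice:90", "Bob:60", "Carol:75", "Dave:10"]));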
A simple binary tree has a problem: unbalanced growth. If accounts are added sequentially, the tree becomes a linked list, losing logarithmic properties.
Patricia tries (Radix N tries) solve this:
Use the account identifier (a 256-bit hash) as a navigation guide
Each level of the tree represents 4 bits (one hex digit) of the account hash
This produces a balanced, predictable tree structure
Tree depth is always ~64 levels (256 bits / 4 bits per level)
XRPL's Choice: Patricia trie with radix 16 (hex digits):
This gives a balanced tree with:
Depth: 64 levels (one per hex digit of 256-bit key)
Branching: Up to 16 children per node
Perfect for ledger entries identified by 256-bit keys (account IDs are hashed into keys of this length)
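A sketch of the navigation rule (hypothetical key; each level consumes one hex digit):

const key = "3A7F2E1B".padEnd(64, "0"); // a hypothetical 256-bit key in hex

// The branch taken at each level is simply that level's hex digit of the key.
for (let level = 0; level < 4; level++) {
  const branch = parseInt(key[level], 16); // 0..15
  console.log(`Level ${level}: descend into branch ${branch}`);
}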
Now we have SHAMap: an elegant in-memory data structure for cryptographically-committed state.
But there's a problem: When the validator crashes, all in-memory state vanishes.
The next startup:
Reads the ledger blockchain from disk
Replays every transaction from genesis
Reconstructs the current state
For mainnet: this takes weeks
NodeStore solves this by making SHAMap persistent:
Every node in the SHAMap is serialized to storage
Identified by its cryptographic hash
Retrievable by any peer that needs it
On startup, state is reconstructed from disk in minutes, not weeks
The Storage Challenge:
But persistence introduces new challenges:
Database Size: A mature XRPL ledger creates millions of nodes. Storage can be terabytes.
Lookup Performance: Database queries are 1000x slower than memory access
Write Efficiency: Persisting every state change is I/O intensive
Backend Flexibility: Different operators need different storage engines (RocksDB, NuDB, SQLite)
NodeStore addresses each:
Caching: Keep hot data in memory, query disk only when needed
Abstraction: Support multiple database backends with identical logic
Batch Operations: Write multiple nodes atomically
Online Deletion: Rotate databases to manage disk space without downtime
SHAMap's Role:
Maintains blockchain state in a Merkle tree structure
Provides cryptographic commitment through root hash
Enables efficient synchronization through hash comparison
Supports proof generation for trustless verification
NodeStore's Role:
Persists SHAMap nodes to durable storage
Provides on-demand node retrieval
Implements intelligent caching to minimize I/O
Abstracts database implementation details
Together:
They solve the complete blockchain state management problem:
A validator can:
Process transactions at microsecond latencies (SHAMap is in-memory)
Know state is persisted safely (NodeStore writes atomically)
Sync new peers in minutes (hash-based comparison finds differences)
Recover from crashes without replaying genesis (state reconstructed from disk)
Switch database backends without changing application logic
Understanding SHAMap and NodeStore is essential because:
Consensus Correctness: The root hash is what validators vote on. You cannot understand consensus without understanding how that hash is computed.
Synchronization Performance: Why can a new node catch up to the network in minutes? Because hash-based tree comparison eliminates redundant data transfer.
API Performance: Why do account lookups return in milliseconds? Because careful caching keeps hot nodes in memory.
Operational Reliability: Why can validators safely delete old data? Because the rotating database enables online deletion without service interruption.
Scalability Limits: Why does XRPL have practical limits on transaction volume? Because synchronizing and storing the ever-growing tree hits physical limits of disk I/O and memory.
These aren't just implementation details—they're fundamental to what XRPL is and how it works.
In the next chapter, we'll explore the mathematical foundations: Merkle trees, Patricia tries, and cryptographic hashing. Then we'll dive deep into the SHAMap implementation, followed by the NodeStore persistence layer.
By the end of this module, you'll understand not just what SHAMap and NodeStore do, but why they're architected the way they are, and how to reason about their correctness, performance, and limitations.
import { Client, Wallet, multisign } from "xrpl";
const client = new Client("wss://s.altnet.rippletest.net:51233");
await client.connect();
const mainSeed = "s...";
const signerSeed_1 = "s...";
const signerSeed_2 = "s...";
const signerSeed_3 = "s...";
const mainWallet = Wallet.fromSeed(mainSeed);
const signer_1 = Wallet.fromSeed(signerSeed_1);
const signer_2 = Wallet.fromSeed(signerSeed_2);
const signer_3 = Wallet.fromSeed(signerSeed_3);
// Fund the main wallet only (testnet)
await client.fundWallet(mainWallet);

// Alternative: generate wallets dynamically instead of loading them from seeds
const mainWallet = Wallet.generate();

const SignerListSetTransaction = {
TransactionType: "SignerListSet",
Account: mainWallet.classicAddress,
SignerQuorum: 2,
SignerEntries: [
{
SignerEntry: {
Account: signer_1.classicAddress,
SignerWeight: 2
}
},
{
SignerEntry: {
Account: signer_2.classicAddress,
SignerWeight: 1
}
},
{
SignerEntry: {
Account: signer_3.classicAddress,
SignerWeight: 1
}
}
]
};
try {
const result = await client.submitAndWait(SignerListSetTransaction, { autofill: true, failHard: true, wallet: mainWallet });
console.log('SignerListSet Transaction Result:', result);
} catch (error) {
console.error('Failed to submit transaction:', error);
}

const transaction = {
TransactionType: "Payment",
Account: mainWallet.classicAddress,
Destination: "rrrrrrrrrrrrrrrrrNAMEtxvNvQ",
Amount: '1000000' // 1 XRP expressed in drops
};

const autofilledTransaction = await client.autofill(transaction, 3);
console.log('Transaction to sign:', autofilledTransaction);

const signedWallet_1 = signer_1.sign(autofilledTransaction, true);
const signedWallet_2 = signer_2.sign(autofilledTransaction, true);
const signedWallet_3 = signer_3.sign(autofilledTransaction, true);
console.log('Signed by Signer 1:', signedWallet_1);
// Sample Output:
// {
// tx_blob: '1200002200000000...F1',
// hash: '5C4AC6A8FF7E3B...'
// }

const combinedTransaction = multisign([signedWallet_1.tx_blob, signedWallet_2.tx_blob, signedWallet_3.tx_blob]);
console.log(combinedTransaction);
// '12000022000......97A6AEE1F1'

try {
const result = await client.submitAndWait(combinedTransaction);
console.log('Transaction Submit Result', result);
// Sample Output:
// {
// result: {
// accepted: true,
// engine_result: 'tesSUCCESS',
// tx_json: { ... },
// validated_ledger_index: ...
// }
// }
} catch (error) {
console.error('Failed to submit transaction:', error);
}

accounts = {
"rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C": { balance: 100 XRP, ... },
"rLHzPsX6oXkzU2qL12kHCH8G8cnZv1rBJh": { balance: 50 XRP, ... },
"r3kmLJN5D28dHuH8vZvVrDjiV5sNSiUQXD": { balance: 75 XRP, ... },
...
}

Alice broadcasts: "My balance is 1,000,000 XRP"
Bob broadcasts: "My balance is 1,000,000 XRP"
Charlie broadcasts: "Everyone's balance is 0 XRP"
Which is correct? Without a central authority, there's no way to know.

uint256 stateHash = hashFunction(serializeAllState());

                        Root (hash of all state)
                       /                        \
    Hash(Accounts 0-500k)                Hash(Accounts 500k-1M)
       /            \                       /            \
 Hash(0-250k)   Hash(250-500k)             ...
     /     \
Hash(0-125k)   ...
    /       \
Hash(Account0)   Hash(Account1)
      |                 |
  Account0          Account1

Level 0 (root): Evaluate first hex digit of account hash (0-F)
→ Child 0, 1, 2, ... or F
Level 1: Evaluate second hex digit
→ One of 16 children
... and so on

Application → SHAMap (in-memory, verified)
↓
NodeStore (persisted, indexed by hash)
↓
Database (RocksDB, NuDB, etc.)
↓
Disk

As quantum computing advances, current cryptographic signatures (secp256k1, ed25519) may become vulnerable to quantum attacks. This proposal adds support for Dilithium, a NIST-standardized post-quantum signature algorithm, ensuring long-term security for XRPL accounts.
This feature enables accounts to use quantum-resistant signatures with an optional enforcement mechanism.
The amendment adds:
Support for Dilithium signature algorithm (KeyType::dilithium = 2)
New account flag lsfForceQuantum to enforce quantum-resistant signatures
Updated key generation, encoding, and verification systems
The quantum-resistant signatures implementation is currently under active development in the following branch:
Repository: Transia-RnD/rippled
Branch: dilithium-full
This branch contains the working implementation of the quantum-resistant signature system, including:
Core Dilithium Integration: Implementation of the Dilithium post-quantum signature algorithm
Key Management Updates: Modified key generation, storage, and retrieval systems
Signature Verification: Updated transaction signing and verification processes
Account Flag Implementation: lsfForceQuantum flag enforcement mechanisms
Backward Compatibility: Maintained support for existing signature schemes
The quantum branch includes:
Unit tests for Dilithium key operations
Integration tests for quantum-resistant transaction processing
Performance benchmarks comparing signature verification times
Compatibility tests ensuring existing functionality remains intact
Developers interested in contributing to the quantum-resistant signatures implementation should:
Fork the repository and checkout the quantum branch
Review the existing implementation and test coverage
Submit pull requests against the quantum branch
Ensure all tests pass and maintain backward compatibility
Aspect           secp256k1   ed25519    Dilithium
Public Key Size  33 bytes    33 bytes   1312 bytes
Secret Key Size  32 bytes    32 bytes   2528 bytes
Signature Size   ~70 bytes   64 bytes   ~2420 bytes
Security Level   128-bit     128-bit    128-bit (quantum-resistant)
Flag             Value       Description
lsfForceQuantum  0x02000000  When set, the account requires quantum-resistant signatures
asfForceQuantum  11          AccountSet flag to enable/disable the quantum requirement
Optional Phase: Quantum signatures available but not required
Account Choice: Individual accounts can enable lsfForceQuantum
Network Transition: Networks can mandate quantum signatures over time
Existing accounts continue using current signature types
No breaking changes to existing functionality
Smooth upgrade path for enhanced security
telBAD_PUBLIC_KEY: returned when a non-quantum signature is used with an lsfForceQuantum account
As quantum-resistant signatures become standard, several validator-related components will require updates:
rippled: Core validator software must support quantum-resistant key generation and signature verification
Consensus Algorithm: Ensure quantum-resistant signatures are properly validated during consensus
Peer Communication: Update peer-to-peer communication to handle larger quantum signatures
UNL Tools: Update UNL generation tools to support quantum-resistant validator keys
Key Format: Modify UNL file format to accommodate larger Dilithium public keys (1312 bytes)
Validation: Ensure UNL validation processes can verify quantum-resistant signatures
Key Generation: Update validator-keys tool to generate Dilithium key pairs
Key Management: Modify key storage and management for larger quantum keys
Migration Tools: Provide utilities for existing validators to transition to quantum-resistant keys
Documentation: Update validator setup guides for quantum key generation
Phased Rollout: Gradual migration of validators to quantum-resistant keys
Backward Compatibility: Maintain support for existing validator keys during transition
Performance Impact: Account for increased signature verification time and bandwidth usage
Dilithium Library: pq-crystals/dilithium reference implementation
With navigation and hashing understood, we now explore how SHAMap enables efficient tree traversal and network synchronization. These operations are critical for:
New nodes catching up to the network
Peers verifying they have identical state
Extracting state information for queries
Detecting and resolving ledger divergences
SHAMap provides multiple traversal strategies depending on the use case:
Depth-First Traversal: visitNodes
The visitNodes method provides complete tree traversal:
Use Cases:
Tree validation (verify all hashes)
Bulk operations on all nodes
Custom tree analysis
Leaf-Only Traversal: visitLeaves
Iterator-Based Traversal
Parallel Traversal: walkMapParallel
For performance-critical operations:
Use Cases:
Missing node detection at scale
High-throughput synchronization
The core synchronization primitive identifies nodes needed for complete tree reconstruction:
Algorithm: getMissingNodes
Output:
Returns vector of (NodeID, Hash) pairs representing missing nodes, prioritized for network retrieval.
Full Below Optimization
An optimization preventing redundant traversal:
When a subtree is verified complete (all descendants present), skip traversing it again until a new sync starts.
Adding the Root Node: addRootNode
Initializes or verifies the root node:
Adding Known Nodes: addKnownNode
Adds interior or leaf nodes during synchronization:
Purpose:
Ensure nodes are unique in memory (one NodeObject per hash):
Benefits:
Memory Efficiency: Identical nodes stored once
Thread Safety: Cache handles concurrent insertion atomically
Fast Equality: Compare pointers instead of content
Shared Trees: Multiple SHAMaps can share nodes
The Complete Flow:
Performance Metrics:
The transaction tree, persisted in NodeStore, ensures unique history:
Problem:
Without transaction history, many sequences could produce same state:
Solution:
The transaction tree proves the exact sequence:
NodeStore's Role:
Both tree nodes are persisted:
Query state tree to find current values
Query transaction tree to find history
Together: complete, verifiable record
Key Algorithms:
Traversal: visitNodes, visitLeaves, iterators
Missing Detection: getMissingNodes with Full Below cache
Node Addition: addRootNode, addKnownNode with canonicalization
Synchronization: Complete flow from root to leaves
Performance Properties:
Critical Insight:
Synchronization works because of hash-based tree structure:
Compare root hashes: O(1) comparison
If differ: compare children: O(16) comparisons max
Recursive descent: O(log N) total comparisons
vs. comparing all N leaves directly
This logarithmic advantage is what makes blockchain synchronization practical.
Implement post-quantum (Dilithium-2) signature support in Rippled as an amendment and extend the protocol to allow accounts to opt into quantum-resistant signing and register quantum keys.
Format: Repository + written report (PDF or Markdown) with commands, screenshots, code snippets.
Reference Commit:
Title: Quantum-Resistant Signatures
Revision: 1 (2025-07-08)
Type: Draft
Author:
Atharva Lele, Trinity College Dublin
Denis Angell, XRPL Labs

// Generate quantum-resistant keys
auto keyPair = generateKeyPair(KeyType::dilithium, seed);
auto secretKey = randomSecretKey(KeyType::dilithium);

std::optional<KeyType> publicKeyType(Slice const& slice) {
if (slice.size() == 33) {
if (slice[0] == 0xED) return KeyType::ed25519;
if (slice[0] == 0x02 || slice[0] == 0x03) return KeyType::secp256k1;
}
else if (slice.size() == CRYPTO_PUBLICKEYBYTES) {
return KeyType::dilithium; // 1312 bytes
}
return std::nullopt;
}

{
"TransactionType": "AccountSet",
"Account": "rAccount...",
"SetFlag": 11 // Enable quantum-only signatures
}

if (account.isFlag(lsfForceQuantum) && publicKey.size() != DILITHIUM_PK_SIZE)
    return telBAD_PUBLIC_KEY;

case KeyType::dilithium: {
uint8_t sig[CRYPTO_BYTES];
size_t len;
crypto_sign_signature(sig, &len, message.data(), message.size(), secretKey.data());
return Buffer{sig, len};
}

if (keyType == KeyType::dilithium) {
return crypto_sign_verify(
sig.data(), sig.size(),
message.data(), message.size(),
publicKey.data()) == 0;
}

// From seed
auto seed = generateSeed("masterpassphrase");
auto keyPair = generateKeyPair(KeyType::dilithium, seed);
// Random generation
auto secretKey = randomSecretKey(KeyType::dilithium);
auto publicKey = derivePublicKey(KeyType::dilithium, secretKey);

{
"TransactionType": "AccountSet",
"Account": "rQuantumAccount...",
"SetFlag": 11
}

auto signature = sign(publicKey, secretKey, transactionData);
bool isValid = verify(publicKey, transactionData, signature);
Verification: Hash chain validation
void SHAMap::visitNodes(
std::function<void(SHAMapTreeNode*)> callback)
{
std::stack<std::shared_ptr<SHAMapTreeNode>> toVisit;
toVisit.push(mRoot);
while (!toVisit.empty()) {
auto node = toVisit.top();
toVisit.pop();
callback(node.get()); // callback expects a raw pointer
// Process inner node's children
if (auto inner = std::dynamic_pointer_cast<SHAMapInnerNode>(node)) {
for (int i = 15; i >= 0; --i) { // Reverse order for stack
if (auto child = inner->getChild(i)) {
toVisit.push(child);
}
}
}
}
}

void SHAMap::visitLeaves(
std::function<void(SHAMapItem const&)> callback)
{
visitNodes([this, &callback](SHAMapTreeNode* node) {
if (auto leaf = dynamic_cast<SHAMapLeafNode*>(node)) {
callback(leaf->getItem());
}
});
}

for (auto it = shamap.begin(); it != shamap.end(); ++it) {
// Access SHAMapItem via *it
// Iteration order matches key ordering
}

void SHAMap::walkMapParallel(
std::function<void(SHAMapTreeNode*)> callback,
int numThreads)
{
// Divide tree into subtrees
// Process subtrees concurrently
// Aggregate results
}

std::vector<std::pair<SHAMapNodeID, uint256>>
SHAMap::getMissingNodes(
std::function<bool(uint256 const&)> nodeAvailable)
{
std::vector<std::pair<SHAMapNodeID, uint256>> missing;
std::stack<SHAMapNodeID> toVisit;
toVisit.push(SHAMapNodeID(0)); // Root node
while (!toVisit.empty() && missing.size() < MAX_RESULTS) {
SHAMapNodeID nodeID = toVisit.top();
toVisit.pop();
// Check if we have this node
auto node = getNode(nodeID);
if (!node) {
missing.push_back({nodeID, getExpectedHash(nodeID)});
continue;
}
// For inner nodes, check children
if (auto inner = dynamic_cast<SHAMapInnerNode*>(node.get())) {
for (int branch = 0; branch < 16; ++branch) {
uint256 childHash = inner->getChildHash(branch);
if (childHash.isValid()) {
// Child should exist
if (!nodeAvailable(childHash)) {
// Child is missing
SHAMapNodeID childID = nodeID.getChildNodeID(branch);
toVisit.push(childID);
}
}
}
}
}
return missing;
}

class SHAMapInnerNode {
// Generation counter: when this subtree was marked "complete"
std::uint32_t mFullBelow = 0;
};
if (node->mFullBelow == currentGeneration) {
// Entire subtree known complete
// Skip traversal
continue;
}

SHAMapAddNode SHAMap::addRootNode(
uint256 const& hash,
Blob const& nodeData,
SHANodeFilter* filter = nullptr)
{
// Check if root already exists with matching hash
if (mRoot && mRoot->getHash() == hash) {
return SHAMapAddNode::duplicate();
}
// Validate and deserialize
auto node = deserializeNode(nodeData, SHAMapNodeID(0));
if (!node) {
return SHAMapAddNode::invalid();
}
// Canonicalize: ensure uniqueness in cache
canonicalizeNode(node);
// Set as root
mRoot = std::dynamic_pointer_cast<SHAMapInnerNode>(node);
if (filter) {
filter->foundNode(hash);
}
return SHAMapAddNode::useful();
}

SHAMapAddNode SHAMap::addKnownNode(
SHAMapNodeID const& nodeID,
Blob const& nodeData,
SHANodeFilter* filter = nullptr)
{
// Deserialize node
auto newNode = deserializeNode(nodeData, nodeID);
if (!newNode) {
return SHAMapAddNode::invalid();
}
// Canonicalize (prevent duplicate nodes in memory)
canonicalizeNode(newNode);
// Navigate from root to parent
auto parent = getNode(nodeID.getParentNodeID());
if (!parent || !parent->isInner()) {
return SHAMapAddNode::invalid();
}
// Verify hash matches before insertion
int branch = nodeID.getBranch();
if (parent->getChildHash(branch) != newNode->getHash()) {
return SHAMapAddNode::invalid();
}
// Insert into tree
auto parentInner = std::dynamic_pointer_cast<SHAMapInnerNode>(parent);
parentInner->setChild(branch, newNode);
if (filter) {
filter->foundNode(newNode->getHash());
}
return SHAMapAddNode::useful();
}

std::shared_ptr<SHAMapTreeNode>
SHAMap::canonicalizeNode(std::shared_ptr<SHAMapTreeNode> node)
{
uint256 hash = node->getHash();
// Check cache for existing node
auto cached = mNodeCache->get(hash);
if (cached) {
return cached; // Use cached instance
}
// New node - insert into cache
mNodeCache->insert(hash, node);
return node;
}

1. Syncing node receives ledger header with:
- Ledger sequence number
- Account state tree root hash
- Transaction tree root hash
2. Request missing nodes from peers:
- Start with root hashes
- Call getMissingNodes()
- Get list of missing (NodeID, Hash) pairs
3. Fetch missing nodes from network:
- Request nodes in parallel
- Asynchronously add them with addKnownNode()
4. Repeat until complete:
- Call getMissingNodes() again
- As new nodes are added, fewer appear missing
- Eventually: getMissingNodes() returns empty list
5. Verification:
- Recompute root hashes from all nodes
- Verify they match received values
- Ledger is now complete and verified
6. Persist:
- Nodes are stored in NodeStore
- SHAMap is marked immutable
- Ledger is now available locally

Small ledger (100k accounts):
Nodes in tree: ~20,000
Network requests: ~20,000 (batch fetch reduces count)
Time to sync: seconds to minutes
Large ledger (10M accounts):
Nodes in tree: ~2,000,000
Network requests: ~100,000 (batching and parallel)
Time to sync: hours to days

State A --tx1--> State B
\--tx2--/
Multiple paths, same end state
Which actually happened?

Ledger header contains:
- Account state tree root (current balances)
- Transaction tree root (complete history)
Given state tree root + transaction tree root:
Can verify exact sequence that produced this state
No ambiguity about what happened

Tree navigation: O(log N) worst case, O(1) typical
Missing detection: O(log N) comparisons
Complete sync: O(N) for N nodes, ~O(log N) * bandwidth
Parallel optimization: N-way parallelism with thread pools

Class            Header                              Purpose
SHAMap           shamap/SHAMap.h                     Main tree class
SHAMapTreeNode   shamap/SHAMapTreeNode.h             Base node class
SHAMapInnerNode  shamap/SHAMapInnerNode.h            Inner nodes (branches)
SHAMapLeafNode   shamap/SHAMapLeafNode.h             Leaf nodes (data)
Database         nodestore/Database.h                High-level interface
Backend          nodestore/Backend.h                 Low-level interface
NodeObject       nodestore/detail/NodeObject.h       Storage unit
DatabaseNodeImp  nodestore/detail/DatabaseNodeImp.h  Single backend
Start: SHAMap.h - Overview of the class
Read: SHAMapTreeNode.h - Base class and type system
Explore: SHAMapInnerNode.h - Branch structure
Explore: SHAMapLeafNode.h - Data storage
Study: detail/SHAMap.cpp - Implementation details
Start: shamap/SHAMapMissingNode.h - Missing node representation
Study: detail/SHAMapSync.cpp - Synchronization algorithm
Cross-reference: nodestore/Database.h - Fetch operations called during sync
Understand: shamap/SHAMapNodeID.h - How nodes are identified
Start: nodestore/Database.h - Public interface
Understand: nodestore/Backend.h - Storage abstraction
Explore: nodestore/detail/NodeObject.h - Storage unit
Study: nodestore/detail/DatabaseNodeImp.h - Standard implementation
Study: nodestore/detail/DatabaseRotatingImp.h - Rotation implementation
Read: nodestore/detail/DatabaseRotatingImp.h - Architecture
Study: nodestore/detail/DatabaseRotatingImp.cpp - Implementation
Understand: Synchronization with app/misc/SHAMapStoreImp.h
Explore: nodestore/detail/TaggedCache.h - Cache implementation
Study usage in: nodestore/detail/DatabaseNodeImp.cpp
Understand metrics in: Database metrics methods
shamap/SHAMap.h - Core API
shamap/SHAMapNodeID.h - Navigation understanding
nodestore/Database.h - NodeStore API
nodestore/Backend.h - Abstraction principle
shamap/detail/SHAMap.cpp - Implementation
nodestore/detail/DatabaseNodeImp.h - Cache logic
app/misc/SHAMapStoreImp.h - Integration
shamap/SHAMapInnerNode.h - Branch structure details
shamap/SHAMapLeafNode.h - Leaf implementations
nodestore/detail/SHAMapSync.cpp - Sync algorithm
nodestore/detail/DatabaseRotatingImp.h - Rotation details
Using VS Code or similar:
Open rippled repository
Go to Definition (Ctrl+Click) to jump to class definitions
Find All References (Shift+F12) to see usage patterns
Use search to navigate between related classes
Locate tests in:
Study how these are tested to understand expected usage patterns.
NodeStore configuration in rippled.cfg:
Task               Header                         Function
Create NodeObject  nodestore/detail/NodeObject.h  createObject()
Fetch Node         nodestore/Database.h           fetchNodeObject()
Store Node         nodestore/Database.h           store()
Find in SHAMap     shamap/SHAMap.h                findLeaf()
Week 1: Understand architecture (Chapters 1-4 of this module)
Week 2: Read SHAMap.h and understand node types
Week 3: Study shamap/detail/SHAMap.cpp - implementation
Week 4: Read nodestore/Database.h and understand NodeStore design
Week 5: Study nodestore/detail/DatabaseNodeImp.h - caching
Week 6: Trace actual code execution (single transaction flow)
Week 7: Build and run tests
Week 8: Modify and experiment
This progression takes you from conceptual understanding to implementation mastery.
Location: src/libxrpl/protocol/SecretKey.cpp
Key functions:
Navigate to:
Line ~50: randomSecretKey() implementation
Line ~100: generateKeyPair() for secp256k1
Line ~150: generateKeyPair() for ed25519
Line ~200: derivePublicKey() implementations
Line ~300: sign() function
Location: src/libxrpl/protocol/PublicKey.cpp
Key functions:
Navigate to:
Line ~50: verify() function
Line ~120: verifyDigest() for secp256k1
Line ~180: Ed25519 verification
Line ~240: ecdsaCanonicality() implementation
Location: src/libxrpl/protocol/digest.cpp
Key implementations:
Navigate to:
Line ~20: sha512_half_hasher class
Line ~50: ripesha_hasher class
Line ~80: Utility functions
Location: src/libxrpl/crypto/csprng.cpp
Key implementation:
Navigate to:
Line ~30: csprng_engine class definition
Line ~50: Constructor (entropy initialization)
Line ~70: operator() (random byte generation)
Line ~90: mix_entropy() (additional entropy)
Location: src/libxrpl/protocol/tokens.cpp
Key functions:
Key generation:
Signing:
Verification:
Hashing:
Random numbers:
Address encoding:
Transaction signing:
Peer handshake:
Start at src/libxrpl/protocol/SecretKey.cpp:sign()
Find Ed25519 case in switch statement
See call to ed25519_sign()
Note: External library (ed25519-donna)
Implementation in src/ed25519-donna/
Start at src/libxrpl/protocol/PublicKey.cpp:verify()
Find secp256k1 case
Follow to verifyDigest()
See canonicality check: ecdsaCanonicality()
See secp256k1 library calls: secp256k1_ecdsa_verify()
Implementation in src/secp256k1/
Start at src/libxrpl/protocol/AccountID.cpp:calcAccountID()
See RIPESHA hash: ripesha_hasher
Implementation in src/libxrpl/protocol/digest.cpp
Double hash: SHA-256 then RIPEMD-160
Encoding: src/libxrpl/protocol/tokens.cpp:encodeBase58Token()
File           Responsibility           Key functions
SecretKey.cpp  Key management, signing  randomSecretKey(), sign()
PublicKey.cpp  Verification             verify(), ecdsaCanonicality()
digest.cpp     Hashing                  sha512Half(), RIPESHA
csprng.cpp     Random generation        crypto_prng()
Start with tests: Look in src/test/ for usage examples
Follow the data: Track how data flows through functions
Read comments: rippled has good documentation in code
Use a debugger: Step through code to understand flow
Check git history: See why code was written that way
Ask questions: rippled has active developer community
Complete all implementation steps (build integration, amendment creation, key type extension, account flag, ledger entry, new transaction type, tests) and document the process.
Create cmake/deps/dilithium.cmake to fetch/build Dilithium-2 (randomized signing + AES support).
Provide imported targets: NIH::dilithium2_ref, NIH::dilithium2aes_ref, NIH::fips202_ref.
Add include(deps/dilithium) to CMakeLists.txt.
Link NIH::dilithium2_ref in cmake/RippledCore.cmake.
Clean and rebuild:
Add to features.macro:
Add dilithium = 2 to KeyType.h; update keyTypeFromString() and to_string().
PublicKey:
Expand buffer to 1312 bytes (Dilithium public key size).
Implement size tracking and detection in publicKeyType().
Add Dilithium verify logic in PublicKey.cpp.
SecretKey:
Expand to 2528 bytes (Dilithium secret key size).
Implement generator, signing support.
Implement Dilithium generation in randomSecretKey(KeyType type).
Update Base58 handling in tokens.cpp (remove size limits blocking large keys).
Add lsfForceQuantum = 0x02000000 to LedgerFormats.h.
Add asfForceQuantum = 11 to TxFlags.h.
Update SetAccount.cpp for setting/clearing the flag.
In Transactor.cpp::checkSign():
If lsfForceQuantum set, reject non-Dilithium signatures.
Create src/test/app/MyTests_test.cpp covering:
Key generation
Dilithium signature verification
Payment signed with Dilithium
ForceQuantum flag behavior
Add QUANTUM_KEY = 'Q' to LedgerNameSpace.
In ledger_entries.macro:
Add field in sfields.macro:
Keylet helper (Indexes.cpp + header):
Add to transactions.macro:
Implement SetQuantumKey.h/.cpp:
In Transactor.cpp::checkSign():
If Dilithium signature present:
Lookup quantum key ledger entry via keylet::quantum.
Ensure stored key matches signing key.
Unit tests:
Quantum key index generation
Keylet lookup
Error cases (duplicate registration, invalid size)
Integration tests:
Register quantum key (SetQuantumKey)
Set ForceQuantum and submit non-Dilithium (expect failure)
Submit Dilithium-signed Payment (expect success)
Rotate quantum key (optional)
Include in the report:
Build System
Added CMake module snippet
Commands used (copy/paste)
Screenshot of successful build linking Dilithium
Amendment & Protocol
Feature declaration
KeyType + class changes summary
Ledger & Transactions
Quantum key ledger entry definition
SetQuantumKey transaction workflow
Code Snippets
PublicKey/SecretKey modifications
checkSign() Dilithium branch
Tests
Test file paths
Output from ./rippled -u ripple.app.MyTests
Validation
Sample Dilithium key pair sizes
Example SetQuantumKey transaction JSON
Example Payment signed with Dilithium
Issues & Resolutions
Any build or runtime errors and fixes
Public key size: 1312 bytes (Dilithium-2)
Secret key size: 2528 bytes
Signature size (verify expected range for Dilithium-2)
Amendment must be enabled (featureQuantum) for SetQuantumKey to succeed.
Dilithium: https://pq-crystals.org/dilithium/
Amendments: https://xrpl.org/amendments.html
Reference Commit: bc8f0c2e13002887d57e69e27eafd0f11260bac2
This appendix provides techniques and tools for investigating SHAMap and NodeStore behavior.
Edit rippled.cfg:
Then restart rippled and check logs:
Symptoms:
Database queries slow
Ledger close times increasing
Hit rate < 80%
Investigation:
Solutions:
Increase cache_size if memory available
Reduce cache_age for faster eviction of cold data
Check if system is memory-constrained (use free)
Symptoms:
Ledger closes slow (>10 seconds)
Database write errors in logs
Validator falling behind network
Investigation:
Solutions:
Ensure SSD (not HDD) for database
Check disk I/O isn't saturated
Increase async_threads if I/O bound
Switch to faster backend (NuDB vs RocksDB)
Symptoms:
New nodes take hours to sync
Falling behind network
High database query count
Investigation:
Solutions:
Increase cache size for better hit rate during sync
Increase async_threads (more parallel fetches)
Use faster SSD
Check network bandwidth (might be bottleneck)
Add to rippled source:
For RocksDB:
See Appendix A for codebase navigation to find files mentioned here.
The combination of SHAMap and NodeStore provides more than just efficient storage—they enable cryptographic proofs that transactions were executed correctly and state was computed honestly.
This chapter explores:
Merkle proof generation and verification
State reconstruction from transaction history
Cross-chain or light-client verification
The guarantee of verifiable ledger history
A Merkle proof allows someone to verify that data is in a tree without reconstructing the entire tree.
Algorithm: getProofPath
Example Proof:
For an account with key 0x3A7F2E1B...:
Proof Size Properties:
Verifying a proof requires only the root hash and the proof:
Algorithm: verifyProofPath
Verification Process:
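As a hedged sketch of the principle (binary tree, SHA-256, hypothetical helper names; XRPL's real proof paths are radix-16):

import { createHash } from "crypto";

const sha256 = (s: string): string => createHash("sha256").update(s).digest("hex");

// Recompute the root from a leaf plus its sibling hashes; compare with the trusted root.
function verifyProofPath(leaf: string, siblings: string[], trustedRoot: string): boolean {
  let hash = sha256(leaf);
  for (const sibling of siblings)
    hash = sha256(hash + sibling); // assumes a fixed left/right ordering for simplicity
  return hash === trustedRoot;
}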
1. Light Clients
2. Cross-Chain Bridges
3. Auditing and Compliance
4. Rollups and Scaling
The transaction tree provides a critical guarantee: verifiable unique history.
The Problem Without Transaction Tree:
The Solution: Transaction Tree
The XRP Ledger maintains both trees in every ledger:
State Reconstruction:
Why This Matters:
Every aspect of XRPL state is cryptographically verifiable:
Account Proof
Transaction Proof
Full Ledger Proof
Step 1: Trust the Ledger Header
Step 2: Request Proof
Step 3: Verify Proof
Step 4: Use Verified Data
Key Concepts:
Merkle Proofs: Logarithmic-sized proofs of inclusion
Proof Verification: O(log N) hash checks to verify
Light Clients: Verify without downloading full ledger
Unique History: Transaction tree guarantees verifiable history
Security Properties:
Performance:
This is the power of Merkle trees applied to blockchain: trustless verification at scale.
Before we dive into code, we need to understand what cryptography actually promises us. In rippled, every cryptographic operation serves one or more of four fundamental guarantees. These aren't abstract concepts—they're concrete properties that protect billions of dollars in value and enable a decentralized financial system to function without trusted intermediaries.
When you interact with the XRP Ledger, cryptography provides four essential security properties. Understanding these properties—what they mean, why they matter, and how they're achieved—is foundational to everything else in this module.
Confidentiality means that sensitive information remains hidden from those who shouldn't see it.
In XRPL Context
When two rippled nodes establish a connection, their communication is encrypted so that eavesdroppers on the network can't read their messages. The SSL/TLS layer provides this guarantee, ensuring that even though the internet is fundamentally public, peer-to-peer conversations remain private.
How It Works
Rippled uses TLS 1.2 or higher with carefully selected cipher suites that provide forward secrecy—even if a node's long-term key is compromised later, past communications remain protected.
Why It Matters
Without confidentiality:
Attackers could monitor which transactions nodes are sharing
Network topology could be mapped by observing communication patterns
Strategic information about ledger state could leak to adversaries
Integrity ensures that data hasn't been tampered with. A single flipped bit could change "send 1 XRP" to "send 100 XRP"—or worse, redirect funds to a different account entirely.
In XRPL Context
When you receive a transaction, you need to know that every byte is exactly as the sender intended. Hash functions provide this guarantee by creating unique fingerprints of data that change completely if even a single bit is modified.
How It Works
The SHA-512-Half hash function used throughout XRPL ensures that:
You can't find two different transactions with the same ID (collision resistance)
You can't create a transaction that produces a specific ID (preimage resistance)
Changing even one bit produces a completely different hash (avalanche effect)
Why It Matters
Without integrity:
Transactions could be modified in transit
Malicious nodes could alter payment amounts
The entire concept of "this transaction" becomes meaningless
Real-World Example
The hash changes so dramatically that any modification is immediately detectable.
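A sketch of the effect using Node's crypto and SHA-256 (XRPL itself uses SHA-512-Half, but the behavior is the same):

import { createHash } from "crypto";

const h = (s: string) => createHash("sha256").update(s).digest("hex");
console.log(h("send 1 XRP to rBob")); // one digest...
console.log(h("send 2 XRP to rBob")); // ...and a completely different one after a one-character change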
Authenticity proves the identity of the sender. When a transaction claims to come from a particular account, cryptographic signatures prove that the holder of that account's private key actually created it.
In XRPL Context
Every transaction on XRPL must be signed with the private key corresponding to the sending account. Without this signature, the transaction is rejected. The signature is mathematical proof that only someone with the secret key could have created it.
How It Works
The mathematical relationship between public and secret keys ensures:
Only the secret key holder can create a valid signature
Anyone with the public key can verify the signature
The signature proves authorization for this specific transaction
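A sketch using the ripple-keypairs package (which xrpl.js builds on); the message here is hypothetical:

import { generateSeed, deriveKeypair, sign, verify } from "ripple-keypairs";

const seed = generateSeed();
const { privateKey, publicKey } = deriveKeypair(seed);

const message = Buffer.from("send 1 XRP to rBob").toString("hex");
const signature = sign(message, privateKey);

console.log(verify(message, signature, publicKey)); // true: the key holder authorized this message

const tampered = Buffer.from("send 9 XRP to rBob").toString("hex");
console.log(verify(tampered, signature, publicKey)); // false: the signature covers a different message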
Why It Matters
Without authenticity:
Anyone could claim to be anyone
Funds could be stolen by impersonating account owners
The entire concept of ownership would collapse
Attack Scenario (Prevented)
Without the victim's secret key, it's computationally infeasible to create a valid signature. The attacker would need to solve the discrete logarithm problem—which would take longer than the age of the universe with all of humanity's computing power.
Non-repudiation means that once you've signed something, you can't later deny you signed it. The mathematics of digital signatures make this guarantee absolute: if your private key created a signature, there's no ambiguity, no room for doubt.
In XRPL Context
This property is crucial for a financial system where disputes might arise and proof of authorization is essential. If you signed a payment, that signature is irrefutable proof that you authorized it.
How It Works
Digital signatures create an undeniable link between:
The signer (proved by possession of the secret key)
The message (what was signed)
The time (when it was signed, via timestamps or ledger sequence)
Why It Matters
Without non-repudiation:
Senders could deny authorizing payments after they complete
Dispute resolution would be impossible
Legal accountability for transactions wouldn't exist
Financial systems couldn't function reliably
Real-World Scenario
These four properties aren't independent—they work together to create a complete security system:
Let's see all four pillars in action:
The four pillars rest on hard mathematical problems:
This asymmetry enables public-key cryptography. You can freely share your public key, and no one can derive your secret key from it.
This one-way property makes hashes perfect for integrity checking.
TLS negotiates shared secret keys that both parties know but attackers don't.
These four pillars enable a trust model where:
You don't have to trust:
Network operators
Node operators
Other validators
Anyone else
You only have to trust:
Mathematics (that the cryptographic problems are actually hard)
Your ability to protect your own secret keys
The open-source implementation (which you can audit)
This is the fundamental shift blockchain enables: from institutional trust to mathematical trust.
Rippled doesn't rely on one cryptographic technique—it uses multiple layers:
Even if one layer has a vulnerability, others provide protection.
Understanding these four pillars helps you:
When reading code:
Recognize which security property a function provides
Understand why certain checks are performed
Identify what would break if a step is skipped
When writing code:
Choose appropriate cryptographic primitives
Implement proper error handling
Avoid introducing vulnerabilities
When debugging:
Identify which security property is failing
Trace the cryptographic operation responsible
Understand what went wrong and why
The four pillars of cryptographic security are:
Confidentiality - Keep secrets secret (via encryption)
Integrity - Detect tampering (via hashing)
Authenticity - Prove identity (via signatures)
Non-repudiation - Prove authorization (via signatures)
These properties work together to create a system where trust is mathematical rather than institutional. In the chapters ahead, you'll see exactly how rippled implements these properties through specific cryptographic algorithms and careful coding practices.
The RPC (Remote Procedure Call) system is the primary interface through which external applications, wallets, and services interact with a Rippled node. Understanding its architecture is fundamental to building custom handlers that integrate seamlessly with the XRP Ledger.
Rippled's RPC architecture supports multiple transport protocols—JSON-RPC over HTTP, JSON-RPC over WebSocket, and gRPC—all converging on a unified handler dispatch system. This design ensures consistency, maintainability, and extensibility across different client types.
In this section, you'll learn how handlers are registered, discovered, and invoked within the Rippled codebase.
The RPC system consists of several key components that work together to process requests:
The handler table is a centralized registry that maps RPC command names to their corresponding handler functions.
Location: src/xrpld/rpc/detail/Handlers.cpp
Key characteristics:
Command Name: Case-sensitive string identifier (e.g., "account_info")
Handler Function: Pointer to the actual implementation function
Required Role: Minimum permission level needed to execute the command
Each handler is described by a HandlerInfo structure containing metadata:
Purpose:
Enables versioning for backward compatibility
Specifies permission requirements before execution
Defines runtime conditions (e.g., must have synced ledger)
All RPC handlers follow a standardized function signature:
Components:
Return Type: Json::Value — The JSON response object
Parameter: RPC::JsonContext& — Contains request data, ledger access, and configuration
This consistency allows the dispatcher to invoke any handler uniformly.
The JsonContext provides handlers with everything needed to process a request:
Key capabilities:
Ledger Access: Query account states, transactions, and metadata
Network Information: Node status, peer connections, consensus state
Resource Management: Track API usage and enforce limits
Authentication: Know the caller's permission level
Handlers are registered at compile time through static initialization:
When a client sends an RPC request, the system follows this flow:
The server receives a JSON-RPC request via HTTP, WebSocket, or gRPC:
The dispatcher searches the handler table:
Before invoking the handler, the system verifies the caller's role:
The system ensures required conditions are met:
Finally, the handler is executed:
The result is wrapped in a JSON-RPC response envelope and returned to the client.
Handlers can declare various capability requirements:
Example:
This ensures the handler cannot execute unless both conditions are satisfied.
Rippled's RPC system abstracts away transport details, allowing handlers to work across:
Handler Transparency: The same handler function serves all three transports—the dispatcher handles protocol-specific details.
Rippled supports API versioning to maintain backward compatibility:
Clients can specify the API version in their requests:
If no version is specified, the system uses the default (version 1) implementation.
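For example, with xrpl.js (the api_version value and address below are illustrative):

import { Client } from "xrpl";

const client = new Client("wss://s.altnet.rippletest.net:51233");
await client.connect();

const response = await client.request({
  command: "account_info",
  account: "rHb9CJAWyB4rj91VRWn96DkukG4bwdtyTh", // example address
  ledger_index: "validated",
  api_version: 2 // omit to use the server's default version
});
console.log(response.result);
await client.disconnect();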
Let's examine the registration of the widely-used account_info handler:
Registration (Handlers.cpp):
Analysis:
Command: "account_info"
Function: doAccountInfo (defined in src/xrpld/rpc/handlers/AccountInfo.cpp)
Role: USER — Available to authenticated users (not just admins)
This registration tells Rippled:
Accept requests with method: "account_info"
Ensure the caller has at least USER-level permissions
Verify a current ledger is available
Invoke doAccountInfo() with the request context
The RPC handler architecture provides a clean, extensible foundation for Rippled's API layer. Through centralized registration in a handler table, uniform function signatures, and automatic permission enforcement, the system ensures consistency across hundreds of commands while remaining easy to extend. The separation between transport protocols and handler logic means the same implementation serves HTTP, WebSocket, and gRPC clients transparently. Understanding this architecture is essential for navigating the codebase, debugging RPC issues, and preparing to implement custom handlers.
Before a node can participate in the XRP Ledger network, it must discover other nodes to connect with. The PeerFinder subsystem manages this critical process, maintaining knowledge of available peers, allocating connection slots, and orchestrating the bootstrap process when a node first joins the network.
Understanding PeerFinder is essential for debugging connectivity issues, optimizing network topology, and ensuring your node maintains healthy connections to the broader network.
rippled/src/xrpld/shamap/
├── SHAMap.h # Main SHAMap class
├── SHAMapInnerNode.h # Inner node implementation
├── SHAMapLeafNode.h # Leaf node implementations
├── SHAMapNodeID.h # Node identification
├── SHAMapHash.h # Hash computation
├── SHAMapMissingNode.h # Missing node tracking
├── detail/
│ ├── SHAMap.cpp # Core algorithms
│ ├── SHAMapSync.cpp # Synchronization logic
│ └── SHAMapDelta.cpp # Tree traversalrippled/src/xrpld/nodestore/
├── Database.h # Database interface
├── Backend.h # Backend interface
├── Manager.h # NodeStore manager
├── Types.h # Type definitions
├── Task.h # Async task definitions
├── Scheduler.h # Scheduling
├── backend/
│ ├── RocksDBFactory.cpp # RocksDB backend
│ ├── NuDBFactory.cpp # NuDB backend
│ ├── MemoryFactory.cpp # In-memory backend
│ └── NullFactory.cpp # No-op backend
├── detail/
│ ├── Database.cpp # Database implementation
│ ├── ManagerImp.cpp # Manager implementation
│ ├── BatchWriter.h/cpp # Batch write handling
│ ├── DatabaseNodeImp.h # Single-backend implementation
│ ├── DatabaseRotatingImp.h # Rotating-database implementation
│ ├── NodeObject.cpp # NodeObject implementation
│ ├── EncodedBlob.h # Encoding/decoding
│ ├── DecodedBlob.h # Blob structure
│   └── varint.h         # Variable-length integers

rippled/src/xrpld/app/
├── misc/SHAMapStore.h # SHAMap-NodeStore integration
├── misc/SHAMapStoreImp.h/cpp # Implementation
└── main/NodeStoreScheduler.h/cpp  # Background scheduling

// In shamap/SHAMap.h or detail/SHAMap.cpp
std::shared_ptr<SHAMapTreeNode> node = getNodePointer(nodeID);

// In app/misc/SHAMapStoreImp.cpp
auto obj = NodeObject::createObject(type, data, hash);
mNodeStore->store(obj);

// In nodestore/detail/DatabaseNodeImp.cpp
auto obj = mDatabase->fetchNodeObject(hash);

// In nodestore/detail/DatabaseNodeImp.cpp
auto cached = mCache.get(hash); // L1 cache
if (!cached) {
cached = mBackend->fetch(hash); // L2 backend
if (cached) mCache.insert(hash, cached);
}

cd rippled
mkdir build && cd build
cmake ..
make -j4

// Add to your code to trace execution
#include <iostream>
std::cout << "Node hash: " << node->getHash() << std::endl;
std::cout << "Node type: " << (int)node->getNodeType() << std::endl;rippled/src/test/*/shamap* and */nodestore*[node_db]
type = RocksDB # Backend choice
path = /data/node.db # Location
cache_size = 256 # Cache size in MB
cache_age = 60 # Max age in seconds
# For NuDB
# type = NuDB
# path = /data/nudb
# For Rotating
# online_delete = 256    # Keep last N ledgers

# Small validator (less memory available)
[node_db]
cache_size = 64
cache_age = 30
# Large validator (plenty of resources)
[node_db]
cache_size = 512
cache_age = 120

rippled/
├── include/xrpl/ # Public headers
│ ├── protocol/
│ │ ├── SecretKey.h # Secret key class
│ │ ├── PublicKey.h # Public key class
│ │ ├── digest.h # Hash functions
│ │ ├── KeyType.h # Key type enumeration
│ │ └── AccountID.h # Account identifier
│ └── crypto/
│ ├── csprng.h # CSPRNG interface
│ └── secure_erase.h # Secure memory erasure
│
├── src/libxrpl/ # Core library implementation
│ ├── protocol/
│ │ ├── SecretKey.cpp # Key generation, signing (404 lines)
│ │ ├── PublicKey.cpp # Verification, canonicality (328 lines)
│ │ ├── digest.cpp # Hash implementations (109 lines)
│ │ ├── AccountID.cpp # Account ID utilities
│ │ └── tokens.cpp # Base58 encoding/decoding
│ ├── crypto/
│ │ ├── csprng.cpp # CSPRNG implementation (110 lines)
│ │ ├── secure_erase.cpp # Memory wiping (35 lines)
│ │ └── RFC1751.cpp # Mnemonic words
│ └── basics/
│ └── make_SSLContext.cpp # SSL/TLS configuration
│
└── src/xrpld/ # Daemon-specific code
├── app/tx/impl/
│ └── Transactor.cpp # Transaction signature validation
├── app/ledger/
│ └── LedgerMaster.cpp # Ledger management
└── overlay/detail/
└── Handshake.cpp # Peer cryptographic handshake
// Random key generation
SecretKey randomSecretKey();
// Deterministic key generation
SecretKey generateSecretKey(KeyType type, Seed const& seed);
std::pair<PublicKey, SecretKey> generateKeyPair(KeyType type, Seed const& seed);
// Public key derivation
PublicKey derivePublicKey(KeyType type, SecretKey const& sk);
// Signing
Buffer sign(PublicKey const& pk, SecretKey const& sk, Slice const& m);
Buffer signDigest(PublicKey const& pk, SecretKey const& sk, uint256 const& digest);
// Verification
bool verify(PublicKey const& pk, Slice const& m, Slice const& sig, bool canonical);
bool verifyDigest(PublicKey const& pk, uint256 const& digest, Slice const& sig, bool canonical);
// Canonicality checking
std::optional<ECDSACanonicality> ecdsaCanonicality(Slice const& sig);
bool ed25519Canonical(Slice const& sig);
// Key type detection
std::optional<KeyType> publicKeyType(Slice const& slice);
// SHA-512-Half hasher
class sha512_half_hasher { /* ... */ };
// RIPESHA hasher
class ripesha_hasher { /* ... */ };
// Helper functions
uint256 sha512Half(Args const&... args);
uint256 sha512Half_s(Slice const& data); // Secure variant
class csprng_engine {
// Constructor: Initialize entropy
// operator(): Generate random bytes
// mix_entropy(): Add additional entropy
};
csprng_engine& crypto_prng(); // Global singleton
// Base58Check encoding
std::string encodeBase58Token(TokenType type, void const* token, std::size_t size);
// Base58Check decoding
std::string decodeBase58Token(std::string const& s, TokenType type);
// Helpers
std::string toBase58(AccountID const& id);
std::optional<AccountID> parseBase58(std::string const& s);
src/libxrpl/protocol/SecretKey.cpp
→ randomSecretKey() // Random generation
→ generateKeyPair() // Deterministic from seed
src/libxrpl/protocol/SecretKey.cpp
→ sign() // Sign message
→ signDigest() // Sign pre-hashed message
src/libxrpl/protocol/PublicKey.cpp
→ verify() // Verify signature
→ verifyDigest() // Verify digest signature
src/libxrpl/protocol/digest.cpp
→ sha512Half() // SHA-512-Half hash
→ sha512Half_s() // Secure variant
include/xrpl/protocol/digest.h
→ Hash function interfaces
src/libxrpl/crypto/csprng.cpp
→ crypto_prng() // Get CSPRNG instance
→ csprng_engine::operator() // Generate random bytes
src/libxrpl/protocol/tokens.cpp
→ encodeBase58Token() // Encode to Base58Check
→ decodeBase58Token() // Decode from Base58Check
src/libxrpl/protocol/STTx.cpp
→ STTx::sign() // Sign transaction
→ STTx::checkSign() // Verify transaction signature
src/xrpld/overlay/detail/Handshake.cpp
→ makeSharedValue() // Derive session value
→ buildHandshake() // Create handshake headers
→ verifyHandshake() // Verify peer handshake
// Headers show:
// - Function declarations
// - Class interfaces
// - Documentation comments
// - Public API
// Good for:
// - Understanding what's available
// - API reference
// - Quick lookup
// Implementation shows:
// - Actual algorithms
// - Error handling
// - Edge cases
// - Performance optimizations
// Good for:
// - Understanding how it works
// - Debugging
// - Learning
// - Contributing
# Find all signing functions
grep -r "Buffer sign" src/libxrpl/protocol/
# Find CSPRNG usage
grep -r "crypto_prng()" src/
# Find signature verification
grep -r "verify.*signature" src/
# Find Base58 encoding
grep -r "encodeBase58" src/
# Find hash function usage
grep -r "sha512Half" src/# See who wrote/modified code and why
git blame src/libxrpl/protocol/SecretKey.cpp
# See commit that introduced a function
git log -S "randomSecretKey" --source --all
# Generate ctags for symbol navigation
ctags -R src/ include/
# Jump to definition in vim
# Position cursor on function name, press Ctrl-]
// Look for:
class SomeKey {
~SomeKey() {
secure_erase(/* ... */);
}
};
// Look for:
if (operation_failed()) {
Throw<std::runtime_error>("Operation failed");
}
// Look for:
switch (publicKeyType(pk)) {
case KeyType::secp256k1:
// ...
case KeyType::ed25519:
// ...
}
cd .. && rm -r build
eval "$(pyenv init -)" && \
mkdir -p build && cd build && \
conan install .. --output-folder . --build --settings build_type=Debug && \
cmake -G Ninja \
-DCMAKE_TOOLCHAIN_FILE:FILEPATH=build/generators/conan_toolchain.cmake \
-DCMAKE_CXX_FLAGS=-DBOOST_ASIO_HAS_STD_INVOKE_RESULT \
-DCMAKE_BUILD_TYPE=Debug \
-DUNIT_TEST_REFERENCE_FEE=200 \
-Dtests=TRUE \
-Dxrpld=TRUE \
-Dstatic=OFF \
-Dassert=TRUE \
-Dwerr=TRUE ..
cmake --build . --target rippled --parallel 10
XRPL_FEATURE(Quantum, Supported::yes, VoteBehavior::DefaultNo)
cmake --build . --target rippled --parallel 10 && ./rippled -u ripple.app.MyTests
LEDGER_ENTRY(ltQUANTUM_KEY, 0x0071, QuantumKey, quantumKey, ({
{sfAccount, soeREQUIRED},
{sfQuantumPublicKey, soeREQUIRED},
{sfPreviousTxnID, soeREQUIRED},
{sfPreviousTxnLgrSeq, soeREQUIRED},
{sfOwnerNode, soeREQUIRED},
}))
TYPED_SFIELD(sfQuantumPublicKey, VL, 19)
Keylet
quantum(AccountID const& account, Slice const& quantumPublicKey) noexcept
{
return {
ltQUANTUM_KEY,
indexHash(
LedgerNameSpace::QUANTUM_KEY,
account,
quantumPublicKey)
};
}
TRANSACTION(ttSET_QUANTUM_KEY, 25, SetQuantumKey, Delegation::notDelegatable, ({
{sfQuantumPublicKey, soeREQUIRED},
}))
// SetQuantumKey.cpp (skeleton)
NotTEC
SetQuantumKey::preflight(PreflightContext const& ctx)
{
if (!ctx.rules.enabled(featureQuantum))
return temDISABLED;
if (auto ret = preflight1(ctx); !isTesSuccess(ret))
return ret;
// Validate Dilithium public key format/size
return preflight2(ctx);
}
TER
SetQuantumKey::preclaim(PreclaimContext const& ctx)
{
// Check existence (keylet::quantum)
return tesSUCCESS;
}
TER
SetQuantumKey::doApply()
{
// Create ledger entry (ltQUANTUM_KEY) with sfQuantumPublicKey
return tesSUCCESS;
}
Disable amendment (temDISABLED behavior)
SHAMapNodeID (shamap/SHAMapNodeID.h): node identification
SHAMapItem (shamap/SHAMapItem.h): data in leaf nodes
DatabaseRotatingImp (nodestore/detail/DatabaseRotatingImp.h): rotating backend
TaggedCache (nodestore/detail/TaggedCache.h): cache implementation
Add to SHAMap: addNode() in shamap/detail/SHAMap.cpp
Check cache: get() in nodestore/detail/TaggedCache.h
Rotate database: rotate() in nodestore/detail/DatabaseRotatingImp.h
Get metrics: getCountsJson() in nodestore/Database.h
tokens.cpp: Base58 encoding, encodeBase58Token()
Handshake.cpp: peer authentication, makeSharedValue()
STTx.cpp: transaction signing, STTx::sign() and STTx::checkSign()
Enable compression if disk is the bottleneck
Switch to NuDB for higher throughput
Hang on startup: diagnose with strace; check for database corruption
Consensus failing: check logs for validation errors; verify NodeStore consistency
High cache miss rate: check cache metrics; increase cache_size
Slow sync: check fetch latency; increase async_threads
Disk full: check df -h; enable online_delete
Memory leak: diagnose with Valgrind; fix code (likely nodes not freed)
Ledger Integrity: All state cryptographically provable
Condition: NEEDS_CURRENT_LEDGER — Requires access to the current open ledger
NEEDS_CURRENT_LEDGER: requires an open (current) ledger (e.g., real-time account queries)
NEEDS_CLOSED_LEDGER: requires a validated ledger (e.g., historical transaction lookups)
NEEDS_NETWORK_CONNECTION: requires peer connectivity (e.g., transaction submission)
NO_CONDITION: no special requirements (e.g., server info, ping)
Hashing: Computing cryptographic commitments at each level
Verification: Ensuring data integrity through hash chains
Finding a leaf in a SHAMap is straightforward because the account's key determines the exact path:
Algorithm: findLeaf
Step-by-Step Example:
Time Complexity:
Worst case: O(64) inner node traversals (fixed depth)
Each traversal: O(1) array access to branch pointer
Total: O(1) expected time (with high probability leaf found before depth 64)
Space Requirement:
Path from root to leaf: ~64 pointers maximum
Working memory: O(1) per operation
Hashing is fundamental to SHAMap's integrity guarantees:
Leaf Node Hashing
Leaves compute their hash from their data with a type prefix:
Example:
Inner Node Hashing
Inner nodes compute their hash from their children's hashes:
Example:
Hash Update Process
When tree is modified, hashes must be recomputed bottom-up:
Critical Property: All Hashes Change
This is why root hash is used as the ledger's cryptographic commitment. Change anything in the ledger → root hash changes.
Property 1: O(1) Subtree Comparison
Two complete SHAMaps can be compared by comparing single 256-bit values:
Implication:
Peers can quickly determine if their ledgers match:
Property 2: Efficient Difference Detection
If root hashes differ, child hashes pinpoint the differences:
Search Complexity:
Property 3: Cryptographic Proofs
Prove that a specific item is in the tree:
Proof Verification:
Proof Size:
Use Cases:
Light Clients: Verify account state without full ledger
Cross-Chain Bridges: Prove XRPL state to other chains
Auditing: Prove specific transactions were executed
When constructing a new ledger, trees must support modifications:
Adding a New Leaf
Hash Updates
After modification, hashes propagate up:
Navigation:
Keys determine paths through tree
O(1) expected lookup time
Fixed depth ensures bounded operations
Hashing:
Leaves hash their data with type prefixes
Inner nodes hash their children's hashes
Changes propagate bottom-up to root
Merkle Properties:
O(1) tree comparison through root hash
O(log N) difference detection
O(log N) proof generation and verification
Performance:
Single account lookup: microseconds (tree traversal)
Root hash computation: milliseconds (hash all changed nodes)
Proof verification: a handful of hashes (one per level of proof depth)
These properties make SHAMap practical for blockchain state management: efficient updates, verifiable state, and fast synchronization.
Tree structures and why they matter for efficiency
How Merkle trees combine both to create verifiable commitments to data
Patricia tries and why they're optimal for key-value storage
Don't skip this chapter thinking it's just theory. Every design decision in SHAMap flows directly from the properties of these data structures. Understanding the foundations will make the implementation clear.
A cryptographic hash function H has three critical properties:
1. Determinism
Same input, always same output. This enables verification: others can compute the same hash and confirm correctness.
2. Collision Resistance
Finding two different inputs with the same hash output is computationally infeasible:
3. Avalanche Effect
Tiny changes produce completely different outputs:
Implication for XRPL:
Hash output acts as a "fingerprint" of data:
Two accounts with identical state have identical hashes
Even one byte difference produces completely different hashes
Anyone can verify a claimed hash by recomputing it
Cannot forge a hash without recomputing it (collision-resistant)
XRPL uses SHA-512Half: First 256 bits of SHA-512. This gives:
256-bit output (32 bytes, fits in a uint256)
Security level of ~128 bits (same as AES-128)
Good performance for cryptographic verification
Trees are recursive data structures: a root node with zero or more children, each of which is a tree.
Key Tree Property: Logarithmic Depth
For a balanced tree with branching factor B and N items:
Depth = log_B(N)
Binary tree (B=2): 1M items → depth 20
Radix-16 tree (B=16): 1M items → depth 5
Radix-256 tree (B=256): 1M items → depth 3
Why Depth Matters:
Access time: proportional to depth
Update time: change leaf → rehash path from leaf to root (depth operations)
Synchronization: identify differences by comparing hashes (depth comparisons)
Higher branching factor = shallower tree = faster operations.
The Balance Problem:
But trees can become unbalanced:
For blockchain state, accounts are inserted in unpredictable order. Without careful tree structure, you get unbalanced trees and logarithmic operations degrade.
Patricia tries solve the balance problem by using the key itself as navigation.
Basic Idea:
For a 256-bit key (like an account address), use the bits to guide traversal:
Radix-2 (binary): Each bit (0 or 1) determines left or right child
Radix-4: Each 2-bit pair determines one of 4 children
Radix-16 (hex): Each 4-bit nibble determines one of 16 children
Radix-256: Each 8-bit byte determines one of 256 children
Example: Radix-16 Patricia Trie
Balance Guarantee:
Since navigation is determined by the key:
All paths from root to any leaf have the same structure
Tree depth depends only on key length (256 bits / 4 bits = 64 levels)
Perfect balance for any set of keys
Space Trade-off:
Inner nodes have up to 16 pointers (for radix-16), but most sparse:
Many branches are empty (no accounts in that range)
Compressed representation avoids wasting space
Net result: similar space as binary tree despite higher branching
XRPL's Choice: Radix-16
Why radix-16?
Shallow trees: 256-bit keys cap depth at 64 levels; typical depth is log_16 of ledger size (about 5 for 1M entries)
Manageable fanout: 16 children per node (balanced tree complexity)
Natural alignment: hex notation matches code
Proven in Bitcoin and Ethereum: both use similar approaches
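To see how a key drives radix-16 navigation, here is a small self-contained sketch of nibble extraction; the helper name is ours (rippled exposes equivalent utilities on its uint256 and SHAMapNodeID types).
// Return the 4-bit nibble at 'depth' from a big-endian 256-bit key.
// The result (0-15) selects which child to follow at that level.
#include <array>
#include <cstdint>
int nthNibbleSketch(std::array<std::uint8_t, 32> const& key, int depth)
{
    std::uint8_t const byte = key[depth / 2]; // two nibbles per byte
    return (depth % 2 == 0) ? (byte >> 4)     // even depth: high nibble
                            : (byte & 0x0F);  // odd depth: low nibble
}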
A Merkle tree combines tree structure with cryptographic hashing:
Definition:
Leaf nodes: Contain data (accounts, transactions)
Inner nodes: Contain hashes of their children
Root hash: Represents the entire tree
Key Property: Hash Propagation
When a leaf changes, its hash changes. This affects its parent:
Benefit: O(1) Subtree Comparison
Compare two trees:
Without Merkle trees:
With Merkle trees:
Benefit: Logarithmic Proofs
Prove that a specific item is in the tree:
XRPL's SHAMap combines both approaches:
Patricia Trie Structure:
Navigation determined by account key (256-bit hash)
Radix-16 branching (hex digit at each level)
Perfect balance regardless of account insertion order
Merkle Properties:
Each node contains hash of its content
Inner node hash computed from children's hashes
Root hash represents entire ledger state
Changes propagate up to root
The Result:
Properties:
Leaf depth: approximately log_16(number of accounts) ≈ 5 levels for 1M accounts
Hash changes: only O(log N) nodes affected by any modification
Verification: O(1) root hash comparison, O(log N) proof verification
Synchronization: O(log N) hash comparisons to find differences
XRPL uses SHA-512Half for all hashing:
Why SHA-512Half instead of SHA-256?
SHA-512 has better performance on 64-bit CPUs
Taking first 256 bits gives same security as SHA-256
But faster on modern hardware
256-bit output fits uint256 perfectly
Hash Computation in SHAMap:
Inner nodes hash their children's hashes:
Leaf nodes hash their content with a type prefix:
Why Type Prefixes?
Prevent collisions between different types of data:
Once a Merkle tree root hash is committed to (published in a ledger), that entire tree must be immutable:
Changing any leaf would change the root hash
Root hash commitment would become invalid
Trust in the ledger is broken
Solution: Snapshots
For each ledger version, create a snapshot:
Copy-on-Write:
When a mutable SHAMap modifies a shared node:
Both ledgers are verified by their root hashes, but they share most of the tree (unchanged nodes).
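The copy-on-write step can be sketched as follows, with hypothetical node types; rippled tags each node with a copy-on-write ID so a tree knows which nodes it owns exclusively.
// If 'node' is shared with an older snapshot, clone it before modifying.
// The old snapshot keeps the original; this tree gets its own copy.
#include <cstdint>
#include <memory>
struct NodeSketch
{
    std::uint32_t cowID = 0; // which tree exclusively owns this node
    // hashes, children, and item data would live here
};
std::shared_ptr<NodeSketch>
unshare(std::shared_ptr<NodeSketch> const& node, std::uint32_t myCowID)
{
    if (node->cowID == myCowID)
        return node;                                 // already exclusively owned
    auto copy = std::make_shared<NodeSketch>(*node); // clone the shared node
    copy->cowID = myCowID;                           // mark as owned by this tree
    return copy;
}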
Key Concepts:
Cryptographic hashing: Creates fingerprints of data that are collision-resistant and deterministic
Trees: Enable hierarchical organization with logarithmic depth and operations
Patricia tries: Use key structure to guarantee balance regardless of insertion order
Merkle trees: Combine hashing with tree structure to create verifiable commitments
XRPL's design: Radix-16 Patricia trie with Merkle hashing creates:
Perfect balance for any set of accounts
O(log N) operations for updates and proofs
O(1) tree comparison through root hash
Safe snapshots through copy-on-write
Why This Matters:
Every algorithm in SHAMap—navigation, hashing, synchronization—follows directly from these properties. Understanding the math explains why the code is structured the way it is.
In the next chapter, we'll see how these principles are implemented in rippled's actual C++ code.
[rpc_startup]
command = log_level
severity = debug
[logging]
debug
rpc
tail -f /var/log/rippled/rippled.log | grep -i nodestore
TRACE Ledger: Ledger opened/closed
DEBUG SHAMap: Tree operations
DEBUG NodeStore: Database operations
INFO Consensus: Validation and agreement
WARN Performance: Slow operations detected
# Get storage metrics
rippled-cli server_info | jq '.result.node_db'
# Expected output:
{
"type": "RocksDB",
"path": "/var/lib/rippled/db/rocksdb",
"cache_size": 256,
"cache_hit_rate": 0.923,
"writes": 1000000,
"bytes_written": 1000000000,
"reads": 50000000,
"cache_hits": 46150000,
"read_latency_us": 15
}
# Check database size
du -sh /var/lib/rippled/db/*
# Monitor growth
watch -n 1 'du -sh /var/lib/rippled/db/*'
# Check free space
df -h /var/lib/rippled/
# Monitor I/O
iostat -x 1 /dev/sda
# Check cache metrics
rippled-cli server_info | jq '.result.node_db.cache_hit_rate'
# Check cache size configuration
grep cache_size rippled.cfg
# Monitor cache evictions
tail -f /var/log/rippled/rippled.log | \
grep -i "evict\|cache"# Check write latency
rippled-cli server_info | \
jq '.result.node_db.write_latency_us'
# Monitor disk I/O
iotop -o -b -n 1
# Check disk space
df -h
# Monitor async queue
tail /var/log/rippled/rippled.log | \
grep -i "async.*queue"# Monitor sync progress
rippled-cli server_info | jq '.result.ledger.ledger_index'
# Track fetch operations
tail -f /var/log/rippled/rippled.log | \
grep -i "fetch\|sync"
# Monitor thread pool
ps -p $(pidof rippled) -L
# Check queue depths
rippled-cli server_info | jq '.result.node_db.async_queue_depth'
cd rippled
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make -j4
# Run under GDB
gdb --args rippled --conf /path/to/rippled.cfg
# Inside GDB:
(gdb) run
(gdb) break SHAMap::addKnownNode
(gdb) continue
# When breakpoint hit:
(gdb) print node->getHash()
(gdb) print nodeID
(gdb) step
(gdb) quit
// Node addition
break SHAMap::addKnownNode
break Database::store
// Cache operations
break TaggedCache::get
break TaggedCache::insert
// Synchronization
break SHAMap::getMissingNodes
break NodeStore::fetchNodeObject
(gdb) print node->getHash().hex()
(gdb) print nodeID.mDepth
(gdb) print nodeID.mNodeID.hex()
(gdb) print metrics.cacheHits
(gdb) print metrics.cacheMisses
# Record 60 seconds of system behavior
perf record -F 99 -p $(pidof rippled) -- sleep 60
# Analyze results
perf report
# Show flame graph
perf record -F 99 -p $(pidof rippled) -- sleep 60
perf script | stackcollapse-perf.pl | flamegraph.pl > profile.svg
# Run under memcheck (very slow)
valgrind --leak-check=full rippled --conf rippled.cfg
# Run specific test
valgrind --leak-check=full rippled --unittest test.nodestore
// In Database::fetchNodeObject
auto startTime = std::chrono::steady_clock::now();
auto obj = mBackend->fetch(hash);
auto elapsed = std::chrono::steady_clock::now() - startTime;
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed);
if (ms.count() > 100) {
JLOG(mLog.warning()) << "Slow fetch: " << hash.hex()
<< " took " << ms.count() << "ms";
}
# Build with tests enabled
cd rippled/build
cmake -DENABLE_TESTING=ON ..
# Run all tests
ctest
# Run specific test
ctest -R "shamap" -V
# Run single test file
./bin/rippled --unittest test.SHAMap
// In rippled/src/test/shamap_test.cpp
SECTION("Debug specific case") {
// Create SHAMap
auto shamap = std::make_shared<SHAMap>(...);
// Add nodes
shamap->addRootNode(...);
// Test operation
auto node = shamap->getNode(hash);
// Assert behavior
REQUIRE(node != nullptr);
REQUIRE(node->getHash() == expectedHash);
}
grep -E "^[a-z]|^\[" rippled.cfg | head -30
# CPU/Memory
top -p $(pidof rippled)
# Disk I/O
iotop -p $(pidof rippled)
# Network traffic
netstat -an | grep ripple
# File descriptors
lsof -p $(pidof rippled) | wc -l
# Use RocksDB tools
rocksdb_ldb --db=/var/lib/rippled/db/rocksdb scan
# List files
ls -lah /var/lib/rippled/db/rocksdb/
# Count errors
grep ERROR /var/log/rippled/rippled.log | wc -l
# Find slow operations
grep "took.*ms" /var/log/rippled/rippled.log
# Timeline of events
tail -f /var/log/rippled/rippled.log | \
awk '{print $1" "$2" "$3" "$4" ..."}'
# Get baseline
rippled --unittest test.SHAMap > baseline.txt 2>&1
# Modify code...
# Test after change
rippled --unittest test.SHAMap > modified.txt 2>&1
# Compare
diff baseline.txt modified.txt
# Submit transactions and measure
./load_test.sh --transactions 1000 --duration 60
# Monitor metrics
watch -n 1 'rippled-cli server_info | jq ".result.node_db"'
std::optional<std::vector<Blob>>
SHAMap::getProofPath(uint256 const& key)
{
std::vector<Blob> path;
auto node = std::dynamic_pointer_cast<SHAMapInnerNode>(mRoot);
for (int depth = 0; depth < 64; ++depth) {
// Serialize current node
Blob serialized = node->serialize();
path.push_back(serialized);
// Is this the target leaf?
if (auto leaf = std::dynamic_pointer_cast<SHAMapLeafNode>(node)) {
if (leaf->getKey() == key) {
return path; // Success
} else {
return std::nullopt; // Wrong leaf
}
}
// Navigate to next level
int branch = key.nthNibble(depth);
node = std::dynamic_pointer_cast<SHAMapInnerNode>(
node->getChild(branch));
if (!node) {
return std::nullopt; // Path doesn't exist
}
}
return std::nullopt; // Shouldn't reach here
}
Proof path (from root to leaf):
[0]: Inner node (root)
Hash: 0x1234... (all state)
Children: [3 -> 0xABCD..., 5 -> 0x5678..., ...]
[1]: Inner node (depth 1, branch 3)
Hash: 0xABCD... (state with branch 3)
Children: [A -> 0xEF12..., B -> 0x3456..., ...]
[2]: Inner node (depth 2, branch A)
Hash: 0xEF12... (state with branch 3, then A)
Children: [7 -> 0x7890..., ...]
[3]: Inner node (depth 3, branch 7)
Hash: 0x7890... (state with branch 3, A, 7)
Children: [F -> 0x8765..., ...]
[4]: Leaf node
Content: {Account, Data}
Hash: 0x8765... (account data)
Total: 5 nodes (including leaf)
Size: ~500 bytes for 5 serialized nodes
Tree size: 1 million accounts
Tree depth: log_16(1M) ≈ 5 levels
Proof includes:
5 inner nodes + 1 leaf: ~100 bytes per node
Total: ~600 bytes
Without proof:
Send all 1M accounts: gigabytes
Proof is logarithmic in tree size
Verification requires one hash per level: O(log N)
bool verifyProofPath(
uint256 const& key,
uint256 const& expectedRootHash,
std::vector<Blob> const& proof)
{
if (proof.empty()) {
return false;
}
// Start from leaf (last in proof)
auto leafNode = deserializeNode(proof.back(), /* leaf */);
if (!leafNode || leafNode->getKey() != key) {
return false;
}
// Compute leaf hash
uint256 computedHash = leafNode->computeHash();
// Walk from leaf toward root
for (int i = (int)proof.size() - 2; i >= 0; --i) {
auto innerNode = deserializeNode(proof[i], /* inner */);
if (!innerNode) {
return false;
}
// Determine which branch we came from
int depth = i;
int branch = key.nthNibble(depth);
// Verify the child hash matches
if (innerNode->getChildHash(branch) != computedHash) {
return false; // Proof is invalid
}
// Compute this node's hash for next iteration
computedHash = innerNode->computeHash();
}
// Verify final hash matches expected root
return (computedHash == expectedRootHash);
}
Given:
- Key: 0x3A7F2E1B...
- Expected root: 0x1234...
- Proof: [inner_0, inner_1, inner_2, inner_3, leaf]
Step 1: Deserialize leaf, verify it contains key
Compute leaf hash: 0x8765...
Step 2: Deserialize inner_3
Check: child[F] hash == 0x8765... ✓
Compute inner_3 hash: 0x7890...
Step 3: Deserialize inner_2
Check: child[7] hash == 0x7890... ✓
Compute inner_2 hash: 0xEF12...
Step 4: Deserialize inner_1
Check: child[A] hash == 0xEF12... ✓
Compute inner_1 hash: 0xABCD...
Step 5: Deserialize inner_0 (root)
Check: child[3] hash == 0xABCD... ✓
Compute root hash: 0x1234...
Final check: Computed hash == expected? 0x1234 == 0x1234 ✓
Result: Proof is valid! Account is in ledger.
Scenario: Mobile wallet wanting to verify account balance
Traditional: Download entire ledger (~gigabytes)
With proofs: Download only account proof (~600 bytes)
Client verifies:
1. Receives account balance and proof
2. Verifies root hash matches known ledger
3. Uses proof verification algorithm
4. Trusts balance without downloading full ledger
Scenario: Ethereum bridge wants to verify XRPL state
Bridge requires: Proof that account exists in XRPL
Verification:
1. Receives XRPL ledger header (small)
2. Receives Merkle proof (small)
3. Verifies proof against root hash
4. Acts on verified state
Scenario: Auditor wants to prove a transaction executed
Proof includes:
- Transaction existence proof (proof in tx tree)
- Resulting state proof (proof in state tree)
- Chain of ledgers connecting them
Verifier can:
- Confirm transaction was processed
- Confirm state effects were applied
- Build complete chain of evidence
Scenario: Layer-2 rollup batches XRPL transactions
Instead of: All transactions + full ledger
Rollup uses: Transaction proofs + state root
Verification:
- Verify each transaction in batch
- Verify resulting state
- All with minimal data
State progression:
Initial state: S₀ (all accounts have 0 balance)
Could have arrived via many paths:
Path 1: S₀ --tx1--> S₁ --tx2--> S₂ --tx3--> S₃
Path 2: S₀ --tx4--> S₁' --tx5--> S₂' --tx3--> S₃
Path 3: S₀ --tx6--> S₁'' --tx1--> S₂'' --tx2--> S₃
Same final state S₃, but completely different history!
Which is correct? Without history, no way to know.
struct LedgerHeader {
uint256 accountTreeHash; // Root of account state
uint256 transactionTreeHash; // Root of transaction history
// Both are cryptographically committed
// Both are signed by validators
// Both must match
};
Given: LedgerHeader with both tree hashes
Verifiable state reconstruction:
1. Fetch account tree → current state
2. Fetch transaction tree → history
3. Replay transactions: S₀ + all_tx → S_computed
4. Verify: S_computed matches account tree root
5. Verify: transaction tree contains all_tx
Result: Proven that state is correct result of transactions
Without verification: Anyone could claim different history
With verification: Only one possible history is correct
Example:
Alice claims she sent 100 XRP to Bob
Bob claims he received only 50 XRP
Ledger history is immutable and verified
Proof shows exactly what happened
No ambiguity
Prove: Account has balance 1000 XRP
Components:
1. Ledger header with account state root
2. Merkle proof from root to account leaf
3. Account data (balance, etc.)
Verification:
Verify proof leads from root to account
Account balance is in proof
Root hash signed by supermajority of validators
Prove: Transaction T was executed in ledger L
Components:
1. Ledger L header with transaction tree root
2. Merkle proof from root to transaction leaf
3. Transaction data and execution results
Verification:
Verify proof leads from root to transaction
Matches all known transaction identifiers
Root hash signed by validators
Prove: Ledger state transitions from L1 to L2
Components:
1. L1 ledger header (initial state)
2. All transactions between L1 and L2
3. L2 ledger header (final state)
Verification:
1. Verify all transaction proofs against L2
2. Verify all account proofs against L2
3. Verify L2 header is valid
Result: Complete proof that L2 is correct result of applying
all transactions to L1
// Ledger headers are small (~100 bytes)
// Signed by supermajority (>80%) of validators
// Distributed via gossip protocol
LedgerHeader verified_header = getLedgerHeader(ledgerSeq);
// Root hashes are in verified_header
// Client requests proof of account existence
MerkleProof proof = peer.requestProof(
accountID,
verified_header.accountStateRoot);
// Proof is small (~500 bytes)
// Client verifies locally (no network needed)
if (verifyProofPath(accountID,
verified_header.accountStateRoot,
proof)) {
// Account exists and is proven
// Can trust all data in proof
auto account = parseAccountFromProof(proof);
}
// Application can now use the verified account data
// with absolute confidence it's correct
double balance = account.balance; // Proven correct
Proof tampering: Detected by hash mismatch
Missing nodes: Detected by proof verification failure
Fork detection: Different ledger headers = detected forgery
State divergence: Account proof path breaks
All attacks: Detectable by cryptographic verification
Traditional verification: All data (gigabytes) + full hashing
Proof-based verification: 600 bytes + 5 hash operations
Speedup: 10,000x less data, 100,000x less compute
// From src/libxrpl/basics/make_SSLContext.cpp
// SSL context is configured with strong cipher suites
auto ctx = boost::asio::ssl::context(boost::asio::ssl::context::tlsv12);
ctx.set_options(
boost::asio::ssl::context::default_workaround |
boost::asio::ssl::context::no_sslv2 |
boost::asio::ssl::context::no_sslv3 |
boost::asio::ssl::context::single_dh_use);
// Transaction ID is computed from transaction data
uint256 transactionID = sha512Half(serializedTransaction);
// Any change to the transaction changes the ID completely
// Original: "Payment of 1 XRP" → ID: 0x7F3B9...
// Modified: "Payment of 2 XRP" → ID: 0xA21C4...
// Even 1 bit different: completely different hash
// Original transaction
{
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C",
"Destination": "rLHzPsX6oXkzU9w7fvQqJvGjzVtL5oJ47R",
"Amount": "1000000" // 1 XRP
}
// Hash: 0x7F3B9E4A...
// Attacker tries to change amount
{
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C",
"Destination": "rLHzPsX6oXkzU9w7fvQqJvGjzVtL5oJ47R",
"Amount": "100000000" // 100 XRP - just one character different!
}
// Hash: 0xA21C4F8D... - completely different!
// Signing a transaction
Buffer signature = sign(
publicKey, // Can be shared publicly
secretKey, // Must remain secret
txData // Transaction to sign
);
// Anyone can verify the signature
bool valid = verify(
publicKey, // Sender's public key
txData, // Transaction data
signature // Signature to verify
);
// Attacker tries to steal funds
Transaction fakeTx = {
"Account": "rVictimAccount...", // Victim's address
"Destination": "rAttackerAccount...",
"Amount": "1000000000" // Attacker tries to drain account
};
// Attacker creates fake signature
Buffer fakeSignature = attacker.tryToForge();
// Verification fails!
bool valid = verify(victimPublicKey, fakeTx, fakeSignature);
// Returns false - attacker doesn't have victim's secret key
// Alice signs a payment
auto [alicePubKey, aliceSecKey] = generateKeyPair(KeyType::ed25519);
Buffer sig = sign(alicePubKey, aliceSecKey, payment);
// Later, Alice claims: "I never authorized that payment!"
// But the signature proves otherwise:
bool proofOfAuthorization = verify(alicePubKey, payment, sig);
// Returns true - irrefutable proof Alice signed this
Timeline:
1. Alice signs transaction paying Bob 1000 XRP
2. Transaction is validated and included in ledger
3. Alice's balance decreases by 1000 XRP
4. Alice claims: "Someone stole my money! I never authorized this!"
Investigation:
- Retrieve the transaction from ledger history
- Extract Alice's signature from the transaction
- Verify signature against Alice's public key
- Signature verifies ✓
Conclusion:
Alice's secret key created this signature. Either:
a) Alice authorized the payment, or
b) Someone gained access to Alice's secret key
Either way, the signature is cryptographic proof that whoever held
Alice's secret key at the time authorized this payment. This is why
protecting secret keys is absolutely critical.
┌─────────────────────────────────────────┐
│ XRPL Security Model │
└─────────────────────────────────────────┘
│
┌───────────────┴───────────────┐
│ │
┌─────▼─────┐ ┌──────▼──────┐
│ Transport │ │ Application │
│ Security │ │ Security │
└─────┬─────┘ └──────┬──────┘
│ │
┌─────▼────────┐ ┌──────▼───────┐
│Confidentiality│ │ Authenticity │
│ (SSL/TLS) │ │ (Signatures) │
└──────────────┘ └──────────────┘
│ │
┌─────▼────────┐ ┌──────▼───────┐
│ Integrity │ │Non-repudiation│
│ (Hashing) │ │ (Signatures) │
└───────────────┘ └──────────────┘// 1. AUTHENTICITY - Alice creates and signs a transaction
auto tx = Payment{
.account = alice.address,
.destination = bob.address,
.amount = XRP(100)
};
Buffer signature = sign(alice.publicKey, alice.secretKey, tx);
// Only Alice's secret key can create this valid signature
// 2. INTEGRITY - Transaction is serialized and hashed
auto serialized = serialize(tx, signature);
uint256 txID = sha512Half(serialized);
// Any tampering changes the hash completely
// 3. NON-REPUDIATION - Signature proves Alice authorized this
bool authorized = verify(alice.publicKey, tx, signature);
// Alice cannot later deny signing this transaction
// 4. CONFIDENTIALITY - Transaction is sent to peers via encrypted connection
sslStream.write(serialized); // Protected by TLS encryption
// Network observers can't read transaction details
Given: PublicKey = SecretKey × G (where G is a generator point)
Hard Problem: Find SecretKey given only PublicKey
Computing PublicKey from SecretKey: microseconds
Computing SecretKey from PublicKey: longer than age of universe
Given: hash = SHA512Half(data)
Hard Problems:
1. Find different data' where SHA512Half(data') = hash (preimage)
2. Find data and data' where SHA512Half(data) = SHA512Half(data') (collision)
Computing hash from data: microseconds
Finding data from hash: computationally infeasible
Given: Encrypted = AES_Encrypt(key, plaintext)
Hard Problem: Find plaintext without key
With key: decryption in microseconds
Without key: trying all 2^256 possible keys
Defense Layer 1: Network Security
└─ TLS encryption (confidentiality)
└─ Prevents eavesdropping on communications
Defense Layer 2: Transaction Security
└─ Digital signatures (authenticity, non-repudiation)
└─ Proves who authorized what
└─ Hash-based integrity (integrity)
└─ Detects any tampering
Defense Layer 3: Protocol Security
└─ Consensus mechanism
└─ Requires majority agreement
└─ Byzantine fault tolerance
// Handler table structure (simplified)
std::map<std::string, HandlerInfo> handlerTable = {
{"account_info", {&doAccountInfo, Role::USER, RPC::NEEDS_CURRENT_LEDGER}},
{"ledger", {&doLedger, Role::USER, RPC::NEEDS_NETWORK_CONNECTION}},
{"submit", {&doSubmit, Role::USER, RPC::NEEDS_CURRENT_LEDGER}},
// ... hundreds of other handlers
};
struct HandlerInfo {
handler_type handler; // Function pointer
Role role; // Minimum role required
RPC::Condition condition; // Execution conditions
unsigned int version_min = 1; // Minimum API version
unsigned int version_max = UINT_MAX; // Maximum API version
};
Json::Value handlerName(RPC::JsonContext& context);
struct JsonContext {
Json::Value params; // Request parameters
Application& app; // Access to application services
Resource::Consumer& consumer; // Resource tracking
Role role; // Caller's permission level
std::shared_ptr<ReadView const> ledger; // Ledger view
NetworkOPs& netOps; // Network operations
LedgerMaster& ledgerMaster; // Ledger management
// ... additional context
};
// src/xrpld/rpc/handlers/MyCustomHandler.cpp
namespace ripple {
Json::Value doMyCustomCommand(RPC::JsonContext& context)
{
Json::Value result;
// Implementation here
return result;
}
} // namespace ripple
// src/xrpld/rpc/handlers/Handlers.cpp
{
"my_custom_command",
{
&doMyCustomCommand, // Function pointer
Role::USER, // Minimum role
RPC::NEEDS_CURRENT_LEDGER // Conditions
}
}
// src/xrpld/rpc/handlers/Handlers.h
Json::Value doMyCustomCommand(RPC::JsonContext&);
{
"method": "account_info",
"params": [{
"account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs"
}]
}
auto it = handlerTable.find(request["method"].asString());
if (it == handlerTable.end()) {
return rpcError(rpcUNKNOWN_COMMAND);
}
if (context.role < handlerInfo.role) {
return rpcError(rpcNO_PERMISSION);
}
if (handlerInfo.condition & RPC::NEEDS_CURRENT_LEDGER) {
if (!context.ledgerMaster.haveLedger()) {
return rpcError(rpcNO_CURRENT);
}
}
Json::Value response = handlerInfo.handler(context);
{"submit", {&doSubmit, Role::USER, RPC::NEEDS_CURRENT_LEDGER | RPC::NEEDS_NETWORK_CONNECTION}}
curl -X POST http://localhost:5005/ \
-H "Content-Type: application/json" \
-d '{
"method": "account_info",
"params": [{"account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs"}]
}'
const ws = new WebSocket('ws://localhost:6006');
ws.send(JSON.stringify({
command: 'account_info',
account: 'rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs'
}));
service XRPLedgerAPIService {
rpc GetAccountInfo(GetAccountInfoRequest) returns (GetAccountInfoResponse);
}
{
"account_info",
{
&doAccountInfo_v2, // New implementation
Role::USER,
RPC::NEEDS_CURRENT_LEDGER,
2, // Minimum API version
UINT_MAX
}
}
{
"method": "account_info",
"api_version": 2,
"params": [...]
}
{
"account_info",
{
&doAccountInfo,
Role::USER,
RPC::NEEDS_CURRENT_LEDGER
}
}
std::shared_ptr<SHAMapLeafNode> findLeaf(uint256 key) {
std::shared_ptr<SHAMapTreeNode> node = mRoot;
for (int depth = 0; depth < 64; ++depth) {
// Is this a leaf?
if (auto leaf = std::dynamic_pointer_cast<SHAMapLeafNode>(node)) {
return leaf;
}
// Must be inner node
auto inner = std::dynamic_pointer_cast<SHAMapInnerNode>(node);
// Extract 4-bit chunk (nibble) at position 'depth'
int branch = key.nthNibble(depth);
// Get child node at that branch
node = inner->getChild(branch);
if (!node) {
// Child doesn't exist - key not in tree
return nullptr;
}
}
return nullptr; // Key not found
}
Key: 0x3A7F2E1B4C9D... (account hash)
Depth 0: Root
Extract nibble 0 (first 4 bits): 3
Navigate to child 3
Depth 1:
Extract nibble 1 (next 4 bits): A
Navigate to child A in the child-3 subtree
Depth 2:
Extract nibble 2: 7
Navigate to child 7
... continue until reaching leaf
uint256 SHAMapLeafNode::computeHash() {
Blob data;
// Type prefix (1 byte) - prevents collisions
data.push_back(mLeafType); // ACCOUNT, TX, or TX_WITH_META
// Account key (32 bytes)
data.append(mItem->getTag());
// Account data (variable)
data.append(mItem->getData());
// Hash the complete structure
return SHA512Half(data);
}
Account0 data: mTag=0x123ABC..., mData=<100 XRP, flags>
Type prefix: ACCOUNT_LEAF (1 byte)
Hash input: [0x01][123ABC...][100 XRP, flags]
Hash output: 0x47FA... (256 bits)
Changed: mData to "99 XRP"
Hash input: [0x01][123ABC...][99 XRP, flags]
Hash output: 0xB8EF... (completely different)
uint256 SHAMapInnerNode::computeHash() {
Blob data;
// For each of 16 possible children
for (int i = 0; i < 16; ++i) {
if (hasChild(i)) {
// Get child's hash (whether child exists in memory or disk)
uint256 childHash = getChildHash(i);
data.append(childHash); // Append 32 bytes
}
}
// Hash all non-empty child hashes
return SHA512Half(data);
}
Inner node has children 0, 3, 7, 15:
mBranches[0] → child hash: 0xAA11...
mBranches[3] → child hash: 0xBB22...
mBranches[7] → child hash: 0xCC33...
mBranches[15] → child hash: 0xDD44...
Compute hash:
data = [0xAA11...][0xBB22...][0xCC33...][0xDD44...]
hash = SHA512Half(data)
result: 0x5678...
Scenario: Account0 balance changes
Step 1: Leaf changes
Old: Account0 leaf hash = 0xOLD1
New: Account0 leaf hash = 0xNEW1
Step 2: Parent recomputes
Parent stored hash references to all 16 children
One changed: [0xAAA][0xNEW1][0xBBB]...
Old parent hash: 0xOLD2
New parent hash: 0xNEW2
Step 3: Grandparent recomputes
Parent hash changed: 0xOLD2 → 0xNEW2
Grandparent's child references change: [...][0xNEW2][...]
Old grandparent hash: 0xOLD3
New grandparent hash: 0xNEW3
... propagates up to root
Change 1 account → 1 leaf hash changes
→ parent hash changes
→ grandparent hash changes
→ ... all ancestors up to root change
Root hash changes with certainty
bool sameLedgerState = (treeA.getRootHash() == treeB.getRootHash());
if (sameLedgerState) {
// Entire trees identical
// Verified cryptographically
} else {
// At least one difference
// Must investigate child hashes to find it
}
Peer A: "My ledger root is 0xABCD..."
Peer B: "My ledger root is 0xABCD..."
Comparison: 1 operation
Result: "Ledgers identical" (proven cryptographically)
No need to compare millions of accounts
Root hash differs: 0xABCD != 0xXYZW
Compare children:
Child 0: 0xAA == 0xAA (same)
Child 1: 0xBB == 0xBB (same)
Child 2: 0xCC != 0xXX (different!)
Child 3: 0xDD == 0xDD (same)
...
Recursively descend into Child 2 and its siblings
Eventually reach specific accounts that differ
Complete recount: O(N) comparisons (N = number of accounts)
Merkle tree method: O(log N) comparisons (log_16 of N)
Example: 1M accounts
Binary tree depth ≈ 20 (log_2 of 1M)
Radix-16 Merkle depth ≈ 5 (log_16 of 1M)
Merkle method: about 5 levels of hash comparisons to find differences
vs. 1,000,000 account comparisons
// Prove Account A exists in tree with root hash R
MerkleProof proof; // Vector of serialized nodes
std::optional<std::vector<Blob>> getProofPath(uint256 key) {
std::vector<Blob> path;
std::shared_ptr<SHAMapTreeNode> node = mRoot;
for (int depth = 0; depth < 64; ++depth) {
// Serialize current node
Blob serialized = node->serialize();
path.push_back(serialized);
if (auto leaf = std::dynamic_pointer_cast<SHAMapLeafNode>(node)) {
// Reached target leaf
return path;
}
// Descend to next node following key
auto inner = std::dynamic_pointer_cast<SHAMapInnerNode>(node);
int branch = key.nthNibble(depth);
node = inner->getChild(branch);
if (!node) {
return std::nullopt; // Key not found
}
}
return std::nullopt;
}
bool verifyProofPath(
uint256 key,
uint256 expectedRootHash,
std::vector<Blob> proof)
{
// Start from leaf end of proof
auto leafNode = deserialize(proof.back());
// Verify leaf contains the key
if (leafNode->getKey() != key) {
return false;
}
uint256 computedHash = leafNode->computeHash();
// Move from leaf toward root
for (int i = proof.size() - 2; i >= 0; --i) {
auto innerNode = deserialize(proof[i]);
// Verify the branch we came from
int depth = i; // Depth in tree
int branch = key.nthNibble(depth);
// Child hash must match computed hash
if (innerNode->getChildHash(branch) != computedHash) {
return false;
}
// Compute this node's hash
computedHash = innerNode->computeHash();
}
// Verify final hash matches expected root
return (computedHash == expectedRootHash);
}
Tree with 1M accounts:
Depth: log_16(1M) ≈ 5
Proof includes:
1 leaf node: ~50-100 bytes
5 inner nodes: ~100 bytes each
Total: ~600 bytes
Compare to:
Sending all accounts: millions of bytes
Merkle proof: <1 KB
Verification requires hashing ~5 nodes
vs. hashing millions of accounts
void addLeaf(uint256 key, SHAMapItem item) {
auto leaf = std::make_shared<SHAMapLeafNode>(key, item);
leaf->setCowID(mCowID); // Mark as owned by this tree
std::shared_ptr<SHAMapTreeNode> node = mRoot;
// Navigate to position
for (int depth = 0; depth < 64; ++depth) {
if (auto inner = std::dynamic_pointer_cast<SHAMapInnerNode>(node)) {
int branch = key.nthNibble(depth);
auto child = inner->getChild(branch);
if (!child) {
// Empty slot - insert leaf here
inner = unshareNode(inner); // Copy-on-write
inner->setChild(branch, leaf);
updateHashes(inner); // Recompute hashes up to root
return;
} else if (auto childLeaf =
std::dynamic_pointer_cast<SHAMapLeafNode>(child)) {
// Slot occupied by another leaf
// Need to split into inner node
// ... (complex branch splitting logic)
}
node = child;
}
}
}
void updateHashes(std::shared_ptr<SHAMapInnerNode> node) {
uint256 oldHash = node->getHash();
uint256 newHash = node->computeHash();
if (oldHash == newHash) {
return; // Nothing changed
}
// Find parent
auto parent = node->getParent();
if (parent) {
// Update parent's reference to this node
int branch = node->getNodeID().getNibbleAtDepth(
node->getDepth() - 1);
parent->setChildHash(branch, newHash);
// Recursively update parent
updateHashes(parent);
} else {
// This is root - update root hash
mRootHash = newHash;
}
}
H(x) always returns the same result
H("hello") = 0x2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
H("hello") = 0x2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824H(x) = H(y) where x ≠ y → "collision"
For SHA-256: Finding a collision requires ~2^128 operations
Current computers: ~10^18 operations/second
Time required: 10^20 seconds ≈ 3 billion years
Cryptographic security: "practical impossibility"H("hello") = 0x2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
H("hallo") = 0xd3751713b7ac3d1ab39b1f7b1859d6e7baeac08b8cdb1c8fefd1f96a3f3c17f8
Change one character → hash completely changes
Root
/ | \
/ | \
A B C
/ \ | / \
D E F G H
Sequential insertion:
Balanced (good): Unbalanced (bad - linked list):
4 1
/ \ \
2 6 2
/| |\ \
1 3 5 7 3
\
4
Account1 hash: 0x3A7F2E1B...
Account2 hash: 0x7B4C9D3A...
Account1: Navigate using hex digits
Level 0: Digit 0 (3) → go to child 3
Level 1: Digit 1 (A) → go to child A
Level 2: Digit 2 (7) → go to child 7
...
Eventually reaches leaf containing Account1
Account2: Navigate using hex digits
Level 0: Digit 0 (7) → go to child 7
Level 1: Digit 1 (B) → go to child B
...
Eventually reaches leaf containing Account2
// From rippled source: 16 possible children per level
static const int NUM_BRANCHES = 16;
// Each level represents 4 bits (one hex digit) of the key
for (int i = 0; i < keyLengthInBits; i += 4) {
int branch = (key >> (keyLengthInBits - i - 4)) & 0x0F;
// Navigate to child at position 'branch'
}
Step 1: Account changes
Old hash: H("Alice: 100 XRP")
New hash: H("Alice: 90 XRP")
Step 2: Parent recomputes its hash
Old hash: H(H(Account) || H(Other_Accounts))
New hash: H(H(NewAccount) || H(Other_Accounts))
Step 3: Grandparent recomputes
... and so on up the tree
Root hash changes
Tree A: Tree B:
Root: 0xABCD Root: 0xXYZW
0xABCD == 0xXYZW?
YES: Entire trees are identical
NO: At least one leaf differs
Compare all N leaves directly: O(N) time
Compare root hashes: O(1) time
If different, compare child hashes to find divergence: O(log N) comparisons
Merkle Proof for "Alice: 90 XRP":
Show: H("Alice: 90 XRP") = 0x1234...
H(Sibling1) = 0x5678...
H(Sibling2) = 0x9ABC...
Verifier computes:
Combine with siblings working up to root
Verify final hash matches claimed root
Only need to show O(log N) nodes, not entire tree
Root Hash (e.g., 0xABCD...)
/ | \
Hash(0x0...) Hash(0x1...) Hash(0x2...)
/ | \
Hash(0x00) Hash(0x01) Hash(0x20)
/ \ / | / \
Data Data Data Data Data Data
(acct) (acct)
// From rippled source
uint256 hashFunc(Blob const& data) {
using hasher = SHA512Half; // SHA-512, keep first 256 bits
return hasher()(data);
}
uint256 computeHash(SHAMapInnerNode& node) {
Blob data;
for (int i = 0; i < 16; ++i) {
if (node.hasChild(i)) {
uint256 childHash = node.getChildHash(i);
data.append(childHash); // Concatenate child hashes
}
}
return SHA512Half(data);
}
uint256 computeHash(SHAMapLeafNode& leaf, Type type) {
Blob data;
data.push_back(type); // Type prefix prevents hash collision
data.append(leaf.getKey());
data.append(leaf.getData());
return SHA512Half(data);
}
Account with data: 0xAA || 0xBBBB
Leaf type 1, data 0xAA, item 0xBBBB
Transaction with data: 0xAABB || 0xBB
Leaf type 2, data 0xAABB, item 0xBB
These have same content but different meaning!
With type prefix:
H(1 || 0xAA || 0xBBBB) ≠ H(2 || 0xAABB || 0xBB)
Current ledger state (in-memory, mutable):
SHAMap with live nodes, modified as transactions arrive
Ledger closes:
Create snapshot of current SHAMap
This becomes immutable historical ledger
All nodes locked (cannot be modified)
Next ledger:
Start new mutable SHAMap (sharing unchanged nodes)
Apply next block of transactions
Close creates new immutable snapshot
Historical Ledger 1: Historical Ledger 2:
Node A → Hash(old) Node A → Hash(old)
Transaction modifies Node A:
Can't modify Node A in place (Ledger 1 depends on it)
Create copy: Node A' → Hash(new)
Ledger 2 points to Node A'
Ledger 1 still points to Node A
The PeerFinder subsystem serves as the overlay network's "address book" and connection coordinator. It tracks known endpoints, manages connection slots, and makes intelligent decisions about which peers to connect to based on various criteria.
Key Responsibilities:
Endpoint Discovery: Learning about new potential peers from various sources
Slot Management: Allocating limited connection resources efficiently
Bootstrapping: Helping new nodes establish their initial connections
Address Quality Assessment: Evaluating the reliability and usefulness of known addresses
When a node starts, it needs to establish connections to become part of the overlay network. The bootstrapping process follows a specific priority order to ensure reliable connectivity.
The connection preference order is:
Fixed Peers (Highest Priority)
Configured in rippled.cfg under [ips_fixed]
Always-connect peers that the node prioritizes
Connections are maintained persistently
Livecache (Medium Priority)
Ephemeral cache of recently seen, active peers
Populated from peer exchange messages
Addresses are validated through successful connections
Bootcache (Lower Priority)
Persistent cache stored between node restarts
Contains addresses ranked by historical usefulness
Used when Livecache is insufficient
Hardcoded Bootstrap Nodes (Fallback)
Built-in addresses used as last resort
Ensures new nodes can always find the network
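The preference order above can be summarized in a small sketch; the candidate sources below are hypothetical stand-ins for PeerFinder's internal state, not its real API.
// Gather connection candidates in bootstrap priority order until the
// outbound target is met. Fixed peers are always included.
#include <cstddef>
#include <string>
#include <vector>
using Endpoint = std::string;
std::vector<Endpoint> selectCandidates(
    std::vector<Endpoint> const& fixedPeers, // [ips_fixed] entries
    std::vector<Endpoint> const& livecache,  // recently seen, validated
    std::vector<Endpoint> const& bootcache,  // persisted, ranked by history
    std::vector<Endpoint> const& hardcoded,  // built-in bootstrap nodes
    std::size_t outboundTarget)
{
    std::vector<Endpoint> out(fixedPeers);   // highest priority, always kept
    for (auto const* source : {&livecache, &bootcache, &hardcoded})
        for (auto const& ep : *source)
        {
            if (out.size() >= outboundTarget)
                return out;
            out.push_back(ep);
        }
    return out;
}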
PeerFinder maintains two distinct caches for endpoint information, each serving a different purpose.
The Livecache stores information about peers that have been recently active and validated:
Contents: Endpoints learned from peer exchange and successful connections
Lifetime: Exists only in memory; cleared on restart
Quality: High confidence addresses (recently verified)
Updates: Continuously refreshed as peers connect and exchange information
The Bootcache provides persistent storage of known endpoints:
Contents: Historically useful addresses ranked by reliability
Lifetime: Persisted to disk; survives restarts
Quality: Variable; addresses may become stale
Updates: Updated based on connection success/failure history
The separation allows nodes to quickly reconnect to known-good peers (Livecache) while maintaining a fallback of historically useful addresses (Bootcache).
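A toy sketch of this two-tier split follows; the types are hypothetical, and the real caches track far more metadata than shown.
#include <chrono>
#include <map>
#include <string>
using Endpoint = std::string;
using Clock = std::chrono::steady_clock;
// Volatile "live" tier of recently seen peers plus a "boot" tier ranked
// by connection history (the latter would be persisted to disk).
struct EndpointCaches
{
    std::map<Endpoint, Clock::time_point> live; // cleared on restart
    std::map<Endpoint, int> boot;               // survives restarts
    void onEndpointSeen(Endpoint const& ep) { live[ep] = Clock::now(); }
    void onConnectSuccess(Endpoint const& ep) { ++boot[ep]; } // promote
    void onConnectFailure(Endpoint const& ep) { --boot[ep]; } // demote
};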
PeerFinder manages a finite number of connection "slots" to ensure resources are allocated efficiently. Different slot types serve different purposes.
Outbound: connections initiated by this node (counts toward limits: yes)
Inbound: connections received from other nodes (counts toward limits: yes)
Fixed: connections to configured fixed peers (counts toward limits: no*)
Cluster: connections to cluster members
*Fixed peers have reserved slots that don't count against normal limits.
When OverlayImpl wants to establish a connection, it requests a slot from PeerFinder:
PeerFinder evaluates:
Current slot utilization
Endpoint reputation
Connection diversity goals
Configuration limits
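In outline, a slot request looks something like the following sketch; the types and names are hypothetical, and the real decision in PeerFinder also weighs reputation and diversity.
#include <cstddef>
struct SlotCounts
{
    std::size_t outbound = 0;    // outbound slots currently in use
    std::size_t maxOutbound = 0; // configured limit
};
enum class SlotResult { accepted, full, duplicate };
SlotResult requestOutboundSlot(SlotCounts& counts, bool alreadyConnected)
{
    if (alreadyConnected)
        return SlotResult::duplicate; // one slot per remote endpoint
    if (counts.outbound >= counts.maxOutbound)
        return SlotResult::full;      // respect configured limits
    ++counts.outbound;                // reserve the slot
    return SlotResult::accepted;
}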
PeerFinder behavior is controlled through several configuration parameters:
autoConnect: automatically connect to discovered peers (default: true)
wantIncoming: accept incoming connections (default: true)
maxPeers: maximum total peer connections (default: 21)
outPeers: target number of outbound connections
Configuration in rippled.cfg:
Peers exchange endpoint information through protocol messages, helping the network maintain connectivity.
When a peer sends endpoint messages (e.g., in response to HTTP 503 with alternatives), PeerFinder processes them:
Not all received addresses are equally trustworthy. PeerFinder assesses quality by:
Source Reliability: Addresses from established peers rank higher
Connection Success: Successfully connected addresses are promoted
Recency: Recently validated addresses are preferred
Diversity: Addresses providing network diversity are valued
When an inbound connection succeeds, it validates that the connecting peer's advertised address is reachable, improving address quality assessment.
The PeerReservationTable allows operators to reserve connection slots for specific trusted nodes:
Purpose: Ensure critical peers (validators, monitoring nodes) can always connect
Configuration: Specified by public key in configuration
Behavior: Reserved slots bypass normal connection limits
This is particularly useful for:
Ensuring validator connectivity
Maintaining cluster coherence
Supporting monitoring infrastructure
PeerFinder integrates tightly with the Overlay subsystem:
Key Integration Points:
Connection Initiation: Overlay requests slots before connecting
Slot Release: Overlay notifies PeerFinder when connections close
Auto-connect: PeerFinder periodically suggests new connections
Endpoint Updates: Overlay forwards received endpoint information
When troubleshooting peer discovery:
Check Configuration: Verify [ips_fixed] and [ips] sections
Monitor Slot Usage: Use peers command to see current connections
Review Logs: PeerFinder logs connection attempts and failures
Verify Network: Ensure firewall allows port 51235 (or configured port)
For better network connectivity:
Configure diverse fixed peers across different geographic regions
Ensure your node accepts incoming connections if possible
Monitor and maintain good peer relationships
Consider running a public node to contribute to network health
PeerFinder is the intelligence behind XRP Ledger's peer-to-peer connectivity. By managing endpoint discovery, slot allocation, and bootstrapping, it ensures nodes can reliably join and maintain connections to the network. Understanding PeerFinder helps you configure nodes optimally, debug connectivity issues, and contribute to overall network health.
← Back to SHAMap and NodeStore: Data Persistence and State Management
Complete reference for NodeStore and SHAMap configuration options in rippled.cfg.
[node_db] SectionCore database configuration:
RocksDB (Recommended)
NuDB (High-Throughput)
SQLite (Legacy)
In-Memory (Testing)
Cache Age:
Enable automatic deletion of old ledgers:
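A representative [node_db] section with rotation enabled might look like this; the backend, path, and 256-ledger horizon are placeholders to adapt to your deployment:
[node_db]
type = NuDB
path = /var/lib/rippled/db/nudb
online_delete = 256      # keep roughly the most recent 256 ledgers
advisory_delete = 0      # 0 = delete automatically without operator approval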
Without Rotation:
Import from another database:
Keep complete history without deletion:
Expected:
Memory: ~300MB
Disk: ~50GB
CPU: 20-40%
Expected:
Memory: ~500-700MB
Disk: ~50GB (with rotation)
CPU: 30-50%
Expected:
Memory: ~1.5GB
Disk: ~80GB (with rotation)
CPU: 40-60%
Expected:
Memory: ~1GB
Disk: ~500GB - 1TB
CPU: 50-70%
Check configuration syntax:
Possible causes:
cache_size too small
cache_age too short
High variance in access patterns
Check:
Fix:
Increase cache_size by 50%
Increase cache_age to 120
Verify available memory
Possible causes:
Unclean shutdown
Disk failure
Backend corruption
Recovery:
Check:
Solution:
If rotation enabled: wait for deletion to complete
If rotation disabled: enable it with online_delete = 256
Monitor with watch -n 1 'du -sh /var/lib/rippled/db'
Check:
Solutions:
Use SSD (not HDD)
Enable compression: compression = true
Switch to NuDB: type = NuDB
Increase async_threads
This configuration is production-ready for most validators.
For more details, see:
Chapter 6: NodeStore Architecture
Chapter 8: Cache Layer
Appendix B: Debugging
This appendix provides practical tools and techniques for debugging cryptographic code in rippled, testing implementations, and developing new cryptographic features.
Use-after-free
Heap buffer overflow
Stack buffer overflow
Memory leaks
Essential debugging tools and techniques:
Logging: Enable trace logging for crypto operations
Standalone mode: Test without network complexity
GDB: Step through code, inspect variables
Valgrind: Detect memory issues
Best practices:
Test with both ed25519 and secp256k1
Verify canonicality for secp256k1
Check key pair consistency
Use hex dumps for visual inspection
← Back to SHAMap and NodeStore: Data Persistence and State Management
Understanding SHAMap and NodeStore theoretically is one thing. Operating them in production is another.
This final chapter covers:
Real-world resource requirements
Performance measurement and optimization
Bottleneck identification
Tuning for different deployment scenarios
SHAMap Lookup
Batch Fetch
Ledger Close Cycle
Node Object Volume
Write Latency
Cache Hit Scenario
Cache Miss Scenario
NodeStore Memory
SHAMap Memory
Total Memory Budget
Database Growth
With Rotation
Actual Sizes on Mainnet
File Descriptor Requirements
Identifying Bottlenecks
Monitor these metrics:
Tuning Parameters
Scenario 1: High-Traffic Validator
Scenario 2: Memory-Constrained
Scenario 3: Archive Node
Lookup Performance:
Write Performance:
Memory Usage:
Disk Space:
Scalability Limits:
Key Metrics to Track
Alerting Thresholds
Now that you understand SHAMap's in-memory structure and efficient algorithms, we turn to the critical question: how do you persist this state?
Without persistence, every validator restart requires replaying transactions from genesis—weeks of computation. With persistence, recovery takes minutes. But persistence introduces challenges:
┌─────────────────────────────────────────────────────────────┐
│ PeerFinder │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Livecache │ │ Bootcache │ │ Fixed/Cluster IPs │ │
│ │ (ephemeral) │ │(persistent) │ │ (configured) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────────┬──────────┘ │
│ │ │ │ │
│ └────────────────┼────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ Slot Allocation │ │
│ │ & Connection │ │
│ │ Decisions │ │
│ └───────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Node Startup
│
▼
┌─────────────────────┐
│ Load Fixed Peers │
│ from Configuration │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Connect to Fixed │◄─── Highest Priority
│ Peers First │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Query Livecache │◄─── Recently Active Peers
│ for Active Peers │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Fall Back to │◄─── Persistent Storage
│ Bootcache │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Use Hardcoded │◄─── Last Resort
│ Bootstrap Nodes │
└─────────────────────┘
void
OverlayImpl::connect(beast::IP::Endpoint const& remote_endpoint)
{
XRPL_ASSERT(work_, "ripple::OverlayImpl::connect : work is set");
auto usage = resourceManager().newOutboundEndpoint(remote_endpoint);
if (usage.disconnect(journal_))
{
JLOG(journal_.info()) << "Over resource limit: " << remote_endpoint;
return;
}
auto const [slot, result] = peerFinder().new_outbound_slot(remote_endpoint);
if (slot == nullptr)
{
JLOG(journal_.debug()) << "Connect: No slot for " << remote_endpoint
<< ": " << to_string(result);
return;
}
auto const p = std::make_shared<ConnectAttempt>(
app_,
io_context_,
beast::IPAddressConversion::to_asio_endpoint(remote_endpoint),
usage,
setup_.context,
next_id_++,
slot,
app_.journal("Peer"),
*this);
std::lock_guard lock(mutex_);
list_.emplace(p.get(), p);
p->run();
}
[peers_max]
21
[peer_private]
0
[ips_fixed]
validator1.example.com 51235
validator2.example.com 51235
[ips]
r.ripple.com 51235
// From ConnectAttempt::processResponse
if (response_.result() == boost::beast::http::status::service_unavailable)
{
// Parse "peer-ips" header for alternative addresses
auto const ips = parse_peer_ips(response_["peer-ips"]);
if (!ips.empty())
{
// Inform PeerFinder about alternative endpoints
peerFinder().onRedirects(slot, ips);
}
}
┌─────────────────┐         ┌─────────────────┐
│ OverlayImpl │◄───────►│ PeerFinder │
├─────────────────┤ ├─────────────────┤
│ │ │ │
│ • connect() │────────►│ • new_outbound_ │
│ │ │ slot() │
│ │ │ │
│ • onHandoff() │────────►│ • new_inbound_ │
│ │ │ slot() │
│ │ │ │
│ • onPeer │────────►│ • on_closed() │
│ Deactivate() │ │ │
│ │ │ │
│ • Timer tick │────────►│ • autoconnect() │
│ │ │ │
└─────────────────┘         └─────────────────┘
online_delete: off = more disk space needed; on = smaller database, bounded growth
cache_size: lower = less memory, slower; higher = more memory, faster
cache_age: shorter = quick eviction, less memory; longer = slow eviction, more memory
async_threads: fewer = less CPU, slower I/O; more = more CPU, faster I/O
compression: off = faster disk I/O; on = slower disk I/O, less disk space
Unit tests: Verify correctness
Benchmarking: Measure performance
Inspection tools: Examine keys, signatures, addresses
Sanitizers: Catch memory errors automatically
Runtime fetch/store operations
Asynchronous background operations
Graceful shutdown
Database rotation and archival
This chapter covers these operational aspects that are critical for production XRPL nodes.
The Database class provides higher-level operations above Backend:
Key Responsibilities:
The standard implementation for most XRPL validators:
Architecture:
Storage Flow:
Fetch Flow:
Batch operations improve efficiency:
Batch Store:
Benefits:
Batch Size Limits:
Background threads handle expensive operations without blocking:
Asynchronous Fetch:
Use Cases:
Startup Sequence:
Shutdown Sequence:
For production systems needing online deletion:
Problem Solved:
Without deletion, database grows unbounded:
Rotation Architecture:
Rotation Process:
Dual Fetch Logic:
Benefits:
NodeStore exposes comprehensive metrics for monitoring:
Monitoring Typical Values:
Alert Thresholds:
Key Operational Components:
Fetch/Store: Core operations with caching
Batch Operations: Efficient bulk operations
Async Operations: Background threads for non-blocking I/O
Lifecycle: Startup, shutdown, error handling
Rotation: Online deletion and archival
Metrics: Monitoring and diagnostics
Design Properties:
Reliability: Graceful error handling, no data loss
Performance: Batch operations, async threading
Scalability: Online rotation enables unbounded operation
Observability: Comprehensive metrics for diagnostics
Flexibility: Different database backends, configurations
The Database layer transforms raw backend capabilities into a reliable, performant storage system that can handle blockchain-scale data volumes while maintaining the responsiveness required for real-time validation.
[node_db]
type = RocksDB # Type: RocksDB, NuDB, SQLite, Memory
path = /var/lib/rippled/db # Database location
cache_size = 256 # Cache size in MB (32-4096 typical)
cache_age = 60 # Cache entry age limit in seconds
[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
# RocksDB specific options
compression = true # Enable compression (reduces disk ~50%)
block_cache_size = 256 # Block cache in MB
write_buffer_size = 64 # Write buffer in MB
max_open_files = 100 # Max concurrent file handles
[node_db]
type = NuDB
path = /var/lib/rippled/db/nudb
# NuDB specific options
key_size = 32 # SHA256 key size (always 32)
block_size = 4096 # Block size for writes
[node_db]
type = SQLite
path = /var/lib/rippled/db/rippled.db
[node_db]
type = Memory
# No path needed
[node_db]
cache_size = 256 # Size in MB
# Tuning guide:
# Small (32MB): Minimal memory, slower
# Standard (256MB): Good for most validators
# Large (1GB): Better sync performance
# Very Large (4GB): Archive nodes
cache_age = 60 # Seconds before eviction
# Tuning guide:
# Short (30s): Low memory, frequent eviction
# Standard (60s): Good balance
# Long (300s): More memory, longer lifespan
[node_db]
async_threads = 4 # Background fetch threads
# Tuning guide:
# Few (2-4): Lower CPU, simpler
# Standard (4-8): Balance CPU and throughput
# Many (16-32): High-throughput systems
[node_db]
batch_write_size = 256 # Objects per batch
# Note: Most systems don't need to adjust this
# Default of 256 is well-optimized
[node_db_rotation]
online_delete = 256 # Keep last N thousand ledgers
# Ledger counts:
# 8 (1 day): 3-5 seconds per ledger
# 256 (8 days): Common for validators
# 1000 (30 days): Archive-ish
# Don't set online_delete section
# Database grows unbounded (~1-2 GB per day)
# Eventually disk fills
# Requires manual pruning
[node_db]
type = RocksDB
path = /var/lib/rippled/db/new_rocksdb
import_db = /path/to/old/database # Source database
# During startup, rippled will:
# 1. Open source database
# 2. Read all objects
# 3. Write to destination
# 4. Verify counts match
# 5. Continue with destination as primary
[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
# Don't enable online_delete
# Don't set import_db
# Result: Complete ledger history preserved
# Disk grows ~1GB per day initially
# ~500GB - 1TB for ~2 years mainnet
[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
cache_size = 128 # Limited memory
cache_age = 30 # Short lifespan
async_threads = 2 # Few threads
[node_db_rotation]
online_delete = 256 # Keep 8 days
[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
cache_size = 256 # Standard size
cache_age = 60 # Standard lifespan
async_threads = 4 # Normal concurrency
compression = true # Enable compression
[node_db_rotation]
online_delete = 256 # Keep 8 days
[node_db]
type = NuDB # Higher throughput
path = /var/lib/rippled/db/nudb
cache_size = 1024 # Large cache
cache_age = 120 # Longer lifespan
async_threads = 8 # More parallelism
batch_write_size = 512 # Larger batches
[node_db_rotation]
online_delete = 512 # Keep 15 days
[node_db]
type = RocksDB
path = /var/lib/rippled/db/rocksdb
cache_size = 512 # Medium cache
cache_age = 300 # Long lifespan
async_threads = 16 # Many threads
compression = true # Important for space
# No online_delete section - keep all history
[logging]
debug # Verbose logging
# or
info # Standard logging
# or
warning # Only warnings and errors
[rpc_startup]
command = log_level
severity = debug
# Get metrics via RPC
# rippled-cli server_info | jq '.result.node_db'
# Validate config file
rippled --validate-cfg
# Expected output:
# Config appears to be valid
# Edit rippled.cfg
nano rippled.cfg
# Change cache_size value
# Restart rippled
systemctl stop rippled
systemctl start rippled
# Cache takes effect immediately
# This requires data migration
# 1. Stop rippled
systemctl stop rippled
# 2. Export current database
rippled --export current_db export.json
# 3. Update config with new backend
nano rippled.cfg # Change type = XXX
# 4. Import to new backend
mkdir -p /var/lib/rippled/db/new_backend
rippled --import export.json --ledger-db new_backend
# 5. Backup old database
mv /var/lib/rippled/db/old_backend \
/var/lib/rippled/db/old_backend.backup
# 6. Restart with new database
systemctl start rippled
# 7. Verify it works
rippled-cli server_info | jq '.result.node_db.type'
# Add to rippled.cfg
[node_db_rotation]
online_delete = 256
# Restart rippled
systemctl restart rippled
# Monitor deletion (may take time)
tail -f /var/log/rippled/rippled.log | grep -i delete
rippled-cli server_info | jq '.result.node_db.cache_hit_rate'
# Stop rippled
systemctl stop rippled
# Backup corrupted database
mv /var/lib/rippled/db /var/lib/rippled/db.corrupt
# Restart (will resync from network)
systemctl start rippled
# Check progress
rippled-cli server_info | jq '.result.ledger.ledger_index'
df -h /var/lib/rippled/
# If near full:
du -sh /var/lib/rippled/db/*
iostat -x 1 /dev/sda # Check I/O wait
iotop -o # Check top I/O processes
[node_db]
type = RocksDB # ✓ Choose backend
path = /var/lib/rippled/db # ✓ Choose location
cache_size = 256 # ✓ Tune for hardware
cache_age = 60 # ✓ Default is good
async_threads = 4 # ✓ Default is good
compression = true # ✓ Enable (if RocksDB)
[node_db_rotation]
online_delete = 256 # ✓ Prevent unbounded growth
#!/bin/bash
# Monitor NodeStore health
while true; do
clear
echo "=== NodeStore Health Check ==="
rippled-cli server_info | jq '{
"cache_hit_rate": .result.node_db.cache_hit_rate,
"cache_size_mb": .result.node_db.cache_size,
"write_latency_us": .result.node_db.write_latency_us,
"read_latency_us": .result.node_db.read_latency_us,
"async_queue_depth": .result.node_db.async_queue_depth
}'
echo ""
echo "=== Disk Usage ==="
du -sh /var/lib/rippled/db/*
sleep 5
done
# Edit rippled.cfg
[rpc_startup]
{ "command": "log_level", "severity": "trace" }
# Or via RPC
./rippled log_level partition=Transaction severity=trace
# Watch for signature verification
./rippled --conf rippled.cfg 2>&1 | grep -i "verify\|sign\|signature"
# Watch for failures
./rippled --conf rippled.cfg 2>&1 | grep -i "tefBAD_SIGNATURE\|temINVALID"
// Add debug logging to crypto code
#include <ripple/beast/core/Journal.h>
void debugSign(PublicKey const& pk, SecretKey const& sk, Slice const& m)
{
JLOG(journal.trace()) << "Signing with key type: "
<< (publicKeyType(pk) == KeyType::ed25519 ? "ed25519" : "secp256k1");
auto sig = sign(pk, sk, m);
JLOG(journal.trace()) << "Signature size: " << sig.size();
JLOG(journal.trace()) << "Signature (hex): " << strHex(sig);
}
# Start rippled in standalone mode (no network)
./rippled --standalone --conf rippled.cfg
# Generate new account with ed25519
./rippled wallet_propose ed25519
# Output:
# {
# "account_id": "rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C",
# "key_type": "ed25519",
# "master_key": "SNIT ARMY BOOM CALF ABLE ATOM CURE BARN FOWL ASIA HEAT TOUR",
# "master_seed": "sn3nxiW7v8KXzPzAqzyHXbSSKNuN9",
# "master_seed_hex": "DEDCE9CE67B451D852FD4E846FCDE31C",
# "public_key": "aB44YfzW24VDEJQ2UuLPV2PvqcPCSoLnL7y5M1EzhdW4LnK5xMS3",
# "public_key_hex": "ED9434799226374926EDA3B54B1B461B4ABF7237962EEB1144C10A7CA6A9D32C64"
# }
# Generate with secp256k1
./rippled wallet_propose secp256k1
# Submit test transaction
./rippled submit '{
"tx_json": {
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C",
"TransactionType": "Payment",
"Destination": "rLHzPsX6oXkzU9w7fvQqJvGjzVtL5oJ47R",
"Amount": "1000000"
},
"secret": "sn3nxiW7v8KXzPzAqzyHXbSSKNuN9",
"key_type": "ed25519"
}'
# Manually close ledger
./rippled ledger_accept
# Build with debug info
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
# Start rippled in gdb
gdb --args ./rippled --standalone --conf rippled.cfg
# Common commands:
(gdb) break SecretKey.cpp:randomSecretKey # Set breakpoint
(gdb) run # Run program
(gdb) next # Step over
(gdb) step # Step into
(gdb) continue # Continue execution
(gdb) print sk # Print variable
(gdb) backtrace # Show call stack
# Examine secret key bytes
(gdb) x/32xb &secretKey # Display 32 bytes in hex
# Examine signature
(gdb) x/64xb signature.data()
# Print public key
(gdb) print /x publicKey
# Check key type
(gdb) print publicKeyType(pk)
# Break only for ed25519 keys
(gdb) break sign if publicKeyType(pk) == KeyType::ed25519
# Break on signature verification failure
(gdb) break verify if $retval == false
# Run with valgrind
valgrind --leak-check=full --show-leak-kinds=all ./rippled --standalone
# Look for:
# - Definitely lost: Memory leaks
# - Possibly lost: Potential leaks
# - Still reachable: OK (cleanup at exit)
# Detect uninitialized reads
valgrind --track-origins=yes ./rippled --standalone
# Look for:
# "Conditional jump or move depends on uninitialised value(s)"
# "Use of uninitialised value of size X"# Run all tests
./rippled --unittest
# Run specific test suite
./rippled --unittest=ripple.protocol.SecretKey
# Run with specific algorithm
./rippled --unittest=ripple.protocol.SecretKey:Ed25519
// From src/test/protocol/SecretKey_test.cpp
class SecretKey_test : public beast::unit_test::suite
{
public:
void testRandomGeneration()
{
// Generate keys
auto sk1 = randomSecretKey();
auto sk2 = randomSecretKey();
// Should be different
expect(sk1 != sk2, "Random keys should be unique");
// Should be correct size
expect(sk1.size() == 32, "Secret key should be 32 bytes");
}
void testSigning()
{
auto [pk, sk] = randomKeyPair(KeyType::ed25519);
std::vector<uint8_t> message{0x01, 0x02, 0x03};
auto sig = sign(pk, sk, makeSlice(message));
// Verify signature
bool valid = verify(pk, makeSlice(message), sig, true);
expect(valid, "Signature should verify");
// Modify message
message[0] = 0xFF;
valid = verify(pk, makeSlice(message), sig, true);
expect(!valid, "Modified message should not verify");
}
void run() override
{
testRandomGeneration();
testSigning();
}
};
BEAST_DEFINE_TESTSUITE(SecretKey, protocol, ripple);
#include <chrono>
void benchmarkSigning()
{
auto [pk, sk] = randomKeyPair(KeyType::ed25519);
std::vector<uint8_t> message(1000, 0xAA);
constexpr int iterations = 1000;
auto start = std::chrono::high_resolution_clock::now();
for (int i = 0; i < iterations; ++i) {
auto sig = sign(pk, sk, makeSlice(message));
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
std::cout << "Average signing time: "
<< (duration.count() / static_cast<double>(iterations))
<< " μs\n";
}
void compareAlgorithms()
{
std::cout << "=== Signing Performance ===\n";
// Ed25519
{
auto [pk, sk] = randomKeyPair(KeyType::ed25519);
auto time = measureSign(pk, sk, 1000);
std::cout << "Ed25519: " << time << " μs/op\n";
}
// secp256k1
{
auto [pk, sk] = randomKeyPair(KeyType::secp256k1);
auto time = measureSign(pk, sk, 1000);
std::cout << "secp256k1: " << time << " μs/op\n";
}
}
void inspectPublicKey(PublicKey const& pk)
{
std::cout << "=== Public Key Analysis ===\n";
auto type = publicKeyType(pk);
std::cout << "Type: ";
if (!type) {
std::cout << "INVALID\n";
return;
}
if (*type == KeyType::secp256k1)
std::cout << "secp256k1 (ECDSA)\n";
else if (*type == KeyType::ed25519)
std::cout << "ed25519 (EdDSA)\n";
std::cout << "Size: " << pk.size() << " bytes\n";
std::cout << "Hex: " << strHex(pk) << "\n";
auto accountID = calcAccountID(pk);
std::cout << "Account ID (hex): " << strHex(accountID) << "\n";
std::cout << "Address: " << toBase58(accountID) << "\n";
}
bool verifyKeyPair(PublicKey const& pk, SecretKey const& sk)
{
// Derive public key from secret
auto derived = derivePublicKey(publicKeyType(pk).value(), sk);
if (derived != pk) {
std::cout << "ERROR: Public key doesn't match secret key!\n";
std::cout << "Expected: " << strHex(pk) << "\n";
std::cout << "Derived: " << strHex(derived) << "\n";
return false;
}
std::cout << "✓ Key pair is consistent\n";
return true;
}
void testCanonicality(Slice const& signature)
{
auto canon = ecdsaCanonicality(signature);
if (!canon) {
std::cout << "ERROR: Invalid signature format\n";
return;
}
switch (*canon) {
case ECDSACanonicality::fullyCanonical:
std::cout << "✓ Fully canonical (S ≤ order/2)\n";
break;
case ECDSACanonicality::canonical:
std::cout << "⚠ Canonical but not fully (S > order/2)\n";
std::cout << " Should normalize for malleability prevention\n";
break;
}
}
void hexDump(void const* data, size_t size, std::string const& label = "")
{
if (!label.empty())
std::cout << label << ":\n";
auto const* bytes = static_cast<uint8_t const*>(data);
for (size_t i = 0; i < size; ++i) {
if (i % 16 == 0)
std::cout << std::hex << std::setw(4) << std::setfill('0') << i << ": ";
std::cout << std::hex << std::setw(2) << std::setfill('0')
<< static_cast<int>(bytes[i]) << " ";
if ((i + 1) % 16 == 0 || i + 1 == size)
std::cout << "\n";
}
std::cout << std::dec; // Reset to decimal
}
// Usage:
hexDump(signature.data(), signature.size(), "Signature");
# Build with ASan
cmake -DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer" \
..
make
# Run
./rippled --standalone
void debugVerificationFailure(
PublicKey const& pk,
Slice const& message,
Slice const& signature)
{
std::cout << "=== Debugging Signature Verification ===\n";
// Check public key
auto pkType = publicKeyType(pk);
if (!pkType) {
std::cout << "ERROR: Invalid public key format\n";
return;
}
std::cout << "✓ Public key type: "
<< (*pkType == KeyType::ed25519 ? "ed25519" : "secp256k1")
<< "\n";
// Check signature size
if (*pkType == KeyType::ed25519 && signature.size() != 64) {
std::cout << "ERROR: Ed25519 signature should be 64 bytes, got "
<< signature.size() << "\n";
return;
}
std::cout << "✓ Signature size: " << signature.size() << " bytes\n";
// Check canonicality
if (*pkType == KeyType::secp256k1) {
auto canon = ecdsaCanonicality(signature);
if (!canon) {
std::cout << "ERROR: Invalid DER encoding\n";
return;
}
if (*canon != ECDSACanonicality::fullyCanonical) {
std::cout << "WARNING: Signature not fully canonical\n";
}
}
// Try verification
bool valid = verify(pk, message, signature, true);
std::cout << "Verification result: " << (valid ? "✓ VALID" : "✗ INVALID") << "\n";
if (!valid) {
std::cout << "\nPossible causes:\n";
std::cout << "- Wrong public key\n";
std::cout << "- Wrong message\n";
std::cout << "- Corrupted signature\n";
std::cout << "- Algorithm mismatch\n";
}
}
Operation: Find account by ID
Worst case: O(64) node traversals
Tree depth: 256 bits / 4 bits per level = 64 levels
Typical case: O(1)
Most accounts found before depth 64
Average depth in realistic ledger: ~25 levels
Expected time:
Each traversal step: O(1) array access (branch[0..15])
Total steps: bounded by tree depth (~log16 of account count),
so lookup cost grows only logarithmically with ledger size
Cache hit: 1-10 microseconds
Direct pointer access, no I/O
Cache miss: 1-10 milliseconds
Database query required
N objects requested:
Naive (sequential):
N × database_latency = N × 10ms
Example: 100 objects = 1000ms
Batched:
single_batch_latency + deserialize
Example: 100 objects = 10ms + 5ms = 15ms
Speedup: 66x
Timeline for typical ledger close (3-5 seconds):
1. Receive transactions: 2 seconds
- Validate signatures
- Check preconditions
- Execute in SHAMap
2. Consensus: 1 second
- Reach agreement on state
- Sign ledger
3. Store phase: 0.5-1 second
- Serialize modified nodes
- Write to NodeStore
- Update indexes
Total: 3.5-5 seconds
Typical ledger modification:
200-400 transactions per ledger
Average 2-4 modified accounts per transaction
= 500-1000 modified nodes
Plus structural nodes (parent rehashing):
Depth of modified accounts: ~25 levels
= 25 ancestor nodes modified
Total objects created: ~600-1100 per ledger
At 4 ledgers/second:
2400-4400 objects/second
Database requirement:
RocksDB: Handles 10,000-50,000 obj/sec easily
NuDB: Handles 50,000-200,000 obj/sec
Store a NodeObject:
1. Cache update: 1-10 microseconds
2. Encode to blob: 100 microseconds
3. Database write: 100-1000 microseconds (SSD)
4. Batch accumulation: 10-100 milliseconds
Total for batch of 100: 10-100 milliseconds
Per object in batch: 0.1-1 millisecond
Hit rate: 95% (well-tuned system)
1000 object requests:
950 cache hits × 5 microseconds = 4.75 milliseconds
50 cache misses × 10 milliseconds = 500 milliseconds
Total: 504.75 milliseconds = 0.5 seconds
Average per request: 0.5 milliseconds
Hit rate: 60% (poorly tuned system)
1000 object requests:
600 cache hits × 5 microseconds = 3 milliseconds
400 cache misses × 10 milliseconds = 4000 milliseconds = 4 seconds
Total: 4.003 seconds
Average per request: 4 milliseconds
10x slower due to cache misses!
Cache layer:
Size: Configurable (32MB - 4GB typical)
Per object: ~100-500 bytes
At 256MB cache: ~300,000-500,000 cached objects
Backend buffers:
RocksDB: ~100-300MB for block cache
NuDB: ~50-100MB
Thread pools:
Each async thread: ~1-2MB stack
10 threads: ~20MB
Total NodeStore memory: cache_size + backend_buffers + thread_stacks
Typical: 256MB cache + 200MB backend = 500MB total
Large: 1GB cache + 300MB backend = 1.3GB total
In-memory tree of current + recent ledgers:
Active ledger: ~10-50MB
Depends on account count and modification volume
Recent immutable ledgers (kept for quick access):
2-3 most recent: ~30-150MB
Total SHAMap: 50-200MB typical
Plus cached nodes (shared with NodeStore cache):
Counted above in NodeStore memory
Minimal validator:
SHAMap: 50MB
NodeStore: 200MB
Other rippled: 100MB
Total: 350MB
Standard validator:
SHAMap: 100MB
NodeStore: 500MB
Other rippled: 100MB
Total: 700MB
Large validator:
SHAMap: 200MB
NodeStore: 2000MB
Other rippled: 100MB
Total: 2.3GB
Without rotation: Unbounded growth
Per ledger:
~600-1100 new objects per ledger
~200 bytes per object (with compression)
= 120-220KB per ledger
Per day:
~20,000 ledgers per day
= 2.4-4.4 GB per day
Per year:
= 876GB - 1.6TB per year
Clearly unsustainable (a 1TB disk fills within a year)
Retention policy: Keep last 100,000 ledgers
Ledger creation rate: 1 ledger per ~3 seconds
100,000 ledgers = ~8 days of history
Database size:
100,000 × 0.2MB = 20GB (stable)
With overhead: 30-50GB typical
Bounded growth enables indefinite operation
Small validator (RocksDB, compressed):
Database: 30-50GB
With binaries/logs: 60GB total
Archive node (full history):
Database: 500GB-1TB
With redundancy: 1.5TB total
Growth per day (with rotation):
~500MB-1GB per day
(old data deleted as new data added)
Each backend type requires different FDs:
RocksDB:
- Main database: 1
- WAL (write-ahead log): 1
- SSTable files: 20-100 (per configuration)
- Total: 25-100 FDs
NuDB:
- Main data file: 1
- Index file: 1
- Total: 2-5 FDs
Operating system overhead:
stdin, stdout, stderr: 3
Socket listening: 2-5
Network connections: ~50 typical
Total rippled process:
- Without NodeStore: 50-100 FDs
- With RocksDB: 100-200 FDs
- Comfortable limit: 4096 FDs
Configuration:
ulimit -n 4096 # Set FD limit
// Cache hit rate - most important
if (metrics.hitRate() < 90%) {
// Increase cache_size
problem = "Cache too small";
}
// Write latency - latency-sensitive
if (metrics.writeLatency > 100ms) {
// Switch to faster backend or increase batch size
problem = "Backend I/O too slow";
}
// Fetch latency
if (metrics.fetchLatency > 50ms) {
// Check cache hit rate
// Check disk health
problem = "Database queries too slow";
}
// Async queue depth
if (metrics.asyncQueueDepth > 10000) {
// Not keeping up with demand
problem = "Async processing overwhelmed";
}
[node_db]
type = RocksDB
path = /var/lib/rippled/db
# Cache tuning
cache_size = 256 # Increase if memory available
cache_age = 60 # Longer = better hit rate
# Threading
async_threads = 4 # Increase for I/O-bound systems
# Batch operations
batch_write_size = 256 # Larger batches, fewer transactions
[node_db_rotation]
online_delete = 256 # Keep 256K ledgers (8 days)
Problem: Write latency too high (ledgers close slowly)
Solution:
- Increase cache_size to 1GB+
- Switch to NuDB backend (higher throughput)
- Increase async_threads to 8-16
- Ensure SSD (not HDD)
- Increase batch_write_size
Result: Write throughput 50K+ objects/sec
Problem: Only 512MB RAM available
Solution:
- Set cache_size = 64MB (small)
- Still runs, but slower
- Increase cache_age for working set
- Monitor hit rate (may drop to 80%)
Result: Functional but slower sync and queries
Problem: Need complete history, very large disk
Solution:
- No rotation (online_delete disabled)
- RocksDB with compression
- Smaller cache_size (less frequently accessed)
- Parallel database with rotated copy
Result: Full history, terabyte+ database
Single object lookup:
Cache hit: 1-10 microseconds
Cache miss: 1-10 milliseconds
95% hit rate: ~0.5 milliseconds average
Batch operation (100 objects):
Sequential: 1000 milliseconds
Batched: 10 milliseconds
Speedup: 100x
Per ledger:
1000 objects per ledger
Per-object: 0.1-1 millisecond
Batch overhead: 10-100 milliseconds
Total per ledger: 100-1100 milliseconds
Throughput:
4 ledgers/second × 1000 objects/ledger = 4000 obj/sec
Well within RocksDB/NuDB capacity
Minimum: 200-300MB
Typical: 500-700MB
Large: 2-4GB
Depends on cache_size configuration
With rotation: 30-50GB (8-10 days history)
Unbounded: ~1TB per year (without rotation)
Growth rate: ~500MB-1GB per day
Current network:
2000+ validators
100-400 transactions/ledger
Proven sustainable
Theoretical limits:
Cache hit rate: 80%+ maintainable at any size
Write throughput: 100K obj/sec possible
Read throughput: 1M obj/sec with cache
Practical limits:
Memory: 4-16GB per validator typical
Disk: 100GB-1TB per validator typical
Network: Synchronization limits transaction volume
Consensus: Agreement time limits throughput
1. Cache Statistics:
- Hit rate (target: >90%)
- Size (should be close to configured max)
- Eviction rate
2. Database Performance:
- Write latency (target: <100ms per ledger)
- Read latency (target: <50ms per request)
- Queue depth (target: <1000)
3. Resource Usage:
- Memory (should stabilize)
- CPU (typically 20-50% on modern systems)
- Disk I/O (peaks during sync)
4. Application:
- Ledger close time (target: 3-5 seconds)
- Synchronization lag (target: 0 when caught up)
- Block proposal success (target: >95%)
Warning:
- Hit rate < 80%
- Write latency > 200ms
- Queue depth > 5000
Critical:
- Hit rate < 60%
- Write latency > 500ms
- Ledger close > 10 seconds
- Disk space < 10GB free
class Database {
public:
// Synchronous operations
std::shared_ptr<NodeObject> fetchNodeObject(
uint256 const& hash,
std::uint32_t ledgerSeq = 0);
void store(std::shared_ptr<NodeObject> const& obj);
void storeBatch(std::vector<std::shared_ptr<NodeObject>> const& batch);
// Asynchronous operations
void asyncFetch(
uint256 const& hash,
std::function<void(std::shared_ptr<NodeObject>)> callback);
// Management
void open(std::string const& path);
void close();
// Metrics and diagnostics
Json::Value getCountsJson() const;
};
Application
↓
DatabaseNodeImp (Coordination)
/ | \
/ | \
Cache Backend Threads
(Hot)    (Disk)    (Async)
void DatabaseNodeImp::store(std::shared_ptr<NodeObject> const& obj) {
// Step 1: Update cache immediately (likely reaccess soon)
{
std::lock_guard<std::mutex> lock(mCacheLock);
mCache.insert(obj->getHash(), obj);
}
// Step 2: Encode to persistent format
Blob encoded = encodeObject(obj);
// Step 3: Persist to backend
Status status = mBackend->store(obj->getHash(), encoded);
if (status != Status::ok) {
// Log error but don't crash
// Backend error doesn't lose data (already in cache)
logError("Backend store failed", status);
}
// Step 4: Update metrics
mMetrics.bytesWritten += encoded.size();
mMetrics.objectsWritten++;
}
std::shared_ptr<NodeObject> DatabaseNodeImp::fetchNodeObject(
uint256 const& hash,
uint32_t ledgerSeq)
{
// Step 1: Check cache
{
std::lock_guard<std::mutex> lock(mCacheLock);
auto cached = mCache.get(hash);
if (cached) {
mMetrics.cacheHits++;
return cached;
}
}
// Step 2: Query backend (potentially slow)
Blob encoded;
Status status = mBackend->fetch(hash, encoded);
std::shared_ptr<NodeObject> result;
if (status == Status::ok) {
result = decodeObject(hash, encoded);
} else if (status == Status::notFound) {
// Not found - cache dummy to prevent retry
result = nullptr;
} else {
// Backend error
logWarning("Backend fetch error", status);
return nullptr;
}
// Step 3: Update cache
{
std::lock_guard<std::mutex> lock(mCacheLock);
if (result) {
mCache.insert(hash, result);
} else {
mCache.insertDummy(hash);
}
}
// Step 4: Update metrics
mMetrics.cacheMisses++;
mMetrics.bytesRead += encoded.size();
return result;
}
void DatabaseNodeImp::storeBatch(
std::vector<std::shared_ptr<NodeObject>> const& batch)
{
// Step 1: Update cache for all objects
{
std::lock_guard<std::mutex> lock(mCacheLock);
for (auto const& obj : batch) {
mCache.insert(obj->getHash(), obj);
}
}
// Step 2: Encode all objects
std::vector<std::pair<uint256, Blob>> encoded;
encoded.reserve(batch.size());
for (auto const& obj : batch) {
encoded.emplace_back(obj->getHash(), encodeObject(obj));
}
// Step 3: Store atomically in backend
Status status = mBackend->storeBatch(encoded);
// Step 4: Update metrics
for (auto const& [hash, blob] : encoded) {
mMetrics.bytesWritten += blob.size();
}
mMetrics.objectsWritten += batch.size();
}
Without batch:
Write 1000 objects → 1000 backend transactions
1000 disk I/O operations
With batch:
Write 1000 objects → 1 backend transaction
1 disk I/O operation (atomic write)
Throughput improvement: 10-50x depending on backend
static const size_t BATCH_WRITE_PREALLOCATE_SIZE = 256;
static const size_t BATCH_WRITE_LIMIT_SIZE = 65536;
// Prevents:
// 1. Memory exhaustion (unbounded batches)
// 2. Transaction timeout (backend transaction too large)
// 3. Excessive latency (batching too much)
void DatabaseNodeImp::asyncFetch(
uint256 const& hash,
std::function<void(std::shared_ptr<NodeObject>)> callback)
{
// Step 1: Queue request
mAsyncQueue.enqueue({hash, callback});
// Step 2: Background thread processes
// Wakes up, dequeues batch, fetches, invokes callbacks
// Meanwhile, caller continues without blocking
}
// Thread pool implementation
void asyncWorkerThread() {
while (running) {
// Wait for work or timeout
auto batch = mAsyncQueue.dequeueBatch(timeout);
if (batch.empty()) {
continue;
}
// Fetch all in batch (more efficient)
std::vector<uint256> hashes;
for (auto const& [hash, callback] : batch) {
hashes.push_back(hash);
}
auto results = mBackend->fetchBatch(hashes);
// Invoke callbacks
for (auto const& [hash, callback] : batch) {
auto result = results[hash];
callback(result);
}
}
}
1. Synchronization:
Requesting many nodes from network
Can queue hundreds of async fetches
Process results as they arrive
2. API queries:
Historical account queries
Don't block validator thread
Return results via callback
3. Background tasks:
Cache warming
Prefetching likely-needed nodes
Doesn't impact real-time performance
void DatabaseNodeImp::open(DatabaseConfig const& config) {
// Step 1: Parse configuration
std::string backend_type = config.get<std::string>("type");
std::string database_path = config.get<std::string>("path");
// Step 2: Create backend instance
mBackend = createBackend(backend_type, database_path);
// Step 3: Open backend (connects to database)
Status status = mBackend->open();
if (status != Status::ok) {
throw std::runtime_error("Failed to open database");
}
// Step 4: Allocate cache
size_t cache_size_mb = config.get<size_t>("cache_size");
mCache.setMaxSize(cache_size_mb * 1024 * 1024);
// Step 5: Start background threads
int num_threads = config.get<int>("async_threads", 4);
for (int i = 0; i < num_threads; ++i) {
mThreadPool.emplace_back([this] { asyncWorkerThread(); });
}
// Step 6: Optional: import from another database
if (config.has("import_db")) {
importFromDatabase(config.get<std::string>("import_db"));
}
// Step 7: Ready for operations
mReady = true;
}
void DatabaseNodeImp::close() {
// Step 1: Stop accepting new operations
mReady = false;
// Step 2: Wait for in-flight async operations to complete
mAsyncQueue.stop();
for (auto& thread : mThreadPool) {
thread.join();
}
// Step 3: Flush any pending writes
// (Most backends buffer writes)
mBackend->flush();
// Step 4: Clear cache (will be regenerated on restart)
mCache.clear();
// Step 5: Close backend database
Status status = mBackend->close();
if (status != Status::ok) {
logWarning("Backend close not clean", status);
}
}
Each ledger adds new nodes
Over time: thousands of gigabytes
Eventually: disk full
Options:
1. Stop validator (unacceptable)
2. Manual pruning (requires downtime)
3. Rotation (online deletion)
Application
↓
DatabaseRotatingImp
/ | \
/ | \
Writable Archive Cache
Backend Backend
(New)      (Old)
void DatabaseRotatingImp::rotate() {
// Step 1: Stop writes to current backend
auto old_writable = mWritableBackend;
auto old_archive = mArchiveBackend;
// Step 2: Create new writable backend
mWritableBackend = createNewBackend();
mWritableBackend->open();
// Step 3: Transition current writable → archive
mArchiveBackend = old_writable;
// Step 4: Delete old archive (in background)
deleteBackendAsync(old_archive);
// Step 5: Copy critical data if needed
// (e.g., ledger headers required for validation)
copyCriticalData(old_archive, mWritableBackend);
// Step 6: Continue operation with no downtime
}
std::shared_ptr<NodeObject> DatabaseRotatingImp::fetchNodeObject(
uint256 const& hash,
uint32_t ledgerSeq,
bool duplicate)
{
// Check cache first
auto cached = mCache.get(hash);
if (cached) {
return cached;
}
// Try writable (current ledgers)
auto obj = mWritableBackend->fetch(hash);
if (obj) {
mCache.insert(hash, obj);
return obj;
}
// Try archive (older ledgers)
obj = mArchiveBackend->fetch(hash);
if (obj) {
mCache.insert(hash, obj);
// Optionally duplicate to writable for longevity
if (duplicate) {
mWritableBackend->store(hash, obj);
}
return obj;
}
return nullptr;
}
With rotation (keep last 100k ledgers):
Ledgers 1000001-1100000: stored in Writable
Ledgers 900001-1000000: stored in Archive
Ledgers 900000 and older: deleted at rotation
Old archive deleted → disk space reclaimed
No downtime, no backups needed, bounded growth
struct NodeStoreMetrics {
// Storage metrics
uint64_t objectsWritten;
uint64_t bytesWritten;
std::chrono::microseconds writeLatency;
// Retrieval metrics
uint64_t objectsFetched;
uint64_t cacheHits;
uint64_t cacheMisses;
uint64_t bytesFetched;
std::chrono::microseconds fetchLatency;
// Cache metrics
size_t cacheObjects;
double cacheHitRate() const {
return cacheHits / (double)(cacheHits + cacheMisses);
}
// Threading metrics
size_t asyncQueueDepth;
int activeAsyncThreads;
};
Hit rate: 92-96% (well-configured systems)
Write latency: 0.1-1 ms per object
Fetch latency: 0.01-0.1 ms per object (mostly cache hits)
Cache size: 128MB - 2GB
Async queue depth: 0-100 (queue length)
If hit rate < 80%: Cache too small or thrashing
If write latency > 10ms: Backend I/O struggling
If queue depth > 10000: Not keeping up with load
If fetch latency > 100ms: Serious performance issue
Performance: Database queries are 1000x slower than memory access
Flexibility: Different operators need different storage engines
Reliability: Data must survive crashes without corruption
The NodeStore solves all these problems through elegant abstraction and careful design.
NodeStore sits at a critical junction in XRPL's architecture:
SHAMap's Dependency:
SHAMap needs to retrieve historical nodes:
But SHAMap doesn't know or care about:
How data is stored
Which database backend is used
Where the data is physically located
How caching is implemented
All that complexity is hidden behind NodeStore's interface.
NodeStore provides four critical services:
1. Persistence
2. Consistent Interface
3. Performance Optimization
4. Lifecycle Management
The atomic unit of storage in XRPL is the NodeObject:
Structure:
Key Characteristics:
Immutable Once Created: Cannot modify data after creation
Hash as Key: Hash uniquely identifies the object
Type Distinguishing: Type prevents hash collisions between different data types
Serialized Format: Data is already in wire format
NodeObject Types:
hotLEDGER (1): Ledger headers and metadata
hotACCOUNT_NODE (3): Account state tree nodes
hotTRANSACTION_NODE (4): Transaction tree nodes
hotUNKNOWN (0): Unknown/unrecognized types
Type Prefix in Hashing:
Type fields prevent collisions:
Creation
Storage
Caching
Retrieval
Archival
The Backend class defines the minimal interface for any storage system:
Core Operations:
Status Codes:
Backend Independence:
NodeStore sits above the backends, so application logic stays the same no matter which one is configured:
RocksDB (Recommended for Most Cases)
Modern key-value store developed by Facebook
LSM tree (Log-Structured Merge tree) design
Excellent performance for XRPL workloads
Built-in compression support
Active maintenance
Characteristics:
Write throughput: ~10,000-50,000 objects/second
Read throughput: ~100,000+ objects/second
Compression: Reduces disk space by 50-70%
NuDB (High-Throughput Alternative)
Purpose-built for XRPL by Ripple
Append-only design optimized for SSD
Higher write throughput than RocksDB
Efficient space utilization
Characteristics:
Write throughput: ~50,000-200,000 objects/second
Read throughput: ~100,000+ objects/second
Better for high-volume systems
Testing Backends
To enable backend independence, NodeStore uses a standardized encoding:
Encoded Blob Structure:
Encoding Process:
Decoding Process:
Benefits:
Backend Agnostic: Any backend can store/retrieve encoded blobs
Self-Describing: Type embedded, forward-compatible with unknown types
Efficient: Minimal overhead (8 bytes) per object
Validated: Type byte catches most corruption
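To make this concrete, here is a minimal encode/decode sketch for the blob format described above. It is illustrative only, assuming an 8-byte reserved header followed by a type byte; check rippled's NodeStore source for the exact layout.
#include <algorithm>
#include <cstdint>
#include <vector>

using Blob = std::vector<std::uint8_t>;

// Sketch: [8 reserved bytes][1 type byte][payload...]
Blob encodeObject(std::uint8_t type, Blob const& payload)
{
    Blob out(9 + payload.size(), 0);  // reserved bytes zero-filled
    out[8] = type;                    // type byte distinguishes object kinds
    std::copy(payload.begin(), payload.end(), out.begin() + 9);
    return out;
}

// Sketch: validate the header before trusting the payload
bool decodeObject(Blob const& encoded, std::uint8_t& type, Blob& payload)
{
    if (encoded.size() < 9)
        return false;  // too short to contain a header
    type = encoded[8];
    payload.assign(encoded.begin() + 9, encoded.end());
    return true;
}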
The database key is the object's hash (not a sequential ID):
Implications:
Direct Retrieval: Any node retrievable by hash
Deduplication: Identical content produces identical hash → same key
Immutability: Hash never changes for given data
Verification: Can verify data by recomputing hash
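Because the key is the content hash, integrity verification is simple recomputation. A sketch of the idea, assuming rippled's sha512Half and Slice helpers (the real code also mixes a type prefix into the hash input):
// Sketch: after fetching a blob by key, recompute the hash and compare.
// A mismatch means the stored bytes are corrupt or were misfiled.
bool verifyFetched(uint256 const& key, Slice const& storedData)
{
    return sha512Half(storedData) == key;
}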
NodeStore integrates with SHAMap through the Family pattern:
Key Architectural Elements:
NodeObject: Atomic storage unit (type, hash, data)
Backend Interface: Minimal, consistent interface for storage
Abstraction: Decouples application logic from storage implementation
Encoding Format: Standardized format enables backend independence
Key as Hash: Direct retrieval without index lookups
Family Pattern: Provides access to caching and storage
Design Properties:
Backend Flexibility: Switch storage engines without code changes
Scale: Handles millions of objects efficiently
Persistence: Survives crashes and restarts
Verification: Data integrity through hashing
Simplicity: Minimal interface hides complexity
In the next chapter, we'll explore the critical Cache Layer that makes NodeStore practical for high-performance systems.
Security is paramount when building RPC handlers that interact with the XRP Ledger. Rippled implements a comprehensive role-based access control (RBAC) system to ensure that only authorized clients can execute sensitive operations.
In this section, you'll learn how to properly configure permissions, implement role checks, manage resource limits, and protect your custom handlers from unauthorized access.
Rippled defines five distinct permission levels:
FORBID: blacklisted client (blocked due to abuse)
GUEST: unauthenticated public access (public API endpoints, read-only queries)
USER: authenticated client (standard API operations, account queries)
IDENTIFIED: trusted gateway or service (elevated resource limits)
ADMIN: full administrative access (server control, unrestricted resources)
Source Location: src/xrpld/core/Config.h
Roles are assigned based on the client's IP address and connection type:
File: rippled.cfg
When registering a handler, specify the minimum required role:
The RPC dispatcher automatically enforces role requirements before invoking handlers:
For fine-grained control:
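The registration and enforcement snippets did not survive export; the role-check pattern inside a handler looks roughly like this (hypothetical handler name; the context, role, and error identifiers follow rippled's RPC conventions and should be verified against the source):
// Sketch: reject callers below the required role before doing any work
Json::Value doMyAdminCommand(RPC::JsonContext& context)
{
    if (context.role != Role::ADMIN)
        return rpcError(rpcNO_PERMISSION);  // caller lacks privileges

    Json::Value result;
    result["status"] = "ok";
    return result;
}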
Rippled tracks API usage to prevent denial-of-service attacks:
Admin connections have unlimited resources:
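The usual charging pattern inside a handler is a single assignment (the fee constant here is an assumption based on rippled's Resource fee table; confirm the exact name in the source):
// Sketch: bill the caller for an expensive query; repeated heavy use
// pushes a non-admin endpoint toward throttling and disconnect.
context.loadType = Resource::feeMediumBurdenRPC;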
Rippled uses a "Gossip" mechanism to share blacklisted IPs across the network:
For production deployments, use secure gateway configuration:
Benefits:
Rippled only accepts connections from the proxy
Proxy handles TLS termination
Proxy performs initial authentication
Reduces attack surface
WebSocket connections support optional password authentication:
Let's build a handler with different behavior based on role:
Registration:
Behavior:
GUEST: Gets only account and balance
USER: Gets sequence and owner count
IDENTIFIED: Gets flags and previous transaction ID
ADMIN: Gets full administrative details
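A sketch of such a handler follows (hypothetical name doAccountSummary; the looked-up values are stubbed with literals for brevity):
// Sketch: reveal progressively more detail as the caller's role increases
Json::Value doAccountSummary(RPC::JsonContext& context)
{
    Json::Value result;

    // Public fields, returned to every role including GUEST
    result["account"] = "rExampleAccountAddress";
    result["balance"] = "1000000";
    if (context.role == Role::GUEST)
        return result;

    // USER and above
    result["sequence"] = 42;
    result["owner_count"] = 3;
    if (context.role == Role::USER)
        return result;

    // IDENTIFIED and above
    result["flags"] = 0;
    result["previous_txn_id"] = "ABCDEF0123456789";
    if (context.role != Role::ADMIN)
        return result;

    // ADMIN only: full administrative detail
    result["internal_state"] = "healthy";
    return result;
}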
Always validate roles before sensitive operations
Use the minimum required role for each handler
Charge resources appropriately for expensive queries
Log security events for audit trails
Test with different roles during development
Don't hardcode IP addresses in handler code
Don't expose admin functions to lower roles
Don't skip resource charging for expensive operations
Don't leak sensitive information in error messages
Don't trust client-provided role information
Before deploying a custom handler:
Rippled's authentication and authorization system provides robust protection for the RPC interface through a well-designed role hierarchy. By combining IP-based role assignment, automatic permission enforcement in the dispatcher, resource charging for expensive operations, and fine-grained access control, the system prevents unauthorized access while enabling legitimate use cases. Understanding these security patterns is essential for building handlers that are both functional and secure, and for deploying nodes that safely expose APIs to different client types.
listeningPort: Port for incoming peer connections (default: 51235)
ipLimit: Max connections per IP address (default: 2)
Learn how to create and manage Multi-Purpose Tokens (MPT) on the XRP Ledger.
Multi-Purpose Tokens (MPT) are a recent amendment to the XRP Ledger, enhancing the token system by allowing tokens to serve multiple functions beyond just being a medium of exchange. This amendment introduces new capabilities for token issuers and holders.
Stablecoins: Create stablecoins with features like freezing and clawback.
Utility Tokens: Design tokens with specific utility functions.
Security Tokens: Implement security tokens with transfer restrictions.
Community Credit: Track debts and credits between known parties.
Each flag is a power of 2, allowing them to be combined using bitwise operations. The total value is the sum of all enabled flags.
Basic: canTransfer (32)
Secure: canLock + canClawback + canTransfer (98)
Regulated: canLock + canClawback + canTransfer + requireAuth (102)
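Worked example, assuming the flag values from the XLS-33 specification (canLock = 2, requireAuth = 4, canEscrow = 8, canTrade = 16, canTransfer = 32, canClawback = 64):
Basic: 32 (canTransfer)
Secure: 2 (canLock) + 64 (canClawback) + 32 (canTransfer) = 98
Regulated: 98 + 4 (requireAuth) = 102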
Create a new file or edit index.ts:
The mpt_issuance_id is a unique identifier generated when an MPToken is created via MPTokenIssuanceCreate. This ID is:
Automatically generated by the XRPL network upon successful token creation
Globally unique across all MPTokens on the ledger
Required for all subsequent operations (payments, clawbacks, authorization)
To obtain the MPTokenIssuanceID:
Submit an MPTokenIssuanceCreate transaction
Retrieve the ID from the transaction result: result.meta?.mpt_issuance_id
Store this ID for future operations with your token
Alternative ways to find the MPTokenIssuanceID:
From Explorer: You can find the ID on the XRPL explorer (https://devnet.xrpl.org/) by searching with the transaction hash or the creator's address
Query by Issuer: Use the account_lines API method with the issuer's address to find all tokens issued by that account
The relationship between issuer, token properties, and ID:
Issuer: The account that created the token (unchangeable)
Token Properties: Metadata, flags, and rules defined at creation
MPTokenIssuanceID: The unique identifier linking all operations to this specific token issuance
Uses decimal (base-10) math with 15 digits of precision
Can express values from 1.0 × 10^-81 to 9999999999999999 × 10^80
Supports transfer fees that are automatically deducted
Allows issuers to define tick sizes for exchange rates
The MPTokenIssuanceDestroy transaction allows an issuer to permanently destroy an MPToken issuance. This is useful for:
Retiring deprecated or unused tokens
Removing tokens that were created in error
Regulatory compliance and token lifecycle management
Only the original issuer can destroy the MPToken issuance
All tokens must be owned by the issuer (transferred back) before destruction
The MPTokenIssuanceID must be valid and reference an existing issuance
The MPTokenIssuanceSet transaction allows an issuer to modify the state of an MPToken issuance, including freezing/unfreezing transfers. This is useful for:
Temporarily halting transfers during maintenance or upgrades
Responding to security incidents
Complying with regulatory requirements
Managing token lifecycle events
Only the original issuer can modify the issuance settings
The MPTokenIssuanceID must be valid and reference an existing issuance
Appropriate flags must be set during issuance to enable freezing
The MPTokenAuthorize transaction enables fine-grained control over who can hold and transfer Multi-Purpose Tokens (MPTs) on the XRPL. This mechanism supports compliance, prevents unsolicited token spam, and allows issuers to manage token distribution effectively.
Authorize a Holder: Permit a specific account to receive and hold a designated MPT
Revoke Authorization: Remove a holder's permission, effectively locking their MPT balance
Self-Unauthorize: Allow a holder to voluntarily relinquish their MPT holdings (requires zero balance)
Global Locking: Restrict all transfers of a particular MPT issuance
Only the original issuer can authorize/revoke holders
The MPTokenIssuanceID must be valid and reference an existing issuance
Authorization flags must be enabled during issuance
Holder accounts must be valid XRPL addresses
The introduction of Multi-Purpose Tokens (MPT) on the XRP Ledger has brought significant changes to existing transaction types, especially Payment and Clawback. These transactions now support MPTokens, allowing you to transfer or recover tokens issued under the new amendment.
Payment:
The Payment transaction can now transfer MPTokens by using the Amount field as an object containing the issuance identifier (mpt_issuance_id) and the amount to transfer.
Example:
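The original example block is not reproduced here; a representative Payment carrying an MPT amount looks like the following (addresses and the issuance ID are placeholders):
{
  "TransactionType": "Payment",
  "Account": "rSenderAddress...",
  "Destination": "rReceiverAddress...",
  "Amount": {
    "mpt_issuance_id": "0000012F4C7A2A...",
    "value": "100"
  }
}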
Clawback:
The Clawback transaction allows the issuer to recover MPTokens from a specific account, also using the adapted Amount field for MPTokens.
Example:
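Again with placeholder values, an MPT Clawback names the holder whose balance is being recovered (field layout per the MPT amendment docs; verify against the current reference):
{
  "TransactionType": "Clawback",
  "Account": "rIssuerAddress...",
  "Holder": "rHolderAddress...",
  "Amount": {
    "mpt_issuance_id": "0000012F4C7A2A...",
    "value": "50"
  }
}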
Let's create a complete example that demonstrates all the MPToken features by simulating a real-world scenario: issuing company shares to investors. In this example, we'll:
Create a company share token with regulatory controls
Authorize investors to receive shares
Distribute shares to investors
Demonstrate compliance controls (locking and clawback)
You can check every transaction hash on the explorer: https://devnet.xrpl.org/
This appendix provides references to the cryptographic standards, RFCs, and specifications that XRPL's cryptography is built upon. Understanding these standards helps you understand why rippled makes certain design choices.
RFC 8032 - Edwards-Curve Digital Signature Algorithm (EdDSA)
URL:
Published: January 2017
Status: Proposed Standard
What it defines:
EdDSA signature scheme using Edwards curves
Ed25519: EdDSA with Curve25519
Ed448: EdDSA with Ed448-Goldilocks
Test vectors and implementation guidelines
Key parameters (Ed25519):
Curve: Curve25519 (Edwards form)
Hash function: SHA-512
Public key size: 32 bytes
Signature size: 64 bytes
Why XRPL uses it:
Fast signature verification (~5x faster than ECDSA)
Simple implementation
No signature malleability
Modern design with security proofs
SEC 2: Recommended Elliptic Curve Domain Parameters
Publisher: Standards for Efficient Cryptography Group (SECG)
URL:
Version: 2.0, January 2010
What it defines:
Elliptic curve parameters for secp256k1
Curve equation: y² = x³ + 7 (mod p)
Prime field size p and generator point G
Compression/decompression of public keys
Key parameters (secp256k1):
Why XRPL uses it:
Ecosystem compatibility
Well-tested (used since 2009)
Supported by many libraries and tools
FIPS 186-4 - Digital Signature Standard (DSS)
Publisher: NIST
URL:
Published: July 2013
What it defines:
ECDSA signature algorithm
Key generation procedures
Signature generation and verification
Approved curves (including secp256k1)
RFC 6979 - Deterministic Usage of DSA and ECDSA
URL:
Published: August 2013
Status: Informational
What it defines:
Deterministic nonce generation for ECDSA
Eliminates need for secure random number generation during signing
Prevents nonce reuse vulnerabilities
Algorithm:
Why XRPL uses it:
Prevents catastrophic nonce reuse
Makes signing deterministic (same message = same signature)
No dependency on RNG quality during signing
FIPS 180-4 - Secure Hash Standard (SHS)
Publisher: NIST
URL:
Published: August 2015
What it defines:
SHA-256: 256-bit hash (32 bytes)
SHA-512: 512-bit hash (64 bytes)
Padding and iteration schemes
Test vectors
XRPL usage:
SHA-512-Half: First 32 bytes of SHA-512
SHA-256: Used in Base58Check checksums
Both used in RIPESHA double hash
Why SHA-512-Half:
Faster on 64-bit CPUs than SHA-256
Same output size (256 bits)
Same security level
Original Paper: "RIPEMD-160: A Strengthened Version of RIPEMD"
Authors: Dobbertin, Bosselaers, Preneel
Published: 1996
Hash size: 160 bits (20 bytes)
XRPL usage:
Second stage of RIPESHA hash
Used in address generation
Algorithm:
RFC 2898 - PKCS #5: Password-Based Cryptography Specification
URL:
Published: September 2000
Status: Informational
What it defines:
Password-Based Key Derivation Function 2
Iterated hashing to slow brute-force
Salt for uniqueness
Note: XRPL doesn't use PBKDF2 for key derivation (uses sha512Half of seed), but it's relevant for password-based seed derivation in wallets.
No formal RFC, but based on:
Design by Satoshi Nakamoto
Excludes similar-looking characters (0, O, I, l)
Used widely in blockchain systems
Alphabet: rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz
XRPL implementation:
Base58Check with 4-byte SHA-256(SHA-256()) checksum
Type prefix byte determines first character
Compatible with other blockchain systems
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
URL:
Published: October 2006
Status: Proposed Standard
XRPL usage:
Used in peer handshake (Session-Signature header)
Not used for addresses (uses Base58Check instead)
RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2
URL:
Published: August 2008
Status: Proposed Standard
What it defines:
Handshake protocol
Record protocol
Cipher suites
Certificate verification
XRPL usage:
Peer-to-peer communication
WebSocket connections
RPC endpoints
RFC 8446 - The Transport Layer Security (TLS) Protocol Version 1.3
URL:
Published: August 2018
Status: Proposed Standard
Improvements over TLS 1.2:
Faster handshake
Forward secrecy by default
Simplified cipher suite negotiation
ITU-T X.690 - ASN.1 encoding rules
Publisher: ITU-T
Published: 2015
What it defines:
Distinguished Encoding Rules (DER)
Used for secp256k1 signature encoding
Canonical binary format
Structure:
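The structure itself did not survive export; a DER-encoded ECDSA signature is an ASN.1 SEQUENCE of two INTEGERs, laid out as:
0x30 <total-length>
  0x02 <length-of-R> <R bytes, big-endian, leading 0x00 if the high bit is set>
  0x02 <length-of-S> <S bytes, big-endian, leading 0x00 if the high bit is set>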
NIST SP 800-90A - Recommendation for Random Number Generation
Publisher: NIST
URL:
Published: June 2015
What it defines:
Deterministic Random Bit Generators (DRBGs)
CTR_DRBG, HASH_DRBG, HMAC_DRBG
Entropy requirements
Testing procedures
XRPL usage:
Relies on OpenSSL's RAND_bytes
OpenSSL implements NIST-approved DRBGs
RFC 1751 - A Convention for Human-Readable 128-bit Keys
URL:
Published: December 1994
Status: Informational
What it defines:
Encoding 128-bit keys as English words
2048-word dictionary
Checksum embedded in last word
XRPL usage:
Optional seed encoding format
Alternative to Base58 for seeds
Easier to write down/speak
Example:
NIST SP 800-57 - Recommendation for Key Management
Publisher: NIST
URL:
Published: May 2020
What it defines:
Key length recommendations
Algorithm lifetime
Key usage guidance
Security strength equivalences
Security Levels:
XRPL compliance:
256-bit secret keys (ECC)
256-bit hashes (SHA-512-Half)
~128-bit security level for Ed25519
~128-bit security level for secp256k1
OpenSSL
Website:
License: Apache License 2.0
What rippled uses:
Random number generation (RAND_bytes)
Hash functions (SHA-256, SHA-512, RIPEMD-160)
SSL/TLS implementation
Some low-level crypto primitives
libsecp256k1
Repository:
License: MIT
What rippled uses:
secp256k1 curve operations
ECDSA signing and verification
Public key derivation
Signature parsing/serialization
ed25519-donna
Repository:
License: Public Domain
What rippled uses:
Ed25519 signing and verification
Public key derivation
Fast implementation
"Serious Cryptography" by Jean-Philippe Aumasson
Modern cryptography handbook
Practical focus
"Cryptography Engineering" by Ferguson, Schneier, Kohno
"A Graduate Course in Applied Cryptography" by Boneh and Shoup
Free online:
Modern approach
"High-speed high-security signatures" by Bernstein et al.
Cryptopals Challenges:
Hands-on crypto exercises
Breaking weak implementations
Crypto101:
IETF: Internet Engineering Task Force (RFCs)
NIST: National Institute of Standards and Technology
ISO: International Organization for Standardization
SECG: Standards for Efficient Cryptography Group
Key standards for XRPL cryptography:
RFC 8032: Ed25519 signatures
RFC 6979: Deterministic ECDSA nonces
FIPS 180-4: SHA-2 hash functions
SEC 2: secp256k1 curve parameters
Understanding these standards helps you:
Know why algorithms were chosen
Verify implementations are correct
Stay current with best practices
Contribute improvements
In this workshop, we will learn how to create an EVM project with Hardhat.
This guide will walk you through setting up an EVM project on the XRPL sidechain using Foundry, a powerful toolkit for Ethereum application development.
Foundry is a blazing fast, portable and modular toolkit for Ethereum application development written in Rust. It consists of four main tools:
Forge: Ethereum testing framework
Cast: command-line tool for interacting with contracts and chain data
Anvil: local Ethereum node for development and testing
Chisel: Solidity REPL
Let's follow the lifecycle of a cryptographic key in rippled, from its creation as random noise to its role as the foundation of an account's identity. This journey touches every aspect of rippled's cryptographic system and shows how the pieces fit together.
Understanding this lifecycle is crucial because keys are the foundation of everything in XRPL. Every account, every transaction, every validator message—all depend on the proper generation, handling, and use of cryptographic keys.
Cryptography is essential for security, but it comes with computational cost. In a high-throughput blockchain like XRPL, cryptographic operations—signing, verifying, hashing—happen thousands of times per second. Understanding performance characteristics and optimization opportunities is crucial for building efficient systems.
This chapter explores the performance implications of different cryptographic choices and strategies for optimizing without compromising security.
Hash functions are the workhorses of cryptographic systems. While signatures prove authorization and keys establish identity, hash functions ensure integrity and enable efficient data structures. In XRPL, hash functions are everywhere—transaction IDs, ledger object keys, Merkle trees, address generation, and more.
This chapter explores how XRPL uses hash functions, why specific algorithms were chosen, and how they provide the integrity guarantees the system depends on.
Database queries are fundamentally slow compared to memory access:
With millions of nodes and transactions processing at ledger-close speeds (every 3-5 seconds), any system that naively queries the database for every node access will fail.
The NodeStore's Cache Layer solves this through a multi-tier strategy that keeps hot data in memory while safely delegating cold data to disk.
Cryptographic algorithms are only as secure as the secrets they protect. If an attacker can read your secret keys from memory, all the mathematical sophistication in the world won't help. This chapter explores how rippled protects sensitive data in memory, why it matters, and how to write code that doesn't leak secrets.
Cryptographic data is fundamentally binary—sequences of bytes with values from 0 to 255. But humans don't work well with binary data. We mistype it, confuse similar characters, and struggle to verify it. Base58Check encoding solves this problem by converting binary data into human-friendly strings that are easier to read, type, and verify.
This chapter explores how XRPL uses Base58Check encoding to create readable addresses, why certain characters are excluded, and how checksums provide error detection.
When two rippled nodes connect over the internet, they can't trust each other initially. How does node A know that the node claiming to be B really controls B's private key? How do they prevent man-in-the-middle attacks? How do they avoid accidentally connecting to themselves?
The peer handshake protocol solves all these problems through careful cryptographic design. This chapter explores how XRPL nodes establish secure, authenticated connections.
Application Layer
|
v
SHAMap (In-Memory State)
|
v
NodeStore Interface (Abstraction)
|
v
Cache Layer (TaggedCache) -- Hot Data in Memory
|
v
Backend Abstraction (Interface) -- Multiple implementations
|
v
Database Implementation (RocksDB, NuDB, etc.)
|
v
Physical Storage (Disk)

// During synchronization or historical queries:
std::shared_ptr<SHAMapTreeNode> node = nodestore.fetch(nodeHash);

SHAMap state exists in memory
↓
Serialize nodes to disk
↓
Survive application crash
↓
Reconstruct state on startup

// Application code doesn't change regardless of backend
nodestore.store(node); // Works with RocksDB, NuDB, SQLite...
auto node = nodestore.fetch(hash);

Database queries: 1-10 milliseconds
Memory access: 1-10 microseconds
1000x difference!
NodeStore uses caching to keep hot data in memory
Typical hit rate: 90-95%
Result: Average latency near memory speed

Startup: Locate and open database
Runtime: Store and retrieve nodes as needed
Shutdown: Cleanly close database
Rotation: Enable online deletion and archival

class NodeObject {
// Type of object (LEDGER_HEADER, ACCOUNT_NODE, TRANSACTION_NODE)
NodeObjectType mType;
// 256-bit unique identifier
uint256 mHash;
// Serialized content (variable length)
Blob mData;
public:
// Factory: create NodeObject from components
static std::shared_ptr<NodeObject> createObject(
NodeObjectType type,
Blob const& data,
uint256 const& hash);
// Access methods
NodeObjectType getType() const { return mType; }
uint256 const& getHash() const { return mHash; }
Blob const& getData() const { return mData; }
};

// Two different types of data, might have same structure
// Type prefix ensures different hashes
uint256 hash_account = SHA512Half(
ACCOUNT_TYPE_BYTE || accountData);
uint256 hash_transaction = SHA512Half(
TRANSACTION_TYPE_BYTE || accountData);
// hash_account != hash_transaction

During transaction processing:
1. Transaction validated and applied to SHAMap
2. SHAMap nodes modified
3. Each modified node serialized
4. NodeObject created with type, hash, serialized data
5. Stored in NodeStore

For each NodeObject:
1. Encode to persistent format
2. Compute key (same as hash)
3. Write to database
4. Backend handles actual I/O

After storage:
1. Keep in memory for fast reaccess
2. Move to cache tier
3. Evict when cache capacity exceeded
4. Dummy objects mark "known missing" (avoid repeated lookups)

When SHAMap needs a node:
1. Check cache (microseconds)
2. If miss, query database (milliseconds)
3. Deserialize and validate
4. Add to cache
5. Return to SHAMap

After ledger is validated and no longer current:
1. May be retained for history
2. Or moved to archive during rotation
3. Or deleted based on retention policy

class Backend {
// Store single object
virtual Status store(NodeObject const& object) = 0;
// Retrieve single object by hash
virtual Status fetch(uint256 const& hash,
std::shared_ptr<NodeObject>& object) = 0;
// Persist multiple objects atomically
virtual Status storeBatch(std::vector<NodeObject> const& batch) = 0;
// Retrieve multiple objects efficiently
virtual Status fetchBatch(std::vector<uint256> const& hashes,
std::vector<NodeObject>& objects) = 0;
// Lifecycle
virtual Status open(std::string const& path) = 0;
virtual Status close() = 0;
virtual int fdRequired() const = 0; // File descriptors needed
};

enum class Status {
ok, // Operation succeeded
notFound, // Key doesn't exist
dataCorrupt, // Data integrity check failed (fatal)
backendError // Backend error
};

// Same code works with any backend
struct DatabaseConfig {
std::string type; // "rocksdb", "nudb", "sqlite", etc.
std::string path;
// ... backend-specific options
};
auto backend = createBackend(config);
NodeStore store(backend);
// Application uses NodeStore
store.fetch(hash); // Works regardless of backend
store.store(node);

Backend* createRocksDBBackend(std::string const& path) {
return new RocksDBBackend(path);
}

Backend* createNuDBBackend(std::string const& path) {
return new NuDBBackend(path);
}

Backend* createMemoryBackend() {
return new MemoryBackend(); // In-memory, non-persistent
}
Backend* createNullBackend() {
return new NullBackend(); // No-op backend
}

Byte Offset | Field | Description
0-7 | Reserved | Set to zero, reserved for future use
8 | Type | NodeObjectType enumeration value
9+ | Data | Serialized object payload (variable length)

void encodeNodeObject(NodeObject const& obj, Blob& blob) {
// Add 8 reserved bytes (zero-filled)
blob.assign(8, 0);
// Add type byte
blob.push_back(static_cast<std::uint8_t>(obj.getType()));
// Append data payload (Blob is a std::vector, so use insert)
Blob const& data = obj.getData();
blob.insert(blob.end(), data.begin(), data.end());
}

std::shared_ptr<NodeObject> decodeNodeObject(
uint256 const& hash,
Blob const& blob)
{
if (blob.size() < 9) {
return nullptr; // Corrupted
}
NodeObjectType type = static_cast<NodeObjectType>(blob[8]);
Blob data(blob.begin() + 9, blob.end());
return NodeObject::createObject(type, data, hash);
}

Status Backend::store(NodeObject const& obj) {
uint256 key = obj.getHash(); // 256-bit hash as key
Blob value = encode(obj); // Encoded blob as value
return database.put(key, value); // Key-value store
}

// Family provides NodeStore access to SHAMap
class Family {
virtual std::shared_ptr<NodeStore> getNodeStore() = 0;
virtual std::shared_ptr<TreeNodeCache> getTreeNodeCache() = 0;
virtual std::shared_ptr<FullBelowCache> getFullBelowCache() = 0;
};
class NodeFamily : public Family {
std::shared_ptr<NodeStore> mNodeStore;
std::shared_ptr<TreeNodeCache> mTreeCache;
std::shared_ptr<FullBelowCache> mFullBelow;
// ... implement Family interface
};
// SHAMap uses Family for storage access
class SHAMap {
std::shared_ptr<Family> mFamily;
std::shared_ptr<SHAMapTreeNode> getNode(uint256 const& hash) {
// Try cache first
auto cached = mFamily->getTreeNodeCache()->get(hash);
if (cached) return cached;
// Fetch from NodeStore
auto obj = mFamily->getNodeStore()->fetch(hash);
if (obj) {
auto node = deserializeNode(obj);
// Cache for future access
mFamily->getTreeNodeCache()->insert(hash, node);
return node;
}
return nullptr;
}
};

FORBID < GUEST < USER < IDENTIFIED < ADMIN

// src/xrpld/core/Config.cpp
Role getRoleFromConnection(
boost::asio::ip::address const& remoteIP,
Port const& port)
{
// Admin IPs have full access
if (config_.ADMIN.contains(remoteIP))
return Role::ADMIN;
// Secure gateway IPs are identified
if (config_.SECURE_GATEWAY.contains(remoteIP))
return Role::IDENTIFIED;
// Check if port requires admin access
if (port.admin_nets && port.admin_nets->contains(remoteIP))
return Role::ADMIN;
// Default to USER for authenticated connections
return Role::USER;
}

# Admin-only access from localhost
[rpc_admin]
admin = 127.0.0.1, ::1
# Trusted gateway access
[secure_gateway]
ip = 192.168.1.100
# Port configuration
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_rpc_public]
port = 5006
ip = 0.0.0.0
protocol = http

// Public read-only command (available to everyone)
{
"server_info",
{
&doServerInfo,
Role::GUEST, // Lowest permission
RPC::NO_CONDITION
}
}
// Standard query (requires authentication)
{
"account_info",
{
&doAccountInfo,
Role::USER, // Moderate permission
RPC::NEEDS_CURRENT_LEDGER
}
}
// Transaction submission (requires trust)
{
"submit",
{
&doSubmit,
Role::IDENTIFIED, // Higher permission
RPC::NEEDS_NETWORK_CONNECTION
}
}
// Administrative command (full access only)
{
"stop",
{
&doStop,
Role::ADMIN, // Maximum permission
RPC::NO_CONDITION
}
}

// src/xrpld/rpc/detail/Handler.cpp
if (context.role < handlerInfo.role) {
return rpcError(rpcNO_PERMISSION,
"You don't have permission for this command");
}

Json::Value doSensitiveOperation(RPC::JsonContext& context)
{
// Check if caller has admin privileges
if (context.role < Role::ADMIN) {
return rpcError(rpcNO_PERMISSION,
"This operation requires admin access");
}
// Additional checks
if (context.role < Role::IDENTIFIED &&
context.params.isMember("dangerous_option"))
{
return rpcError(rpcNO_PERMISSION,
"Only identified users can use this option");
}
// Proceed with operation
// ...
}

// Each request consumes resources
context.consumer.charge(Resource::feeReferenceRPC);
// High-cost operations charge more
if (isExpensiveQuery) {
context.consumer.charge(Resource::feeHighBurdenRPC);
}

// Check if client has exceeded limits
if (!context.consumer.isUnlimited() &&
context.consumer.balance() <= 0)
{
return rpcError(rpcSLOW_DOWN,
"You are making requests too frequently");
}

bool isUnlimited() const
{
return role_ >= Role::ADMIN;
}

# rippled.cfg
[rpc_admin]
admin = 127.0.0.1
admin = 192.168.1.50
admin = ::1

// Mark a client as abusive
context.netOps.reportAbuse(remoteIP);
// Check if IP is blacklisted
if (context.netOps.isBlacklisted(remoteIP)) {
return rpcError(rpcFORBIDDEN, "Access denied");
}

Client → Reverse Proxy (nginx) → Rippled
[IP: 192.168.1.100]              [Trusted]

[secure_gateway]
ip = 192.168.1.100
[port_rpc]
port = 5005
ip = 127.0.0.1
protocol = http

[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
admin_user = myuser
admin_password = mypassword

const ws = new WebSocket('ws://localhost:6006');
ws.send(JSON.stringify({
command: 'login',
user: 'myuser',
password: 'mypassword'
}));
// After successful login, role is elevated to ADMIN

Json::Value doAccountStats(RPC::JsonContext& context)
{
// Basic validation
if (!context.params.isMember(jss::account)) {
return rpcError(rpcINVALID_PARAMS, "Missing 'account' field");
}
auto const account = parseBase58<AccountID>(
context.params[jss::account].asString()
);
if (!account) {
return rpcError(rpcACT_MALFORMED);
}
// Get ledger
std::shared_ptr<ReadView const> ledger;
auto const result = RPC::lookupLedger(ledger, context);
if (!ledger) return result;
// Read account
auto const sleAccount = ledger->read(keylet::account(*account));
if (!sleAccount) {
return rpcError(rpcACT_NOT_FOUND);
}
// Build base response (available to all roles)
Json::Value response;
response[jss::account] = to_string(*account);
response["balance"] = to_string(sleAccount->getFieldAmount(sfBalance));
// Add details for USER and above
if (context.role >= Role::USER) {
response["sequence"] = sleAccount->getFieldU32(sfSequence);
response["owner_count"] = sleAccount->getFieldU32(sfOwnerCount);
}
// Add sensitive info for IDENTIFIED and above
if (context.role >= Role::IDENTIFIED) {
response["flags"] = sleAccount->getFieldU32(sfFlags);
response["previous_txn_id"] = to_string(
sleAccount->getFieldH256(sfPreviousTxnID)
);
}
// Add administrative data for ADMIN only
if (context.role >= Role::ADMIN) {
response["ledger_entry_type"] = "AccountRoot";
response["index"] = to_string(keylet::account(*account).key);
}
return response;
}

{
"account_stats",
{
&doAccountStats,
Role::GUEST, // Base access for everyone
RPC::NEEDS_CURRENT_LEDGER
}
}

Role | Capabilities
IDENTIFIED | Transaction submission, privileged reads
ADMIN | Full administrative access: node management, dangerous operations

MPToken issuance flags:

Flag | Value | Description
canLock | 2 | Lock tokens globally or per account
requireAuth | 4 | Require issuer approval for new holders
canEscrow | 8 | Enable time-locked escrow (not implemented)
canTrade | 16 | Enable DEX trading (not implemented)
canTransfer | 32 | Enable basic token transfers
canClawback | 64 | Allow issuer to recover tokens

Flags combine by addition: for the regulated-token setup used in the examples below, canLock + requireAuth + canTransfer + canClawback = 2 + 4 + 32 + 64 = 102.
Further reading:
- An implementation-focused text covering real-world protocols
- "Applied Cryptography" by Bruce Schneier: the classic reference, with comprehensive coverage
- The Ed25519 design paper: algorithm design and performance analysis
- An introductory book, available free online
- RFC 5246 / RFC 8446: TLS for transport security
Everything begins with randomness. Not the pseudo-randomness of Math.random() or std::rand(), but true cryptographic randomness—numbers that are fundamentally unpredictable.
If an attacker can predict your random numbers, they can predict your keys. If they can predict your keys, they own your account. The stakes couldn't be higher.
In rippled, randomness comes from the crypto_prng() function, which wraps OpenSSL's RAND_bytes:
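A condensed version of that routine (the full listing appears with the code samples later in this guide):

// Condensed from src/libxrpl/protocol/SecretKey.cpp
SecretKey randomSecretKey()
{
    std::uint8_t buf[32];
    beast::rngfill(buf, sizeof(buf), crypto_prng()); // fill from the CSPRNG
    SecretKey sk(Slice{buf, sizeof(buf)});
    secure_erase(buf, sizeof(buf)); // scrub the temporary buffer
    return sk;
}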
What happens here:
Allocate buffer: A 32-byte buffer is created on the stack
Fill with randomness: crypto_prng() fills it with cryptographically secure random bytes from OpenSSL
Create SecretKey: The buffer is wrapped in a SecretKey object
Secure cleanup: The temporary buffer is securely erased to prevent key material from lingering in memory
When you call crypto_prng(), OpenSSL pulls entropy from multiple sources: hardware RNG instructions (RDRAND/RDSEED on x86), the OS entropy pool (/dev/urandom on Unix, CryptGenRandom on Windows), and timing jitter from the system.
This multi-source approach ensures that even if one entropy source is weak, others provide backup security.
With a secret key in hand, we need to derive its public key—the identity we can share with the world. This derivation is one of the beautiful ideas in modern cryptography: a mathematical function that's easy to compute in one direction but effectively impossible to reverse.
This asymmetry is what makes public-key cryptography possible.
XRPL supports two cryptographic algorithms, each with its own derivation process:
secp256k1: Elliptic Curve Point Multiplication
How it works:
Elliptic curve has a special "generator" point G
Public key = Secret key × G (point multiplication on the curve)
Result is a point with X and Y coordinates
Compressed format stores X coordinate + one bit for Y (33 bytes total)
Prefix byte: 0x02 or 0x03 (indicates Y parity)
X coordinate: 32 bytes
Why it's secure:
Computing Public = Secret × G is fast
Computing Secret from Public requires solving the discrete logarithm problem
No known efficient algorithm exists for this problem
ed25519: Curve25519 Operations
How it works:
Uses Ed25519 curve operations (optimized variant of Curve25519)
Derives public key through curve arithmetic
Adds 0xED prefix byte to identify key type
Total 33 bytes (1 prefix + 32 public key)
Why it's secure:
Based on different curve with different security proofs
Specifically designed for signing (not encryption)
More resistant to implementation errors
The public key can be:
Posted on websites
Included in transactions
Sent to strangers
Stored in public databases
No matter who has it or what they do with it, they can't derive your secret key. Your private identity remains private.
Sometimes we don't want pure randomness. Sometimes we want to be able to recreate the exact same key pair from a remembered or stored value. This is where seed-based deterministic key generation comes in.
Problem with pure randomness:
Solution with seeds:
A seed is a small piece of data—typically 16 bytes—that serves as the "master secret" for an entire family of keys:
For ed25519: Simple and Direct
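A minimal sketch of this path, mirroring the ed25519 branch of generateSecretKey in SecretKey.cpp:

// ed25519: hash the 16-byte seed with SHA-512-Half; the 32-byte digest
// is used directly as the secret key
auto const secret = sha512Half_s(makeSlice(seed)); // secure variant
SecretKey sk{secret};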
Simple, deterministic, and secure. Same seed always produces same key.
For secp256k1: Handling Edge Cases
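A condensed sketch of the retry loop (the full deriveDeterministicRootKey listing appears later in this guide):

// secp256k1: hash seed + 4-byte counter until the digest is a valid
// scalar, i.e., less than the curve order
for (std::uint32_t ordinal = 0; ordinal < 128; ++ordinal)
{
    std::array<std::uint8_t, 20> buf; // 16-byte seed + 4-byte counter
    std::copy(seed.data(), seed.data() + 16, buf.begin());
    buf[16] = (ordinal >> 24) & 0xFF; // big-endian counter
    buf[17] = (ordinal >> 16) & 0xFF;
    buf[18] = (ordinal >> 8) & 0xFF;
    buf[19] = ordinal & 0xFF;
    auto const secret = sha512Half(makeSlice(buf));
    if (isValidSecretKey(secret))
        return SecretKey{secret};
}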
Why the loop? Not all 32-byte values are valid secret keys for secp256k1. The value must be less than the curve's "order" (a large prime number). If the hash result is too large, increment a counter and try again.
The odds of needing more than one attempt are vanishingly small (roughly 1 in 2^128), but the code handles it correctly.
The Generator class enables creating multiple independent keys from one seed:
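In use it looks like this:

detail::Generator g(seed);
auto [pub0, sec0] = g(0); // account #0
auto [pub1, sec1] = g(1); // account #1
auto [pub2, sec2] = g(2); // account #2, each pair independent of the others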
Each ordinal produces a cryptographically independent key pair. This enables powerful features like:
Hierarchical wallets: One seed, many accounts
Key rotation: Generate new keys without remembering multiple seeds
Backup simplicity: One seed backs up everything
A public key isn't an address. To get the human-readable XRPL address (starting with 'r'), we need one more transformation:
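That transformation is the "RIPESHA" double hash, from src/libxrpl/protocol/AccountID.cpp:

AccountID calcAccountID(PublicKey const& pk)
{
    ripesha_hasher h;        // SHA-256 first, then RIPEMD-160
    h(pk.data(), pk.size());
    return AccountID{static_cast<ripesha_hasher::result_type>(h)};
}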
Why two hash functions?
Compactness: 20 bytes instead of 33 bytes
Defense in depth: if SHA-256 is ever broken, RIPEMD-160 still provides protection, and vice versa
Compatibility: Same scheme used by other blockchain systems
Why hash at all?
Shorter addresses are easier to use
Provides a level of indirection (can't derive public key from address)
Quantum-resistant: even if quantum computers break elliptic curve crypto, they can't derive the public key from the address alone
Let's trace a key from birth to address:
Each step is irreversible:
Can't derive secret from public
Can't derive public from account ID
Can't derive account ID from address (but can decode)
The SecretKey class demonstrates proper lifecycle management:
RAII (Resource Acquisition Is Initialization):
Constructor acquires resource (the secret key)
Destructor releases resource (securely erases key)
No manual cleanup needed
Automatic cleanup even if exceptions occur
Usage pattern:
Even if sign() throws an exception, the destructor still runs and the key is erased. This is defensive programming—making it impossible to forget cleanup.
How does rippled know which algorithm a key uses? The first byte:
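The dispatch is a one-byte switch, condensed from publicKeyType():

// 33-byte keys: 0x02/0x03 => secp256k1, 0xED => ed25519
switch (slice[0])
{
    case 0x02:
    case 0x03: return KeyType::secp256k1;
    case 0xED: return KeyType::ed25519;
    default:   return std::nullopt;
}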
This automatic detection means higher-level code doesn't need to track key types—the keys themselves carry the information.
Randomness is critical: Weak randomness = weak keys = stolen funds
One-way functions enable public-key crypto: Easy to derive public from secret, impossible to reverse
Two algorithms, same security guarantees: secp256k1 for compatibility, ed25519 for performance
Deterministic generation enables backups: One seed can recover many keys
Secure cleanup prevents leaks: Keys must be erased from memory when no longer needed
RAII makes security automatic: Proper C++ patterns prevent human error
In the next chapter, we'll see how these keys are used to create and verify signatures—the mathematical proof of authorization that makes XRPL secure.
Scenario: Synchronizing from the Network
A new node joins XRPL and must catch up to current ledger. This requires:
Fetching missing ledgers (blocks of transactions)
For each ledger, fetching all state nodes
Verifying each node's hash
Storing nodes to disk
Naive Approach (No Cache):
Clearly infeasible.
With Caching (90% Hit Rate):
Still slow, but realistic with parallel processing.
The difference between possible and impossible is caching.
The NodeStore's primary cache is the TaggedCache:
Purpose:
Structure:
Cache Tiers:
NodeStore implements a two-tier caching strategy:
Fetch Algorithm:
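In outline, a condensed sketch of the fuller fetch listing later in this chapter:

// Check the in-memory cache first; fall back to the backend on a miss,
// and cache whatever we learn, including "known missing" dummy markers
std::shared_ptr<NodeObject> fetch(uint256 const& hash)
{
    if (auto obj = cache.get(hash))          // hit: microseconds
        return obj->isDummy() ? nullptr : obj;
    auto obj = backend->fetch(hash);         // miss: milliseconds
    cache.insert(hash, obj ? obj : NodeObject::createDummy(hash));
    return obj;
}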
When Objects Enter Cache:
Dummy Objects:
Special marker objects prevent wasted lookups:
Benefit of Dummies:
Prevents thundering herd of repeated failed lookups.
Cache capacity is limited. When full, old objects must be evicted.
Eviction Triggers:
LRU (Least Recently Used) Eviction:
Age-Based Eviction:
Configuration Parameters:
Impact of Configuration:
NodeStore tracks cache effectiveness:
Example Metrics:
During network synchronization, special techniques optimize caching:
Prefetching
Batch Loading
Deferred Reads
Different phases of operation have different access patterns:
During Normal Operation (Steady State)
During Synchronization
After Sync Completion
Cache must be safe for concurrent access:
Concurrency Properties:
During synchronization, special tracking prevents redundant work:
The Problem:
The Solution: Full Below Generation Counter
Benefit:
Key Concepts:
Multi-Tier Caching: Memory + disk strategy balances performance and capacity
LRU Eviction: Keeps frequently-accessed data, evicts cold data
Dummy Markers: Prevent repeated failed lookups
Metrics Tracking: Monitor cache effectiveness
Synchronization Optimization: Prefetch and batch loading
Full Below Cache: Avoid redundant traversal during sync
Thread Safety: Shared locks for multiple readers
Performance Impact:
The Cache Layer transforms NodeStore from theoretical to practical. Database queries are unavoidable, but caching hides their latency, allowing XRPL to operate at microsecond efficiency despite millisecond database performance.
The problem: Memory isn't automatically erased when you're done with it.
1. Memory Dumps
2. Swap Files
3. Hibernation
4. Cold Boot Attacks
5. Debugging/Inspection
Compiler optimization example:
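For instance, a sketch of the trap (the fuller examples appear with the code samples later):

// ❌ A dead store the optimizer may delete: the zeros are never read,
// so the compiler is allowed to skip the memset entirely
void clearKey(std::uint8_t* key, std::size_t size)
{
    std::memset(key, 0, size); // may be optimized away; key NOT erased
}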
Why OPENSSL_cleanse works:
Key properties:
Cannot be optimized away: Compiler forced to execute it
Overwrites memory: Zeros written to actual memory
Works cross-platform: Handles different compiler optimizations
Validated: Extensively tested across compilers and architectures
Automatic cleanup:
Exception safety:
No forgetting:
1. Explicit erasure:
2. Use SecretKey wrapper:
3. Avoid std::string for secrets:
For highly sensitive applications:
Benefits:
Memory cannot be swapped to disk
Automatically erased on deallocation
Protected against paging attacks
Drawbacks:
Limited by OS limits on locked memory
Performance overhead
Complexity
When to use:
Extremely sensitive operations
Long-lived secrets
High-security requirements
Note: This is paranoid and rarely needed. RAII is usually sufficient.
1. Minimize lifetime:
2. Overwrite with new data:
3. Trust hardware:
Secure memory handling protects secrets from attackers who can read process memory:
Use OPENSSL_cleanse: Cannot be optimized away by compiler
Wrap in RAII classes: Automatic cleanup, exception-safe
Minimize lifetime: Create secrets late, destroy early
Explicit erasure: Clean up temporary buffers
Avoid std::string: For long-lived secrets
Test verification: Ensure erasure actually happens
Key principle: Assume attacker can read your memory. Make sure there's nothing to find.
Implementation:
secure_erase() wraps OPENSSL_cleanse()
SecretKey class uses RAII
Destructors automatically erase
Minimize copies and lifetime
1. Man-in-the-Middle (MITM)
2. Replay Attack
3. Self-Connection
4. Network Mismatch
Signatures alone aren't enough:
We need something unique to THIS specific connection:
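In sketch form (condensed from the examples later in this chapter):

// ❌ Replayable: a signature over a static message is valid on any connection
auto badSig = sign(pk, sk, "I am Node A");
// ✅ Session-bound: sign a value derived from THIS SSL session
auto sharedValue = deriveFromSSL(session);
auto goodSig = sign(pk, sk, sharedValue);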
What are "Finished" messages?
In the SSL/TLS handshake, both parties send a "Finished" message that contains:
A hash of all previous handshake messages
A MAC (Message Authentication Code) proving they know the session keys
These messages are:
Unique per session: Different for every SSL connection
Unpredictable: Depend on random values exchanged during handshake
Authenticated: Part of SSL's own security
Network-ID:
Network-Time:
Public-Key:
Session-Signature:
Instance-Cookie:
Server-Domain (optional):
Both nodes prove they possess their private keys:
Signatures are specific to this connection:
No:
No:
Very difficult:
The peer handshake protocol provides:
Mutual authentication: Both nodes prove identity
Session binding: Signatures tied to specific SSL session
Replay prevention: Old signatures don't work
MITM protection: Requires private keys to forge
Self-connection prevention: Instance cookies detect loops
Network segregation: Network IDs prevent cross-network connections
Key components:
Shared value: Derived from SSL session, unique per connection
Signatures: Prove possession of private keys
Headers: Exchange identities and verification data
Validation: Multiple checks ensure security
This cryptographic handshake enables XRPL to operate as a decentralized network where nodes don't need to trust each other—they can verify everything cryptographically.
import { Client } from 'xrpl';
const client = new Client("wss://s.devnet.rippletest.net:51233");
const main = async () => {
try {
console.log("Let's create a Multi-Purpose Token...");
await client.connect();
// Create issuer and holder wallets
const { wallet: issuerWallet } = await client.fundWallet();
const { wallet: holderWallet } = await client.fundWallet();
console.log('Issuer Wallet:', issuerWallet.address);
console.log('Holder Wallet:', holderWallet.address);
// Define token metadata
const tokenMetadata = {
name: "MyMPToken",
symbol: "MPT",
description: "A sample Multi-Purpose Token",
};
// Convert metadata to hex string
const metadataHex = Buffer.from(JSON.stringify(tokenMetadata)).toString('hex');
// Set flags for a regulated token
const totalFlagsValue = 102; // canLock + canClawback + canTransfer + requireAuth
// Create MPToken issuance transaction
const transactionBlob = {
TransactionType: "MPTokenIssuanceCreate",
Account: issuerWallet.address,
Flags: totalFlagsValue,
MPTokenMetadata: metadataHex
// MaximumAmount: "1000000000000000000", (Optional) The maximum asset amount of this token that can ever be issued
// TransferFee: 5000, (Optional) between 0 and 50000 for 50.000% fees charged by the issuer for secondary sales of Token
// AssetScale: 2, (Optional) 10^(-scale) of a corresponding fractional unit. For example, a US Dollar Stablecoin will likely have an asset scale of 2, representing 2 decimal places.
};
// Submit token issuance
const mptokenCreationResult = await client.submitAndWait(transactionBlob, {
autofill: true,
wallet: issuerWallet
});
console.log('MPToken Creation Result:', mptokenCreationResult.result.meta?.TransactionResult);
console.log('MPToken Issuance ID:', mptokenCreationResult.result.meta?.mpt_issuance_id);
await client.disconnect();
console.log("All done!");
} catch (error) {
console.error("Error creating MPToken:", error);
}
};
main();

// Query the ledger for all MPTokens issued by a specific account
const response = await client.request({
command: "account_objects",
account: issuerAddress,
ledger_index: "validated",
type: "mpt_issuance"
});
// Find the specific token by currency code in the metadata
const mpTokens = response.result.account_objects;
const targetToken = mpTokens.find(token => {
// Parse metadata from hex to JSON
const metadata = JSON.parse(Buffer.from(token.MPTokenMetadata, 'hex').toString());
return metadata.symbol === "TECH"; // Replace with your token symbol
});
const mpTokenId = targetToken?.mpt_issuance_id;
console.log("mpTokenId: ", mpTokenId);
const transactionBlob = {
TransactionType: "MPTokenIssuanceDestroy",
Account: wallet.address,
MPTokenIssuanceID: mpTokenId
};

const transactionBlob = {
TransactionType: "MPTokenIssuanceSet",
Account: wallet.address,
MPTokenIssuanceID: mpTokenId,
Flags: 1, // 1: Lock, 2: Unlock
// Holder: r3d4... // Specify holder to freeze; if not specified, it will freeze globally
};

const transactionBlob = {
TransactionType: "MPTokenAuthorize",
Account: wallet.address,
MPTokenIssuanceID: mptokenID,
Flags: 0 // If set to 1, and transaction is submitted by a holder, it indicates that the holder no longer wants to hold the MPToken
// Holder: r3d4... (Optional) Specify address to authorize; if not specified, it will authorize globally
};

const transactionBlob = {
TransactionType: "Payment",
Account: wallet.address,
Amount: {
"mpt_issuance_id": mpTokenId,
"value": "10000"
},
Destination: destinationAddress.value,
};

const transactionBlob = {
TransactionType: "Clawback",
Account: wallet.address,
Amount: {
"mpt_issuance_id": mpTokenId,
"value": "10000"
},
Holder: holderAddress.value,
};

import { Client } from 'xrpl';
const client = new Client("wss://s.devnet.rippletest.net:51233");
const main = async () => {
try {
console.log("Creating company shares using Multi-Purpose Tokens...");
await client.connect();
console.log("Setting up company and investor wallets...");
// Create company (issuer) and investor wallets
const { wallet: companyWallet } = await client.fundWallet();
const { wallet: investor1Wallet } = await client.fundWallet();
const { wallet: investor2Wallet } = await client.fundWallet();
console.log('Company Wallet:', companyWallet.address);
console.log('Investor 1 Wallet:', investor1Wallet.address);
console.log('Investor 2 Wallet:', investor2Wallet.address);
console.log("");
// Define company share token metadata
const tokenMetadata = {
name: "TechCorp Shares",
symbol: "TECH",
description: "Equity shares in TechCorp with regulatory compliance features",
};
// Convert metadata to hex string
const metadataHex = Buffer.from(JSON.stringify(tokenMetadata)).toString('hex');
// Set flags for a regulated security token
const totalFlagsValue = 102; // canLock + canClawback + canTransfer + requireAuth
// Create company share token issuance
let transactionBlob = {
TransactionType: "MPTokenIssuanceCreate",
Account: companyWallet.address,
Flags: totalFlagsValue,
MPTokenMetadata: metadataHex
};
console.log("Issuing company share tokens...");
// Submit token issuance
const createTx = await client.submitAndWait(transactionBlob, { wallet: companyWallet });
// Get the MPTokenID for our company shares
const MPTokenID = createTx.result.meta?.mpt_issuance_id;
console.log('Share token creation transaction hash:', createTx.result.hash);
console.log('Company Share Token ID:', MPTokenID);
console.log("");
// First, investors need to self-authorize to receive the tokens
// Investor 1 self-authorization
transactionBlob = {
TransactionType: "MPTokenAuthorize",
Account: investor1Wallet.address,
MPTokenIssuanceID: MPTokenID,
};
console.log("Investor 1 authorizing to receive shares...");
const investor1SelfAuthTx = await client.submitAndWait(transactionBlob, {wallet: investor1Wallet });
// Investor 2 self-authorization
transactionBlob = {
TransactionType: "MPTokenAuthorize",
Account: investor2Wallet.address,
MPTokenIssuanceID: MPTokenID,
};
console.log("Investor 2 authorizing to receive shares...");
const investor2SelfAuthTx = await client.submitAndWait(transactionBlob, {wallet: investor2Wallet });
console.log("Investor 1 self-authorization transaction hash:", investor1SelfAuthTx.result.hash);
console.log("Investor 2 self-authorization transaction hash:", investor2SelfAuthTx.result.hash);
console.log("");
// With requireAuth flag, the company (issuer) must authorize investors
// Authorize investor 1
transactionBlob = {
TransactionType: "MPTokenAuthorize",
Account: companyWallet.address,
MPTokenIssuanceID: MPTokenID,
Holder: investor1Wallet.address
};
console.log("Company authorizing investor 1 to receive shares...");
const investor1AuthTx = await client.submitAndWait(transactionBlob, {wallet: companyWallet });
// Authorize investor 2
transactionBlob = {
TransactionType: "MPTokenAuthorize",
Account: companyWallet.address,
MPTokenIssuanceID: MPTokenID,
Holder: investor2Wallet.address
};
console.log("Company authorizing investor 2 to receive shares...");
const investor2AuthTx = await client.submitAndWait(transactionBlob, {wallet: companyWallet });
console.log("Investor 1 issuer authorization transaction hash:", investor1AuthTx.result.hash);
console.log("Investor 2 issuer authorization transaction hash:", investor2AuthTx.result.hash);
console.log("");
// Distribute shares to investor 1 (10,000 shares)
transactionBlob = {
TransactionType: "Payment",
Account: companyWallet.address,
Amount: {
"mpt_issuance_id": MPTokenID, // Company share token ID
"value": "10000" // 10,000 shares
},
Destination: investor1Wallet.address,
};
console.log("Distributing 10,000 shares to investor 1...");
const paymentTx = await client.submitAndWait(transactionBlob, {wallet: companyWallet });
console.log("Share distribution transaction hash: ", paymentTx.result.hash);
console.log("");
// Demonstrate compliance: Lock investor 1's shares (e.g., during regulatory investigation)
transactionBlob = {
TransactionType: "MPTokenIssuanceSet",
Account: companyWallet.address,
MPTokenIssuanceID: MPTokenID,
Holder: investor1Wallet.address,
Flags: 1, // Lock the shares
};
console.log("Locking investor 1's shares for compliance review...");
const lockTx = await client.submitAndWait(transactionBlob, {wallet: companyWallet });
console.log("Lock transaction hash: ", lockTx.result.hash);
console.log("TransactionResult: ", lockTx.result.meta.TransactionResult);
console.log("Investor 1 can no longer transfer their shares");
console.log("");
// Attempt transfer while locked (this will fail)
transactionBlob = {
TransactionType: "Payment",
Account: investor1Wallet.address,
Amount: {
"mpt_issuance_id": MPTokenID,
"value": "5000"
},
Destination: investor2Wallet.address,
};
console.log("Attempting to transfer locked shares to investor 2 (this will fail)...");
const transferTx = await client.submitAndWait(transactionBlob, {wallet: investor1Wallet });
console.log("Transfer transaction hash: ", transferTx.result.hash);
console.log("TransactionResult: ", transferTx.result.meta.TransactionResult);
console.log("Transfer failed as expected - shares are locked");
console.log("");
// Company exercises clawback rights (e.g., for regulatory compliance)
transactionBlob = {
TransactionType: "Clawback",
Account: companyWallet.address,
Amount: {
"mpt_issuance_id": MPTokenID,
"value": "10000"
},
Holder: investor1Wallet.address,
};
console.log("Company exercising clawback rights on investor 1's shares...");
const clawbackTx = await client.submitAndWait(transactionBlob, {wallet: companyWallet });
console.log("Clawback transaction hash: ", clawbackTx.result.hash);
console.log("All 10,000 shares have been returned to the company");
console.log("");
await client.disconnect();
console.log("Company share token demonstration complete!");
} catch (error) {
console.error("Error in share token operations:", error);
}
};
main();

secp256k1 domain parameters:
p = FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF
FFFFFFFF FFFFFFFF FFFFFFFE FFFFFC2F
n = FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE
BAAEDCE6 AF48A03B BFD25E8C D0364141
G = (x, y) where:
x = 79BE667E F9DCBBAC 55A06295 CE870B07
029BFCDB 2DCE28D9 59F2815B 16F81798
y = 483ADA77 26A3C465 5DA4FBFC 0E1108A8
FD17B448 A6855419 9C47D08F FB10D4B8

k = HMAC_DRBG(private_key, message_hash)

RIPESHA(data) = RIPEMD-160(SHA-256(data))

Base58 alphabet (0, O, I, and l are excluded to avoid confusion):
123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz

SEQUENCE {
r INTEGER,
s INTEGER
}
Encoded as:
0x30 [length]
0x02 [r-length] [r]
0x02 [s-length] [s]

Seed (hex): DEDCE9CE67B451D852FD4E846FCDE31C
Words: MAD WARM EVEN SHOW BALK FELT
TOY STIR OBOE COST HOPE VAIN

Bits Symmetric Hash RSA ECC
────────────────────────────────────────────
128 AES-128 SHA-256 3072 256
192 AES-192 SHA-384 7680 384
256 AES-256 SHA-512 15360 512

// ❌ WRONG - Predictable and insecure
void generateWeakKey() {
std::srand(std::time(nullptr)); // Predictable seed!
std::uint8_t buf[32];
for (auto& byte : buf)
byte = std::rand() % 256; // NOT cryptographically secure
}
// ✅ CORRECT - Cryptographically secure
SecretKey generateStrongKey() {
std::uint8_t buf[32];
beast::rngfill(buf, sizeof(buf), crypto_prng()); // CSPRNG
SecretKey sk{Slice{buf, sizeof(buf)}};
secure_erase(buf, sizeof(buf)); // Clean up
return sk;
}

// From src/libxrpl/protocol/SecretKey.cpp
SecretKey randomSecretKey()
{
std::uint8_t buf[32];
beast::rngfill(buf, sizeof(buf), crypto_prng());
SecretKey sk(Slice{buf, sizeof(buf)});
secure_erase(buf, sizeof(buf));
return sk;
}

┌─────────────────────────────────────────┐
│ OpenSSL Entropy Pool │
└─────────────────────────────────────────┘
↑ ↑ ↑
│ │ │
┌──────┴────┐ ┌───┴────┐ ┌────┴──────┐
│ Hardware │ │ OS │ │ Timing │
│ RNG │ │Entropy │ │ Jitter │
└───────────┘ └────────┘ └───────────┘
Hardware RNG:
- CPU instructions (RDRAND, RDSEED on x86)
- True random number generators
OS Entropy:
- /dev/urandom (Unix/Linux)
- CryptGenRandom (Windows)
- System events, disk I/O, network timing
Timing Jitter:
- High-resolution timers
- Thread scheduling randomness
- Cache timing variations// Easy direction: secret → public (microseconds)
PublicKey publicKey = derivePublicKey(KeyType::ed25519, secretKey);
// Impossible direction: public → secret (longer than age of universe)
// There is NO function: secretKey = deriveSecretKey(publicKey);

// From src/libxrpl/protocol/SecretKey.cpp
case KeyType::secp256k1: {
secp256k1_pubkey pubkey_imp;
// Multiply the generator point G by the secret key
secp256k1_ec_pubkey_create(
secp256k1Context(),
&pubkey_imp,
reinterpret_cast<unsigned char const*>(sk.data()));
// Serialize to compressed format (33 bytes)
unsigned char pubkey[33];
std::size_t len = sizeof(pubkey);
secp256k1_ec_pubkey_serialize(
secp256k1Context(),
pubkey,
&len,
&pubkey_imp,
SECP256K1_EC_COMPRESSED); // Compressed format
return PublicKey{Slice{pubkey, len}};
}

case KeyType::ed25519: {
unsigned char buf[33];
buf[0] = 0xED; // Type prefix
ed25519_publickey(sk.data(), &buf[1]);
return PublicKey(Slice{buf, sizeof(buf)});
}

Secret Key                 Public Key
(32 random bytes) → (33 bytes)
KEEP SECRET! SHARE FREELY!
│ │
│ │
┌────▼────┐ ┌─────▼──────┐
│Can sign │ │Can verify │
│messages │ │signatures │
└─────────┘ └────────────┘
│ │
│ │
┌────▼─────────────────────▼────┐
│ Both needed to prove │
│ authorization │
└──────────────────────────────┘

Generate Key1 → Secret1, Public1
Generate Key2 → Secret2, Public2
Generate Key3 → Secret3, Public3
To backup: Must save Secret1, Secret2, Secret3, ...

Remember one seed → Can regenerate all keys
Seed → Key1 (ordinal 0)
→ Key2 (ordinal 1)
→ Key3 (ordinal 2)
→ ...

// From src/libxrpl/protocol/SecretKey.cpp
std::pair<PublicKey, SecretKey>
generateKeyPair(KeyType type, Seed const& seed)
{
switch (type)
{
case KeyType::secp256k1: {
detail::Generator g(seed);
return g(0); // Generate the 0th key pair
}
case KeyType::ed25519: {
auto const sk = generateSecretKey(type, seed);
return {derivePublicKey(type, sk), sk};
}
}
}

// Hash the seed to get the secret key (ed25519 branch)
SecretKey generateSecretKey(KeyType type, Seed const& seed)
{
auto const secret = sha512Half_s(makeSlice(seed)); // Secure hash
return SecretKey{secret};
}

// Must ensure result is valid secret key
SecretKey deriveDeterministicRootKey(Seed const& seed)
{
std::uint32_t ordinal = 0;
// Try up to 128 times to find valid key
for (int i = 0; i < 128; ++i)
{
// Create buffer with seed + ordinal
std::array<std::uint8_t, 20> buf;
std::copy(seed.data(), seed.data() + 16, buf.begin());
buf[16] = (ordinal >> 24) & 0xFF;
buf[17] = (ordinal >> 16) & 0xFF;
buf[18] = (ordinal >> 8) & 0xFF;
buf[19] = (ordinal >> 0) & 0xFF;
// Hash it
auto const secret = sha512Half(makeSlice(buf));
// Check if it's a valid secret key
if (isValidSecretKey(secret))
return SecretKey{secret};
// If not valid, try next ordinal
++ordinal;
}
// Should never happen (probability ~ 1 in 2^128)
Throw<std::runtime_error>("Failed to generate key from seed");
}

detail::Generator g(seed);
auto [pub0, sec0] = g(0); // First key pair
auto [pub1, sec1] = g(1); // Second key pair
auto [pub2, sec2] = g(2); // Third key pair

// From src/libxrpl/protocol/AccountID.cpp
AccountID calcAccountID(PublicKey const& pk)
{
ripesha_hasher h;
h(pk.data(), pk.size());
return AccountID{static_cast<ripesha_hasher::result_type>(h)};
}

Public Key (33 bytes)
↓
SHA-256 hash
↓
256-bit digest
↓
RIPEMD-160 hash
↓
160-bit (20 byte) Account ID
↓
Base58Check encode with type byte
↓
Address: rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

// 1. BIRTH: Generate random secret key
SecretKey secretKey = randomSecretKey();
// Result: 32 random bytes
// Example: 0x1a2b3c4d...
// 2. GROWTH: Derive public key
PublicKey publicKey = derivePublicKey(KeyType::ed25519, secretKey);
// Result: 33 bytes (0xED prefix + 32 bytes)
// Example: 0xED9434799226374926EDA3B54B1B461B4ABF7237962EEB1144C10A7CA6A9D32C64
// 3. IDENTITY: Calculate account ID
AccountID accountID = calcAccountID(publicKey);
// Result: 20 bytes
// Example: 0x8B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F
// 4. PRESENTATION: Encode as address
std::string address = toBase58(accountID);
// Result: Human-readable address
// Example: rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

class SecretKey
{
private:
std::uint8_t buf_[32];
public:
SecretKey(Slice const& slice)
{
std::memcpy(buf_, slice.data(), sizeof(buf_));
}
~SecretKey()
{
// Automatically called when SecretKey goes out of scope
secure_erase(buf_, sizeof(buf_));
}
// Prevent copying to avoid multiple erasures
SecretKey(SecretKey const&) = delete;
SecretKey& operator=(SecretKey const&) = delete;
// Allow moving (transfers ownership)
SecretKey(SecretKey&&) noexcept = default;
SecretKey& operator=(SecretKey&&) noexcept = default;
};

void signTransaction(Transaction const& tx)
{
SecretKey sk = loadKeyFromSecureStorage();
auto signature = sign(pk, sk, tx);
// sk destructor automatically called here
// Key material is securely erased
}

std::optional<KeyType> publicKeyType(Slice const& slice)
{
if (slice.size() != 33)
return std::nullopt;
switch (slice[0])
{
case 0x02:
case 0x03:
return KeyType::secp256k1;
case 0xED:
return KeyType::ed25519;
default:
return std::nullopt;
}
}

secp256k1 public key: 0x02[32 bytes] or 0x03[32 bytes]
ed25519 public key:   0xED[32 bytes]

BIRTH
│
├─ Random: crypto_prng() → 32 random bytes
│
└─ Deterministic: hash(seed) → 32 bytes
│
▼
SECRET KEY (32 bytes)
│
│ One-way function
│
▼
PUBLIC KEY (33 bytes)
│
│ Double hash
│
▼
ACCOUNT ID (20 bytes)
│
│ Base58Check encode
│
▼
ADDRESS (human-readable)
│
▼
SECURE CLEANUP
(key erased from memory)

Memory access: 1-10 microseconds
Cache miss latency: 1-10 milliseconds
Ratio: 1000x slower

for (each node in ledger) {
backend.fetch(nodeHash); // Database query - 10ms
}
For 10,000 nodes in ledger:
10,000 × 10ms = 100 seconds per ledger
For 100,000 ledgers to catch up:
100 seconds × 100,000 = 10,000,000 seconds ≈ 116 days

for (each node in ledger) {
if (cache.has(nodeHash)) {
cache.get(nodeHash); // 10 microseconds
} else {
backend.fetch(nodeHash); // 10 milliseconds
}
}
For 10,000 nodes:
9,000 × 10µs (cache hits) = 90ms
1,000 × 10ms (cache misses) = 10,000ms
Total: 10,090ms ≈ 10 seconds per ledger
For 100,000 ledgers:
100,000 × 10 seconds = 1,000,000 seconds ≈ 11.5 days

class TaggedCache {
// Keep frequently accessed NodeObjects in memory
// Minimize expensive database queries
// Provide thread-safe concurrent access
};

class TaggedCache {
private:
// Key: NodeObject hash (uint256)
// Value: shared_ptr<NodeObject>
std::unordered_map<uint256, std::shared_ptr<NodeObject>> mCache;
// Protect concurrent access
std::mutex mLock;
// Configuration
size_t mMaxSize; // Maximum objects in cache
std::chrono::seconds mMaxAge; // Maximum age before eviction
public:
// Retrieve from cache
std::shared_ptr<NodeObject> get(uint256 const& hash);
// Store in cache
void insert(uint256 const& hash,
std::shared_ptr<NodeObject> const& obj);
// Remove from cache
void remove(uint256 const& hash);
// Evict old entries
void evictExpired();
};

          Application
↓
L1: TaggedCache (In-Memory)
Fast: 1-10 microseconds
Size: ~100MB - 5GB (configurable)
↓ (on miss)
L2: Backend Database (Persistent)
Slow: 1-10 milliseconds
Size: Unlimited (disk)
↓
Physical Storage (Disk)

std::shared_ptr<NodeObject> fetch(uint256 const& hash) {
// Step 1: Check cache
{
std::lock_guard<std::mutex> lock(mCacheLock);
auto it = mCache.find(hash);
if (it != mCache.end()) {
// Found in cache
recordHit(hash);
return it->second;
}
}
// Step 2: Cache miss - query backend
auto obj = backend->fetch(hash);
if (!obj) {
// Object not found anywhere
// Cache a dummy marker to prevent repeated lookups
cacheDummy(hash);
return nullptr;
}
// Step 3: Update cache with newly fetched object
{
std::lock_guard<std::mutex> lock(mCacheLock);
mCache.insert({hash, obj});
}
recordMiss(hash);
return obj;
}

1. On Database Fetch
fetch() fails or succeeds → always cache result
(Success: cache object, Failure: cache dummy)
2. On Storage
store() called → immediately cache object
Object will be reaccessed soon
3. Predictive Loading
When traversing SHAMap, prefetch likely-needed siblings

class NodeObject {
static std::shared_ptr<NodeObject> createDummy(uint256 const& hash) {
auto obj = std::make_shared<NodeObject>();
obj->mType = hotDUMMY; // Type 512: marker
obj->mHash = hash;
obj->mData.clear(); // Empty data
return obj;
}
bool isDummy() const {
return mType == hotDUMMY;
}
};
// In fetch:
std::shared_ptr<NodeObject> backend_result = backend->fetch(hash);
if (!backend_result) {
// Not found - cache dummy to avoid retry
auto dummy = NodeObject::createDummy(hash);
cache.insert(hash, dummy);
return nullptr;
}

Scenario: Syncing but peer doesn't have a node
Without dummies:
Request node X from network
Peer: "I don't have it"
Local check: Try backend
Backend: "Not there"
(repeat this sequence multiple times)
With dummies:
Request node X from network
Peer: "I don't have it"
Local check: Cache hit on dummy
Immediately know: "Not available"
Don't retry

void insertWithEviction(uint256 const& hash,
std::shared_ptr<NodeObject> const& obj) {
{
std::lock_guard<std::mutex> lock(mCacheLock);
// Check size limit
if (mCache.size() >= mMaxSize) {
evictLRU(); // Remove least recently used
}
mCache.insert({hash, obj});
}
// Check age limit (periodic)
if (shouldEvictExpired()) {
evictExpired(); // Remove objects older than mMaxAge
}
}

void evictLRU() {
// Find object with oldest access time
auto oldest = findOldestAccess();
// Remove from cache
mCache.erase(oldest->hash);
}
// Track access times
struct CacheEntry {
std::shared_ptr<NodeObject> object;
std::chrono::steady_clock::time_point lastAccess;
};

void evictExpired() {
auto now = std::chrono::steady_clock::now();
for (auto it = mCache.begin(); it != mCache.end();) {
auto age = now - it->second.lastAccess;
if (age > mMaxAge) {
// Object too old - remove
it = mCache.erase(it);
} else {
++it;
}
}
}

// From rippled.cfg
[node_db]
cache_size = 256 // MB
cache_age = 60 // seconds

Small cache:
cache_size = 32 MB
Hit rate: ~60% (more evictions)
Disk queries: 40% of lookups (slower)
Large cache:
cache_size = 1024 MB
Hit rate: ~95% (fewer evictions)
Disk queries: 5% of lookups (faster)
Memory usage: Higher
Operators choose based on available RAM and performance needs

struct CacheMetrics {
uint64_t hits; // Cache hits
uint64_t misses; // Cache misses
uint64_t inserts; // Objects added
uint64_t evictions; // Objects removed
double hitRate() const {
return hits / (double)(hits + misses);
}
};
// Typical production hit rates:
// Well-configured: 90-95%
// Poorly tuned: 60-75%
// Synchronized node: 98% (accessing recent ledgers)

From a running XRPL validator:
Period: 1 hour
Cache hits: 86,400 (hit on hot data)
Cache misses: 7,200 (query database)
Hit rate: 92.3% (excellent)
Latency impact:
Average: 0.92 * 1µs + 0.08 * 10ms = 0.81 milliseconds
Without cache: Average 10 milliseconds
Speedup: 12.3x
Throughput impact:
Queries handled: 93,600 per hour
If every lookup hit disk at 10 ms, the same workload would take ~12x longer
Caching enables actual performance

void synchronizeNode(SHAMapNodeID nodeID,
uint256 const& nodeHash) {
// Fetch this node
auto node = fetch(nodeHash);
if (auto inner = dynamic_cast<SHAMapInnerNode*>(node.get())) {
// This is an inner node
// Likely next accesses are to its children
// Prefetch children to warm cache
for (int i = 0; i < 16; ++i) {
uint256 childHash = inner->getChildHash(i);
if (childHash.isValid()) {
// Asynchronously prefetch
asyncFetch(childHash);
}
}
}
}

// Fetch multiple nodes in single operation
std::vector<uint256> hashes = {hash1, hash2, hash3, ...};
auto results = backend->fetchBatch(hashes);
// Reduces backend overhead
// Populates cache efficiently
// Parallelizes I/O operations

// During synchronization, identify missing nodes
std::vector<uint256> missing = getMissingNodes(shamap);
// Request from peers asynchronously
// When they arrive, cache them
// Continue traversal without blocking on network
// This allows pipelining: request more while processing previous results

Access pattern: Recent ledgers frequently accessed
Root hash: checked at consensus
Recent state nodes: queried for transactions
Old historical data: rarely accessed
Cache configuration:
Keep recent ledgers fully cached
Let old ledgers evict to make room
Result: Excellent hit rate for current operations

Access pattern: Missing nodes from network
Need to verify hash chain from root to leaf
Often fetching siblings (related nodes)
May access same node multiple times
Cache strategy:
Smaller cache acceptable (still beneficial)
Prefetch siblings when fetching parent
Use dummy markers to avoid retry storms
Result: Synchronization completes in hours vs days

Access pattern: Back to steady-state recent ledger access
Cache characteristics:
Most-accessed nodes pinned in cache
Hit rate quickly reaches 90%+
Warm cache from prior work

class TaggedCache {
std::unordered_map<uint256, CacheEntry> mCache;
mutable std::shared_mutex mLock; // Allow multiple readers
public:
std::shared_ptr<NodeObject> get(uint256 const& hash) {
std::shared_lock<std::shared_mutex> lock(mLock);
auto it = mCache.find(hash);
return (it != mCache.end()) ? it->second.object : nullptr;
}
void insert(uint256 const& hash,
std::shared_ptr<NodeObject> const& obj) {
std::unique_lock<std::shared_mutex> lock(mLock);
// Check if size exceeded
if (mCache.size() >= mMaxSize) {
evictLRU(); // Exclusive lock held, safe to modify
}
mCache[hash] = {obj, now()};
}
};

Multiple readers:
Many threads can fetch simultaneously
No contention for cache hits
Scaling: hundreds of concurrent fetches possible
Insert/evict operations:
Exclusive lock for modification
Short-lived (just map operations)
Background eviction: doesn't block fetches

Synchronizing large subtree:
1. Check root: hash differs (need to sync)
2. Check children: most hashes match, one differs
3. Recurse into different child
4. Check its children: all match
5. Later: receive more nodes
6. Check same grandparent again
7. Redundant check (we already knew it was full)

class SHAMapInnerNode {
// Generation number marking when this node was verified complete
std::uint32_t mFullBelow;
};
class FullBelowCache {
// Current generation
std::uint32_t mCurrentGeneration = 0;
bool isKnownFull(SHAMapInnerNode* node) {
return node->mFullBelow == mCurrentGeneration;
}
void markFull(SHAMapInnerNode* node) {
node->mFullBelow = mCurrentGeneration;
}
void invalidate() {
++mCurrentGeneration; // Invalidate all prior markings
}
};

When synchronizing:
Fetch subtree, verify all descendants present
Mark as "full below" with generation ID
Later sync process checks generation
If matches current: skip this subtree (known complete)
If differs: need to re-verify (new sync started)
Result: Avoids re-traversing known-complete subtrees
Significant speedup in incremental sync scenarios

Without caching: Synchronization would take months
With 90% hit rate: Synchronization takes hours
With 95% hit rate: Synchronization takes minutes
Production systems: Carefully tuned caches running at 92-96% hit rate

void processTransaction() {
SecretKey sk = loadKeyFromFile();
// Secret key is now in memory:
// - Stack frame
// - CPU registers
// - Potentially CPU cache
// - Maybe swapped to disk
auto sig = sign(pk, sk, tx);
// Function returns
// Stack frame deallocated
// But what happens to the secret key bytes?
}

// Process crashes
// Core dump written to disk
// Contains all process memory
// Including secret keys!

// System runs out of RAM
// Pages swapped to disk
// Secret keys written to swap file
// May persist even after process exits

// System hibernates
// All RAM written to hibernation file
// Includes secret keys
// File remains on disk until next boot

// System powered off
// RAM still contains data for seconds/minutes
// Attacker boots different OS
// Reads RAM contents
// Recovers secret keys

// Debugger attached to process
// Can read all memory
// Can dump memory to file
// Secret keys exposed

// ❌ WRONG - Compiler may optimize this away
void clearKey(uint8_t* key, size_t size) {
memset(key, 0, size);
// Compiler sees: "Memory about to be freed/unused"
// Optimizes: "No need to write zeros, skip this"
// Result: Key NOT actually erased!
}

void function() {
uint8_t secretKey[32];
// ... use secretKey ...
memset(secretKey, 0, 32); // Compiler: "This write is never read"
// Optimized to: /* nothing */
} // Function returns with secretKey still in memory

// From src/libxrpl/crypto/secure_erase.cpp
void secure_erase(void* dest, std::size_t bytes)
{
OPENSSL_cleanse(dest, bytes);
}

// OpenSSL's implementation (simplified concept):
void OPENSSL_cleanse(void* ptr, size_t len)
{
// Mark as volatile to prevent optimization
unsigned char volatile* vptr = (unsigned char volatile*)ptr;
while (len--) {
*vptr++ = 0; // Compiler cannot optimize away volatile writes
}
// Additional measures:
// - Memory barriers
// - Inline assembly on some platforms
// - Function marked with attributes preventing optimization
}

class SecretKey
{
private:
std::uint8_t buf_[32];
public:
// Constructor: Acquire resource
SecretKey(Slice const& slice)
{
std::memcpy(buf_, slice.data(), sizeof(buf_));
}
// Destructor: Release resource (automatically called)
~SecretKey()
{
secure_erase(buf_, sizeof(buf_));
}
// Prevent copying (would lead to double-erase issues)
SecretKey(SecretKey const&) = delete;
SecretKey& operator=(SecretKey const&) = delete;
// Allow moving (transfer ownership)
SecretKey(SecretKey&&) noexcept = default;
SecretKey& operator=(SecretKey&&) noexcept = default;
};

void processTransaction() {
SecretKey sk = randomSecretKey();
// Use key...
auto sig = sign(pk, sk, tx);
// sk destructor automatically called here
// Key erased even if exception thrown
// No manual cleanup needed
}

void riskyOperation() {
SecretKey sk = loadKey();
doSomething(); // Might throw
doSomethingElse(); // Might throw
finalStep(); // Might throw
// Even if any step throws, sk destructor runs
// Key is securely erased
}

// ❌ Manual cleanup - easy to forget
void manual() {
uint8_t key[32];
fillRandom(key, 32);
// ... use key ...
secure_erase(key, 32); // Must remember!
// What if we add early return?
// What if exception thrown?
}
// ✅ RAII cleanup - automatic
void automatic() {
SecretKey key = randomSecretKey();
// ... use key ...
// Cleanup happens automatically
// Works with early returns
// Works with exceptions
}

// ❌ std::string doesn't securely erase
void badExample() {
std::string secretHex = "1a2b3c4d...";
// Convert to binary
auto secret = parseHex(secretHex);
// Use secret...
// secretHex still in memory!
// std::string's destructor doesn't erase
// Data may be in multiple string instances (SSO, copies, etc.)
}

void explicitErase() {
std::string secretHex = "1a2b3c4d...";
// Convert to binary
auto secret = parseHex(secretHex);
// Use secret...
// Explicitly erase string contents
secure_erase(
const_cast<char*>(secretHex.data()),
secretHex.size());
// Also erase the converted secret
secure_erase(
const_cast<uint8_t*>(secret.data()),
secret.size());
}

void useWrapper() {
std::string secretHex = "1a2b3c4d...";
// Convert to SecretKey (RAII protection)
SecretKey sk = parseSecretKey(secretHex);
// Erase original string
secure_erase(
const_cast<char*>(secretHex.data()),
secretHex.size());
// sk automatically erased when out of scope
}

// Better: Use fixed-size buffers
void fixedBuffer() {
uint8_t secretBytes[32];
getRandomBytes(secretBytes, 32);
SecretKey sk{Slice{secretBytes, 32}};
secure_erase(secretBytes, 32);
// sk automatically erased
}

// Custom allocator that:
// 1. Locks memory (prevents swapping to disk)
// 2. Securely erases on deallocation
template<typename T>
class secure_allocator
{
public:
T* allocate(std::size_t n)
{
T* ptr = static_cast<T*>(std::malloc(n * sizeof(T)));
// Lock memory to prevent swapping
#ifdef _WIN32
VirtualLock(ptr, n * sizeof(T));
#else
mlock(ptr, n * sizeof(T));
#endif
return ptr;
}
void deallocate(T* ptr, std::size_t n)
{
// Securely erase
OPENSSL_cleanse(ptr, n * sizeof(T));
// Unlock
#ifdef _WIN32
VirtualUnlock(ptr, n * sizeof(T));
#else
munlock(ptr, n * sizeof(T));
#endif
std::free(ptr);
}
};
// Usage
std::vector<uint8_t, secure_allocator<uint8_t>> secretData;

void function() {
uint8_t secretKey[32];
fillRandom(secretKey, 32);
// Use key...
secure_erase(secretKey, 32);
// Stack frame still contains key!
// Variables below secretKey might contain fragments
}

void secureFunction() {
// Allocate large array to overwrite stack
uint8_t stackScrubber[4096];
secure_erase(stackScrubber, sizeof(stackScrubber));
// Now continue with sensitive operations
processSecrets();
// Scrub again before returning
secure_erase(stackScrubber, sizeof(stackScrubber));
}

// Secret key passes through:
// 1. CPU registers (during computation)
// 2. L1/L2/L3 cache (for performance)
// 3. TLB (address translation)
// Cannot easily erase these!

{
SecretKey sk = loadKey();
auto sig = sign(pk, sk, tx);
// sk destroyed immediately
} // Scope ends, memory reused quickly

// Perform other operations that use same memory
// This overwrites cache and registers
doOtherWork();

// Modern CPUs have mechanisms to prevent
// cache-based attacks between processes
// Rely on OS and hardware security features

// 1. Use RAII wrappers for secrets
SecretKey sk = randomSecretKey();
// Automatic cleanup
// 2. Minimize secret lifetime
{
SecretKey sk = loadKey();
auto sig = sign(pk, sk, tx);
} // Erased immediately
// 3. Explicitly erase temporary buffers
uint8_t temp[32];
crypto_prng()(temp, 32);
SecretKey sk{Slice{temp, 32}};
secure_erase(temp, 32); // Clean up temp
// 4. Use secure_erase, not memset
secure_erase(buffer, size);
// 5. Mark sensitive functions with comments
// SECURITY: This function handles secret keys
void processSecretKey(SecretKey const& sk) {
// ...
}

// ❌ Don't use memset for secrets
memset(secretKey, 0, 32); // May be optimized away
// ❌ Don't use std::string for long-lived secrets
std::string secretKey = /* ... */; // Not erased!
// ❌ Don't copy secrets unnecessarily
SecretKey copy1 = original; // Now two copies!
SecretKey copy2 = original; // Three copies!
// ❌ Don't log secrets
std::cout << "Key: " << hexKey << "\n"; // Logs persist!
// ❌ Don't pass secrets by value
void bad(SecretKey sk); // Copies made
void good(SecretKey const& sk); // No copy

// Assume: Attacker can read all of your process memory
//
// Defense: Minimize time secrets exist in memory
// Erase immediately when done
// Use RAII to make erasure automatic

// 1. RAII (automatic cleanup)
SecretKey sk = randomSecretKey();
// 2. Explicit temporary erasure
uint8_t temp[32];
/* use temp */
secure_erase(temp, 32);
// 3. Scope minimization
{
// Use secret
} // Destroyed here
// 4. Quick reuse of memory
// New allocations overwrite old data

void testSecureErase() {
uint8_t buffer[32];
// Fill with known pattern
memset(buffer, 0xAA, 32);
// Erase
secure_erase(buffer, 32);
// Verify all zeros
for (int i = 0; i < 32; ++i) {
assert(buffer[i] == 0);
}
}

// Use debugger or memory inspection tools
// Verify secrets are actually erased
// Example with gdb:
// (gdb) x/32xb &secretKey // Before erasure
// (gdb) next // Execute secure_erase
// (gdb) x/32xb &secretKey // After erasure - should be zeros

Node A                                     Node B
| |
| "I am Node A with public key PK_A" |
| "Prove you have secret key SK_A" |
|────────────────────────────────────────>|
| |
| "I am Node B with public key PK_B" |
| "Prove you have secret key SK_B" |
|<────────────────────────────────────────|
| |
| Both nodes must prove: |
| 1. Identity (I own this key) |
| 2. Liveness (I'm here NOW, not replay)|
|   3. Session binding (THIS connection)  |

Node A ──┐        ┌── Node B
         │        │
         └─> Evil <┘
Evil intercepts and relays messages
A thinks it's talking to B
B thinks it's talking to A

Evil records handshake messages from previous session
Replays them to impersonate Node A

Node A tries to connect to itself through network loop
Could cause infinite recursion/waste resources

Mainnet node connects to testnet node
Could cause confusion/invalid transactions

1. SSL/TLS Connection Established
├─ Provides encryption (confidentiality)
├─ Provides basic authentication
└─ Creates shared session state
2. Extract Shared Value from SSL Session
├─ Unique to THIS specific SSL connection
├─ Both nodes can compute it independently
└─ Cannot be predicted beforehand
3. Sign the Shared Value
├─ Node A signs with SK_A
├─ Node B signs with SK_B
└─ Proves possession of private keys
4. Exchange Signatures in HTTP Headers
├─ Verify each other's signatures
├─ Check network IDs match
└─ Prevent self-connection
5. Connection Authenticated!

// ❌ INSECURE: Sign static message
auto sig = sign(pk, sk, "I am Node A");
// Problem: Can be replayed in future connections!

// ✅ SECURE: Sign session-specific value
auto sharedValue = deriveFromSSL(session);
auto sig = sign(pk, sk, sharedValue);
// Can only be used for THIS session

// From src/xrpld/overlay/detail/Handshake.cpp
std::optional<uint256>
makeSharedValue(stream_type& ssl, beast::Journal journal)
{
// Get our "Finished" message from SSL handshake
auto const cookie1 = hashLastMessage(
ssl.native_handle(),
SSL_get_finished);
// Get peer's "Finished" message from SSL handshake
auto const cookie2 = hashLastMessage(
ssl.native_handle(),
SSL_get_peer_finished);
if (!cookie1 || !cookie2)
return std::nullopt;
// XOR the two hashes together
auto const result = (*cookie1 ^ *cookie2);
// Ensure they're not identical (would result in zero)
if (result == beast::zero)
{
JLOG(journal.error()) << "Identical finished messages";
return std::nullopt;
}
// Hash the XOR result to get final shared value
return sha512Half(Slice(result.data(), result.size()));
}

static std::optional<base_uint<512>>
hashLastMessage(
SSL const* ssl,
size_t (*get)(const SSL*, void*, size_t))
{
// Buffer for SSL finished message
unsigned char buf[1024];
size_t len = get(ssl, buf, sizeof(buf));
if (len < 12) // Minimum valid length
return std::nullopt;
// Hash it with SHA-512
base_uint<512> cookie;
SHA512(buf, len, cookie.data());
return cookie;
}

Properties:
1. Session-specific: Different for every connection
2. Unpredictable: Cannot be known before handshake completes
3. Mutual: Both nodes contribute (via XOR)
4. Verifiable: Both nodes can compute independently
5. Binding: Tied to THIS specific SSL session

// From src/xrpld/overlay/detail/Handshake.cpp
void buildHandshake(
boost::beast::http::fields& h,
ripple::uint256 const& sharedValue,
std::optional<std::uint32_t> networkID,
beast::IP::Address public_ip,
beast::IP::Address remote_ip,
Application& app)
{
// 1. Network identification
if (networkID)
h.insert("Network-ID", std::to_string(*networkID));
// 2. Timestamp (freshness, prevent replay)
h.insert("Network-Time",
std::to_string(app.timeKeeper().now().time_since_epoch().count()));
// 3. Node's public key
h.insert("Public-Key",
toBase58(TokenType::NodePublic, app.nodeIdentity().first));
// 4. CRITICAL: Session signature
auto const sig = signDigest(
app.nodeIdentity().first, // Public key
app.nodeIdentity().second, // Secret key
sharedValue); // Session-specific value
h.insert("Session-Signature", base64_encode(sig));
// 5. Instance cookie (prevent self-connection)
h.insert("Instance-Cookie",
std::to_string(app.getInstanceCookie()));
// 6. Optional: Server domain
auto const domain = app.config().SERVER_DOMAIN;
if (!domain.empty())
h.insert("Server-Domain", domain);
// 7. Ledger information
if (auto closed = app.getLedgerMaster().getClosedLedger())
h.insert("Closed-Ledger", to_string(closed->info().hash));
}

// Mainnet: 0
// Testnet: 1
// Devnet: 2, etc.
// Prevents nodes from different networks connecting

// Current time in milliseconds since epoch
// Helps detect replayed handshakes (timestamps too old)
// Not strictly enforced (clocks may be slightly off)

// Node's public key in Base58 format
// Example: nHUpcmNsxAw47yt2ADDoNoQrzLyTJPgnyq5o3xTmMcgV8X3iVVa7
// Used to verify the signature

// Signature of the shared value
// Proves: "I have the secret key for this public key"
// AND "I'm participating in THIS specific SSL session"

// Random value generated on node startup
// If we receive our own cookie back → we're connecting to ourselves!

// Domain name like "ripple.com"
// Can be verified against validator list
// Helps with node identification

std::optional<PublicKey>
verifyHandshake(
http_request_type const& request,
uint256 const& sharedValue,
std::optional<std::uint32_t> networkID,
uint64_t instanceCookie,
beast::Journal journal)
{
// 1. Extract and parse public key
auto const pkStr = request["Public-Key"];
auto const pk = parseBase58<PublicKey>(
TokenType::NodePublic,
pkStr);
if (!pk)
{
JLOG(journal.warn()) << "Invalid public key";
return std::nullopt;
}
// 2. Check network ID matches
if (networkID)
{
auto const theirNetworkID = request["Network-ID"];
if (theirNetworkID.empty() ||
std::to_string(*networkID) != theirNetworkID)
{
JLOG(journal.warn()) << "Network ID mismatch";
return std::nullopt;
}
}
// 3. Check for self-connection
auto const theirCookie = request["Instance-Cookie"];
if (theirCookie == std::to_string(instanceCookie))
{
JLOG(journal.warn()) << "Detected self-connection";
return std::nullopt;
}
// 4. Verify session signature
auto const sigStr = request["Session-Signature"];
auto const sig = base64_decode(sigStr);
if (!verifyDigest(*pk, sharedValue, sig, true))
{
JLOG(journal.warn()) << "Invalid session signature";
return std::nullopt;
}
// 5. Optional: Validate server domain
auto const domain = request["Server-Domain"];
if (!domain.empty() && !isProperlyFormedTomlDomain(domain))
{
JLOG(journal.warn()) << "Invalid server domain";
return std::nullopt;
}
// Success! Return authenticated public key
JLOG(journal.info()) << "Handshake verified for " << toBase58(*pk);
return pk;
}

Node A                                                 Node B
| |
| 1. TCP connection established |
|<--------------------------------------------------->|
| |
| 2. SSL/TLS handshake |
| (Both send "Finished" messages) |
|<--------------------------------------------------->|
| |
| 3. Both compute shared value |
| shared = sha512Half(finishedA XOR finishedB) |
| |
| 4. HTTP Upgrade Request |
| Headers: |
| Public-Key: PK_A |
| Session-Signature: sign(SK_A, shared) |
| Network-ID: 0 |
| Instance-Cookie: COOKIE_A |
|---------------------------------------------------->|
| |
| 5. Node B: |
| - Verify PK_A|
| - Check sig |
| - Check net |
| - Check cookie|
| |
| 6. HTTP Upgrade Response |
| Headers: |
| Public-Key: PK_B |
| Session-Signature: sign(SK_B, shared) |
| Network-ID: 0 |
| Instance-Cookie: COOKIE_B |
|<----------------------------------------------------|
| |
| 7. Node A verifies Node B |
| |
| 8. Begin XRPL protocol |
|<--------------------------------------------------->|

Node A proves: "I have SK_A"
Node B proves: "I have SK_B"

Signature valid ONLY for THIS SSL session
Cannot be replayed in a different session

sharedValue = derived from THIS session's SSL handshake
Old signatures from previous sessions won't verify

Attacker cannot forge signatures without private keys
SSL provides encryption, handshake provides authentication

if (theirCookie == myCookie) {
    // We're talking to ourselves!
    reject();
}

if (theirNetwork != myNetwork) {
    // Different networks (mainnet vs testnet)
    reject();
}

Attacker needs to:
1. Know Node A's secret key (impossible - properly secured)
2. Sign the shared value (requires secret key)
Without SK_A, cannot create valid signature

Shared value is different for each SSL session
Old signature: sign(SK, oldSharedValue)
New session: verify(PK, newSharedValue, oldSignature)
Result: Verification fails (different shared values)

SSL provides:
- Encryption (attacker can't read/modify)
- Certificate validation (can detect impersonation)
Application handshake provides:
- Signature verification (requires private keys)
- Session binding (tied to SSL session)
Attacker would need to:
1. Break SSL (extremely difficult)
2. AND forge signatures (impossible without keys)

// 1. Always verify the shared value
auto sharedValue = makeSharedValue(ssl, journal);
if (!sharedValue) {
disconnect("Failed to create shared value");
}
// 2. Always require canonical signatures
if (!verifyDigest(pk, sharedValue, sig, true)) {
disconnect("Invalid signature");
}
// 3. Always check network ID
if (theirNetwork != myNetwork) {
disconnect("Network mismatch");
}
// 4. Always check instance cookie
if (theirCookie == myCookie) {
disconnect("Self-connection detected");
}

// ❌ Don't skip signature verification
if (config.TRUSTED_NODE) {
// Skip verification - WRONG!
}
// ❌ Don't ignore network ID
// connect(); // Oops, might be wrong network
// ❌ Don't allow self-connections
// They waste resources and can cause issues

// Handshake happens once per connection
// Not a performance bottleneck
Typical handshake time:
- SSL/TLS handshake: 50-100ms
- Shared value computation: <1ms
- Signature creation: <1ms
- Signature verification: <1ms
Total: ~50-100ms
// Amortized over connection lifetime (hours/days)
// Cost is negligible

Cast: Swiss army knife for interacting with EVM smart contracts
Anvil: Local Ethereum node for development
Chisel: Fast, utilitarian, and verbose Solidity REPL
Before installing Foundry, ensure you have:
A terminal application (Git BASH or WSL for Windows users)
Internet connection for downloading the installer
Foundryup is the official installer for the Foundry toolchain and the easiest way to get started.
Install Foundryup
Open your terminal and run:
Follow the on-screen instructions to complete the installation
Install Foundry tools
Run the following command to install the latest stable version:
For the latest nightly build (with newest features), use:
If you prefer to build from source or need a custom configuration:
Install Rust
First, install Rust using rustup:
Update Rust (if already installed)
Install via Cargo
For containerized development:
After installation, verify that Foundry is properly installed by checking the version:
You should see version information for Forge, confirming the installation was successful.
Windows: Use Git BASH or WSL as your terminal. PowerShell and Command Prompt are not currently supported by foundryup
macOS: Standard terminal works perfectly
Linux: Any terminal emulator will work
If you encounter issues during installation:
Check your internet connection
Ensure you have the latest version of your terminal
For Windows users: Make sure you're using Git BASH or WSL
Refer to the Foundry FAQ for additional help
Foundry binaries are verified using GitHub artifact attestations to ensure integrity and authenticity. The installer automatically verifies these attestations during installation.
✅ Checkpoint: You now have Foundry installed and ready to use for XRPL sidechain development!
Now that Foundry is installed, let's create a new project and understand the project structure.
Initialize a new Foundry project using the forge init command:
This creates a new directory with a complete Foundry project structure.
After initialization, your project will have the following structure:
/src: Contains your smart contracts written in Solidity
/test: Contains test files for your contracts (typically with .t.sol extension)
/script: Contains deployment scripts and other automation scripts
/lib: Contains external dependencies managed by Soldeer package manager
Soldeer is Foundry's native package manager for handling smart contract dependencies. Let's configure it for your project:
Initialize Soldeer
Verify Soldeer Configuration
Check that foundry.toml now includes Soldeer configuration:
You should see something like:
Compile the smart contracts to ensure everything is set up correctly:
You should see output indicating successful compilation of the example Counter contract.
Run the included tests to verify the setup:
This will execute all tests in the /test directory and show the results.
Your project now has access to all Foundry tools:
forge
Build, test, debug, deploy and verify smart contracts
anvil
Run a local Ethereum development node with forking capabilities
cast
Interact with contracts, send transactions, and retrieve chain data
chisel
Fast Solidity REPL for rapid prototyping and debugging
To add external smart contract libraries (like OpenZeppelin), use Soldeer:
Dependencies will be automatically added to your foundry.toml and downloaded to the /lib directory.
You can customize various aspects of your project by editing foundry.toml:
✅ Checkpoint: You now have a fully initialized Foundry project with proper structure and dependency management!
Before deploying contracts to the XRPL sidechain, you need to set up a wallet and obtain test tokens for deployment.
MetaMask is a popular Ethereum wallet that works with XRPL EVM sidechain. Follow these steps to install and set it up:
Install MetaMask
Visit metamask.io
Click "Download" and choose your browser
Install the browser extension
Create a new wallet or import an existing one
Create a New Wallet (if you don't have one)
Click "Create a wallet"
Set a strong password
IMPORTANT: Write down your seed phrase securely and never share it
Configure MetaMask to connect to the XRPL EVM sidechain:
Open MetaMask and click the network dropdown (usually shows "Ethereum Mainnet")
Add Custom Network with these details:
Save the network configuration
Switch to the XRPL EVM Sidechain network
⚠️ Security Warning: Never share your private key with anyone. Only use it for development and testing purposes.
Open MetaMask
Click the three dots next to your account name
Select "Account Details"
Click "Show private key"
Enter your password
Copy the private key (you'll need this for deployment)
You need test XRP tokens to deploy contracts on the XRPL sidechain:
Visit the XRPL EVM Faucet: https://faucet.xrplevm.org/
Connect Your Wallet
Click "Connect Wallet"
Select MetaMask
Approve the connection
Request Test Tokens
Make sure you're on the XRPL EVM Sidechain network
Your wallet address should be displayed
Click "Send me XRP" or the equivalent button
Verify Token Receipt
Check your MetaMask balance
You should see test XRP tokens in your wallet
Create a .env file in your project root to store your private key securely:
Create the file:
Add your private key:
Add .env to .gitignore (if not already present):
Update your foundry.toml to include XRPL-specific settings:
Verify your environment is ready:
Check your balance:
Load environment variables (if using .env):
Test connection:
Never commit your private key to version control
Use environment variables for sensitive data
Use separate wallets for development and production
Regularly backup your seed phrase securely
Consider using hardware wallets for production deployments
✅ Checkpoint: Your environment is now configured with MetaMask, test tokens, and proper security setup!
Now that your environment is set up, let's deploy the example Counter contract to the XRPL EVM sidechain using the forge create command.
First, let's look at the Counter contract in src/Counter.sol:
This simple contract stores a number and provides functions to set and increment it.
The forge create command allows you to deploy a single contract directly without deployment scripts.
For better security, load your environment variables first:
If you configured the RPC endpoint in foundry.toml, you can use:
After successful deployment, you'll see output like:
Key Information:
Deployer: Your wallet address
Deployed to: Your contract's address on the blockchain
Transaction hash: Reference for the deployment transaction
If your contract has constructor parameters, use the --constructor-args flag:
Deploy with arguments:
Test your deployment without broadcasting to the network:
This simulates the deployment and shows gas estimates without spending real tokens.
Check deployment on block explorer:
Search for your contract address
View the contract details
Interact with your contract using Cast:
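For example, you can read and write the Counter contract's state directly from the command line. These commands are illustrative: the function signatures come from src/Counter.sol shown earlier, and you should substitute your own deployed address for $CONTRACT_ADDRESS.

# Hypothetical examples — adjust the address to your deployment
cast call $CONTRACT_ADDRESS "number()(uint256)" --rpc-url $RPC_URL
cast send $CONTRACT_ADDRESS "increment()" --rpc-url $RPC_URL --private-key $PRIVATE_KEY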
If the XRPL EVM sidechain supports contract verification, you can verify your contract:
Always test locally first:
Check gas costs before deployment:
Keep track of deployed contracts:
Save contract addresses in a deployment log
Document which version was deployed when
Keep constructor arguments for reference
Insufficient Funds:
Check your balance: cast balance YOUR_ADDRESS --rpc-url $RPC_URL
Get more test tokens from the faucet if needed
Private Key Format:
Ensure your private key starts with 0x
Check for any extra spaces or characters
Network Issues:
Verify the RPC URL is correct
Check if the XRPL EVM sidechain is operational
Compilation Errors:
Run forge build first to check for compilation issues
Ensure all dependencies are properly installed
If you encounter EIP-1559 issues, use the --legacy flag:
🎉 Congratulations! You have successfully deployed your first smart contract to the XRPL EVM sidechain using Foundry! Your contract is now live and ready for interaction.
1. Simpler mathematics:
2. Better caching:
3. Modern design:
Use ed25519:
New accounts (recommended)
High-throughput applications
When performance matters
Modern systems
Use secp256k1:
Compatibility requirements
Existing accounts (can't change)
Cross-chain interoperability
Legacy systems
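In code, this decision collapses to a single key-type choice at generation time. A minimal sketch using the randomKeyPair API shown elsewhere in this document; the needsLegacyCompat flag is illustrative:

// Choose the algorithm once, at key generation time
bool const needsLegacyCompat = false; // e.g. interop with existing secp256k1 accounts
auto const keyType = needsLegacyCompat ? KeyType::secp256k1 : KeyType::ed25519;
auto [pk, sk] = randomKeyPair(keyType);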
Benefits:
Avoids repeated derivation
Reduces ledger lookups
Especially beneficial for frequently-used accounts
Considerations:
Cache must expire (memory limits)
Expiry time vs hit rate trade-off
Thread safety required
Only cache verified transactions (not unverified)
Benefits:
Avoids recomputing unchanged subtrees
Critical for Merkle tree performance
Cache invalidation on modification
Benefits:
Massive speedup for multiple verifications
Ideal for transaction processing
Only available for Ed25519
Limitations:
Batch fails if ANY signature is invalid
Must verify individually to find which failed
Requires all same algorithm (ed25519)
Considerations:
Cryptographic operations are CPU-bound
Parallelism limited by number of cores
Thread synchronization overhead
Good for batch processing
Use ed25519 for new accounts
Cache frequently-used data
Batch operations when possible
Profile before optimizing
Use parallel processing for batches
Don't sacrifice security for speed
Don't cache unverified data
Don't over-optimize negligible operations
Don't forget thread safety
Performance optimization in cryptography:
Algorithm choice matters: ed25519 is 4-5× faster than secp256k1
Verification is the bottleneck: Focus optimization here
Caching helps: Public keys, verification results, hashes
Batch operations: Especially for ed25519
Parallel processing: Utilize multiple cores
Profile first: Measure before optimizing
Never sacrifice security: Performance < Security
Key takeaways:
Use ed25519 for new accounts (faster, simpler)
Cache wisely (but verify first)
Batch when possible (ed25519 batch verification)
Profile to find real bottlenecks
Optimize hot paths only
Security always comes first
A cryptographic hash function takes arbitrary input and produces a fixed-size output:
1. Deterministic
2. Fast to Compute
3. Avalanche Effect
4. Preimage Resistance (One-Way)
5. Collision Resistance
Why truncate SHA-512 instead of using SHA-256?
Performance on 64-bit processors:
On 64-bit systems (which all modern servers are), SHA-512 is faster than SHA-256 despite producing more output. By truncating to 256 bits, we get the best of both worlds.
Transaction IDs:
Ledger Object Keys:
Merkle Tree Nodes:
When to use the secure variant:
Hashing secret keys or seeds
Deriving keys from passwords
Any operation involving sensitive data
Why it matters:
1. Defense in Depth
2. Compactness
3. Quantum Resistance (Partial)
Why double SHA-256?
Historical reasons (inherited from early cryptocurrency designs):
Provides defense against length-extension attacks
Standard pattern for checksums
Well-tested over many years
Checksum properties:
4 bytes = 32 bits = 2^32 possible values
Probability of a random error passing the checksum: 1 in 4,294,967,296
Why use prefixes?
Prevent cross-protocol attacks where a hash from one context is used in another:
Hash functions can process data incrementally:
Benefits:
Stream large files without loading into memory
Hash complex data structures field by field
More efficient for large inputs
Example: Hashing a transaction
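A minimal sketch of that example, based on the incremental hasher and hash_append pattern used elsewhere in this chapter; the field list is illustrative, not the full canonical ordering:

// Hash a transaction field by field, without building one big buffer
sha512_half_hasher h;
hash_append(h, HashPrefix::transactionID); // domain-separation prefix first
hash_append(h, account);                   // then each serialized field in order
hash_append(h, destination);
hash_append(h, amount);
uint256 txID = static_cast<uint256>(h);    // finalize: SHA-512 truncated to 256 bits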
The "birthday attack" on a 256-bit hash requires:
Conclusion: Collision attacks on SHA-512-Half are not feasible with current or foreseeable technology.
A collision in any of these would be catastrophic, but the probability is negligible.
Why cache?
Merkle tree nodes are hashed repeatedly
Caching avoids redundant computation
Invalidate when node contents change
Algorithm      Output     Throughput   Used for
─────────────────────────────────────────────────────────────────────
SHA-512-Half   256 bits   ~650 MB/s    Transaction IDs, object keys, Merkle trees
SHA-256        256 bits   ~450 MB/s    Base58Check checksums
RIPEMD-160     160 bits   ~200 MB/s    Part of RIPESHA (address generation)
Use sha512Half for new protocols
Use hash prefixes for domain separation
Cache computed hashes when appropriate
Use secure variant for sensitive data
Don't use non-cryptographic hashes for security
Don't implement your own hash function
Don't assume hashes are unique without checking
Hash functions in XRPL provide:
Integrity: Detect any data modification
Identification: Unique IDs for transactions and objects
Efficiency: Fast computation on modern CPUs
Security: Collision and preimage resistance
Key algorithms:
SHA-512-Half: Primary hash (fast on 64-bit systems)
RIPESHA: Address generation (compact, defense in depth)
SHA-256: Checksums (standard, well-tested)
Usage patterns:
Always use hash prefixes for domain separation
Cache hashes when recomputed frequently
Use secure variants for sensitive data
Trust collision resistance but code defensively
In the next chapter, we'll explore Base58Check encoding and how XRPL makes binary data human-readable.
Consider an account ID in different formats:
Problems with hex:
Easy to mistype: 8B8A vs 8B8B
Visually similar characters: 0 (zero) vs O (letter O)
No error detection: One wrong character, wrong address
Not compact: 40 characters for 20 bytes
Base58Check solutions:
Excludes confusing characters
Includes checksum (detects errors)
More compact: 34 characters for 20 bytes + checksum
URL-safe (no special characters)
Excluded characters:
These exclusions prevent human transcription errors.
Included: 58 characters
Base58 is like converting a number to a different base (like hexadecimal is base 16):
This ensures the encoding is one-to-one: every distinct byte sequence produces a distinct string.
Base58 alone doesn't detect errors. Base58Check adds a checksum:
The type byte determines the first character of the encoded result:
This provides visual identification of what kind of data you're looking at.
The 4-byte (32-bit) checksum provides strong error detection:
Types of errors detected:
Single character typos: 100%
Transpositions: 100%
Missing characters: 100%
Extra characters: 100%
Random corruption: 99.9999999767%
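These detection rates follow directly from the checksum size: a random 4-byte value matches by chance with probability 1 in 2^32.

P(undetected) = 1 / 2^32 ≈ 2.33 × 10^-10
P(detected)   = 1 − 1/2^32 ≈ 99.9999999767%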
Seeds can be encoded in two formats:
Properties:
Compact (25-28 characters)
Checksum for error detection
Safe to copy-paste
Properties:
12 words from a dictionary
Easier to write down by hand
Easier to read aloud (for backup)
Checksum built into last word
Encoding      Base   Readable   Checksum        Compact        URL-safe
──────────────────────────────────────────────────────────────────────
Hex           16     No         No              No (2×)        Yes
Base64        64     Yes        No              Yes (1.33×)    No (+, /)
Base58        58     Yes        No              Yes (1.37×)    Yes
Base58Check   58     Yes        Yes (4 bytes)   Yes (1.37×)    Yes
Base58Check wins for:
Human readability (no confusing characters)
Error detection (checksum)
URL safety (no special characters)
Blockchain addresses
Solution:
Solution:
Solution:
When performance matters:
Base58Check encoding makes binary cryptographic data human-friendly:
Excludes confusing characters: No 0, O, I, l
Includes checksum: 4-byte SHA-256(SHA-256(...)) checksum
Type prefixes: Different first characters for different data types
Error detection: ~99.9999999767% of errors detected
URL-safe: No special characters
Compact: ~37% overhead vs 100% for hex
Usage in XRPL:
Account addresses: starts with 'r'
Seeds: starts with 's'
Public keys: starts with 'a' or 'n'
Best practices:
Always validate before using
Use library functions, don't implement yourself
Store binary internally, encode only for display
Provide clear error messages for invalid input
In the next chapter, we'll explore how cryptography secures peer-to-peer communication in the XRPL network.
A peer connection in the XRP Ledger overlay network goes through a well-defined lifecycle: discovery, establishment, activation, maintenance, and termination. Understanding this lifecycle is crucial for debugging connectivity issues, optimizing network performance, and implementing new networking features.
Each phase involves careful coordination between multiple subsystems, resource management decisions, and thread-safe state transitions. This lesson traces the complete journey of a peer connection through the codebase.
The connection lifecycle consists of five distinct phases:
Before a connection can be established, nodes must discover potential peers. The PeerFinder subsystem manages peer discovery and slot allocation.
Discovery sources include:
Fixed Peers: Configured in rippled.cfg under [ips_fixed], these are always-connect peers that the node prioritizes.
Bootstrap Peers: Initial peers used when joining the network for the first time, typically well-known, reliable nodes.
Peer Exchange: Active peers share their known endpoints, enabling organic discovery of new nodes.
When OverlayImpl decides to connect to a peer, it creates a ConnectAttempt object that manages the asynchronous connection process:
The ConnectAttempt::run() method initiates an asynchronous TCP connection:
Using shared_from_this() ensures the ConnectAttempt object remains alive until the asynchronous operation completes, even if other references are released.
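An illustrative sketch of that pattern (not rippled's exact code): binding shared_from_this() into the asio completion handler extends the object's lifetime until the handler runs.

// Keep-alive pattern for an asynchronous TCP connect
void ConnectAttempt::run()
{
    socket_.async_connect(
        remoteEndpoint_,
        [self = shared_from_this()](boost::system::error_code ec)
        {
            // The captured shared_ptr guarantees *this is still alive here,
            // even if all other references were released meanwhile
            self->onConnect(ec);
        });
}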
Once the TCP connection succeeds, the handshake phase begins. This involves TLS negotiation followed by protocol-level handshaking.
For outbound connections, ConnectAttempt::processResponse handles the handshake:
For inbound connections, PeerImp::doAccept handles the server side of the handshake:
Once the handshake completes successfully, the peer becomes active. The OverlayImpl::activate method registers the peer in the overlay's tracking structures:
The add_active method handles the full registration process:
After activation, PeerImp::doProtocolStart begins the message exchange:
During normal operation, peers exchange messages continuously. The maintenance phase involves:
Message Processing: Reading incoming messages and dispatching to appropriate handlers.
Health Monitoring: Tracking response times, message rates, and connection quality.
Resource Management: Ensuring fair bandwidth allocation and detecting abuse.
Connections may terminate for various reasons: network errors, protocol violations, resource limits, or graceful shutdown. Proper cleanup is essential to prevent resource leaks.
The PeerImp destructor handles final cleanup:
The overlay updates its state when a peer disconnects:
Every phase involves resource management decisions:
Discovery: PeerFinder limits the number of endpoints tracked to prevent memory exhaustion.
Establishment: Resource Manager checks if the endpoint has a good reputation before allowing connection.
Activation: Slots are finite resources allocated by PeerFinder based on configuration.
Maintenance: Bandwidth and message rates are monitored, with misbehaving peers penalized.
Termination: All allocated resources must be released to prevent leaks.
The connection lifecycle involves multiple threads:
IO Threads: Handle asynchronous network operations.
Job Queue Threads: Process completed operations and state transitions.
Application Threads: May query peer state or initiate connections.
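The usual way to make these threads coexist safely is a per-peer strand. This is an illustrative sketch of the standard Boost.Asio pattern, not rippled's exact code:

#include <boost/asio.hpp>

class PeerState
{
    boost::asio::strand<boost::asio::io_context::executor_type> strand_;

public:
    explicit PeerState(boost::asio::io_context& ioc)
        : strand_(boost::asio::make_strand(ioc.get_executor()))
    {
    }

    // Post work that touches this peer's state; the strand guarantees these
    // handlers never run concurrently, whichever IO thread picks them up
    template <class F>
    void post(F&& f)
    {
        boost::asio::post(strand_, std::forward<F>(f));
    }
};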
The connection lifecycle is a carefully orchestrated sequence of phases, each with specific responsibilities and resource management requirements. Understanding this lifecycle enables you to debug connectivity issues, optimize network performance, and safely implement new networking features.
Digital signatures are the heart of XRPL's security. Every transaction must be signed with the private key corresponding to the sending account. This signature is mathematical proof that the account owner authorized the transaction. Without a valid signature, a transaction is rejected immediately.
In this chapter, we'll trace the complete signing and verification pipeline, understand the differences between secp256k1 and ed25519, and explore why canonical signatures matter.
curl -L https://foundry.paradigm.xyz | bash

foundryup

foundryup --version nightly

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

rustup update stable

cargo install --git https://github.com/foundry-rs/foundry --profile release --locked forge cast chisel anvil

forge soldeer init

cat foundry.toml

[profile.default]
src = "src"
out = "out"
libs = ["lib"]
[dependencies]
# Dependencies will be listed here

Network Name: XRPL EVM Sidechain
New RPC URL: https://rpc-evm-sidechain.xrpl.org
Chain ID: 1440002
Currency Symbol: XRP
Block Explorer URL: https://evm-sidechain.xrpl.org

touch .env

PRIVATE_KEY=your_private_key_here
RPC_URL=https://rpc.testnet.xrplevm.org

echo ".env" >> .gitignore

cast balance --rpc-url https://rpc.testnet.xrplevm.org YOUR_WALLET_ADDRESS

source .env

cast block-number --rpc-url $RPC_URL

# Start local Anvil node
anvil
# Deploy to local node (in another terminal)
forge create src/Counter.sol:Counter \
--rpc-url http://127.0.0.1:8545 \
--private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 \
--broadcast

# Dry run to see gas estimates
forge create src/Counter.sol:Counter --rpc-url $RPC_URL

docker pull ghcr.io/foundry-rs/foundry:latest

forge --version

forge init my-xrpl-project
cd my-xrpl-project

my-xrpl-project/
├── foundry.toml # Foundry configuration file
├── src/ # Smart contracts directory
│ └── Counter.sol # Example contract
├── script/ # Deployment and interaction scripts
│ └── Counter.s.sol # Example deployment script
├── test/ # Test files
│ └── Counter.t.sol # Example test file
└── lib/ # Dependencies (managed by Soldeer)

forge build

forge test

# Example: Add OpenZeppelin contracts
forge soldeer install @openzeppelin/[email protected]

[profile.default]
src = "src"
out = "out"
libs = ["lib"]
optimizer = true
optimizer_runs = 200
via_ir = true
[rpc_endpoints]
xrpl_devnet = "https://rpc-evm-sidechain.xrpl.org"

[profile.default]
src = "src"
out = "out"
libs = ["lib"]
optimizer = true
optimizer_runs = 200
[rpc_endpoints]
xrpl = "https://rpc.testnet.xrplevm.org"
[etherscan]
xrpl = { key = "your_api_key_if_available", url = "https://evm-sidechain.xrpl.org/api" }

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.13;
contract Counter {
uint256 public number;
function setNumber(uint256 newNumber) public {
number = newNumber;
}
function increment() public {
number++;
}
}

forge create src/Counter.sol:Counter \
--rpc-url https://rpc.testnet.xrplevm.org \
--private-key $PRIVATE_KEY \
--broadcast

source .env
forge create src/Counter.sol:Counter \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY \
--broadcastforge create src/Counter.sol:Counter \
--rpc-url xrpl \
--private-key $PRIVATE_KEY \
--broadcast

[⠢] Compiling...
[⠆] Compiling 1 files with 0.8.19
[⠰] Solc 0.8.19 finished in 1.23s
Compiler run successful!
Deployer: 0xa735b3c25f...
Deployed to: 0x4054415432...
Transaction hash: 0x6b4e0ff93a...

// Example contract with constructor
contract MyToken {
constructor(string memory name, string memory symbol, uint256 supply) {
// constructor logic
}
}

forge create src/MyToken.sol:MyToken \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY \
--broadcast \
--constructor-args "MyToken" "MTK" 1000000forge create src/Counter.sol:Counter \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY
# Note: No --broadcast flag

forge verify-contract \
--chain-id 1440002 \
--watch \
--verifier blockscout \
--verifier-url https://evm-sidechain.xrpl.org/api \
$CONTRACT_ADDRESS \
src/Counter.sol:Counter

forge create src/Counter.sol:Counter \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY \
--broadcast \
--legacy

// 4-5× faster than secp256k1
auto [pk, sk] = randomKeyPair(KeyType::ed25519);

// Public keys, verification results, hashes
cache.get(key);

// Especially for ed25519 batch verification
verifyBatch(pks, messages, sigs);

// Measure actual bottlenecks
// Don't optimize blindly

// ❌ Skipping canonicality checks
// ❌ Using weak algorithms
// ❌ Reducing key sizes

// ❌ Caching before verification
// ✅ Cache after verification succeeds

// Hashing is fast (~1 μs)
// Focus on signatures (~100-500 μs)

// Caches need proper locking
// Crypto libraries might not be thread-safe

// Approximate timings on modern hardware (2023-era CPU)
Operation secp256k1 ed25519 Winner
─────────────────────────────────────────────────────────
Key generation ~100 μs ~50 μs ed25519
Public key derivation ~100 μs ~50 μs ed25519
Signing ~200 μs ~50 μs ed25519 (4x faster)
Verification ~500 μs ~100 μs ed25519 (5x faster)
Batch verification N/A Available ed25519
─────────────────────────────────────────────────────────
Public key size 33 bytes 33 bytes Tie
Signature size         ~71 bytes  64 bytes   ed25519

// secp256k1:
// - Complex curve operations
// - Modular arithmetic with large primes
// - DER encoding/decoding overhead
// ed25519:
// - Optimized curve (Curve25519)
// - Simpler point arithmetic
// - No encoding overhead (raw bytes)

// ed25519 operations fit better in CPU cache
// Fewer memory accesses
// More predictable branching

// Ed25519 designed in 2011 with performance in mind
// secp256k1 designed in 2000 before modern optimizations

// In XRPL consensus:
// Every validator verifies EVERY transaction signature
// 1000 tx/s × 50 validators = 50,000 verifications/second
// With secp256k1:
50,000 × 500 μs = 25,000,000 μs = 25 seconds of CPU time
// With ed25519:
50,000 × 100 μs = 5,000,000 μs = 5 seconds of CPU time
// Ed25519 saves 20 seconds of CPU time per second!
// Allows for higher throughput or more validators

// Throughput on modern 64-bit CPU
Algorithm Throughput Notes
────────────────────────────────────────────────────
SHA-512 ~650 MB/s 64-bit optimized
SHA-512-Half ~650 MB/s Same (just truncated)
SHA-256 ~450 MB/s 32-bit operations
RIPEMD-160 ~200 MB/s Older algorithm
RIPESHA        ~200 MB/s   Limited by RIPEMD-160

// On 64-bit processors:
SHA-512: Uses 64-bit operations → fast
SHA-256: Uses 32-bit operations → slower on 64-bit CPU
// SHA-512-Half gives us:
Performance of SHA-512 (~650 MB/s)
Output size of SHA-256 (32 bytes)
// Best of both worlds!

// Transaction ID calculation:
Serialize transaction: ~1 KB
Hash with SHA-512-Half: ~1.5 μs
// Negligible compared to signature verification (100-500 μs)
// Not a bottleneck

// Problem: Deriving public key from signature is expensive
// Solution: Cache account ID → public key mappings
class PublicKeyCache
{
private:
std::unordered_map<AccountID, PublicKey> cache_;
std::shared_mutex mutex_;
size_t maxSize_ = 10000;
public:
std::optional<PublicKey> get(AccountID const& id)
{
std::shared_lock lock(mutex_);
auto it = cache_.find(id);
return it != cache_.end() ? std::optional{it->second} : std::nullopt;
}
void put(AccountID const& id, PublicKey const& pk)
{
std::unique_lock lock(mutex_);
if (cache_.size() >= maxSize_)
cache_.clear(); // Simple eviction
cache_[id] = pk;
}
};
// Usage:
PublicKey getAccountPublicKey(AccountID const& account)
{
// Check cache first
if (auto pk = keyCache.get(account))
return *pk;
// Not in cache - derive from ledger
auto pk = deriveFromLedger(account);
// Cache for next time
keyCache.put(account, pk);
return pk;
}

// Problem: Same transaction verified multiple times
// Solution: Cache transaction hash → verification result
class VerificationCache
{
private:
struct Entry {
bool valid;
std::chrono::steady_clock::time_point expiry;
};
std::unordered_map<uint256, Entry> cache_;
std::shared_mutex mutex_;
public:
std::optional<bool> check(uint256 const& txHash)
{
std::shared_lock lock(mutex_);
auto it = cache_.find(txHash);
if (it == cache_.end())
return std::nullopt;
// Check if expired
if (std::chrono::steady_clock::now() > it->second.expiry) {
return std::nullopt; // Expired
}
return it->second.valid;
}
void store(uint256 const& txHash, bool valid)
{
std::unique_lock lock(mutex_);
cache_[txHash] = Entry{
valid,
std::chrono::steady_clock::now() + std::chrono::minutes(10)
};
}
};
// Usage:
bool verifyTransaction(Transaction const& tx)
{
auto txHash = tx.getHash();
// Check cache
if (auto cached = verifyCache.check(txHash))
return *cached;
// Not cached - verify
bool valid = verify(tx.publicKey, tx.data, tx.signature, true);
// Cache result
verifyCache.store(txHash, valid);
return valid;
}

// Merkle tree nodes cache their hashes
class SHAMapNode
{
private:
uint256 hash_;
bool hashValid_ = false;
public:
uint256 const& getHash()
{
if (!hashValid_) {
hash_ = computeHash();
hashValid_ = true;
}
return hash_;
}
void invalidateHash()
{
hashValid_ = false;
// Parent nodes also invalidated (recursively)
}
};

// Ed25519 supports batch verification
// Verify multiple signatures faster than individually
bool verifyBatch(
std::vector<PublicKey> const& publicKeys,
std::vector<Slice> const& messages,
std::vector<Slice> const& signatures)
{
// Batch verification algorithm:
// Combines multiple verification equations
// Single verification check for all signatures
//
// Time: ~1.2 × single verification
// Instead of: N × single verification
//
// For N=100: 100× speedup!
// (Simplified: the real ed25519-donna batch API takes arrays of raw
// pointers and lengths, plus a per-signature validity output array)
return ed25519_sign_open_batch(
messages.data(),
messages.size(),
publicKeys.data(),
signatures.data(),
messages.size()) == 0;
}

// For hashing multiple items
void hashMultiple(
std::vector<Slice> const& items,
std::vector<uint256>& hashes)
{
hashes.resize(items.size()); // resize (not reserve) so indexing below is defined
// Option 1: Parallel hashing
#pragma omp parallel for
for (size_t i = 0; i < items.size(); ++i) {
hashes[i] = sha512Half(items[i]);
}
// Option 2: Vectorized hashing (if available)
// Some crypto libraries support SIMD hashing:
// hashMultipleSIMD(items, hashes);
}

// Verify signatures in parallel
std::vector<bool> verifyParallel(
std::vector<Transaction> const& transactions)
{
// std::vector<bool> is bit-packed and unsafe for concurrent writes,
// so collect results into a plain byte vector first
std::vector<uint8_t> flags(transactions.size());
// Use thread pool
#pragma omp parallel for
for (size_t i = 0; i < transactions.size(); ++i) {
flags[i] = verifyTransaction(transactions[i]) ? 1 : 0;
}
return std::vector<bool>(flags.begin(), flags.end());
}

// Verify asynchronously
std::future<bool> verifyAsync(Transaction const& tx)
{
return std::async(std::launch::async, [tx]() {
return verifyTransaction(tx);
});
}
// Usage:
std::vector<std::future<bool>> futures;
for (auto const& tx : transactions) {
futures.push_back(verifyAsync(tx));
}
// Collect results
for (auto& future : futures) {
bool valid = future.get();
// ...
}

// Ed25519 signatures are smaller and fixed-size
Signature size:
secp256k1: 70-72 bytes (variable, DER encoded)
ed25519: 64 bytes (fixed, raw bytes)
// For 1,000,000 signatures:
secp256k1: ~71 MB
ed25519: ~64 MB
// Savings: 7 MB (10%)
// Also: Fixed size easier to handle

// Compressed public keys
secp256k1: 33 bytes (compressed)
ed25519: 33 bytes
// Both use compression
// No optimization available

// Measure cryptographic operations
auto measureSign = []() {
auto [pk, sk] = randomKeyPair(KeyType::ed25519);
std::vector<uint8_t> message(1000, 0xAA);
auto start = std::chrono::high_resolution_clock::now();
for (int i = 0; i < 1000; ++i) {
auto sig = sign(pk, sk, makeSlice(message));
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
std::cout << "Average sign time: " << duration.count() / 1000.0 << " μs\n";
};

// Use profiler to find hotspots
// Example output:
Function Time % Total
───────────────────────────────────────────────
verifyTransaction 45.2% Critical
├─ ed25519_sign_open 42.1% ← Bottleneck
└─ sha512Half 2.8%
processLedger 35.1%
├─ computeMerkleRoot 20.3%
└─ serializeTransactions      14.8%

// Approximate numbers from XRPL mainnet:
Transactions per ledger: ~50-200
Ledger close time: ~3-5 seconds
Validators: ~35-40
Signature verifications per second:
(150 tx/ledger × 40 validators) / 4 seconds = 1,500 verifications/second
With ed25519 (100 μs each):
1,500 × 0.0001s = 0.15 seconds of CPU time per second
= 15% CPU utilization
With secp256k1 (500 μs each):
1,500 × 0.0005s = 0.75 seconds of CPU time per second
= 75% CPU utilization
Ed25519 allows 5× higher throughput with same CPU!

uint256 hash = sha512Half(data); // Fast and standard

uint256 hash = sha512Half(HashPrefix::custom, data);

if (cached)
return cachedHash;
cachedHash = sha512Half(data);
return cachedHash;

uint256 hash = sha512Half_s(secretData);

std::hash<std::string>{}(data); // ❌ NOT SECURE

uint32_t myHash(data) { /* ... */ } // ❌ Don't do this

// Even though collisions are infeasible, handle errors gracefully
if (hashExists(newHash))
handleCollision(); // Paranoid but correct

Input (any size) → Hash Function → Output (fixed size)
"Hello" → sha512Half → 0x7F83B165...
"Hello World!" → sha512Half → 0xA591A6D4...
[1 MB file] → sha512Half → 0x3C9F2A8B...

sha512Half("Hello") == sha512Half("Hello") // Always true
// Same input always produces same output

// Can hash gigabytes per second
auto hash = sha512Half(largeData); // Microseconds to milliseconds

sha512Half("Hello") → 0x7F83B165...
sha512Half("Hello!") → 0xC89F3AB2... // Completely different!
// One bit change → ~50% of output bits flip

// Given hash, cannot find input
uint256 hash = 0x7F83B165...;
// No way to compute: input = reverse_hash(hash);

// Cannot find two inputs with same hash
// sha512Half(x) == sha512Half(y) where x != y
// Computationally infeasible

// Not SHA-256, but SHA-512 truncated to 256 bits
template <class... Args>
uint256 sha512Half(Args const&... args)
{
sha512_half_hasher h;
hash_append(h, args...);
return static_cast<typename sha512_half_hasher::result_type>(h);
}

SHA-512: Operates on 64-bit words → ~650 MB/s on modern CPUs
SHA-256: Operates on 32-bit words → ~450 MB/s on modern CPUs
SHA-512-Half = SHA-512 speed + SHA-256 output size

// From src/libxrpl/protocol/digest.cpp
class sha512_half_hasher
{
private:
SHA512_CTX ctx_;
public:
using result_type = uint256;
sha512_half_hasher()
{
SHA512_Init(&ctx_);
}
void operator()(void const* data, std::size_t size) noexcept
{
SHA512_Update(&ctx_, data, size);
}
operator result_type() noexcept
{
// Compute full SHA-512 (64 bytes)
std::uint8_t digest[64];
SHA512_Final(digest, &ctx_);
// Return first 32 bytes (256 bits)
result_type result;
std::memcpy(result.data(), digest, 32);
return result;
}
};

uint256 STTx::getTransactionID() const
{
Serializer s;
s.add32(HashPrefix::transactionID);
addWithoutSigningFields(s);
return sha512Half(s.slice());
}

uint256 keylet::account(AccountID const& id)
{
return sha512Half(
HashPrefix::account,
id);
}

uint256 SHAMapInnerNode::getHash() const
{
// (hash_ and hashValid_ are mutable members, so this const method can cache)
if (hashValid_)
return hash_;
Serializer s;
for (auto const& child : children_)
s.add256(child.getHash());
hash_ = sha512Half(s.slice());
hashValid_ = true;
return hash_;
}

// Secure variant that erases internal state
uint256 sha512Half_s(Slice const& data)
{
sha512_half_hasher h;
h(data.data(), data.size());
auto result = static_cast<uint256>(h);
// Hasher destructor securely erases internal state
// This prevents sensitive data from lingering in memory
return result;
}

// Regular variant
auto hash1 = sha512Half(secretData);
// SHA512_CTX still contains secretData fragments in memory
// Secure variant
auto hash2 = sha512Half_s(secretData);
// SHA512_CTX is securely erased

class ripesha_hasher
{
private:
openssl_sha256_hasher sha_;
public:
using result_type = ripemd160_hasher::result_type; // 20 bytes
void operator()(void const* data, std::size_t size) noexcept
{
// First: SHA-256
sha_(data, size);
}
operator result_type() noexcept
{
// Get SHA-256 result (32 bytes)
auto const sha256_digest =
static_cast<openssl_sha256_hasher::result_type>(sha_);
// Second: RIPEMD-160 of the SHA-256
ripemd160_hasher ripe;
ripe(sha256_digest.data(), sha256_digest.size());
return static_cast<result_type>(ripe); // 20 bytes
}
};

If SHA-256 is broken:
RIPEMD-160 provides second layer
If RIPEMD-160 is broken:
SHA-256 provides protection
Breaking both: requires defeating two independent algorithms

Public Key: 33 bytes
↓ SHA-256
SHA-256 hash: 32 bytes
↓ RIPEMD-160
Account ID: 20 bytes (40% smaller than public key)

Quantum computers may break elliptic curves:
PublicKey → SecretKey (vulnerable)
But cannot reverse hashes:
AccountID ↛ PublicKey (still secure)
This provides time to upgrade the system if quantum computers emerge.

// Calculate account ID from public key
AccountID calcAccountID(PublicKey const& pk)
{
ripesha_hasher h;
h(pk.data(), pk.size());
return AccountID{static_cast<ripesha_hasher::result_type>(h)};
}
// Calculate node ID from public key
NodeID calcNodeID(PublicKey const& pk)
{
ripesha_hasher h;
h(pk.data(), pk.size());
return NodeID{static_cast<ripesha_hasher::result_type>(h)};
}

// From src/libxrpl/protocol/tokens.cpp
std::string encodeBase58Token(
TokenType type,
void const* token,
std::size_t size)
{
std::vector<uint8_t> buffer;
buffer.push_back(static_cast<uint8_t>(type));
auto const* tokenBytes = static_cast<uint8_t const*>(token);
buffer.insert(buffer.end(), tokenBytes, tokenBytes + size);
// Compute checksum: first 4 bytes of SHA-256(SHA-256(data))
auto const hash1 = sha256(makeSlice(buffer));
auto const hash2 = sha256(makeSlice(hash1));
// Append checksum
buffer.insert(buffer.end(), hash2.begin(), hash2.begin() + 4);
// Base58 encode
return base58Encode(buffer);
}

4 bytes = 32 bits = 2^32 possible values
Probability of random corruption matching checksum: 1 in 4,294,967,296
Effectively catches all typos and errors.

// From include/xrpl/protocol/HashPrefix.h
enum class HashPrefix : std::uint32_t
{
transactionID = 0x54584E00, // 'TXN\0'
txSign = 0x53545800, // 'STX\0'
txMultiSign = 0x534D5400, // 'SMT\0'
manifest = 0x4D414E00, // 'MAN\0'
ledgerMaster = 0x4C575200, // 'LWR\0'
ledgerInner = 0x4D494E00, // 'MIN\0'
ledgerLeaf = 0x4D4C4E00, // 'MLN\0'
accountRoot = 0x41525400, // 'ART\0'
};

// Without prefixes (BAD):
hash_tx = SHA512Half(tx_data)
hash_msg = SHA512Half(msg_data)
// If tx_data == msg_data, then hash_tx == hash_msg
// Could cause confusion/attacks
// With prefixes (GOOD):
hash_tx = SHA512Half(PREFIX_TX, tx_data)
hash_msg = SHA512Half(PREFIX_MSG, msg_data)
// Even if tx_data == msg_data, hash_tx != hash_msg

// Transaction ID
uint256 getTransactionID(STTx const& tx)
{
Serializer s;
s.add32(HashPrefix::transactionID); // Add prefix first
tx.addWithoutSigningFields(s);
return sha512Half(s.slice());
}
// Signing data (different prefix, different hash)
uint256 getSigningHash(STTx const& tx)
{
Serializer s;
s.add32(HashPrefix::txSign); // Different prefix
tx.addWithoutSigningFields(s);
return sha512Half(s.slice());
}

// Instead of hashing all at once:
auto hash = sha512Half(bigData); // Requires loading all data
// Can hash incrementally:
sha512_half_hasher h;
h(chunk1.data(), chunk1.size());
h(chunk2.data(), chunk2.size());
h(chunk3.data(), chunk3.size());
auto hash = static_cast<uint256>(h);

Serializer s;
s.add32(HashPrefix::transactionID);
s.addVL(tx.getFieldVL(sfAccount));
s.addVL(tx.getFieldVL(sfDestination));
s.add64(tx.getFieldU64(sfAmount));
// ... more fields ...
return sha512Half(s.slice());

Number of hashes to find collision = 2^(256/2) = 2^128
2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456
If you could compute 1 trillion hashes per second:
Time = 2^128 / (10^12) seconds
     ≈ 10^19 years
(Universe age ≈ 10^10 years)

// XRPL relies on collision resistance for:
// 1. Transaction IDs must be unique
uint256 txID = sha512Half(tx);
// 2. Ledger object keys must not collide
uint256 accountKey = sha512Half(HashPrefix::account, accountID);
// 3. Merkle tree integrity
uint256 nodeHash = sha512Half(leftChild, rightChild);

// Benchmark results (approximate, hardware-dependent):
SHA-512-Half: ~650 MB/s
SHA-256: ~450 MB/s
RIPEMD-160: ~200 MB/s
For 1 KB transaction:
SHA-512-Half: ~1.5 microseconds

class SHAMapNode
{
private:
mutable uint256 hash_;      // mutable: the const getter below caches here
mutable bool hashValid_;
public:
uint256 getHash() const
{
if (hashValid_)
return hash_; // Return cached value
// Compute hash (expensive)
hash_ = computeHash();
hashValid_ = true;
return hash_;
}
void invalidateHash()
{
hashValid_ = false; // Force recomputation next time
}
};

Binary (20 bytes):
10001011 10001010 01101100 01010011 00111111 ...
Hexadecimal:
8B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F
Base58Check:
rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

// Base58 alphabet - 58 unambiguous characters
static const char* BASE58_ALPHABET =
"123456789" // Digits (no 0)
"ABCDEFGHJKLMNPQRSTUVWXYZ" // Uppercase (no I, O)
"abcdefghijkmnopqrstuvwxyz"; // Lowercase (no l)0 (zero) - Looks like O (letter O)
O (letter O) - Looks like 0 (zero)
I (letter I) - Looks like l (lowercase L) or 1
l (lowercase L) - Looks like I (letter I) or 1Digits: 1 2 3 4 5 6 7 8 9 (9 characters)
Uppercase: A B C D E F G H J K L M N P Q R S T U V W X Y Z (24 characters)
Lowercase: a b c d e f g h i j k m n o p q r s t u v w x y z (25 characters)
Total: 58 charactersDecimal: 255 = 2×100 + 5×10 + 5×1
Hex: FF = 15×16 + 15×1
Base58: 4k = 4×58 + 45×1// Conceptually: treat byte array as big integer
std::vector<uint8_t> input = {0x8B, 0x8A, ...};
// Convert to big integer
BigInt value = 0;
for (uint8_t byte : input)
value = value * 256 + byte;
// Convert to base58
std::string result;
while (value > 0) {
int remainder = value % 58;
result = BASE58_ALPHABET[remainder] + result;
value = value / 58;
}

// Special case: preserve leading zero bytes as '1' characters
for (uint8_t byte : input) {
if (byte == 0)
result = '1' + result;
else
break;
}

// From src/libxrpl/protocol/tokens.cpp (simplified)
std::string base58Encode(std::vector<uint8_t> const& input)
{
// Skip leading zeros, but count them
int leadingZeros = 0;
for (auto byte : input) {
if (byte == 0)
++leadingZeros;
else
break;
}
// Allocate output buffer (worst case size)
std::vector<uint8_t> b58(input.size() * 138 / 100 + 1);
// Process the bytes
for (auto byte : input) {
int carry = byte;
for (auto it = b58.rbegin(); it != b58.rend(); ++it) {
carry += 256 * (*it);
*it = carry % 58;
carry /= 58;
}
}
// Convert to string, skipping leading zeros in b58
std::string result;
for (int i = 0; i < leadingZeros; ++i)
result += '1';
for (auto value : b58) {
if (value != 0 || !result.empty())
result += BASE58_ALPHABET[value];
}
return result.empty() ? "1" : result;
}

Structure:
[Type Byte] [Payload] [Checksum (4 bytes)]
↓ ↓ ↓
0x00       20 bytes   SHA256(SHA256(prefix + payload))

// From src/libxrpl/protocol/tokens.cpp
std::string encodeBase58Token(
TokenType type,
void const* token,
std::size_t size)
{
std::vector<uint8_t> buffer;
buffer.reserve(1 + size + 4);
// Step 1: Add type prefix
buffer.push_back(static_cast<uint8_t>(type));
// Step 2: Add payload
auto const* tokenBytes = static_cast<uint8_t const*>(token);
buffer.insert(buffer.end(), tokenBytes, tokenBytes + size);
// Step 3: Compute checksum
// First SHA-256
auto const hash1 = sha256(makeSlice(buffer));
// Second SHA-256
auto const hash2 = sha256(makeSlice(hash1));
// Step 4: Append first 4 bytes of second hash as checksum
buffer.insert(buffer.end(), hash2.begin(), hash2.begin() + 4);
// Step 5: Base58 encode everything
return base58Encode(buffer);
}

enum class TokenType : std::uint8_t {
None = 1,
NodePublic = 28, // Node public keys: starts with 'n'
NodePrivate = 32, // Node private keys
AccountID = 0, // Account addresses: starts with 'r'
AccountPublic = 35, // Account public keys: starts with 'a'
AccountSecret = 34, // Account secret keys (deprecated)
FamilySeed = 33, // Seeds: starts with 's'
};

Type 0 (AccountID) → starts with 'r'
Type 33 (FamilySeed) → starts with 's'
Type 28 (NodePublic) → starts with 'n'
Type 35 (AccountPublic) → starts with 'a'

std::string decodeBase58Token(
std::string const& s,
TokenType type)
{
// Step 1: Decode from Base58
auto const decoded = base58Decode(s);
if (decoded.empty())
return {}; // Invalid Base58
// Step 2: Check minimum size (type + checksum = 5 bytes minimum)
if (decoded.size() < 5)
return {};
// Step 3: Verify type byte matches
if (decoded[0] != static_cast<uint8_t>(type))
return {}; // Wrong type
// Step 4: Verify checksum
auto const dataEnd = decoded.end() - 4; // Last 4 bytes are checksum
auto const providedChecksum = Slice{dataEnd, decoded.end()};
// Recompute checksum
auto const hash1 = sha256(makeSlice(decoded.begin(), dataEnd));
auto const hash2 = sha256(makeSlice(hash1));
auto const computedChecksum = Slice{hash2.begin(), hash2.begin() + 4};
// Compare
if (!std::equal(
providedChecksum.begin(),
providedChecksum.end(),
computedChecksum.begin()))
return {}; // Checksum mismatch
// Step 5: Return payload (skip type byte and checksum)
return std::string(decoded.begin() + 1, dataEnd);
}

Probability of random error passing checksum:
1 / 2^32 = 1 / 4,294,967,296
Approximately: 1 in 4.3 billion

// Start with public key
PublicKey pk = /* ed25519 public key */;
// ED9434799226374926EDA3B54B1B461B4ABF7237962EEB1144C10A7CA6A9D32C64
// Step 1: Calculate account ID (RIPESHA hash)
AccountID accountID = calcAccountID(pk);
// 8B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F (20 bytes)
// Step 2: Encode as Base58Check address
std::string address = toBase58(accountID);
// rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C
// Encoding breakdown:
// 1. Prepend type byte 0x00
// 008B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F
//
// 2. Compute checksum:
// SHA-256: 7C9B2F8F...
// SHA-256: 3D4B8E9C...
// Take first 4 bytes: 3D4B8E9C
//
// 3. Append checksum:
// 008B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F3D4B8E9C
//
// 4. Base58 encode:
// rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

Seed seed = generateRandomSeed();
std::string b58 = toBase58(seed);
// Example: sp5fghtJtpUorTwvof1NpDXAzNwf5

std::string words = seedAs1751(seed);
// Example: "MAD WARM EVEN SHOW BALK FELT TOY STIR OBOE COST HOPE VAIN"

// Generate key pair
auto [publicKey, secretKey] = randomKeyPair(KeyType::ed25519);
// Derive account ID
AccountID accountID = calcAccountID(publicKey);
// Encode as address
std::string address = toBase58(accountID);
std::cout << "Your XRPL address: " << address << "\n";
// Your XRPL address: rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

bool isValidAddress(std::string const& address)
{
// Try to decode
auto decoded = decodeBase58Token(address, TokenType::AccountID);
// Valid if:
// 1. Decoding succeeded
// 2. Payload is correct size (20 bytes)
return !decoded.empty() && decoded.size() == 20;
}
// Usage
if (!isValidAddress(userInput)) {
std::cerr << "Invalid XRPL address\n";
return;
}

std::optional<PublicKey> parsePublicKey(std::string const& s)
{
if (s.empty())
return std::nullopt; // Guard before indexing s[0]
// Try AccountPublic type (starts with 'a')
if (s[0] == 'a') {
auto decoded = decodeBase58Token(s, TokenType::AccountPublic);
if (!decoded.empty())
return PublicKey{makeSlice(decoded)};
}
// Try NodePublic type (starts with 'n')
if (s[0] == 'n') {
auto decoded = decodeBase58Token(s, TokenType::NodePublic);
if (!decoded.empty())
return PublicKey{makeSlice(decoded)};
}
return std::nullopt; // Invalid
}

// User types address wrong
std::string userAddress = "rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4D"; // Last char wrong
// Send funds without validation
sendPayment(userAddress, amount); // WRONG ADDRESS!

if (!isValidAddress(userAddress)) {
throw std::runtime_error("Invalid address - check for typos");
}

// ❌ WRONG
bool isAddress(std::string const& s) {
return s[0] == 'r'; // Too simplistic
}

// ✅ CORRECT
bool isAddress(std::string const& s) {
return !decodeBase58Token(s, TokenType::AccountID).empty();
}

// ❌ WRONG - Don't implement yourself
std::string myBase58Encode(/* ... */) {
// Custom implementation - likely has bugs
}

// ✅ CORRECT - Use library functions
std::string encoded = encodeBase58Token(type, data, size);

// Base58 encoding is relatively slow compared to hex:
// Hex encoding: ~1 microsecond
// Base58 encoding: ~10 microseconds
// But this doesn't matter for user-facing operations:
// - Displaying addresses: once per UI render
// - Parsing user input: once per input
// - Not a bottleneck in practice

// For internal storage and processing, use binary:
AccountID accountID; // 20 bytes, fast comparisons
// Only encode to Base58 when presenting to users:
std::string address = toBase58(accountID); // For display only

Algorithm    Output     Throughput   Used for
─────────────────────────────────────────────────────────────
RIPEMD-160   160 bits   ~200 MB/s    Part of RIPESHA (address generation)
RIPESHA      160 bits   ~300 MB/s    Account IDs, node IDs

Encoding      Base   Readable   Checksum        Compact        URL-safe
──────────────────────────────────────────────────────────────────────
Base64        64     Yes        No              Yes (1.33×)    No (+, /)
Base58        58     Yes        No              Yes (1.37×)    Yes
Base58Check   58     Yes        Yes (4 bytes)   Yes (1.37×)    Yes
A digital signature proves three things:
Authenticity: The signature was created by someone with the secret key
Integrity: The signed data hasn't been modified
Non-repudiation: The signer cannot deny having signed
Parameters:
pk: Public key (for key type detection)
sk: Secret key (the signing key)
m: Message (the data to sign)
Returns:
A Buffer containing the signature bytes
How it works:
Allocate 64-byte buffer
Call ed25519_sign with message, keys, and output buffer
Return the signature
Properties:
Always produces exactly 64 bytes
Deterministic: same message + key = same signature
Fast: ~50 microseconds
No pre-hashing needed
Signature format:
[R (32 bytes)][S (32 bytes)] = 64 bytes total
where R and S are elliptic curve points/scalars (mathematical details abstracted by the library).
How it works:
Pre-hash the message: Compute SHA-512-Half of the message
Sign the digest: Use ECDSA to sign the 32-byte hash
Serialize: Encode signature in DER format
Why pre-hash?
ECDSA works on fixed-size inputs (32 bytes)
Messages can be any size
Hashing first normalizes all inputs to 32 bytes
Security proof for ECDSA assumes you're signing a hash
Why DER encoding? DER (Distinguished Encoding Rules) is a standard binary format from X.509:
Deterministic Nonces (RFC 6979):
This is critical for security. ECDSA requires a random "nonce" (number used once) for each signature. If:
The same nonce is used twice with the same key → secret key can be extracted
The nonce is predictable → secret key can be extracted
RFC 6979 derives the nonce deterministically from the message and secret key, making it:
Different for every message
Unpredictable to attackers
Free from random number generation failures
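To make the danger concrete, here is the standard textbook algebra (not XRPL-specific) for extracting a secret key x from two ECDSA signatures that reused a nonce; h1 and h2 are the message hashes and n is the curve order:

// Two signatures (r, s1) and (r, s2) made with the same nonce k share r:
//   s1 = k⁻¹(h1 + r·x) mod n
//   s2 = k⁻¹(h2 + r·x) mod n
// Subtracting eliminates x:
//   s1 − s2 = k⁻¹(h1 − h2)  ⇒  k = (h1 − h2)·(s1 − s2)⁻¹ mod n
// With k recovered, either equation yields the secret key:
//   x = (s1·k − h1)·r⁻¹ mod n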
Parameters:
publicKey: The public key to verify against
m: The message that was signed
sig: The signature to verify
mustBeFullyCanonical: Whether to enforce strict canonicality (important!)
Returns:
true if signature is valid
false if signature is invalid or malformed
Canonicality check:
Why check canonicality? Ensures the S component is in the valid range. This prevents malformed signatures from being processed.
The digest verification function:
Steps:
Check canonicality: Ensure signature is in canonical form
Parse public key: Convert from compressed format to library format
Parse signature: Decode DER encoding
Verify: Check mathematical relationship between public key, message, and signature
In secp256k1, a signature is a pair of numbers (R, S). Due to the mathematics of elliptic curves:
If (R, S) is valid, then (R, -S mod n) is also valid
where n is the curve order. This means one message has two valid signatures.
Why this is dangerous:
Attack scenarios:
Transaction ID confusion: Applications tracking txID1 won't see the transaction confirmed (it confirms as txID2)
Double-spend attempts: Submit both versions, one might get through
Chain reaction: If txID is used as input to another transaction, that transaction becomes invalid
Require S to be in the "low" range (S ≤ n/2; if S > n/2, use n − S instead):
This makes each signature unique—only one valid signature per message.
Canonicality levels:
Enforcement:
In production, XRPL always sets mustBeFullyCanonical = true to prevent malleability.
Ed25519 signatures are inherently canonical—there's only one valid signature per message. The curve mathematics don't allow the kind of malleability that exists in ECDSA.
This is one of the design advantages of Ed25519 over secp256k1.
What gets signed:
The signature is computed over:
A prefix (HashPrefix::txSign)
All transaction fields (except the signature itself)
XRPL supports multi-signature transactions where multiple parties must sign:
Each signer independently signs the transaction, and all signatures are verified.
Why verification speed matters:
Every validator must verify every transaction signature. In a high-throughput system:
Ed25519's speed advantage is significant at scale.
Digital signatures in XRPL:
Purpose: Prove authorization, ensure integrity, enable non-repudiation
Two algorithms:
secp256k1: Hash-then-sign, DER encoding, requires canonicality checks
ed25519: Direct signing, fixed 64 bytes, inherently canonical
Signing: Secret key + message → signature
Verification: Public key + message + signature → valid/invalid
Malleability: secp256k1 requires canonical signatures to prevent attacks
Performance: ed25519 is faster for both signing and verification
Key takeaways:
Always enforce canonical signatures for secp256k1
Ed25519 is recommended for new accounts (faster, simpler)
Verification happens for every transaction in the network
Multi-signing allows multiple parties to authorize a transaction
In the next chapter, we'll explore hash functions and how they're used throughout XRPL for integrity and identification.
# Set your contract address
CONTRACT_ADDRESS="your_deployed_contract_address"
# Get the current counter value (should be 0)
cast call $CONTRACT_ADDRESS "number()" --rpc-url $RPC_URL
# Increment the counter
cast send $CONTRACT_ADDRESS "increment()" \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY
# Check the new value (should be 1)
cast call $CONTRACT_ADDRESS "number()" --rpc-url $RPC_URL
# Set a specific number
cast send $CONTRACT_ADDRESS "setNumber(uint256)" 42 \
--rpc-url $RPC_URL \
--private-key $PRIVATE_KEY

// Utilize multiple cores
#pragma omp parallel for

┌──────────────┐
│ DISCOVERY │ Finding potential peers
└──────┬───────┘
│
▼
┌──────────────┐
│ESTABLISHMENT │ Initiating TCP connection
└──────┬───────┘
│
▼
┌──────────────┐
│ HANDSHAKE │ Protocol negotiation
└──────┬───────┘
│
▼
┌──────────────┐
│ ACTIVATION │ Becoming active peer
└──────┬───────┘
│
▼
┌──────────────┐
│ MAINTENANCE │ Message exchange
└──────┬───────┘
│
▼
┌──────────────┐
│ TERMINATION │ Cleanup and removal
└──────────────┘

void
OverlayImpl::connect(beast::IP::Endpoint const& remote_endpoint)
{
XRPL_ASSERT(work_, "ripple::OverlayImpl::connect : work is set");
auto usage = resourceManager().newOutboundEndpoint(remote_endpoint);
if (usage.disconnect(journal_))
{
JLOG(journal_.info()) << "Over resource limit: " << remote_endpoint;
return;
}
auto const [slot, result] = peerFinder().new_outbound_slot(remote_endpoint);
if (slot == nullptr)
{
JLOG(journal_.debug()) << "Connect: No slot for " << remote_endpoint
<< ": " << to_string(result);
return;
}
auto const p = std::make_shared<ConnectAttempt>(
app_,
io_context_,
beast::IPAddressConversion::to_asio_endpoint(remote_endpoint),
usage,
setup_.context,
next_id_++,
slot,
app_.journal("Peer"),
*this);
std::lock_guard lock(mutex_);
list_.emplace(p.get(), p);
p->run();
}

void
ConnectAttempt::run()
{
if (!strand_.running_in_this_thread())
return boost::asio::post(
strand_, std::bind(&ConnectAttempt::run, shared_from_this()));
JLOG(journal_.debug()) << "run: connecting to " << remote_endpoint_;
ioPending_ = true;
// Allow up to connectTimeout_ seconds to establish remote peer connection
setTimer(ConnectionStep::TcpConnect);
stream_.next_layer().async_connect(
remote_endpoint_,
boost::asio::bind_executor(
strand_,
std::bind(
&ConnectAttempt::onConnect,
shared_from_this(),
std::placeholders::_1)));
}

void
ConnectAttempt::processResponse()
{
if (!OverlayImpl::isPeerUpgrade(response_))
{
// A peer may respond with service_unavailable and a list of alternative
// peers to connect to, a differing status code is unexpected
if (response_.result() !=
boost::beast::http::status::service_unavailable)
{
JLOG(journal_.warn())
<< "Unable to upgrade to peer protocol: " << response_.result()
<< " (" << response_.reason() << ")";
return shutdown();
}
// Parse response body to determine if this is a redirect or other
// service unavailable
std::string responseBody;
responseBody.reserve(boost::asio::buffer_size(response_.body().data()));
for (auto const buffer : response_.body().data())
responseBody.append(
static_cast<char const*>(buffer.data()),
boost::asio::buffer_size(buffer));
Json::Value json;
Json::Reader reader;
auto const isValidJson = reader.parse(responseBody, json);
// Check if this is a redirect response (contains peer-ips field)
auto const isRedirect =
isValidJson && json.isObject() && json.isMember("peer-ips");
if (!isRedirect)
{
JLOG(journal_.warn())
<< "processResponse: " << remote_endpoint_
<< " failed to upgrade to peer protocol: " << response_.result()
<< " (" << response_.reason() << ")";
return shutdown();
}
Json::Value const& peerIps = json["peer-ips"];
if (!peerIps.isArray())
return fail("processResponse: invalid peer-ips format");
// Extract and validate peer endpoints
std::vector<boost::asio::ip::tcp::endpoint> redirectEndpoints;
redirectEndpoints.reserve(peerIps.size());
for (auto const& ipValue : peerIps)
{
if (!ipValue.isString())
continue;
error_code ec;
auto const endpoint = parse_endpoint(ipValue.asString(), ec);
if (!ec)
redirectEndpoints.push_back(endpoint);
}
// Notify PeerFinder about the redirect; redirectEndpoints may be empty
overlay_.peerFinder().onRedirects(remote_endpoint_, redirectEndpoints);
return fail("processResponse: failed to connect to peer: redirected");
}
// Just because our peer selected a particular protocol version doesn't
// mean that it's acceptable to us. Check that it is:
std::optional<ProtocolVersion> negotiatedProtocol;
{
auto const pvs = parseProtocolVersions(response_["Upgrade"]);
if (pvs.size() == 1 && isProtocolSupported(pvs[0]))
negotiatedProtocol = pvs[0];
if (!negotiatedProtocol)
return fail(
"processResponse: Unable to negotiate protocol version");
}
auto const sharedValue = makeSharedValue(*stream_ptr_, journal_);
if (!sharedValue)
return shutdown(); // makeSharedValue logs
try
{
auto const publicKey = verifyHandshake(
response_,
*sharedValue,
overlay_.setup().networkID,
overlay_.setup().public_ip,
remote_endpoint_.address(),
app_);
usage_.setPublicKey(publicKey);
JLOG(journal_.debug())
<< "Protocol: " << to_string(*negotiatedProtocol);
JLOG(journal_.info())
<< "Public Key: " << toBase58(TokenType::NodePublic, publicKey);
auto const member = app_.cluster().member(publicKey);
if (member)
{
JLOG(journal_.info()) << "Cluster name: " << *member;
}
auto const result = overlay_.peerFinder().activate(
slot_, publicKey, member.has_value());
if (result != PeerFinder::Result::success)
{
std::stringstream ss;
ss << "Outbound Connect Attempt " << remote_endpoint_ << " "
<< to_string(result);
return fail(ss.str());
}
if (!socket_.is_open())
return;
if (shutdown_)
return tryAsyncShutdown();
auto const peer = std::make_shared<PeerImp>(
app_,
std::move(stream_ptr_),
read_buf_.data(),
std::move(slot_),
std::move(response_),
usage_,
publicKey,
*negotiatedProtocol,
id_,
overlay_);
overlay_.add_active(peer);
}
catch (std::exception const& e)
{
return fail(std::string("Handshake failure (") + e.what() + ")");
}
}
void
PeerImp::doAccept()
{
XRPL_ASSERT(
read_buffer_.size() == 0,
"ripple::PeerImp::doAccept : empty read buffer");
JLOG(journal_.debug()) << "doAccept";
// a shutdown was initiated before the handshake, there is nothing to do
if (shutdown_)
return tryAsyncShutdown();
auto const sharedValue = makeSharedValue(*stream_ptr_, journal_);
// This shouldn't fail since we already computed
// the shared value successfully in OverlayImpl
if (!sharedValue)
return fail("makeSharedValue: Unexpected failure");
JLOG(journal_.debug()) << "Protocol: " << to_string(protocol_);
if (auto member = app_.cluster().member(publicKey_))
{
{
std::unique_lock lock{nameMutex_};
name_ = *member;
}
JLOG(journal_.info()) << "Cluster name: " << *member;
}
overlay_.activate(shared_from_this());
// XXX Set timer: connection is in grace period to be useful.
// XXX Set timer: connection idle (idle may vary depending on connection
// type.)
auto write_buffer = std::make_shared<boost::beast::multi_buffer>();
boost::beast::ostream(*write_buffer) << makeResponse(
!overlay_.peerFinder().config().peerPrivate,
request_,
overlay_.setup().public_ip,
remote_address_.address(),
*sharedValue,
overlay_.setup().networkID,
protocol_,
app_);
// Write the whole buffer and only start protocol when that's done.
boost::asio::async_write(
stream_,
write_buffer->data(),
boost::asio::transfer_all(),
bind_executor(
strand_,
[this, write_buffer, self = shared_from_this()](
error_code ec, std::size_t bytes_transferred) {
if (!socket_.is_open())
return;
if (ec == boost::asio::error::operation_aborted)
return tryAsyncShutdown();
if (ec)
return fail("onWriteResponse", ec);
if (write_buffer->size() == bytes_transferred)
return doProtocolStart();
return fail("Failed to write header");
}));
}
void
OverlayImpl::activate(std::shared_ptr<PeerImp> const& peer)
{
beast::WrappedSink sink{journal_.sink(), peer->prefix()};
beast::Journal journal{sink};
// Now track this peer
{
std::lock_guard lock(mutex_);
auto const result(ids_.emplace(
std::piecewise_construct,
std::make_tuple(peer->id()),
std::make_tuple(peer)));
XRPL_ASSERT(
result.second,
"ripple::OverlayImpl::activate : peer ID is inserted");
(void)result.second;
}
JLOG(journal.debug()) << "activated";
// We just accepted this peer so we have non-zero active peers
XRPL_ASSERT(size(), "ripple::OverlayImpl::activate : nonzero peers");
}
void
OverlayImpl::add_active(std::shared_ptr<PeerImp> const& peer)
{
beast::WrappedSink sink{journal_.sink(), peer->prefix()};
beast::Journal journal{sink};
std::lock_guard lock(mutex_);
{
auto const result = m_peers.emplace(peer->slot(), peer);
XRPL_ASSERT(
result.second,
"ripple::OverlayImpl::add_active : peer is inserted");
(void)result.second;
}
{
auto const result = ids_.emplace(
std::piecewise_construct,
std::make_tuple(peer->id()),
std::make_tuple(peer));
XRPL_ASSERT(
result.second,
"ripple::OverlayImpl::add_active : peer ID is inserted");
(void)result.second;
}
list_.emplace(peer.get(), peer);
JLOG(journal.debug()) << "activated";
// As we are not on the strand, run() must be called
// while holding the lock, otherwise new I/O can be
// queued after a call to stop().
peer->run();
}

void
PeerImp::doProtocolStart()
{
// a shutdown was initiated before the handshake, there is nothing to do
if (shutdown_)
return tryAsyncShutdown();
onReadMessage(error_code(), 0);
// Send all the validator lists that have been loaded
if (inbound_ && supportsFeature(ProtocolFeature::ValidatorListPropagation))
{
app_.validators().for_each_available(
[&](std::string const& manifest,
std::uint32_t version,
std::map<std::size_t, ValidatorBlobInfo> const& blobInfos,
PublicKey const& pubKey,
std::size_t maxSequence,
uint256 const& hash) {
ValidatorList::sendValidatorList(
*this,
0,
pubKey,
maxSequence,
version,
manifest,
blobInfos,
app_.getHashRouter(),
p_journal_);
// Don't send it next time.
app_.getHashRouter().addSuppressionPeer(hash, id_);
});
}
if (auto m = overlay_.getManifestsMessage())
send(m);
setTimer(peerTimerInterval);
}

void
PeerImp::close()
{
XRPL_ASSERT(
strand_.running_in_this_thread(),
"ripple::PeerImp::close : strand in this thread");
if (!socket_.is_open())
return;
cancelTimer();
error_code ec;
socket_.close(ec);
overlay_.incPeerDisconnect();
// The rationale for using different severity levels is that
// outbound connections are under our control and may be logged
// at a higher level, but inbound connections are more numerous and
// uncontrolled so to prevent log flooding the severity is reduced.
JLOG((inbound_ ? journal_.debug() : journal_.info())) << "close: Closed";
}

PeerImp::~PeerImp()
{
bool const inCluster{cluster()};
overlay_.deletePeer(id_);
overlay_.onPeerDeactivate(id_);
overlay_.peerFinder().on_closed(slot_);
overlay_.remove(slot_);
if (inCluster)
{
JLOG(journal_.warn()) << name() << " left cluster";
}
}

void
OverlayImpl::onPeerDeactivate(Peer::id_t id)
{
std::lock_guard lock(mutex_);
ids_.erase(id);
}

Transaction Data + Secret Key → Signature
Transaction Data + Public Key + Signature → Valid/Invalid

// From src/libxrpl/protocol/SecretKey.cpp
Buffer sign(
PublicKey const& pk,
SecretKey const& sk,
Slice const& m)
{
// Automatically detect key type from public key
auto const type = publicKeyType(pk.slice());
switch (*type)
{
case KeyType::ed25519:
return signEd25519(pk, sk, m);
case KeyType::secp256k1:
return signSecp256k1(pk, sk, m);
}
}

case KeyType::ed25519: {
Buffer b(64); // Ed25519 signatures are always 64 bytes
ed25519_sign(
m.data(), m.size(), // Message to sign
sk.data(), // Secret key
pk.data() + 1, // Public key (skip 0xED prefix)
b.data()); // Output buffer
return b;
}

[R (32 bytes)][S (32 bytes)] = 64 bytes total

case KeyType::secp256k1: {
// Step 1: Hash the message with SHA-512-Half
sha512_half_hasher h;
h(m.data(), m.size());
auto const digest = sha512_half_hasher::result_type(h);
// Step 2: Sign the digest (not the raw message)
secp256k1_ecdsa_signature sig_imp;
secp256k1_ecdsa_sign(
secp256k1Context(),
&sig_imp,
reinterpret_cast<unsigned char const*>(digest.data()),
reinterpret_cast<unsigned char const*>(sk.data()),
secp256k1_nonce_function_rfc6979, // Deterministic nonce
nullptr);
// Step 3: Serialize to DER format
unsigned char sig[72];
size_t len = sizeof(sig);
secp256k1_ecdsa_signature_serialize_der(
secp256k1Context(),
sig,
&len,
&sig_imp);
return Buffer{sig, len};
}

DER Format:
0x30 [total length]
0x02 [R length] [R bytes]
0x02 [S length] [S bytes]
Example:
30 44
02 20 [32 bytes of R]
02 20 [32 bytes of S]
Total: ~70-72 bytes (variable length!)

secp256k1_nonce_function_rfc6979

// From src/libxrpl/protocol/PublicKey.cpp
bool verify(
PublicKey const& publicKey,
Slice const& m,
Slice const& sig,
bool mustBeFullyCanonical) noexcept
{
// Detect key type
auto const type = publicKeyType(publicKey);
if (!type)
return false;
if (*type == KeyType::secp256k1)
{
return verifySecp256k1(publicKey, m, sig, mustBeFullyCanonical);
}
else if (*type == KeyType::ed25519)
{
return verifyEd25519(publicKey, m, sig);
}
return false;
}

else if (*type == KeyType::ed25519)
{
// Check signature is canonical
if (!ed25519Canonical(sig))
return false;
// Verify signature
return ed25519_sign_open(
m.data(), m.size(), // Message
publicKey.data() + 1, // Public key (skip 0xED prefix)
sig.data()) == 0; // Signature
}

static bool ed25519Canonical(Slice const& sig)
{
// Signature must be exactly 64 bytes
if (sig.size() != 64)
return false;
// Ed25519 curve order (big-endian)
static std::uint8_t const Order[] = {
0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x14, 0xDE, 0xF9, 0xDE, 0xA2, 0xF7, 0x9C, 0xD6,
0x58, 0x12, 0x63, 0x1A, 0x5C, 0xF5, 0xD3, 0xED
};
// S component (second 32 bytes) must be < Order
auto const le = sig.data() + 32;
std::uint8_t S[32];
std::reverse_copy(le, le + 32, S); // Convert to big-endian
return std::lexicographical_compare(S, S + 32, Order, Order + 32);
}

if (*type == KeyType::secp256k1)
{
// Hash the message first (same as signing)
return verifyDigest(
publicKey,
sha512Half(m),
sig,
mustBeFullyCanonical);
}

bool verifyDigest(
PublicKey const& publicKey,
uint256 const& digest,
Slice const& sig,
bool mustBeFullyCanonical)
{
// Check signature canonicality
auto const canonical = ecdsaCanonicality(sig);
if (!canonical)
return false;
if (mustBeFullyCanonical && *canonical != ECDSACanonicality::fullyCanonical)
return false;
// Parse public key
secp256k1_pubkey pubkey_imp;
if (secp256k1_ec_pubkey_parse(
secp256k1Context(),
&pubkey_imp,
reinterpret_cast<unsigned char const*>(publicKey.data()),
publicKey.size()) != 1)
return false;
// Parse signature from DER
secp256k1_ecdsa_signature sig_imp;
if (secp256k1_ecdsa_signature_parse_der(
secp256k1Context(),
&sig_imp,
reinterpret_cast<unsigned char const*>(sig.data()),
sig.size()) != 1)
return false;
// Verify!
return secp256k1_ecdsa_verify(
secp256k1Context(),
&sig_imp,
reinterpret_cast<unsigned char const*>(digest.data()),
&pubkey_imp) == 1;
}

If (R, S) is valid, then (R, -S mod n) is also valid

// Alice creates and signs a transaction
Transaction tx = Payment{ /* ... */ };
Signature sig1 = sign(alice.publicKey, alice.secretKey, tx);
// Transaction ID includes the signature
uint256 txID1 = hash(tx, sig1);
// Attacker sees tx + sig1 in network
// Attacker creates malleated signature sig2 = (R, -S mod n)
Signature sig2 = malleate(sig1);
// sig2 is also valid!
bool valid = verify(alice.publicKey, tx, sig2); // Returns true
// But produces different transaction ID
uint256 txID2 = hash(tx, sig2);
assert(txID1 != txID2); // Different IDs!

// Canonical if S <= order/2
if (S > order/2) {
S = order - S; // Flip to the low range
}

std::optional<ECDSACanonicality>
ecdsaCanonicality(Slice const& sig)
{
// Parse DER-encoded signature
Slice p = sig + 2; // Skip the DER sequence header
auto r = sigPart(p); // Extract R
auto s = sigPart(p); // Extract S
if (!r || !s)
return std::nullopt; // Invalid DER encoding
uint264 R(sliceToHex(*r));
uint264 S(sliceToHex(*s));
// secp256k1 curve order
static uint264 const G(
"0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141");
// Both R and S must be < G
if (R >= G || S >= G)
return std::nullopt;
// Calculate G - S (the "flipped" value)
auto const Sp = G - S;
// Is S in the lower half?
if (S > Sp)
return ECDSACanonicality::canonical; // Valid but not fully canonical
return ECDSACanonicality::fullyCanonical; // Perfect!
}

enum class ECDSACanonicality {
fullyCanonical, // S <= order/2 (preferred)
canonical // S > order/2 but valid (deprecated)
};

if (mustBeFullyCanonical && *canonical != ECDSACanonicality::fullyCanonical)
return false; // Reject non-canonical signatures

// Ed25519: Each message has exactly ONE valid signature
Signature sig = sign(pk, sk, message);
// No way to create sig2 that's also valid

// From src/libxrpl/protocol/STTx.cpp
void STTx::sign(PublicKey const& publicKey, SecretKey const& secretKey)
{
// Serialize transaction for signing
Serializer s = buildMultiSigningData(*this, publicKey);
// Create signature
auto const signature = ripple::sign(publicKey, secretKey, s.slice());
// Add signature to transaction
setFieldVL(sfTxnSignature, signature);
}

Serializer buildMultiSigningData(STTx const& tx, PublicKey const& pk)
{
Serializer s;
// Add signing prefix
s.add32(HashPrefix::txSign);
// Serialize all transaction fields except signature
tx.addWithoutSigningFields(s);
return s;
}

bool STTx::checkSign(bool mustBeFullyCanonical) const
{
try
{
// Get the signing public key
auto const publicKey = getSigningPubKey();
// Get the signature
auto const signature = getFieldVL(sfTxnSignature);
// Rebuild the data that was signed
Serializer s = buildMultiSigningData(*this, publicKey);
// Verify!
return verify(publicKey, s.slice(), signature, mustBeFullyCanonical);
}
catch (...)
{
return false; // Any error = invalid
}
}

struct Signer {
AccountID account;
PublicKey publicKey;
Buffer signature;
uint16_t weight;
};
bool checkMultiSign(STTx const& tx) {
auto const signers = tx.getFieldArray(sfSigners);
uint32_t totalWeight = 0;
for (auto const& signer : signers) {
// Extract signer info
auto const account = signer.getAccountID(sfAccount);
auto const pubKey = signer.getFieldVL(sfSigningPubKey);
auto const sig = signer.getFieldVL(sfTxnSignature);
// Verify this signer's signature
Serializer s = buildMultiSigningData(tx, account, pubKey);
if (!verify(pubKey, s.slice(), sig, true))
return false; // Invalid signature
// Add weight
totalWeight += getSignerWeight(account);
}
// Check if total weight meets quorum
return totalWeight >= getRequiredQuorum(tx);
}

Signing:
Ed25519: ~50 microseconds
Secp256k1: ~200 microseconds
Ed25519 is 4x faster for signing.

Verification:
Ed25519: ~100 microseconds
Secp256k1: ~500 microseconds
Ed25519 is 5x faster for verification.

At 1000 transactions/second:
1000 transactions/second × 500 μs/verification = 0.5 seconds of CPU time (secp256k1)
1000 transactions/second × 100 μs/verification = 0.1 seconds of CPU time (ed25519)

Signature sizes:
Ed25519: 64 bytes (fixed)
Secp256k1: ~71 bytes (variable, DER encoded)
Ed25519 signatures are slightly smaller and fixed-size.

← Back to Cryptography I: Blockchain Security and Cryptographic Foundations
Cryptography is unforgiving. A single mistake—using weak randomness, accepting non-canonical signatures, or mishandling keys—can compromise the entire system. This chapter catalogs the most common cryptographic pitfalls, explains why they're dangerous, and shows you how to avoid them.
Predictability:
Real-world example:
Attack vectors:
Core dumps: Process crashes, core dump contains secrets
Swap files: Memory paged to disk, secrets written to swap
Hibernation: All memory written to hibernation file
Memory inspection: Debugger or malware reads process memory
Cold boot: RAM retains data briefly after power off
Signature malleability allows attacks:
Cross-protocol attacks:
Key compromise amplification:
Timing side-channel:
Expertise required:
Cryptography is subtle and unforgiving
Experts spend years studying attack techniques
Small mistakes lead to complete breaks
Standard algorithms battle-tested by thousands of experts
Common mistakes in custom crypto:
Weak key scheduling
Poor mode of operation
No authentication
Padding oracle vulnerabilities
Implementation bugs
NEVER implement your own:
Encryption algorithms
Hash functions
Signature schemes
Random number generators
ALWAYS use:
OpenSSL
libsodium
Other well-vetted libraries
Standard algorithms (AES, SHA, etc.)
Exposure:
Source code in version control (git history)
Binary contains strings (recoverable)
Code reviews expose secrets
Logs may print secrets
Consequences:
Using invalid keys → signature verification fails
Using malformed data → undefined behavior
Skipping validation → security bypasses
Accepting bad input → system corruption
Common cryptographic pitfalls and how to avoid them:
Weak RNG: Use crypto_prng(), never std::rand()
Memory leaks: Use RAII, call secure_erase()
Non-canonical sigs: Always enforce canonicality
Key reuse: Separate keys for separate purposes
Timing attacks: Use constant-time comparisons
Short keys: Use 256 bits minimum
Custom crypto: Never - use standard libraries
Ignored errors: Always check return values
Hardcoded secrets: Load from secure storage
Missing validation: Validate all inputs
Golden rules:
Trust no input
Check all returns
Erase all secrets
Use standard crypto
← Back to Understanding XRPL(d) RPC Architecture
Understanding the complete lifecycle of an RPC request—from the moment it arrives at the server to when the response is sent back to the client—is essential for building robust custom handlers. This knowledge helps you anticipate edge cases, implement proper error handling, and optimize performance.
In this section, we'll trace the journey of a request through Rippled's RPC system, examining each stage of processing and the components involved.
Let's break down each stage in detail.
For HTTP requests, the entry point is the HTTP server configured in rippled.cfg:
Source Location: src/xrpld/rpc/detail/RPCCall.cpp
The HTTP server receives the raw request:
For WebSocket connections, clients establish a persistent connection:
Source Location: src/xrpld/rpc/detail/RPCHandler.cpp
WebSocket messages use a slightly different format:
For gRPC, requests arrive as Protocol Buffer messages:
Source Location: src/xrpld/app/main/GRPCServer.cpp
The raw request is parsed into a structured format.
The parser extracts key fields:
Different transports use different formats, which are normalized:
HTTP/WebSocket:
method or command field
params array or direct parameters
gRPC:
Protobuf message fields
Converted to JSON internally
Before processing the request, the system determines the caller's role based on the connection:
Source Location: src/xrpld/core/detail/Role.cpp
Role descriptions:
FORBID: Blacklisted client (blocked)
GUEST: Unauthenticated public access (limited commands)
USER: Authenticated client (most read operations)
IDENTIFIED: Trusted gateway (write operations)
ADMIN: Full administrative access (all commands)
The dispatcher searches the handler table for the requested command:
If API versioning is in use:
The system checks if the caller has sufficient permissions:
Example: A GUEST client attempting to call submit (requires USER role) would be rejected here.
Handlers may require specific runtime conditions:
A JsonContext object is built with all necessary information:
Context provides:
Request parameters (params)
Application services (app)
Resource tracking (consumer)
Permission level (role)
Ledger access (ledger, ledgerMaster)
Network operations (netOps)
The system tracks API usage to prevent abuse:
Resource limits are configured per client and prevent DoS attacks.
The handler function is called with the constructed context:
Error handling: Any uncaught exceptions are converted to rpcINTERNAL errors.
For successful requests:
For failed requests:
The JSON response is serialized back to the client's format:
The response is sent back to the client over the same transport:
HTTP: Single request-response cycle completes
WebSocket: Response is pushed to the persistent connection
gRPC: Streamed response or unary response returned
Each stage has associated latency:
Reception: < 1 ms (network overhead)
Parsing: < 1 ms (JSON parsing)
Lookup: < 0.1 ms (hash table lookup)
Permission Check: < 0.1 ms (simple comparison)
Condition Check: < 1 ms (ledger availability)
Handler Execution: 1-100 ms (varies by handler)
Serialization: < 1 ms (JSON encoding)
Delivery: < 1 ms (network overhead)
Total typical latency: 5-105 ms
Different errors can occur at each stage:
Let's trace a complete request:
The RPC request-response flow demonstrates Rippled's carefully orchestrated pipeline for handling API calls. From initial reception across multiple transport protocols, through parsing, role determination, permission checks, and condition validation, to handler invocation and response formatting—each stage serves a specific purpose. This multi-stage design enables early rejection of invalid requests, consistent error handling, proper resource management, and transport-agnostic processing. Mastering this flow is crucial for debugging RPC issues and understanding how custom handlers integrate into the system.
// ❌ WRONG - Predictable randomness
SecretKey generateWeakKey() {
std::srand(std::time(nullptr)); // Seed with current time
std::uint8_t secretKey[32];
for (auto& byte : secretKey) {
byte = std::rand() % 256; // NOT cryptographically secure
}
return SecretKey{Slice{secretKey, 32}};
}

std::srand(time(NULL)) seeds with seconds since epoch
Attacker knows approximate time of key generation
Can try all possible times (seconds in a day = 86,400)
Tests each seed value → predicts all random numbers
Recovers secret key!

// Key generated at approximately 2025-10-15 14:30:00 UTC
// Attacker knows this within ±1 hour = 3,600 seconds
// Tries all 3,600 possible seeds
// Generates keys for each
// Checks which key matches public key
// Finds correct seed and regenerates secret key
// Total time: seconds

// ✅ CORRECT - Cryptographically secure
SecretKey generateStrongKey() {
std::uint8_t buf[32];
beast::rngfill(buf, sizeof(buf), crypto_prng());
SecretKey sk{Slice{buf, sizeof(buf)}};
secure_erase(buf, sizeof(buf));
return sk;
}

// Red flags in code review:
std::srand(...) // ❌
std::rand() // ❌
std::mt19937 // ❌ (not cryptographic)
std::uniform_* // ❌ (if used with non-crypto RNG)
// Good signs:
crypto_prng() // ✅
RAND_bytes() // ✅
randomSecretKey() // ✅

// ❌ WRONG - Secret key remains in memory
void processTransaction() {
std::string secretKeyHex = loadFromConfig();
auto secretKey = parseHex(secretKeyHex);
auto signature = sign(pk, secretKey, tx);
// Function returns
// secretKeyHex still in memory!
// secretKey still in memory!
// Can be recovered from memory dump
}

// ✅ CORRECT - Explicit cleanup
void processTransaction() {
std::string secretKeyHex = loadFromConfig();
auto secretKey = parseSecretKey(secretKeyHex);
auto signature = sign(pk, secretKey, tx);
// Explicit cleanup
secure_erase(
const_cast<char*>(secretKeyHex.data()),
secretKeyHex.size());
// secretKey is SecretKey object - RAII erases automatically
}
// ✅ BETTER - Use RAII throughout
void processTransaction() {
SecretKey sk = loadSecretKey(); // RAII-protected
auto signature = sign(pk, sk, tx);
// sk destructor automatically erases
}

// Code review checklist:
□ Are temporary buffers with secrets erased?
□ Are std::string secrets explicitly cleaned?
□ Are secrets in RAII wrappers (SecretKey)?
□ Is secure_erase called before function returns?
□ Are there early returns that skip cleanup?

// ❌ WRONG - Doesn't enforce canonicality
bool verifyTransaction(Transaction const& tx) {
return verify(
tx.publicKey,
tx.data,
tx.signature,
false); // mustBeFullyCanonical = false ❌
}

// Alice creates transaction
Transaction tx = Payment{ /* ... */ };
Signature sig1 = sign(alice.sk, tx);
uint256 txID1 = hash(tx, sig1);
// Attacker creates malleated signature
Signature sig2 = malleate(sig1); // Still valid!
uint256 txID2 = hash(tx, sig2); // Different ID!
// Both signatures valid, but different transaction IDs
// Applications tracking txID1 won't see confirmation (appears as txID2)
// Can cause confusion, double-spend attempts, invalid dependent transactions

// ✅ CORRECT - Enforce canonical signatures
bool verifyTransaction(Transaction const& tx) {
return verify(
tx.publicKey,
tx.data,
tx.signature,
true); // mustBeFullyCanonical = true ✅
}

// Search for:
verify(..., false) // ❌ Likely wrong
verifyDigest(..., false) // ❌ Likely wrong
// Should be:
verify(..., true) // ✅ Correct
verifyDigest(..., true) // ✅ Correct

// ❌ WRONG - Same key for everything
SecretKey masterKey = loadKey();
// Use for transactions
auto txSig = sign(masterPK, masterKey, transaction);
// Use for peer handshakes
auto handshakeSig = sign(masterPK, masterKey, sessionData);
// Use for validator messages
auto validationSig = sign(masterPK, masterKey, ledgerHash);

// Signature from one context used in another
// Example:
// 1. Attacker captures handshake signature
// 2. Replays it as transaction signature
// 3. If data happens to match, signature validates!
// 4. Unauthorized transaction executed// If one key compromised:
// - ALL contexts compromised
// - Transactions, handshakes, validations
// - Total system failure

// ✅ CORRECT - Separate keys for separate purposes
struct NodeKeys {
SecretKey accountKey; // For account transactions only
SecretKey nodeKey; // For peer communication only
SecretKey validationKey; // For validator messages only
};
// Use appropriate key for each context
auto txSig = sign(keys.accountPK, keys.accountKey, transaction);
auto handshakeSig = sign(keys.nodePK, keys.nodeKey, sessionData);
auto validationSig = sign(keys.validationPK, keys.validationKey, ledgerHash);

// Even with same key, use different prefixes
auto txHash = sha512Half(HashPrefix::transaction, txData);
auto handshakeHash = sha512Half(HashPrefix::manifest, handshakeData);
// Different prefixes → different hashes → cross-context attack prevented

// ❌ WRONG - Variable-time comparison
bool compareSignatures(Slice const& a, Slice const& b) {
if (a.size() != b.size())
return false;
for (size_t i = 0; i < a.size(); ++i) {
if (a[i] != b[i])
return false; // Early exit leaks position of first mismatch!
}
return true;
}

// Attacker measures time for comparison to fail
// If first byte wrong: returns quickly
// If first byte correct, second wrong: takes longer
// Byte-by-byte, extract the secret signature
// Example:
Signature guess = "00000000...";
Time: 1 microsecond // First byte wrong
guess = "A0000000...";
Time: 1.1 microseconds // First byte right, second wrong
guess = "AB000000...";
Time: 1.2 microseconds // First two bytes right
// Repeat until full signature recovered

// ✅ CORRECT - Constant-time comparison
bool compareSignatures(Slice const& a, Slice const& b) {
if (a.size() != b.size())
return false;
// Use constant-time comparison
return CRYPTO_memcmp(a.data(), b.data(), a.size()) == 0;
}
// Or use OpenSSL's implementation
bool compareSignatures(Slice const& a, Slice const& b) {
if (a.size() != b.size())
return false;
return OPENSSL_memcmp(a.data(), b.data(), a.size()) == 0;
}

// Red flags:
if (signature[i] != expected[i]) // ❌ Byte-by-byte comparison
if (signature == expected) // ❌ May use std::memcmp
memcmp(sig, expected, size) // ❌ Not constant-time
// Good signs:
CRYPTO_memcmp(...) // ✅
OPENSSL_memcmp(...) // ✅
Constant-time comparison library // ✅

// ❌ WRONG - Only 64 bits of key material
std::uint8_t weakKey[8]; // 8 bytes = 64 bits
crypto_prng()(weakKey, sizeof(weakKey));

Security level = bits / 2 (for elliptic-curve keys)
64-bit key = 2^32 operations to break
With modern hardware: breakable in seconds/minutes
Modern GPUs can try billions of keys per second
2^32 = 4,294,967,296
At 1 billion/second = 4.3 seconds

// ✅ CORRECT - Full 256 bits
std::uint8_t strongKey[32]; // 32 bytes = 256 bits
crypto_prng()(strongKey, sizeof(strongKey));
// Security level: 2^128 operations
// Even quantum computers won't break this (2^128 > 10^38)

Minimum security levels:
- 128 bits: Adequate (2^64 operations)
- 256 bits: Strong (2^128 operations, quantum-resistant)
- 512 bits: Overkill (but doesn't hurt)
XRPL uses 256 bits for secret keys (standard)

// ❌ WRONG - Custom encryption
std::vector<uint8_t> myEncrypt(
std::vector<uint8_t> const& data,
std::vector<uint8_t> const& key)
{
std::vector<uint8_t> encrypted;
for (size_t i = 0; i < data.size(); ++i) {
encrypted.push_back(data[i] ^ key[i % key.size()]);
}
return encrypted; // XOR "encryption" - trivially broken!
}

// ✅ CORRECT - Use standard, vetted libraries
#include <openssl/evp.h>
// Use AES-256-GCM (authenticated encryption)
std::vector<uint8_t> encrypt(
std::vector<uint8_t> const& plaintext,
std::vector<uint8_t> const& key,
std::vector<uint8_t> const& iv)
{
// Use OpenSSL's EVP interface
// Handles all the complexity correctly
// ...
}

// ❌ WRONG - Doesn't check return value
void generateKey() {
uint8_t key[32];
RAND_bytes(key, 32); // What if this fails?
// Continue using potentially-uninitialized key!
useKey(key);
}

// If RAND_bytes fails:
// - key[] contains uninitialized data
// - Might be zeros
// - Might be predictable
// - Might be previous key material!
// Using failed key:
// - Weak encryption
// - Predictable signatures
// - Complete compromise

// ✅ CORRECT - Check and handle errors
SecretKey generateKey() {
uint8_t buf[32];
if (RAND_bytes(buf, 32) != 1) {
// RNG failed - this is critical!
Throw<std::runtime_error>("RNG failure - cannot continue");
}
SecretKey sk{Slice{buf, 32}};
secure_erase(buf, 32);
return sk;
}

// ALWAYS check return values for:
RAND_bytes() // Random generation
secp256k1_*() // secp256k1 operations
ed25519_*() // Ed25519 operations
EVP_*() // OpenSSL EVP operations
SSL_*() // SSL/TLS operations
// Fail loudly on error:
// - Throw exception
// - Return error code
// - Log and abort
// NEVER continue with failed crypto operations

// ❌ WRONG - Secret in source code
const char* API_KEY = "sk_live_51Hx9y2JKLs...";
const char* MASTER_SEED = "sn3nxiW7v8KXzPzAqzyHXbSSKNuN9";
void authenticate() {
// Use hardcoded secret
}

// ✅ CORRECT - Load from secure storage
SecretKey loadKey() {
// Option 1: Environment variable
auto const seedStr = std::getenv("XRPL_SEED");
// Option 2: Config file with restricted permissions
//   auto const seedStr = readSecureConfig("seed");
// Option 3: Hardware security module (HSM)
//   auto const seedStr = loadFromHSM();
// Option 4: OS keychain/credential manager
//   auto const seedStr = loadFromKeychain();
// Parse and use (whichever option produced seedStr)
auto seed = parseSeed(seedStr);
return generateSecretKey(KeyType::ed25519, seed);
}

1. Never commit secrets to version control
2. Use environment variables or config files
3. Restrict file permissions (600 or 400)
4. Use secrets management systems (Vault, etc.)
5. Rotate secrets regularly
6. Audit access to secrets

// ❌ WRONG - Doesn't validate inputs
void processTransaction(std::string const& addressStr) {
// Assume it's valid
AccountID account = decodeAddress(addressStr);
// What if decoding failed?
// What if address is malformed?
// Undefined behavior!
}

// ✅ CORRECT - Validate everything
void processTransaction(std::string const& addressStr) {
// Validate address
if (!isValidAddress(addressStr)) {
throw std::invalid_argument("Invalid address format");
}
// Decode (will succeed because validated)
AccountID account = decodeAddress(addressStr);
// Validate account exists
if (!ledger.hasAccount(account)) {
throw std::runtime_error("Account not found");
}
// Continue...
}

□ Public keys: Correct format? Right size? Valid curve point?
□ Signatures: Canonical? Correct size? Valid encoding?
□ Addresses: Valid checksum? Correct type prefix?
□ Seeds: Valid format? Sufficient entropy?
□ Hashes: Correct size? Expected format?
□ Amounts: Non-negative? Within limits?
□ All user input: Validated before use?

Client → Transport Layer → Parser → Validator → Auth → Dispatcher → Handler → Response Builder → Client

[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http

POST / HTTP/1.1
Host: localhost:5005
Content-Type: application/json
{
"method": "account_info",
"params": [{
"account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs",
"ledger_index": "validated"
}]
}

[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws

{
"id": 1,
"command": "account_info",
"account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs",
"ledger_index": "validated"
}

message GetAccountInfoRequest {
string account = 1;
LedgerSpecifier ledger = 2;
}

// Parse the JSON body
Json::Value request;
Json::Reader reader;
if (!reader.parse(requestBody, request)) {
return rpcError(rpcINVALID_PARAMS, "Unable to parse JSON");
}

std::string method = request["method"].asString();
Json::Value params = request["params"];
unsigned int apiVersion = request.get("api_version", 1).asUInt();

Role
requestRole(
Role const& required,
Port const& port,
Json::Value const& params,
beast::IP::Endpoint const& remoteIp,
std::string_view user)
{
if (isAdmin(port, params, remoteIp.address()))
return Role::ADMIN;
if (required == Role::ADMIN)
return Role::FORBID;
if (ipAllowed(
remoteIp.address(),
port.secure_gateway_nets_v4,
port.secure_gateway_nets_v6))
{
if (user.size())
return Role::IDENTIFIED;
return Role::PROXY;
}
return Role::GUEST;
}

FORBID < GUEST < USER < IDENTIFIED < ADMIN

[rpc_admin]
admin = 127.0.0.1, ::1
[secure_gateway]
ip = 192.168.1.100

// Look up the handler
auto const it = handlerTable.find(method);
if (it == handlerTable.end()) {
return rpcError(rpcUNKNOWN_COMMAND, "Unknown method");
}
HandlerInfo const& handlerInfo = it->second;

if (apiVersion < handlerInfo.version_min ||
apiVersion > handlerInfo.version_max)
{
return rpcError(rpcINVALID_API_VERSION);
}

if (context.role < handlerInfo.role) {
return rpcError(rpcNO_PERMISSION,
"You don't have permission for this command");
}

if (handlerInfo.condition & RPC::NEEDS_CURRENT_LEDGER) {
if (!context.ledgerMaster.haveLedger()) {
return rpcError(rpcNO_CURRENT,
"Current ledger is not available");
}
}

if (handlerInfo.condition & RPC::NEEDS_NETWORK_CONNECTION) {
if (context.netOps.getOperatingMode() < NetworkOPs::omSYNCING) {
return rpcError(rpcNO_NETWORK,
"Not connected to network");
}
}

if (handlerInfo.condition & RPC::NEEDS_CLOSED_LEDGER) {
if (!context.ledgerMaster.getValidatedLedger()) {
return rpcError(rpcNO_CLOSED,
"No validated ledger available");
}
}

RPC::JsonContext context {
.params = params,
.app = app,
.consumer = consumer,
.role = role,
.ledger = ledger,
.netOps = app.getOPs(),
.ledgerMaster = app.getLedgerMaster(),
.apiVersion = apiVersion
};

// Charge the client for this request
context.consumer.charge(Resource::feeReferenceRPC);
// Check if client has exceeded limits
if (context.consumer.isUnlimited() == false &&
context.consumer.balance() <= 0)
{
return rpcError(rpcSLOW_DOWN,
"You are making requests too frequently");
}

Json::Value result;
try {
result = handlerInfo.handler(context);
} catch (std::exception const& ex) {
return rpcError(rpcINTERNAL, ex.what());
}

{
"result": {
"status": "success",
"account_data": {
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs",
"Balance": "1000000000",
...
},
"ledger_index": 12345
}
}

{
"result": {
"error": "actNotFound",
"error_code": 19,
"error_message": "Account not found.",
"status": "error",
"request": {
"command": "account_info",
"account": "rInvalidAccount"
}
}
}

HTTP/1.1 200 OK
Content-Type: application/json
{
"result": { ... }
}

{
"id": 1,
"status": "success",
"type": "response",
"result": { ... }
}

GetAccountInfoResponse {
account_data: { ... }
}

Parsing errors:
rpcINVALID_PARAMS // Malformed JSON
rpcBAD_SYNTAX // Invalid structure

Lookup errors:
rpcUNKNOWN_COMMAND // Command not found
rpcINVALID_API_VERSION // Version mismatch

Permission errors:
rpcNO_PERMISSION // Insufficient role
rpcFORBIDDEN // Blacklisted client

Condition errors:
rpcNO_CURRENT // No current ledger
rpcNO_NETWORK // Not connected
rpcNO_CLOSED // No validated ledger

Handler errors:
rpcACT_NOT_FOUND // Account not found
rpcLGR_NOT_FOUND // Ledger not found
rpcINTERNAL // Unexpected error

{
"method": "account_info",
"params": [{
"account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs",
"ledger_index": "validated"
}]
}

POST / HTTP/1.1
Host: localhost:5005

method = "account_info"
params = { "account": "rN7n...", "ledger_index": "validated" }

remoteIP = 127.0.0.1 → Role::ADMIN

handler = doAccountInfo
required_role = Role::USER
condition = NEEDS_CURRENT_LEDGER

ADMIN >= USER → PASS

haveLedger() == true → PASS

context.params = params
context.role = ADMIN
context.ledger = currentLedger

result = doAccountInfo(context)

{
"result": {
"status": "success",
"account_data": {
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3NnrcVXs",
"Balance": "1000000000"
}
}
}

← Back to SHAMap and NodeStore: Data Persistence and State Management
Now that you understand the mathematical foundations of Merkle-Patricia tries, let's explore how XRPL actually implements them in the SHAMap data structure.
The SHAMap is responsible for:
Maintaining account and transaction state in a cryptographically-verified Merkle tree
Computing root hashes that represent entire ledger state
Enabling efficient navigation through key-based lookups
Supporting snapshots and immutability for historical ledgers
Providing proof generation for trustless state verification
This chapter covers the architectural overview and node types. Later chapters dive into specific algorithms for navigation, hashing, and synchronization.
The SHAMap architecture achieves three critical properties:
1. Cryptographic Integrity
Every change to data propagates through hashes up to the root:
This ensures that no data can be modified undetected. Changing even one bit in an account's balance changes the root hash.
2. Efficient Navigation
The Patricia trie structure uses account identifiers as navigation guides:
No binary search needed. The path is directly encoded in the key.
3. Optimized Synchronization
Hash-based comparison eliminates unnecessary data transfer:
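A rough sketch of the idea (all names below are illustrative, not the actual synchronization API):

#include <array>
#include <cstdint>
#include <functional>

// Illustrative inner node: 16 child-hash slots (0 = empty branch).
struct InnerNode
{
    std::array<std::uint64_t, 16> childHash{}; // stand-in for 256-bit hashes
};

// Walk two versions of a subtree, descending only where hashes differ:
// an equal child hash proves the entire subtree beneath it is identical.
void diffSubtrees(
    InnerNode const& ours,
    InnerNode const& theirs,
    std::function<void(int)> const& fetchOrRecurse)
{
    for (int branch = 0; branch < 16; ++branch)
    {
        if (ours.childHash[branch] == theirs.childHash[branch])
            continue;            // identical subtree: no transfer needed
        fetchOrRecurse(branch);  // request or descend into this branch only
    }
}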
This allows new nodes to synchronize full ledgers from peers in minutes.
A SHAMap instance represents a complete tree of ledger state:
Core Data:
Key Properties:
Root Node: Always an inner node, never a leaf (even if only one account, still has inner structure)
Depth: Exactly 64 levels (256-bit keys ÷ 4 bits per level)
Navigation Determinism: Any key uniquely determines its path from root to leaf
The SHAMap consists of three conceptual layers:
Layer 1: Root
Always present
Always an inner node
Contains the root hash representing entire tree
Can have up to 16 children (one for each hex digit 0-F)
Layer 2: Internal Structure
Inner nodes serve as branch points
Each can have 0-16 children
Store only hashes of children, not actual data
No data items in inner nodes
Layer 3: Leaf Nodes
Terminal nodes containing actual data items
Types: Account state, Transaction, Transaction+Metadata
All leaves in a SHAMap tree are homogeneous (same type)
Inner nodes form the branching structure:
Structure:
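As a mental model, the essential state of an inner node looks roughly like this (a simplified sketch; rippled's SHAMapInnerNode packs this more compactly, and the member names below are assumptions):

#include <array>
#include <cstdint>

struct InnerNodeSketch
{
    std::array<std::uint64_t, 16> childHashes{}; // stand-in for 256-bit child hashes
    std::uint16_t occupied = 0;                  // bitmask: which of the 16 branches exist
};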
Key Characteristics:
Do not store data items directly - Only hashing information
Maintain cryptographic commitments through child hashes
Variable occupancy - Not all 16 children present
Support both serialization formats - Compressed and full
Serialization Formats:
Inner nodes support two wire formats:
Compressed Format (used when most slots are empty):
Saves space by omitting empty branches.
Full Format (used when most slots are occupied):
Simpler structure despite larger size.
Format Selection Algorithm:
XRPL automatically chooses format based on branch count:
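The shape of the decision is sketched below; the exact threshold and serializers live in rippled's inner-node code, so every name here (including kCompressedThreshold) is a stand-in:

struct InnerNodeRef { int branchCount; };

constexpr int kCompressedThreshold = 12; // hypothetical cut-over point

void serializeCompressed(InnerNodeRef const&); // emit (hash, branch) pairs only
void serializeFull(InnerNodeRef const&);       // emit all 16 hash slots

void serializeInner(InnerNodeRef const& node)
{
    if (node.branchCount < kCompressedThreshold)
        serializeCompressed(node); // sparse node: omit empty branches
    else
        serializeFull(node);       // dense node: simpler fixed layout
}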
Leaf nodes store the actual blockchain data:
Base Properties:
SHAMapItem Structure:
Leaf Node Specializations:
Three distinct leaf node types exist, each with unique hashing:
1. Account State Leaves (hotACCOUNT_NODE)
Store account information
Include balances, settings, owned objects, trust lines
Type prefix during hashing prevents collision with other types
Updated when transactions affect accounts
2. Transaction Leaves (hotTRANSACTION_NODE)
Store transaction data
Do not include execution metadata
Immutable once added to ledger
Enable verification of transaction history
3. Transaction+Metadata Leaves (hotTRANSACTION_LEDGER_ENTRY)
Store transactions with execution metadata
Include results (success/failure, ledger entries modified)
Complete information for replaying or auditing
Support full transaction reconstruction
Why Multiple Types?
Ensures that moving data between leaves would be immediately detected as invalid.
Every node in the tree is uniquely identified by its position:
Components:
Path Encoding:
The path is encoded as 4-bit chunks (nibbles) in a uint256: a key beginning 0x3B9..., for example, takes branch 0x3 at the root, branch 0xB at depth 1, branch 0x9 at depth 2, and so on.
Key Operations:
getChildNodeID(branch) - Compute child position: one level deeper, with the chosen branch appended to the path
selectBranch(nodeID, key) - Determine which branch to follow: the key's nibble at the node's depth (see the sketch below)
This deterministic navigation ensures every key has exactly one path through the tree.
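A minimal sketch of the navigation helpers, simplified relative to rippled's SHAMapNodeID (the branch taken at a given depth is just the corresponding nibble of the key):

#include <cstdint>

// Which of the 16 branches to follow at this depth for this 32-byte key?
int selectBranch(int depth, std::uint8_t const* key)
{
    std::uint8_t const byte = key[depth / 2];
    // Even depths read the high nibble, odd depths the low nibble
    return (depth % 2 == 0) ? (byte >> 4) : (byte & 0x0F);
}

// getChildNodeID is then "one level deeper, path extended by the branch":
// childDepth = depth + 1, childPath = path with the branch nibble appended.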
SHAMaps exist in different states reflecting their role in the system:
Immutable State
Represents a finalized, historical ledger
Nodes cannot be modified
Multiple readers can access simultaneously
Critical limitation: cannot be trimmed (nodes loaded stay in memory)
Mutable State
Represents work-in-progress ledger state
Nodes can be modified through copy-on-write
Safe mutations without affecting other SHAMap instances
Single writer (typically), multiple readers possible
Synching State
Transitional state during network synchronization
Allows incremental tree construction
Transitions to Immutable or Mutable when complete
Used when receiving missing nodes from peers
State Transitions:
The copy-on-write system enables safe node sharing and snapshots:
Principle:
Nodes are shared between SHAMaps using shared pointers. When a mutable SHAMap needs to modify a shared node, it creates a copy:
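A minimal sketch of the principle, assuming shared_ptr-based sharing (rippled's real mechanism tags nodes with a copy-on-write ID rather than inspecting use_count()):

#include <memory>

template <typename Node>
std::shared_ptr<Node> unshareNode(std::shared_ptr<Node> node)
{
    if (node.use_count() > 1)                 // shared with another SHAMap
        node = std::make_shared<Node>(*node); // clone before touching it
    return node;                              // now safe to modify in place
}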
Node Sharing Rules:
Benefits:
Efficient Snapshots: New SHAMap instances share unmodified subtrees
Safe Concurrent Access: Modifications don't affect other instances
Memory Efficiency: Identical subtrees stored once
SHAMap is designed for in-memory operation, but nodes can be persisted:
Persistent Nodes:
Some nodes are marked as "backed" - they exist in NodeStore:
Canonicalization:
Ensures nodes are unique in memory:
This enables:
Safe node sharing across multiple SHAMap instances
Fast equality checking (compare pointers, not content)
Memory efficiency (identical nodes stored once)
Key Architectural Elements:
SHAMap Instance: Complete tree representing one ledger version
Inner Nodes: Branching structure with 0-16 children, storing hashes
Leaf Nodes: Terminal nodes containing account or transaction data
SHAMapNodeID: Identifies node position through depth and path
Design Properties:
Perfect balance: All leaves at approximately same depth (log_16 of account count)
Deterministic navigation: Key uniquely determines path
Immutable persistence: Historical ledgers safe from modification
Efficient sharing: Snapshots require minimal additional memory
Immutable SHAMaps: Used for consensus history
Mutable SHAMaps: Used for constructing new ledger
State Management: Immutable (historical), Mutable (work-in-progress), Synching (incomplete)
Copy-on-Write: Enables snapshots and safe sharing
NodeStore Integration: Persistent layer for large datasets
Scalable: Handles millions of accounts with logarithmic operations
In the next chapter, we'll explore how nodes are navigated, hashes are computed, and the tree structure is maintained.
Understanding protocols is fundamental to grasping how the XRP Ledger operates as a distributed system. This deep dive explores how Rippled nodes discover each other, communicate, and synchronize ledger state across the decentralized network without any central authority.
The protocol layer is the foundation that enables the XRP Ledger to function as a truly decentralized system, where nodes around the world coordinate to maintain a consistent, validated ledger without relying on a single trusted entity.
The XRP Ledger operates as a decentralized network of Rippled servers (nodes) that communicate through peer-to-peer connections. Each node maintains connections with multiple peers, forming an overlay network on top of the internet infrastructure. This architecture ensures no single point of failure and enables the network to remain operational even if individual nodes go offline.
Unlike traditional client-server architectures where clients connect to centralized servers, the XRP Ledger uses a mesh topology where every node can communicate with multiple other nodes. This design provides:
Resilience: No single point of failure
Scalability: Network can grow organically as new nodes join
Censorship Resistance: No central authority can block transactions
Redundancy: Multiple paths exist for information propagation
Nodes discover peers through several complementary mechanisms, ensuring robust network connectivity:
1. Configured Peer Lists
Administrators can specify fixed peers in the rippled.cfg configuration file:
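For example, a minimal stanza might look like this (the first address is a placeholder; r.ripple.com is a long-standing public hub):

[ips_fixed]
192.0.2.10 51235
r.ripple.com 51235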
These peers are trusted connections that the node will always attempt to maintain. Fixed peers are particularly important for validators and high-availability servers.
2. DNS Seeds
Rippled uses DNS to discover bootstrap nodes:
DNS seeds provide a list of currently active nodes
Useful for initial network entry
Regularly updated to reflect network state
Multiple DNS providers ensure availability
3. Peer Gossip
Once connected, nodes share information about other available peers:
Nodes exchange lists of known peers
Information about peer quality and reliability is shared
Network topology naturally adapts to node availability
Helps discover new nodes joining the network
This multi-layered approach ensures that even if some discovery mechanisms fail, nodes can still find and connect to peers, maintaining network connectivity.
Each Rippled node actively manages its peer connections:
Connection Limits: Nodes maintain a configured number of active connections (typically 10-20 peers) to balance network visibility with resource usage.
Peer Quality Assessment: Nodes continuously evaluate peer behavior:
Response times
Message accuracy
Protocol compliance
Uptime and reliability
Connection Pruning: Poor-quality peers are disconnected and replaced with better alternatives.
Connection Diversity: Nodes prefer geographically and administratively diverse peers to improve network resilience.
Rippled uses Protocol Buffers (protobuf) for efficient binary serialization of messages exchanged between nodes. This choice provides several advantages:
Compact Message Sizes: Binary encoding is more efficient than text-based formats like JSON, reducing bandwidth usage.
Fast Serialization: Protobuf libraries provide high-performance encoding and decoding, critical for high-throughput systems.
Forward/Backward Compatibility: Protocol Buffers support schema evolution, allowing the protocol to evolve without breaking existing nodes.
Strong Typing: Protocol definitions provide clear contracts for message formats, reducing errors.
Cross-Language Support: Protobuf supports multiple languages, facilitating development of diverse XRPL tools.
Protocol messages are defined in .proto files located in the Rippled codebase. These definitions are compiled into C++ classes used throughout the application.
Example Structure:
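For illustration, a message in this style might look like the following (the field names here are assumptions for the sketch, not the actual definitions in rippled's .proto files):

message TMTransactionSketch {
    bytes rawTransaction = 1;     // serialized, signed transaction
    uint64 receiveTimestamp = 2;  // when this node first saw it
}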
The compiled classes provide methods for:
Creating messages
Serializing to binary
Deserializing from binary
Accessing fields with type safety
Understanding message types is essential for debugging network issues and implementing protocol improvements. Each message type serves a specific purpose in maintaining network consensus and ledger synchronization.
Purpose: Validator key rotation and identity verification
Validators use manifests to announce their identity and key rotation information. This allows validators to change their signing keys without losing their identity, improving security by enabling regular key rotation.
Key Information:
Validator's master public key (long-term identity)
Current ephemeral signing key (used for validations)
Key rotation sequence number
Signature proving authorization
When Sent:
When a validator starts up
When a validator rotates keys
Periodically to ensure all peers have current information
Purpose: Transaction propagation across the network
When a transaction is submitted to any node, it needs to reach all validators to be considered for inclusion in the next ledger. The tmTRANSACTION message broadcasts transactions throughout the network.
Key Information:
Serialized transaction data
Transaction signature
Account information
Fee and sequence number
Routing Logic:
Nodes verify transaction validity before relaying
Already-seen transactions are not re-broadcast (deduplication)
Invalid transactions are dropped without propagation
Purpose: Consensus proposals from validators
During the consensus process, validators broadcast their proposed transaction sets. These proposals inform other validators about which transactions should be included in the next ledger.
Key Information:
Validator's public key
Proposed ledger close time
Transaction set hash
Previous ledger hash
Validation signature
Consensus Flow:
Validators collect transactions from the open ledger
Each validator creates a proposal of transactions to include
Proposals are broadcast to other validators
Validators adjust their proposals based on what others propose
Consensus converges on a common transaction set
Purpose: Ledger validations signaling agreement
After a ledger closes, validators broadcast validations confirming their agreement on the ledger state. A ledger becomes fully validated when it receives validations from a supermajority of trusted validators.
Key Information:
Ledger hash being validated
Ledger sequence number
Ledger close time
Validator's signature
Flag indicating full/partial validation
Validation Process:
Validator applies consensus transaction set
Computes resulting ledger hash
Broadcasts validation message
Other nodes collect validations
Ledger is considered validated when quorum is reached
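As a back-of-the-envelope sketch of that quorum check (names are illustrative, not rippled's actual API):

#include <cstddef>

// A ledger counts as fully validated once more than 80% of the trusted
// validators (the UNL) have signed validations for the same ledger hash.
bool quorumReached(std::size_t matchingValidations, std::size_t unlSize)
{
    // matching / unl > 4/5, kept in integer arithmetic
    return matchingValidations * 5 > unlSize * 4;
}

For a UNL of 35 validators, this requires at least 29 matching validations.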
Purpose: Request and response for ledger synchronization
When a node is behind or missing ledger data, it requests information from peers. These messages enable ledger history synchronization.
tmGET_LEDGER Fields:
Ledger hash or sequence number
Type of data requested (transactions, state tree, etc.)
Specific nodes or ranges requested
tmLEDGER_DATA Fields:
Requested ledger data
Merkle proofs for verification
Metadata about the ledger
Use Cases:
Node startup (catching up to current ledger)
Network partition recovery
Historical ledger retrieval
Filling gaps in ledger history
Purpose: Notifications about ledger close events
Nodes broadcast status changes to inform peers about important events, particularly ledger closures. This helps the network stay synchronized on the current ledger state.
Key Information:
New ledger sequence number
Ledger close time
Flags indicating network status
Previous ledger hash
When a node receives a message, it must decide whether to relay it to other peers. Rippled implements intelligent routing to prevent message flooding while ensuring necessary information reaches all relevant nodes.
1. Squelch (Echo Prevention)
Problem: Without squelch, messages would bounce back to their sender, creating infinite loops.
Solution: When node A sends a message to node B, node B remembers that A already has this message and won't send it back to A.
Implementation: Each message carries an originator identifier, and nodes track which peers already have which messages.
2. Deduplication
Problem: The same message might arrive from multiple peers, wasting processing resources.
Solution: Nodes track recently seen messages using a hash-based cache. If a message hash is already in the cache, it's discarded without reprocessing.
Cache Management:
Time-based expiration (messages older than X seconds are removed)
Size limits (oldest entries removed when cache is full)
Hash collisions handled safely
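A minimal sketch of such a seen-message cache (illustrative, not rippled's actual implementation; the 30-second TTL is an arbitrary choice):

#include <chrono>
#include <string>
#include <unordered_map>

class SeenCache
{
    using Clock = std::chrono::steady_clock;
    std::unordered_map<std::string, Clock::time_point> seen_;
    std::chrono::seconds ttl_{30};  // assumed expiration window

public:
    // Returns true if `hash` was already seen recently (caller drops it).
    bool checkAndInsert(std::string const& hash)
    {
        auto const now = Clock::now();
        auto [it, inserted] = seen_.try_emplace(hash, now);
        if (!inserted)
        {
            if (now - it->second < ttl_)
                return true;   // duplicate within the TTL: discard
            it->second = now;  // stale entry: treat as newly seen
        }
        return false;
    }
};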
3. Selective Relay
Problem: Not all peers need all messages. Broadcasting everything wastes bandwidth.
Solution: Messages are only relayed to peers that are likely to need them based on:
Message type (validators need proposals, regular nodes may not)
Peer capabilities (what protocol versions and features they support)
Network topology (avoid sending to peers who likely already have it)
4. Priority Queuing
Problem: Under high load, important messages might be delayed behind less critical ones.
Solution: Messages are categorized by importance and processed in priority order:
High Priority:
Validations (needed for ledger finalization)
Consensus proposals (time-sensitive for consensus)
Status changes (network coordination)
Medium Priority:
Transactions (should be processed promptly)
Ledger data responses (nodes are waiting for this)
Low Priority:
Peer discovery announcements
Historical data requests
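A sketch of priority-ordered processing using a standard priority queue (the tiers follow the list above; rippled's real scheduler is more elaborate):

#include <queue>
#include <vector>

enum class Priority { High = 0, Medium = 1, Low = 2 };

struct QueuedMessage
{
    Priority priority;
    // ... message payload would live here ...
};

// Smaller enum value = more important = served first.
struct LowerValueFirst
{
    bool operator()(QueuedMessage const& a, QueuedMessage const& b) const
    {
        return a.priority > b.priority;
    }
};

using MessageQueue = std::priority_queue<
    QueuedMessage, std::vector<QueuedMessage>, LowerValueFirst>;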
Let's trace how a transaction propagates through the network:
Total time: Typically 3-5 seconds from submission to ledger inclusion.
The process of two Rippled nodes establishing a connection involves multiple steps, each verifying compatibility and authenticity:
If TLS is configured (recommended for security):
Rippled-specific handshake to exchange capabilities:
Both nodes verify compatibility:
Protocol Version Check:
Ensure both nodes speak compatible protocol versions
Reject if versions are too far apart
Public Key Validation:
Verify cryptographic signatures
Check against any configured trust lists
Ensure key is properly formatted
Network Compatibility:
Verify both nodes are on the same network (mainnet, testnet, devnet)
Check ledger hashes to confirm network agreement
Feature Negotiation:
Identify common supported features
Use lowest common denominator for communication
If Compatible: Connection is accepted and added to peer list
Begin exchanging regular protocol messages
Start participating in consensus (if validator)
Share transactions and ledger data
If Incompatible: Connection is rejected
Send rejection message with reason
Close TCP connection
Optionally log rejection reason for diagnostics
The protocol implementation is primarily located in src/ripple/overlay/. Understanding this directory structure is essential for working with networking code.
Primary Files
src/ripple/overlay/Overlay.h
Main interface for the overlay network
Defines public API for network operations
Used by other subsystems to access network functionality
src/ripple/overlay/impl/OverlayImpl.h and .cpp
Core implementation of overlay network
Manages peer connections
Handles message routing
Implements connection lifecycle
src/ripple/overlay/Peer.h
Interface for individual peer connections
Defines peer state and capabilities
Methods for sending messages to specific peers
src/ripple/overlay/impl/PeerImp.h and .cpp
Implementation of peer connections
State machine for connection lifecycle
Message parsing and serialization
src/ripple/overlay/Message.h
Protocol message definitions
Message type enumeration
Message handling interfaces
src/ripple/protocol/messages.proto
Protocol Buffer definitions
Defines structure of all network messages
Compiled into C++ classes
Finding Message Handlers:
Understanding Peer Connection State:
Message Creation Example:
Objective: Monitor and analyze protocol messages exchanged between Rippled nodes.
Setup
Step 1: Enable detailed overlay logging
This will produce very detailed logs showing every message sent and received.
Step 2: Set up two Rippled instances in standalone mode
Step 3: Configure nodes to peer with each other
In rippled-node1.cfg:
In rippled-node2.cfg:
Observation Tasks
Task 1: Observe connection establishment
Watch the logs as the nodes connect. You should see:
TCP connection establishment
Hello message exchange
Peer verification
Connection acceptance
Task 2: Submit a transaction
Watch the logs to see:
tmTRANSACTION message created on Node A
Message sent to Node B
Node B receives and processes transaction
Node B adds to its open ledger
Task 3: Trigger a ledger close
Observe:
Status change messages
Ledger close coordination
State synchronization
Analysis Questions
Answer these questions based on your observations:
What messages are exchanged during peer connection establishment?
List the message types in order
Note any authentication or verification steps
How is a transaction propagated from one node to another?
Trace the message flow
Identify any validation or filtering
What happens when a ledger closes?
What messages are exchanged?
How do nodes coordinate?
How does deduplication work?
Try submitting the same transaction twice
Observe how the second submission is handled
What is the average message propagation time?
Timestamp when transaction is submitted to Node A
Timestamp when Node B logs receiving it
Calculate the latency
You should gain practical understanding of:
How protocol messages flow through the network
The purpose of different message types
Network latency and performance characteristics
How nodes maintain synchronized state
✅ Decentralized Architecture: The XRP Ledger uses peer-to-peer networking to eliminate single points of failure and enable censorship resistance.
✅ Multiple Discovery Mechanisms: Peers are discovered through configured lists, DNS seeds, and gossip protocols, ensuring robust connectivity.
✅ Efficient Serialization: Protocol Buffers provide compact, fast, and version-compatible message encoding.
✅ Intelligent Routing: Message propagation uses squelch, deduplication, selective relay, and priority queuing to optimize network performance.
✅ Secure Connections: Multi-step handshake process ensures only compatible, authenticated peers connect.
✅ Message Types Matter: Each message type (tmTRANSACTION, tmVALIDATION, tmPROPOSE_LEDGER, etc.) serves a specific purpose in network coordination.
✅ Codebase Location: Protocol implementation is in src/ripple/overlay/
✅ Debugging: Use log levels and standalone mode to observe network behavior
✅ Protocol Evolution: Understanding message formats is essential for implementing protocol improvements
✅ Network Analysis: Monitoring protocol messages helps diagnose network issues and optimize performance
XRP Ledger Dev Portal: xrpl.org/docs
Rippled Overlay Network: xrpl.org/peer-protocol
Rippled Repository: github.com/XRPLF/rippled
src/ripple/overlay/ - Overlay network implementation
src/ripple/protocol/messages.proto - Protocol Buffer message definitions
src/ripple/overlay/impl/OverlayImpl.cpp - Core networking logic
src/ripple/overlay/impl/PeerImp.cpp - Peer connection handling
Transactors - How transactions are processed after propagation
Consensus Engine - How validators use protocol messages to reach consensus
Overlay Network - Deeper dive into P2P networking architecture
The difference between a fragile handler and a production-ready one lies in proper error handling and input validation. Every RPC handler must anticipate failures—invalid input, missing resources, permission issues, and unexpected edge cases—and respond with clear, actionable error messages.
In this section, you'll learn the complete error handling framework used throughout Rippled, including standard error codes, HTTP status mapping, input validation patterns, and strategies for protecting sensitive data while providing useful debugging information.
Rippled defines a comprehensive set of error codes for different failure scenarios:
Source Location: src/xrpl/protocol/ErrorCodes.h
For a comprehensive list of all error codes and their meanings, see the definitions in ErrorCodes.h; the most common categories are shown later in this section.
RPC errors must map to appropriate HTTP status codes:
Every error response follows this format:
Source Location: src/xrpld/rpc/detail/RPCHelpers.h
Here's a complete example showing all validation patterns:
Validate early and often — Check all inputs before processing
Use specific error messages — Help clients understand what went wrong
Validate all numeric bounds — Prevent overflow, underflow, and resource exhaustion
Check account existence — Before attempting operations
Never trust client input — Always validate, even if it looks correct
Never expose internal errors — Sanitize error messages
Never allow injection attacks — Escape or validate all string inputs
Never leak sensitive data — Never include secrets in responses
Comprehensive error handling and input validation separate production-quality handlers from fragile prototypes. Rippled's error framework provides specific codes for every failure scenario, proper HTTP status mapping, and patterns for protecting sensitive information while giving clients actionable feedback. By validating inputs early, handling exceptions gracefully, and following the principle of failing fast, handlers become robust against malformed requests, edge cases, and potential attacks. These practices are fundamental for any handler that will face real-world traffic.
Modify Account0 data:
Account0 hash changes
Parent hash changes (incorporates new Account0 hash)
Grandparent hash changes
... up to root
New root hash is deterministically different from old root hash

Account hash: 0x3A7F2E1B4C9D...
Tree navigation:
Level 0: Extract digit 3 → go to child 3
Level 1: Extract digit A → go to child A
Level 2: Extract digit 7 → go to child 7
... follow 4-bit chunks down the tree

Peer A claims: "Root is 0xABCD"
Peer B claims: "Root is 0xXYZW"
If 0xABCD == 0xXYZW:
Both have identical ledger state
No synchronization needed
If different:
Compare children's hashes
Identify exactly which subtrees differ
Request only those subtrees

class SHAMap {
// The root node (always an inner node)
std::shared_ptr<SHAMapInnerNode> mRoot;
// Tree state (Immutable, Mutable, or Synching)
State mState;
// For mutable trees: unique identifier
std::uint32_t mCowID; // Copy-on-write identifier
// For navigating to nodes
std::shared_ptr<Family> mFamily;
};

[Inner Node - Root]  (hashes all state)
    /      |      \    ...    \
[Inner] [Inner] [Inner]  ...  [Inner]
   /  |  \           /  |  \
[Inner][Inner][Inner]
   /  |  \  ...  \
[Leaf][Leaf][Leaf]
   |      |      |
Account Account Account
 State   State   State

class SHAMapInnerNode {
// Up to 16 child slots (indexed 0-F)
std::array<Branch, 16> mBranches;
// Cryptographic hash of this node
uint256 mHash;
// Bitset: which children exist
std::uint16_t mChildBits;
// For copy-on-write: which SHAMap owns this node
std::uint32_t mCowID;
// Synchronization optimization: generation marker
std::uint32_t mFullBelow;
};
// Each branch slot contains:
struct Branch {
std::shared_ptr<SHAMapTreeNode> mNode; // nullptr if empty
uint256 mHash; // Hash of child (or empty)
};

Header: "Inner (compressed)"
Bitmap: Which branches exist
For each existing branch:
Branch index
Child hash

Header: "Inner (full)"
For each of 16 branches:
Child hash (or empty marker)

Branch count:
0-8: Use compressed (saves ~40 bytes)
9+: Use full (simpler, fewer bytes overall)

class SHAMapLeafNode {
// The data contained in this leaf
std::shared_ptr<SHAMapItem> mItem;
// Cryptographic hash
uint256 mHash;
// For copy-on-write
std::uint32_t mCowID;
};

class SHAMapItem {
// 256-bit unique identifier (determines tree position)
uint256 mTag;
// Variable-length data (transaction, account state, etc.)
Blob mData;
// Memory management: intrusive reference counting
};

// Type prefix prevents collisions
uint256 hash_account_state = SHA512Half(
PREFIX_ACCOUNT || key || data);
uint256 hash_transaction = SHA512Half(
PREFIX_TRANSACTION || key || data);
// Even with identical (key, data), hashes differ
hash_account_state != hash_transaction

class SHAMapNodeID {
// Distance from root (0 = root)
std::uint32_t mDepth;
// Path from root to this node
// Packed as 4-bit nibbles in a uint256
uint256 mNodeID;
};

Node at depth 3, path [3, A, 7]:
mNodeID = 0x3A7000...000
^^^ (significant nibbles)
^^^^^^ (zero padding for remaining levels)
Depth 0 (root): mNodeID = 0x0000...000
Depth 1: mNodeID = 0x3000...000
Depth 2: mNodeID = 0x3A00...000
Depth 3: mNodeID = 0x3A70...000
...
Depth 64: mNodeID = complete (all 64 nibbles filled)

SHAMapNodeID parent(depth=2, nodeID=0x3A00...);
SHAMapNodeID child = parent.getChildNodeID(7);
// Result: depth=3, nodeID=0x3A70...

uint256 key = 0x3A7F2E1B4C9D...;
int branch = key.nthNibble(depth); // Extract nth 4-bit chunk
// For depth=2: branch = 7 (extract bits 8-11)

mState == State::Immutable;
mCowID == 0;

mState == State::Modifying;
mCowID != 0; // Unique identifier for this tree

mState == State::Synching;

New Tree
|
v
[Synching] <-- Receiving nodes from peers
|
+--> [Immutable] <-- Finalized ledger
|
+--> [Modifying] <-- New transactions incoming

// Check if node is owned by this SHAMap
if (node->getCowID() != mCowID) {
// Node is shared or immutable
// Create a copy
auto newNode = node->clone();
newNode->setCowID(mCowID); // Mark as owned by this tree
return newNode;
} else {
// Node already owned by this tree
// Safe to modify in place
return node;
}

cowID == 0: Immutable (shared by all)
cowID == tree.mCowID: Owned by this tree (safe to modify)
cowID != tree.mCowID: Owned by another tree (must copy before modifying)

Create snapshot before ledger close:
New immutable SHAMap shares root and all unchanged nodes
No deep copy required

SHAMap A modifies Account1:
Clones nodes along path to Account1
Leaves rest of tree shared
SHAMap B (different ledger) unaffected

100 ledgers share 99% of tree structure
Only 1% of data duplicated for different accounts

// Node came from persistent storage
mNode->setBacked();
// When synchronizing, try to retrieve from storage
std::shared_ptr<SHAMapTreeNode> node = nodestore.fetch(hash);
if (node) {
// Add to tree
canonicalizeNode(node); // Ensure uniqueness
}

// Check cache for existing node with same hash
std::shared_ptr<SHAMapTreeNode> cached = cache.get(hash);
if (cached) {
return cached; // Use existing node
} else {
cache.insert(hash, newNode); // Cache new node
return newNode;
}

[ips_fixed]
r.ripple.com 51235
s1.ripple.com 51235
s2.ripple.com 51235

message TMTransaction {
required bytes raw_transaction = 1;
required uint32 status = 2;
optional bytes signature = 3;
}

1. User submits transaction to Node A
↓
2. Node A validates transaction
↓
3. Node A adds transaction to open ledger
↓
4. Node A broadcasts tmTRANSACTION to peers (Nodes B, C, D)
↓
5. Nodes B, C, D validate transaction
↓
6. Nodes B, C, D relay to their peers (excluding Node A - squelch)
↓
7. Transaction reaches all network nodes
↓
8. Validators include transaction in proposals
↓
9. Consensus reached, transaction included in ledger

# Node A wants to connect to peer.example.com
1. Query DNS for peer.example.com
2. Receive IP address (e.g., 203.0.113.50)
3. Prepare to connect to 203.0.113.50:51235

# Establish TCP socket connection
1. Send SYN packet to peer
2. Receive SYN-ACK response
3. Send ACK to complete three-way handshake
4. TCP connection established

1. Client sends ClientHello (supported cipher suites, TLS versions)
2. Server sends ServerHello (chosen cipher suite)
3. Server sends certificate
4. Key exchange and authentication
5. Both sides send Finished messages
6. Encrypted channel established

Node A → Node B: Hello Message
{
"protocolVersion": 2,
"publicKey": "n9KorY8...",
"ledgerIndex": 75623421,
"ledgerHash": "8B3F...",
"features": ["MultiSign", "FlowCross"]
}
Node B → Node A: Hello Response
{
"protocolVersion": 2,
"publicKey": "n9LkzF...",
"ledgerIndex": 75623421,
"ledgerHash": "8B3F...",
"features": ["MultiSign", "FlowCross", "DepositAuth"]
}

DNS Resolution → TCP Connect → TLS Handshake → Protocol Handshake
↓ ↓ ↓ ↓
IP Address Socket Open Encrypted Version Exchange
↓
Verification
↙ ↘
Accept Reject
↓ ↓
Active Peer   Close Connection

// Look in PeerImp.cpp for message processing
void PeerImp::onMessage(std::shared_ptr<Message> const& m)
{
switch(m->getType())
{
case protocol::mtTRANSACTION:
onTransaction(m);
break;
case protocol::mtVALIDATION:
onValidation(m);
break;
// ... other message types
}
}

// Peer states (simplified)
enum class State
{
connecting, // TCP connection in progress
connected, // TCP connected, handshake in progress
active, // Fully connected and operational
closing, // Graceful shutdown in progress
closed // Connection terminated
};

// Creating and sending a transaction message
protocol::TMTransaction tx;
tx.set_rawtransaction(serializedTx);
tx.set_status(protocol::tsCURRENT);
send(std::make_shared<Message>(tx, protocol::mtTRANSACTION));

# In rippled.cfg or via RPC
rippled log_level Overlay trace

# Terminal 1 - Node A
rippled --conf=/path/to/rippled-node1.cfg --standalone
# Terminal 2 - Node B
rippled --conf=/path/to/rippled-node2.cfg --standalone

[ips_fixed]
127.0.0.1 51236

[port_peer]
port = 51236
ip = 127.0.0.1
[ips_fixed]
127.0.0.1 51235

# Submit to Node A
rippled submit <signed_transaction>

# In standalone mode
rippled ledger_accept

Log validation failures — For security monitoring and debugging
Fail fast — Return errors as soon as validation fails
Never ignore format validation — Invalid formats can cause crashes
Never disable validation for "trusted" clients — All clients need validation
// Parameter validation errors
rpcINVALID_PARAMS // Invalid or missing parameters
rpcBAD_SYNTAX // Malformed request structure
rpcINVALID_API_VERSION // Requested API version not supported
// Authentication/Permission errors
rpcNO_PERMISSION // Insufficient role for operation
rpcFORBIDDEN // Client IP blacklisted
rpcBAD_AUTH_MASTER // Master key authentication failed
rpcBAD_AUTH_TOKEN // Token authentication failed
// Ledger-related errors
rpcNO_CURRENT // No current (open) ledger available
rpcNO_CLOSED // No closed (validated) ledger available
rpcLGR_NOT_FOUND // Specified ledger not found
rpcLGR_IDXS_NOTFND // Ledger indices not found
rpcLGR_INDEX_BOUNDS // Ledger index out of valid range
// Account-related errors
rpcACT_NOT_FOUND // Account not found in ledger
rpcACT_MALFORMED // Account address malformed
rpcDUPLICATE // Duplicate account in request
// Transaction-related errors
rpcTXN_NOT_FOUND // Transaction not found
rpcTXN_FAILED // Transaction validation failed
rpcMASTER_DISABLED // Master key disabled on account
rpcINSUFFICIENT_FUNDS // Insufficient funds for operation
// Network/Server errors
rpcNO_NETWORK // Node not connected to network
rpcCOMMAND_UNIMPLEMENTED // Command not implemented
rpcUNKNOWN_COMMAND // Unknown RPC command
rpcINTERNAL // Internal server error
rpcSLOW_DOWN             // Rate limited - too many requests

enum RippleErrorCode : int {
rpcUNKNOWN_COMMAND = -32600,
rpcINVALID_PARAMS = -32602,
rpcINTERNAL = -32603,
rpcNO_CURRENT = 20,
rpcNO_CLOSED = 21,
rpcACT_NOT_FOUND = 19,
rpcACT_MALFORMED = 18,
// ... many more defined
};

int getHTTPStatusCode(RippleErrorCode errorCode)
{
switch (errorCode) {
// 400 Bad Request - Client error in request format
case rpcINVALID_PARAMS:
case rpcBAD_SYNTAX:
return 400;
// 401 Unauthorized - Authentication required
case rpcBAD_AUTH_MASTER:
case rpcBAD_AUTH_TOKEN:
return 401;
// 403 Forbidden - Client lacks permission
case rpcNO_PERMISSION:
case rpcFORBIDDEN:
return 403;
// 404 Not Found - Requested resource doesn't exist
case rpcACT_NOT_FOUND:
case rpcTXN_NOT_FOUND:
case rpcLGR_NOT_FOUND:
return 404;
// 429 Too Many Requests - Rate limited
case rpcSLOW_DOWN:
return 429;
// 503 Service Unavailable - Server temporarily unable
case rpcNO_CURRENT:
case rpcNO_NETWORK:
return 503;
// 500 Internal Server Error - Unexpected error
default:
return 500;
}
}

HTTP/1.1 400 Bad Request
Content-Type: application/json
{
"result": {
"status": "error",
"error": "invalid_params",
"error_code": -32602,
"error_message": "Missing required field: 'account'"
}
}

{
"result": {
"status": "error",
"error": "error_code_name",
"error_code": -32602,
"error_message": "Human-readable error description",
"request": {
"command": "the_command_that_failed",
"... ": "request parameters (sanitized)"
}
}
}

// Simple error - just code and name
return rpcError(rpcINVALID_PARAMS);
// Error with custom message
return rpcError(rpcINVALID_PARAMS, "Missing 'account' field");
// Error with additional details
Json::Value error = rpcError(rpcACT_NOT_FOUND);
error["detail"] = "Account was deleted from ledger";
return error;
// Helper function definition
Json::Value rpcError(RippleErrorCode errorCode,
std::string const& message = "")
{
Json::Value result;
result[jss::status] = jss::error;
result[jss::error] = RPC::errorMessage(errorCode);
result[jss::error_code] = (int)errorCode;
if (!message.empty())
result[jss::error_message] = message;
return result;
}

Json::Value doMyHandler(RPC::JsonContext& context)
{
// Check for required field
if (!context.params.isMember(jss::account)) {
return rpcError(rpcINVALID_PARAMS, "Missing 'account' field");
}
// Validate field type
if (!context.params[jss::account].isString()) {
return rpcError(rpcINVALID_PARAMS,
"'account' must be a string");
}
// Validate field not empty
std::string accountStr = context.params[jss::account].asString();
if (accountStr.empty()) {
return rpcError(rpcINVALID_PARAMS,
"'account' cannot be empty");
}
return Json::Value(); // Valid
}

// Validate unsigned integer with bounds
if (context.params.isMember("limit")) {
if (!context.params["limit"].isUInt()) {
return rpcError(rpcINVALID_PARAMS,
"'limit' must be a positive integer");
}
unsigned int limit = context.params["limit"].asUInt();
// Check bounds
if (limit < 1 || limit > 1000) {
return rpcError(rpcINVALID_PARAMS,
"'limit' must be between 1 and 1000");
}
}
// Validate floating-point ranges
if (context.params.isMember("fee_multiplier")) {
if (!context.params["fee_multiplier"].isNumeric()) {
return rpcError(rpcINVALID_PARAMS,
"'fee_multiplier' must be numeric");
}
double multiplier = context.params["fee_multiplier"].asDouble();
if (multiplier < 0.1 || multiplier > 1000.0) {
return rpcError(rpcINVALID_PARAMS,
"'fee_multiplier' must be between 0.1 and 1000");
}
}

// Parse and validate account address
std::string accountStr = context.params[jss::account].asString();
auto account = parseBase58<AccountID>(accountStr);
if (!account) {
return rpcError(rpcACT_MALFORMED,
"Invalid account address format");
}
// Parse and validate transaction hash
std::string txHashStr = context.params["tx_hash"].asString();
auto txHash = from_hex_string<Hash256>(txHashStr);
if (!txHash) {
return rpcError(rpcINVALID_PARAMS,
"Invalid transaction hash format");
}
// Parse and validate currency code
std::string currencyStr = context.params["currency"].asString();
auto currency = to_currency(currencyStr);
if (!currency) {
return rpcError(rpcINVALID_PARAMS,
"Invalid currency code");
}

std::string command = context.params[jss::command].asString();
static constexpr std::array<std::string_view, 3> validCommands = {
"buy", "sell", "cancel"
};
if (std::find(validCommands.begin(), validCommands.end(), command)
== validCommands.end())
{
return rpcError(rpcINVALID_PARAMS,
"command must be 'buy', 'sell', or 'cancel'");
}

// Optional field with default
unsigned int ledgerIndex = 0;
if (context.params.isMember(jss::ledger_index)) {
if (context.params[jss::ledger_index].isString()) {
// Special values like "current", "validated"
std::string indexStr = context.params[jss::ledger_index].asString();
if (indexStr != "current" && indexStr != "validated") {
return rpcError(rpcINVALID_PARAMS,
"ledger_index must be numeric or 'current'/'validated'");
}
} else if (context.params[jss::ledger_index].isUInt()) {
ledgerIndex = context.params[jss::ledger_index].asUInt();
} else {
return rpcError(rpcINVALID_PARAMS,
"ledger_index must be numeric or string");
}
}

// NEVER expose private keys in responses
Json::Value response;
// Bad: Never do this
// response["private_key"] = account.getPrivateKey();
// Good: Omit sensitive data entirely
response[jss::account] = to_string(accountID);
response[jss::public_key] = to_string(publicKey);

// Bad: Leaks information about internal structure
if (database.query(accountID) == nullptr) {
return rpcError(rpcACT_NOT_FOUND,
"SELECT * FROM accounts WHERE id = " + std::to_string(accountID)
+ " returned no rows");
}
// Good: Hide implementation details
if (database.query(accountID) == nullptr) {
return rpcError(rpcACT_NOT_FOUND,
"Account not found");
}

Json::Value getSanitizedRequest(RPC::JsonContext const& context)
{
Json::Value sanitized = context.params;
// Remove sensitive fields from request echo
if (sanitized.isMember("secret")) {
sanitized.removeMember("secret");
}
if (sanitized.isMember("seed")) {
sanitized.removeMember("seed");
}
if (sanitized.isMember("private_key")) {
sanitized.removeMember("private_key");
}
// Mask sensitive values
if (sanitized.isMember("password")) {
sanitized["password"] = "[REDACTED]";
}
return sanitized;
}

Json::Value doMyHandler(RPC::JsonContext& context)
{
try {
// Handler implementation
// ...
return result;
}
catch (std::invalid_argument const& ex) {
return rpcError(rpcINVALID_PARAMS,
"Invalid argument: " + std::string(ex.what()));
}
catch (std::runtime_error const& ex) {
return rpcError(rpcINTERNAL,
"Operation failed"); // Don't expose internal error
}
catch (std::exception const& ex) {
return rpcError(rpcINTERNAL,
"Unexpected error occurred");
}
}

// std::invalid_argument - for validation errors
if (value < 0) {
throw std::invalid_argument("value must be non-negative");
}
// std::out_of_range - for bounds violations
if (index >= container.size()) {
throw std::out_of_range("index out of range");
}
// std::logic_error - for logical errors
if (!precondition) {
throw std::logic_error("precondition not met");
}
// std::runtime_error - for runtime failures
if (!resource.allocate()) {
throw std::runtime_error("failed to allocate resource");
}

Json::Value doTransferFunds(RPC::JsonContext& context)
{
// 1. Validate required fields
for (auto const& field : {"source", "destination", "amount"}) {
if (!context.params.isMember(field)) {
return rpcError(rpcINVALID_PARAMS,
std::string("Missing required field: '") + field + "'");
}
}
// 2. Validate source account
auto source = parseBase58<AccountID>(
context.params["source"].asString()
);
if (!source) {
return rpcError(rpcACT_MALFORMED,
"Invalid source account address");
}
// 3. Validate destination account
auto destination = parseBase58<AccountID>(
context.params["destination"].asString()
);
if (!destination) {
return rpcError(rpcACT_MALFORMED,
"Invalid destination account address");
}
// 4. Validate source != destination
if (*source == *destination) {
return rpcError(rpcINVALID_PARAMS,
"Source and destination cannot be the same");
}
// 5. Validate amount
STAmount amount;
if (!amountFromJsonNoThrow(amount, context.params["amount"])) {
return rpcError(rpcINVALID_PARAMS,
"Invalid amount format");
}
// 6. Validate amount is positive
if (amount <= 0) {
return rpcError(rpcINVALID_PARAMS,
"Amount must be positive");
}
// 7. Optional: validate amount bounds
if (context.params.isMember("max_amount")) {
STAmount maxAmount;
if (!amountFromJsonNoThrow(maxAmount,
context.params["max_amount"]))
{
return rpcError(rpcINVALID_PARAMS,
"Invalid max_amount format");
}
if (amount > maxAmount) {
return rpcError(rpcINVALID_PARAMS,
"Amount exceeds maximum allowed");
}
}
// 8. Get and validate ledger
std::shared_ptr<ReadView const> ledger;
auto const ledgerResult = RPC::lookupLedger(ledger, context);
if (!ledger) {
return ledgerResult;
}
// 9. Verify source account exists
auto sleSource = ledger->read(keylet::account(*source));
if (!sleSource) {
return rpcError(rpcACT_NOT_FOUND,
"Source account not found");
}
// 10. Check sufficient balance
STAmount balance = sleSource->getFieldAmount(sfBalance);
if (balance < amount) {
return rpcError(rpcINSUFFICIENT_FUNDS,
"Insufficient funds in source account");
}
// All validation passed - proceed with operation
Json::Value result;
result[jss::status] = "success";
result["transaction_id"] = "..."; // Generated transaction ID
return result;
}

// Validate XRP amount (drops)
if (context.params.isMember("drops")) {
if (!context.params["drops"].isString()) {
return rpcError(rpcINVALID_PARAMS,
"'drops' must be a string");
}
std::string dropsStr = context.params["drops"].asString();
auto drops = XRPAmount::from_string_throw(dropsStr);
if (drops < 0) {
return rpcError(rpcINVALID_PARAMS,
"XRP amount cannot be negative");
}
}
// Validate IOU amount
if (context.params.isMember("amount")) {
STAmount amount;
if (!amountFromJsonNoThrow(amount, context.params["amount"])) {
return rpcError(rpcINVALID_PARAMS,
"Invalid amount");
}
if (!amount.getCurrency().isValid()) {
return rpcError(rpcINVALID_PARAMS,
"Invalid currency in amount");
}
}

// Validate and get specific ledger
std::shared_ptr<ReadView const> targetLedger;
if (context.params.isMember(jss::ledger_index)) {
Json::Value const& indexValue = context.params[jss::ledger_index];
if (indexValue.isString()) {
std::string index = indexValue.asString();
if (index == "validated") {
targetLedger = context.ledgerMaster.getValidatedLedger();
} else if (index == "current") {
targetLedger = context.ledgerMaster.getCurrentLedger();
} else if (index == "closed") {
targetLedger = context.ledgerMaster.getClosedLedger();
} else {
return rpcError(rpcINVALID_PARAMS,
"ledger_index must be 'validated', 'current', or a number");
}
} else if (indexValue.isUInt()) {
targetLedger = context.ledgerMaster.getLedgerBySeq(
indexValue.asUInt()
);
} else {
return rpcError(rpcINVALID_PARAMS,
"ledger_index must be numeric or string");
}
if (!targetLedger) {
return rpcError(rpcLGR_NOT_FOUND,
"Ledger not found");
}
}

// Validate pagination parameters
unsigned int pageLimit = 20; // Default
unsigned int pageIndex = 0; // Default
if (context.params.isMember("limit")) {
if (!context.params["limit"].isUInt()) {
return rpcError(rpcINVALID_PARAMS,
"'limit' must be a positive integer");
}
pageLimit = context.params["limit"].asUInt();
// Enforce maximum limit to prevent DoS
if (pageLimit < 1 || pageLimit > 1000) {
return rpcError(rpcINVALID_PARAMS,
"'limit' must be between 1 and 1000");
}
}
if (context.params.isMember("marker")) {
if (!context.params["marker"].isString()) {
return rpcError(rpcINVALID_PARAMS,
"'marker' must be a string");
}
std::string marker = context.params["marker"].asString();
// Validate marker format...
}
Now that we understand where randomness comes from, let's explore how rippled transforms that randomness into cryptographic keys. This chapter traces the complete key generation pipeline—from random bytes to secret keys, from secret keys to public keys, and from public keys to account addresses.
We'll examine both random key generation (for new accounts) and deterministic key generation (for wallet recovery), and understand why rippled supports two different cryptographic algorithms.
Rippled supports two approaches to key generation: random generation with crypto_prng() (used for new accounts and one-time keys) and deterministic generation from a seed (used for wallet recovery and for deriving multiple accounts from one seed).

randomSecretKey()

Step-by-step breakdown:
Allocate buffer: Create 32-byte buffer on stack
Fill with randomness: Use crypto_prng() to fill with random bytes
Construct SecretKey: Wrap bytes in SecretKey object
Return: SecretKey object (move semantics, no copy); the temporary stack buffer is securely erased
Security level:
128-bit security requires 2^128 operations to break
256 bits provides 2^256 operations (overkill, but standard)
Quantum attacks (Grover's algorithm) effectively halve the security level (2^256 → 2^128 operations)
So 256 bits ensures long-term security even against quantum attacks
A seed is a compact representation (typically 16 bytes) from which many keys can be derived:
Why seeds matter:
Backup: Remember one seed → recover all keys
Portability: Move keys between wallets
Hierarchy: Generate multiple accounts from one seed
Why this works:
SHA-512-Half is a one-way function
Same seed always produces same secret key
Different seeds produce uncorrelated secret keys
No special validation needed (all 32-byte values are valid ed25519 keys)
Why more complex?
Not all 32-byte values are valid secp256k1 secret keys. The value must be:
Greater than 0
Less than the curve order (a large prime number)
Why this loop?
The probability that a random 256-bit value is >= CURVE_ORDER is approximately 1 in 2^128. This is so unlikely that we almost never need a second try, but the code handles it correctly.
Incrementing ordinal: If the first hash isn't valid, we increment the ordinal and try again. This ensures:
Deterministic behavior (same seed always produces same result)
Eventually finds a valid key (extremely high probability on first try)
No bias in the resulting key distribution
Compressed vs Uncompressed:
Why compress?
Saves 32 bytes per public key
Given X, only two possible Y values exist
Prefix bit tells us which one
Simpler than secp256k1:
No compression needed (Ed25519 public keys are naturally 32 bytes)
No serialization complexity
Just prepend type marker (0xED)
Once we have a public key, we derive the account ID:
The pipeline:
Why double hash?
Defense in depth: If one hash is broken, the other provides protection
Compactness: 20 bytes is shorter than 32 bytes
Quantum resistance: Even if quantum computers break elliptic curve crypto, they can't reverse the hash to get the public key
The final step is encoding the account ID as a human-readable address:
Result:
We'll explore Base58Check encoding in detail in Chapter 7.
Key generation in rippled involves:
Randomness: Cryptographically secure random bytes from crypto_prng()
Secret keys: 32 bytes of random or deterministically-derived data
Public keys: Derived via one-way function
Account IDs: Double hash (SHA-256 + RIPEMD-160) of public keys
Addresses: Base58Check encoding of account IDs
Two algorithms:
secp256k1 – complex, validated, widely used
ed25519 – simpler, faster, preferred
Two approaches:
Random: Maximum security, requires backup of each key
Deterministic: One seed recovers many keys, convenient for wallets
In the next chapter, we'll see how these keys are used to create and verify digital signatures—the mathematical proof of authorization.
Path 1: Random Generation
crypto_prng() → SecretKey → PublicKey → AccountID
(Used for: New accounts, one-time keys)
Path 2: Deterministic Generation
Seed → SecretKey → PublicKey → AccountID
(Used for: Wallet recovery, multiple accounts from one seed)

// From src/libxrpl/protocol/SecretKey.cpp
SecretKey randomSecretKey()
{
std::uint8_t buf[32];
beast::rngfill(buf, sizeof(buf), crypto_prng());
SecretKey sk(Slice{buf, sizeof(buf)});
secure_erase(buf, sizeof(buf));
return sk;
}

// 32 bytes = 256 bits
std::uint8_t buf[32];
// This provides 2^256 possible keys
// That's approximately 10^77 combinations
// Roughly comparable to the number of atoms in the observable universe!

std::pair<PublicKey, SecretKey> randomKeyPair(KeyType type)
{
// Generate random secret key
SecretKey sk = randomSecretKey();
// Derive public key from secret
PublicKey pk = derivePublicKey(type, sk);
return {pk, sk};
}

// Seed structure
class Seed
{
private:
std::array<std::uint8_t, 16> buf_; // 128 bits
public:
// Construction, access, etc.
};

std::pair<PublicKey, SecretKey>
generateKeyPair(KeyType type, Seed const& seed)
{
switch (type)
{
case KeyType::secp256k1:
return generateSecp256k1KeyPair(seed);
case KeyType::ed25519:
return generateEd25519KeyPair(seed);
}
}

// For ed25519, derivation is straightforward
case KeyType::ed25519: {
// Hash the seed to get secret key
auto const sk = generateSecretKey(type, seed);
// Derive public key from secret
return {derivePublicKey(type, sk), sk};
}
SecretKey generateSecretKey(KeyType::ed25519, Seed const& seed)
{
// Simply hash the seed
auto const secret = sha512Half_s(makeSlice(seed));
return SecretKey{secret};
}

// For secp256k1, need to handle curve order constraint
case KeyType::secp256k1: {
detail::Generator g(seed);
return g(0); // Generate the 0th key pair
}

// secp256k1 curve order
// Any secret key must be: 0 < key < order
static const uint256 CURVE_ORDER =
"0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141";class Generator
{
private:
Seed seed_;
public:
explicit Generator(Seed const& seed) : seed_(seed) {}
// Generate the n-th key pair
std::pair<PublicKey, SecretKey> operator()(std::uint32_t ordinal)
{
// Derive root key from seed
SecretKey rootKey = deriveRootKey(seed_, ordinal);
// Derive public key
PublicKey publicKey = derivePublicKey(KeyType::secp256k1, rootKey);
return {publicKey, rootKey};
}
};

SecretKey deriveRootKey(Seed const& seed, std::uint32_t ordinal)
{
// Try up to 128 times to find valid key
for (int i = 0; i < 128; ++i)
{
// Create buffer: seed (16 bytes) + ordinal (4 bytes)
std::array<std::uint8_t, 20> buf;
// Copy seed
std::copy(seed.data(), seed.data() + 16, buf.begin());
// Append ordinal (big-endian)
buf[16] = (ordinal >> 24) & 0xFF;
buf[17] = (ordinal >> 16) & 0xFF;
buf[18] = (ordinal >> 8) & 0xFF;
buf[19] = (ordinal >> 0) & 0xFF;
// Hash it
auto const candidate = sha512Half(makeSlice(buf));
// Check if valid secp256k1 secret key
if (isValidSecretKey(candidate))
return SecretKey{candidate};
// Not valid, increment ordinal and try again
++ordinal;
}
// Should never reach here (probability ~ 1 in 2^128)
Throw<std::runtime_error>("Failed to derive key from seed");
}
bool isValidSecretKey(uint256 const& candidate)
{
// Must be in range: 0 < candidate < CURVE_ORDER
return candidate > 0 && candidate < CURVE_ORDER;
}

PublicKey derivePublicKey(KeyType::secp256k1, SecretKey const& sk)
{
secp256k1_pubkey pubkey_imp;
// Perform elliptic curve point multiplication: PublicKey = SecretKey × G
secp256k1_ec_pubkey_create(
secp256k1Context(),
&pubkey_imp,
reinterpret_cast<unsigned char const*>(sk.data()));
// Serialize to compressed format
unsigned char pubkey[33];
std::size_t len = sizeof(pubkey);
secp256k1_ec_pubkey_serialize(
secp256k1Context(),
pubkey,
&len,
&pubkey_imp,
SECP256K1_EC_COMPRESSED); // 33 bytes: prefix + X coordinate
return PublicKey{Slice{pubkey, len}};
}

Uncompressed: 0x04 | X (32 bytes) | Y (32 bytes) = 65 bytes
Compressed: 0x02/0x03 | X (32 bytes) = 33 bytes
Prefix byte indicates Y parity:
- 0x02: Y is even
- 0x03: Y is odd

PublicKey derivePublicKey(KeyType::ed25519, SecretKey const& sk)
{
unsigned char buf[33];
buf[0] = 0xED; // Type prefix marker
// Derive public key using Ed25519 algorithm
ed25519_publickey(sk.data(), &buf[1]);
return PublicKey(Slice{buf, sizeof(buf)});
}

AccountID calcAccountID(PublicKey const& pk)
{
ripesha_hasher h;
h(pk.data(), pk.size());
return AccountID{static_cast<ripesha_hasher::result_type>(h)};
}

class ripesha_hasher
{
private:
openssl_sha256_hasher sha_;
public:
void operator()(void const* data, std::size_t size)
{
// First: SHA-256
sha_(data, size);
}
operator result_type()
{
// Get SHA-256 result
auto const sha256_result =
static_cast<openssl_sha256_hasher::result_type>(sha_);
// Second: RIPEMD-160 of SHA-256
ripemd160_hasher ripe;
ripe(sha256_result.data(), sha256_result.size());
return static_cast<result_type>(ripe);
}
};

Public Key (33 bytes)
↓
SHA-256
↓
32-byte digest
↓
RIPEMD-160
↓
20-byte Account ID

std::string toBase58(AccountID const& accountID)
{
return encodeBase58Token(
TokenType::AccountID,
accountID.data(),
accountID.size());
}

Account ID (20 bytes): 0x8B8A6C533F09CA0E5E00E7C32AA7EC323485ED3F
Address: rN7n7otQDd6FczFgLdlqtyMVrn3LNU8B4C

// Generate random ed25519 key pair
auto [publicKey, secretKey] = randomKeyPair(KeyType::ed25519);
// Derive account ID
AccountID accountID = calcAccountID(publicKey);
// Encode as address
std::string address = toBase58(accountID);
std::cout << "Public Key: " << strHex(publicKey) << "\n";
std::cout << "Account ID: " << strHex(accountID) << "\n";
std::cout << "Address: " << address << "\n";// Create seed from passphrase (EXAMPLE ONLY - don't do this in production!)
Seed seed = generateSeedFromPassphrase("my secret passphrase");
// Generate deterministic key pair
auto [publicKey, secretKey] = generateKeyPair(KeyType::secp256k1, seed);
// Same seed always produces same keys
auto [publicKey2, secretKey2] = generateKeyPair(KeyType::secp256k1, seed);
assert(publicKey == publicKey2);
assert(secretKey == secretKey2);
// Derive account
AccountID accountID = calcAccountID(publicKey);
std::string address = toBase58(accountID);
std::cout << "Address: " << address << "\n";Seed seed = /* ... */;
// Create generator
detail::Generator gen(seed);
// Generate multiple accounts
auto [pub0, sec0] = gen(0);
auto [pub1, sec1] = gen(1);
auto [pub2, sec2] = gen(2);
// Each has different address
AccountID acc0 = calcAccountID(pub0);
AccountID acc1 = calcAccountID(pub1);
AccountID acc2 = calcAccountID(pub2);
std::cout << "Account 0: " << toBase58(acc0) << "\n";
std::cout << "Account 1: " << toBase58(acc1) << "\n";
std::cout << "Account 2: " << toBase58(acc2) << "\n";std::optional<KeyType> publicKeyType(Slice const& slice)
{
if (slice.size() != 33)
return std::nullopt;
// Check first byte
switch (slice[0])
{
case 0x02:
case 0x03:
return KeyType::secp256k1;
case 0xED:
return KeyType::ed25519;
default:
return std::nullopt;
}
}

Buffer sign(PublicKey const& pk, SecretKey const& sk, Slice const& m)
{
// Automatically detect which algorithm to use
auto const type = publicKeyType(pk.slice());
switch (*type)
{
case KeyType::ed25519:
return signEd25519(pk, sk, m);
case KeyType::secp256k1:
return signSecp256k1(pk, sk, m);
}
}

secp256k1_context const* secp256k1Context()
{
// Thread-local context for performance
static thread_local std::unique_ptr<
secp256k1_context,
decltype(&secp256k1_context_destroy)>
context{
secp256k1_context_create(
SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY),
&secp256k1_context_destroy
};
return context.get();
}

// ❌ WRONG: secret bytes copied into a plain buffer, never erased
void badExample() {
    SecretKey sk = randomSecretKey();
    std::uint8_t raw[32];
    std::memcpy(raw, sk.data(), sizeof(raw));  // lingers in memory after return!
}

// ✅ CORRECT: securely erase temporary copies of key material
void goodExample() {
    SecretKey sk = randomSecretKey();
    std::uint8_t raw[32];
    std::memcpy(raw, sk.data(), sizeof(raw));
    secure_erase(raw, sizeof(raw));
}

bool validateKeys(PublicKey const& pk, SecretKey const& sk)
{
auto derived = derivePublicKey(publicKeyType(pk).value(), sk);
return derived == pk;
}

class Seed {
~Seed() {
secure_erase(buf_.data(), buf_.size());
}
};

Ed25519:
- Secret key generation: ~50 µs
- Public key derivation: ~50 µs
- Total: ~100 µs
Secp256k1:
- Secret key generation: ~50 µs
- Public key derivation: ~100 µs
- Total: ~150 µs

std::vector<std::pair<PublicKey, SecretKey>> generateKeys(int count)
{
std::vector<std::pair<PublicKey, SecretKey>> keys;
keys.reserve(count);
for (int i = 0; i < count; ++i) {
keys.push_back(randomKeyPair(KeyType::ed25519));
}
return keys;
}

The Consensus Engine is the heart of the XRP Ledger's ability to reach agreement on transaction sets and ledger state without requiring proof-of-work mining or proof-of-stake validation. Understanding the consensus mechanism is crucial for anyone working on the core protocol, as it's what makes the XRP Ledger both fast (3-5 second confirmation times) and secure (Byzantine fault tolerant).
Unlike blockchain systems that require computational work to achieve consensus, the XRP Ledger uses a consensus protocol where a network of independent validators exchanges proposals and votes to agree on which transactions to include in the next ledger. This approach enables the network to process transactions quickly while maintaining strong security guarantees.
Consensus is the process by which a distributed network of nodes agrees on a single shared state. In the context of the XRP Ledger, this means agreeing on:
Which transactions to include in the next ledger
The order of those transactions (deterministic ordering)
The resulting ledger state after applying all transactions
Without consensus, different nodes might have different views of the ledger, leading to double-spending and inconsistencies.
Traditional blockchains like Bitcoin use proof-of-work (PoW):
Miners compete to solve cryptographic puzzles
Winner proposes the next block
Requires massive computational resources
Block confirmation takes ~10 minutes (Bitcoin) or ~15 seconds (Ethereum)
XRP Ledger's approach is different:
No mining or computational puzzles
Validators vote on transaction sets
Consensus achieved in rounds lasting 3-5 seconds
Energy efficient (no wasted computation)
The XRP Ledger consensus protocol is Byzantine Fault Tolerant (BFT), meaning it can tolerate some validators being:
Offline or unreachable
Malicious (trying to disrupt consensus)
Byzantine (behaving arbitrarily or incorrectly)
Key Property: As long as >80% of trusted validators are honest and online, consensus will be reached correctly.
Security Model:
Safety: confirming an invalid ledger would require collusion by more than 80% of a node's UNL
Liveness: consensus continues as long as no more than about 20% of the UNL is faulty or offline
Validators are nodes that participate in the XRP Ledger consensus process, validating transactions and agreeing on the state of the ledger. Each validator proposes and votes on ledger updates during consensus rounds.
A Unique Node List (UNL) is a trusted set of validators chosen by a participant. By relying on their UNL, a node can efficiently reach consensus while protecting against faulty or malicious validators. Proper UNL selection is crucial for network security, decentralization, and ledger reliability.
A validator is a rippled server configured to participate in consensus by:
Proposing transaction sets
Voting on other validators' proposals
Signing validated ledgers
Not all rippled servers are validators. Most servers are:
Tracking servers: Follow the network and process transactions but don't participate in consensus
Stock servers: Serve API requests but don't store full history
To become a validator, a server needs:
A validator key pair (generated with validator-keys tool)
Configuration in rippled.cfg to enable validation
To be trusted by other validators (added to their UNLs)
Each validator maintains a Unique Node List (UNL)—a list of validators it trusts to be honest and not collude.
Key Concepts:
Personal Choice: Each validator operator chooses their own UNL based on their trust relationships.
Overlap Required: For the network to reach consensus, there must be sufficient overlap between validators' UNLs. The protocol requires >90% overlap to ensure agreement.
Default UNL: Most operators use the default UNL provided by the XRP Ledger Foundation, which is regularly updated and reviewed.
Dynamic Updates: UNLs can be updated over time as validators join or leave the network.
In validators.txt:
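A representative stanza (the keys below are placeholders, not real validator keys):

[validators]
n9KPlaceholderValidatorKey111111111111111111111111
n9LPlaceholderValidatorKey222222222222222222222222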
The validator list can be automatically fetched from trusted sources:
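For example (vl.ripple.com and vl.xrplf.org are commonly used publisher sites; the publisher key below is a placeholder):

[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org

[validator_list_keys]
EDPLACEHOLDERPUBLISHERKEY000000000000000000000000000000000000000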
This allows dynamic updates without manual configuration changes.
Consensus operates in discrete rounds, each typically lasting 3-5 seconds. Each round attempts to agree on the next ledger.
Round Phases:
Phase 1: Open (Transaction Collection)
Duration: Variable (typically a few seconds)
Purpose: Collect transactions for the next ledger
What Happens:
Transactions arrive from clients and peers
Transactions are validated and added to the open ledger
Each validator builds its own transaction set
Open ledger is tentatively applied (provides immediate feedback)
Key Point: The open ledger is not final—it shows what might be in the next ledger, but consensus hasn't been reached yet.
Phase 2: Establish (Proposal Exchange)
Duration: 2-4 seconds (multiple sub-rounds, with the agreement threshold rising each time)
Purpose: Validators exchange proposals and converge on a common transaction set
Process:
Initial Proposal
Each validator creates a proposal containing:
Hash of their proposed transaction set
Their validator signature
Previous ledger hash
Proposed close time
Proposal Exchange
Validators broadcast proposals to the network using tmPROPOSE_LEDGER messages.
Agreement Threshold
Validators track which transactions appear in proposals from their UNL:
Over 80% agreement: Transaction is considered "likely to be included"
50-80% agreement: Transaction is "disputed"
<50% agreement: Transaction is "unlikely to be included"
Iterative Refinement
Multiple rounds of proposals:
Round 1 (Initial): Each validator proposes their transaction set
Round 2 (50% threshold): Validators update proposals, including only transactions with >50% support
Round 3+ (Increasing threshold): Threshold increases each round, converging toward agreement
Avalanche Effect
Once enough validators converge on the same set, others quickly follow (avalanche effect), achieving rapid consensus.
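The proposal structure, sketched (field names and types are illustrative stand-ins; the real template lives in src/ripple/consensus/ConsensusProposal.h):

#include <cstdint>

// Illustrative only: fixed arrays stand in for rippled's
// uint256 / PublicKey / Buffer types.
struct Proposal
{
    std::uint8_t  prevLedgerHash[32]; // ledger this proposal builds on
    std::uint8_t  txSetHash[32];      // hash of the proposed transaction set
    std::uint32_t closeTime;          // proposed close time (network seconds)
    std::uint32_t proposeSeq;         // increments on each refinement round
    std::uint8_t  nodePubKey[33];     // proposing validator's public key
    std::uint8_t  signature[72];      // validator's signature over the above
};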
Phase 3: Accept
Duration: Effectively instant (triggered the moment the threshold is reached)
Purpose: Consensus is reached, transaction set is accepted
Trigger: When a validator sees >80% of its UNL agreeing on the same transaction set
What Happens:
The agreed-upon transaction set becomes the "accepted" set
Ledger is closed with this transaction set
Validators compute the resulting ledger hash
Consensus round ends
Phase 4: Validate
Duration: 1-2 seconds
Purpose: Validators sign and broadcast validations
What Happens:
Each validator applies the agreed transaction set
Computes the resulting ledger hash
Creates a validation message
Signs and broadcasts the validation
Validation Message (tmVALIDATION): carries the fields described earlier in this chapter (ledger hash, sequence number, close time, validator signature, and the full/partial flag)
Validation Collection:
Nodes collect validations from validators
When >80% of trusted validators validate the same ledger hash, it's considered fully validated
The ledger becomes immutable and part of the permanent ledger history
Total time from consensus start to full validation: a few seconds. Total time from transaction submission to final confirmation: typically under ten seconds, depending on when the transaction is submitted during the open phase.
For all validators to reach the same ledger state, they must apply transactions in exactly the same order. Different orders can produce different results:
Example:
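Suppose account A holds 30 XRP of spendable balance and has two pending payments: Tx1 sends 25 XRP to B, and Tx2 sends 10 XRP to C. If Tx1 is applied first, Tx2 fails with insufficient funds; if Tx2 is applied first, Tx1 fails instead. The two orders produce different final ledger states, so every validator must use the same order.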
The XRP Ledger uses canonical ordering to ensure determinism:
Primary Sort: By account (lexicographic order of account IDs)
Secondary Sort: By transaction sequence number (nonce)
This ensures:
All transactions from the same account are processed in sequence order
Transactions from different accounts are processed in a deterministic order
All validators apply transactions identically
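As a sketch, the ordering rule above can be expressed as a comparator (types are illustrative; rippled's actual canonical ordering is slightly more involved):

#include <cstdint>
#include <string>

struct TxKey
{
    std::string   account;  // account ID bytes, lexicographically comparable
    std::uint32_t sequence; // per-account sequence number
};

// Canonical order: by account first, then by sequence within each account.
bool canonicalLess(TxKey const& a, TxKey const& b)
{
    if (a.account != b.account)
        return a.account < b.account;
    return a.sequence < b.sequence;
}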
The transaction set is represented by a hash:
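Conceptually (a simplification; the set is held in a SHAMap, the structure covered earlier):

TxSetHash = root hash of the SHAMap containing the canonically ordered transactions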
This hash is what validators include in their proposals—a compact representation of the entire transaction set.
A dispute occurs when validators initially disagree about which transactions should be included in the next ledger. This is normal and expected—validators may have different views due to:
Network latency (different arrival times)
Transaction validity questions
Different open ledger states
Disputes are resolved through the iterative consensus rounds:
Round 1: Initial Disagreement
Round 2: Converge on High-Agreement TXs
Validators drop transactions with <50% support:
Round 3: Further Convergence
As threshold increases to 80%, validators must drop disputed transactions:
Transactions that don't reach consensus are not lost:
They remain in the open ledger
They'll be included in the next consensus round
They're only dropped if they become invalid
If a validator behaves maliciously:
Their proposals are signed, so misbehavior is detectable
Other validators ignore proposals that don't follow protocol rules
Byzantine validators cannot force consensus on invalid states (requires >80% support)
Operators can remove misbehaving validators from their UNL
A ledger close is triggered when:
Timer-based: Minimum time has elapsed (typically 2-10 seconds)
Transaction threshold: Sufficient transactions have accumulated
Consensus readiness: Validators are ready to reach agreement
Validators must also agree on the close time of the ledger:
Why it matters: Some transactions are time-dependent (escrows, offers with expiration)
Process:
Each validator proposes a close time
Consensus includes the close time in proposals
Final close time is the median of proposed times (Byzantine fault tolerant)
Close Time Resolution:
Rounded to nearest 10 seconds for efficiency
Prevents clock skew from causing issues
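A sketch of that rounding step (the 10-second figure comes from the list above; rippled adjusts the resolution dynamically):

#include <cstdint>

// Round a close time (in seconds) to the nearest multiple of `resolution`.
std::uint32_t effectiveCloseTime(std::uint32_t closeTime,
                                 std::uint32_t resolution = 10)
{
    return ((closeTime + resolution / 2) / resolution) * resolution;
}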
After consensus is reached:
Step 1: Apply agreed transaction set
Step 2: Compute ledger hash
Step 3: Create and broadcast validation
Step 4: Collect validations
Consensus Core:
src/ripple/consensus/Consensus.h - Main consensus engine interface
src/ripple/consensus/ConsensusProposal.h - Proposal structure
src/ripple/consensus/Validations.h - Validation tracking
Consensus Implementation:
src/ripple/app/consensus/RCLConsensus.h - XRP Ledger-specific consensus
src/ripple/app/consensus/RCLValidations.cpp - Validation handling
Network Messages:
src/ripple/overlay/impl/ProtocolMessage.h - tmPROPOSE_LEDGER, tmVALIDATION
Configuration:
validators.txt - UNL configuration
rippled.cfg - Validator key configuration
Consensus Class
RCLConsensus (Ripple Consensus Ledger)
XRP Ledger-specific consensus implementation:
Finding Consensus Start
Search for ledger close triggers:
Tracing Proposal Handling
Follow proposal processing:
Understanding Validation
Follow validation creation and verification:
Objective: Watch a complete consensus round and understand the proposal exchange process.
Part 1: Setup Multi-Validator Environment
This is advanced—requires multiple validators. For learning, we'll use logs from a single validator.
Step 1: Enable detailed consensus logging
Edit rippled.cfg:
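One way to raise consensus log verbosity from the config file (the partition name is an assumption; any partition and severity accepted by the log_level command works):

[rpc_startup]
{ "command": "log_level", "partition": "LedgerConsensus", "severity": "trace" }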
Step 2: Start rippled
Step 3: Watch the logs
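For example (the log path depends on your configuration):

tail -f debug.log | grep -i -E "consensus|proposal|validation"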
Part 2: Identify Consensus Phases
From the logs, identify:
Open Phase Start:
Consensus Round Start:
Proposals Received:
Agreement Tracking:
Consensus Reached:
Validation Created:
Validation Received:
Ledger Fully Validated:
Part 3: Timing Analysis
Measure the duration of each phase:
Open Phase Duration: Time between "open ledger started" and "Starting consensus round"
Consensus Duration: Time from "Starting consensus round" to "Consensus reached"
Validation Duration: Time from "Consensus reached" to "fully validated"
Create a timeline:
Part 4: Analysis Questions
Answer these based on your observations:
How many consensus rounds occurred?
Count from logs
What was the average consensus time?
Measure multiple rounds
How many transactions were included in each ledger?
Look for transaction count in logs
Were there any disputed transactions?
Look for agreement percentages <100%
How many validations did each ledger receive?
Count validation messages
What percentage of UNL validated each ledger?
Compare validations received vs UNL size
Part 5: Compare to Whitepaper
Read the XRP Ledger Consensus Protocol whitepaper and compare:
Does the observed behavior match the description?
Are the timing estimates accurate?
How does the network handle disputes?
✅ Byzantine Fault Tolerance: Network can tolerate up to 20% faulty validators while maintaining security
✅ UNL-Based Trust: Each validator chooses which other validators to trust, creating a trust graph
✅ Iterative Consensus: Multiple rounds of proposals converge on an agreed transaction set
✅ Fast Finality: 3-5 second consensus rounds enable quick transaction confirmation
✅ No Mining: Consensus achieved through voting, not computational work
✅ Deterministic Ordering: Canonical transaction ordering ensures all validators reach identical state
✅ Dispute Resolution: Disagreements resolved by dropping disputed transactions to next round
✅ Safety: No conflicting ledgers validated (no forks in normal operation)
✅ Liveness: Network makes progress as long as >80% of UNL is honest and responsive
✅ Censorship Resistance: No single entity can block valid transactions
✅ Sybil Resistance: Trust relationships (UNL) prevent fake validator attacks
✅ Codebase Location: Consensus implementation in src/ripple/consensus/ and src/ripple/app/consensus/
✅ Proposal Format: Understand ConsensusProposal structure and tmPROPOSE_LEDGER messages
✅ Validation Format: Understand STValidation structure and tmVALIDATION messages
✅ Debugging: Use consensus logs to trace round progression and identify issues
Myth: XRPL validators mine blocks. False: Validators don't perform computational work. They simply vote on which transactions to include.
Myth: Ripple controls the validator network. False: Any organization can run validators, and each validator operator independently chooses their UNL. While many operators use the recommended UNL from the XRP Ledger Foundation, they're free to customize it.
Myth: Every rippled server votes in consensus. False: Most rippled servers are tracking servers that follow consensus but don't vote. Only configured validators participate.
Myth: If a single validator goes offline, the network halts. False: As long as >80% of a validator's UNL is operational and honest, consensus proceeds normally.
Myth: The XRP Ledger has forked in the past. False: The XRP Ledger has never had a fork (competing chains). The consensus protocol prevents this by design.
XRP Ledger Dev Portal:
Consensus Protocol:
Run a Validator:
Original Consensus Whitepaper: David Schwartz, Noah Youngs, Arthur Britto
Analysis of the XRP Ledger Consensus Protocol: Brad Chase, Ethan MacBrough
Cobalt: BFT Governance in Open Networks: Ethan MacBrough
src/ripple/consensus/ - Generic consensus framework
src/ripple/app/consensus/ - XRP Ledger-specific implementation
src/ripple/app/ledger/ConsensusTransSetSF.cpp - Transaction set management
- How consensus messages are propagated
- How consensus integrates with other components
- How transactions flow through consensus
The Rippled codebase is large and complex, and it can be intimidating for new developers. With hundreds of files, thousands of classes, and hundreds of thousands of lines of code, knowing how to navigate efficiently is crucial for productivity. This guide teaches you the skills to quickly locate functionality, understand code organization, and become proficient in exploring the Rippled source code.
Whether you're tracking down a bug, implementing a new feature, or simply trying to understand how something works, mastering codebase navigation will dramatically accelerate your development workflow and deepen your understanding of the XRP Ledger protocol.
This is where all the core Rippled code lives:
Transaction Processing:
src/ripple/app/tx/impl/ - All transactor implementations
src/ripple/protocol/TxFormats.cpp - Transaction type definitions
Consensus:
src/ripple/consensus/ - Generic consensus framework
src/ripple/app/consensus/ - XRPL-specific consensus
Networking:
src/ripple/overlay/ - P2P overlay network
src/ripple/beast/ - Low-level networking (HTTP, WebSocket)
Ledger Management:
src/ripple/app/ledger/ - Ledger operations
src/ripple/ledger/ - Ledger data structures
src/ripple/shamap/ - Merkle tree implementation
Storage:
src/ripple/nodestore/ - Key-value storage backend
RPC:
src/ripple/app/rpc/handlers/ - RPC command implementations
src/ripple/rpc/ - RPC infrastructure
Application Core:
src/ripple/app/main/ - Application initialization
src/ripple/core/ - Core services (Config, JobQueue)
Understanding naming conventions is essential for quickly identifying what a class or type represents:
ST* Classes (Serialized Types)
Classes representing serializable protocol objects:
STTx - Serialized Transaction
STObject - Serialized Object (base class)
STAmount - Serialized Amount
STValidation - Serialized Validation
STArray - Serialized Array
SLE - Serialized Ledger Entry
Represents an object stored in the ledger:
Common SLE Types:
Account (AccountRoot)
Offer
RippleState (Trust Line)
SignerList
PayChannel
Escrow
NFToken
TER - Transaction Engine Result
Result codes from transaction processing:
Categories:
tes* - Success
tem* - Malformed (permanent failure)
tef* - Failure (cannot be applied to the current ledger)
ter* - Retry (waiting for condition)
tec* - Claimed fee (failed but fee charged)
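When reading transactor code, you will usually see these categories tested through helper predicates rather than raw numeric comparisons. The sketch below is illustrative only; it mirrors the range-based helpers declared in src/ripple/protocol/TER.h (such as isTesSuccess and isTecClaim) without reproducing them exactly.

#include <cstdio>

// Illustrative classification by numeric range (see TER.h for the real helpers):
bool isTemMalformed(int ter) { return ter >= -299 && ter <= -200; }
bool isTefFailure(int ter)   { return ter >= -199 && ter <= -100; }
bool isTerRetry(int ter)     { return ter >=  -99 && ter <=   -1; }
bool isTesSuccess(int ter)   { return ter == 0; }
bool isTecClaim(int ter)     { return ter >= 100 && ter <= 255; }

int main()
{
    int const result = 104; // e.g. tecUNFUNDED_PAYMENT
    if (isTecClaim(result))
        std::puts("Transaction failed, but the fee was still charged");
}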
SF* - Serialized Field
Field identifiers for serialized data:
Naming Pattern: sf + CamelCase field name
Other Common Prefixes
LedgerEntryType - Types of ledger objects
Keylet - Keys for accessing ledger objects
RPC* - RPC-related classes
Keylets are the standard way to access ledger objects:
Common Keylet Functions:
Views provide read/write access to ledger state:
Read-Only View:
Modifiable View:
View Types:
ReadView - Read-only access
ApplyView - Read/write for transaction application
OpenView - Open ledger view
PaymentSandbox - Sandboxed view for payments
Many fields are optional; access them with the ~ operator:
Extensive use of RAII and smart pointers:
Most components hold an Application reference:
Command-line searching is often the fastest way:
Find where a function is defined:
Find where a variable is used:
Find specific transaction type:
Find RPC command handler:
Use type information to navigate:
Example: Finding where STTx is used
Example: Finding transaction submission
Entry Points:
main() - src/ripple/app/main/main.cpp
RPC handlers - src/ripple/app/rpc/handlers/*.cpp
Transaction types - src/ripple/app/tx/impl/*.cpp
Protocol messages - src/ripple/overlay/impl/ProtocolMessage.h
Example: Tracing RPC Call
Modern IDEs provide powerful navigation:
Visual Studio Code:
Ctrl/Cmd + Click - Go to definition
F12 - Go to definition
Shift + F12 - Find all references
Ctrl/Cmd + T - Go to symbol
Ctrl/Cmd + P - Quick file open
CLion:
Ctrl + B - Go to declaration
Ctrl + Alt + B - Go to implementation
Alt + F7 - Find usages
Ctrl + N - Go to class
Ctrl + Shift + N - Go to file
XCode:
Cmd + Click - Go to definition
Ctrl + 1 - Show related items
Cmd + Shift + O - Open quickly
Cmd + Shift + F - Find in project
Format: src/ripple/app/tx/impl/<TransactionType>.cpp
Finding Transaction Implementation:
Format: src/ripple/app/rpc/handlers/<CommandName>.cpp
Finding RPC Handler:
Header Files (.h):
Class declarations
Function prototypes
Template definitions
Inline functions
Implementation Files (.cpp):
Function implementations
Static variables
Template specializations
Finding Pattern:
Always read the header file first:
What to Look For:
Public methods (API)
Constructor parameters (dependencies)
Member variables (state)
Comments and documentation
Follow how data flows through functions:
Identify key decision points:
Tests show how code is meant to be used:
Extensions:
C/C++ (Microsoft)
C/C++ Extension Pack
CMake Tools
GitLens
Configuration (.vscode/settings.json):
CMake Configuration:
Open rippled directory
CLion auto-detects CMakeLists.txt
Configure build profiles (Debug, Release)
Let CLion index the project
Tips:
Use "Find in Path" (Ctrl+Shift+F) for project-wide search
Use "Go to Symbol" (Ctrl+Alt+Shift+N) to find classes/functions
Enable "Compact Middle Packages" in Project view
Generate for better IDE support:
This creates compile_commands.json that IDEs use for accurate code intelligence.
Rippled uses various documentation styles:
Doxygen-Style Comments:
Inline Comments:
README Files:
Design Documents:
Dev Null Productions Source Code Guide:
Comprehensive walkthrough of rippled codebase
Available online
Covers architecture and key components
XRP Ledger Dev Portal:
https://xrpl.org/docs
Protocol documentation
API reference
Objective: Practice navigation skills by locating and understanding specific code.
Task 1: Find Payment Transactor
Goal: Locate and read the Payment transactor implementation
Steps:
Navigate to transaction implementations:
Open Payment.cpp
Find the three key methods:
Payment::preflight()
Payment::preclaim()
Payment::doApply()
Answer:
What checks are performed in preflight?
What ledger state does preclaim verify?
What does doApply() do?
Task 2: Trace account_info RPC
Goal: Understand how account_info RPC works
Steps:
Find the handler:
Open AccountInfo.cpp
Trace the execution:
How is the account parameter extracted?
How is account data retrieved?
What information is returned?
Follow the keylet usage:
Find keylet::account definition:
Task 3: Find Consensus Round Start
Goal: Locate where consensus rounds begin
Steps:
Search for consensus entry point:
Find RCLConsensus usage:
Trace from NetworkOPs:
Find where consensus is triggered
Locate initial transaction set building
Find proposal creation
Task 4: Understand Transaction Propagation
Goal: Follow how transactions propagate through the network
Steps:
Start at submission:
Find open ledger application:
Find relay function:
Trace message creation:
How is tmTRANSACTION message created?
How is it sent to peers?
Task 5: Explore Ledger Closure
Goal: Understand ledger close process
Steps:
Find LedgerMaster:
Look for close-related methods:
Find doApply function in consensus:
Trace the complete sequence:
Consensus reached
Transactions applied
Ledger hash calculated
Validations created
VS Code:
CLion:
✅ Directory Structure: Understand the organization (app/, protocol/, consensus/, overlay/)
✅ Naming Conventions: Learn prefixes (ST*, SLE, TER, SF*)
✅ Code Patterns: Master Keylets, Views, Application reference pattern
✅ Search Tools: Use grep, IDE features, and type following
✅ Entry Points: Start from main(), RPC handlers, or transaction types
✅ Read Headers First: Understand the interface before implementation
✅ Follow Data Flow: Trace how data moves through functions
✅ Use Tests: Tests show intended usage
✅ IDE Setup: Configure properly for code intelligence
✅ Documentation: Read in-source docs and external guides
✅ Quick Location: Find any function or class in seconds
✅ Code Understanding: Comprehend complex code quickly
✅ Debugging: Trace execution paths efficiently
✅ Contributing: Navigate confidently when making changes
XRP Ledger Dev Portal: xrpl.org/docs
Rippled Repository: github.com/XRPLF/rippled
Build Instructions: github.com/XRPLF/rippled/blob/develop/BUILD.md
Dev Null Productions Source Code Guide: Comprehensive rippled walkthrough
In-Source Documentation: src/ripple/README.md and docs/ directory
Code Comments: Doxygen-style documentation throughout
Visual Studio Code: Free, excellent C++ support
CLion: Powerful C++ IDE (commercial)
grep/ag/ripgrep: Command-line search tools
ctags/cscope: Code indexing tools
Application Layer - Understanding the overall architecture
Transactors - How to read transaction implementations
Debugging Tools - Tools for exploring code at runtime
Debugging is an essential skill for any Rippled developer. Whether you're tracking down a subtle transaction processing bug, investigating network connectivity issues, or optimizing performance, having a solid understanding of debugging tools and techniques is critical. The complexity of distributed systems like the XRP Ledger means that effective debugging often requires multiple approaches—from analyzing logs to using debuggers to running controlled experiments.
This deep dive covers the comprehensive toolkit available for debugging Rippled, from basic logging to advanced profiling techniques. You'll learn how to diagnose issues quickly, understand system behavior, and develop confidence in working with the Rippled codebase.
Transactors are the heart of transaction processing in the XRP Ledger. Every transaction type—from simple XRP payments to complex escrow operations—is implemented as a Transactor, a C++ class that defines how that transaction is validated and executed. Understanding the transactor framework is essential for anyone who wants to contribute to the core protocol, implement custom transaction types, or debug transaction-related issues.
The transactor architecture ensures that all transactions undergo rigorous validation before modifying ledger state, maintaining the integrity and security that makes the XRP Ledger reliable for financial applications.
Network can tolerate up to 20% faulty validators
Examples:
- 100 validators → Can handle 20 failures
- 50 validators → Can handle 10 failures
- 35 validators (XRP Ledger mainnet) → Can handle 7 failures
# Validator List (maintained by XRP Ledger Foundation)
# Format: validator_public_key [optional_comment]
nHUon2tpyJEHHYGmxqeGu37cvPYHzrMtUNQFVdCgGNvYkr4k
nHBidG3pZK11zQD6kpNDoAhDxH6WLGui6ZxSbUx7LSqLHsgzMPe
nHUcNC5ni7XjVYfCMe38Rm3KQaq27jw7wJpcUYdo4miWwpNePRTw
nHU95JxeaHJoSdpE7R49Mxp4611Yk5yL9SGEc12UDJLr4oEUN
# ... more validators
# Optional: Add custom validators
# nH... My Custom Validator
[validators_file]
validators.txt
[validator_list_sites]
https://vl.ripple.com
https://vl.xrplf.org
[validator_list_keys]
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
Open Phase (20-50s) → Establish Phase (2-4s) → Accepted Phase (instant) → Validated Phase (1-2s)
        ↓                      ↓                        ↓                        ↓
   Collect TXs         Exchange Proposals        Reach Agreement          Confirm Ledger
// Transactions entering open ledger
void NetworkOPs::processTransaction(
std::shared_ptr<Transaction> const& transaction)
{
// Apply to open ledger
auto const result = app_.openLedger().modify(
[&](OpenView& view)
{
return transaction->apply(app_, view);
});
if (result.second) // Transaction applied successfully
{
// Relay to network
app_.overlay().relay(transaction);
}
}
// Simplified proposal structure
struct ConsensusProposal
{
uint256 previousLedger; // Hash of previous ledger
uint256 position; // Hash of proposed transaction set
NetClock::time_point closeTime; // Proposed close time
PublicKey publicKey; // Validator's public key
Signature signature; // Proposal signature
};
Round 1: 50% threshold, 2 second timer
Round 2: 65% threshold, 3 second timer (50% increase)
Round 3: 80% threshold, 4.5 second timer
Round 4: 95% threshold, 6.75 second timer
...
// Simplified consensus acceptance check
bool hasConsensus(ConsensusMode mode, int validations)
{
if (mode == ConsensusMode::proposing)
{
// Need 80% of UNL to agree
return validations >= (unlSize_ * 4 / 5);
}
return false;
}
struct STValidation
{
uint256 ledgerHash; // Hash of validated ledger
uint32 ledgerSequence; // Ledger sequence number
NetClock::time_point signTime; // When validation was signed
PublicKey publicKey; // Validator's public key
Signature signature; // Validation signature
bool full; // Full validation vs partial
};
Time 0s: Open Phase begins
↓
... transactions accumulate ...
↓
Time 20-50s: Consensus triggered (sufficient transactions or timer)
↓
Time 0s (consensus start): Initial proposals broadcast
↓
Time 2s: Round 1 complete, update proposals
↓
Time 5s: Round 2 complete, update proposals
↓
Time 9.5s: Round 3 complete, 80% threshold reached
↓
Time 9.5s: Accepted phase - consensus reached
↓
Time 9.5-11.5s: Validators apply transactions and create validations
↓
Time 11.5s: Validation phase - ledger fully validated
↓
Time 11.5s: Next open phase begins
Account A has 100 XRP
Transaction 1: Send 60 XRP to B
Transaction 2: Send 60 XRP to C
Order 1 (TX1 then TX2):
- TX1 succeeds: A has 40 XRP, B has 60 XRP
- TX2 fails: insufficient balance
Order 2 (TX2 then TX1):
- TX2 succeeds: A has 40 XRP, C has 60 XRP
- TX1 fails: insufficient balance
Different results!
// Canonical transaction ordering
bool txOrderCompare(STTx const& tx1, STTx const& tx2)
{
// First, sort by account
if (tx1.getAccountID(sfAccount) < tx2.getAccountID(sfAccount))
return true;
if (tx1.getAccountID(sfAccount) > tx2.getAccountID(sfAccount))
return false;
// Same account, sort by sequence number
return tx1.getSequence() < tx2.getSequence();
}
// Calculate transaction set hash
uint256 calculateTxSetHash(std::vector<STTx> const& transactions)
{
// Sort transactions canonically
auto sortedTxs = transactions;
std::sort(sortedTxs.begin(), sortedTxs.end(), txOrderCompare);
// Hash all transactions together
Serializer s;
for (auto const& tx : sortedTxs)
{
s.addBitString(tx.getHash());
}
return s.getSHA512Half();
}
Validator A proposes: {TX1, TX2, TX3, TX4, TX5}
Validator B proposes: {TX1, TX2, TX3, TX6, TX7}
Validator C proposes: {TX1, TX2, TX4, TX5, TX8}
Agreement:
- TX1: 100% (all three)
- TX2: 100% (all three)
- TX3: 67% (A, B)
- TX4: 67% (A, C)
- TX5: 67% (A, C)
- TX6: 33% (B only)
- TX7: 33% (B only)
- TX8: 33% (C only)
Validator A proposes: {TX1, TX2, TX3, TX4, TX5} (drops nothing, all >50%)
Validator B proposes: {TX1, TX2, TX3} (drops TX6, TX7)
Validator C proposes: {TX1, TX2, TX4, TX5} (drops TX8)
Agreement:
- TX1: 100%
- TX2: 100%
- TX3: 67% (A, B)
- TX4: 67% (A, C)
- TX5: 67% (A, C)
All validators propose: {TX1, TX2}
Agreement:
- TX1: 100% ✓ (exceeds 80%)
- TX2: 100% ✓ (exceeds 80%)
Consensus reached on {TX1, TX2}
TX3, TX4, TX5 deferred to next ledger
// Simplified close trigger logic
bool shouldCloseLedger()
{
auto const elapsed = std::chrono::steady_clock::now() - lastClose_;
// Minimum close interval elapsed?
if (elapsed < minCloseInterval_)
return false;
// Sufficient transactions?
if (openLedger_.size() >= closeThreshold_)
return true;
// Maximum close interval elapsed?
if (elapsed >= maxCloseInterval_)
return true;
return false;
}
// Apply transactions in canonical order
for (auto const& tx : canonicalOrder(agreedTxSet))
{
auto const result = applyTransaction(tx, view);
// Record metadata
}
// Hash includes:
// - Parent ledger hash
// - Transaction set hash
// - Account state hash
// - Close time
// - Ledger sequence
auto ledgerHash = computeLedgerHash(
parentHash,
txSetHash,
stateHash,
closeTime,
ledgerSeq);
STValidation validation;
validation.setLedgerHash(ledgerHash);
validation.setLedgerSequence(ledgerSeq);
validation.setSignTime(now);
validation.sign(validatorKey);
overlay().broadcast(validation);
// Wait for validations from UNL
while (validationCount < unlSize_ * 4 / 5)
{
// Process incoming validations
auto val = receiveValidation();
if (val.getLedgerHash() == ledgerHash)
validationCount++;
}
// Ledger is now fully validated
ledgerMaster_.setFullyValidated(ledger);
template <class Adaptor>
class Consensus
{
public:
// Start new consensus round
void startRound(
LedgerHash const& prevLedgerHash,
Ledger const& prevLedger,
NetClock::time_point closeTime);
// Process peer proposal
void peerProposal(
NetClock::time_point now,
ConsensusProposal const& proposal);
// Simulate new round
void timerEntry(NetClock::time_point now);
// Check if consensus reached
bool haveConsensus() const;
private:
// Current round state
ConsensusPhase phase_;
std::map<NodeID, ConsensusProposal> peerProposals_;
std::set<TxID> disputes_;
TxSet ourPosition_;
};
class RCLConsensus
{
public:
// Handle consensus result
void onAccept(
Result const& result,
RCLCxLedger const& prevLedger,
NetClock::duration closeResolution,
CloseTimes const& rawCloseTimes,
ConsensusMode mode);
// Create initial position
RCLTxSet getInitialPosition(
RCLCxLedger const& prevLedger);
// Check if we should close ledger
void checkClose(NetClock::time_point now);
};
// In NetworkOPs or LedgerMaster
void beginConsensus(LedgerHash const& prevHash)
{
// Build initial transaction set
auto initialSet = buildTxSet();
// Start consensus round
consensus_.startRound(
prevHash,
prevLedger,
suggestCloseTime());
}
// Overlay receives tmPROPOSE_LEDGER message
void onProposal(std::shared_ptr<protocol::TMProposeSet> const& proposal)
{
// Validate proposal signature
if (!verifyProposal(proposal))
return;
// Pass to consensus engine
consensus_.peerProposal(
now(),
parseProposal(proposal));
}
// Create validation
auto validation = std::make_shared<STValidation>(
ledgerHash,
signTime,
publicKey,
nodeID,
[&](STValidation& v)
{
v.sign(secretKey);
});
// Broadcast to network
overlay().send(validation);
[rpc_startup]
{ "command": "log_level", "partition": "Consensus", "severity": "trace" }
{ "command": "log_level", "partition": "LedgerConsensus", "severity": "trace" }
rippled --conf=rippled.cfg
tail -f /var/log/rippled/debug.log | grep -E "Consensus|Proposal|Validation"
"Consensus":"open ledger started, seq=12345"
"Consensus":"Starting consensus round, prevLedger=ABCD..."
"Consensus":"Received proposal from nHU...., position=XYZ..."
"Consensus":"Transaction TX1 has 85% agreement"
"Consensus":"Transaction TX2 has 45% agreement (disputed)"
"Consensus":"Consensus reached on transaction set, hash=..."
"Consensus":"Created validation for ledger 12345, hash=..."
"Validations":"Received validation from nHU...., ledger=12345"
"LedgerConsensus":"Ledger 12345 fully validated with 28/35 validations"
T+0s: Open phase begins
T+25s: Consensus triggered
T+27s: Consensus reached (2s consensus time)
T+29s: Ledger fully validated (2s validation time)
T+29s: Next open phase begins
Total cycle: 29 seconds
cd rippled/src/ripple/app/tx/impl/
ls *.cpp
grep -r "doAccountInfo" src/ripple/app/rpc/handlers/
grep -r "startRound" src/ripple/consensus/
grep -r "RCLConsensus" src/ripple/app/consensus/
grep -r "submitTransaction" src/ripple/app/misc/
grep -r "openLedger().modify" src/ripple/
grep -r "void.*relay.*Transaction" src/ripple/overlay/
find src/ripple/ -name "LedgerMaster.h"
grep "close" src/ripple/app/ledger/LedgerMaster.h
grep -r "consensus.*doApply" src/ripple/
rippled/
├── src/ # Source code
│ ├── ripple/ # Main Rippled code
│ └── test/ # Unit and integration tests
├── Builds/ # Build configurations
├── bin/ # Compiled binaries
├── cfg/ # Configuration examples
├── docs/ # Documentation
├── external/ # Third-party dependencies
└── CMakeLists.txt # CMake build configuration
src/ripple/
├── app/ # Application layer (80% of code)
│ ├── consensus/ # Consensus implementation
│ ├── ledger/ # Ledger management
│ ├── main/ # Application initialization
│ ├── misc/ # Network operations, utilities
│ ├── paths/ # Payment path finding
│ ├── rpc/ # RPC command handlers
│ └── tx/ # Transaction implementations
│ └── impl/ # Transactor implementations
├── basics/ # Fundamental utilities
│ ├── base58/ # Base58 encoding
│ ├── contract/ # Assertions and contracts
│ └── log/ # Logging infrastructure
├── beast/ # Boost.Beast networking (vendored)
├── conditions/ # Crypto-conditions (escrow)
├── consensus/ # Generic consensus framework
├── core/ # Core services (Config, JobQueue)
├── crypto/ # Cryptographic functions
├── json/ # JSON handling
├── ledger/ # Ledger data structures
├── net/ # Network utilities
├── nodestore/ # Persistent storage layer
├── overlay/ # Peer-to-peer networking
├── protocol/ # Protocol definitions
│ ├── messages.proto # Protobuf message definitions
│ ├── TxFormats.cpp # Transaction format definitions
│ └── SField.cpp # Serialized field definitions
├── resource/ # Resource management
├── rpc/ # RPC infrastructure
└── shamap/ # SHAMap (Merkle tree) implementation
// Represents a transaction
class STTx : public STObject
{
TransactionType getTransactionType() const;
AccountID getAccountID(SField const& field) const;
STAmount getFieldAmount(SField const& field) const;
};
// Base for all serialized objects
class STObject
{
void add(Serializer& s) const;
Json::Value getJson(JsonOptions options) const;
};
// Represents XRP or issued currency amount
class STAmount
{
bool isXRP() const;
Issue const& issue() const;
std::int64_t mantissa() const;
};
// Validator signature on a ledger
class STValidation
{
uint256 getLedgerHash() const;
std::uint32_t getLedgerSeq() const;
};
// Array of STObjects
class STArray : public STBase
{
std::size_t size() const;
STObject const& operator[](std::size_t i) const;
};
// An entry in the ledger state
class SLE
{
LedgerEntryType getType() const;
Keylet const& key() const;
// Field accessors
STAmount const& getFieldAmount(SField const& field) const;
AccountID getAccountID(SField const& field) const;
};
// Result code enumeration (values illustrative; see src/ripple/protocol/TER.h)
enum TER : int
{
// Success
tesSUCCESS = 0,
// Malformed (tem): -299 to -200, permanent failure
temMALFORMED = -299,
temBAD_FEE,
temBAD_SIGNATURE,
// Failure (tef): -199 to -100, cannot apply to the current ledger
tefFAILURE = -199,
tefPAST_SEQ,
// Retry (ter): -99 to -1, may succeed later
terRETRY = -99,
terQUEUED = -89,
// Claimed fee (tec): positive, 100-255; failed but the fee was charged
tecCLAIM = 100,
tecUNFUNDED_PAYMENT = 104,
tecNO_TARGET = 138,
};
// Field definitions
extern SField const sfAccount;
extern SField const sfDestination;
extern SField const sfAmount;
extern SField const sfFee;
extern SField const sfSequence;
extern SField const sfSigningPubKey;
extern SField const sfTxnSignature;
enum LedgerEntryType
{
ltACCOUNT_ROOT = 'a',
ltOFFER = 'o',
ltRIPPLE_STATE = 'r',
ltESCROW = 'u',
ltPAYCHAN = 'x',
};
// Factory functions for creating keylets
Keylet account(AccountID const& id);
Keylet offer(AccountID const& id, std::uint32_t seq);
Keylet escrow(AccountID const& src, std::uint32_t seq);
class RPCHandler;
class RPCContext;
// Create keylet for account
AccountID const accountID = ...;
Keylet const k = keylet::account(accountID);
// Read from ledger (immutable)
auto const sle = view.read(k);
if (!sle)
return tecNO_ACCOUNT;
// Access fields
auto const balance = (*sle)[sfBalance];
auto const sequence = (*sle)[sfSequence];
// In src/ripple/protocol/Indexes.h
namespace keylet {
Keylet account(AccountID const& id);
Keylet offer(AccountID const& id, std::uint32_t seq);
Keylet line(AccountID const& id1, AccountID const& id2, Currency const& currency);
Keylet escrow(AccountID const& src, std::uint32_t seq);
Keylet payChan(AccountID const& src, AccountID const& dst, std::uint32_t seq);
}
void analyzeAccount(ReadView const& view, AccountID const& id)
{
// Can only read, cannot modify
auto const sle = view.read(keylet::account(id));
// Safe for concurrent access
auto balance = (*sle)[sfBalance];
}
TER modifyAccount(ApplyView& view, AccountID const& id)
{
// Can read and modify
auto sle = view.peek(keylet::account(id));
if (!sle)
return tecNO_ACCOUNT;
// Modify
(*sle)[sfBalance] = newBalance;
(*sle)[sfSequence] = (*sle)[sfSequence] + 1;
// Commit changes
view.update(sle);
return tesSUCCESS;
}
// Required field (asserts if missing)
auto const account = tx[sfAccount];
// Optional field (returns std::optional)
auto const destTag = tx[~sfDestinationTag];
if (destTag)
useDestinationTag(*destTag);
// Optional with default
auto const flags = tx[~sfFlags].value_or(0);
// Unique ownership
std::unique_ptr<LedgerMaster> ledgerMaster_;
// Shared ownership
std::shared_ptr<Ledger const> ledger = getLedger();
// Weak references
std::weak_ptr<Peer> weakPeer_;
class SomeComponent
{
public:
SomeComponent(Application& app)
: app_(app)
, j_(app.journal("SomeComponent"))
{
}
void doWork()
{
// Access other components via app_
auto& ledgerMaster = app_.getLedgerMaster();
auto& overlay = app_.overlay();
}
private:
Application& app_;
beast::Journal j_;
};
# Find definition of a function
grep -r "void processTransaction" src/ripple/
# Find class definition
grep -r "class NetworkOPs" src/ripple/# Find all uses of a variable
grep -r "ledgerMaster_" src/ripple/app/
# Case-insensitive search
grep -ri "transaction" src/ripple/app/tx/# Find Payment transactor
grep -r "class Payment" src/ripple/app/tx/impl/
# Find all transactor implementations
ls src/ripple/app/tx/impl/*.cpp
# Find account_info handler
grep -r "doAccountInfo" src/ripple/app/rpc/handlers/
# Find STTx usage
grep -r "STTx" src/ripple/ | grep -v ".h:" | head -20
# Find function taking STTx parameter
grep -r "STTx const&" src/ripple/# Find where transactions are submitted
grep -r "submitTransaction" src/ripple/
# Follow to NetworkOPs
cat src/ripple/app/misc/NetworkOPs.h | grep submitTransaction
1. Client calls "account_info" RPC
2. Find handler: src/ripple/app/rpc/handlers/AccountInfo.cpp
3. Handler function: doAccountInfo()
4. Calls: view.read(keylet::account(accountID))
5. View implementation: src/ripple/ledger/ReadView.h
Payment.cpp - Payment transactions
CreateOffer.cpp - Offer creation
CancelOffer.cpp - Offer cancellation
SetTrust.cpp - Trust line creation/modification
SetAccount.cpp - Account settings
Escrow.cpp - Escrow operations
PayChan.cpp - Payment channels
SetSignerList.cpp - Multi-signature configuration
# If you know the transaction type
ls src/ripple/app/tx/impl/ | grep -i payment
# List all transaction implementations
ls src/ripple/app/tx/impl/*.cpp
AccountInfo.cpp - account_info command
AccountLines.cpp - account_lines command
AccountTx.cpp - account_tx command
Tx.cpp - tx command
Submit.cpp - submit command
LedgerCurrent.cpp - ledger_current command
ServerInfo.cpp - server_info command
# Find specific handler
ls src/ripple/app/rpc/handlers/ | grep -i account
# Find handler function
grep -r "doAccountInfo" src/ripple/app/rpc/handlers/# Find header
find src/ripple/ -name "NetworkOPs.h"
# Find implementation
find src/ripple/ -name "NetworkOPs.cpp"
// In LedgerMaster.h
class LedgerMaster
{
public:
// Public interface - what can be called
std::shared_ptr<Ledger const> getValidatedLedger();
std::shared_ptr<Ledger const> getClosedLedger();
void addValidatedLedger(std::shared_ptr<Ledger const> const& ledger);
// ...
private:
// Implementation details - how it works
std::shared_ptr<Ledger> mCurrentLedger;
std::shared_ptr<Ledger> mClosedLedger;
// ...
};
// Example: Following a payment
void NetworkOPs::submitTransaction(STTx const& tx)
{
// 1. Initial validation
auto const result = checkTransaction(tx);
if (!isTesSuccess(result))
return;
// 2. Apply to open ledger
app_.openLedger().modify([&](OpenView& view)
{
return Transactor::apply(app_, view, tx); // → Go here
});
// 3. Broadcast
app_.overlay().relay(tx); // → And here
}
TER Payment::doApply()
{
// Key decision: XRP or issued currency?
if (isXRP(amount_))
{
// XRP path
return payXRP();
}
else
{
// Issued currency path
return payIssued();
}
}
// In Payment_test.cpp
void testPayment()
{
// Setup
Env env(*this);
Account alice{"alice"};
Account bob{"bob"};
env.fund(XRP(10000), alice, bob);
// Execute
env(pay(alice, bob, XRP(100)));
// Verify
BEAST_EXPECT(env.balance(alice) == XRP(9900));
BEAST_EXPECT(env.balance(bob) == XRP(10100));
}
{
"C_Cpp.default.configurationProvider": "ms-vscode.cmake-tools",
"C_Cpp.default.compileCommands": "${workspaceFolder}/build/compile_commands.json",
"files.associations": {
"*.h": "cpp",
"*.cpp": "cpp"
},
"search.exclude": {
"**/build": true,
"**/external": true
}
}
cd rippled
mkdir build && cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ..
/**
* @brief Apply a transaction to a view
*
* @param app Application instance
* @param view Ledger view to apply to
* @param tx Transaction to apply
* @return Pair of result code and success flag
*/
std::pair<TER, bool>
applyTransaction(
Application& app,
OpenView& view,
STTx const& tx);// Check if destination requires a tag
if (sleDest->getFlags() & lsfRequireDestTag)
{
if (!ctx.tx.isFieldPresent(sfDestinationTag))
return tecDST_TAG_NEEDED;
}
src/ripple/README.md
src/ripple/app/README.md
src/ripple/consensus/README.md
docs/consensus.md
docs/build-unix.md
Transaction Types: src/ripple/app/tx/impl/
RPC Handlers: src/ripple/app/rpc/handlers/
Consensus: src/ripple/consensus/ and src/ripple/app/consensus/
Overlay Network: src/ripple/overlay/
Ledger Management: src/ripple/app/ledger/
Application Core: src/ripple/app/main/
Protocol Definitions: src/ripple/protocol/
Tests: src/test/
ST* classes: Serialized types (STTx, STAmount, STObject)
SLE: Serialized Ledger Entry
TER: Transaction Engine Result codes
SF*: Serialized Field identifiers (sfAccount, sfAmount)
Keylet: Keys for ledger object access
*View: Ledger view abstractions
*Imp: Implementation classes (PeerImp, OverlayImpl)
# Find class definition
grep -r "class ClassName" src/ripple/
# Find function implementation
grep -r "ReturnType functionName(" src/ripple/
# Find where something is used
grep -r "variableName" src/ripple/
# Case-insensitive search
grep -ri "searchterm" src/ripple/
# Search in specific file types
grep -r --include="*.cpp" "searchterm" src/ripple/
# Exclude directories
grep -r --exclude-dir="test" "searchterm" src/ripple/Go to Definition: F12 or Ctrl+Click
Find References: Shift+F12
Go to Symbol: Ctrl+T
Search in Files: Ctrl+Shift+F
Go to Declaration: Ctrl+B
Go to Implementation: Ctrl+Alt+B
Find Usages: Alt+F7
Search Everywhere: Double Shift
Rippled includes a sophisticated logging system that provides detailed visibility into system behavior. Understanding how to configure and use logging effectively is the foundation of debugging Rippled.
Partitions: Logs are organized by subsystem (partition)
Severity Levels: Each log entry has a severity level
Timestamps: All logs include precise timestamps
Context: Logs include relevant context (account IDs, ledger numbers, etc.)
From most to least verbose:
Usage Guidelines:
Production: Use warning or error to minimize disk I/O
Development: Use debug or trace for active debugging
Investigation: Temporarily enable trace for specific partitions
Major subsystems have their own partitions:
In Configuration File
Edit rippled.cfg:
Via RPC Command
Dynamic adjustment without restart:
Programmatically
In code:
Default Locations:
Linux: /var/log/rippled/debug.log
macOS: ~/Library/Application Support/rippled/debug.log
Custom: Set in rippled.cfg:
Configure log rotation to prevent disk space issues:
Using system tools (Linux):
Tail Live Logs:
Filter by Partition:
Filter by Severity:
Timestamp Range:
Common Patterns:
Standalone mode runs Rippled as a single-node network where you have complete control:
No peers: Runs without connecting to other nodes
Manual ledger close: You trigger ledger closes
Deterministic: No network randomness
Fast: No consensus delays
Isolated: Perfect for testing
Configuration for Standalone:
Check Status:
Look for:
Submit Transaction:
Manually Close Ledger:
This immediately closes the current ledger and advances to the next one.
Check Transaction:
Deterministic Behavior:
No network randomness
Repeatable tests
Predictable timing
Complete Control:
Manual ledger progression
No unexpected transactions
Isolated environment
Fast Iteration:
Instant ledger closes
No waiting for consensus
Quick test cycles
Safe Experimentation:
Can't affect mainnet
Easy to reset (delete database)
Test dangerous operations safely
Install GDB:
Compile with Debug Symbols:
Launch with GDB:
Starting:
Breakpoints:
Execution Control:
Inspection:
Advanced:
Example Session:
Core Dumps:
Enable core dumps:
Run program until crash:
Analyze core dump:
Common Crash Patterns:
Successful Payment:
Failed Payment:
Normal Consensus Round:
Disputed Transaction:
Peer Connection:
Connection Failure:
Slow Ledger Close:
Using perf (Linux):
Using Instruments (macOS):
Interpreting Results:
Look for:
Hot functions (high CPU usage)
Unexpected call patterns
Inefficient algorithms
Lock contention
Using Valgrind:
Using AddressSanitizer:
Wireshark:
Filter by:
Protocol messages
Connection handshakes
Bandwidth usage
Measuring Latency:
Run All Tests:
Run Specific Test Suite:
Run with Verbose Output:
Writing Tests:
Test Network Setup:
Test Scenarios:
Multi-node consensus
Network partitions
Peer discovery
Transaction propagation
Symptoms: Transaction stuck in pending state
Debug Steps:
Check transaction status:
Check for sequence gaps:
Check logs for rejection:
Verify fee is sufficient:
Check LastLedgerSequence:
Symptoms: Ledger not closing, network stalled
Debug Steps:
Check validator connectivity:
Examine consensus logs:
Check network connectivity:
Look for disputes:
Symptoms: Rippled consuming excessive memory
Debug Steps:
Check ledger history:
Review configuration:
Profile memory usage:
Check for leaks:
Symptoms: Ledgers taking >5 seconds to close
Debug Steps:
Enable performance logging:
Check transaction count:
Profile CPU usage:
Check database performance:
Objective: Use debugging tools to diagnose and fix a transaction issue.
Part 1: Setup
Step 1: Start standalone mode with detailed logging
Step 2: Create test scenario with intentional issue
Part 2: Debugging Process
Step 3: Submit and observe failure
Step 4: Examine logs
Look for:
Step 5: Verify balance
Step 6: Calculate required amount
Step 7: Fix and resubmit
Part 3: Advanced Debugging
Step 8: Debug with GDB
Step 9: Set breakpoints
Step 10: Submit transaction (in another terminal)
Step 11: Examine state in GDB
Analysis Questions
What error code was returned?
tecUNFUNDED_PAYMENT
At which validation phase did it fail?
Preclaim (ledger state check)
What was the root cause?
Insufficient balance for payment + fee + reserve
How would you prevent this in client code?
Check balance before submitting
Account for reserve requirements
Include fee in calculation
What logs helped identify the issue?
Transaction partition trace logs
Preclaim failure message
✅ Logging System: Understand partitions, severity levels, and configuration
✅ Standalone Mode: Essential for controlled testing and debugging
✅ GDB: Set breakpoints, inspect variables, trace execution
✅ Log Analysis: Read and interpret logs to diagnose issues
✅ Performance Profiling: Identify bottlenecks and optimize
✅ Start Simple: Use logs before reaching for debugger
✅ Reproduce Reliably: Use standalone mode for consistent reproduction
✅ Isolate Issues: Narrow down to specific component
✅ Read the Code: Logs point you to code, understand the implementation
✅ Test Thoroughly: Write unit tests for bugs you fix
✅ Logs: First line of defense, always available
✅ Standalone Mode: Controlled environment, fast iteration
✅ GDB: Deep inspection, understanding execution flow
✅ Profiling: Performance issues, optimization
✅ Tests: Regression prevention, continuous validation
XRP Ledger Dev Portal: xrpl.org/docs
Rippled Repository: github.com/XRPLF/rippled
Build Instructions: github.com/XRPLF/rippled/blob/develop/BUILD.md
GDB Documentation: sourceware.org/gdb/documentation
Valgrind Manual: valgrind.org/docs/manual
Linux Perf Wiki: perf.wiki.kernel.org
src/ripple/core/impl/JobQueue.cpp - Job queue debugging
src/ripple/app/main/Application.cpp - Application startup debugging
src/test/ - Unit test examples
Application Layer - Understanding system architecture
Transaction Lifecycle - Understanding transaction flow
Codebase Navigation - Finding code to debug
auto const sleAccept = ledger->read(keylet::account(accountID));
grep -r "Keylet account" src/ripple/protocol/
trace - Extremely detailed, every function call
debug - Detailed debugging information
info - General informational messages
warning - Warning conditions
error - Error conditions
fatal - Fatal errors that cause termination
Ledger - Ledger operations
LedgerMaster - Ledger master coordination
Transaction - Transaction processing
Consensus - Consensus rounds
Overlay - P2P networking
Peer - Individual peer connections
Protocol - Protocol message handling
RPC - RPC request handling
JobQueue - Job queue operations
NodeObject - NodeStore operations
Application - Application lifecycle
OrderBookDB - Order book database
PathRequest - Path finding
ValidatorList - Validator list management
Amendments - Amendment processing
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
{ "command": "log_level", "partition": "Transaction", "severity": "trace" }
{ "command": "log_level", "partition": "Consensus", "severity": "debug" }# Set all partitions to warning
rippled log_level warning
# Set specific partition to trace
rippled log_level Transaction trace
# Set multiple partitions
rippled log_level Consensus debug
rippled log_level Overlay debug
rippled log_level Peer trace
// Get logger for this partition
beast::Journal j = app_.journal("MyComponent");
// Log at different levels
JLOG(j.trace()) << "Entering function with param: " << param;
JLOG(j.debug()) << "Processing transaction: " << tx.getTransactionID();
JLOG(j.info()) << "Ledger closed: " << ledger.seq();
JLOG(j.warning()) << "Unusual condition detected";
JLOG(j.error()) << "Failed to process: " << error;
JLOG(j.fatal()) << "Critical error, shutting down";
[debug_logfile]
/path/to/custom/debug.log
[debug_logfile]
/var/log/rippled/debug.log
# Rotate when file reaches 100MB
# Keep 10 old log files
# /etc/logrotate.d/rippled
/var/log/rippled/debug.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
copytruncate
}
tail -f /var/log/rippled/debug.log
grep "Transaction:" /var/log/rippled/debug.log
grep "ERR" /var/log/rippled/debug.log
# Logs between specific times
awk '/2025-01-15 10:00/,/2025-01-15 11:00/' /var/log/rippled/debug.log
# Find transaction processing
grep "Transaction.*tesSUCCESS" /var/log/rippled/debug.log
# Find consensus rounds
grep "Consensus.*Starting round" /var/log/rippled/debug.log
# Find peer connections
grep "Overlay.*Connected to peer" /var/log/rippled/debug.log
# Find errors
grep -E "ERROR|ERR|Fatal" /var/log/rippled/debug.logrippled --standalone --conf=/path/to/rippled.cfg[server]
port_rpc_admin_local
port_ws_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
# No peer port needed in standalone
[node_db]
type=NuDB
path=/var/lib/rippled/standalone/db
[database_path]
/var/lib/rippled/standalone
rippled server_info
{
"result": {
"info": {
"build_version": "1.9.0",
"complete_ledgers": "1-5",
"peers": 0,
"server_state": "proposing",
"standalone": true
}
}
}
rippled submit '{
"TransactionType": "Payment",
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca",
"Destination": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w",
"Amount": "1000000",
"Fee": "12",
"Sequence": 1
}'
rippled ledger_accept
rippled tx <hash>
# 1. Start standalone
rippled --standalone --conf=standalone.cfg
# 2. Fund accounts (in another terminal)
rippled wallet_propose
# 3. Submit transactions
rippled submit <signed_tx>
# 4. Close ledger to include transaction
rippled ledger_accept
# 5. Verify transaction
rippled tx <hash>
# 6. Repeat steps 3-5 as needed
# Linux
sudo apt-get install gdb
# macOS
brew install gdb
cd rippled/build
cmake -DCMAKE_BUILD_TYPE=Debug ..
cmake --build . --target rippled
gdb --args ./rippled --conf=/path/to/rippled.cfg --standalone
(gdb) run # Start program
(gdb) start # Start and break at main()
(gdb) break Payment.cpp:123 # Break at file:line
(gdb) break Payment::doApply # Break at function
(gdb) break Transactor.cpp:apply # Break at method
(gdb) info breakpoints # List all breakpoints
(gdb) delete 1 # Delete breakpoint #1
(gdb) disable 2 # Disable breakpoint #2
(gdb) continue # Continue execution
(gdb) next # Step over (one line)
(gdb) step # Step into (enter function)
(gdb) finish # Run until current function returns
(gdb) print variable # Print variable value
(gdb) print *pointer # Dereference pointer
(gdb) print object.method() # Call method
(gdb) backtrace # Show call stack
(gdb) frame 3 # Switch to frame #3
(gdb) info locals # Show local variables(gdb) watch variable # Break when variable changes
(gdb) condition 1 i == 5 # Conditional breakpoint
(gdb) commands 1 # Execute commands at breakpoint
# Start GDB with rippled
gdb --args ./rippled --standalone --conf=standalone.cfg
# Set breakpoints
(gdb) break Payment::doApply
(gdb) break Transactor::apply
(gdb) break NetworkOPs::processTransaction
# Run
(gdb) run
# In another terminal, submit transaction
$ rippled submit <signed_tx>
# GDB will break at processTransaction
(gdb) backtrace
#0 NetworkOPs::processTransaction
#1 RPCHandler::doCommand
#2 ...
# Step through
(gdb) next
(gdb) next
# Examine transaction
(gdb) print transaction->getTransactionID()
(gdb) print transaction->getFieldAmount(sfAmount)
# Continue to Payment::doApply
(gdb) continue
# Examine state
(gdb) print account_
(gdb) print ctx_.tx[sfDestination]
(gdb) print view().read(keylet::account(account_))
# Step through payment logic
(gdb) step
(gdb) next
# Check result
(gdb) print result
(gdb) continue
# Set breakpoints in consensus
(gdb) break RCLConsensus::startRound
(gdb) break Consensus::propose
(gdb) break Consensus::peerProposal
# Run
(gdb) run
# When consensus starts
(gdb) print prevLedgerHash
(gdb) print transactions.size()
(gdb) backtrace
# Step through proposal creation
(gdb) step
(gdb) print position_
# Continue to peer proposal handling
(gdb) continue
(gdb) print proposal.position()
(gdb) print peerID
ulimit -c unlimited
./rippled --standalone --conf=standalone.cfg
# ... crash occurs
gdb ./rippled core
(gdb) backtrace
(gdb) frame 0
(gdb) info locals
# Null pointer dereference
(gdb) print pointer
$1 = 0x0
(gdb) backtrace
# Look for where pointer should have been set
# Segmentation fault
(gdb) print array[index]
# Check if index is out of bounds
# Assert failure
(gdb) backtrace
# Look at assertion condition and surrounding code
2025-01-15 10:23:45.123 Transaction:DBG Transaction E08D6E9754... submitted
2025-01-15 10:23:45.125 Transaction:TRC Preflight check passed
2025-01-15 10:23:45.126 Transaction:TRC Preclaim check passed
2025-01-15 10:23:45.127 Transaction:DBG Applied to open ledger: tesSUCCESS
2025-01-15 10:23:45.128 Overlay:TRC Relaying transaction to 18 peers
2025-01-15 10:23:50.234 Consensus:DBG Transaction included in consensus set
2025-01-15 10:23:50.456 Transaction:INF Applied to ledger 75234567: tesSUCCESS
2025-01-15 10:23:45.123 Transaction:DBG Transaction E08D6E9754... submitted
2025-01-15 10:23:45.125 Transaction:TRC Preflight check passed
2025-01-15 10:23:45.126 Transaction:WRN Preclaim check failed: tecUNFUNDED
2025-01-15 10:23:45.127 Transaction:DBG Rejected: insufficient funds
2025-01-15 10:23:50.000 Consensus:INF Starting consensus round
2025-01-15 10:23:50.001 Consensus:DBG Building initial position: 147 transactions
2025-01-15 10:23:50.010 Consensus:TRC Proposal sent: hash=ABC123...
2025-01-15 10:23:52.123 Consensus:TRC Received proposal from nHU...: 145 txns
2025-01-15 10:23:52.125 Consensus:TRC Received proposal from nHB...: 146 txns
2025-01-15 10:23:52.500 Consensus:DBG Agreement: 145/147 transactions (98%)
2025-01-15 10:23:52.501 Consensus:INF Consensus reached on transaction set
2025-01-15 10:23:52.600 LedgerMaster:INF Ledger 75234567 closed
2025-01-15 10:23:54.000 LedgerMaster:INF Ledger 75234567 validated with 28/35 validations
2025-01-15 10:23:50.000 Consensus:INF Starting consensus round
2025-01-15 10:23:52.123 Consensus:DBG Transaction TX123 agreement: 65%
2025-01-15 10:23:54.456 Consensus:DBG Transaction TX123 agreement: 75%
2025-01-15 10:23:56.789 Consensus:WRN Transaction TX123 not included: only 75% agreement
2025-01-15 10:23:56.790 Consensus:INF Consensus reached on transaction set (TX123 excluded)
2025-01-15 10:23:45.123 Overlay:INF Connecting to r.ripple.com:51235
2025-01-15 10:23:45.234 Overlay:DBG TCP connection established
2025-01-15 10:23:45.345 Overlay:TRC TLS handshake complete
2025-01-15 10:23:45.456 Overlay:TRC Protocol handshake: version 2, node nHU...
2025-01-15 10:23:45.567 Overlay:INF Connected to peer nHU... (validator)
2025-01-15 10:23:45.568 Peer:DBG Added to active peers (18/20)
2025-01-15 10:23:45.123 Overlay:INF Connecting to bad-peer.example.com:51235
2025-01-15 10:23:50.123 Overlay:WRN Connection timeout
2025-01-15 10:23:50.124 Overlay:DBG Scheduling reconnect in 10 seconds
2025-01-15 10:23:50.000 LedgerMaster:INF Closing ledger 75234567
2025-01-15 10:23:55.000 LedgerMaster:WRN Ledger close took 5000ms (expected <2000ms)
2025-01-15 10:23:55.001 Transaction:WRN Applied 500 transactions in 4800ms
2025-01-15 10:23:55.002 OrderBookDB:WRN Order book update took 1200ms
# Record profile while running
perf record -g ./rippled --standalone --conf=standalone.cfg
# Generate report
perf report
# Generate flamegraph
perf script | stackcollapse-perf.pl | flamegraph.pl > rippled.svg
# Launch with Instruments
instruments -t "Time Profiler" ./rippled --standalone --conf=standalone.cfg
# Memory leak detection
valgrind --leak-check=full ./rippled --standalone --conf=standalone.cfg
# Memory profiler
valgrind --tool=massif ./rippled --standalone --conf=standalone.cfg
ms_print massif.out.12345
# Compile with sanitizer
cmake -DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_CXX_FLAGS="-fsanitize=address" ..
cmake --build . --target rippled
# Run (crashes on memory errors)
./rippled --standalone --conf=standalone.cfg
# Capture traffic
sudo tcpdump -i any port 51235 -w rippled.pcap
# Analyze with Wireshark
wireshark rippled.pcap
# Ping peer
rippled ping <peer_ip>
# Check peer latency
rippled peers | grep latency
./rippled --unittest
./rippled --unittest=Payment
./rippled --unittest=Consensus
./rippled --unittest --unittest-log
#include <test/jtx.h>
namespace ripple {
namespace test {
class MyTest_test : public beast::unit_test::suite
{
public:
void testBasicOperation()
{
using namespace jtx;
// Create test environment
Env env(*this);
// Create accounts
Account alice{"alice"};
Account bob{"bob"};
env.fund(XRP(10000), alice, bob);
// Test operation
env(pay(alice, bob, XRP(100)));
env.close();
// Verify result
BEAST_EXPECT(env.balance(bob) == XRP(10100));
}
void run() override
{
testBasicOperation();
}
};
BEAST_DEFINE_TESTSUITE(MyTest, app, ripple);
} // namespace test
} // namespace ripple
# Start multiple rippled instances
./rippled --conf=node1.cfg &
./rippled --conf=node2.cfg &
./rippled --conf=node3.cfg &
# Configure them to peer
rippled --conf=node1.cfg connect localhost:51236
rippled --conf=node2.cfg connect localhost:51237
rippled tx <hash>
rippled account_info <account>
# Compare Sequence with expected
grep "<hash>" /var/log/rippled/debug.log
rippled server_info | grep "load_factor"
# Fee should be baseFee * loadFactor
rippled ledger_current
# Compare with transaction's LastLedgerSequence
rippled validators
rippled log_level Consensus trace
tail -f /var/log/rippled/debug.log | grep Consensus
rippled peers
# Verify connected to enough peers
grep "dispute" /var/log/rippled/debug.log
rippled server_info | grep "complete_ledgers"
[node_db]
cache_mb=256 # Reduce if too high
valgrind --tool=massif ./rippled --standalone
ms_print massif.out.12345
valgrind --leak-check=full ./rippled --standalone
rippled log_level LedgerMaster debug
rippled log_level Transaction debug
grep "Applied.*transactions" /var/log/rippled/debug.log
perf record -g ./rippled
perf report
# Check NodeStore backend performance
grep "NodeStore" /var/log/rippled/debug.log
# Configure logging
cat > standalone.cfg << EOF
[server]
port_rpc_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[node_db]
type=NuDB
path=/tmp/rippled_debug
[database_path]
/tmp/rippled_debug
[rpc_startup]
{ "command": "log_level", "severity": "debug" }
{ "command": "log_level", "partition": "Transaction", "severity": "trace" }
EOF
# Start
rippled --standalone --conf=standalone.cfg
// Create underfunded transaction
const tx = {
TransactionType: 'Payment',
Account: 'rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca',
Destination: 'rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w',
Amount: '999999999999', // More than account has
Fee: '12',
Sequence: 1
};
rippled submit <signed_tx>
tail -100 /var/log/rippled/debug.log | grep Transaction
Transaction:TRC Preflight check: passed
Transaction:TRC Preclaim check: failed - tecUNFUNDED_PAYMENT
Transaction:DBG Rejected transaction: insufficient funds
rippled account_info rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca
const balance = accountInfo.account_data.Balance;
const reserve = 20000000; // Base reserve
const available = balance - reserve;
console.log(`Available: ${available} drops`);
console.log(`Requested: 999999999999 drops`);
console.log(`Shortfall: ${999999999999 - available} drops`);
// Corrected transaction
const fixedTx = {
...tx,
Amount: String(available - 12) // Account for fee
};
# Recompile with debug symbols
cd rippled/build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
# Start with GDB
gdb --args ./rippled --standalone --conf=standalone.cfg
(gdb) break Payment::preclaim
(gdb) run
rippled submit <signed_tx>
# Breakpoint hit
(gdb) print ctx.tx[sfAmount]
(gdb) print (*sleAccount)[sfBalance]
(gdb) print fee
(gdb) print balance < (amount + fee)
# Should be true, causing tecUNFUNDED
(gdb) continue
Every transaction type in Rippled inherits from the Transactor base class, which provides the fundamental framework for transaction processing. This inheritance model ensures consistent behavior across all transaction types while allowing each type to implement its specific business logic.
The base Transactor class is defined in src/ripple/app/tx/impl/Transactor.h and provides:
Common validation logic - Signature verification, fee checks, sequence number validation
Helper methods - Account balance queries, ledger state access, fee calculation
Virtual methods - Hooks for transaction-specific logic (preflight, preclaim, doApply)
Transaction context - Access to the ledger, transaction data, and application state
ApplyContext: Provides access to the transaction being processed, the ledger view, and application services. This context object is passed through all stages of transaction processing.
Transaction Engine Result (TER): Every validation step returns a TER code indicating success (tesSUCCESS), temporary failure (ter codes), or permanent failure (tem or tef codes). These codes determine whether a transaction can be retried or should be permanently rejected.
Ledger Views: Transactors work with "views" of the ledger state, allowing tentative modifications that can be committed or rolled back. This ensures atomic transaction processing.
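Putting these pieces together, a concrete transactor is declared as a small class over the framework's hooks. The sketch below is modeled on src/ripple/app/tx/impl/Payment.h; exact signatures vary between rippled versions.

// Sketch of a concrete transactor declaration (modeled on Payment.h):
class Payment : public Transactor
{
public:
    explicit Payment(ApplyContext& ctx) : Transactor(ctx)
    {
    }

    // Phase 1: stateless checks against the transaction alone
    static NotTEC preflight(PreflightContext const& ctx);

    // Phase 2: read-only checks against current ledger state
    static TER preclaim(PreclaimContext const& ctx);

    // Phase 3: apply the changes to the ledger
    TER doApply() override;
};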
The transactor framework implements a rigorous three-phase validation process. Each phase has a specific purpose and access to different levels of information, creating a defense-in-depth approach to transaction validation.
Purpose: Static validation that doesn't require ledger state
Access: Only the raw transaction data and protocol rules
When It Runs: Before any ledger state is accessed, can run in parallel
What It Checks:
Transaction format is valid
Required fields are present
Field values are within valid ranges
Amounts are positive and properly formatted
No malformed or contradictory data
Key Characteristic: Preflight checks are deterministic and stateless—they depend only on the transaction itself, not on current ledger state.
Preflight Example: Payment Transaction
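The full implementation lives in src/ripple/app/tx/impl/Payment.cpp; what follows is a simplified sketch of the style of checks, not the complete list. preflight1 and preflight2 are the framework's shared checks (field and fee sanity, then signature verification).

// Simplified sketch of Payment::preflight:
NotTEC Payment::preflight(PreflightContext const& ctx)
{
    // Shared framework checks (fields, flags, fee format)
    if (auto const ret = preflight1(ctx); !isTesSuccess(ret))
        return ret;

    auto const amount = ctx.tx[sfAmount];

    // The delivered amount must be positive
    if (amount <= beast::zero)
        return temBAD_AMOUNT;

    // Paying yourself with no conversion path is meaningless
    if (ctx.tx[sfAccount] == ctx.tx[sfDestination] &&
        !ctx.tx.isFieldPresent(sfPaths))
        return temREDUNDANT;

    // Shared signature check
    return preflight2(ctx);
}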
Why Preflight Matters: By catching format errors early, preflight prevents wasting resources on obviously invalid transactions. It also provides fast feedback to clients about transaction formatting issues.
Purpose: Validation requiring read-only access to ledger state
Access: Current ledger state (read-only), transaction data, protocol rules
When It Runs: After preflight passes, but before any state modifications
What It Checks:
Source account exists and has sufficient balance
Destination account exists (or can be created)
Required authorizations are in place
Trust lines exist for non-XRP currencies
Account flags and settings permit the transaction
Sequence numbers are correct
Key Characteristic: Preclaim can read ledger state but cannot modify it. This allows for safe concurrent execution and caching of preclaim results.
Preclaim Example: Payment Transaction
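Again a simplified sketch rather than the real code; it shows the read-only style of this phase: look things up with ctx.view.read(), never modify.

// Simplified sketch of Payment::preclaim:
TER Payment::preclaim(PreclaimContext const& ctx)
{
    auto const sleDst =
        ctx.view.read(keylet::account(ctx.tx[sfDestination]));

    if (!sleDst)
    {
        // Destination missing: only a sufficiently large XRP payment
        // can create a new account
        if (!ctx.tx[sfAmount].native())
            return tecNO_DST;
    }
    else if ((sleDst->getFlags() & lsfRequireDestTag) &&
             !ctx.tx.isFieldPresent(sfDestinationTag))
    {
        // Destination insists on a destination tag
        return tecDST_TAG_NEEDED;
    }

    return tesSUCCESS;
}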
Why Preclaim Matters: Preclaim catches state-dependent errors before attempting state modifications. This prevents partially-applied transactions and provides clear error messages about why a transaction cannot succeed.
Purpose: Actual ledger state modification
Access: Full read/write access to ledger state
When It Runs: After both preflight and preclaim succeed
What It Does:
Debits source account
Credits destination account
Creates or modifies ledger objects
Applies transaction-specific business logic
Records transaction metadata
Consumes transaction fee
Key Characteristic: DoApply modifies ledger state. All changes are atomic—either the entire transaction succeeds and all changes are applied, or it fails and no changes are made.
DoApply Example: Payment Transaction
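A heavily simplified sketch of the direct XRP-to-XRP case; the real doApply also handles issued currencies, payment paths, reserve requirements, and partial payments.

// Simplified sketch of Payment::doApply for a direct XRP payment:
TER Payment::doApply()
{
    auto const amount = ctx_.tx[sfAmount];

    auto sleSrc = view().peek(keylet::account(account_));
    auto sleDst = view().peek(keylet::account(ctx_.tx[sfDestination]));
    if (!sleSrc || !sleDst)
        return tecINTERNAL;

    // Move the funds (the framework has already charged the fee)
    (*sleSrc)[sfBalance] = (*sleSrc)[sfBalance] - amount;
    (*sleDst)[sfBalance] = (*sleDst)[sfBalance] + amount;

    // Commit the modified entries to the view
    view().update(sleSrc);
    view().update(sleDst);

    return tesSUCCESS;
}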
Why DoApply Matters: This is where the actual ledger state changes happen. DoApply ensures that only transactions that have passed all validation steps can modify the ledger, maintaining data integrity.
The XRP Ledger supports numerous transaction types, each implemented as a specific transactor. Understanding the most common types helps you navigate the codebase and understand protocol capabilities.
File: src/ripple/app/tx/impl/Payment.cpp
Purpose: Transfer XRP or issued currencies between accounts
Key Features:
Direct XRP transfers
Issued currency transfers via trust lines
Path-based payments (automatic currency conversion)
Partial payments (deliver less than requested if full amount unavailable)
Common Fields:
Account - Source account
Destination - Recipient account
Amount - Amount to deliver
SendMax (optional) - Maximum amount to send
Paths (optional) - Payment paths for currency conversion
DestinationTag (optional) - Identifier for the recipient
Use Cases:
Simple XRP transfers
Issued currency payments
Cross-currency payments
Payment channel settlements
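To make this concrete, here is a minimal Payment submission in the rippled CLI style used later in this guide; this is a sketch with placeholder addresses, and "1000000" drops equals 1 XRP:

rippled submit '{
  "TransactionType": "Payment",
  "Account": "<sender_address>",
  "Destination": "<recipient_address>",
  "Amount": "1000000",
  "DestinationTag": 12345,
  "Fee": "12"
}'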
File: src/ripple/app/tx/impl/CreateOffer.cpp
Purpose: Place an offer on the decentralized exchange (DEX)
Key Features:
Buy or sell any currency pair
Immediate-or-cancel orders
Fill-or-kill orders
Passive offers (don't consume offers at the same exchange rate)
Auto-bridging via XRP
Common Fields:
TakerPays - Asset the taker (matcher) pays
TakerGets - Asset the taker receives
Expiration (optional) - When offer expires
OfferSequence (optional) - Sequence of offer to replace
Use Cases:
Currency exchange
Market making
Arbitrage
Limit orders
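As an illustrative sketch, an offer selling 10 XRP for 5 USD from a hypothetical issuer could be submitted like this; note that TakerGets is what you (the offer creator) give up, and TakerPays is what you receive:

rippled submit '{
  "TransactionType": "OfferCreate",
  "Account": "<your_address>",
  "TakerGets": "10000000",
  "TakerPays": {
    "currency": "USD",
    "issuer": "<issuer_address>",
    "value": "5"
  },
  "Fee": "12"
}'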
File: src/ripple/app/tx/impl/CancelOffer.cpp
Purpose: Remove an offer from the order book
Key Features:
Cancel by offer sequence number
Only offer owner can cancel
Common Fields:
OfferSequence - Sequence number of offer to cancel
File: src/ripple/app/tx/impl/SetTrust.cpp
Purpose: Create or modify a trust line for issued currencies
Key Features:
Set trust limit for a currency
Authorize/deauthorize trust lines
Configure trust line flags
Common Fields:
LimitAmount - Trust line limit and currency
QualityIn (optional) - Exchange rate for incoming transfers
QualityOut (optional) - Exchange rate for outgoing transfers
Use Cases:
Accept issued currencies
Set credit limits
Freeze trust lines
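For example, a trust line accepting up to 1000 USD from a hypothetical issuer (addresses are placeholders):

rippled submit '{
  "TransactionType": "TrustSet",
  "Account": "<your_address>",
  "LimitAmount": {
    "currency": "USD",
    "issuer": "<issuer_address>",
    "value": "1000"
  },
  "Fee": "12"
}'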
File: src/ripple/app/tx/impl/Escrow.cpp
Purpose: Lock XRP until conditions are met
Key Features:
Time-based release (FinishAfter / CancelAfter times)
Conditional release (crypto-conditions, as used by the Interledger Protocol)
Guaranteed delivery or return
Common Fields:
Destination - Who can claim the escrow
Amount - Amount of XRP to escrow
FinishAfter (optional) - Earliest finish time
CancelAfter (optional) - When escrow can be cancelled
Condition (optional) - Cryptographic condition for release
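A sketch of a time-locked escrow follows; the FinishAfter and CancelAfter values are placeholder timestamps in seconds since the Ripple epoch (2000-01-01):

rippled submit '{
  "TransactionType": "EscrowCreate",
  "Account": "<your_address>",
  "Destination": "<recipient_address>",
  "Amount": "1000000",
  "FinishAfter": 772800000,
  "CancelAfter": 772886400,
  "Fee": "12"
}'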
File: src/ripple/app/tx/impl/Escrow.cpp
Purpose: Complete an escrow and deliver XRP
Key Features:
Must meet time and/or condition requirements
Can be executed by anyone (typically destination)
Common Fields:
Owner - Account that created the escrow
OfferSequence - Sequence of EscrowCreate transaction
Fulfillment (optional) - Fulfillment of cryptographic condition
File: src/ripple/app/tx/impl/Escrow.cpp
Purpose: Return escrowed XRP to owner
Key Features:
Only after CancelAfter time passes
Can be executed by anyone
File: src/ripple/app/tx/impl/SetAccount.cpp
Purpose: Modify account settings and flags
Key Features:
Set account flags
Configure transfer rate
Set domain and message key
Configure email hash
Common Fields:
SetFlag / ClearFlag - Flags to modify
TransferRate (optional) - Fee for transferring issued currencies
Domain (optional) - Domain associated with account
MessageKey (optional) - Public key for encrypted messaging
Important Flags:
asfRequireDest - Require destination tag
asfRequireAuth - Require authorization for trust lines
asfDisallowXRP - Disallow XRP payments
asfDefaultRipple - Enable rippling by default
File: src/ripple/app/tx/impl/SetSignerList.cpp
Purpose: Create or modify multi-signature configuration
Key Features:
Define list of authorized signers
Set signing quorum
Enable complex authorization schemes
Common Fields:
SignerQuorum - Required signature weight
SignerEntries - List of authorized signers with weights
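For instance, a 2-of-3 multi-signature setup could be configured with this sketch (signer addresses are placeholders):

rippled submit '{
  "TransactionType": "SignerListSet",
  "Account": "<your_address>",
  "SignerQuorum": 2,
  "SignerEntries": [
    {"SignerEntry": {"Account": "<signer1_address>", "SignerWeight": 1}},
    {"SignerEntry": {"Account": "<signer2_address>", "SignerWeight": 1}},
    {"SignerEntry": {"Account": "<signer3_address>", "SignerWeight": 1}}
  ],
  "Fee": "12"
}'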
File: src/ripple/app/tx/impl/PayChan.cpp
Purpose: Open a unidirectional payment channel
Key Features:
Lock XRP for fast, off-ledger payments
Asynchronous payments with cryptographic claims
Efficient micropayments
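A sketch of opening a channel funded with 10 XRP and a one-day settle delay (addresses and the public key are placeholders):

rippled submit '{
  "TransactionType": "PaymentChannelCreate",
  "Account": "<your_address>",
  "Destination": "<recipient_address>",
  "Amount": "10000000",
  "SettleDelay": 86400,
  "PublicKey": "<sender_public_key_hex>",
  "Fee": "12"
}'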
When implementing new features through amendments, you'll often need to create custom transactors. Here's the complete process:
Add your transaction type to src/ripple/protocol/TxFormats.cpp:
Create src/ripple/app/tx/impl/MyCustomTx.h:
Create src/ripple/app/tx/impl/MyCustomTx.cpp:
Add to src/ripple/app/tx/applySteps.cpp:
Create src/test/app/MyCustomTx_test.cpp:
Understanding how a transaction flows through the transactor framework helps debug issues and optimize performance.
tem (Malformed): Transaction is permanently invalid due to format issues
Example: temMALFORMED, temBAD_AMOUNT, temDISABLED
Action: Reject immediately, never retry
tef (Failure): Transaction failed during local checks
Example: tefFAILURE, tefPAST_SEQ
Action: Reject, may indicate client error
ter (Retry): Transaction failed but might succeed later
Example: terQUEUED, terPRE_SEQ
Action: Can be retried after conditions change
tec (Claimed Fee): Transaction failed but consumed fee
Example: tecUNFUNDED, tecNO_DST, tecNO_PERMISSION
Action: Failed permanently, fee charged
tes (Success): Transaction succeeded
Example: tesSUCCESS
Action: Changes committed to ledger
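Clients see these codes in the engine_result field of a submit response. A trimmed response might look like this (values illustrative):

{
  "result": {
    "engine_result": "tecDST_TAG_NEEDED",
    "engine_result_code": 143,
    "engine_result_message": "A destination tag is required.",
    "status": "success"
  }
}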
Objective: Understand the payment transactor implementation through debugging and modification.
Part 1: Code Exploration
Step 1: Navigate to the Payment transactor
Step 2: Identify the three phases
Find and read:
Payment::preflight() - Lines implementing static checks
Payment::preclaim() - Lines checking ledger state
Payment::doApply() - Lines modifying state
Step 3: Trace a specific check
Follow how the Payment transactor checks if a destination requires a destination tag:
Questions:
Where is lsfRequireDestTag defined?
How is this flag set on an account?
What transaction type sets this flag?
Part 2: Debug a Payment
Step 1: Set up standalone mode with logging
Enable transaction logging:
Step 2: Create test accounts
Step 3: Set destination tag requirement
Step 4: Try payment without destination tag
Step 5: Try payment with destination tag
Part 3: Modify the Transactor (Advanced)
Step 1: Add custom logging
Edit Payment.cpp and add logging to doApply():
Step 2: Recompile rippled
Step 3: Run with your modified code
Step 4: Submit a payment and observe your logs
Analysis Questions
Answer these based on your exploration:
What happens in each validation phase?
List the checks performed in preflight
List the checks performed in preclaim
What state modifications occur in doApply?
How are transaction fees handled?
Where is payFee() called?
What happens if an account can't pay the fee?
How does the code handle XRP vs issued currencies?
Find the code that distinguishes between them
How do payment paths work for issued currencies?
What's the role of the accountSend() helper?
Where is it implemented?
What does it do internally?
✅ Three-Phase Validation: Preflight (static), Preclaim (read state), DoApply (modify state) ensures robust transaction processing
✅ Inheritance Architecture: All transaction types inherit from Transactor base class, ensuring consistent behavior
✅ Error Code System: tem/tef/ter/tec/tes codes provide clear feedback about transaction status
✅ Atomic Execution: Transactions either fully succeed or fully fail (except fee consumption)
✅ State Views: Ledger modifications happen in views that can be committed or rolled back
✅ Codebase Location: Transaction implementations in src/ripple/app/tx/impl/
✅ Creating Custom Transactions: Follow the pattern of defining format, implementing phases, registering transactor
✅ Debugging: Use standalone mode and logging to trace transaction execution
✅ Testing: Write comprehensive unit tests for all transaction scenarios
✅ Amendment Integration: New transaction types typically require amendments for activation
Always validate before modifying state:
The Transactor base class provides many helpers:
Creating, modifying, and deleting ledger objects:
XRP Ledger Dev Portal: xrpl.org/docs
Transaction Types: xrpl.org/transaction-types
Create Custom Transactors: xrpl.org/create-custom-transactors
src/ripple/app/tx/impl/ - All transactor implementations
src/ripple/app/tx/impl/Transactor.h - Base transactor class
src/ripple/protocol/TxFormats.cpp - Transaction format definitions
src/ripple/protocol/TER.h - Transaction result codes
Protocols - How transactions are propagated across the network
Transaction Lifecycle - Complete journey from submission to ledger
Application Layer - How transactors integrate with the overall system
class Transactor
{
public:
// Main entry point for transaction application
static std::pair<TER, bool>
apply(Application& app, OpenView& view, STTx const& tx, ApplyFlags flags);
// Virtual methods for transaction-specific logic
static NotTEC preflight(PreflightContext const& ctx);
static TER preclaim(PreclaimContext const& ctx);
virtual TER doApply() = 0;
protected:
// Constructor - available to derived classes
Transactor(ApplyContext& ctx);
// Helper methods
TER payFee();
TER checkSeq();
TER checkSign(PreclaimContext const& ctx);
// Member variables
ApplyContext& ctx_;
beast::Journal j_;
AccountID account_;
XRPAmount mPriorBalance;
XRPAmount mSourceBalance;
};

NotTEC Payment::preflight(PreflightContext const& ctx)
{
// Check whether this transaction type is enabled
// (illustrative pattern; Payment itself is not gated behind an amendment)
if (!ctx.rules.enabled(featurePayment))
return temDISABLED;
// Call base class preflight checks
auto const ret = preflight1(ctx);
if (!isTesSuccess(ret))
return ret;
// Verify destination account is specified
if (!ctx.tx.isFieldPresent(sfDestination))
return temDST_NEEDED;
// Verify amount is specified and valid
auto const amount = ctx.tx[sfAmount];
if (!amount)
return temBAD_AMOUNT;
// Amount must be positive
if (amount <= zero)
return temBAD_AMOUNT;
// Check for valid currency code if not XRP
if (!isXRP(amount))
{
if (!amount.issue().currency)
return temBAD_CURRENCY;
}
// Additional format validations...
return preflight2(ctx);
}

TER Payment::preclaim(PreclaimContext const& ctx)
{
// Get source and destination account IDs
AccountID const src = ctx.tx[sfAccount];
AccountID const dst = ctx.tx[sfDestination];
// Source and destination cannot be the same
if (src == dst)
return temREDUNDANT;
// Check if destination account exists
auto const dstID = ctx.tx[sfDestination];
auto const sleDst = ctx.view.read(keylet::account(dstID));
// If destination doesn't exist, check if we can create it
if (!sleDst)
{
auto const amount = ctx.tx[sfAmount];
// Only XRP can create accounts
if (!isXRP(amount))
return tecNO_DST;
// Amount must meet reserve requirement
if (amount < ctx.view.fees().accountReserve(0))
return tecNO_DST_INSUF_XRP;
}
else
{
// Destination exists - check if it requires dest tag
auto const flags = sleDst->getFlags();
if (flags & lsfRequireDestTag)
{
// Destination requires a tag but none provided
if (!ctx.tx.isFieldPresent(sfDestinationTag))
return tecDST_TAG_NEEDED;
}
// Check if destination has disallowed XRP
if (flags & lsfDisallowXRP && isXRP(ctx.tx[sfAmount]))
return tecNO_TARGET;
}
// Check source account balance
auto const sleSrc = ctx.view.read(keylet::account(src));
if (!sleSrc)
return terNO_ACCOUNT;
auto const balance = (*sleSrc)[sfBalance];
auto const amount = ctx.tx[sfAmount];
// Ensure sufficient balance (including fee)
if (balance < amount + ctx.tx[sfFee])
return tecUNFUNDED_PAYMENT;
return tesSUCCESS;
}

TER Payment::doApply()
{
// Pay the transaction fee (happens for all transactions)
auto const result = payFee();
if (result != tesSUCCESS)
return result;
// Get amount to send
auto const amount = ctx_.tx[sfAmount];
auto const dst = ctx_.tx[sfDestination];
// Perform the actual transfer
auto const transferResult = accountSend(
view(), // Ledger view to modify
account_, // Source account
dst, // Destination account
amount, // Amount to transfer
j_ // Journal for logging
);
if (transferResult != tesSUCCESS)
return transferResult;
// Handle partial payments and path finding if applicable
if (ctx_.tx.isFlag(tfPartialPayment))
{
// Partial payment logic...
}
// Record transaction metadata
ctx_.deliver(amount);
return tesSUCCESS;
}

add(jss::MyCustomTx,
ttMY_CUSTOM_TX,
{
// Required fields
{sfAccount, soeREQUIRED},
{sfDestination, soeREQUIRED},
{sfCustomField, soeREQUIRED},
// Optional fields
{sfOptionalField, soeOPTIONAL},
},
commonFields);

#ifndef RIPPLE_TX_MYCUSTOMTX_H_INCLUDED
#define RIPPLE_TX_MYCUSTOMTX_H_INCLUDED
#include <ripple/app/tx/impl/Transactor.h>
namespace ripple {
class MyCustomTx : public Transactor
{
public:
static constexpr ConsequencesFactoryType ConsequencesFactory{Normal};
explicit MyCustomTx(ApplyContext& ctx) : Transactor(ctx) {}
static NotTEC preflight(PreflightContext const& ctx);
static TER preclaim(PreclaimContext const& ctx);
TER doApply() override;
};
} // namespace ripple
#endif

#include <ripple/app/tx/impl/MyCustomTx.h>
#include <ripple/basics/Log.h>
#include <ripple/protocol/Feature.h>
namespace ripple {
NotTEC MyCustomTx::preflight(PreflightContext const& ctx)
{
// Check if amendment is enabled
if (!ctx.rules.enabled(featureMyCustomTx))
return temDISABLED;
// Perform base class preflight checks
auto const ret = preflight1(ctx);
if (!isTesSuccess(ret))
return ret;
// Validate custom field format
if (!ctx.tx.isFieldPresent(sfCustomField))
return temMALFORMED;
auto const customValue = ctx.tx[sfCustomField];
if (customValue < 0 || customValue > 1000000)
return temBAD_AMOUNT;
// Additional validation...
return preflight2(ctx);
}

TER MyCustomTx::preclaim(PreclaimContext const& ctx)
{
// Get account IDs
AccountID const src = ctx.tx[sfAccount];
AccountID const dst = ctx.tx[sfDestination];
// Verify destination account exists
auto const sleDst = ctx.view.read(keylet::account(dst));
if (!sleDst)
return tecNO_DST;
// Check source account has sufficient balance
auto const sleSrc = ctx.view.read(keylet::account(src));
if (!sleSrc)
return terNO_ACCOUNT;
auto const balance = (*sleSrc)[sfBalance];
auto const fee = ctx.tx[sfFee];
if (balance < fee)
return tecUNFUNDED;
// Additional state-based validation...
return tesSUCCESS;
}

TER MyCustomTx::doApply()
{
// Pay transaction fee
auto const result = payFee();
if (result != tesSUCCESS)
return result;
// Get transaction fields
auto const dst = ctx_.tx[sfDestination];
auto const customValue = ctx_.tx[sfCustomField];
// Perform custom logic
// Example: create a new ledger object
// (keylet::custom is a placeholder; define a keylet for your new object type)
auto const sleNew = std::make_shared<SLE>(
keylet::custom(account_, ctx_.tx.getSeqProxy().value()));
sleNew->setAccountID(sfAccount, account_);
sleNew->setAccountID(sfDestination, dst);
sleNew->setFieldU32(sfCustomField, customValue);
// Insert into ledger
view().insert(sleNew);
// Log the operation
JLOG(j_.trace()) << "MyCustomTx applied successfully";
return tesSUCCESS;
}
} // namespace ripple

#include <ripple/app/tx/impl/MyCustomTx.h>
// In the invoke function, add:
case ttMY_CUSTOM_TX:
return MyCustomTx::makeTxConsequences(ctx);

#include <ripple/protocol/Feature.h>
#include <ripple/protocol/jss.h>
#include <test/jtx.h>
namespace ripple {
namespace test {
class MyCustomTx_test : public beast::unit_test::suite
{
public:
void testBasicOperation()
{
using namespace jtx;
Env env(*this, supported_amendments() | featureMyCustomTx);
// Create test accounts
Account const alice{"alice"};
Account const bob{"bob"};
env.fund(XRP(10000), alice, bob);
// Submit custom transaction
Json::Value jv;
jv[jss::Account] = alice.human();
jv[jss::Destination] = bob.human();
jv[jss::TransactionType] = jss::MyCustomTx;
jv[jss::CustomField] = 12345;
jv[jss::Fee] = "10";
env(jv);
env.close();
// Verify results
// Add assertions...
}
void run() override
{
testBasicOperation();
// More tests...
}
};
BEAST_DEFINE_TESTSUITE(MyCustomTx, app, ripple);
} // namespace test
} // namespace ripple

Transaction Submission
↓
Preflight (Static Validation)
↓
✓ Pass → Continue
✗ Fail → Reject (return tem code)
↓
Preclaim (State Validation)
↓
✓ Pass → Continue
✗ Fail → Reject (return tec/ter code)
↓
Enter Consensus
↓
Reach Agreement
↓
DoApply (State Modification)
↓
✓ Success → Commit changes
✗ Fail → Rollback (still consumes fee)
↓
Transaction Finalized in Ledger

cd rippled/src/ripple/app/tx/impl/
open Payment.cpp  # or use your IDE

// In preclaim:
if (sleDst->getFlags() & lsfRequireDestTag)
{
if (!ctx.tx.isFieldPresent(sfDestinationTag))
return tecDST_TAG_NEEDED;
}

rippled --conf=rippled.cfg --standalone

rippled log_level Transaction trace

# Create and fund two accounts
rippled account_info <address1>
rippled account_info <address2>

# Set requireDestTag flag on destination account
rippled submit '{
"TransactionType": "AccountSet",
"Account": "<address2>",
"SetFlag": 1,
"Fee": "12"
}'

# This should fail with tecDST_TAG_NEEDED
rippled submit '{
"TransactionType": "Payment",
"Account": "<address1>",
"Destination": "<address2>",
"Amount": "1000000",
"Fee": "12"
}'

# This should succeed
rippled submit '{
"TransactionType": "Payment",
"Account": "<address1>",
"Destination": "<address2>",
"Amount": "1000000",
"DestinationTag": 12345,
"Fee": "12"
}'

TER Payment::doApply()
{
JLOG(j_.info()) << "Payment doApply started";
JLOG(j_.info()) << "Source: " << account_;
JLOG(j_.info()) << "Destination: " << ctx_.tx[sfDestination];
JLOG(j_.info()) << "Amount: " << ctx_.tx[sfAmount];
// ... existing code ...
}

cd rippled/build
cmake --build . --target rippled

./rippled --conf=rippled.cfg --standalone

// Bad - might partially modify state before failing
auto const result1 = modifyState1();
auto const result2 = modifyState2(); // If this fails, state1 is modified
if (result2 != tesSUCCESS)
return result2;
// Good - validate first, then modify
if (!canModifyState1())
return tecFAILURE;
if (!canModifyState2())
return tecFAILURE;
modifyState1();
modifyState2();

// Check sequence number
auto const result = checkSeq();
if (result != tesSUCCESS)
return result;
// Pay fee
auto const feeResult = payFee();
if (feeResult != tesSUCCESS)
return feeResult;
// Check authorization
if (!hasAuthority())
return tecNO_PERMISSION;

// Read existing object
auto const sle = view().read(keylet::account(accountID));
if (!sle)
return tecNO_TARGET;
// Modify object
auto sleMutable = view().peek(keylet::account(accountID));
(*sleMutable)[sfBalance] = newBalance;
view().update(sleMutable);
// Create new object
auto const sleNew = std::make_shared<SLE>(keylet);
sleNew->setFieldU32(sfFlags, 0);
view().insert(sleNew);
// Delete object
view().erase(sle);

The Application layer is the heart of Rippled's architecture—the central orchestrator that initializes, coordinates, and manages all subsystems. Understanding the Application layer is essential for grasping how Rippled operates as a cohesive system, where consensus, networking, transaction processing, and ledger management all work together seamlessly.
At its core, the Application class acts as a dependency injection container and service locator, providing every component access to the resources it needs while maintaining clean separation of concerns. Whether you're debugging a startup issue, optimizing system performance, or implementing a new feature, you'll inevitably interact with the Application layer.
The Application class follows several key design principles that make Rippled maintainable and extensible:
Single Point of Coordination: Instead of components directly creating and managing their dependencies, everything flows through the Application. This centralization makes it easy to understand system initialization and component relationships.
Dependency Injection: Components receive their dependencies through constructor parameters rather than creating them internally. This makes testing easier and dependencies explicit.
Interface-Based Design: The Application class implements the Application interface, allowing for different implementations (production, test, mock) without changing dependent code.
Lifetime Management: The Application controls the creation, initialization, and destruction of all major subsystems, ensuring proper startup/shutdown sequences.
The Application interface is defined in src/ripple/app/main/Application.h:
The concrete implementation ApplicationImp is in src/ripple/app/main/ApplicationImp.cpp. This class:
Implements all interface methods
Owns all major subsystem objects
Manages initialization order
Coordinates shutdown
Key Member Variables:
Understanding the startup sequence is crucial for debugging initialization issues and understanding component dependencies.
Phase 1: Configuration Loading
What Happens:
Parse rippled.cfg configuration file
Load validator list configuration
Set up logging configuration
Validate configuration parameters
Configuration Sections:
[server] - Server ports and interfaces
[node_db] - NodeStore database configuration
[node_size] - Performance tuning parameters
Phase 2: Application Construction
Constructor Sequence (ApplicationImp::ApplicationImp()):
Phase 3: Setup
What Happens (ApplicationImp::setup()):
Phase 4: Run
Main Event Loop (ApplicationImp::run()):
What Runs:
Job queue processes queued work
Overlay network handles peer connections
Consensus engine processes rounds
NetworkOPs coordinates operations
All work happens in background threads managed by various subsystems. The main thread simply waits for a shutdown signal.
Phase 5: Shutdown
Graceful Shutdown (ApplicationImp::signalStop()):
Shutdown Order: Components are stopped in reverse order of their creation to ensure dependencies are still available when each component shuts down.
The Application acts as a service locator, allowing any component to access any other component through the app reference:
LedgerMaster
Purpose: Manages the chain of validated ledgers and coordinates ledger progression.
Key Responsibilities:
Track current validated ledger
Build candidate ledgers for consensus
Synchronize ledger history
Maintain ledger cache
Access: app.getLedgerMaster()
Important Methods:
NetworkOPs
Purpose: Coordinates network operations and transaction processing.
Key Responsibilities:
Process submitted transactions
Manage transaction queue
Coordinate consensus participation
Track network state
Access: app.getOPs()
Important Methods:
Overlay
Purpose: Manages peer-to-peer networking layer.
Key Responsibilities:
Peer discovery and connection
Message routing
Network topology maintenance
Bandwidth management
Access: app.overlay()
Important Methods:
TxQ (Transaction Queue)
Purpose: Manages transaction queuing when network is busy.
Key Responsibilities:
Queue transactions during high load
Fee-based prioritization
Account-based queuing limits
Transaction expiration
Access: app.getTxQ()
Important Methods:
NodeStore
Purpose: Persistent storage for ledger data.
Key Responsibilities:
Store ledger state nodes
Provide efficient retrieval
Cache frequently accessed data
Support different backend databases (RocksDB, NuDB)
Access: app.getNodeStore()
Important Methods:
RelationalDatabase
Purpose: SQL database for indexed data and historical queries.
Key Responsibilities:
Store transaction metadata
Maintain account transaction history
Support RPC queries (account_tx, tx)
Ledger header storage
Access: app.getRelationalDatabase()
Database Types:
SQLite (default, embedded)
PostgreSQL (production deployments)
Validations
Purpose: Manages validator signatures on ledger closes.
Key Responsibilities:
Collect validations from validators
Track validator key rotations (manifests)
Determine ledger validation quorum
Publish validation stream
Access: app.getValidations()
Important Methods:
The job queue is Rippled's work scheduling system. Instead of each subsystem creating its own threads, work is submitted as jobs to a centralized queue processed by a thread pool. This provides:
Centralized thread management: Easier to control thread count and CPU usage
Priority-based scheduling: Critical jobs processed before low-priority ones
Visibility: Easy to monitor what work is queued
Deadlock prevention: Structured concurrency patterns
Jobs are categorized by type, which determines priority:
Components submit work to the job queue:
Priority Levels:
Critical: Consensus, validations (must not be delayed)
High: Transaction processing, ledger advancement
Medium: RPC requests, client operations
Low: Maintenance, administrative tasks
Scheduling Algorithm:
Jobs sorted by priority and submission time
Worker threads pick highest priority job
Long-running jobs can be split into chunks
System monitors queue depth and adjusts behavior
In rippled.cfg:
Thread count is also influenced by CPU core count:
The rippled.cfg file controls all aspects of server behavior. The Application loads and provides access to this configuration.
Example Configuration
Components access configuration through the Application:
Some settings can be adjusted at runtime via RPC:
Most common pattern—components call each other's methods:
For work that should not block the caller:
Components publish events that others subscribe to:
Components register callbacks for specific events:
Application Core:
src/ripple/app/main/Application.h - Application interface
src/ripple/app/main/ApplicationImp.h - Implementation header
src/ripple/app/main/ApplicationImp.cpp - Implementation
Job Queue:
src/ripple/core/JobQueue.h - Job queue interface
src/ripple/core/impl/JobQueue.cpp - Implementation
src/ripple/core/Job.h - Job definition
Configuration:
src/ripple/core/Config.h - Config class
src/ripple/core/ConfigSections.h - Section definitions
Subsystem Implementations:
src/ripple/app/ledger/LedgerMaster.h
src/ripple/app/misc/NetworkOPs.h
src/ripple/overlay/Overlay.h
Finding Application Creation
Start in main.cpp:
Tracing Component Access
Follow how components access each other:
Understanding Job Submission
Find job submissions:
Example:
Objective: Understand the application initialization sequence and monitor job queue activity.
Part 1: Code Exploration
Step 1: Navigate to application source
Step 2: Read the main entry point
Open main.cpp and trace:
Command-line argument parsing
Configuration loading
Application creation
Setup call
Step 3: Follow ApplicationImp construction
Open ApplicationImp.cpp and identify:
The order subsystems are created (constructor)
Dependencies between components
What happens in setup()
What happens in run()
Questions to Answer:
Why is NodeStore created before LedgerMaster?
What does LedgerMaster need from Application?
Which components are created first and why?
Part 2: Monitor Job Queue Activity
Step 1: Enable detailed job queue logging
Edit rippled.cfg:
Step 2: Start rippled in standalone mode
Step 3: Watch the startup logs
Observe jobs during startup:
What job types execute first?
How many worker threads are created?
What's the initial job queue depth?
Step 4: Submit transactions and observe
Watch the logs for:
jtTRANSACTION jobs being queued
Job processing time
Queue depth changes
Step 5: Manually close a ledger
Observe jobs related to ledger close:
jtADVANCE - Advance to next ledger
jtPUBLEDGER - Publish ledger
jtUPDATE_PF - Update path finding
Part 3: Add Custom Logging
Step 1: Modify ApplicationImp.cpp
Add logging to track component initialization:
Step 2: Recompile
Step 3: Run and observe
You should see your custom log messages showing component creation order.
Analysis Questions
Answer these based on your exploration:
What's the first subsystem created?
Why does it need to be first?
How does the job queue decide which job to process next?
What factors influence priority?
✅ Central Orchestration: Application class coordinates all subsystems and manages their lifecycle
✅ Dependency Injection: Components receive dependencies through Application reference, not by creating them
✅ Service Locator: Application provides access to all major services (getLedgerMaster(), overlay(), etc.)
✅ Initialization Order: Subsystems are created in dependency order during construction
✅ Job Queue: Centralized work scheduling with priority-based execution
✅ Configuration: All server behavior controlled through rippled.cfg
✅ Codebase Location: Application implementation in src/ripple/app/main/
✅ Adding Components: Create in constructor, expose through interface method
✅ Job Submission: Use app.getJobQueue().addJob() for asynchronous work
✅ Debugging Startup: Add logging in ApplicationImp constructor to trace initialization
✅ Configuration Access: Use app.config() to read configuration values
Always access subsystems through the Application:
Use job queue for work that shouldn't block:
Let Application manage component lifetime:
XRP Ledger Dev Portal:
Rippled Setup:
Configuration Reference:
src/ripple/app/main/ - Application layer implementation
src/ripple/core/JobQueue.h - Job queue system
src/ripple/core/Config.h - Configuration management
- How transactions are processed
- How consensus integrates with Application
- Finding your way around the code
Apply defaults for unspecified options
[validation_seed] - Validator key configuration
[ips_fixed] - Fixed peer connections
[features] - Amendment votes
src/ripple/app/main/main.cpp - Entry point, creates Application
src/ripple/app/tx/TxQ.h - Transaction queue
What happens if a job throws an exception?
Find the exception handling code
How many jobs are queued during a typical ledger close?
Count from your logs
What's the relationship between Application and ApplicationImp?
Why use an interface?
How would you add a new subsystem?
What's the process?
Where would you add it?
src/ripple/app/main/main.cpp - Program entry point

class Application : public beast::PropertyStream::Source
{
public:
// Core services
virtual Logs& logs() = 0;
virtual Config const& config() const = 0;
// Networking
virtual Overlay& overlay() = 0;
virtual JobQueue& getJobQueue() = 0;
// Ledger management
virtual LedgerMaster& getLedgerMaster() = 0;
virtual OpenLedger& openLedger() = 0;
// Transaction processing
virtual NetworkOPs& getOPs() = 0;
virtual TxQ& getTxQ() = 0;
// Consensus
virtual Validations& getValidations() = 0;
// Storage
virtual NodeStore::Database& getNodeStore() = 0;
virtual RelationalDatabase& getRelationalDatabase() = 0;
// RPC and subscriptions
virtual RPCHandler& getRPCHandler() = 0;
// Lifecycle
virtual void setup() = 0;
virtual void run() = 0;
virtual void signalStop() = 0;
// Utility
virtual bool isShutdown() = 0;
virtual std::chrono::seconds getMaxDisallowedLedger() = 0;
protected:
Application() = default;
};

class ApplicationImp : public Application
{
private:
// Configuration and logging
std::unique_ptr<Logs> logs_;
Config config_;
// Core services
std::unique_ptr<JobQueue> jobQueue_;
std::unique_ptr<NodeStore::Database> nodeStore_;
std::unique_ptr<RelationalDatabase> relationalDB_;
// Networking
std::unique_ptr<Overlay> overlay_;
// Ledger management
std::unique_ptr<LedgerMaster> ledgerMaster_;
std::unique_ptr<OpenLedger> openLedger_;
// Transaction processing
std::unique_ptr<NetworkOPs> networkOPs_;
std::unique_ptr<TxQ> txQ_;
// Consensus
std::unique_ptr<Validations> validations_;
// RPC
std::unique_ptr<RPCHandler> rpcHandler_;
// State
std::atomic<bool> isShutdown_{false};
std::condition_variable cv_;
std::mutex mutex_;
};

// In main.cpp
auto config = std::make_unique<Config>();
if (!config->setup(configFile, quiet))
{
// Configuration failed
return -1;
}

// Create the application instance
auto app = make_Application(
std::move(config),
std::move(logs),
std::move(timeKeeper));

ApplicationImp::ApplicationImp(
std::unique_ptr<Config> config,
std::unique_ptr<Logs> logs,
std::unique_ptr<TimeKeeper> timeKeeper)
: config_(std::move(config))
, logs_(std::move(logs))
, timeKeeper_(std::move(timeKeeper))
{
// 1. Create basic services
jobQueue_ = std::make_unique<JobQueue>(
*logs_,
config_->WORKERS);
// 2. Initialize databases
nodeStore_ = NodeStore::Manager::make(
"NodeStore.main",
scheduler,
*logs_,
config_->section("node_db"));
relationalDB_ = makeRelationalDatabase(
*config_,
*logs_);
// 3. Create ledger management
ledgerMaster_ = std::make_unique<LedgerMaster>(
*this,
stopwatch(),
*logs_);
// 4. Create networking
overlay_ = std::make_unique<OverlayImpl>(
*this,
config_->section("overlay"),
*logs_);
// 5. Create transaction processing
networkOPs_ = std::make_unique<NetworkOPsImp>(
*this,
*logs_);
txQ_ = std::make_unique<TxQ>(
*config_,
*logs_);
// 6. Create consensus components
validations_ = std::make_unique<Validations>(
*this);
// 7. Create RPC handler
rpcHandler_ = std::make_unique<RPCHandler>(
*this,
*logs_);
// Note: Order matters! Components may depend on earlier ones
}

app->setup();

void ApplicationImp::setup()
{
// 1. Load existing ledger state
auto initLedger = getLastFullLedger();
// 2. Initialize ledger master
ledgerMaster_->setLastFullLedger(initLedger);
// 3. Start open ledger
openLedger_->accept(
initLedger,
orderTx,
consensusParms,
{}); // Empty transaction set for new ledger
// 4. Initialize overlay network
overlay_->start();
// 5. Start RPC servers
rpcHandler_->setup();
// 6. Additional subsystem initialization
// ...
JLOG(j_.info()) << "Application setup complete";
}

app->run();

void ApplicationImp::run()
{
JLOG(j_.info()) << "Application starting";
// Start processing jobs
jobQueue_->start();
// Enter main loop
{
std::unique_lock<std::mutex> lock(mutex_);
// Wait until shutdown signal
while (!isShutdown_)
{
cv_.wait(lock);
}
}
JLOG(j_.info()) << "Application stopping";
}

app->signalStop();

void ApplicationImp::signalStop()
{
JLOG(j_.info()) << "Shutdown requested";
// 1. Set shutdown flag
isShutdown_ = true;
// 2. Stop accepting new work
overlay_->stop();
rpcHandler_->stop();
// 3. Complete in-flight operations
jobQueue_->finish();
// 4. Stop subsystems (reverse order of creation)
networkOPs_->stop();
ledgerMaster_->stop();
// 5. Close databases
nodeStore_->close();
relationalDB_->close();
// 6. Wake up main thread
cv_.notify_all();
JLOG(j_.info()) << "Shutdown complete";
}

Program Start
↓
Load Configuration (rippled.cfg)
↓
Create Application Instance
↓
Construct Subsystems
• Databases
• Networking
• Ledger Management
• Transaction Processing
• Consensus
↓
Setup Phase
• Load Last Ledger
• Initialize Components
• Start Network
↓
Run Phase (Main Loop)
• Process Jobs
• Handle Consensus
• Process Transactions
• Serve RPC Requests
↓
Shutdown Signal Received
↓
Graceful Shutdown
• Stop Accepting Work
• Complete In-Flight Operations
• Stop Subsystems
• Close Databases
↓
Program Exit

class SomeComponent
{
public:
SomeComponent(Application& app)
: app_(app)
{
// Components store app reference
}
void doWork()
{
// Access other components through app
auto& ledgerMaster = app_.getLedgerMaster();
auto& overlay = app_.overlay();
auto& jobs = app_.getJobQueue();
// Use the components...
}
private:
Application& app_;
};

// Get current validated ledger
std::shared_ptr<Ledger const> getValidatedLedger();
// Get closed ledger (not yet validated)
std::shared_ptr<Ledger const> getClosedLedger();
// Advance to new ledger
void advanceLedger();
// Fetch missing ledgers
void fetchLedger(LedgerHash const& hash);

// Submit transaction
void submitTransaction(std::shared_ptr<STTx const> const& tx);
// Process transaction
void processTransaction(
std::shared_ptr<Transaction>& transaction,
bool trusted,
bool local);
// Get network state
OperatingMode getOperatingMode();

// Send message to all peers
void broadcast(std::shared_ptr<Message> const& message);
// Get active peer count
std::size_t size() const;
// Connect to specific peer
void connect(std::string const& ip);

// Check if transaction can be added
std::pair<TER, bool>
apply(Application& app, OpenView& view, STTx const& tx);
// Get queue status
Json::Value getJson();

// Store ledger node
void store(
NodeObjectType type,
Blob const& data,
uint256 const& hash);
// Fetch ledger node
std::shared_ptr<NodeObject>
fetch(uint256 const& hash);

// Add validation
void addValidation(STValidation const& val);
// Get validation for ledger
std::vector<std::shared_ptr<STValidation>>
getValidations(LedgerHash const& hash);
// Check if ledger is validated
bool hasQuorum(LedgerHash const& hash);

enum JobType
{
// Special job types
jtINVALID = -1,
jtPACK, // Job queue work pack
// High priority - consensus critical
jtPUBOLDLEDGER, // Publish old ledger
jtVALIDATION_ut, // Process validation (untrusted)
jtPROPOSAL_ut, // Process consensus proposal
jtLEDGER_DATA, // Process ledger data
// Medium priority
jtTRANSACTION, // Process transaction
jtADVANCE, // Advance ledger
jtPUBLEDGER, // Publish ledger
jtTXN_DATA, // Transaction data retrieval
// Low priority
jtUPDATE_PF, // Update path finding
jtCLIENT, // Handle client request
jtRPC, // Process RPC
jtTRANSACTION_l, // Process transaction (low priority)
// Lowest priority
jtPEER, // Peer message
jtDISK, // Disk operations
jtADMIN, // Administrative operations
};

// Get job queue reference
JobQueue& jobs = app.getJobQueue();
// Submit a job
jobs.addJob(
jtTRANSACTION, // Job type
"processTx", // Job name (for logging)
[this, tx](Job&) // Job function
{
// Do work here
processTransaction(tx);
});

[node_size]
# Affects worker thread count
tiny # 1 thread
small # 2 threads
medium # 4 threads (default)
large # 8 threads
huge   # 16 threads

// Typically: max(2, std::thread::hardware_concurrency() - 1)

[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer
[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws
[node_size]
medium
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
open_files=512
cache_mb=256
filter_bits=12
compression=1
[database_path]
/var/lib/rippled/db
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
[ips_fixed]
r.ripple.com 51235
[validators_file]
validators.txt
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
[features]
# Vote for or against amendments
# AmendmentName

void SomeComponent::configure()
{
// Get config reference
Config const& config = app_.config();
// Access specific sections
auto const& nodeDB = config.section("node_db");
auto const type = get<std::string>(nodeDB, "type");
auto const path = get<std::string>(nodeDB, "path");
// Access node size
auto nodeSize = config.NODE_SIZE;
// Access ports
for (auto const& port : config.ports)
{
// Configure port...
}
}

# Change log verbosity
rippled log_level partition severity
# Connect to peer
rippled connect ip:port
# Get server info
rippled server_info

void NetworkOPs::submitTransaction(STTx const& tx)
{
// Validate transaction
auto result = Transactor::preflight(tx);
if (!isTesSuccess(result))
return;
// Apply to open ledger
auto& openLedger = app_.openLedger();
openLedger.modify([&](OpenView& view)
{
Transactor::apply(app_, view, tx);
});
// Broadcast to network
auto& overlay = app_.overlay();
overlay.broadcast(makeTransactionMessage(tx));
}

void LedgerMaster::fetchLedger(LedgerHash const& hash)
{
// Submit fetch job
app_.getJobQueue().addJob(
jtLEDGER_DATA,
"fetchLedger",
[this, hash](Job&)
{
// Request from peers
app_.overlay().sendRequest(hash);
// Wait for response
// Process received data
// ...
});
}

// Publisher (LedgerMaster)
void LedgerMaster::newLedgerValidated()
{
// Notify subscribers
for (auto& subscriber : subscribers_)
{
subscriber->onLedgerValidated(currentLedger_);
}
}
// Subscriber (NetworkOPs)
void NetworkOPs::onLedgerValidated(
std::shared_ptr<Ledger const> const& ledger)
{
// React to new ledger
updateSubscribers(ledger);
processQueuedTransactions();
}

// Register callback
app_.getLedgerMaster().onConsensusReached(
[this](std::shared_ptr<Ledger const> const& ledger)
{
handleConsensusLedger(ledger);
});

int main(int argc, char** argv)
{
// Parse command line
// Load configuration
// Create logs
// Create application
auto app = make_Application(
std::move(config),
std::move(logs),
std::move(timeKeeper));
// Setup and run
app->setup();
app->run();
return 0;
}

// In any component
void MyComponent::work()
{
// Access through app_
auto& ledgerMaster = app_.getLedgerMaster(); // → ApplicationImp::getLedgerMaster()
// → return *ledgerMaster_;
}

# Search for addJob calls
grep -r "addJob" src/ripple/app/

app_.getJobQueue().addJob(jtTRANSACTION, "processTx", [&](Job&) {
// Job code
});

cd rippled/src/ripple/app/main/

[rpc_startup]
{ "command": "log_level", "partition": "JobQueue", "severity": "trace" }

rippled --conf=rippled.cfg --standalone

# Submit a payment
rippled submit '{
"TransactionType": "Payment",
"Account": "...",
"Destination": "...",
"Amount": "1000000"
}'

rippled ledger_accept

ApplicationImp::ApplicationImp(/* ... */)
{
JLOG(j_.info()) << "Creating JobQueue...";
jobQueue_ = std::make_unique<JobQueue>(/* ... */);
JLOG(j_.info()) << "JobQueue created";
JLOG(j_.info()) << "Creating NodeStore...";
nodeStore_ = NodeStore::Manager::make(/* ... */);
JLOG(j_.info()) << "NodeStore created";
// Add similar logs for other components
}

cd rippled/build
cmake --build . --target rippled

./rippled --conf=rippled.cfg --standalone

// Good - through Application
void doWork(Application& app)
{
auto& ledgerMaster = app.getLedgerMaster();
ledgerMaster.getCurrentLedger();
}
// Bad - storing subsystem reference
class BadComponent
{
LedgerMaster& ledgerMaster_; // Don't do this
BadComponent(LedgerMaster& lm)
: ledgerMaster_(lm) {} // Tight coupling
};
// Good - storing Application reference
class GoodComponent
{
Application& app_;
GoodComponent(Application& app)
: app_(app) {} // Loose coupling
void work()
{
// Access when needed
auto& lm = app_.getLedgerMaster();
}
};

// Don't block the caller
void expensiveOperation(Application& app)
{
app.getJobQueue().addJob(
jtCLIENT,
"expensiveWork",
[&app](Job&)
{
// Long-running work here
performExpensiveCalculation();
// Access other subsystems as needed
app.getLedgerMaster().doSomething();
});
}

// In ApplicationImp constructor
myComponent_ = std::make_unique<MyComponent>(*this);
// In ApplicationImp::setup()
myComponent_->initialize();
// In ApplicationImp::signalStop()
myComponent_->shutdown();
// Destructor automatically cleans up
// (unique_ptr handles deletion)

The Overlay Network is Rippled's peer-to-peer networking layer that enables distributed nodes to discover each other, establish connections, and communicate efficiently. Without the overlay network, the XRP Ledger would be a collection of isolated servers—the overlay network is what transforms individual nodes into a cohesive, decentralized system.
Understanding the overlay network is essential for debugging connectivity issues, optimizing network performance, and ensuring your node participates effectively in the XRP Ledger network. Whether you're running a validator, a stock server, or developing network enhancements, deep knowledge of the overlay network is crucial.
The XRP Ledger uses a mesh topology where nodes maintain direct connections with multiple peers. This differs from:
Star topology: Central hub (single point of failure)
Ring topology: Sequential connections (vulnerable to breaks)
Tree topology: Hierarchical structure (root node critical)
Mesh Advantages:
No single point of failure: Network remains operational if individual nodes fail
Multiple communication paths: Messages can route around failed nodes
Scalability: Network can grow organically as nodes join
Resilience: Network topology self-heals as nodes enter and exit
The overlay network sits between the application logic and the transport layer, abstracting away the complexities of peer-to-peer communication.
Rippled maintains three types of peer connections:
1. Outbound Connections
Definition: Connections initiated by your node to other peers
Characteristics:
Your node acts as client
You choose which peers to connect to
Configurable connection limits
Active connection management
Configuration:
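A typical rippled.cfg fragment for outbound peers might look like this sketch (entries illustrative; any reachable peer host and port will do):

[ips]
r.ripple.com 51235
zaphod.alloy.ee 51235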
2. Inbound Connections
Definition: Connections initiated by other nodes to your server
Characteristics:
Your node acts as server
Must listen on public interface
Accept connections from unknown peers
Subject to connection limits
Configuration:
3. Fixed Connections
Definition: Persistent connections to trusted peers
Characteristics:
High priority, always maintained
Automatically reconnect if disconnected
Bypass some connection limits
Ideal for validators and cluster peers
Configuration:
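A minimal rippled.cfg sketch for a fixed connection (the host is a placeholder for your trusted peer):

[ips_fixed]
<trusted_peer_host> 51235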
Rippled aims to maintain a target number of active peer connections:
Default Targets (based on node_size):
Connection Distribution:
Approximately 50% outbound connections
Approximately 50% inbound connections
Fixed connections count toward total
System adjusts dynamically to maintain target
The most basic discovery method—manually configured peers:
[ips] Section: Peers to connect to automatically
[ips_fixed] Section: High-priority persistent connections
Advantages:
Reliable, known peers
Administrative control
Suitable for private networks
Disadvantages:
Manual maintenance required
Limited to configured peers
Doesn't scale automatically
DNS-based peer discovery for bootstrap:
How It Works:
Node queries DNS for peer addresses
DNS returns A records (IP addresses)
Node connects to returned addresses
Learns about additional peers through gossip
Configuration:
DNS Resolution Example:
Advantages:
Easy bootstrap for new nodes
Dynamic peer lists
Load balancing via DNS
Disadvantages:
Requires DNS infrastructure
Vulnerable to DNS attacks
Single point of failure for initial connection
Peers share information about other peers they know:
Message Type: Endpoint announcements (part of peer protocol)
Process:
Peer A connects to Peer B
Peer B shares list of other known peers
Peer A considers these peers for connection
Peer A may connect to some of the suggested peers
Gossip Information Includes:
Peer IP addresses
Peer public keys
Last seen time
Connection quality hints
Advantages:
Network self-organizes
No central directory needed
Discovers new peers automatically
Network grows organically
Disadvantages:
Potential for malicious peer injection
Network topology influenced by gossip patterns
Initial bootstrapping still needed
Some nodes run peer crawlers to discover and monitor network topology:
What Crawlers Do:
Connect to known peers
Request peer lists
Recursively discover more peers
Map network topology
Public Peer Lists:
Various community-maintained lists
Used by new nodes to bootstrap
Updated regularly
Step 1: TCP Connection
Standard TCP three-way handshake:
Configuration:
Step 2: TLS Handshake (Optional but Recommended)
If TLS is configured, encrypted channel is established:
Benefits of TLS:
Encrypted communication (privacy)
Peer authentication (security)
Protection against eavesdropping
Man-in-the-middle prevention
Step 3: Protocol Handshake
Rippled-specific handshake exchanges capabilities:
Hello Message (from initiator):
Response (from receiver):
Handshake Validation:
Compatibility Check:
Step 4: Connection Acceptance/Rejection
After handshake validation:
If Compatible:
Connection moves to Active state
Add to peer list
Begin normal message exchange
Log successful connection
If Incompatible:
Send rejection message with reason
Close connection gracefully
Log rejection reason
May add to temporary ban list
Rejection Reasons:
Rippled enforces various connection limits:
Per-IP Limits
Total Connection Limits
Based on node_size configuration:
Formula: target + (target / 2)
Fixed Peer Priority
Fixed peers bypass some limits:
Rippled continuously monitors peer quality:
Metrics Tracked
Latency: Response time to ping messages
Message Rate: Messages per second
Error Rate: Protocol errors, malformed messages
Uptime: Connection duration
Quality Scoring
Peers are scored based on metrics:
Score Usage:
Low-scoring peers may be disconnected
High-scoring peers prioritized for reconnection
Informs peer selection decisions
When connection limits are reached, low-quality peers are pruned:
After disconnection, Rippled may attempt to reconnect:
Exponential Backoff:
Fixed Peer Priority:
Different message types require different routing strategies:
Critical Messages (Broadcast to All)
Validations (tmVALIDATION):
Must reach all validators
Broadcast to all peers immediately
Critical for consensus
Consensus Proposals (tmPROPOSE_LEDGER):
Must reach all validators
Time-sensitive
Broadcast widely
Broadcast Pattern:
Transactions (Selective Relay)
Transaction Messages (tmTRANSACTION):
Should reach all nodes eventually
Don't need immediate broadcast to all
Use intelligent relay
Relay Logic:
Request/Response (Unicast)
Ledger Data Requests (tmGET_LEDGER):
Directed to specific peer
Response goes back to requester
No broadcasting needed
Unicast Pattern:
Squelch prevents message echo loops:
Problem:
Solution:
Recent Message Cache:
Time-based expiration (e.g., 30 seconds)
Size-based limits (e.g., 10,000 entries)
LRU eviction policy
Outbound messages are queued with priority:
Benefits:
Critical messages sent first
Prevents head-of-line blocking
Better network utilization
Connectivity Metrics
Active Peers: Current peer count
Target vs Actual: Comparison to target
Connection Distribution:
Network Quality Metrics
Average Latency:
Message Rate:
Validator Connectivity:
peers Command
Get current peer list:
Response:
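A trimmed example of the command and its response (field values illustrative; the real response includes more fields per peer):

rippled peers

{
  "result": {
    "peers": [
      {
        "address": "203.0.113.7:51235",
        "latency": 45,
        "uptime": 86340,
        "version": "rippled-1.12.0"
      }
    ],
    "status": "success"
  }
}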
peer_reservations Command
View reserved peer slots:
connect Command
Manually connect to peer:
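For example (IP and port are placeholders):

rippled connect 203.0.113.7 51235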
Enable detailed overlay logging:
Log Messages to Monitor:
Overlay Core:
src/ripple/overlay/Overlay.h - Main overlay interface
src/ripple/overlay/impl/OverlayImpl.h - Implementation header
src/ripple/overlay/impl/OverlayImpl.cpp - Core implementation
Peer Management:
src/ripple/overlay/Peer.h - Peer interface
src/ripple/overlay/impl/PeerImp.h - Peer implementation
src/ripple/overlay/impl/PeerImp.cpp - Peer logic
Connection Handling:
src/ripple/overlay/impl/ConnectAttempt.h - Outbound connections
src/ripple/overlay/impl/InboundHandoff.h - Inbound connections
Message Processing:
src/ripple/overlay/impl/ProtocolMessage.h - Message definitions
src/ripple/overlay/impl/Message.cpp - Message handling
Overlay Class
PeerImp Class
Finding Connection Logic
Search for connection establishment:
Tracing Message Flow
Follow message from receipt to processing:
Objective: Understand your node's position in the network and analyze peer connections.
Part 1: Initial Network State
Step 1: Get current peer list
Step 2: Analyze the output
Count:
Total peers
Outbound vs inbound connections
Peer versions
Geographic distribution (if known)
Questions:
Do you have the target number of peers?
Is the outbound/inbound ratio balanced?
Are you connected to validators in your UNL?
Part 2: Connection Quality Analysis
Step 1: Enable overlay logging
Step 2: Monitor for 5 minutes
Step 3: Identify patterns
Look for:
Average peer latency
Connection failures
Disconnection reasons
Reconnection attempts
Part 3: Connectivity Test
Step 1: Manually connect to a peer
Step 2: Verify connection
Step 3: Observe handshake in logs
Part 4: Network Health Check
Step 1: Check peer count over time
Step 2: Monitor connection churn
Step 3: Assess stability
Calculate:
Connection churn rate (disconnections per hour)
Average peer lifetime
Reconnection frequency
Part 5: Peer Quality Distribution
Step 1: Extract peer metrics
From peers output, record for each peer:
Latency
Uptime
Complete ledgers range
Step 2: Create distribution charts
Latency distribution:
Step 3: Identify issues
Are any peers consistently high-latency?
Do any peers have incomplete ledger history?
Are there peers with low uptime?
Analysis Questions
Answer these based on your observations:
What's your average peer latency?
Is it acceptable (<200ms)?
How stable are your connections?
High churn may indicate network issues
✅ Mesh Topology: Decentralized network with no single point of failure
✅ Three Connection Types: Outbound, inbound, and fixed connections serve different purposes
✅ Multi-Mechanism Discovery: DNS seeds, configured peers, and gossip protocol enable robust peer discovery
✅ Connection Quality: Continuous monitoring and scoring of peer quality
✅ Intelligent Routing: Message-specific routing strategies optimize network efficiency
✅ Squelch Algorithm: Prevents message loops and duplicate processing
✅ Priority Queuing: Ensures critical messages are transmitted first
✅ Target Peer Count: Based on node_size configuration
✅ Balanced Connections: ~50% outbound, ~50% inbound
✅ Quality Metrics: Latency, message rate, error rate, uptime
✅ Connection Pruning: Low-quality peers replaced with better alternatives
✅ Fixed Peer Priority: Critical connections maintained aggressively
✅ Codebase Location: Overlay implementation in src/ripple/overlay/
✅ Configuration: Understanding [ips], [ips_fixed], [port_peer] sections
✅ Monitoring: Using RPC commands and logs to assess network health
✅ Debugging: Tracing connection issues and message flow
Symptoms: Active peers consistently below target
Possible Causes:
Firewall blocking inbound connections
ISP blocking port
Poor peer quality (all disconnect quickly)
Solutions:
Symptoms: Average latency >200ms
Possible Causes:
Geographic distance to peers
Network congestion
Poor quality peers
Solutions:
Symptoms: High connection churn rate
Possible Causes:
Network instability
Protocol incompatibility
Being saturated by other peers
Solutions:
Symptoms: Not connected to any UNL validators
Possible Causes:
Validators are unreachable
Validators' connection slots full
Network configuration issues
Solutions:
XRP Ledger Dev Portal:
Peer Protocol:
Server Configuration:
src/ripple/overlay/ - Overlay network implementation
src/ripple/overlay/impl/PeerImp.cpp - Peer connection handling
src/ripple/overlay/impl/OverlayImpl.cpp - Core overlay logic
- Protocol message formats and communication
- How consensus uses overlay network
- How overlay integrates with application
Are you well-connected to validators?
Check against your UNL
What's your network position?
Are you mostly receiving or mostly sending connections?
Do you see any problematic peers?
High latency, frequent disconnections?
How does your node handle connection limits?
Does it maintain target peer count?
Understanding the complete lifecycle of a transaction—from the moment it's created to its final inclusion in a validated ledger—is crucial for developing applications on the XRP Ledger, debugging transaction issues, and optimizing transaction processing. Every transaction follows a well-defined path through multiple validation stages, consensus rounds, and finalization steps.
This deep dive traces the entire journey of a transaction, explaining each phase, the checks performed, the state transitions, and how to monitor and query transaction status at every step. Whether you're building a wallet, an exchange integration, or contributing to the core protocol, mastering the transaction lifecycle is essential.
Fast Path (ideal conditions):
Slow Path (transaction arrives late in open phase):
Before submission, a transaction must be properly constructed:
Universal Fields (all transaction types):
TransactionType - Type of transaction (Payment, OfferCreate, etc.)
Account - Source account (sender)
Fee - Transaction fee in drops (1 XRP = 1,000,000 drops)
Optional but Recommended:
LastLedgerSequence - Expiration ledger (transaction invalid after this)
SourceTag / DestinationTag - Integer tags for routing/identification
Memos - Arbitrary data attached to transaction
Single Signature:
Multi-Signature:
The transaction hash (ID) is calculated from the signed transaction:
Important: The hash is deterministic—the same signed transaction always produces the same hash.
Method 1: RPC Submit
Submit via JSON-RPC:
Response:
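A sketch of submitting a pre-signed blob over JSON-RPC, assuming the admin RPC port (5005) from the configuration example earlier, followed by a trimmed response (all values illustrative):

curl -s -X POST http://localhost:5005/ -d '{
  "method": "submit",
  "params": [{"tx_blob": "<signed_transaction_hex>"}]
}'

{
  "result": {
    "engine_result": "tesSUCCESS",
    "engine_result_message": "The transaction was applied. Only final in a validated ledger.",
    "status": "success"
  }
}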
Method 2: WebSocket Submit
Real-time submission with streaming updates:
Method 3: Peer Network Submission
Transactions submitted to one node propagate to all nodes:
Even if submitted to a non-validator, the transaction reaches validators through peer-to-peer propagation.
Immediate response indicates initial validation result:
Accepted Codes (tentative):
tesSUCCESS - Transaction applied to open ledger
terQUEUED - Transaction queued (network busy)
Temporary Failure (can retry):
terPRE_SEQ - Sequence too high, earlier tx needed
tefPAST_SEQ - Sequence too low (already used); resubmit with the correct sequence
Permanent Failure (don't retry):
temMALFORMED - Malformed transaction
temBAD_FEE - Invalid fee
temBAD_SIGNATURE - Invalid signature
Before accessing ledger state, static validation occurs:
Checks Performed:
✓ Cryptographic signature valid
✓ Transaction format correct
✓ Required fields present
✓ Fee sufficient
Why Preflight Matters: Catches obvious errors before expensive ledger state access.
Read-only validation against current ledger state:
Checks Performed:
✓ Source account exists
✓ Sequence number correct
✓ Sufficient balance (including fee)
✓ Destination account requirements met
Transaction is tentatively applied to provide immediate feedback:
Open Ledger Characteristics:
Not Final: Open ledger is tentative, changes frequently
No Consensus: Local view only, other nodes may differ
Immediate Feedback: Clients get instant response
Can Change: Transaction may be removed or re-ordered
Why It Matters:
Users get immediate confirmation
Wallets can show pending transactions
Applications can provide real-time updates
Once applied to open ledger, transaction broadcasts to peers:
Propagation Speed:
Local network: < 100ms
Global network: 200-500ms
All nodes receive transaction within 1 second
Deduplication:
Nodes track recently seen transactions
Duplicate transactions not re-processed
Prevents network flooding
As ledger close approaches, validators build transaction sets:
Validators exchange proposals and converge:
Round 1: Initial proposals
Round 2: Converge on high-agreement transactions
Transaction Inclusion Criteria:
80% of UNL must agree to include
Transaction must still be valid
Must not have expired (LastLedgerSequence)
After consensus, transactions are applied in canonical order:
DoApply Execution:
Result Codes:
tesSUCCESS - Transaction succeeded
tecUNFUNDED - Failed but fee charged
tecNO_TARGET - Failed but fee charged
Important: Even failed transactions (tec codes) consume the fee and advance the sequence number.
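In practice this means tec results must be treated as finalized failures that still cost the sender. A sketch, assuming a connected client and a transaction hash that made it into a validated ledger:

const { result } = await client.request({ command: 'tx', transaction: hash });
const code = result.meta.TransactionResult;
if (code.startsWith('tec')) {
  // The metadata still shows the sender's Balance reduced by Fee
  // and its Sequence advanced by one.
  console.log(`Failed with ${code}, but the fee was charged anyway`);
}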
After all transactions are applied:
Ledger Hash Calculation:
Validators sign the closed ledger:
When quorum is reached, ledger becomes fully validated:
Characteristics of Validated Ledger:
Immutable: Cannot be changed
Permanent: Part of ledger history forever
Canonical: All nodes have identical copy
Final: Transactions cannot be reversed
Method 1: tx RPC
Query by transaction hash:
Response:
Key Fields:
validated: true = in validated ledger, false = pending
meta.TransactionResult: Final result code
ledger_index: Which ledger contains transaction
Method 2: account_tx RPC
Query all transactions for an account:
Lists transactions in reverse chronological order.
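A sketch of the same query through xrpl.js (assumes a connected client; limit is just a page-size cap, and field names follow the classic API):

const resp = await client.request({
  command: 'account_tx',
  account: 'rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca',
  limit: 5
});
// Each entry carries the transaction, its metadata, and a validated flag
for (const entry of resp.result.transactions) {
  console.log(entry.tx.hash, entry.meta.TransactionResult, entry.validated);
}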
Method 3: WebSocket Subscriptions
Real-time transaction monitoring:
Subscription Types:
accounts - Transactions affecting specific accounts
transactions - All transactions network-wide
ledger - Ledger close events
Metadata records the effects of a transaction:
AffectedNodes Types:
CreatedNode - New ledger object created
ModifiedNode - Existing object modified
DeletedNode - Object deleted
Key Metadata Fields:
TransactionIndex - Position in ledger
TransactionResult - Final result code
delivered_amount - Actual amount delivered (for partial payments)
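When crediting incoming payments, always read delivered_amount rather than Amount; with partial payments the two can differ. A sketch, assuming result is the parsed response of a tx call for a Payment:

const delivered = result.meta.delivered_amount;
if (delivered === 'unavailable') {
  // Very old transactions (pre-2014) predate this field
  console.log('delivered_amount not recorded for this transaction');
} else if (typeof delivered === 'string') {
  console.log(`Delivered ${delivered} drops of XRP`);
} else {
  console.log(`Delivered ${delivered.value} ${delivered.currency}`);
}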
Transactions can specify an expiration:
Behavior:
If the transaction is not included by the ledger named in LastLedgerSequence (75234567 in the example later on this page), it becomes permanently invalid
Prevents transactions from being stuck indefinitely
Recommended: Set to current ledger + 4
Checking Expiration:
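A minimal client-side sketch, assuming a connected xrpl.js client and a prepared transaction tx:

// Compare the current working ledger against the transaction's cutoff
const current = await client.request({ command: 'ledger_current' });
const currentIndex = current.result.ledger_current_index;
if (tx.LastLedgerSequence !== undefined && currentIndex > tx.LastLedgerSequence) {
  console.log('Expired: rebuild with a fresh LastLedgerSequence and resubmit');
}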
Objective: Submit a transaction and observe it at each phase of the lifecycle.
Part 1: Prepare and Submit
Step 1: Create and fund test accounts
Step 2: Prepare transaction with monitoring
Step 3: Submit and record time
Part 2: Monitor Progress
Step 4: Subscribe to transaction
Step 5: Poll for status
Part 3: Analyze Results
Step 6: Examine metadata
Analysis Questions
Answer these based on your observations:
How long did each phase take?
Submission to initial result: ___ ms
Initial result to validated: ___ ms
Total time: ___ ms
✅ 11-Phase Journey: Transactions go through creation, submission, validation, consensus, application, and finalization
✅ Multiple Validation Stages: Preflight (static), Preclaim (state-based), DoApply (execution)
✅ Open Ledger Preview: Tentative application provides immediate feedback before consensus
✅ Consensus Inclusion: Validators must agree (>80%) to include transaction
✅ Canonical Order: Deterministic ordering ensures all nodes reach identical state
✅ Immutable Finality: Once validated, transactions cannot be reversed
✅ Metadata Records Effects: Complete record of all ledger modifications
✅ Fast Path: ~7 seconds submission to validation
✅ Slow Path: ~30 seconds if submitted late in open phase
✅ Network Propagation: <1 second to reach all nodes
✅ Consensus Round: 3-5 seconds
✅ Transaction Construction: Proper signing and field selection
✅ Status Monitoring: Using tx, account_tx, and subscriptions
✅ Error Handling: Understanding result codes (tem/tef/ter/tec/tes)
✅ Expiration Management: Setting LastLedgerSequence appropriately
✅ Metadata Analysis: Understanding transaction effects
Symptoms: Transaction not validating after 30+ seconds
Possible Causes:
Insufficient fee (transaction queued)
Network congestion
Sequence gap (earlier transaction missing)
Solutions:
Symptoms: Sequence number already used
Cause: Sequence out of sync or transaction already processed
Solution:
Symptoms: tx command returns "txnNotFound"
Possible Causes:
Transaction not yet in validated ledger
Transaction expired (LastLedgerSequence)
Transaction rejected during validation
Solution:
Symptoms: Transaction failed with fee charged
Cause: Insufficient balance between submission and execution
Prevention:
XRP Ledger Dev Portal:
Transaction Types:
Transaction Results:
src/ripple/app/tx/impl/Transactor.cpp - Transaction processing
src/ripple/app/misc/NetworkOPs.cpp - Network operations and transaction handling
src/ripple/app/ledger/OpenLedger.cpp - Open ledger management
- How transactions are validated and executed
- How transactions are included in consensus
- How transactions are propagated across the network
┌─────────────────────────────────────────────┐
│ Application Layer │
│ (Consensus, Transactions, Ledger) │
├─────────────────────────────────────────────┤
│ Overlay Network Layer │
│ (Peer Discovery, Connection Mgmt, │
│ Message Routing) │
├─────────────────────────────────────────────┤
│ Transport Layer (TCP/TLS) │
├─────────────────────────────────────────────┤
│ Internet Layer (IP) │
└─────────────────────────────────────────────┘
[ips]
# DNS or IP addresses to connect to
r.ripple.com 51235
s1.ripple.com 51235
s2.ripple.com 51235
[port_peer]
port = 51235
ip = 0.0.0.0 # Listen on all interfaces
protocol = peer
[ips_fixed]
# Always maintain connections to these peers
validator1.example.com 51235
validator2.example.com 51235
cluster-peer.example.com 51235
tiny: 10 peers
small: 15 peers
medium: 20 peers (default)
large: 30 peers
huge: 40 peers
[ips]
r.ripple.com 51235
s1.ripple.com 51235
validator.example.com 51235
[ips_fixed]
critical-peer.example.com 51235
[ips]
# These resolve via DNS
r.ripple.com 51235
s1.ripple.com 51235
$ dig +short r.ripple.com
54.186.73.52
54.184.149.41
52.24.169.78
┌──────────────┐
│ Disconnected│
└──────┬───────┘
│ initiate()
↓
┌──────────────┐
│ Connecting │ ← TCP handshake, TLS negotiation
└──────┬───────┘
│ connected()
↓
┌──────────────┐
│ Connected │ ← Protocol handshake in progress
└──────┬───────┘
│ handshake complete
↓
┌──────────────┐
│ Active │ ← Fully operational, exchanging messages
└──────┬───────┘
│ close() or error
↓
┌──────────────┐
│ Closing │ ← Graceful shutdown
└──────┬───────┘
│
↓
┌──────────────┐
│ Closed │
└──────────────┘
Client                               Server
│ │
│──────── SYN ──────────────────────>│
│ │
│<─────── SYN-ACK ──────────────────│
│ │
│──────── ACK ──────────────────────>│
│ │
│ TCP Connection Established │
[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer
Client                               Server
│ │
│──────── ClientHello ──────────────>│
│ │
│<─────── ServerHello ──────────────│
│<─────── Certificate ──────────────│
│<─────── ServerHelloDone ──────────│
│ │
│──────── ClientKeyExchange ────────>│
│──────── ChangeCipherSpec ─────────>│
│──────── Finished ─────────────────>│
│ │
│<─────── ChangeCipherSpec ─────────│
│<─────── Finished ─────────────────│
│ │
│ Encrypted Channel Established │
message TMHello {
required uint32 protoVersion = 1; // Protocol version
required uint32 protoVersionMin = 2; // Minimum supported version
required bytes publicKey = 3; // Node's public key
optional bytes nodePrivate = 4; // Proof of key ownership
required uint32 ledgerIndex = 5; // Current ledger index
optional bytes ledgerClosed = 6; // Closed ledger hash
optional bytes ledgerPrevious = 7; // Previous ledger hash
optional uint32 closedTime = 8; // Ledger close time
}
// Same TMHello structure with receiver's information
bool validateHandshake(TMHello const& hello)
{
// Check protocol version compatibility
if (hello.protoVersion < minSupportedVersion)
return false;
if (hello.protoVersionMin > currentVersion)
return false;
// Verify public key
if (!isValidPublicKey(hello.publicKey()))
return false;
// Verify key ownership proof
if (!verifySignature(hello.nodePrivate(), hello.publicKey()))
return false;
// Check we're on same network (same genesis ledger)
if (!isSameNetwork(hello.ledgerClosed()))
return false;
return true;
}
Node A: version 1.7.0, min 1.5.0
Node B: version 1.6.0, min 1.4.0
Check: max(1.5.0, 1.4.0) ≤ min(1.7.0, 1.6.0)
1.5.0 ≤ 1.6.0 ✓ Compatible
Use protocol version: 1.6.0 (minimum of max versions)
enum DisconnectReason
{
drBadData, // Malformed handshake
drProtocol, // Protocol incompatibility
drSaturated, // Too many connections
drDuplicate, // Already connected to this peer
drNetworkID, // Different network (testnet vs mainnet)
drBanned, // Peer is banned
drSelf, // Trying to connect to self
};
// Maximum connections from single IP
constexpr size_t maxPeersPerIP = 2;
// Prevents single entity from dominating connections
bool acceptConnection(IPAddress const& ip)
{
auto count = countConnectionsFromIP(ip);
return count < maxPeersPerIP;
}
tiny: max 10 connections
small: max 21 connections
medium: max 40 connections
large: max 62 connections
huge: max 88 connections
bool shouldAcceptConnection(Peer const& peer)
{
// Always accept fixed peers
if (isFixed(peer))
return true;
// Check against limits for regular peers
if (activeConnections() >= maxConnections())
return false;
return true;
}
// Ping-pong protocol
void sendPing()
{
auto ping = std::make_shared<protocol::TMPing>();
ping->set_type(protocol::TMPing::ptPING);
ping->set_seq(nextPingSeq_++);
ping->set_timestamp(now());
send(ping);
}
void onPong(protocol::TMPing const& pong)
{
auto latency = now() - pong.timestamp();
updateLatencyMetrics(latency);
}
void trackMessageRate()
{
messagesReceived_++;
auto elapsed = now() - windowStart_;
if (elapsed >= 1s)
{
messageRate_ = messagesReceived_ / elapsed.count();
messagesReceived_ = 0;
windowStart_ = now();
}
}
void onProtocolError()
{
errorCount_++;
if (errorCount_ > maxErrorThreshold)
{
// Disconnect problematic peer
disconnect(drBadData);
}
}
auto uptime = now() - connectionTime_;
int calculatePeerScore(Peer const& peer)
{
int score = 100; // Start with perfect score
// Penalize high latency
if (peer.latency() > 500ms)
score -= 20;
else if (peer.latency() > 200ms)
score -= 10;
// Penalize low message rate (inactive peer)
if (peer.messageRate() < 0.1)
score -= 15;
// Penalize errors
score -= peer.errorCount() * 5;
// Reward long uptime
if (peer.uptime() > 24h)
score += 10;
return std::max(0, std::min(100, score));
}
void pruneConnections()
{
if (activeConnections() <= targetConnections())
return;
// Sort peers by score (lowest first)
auto peers = getAllPeers();
std::sort(peers.begin(), peers.end(),
[](auto const& a, auto const& b)
{
return a->score() < b->score();
});
// Disconnect lowest-scoring non-fixed peers
for (auto& peer : peers)
{
if (isFixed(peer))
continue; // Never disconnect fixed peers
peer->disconnect(drSaturated);
if (activeConnections() <= targetConnections())
break;
}
}
Duration calculateReconnectDelay(int attempts)
{
// Exponential backoff with jitter
auto delay = minDelay * std::pow(2, attempts);
delay = std::min(delay, maxDelay);
// Add random jitter (±25%)
auto jitter = delay * (0.75 + random() * 0.5);
return jitter;
}
// Example progression:
// Attempt 1: ~5 seconds
// Attempt 2: ~10 seconds
// Attempt 3: ~20 seconds
// Attempt 4: ~40 seconds
// Attempt 5+: ~60 seconds (capped)
void scheduleReconnect(Peer const& peer)
{
Duration delay;
if (isFixed(peer))
{
// Aggressive reconnection for fixed peers
delay = 5s;
}
else
{
// Exponential backoff for regular peers
delay = calculateReconnectDelay(peer.reconnectAttempts());
}
scheduleJob(delay, [this, peer]()
{
attemptConnection(peer.address());
});
}
void broadcastCritical(std::shared_ptr<Message> const& msg)
{
for (auto& peer : getAllPeers())
{
// Send to everyone
peer->send(msg);
}
}
void relayTransaction(
std::shared_ptr<Message> const& msg,
Peer* source)
{
for (auto& peer : getAllPeers())
{
// Don't echo back to source
if (peer.get() == source)
continue;
// Check if peer likely already has it
if (peerLikelyHas(peer, msg))
continue;
// Send to peer
peer->send(msg);
}
}
void requestLedgerData(
LedgerHash const& hash,
Peer* peer)
{
auto request = makeGetLedgerMessage(hash);
peer->send(request); // Send only to this peer
}
Node A → sends to B
Node B → receives from A
Node B → broadcasts to all (including A)
Node A → receives echo from B
Node A → broadcasts again...
(infinite loop)
void onMessageReceived(
std::shared_ptr<Message> const& msg,
Peer* source)
{
// Track message hash
auto hash = msg->getHash();
// Have we seen this before?
if (recentMessages_.contains(hash))
return; // Ignore duplicate
// Record that we've seen it
recentMessages_.insert(hash);
// Process message
processMessage(msg);
// Relay to others (excluding source)
relayToOthers(msg, source);
}
enum MessagePriority
{
priVeryHigh, // Validations, critical consensus
priHigh, // Proposals, status changes
priMedium, // Transactions
priLow, // Historical data, maintenance
};
class PeerMessageQueue
{
private:
std::map<MessagePriority, std::queue<Message>> queues_;
public:
void enqueue(Message msg, MessagePriority priority)
{
queues_[priority].push(msg);
}
Message dequeue()
{
// Dequeue from highest priority non-empty queue
for (auto& [priority, queue] : queues_)
{
if (!queue.empty())
{
auto msg = queue.front();
queue.pop();
return msg;
}
}
throw std::runtime_error("No messages");
}
};
size_t activePeers = overlay.size();
bool isHealthy = activePeers >= (targetPeers * 0.75);
size_t outbound = countOutboundPeers();
size_t inbound = countInboundPeers();
float ratio = float(outbound) / inbound;
// Healthy: ratio between 0.5 and 2.0
bool balancedConnections = (ratio > 0.5 && ratio < 2.0);
auto avgLatency = calculateAverageLatency(getAllPeers());
// Healthy: < 200ms average
bool lowLatency = avgLatency < 200ms;
auto totalRate = sumMessageRates(getAllPeers());
// Messages per second across all peers
auto validatorPeers = countValidatorPeers();
auto unlSize = getUNLSize();
// Should be connected to most of UNL
bool goodValidatorConnectivity =
validatorPeers >= (unlSize * 0.8);
rippled peers
{
"result": {
"peers": [
{
"address": "54.186.73.52:51235",
"latency": 45,
"uptime": 3600,
"version": "rippled-1.9.0",
"public_key": "n9KorY8QtTdRx...",
"complete_ledgers": "32570-75234891"
}
// ... more peers
]
}
}
rippled peer_reservations_add <public_key> <description>
rippled peer_reservations_list
rippled connect 192.168.1.100:51235
[rpc_startup]
{ "command": "log_level", "partition": "Overlay", "severity": "trace" }
"Overlay": "Connected to peer 54.186.73.52:51235"
"Overlay": "Disconnected from peer 54.186.73.52:51235, reason: saturated"
"Overlay": "Handshake failed with peer: protocol version mismatch"
"Overlay": "Received invalid message from peer, closing connection"
"Overlay": "Active peers: 18/20 (target)"class Overlay
{
public:
// Start/stop overlay network
virtual void start() = 0;
virtual void stop() = 0;
// Peer management
virtual void connect(std::string const& ip) = 0;
virtual std::size_t size() const = 0;
// Message broadcasting
virtual void broadcast(std::shared_ptr<Message> const&) = 0;
virtual void relay(
std::shared_ptr<Message> const&,
Peer* source = nullptr) = 0;
// Peer information
virtual Json::Value json() = 0;
virtual std::vector<Peer::ptr> getActivePeers() = 0;
};
class PeerImp : public Peer
{
public:
// Send message to this peer
void send(std::shared_ptr<Message> const& m) override;
// Process received message
void onMessage(std::shared_ptr<Message> const& m);
// Connection state
bool isConnected() const;
void disconnect(DisconnectReason reason);
// Quality metrics
std::chrono::milliseconds latency() const;
int score() const;
private:
// Connection management
boost::asio::ip::tcp::socket socket_;
boost::asio::ssl::stream<socket_t&> stream_;
// Message queues
std::queue<std::shared_ptr<Message>> sendQueue_;
// Metrics
std::chrono::steady_clock::time_point connected_;
std::chrono::milliseconds latency_;
int score_;
};
// In OverlayImpl.cpp
void OverlayImpl::connect(std::string const& ip)
{
// Parse IP and port
auto endpoint = parseEndpoint(ip);
// Create connection attempt
auto attempt = std::make_shared<ConnectAttempt>(
app_,
io_service_,
endpoint,
peerFinder_.config());
// Begin async connection
attempt->run();
}
// PeerImp::onMessage (entry point)
void PeerImp::onMessage(std::shared_ptr<Message> const& msg)
{
// Check for duplicates (squelch)
if (app_.overlay().hasSeen(msg->getHash()))
return;
// Mark as seen
app_.overlay().markSeen(msg->getHash());
// Process based on type
switch (msg->getType())
{
case protocol::mtTRANSACTION:
onTransaction(msg);
break;
case protocol::mtVALIDATION:
onValidation(msg);
break;
// ... other types
}
// Relay to other peers
app_.overlay().relay(msg, this);
}
rippled peers > peers_initial.json
rippled log_level Overlay debug
tail -f /var/log/rippled/debug.log | grep -E "latency|score|disconnect"
# Connect to XRP Ledger Foundation validator
rippled connect r.ripple.com:51235
rippled peers | grep "r.ripple.com"
"Overlay": "Connected to r.ripple.com:51235"
"Overlay": "Handshake complete with peer n9KorY8..."
"Overlay": "Added peer n9KorY8... to active peers"
# Run every minute for 10 minutes
for i in {1..10}; do
echo "$(date): $(rippled peers | grep -c address) peers"
sleep 60
done
# Count new connections and disconnections
grep -c "Connected to peer" /var/log/rippled/debug.log
grep -c "Disconnected from peer" /var/log/rippled/debug.log0-50ms: |||||| (6 peers)
51-100ms: |||||||||| (10 peers)
101-200ms: ||| (3 peers)
201+ms: | (1 peer)
# Check firewall
sudo iptables -L | grep 51235
# Verify port is accessible
telnet your-ip 51235
# Check if node is reachable
rippled server_info | grep pubkey_node
# Manually connect to closer peers
rippled connect low-latency-peer.example.com:51235
# Add fixed peers in same region
[ips_fixed]
local-peer-1.example.com 51235
local-peer-2.example.com 51235
# Check logs for disconnect reasons
grep "Disconnected" /var/log/rippled/debug.log
# Look for patterns
grep "Disconnected.*reason" /var/log/rippled/debug.log | \
cut -d: -f4 | sort | uniq -c
# Manually connect to validators
rippled connect validator.example.com:51235
# Use fixed connections for validators
[ips_fixed]
validator1.example.com 51235
validator2.example.com 51235
Sequence - Account sequence number (nonce)
SigningPubKey - Public key used for signing
TxnSignature - Cryptographic signature (or multi-signatures)
✓ No contradictory fields
✓ Account flags permit operation
What was the initial result?
Did it apply to open ledger?
Which ledger included the transaction?
Ledger index?
How many ledgers closed between submission and inclusion?
What was the metadata?
Which nodes were affected?
What balances changed?
Did the transaction expire?
Was LastLedgerSequence set?
How close to expiration was it?
┌─────────────────────────────────────────────────────────────┐
│ 1. Transaction Creation │
│ (Client creates and signs transaction) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 2. Transaction Submission │
│ (Submit via RPC, WebSocket, or peer network) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 3. Initial Validation │
│ (Signature check, format validation, preflight) │
└──────────────────────────┬──────────────────────────────────┘
│
┌────────┴────────┐
│ │
↓ ↓
✓ Valid ✗ Invalid
│ │
│ └──→ Rejected (temMALFORMED, etc.)
↓
┌─────────────────────────────────────────────────────────────┐
│ 4. Preclaim Validation │
│ (Check ledger state, balance, account existence) │
└──────────────────────────┬──────────────────────────────────┘
│
┌────────┴────────┐
│ │
↓ ↓
✓ Valid ✗ Invalid
│ │
│ └──→ Rejected (tecUNFUNDED, etc.)
↓
┌─────────────────────────────────────────────────────────────┐
│ 5. Open Ledger Application │
│ (Tentatively apply to open ledger for preview) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 6. Network Propagation │
│ (Broadcast to peers via tmTRANSACTION) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 7. Consensus Round │
│ (Validators propose and agree on transaction set) │
└──────────────────────────┬──────────────────────────────────┘
│
┌────────┴────────┐
│ │
↓ ↓
Included in Set Not Included
│ │
│ └──→ Deferred to next round
↓
┌─────────────────────────────────────────────────────────────┐
│ 8. Canonical Application (DoApply) │
│ (Apply to ledger in canonical order, final) │
└──────────────────────────┬──────────────────────────────────┘
│
┌────────┴────────┐
│ │
↓ ↓
tesSUCCESS tecFAILURE
│ │
│ └──→ Failed but fee charged
↓
┌─────────────────────────────────────────────────────────────┐
│ 9. Ledger Closure │
│ (Ledger closed with transaction included) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 10. Validation Phase │
│ (Validators sign and broadcast validations) │
└──────────────────────────┬──────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ 11. Fully Validated │
│ (Transaction immutable, part of permanent history) │
└─────────────────────────────────────────────────────────────┘
T+0s: Transaction submitted
T+0.1s: Initial validation complete
T+0.2s: Open ledger application
T+0.3s: Network propagation
T+5s: Consensus round completes
T+5.1s: Canonical application
T+5.2s: Ledger closed
T+7s: Validations collected
T+7s: Transaction fully validated
Total: ~7 seconds from submission to finality
T+0s: Transaction submitted
T+0.1s: Validation complete
T+25s: Consensus round starts (waiting for ledger close)
T+30s: Consensus completes
T+30.1s: Canonical application
T+30.2s: Ledger closed
T+32s: Transaction fully validated
Total: ~32 seconds (depends on when submitted)
{
"TransactionType": "Payment",
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca",
"Destination": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w",
"Amount": "1000000",
"Fee": "12",
"Sequence": 42,
"LastLedgerSequence": 75234567,
"SigningPubKey": "03AB40A0490F9B7ED8DF29D246BF2D6269820A0EE7742ACDD457BEA7C7D0931EDB",
"TxnSignature": "30450221008..."
}
// Using xrpl.js
const xrpl = require('xrpl');
// Create transaction
const tx = {
TransactionType: 'Payment',
Account: wallet.address,
Destination: 'rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w',
Amount: '1000000',
Fee: '12'
};
// Auto-fill (adds Sequence, LastLedgerSequence, etc.)
const prepared = await client.autofill(tx);
// Sign transaction
const signed = wallet.sign(prepared);
console.log(signed.tx_blob); // Signed transaction blob
console.log(signed.hash); // Transaction hash
// For accounts with SignerList
const tx = {
TransactionType: 'Payment',
Account: multiSigAccount,
Destination: 'rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w',
Amount: '1000000',
Fee: '12',
SigningPubKey: '', // Empty for multi-sig
Sequence: 42
};
// Each signer signs independently (second argument true = multisign mode)
const signer1Sig = wallet1.sign(tx, true);
const signer2Sig = wallet2.sign(tx, true);
// Combine the individually signed blobs into one multi-signed transaction
const multisigned = xrpl.multisign([signer1Sig.tx_blob, signer2Sig.tx_blob]);
// Simplified hash calculation
uint256 calculateTransactionID(STTx const& tx)
{
// Serialize the entire signed transaction
Serializer s;
tx.add(s);
// Hash with SHA-512 and take first 256 bits
return s.getSHA512Half();
}
curl -X POST https://s1.ripple.com:51234/ \
-H "Content-Type: application/json" \
-d '{
"method": "submit",
"params": [{
"tx_blob": "120000228000000024..."
}]
}'
{
"result": {
"engine_result": "tesSUCCESS",
"engine_result_code": 0,
"engine_result_message": "The transaction was applied.",
"tx_blob": "120000228000000024...",
"tx_json": {
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca",
"Amount": "1000000",
"Destination": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w",
"Fee": "12",
"TransactionType": "Payment",
"hash": "E08D6E9754025BA2534A78707605E0601F03ACE063687A0CA1BDDACFCD1698C7"
}
}
}
// In Node, the WebSocket client comes from the `ws` package
const WebSocket = require('ws');
const ws = new WebSocket('wss://s1.ripple.com');
ws.on('open', () => {
ws.send(JSON.stringify({
command: 'submit',
tx_blob: '120000228000000024...'
}));
});
ws.on('message', (data) => {
const response = JSON.parse(data);
console.log(response.result.engine_result);
});
Client → Node A → Overlay Network → All Nodes
NotTEC Transactor::preflight(PreflightContext const& ctx)
{
// 1. Verify signature
if (!checkSignature(ctx.tx))
return temBAD_SIGNATURE;
// 2. Check transaction format
if (!ctx.tx.isFieldPresent(sfAccount))
return temMALFORMED;
// 3. Validate fee
auto const fee = ctx.tx.getFieldAmount(sfFee);
if (fee < ctx.baseFee)
return telINSUF_FEE_P;
// 4. Check sequence
if (ctx.tx.getSequence() == 0)
return temBAD_SEQUENCE;
// 5. Verify amounts are valid
auto const amount = ctx.tx.getFieldAmount(sfAmount);
if (amount <= zero)
return temBAD_AMOUNT;
return tesSUCCESS;
}
TER Transactor::preclaim(PreclaimContext const& ctx)
{
// 1. Source account must exist
auto const sleAccount = ctx.view.read(
keylet::account(ctx.tx[sfAccount]));
if (!sleAccount)
return terNO_ACCOUNT;
// 2. Check sequence number
auto const txSeq = ctx.tx.getSequence();
auto const acctSeq = (*sleAccount)[sfSequence];
if (txSeq != acctSeq)
return tefPAST_SEQ; // Already used or too high
// 3. Verify sufficient balance
auto const balance = (*sleAccount)[sfBalance];
auto const fee = ctx.tx[sfFee];
auto const amount = ctx.tx[sfAmount];
if (balance < fee + amount)
return tecUNFUNDED_PAYMENT;
// 4. Check destination exists (for Payment)
if (ctx.tx.getTransactionType() == ttPAYMENT)
{
auto const dest = ctx.tx[sfDestination];
auto const sleDest = ctx.view.read(keylet::account(dest));
if (!sleDest)
{
// Can create account if amount >= reserve
if (amount < ctx.view.fees().accountReserve(0))
return tecNO_DST_INSUF_XRP;
}
}
return tesSUCCESS;
}
std::pair<TER, bool>
NetworkOPs::processTransaction(
std::shared_ptr<Transaction> const& transaction)
{
// Apply to open ledger
auto result = app_.openLedger().modify(
[&](OpenView& view, beast::Journal j)
{
return transaction->apply(app_, view, ApplyFlags::tapNONE);
});
if (result.second) // Transaction applied successfully
{
JLOG(j_.trace())
<< "Transaction " << transaction->getID()
<< " applied to open ledger with result: "
<< transToken(result.first);
// Relay to network
app_.overlay().relay(transaction);
return {result.first, true};
}
return {result.first, false};
}
void Overlay::relay(std::shared_ptr<Transaction> const& tx)
{
// Create protocol message
protocol::TMTransaction msg;
msg.set_rawtransaction(tx->getSerialized());
msg.set_status(protocol::tsNEW);
msg.set_receivetimestamp(
std::chrono::system_clock::now().time_since_epoch().count());
// Wrap in Message object
auto m = std::make_shared<Message>(msg, protocol::mtTRANSACTION);
// Broadcast to all peers (except source)
for (auto& peer : getActivePeers())
{
if (peer.get() != source)
peer->send(m);
}
JLOG(j_.trace())
<< "Relayed transaction " << tx->getID()
<< " to " << getActivePeers().size() << " peers";
}
std::set<TxID> buildInitialPosition(OpenView const& openLedger)
{
std::set<TxID> position;
// Include transactions from open ledger
for (auto const& tx : openLedger.transactions())
{
// Only include transactions that:
// 1. Are still valid
// 2. Have sufficient fee
// 3. Haven't expired (LastLedgerSequence)
if (isValidForConsensus(tx))
position.insert(tx.getTransactionID());
}
return position;
}
Validator A proposes: {TX1, TX2, TX3, TX4}
Validator B proposes: {TX1, TX2, TX3, TX5}
Validator C proposes: {TX1, TX2, TX4, TX5}
Agreement:
TX1: 100% ✓
TX2: 100% ✓
TX3: 67%
TX4: 67%
TX5: 67%
All validators propose: {TX1, TX2}
Agreement:
TX1: 100% ✓ (included)
TX2: 100% ✓ (included)
TX3, TX4, TX5 deferred to next ledger
void applyTransactionsCanonically(
Ledger& ledger,
std::set<TxID> const& txSet)
{
// 1. Retrieve transactions from set
std::vector<std::shared_ptr<STTx>> transactions;
for (auto const& id : txSet)
{
auto tx = fetchTransaction(id);
transactions.push_back(tx);
}
// 2. Sort in canonical order
std::sort(transactions.begin(), transactions.end(),
[](auto const& a, auto const& b)
{
// Sort by account, then sequence
if (a->getAccountID(sfAccount) != b->getAccountID(sfAccount))
return a->getAccountID(sfAccount) < b->getAccountID(sfAccount);
return a->getSequence() < b->getSequence();
});
// 3. Apply each transaction
for (auto const& tx : transactions)
{
auto const result = applyTransaction(ledger, tx);
// Record in metadata
ledger.recordTransaction(tx, result);
JLOG(j_.debug())
<< "Applied " << tx->getTransactionID()
<< " with result " << transToken(result);
}
}
TER Payment::doApply()
{
// 1. Debit source account
auto const result = accountSend(
view(),
account_,
ctx_.tx[sfDestination],
ctx_.tx[sfAmount],
j_);
if (result != tesSUCCESS)
return result;
// 2. Update sequence number
auto const sleAccount = view().peek(keylet::account(account_));
(*sleAccount)[sfSequence] = ctx_.tx.getSequence() + 1;
view().update(sleAccount);
// 3. Record metadata
ctx_.deliver(ctx_.tx[sfAmount]);
return tesSUCCESS;
}
void closeLedger(
std::shared_ptr<Ledger>& ledger,
NetClock::time_point closeTime)
{
// 1. Set close time
ledger->setCloseTime(closeTime);
// 2. Calculate state hash
auto const stateHash = ledger->stateMap().getHash();
ledger->setAccountHash(stateHash);
// 3. Calculate transaction tree hash
auto const txHash = ledger->txMap().getHash();
ledger->setTxHash(txHash);
// 4. Calculate ledger hash
auto const ledgerHash = ledger->getHash();
JLOG(j_.info())
<< "Closed ledger " << ledger->seq()
<< " hash: " << ledgerHash
<< " txns: " << ledger->txCount();
// 5. Mark as closed
ledger->setClosed();
}
uint256 Ledger::getHash() const
{
Serializer s;
s.add32(seq_); // Ledger sequence
s.add64(closeTime_.count()); // Close time
s.addBitString(parentHash_); // Parent ledger hash
s.addBitString(txHash_); // Transaction tree hash
s.addBitString(accountHash_); // Account state hash
s.add64(totalCoins_); // Total XRP
s.add64(closeTimeResolution_); // Close time resolution
s.add8(closeFlags_); // Flags
return s.getSHA512Half();
}
std::shared_ptr<STValidation> createValidation(
Ledger const& ledger,
SecretKey const& secretKey)
{
auto validation = std::make_shared<STValidation>(
ledger.getHash(),
ledger.seq(),
NetClock::now(),
publicKeyFromSecretKey(secretKey),
calculateNodeID(publicKeyFromSecretKey(secretKey)),
[&](STValidation& v)
{
v.setFlag(vfFullValidation);
v.sign(secretKey);
});
JLOG(j_.info())
<< "Created validation for ledger " << ledger.seq()
<< " hash: " << ledger.getHash();
return validation;
}
void broadcastValidation(std::shared_ptr<STValidation> const& val)
{
// Create protocol message
protocol::TMValidation msg;
Serializer s;
val->add(s);
msg.set_validation(s.data(), s.size());
// Broadcast to all peers
auto m = std::make_shared<Message>(msg, protocol::mtVALIDATION);
overlay().broadcast(m);
JLOG(j_.trace())
<< "Broadcast validation for ledger " << val->getLedgerSeq();
}
bool hasValidationQuorum(
LedgerHash const& hash,
std::set<NodeID> const& validators)
{
auto const& unl = getUNL();
size_t validationCount = 0;
for (auto const& validator : validators)
{
if (unl.contains(validator))
validationCount++;
}
// Need >80% of UNL
return validationCount >= (unl.size() * 4 / 5);
}
void markLedgerValidated(std::shared_ptr<Ledger> const& ledger)
{
// 1. Mark as validated
ledger->setValidated();
// 2. Add to validated chain
ledgerMaster_.addValidatedLedger(ledger);
// 3. Update current validated ledger
ledgerMaster_.setValidatedLedger(ledger);
// 4. Publish to subscribers
publishValidatedLedger(ledger);
// 5. Start next open ledger
openLedger_.accept(
ledger,
orderTx,
consensusParms,
{}); // Empty initial transaction set
JLOG(j_.info())
<< "Ledger " << ledger->seq()
<< " validated with " << validationCount(ledger)
<< " validations";
}
rippled tx E08D6E9754025BA2534A78707605E0601F03ACE063687A0CA1BDDACFCD1698C7
{
"result": {
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca",
"Amount": "1000000",
"Destination": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w",
"Fee": "12",
"Sequence": 42,
"TransactionType": "Payment",
"hash": "E08D6E9754025BA2534A78707605E0601F03ACE063687A0CA1BDDACFCD1698C7",
"ledger_index": 75234567,
"validated": true,
"meta": {
"TransactionIndex": 5,
"TransactionResult": "tesSUCCESS",
"AffectedNodes": [
// Nodes modified by transaction
]
}
}
}
rippled account_tx rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca
ws.send(JSON.stringify({
command: 'subscribe',
accounts: ['rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca']
}));
ws.on('message', (data) => {
const msg = JSON.parse(data);
if (msg.type === 'transaction') {
console.log('Transaction:', msg.transaction);
console.log('Status:', msg.validated ? 'Validated' : 'Pending');
}
});
{
"meta": {
"TransactionIndex": 5,
"TransactionResult": "tesSUCCESS",
"AffectedNodes": [
{
"ModifiedNode": {
"LedgerEntryType": "AccountRoot",
"LedgerIndex": "13F1A95D7AAB7108D5CE7EEAF504B2894B8C674E6D68499076441C4837282BF8",
"PreviousFields": {
"Balance": "10000000",
"Sequence": 42
},
"FinalFields": {
"Balance": "8999988",
"Sequence": 43,
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca"
}
}
},
{
"ModifiedNode": {
"LedgerEntryType": "AccountRoot",
"LedgerIndex": "4F83A2CF7E70F77F79A307E6A472BFC2585B806A70833CCD1C26105BAE0D6E05",
"PreviousFields": {
"Balance": "5000000"
},
"FinalFields": {
"Balance": "6000000",
"Account": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w"
}
}
}
],
"delivered_amount": "1000000"
}
}
{
"TransactionType": "Payment",
"Account": "rN7n7otQDd6FczFgLdlqtyMVrn3HMtthca",
"Destination": "rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w",
"Amount": "1000000",
"LastLedgerSequence": 75234567
}
bool isExpired(STTx const& tx, LedgerIndex currentLedger)
{
if (!tx.isFieldPresent(sfLastLedgerSequence))
return false; // No expiration set
return tx[sfLastLedgerSequence] < currentLedger;
}
# Using testnet
rippled account_info <your_address>
const xrpl = require('xrpl');
const client = new xrpl.Client('wss://s.altnet.rippletest.net:51233');
await client.connect();
const wallet = xrpl.Wallet.fromSeed('sXXXXXXXXXXXXXXXX');
// Create transaction
const tx = {
TransactionType: 'Payment',
Account: wallet.address,
Destination: 'rLNaPoKeeBjZe2qs6x52yVPZpZ8td4dc6w',
Amount: '1000000',
Fee: '12'
};
// Prepare with LastLedgerSequence
const prepared = await client.autofill(tx);
console.log('Prepared TX:', prepared);
console.log('LastLedgerSequence:', prepared.LastLedgerSequence);
// Sign
const signed = wallet.sign(prepared);
console.log('Transaction Hash:', signed.hash);
const startTime = Date.now();
const result = await client.submit(signed.tx_blob);
console.log('Submission Time:', Date.now() - startTime, 'ms');
console.log('Initial Result:', result.result.engine_result);
console.log('Applied to Open Ledger:', result.result.engine_result === 'tesSUCCESS');
await client.request({
command: 'subscribe',
accounts: [wallet.address]
});
client.on('transaction', (tx) => {
console.log('Transaction Event:');
console.log(' Hash:', tx.transaction.hash);
console.log(' Validated:', tx.validated);
console.log(' Ledger Index:', tx.ledger_index);
console.log(' Time:', Date.now() - startTime, 'ms');
if (tx.validated) {
console.log('✓ Transaction fully validated!');
console.log(' Result:', tx.meta.TransactionResult);
}
});
async function checkStatus(hash) {
try {
const tx = await client.request({
command: 'tx',
transaction: hash
});
console.log('Status Check:');
console.log(' Found:', true);
console.log(' Validated:', tx.result.validated);
console.log(' Ledger:', tx.result.ledger_index);
console.log(' Result:', tx.result.meta.TransactionResult);
return tx.result.validated;
} catch (e) {
console.log('Status Check: Not yet in validated ledger');
return false;
}
}
// Poll every second
const pollInterval = setInterval(async () => {
const validated = await checkStatus(signed.hash);
if (validated) {
clearInterval(pollInterval);
console.log('Total time to validation:', Date.now() - startTime, 'ms');
}
}, 1000);
const finalTx = await client.request({
command: 'tx',
transaction: signed.hash
});
console.log('\n=== Final Transaction Analysis ===');
console.log('Transaction Hash:', finalTx.result.hash);
console.log('Ledger Index:', finalTx.result.ledger_index);
console.log('Result Code:', finalTx.result.meta.TransactionResult);
console.log('Transaction Index:', finalTx.result.meta.TransactionIndex);
// Analyze affected nodes
console.log('\nAffected Nodes:');
for (const node of finalTx.result.meta.AffectedNodes) {
  if (node.ModifiedNode) {
    console.log('  Modified:', node.ModifiedNode.LedgerEntryType);
    console.log('  Account:', node.ModifiedNode.FinalFields.Account);
    // Balance fields are drop strings; convert before subtracting.
    // PreviousFields may omit Balance, so guard the access.
    if (node.ModifiedNode.PreviousFields &&
        node.ModifiedNode.PreviousFields.Balance) {
      console.log('  Balance Change:',
        Number(node.ModifiedNode.FinalFields.Balance) -
        Number(node.ModifiedNode.PreviousFields.Balance), 'drops');
    }
  }
}
// Calculate total time
console.log('\nTiming:');
console.log(' Total time:', Date.now() - startTime, 'ms');
# Check transaction status
rippled tx <hash>
# Check for sequence gaps
rippled account_info <account>
# Increase fee and resubmit if needed
// Always fetch current sequence
const accountInfo = await client.request({
command: 'account_info',
account: wallet.address
});
const currentSeq = accountInfo.result.account_data.Sequence;
// Wait for validation or check expiration
const ledger = await client.request({command: 'ledger_current'});
if (ledger.result.ledger_current_index > tx.LastLedgerSequence) {
console.log('Transaction expired');
}
// Always check the spendable balance, net of the account reserve:
// reserve = base reserve + (owner reserve × number of owned objects)
const reserve = baseReserve + ownerCount * ownerReserve;
const available = balance - reserve;
if (amount + fee > available) {
  throw new Error('Insufficient funds');
}