We ran the first throughput benchmarks on the Asentum testnet this week. 500 transactions, then 1000. Multiple submission strategies. The goal wasn't to produce marketing numbers — it was to find bottlenecks.
We found them. Then we fixed them. Then we ran the numbers again.
## The tests
Simple ASE transfers between wallets. Each wallet pre-funded, round-robin transfers so nonces increment cleanly. Three approaches:
- Sequential (500 txs) — submit one tx, wait for response, submit next
- Parallel (500 txs) — submit in batches of 25 concurrent requests
- Batch (1000 txs) — submit via `POST /tx/batch` in chunks of 50, single HTTP request per chunk
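The round-robin pairing is what keeps nonces clean: each wallet sends in turn, so its nonce only ever increments by one per send. A minimal sketch of that scheduling (wallet names and the transaction structure are illustrative, not the actual test harness):

```python
from itertools import cycle

def plan_transfers(wallets, total_txs):
    """Round-robin senders so each wallet's nonce increments cleanly."""
    nonces = {w: 0 for w in wallets}
    ring = cycle(range(len(wallets)))
    plan = []
    for _ in range(total_txs):
        i = next(ring)
        sender = wallets[i]
        receiver = wallets[(i + 1) % len(wallets)]  # next wallet in the ring
        plan.append({"from": sender, "to": receiver, "nonce": nonces[sender]})
        nonces[sender] += 1
    return plan

plan = plan_transfers(["w1", "w2", "w3", "w4"], 500)
```

With 4 wallets and 500 transactions, each wallet ends up sending exactly 125 times with nonces 0 through 124, so no submission can be rejected for a nonce gap regardless of arrival order within a batch.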
## The numbers
| Metric | Sequential (500) | Parallel (500) | Batch (1000) |
|---|---|---|---|
| Submit rate | 12.1 tx/s | 134.5 tx/s | 140 tx/s |
| Effective TPS | 11.1 | 50.9 | 61.5 |
| Block TPS | 11.5 | 75.2 | 166.7 |
| Max txs/block | 26 | 235 | 550 |
| Avg txs/block | 22 | 225.5 | 333 |
| Blocks used | 23 | 2 | 3 |
| Accepted | 500/500 | 500/500 | 1000/1000 |
| Finality | ~2s | ~2s | ~2s |
Zero failures across all three runs. Every transaction accepted and confirmed on-chain.
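Block TPS here is transactions divided by the time span of the blocks that contained them. For the batch run the three blocks were consecutive, so a back-of-envelope check reproduces the table row; this is a sketch of the ratio, not the measurement harness (2-second block time assumed):

```python
BLOCK_TIME_S = 2.0  # observed block cadence

def block_tps(txs, blocks, block_time_s=BLOCK_TIME_S):
    """Throughput over the span of the blocks that held the transactions."""
    return txs / (blocks * block_time_s)

# Batch run: 1000 txs landed in 3 consecutive 2-second blocks.
print(round(block_tps(1000, 3), 1))  # → 166.7
```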
## What we learned
The sequential bottleneck was us, not the chain. At 12.1 tx/s, each submit took ~80ms round-trip over HTTPS. The chain was idle most of the time, packing only 22-26 txs per block because that's all that arrived between blocks.
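With a single request in flight, throughput is capped at one over the round-trip latency, which is exactly what the sequential run shows:

```python
rtt_ms = 80                         # ~80 ms HTTPS round-trip per submit
max_sequential_tps = 1000 / rtt_ms  # hard ceiling with one request in flight
print(max_sequential_tps)           # → 12.5  (measured: 12.1 tx/s)
```

The measured 12.1 tx/s sits just under that 12.5 tx/s ceiling, confirming the client's wait-for-response loop, not the chain, set the pace.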
The parallel run revealed real capacity. 235 transactions in a single block. The chain packed 500 txs into just 2 blocks. Block TPS jumped to 75 — a 6.5x improvement with zero code changes to the chain itself.
The batch endpoint pushed it further. 550 transactions in block #23020. 1000 txs across 3 blocks. Block TPS of 166.7 — the chain processed transactions faster than we could submit them.
The gas limit has massive headroom. 550 transfers at 21,000 gas each = ~11.5M gas. The block gas limit is 30M. We used 38% of the block's capacity. Theoretical max for simple transfers: ~1,428 per block, or ~714 tx/s at 2-second blocks.
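The headroom arithmetic, spelled out (gas figures from the text, 2-second block time):

```python
TRANSFER_GAS = 21_000         # gas per simple transfer
BLOCK_GAS_LIMIT = 30_000_000  # block gas limit
BLOCK_TIME_S = 2              # block time in seconds

used = 550 * TRANSFER_GAS                            # 11,550,000 gas in the biggest block
utilisation = used / BLOCK_GAS_LIMIT                 # ≈ 0.385 → ~38% of capacity
max_txs_per_block = BLOCK_GAS_LIMIT // TRANSFER_GAS  # 1428 simple transfers
ceiling_tps = max_txs_per_block / BLOCK_TIME_S       # 714.0 tx/s theoretical ceiling
```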
Dilithium3 verification is fast enough. Each transaction requires a post-quantum signature verification (~3,300 byte signature). The block builder verified 550 signatures in under 2 seconds. This was the main performance concern with PQC — it's not the bottleneck.
2-second finality is real. Consistent across all three tests. Not "expected" or "targeted" — measured. Every tx confirmed in the block immediately after submission.
## What these numbers are NOT
These are testnet numbers on a 4-validator network processing simple transfers. They don't account for:
- Contract calls (more gas, more state reads/writes)
- Sustained load over hours (state growth, disk I/O)
- Network with 100+ validators (more consensus overhead)
- Adversarial conditions (spam, conflicting txs)
They're a baseline. The first real data point.
## Where we're aiming
- Near-term: 100+ tx/s sustained under mixed workloads (transfers + contract calls)
- Medium-term: 500+ tx/s with block builder optimizations and parallel signature verification
- Theoretical ceiling: ~714 tx/s at current gas limit and block time, achievable without protocol changes
## The batch endpoint
We shipped `POST /tx/batch` during this testing — a new RPC endpoint that accepts an array of signed transactions in a single HTTP request. One connection, one DNS lookup, all transactions processed server-side. This is what enabled the 140 tx/s submit rate and the 550-tx block.
The bottleneck right now is single-threaded block building — sequential transaction application. Parallelising signature verification and state access is on the roadmap.
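The shape of that optimisation is straightforward: signature checks are independent of each other, so they can fan out across workers while state application stays sequential. A sketch with a stand-in verifier (the real builder would call the actual Dilithium3 verify routine, which is not shown here):

```python
from concurrent.futures import ThreadPoolExecutor

def verify_sig(tx):
    # Stand-in for Dilithium3 verification. The real check is CPU-bound,
    # so a process pool or a GIL-releasing native verifier is what pays off.
    return tx["sig"] == f"sig-{tx['nonce']}"

def verify_block(txs, workers=8):
    """Verify every signature in parallel; reject the block on any failure."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_sig, txs))

txs = [{"nonce": n, "sig": f"sig-{n}"} for n in range(550)]
print(verify_block(txs))  # → True: all 550 signatures check out
```

State access is harder to parallelise than verification, since transfers can touch overlapping accounts; verification is the easy win because each check reads only its own transaction.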
For now: the chain works, it packs 550 txs into a block, and we know exactly where to push next.
Blocks #23019–23021 on the explorer if you want to verify.
