Version 1.1 — March 2026

UltraDAG: A Leaderless DAG-BFT Cryptocurrency

Abstract

UltraDAG is a minimal cryptocurrency built on a leaderless DAG-BFT consensus protocol. The entire consensus core is 1,100 lines of Rust across five files. The protocol achieves Byzantine fault tolerance through descendant coverage finality — a vertex is finalized when 2f+1 distinct validators have built on top of it. This implicit voting mechanism eliminates leader election, view changes, and explicit vote messages. The system has been validated through 373 automated tests (all passing) and a 4-node Fly.io testnet with 1800+ consensus rounds. UltraDAG demonstrates that a complete, working cryptocurrency with a 21 million supply cap, halving schedule, and validator staking can be built with radical simplicity.

Contents
  1. Introduction
  2. Design Philosophy: Minimal Correct DAG-BFT
  3. System Model
  4. Protocol Description
  5. Finality
  6. Safety
  7. Liveness
  8. Equivocation Handling
  9. State Machine
  10. Tokenomics
  11. Network Protocol
  12. Implementation
  13. Security Analysis
  14. Testnet Results
  15. Minimalism vs. Throughput
  16. Comparison with Related Work
  17. Future Work
  18. Conclusion

1. Introduction

1.1 Motivation

Traditional Byzantine Fault Tolerant (BFT) consensus protocols — such as PBFT, Tendermint, and HotStuff — operate in a leader-based paradigm. In each round or view, a designated leader proposes a block, and other validators vote on it. This creates three fundamental limitations:

  1. Single point of failure per round. If the leader is slow, crashed, or Byzantine, the round stalls until a view change occurs.
  2. Sequential throughput. Only one block is produced per round, regardless of the number of validators.
  3. Protocol complexity. View change mechanisms add significant complexity and are historically the most bug-prone components of BFT protocols.

DAG-based consensus protocols address these limitations by allowing all validators to produce blocks (vertices) concurrently. Recent protocols such as DAG-Rider, Tusk, Bullshark, and Shoal++ have demonstrated that DAG structures can achieve consensus without explicit voting rounds.

1.2 Contribution

UltraDAG implements a complete, working cryptocurrency using a custom leaderless DAG-BFT protocol with the following properties:

  1. Leaderless operation: no leader election, view changes, or explicit vote messages.
  2. Implicit voting: a vertex is finalized once ⌈2n/3⌉ distinct validators have built on top of it (descendant coverage).
  3. Minimal implementation: the consensus core is 1,100 lines of Rust across five files.
  4. Complete tokenomics: a 21 million supply cap, a halving schedule, and validator staking.

2. Design Philosophy: Minimal Correct DAG-BFT

2.1 The Protocol in Three Sentences

The Complete Consensus Rule
  1. Every validator produces one signed vertex per round referencing all known DAG tips.
  2. A vertex is final when ⌈2n/3⌉ distinct validators have built upon it and all its parents are final.
  3. Equivocating validators are permanently banned.

Everything else in this paper — the round gate, stall recovery, deterministic ordering, state derivation — is implementation detail required to make these three sentences operational. The protocol’s correctness can be reasoned about entirely from these three rules.

2.2 Consensus Core Size

The complete consensus implementation is contained in five files totaling 1,887 lines of Rust, of which 1,100 lines are production code and 787 are inline unit tests:

File               Production   Tests   Total
vertex.rs                  90     142     232
dag.rs                    609     288     897
finality.rs               212     163     375
ordering.rs                69      98     167
validator_set.rs          120      96     216
Total                   1,100     787   1,887

For comparison:

System                       Approx. Consensus Lines
Narwhal/Tusk (Mysten Labs)   ~15,000
Bullshark                    ~20,000 (incl. Narwhal)
Shoal++                      ~30,000 (incl. Bullshark + pipelining + reputation)
UltraDAG                     1,100

This is not an accident. It is the result of deliberate omissions.

2.3 What Was Deliberately Omitted

No separate mempool layer. Narwhal introduced a dedicated data-availability layer where transactions are disseminated independently of consensus ordering. UltraDAG bundles transactions directly into vertices. This couples throughput to round timing (see Section 15) but eliminates an entire subsystem (~5,000 lines in Narwhal). For the target use case — IoT micropayments — per-node transaction volume is modest and a separate mempool layer adds complexity without proportional benefit.

No leader or anchor selection. Bullshark and Shoal++ designate “anchor” vertices in even rounds whose causal history determines the commit order. This requires leader-election logic, fallback paths for missing anchors, and careful interaction between the anchor schedule and the DAG structure. UltraDAG replaces all of this with a single descendant-coverage check: a vertex is final when enough validators have built on it. No vertex is special.

No wave structure. DAG-Rider organizes rounds into “waves” of 4 rounds each, with a common-coin-based leader per wave. Shoal++ pipelines waves for lower latency. UltraDAG has no waves — every round is identical. Round r works exactly like round r+1. This eliminates the wave-management state machine entirely.

No reputation system. Shoal++ includes a validator reputation mechanism that tracks responsiveness and adjusts leader selection accordingly (~2,000 lines). UltraDAG handles stall recovery in 8 lines: after 3 consecutive round skips, the validator produces unconditionally. This is sufficient for networks with known, cooperating validators.

2.4 The Minimalism Claim

UltraDAG is not optimized for maximum throughput. It is optimized for minimum correct implementation.

The 27x reduction in consensus code (1,100 vs. ~30,000 lines) directly reduces the attack surface. Every line of consensus code is a potential source of consensus bugs — the most catastrophic class of failure in a distributed ledger. A protocol that can be fully described in three sentences can be fully audited by a single engineer in a single day. This property is worth more than any throughput benchmark for networks where correctness is the primary requirement.

3. System Model

3.1 Participants

Let V = {v1, v2, …, vn} be the set of n validators. Each validator vi holds an Ed25519 keypair (ski, pki) and is identified by an address:

addr_i = Blake3(pk_i)

We assume the standard BFT fault model: at most f validators are Byzantine, where n ≥ 3f + 1.

3.2 Network Model

The protocol assumes partial synchrony: there exists an unknown Global Stabilization Time (GST) after which all messages between honest validators are delivered within a bounded delay δ. Before GST, messages may be delayed arbitrarily.

3.3 Cryptographic Primitives

Primitive            Algorithm                       Purpose
Digital signatures   Ed25519 (ed25519-dalek 2.2.0)   Vertex and transaction authentication
Hashing              Blake3                          Address derivation, vertex identity, Merkle trees
Replay prevention    NETWORK_ID prefix               Cross-network signature isolation

4. Protocol Description

Protocol in Three Sentences
  1. Every validator produces one signed vertex per round referencing all known DAG tips.
  2. A vertex is final when ⌈2n/3⌉ distinct validators have built upon it and all its parents are final.
  3. Equivocating validators are permanently banned.

4.1 DAG Structure

The core data structure is a directed acyclic graph G = (V, E) where V is the set of all accepted vertices and E = {(u, v) : H(u) ∈ v.parents}. Each vertex is a tuple:

v = (block, parents, round, validator, pub_key, signature)

Where block contains transactions, parents references all DAG tips at time of creation, round is the logical round number, and signature is an Ed25519 signature over the vertex’s signable bytes.

Vertex Identity
H(v) = Blake3(block_hash || round_LE64 || validator || parent_0 || ... || parent_k)
Signable Bytes
signable(v) = NETWORK_ID || block_hash || parent_0 || ... || parent_k || round_LE64 || validator
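The signable-byte layout above can be sketched in plain Rust. The constant value and helper name below are illustrative (the actual NETWORK_ID value is not specified here); the real implementation signs these bytes with Ed25519 and hashes the identity layout with Blake3:

```rust
// Sketch of the signable-byte layout from Section 4.1 (names hypothetical).
// Only the concatenation is built here; signing and hashing are elided.

const NETWORK_ID: &[u8] = b"ultradag-testnet-v1"; // assumed placeholder value

type Hash = [u8; 32];
type Address = [u8; 32];

fn signable_bytes(block_hash: &Hash, parents: &[Hash], round: u64, validator: &Address) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(NETWORK_ID);           // replay-isolation prefix
    out.extend_from_slice(block_hash);           // Blake3 of the block body
    for p in parents {
        out.extend_from_slice(p);                // parent_0 .. parent_k
    }
    out.extend_from_slice(&round.to_le_bytes()); // round_LE64
    out.extend_from_slice(validator);            // Blake3(pub_key)
    out
}
```

Because every field is fixed-width or length-delimited by position, the encoding is unambiguous and deterministic across nodes.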

4.2 Vertex Production (Optimistic Responsiveness)

Each honest validator produces exactly one vertex per round. The validator uses optimistic responsiveness: it waits for either the round timer (default: 5 seconds) or a notification that a new vertex has arrived. When a notification fires and the previous round has quorum, the validator produces immediately without waiting for the timer.

  1. Wait. tokio::select! between the round timer and round_notify (fired on every new vertex insertion).
  2. Determine round number. Set r = current_dag_round + 1.
  3. 2f+1 round gate. If r > 1 and |distinct_validators_in_round(r−1)| < ⌈2n/3⌉, skip this round. After 3 consecutive skips, produce unconditionally (stall recovery).
  4. Active set check. If staking is active but this validator is not in the active set, observe only.
  5. Equivocation check. If this validator already produced a vertex in round r, skip.
  6. Collect parents. Set parents = all current DAG tips.
  7. Build block. Include coinbase reward and pending mempool transactions.
  8. Sign and broadcast. Sign with Ed25519 and broadcast to all peers.

The timer resets after early production to prevent double-firing. This achieves sub-second finality latency under normal conditions while the timer guarantees progress even without notifications.

4.3 Vertex Acceptance

A vertex v is accepted into the DAG if and only if all of the following hold:

  1. No duplicate: H(v) is not already in the DAG
  2. Valid signature: Ed25519 verification succeeds, and Blake3(pub_key) == validator
  3. Parent existence: Every parent hash exists in the DAG or equals the genesis sentinel [0; 32]
  4. Round bound: v.round ≤ current_round + 10
  5. No equivocation: No other vertex from the same validator in the same round exists
  6. Not Byzantine: The validator has not been marked Byzantine
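These six checks can be sketched as a pure function over a simplified DAG view; the types are hypothetical and the signature-verification result is passed in pre-computed (the real implementation performs Ed25519 verification inline):

```rust
// Sketch of the six acceptance checks (Section 4.3) over simplified types.

use std::collections::{HashMap, HashSet};

type Hash = [u8; 32];
type Address = [u8; 32];

const GENESIS: Hash = [0u8; 32]; // genesis sentinel
const MAX_FUTURE_ROUNDS: u64 = 10;

struct Vertex {
    hash: Hash,
    parents: Vec<Hash>,
    round: u64,
    validator: Address,
}

fn accept(
    v: &Vertex,
    sig_ok: bool,                        // Ed25519 valid and Blake3(pub_key) == validator
    dag: &HashMap<Hash, Vertex>,
    seen: &HashSet<(Address, u64)>,      // (validator, round) pairs already in the DAG
    byzantine: &HashSet<Address>,
    current_round: u64,
) -> bool {
    !dag.contains_key(&v.hash)                                            // 1. no duplicate
        && sig_ok                                                         // 2. valid signature
        && v.parents.iter().all(|p| *p == GENESIS || dag.contains_key(p)) // 3. parents exist
        && v.round <= current_round + MAX_FUTURE_ROUNDS                   // 4. round bound
        && !seen.contains(&(v.validator, v.round))                        // 5. no equivocation
        && !byzantine.contains(&v.validator)                              // 6. not banned
}
```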

4.4 Recursive Parent Fetch (DAG Sync Convergence)

When a vertex fails insertion due to missing parents, the receiving node initiates a recursive parent fetch protocol to ensure DAG convergence across partitioned nodes:

  1. The vertex is buffered in the orphan buffer (capped at 1,000 entries / 50 MB).
  2. A GetParents message is sent to the source peer containing the missing parent hashes (capped at 32 per request).
  3. The peer responds with ParentVertices containing the requested vertices from its local DAG.
  4. Each received parent vertex is verified (Ed25519 signature) and inserted. If insertion fails due to further missing grandparents, the process recurses.
  5. After any successful insertion, resolve_orphans() iterates the orphan buffer and attempts to insert buffered vertices whose parents now exist.
  6. Stall recovery: if the gap between the current DAG round and the last finalized round exceeds 10, the validator broadcasts GetDagVertices to trigger bulk re-sync.

This protocol guarantees that two honest nodes will eventually converge to the same DAG state after any transient network partition, provided they can exchange messages.

5. Finality

5.1 Descendant-Coverage Finality Rule

Definition — Descendant Validators

For a vertex v in DAG state G:

DV(v, G) = { u.validator : u ∈ descendants(v, G) }

The set of distinct validator addresses that have produced at least one descendant of v.

Definition — Quorum Threshold
q(n) = ⌈2n/3⌉ = ⌊(2n + 2) / 3⌋

When a configured_validators count is set, the threshold uses that fixed count instead of the dynamically registered count.
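The two closed forms of q(n) agree for all positive n, as a quick integer-arithmetic sketch shows (hypothetical function name):

```rust
// ⌈2n/3⌉ computed without floating point, via ⌊(2n + 2) / 3⌋.
fn quorum_threshold(n: u64) -> u64 {
    (2 * n + 2) / 3
}
```

For n = 4 (the testnet), the threshold is 3; for n = 7 it is 5.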

Definition — Finality
FINALIZED(v) ⇔ |DV(v, G)| ≥ ⌈2n/3⌉ ∧ (∀p ∈ v.parents : FINALIZED(p))

5.2 Incremental Descendant Tracking

Rather than recomputing DV(v, G) via BFS on every finality check (O(V) per vertex), UltraDAG maintains a precomputed map descendant_validators: HashMap<Hash, HashSet<Address>>. When a new vertex v is inserted into the DAG:

  1. Walk upward through v’s ancestors via BFS.
  2. For each ancestor a, add v.validator to descendant_validators[a].
  3. Early termination: if v.validator is already in the set for ancestor a, stop traversing that branch (all further ancestors already have it).

This gives O(1) finality lookups via descendant_validators[hash].len(), with amortized O(V) total work across all inserts. Benchmark: 10,000 vertices finalized in 21ms (previously 47,000ms).
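The insertion-time walk can be sketched over a toy DAG with simplified identifiers. This illustrates the early-termination idea, not the production code:

```rust
// Sketch of incremental descendant tracking: on insert, the new vertex's
// validator is added to every ancestor's descendant set, pruning branches
// that already contain it.

use std::collections::{HashMap, HashSet};

type Hash = u64;    // simplified ids for the sketch
type Address = u32;

#[derive(Default)]
struct Dag {
    parents: HashMap<Hash, Vec<Hash>>,
    descendant_validators: HashMap<Hash, HashSet<Address>>,
}

impl Dag {
    fn insert(&mut self, hash: Hash, parents: Vec<Hash>, validator: Address) {
        let mut stack = parents.clone();
        self.parents.insert(hash, parents);
        self.descendant_validators.entry(hash).or_default();
        while let Some(a) = stack.pop() {
            let set = self.descendant_validators.entry(a).or_default();
            // Early termination: this branch already carries the validator,
            // so all further ancestors already have it too.
            if !set.insert(validator) {
                continue;
            }
            if let Some(ps) = self.parents.get(&a) {
                stack.extend(ps.iter().copied());
            }
        }
    }

    fn coverage(&self, hash: Hash) -> usize {
        self.descendant_validators.get(&hash).map_or(0, |s| s.len())
    }
}
```

A finality check then reduces to `coverage(hash) >= quorum_threshold(n)` plus the parent-finality condition.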

5.3 Forward Propagation Finalization

The find_newly_finalized() procedure uses forward propagation instead of per-tip BFS:

  1. Seed: iterate all non-finalized vertices. Select those whose parents are all finalized and whose descendant_validator_count ≥ threshold.
  2. Propagate: finalize seed vertices, then check their children. If a child’s parents are now all finalized and it meets the threshold, add it to the next batch.
  3. Sort: results ordered by (round, hash) for deterministic ancestor-first ordering.

Multiple passes run in a loop until no new vertices are finalized, guaranteeing the parent finality invariant.

5.4 Parent Finality Guarantee

A vertex may only be finalized if all its parents are already finalized. This ensures that when vertex v is finalized, all state changes from v’s causal history have already been committed to the state engine. Without this guarantee, transactions could be applied against an incomplete state, producing non-deterministic results across nodes.

6. Safety

6.1 No Conflicting Finality

Two equivocating vertices (same validator, same round, different hash) cannot both be finalized. The equivocation detection rule ensures at most one vertex from a given validator in a given round exists in any honest node’s DAG.

6.2 Quorum Intersection Argument

If vertex v is finalized at node A, then |DV(v, GA)| ≥ ⌈2n/3⌉. If a conflicting vertex v′ were finalized at node B, then |DV(v′, GB)| ≥ ⌈2n/3⌉. By the quorum intersection property:

|DV(v, G_A)| + |DV(v', G_B)| ≥ 2 · ⌈2n/3⌉ > n + f

Therefore DV(v, GA) ∩ DV(v′, GB) must contain at least one honest validator. This honest validator’s DAG would contain both conflicting vertices, triggering equivocation detection — a contradiction.

For transaction-level conflicts, safety is ensured by deterministic ordering: the vertex finalized first wins.

Note: This is an argument sketch, not a formal proof.

7. Liveness

If at least ⌈2n/3⌉ validators are honest and connected (after GST), the protocol makes progress. Each honest validator produces one vertex per round. After GST, vertices propagate within δ time. A vertex produced in round r accumulates honest descendants in rounds r+1 and r+2, reaching the finality threshold within 2–3 rounds.

The stall recovery mechanism ensures liveness during bootstrap: after 3 consecutive round skips (due to the 2f+1 gate), a validator produces unconditionally. The implementation is 8 lines of Rust — compare to Shoal++’s ~2,000-line reputation-based leader recovery.

In theory, with 4 validators producing in the same round, finality lag should stabilize at 2–3 rounds. In practice, round desynchronization between nodes currently increases this lag (see Section 14).

8. Equivocation Handling

When a vertex v is submitted but another vertex v′ from the same validator in the same round already exists:

  1. Equivocation evidence [H(v), H(v′)] is stored
  2. The validator is permanently marked as Byzantine
  3. The insertion is rejected
  4. Evidence is broadcast to all peers via EquivocationEvidence messages
  5. All future vertices from this validator are rejected

9. State Machine

9.1 Account-Based Ledger

UltraDAG uses an account-based model with a balance (in satoshis, 1 UDAG = 10^8 sats) and a nonce (transaction counter for replay protection) per address.

9.2 Transaction Validation

  1. Blake3(pub_key) == from
  2. Valid Ed25519 signature over NETWORK_ID || from || to || amount || fee || nonce
  3. balance(from) ≥ amount + fee
  4. nonce == current_nonce(from)
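Checks 3 and 4 can be sketched against a simplified account map; checks 1 and 2 (the Blake3 address binding and Ed25519 verification) are elided, and all names are hypothetical:

```rust
// Sketch of the balance and nonce checks from Section 9.2.

use std::collections::HashMap;

type Address = [u8; 32];

struct Account {
    balance: u64, // in satoshis
    nonce: u64,   // next expected transaction counter
}

struct Tx {
    from: Address,
    amount: u64,
    fee: u64,
    nonce: u64,
}

fn validate_funds_and_nonce(accounts: &HashMap<Address, Account>, tx: &Tx) -> bool {
    match accounts.get(&tx.from) {
        Some(acct) => {
            // checked_add guards against amount + fee overflow
            let total = match tx.amount.checked_add(tx.fee) {
                Some(t) => t,
                None => return false,
            };
            acct.balance >= total && tx.nonce == acct.nonce // checks 3 and 4
        }
        None => false,
    }
}
```

The exact-nonce requirement makes replays and out-of-order application impossible: a finalized transaction increments the account nonce, invalidating any duplicate.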

9.3 Deterministic Ordering

Finalized vertices are applied to the state engine in deterministic order:

order(v1, v2) =
  round (ascending)    → primary key
  ancestor count (asc) → secondary key
  H(v) lexicographic   → tiebreaker
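The three-level comparator can be sketched as follows; the fields are hypothetical, and in the real code the ancestor count would be derived from the DAG:

```rust
// Sketch of deterministic ordering: round, then ancestor count, then hash.

struct FinalizedVertex {
    round: u64,
    ancestor_count: u64,
    hash: [u8; 32],
}

fn order(vertices: &mut Vec<FinalizedVertex>) {
    vertices.sort_by(|a, b| {
        a.round
            .cmp(&b.round)                                 // primary: round
            .then(a.ancestor_count.cmp(&b.ancestor_count)) // secondary: ancestors
            .then(a.hash.cmp(&b.hash))                     // tiebreak: hash, lexicographic
    });
}
```

Because every key is derived from finalized DAG content, all honest nodes apply the same vertices in the same order.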

10. Tokenomics

10.1 Supply Parameters

Parameter                Value
Maximum supply           21,000,000 UDAG
Smallest unit            1 satoshi = 10^-8 UDAG
Initial block reward     50 UDAG per round (total emission)
Halving interval         Every 210,000 finalized rounds
Default round time       5 seconds
Supply cap enforcement   Reward capped at the MAX_SUPPLY boundary

10.2 Genesis Allocations

Allocation             Amount                   Purpose
Developer allocation   1,050,000 UDAG (5%)      Protocol development
Faucet reserve         1,000,000 UDAG           Testnet only
Mining rewards         ~19,000,000 UDAG (95%)   Validator rewards over time

Genesis Transparency

The developer allocation is credited at genesis to a deterministic address derived from the seed "ultradag-dev-addr-testnet-v1". This allocation is visible and auditable from block 0. There is no VC funding, no presale, and no hidden allocations. The 5% developer allocation is the only non-mined supply. For mainnet, this will be replaced with an offline-generated keypair stored in a hardware wallet.

10.3 Validator Staking

Parameter               Value
Minimum stake           10,000 UDAG
Unstaking cooldown      2,016 rounds (~2.8 hours at 5s rounds)
Slashing penalty        50% on equivocation
Reward distribution     Proportional to stake
Epoch length            210,000 rounds (~12 days at 5s rounds)
Max active validators   21 (top stakers by amount)

10.4 Epoch-Based Validator Set Transitions

The active validator set is recalculated at epoch boundaries (every 210,000 finalized rounds). At each epoch transition:

  1. StateEngine::recalculate_active_set() selects the top 21 stakers by amount.
  2. sync_epoch_validators() bridges the active set to the FinalityTracker.
  3. Validators not in the active set continue observing but do not produce vertices.

This ensures quorum intersection holds across epoch boundaries: the old set must achieve finality on the transition vertex before the new set begins producing.

10.5 Emission Model

Block Reward Formula
reward(h) = ⌊50 × 10^8 / 2^(h / 210000)⌋

Total emission per round = reward(height). When staking is active, this amount is split among validators in proportion to stake, with a fallback for validators that have not yet staked. The fallback allows validators to earn rewards immediately upon joining the network, which they can then stake to participate in proportional reward distribution. The reward reaches 0 after 64 halvings. Slashed stake is burned (removed from total_supply).
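The halving schedule maps directly to integer arithmetic; a sketch, guarding the shift so the reward is exactly zero after 64 halvings (shifting a u64 by 64 or more would be invalid):

```rust
// Halving schedule: one halving every 210,000 finalized rounds.

const INITIAL_REWARD_SATS: u64 = 50 * 100_000_000; // 50 UDAG in satoshis
const HALVING_INTERVAL: u64 = 210_000;

fn reward(height: u64) -> u64 {
    let halvings = height / HALVING_INTERVAL;
    if halvings >= 64 {
        0 // emission exhausted; avoids an invalid >=64-bit shift
    } else {
        INITIAL_REWARD_SATS >> halvings // ⌊50 × 10^8 / 2^halvings⌋
    }
}
```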

11. Network Protocol

Peers communicate over TCP with 4-byte big-endian length-prefixed JSON messages (max 4 MB). Each connection is split into independent PeerReader and PeerWriter halves.
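The framing can be sketched with std-only encode and decode helpers; the real node does this over async TCP with JSON payloads, and the function names here are illustrative:

```rust
// Sketch of 4-byte big-endian length-prefixed framing with the 4 MB cap.

const MAX_MESSAGE: usize = 4 * 1024 * 1024; // 4 MB cap

fn encode_frame(payload: &[u8]) -> Option<Vec<u8>> {
    if payload.len() > MAX_MESSAGE {
        return None; // reject oversized messages before sending
    }
    let mut frame = (payload.len() as u32).to_be_bytes().to_vec();
    frame.extend_from_slice(payload);
    Some(frame)
}

fn decode_frame(buf: &[u8]) -> Option<&[u8]> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    if len > MAX_MESSAGE {
        return None; // enforce the cap on receive as well
    }
    buf.get(4..4 + len)
}
```

Enforcing the cap on both sides means a malicious peer cannot force a node to allocate more than 4 MB per message.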

Message                  Direction          Description
Hello                    Bidirectional      Version, current DAG round, listen port
DagProposal              Broadcast          New signed DAG vertex
GetDagVertices           Request            Request vertices from a given round
DagVertices              Response           Batch of DAG vertices for sync
NewTx                    Broadcast          New transaction for mempool
GetPeers / Peers         Request/Response   Gossip-based peer discovery
GetParents               Request            Request specific vertices by hash (missing parent resolution)
ParentVertices           Response           Requested parent vertices for DAG convergence
EquivocationEvidence     Broadcast          Two conflicting vertices as proof
CheckpointProposal       Broadcast          Validator proposes checkpoint, requests co-signatures
CheckpointSignatureMsg   Broadcast          Co-signature on a verified checkpoint
GetCheckpoint            Request            Request latest checkpoint for fast-sync
CheckpointSync           Response           Checkpoint + suffix + state for new node sync
Ping / Pong              Keepalive          Connection liveness

12. Implementation

12.1 Architecture

ultradag-node      CLI binary: validator loop + HTTP RPC
  └─ ultradag-network   TCP P2P: peer discovery, DAG relay, sync
       └─ ultradag-coin    Core: consensus, state, crypto, persistence

12.2 Concurrency

Built on Tokio for async I/O. All shared state (DAG, finality tracker, state engine, mempool) is protected by tokio::sync::RwLock with short lock scopes. Write locks are never held across I/O operations.

12.3 Persistence

All node state is periodically saved to disk using atomic file operations (write to .tmp, then rename). State is saved every 10 rounds and on graceful shutdown.
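A minimal sketch of the write-then-rename pattern (hypothetical helper name); rename is atomic on POSIX filesystems, so readers observe either the old or the new file, never a partial write:

```rust
// Atomic save: write the full payload to a sibling .tmp file, then
// rename over the target in one filesystem operation.

use std::fs;
use std::io;
use std::path::Path;

fn atomic_save(path: &Path, bytes: &[u8]) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, bytes)?; // a crash here leaves a stale .tmp; harmless
    fs::rename(&tmp, path)?; // atomic replacement of the target
    Ok(())
}
```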

12.4 Configured Validator Count

The --validators N flag fixes the quorum threshold at ⌈2N/3⌉ regardless of dynamic registrations. This prevents phantom validator inflation — a class of bugs where stale registrations raise the quorum beyond what active validators can satisfy.

12.5 Checkpointing and Fast-Sync

Every CHECKPOINT_INTERVAL finalized rounds (default: 1,000), validators produce a signed checkpoint capturing the state_root (Blake3 hash of the full state snapshot), the dag_tip (hash of the last finalized vertex), and the total_supply. Checkpoints require quorum (⌈2n/3⌉) validator signatures to be accepted.

New nodes fast-sync by requesting the latest accepted checkpoint from a peer, verifying the quorum signatures and state_root integrity, applying the state snapshot, and inserting the suffix DAG from checkpoint.round to present. This reduces sync time from O(all history) to O(suffix since last checkpoint).

Equivocation evidence is stored in a separate, permanently retained store that survives DAG pruning, ensuring slashing proofs remain available regardless of how much history has been pruned. The --archive flag disables pruning entirely for archive nodes and block explorers.

13. Security Analysis

Attack                              Defense
Equivocation                        One vertex per validator per round; permanent ban + evidence broadcast
Network replay                      NETWORK_ID prefix in all signable bytes
DAG corruption (phantom parents)    Parent existence check before insertion
Memory exhaustion (future rounds)   MAX_FUTURE_ROUNDS = 10; vertices beyond this bound are rejected
Message flooding DoS                4 MB max message size; 10K mempool cap with fee eviction
Nothing-at-stake                    Equivocation detection + permanent ban
Phantom validator inflation         --validators N configured count fixes the quorum denominator
Non-deterministic finality          BTreeSet iteration; deterministic hash ordering
Clock drift attack                  2f+1 round gate prevents ahead-of-network advancement; round bound (MAX_FUTURE_ROUNDS = 10) prevents future-round flooding; slow-clock validators skip rounds and catch up via stall recovery; drift degrades DAG density but cannot violate safety
Orphan buffer exhaustion            Hard cap: 1,000 entries and 50 MB byte limit; vertices exceeding either limit are silently dropped; per-peer rate limiting (future work) would further mitigate
Sync poisoning                      Every vertex in a DagVertices response is verified (Ed25519 signature, equivocation check, parent existence, round bound) before insertion; same acceptance rules as live proposals
DAG partition divergence            Recursive parent fetch via GetParents/ParentVertices ensures convergence after network partitions; orphan buffer (1K entries, 50 MB) prevents memory exhaustion; stall recovery triggers bulk re-sync when finality lags by more than 10 rounds

Known Limitations

  1. No formal safety proof — argument sketch only
  2. Timer-based rounds — clock synchronization dependency (mitigated by optimistic responsiveness)
  3. Implicit votes only — descendant coverage, not explicit attestations

14. Testnet Results

A 4-node Fly.io testnet (ams region) runs continuously with permissioned validator set:

Metric                 Value
DAG round              330+
Last finalized round   182
Validator count        4 (permissioned allowlist)
Genesis supply         2,050,000 UDAG (dev 1,050,000 + faucet 1,000,000)
Current supply         2,059,550 UDAG
Avg round time         5.0s (timer fallback)
Tests passing          373

Known issue: Nodes currently produce at independent round numbers due to asynchronous startup, preventing in-round quorum. Finality happens via descendant accumulation across rounds, resulting in elevated finality lag. Round synchronization is planned to address this.

15. Minimalism vs. Throughput

15.1 What Minimalism Costs

Throughput is coupled to round timing. Because transactions are bundled directly into vertices (no separate data-availability layer), maximum throughput is:

TPS_max = (max_txs_per_vertex × validators_per_round) / round_duration

With 4 validators and a maximum of 10,000 transactions per vertex:

Round Duration   Theoretical Max TPS
5 seconds        8,000
2 seconds        20,000
1 second         40,000

By comparison, Narwhal decouples data availability from ordering — validators disseminate transaction batches continuously between rounds, so throughput scales with bandwidth rather than round timing. Under identical hardware, Narwhal achieves 100,000+ TPS by saturating network bandwidth independent of consensus latency.

No pipelining. In UltraDAG, consensus and DAG construction are sequential. Shoal++ pipelines these operations: while round r’s vertices are being finalized, round r+1’s vertices are already being built and disseminated, approximately halving effective finality latency.

15.2 Why These Tradeoffs Are Acceptable

Modest per-node transaction volume. IoT devices generate transactions at rates measured in transactions per second, not thousands. A sensor reporting readings every 5 seconds, a smart meter settling micropayments every minute — these workloads fit comfortably within a single vertex per round.

Device constraints favor auditability. An embedded device that participates as a light client or validator must be able to understand and verify the consensus protocol. A protocol with 1,100 lines of consensus logic can be compiled to a small binary, audited on the target platform, and reasoned about in resource-constrained environments. A protocol with 30,000 lines cannot.

Code complexity is attack surface. The 27x reduction from Shoal++ to UltraDAG directly reduces the number of places where a bug could cause a safety violation, liveness failure, or state divergence. For networks where the cost of a consensus bug exceeds the cost of lower throughput, this tradeoff is unambiguously correct.

Round timing is tunable. The --round-ms flag allows operators to choose their position on the latency-throughput curve. A 1-second round with 4 validators can handle 40,000 TPS theoretical max, which exceeds most L1 chains in production today.

16. Comparison with Related Work

Property           PBFT       Tendermint    HotStuff    DAG-Rider       Narwhal/Tusk    Bullshark         Shoal++         UltraDAG
Leader             Per-view   Round-robin   Rotating    None            None + leader   Anchor            Anchor + rep.   None
Finality           3 phases   2 phases      Pipeline    Wave-based      Separate        2 rounds          1 round         Desc. coverage
Votes              Explicit   Explicit      Threshold   Implicit        Mixed           Implicit          Implicit        Implicit
Messages           O(n²)      O(n²)         O(n)        O(n)            O(n)            O(n)              O(n)            O(n)
Consensus code     ~5,000     ~10,000       ~8,000      ~10,000         ~15,000         ~20,000           ~30,000         1,100
3-sentence rule?   No         No            No          No              No              No                No              Yes
Separate mempool   No         No            No          No              Yes             Yes               Yes             No
Responsive         No         No            Yes         No              No              No                Yes             Yes
Waves/anchors      N/A        N/A           N/A         4-round waves   N/A             2-round anchors   Pipelined       None

The “3-sentence rule” row captures a qualitative property: can the complete consensus rule be stated in three sentences that a competent distributed systems engineer can verify for correctness? For UltraDAG, yes (Section 2.1). For protocols with wave structures, anchor selection, reputation systems, or view changes, the rule set is inherently more complex.

17. Future Work

  1. Per-peer rate limiting — defense against message flooding from individual peers
  2. Checkpoint broadcasting — broadcast pruning checkpoints to peers for verification
  3. State root proofs — Merkle proofs for light client verification from checkpoints
  4. Formal verification — machine-checkable safety proof
  5. Data availability separation — optional Narwhal-style mode for high-throughput deployments
  6. Wire protocol versioning — forward-compatible upgrades

18. Conclusion

UltraDAG demonstrates that a complete, working cryptocurrency can be built on a leaderless DAG-BFT consensus protocol with minimal complexity. The entire consensus core — 1,100 lines of Rust across five files — implements DAG construction, BFT finality via descendant coverage, deterministic ordering, validator management, and Ed25519-signed vertices. The protocol can be stated in three sentences and fully audited in a day.

The protocol’s safety relies on the standard BFT quorum intersection property — the same foundation used by PBFT, Tendermint, and HotStuff — applied to an implicit voting mechanism where DAG topology replaces explicit vote messages. While a formal safety proof remains future work, the system has been thoroughly tested with 373 automated tests (all passing) and validated on a 4-node Fly.io testnet. Round synchronization between nodes is a known area for improvement (see Section 14).

UltraDAG is not the fastest DAG-BFT protocol. It is the simplest correct one. For networks where auditability, small binary size, and minimal attack surface matter more than maximum throughput — IoT micropayments, embedded systems, resource-constrained validators — this is the right tradeoff.

References

  1. Castro, M., & Liskov, B. (1999). Practical Byzantine Fault Tolerance. OSDI.
  2. Buchman, E. (2016). Tendermint: Byzantine Fault Tolerance in the Age of Blockchains.
  3. Yin, M., et al. (2019). HotStuff: BFT Consensus with Linearity and Responsiveness. PODC.
  4. Keidar, I., et al. (2021). All You Need is DAG. PODC.
  5. Danezis, G., et al. (2022). Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus. EuroSys.
  6. Spiegelman, A., et al. (2022). Bullshark: DAG BFT Protocols Made Practical. CCS.
  7. Spiegelman, A., et al. (2024). Shoal++: High Throughput DAG BFT Can Be Fast! arXiv.
  8. Bernstein, D. J., et al. (2012). High-speed high-security signatures. CHES.
  9. O’Connor, J. (2019). BLAKE3: One function, fast everywhere.