Sui published a post-mortem that explains the cause of a prolonged mainnet disruption that halted transactions and checkpoint certification on Jan. 14, 2026. The outage lasted roughly six hours and affected all transaction execution on the network.

According to the Sui team, the interruption stemmed from an internal divergence in validator consensus processing. Validators failed to certify new checkpoints, which caused transaction submissions to time out across the network. The team confirmed that the issue had no connection to network congestion, transaction volume, or external attacks.

The disclosure came in a blog post released on Jan. 16, where Sui emphasized that user funds remained safe throughout the incident and that no certified transactions were reversed.

Network halted to preserve safety guarantees

Sui stated that its safety-focused architecture behaved as designed. Validators halted progress after detecting inconsistent checkpoint data rather than risk finalizing an incorrect state.

The team confirmed the following outcomes during the outage:

No certified state forks occurred
No certified transactions faced rollback
User funds remained safe
No safety or consistency guarantees failed

Remote procedure call (RPC) reads continued to serve the last certified state during the disruption, except on nodes explicitly configured to reject stale data.

While disruptive for users, Sui described the halt as the correct failure mode for this class of consensus issue.

Edge-case bug caused validator disagreement

The root cause traced back to an edge-case bug in consensus commit logic. Under certain garbage collection conditions, an optimization path caused validators to reach different conclusions when handling conflicting transactions.

Consensus on Sui produces an ordered stream of commits. Deterministic execution converts those commits into checkpoints. A quarantine layer prevents transaction effects from becoming final until checkpoint certification succeeds.

During the incident, different validators derived different consensus commit outputs and therefore executed conflicting candidate checkpoints. When validators exchanged signatures, they observed that more than one-third of stake had signed a different checkpoint digest. With that much stake split off, no single digest could reach the two-thirds stake quorum, so certification became impossible.

Validators stalled by design rather than finalize an inconsistent state.
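The stake arithmetic behind the stall can be illustrated with a small sketch. This is not Sui's implementation; the digests and stake weights are hypothetical, and it assumes only the standard BFT rule stated above: certification requires two-thirds of total stake on one checkpoint digest.

```python
from collections import defaultdict

QUORUM = 2 / 3  # BFT certification threshold: two-thirds of total stake


def certifiable(signatures):
    """signatures: list of (checkpoint_digest, stake_weight) pairs.

    Returns the digest that reaches quorum, or None when no digest
    can be certified (the network stalls rather than fork)."""
    total = sum(stake for _, stake in signatures)
    stake_by_digest = defaultdict(int)
    for digest, stake in signatures:
        stake_by_digest[digest] += stake
    for digest, stake in stake_by_digest.items():
        if stake >= QUORUM * total:
            return digest
    return None


# Healthy case: every validator signs the same digest, quorum is reached.
healthy = [("0xaaa", 25), ("0xaaa", 25), ("0xaaa", 25), ("0xaaa", 25)]
assert certifiable(healthy) == "0xaaa"

# Divergence: more than one-third of stake signs a different digest,
# so neither digest can collect two-thirds and certification halts.
diverged = [("0xaaa", 25), ("0xaaa", 25), ("0xbbb", 25), ("0xbbb", 25)]
assert certifiable(diverged) is None
```

The second case mirrors the incident: once over a third of stake commits to a divergent digest, the safe outcome is no certification at all, which is exactly the halt users observed.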

Users faced halted execution and timed-out submissions

During the incident window, Sui halted all transaction execution to preserve consistency. Transaction submissions timed out while validators remained stalled. No transactions executed during this period.

Read operations continued without interruption and served the last certified state. User balances and on-chain state remained unchanged.

Sui estimated total user-visible disruption at approximately six hours. On-chain value near $1 billion remained temporarily inactive during the halt, based on ecosystem estimates cited by industry coverage.

Recovery required coordinated validator action

Recovery followed several stages once engineers identified the divergence point.

The team removed incorrect consensus data and applied a fix to the commit logic. Mysten Labs validators deployed the fix in a controlled canary rollout and verified correct checkpoint production through logs.

After successful validation, the broader validator set upgraded to the fixed binary. Validators replayed consensus data safely and resumed checkpoint signing. Once a stake quorum signed the same checkpoint digest, checkpoint certification and state synchronization resumed.

Normal network operation returned later that day.

Incident highlighted early operational challenges

The Jan. 14 outage marked the second major disruption on Sui since its launch in 2023. An earlier incident in late 2024 had already raised questions about the network's operational maturity.

Despite the halt, SUI price movement remained limited. Market response suggested that traders viewed the incident as operational rather than structural, according to post-incident price data.

Sui stated that the event confirmed the effectiveness of its safety-first design, even though uptime suffered.

Planned improvements after the outage

Sui outlined several changes aimed at reducing recovery time for rare failures.

The team plans faster detection mechanisms that pause consensus earlier when checkpoint inconsistencies appear. This approach should reduce replay scope and shorten restoration time.

Sui also plans improved operator tooling to identify and clean inconsistent internal state in a controlled and automated manner. The previous recovery required careful manual reasoning across validator environments.

Expanded consensus stress tests now target this specific edge case. Antithesis configurations already surface the scenario more reliably, according to the team.

Sui confirmed that all safety guarantees remained intact throughout the incident and that the network now operates normally.


Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.