Decentralized storage has always faced a persistent challenge: keeping data durable and available at all times drives costs ever higher. Among current solutions, Arweave requires every node to store the entire file, which is expensive and inflexible, while Filecoin, though nominally cheap, grows riskier at lower storage configurations, increasing the probability of data loss.
The Walrus team takes a different approach. They independently designed a system called "Red Stuff," a two-dimensional erasure-coding scheme built to tolerate Byzantine faults. It sounds complex, but the underlying logic is quite elegant.
During upload, Red Stuff breaks a file into many small fragments called slivers. Each storage node holds only a portion of them rather than a full copy of the file, which significantly reduces storage pressure. Most impressively, even if two-thirds of the fragments in the network disappear, the original data can still be recovered from the remaining ones.
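The recovery property described above can be sketched with a one-dimensional erasure code. To be clear, this is not Red Stuff itself (which is two-dimensional); the field size, parameters, and function names below are illustrative, showing only why any k of n slivers suffice to rebuild the original data:

```python
# A one-dimensional, Reed-Solomon-style sketch of k-of-n erasure coding.
P = 2**31 - 1  # prime modulus; all arithmetic happens in the field GF(P)

def encode(data: list[int], n: int) -> list[tuple[int, int]]:
    """Treat the k data symbols as coefficients of a degree-(k-1)
    polynomial and evaluate it at n distinct points; each (x, f(x))
    pair is one sliver handed to a storage node."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(slivers: list[tuple[int, int]], k: int) -> list[int]:
    """Recover the k polynomial coefficients (the original data) from
    any k slivers via Lagrange interpolation."""
    pts = slivers[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pts):
        # Build the basis polynomial prod_{m != j} (x - xm) / (xj - xm)
        num = [1]   # numerator coefficients, lowest degree first
        denom = 1
        for m, (xm, _) in enumerate(pts):
            if m == j:
                continue
            # multiply num by (x - xm)
            num = [(a - xm * b) % P for a, b in zip([0] + num, num + [0])]
            denom = denom * (xj - xm) % P
        inv = pow(denom, P - 2, P)  # modular inverse (Fermat's little theorem)
        for i in range(k):
            coeffs[i] = (coeffs[i] + yj * num[i] * inv) % P
    return coeffs
```

With k = 3 and n = 7, any four slivers can vanish and the data still decodes; the "two-thirds lost" resilience in the text corresponds to choosing k around n/3.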
To put it in numbers, other protocols require 25 times the data redundancy to achieve "twelve nines" (99.9999999999%) reliability. Walrus, using Red Stuff, only needs 4 to 5 times redundancy. This results in nearly a hundredfold efficiency improvement, which is quite remarkable.
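Using the redundancy figures quoted above, the raw storage cost per file can be compared directly (the 1 GB file size is an arbitrary illustration value, and 4.5x is simply the midpoint of the quoted 4-5x range):

```python
# Back-of-envelope comparison of network-wide storage cost for one file,
# using the redundancy factors quoted in the text.

def network_bytes(file_bytes: int, redundancy: float) -> int:
    """Total bytes the whole network must store for one file
    at a given redundancy factor."""
    return int(file_bytes * redundancy)

one_gb = 10**9
replication_cost = network_bytes(one_gb, 25)   # replication-heavy designs
red_stuff_cost = network_bytes(one_gb, 4.5)    # Red Stuff, per the text
```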
Verification has also been optimized. Storage nodes periodically submit cryptographic proofs showing they still hold their assigned fragments, without having to transmit the full data each time. This sharply reduces verification costs, and the system's scalability improves accordingly. With this design, the longstanding problems of decentralized storage (high cost, low efficiency, and hard-to-verify storage) appear to have found a new answer.
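The challenge-response idea behind such proofs can be sketched as follows. This is a deliberately simplified hash-based version, not Walrus's actual protocol: a real verifier would check against a compact commitment such as a Merkle root rather than keep the sliver itself. It does show the cost intuition, though: only a fresh nonce and a 32-byte digest ever cross the network, never the fragment.

```python
import hashlib
import os

def challenge() -> bytes:
    """Fresh randomness from the verifier, so old answers can't be replayed."""
    return os.urandom(16)

def prove(sliver: bytes, nonce: bytes) -> bytes:
    """The storage node's response: H(nonce || sliver). Producing it
    requires actually having the sliver bytes on hand."""
    return hashlib.sha256(nonce + sliver).digest()

def verify(expected_sliver: bytes, nonce: bytes, proof: bytes) -> bool:
    """Simplified check: recompute the digest and compare. (In a real
    system the verifier holds a commitment, not the sliver itself.)"""
    return proof == hashlib.sha256(nonce + expected_sliver).digest()
```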