Interpretation of Sei's New White Paper: What Technological Innovations Does the Giga Upgrade Introduce?
Written by: Pavel Paramonov, Founder of Hazeflow
Compiled by: PANews
Sei has released a new white paper introducing its latest upgrade, Giga. Its 17 pages of dense technical content are hard going for most readers, so this article explains what the update contains and how it improves blockchain performance at different levels.
Asynchronous execution and block generation
The main ideas and foundations of Giga are as follows:
"If our transaction list is ordered and the initial state of the blockchain is consistent, and all honest nodes process these transactions in the same order, then the nodes will reach the same final state."
In this case, the outcome depends solely on the initial state and the order of transactions. This means that consensus only needs to be reached on the order of transactions within the block, and each node can independently compute the final state.
In this model, consensus is separated from execution, allowing blocks to be executed asynchronously.
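This determinism property is easy to see in code. The following minimal Python sketch is illustrative only (a toy balance-transfer state transition, not Sei's implementation): two nodes replaying the same ordered transaction list from the same initial state end up with identical final states.

```python
# Illustrative only: a toy state transition. Determinism means consensus can
# stop at ordering; every honest node derives the same state on its own.

def apply_tx(state: dict, tx: dict) -> dict:
    """Move `amount` from `sender` to `receiver` (toy example)."""
    new_state = dict(state)
    new_state[tx["sender"]] = new_state.get(tx["sender"], 0) - tx["amount"]
    new_state[tx["receiver"]] = new_state.get(tx["receiver"], 0) + tx["amount"]
    return new_state

def execute_block(initial_state: dict, ordered_txs: list) -> dict:
    """Replay the agreed-upon transaction order starting from a known state."""
    state = initial_state
    for tx in ordered_txs:  # the only thing consensus had to fix is this order
        state = apply_tx(state, tx)
    return state

genesis = {"alice": 10, "bob": 0, "carol": 0}
ordered_txs = [
    {"sender": "alice", "receiver": "bob", "amount": 5},
    {"sender": "bob", "receiver": "carol", "amount": 2},
]

# Two independent "nodes" replaying the same order reach the same final state.
assert execute_block(genesis, ordered_txs) == execute_block(genesis, ordered_txs)
```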
Once a block is finalized, nodes execute it and commit the resulting state in a subsequent block.
That state is then verified through consensus, ensuring that all nodes have computed the same final state.
An important detail here is that execution and consensus (block generation) run in parallel: while a node is executing one block, it is already receiving others.
Blocks are therefore executed in a total order (not in parallel with one another), while block generation itself does run in parallel with execution; for any given block, however, these two processes are completely asynchronous.
Obviously, consensus and execution cannot happen for the same block at the same time, so while block n is being executed, nodes are already receiving block n+1.
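A rough way to picture this is the loop below. It is purely illustrative (the block contents, the placeholder "execution" step, and the hash-based state root are stand-ins, not Giga's actual mechanics): each block is executed only after its ordering is final, and its state root is committed only once the next block arrives.

```python
# Illustrative pipeline: ordering is final before execution, and the state
# root produced by executing block n is only committed alongside block n+1.
import hashlib

def state_root(state: dict) -> str:
    """Toy commitment over the state (stand-in for a real state root)."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()[:16]

blocks = [["tx-a"], ["tx-b"], ["tx-c"]]  # transaction order already finalized
state = {}
pending_roots = {}  # execution results waiting to be committed later

for height, txs in enumerate(blocks):
    # Execute block `height` now that its ordering is final.
    for tx in txs:
        state[tx] = state.get(tx, 0) + 1  # placeholder "execution"
    pending_roots[height] = state_root(state)

    # The previous block's result is only committed once this block exists.
    if height - 1 in pending_roots:
        print(f"block {height} commits the state root of block {height - 1}: "
              f"{pending_roots.pop(height - 1)}")
```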
If consensus breaks down (for example, because one-third of the network's nodes act maliciously), the chain halts, just as in standard BFT protocols.
Because block generation and execution are separated and a block's final state is only committed in a later block, transactions that fail during execution do not invalidate the block; they simply end in a failed state.
How is the multi-proposer model implemented and what is Autobahn?
The consensus protocol itself is called "Autobahn" (after the German highway with no speed limit). Autobahn separates data availability from transaction ordering and relies on an interesting model.
Like a highway, it has multiple lanes, and each node has its own. Nodes use their lanes to make proposals about transaction ordering; a proposal is simply an ordered batch of transactions.
From time to time, Autobahn performs a "tip cut," which aggregates multiple proposals to finalize the transaction order.
As mentioned earlier, each validator has its own lane in which to propose batches of transactions.
When a node receives a valid proposal, it sends a vote to confirm that the proposal has been received.
Once votes on a proposal have been collected, a Proof of Availability (PoA) is formed, guaranteeing that the data has been received by at least one honest node in the network.
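A simplified sketch of the lane/PoA idea follows. The data structures and the f+1 acknowledgment threshold are illustrative assumptions for a BFT setting that tolerates f faulty validators, not Sei's actual parameters.

```python
# Illustrative lane/PoA sketch. The f+1 threshold is an assumption: with at
# most f faulty validators, f+1 acknowledgments imply at least one honest
# node holds the batch and can serve it later.
from dataclasses import dataclass, field

F = 1                  # tolerated faulty validators (illustrative)
POA_THRESHOLD = F + 1  # acks needed before the data counts as available

@dataclass
class Proposal:
    lane: int          # the proposing validator's lane
    batch: list        # ordered batch of transactions
    votes: set = field(default_factory=set)

    def has_poa(self) -> bool:
        """True once enough validators confirmed they received the batch."""
        return len(self.votes) >= POA_THRESHOLD

proposal = Proposal(lane=0, batch=["tx1", "tx2"])

# Other validators vote to confirm receipt of the proposal.
for validator_id in (1, 2, 3):
    proposal.votes.add(validator_id)

print("proof of availability formed:", proposal.has_poa())  # True
```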
Tip cuts happen on a timescale of milliseconds, "cutting" the latest proposals in Autobahn's lanes.
Proposers are incentivized to wait and publish whole blocks when possible, but the per-block execution time limit (similar to a gas limit) alters this dynamic slightly.
A proposal in a lane typically corresponds to a block, so a single tip cut can cut off multiple blocks at once.
The slot leader then sends the tip cut to the other nodes to finalize the ordering; in practice, nodes are already preparing the next tip cut while still voting on the current one.
Nodes that are missing a batch can fetch it asynchronously from the validators listed in the PoA, which is exactly why data availability is needed (see the sketch below).
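The aggregation step can be sketched roughly as follows. The lane contents and the `tip_cut` helper are hypothetical, and the real protocol carries proofs and metadata rather than raw transaction lists.

```python
# Illustrative tip cut: gather everything each lane proposed since the last
# cut, up to its current tip, into one globally ordered list.
lanes = {
    0: [["a1", "a2"], ["a3"]],  # validator 0's proposals, oldest first
    1: [["b1"]],
    2: [["c1", "c2", "c3"]],
}

def tip_cut(lanes: dict, last_cut: dict) -> list:
    """Aggregate each lane's proposals since the previous cut."""
    ordered = []
    for lane_id in sorted(lanes):                # deterministic lane order
        start = last_cut.get(lane_id, 0)
        for batch in lanes[lane_id][start:]:     # everything up to the tip
            ordered.extend(batch)
        last_cut[lane_id] = len(lanes[lane_id])  # remember the new tip
    return ordered

last_cut = {}
print(tip_cut(lanes, last_cut))  # ['a1', 'a2', 'a3', 'b1', 'c1', 'c2', 'c3']
```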
Under synchrony, with a correct leader, Autobahn confirms proposals in two rounds of communication; if the leader fails, a new leader is elected so that progress continues.
The next tip-cut proposal can begin while the current tip cut is still in its commit phase, which reduces latency, since execution runs in parallel with block generation.
In essence, the whole design is a multi-proposer model in which many nodes propose blocks for ordering at the same time. Each validator proposes its own blocks and receives proofs of availability (PoA) for them from the network, which improves throughput and overall network efficiency.
Parallel execution and when it applies
As mentioned earlier, block execution and consensus happen in parallel, although the blocks themselves are executed sequentially. You may wonder whether this counts as true parallel execution.
The answer is both yes and no.
Although blocks are executed in order, transactions within a block can indeed be executed in parallel: if two transactions do not modify (write to) the same state and the result of one does not affect the other, they can run in parallel.
In short, their execution paths must not depend on each other. Giga has no mempool; transactions are included by nodes immediately.
Giga assumes that most transactions do not conflict and processes these transactions simultaneously on multiple processor cores.
Changes to each transaction are temporarily stored in a private buffer and are not immediately applied to the blockchain.
After processing is complete, the system will check whether this transaction conflicts with previous transactions.
If there is a conflict, the transaction will be reprocessed. If there is no conflict, its changes will be applied to the blockchain and finalized.
When conflicts are frequent, the system falls back to processing one transaction at a time to guarantee that transactions keep making progress.
In simple terms, parallel execution allocates transactions to multiple cores so that those transactions without conflicts can run simultaneously.
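A compact sketch of this optimistic approach is below. The transaction shape (one read key, one write key) and the conflict rule are simplifications for illustration, not Giga's actual execution engine.

```python
# Illustrative optimistic execution: writes go to private buffers, conflicts
# with earlier transactions are detected afterwards, and conflicting
# transactions are re-executed against the updated state.
state = {"alice": 10, "bob": 5, "carol": 0}

# Each toy transaction reads one key and writes one key.
txs = [
    {"read": "alice", "write": "bob",   "delta": 3},
    {"read": "carol", "write": "carol", "delta": 1},  # independent of tx 0
    {"read": "bob",   "write": "alice", "delta": 2},  # reads what tx 0 wrote
]

def execute(tx: dict, snapshot: dict) -> dict:
    """Run a transaction against a snapshot; return its private write buffer."""
    return {tx["write"]: snapshot[tx["write"]] + tx["delta"]}

# Phase 1: run every transaction against the same pre-state snapshot
# (conceptually on separate cores).
snapshot = dict(state)
buffers = [execute(tx, snapshot) for tx in txs]

# Phase 2: commit in block order, re-running any transaction that touched a
# key already written by an earlier transaction in this block.
written = set()
for tx, buffer in zip(txs, buffers):
    if tx["read"] in written or tx["write"] in written:
        buffer = execute(tx, state)  # conflict: re-execute on the fresh state
    state.update(buffer)
    written.add(tx["write"])

print(state)  # {'alice': 12, 'bob': 8, 'carol': 1}
```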
Storage issues and optimization
Because of the high transaction volume, data must be both secure and easily accessible, so Giga's storage differs somewhat from traditional blockchain storage. Giga stores data in a simple key-value format, a relatively flat structure that reduces the number of updates and checks required when data changes.
In addition, Giga also adopts a tiered storage approach: recent data is kept on SSDs (high speed), while less frequently used data is migrated to slower, more cost-effective storage systems.
If a node crashes, it can replay its logs to restore the correct state, with updates applied to RocksDB (an embedded key-value database) to organize the data on disk.
The storage system employs a cryptographic accumulator, which can prove the correctness of data without heavy computation. The accumulator is updated in batches, allowing validators and light nodes to quickly agree on the current state of the blockchain.
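As a rough illustration (a plain SHA-256 hash over the sorted store stands in for the cryptographic accumulator, which is a more involved construction), the sketch below keeps state in a flat key-value map and refreshes the commitment once per batch of writes rather than after every single update.

```python
# Illustrative only: a flat key-value store whose commitment is recomputed
# once per batch of writes (a SHA-256 hash stands in for the accumulator).
import hashlib

kv_store = {}  # flat key-value state, e.g. persisted in RocksDB on disk

def apply_batch(batch: dict) -> str:
    """Apply a batch of writes, then refresh the commitment a single time."""
    kv_store.update(batch)
    digest = hashlib.sha256()
    for key in sorted(kv_store):  # deterministic iteration order
        digest.update(f"{key}={kv_store[key]};".encode())
    return digest.hexdigest()

root_after_batch_1 = apply_batch({"balance/alice": 7, "balance/bob": 3})
root_after_batch_2 = apply_batch({"balance/bob": 5})
print(root_after_batch_1 != root_after_batch_2)  # True: one update per batch
```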
What does it mean to become a multi-proposer EVM L1 blockchain?
L1 infrastructure can be improved in many ways, and different L1s face different challenges, from economic issues such as MEV to technical problems such as state management.
Being the first L1 to support multiple proposers is quite challenging, especially for an EVM L1, since the EVM was never designed with a multi-proposer system in mind.
However, Sei is trying different approaches to retain the EVM and many tools that developers are accustomed to using.
Parallel transaction execution, consensus running alongside execution, and multiple proposers operating in parallel all contribute to better performance, with execution throughput potentially increasing by roughly 50x. These improvements, however, also come with some of the risks mentioned above.
This is Sei's second major upgrade: it previously moved from being a Cosmos chain to an EVM chain, and now it is launching an execution client optimized for speed.
What comes next, and the downstream effects of these optimizations, will be worth watching.