From sharding to danksharding: the evolution of scalability solutions
Within the Ethereum ecosystem, blockchain scalability has evolved from traditional sharding to the newer danksharding approach, with each offering distinct trade-offs.
- Sharding is a scalability solution that partitions a blockchain into smaller segments, or shards, to distribute the processing of transactions.
- By doing so, sharding increases the overall capacity and speed of the network and enhances its robustness against failures.
- However, sharding introduces potential security risks, as there are fewer validators on each shard, and presents challenges in the complex process of data migration.
- Danksharding is an advancement in Ethereum's scalability that relies on data availability sampling to improve efficiency in the network.
- Proto-danksharding, an intermediary step towards full danksharding, uses temporary storage called data blobs to reduce storage costs for rollups.
Traditional sharding is a technique in blockchain technology that aims to boost transaction scalability and speed. The blockchain network is divided into smaller segments called shards, so transactions are validated within individual shards rather than across the entire network. This parallel processing increases throughput, addressing scalability bottlenecks.
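To make the partitioning concrete, here is a minimal Python sketch of one common assignment scheme - hashing an account address to a shard index. The shard count and addresses are hypothetical illustrations, not Ethereum's actual design:

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count, for illustration only

def shard_for(account: str) -> int:
    """Deterministically assign an account to a shard by hashing its address."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Transactions touching the same account always map to the same shard,
# so each shard can validate its own slice of traffic in parallel.
accounts = ["0xalice", "0xbob", "0xcarol"]
assignment = {a: shard_for(a) for a in accounts}
```

Because the mapping is deterministic, every node agrees on which shard owns which account without any coordination - the same property that also makes moving an account to a different shard (data migration) awkward.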
The main benefits of traditional sharding
- Scalability and improved performance. Distributing data across nodes lightens the workload for each node. This bolsters the speed of transaction processing and data retrieval, catering to a broader user base and facilitating larger transaction volumes.
- Enhanced fault tolerance. With sharding, the entire system doesn't depend on a single point of failure. If one shard faces issues or goes offline, the remaining shards can still operate effectively. This improves system availability and resistance to possible failures.
Challenges with traditional sharding
- Security and communication. In a sharded system, individual shards might have fewer validators compared to an entire network. This potentially makes them more vulnerable to malicious attacks.
- Data migration. Moving data between shards can be a complex and resource-intensive process, and it must be carefully planned and executed to avoid downtime or data inconsistencies.
A journey to full danksharding
Initially, sharding was considered a primary solution for Ethereum's scalability challenges. However, it faced issues with security, cross-shard communication and implementation complexity that required significant changes to Ethereum's infrastructure.
As Ethereum's ecosystem evolved, Layer 2s, especially rollups, emerged as a more efficient and less complex solution.
Rollups process transactions off-chain but submit batched proofs on-chain, offering increased throughput. Thus, in the Ethereum roadmap, sharding was eventually superseded by another proposal, danksharding, designed as a simpler and less invasive upgrade than traditional sharding.
Danksharding, named after Ethereum researcher Dankrad Feist, aims to overhaul Ethereum's architecture, including how transactions are processed and how data is stored. Instead of processing transactions, shards in danksharding would act as data availability layers. Danksharding will employ "data availability sampling," enabling Ethereum nodes to confirm that large amounts of data are available by inspecting only small random samples, ensuring quicker data verification.
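A rough back-of-the-envelope sketch shows why sampling a few chunks is enough. It assumes the standard 2x erasure-coding setup, in which an attacker must withhold at least half of a blob's chunks to make it unrecoverable, so each independent random sample hits missing data with probability at least one half:

```python
def detection_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that at least one random chunk query lands on withheld data.

    With 2x erasure coding, a blob stays recoverable unless >= 50% of its
    chunks are withheld, so an attacker trying to hide data loses each
    independent coin flip with probability >= withheld_fraction.
    """
    return 1.0 - (1.0 - withheld_fraction) ** samples
```

Just 30 random samples push the detection probability above 1 - 2^-30, which is why a light node can gain high confidence in a blob's availability while downloading only a tiny fraction of it.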
Mechanics of proto-danksharding
Proto-danksharding, also known as EIP-4844, is an upgrade that paves the way to danksharding, focusing on improving the scalability of rollups. Rollups can incur high transaction costs because their transaction data remains on-chain permanently, even though it is only needed temporarily.
Proto-danksharding introduces data blobs, attached to blocks and designed for short-term data storage. Data blobs are not accessible to the Ethereum Virtual Machine (EVM) and are automatically pruned after a retention window (the EIP-4844 specification requires nodes to keep blobs for a minimum of 4096 epochs, roughly 18 days). This significantly reduces data storage costs for rollups, leading to more affordable transactions for users.
Rollups store the data for executed transactions in data blobs and simultaneously post a cryptographic "commitment" to this data - a short value that can be checked against the blob without reading it in full. Anyone holding the blob can recompute the commitment and compare it with the one posted on-chain; if the two match, the data is authentic, with no need to re-execute the transactions it contains.
A key aspect of proto-danksharding is the Kate-Zaverucha-Goldberg (KZG) scheme, which shrinks data blobs into compact cryptographic commitments, ensuring efficient data verification.
To visualize the concept, consider a box containing several items. Without examining its contents, one can still make a "commitment" to it - say, by capturing a picture and generating a unique hash. To authenticate the box's contents later, simply verify the photo's hash against the commitment. If they match, the box remains unaltered. KZG works in a similar way, but instead of hashing a picture, it hashes data in a blob.
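The box analogy can be sketched in a few lines of Python, using an ordinary SHA-256 hash to stand in for the KZG scheme (a real KZG commitment is a polynomial commitment rather than a hash, but the check-against-a-fingerprint idea is the same):

```python
import hashlib

def commit(blob: bytes) -> str:
    # The "photo of the box": a short fingerprint of the blob's contents.
    return hashlib.sha256(blob).hexdigest()

blob = b"rollup transaction batch"          # illustrative data, not a real blob
commitment = commit(blob)                   # posted on-chain alongside the blob

# Later, anyone holding the blob re-hashes it and compares:
assert commit(blob) == commitment           # unaltered -> matches
assert commit(b"tampered") != commitment    # any change is detected
```

Unlike a plain hash, a KZG commitment additionally lets a prover open individual points of the blob - proving small pieces are correct without revealing the whole - which is the property data availability sampling relies on.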
While proto-danksharding allows only a handful of blobs per block (a target of three and a maximum of six under EIP-4844), danksharding aims to expand this to 64 blobs per block. One of the key challenges in achieving full danksharding is developing a robust and efficient data availability sampling algorithm.
Although full danksharding is still relatively far away, a major step towards it has been made with proto-danksharding, and the Ethereum community awaits its wider implementation. As scalability remains a high-priority issue for blockchain tech, solutions such as proto-danksharding and the prospects of danksharding show Ethereum's commitment to addressing the scalability challenge.