What is block size and why is it important?
The size of the blocks that make up a blockchain has a significant effect on how quickly, and how much, data the network can process, but there are always trade-offs to consider. As you probably know, blockchains get their name from the fact that they’re made up of an ongoing chain of blocks. So what role do block size and scalability play here?
Blocks are batches of transaction data, and each block contains a certain amount of data. The number of transactions per second (TPS) that the network can handle depends on both the size of the blocks and how quickly new blocks are generated.
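As a back-of-the-envelope illustration, the relationship between block size, transaction size and block interval can be sketched as follows. The figures used here, such as a 250-byte average transaction, are rough assumptions rather than exact protocol values:

```python
# Back-of-the-envelope TPS estimate from block size and block interval.
# All figures are illustrative assumptions, not exact protocol values.

def max_tps(block_size_bytes: int, avg_tx_size_bytes: int, block_interval_s: int) -> float:
    """Transactions that fit in one block, divided by seconds per block."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_interval_s

# Bitcoin-like parameters: ~1 MB blocks, ~250-byte transactions, 10-minute blocks.
print(round(max_tps(1_000_000, 250, 600), 1))  # 6.7, close to the ~7 TPS cited below
```

Under these assumed numbers, the estimate lands near Bitcoin’s oft-quoted rate, which shows how directly the two parameters bound throughput.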
A higher TPS makes a network more appealing, so developers are constantly on the lookout for ways to improve this metric. Current rates vary depending on network conditions, but as of now, Bitcoin’s maximum TPS is around seven, while Ethereum sits at about 15, not much of a difference.
To give you some perspective, Visa can handle approximately 1,700 TPS. If these networks want to be competitive in the worldwide market of payment solutions, it’s crucial that they find ways to improve their speed.
Because a blockchain’s TPS rate is largely contingent on the size of each block, this becomes a considerable challenge for achieving mainstream adoption. As we will see, however, merely enlarging the blocks is not the only option available; there are various approaches that can be taken.
When we talk about scaling solutions, we’re usually referring to one of two things: on-chain or off-chain. Both have their advantages and disadvantages, but as of right now, no one can say for sure which option is better for long-term growth.
On-chain scaling is the process of making a blockchain faster by changing something about the blockchain itself. For example, one way to scale is to reduce the amount of data used in each transaction so that more transactions can fit into a block. This is similar to what Bitcoin did with its Segregated Witness update, commonly called SegWit.
This upgrade allows for a notable improvement in network capacity by changing how transaction data is processed. Another way to improve TPS is to quicken the rate of block generation, though there are limits, as this method conflicts with the time it takes new blocks to propagate throughout the network. You don’t want new blocks to be created before the previous block has been communicated to all—or almost all—of the nodes on the network, because that can create issues with consensus.
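To see why block generation can’t simply be sped up indefinitely, here is a rough sketch of the propagation constraint. The relay bandwidth, hop count and safety factor are all illustrative assumptions, not measured network values:

```python
# Why block time can't shrink indefinitely: a new block should reach
# (nearly) all nodes before the next one is produced, or competing chain
# tips cause consensus problems. Illustrative numbers only.

def min_safe_interval(block_size_mb: float, relay_mbps: float, hops: int,
                      safety_factor: float = 10.0) -> float:
    """Seconds for a block to cross `hops` relay hops, with a safety margin."""
    per_hop_s = block_size_mb * 8 / relay_mbps  # MB -> Mbit, then transfer time
    return per_hop_s * hops * safety_factor

# 1 MB block, 10 Mbit/s effective relay bandwidth, ~6 hops to span the network
print(round(min_safe_interval(1.0, 10.0, 6), 1))  # 48.0 seconds under these assumptions
```

Note that the floor also rises with block size, which is one reason the “just make blocks bigger” and “just make blocks faster” levers work against each other.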
One way these systems could scale is by establishing seamless communication between discrete blockchains. If different chains can transact with each other, then each individual network wouldn’t have to handle as much data, and its throughput should improve.
Polkadot’s recipe for success combines multiple native chains with smart contracts working in unison, creating a platform where sending data between networks is virtually seamless. By doing this, it opens up considerable opportunities for the decentralization of the current ecosystem.
Another approach is sharding, which splits the network’s data across parallel segments, although it will still take a few years for sharding to be fully integrated into Ethereum. Some people have argued that sharding also makes the system more complex and vulnerable to attack. The rationale is that sharding raises the probability of a “double-spend” happening as the outcome of an attack: taking over just one shard wouldn’t require nearly as many resources as launching a 51% attack on the entire network.
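The resource asymmetry behind that concern can be sketched numerically. The hashpower units and shard count below are purely illustrative assumptions, and real sharding designs use random validator assignment precisely to blunt this attack:

```python
# Back-of-the-envelope view of the shard-takeover asymmetry.
# The power units and shard count are purely illustrative.

TOTAL_POWER = 100.0   # whole network's validating power, arbitrary units
NUM_SHARDS = 64       # assumed shard count

whole_chain_attack = TOTAL_POWER * 0.51                # classic 51% attack
one_shard_attack = (TOTAL_POWER / NUM_SHARDS) * 0.51   # majority of one shard

print(round(whole_chain_attack, 2), round(one_shard_attack, 2))  # 51.0 0.8
```

In this toy model, capturing a single shard costs a small fraction of a network-wide 51% attack, which is why shard security depends on how validators are assigned.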
This can lead to the confirmation of transactions that would be seen as invalid under other circumstances, such as the same Ether (ETH) being sent to two different addresses. Some projects have taken a different approach, limiting the number of validating nodes in an attempt to improve network speeds.
One example of this different approach is EOS, which has limited its validators to just 21. These 21 validators are then voted on by token holders in an attempt to keep a fair, distributed form of governance — with mixed results.
Users who prefer limited validator sets argue that they are harder for centralized bodies to control. Beyond validator counts, one popular scaling method is increasing the size of each individual block. This was the approach Bitcoin Cash took when it split from Bitcoin in 2017: judging 1 MB blocks too small, it raised the limit to 8 MB and later 32 MB per block.
Some argue that this is not a feasible solution, because ever-growing block sizes would require an ever-growing amount of storage space. Many people see this solution as simply postponing the problem instead of solving it, and they believe that it could have harmful consequences for the decentralized nature of the blockchain. However, given that the average block size on the Bitcoin Cash network is still under 1 MB, this debate has not been settled yet.
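To make the storage concern concrete, here is a rough sketch of how yearly chain growth scales with block size, assuming Bitcoin-style 10-minute blocks that are consistently full (both simplifying assumptions):

```python
# Yearly chain growth as a function of block size, assuming one block every
# ten minutes and consistently full blocks (both simplifying assumptions).

BLOCKS_PER_YEAR = 365 * 24 * 6  # 6 blocks per hour at a 10-minute interval

def chain_growth_gb_per_year(block_size_mb: float) -> float:
    return BLOCKS_PER_YEAR * block_size_mb / 1024  # MB -> GB

for size_mb in (1, 32, 1024):  # 1 MB, 32 MB and 1 GB blocks
    print(size_mb, "MB blocks ->", round(chain_growth_gb_per_year(size_mb)), "GB/year")
# 1 MB blocks add roughly 51 GB/year; 1 GB blocks would add over 50 TB/year.
```

The linear scaling is the crux of the debate: modest increases are manageable for node operators today, but gigabyte-scale blocks would quickly outpace ordinary consumer hardware.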
In addition to improving the blockchain itself, there are methods for increasing network speed that don’t involve changing the blockchain at all. These solutions are often called “second-layer” because they exist “on top of” the current system. One popular project is the Lightning Network for Bitcoin.
Essentially, Lightning Network nodes can open payment channels with each other and complete transactions without going through the main network. When a channel is closed, the final state is then recorded on the blockchain.
Additionally, these nodes can be strung together to create a payment system that is faster and cheaper than the traditional network. Ethereum also has its own solutions for off-chain transactions, including the Raiden Network (designed to be Ethereum’s Lightning Network) and the Celer Network. These projects not only allow for off-chain transactions but also state changes, which enables the processing of smart contracts.
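The channel mechanism described above can be sketched in a highly simplified model. This toy `Channel` class is an illustration of the general idea only, not the actual Lightning, Raiden or Celer protocol, which rely on multisignature funding transactions, signed commitments and penalty mechanisms:

```python
# Toy payment channel: balances are updated off-chain, and only the final
# state is settled back to the main chain when the channel closes.

class Channel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.offchain_updates = 0

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_updates += 1  # no on-chain transaction happens here

    def close(self) -> dict:
        # Only this final state would be written to the blockchain.
        return dict(self.balances)

ch = Channel(alice_deposit=50, bob_deposit=50)
ch.pay("alice", "bob", 10)
ch.pay("bob", "alice", 5)
print(ch.close(), ch.offchain_updates)  # {'alice': 45, 'bob': 55} 2
```

Two payments occur, but only one settlement would ever hit the main chain, which is where the speed and fee savings come from.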
The largest problem these systems face is that they are still beta versions with outstanding technical glitches. Another solution is called “sidechains.” Sidechains are separate blockchains attached to the main chain, allowing native assets to be moved between them as needed. This keeps transaction activity off of the primary network and frees up space for activity that needs to remain on the settled main chain. One potential issue is that each sidechain needs to be secured by its own nodes, which can lead to problems with trust and security if a user is unaware of who exactly is running them.
What are the pros and cons of increasing block size?
Some people think that block size must be increased so Bitcoin (BTC) and other decentralized assets can enter mainstream use. And it’s certainly a reasonable argument to say that if the block size is bigger, then not only can each block confirm more transactions, but the average transaction fee will go down too. If that happens, it would make the network both cheaper and faster – which sounds great.
Proponents of increasing the block size also argue that other solutions, such as sharding and sidechains, are still being tested and aren’t ready to be implemented on a large scale yet. These are important points, but it’s also important to keep in mind that raising the block size has consequences of its own. Many people see it as simply buying time rather than solving the real issue at hand, arguing that more sophisticated solutions will be necessary eventually.
The reason that larger blocks are seen as a problem by some is that node operators need to download each new block as it is propagated. With current technology, this wouldn’t be an issue if blocks were 1 MB, 4 MB, or even 32 MB in size. However, for a blockchain to be adopted on a global scale, even this might not be enough.
If blocks grow to gigabytes in size, many users will be unable to store or access them. If average users cannot keep up with the technology, activity becomes less and less decentralized, leading to more centralization. In practice, the people who can change a network are the miners; they show support for an upgrade by “signaling” it.
While miners often work together in large pools, this can create centralization problems as these conglomerates have more power than individual miners. Fortunately, there are multiple ways to solve this issue, and not all projects want unlimited block sizes. Other developers use clever strategies with the goal of ending scaling debates once and for all.