
periodically increase the block size, splitting the transaction fee among more transactions. Although larger blocks make it more difficult to produce hashes

Couldn’t they simply lower the baseline difficulty along with the block size?

Edit: and wouldn’t difficulty drop automatically in this case?



Bitcoin is best understood as a timestamping service that signs up to 1MB of information every 10 minutes. Recording monetary transactions is only one of its applications.

The 10 minute interval is an important part of the consensus mechanism, as it has been proven secure both theoretically and practically. With a shorter block interval, orphan blocks and small chain reorgs would become more likely, which could break some aspects of the security model, such as zero-conf transactions.

By the way, larger blocks don't take longer to hash - only the fixed-size header is used for PoW. However, larger or more frequent blocks do require more bandwidth and storage to process.
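To make that concrete, here is a minimal sketch of why block size doesn't affect PoW cost: the hash miners grind on covers only the 80-byte header, never the transaction data. The field values below are illustrative placeholders, not real chain data.

```python
import hashlib
import struct

# A Bitcoin block header is always exactly 80 bytes, no matter how many
# transactions the block body carries. Values here are placeholders.
version = struct.pack("<I", 2)                 # 4 bytes
prev_block_hash = bytes(32)                    # 32 bytes: hash of previous header
merkle_root = bytes(32)                        # 32 bytes: commits to all txs
timestamp = struct.pack("<I", 1640995200)      # 4 bytes
bits = struct.pack("<I", 0x1D00FFFF)           # 4 bytes: compact difficulty target
nonce = struct.pack("<I", 0)                   # 4 bytes

header = version + prev_block_hash + merkle_root + timestamp + bits + nonce
assert len(header) == 80

# Proof of work: double SHA-256 over just these 80 bytes.
pow_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(pow_hash[::-1].hex())  # displayed byte-reversed by convention
```

The transactions only enter the picture through the 32-byte merkle root, so a 1MB block and a 50MB block cost exactly the same to hash.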


By the way larger blocks don't take longer to hash - only the fixed size header is used for PoW.

Yeah, I forgot about merkle trees or something like that. So, do I get it right that there is no problem with increasing the block size? Just start signing up to 50MB every 10 minutes in 2022 and that’s it?
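The merkle tree is indeed the mechanism that keeps the header fixed-size: all transaction IDs are folded into a single 32-byte root. A rough sketch of the Bitcoin-style construction (pairwise double SHA-256, duplicating the last hash on odd levels):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Bitcoin-style merkle root: hash pairs level by level,
    duplicating the last element when a level has an odd count."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Whether a block holds 10 txs or 10,000, only this single 32-byte
# root ends up in the 80-byte header that miners hash.
root = merkle_root([dsha256(str(i).encode()) for i in range(10)])
assert len(root) == 32
```

So the header-hashing cost is constant; the cost of bigger blocks shows up in propagation, validation, and storage instead.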

However larger or more blocks do require more bandwidth and storage to process.

But doesn’t the amount of data depend on the tx count? No matter how big the chunks you split them into, it’s the same bytes per minute in the end. Why a hard limit at all?


The original security model of bitcoins also assumes that:

1. Every network node participates in mining. So everybody is incentivized to keep, validate and forward the blocks they receive as quickly as possible.

2. Anybody should be able to bootstrap a network node and verify the entire blockchain from the genesis block without explicitly trusting any other node. Thus when a new block arrives one can independently verify it.

#1 has not been the case for a few years since the advent of mining pools, whereas #2 might as well be a lost cause. The entire bitcoin blockchain sits at 330GB and counting, and the live UTXO database is rarely below 3GB.

Hence people have argued against making the blocks larger, as it could bloat the blockchain to the point that only people with very powerful computers could afford to validate the chain on their own. However, even with the status quo we are already seeing the number of network nodes stagnate around 10k since 2017, as running one is largely an altruistic effort.

>But doesn’t the amount of data depend on the tx count? No matter how big the chunks you split them into, it’s the same bytes per minute in the end. Why a hard limit at all?

That is correct. You could also see the limit as setting a minimum cost for putting information on the chain, which could benefit the miners too.
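Back-of-envelope arithmetic makes the scarcity concrete. Assuming an average transaction of roughly 250 bytes (an assumed figure, typical for simple payments), a 1MB cap every 10 minutes bounds throughput at only a handful of transactions per second, which is what forces users to bid fees for block space:

```python
# Throughput implied by a hard 1 MB cap every 10 minutes.
# avg_tx_size is an assumption for illustration.
block_size = 1_000_000           # bytes
block_interval = 600             # seconds
avg_tx_size = 250                # bytes, assumed

tx_per_block = block_size // avg_tx_size       # 4000
tx_per_second = tx_per_block / block_interval  # ~6.7 tx/s
print(tx_per_block, round(tx_per_second, 1))   # 4000 6.7
```

Whenever demand exceeds those ~4000 slots per block, transactions must outbid each other on fee per byte, which is the fee floor the comment above refers to.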


Your speculation doesn't square well with the original whitepaper, which preempts your concerns by pointing out that you don't have to run a full node. It introduces SPV and notes that you only need to store the 80-byte block headers (a total of about 50MB as of 2021). (Satoshi was a big believer in SPV.)
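The ~50MB figure checks out with simple arithmetic: one 80-byte header every 10 minutes since the January 2009 genesis block.

```python
# Back-of-envelope check of the "~50 MB of headers by 2021" figure.
header_size = 80                   # bytes per block header
blocks_per_year = 365.25 * 24 * 6  # one block every ~10 minutes
years = 2021 - 2009                # genesis block: January 2009

total_bytes = header_size * blocks_per_year * years
print(round(total_bytes / 1e6, 1), "MB")  # 50.5 MB
```

An SPV client storing only this header chain can still verify proof of work and check merkle inclusion proofs for its own transactions, just not the validity of everyone else's.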


I was more or less quoting the point of view of the current crop of developers, who are very keen to keep the status quo. I don't necessarily agree with all of their arguments, but they make good counterpoints to the other extreme viewpoint of "let's have 4GB blocks tomorrow and everything will be A-OK".


Well that’s the question and the reason for the block war and the BTC/BCH split.

I’m not sure myself. Attempt to drive people towards LN or other “second layer” solutions for payments? Or deliberate attempt to create a high fee market for miners? Or attempt to keep bandwidth and storage costs down to lower barrier to entry for mining, to keep it more decentralised?

Certainly the decision to keep the cap has caused the move from “digital currency” to “digital gold” / store of wealth.


There aren't that many freebies, and the decision is more arbitrary than it seems.

But it has downsides like changing the fee market and making it more expensive for miners to race broadcasts.


Yeah, difficulty would drop. You'd end up producing blocks more often than every 10 minutes. There's probably some lower bound on difficulty (and block frequency) below which the network can no longer maintain consensus.
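For reference, the drop wouldn't be instant: Bitcoin retargets difficulty only every 2016 blocks, scaling the target by how far the actual timespan deviated from two weeks, clamped to a factor of 4. A simplified sketch of that rule (integer target math; real node implementations also handle compact "nBits" encoding, omitted here):

```python
def retarget(old_target: int, actual_timespan: int) -> int:
    """Simplified sketch of Bitcoin's difficulty adjustment, applied
    every 2016 blocks. A higher target means easier blocks.
    Timespans are in seconds."""
    expected = 2016 * 600  # two weeks at one block per 10 minutes
    # Clamp to a factor of 4 in either direction, as Bitcoin does.
    actual = max(expected // 4, min(actual_timespan, expected * 4))
    return old_target * actual // expected

# If blocks arrived twice as fast as intended, the target halves
# (i.e. difficulty doubles) at the next retarget.
t = 1 << 224
assert retarget(t, (2016 * 600) // 2) == t // 2
```

So if blocks suddenly took longer (or were somehow made harder), the target would ratchet up over subsequent retarget periods until the 10-minute average was restored.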


I meant dropping the baseline difficulty ×N while the larger blocks raise it ×N, so it's still 10 min per block. But your sibling commenter noted that block size doesn't affect hashing difficulty. That's confusing.




