Cryptocurrency: Scaling, Privacy [re: on whatever...]

other.arkitech other.arkitech at protonmail.com
Mon Dec 16 06:22:58 PST 2019


> yeah, so what's the 'next generation' solution?
>
> > a) they can be sharded signed and distributed out to query trees,
> > doesn't solve things long term.
> > b) utxo can be flag cutover coinbase input to empty chain, very unlikely.
> > c) chain protocols will evolve to effectively consensus mine
> > a utxo database, hashed checkpoint series, etc thereby
> > allowing tx's and blocks to be thrown away after the series
> > becomes too confirmed and locked to mine a different one.
> > Storage becomes mooted,
>
> The problem isn't storage per se. The thing is, all peers have to validate all transactions. And that may include all past transactions. It may seem pointless to revalidate old transactions but how do you arrive at the current state if you don't?

These views are aligned with the system I am developing (correct me if I am wrong), which includes a separation between public data and private data.
The private network, run on the same nodes that run the public protocol, is made of encrypted P2P links over which private trading protocols are conducted.
These trading events would generate public transactions to be settled on the public ledger.
Private blockchains could be created among closed groups.
Neither the open nor the restricted blockchains would reflect the past, just the current state.
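
To make that separation concrete, here is a minimal Python sketch.
All of the type and field names are hypothetical illustrations of the
idea, not the actual protocol: the trade is negotiated over the
encrypted private link, and only a compact settlement record (with a
hash commitment to the private terms) ever reaches the public ledger.

from dataclasses import dataclass
import hashlib

@dataclass
class PrivateTrade:
    # lives only on the encrypted P2P link, never published
    buyer: str
    seller: str
    price: int
    terms: str          # full contract text, kept private

@dataclass
class PublicSettlement:
    # the only record the public ledger ever sees
    payer: str
    payee: str
    amount: int
    trade_digest: str   # commitment binding it to the private trade

def settle(trade: PrivateTrade) -> PublicSettlement:
    # publish the value movement plus a commitment, nothing else
    digest = hashlib.sha256(trade.terms.encode()).hexdigest()
    return PublicSettlement(trade.buyer, trade.seller,
                            trade.price, digest)

The public side can later verify a disclosed trade against the
digest, but the ledger itself carries no trading history, which
matches the "current state only" goal above.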


>
> > the lingering limitation seems
> > to be estimating bandwidth the processing participants
> > need.
>
> I don't know. The problem is that 'peers' need to audit the whole system. And that takes resources. So peers start 'delegating' powers because that's more 'efficient', and we end up with the same central tyranny we have now.
>

Regarding peers needing to audit the whole system (or just a fraction of it):

As part of a scalability solution, I have been turning over the following idea:

As the network grows and reaches, say, 10,000 nodes, it would split into two consensus clusters of 5,000 nodes each. Each side is an isolated network, each caring about half of the ledger. Transactions would be relayed to the appropriate cluster or partition. A transaction involving both clusters would require a temporary connection to a random node in the other cluster to perform an atomic validation.
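
A minimal sketch of that routing rule, in Python, assuming accounts
are assigned to clusters by the leading bits of their hash (the names
and the bit-prefix scheme are my assumptions, not a spec):

import hashlib

SPLIT_THRESHOLD = 10_000   # nodes per cluster before a split

def account_bits(account: str) -> int:
    # 256-bit integer identity of an account
    return int.from_bytes(
        hashlib.sha256(account.encode()).digest(), "big")

def cluster_of(account: str, depth: int) -> int:
    # cluster index after `depth` binary splits (depth 0 = one net)
    return account_bits(account) >> (256 - depth) if depth else 0

def clusters_touched(sender: str, receiver: str, depth: int) -> set:
    # a two-element result means the cross-cluster atomic step
    return {cluster_of(sender, depth), cluster_of(receiver, depth)}

At depth 1 the two clusters each hold roughly half of the accounts;
when clusters_touched() returns two indices, the validating node
would open the temporary connection to a random node in the other
cluster.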

Then, as the two clusters grow to 10,000 nodes each, another split would happen automatically, leaving 4 networks (4 blockchains), each caring about 1/4 of the global ledger.

Applying this model recursively, a mass-adoption scenario would consist of 1,000,000 networks of 10,000 nodes each, totalling 10 billion nodes, with each network caring about one millionth of the global ledger.
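
For the record, clean binary splits give power-of-two counts, so the
figures above correspond to roughly 20 splits:

splits = 20
networks = 2 ** splits                       # 1,048,576 ~ 1,000,000
nodes_per_network = 10_000
total_nodes = networks * nodes_per_network   # ~10.5 billion
ledger_share = 1 / networks                  # ~one millionth each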

What problems does this approach imply that I am not foreseeing?


thx




