
- Why is `m_assumeutxo_data` hardcoded in the first place if we don't need to trust others' UTXO sets? (We're being forced to use only that UTXO set version.)
The concern is people putting up websites with instructions for "even faster sync time!" with UTXO set downloads. If such a website were to become popular, and then compromised, there's a non-negligible chance of this actually resulting in a malicious UTXO set being loaded and accepted by users, even if only briefly (anything is possible in such a UTXO set, including the attacker giving themselves 1 million BTC).
By putting the commitment hash in the source code, it becomes subject to Bitcoin Core's review ecosystem. I think it's unfair to call this just a "developers decide", because:
- Active review community. Anyone can, and many people do, look over changes to the source code. A change to the `m_assumeutxo_data` value is easy to review (just check an existing node's hash), and gets a lot of scrutiny.
- Bitcoin Core has reproducible builds. Anyone, including non-developers, can participate in building releases, and they should end up with bit-for-bit identical binaries as the ones published. This establishes confidence that the binaries people actually run match the released source code, including the `m_assumeutxo_data` value.
If you think of "developers" as the entire group of people participating in these processes, then it is of course not incorrect to say that it is effectively this group making that decision. But I think the scale and transparency of the whole thing matters. This isn't a single person choosing a value before a release, without oversight, as an instruction on a website might be. And of course, the user is inherently trusting this group of people/process anyway for the validation software itself, even if we try to minimize the extent to which this trust is needed.
- Why is `m_assumeutxo_data` set to 840,000 and not to the same block as `assumevalid`?
The original idea behind assumeutxo, even though nobody is working on completing it right now, included automatic snapshotting and distribution of snapshots over the network, so that users wouldn't need to go find a source.
In such a model, there would be a predefined schedule of heights at which snapshots would be made. For example, there could be one every 52500 blocks (roughly once per year), and all nodes supporting the feature would create a snapshot at that height when it is reached, and keep the last few snapshots around for download over the P2P network. New nodes starting up, with `m_assumeutxo_data` values set to whatever the last multiple of 52500 was at the time of release, can then synchronize from any snapshot-providing node on the network, even if the provider is running older software than the receiver.
While there is no progress currently on the P2P side of this, it still suggests using a snapshot height schedule that isn't tied to Bitcoin Core releases.
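To make the schedule above concrete, here is a small sketch (illustrative only, not actual Bitcoin Core code): with snapshots at every multiple of 52500 blocks, the height a release would point at is simply the largest multiple not exceeding the chain tip at release time.

```python
# Sketch of the hypothetical snapshot-height schedule described above.
SNAPSHOT_INTERVAL = 52_500  # roughly one year of blocks at ~144 blocks/day

def latest_snapshot_height(tip_height: int) -> int:
    """Largest scheduled snapshot height at or below the given tip."""
    return (tip_height // SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL

# With the tip at, say, height 845123, the newest scheduled snapshot
# sits at 840000 (= 16 * 52500).
print(latest_snapshot_height(845123))  # → 840000
```

Note how 840,000 itself is a multiple of 52500, consistent with a release-independent schedule.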
- I understand that we don't want people to start trusting random UTXO sets out of laziness about waiting for a sync, but couldn't we use some kind of signed-by-self UTXO sets? It would be great if, as a user, you could back up the actual UTXO set, sign it in some way, and be able to load and verify it later to sync a new node.
If it's just for yourself, you can make a backup of the `chainstate` directory (while the node is not running). Assumeutxo has a number of features that matter in the wide-distribution model, but don't apply to personal backups:
- The snapshot data is canonical. Anyone can create a snapshot at a particular height, and everyone will obtain an identical snapshot file, making it easy to compare and to distribute (potentially from multiple sources, bittorrent-style).
- Snapshot loading still involves background revalidation. It gives you a node that is immediately synced to the snapshot point, and can continue validation from that point on, but for safety, the node will still separately perform a revalidation of the snapshot itself in the background (from genesis to the snapshot point).
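The canonical property in the first point is what makes independent verification cheap: two people who each dump a snapshot at the same height can compare the files byte-for-byte, for example via a hash. A minimal sketch (the file names are hypothetical):

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """SHA-256 of a file's contents, for comparing snapshot files."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Because snapshots are canonical, two independently created dumps for the
# same height should satisfy:
#   file_sha256(Path("my-utxo-840000.dat")) == file_sha256(Path("peer-utxo-840000.dat"))
```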
If you trust the snapshot creator and loader completely (because you are both of them yourself), the overhead of these features is unnecessary. By making a backup of your chainstate (which holds the UTXO set), you can at any point, on any system, jump to that point in validation. It's a database, so it's not byte-for-byte comparable between systems, but it is compatible. The side "restoring" the backup won't know it's loading something created externally, so it won't perform background re-validation, but if you ultimately trust the data anyway, that's just duplication of work.
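A personal backup along these lines amounts to copying the directory while the node is stopped. A hedged sketch (paths and function names are illustrative; this is not a Bitcoin Core tool, just plain directory copies):

```python
import shutil
from pathlib import Path

def backup_chainstate(datadir: Path, backup_dir: Path) -> None:
    """Copy the chainstate directory (the UTXO set database).

    The node must be fully stopped first, so the on-disk database is in
    a consistent state.
    """
    shutil.copytree(datadir / "chainstate", backup_dir)

def restore_chainstate(backup_dir: Path, datadir: Path) -> None:
    """Restore a backup into a (stopped) node's data directory."""
    target = datadir / "chainstate"
    if target.exists():
        shutil.rmtree(target)  # discard the current chainstate
    shutil.copytree(backup_dir, target)
```

On restart, the node simply sees a chainstate at the backed-up height and continues validating forward from there, with no background re-validation.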