Add monero_add_minimal_executable and use in tests
This is done so that targets do not have to be relinked when just an .so changed, but not its interface.
d20ff4f64 functional_tests: add a large (many randomx epochs) p2p reorg test (moneromooo-monero)
6a0b3b1f8 functional_tests: add randomx tests (moneromooo-monero)
9d42649d5 core: fix mining from a block that's not the current top (moneromooo-monero)
This reduces the attack surface for data that can come from
malicious sources (exported output and key images, multisig
transactions...) since the monero serialization is already
exposed to the outside, and the boost lib we were using had
a few known crashers.
For interoperability, a new load-deprecated-formats wallet
setting is added (off by default). This allows loading boost
format data if there is no alternative. This setting will likely be
removed at some point, along with the ability to load those formats.
Notably, the peer lists file still uses the boost serialization
code, as the data it stores is defined in epee, while the new
serialization code is in monero, and migrating it was fairly
hairy. Since this file is local and not obtained from anyone
else, the marginal risk is minimal, but it could be migrated
later if needed.
Some tests and tools also still use it; this will stay as-is for now.
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase (see the sketch after this list)
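A minimal sketch of the embargo idea described above, with made-up names (not the daemon's actual Dandelion++ code): each tx relayed in stem mode gets a randomized embargo deadline, and if the tx has not been observed fluffed by that deadline the node assumes a black hole and fluffs it itself.

    #include <chrono>
    #include <random>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Tracks stem-phase txes and their embargo deadlines (illustrative only).
    class embargo_tracker
    {
    public:
      void on_stem_relay(const std::string &txid)
      {
        // Randomized timeout so an attacker cannot predict when we will fluff.
        // The 5..40 second range is an assumption for the sketch.
        std::uniform_real_distribution<double> dist(5.0, 40.0);
        const auto deadline = std::chrono::steady_clock::now()
            + std::chrono::duration_cast<std::chrono::steady_clock::duration>(
                std::chrono::duration<double>(dist(rng)));
        deadlines[txid] = deadline;
      }

      // Seen propagating in fluff mode: the embargo no longer applies.
      void on_fluff_seen(const std::string &txid) { deadlines.erase(txid); }

      // Called periodically: txes whose embargo expired should be fluffed by us.
      std::vector<std::string> expired_embargoes()
      {
        std::vector<std::string> out;
        const auto now = std::chrono::steady_clock::now();
        for (auto it = deadlines.begin(); it != deadlines.end(); )
        {
          if (now >= it->second) { out.push_back(it->first); it = deadlines.erase(it); }
          else ++it;
        }
        return out;
      }

    private:
      std::mt19937_64 rng{std::random_device{}()};
      std::unordered_map<std::string, std::chrono::steady_clock::time_point> deadlines;
    };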
- e.g., fixes gen_block_big_major_version test, error: generation failed: what=events not set, cannot compute valid RandomX PoW
- ask for events only if difficulty > 1 (when it really matters)
- changed throwing an exception to logging, so it is easy to spot a problem if tests start to fail
Avoids a DB error (leading to an assert) where a thread uses
a read txn previously created with an environment that was
since closed and reopened. While this usually works since
BlockchainLMDB renews txns if it detects the environment has
changed, this will not work if objects end up being allocated
at the same address as the previous instance, leading to stale
data usage.
Thanks hyc for the LMDB debugging.
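A minimal illustration of the pitfall described above, with made-up names (not BlockchainLMDB's actual code): a cached read txn is renewed when the environment object it was created against appears to have changed, but a plain pointer comparison cannot tell a reopened environment apart from the old one if the new instance happens to be allocated at the same address.

    // Stand-ins for the db environment and a per-thread cached read txn.
    struct environment { /* ... */ };

    struct cached_read_txn
    {
      const environment *created_env = nullptr;

      void ensure_usable(const environment *current_env)
      {
        // This is the "renew if the environment changed" check the text refers
        // to. It usually works, but if the environment was closed and reopened
        // and the new object landed at the same address, the pointers compare
        // equal, the stale txn is kept, and reads return stale data.
        if (created_env != current_env)
        {
          // renew the txn against current_env (details omitted)
          created_env = current_env;
        }
      }
    };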
If the peer (whether or not it is itself pruned) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and theur prunable data hash. Those
hashes and the block weights are hashes and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights cannot otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
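A sketch of the reconstruction step, assuming (as with v2 Monero transactions) that the tx hash is the hash of three component hashes: the tx prefix hash, the ring CT base hash, and the prunable data hash. The first two can be computed from the pruned tx the peer sent; the third is the transmitted prunable data hash. The names below are illustrative, not the daemon's actual helpers.

    #include <cstddef>

    struct hash32 { unsigned char data[32]; };

    // Assumed hash primitive (stands in for the Keccak-based cn_fast_hash).
    hash32 hash_bytes(const void *data, std::size_t len);

    // Recompute the original tx hash from what a pruned peer sends:
    // the pruned tx (from which the first two hashes are derived locally)
    // plus the hash of the pruned-away (prunable) data.
    hash32 reconstruct_tx_hash(const hash32 &prefix_hash,
                               const hash32 &rct_base_hash,
                               const hash32 &prunable_data_hash)
    {
      const hash32 parts[3] = { prefix_hash, rct_base_hash, prunable_data_hash };
      return hash_bytes(parts, sizeof(parts));
    }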
2cd4fd8 Changed the use of boost::value_initialized for C++ list initializer (JesusRami)
4ad191f Removed unused boost/value_init header (whyamiroot)
928f4be Make null hash constants constexpr (whyamiroot)
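A minimal before/after illustration of the boost::value_initialized to list-initialization change (generic type, not a call site from the patch):

    #include <boost/value_init.hpp>
    #include <cstdint>

    struct point { std::uint32_t x; std::uint32_t y; };

    // Before: boost::value_initialized wraps the object to guarantee
    // zero-initialization, and converts implicitly to the wrapped type.
    boost::value_initialized<point> old_style;
    point from_old = old_style;

    // After: C++11 list initialization value-initializes the members directly.
    point new_style{};   // x == 0, y == 0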
Ending the db txn in add_block caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read only db txn in init when the db itself is
read only, and do not save the max tx size in that case.
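A minimal sketch of such an exception-safety guard (made-up name and interface; the class actually added to the codebase may look different): the destructor stops the txn unless told otherwise, so an exception thrown while the txn is active cannot leave it running.

    #include <functional>
    #include <utility>

    // RAII guard: stops a db txn when the scope is left, including when the
    // scope is left via an exception. `stop` stands in for whatever stopping
    // the active txn means in the surrounding db layer.
    class txn_stop_guard
    {
    public:
      explicit txn_stop_guard(std::function<void()> stop): stop_(std::move(stop)) {}
      ~txn_stop_guard() { if (!dismissed_) stop_(); }

      // Call if the txn was already stopped or committed explicitly.
      void dismiss() { dismissed_ = true; }

      txn_stop_guard(const txn_stop_guard &) = delete;
      txn_stop_guard &operator=(const txn_stop_guard &) = delete;

    private:
      std::function<void()> stop_;
      bool dismissed_ = false;
    };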
- test fixes for HF10, builder change, rct_config; fix_chain
- get_tx_key test
- proper testing after live refresh added
- live refresh synthetic test
- log available funds for easier test construction
- wallet::API tests with mocked daemon
This curbs runaway growth while still allowing substantial
spikes in block weight.
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
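A compact sketch of the calculations specified above (the 300000 byte floor, the 1.4x and 50x caps, and the 100000/100 block windows come from the spec; helper names and the exact rounding of the 1.4x factor are illustrative, and rounding is consensus-relevant as noted):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Median over a window of weights (helper for the sketch).
    static std::uint64_t median(std::vector<std::uint64_t> v)
    {
      if (v.empty()) return 0;
      std::sort(v.begin(), v.end());
      const std::size_t n = v.size();
      return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2;
    }

    // LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
    std::uint64_t long_term_block_weight(std::uint64_t block_weight,
                                         std::uint64_t long_term_effective_median)
    {
      const std::uint64_t cap = long_term_effective_median * 14 / 10; // 1.4x in integer arithmetic
      return std::min(block_weight, cap);
    }

    // LongTermEffectiveMedianBlockWeight =
    //   max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
    std::uint64_t long_term_effective_median_weight(
        const std::vector<std::uint64_t> &previous_100000_long_term_weights)
    {
      return std::max<std::uint64_t>(300000, median(previous_100000_long_term_weights));
    }

    // EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)),
    //                                  50 * LongTermEffectiveMedianBlockWeight)
    std::uint64_t effective_median_block_weight(
        const std::vector<std::uint64_t> &previous_100_block_weights,
        std::uint64_t long_term_effective_median)
    {
      const std::uint64_t short_term =
          std::max<std::uint64_t>(300000, median(previous_100_block_weights));
      return std::min(short_term, 50 * long_term_effective_median);
    }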