This PR removes the version-check command and its associated logic as
discussed in issue #31222.
Removed versionCheckCommand from misccmd.go and main.go.
Deleted version_check.go and its corresponding tests.
Cleaned up testdata/vcheck directory (~800 lines of JSON/signatures
removed).
Verified the build with `make geth`.
HeadSync kept reqFinalityEpoch entries for servers after receiving
EvUnregistered, while other per-server maps were cleared. This left
stale request.Server keys reachable from HeadSync, which can lead to a
slow memory leak in setups that dynamically register and unregister
servers.
The fix adds deletion of the reqFinalityEpoch entry in the
EvUnregistered handler. This aligns HeadSync with the cleanup pattern
used by other sync modules and keeps the finality request bookkeeping
strictly limited to currently registered servers.
This pull request optimizes history indexing by splitting a single large
database
batch into multiple smaller chunks.
Originally, the indexer would resolve a batch of state histories and commit
all corresponding index entries atomically, together with the indexing
marker.
While indexing more state histories in a single batch improves
efficiency, excessively
large batches can cause significant memory issues.
To mitigate this, the pull request splits the mega-batch into several
smaller batches
and flushes them independently during indexing. However, this introduces a
potential inconsistency: some index entries may be flushed while the indexing
marker is not, so an unclean shutdown may leave the database in a partially
updated state and corrupt the index data.
To address this, head truncation is introduced. After a restart, any excess
index entries beyond the indexing marker are removed, ensuring the index
remains consistent after an unclean shutdown.
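A minimal standalone sketch of the head-truncation idea, assuming a hypothetical store and marker layout (not the real indexer API):

```go
package main

import "fmt"

// Hypothetical index store: entries keyed by history id, plus the persisted
// indexing marker (the last history id whose entries were fully committed).
type indexStore struct {
	entries map[uint64]string
	marker  uint64
}

// truncateHead removes any entries that were flushed beyond the persisted
// marker, restoring a consistent state after an unclean shutdown.
func (s *indexStore) truncateHead() {
	for id := range s.entries {
		if id > s.marker {
			delete(s.entries, id)
		}
	}
}

func main() {
	s := &indexStore{
		entries: map[uint64]string{1: "a", 2: "b", 3: "c"},
		marker:  2, // marker flushed before the crash
	}
	s.truncateHead()
	fmt.Println(len(s.entries)) // 2: entry 3 was excess and is removed
}
```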
This is a new step in my crusade against the braindead fad of starting
PR titles with a word that is completely redundant with github labels,
thus wasting prime first-line real-estate for something that isn't
necessary.
I noticed that every single one of these PRs is low-quality AI slop, so
I think there is a strong case to be made for these PRs to be
auto-closed. A message is added before closing the PR, redirecting to
our contribution guidelines, so I expect quality first-time contributors
to read them and reopen the PR. In the case of spam PRs, the author is
unlikely to revisit a given PR, and so auto-closing might have a
positive impact. That's an experiment worth trying, imo.
In order to reduce the amount of code that is embedded into the keeper
binary, I am removing all the verkle code that uses go-verkle and
go-ipa. This will be followed by further PRs, mostly stubs that replace code
when the keeper build is detected.
I'm keeping the binary tree of course. This means that you will still
see `isVerkle` variables all over the codebase, but they will be renamed
when code is touched (i.e. this is not an invitation for 30+ AI slop
PRs).
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
In this PR, two things have been fixed:
---
(a) truncate the stale beacon headers with latest snap block
Originally, `b.filled` was used as the indicator for deleting stale beacon headers.
This field is set only after synchronization has been scheduled, under the
assumption that the skeleton chain is already linked to the local chain.
However, the local chain can be mutated via `debug_setHead`, which may leave
`b.filled` outdated. For instance, `b.filled` may still refer to the head snap
block from the last sync cycle, while after `debug_setHead` the head snap block
has been rewound to 1.
As a result, Geth can enter an unintended loop: it repeatedly downloads
the missing beacon headers for the skeleton chain and attempts to schedule the
actual synchronization, but in the final step, all recently fetched headers are removed
by `cleanStales` due to the stale `b.filled` value.
This issue is addressed by always using the latest snap block as the indicator,
without relying on any cached value. However, note that before the skeleton
chain is linked to the local chain, the latest snap block will always be below
skeleton.tail, and this condition should not be treated as an error.
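A rough conceptual sketch of the part (a) change, with entirely hypothetical types and names standing in for the real skeleton/backfiller code:

```go
package main

import "fmt"

// Hypothetical chain handle exposing the live snap head.
type chain struct{ snapHead uint64 }

func (c *chain) currentSnapBlock() uint64 { return c.snapHead }

type skeletonSketch struct {
	chain *chain
	tail  uint64 // first header retained in the skeleton chain
}

func (s *skeletonSketch) cleanStales(head uint64) error {
	fmt.Println("cleaning stale headers at or below", head)
	return nil
}

// truncateStale uses the live snap head as the indicator, not a cached value
// like b.filled, and treats "head below the skeleton tail" as a no-op rather
// than an error (the skeleton is simply not linked to the local chain yet).
func (s *skeletonSketch) truncateStale() error {
	head := s.chain.currentSnapBlock()
	if head < s.tail {
		return nil
	}
	return s.cleanStales(head)
}

func main() {
	s := &skeletonSketch{chain: &chain{snapHead: 1}, tail: 100}
	_ = s.truncateStale() // no-op: head rewound below the skeleton tail
}
```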
---
(b) merge the subchains once the skeleton chain links to local chain
Once the skeleton chain links with the local one, it tries to schedule the
synchronization by fetching the missing blocks and then importing them.
It's possible that the last subchain already overwrites the previous subchain,
leaving two subchains behind. As a result, an error log will be printed:
https://github.com/ethereum/go-ethereum/blob/master/eth/downloader/skeleton.go#L1074
Blobs are stored per transaction in the pool, so we need billy to handle
up to the per-tx limit, not to the per-block limit.
The per-block limit was larger than the per-tx limit, so it was not a bug;
we just created and handled a few billy files for no reason.
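For context, a hedged sketch of the sizing argument; the constants below are illustrative, not the real pool configuration:

```go
package main

import "fmt"

// Illustrative constants; the real blobpool derives these from the chain config.
const (
	blobSize         = 131072 // bytes per blob (4096 field elements * 32 bytes)
	maxBlobsPerTx    = 6      // per-transaction limit (illustrative)
	maxBlobsPerBlock = 9      // per-block limit (illustrative); irrelevant for sizing
)

func main() {
	// The largest object the pool ever stores is a single transaction, so the
	// datastore only needs shelves up to the per-tx limit, not the per-block one.
	maxItemSize := maxBlobsPerTx * blobSize
	fmt.Println("largest billy item needed:", maxItemSize, "bytes")
	_ = maxBlobsPerBlock // sizing by this value would just create unused shelves
}
```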
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR removes the legacy sidecar conversion logic.
After the Osaka fork, the blobpool will accept only blob sidecar version 1.
Any remaining version 0 blob transactions, if they still exist, will no longer
be eligible for inclusion.
Note that conversion at the RPC layer is still supported, and version 0 blob
transactions will be automatically converted to version 1 there.
When iterating over a map with value types in Go, the loop variable is a
copy. In `markCodeExistence`, assigning to `code.exists` modified only
the local copy, not the actual map entry, causing the existence flag to
always remain false.
This resulted in overcounting contract codes in state size statistics,
as codes that already existed in the database were incorrectly counted
as new.
Fix by changing `codes` from `map[common.Address]contractCode` to
`map[common.Address]*contractCode`, so mutations apply directly to the
struct.
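A minimal standalone illustration of the bug and the fix (field names simplified from the actual code):

```go
package main

import "fmt"

type contractCode struct {
	size   int
	exists bool
}

func main() {
	// Buggy pattern: ranging over a map of struct values yields copies, so
	// mutating the loop variable never touches the map entry.
	byValue := map[string]contractCode{"a": {size: 10}}
	for _, code := range byValue {
		code.exists = true // modifies the copy only
	}
	fmt.Println(byValue["a"].exists) // false

	// Fixed pattern: store pointers so mutations reach the shared struct.
	byPtr := map[string]*contractCode{"a": {size: 10}}
	for _, code := range byPtr {
		code.exists = true // modifies the actual entry
	}
	fmt.Println(byPtr["a"].exists) // true
}
```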
The fuzz test file has been broken for a while - it doesn't compile with
the `gofuzz` build tag.
Two issues:
- Line 59: called `SignifySignFile` which doesn't exist (should be
`SignFile`)
- Line 71: used `:=` instead of `=` for already declared `err` variable
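A tiny standalone reminder of the second issue, unrelated to the actual signify code:

```go
package main

import (
	"errors"
	"fmt"
)

func doWork() error { return errors.New("boom") }

func main() {
	var err error
	// err := doWork() // would not compile: no new variables on left side of :=
	err = doWork() // plain assignment to the already declared variable is the fix
	fmt.Println(err)
}
```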
This PR fixes the bug reported in #33365.
The impact of the bug is not catastrophic. After a transaction is
ultimately fetched, validation and propagation will be performed based
on the fetched body, and any response with a mismatched type is treated
as a protocol violation. An attacker could at most waste a limited portion of
the victim's bandwidth.
However, the reasons for submitting this PR are as follows:
1. Fetching a transaction announced with an arbitrary type is a weird
behavior.
2. It aligns with efforts such as EIP-8077 and #33119 to make the
fetcher smarter and reduce bandwidth waste.
Regarding the `FilterType` function, it could potentially be implemented
by modifying the Filter function's parameter itself, but I wasn’t sure
whether changing that function is acceptable, so I left it as is.
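For illustration, a hedged sketch of filtering announcements by transaction type before fetching; the real fetcher data structures and the `FilterType` helper differ:

```go
package main

import "fmt"

// Illustrative announcement: hash, announced tx type, and size.
type announcement struct {
	hash string
	kind byte
	size uint32
}

// knownTxType reports whether the announced type is one the node would ever
// accept into its pools (the value range here is illustrative).
func knownTxType(kind byte) bool {
	return kind <= 0x04 // legacy, access-list, dynamic-fee, blob, set-code
}

// filterAnnouncements drops announcements with arbitrary/unknown types so the
// fetcher never wastes bandwidth requesting them.
func filterAnnouncements(anns []announcement) []announcement {
	kept := anns[:0]
	for _, a := range anns {
		if knownTxType(a.kind) {
			kept = append(kept, a)
		}
	}
	return kept
}

func main() {
	anns := []announcement{{"0xaa", 0x02, 120}, {"0xbb", 0x7f, 99}}
	fmt.Println(len(filterAnnouncements(anns))) // 1: the arbitrary-type announcement is skipped
}
```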
The simulator computed active precompiles from the base header, which is
incorrect when simulations cross fork boundaries. This change selects
precompiles using the current simulated header so the precompile set
matches the block’s number/time. It brings simulate in line with doCall,
tracing, and mining, and keeps precompile state overrides applied on the
correct epoch set.
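A hedged sketch of the idea using public helpers that do exist in core/vm and params; the surrounding simulation plumbing is simplified and the number/time below are illustrative:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/params"
)

func main() {
	cfg := params.MainnetChainConfig

	// The precompile set must come from the header actually being simulated,
	// so forks crossed during the simulation activate the right precompiles.
	simNumber := big.NewInt(20_000_000)
	simTime := uint64(1_720_000_000)

	rules := cfg.Rules(simNumber, true, simTime)
	active := vm.ActivePrecompiles(rules)
	fmt.Println(len(active), "precompiles active at this number/time")
}
```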
## Description
This PR fixes incorrect contract code state metrics by ensuring
duplicate codes are not counted towards the reported results.
## Rationale
The contract code metrics don't consider database deduplication. The
current implementation assumes that the results are only **slightly
inaccurate**, but this is not true, especially for data collection
efforts that started from the genesis block.
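A minimal sketch of the deduplication idea (not the actual stats collector): count each code hash once.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Identical bytecode shares one database entry, so only the first
	// occurrence of a code hash should contribute to the size metric.
	codes := [][]byte{[]byte("contract-A"), []byte("contract-A"), []byte("contract-B")}

	seen := make(map[common.Hash]struct{})
	var totalSize int
	for _, code := range codes {
		h := crypto.Keccak256Hash(code)
		if _, ok := seen[h]; ok {
			continue // duplicate: stored once, so don't count it again
		}
		seen[h] = struct{}{}
		totalSize += len(code)
	}
	fmt.Println("deduplicated code size:", totalSize) // counts A once and B once
}
```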
Fixes an issue where HashFolder skipped the root directory upon hitting
the first file in the excludes list. This happened because the walk function
returned SkipDir even for regular files.
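A minimal reproduction of the pattern behind the fix (not the actual HashFolder code): return filepath.SkipDir only for directories.

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// Returning filepath.SkipDir from the walk callback for a regular file makes
// the walk skip the file's entire parent directory, so the rest of the root
// is silently dropped. For plain files, just return nil to skip that entry.
func walkExcluding(root string, excluded map[string]bool) error {
	return filepath.Walk(root, func(path string, info fs.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if excluded[info.Name()] {
			if info.IsDir() {
				return filepath.SkipDir // only meaningful for directories
			}
			return nil // skip this single file only
		}
		fmt.Println("hashing", path)
		return nil
	})
}

func main() {
	_ = walkExcluding(".", map[string]bool{"testdata": true})
}
```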
This moves the tracking of the current syncmode into the downloader, fixing an
issue where the syncmode being requested through the engine API could go
out-of-sync with the actual mode being performed by downloader.
Fixes #32629
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The EIP says to increment PC by 2 _instead of_ the standard increment by
1. The opcode handlers added in #33095 result in incrementing PC by 3,
because they ignored the increment already present in `interpreter.go`.
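A conceptual sketch of the increment accounting, with a made-up opcode handler rather than the real core/vm code:

```go
package main

import "fmt"

// The interpreter's run loop already advances pc by 1 after every opcode, so
// a handler for an instruction with a one-byte immediate should add just 1
// more itself, for a total advance of 2 as the EIP requires. Adding 2 in the
// handler would make the total advance 3.
func opWithImmediate(pc *uint64, code []byte) byte {
	imm := code[*pc+1] // read the 1-byte immediate
	*pc += 1           // handler's share of the increment
	return imm
}

func main() {
	code := []byte{0xAB, 0x07, 0x00} // opcode, immediate, next opcode
	pc := uint64(0)
	imm := opWithImmediate(&pc, code)
	pc++ // the interpreter loop's standard increment
	fmt.Println(imm, pc) // 7 2: pc now points at the next opcode
}
```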
Does this need to be better specified in the EIP? I've added a [new test
case](https://github.com/ethereum/EIPs/pull/10859) for it anyway.
Found by @0xriptide.
XORBytes was added to package crypto/subtle in Go 1.20, and it's faster
than our bitutil.XORBytes. There is only one use of this function
across go-ethereum so we can simply deprecate the custom implementation.
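For reference, a minimal usage example of the standard-library replacement (not the actual call site):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	// Since Go 1.20, crypto/subtle provides the same primitive as
	// bitutil.XORBytes: dst[i] = x[i] ^ y[i], returning the bytes written.
	x := []byte{0x0f, 0xf0}
	y := []byte{0xff, 0xff}
	dst := make([]byte, 2)
	n := subtle.XORBytes(dst, x, y)
	fmt.Printf("%d %x\n", n, dst) // 2 f00f
}
```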
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The random-port retry loop in addAnyPortMapping shadowed the err
variable, causing the function to return (0, nil) when all attempts
failed. This change removes the shadowing and preserves the last error
across both the fixed-port and random-port retries, ensuring failures
are reported to callers correctly.
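A simplified standalone illustration of the shadowing bug and its fix; the real addAnyPortMapping signature and retry logic differ:

```go
package main

import (
	"errors"
	"fmt"
)

func tryMap(port int) (uint16, error) { return 0, errors.New("mapping failed") }

func mapAnyPort() (uint16, error) {
	var (
		mapped uint16
		err    error
	)
	for attempt := 0; attempt < 3; attempt++ {
		// Buggy form: `mapped, err := tryMap(...)` declares a new err scoped
		// to the loop, so the outer err stays nil and the function ends up
		// returning (0, nil) even though every attempt failed.
		mapped, err = tryMap(1024 + attempt) // fixed: assign to the outer err
		if err == nil {
			return mapped, nil
		}
	}
	return 0, err // the last failure is preserved and reported
}

func main() {
	_, err := mapAnyPort()
	fmt.Println(err) // mapping failed
}
```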
This PR changes the Pebble configuration as below (an illustrative sketch of
the relevant options follows):
- increase MemTableStopWritesThreshold to absorb temporary write spikes
- decrease L0CompactionConcurrency and CompactionDebtConcurrency so that
compaction scales up more readily
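A hedged sketch of the options involved; the values are illustrative, not the numbers chosen in this PR, and the Experimental field paths depend on the vendored Pebble version:

```go
package main

import "github.com/cockroachdb/pebble"

func main() {
	opt := &pebble.Options{
		// Allow more immutable memtables to queue up before stalling writes,
		// absorbing temporary write spikes. Value is illustrative.
		MemTableStopWritesThreshold: 4,
	}
	// Lower thresholds make Pebble ramp up compaction concurrency sooner.
	opt.Experimental.L0CompactionConcurrency = 2
	opt.Experimental.CompactionDebtConcurrency = 512 << 20 // 512 MiB, illustrative
	_ = opt
}
```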
The original condition `deleted && !logPrinted || time.Since(...)` was
incorrectly grouped due to operator precedence, causing logs to print
every 10 seconds even when no deletion was happening (deleted=false).
According to SafeDeleteRange documentation, the 'deleted' parameter is
"true if entries have actually been deleted already". The logging should
only happen when deletion is active.
Fixed by adding parentheses: `deleted && (!logPrinted ||
time.Since(...))`. Now logs print only when items are being deleted AND
either it's the first log or 10+ seconds have passed since the last one.
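A standalone illustration of the precedence issue, with `timedOut` standing in for the `time.Since(...)` check:

```go
package main

import "fmt"

func main() {
	deleted, logPrinted, timedOut := false, false, true

	// && binds tighter than ||, so the original condition parses as
	// (deleted && !logPrinted) || timedOut and fires even when nothing
	// is being deleted.
	buggy := deleted && !logPrinted || timedOut
	fixed := deleted && (!logPrinted || timedOut)

	fmt.Println(buggy, fixed) // true false
}
```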
This improves the error code for cases where invalid query parameters
are submitted to `eth_getLogs`. I also improved the error message that
is emitted when querying into the future.
This is to benchmark how much the internal parts of GetBlobsV2 take.
This is not an RPC-level benchmark, so JSON-RPC overhead is not
included.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR exposes the state size statistics as metrics, making them easier to
observe.
Note that the contract code included in the metrics is not de-duplicated, so
the reported size will appear larger than the actual storage footprint.
This introduces two main changes to Pebble's configuration:
(a) Remove the Bloom filter at Level 6
The Bloom filter is never used at the bottom-most level, so keeping it
serves no purpose. Removing it saves storage without affecting read
performance.
(b) Re-enable read-sampling compaction
Read-sampling compaction was previously disabled in the hash-based scheme:
since all data was identified by hashes and essentially never overwritten,
read-sampling compaction made no sense.
After switching to the path-based scheme, data overwrites are much more
common, making read-sampling compaction beneficial and reasonable to re-enable.
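A hedged sketch of both changes; field paths and values are illustrative and depend on the vendored Pebble version:

```go
package main

import (
	"github.com/cockroachdb/pebble"
	"github.com/cockroachdb/pebble/bloom"
)

func main() {
	// (a) Bloom filters for levels 0-5 only: per the rationale above, the
	// filter at the bottom-most level is never used, so building one there
	// just wastes storage.
	levels := make([]pebble.LevelOptions, 7)
	for i := range levels {
		if i < len(levels)-1 {
			levels[i].FilterPolicy = bloom.FilterPolicy(10)
		}
	}
	opt := &pebble.Options{Levels: levels}

	// (b) Re-enable read-sampling compaction: a negative multiplier disables
	// it entirely; a non-negative value lets hot, overwritten ranges be
	// compacted based on read sampling.
	opt.Experimental.ReadSamplingMultiplier = 0

	_ = opt
}
```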
This PR introduces a new debug feature: logging slow blocks with detailed
performance statistics, such as state reads, EVM execution and so on.
Notably, these detailed statistics won't be logged during sync, to avoid
overwhelming users. Specifically, they are only logged when a single block is
processed at a time.
Example output
```
########## SLOW BLOCK #########
Block: 23537063 (0xa7f878611c2dd27f245fc41107d12ebcf06b4e289f1d6acf44d49a169554ee09) txs: 248, mgasps: 202.99
EVM execution: 63.295ms
Validation: 1.130ms
Account read: 6.634ms(648)
Storage read: 17.391ms(1434)
State hash: 6.722ms
DB commit: 3.260ms
Block write: 1.954ms
Total: 99.094ms
State read cache: account (hit: 622, miss: 26), storage (hit: 1325, miss: 109)
##############################
```
We still default to legacy transactions for methods like eth_sendTransaction
and eth_signTransaction. We can default to type 0x2, and anyone who would like
to stay on legacy can do so by setting the `gasPrice` field.
cc @deffrian
Recently in #31630 we removed support for overriding the network id in
preset networks. While this feature is niche, it is useful for shadow
forks. This PR proposes we add the functionality back, but in a simpler
way.
Instead of checking whether the flag is set in each branch of the
network switch statement, simply apply the network flag after the switch
statement is complete. This retains the following behavior:
1. Auto network id based on chain id still works, because `IsSet` only
returns true if the flag is _actually_ set, not if it merely has a default
value.
2. The preset networks will set their network id directly, and it is only
overridden if the network id flag is set. This, combined with the override
genesis flag, is what allows the shadow forks.
3. Setting the network id to the same network id that the preset _would
have_ set causes no issues and simply emits the `WARN` that the flag is
being set explicitly. I don't think people explicitly set the network id
flag often.
```
WARN [10-22|09:36:15.052] Setting network id with flag id=10
```
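A hedged sketch of the resulting control flow (simplified, not the actual cmd/utils code):

```go
package main

import "fmt"

type config struct{ NetworkId uint64 }

// setNetwork sketches the approach: presets always set their own id in the
// switch, and the flag override is applied once afterwards, only when the
// flag was explicitly set by the user.
func setNetwork(preset string, flagSet bool, flagValue uint64) config {
	var cfg config
	switch preset {
	case "mainnet":
		cfg.NetworkId = 1
	case "sepolia":
		cfg.NetworkId = 11155111
	}
	if flagSet { // the single override point, after the switch
		fmt.Println("WARN Setting network id with flag id=", flagValue)
		cfg.NetworkId = flagValue
	}
	return cfg
}

func main() {
	fmt.Println(setNetwork("mainnet", true, 10).NetworkId) // 10: shadow-fork override
	fmt.Println(setNetwork("mainnet", false, 0).NetworkId) // 1: preset untouched
}
```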
---------
Co-authored-by: Felix Lange <fjl@twurst.com>