This is an alternative PR for
https://github.com/ethereum/go-ethereum/pull/34746.
This PR implements the second of the two possible solutions mentioned
there.
Requests for unavailable items are possible when a peer is following a
different fork from us, but this is not expected to happen frequently.
Given the amount of complexity the alternative would add to the codebase,
the simpler approach (this PR) is preferable.
The reconstruct callback indexes parallel response slices (bodies,
receipts). Passing the accept counter as that index picked the wrong
element whenever an earlier header in the same batch had hit a stale slot.
The mux tracer fanned out every standard hook to its children but never
forwarded OnSystemCall{Start,End}. Tracers that rely on these - like
`logger.jsonLogger`, which uses the start hook to silence its opcode
hook for the duration of a system call - never got the signal when
wrapped behind a mux.
In evm t8n, combining `--trace` with `--opcode-count` (default for geth
with exec specs) produces exactly that wrapping. The first system call
(e.g. `ProcessBeaconBlockRoot`) then fires `OnOpcode` on the json logger
before any `OnTxStart` has run, dereferencing a nil env and crashing
t8n.
Forward both hooks through the mux. The V2 fan-out falls back to V1 for
children that only implement the legacy hook, mirroring the precedence
already used in `core/state_processor.go`.
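A minimal sketch of the intended fan-out, using simplified stand-in types
rather than the real `core/tracing` hook definitions (names here are
illustrative):
```go
package tracemux

// vmContext is a stand-in for the real VM context passed to the V2 hook.
type vmContext struct{}

// hooks is a simplified stand-in for a child tracer's hook set.
type hooks struct {
	onSystemCallStart   func()           // legacy hook, no context
	onSystemCallStartV2 func(*vmContext) // newer hook carrying the VM context
	onSystemCallEnd     func()
}

// mux fans every hook out to its children.
type mux struct {
	children []*hooks
}

// onSystemCallStartV2 prefers a child's V2 hook and falls back to the legacy
// hook when that is all the child implements, mirroring the precedence in
// core/state_processor.go.
func (m *mux) onSystemCallStartV2(ctx *vmContext) {
	for _, c := range m.children {
		switch {
		case c.onSystemCallStartV2 != nil:
			c.onSystemCallStartV2(ctx)
		case c.onSystemCallStart != nil:
			c.onSystemCallStart()
		}
	}
}

// onSystemCallEnd is forwarded to every child that handles it.
func (m *mux) onSystemCallEnd() {
	for _, c := range m.children {
		if c.onSystemCallEnd != nil {
			c.onSystemCallEnd()
		}
	}
}
```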
This PR addresses one of the biggest performance issues with binary
tries: storing each internal node individually bloats the index and the
disk, and causes a lot of write amplification. To fix this issue,
this PR serializes groups of nodes together.
Because we are still looking for the ideal group size, the "depth" of
the group tree is made a parameter, but that will be removed in the
future, once the perfect size is known.
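Purely as an illustration of the grouping idea (not the actual key scheme
or serialization format used here), nodes whose paths share a prefix up to
the configured group depth could map to a single storage key:
```go
package btgroup

// groupKey truncates a node path to the nearest group boundary, so every node
// in the same depth-sized subtree maps to one storage key and the subtree can
// be serialized as a single blob.
func groupKey(path []byte, groupDepth int) []byte {
	if groupDepth <= 0 {
		return path
	}
	cut := (len(path) / groupDepth) * groupDepth
	return path[:cut]
}
```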
This is a rebase of #33658
---------
Co-authored-by: Copilot <copilot@github.com>
- Fixes an error shadowing issue in the deliver() function, where a
stale result from GetDeliverySlot caused the original failure to be
overwritten by errStaleDelivery.
- Adds errInvalidBody and errInvalidReceipt to the downloader error
checks to properly drop peers who sent invalid responses.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
EIP-7825 caps the transaction gas limit at `MaxTxGas`, but after
Amsterdam/EIP-8037 the transaction gas limit can include a state gas
reservoir in addition to the regular gas dimension. Applying the Osaka
cap to the full `tx.Gas()` rejects otherwise valid Amsterdam
transactions that need more than `MaxTxGas` total gas because of state
gas, while their regular gas use remains within the intended limit.
This changes geth to stop applying the full transaction gas cap once
Amsterdam is active:
- txpool stateless validation no longer rejects `tx.Gas() > MaxTxGas`
under Amsterdam
- legacy pool reorg cleanup does not purge high-total-gas transactions
at the Osaka transition if Amsterdam is also active
- execution precheck mirrors the txpool behavior and does not reject
high-total-gas messages under Amsterdam
The block gas limit check remains in place, so transactions still cannot
request more total gas than the current block gas limit.
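A rough sketch of the adjusted precheck; `maxTxGas` and the fork flags
below are illustrative stand-ins, not the actual geth identifiers:
```go
package gascap

import "errors"

// EIP-7825 cap (2^24); the value is shown for illustration only.
const maxTxGas = 16_777_216

var errGasLimitTooHigh = errors.New("transaction gas limit exceeds EIP-7825 cap")

// checkTxGasCap applies the Osaka cap only while Amsterdam is inactive.
// Once Amsterdam/EIP-8037 is live, tx.Gas() may include the state gas
// reservoir, so the full-total cap is no longer enforced here.
func checkTxGasCap(txGas uint64, isOsaka, isAmsterdam bool) error {
	if isOsaka && !isAmsterdam && txGas > maxTxGas {
		return errGasLimitTooHigh
	}
	return nil
}
```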
Validation run:
```
go test ./core/txpool ./core/txpool/legacypool
go test ./core -run TestStateProcessorErrors
```
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Here, we change the EVM stack implementation to use an 'arena', i.e.
a shared allocation pool for sub-call stacks. The stack is now more
GC-friendly, since it is a slice of uint256 values instead of a slice of pointers.
Code that pushes an item to the stack has been changed to get() the top
item, then overwrite it.
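A minimal sketch of that push pattern (the real stack type differs; this
only illustrates the get-then-overwrite idea and the pointer-free backing
slice):
```go
package stackarena

import "github.com/holiman/uint256"

type stack struct {
	data []uint256.Int // values, not pointers, so the GC scans no pointers here
}

// get grows the stack by one and returns a pointer to the new top slot.
func (s *stack) get() *uint256.Int {
	s.data = append(s.data, uint256.Int{})
	return &s.data[len(s.data)-1]
}

// Pushing a value now means fetching the top slot and overwriting it in place,
// avoiding a per-item allocation.
func (s *stack) pushUint64(v uint64) {
	s.get().SetUint64(v)
}
```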
The PR is a rewrite/rebase of #30362.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This PR updates the BAL structure definition to the latest spec:
- Balance has been changed from [16]byte to uint256
- Storage keys and values have been changed from [32]byte to uint256
- BlockAccessList has been changed from a struct to a slice of
AccountChanges
- TxIndex has been changed from uint16 to uint32
The testPeer request counters (nAccountRequests, nStorageRequests,
nBytecodeRequests, nTrienodeRequests) were plain int fields incremented
with ++. These increments happen in Request* methods that are invoked
concurrently by the Syncer from multiple goroutines
(assignBytecodeTasks, assignStorageTasks, etc.), causing a data race
reliably detected by go test -race.
Change the counters to atomic.Int64 so increments and reads are
synchronized without introducing a mutex.
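Roughly, the change amounts to the following (simplified, showing one
counter):
```go
package snapcounters

import "sync/atomic"

type testPeer struct {
	nAccountRequests atomic.Int64 // previously a plain int incremented with ++
}

// RequestAccountRange stands in for the Request* methods invoked concurrently
// by the syncer; the atomic increment is safe without a mutex.
func (p *testPeer) RequestAccountRange() {
	p.nAccountRequests.Add(1)
}

// accountRequests reads the counter safely from the test goroutine.
func (p *testPeer) accountRequests() int64 {
	return p.nAccountRequests.Load()
}
```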
Fixes races detected in TestMultiSyncManyUseless,
TestMultiSyncManyUselessWithLowTimeout,
TestMultiSyncManyUnresponsive, TestSyncWithStorageAndOneCappedPeer,
TestSyncWithStorageAndCorruptPeer, and
TestSyncWithStorageAndNonProvingPeer.
scheduleFetches.func1 is the single biggest allocator in the Pyroscope
profile of a busy node (~13.5 GB/hr, 8% of total alloc_space). Each
peer-iteration pre-allocated 'make([]common.Hash, 0, maxTxRetrievals)'
= 8 KB, even for peers that end up collecting no new hashes (all their
announces were already being fetched by someone else).
Defer the slice allocation to the first append. Peers that collect zero
hashes now pay zero allocation, which is the common case on the
timeoutTrigger path where all peers with any announces are iterated.
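A hedged before/after sketch of the allocation change, with simplified
stand-ins for the fetcher's real types:
```go
package fetchalloc

import "github.com/ethereum/go-ethereum/common"

// Illustrative value; 256 hashes of 32 bytes is the 8 KB mentioned above.
const maxTxRetrievals = 256

// collectEager shows the old behaviour: every peer iteration pays for the full
// backing array up front, even if nothing ends up being collected.
func collectEager(announced []common.Hash, alreadyFetching func(common.Hash) bool) []common.Hash {
	hashes := make([]common.Hash, 0, maxTxRetrievals)
	for _, h := range announced {
		if !alreadyFetching(h) {
			hashes = append(hashes, h)
		}
	}
	return hashes
}

// collectLazy defers allocation to the first append: peers that collect zero
// hashes allocate nothing.
func collectLazy(announced []common.Hash, alreadyFetching func(common.Hash) bool) []common.Hash {
	var hashes []common.Hash
	for _, h := range announced {
		if !alreadyFetching(h) {
			hashes = append(hashes, h)
		}
	}
	return hashes
}
```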
New benchmarks BenchmarkScheduleFetches_{100peers_10new,
100peers_allFetching, 500peers_3new} (benchstat, 6 samples):
```
scenario           ns/op      B/op       allocs/op
100p/10new         unchanged  unchanged  unchanged   (fast path)
100p/allFetching   -62%       -92%       -20%
500p/3new          -22%       -44%       -7%
geomean            -33%       -65%       -9%
```
This PR introduces a gasBudget struct to track the available gas for EVM
execution.
With the upcoming EIP-8037, multi-dimensional gas accounting will be
introduced, requiring multiple gas budget counters to be tracked
simultaneously. To support this, the counters are grouped into a gasBudget
structure.
This change is a prerequisite for internal refactoring in preparation
for EIP-8037.
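A minimal sketch of the idea, with illustrative field names rather than
the final ones:
```go
package gasbudget

// gasBudget groups the per-dimension counters so they can be tracked together.
type gasBudget struct {
	execGas  uint64 // regular execution gas remaining
	stateGas uint64 // state gas reservoir (EIP-8037) remaining
}

// consumeExec deducts from the execution dimension and reports whether the
// budget covered the cost.
func (b *gasBudget) consumeExec(amount uint64) bool {
	if b.execGas < amount {
		return false
	}
	b.execGas -= amount
	return true
}
```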
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
This PR implements some prerequisite changes for #34004: splitting the
`CachingDB` into a `MerkleDB` and a `UBTDB`, so that their very different
behaviors don't clash as much.
The transition isn't handled by this PR, but after talking to Gary we
agreed that `UBTDB` should receive another `triedb`, which will only be
loaded if the `Ended` flag is set to false in the conversion contract.
If this is too hard to achieve, it makes sense to load it regardless,
and then loading can be prevented at a later stage by adding a
`UBTTransitionFinalizationTime` in `ChainConfig`.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This is a copy of #34721 but against `master` (rather than
`bal-devnet-3`), as requested by @jwasinger, since the slotnum logic now
exists on `master` as well.
This PR refactors the encoding rules for `AccessListsPacket` in the wire
protocol. Specifically:
- The response is now encoded as a list of `rlp.RawValue`
- `rlp.EmptyString` is used as a placeholder for unavailable BAL objects
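A hedged sketch of how such a response could be assembled; the helper
below is a stand-in, only `rlp.RawValue` and `rlp.EmptyString` are taken
from the description:
```go
package balwire

import "github.com/ethereum/go-ethereum/rlp"

// buildResponse fills one slot per requested item, substituting the canonical
// RLP empty string for BALs the node does not have.
func buildResponse(encodedBALs [][]byte) []rlp.RawValue {
	resp := make([]rlp.RawValue, 0, len(encodedBALs))
	for _, enc := range encodedBALs {
		if len(enc) == 0 {
			resp = append(resp, rlp.RawValue(rlp.EmptyString)) // unavailable BAL
			continue
		}
		resp = append(resp, rlp.RawValue(enc))
	}
	return resp
}
```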
In this PR, the Database interface in `core/state` has been extended
with one more function:
```go
// Iteratee returns a state iteratee associated with the specified state root,
// through which the account iterator and storage iterator can be created.
Iteratee(root common.Hash) (Iteratee, error)
```
With this additional abstraction layer, the implementation details can be hidden
behind the interface. For example, state traversal can now operate directly on
the flat state for Verkle or binary trees, which do not natively support traversal.
Moreover, state dumping will now prefer using the flat state iterator as
the primary option, offering better efficiency.
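Purely illustrative: one plausible shape of the new abstraction and a dump
loop over the flat state. The method names on the iterator below are
assumptions for the sketch, not the actual geth API:
```go
package statedump

import "github.com/ethereum/go-ethereum/common"

// accountIterator is a hypothetical flat-state account iterator.
type accountIterator interface {
	Next() bool
	Hash() common.Hash
	Account() []byte // encoded account data in the flat state
	Release()
}

// iteratee is a hypothetical stand-in for the new abstraction.
type iteratee interface {
	AccountIterator(seek common.Hash) (accountIterator, error)
}

// dumpAccounts walks accounts from the flat state instead of traversing the trie.
func dumpAccounts(it iteratee) (int, error) {
	ai, err := it.AccountIterator(common.Hash{})
	if err != nil {
		return 0, err
	}
	defer ai.Release()

	n := 0
	for ai.Next() {
		n++ // in the real dump, decode ai.Account() and emit it
	}
	return n, nil
}
```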
Edit: this PR also fixes a tiny issue in the state dump, marshalling the
next field in the correct way.
This is a breaking change in the opcode (structLog) tracer. Several fields
will have a slight formatting difference to conform to the newly established
spec at: https://github.com/ethereum/execution-apis/pull/762. The differences
include:
- `memory`: words will have the 0x prefix. Also, the last word of memory will be padded to 32 bytes.
- `storage`: keys and values will have the 0x prefix.
---------
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
In this PR, we add support for protocol version eth/70, defined by EIP-7975.
Overall changes:
- Each response is buffered in the peer’s receipt buffer when the
`lastBlockIncomplete` field is true.
- A continued request uses the same request id as its original request
(`RequestPartialReceipts`).
- Partial responses are verified in `validateLastBlockReceipt`.
- Even if all receipts for the partial blocks of a request are collected,
those partial results are not delivered to the downloader, to avoid
complexity. This assumes that partial responses and buffering occur only
in exceptional cases.
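A simplified sketch of the buffering flow described above; the types are
stand-ins, not the real protocol structures:
```go
package partialreceipts

// receiptResponse is a simplified stand-in for one eth/70 receipts response.
type receiptResponse struct {
	reqID               uint64
	receipts            [][]byte // one encoded receipt list per block
	lastBlockIncomplete bool
}

// peerBuffer is a stand-in for the peer's receipt buffer mentioned above.
type peerBuffer struct {
	pending map[uint64][][]byte // request id -> receipt lists buffered so far
}

// deliver appends the chunk to the buffer kept under the original request id
// and reports whether a continuation request is still needed.
func (b *peerBuffer) deliver(res receiptResponse) (needMore bool) {
	b.pending[res.reqID] = append(b.pending[res.reqID], res.receipts...)
	return res.lastBlockIncomplete
}
```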
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR introduces a new type HistoryPolicy which captures user intent,
as opposed to the pruning point stored in the blockchain, which records
the actual tail of data in the database.
It is in preparation for the rolling history expiry feature.
It comes with a semantic change: if the database was pruned and geth is
running without a history mode flag (or an explicit keep-all flag), geth
will emit a warning and continue running, as opposed to stopping the
world.
`TestSubscribePendingTxHashes` hangs indefinitely because pending tx
events are permanently missed due to a race condition in
`NewPendingTransactions` (and `NewHeads`). Both handlers called their
event subscription functions (`SubscribePendingTxs`,
`SubscribeNewHeads`) inside goroutines, so the RPC handler returned the
subscription ID to the client before the filter was installed in the
event loop. When the client then sent a transaction, the event fired but
no filter existed to catch it — the event was silently lost.
- Move `SubscribePendingTxs` and `SubscribeNewHeads` calls out of
goroutines so filters are installed synchronously before the RPC
response is sent, matching the pattern already used by `Logs` and
`TransactionReceipts`
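A hedged before/after sketch of the fix, with simplified stand-in types
for the filter API:
```go
package subfix

type subscription struct{ id string }

type backend interface {
	SubscribePendingTxs(ch chan<- []string) subscription
}

// Before: the subscription was installed inside the goroutine, so the RPC
// response (carrying the subscription ID) could reach the client before the
// filter existed, and early events were silently dropped.
//
// After: install the filter synchronously, then start the forwarding loop.
func newPendingTransactions(b backend, notify func([]string)) subscription {
	txCh := make(chan []string, 128)
	sub := b.SubscribePendingTxs(txCh) // installed before the RPC response is sent

	go func() {
		for txs := range txCh {
			notify(txs)
		}
	}()
	return sub
}
```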
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: s1na <1591639+s1na@users.noreply.github.com>
This PR contains two changes:
Firstly, the finalized header will be resolved from the local chain if it
was not recently announced via `engine_newPayload`.
More importantly, in the downloader there were originally two code paths
for pushing the pivot block forward: one in the beacon header fetcher
(`fetchHeaders`), and another in the snap content processor
(`processSnapSyncContent`).
Usually, if there are new blocks and the local pivot block becomes stale,
this will first be detected by `fetchHeaders`. `processSnapSyncContent`
is fully driven by the beacon headers and will only detect the stale pivot
block after synchronizing the corresponding chain segment. I think the
detection here is redundant and useless.
This PR fixes a regression introduced in https://github.com/ethereum/go-ethereum/pull/33836/changes
Before PR 33836, running mainnet would automatically bump the cache size
to 4GB and trigger a cache re-calculation, specifically setting the key-value
database cache to 2GB.
After PR 33836, this logic was removed, and the cache value is no longer
recomputed if no command line flags are specified. The default key-value
database cache is 512MB.
This PR bumps the default key-value database cache size alongside the
default cache size for other components (such as snapshot) accordingly.
I observed failing tests in Hive `engine-withdrawals`:
- https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=146
- https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=147
```shell
DEBUG (Withdrawals Fork on Block 2): NextPayloadID before getPayloadV2:
id=0x01487547e54e8abe version=1
>> engine_getPayloadV2("0x01487547e54e8abe")
<< error: {"code":-38005,"message":"Unsupported fork"}
FAIL: Expected no error on EngineGetPayloadV2: error=Unsupported fork
```
The same failure pattern occurred for Block 3.
Per Shanghai engine_getPayloadV2 spec, pre-Shanghai payloads should be
accepted via V2 and returned as ExecutionPayloadV1:
- executionPayload: ExecutionPayloadV1 | ExecutionPayloadV2
- ExecutionPayloadV1 MUST be returned if payload timestamp < Shanghai
timestamp
- ExecutionPayloadV2 MUST be returned if payload timestamp >= Shanghai
timestamp
Reference:
- https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#engine_getpayloadv2
The current implementation only allows GetPayloadV2 within the Shanghai
fork window (`[]forks.Fork{forks.Shanghai}`), so pre-Shanghai payloads are
rejected with "Unsupported fork".
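For reference, the spec rule quoted above boils down to a timestamp check
like the following (names are illustrative, not geth's):
```go
package enginev2

type payloadVersion int

const (
	executionPayloadV1 payloadVersion = 1
	executionPayloadV2 payloadVersion = 2
)

// versionForV2 picks the envelope version for an engine_getPayloadV2 response:
// pre-Shanghai payloads are still served, just as ExecutionPayloadV1.
func versionForV2(payloadTime, shanghaiTime uint64) payloadVersion {
	if payloadTime < shanghaiTime {
		return executionPayloadV1
	}
	return executionPayloadV2
}
```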
If my interpretation of the spec is incorrect, please let me know and I
can adjust accordingly.
---------
Co-authored-by: muzry.li <muzry.li1@ambergroup.io>
Eth currently has a flaky test related to the tx fetcher.
The issue seems to happen when Unsubscribe is called while sub is nil.
It seems that chain.Stop() may be invoked before the loop starts in some
tests, but the exact cause is still under investigation through repeated
runs. I think this change will at least prevent the error.
With this, we are dropping support for protocol version eth/68. The only supported
version is eth/69 now. The p2p receipt encoding logic can be simplified a lot, and
processing of receipts during sync gets a little faster because we now transform
the network encoding into the database encoding directly, without decoding the
receipts first.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
fix the flaky test found in
https://ci.appveyor.com/project/ethereum/go-ethereum/builds/53601688/job/af5ccvufpm9usq39
1. increase the timeout from 3+1s to 15s, and use a timer instead of
sleep (in the CI env, it may need more time to sync the 1024 blocks)
2. add `synced.Load()` to ensure the full async chain import has finished
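Roughly, the waiting logic now looks like this (a sketch with stand-in
names, not the test's exact code):
```go
package synctest

import (
	"errors"
	"sync/atomic"
	"time"
)

// waitSynced polls the synced flag until it is set or the timeout fires,
// instead of sleeping for a fixed duration.
func waitSynced(synced *atomic.Bool, timeout time.Duration) error {
	deadline := time.NewTimer(timeout)
	defer deadline.Stop()

	tick := time.NewTicker(50 * time.Millisecond)
	defer tick.Stop()

	for {
		select {
		case <-tick.C:
			if synced.Load() {
				return nil
			}
		case <-deadline.C:
			return errors.New("timed out waiting for sync")
		}
	}
}
```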
Signed-off-by: Delweng <delweng@gmail.com>
Previously, handshake timeouts were recorded as generic peer errors
instead of timeout errors. waitForHandshake passed a raw
p2p.DiscReadTimeout into markError, but markError classified errors only
via errors.Unwrap(err), which returns nil for non-wrapped errors. As a
result, the timeoutError meter was never incremented and all such
failures fell into the peerError bucket.
This change makes markError switch on the base error, using
errors.Unwrap(err) when available and falling back to the original error
otherwise. With this adjustment, p2p.DiscReadTimeout is correctly mapped
to timeoutError, while existing behaviour for the other wrapped sentinel
errors remains unchanged.
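A minimal sketch of the classification change; the sentinel and meter
names below are simplified stand-ins:
```go
package dialcheck

import "errors"

// errReadTimeout stands in for p2p.DiscReadTimeout, a non-wrapped sentinel.
var errReadTimeout = errors.New("disconnect: read timeout")

// markError classifies a handshake failure. It unwraps the error when it wraps
// a sentinel and otherwise falls back to the error itself, so raw sentinels
// like the read timeout no longer land in the generic peer-error bucket.
func markError(err error, meters map[string]int) {
	base := errors.Unwrap(err)
	if base == nil {
		base = err
	}
	switch base {
	case errReadTimeout:
		meters["timeoutError"]++
	default:
		meters["peerError"]++
	}
}
```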
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
The fetcher should not fetch transactions that are already on chain.
Until now we were only checking the txpool, but the pool does not contain
old transactions. This was leading to extra fetches of transactions
that were announced by a peer but are already on chain.
Here we extend the check to the chain as well.
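A minimal sketch of the extended filter; the chain lookup below is a
stand-in for whatever method the fetcher actually uses:
```go
package txfilter

import "github.com/ethereum/go-ethereum/common"

type txPool interface{ Has(common.Hash) bool }
type chain interface{ HasTransaction(common.Hash) bool }

// shouldFetch skips hashes that are either pending in the pool or already
// included in the chain.
func shouldFetch(h common.Hash, pool txPool, c chain) bool {
	return !pool.Has(h) && !c.HasTransaction(h)
}
```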
All five `revert*Request` functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked set
but never restore the peer to its corresponding idle pool. When a
request times out and no response arrives, the peer is permanently lost
from the idle pool, preventing new work from being assigned to it.
In normal operation mode (snap-sync full state) this bug is masked by
pivot movement (which resets idle pools via new Sync() cycles every ~15
minutes) and peer churn (reconnections re-add peers via Register()).
However, in scenarios like the one I am running with my
[partial-stateful node](https://github.com/ethereum/go-ethereum/pull/33764),
with long-running sync cycles and few peers, all peers can eventually leak
out of the idle pools, stalling sync entirely.
Fix: after deleting from the request map, restore the peer to its idle
pool if it is still registered (guards against the peer-drop path where
Unregister already removed the peer). This mirrors the pattern used in
all five On* response handlers.
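A hedged sketch of the fix pattern for one of the five functions; field
and map names are simplified stand-ins for the snap syncer's real ones:
```go
package snaprevert

import "sync"

type syncer struct {
	lock            sync.Mutex
	accountReqs     map[uint64]string   // request id -> peer id
	accountIdlers   map[string]struct{} // peers idle for account requests
	registeredPeers map[string]struct{} // peers still connected
}

// revertAccountRequest drops the tracked request and, if the peer is still
// registered, puts it back into the idle pool so it can be assigned new work.
// Without the second step a timed-out peer leaks out of the pool for good.
func (s *syncer) revertAccountRequest(id uint64) {
	s.lock.Lock()
	defer s.lock.Unlock()

	peer, ok := s.accountReqs[id]
	if !ok {
		return
	}
	delete(s.accountReqs, id)

	if _, live := s.registeredPeers[peer]; live {
		s.accountIdlers[peer] = struct{}{} // mirror the On* response handlers
	}
}
```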
This only seems to manifest in peer-starved scenarios such as the one I
find myself in when testing snap sync for the partial-stateful node.
Still, I thought it was at least worth raising this point; unsure whether
it needs further discussion.
Adds `--opcode.count=<file>` flag to `evm t8n` that writes per-opcode
execution frequency counts to a JSON file (relative to
`--output.basedir`).
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This changes the p2p protocol handlers to delay message decoding. It's
the first part of a larger change that will delay decoding all the way
through message processing. For responses, we delay the decoding until
it is confirmed that the response matches an active request and does not
exceed its limits.
In order to make this work, all messages have been changed to use
rlp.RawList instead of a slice of the decoded item type. For block
bodies specifically, the decoding has been delayed all the way until
after verification of the response hash.
The role of p2p/tracker.Tracker changes significantly in this PR. The
Tracker's original purpose was to maintain metrics about requests and
responses in the peer-to-peer protocols. Each protocol maintained a
single global Tracker instance. As of this change, the Tracker is now
always active (regardless of metrics collection), and there is a
separate instance of it for each peer. Whenever a response arrives, it
is first verified that a request exists for it in the tracker. The
tracker is also the place where limits are kept.
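A simplified sketch of the per-peer tracker's new role (stand-in types,
not the actual `p2p/tracker` API):
```go
package reqtrack

import "errors"

// tracker keeps the outstanding requests and per-message limits for one peer.
type tracker struct {
	pending map[uint64]string // request id -> message type expected in response
	limits  map[string]int    // per-message-type response size limits
}

var errUnexpectedResponse = errors.New("response without matching request")

// accept verifies that an incoming response matches an outstanding request and
// stays within the tracked limit before any decoding happens.
func (t *tracker) accept(reqID uint64, msgType string, size int) error {
	want, ok := t.pending[reqID]
	if !ok || want != msgType {
		return errUnexpectedResponse
	}
	if limit, ok := t.limits[msgType]; ok && size > limit {
		return errors.New("response exceeds limit")
	}
	delete(t.pending, reqID)
	return nil
}
```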