Compare commits

...

130 commits

Author SHA1 Message Date
rjl493456442
d8cb8a962b
core, eth, ethclient, triedb: report trienode index progress (#34633)
The trienode history indexing progress is also exposed via an RPC 
endpoint and contributes to the eth_syncing status.
2026-04-04 21:00:07 +08:00
Jonny Rhea
a608ac94ec
eth/protocols/snap: restore Bytes soft limit to GetAccessListsPacket (#34649)
This PR adds the Bytes field back to GetAccessListsPacket.
2026-04-04 20:53:54 +08:00
Jonny Rhea
00da4f51ff
core, eth/protocols/snap: Snap/2 Protocol + BAL Serving (#34083)
Implement the snap/2 wire protocol with BAL serving

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-04-03 14:10:32 +08:00
rjl493456442
0ba4314321
core/state: introduce state iterator interface (#33102)
In this PR, the Database interface in `core/state` has been extended
with one more function:

```go
	// Iteratee returns a state iteratee associated with the specified state root,
	// through which the account iterator and storage iterator can be created.
	Iteratee(root common.Hash) (Iteratee, error)
```

With this additional abstraction layer, the implementation details can be hidden
behind the interface. For example, state traversal can now operate directly on 
the flat state for Verkle or binary trees, which do not natively support traversal.

Moreover, state dumping will now prefer using the flat state iterator as
the primary option, offering better efficiency.


Edit: this PR also fixes a tiny issue in the state dump, marshalling the
next field in the correct way.
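For illustration, a minimal in-memory sketch of how a consumer of such an iteratee abstraction might look. Only the `Iteratee` name comes from the PR; the simplified string-keyed interface and `memState` type below are hypothetical stand-ins:

```go
package main

import "fmt"

// Iteratee is a toy version of the new abstraction: it hands out an
// account iterator for a given state. The shape is simplified (string
// keys, slice result instead of an iterator object).
type Iteratee interface {
	AccountIterator(seek string) []string
}

type memState struct{ accounts []string }

// AccountIterator returns all accounts at or after the seek position.
func (m *memState) AccountIterator(seek string) []string {
	var out []string
	for _, a := range m.accounts {
		if a >= seek {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	var it Iteratee = &memState{accounts: []string{"0xaa", "0xbb", "0xcc"}}
	fmt.Println(it.AccountIterator("0xbb")) // [0xbb 0xcc]
}
```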
2026-04-03 10:35:32 +08:00
cui
bcb0efd756
core/types: copy block access list hash in CopyHeader (#34636)
2026-04-02 20:40:45 +08:00
rjl493456442
db6c7d06a2
triedb/pathdb: implement history index pruner (#33999)
This PR implements the missing functionality for archive nodes by 
pruning stale index data.

The current mechanism is relatively simple but sufficient for now: 
it periodically iterates over index entries and deletes outdated data 
on a per-block basis. 

The pruning process is triggered every 90,000 new blocks (approximately 
every 12 days), and the iteration typically takes ~30 minutes on a 
mainnet node.

This mechanism is only applied with `gcmode=archive` enabled, and has
no impact on normal full nodes.
2026-04-02 00:21:58 +02:00
Daniel Liu
14a26d9ccc
eth/gasestimator: fix block overrides in estimate gas (#34081)
Block overrides were largely ignored by the gas estimator. This PR
fixes that.
2026-04-01 20:32:17 +02:00
Felföldi Zsolt
fc43170cdd
beacon/light: keep retrying checkpoint init if failed (#33966)
This PR changes the blsync checkpoint init logic so that even if the
initialization fails with a certain server and an error log message is
printed, the server goes back to its initial state and is allowed to
retry initialization after the failure delay period. The previous logic
had an `ssDone` server state that put the server in a permanently
unusable state once the checkpoint init failed for an apparently
permanent reason. This was not the correct behavior because different
servers behave differently in case of overload and sometimes the
response to a permanently missing item is not clearly distinguishable
from an overload response. A safer logic is to never assume anything to
be permanent and always give a chance to retry.
The failure delay formula is also fixed; it is now properly capped at
`maxFailureDelay`. The previous formula allowed the delay to grow without
bound if a retry was attempted immediately after each delay period.
2026-04-01 16:05:57 +02:00
Chase Wright
92b4cb2663
eth/tracers/logger: conform structLog tracing to spec (#34093)
This is a breaking change in the opcode (structLog) tracer. Several fields
will have a slight formatting difference to conform to the newly established
spec at: https://github.com/ethereum/execution-apis/pull/762. The differences
include:

- `memory`: words will have the 0x prefix. Also last word of memory will be padded to 32-bytes.
- `storage`: keys and values will have the 0x prefix.

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
2026-03-31 16:02:40 +02:00
CPerezz
3da517e239
core/state: fix storage counters in binary trie IntermediateRoot (#34110)
Add missing `StorageUpdated` and `StorageDeleted` counter increments
in the binary trie fast path of `IntermediateRoot()`.
2026-03-31 15:47:07 +02:00
Jonny Rhea
dc3794e3dc
core/rawdb: BAL storage layer (#34064)
Add persistent storage for Block Access Lists (BALs) in `core/rawdb/`.
This provides read/write/delete accessors for BALs in the active
key-value store.

---------

Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-31 15:05:31 +08:00
Bosul Mun
965bd6b6a0
eth: implement EIP-7975 (eth/70 - partial block receipt lists) (#33153)
In this PR, we add support for protocol version eth/70, defined by EIP-7975.

Overall changes:

- Each response is buffered in the peer’s receipt buffer when the
`lastBlockIncomplete` field is true.
- A continued request uses the same request id as its original
  request (`RequestPartialReceipts`).
- Partial responses are verified in `validateLastBlockReceipt`.
- Even if all receipts for the partial blocks of the request are collected,
  those partial results are not delivered to the downloader, to avoid
  complexity. This assumes that partial responses and buffering occur only
  in exceptional cases.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2026-03-30 15:17:37 +02:00
rjl493456442
fe47c39903
version: start v1.17.3 release cycle (#34619) 2026-03-30 15:01:29 +02:00
rjl493456442
be4dc0c4be
version: release go-ethereum v1.17.2 stable (#34618) 2026-03-30 18:42:40 +08:00
Sina M
95705e8b7b
internal/ethapi: limit number of getProofs keys (#34617)
We can consider making this limit configurable if ever the need arose.
2026-03-30 16:01:30 +08:00
Sina M
ceabc39304
internal/ethapi: limit number of calls to eth_simulateV1 (#34616)
Later on we can consider making these limits configurable if the
use-case arose.
2026-03-30 16:01:12 +08:00
Charles Dusek
e585ad3b42
core/rawdb: fix freezer dir.Sync() failure on Windows (#34115) 2026-03-30 15:34:23 +08:00
Daniel Liu
d1369b69f5
core/txpool/legacypool: use types.Sender instead of signer.Sender (#34059)
`pool.signer.Sender(tx)` bypasses the sender cache used by types.Sender,
which can force an extra signature recovery for every promotable tx
(promotion runs frequently). Use `types.Sender(pool.signer, tx)` here to
keep sender derivation cached and consistent.
2026-03-28 11:46:09 +01:00
Guillaume Ballet
bd3c8431d9
build, cmd/keeper: add "womir" target (#34079)
This PR enables the block validation of keeper in the womir/openvm zkvm.

It also fixes some issues related to building the executables in CI.
Namely, it activates the build which was actually disabled, and also
resolves some resulting build conflicts by fixing the tags.

Co-authored-by: Leo <leo@powdrlabs.com>
2026-03-28 11:39:44 +01:00
Charles Dusek
a2496852e9
p2p/discover: resolve DNS hostnames for bootstrap nodes (#34101)
Fixes #31208
2026-03-28 11:37:39 +01:00
rjl493456442
c3467dd8b5
core, miner, trie: relocate witness stats (#34106)
This PR relocates the witness statistics into the witness itself, making
it more self-contained.
2026-03-27 17:06:46 +01:00
jwasinger
acdd139717
miner: set slot number when building test payload (#34094)
2026-03-27 09:45:49 +08:00
Daniel Liu
1b3b028d1d
miner: fix txFitsSize comment (#34100)
Rename the comment so it matches the helper name.
2026-03-27 09:41:56 +08:00
Lessa
8a3a309fa9
core/txpool/legacypool: remove redundant nil check in Get (#34092)
Leftover from d40a255, when the return type changed from *txpool.Transaction
to *types.Transaction.
2026-03-26 14:02:31 +01:00
bigbear
5d0e18f775
core/tracing: fix NonceChangeAuthorization comment (#34085)
The comment referenced NonceChangeTransaction, which doesn't exist; it
should be NonceChangeAuthorization.
2026-03-25 09:16:09 +01:00
Andrew Davis
8f9061f937
cmd/utils: optimize history import with batched insertion (#33894)
Improve speed of import-history command by two orders of magnitude.

Rework ImportHistory to collect up to 2500 blocks per flush instead of
flushing after each block, reducing database commit overhead.
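The batching rework can be sketched as follows; the batch size matches the 2500-block figure above, but the `flush` callback and loop structure are illustrative:

```go
package main

import "fmt"

// batchSize mirrors the 2500-blocks-per-flush figure from the PR.
const batchSize = 2500

// importBlocks accumulates blocks and flushes once per full batch plus
// a final partial batch, instead of committing after every block.
// It returns the number of flushes performed.
func importBlocks(blocks []int, flush func(n int)) int {
	flushes := 0
	pending := 0
	for range blocks {
		pending++
		if pending == batchSize {
			flush(pending)
			flushes++
			pending = 0
		}
	}
	if pending > 0 { // final partial batch
		flush(pending)
		flushes++
	}
	return flushes
}

func main() {
	blocks := make([]int, 6000)
	n := importBlocks(blocks, func(int) {})
	fmt.Println("flushes:", n) // 3 instead of 6000
}
```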

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-03-24 21:47:18 +01:00
Csaba Kiraly
e951bcbff7
cmd/devp2p: fix discv5 PingMultiIP test session key mismatch (#34031)
conn.read() used the actual UDP packet source address for
codec.Decode(), but conn.write() always used tc.remoteAddr. When the
remote node is reachable via multiple Docker networks, the packet source
IP differs from tc.remoteAddr, causing a session key lookup failure in
the codec.

Use tc.remoteAddr.String() consistently in conn.read() so the session
cache key matches what was used during Encode.
2026-03-24 07:57:11 -06:00
vickkkkkyy
745b0a8c09
cmd/utils: guard SampleRatio flag with IsSet check (#34062)
In `setOpenTelemetry`, all other fields (Enabled, Endpoint, AuthUser,
AuthPassword, InstanceID, Tags) are guarded by `ctx.IsSet()` checks, so
they only override the config file when explicitly set via CLI flags.
`SampleRatio` was the only field missing this guard, causing the flag
default (`1.0`) to always overwrite whatever was loaded from the config
file.

- Fix OpenTelemetry `SampleRatio` being unconditionally overwritten by
the CLI flag default value (`1.0`), even when the user did not pass
`--rpc.telemetry.sample-ratio`
- This caused config file values for `SampleRatio` to be silently
ignored
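The guard pattern can be sketched like this; `ctx` below is a hypothetical stand-in for the CLI context, not the actual urfave/cli type:

```go
package main

import "fmt"

// ctx is a toy flag context: set records which flags the user passed,
// values holds the flag values (including defaults).
type ctx struct {
	set    map[string]bool
	values map[string]float64
}

func (c *ctx) IsSet(name string) bool      { return c.set[name] }
func (c *ctx) Float64(name string) float64 { return c.values[name] }

// applySampleRatio only overrides the config-file value when the flag
// was explicitly set; without this guard the flag default (1.0) would
// always clobber *cfg.
func applySampleRatio(c *ctx, cfg *float64) {
	if c.IsSet("rpc.telemetry.sample-ratio") {
		*cfg = c.Float64("rpc.telemetry.sample-ratio")
	}
}

func main() {
	cfgValue := 0.25 // loaded from the config file
	c := &ctx{
		set:    map[string]bool{}, // flag not passed on the CLI
		values: map[string]float64{"rpc.telemetry.sample-ratio": 1.0},
	}
	applySampleRatio(c, &cfgValue)
	fmt.Println(cfgValue) // 0.25: config value preserved
}
```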
2026-03-23 13:14:28 -04:00
Felföldi Zsolt
b87340a856
core, core/vm: implement EIP-7708 (#33645)
This PR implements EIP-7708 according to the latest "rough consensus":

https://github.com/ethereum/EIPs/pull/9003
https://github.com/etan-status/EIPs/blob/fl-ethlogs/EIPS/eip-7708.md

---------

Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: raxhvl <raxhvl@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-23 22:29:53 +08:00
Daniel Liu
a61e5ccb1e
core, internal/ethapi: fix incorrect max-initcode RPC error mapping (#34067)
Problem:

The max-initcode sentinel moved from core to vm, but RPC pre-check
mapping still depended on core.ErrMaxInitCodeSizeExceeded. This mismatch
could surface inconsistent error mapping when oversized initcode is
submitted through JSON-RPC.

Solution:

- Remove core.ErrMaxInitCodeSizeExceeded from the core pre-check error
set.
- Map max-initcode validation errors in RPC from
vm.ErrMaxInitCodeSizeExceeded.
- Keep the RPC error code mapping unchanged (-38025).

Impact:

- Restores consistent max-initcode error mapping after the sentinel
move.
- Preserves existing JSON-RPC client expectations for error code -38025.
- No consensus, state, or protocol behavior changes.
2026-03-23 22:10:32 +08:00
Lessa
e23b0cbc22
core/rawdb: fix key length check for num -- hash in db inspect (#34074)
Fix incorrect key length calculation for `numHashPairings` in
`InspectDatabase`, introduced in #34000.

The `headerHashKey` format is `headerPrefix + num + headerHashSuffix`
(10 bytes), but the check incorrectly included `common.HashLength`,
expecting 42 bytes.

This caused all number -- hash entries to be misclassified as
unaccounted data.
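The key shape in question can be sketched directly: one prefix byte, an 8-byte big-endian block number, and one suffix byte give 10 bytes, so the inspect check must expect 10, not 10 + common.HashLength (42). The prefix and suffix bytes below mirror the rawdb conventions:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

var (
	headerPrefix     = []byte("h") // rawdb-style one-byte prefix
	headerHashSuffix = []byte("n") // rawdb-style one-byte suffix
)

// headerHashKey builds headerPrefix + 8-byte big-endian num +
// headerHashSuffix, the num->hash key layout described above.
func headerHashKey(number uint64) []byte {
	enc := make([]byte, 8)
	binary.BigEndian.PutUint64(enc, number)
	key := append(append([]byte{}, headerPrefix...), enc...)
	return append(key, headerHashSuffix...)
}

func main() {
	fmt.Println(len(headerHashKey(12345))) // 10
}
```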
2026-03-23 21:54:30 +08:00
Guillaume Ballet
305cd7b9eb
trie/bintrie: fix NodeIterator Empty node handling and expose tree accessors (#34056)
Fix three issues in the binary trie NodeIterator:

1. Empty nodes now properly backtrack to parent and continue iteration
instead of terminating the entire walk early.

2. `HashedNode` resolver handles `nil` data (all-zeros hash) gracefully
by treating it as Empty rather than panicking.

3. Parent update after node resolution guards against stack underflow
when resolving the root node itself.

---------

Co-authored-by: tellabg <249254436+tellabg@users.noreply.github.com>
2026-03-20 13:53:14 -04:00
CPerezz
77779d1098
core/state: bypass per-account updateTrie in IntermediateRoot for binary trie (#34022)
## Summary

In binary trie mode, `IntermediateRoot` calls `updateTrie()` once per
dirty account. But with the binary trie there is only one unified trie
(`OpenStorageTrie` returns `self`), so each call redundantly does
per-account trie setup: `getPrefetchedTrie`, `getTrie`, slice
allocations for deletions/used, and `prefetcher.used` — all for the same
trie pointer.

This PR replaces the per-account `updateTrie()` calls with a single flat
loop that applies all storage updates directly to `s.trie`. The MPT path
is unchanged. The prefetcher trie replacement is guarded to avoid
overwriting the binary trie that received updates.

This is the phase-1 counterpart to #34021 (H01). H01 fixes the commit
phase (`trie.Commit()` called N+1 times). This PR fixes the update phase
(`updateTrie()` called N times with redundant setup). Same root cause —
unified binary trie operated on per-account — different phases.

## Benchmark (Apple M4 Pro, 500K entries, `--benchtime=10s --count=3`, on top of #34021)

| Metric | H01 baseline | H01 + this PR | Delta |
|--------|:------------:|:-------------:|:-----:|
| Approve (Mgas/s) | 368 | **414** | **+12.5%** |
| BalanceOf (Mgas/s) | 870 | 875 | +0.6% |

Should be rebased after #34021 is merged.
2026-03-20 15:40:04 +01:00
jvn
59ce2cb6a1
p2p: track in-progress inbound node IDs (#33198)
Avoid dialing a node while we have an inbound
connection request from them in progress.

Closes #33197
2026-03-20 05:52:15 +01:00
Felix Lange
35b91092c5
rlp: add Size method to EncoderBuffer (#34052)
The new method returns the size of the written data, excluding any
unfinished list structure.
2026-03-19 18:26:00 +01:00
jwasinger
fd859638bd
core/vm: rework gas measurement for call variants (#33648)
EIP-7928 brings state reads into consensus by recording accounts and storage accessed during execution in the block access list. As part of the spec, we need to check that there is enough gas available to cover the cost component which doesn't depend on looking up state. If this component can't be covered by the available gas, we exit immediately.

The portion of the call dynamic cost which doesn't depend on state look ups:

- EIP2929 call costs
- value transfer cost
- memory expansion cost

This PR:

- breaks up the "inner" gas calculation for each call variant into a pair of stateless/stateful cost methods
- modifies the gas calculation logic of calls to check stateless cost component first, and go out of gas immediately if it is not covered.
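The check-stateless-cost-first flow can be sketched as below; the cost figures and the charge breakdown are illustrative, not the EVM's actual constants:

```go
package main

import (
	"errors"
	"fmt"
)

var errOutOfGas = errors.New("out of gas")

// callGas charges the state-independent portion of a call's dynamic cost
// first and exits immediately if it cannot be covered, so the stateful
// cost function (which would require state lookups) is never invoked.
func callGas(available, statelessCost uint64, statefulCost func() uint64) (uint64, error) {
	if available < statelessCost {
		return 0, errOutOfGas // bail out before touching state
	}
	total := statelessCost + statefulCost()
	if available < total {
		return 0, errOutOfGas
	}
	return available - total, nil
}

func main() {
	// 2600 stands in for a stateless component (e.g. a cold-access charge).
	_, err := callGas(100, 2600, func() uint64 { return 0 })
	fmt.Println(err) // out of gas, without running the stateful part
}
```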

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-19 10:02:49 -06:00
rjl493456442
a3083ff5d0
cmd: add support for enumerating a single storage trie (#34051)
2026-03-19 09:52:10 +01:00
Bosul Mun
4faadf17fb
rlp: add AppendList method to RawList (#34048)
This adds the AppendList method, which merges two RawList instances by
appending the raw content.
2026-03-19 09:51:03 +01:00
vickkkkkyy
3341d8ace0
eth/filters: rangeLogs should error on invalid block range (#33763)
Fixes log filter to reject out of order block ranges.
2026-03-18 23:31:40 +01:00
haoyu-haoyu
b35645bdf7
build: fix missing '!' in shebang of generated oss-fuzz scripts (#34044)
`oss-fuzz.sh` line 38 writes `#/bin/sh` instead of `#!/bin/sh` as
the shebang of generated fuzz test runner scripts.

```diff
-#/bin/sh
+#!/bin/sh
```

Without the `!`, the kernel does not recognize the interpreter
directive.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 13:56:26 +01:00
Sina M
6ae3f9fa56
core/history: refactor pruning configuration (#34036)
This PR introduces a new type, HistoryPolicy, which captures user intent,
as opposed to the pruning point stored in the blockchain, which persists
the actual tail of data in the database.

It is in preparation for the rolling history expiry feature.

It comes with a semantic change: if the database was pruned and geth is
running without a history mode flag (or an explicit keep-all flag), geth
will emit a warning but continue running, as opposed to stopping the
world.
2026-03-18 13:54:29 +01:00
CPerezz
6138a11c39
trie/bintrie: parallelize InternalNode.Hash at shallow tree depths (#34032)
## Summary

At tree depths below `log2(NumCPU)` (clamped to [2, 8]), hash the left
subtree in a goroutine while hashing the right subtree inline. This
exploits available CPU cores for the top levels of the tree where
subtree hashing is most expensive. On single-core machines, the parallel
path is disabled entirely.

Deeper nodes use sequential hashing with the existing `sync.Pool` hasher
where goroutine overhead would exceed the hash computation cost. The
parallel path uses `sha256.Sum256` with a stack-allocated buffer to
avoid pool contention across goroutines.

**Safety:**
- Left/right subtrees are disjoint — no shared mutable state
- `sync.WaitGroup` provides happens-before guarantee for the result
- `defer wg.Done()` + `recover()` prevents goroutine panics from
crashing the process
- `!bt.mustRecompute` early return means clean nodes never enter the
parallel path
- Hash results are deterministic regardless of computation order — no
consensus risk

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s
--count=3`, post-H01 baseline)

| Metric | Baseline | Parallel | Delta |
|--------|----------|----------|-------|
| Approve (Mgas/s) | 224.5 ± 7.1 | **259.6 ± 2.4** | **+15.6%** |
| BalanceOf (Mgas/s) | 982.9 ± 5.1 | 954.3 ± 10.8 | -2.9% (noise, clean nodes skip parallel path) |
| Allocs/op (approve) | ~810K | ~700K | -13.6% |
2026-03-18 13:54:23 +01:00
Mayveskii
b6115e9a30
core: fix txLookupLock mutex leak on error returns in reorg() (#34039)
2026-03-18 15:43:24 +08:00
felipe
ab357151da
cmd/evm: don't strip prefixes on requests over t8n (#33997)
Found this bug while implementing the Amsterdam t8n changes for
benchmark test filling in EELS.

Prefixes were incorrectly being stripped on requests over t8n and this
was leading to `fill` correctly catching hash mismatches on the EELS
side for some BAL tests. Though this was caught there, I think this
change might as well be cherry-picked there instead and merged to
`master`.

This PR brings this behavior to parity with EELS for Osaka filling.
There are still some quirks with regards to invalid block tests but I
did not investigate this further.
2026-03-17 16:07:28 +01:00
rjl493456442
9b2ce121dc
triedb/pathdb: enhance history index initer (#33640)
This PR improves the pbss archive mode. Initial sync of an archive node
with the --gcmode archive flag enabled is significantly sped up.

It achieves that with the following changes:

The indexer now attempts to process histories in batches whenever
possible. Batch indexing is enforced when the node is still syncing and
the local chain head is behind the network chain head.

In this scenario, instead of scheduling indexing frequently alongside
block insertion, the indexer waits until a sufficient amount of history
has accumulated and then processes it in a batch, which is significantly
more efficient.

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 15:29:30 +01:00
Sina M
fc1b0c0b83
internal/ethapi: warn on reaching global gas cap for eth_simulateV1 (#34016)
Warn user when gas limit of a tx is capped due to rpc server's gas cap
being reached.
2026-03-17 13:52:04 +01:00
CPerezz
519a450c43
core/state: skip redundant trie Commit for Verkle in stateObject.commit (#34021)
## Summary

**Bug fix.** In Verkle mode, all state objects share a single unified
trie (`OpenStorageTrie` returns `self`). During `stateDB.commit()`, the
main account trie is committed via `s.trie.Commit(true)`, which calls
`CollectNodes` to traverse and serialize the entire tree. However, each
dirty account's `obj.commit()` also calls `s.trie.Commit(false)` on the
**same trie object**, redundantly traversing and serializing the full
tree once per dirty account.

With N dirty accounts per block, this causes **N+1 full-tree
traversals** instead of 1. On a write-heavy workload (2250 SSTOREs),
this produces ~131 GB of allocations per block from duplicate NodeSet
creation and serialization. It also causes a latent data race from N+1
goroutines concurrently calling `CollectNodes` on shared `InternalNode`
objects.

This commit adds an `IsVerkle()` early return in `stateObject.commit()`
to skip the redundant `trie.Commit()` call.

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s --count=3`)

| Metric | Baseline | Fixed | Delta |
|--------|----------|-------|-------|
| Approve (Mgas/s) | 4.16 ± 0.37 | **220.2 ± 10.1** | **+5190%** |
| BalanceOf (Mgas/s) | 966.2 ± 8.1 | 971.0 ± 3.0 | +0.5% |
| Allocs/op (approve) | 136.4M | 792K | **-99.4%** |

Resolves the TODO in statedb.go about the account trie commit being
"very heavy" and "something's wonky".

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-17 12:27:29 +01:00
CPerezz
4b915af2c3
core/state: avoid Bytes() allocation in flatReader hash computations (#34025)
## Summary

Replace `addr.Bytes()` and `key.Bytes()` with `addr[:]` and `key[:]` in
`flatReader`'s `Account` and `Storage` methods. The former allocates a
copy while the latter creates a zero-allocation slice header over the
existing backing array.

## Benchmark (AMD EPYC 48-core, 500K entries, screening `--benchtime=1x`)

| Metric | Baseline | Slice syntax | Delta |
|--------|----------|--------------|-------|
| Approve (Mgas/s) | 4.13 | 4.22 | +2.2% |
| BalanceOf (Mgas/s) | 168.3 | 190.0 | **+12.9%** |
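The aliasing difference can be demonstrated with a toy 20-byte address type mirroring common.Address; the value-receiver `Bytes()` below matches the shape that forces a copy:

```go
package main

import "fmt"

// Address is a toy 20-byte type standing in for common.Address.
type Address [20]byte

// Bytes has a value receiver, so `a` is already a copy of the address;
// the returned slice points into that escaping copy (an allocation).
func (a Address) Bytes() []byte {
	return a[:]
}

func main() {
	var addr Address
	addr[0] = 0xab

	b := addr.Bytes() // slice over a copy: writes don't reach addr
	b[0] = 0xcd
	fmt.Printf("after Bytes() write:  %x\n", addr[0]) // ab

	s := addr[:] // slice header over addr's own backing array
	s[0] = 0xcd
	fmt.Printf("after addr[:] write: %x\n", addr[0]) // cd
}
```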
2026-03-17 11:42:42 +01:00
Jonny Rhea
98b13f342f
miner: add OpenTelemetry spans for block building path (#33773)
Instruments the block building path with OpenTelemetry tracing spans.

- added spans in forkchoiceUpdated -> buildPayload -> background payload
loop -> generateWork iterations. Spans should look something like this:

```
jsonrpc.engine/forkchoiceUpdatedV3
|- rpc.runMethod
|  |- engine.forkchoiceUpdated
|     |- miner.buildPayload [payload.id, parent.hash, timestamp]
|        |- miner.generateWork [txs.count, gas.used, fees] (empty block)
|        |  |- miner.prepareWork
|        |  |- miner.FinalizeAndAssemble
|        |     |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|        |        |- consensus.beacon.Finalize
|        |        |- consensus.beacon.IntermediateRoot
|        |        |- consensus.beacon.NewBlock
|        |- miner.background [block.number, iterations.total, exit.reason, empty.delivered]
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- miner.generateWork [txs.count, gas.used, fees]
|           |     |- miner.prepareWork
|           |     |- miner.fillTransactions [pending.plain.count, pending.blob.count]
|           |     |  |- miner.commitTransactions.priority (if prio txs exist)
|           |     |  |  |- miner.commitTransactions
|           |     |  |     |- miner.commitTransaction (per tx)
|           |     |  |- miner.commitTransactions.normal (if normal txs exist)
|           |     |     |- miner.commitTransactions
|           |     |        |- miner.commitTransaction (per tx)
|           |     |- miner.FinalizeAndAssemble
|           |        |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|           |           |- consensus.beacon.Finalize
|           |           |- consensus.beacon.IntermediateRoot
|           |           |- consensus.beacon.NewBlock
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- ...
|           |- ...

```

- added simulated server spans in SimulatedBeacon.sealBlock so dev mode
(geth --dev) produces traces that mirror production Engine API calls
from a real consensus client.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-03-16 19:24:41 +01:00
rjl493456442
a7d09cc14f
core: fix code database initialization in stateless mode (#34011)
This PR fixes the statedb initialization, ensuring the data source is
bound with the stateless input.
2026-03-16 09:45:26 +01:00
Guillaume Ballet
77e7e5ad1a
go.mod, go.sum: update karalabe/hid to fix broken FreeBSD ports build (#34008)
cgo builds have been broken in FreeBSD ports because of the hid lib.
@enriquefynn has made a temporary patch, but the fix has been merged in
the master branch, so let's reflect that here.
2026-03-16 14:35:07 +08:00
vickkkkkyy
24025c2bd0
build: fix signify flag name in doWindowsInstaller (#34006)
The signify flag in `doWindowsInstaller` was defined as "signify key"
(with a space), making it impossible to pass via CLI (`-signify
<value>`). This meant the Windows installer signify signing was silently
never executed.

Fix by renaming the flag to "signify", consistent with `doArchive` and
`doKeeperArchive`.
2026-03-14 10:22:50 +01:00
jwasinger
ede376af8e
internal/ethapi: encode slotNumber as hex in RPCMarshalHeader (#34005)
The slotNumber field was being passed as a raw *uint64 to the JSON
marshaler, which serializes it as a plain decimal integer (e.g. 159).
All Ethereum JSON-RPC quantity fields must be hex-encoded per spec.

Wrap with hexutil.Uint64 to match the encoding of other numeric header
fields like blobGasUsed and excessBlobGas.

Co-authored-by: qu0b <stefan@starflinger.eu>
2026-03-13 17:09:32 +01:00
vickkkkkyy
189f9d0b17
eth/filters: check history pruning cutoff in GetFilterLogs (#33823)
Return a proper error for log filters that go beyond the pruning
point on a node with expired history.
2026-03-13 13:26:20 +01:00
Lessa
dba741fd31
console: fix autocomplete digit range to include 0 (#34003)
PR #26518 added digit support but used '1'-'9' instead of '0'-'9'. This
breaks autocomplete for identifiers containing 0, such as `account0`.
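The off-by-one is easy to see in a character-class predicate; a minimal sketch of the corrected check (the name `isIdentChar` is illustrative, not the console package's actual function):

```go
package main

import "fmt"

// isIdentChar reports whether c can appear inside a completable
// identifier. The buggy version used '1' <= c, excluding '0'.
func isIdentChar(c byte) bool {
	return c == '_' || c == '.' ||
		('a' <= c && c <= 'z') ||
		('A' <= c && c <= 'Z') ||
		('0' <= c && c <= '9') // was: '1' <= c && c <= '9'
}

func main() {
	fmt.Println(isIdentChar('0')) // true with the fix
}
```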
2026-03-13 12:39:45 +01:00
Lee Gyumin
eaa9418ac1
core/rawdb: enforce exact key length for num->hash and td in db inspect (#34000)
This PR improves `db inspect` classification accuracy in
`core/rawdb/database.go` by tightening key-shape checks for:

- `Block number->hash`
- `Difficulties (deprecated)`

Previously, both categories used prefix/suffix heuristics and could
mis-bucket unrelated entries.
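The tightened check can be sketched as follows. The specific prefix/suffix bytes and the fixed 8-byte block number are assumptions about geth's key layout; the point is requiring an exact total length rather than only matching prefix and suffix:

```go
package main

import (
	"bytes"
	"fmt"
)

var (
	headerPrefix     = []byte("h") // assumed namespace byte
	headerHashSuffix = []byte("n") // assumed suffix for num->hash entries
)

// isNumToHashKey requires the exact shape prefix + 8-byte number + suffix,
// instead of only checking prefix and suffix, which can mis-bucket
// unrelated keys that happen to share them.
func isNumToHashKey(key []byte) bool {
	return len(key) == len(headerPrefix)+8+len(headerHashSuffix) &&
		bytes.HasPrefix(key, headerPrefix) &&
		bytes.HasSuffix(key, headerHashSuffix)
}

func main() {
	good := append(append([]byte("h"), make([]byte, 8)...), 'n')
	bad := append(append([]byte("h"), make([]byte, 20)...), 'n')
	fmt.Println(isNumToHashKey(good), isNumToHashKey(bad))
}
```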
2026-03-13 09:45:14 +01:00
Guillaume Ballet
1c9ddee16f
trie/bintrie: use a sync.Pool when hashing binary tree nodes (#33989)
Binary tree hashing is quite slow, owing to many factors. One of them is
the GC pressure caused by allocating many hashers, as a binary tree is
4x the size of an MPT. This PR introduces an optimization that already
exists for the MPT: keep a pool of hashers to reduce the number of
allocations.
2026-03-12 10:20:12 +01:00
jvn
95b9a2ed77
core: Implement eip-7954 increase Maximum Contract Size (#33832)
Implements EIP-7954: raises the maximum contract code size to 32KiB and
the initcode size to 64KiB, following https://eips.ethereum.org/EIPS/eip-7954

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2026-03-12 10:23:49 +08:00
Copilot
de0a452f7d
eth/filters: fix race in pending tx and new heads subscriptions (#33990)
`TestSubscribePendingTxHashes` hangs indefinitely because pending tx
events are permanently missed due to a race condition in
`NewPendingTransactions` (and `NewHeads`). Both handlers called their
event subscription functions (`SubscribePendingTxs`,
`SubscribeNewHeads`) inside goroutines, so the RPC handler returned the
subscription ID to the client before the filter was installed in the
event loop. When the client then sent a transaction, the event fired but
no filter existed to catch it — the event was silently lost.

- Move `SubscribePendingTxs` and `SubscribeNewHeads` calls out of
goroutines so filters are installed synchronously before the RPC
response is sent, matching the pattern already used by `Logs` and
`TransactionReceipts`
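The general shape of the fix, in miniature (the types and names here are illustrative, not the actual filter API): install the subscription synchronously, and only push events from the goroutine.

```go
package main

import "fmt"

type sub struct{ ch chan int }

type eventSystem struct{ subs []*sub }

func (es *eventSystem) subscribe() *sub {
	s := &sub{ch: make(chan int, 8)}
	es.subs = append(es.subs, s)
	return s
}

// newPendingTransactions sketches the corrected handler: the
// subscription is installed *before* the ID is returned, so an event
// fired right after the RPC response cannot be missed.
func newPendingTransactions(es *eventSystem) (id int, s *sub) {
	s = es.subscribe() // synchronous: the filter exists before we return
	go func() {
		for ev := range s.ch {
			fmt.Println("notify:", ev) // only forwarding stays async
		}
	}()
	return len(es.subs), s
}

func main() {
	es := &eventSystem{}
	id, s := newPendingTransactions(es)
	s.ch <- 42 // an event arriving immediately is not lost
	close(s.ch)
	fmt.Println("sub id:", id)
}
```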


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: s1na <1591639+s1na@users.noreply.github.com>
2026-03-12 10:21:45 +08:00
rjl493456442
7d13acd030
core/rawdb, triedb/pathdb: enable trienode history alongside existing data (#33934)
Fixes https://github.com/ethereum/go-ethereum/issues/33907

Notably, there is a behavioral change:
- Previously, Geth would refuse to restart if the existing trienode
history was gapped relative to the state data
- With this PR, a gapped trienode history is reset entirely and
reconstructed from scratch
2026-03-12 09:21:54 +08:00
Guillaume Ballet
59512b1849
cmd/fetchpayload: add payload-building utility (#33919)
This PR adds a command-line tool, fetchpayload, which connects to a
node and gathers all the information needed to create a serialized
payload that can then be passed to the zkvm.
2026-03-11 16:18:42 +01:00
Sina M
3c20e08cba
cmd/geth: add Prague pruning points (#33657)
This PR allows users to prune their nodes up to the Prague fork. It
indirectly depends on #32157 and can't really be merged before eraE
files are widely available for download.

The `--history.chain` flag becomes mandatory for `prune-history`
command. Here I've listed all the edge cases that can happen and how we
behave:

## prune-history Behavior

| From        | To           | Result              |
|-------------|--------------|---------------------|
| full        | postmerge    | prunes              |
| full        | postprague   | prunes              |
| postmerge   | postprague   | prunes further      |
| postprague  | postmerge    | can't unprune       |
| any         | all          | use import-history  |


## Node Startup Behavior

| DB State    | Flag       | Result                                          |
|-------------|------------|-------------------------------------------------|
| fresh       | postprague | syncs from Prague                               |
| full        | postprague | "run prune-history first"                       |
| postmerge   | postprague | "run prune-history first"                       |
| postprague  | postmerge  | "can't unprune, use import-history or fix flag" |
| pruned      | all        | accepts known prune points                      |
2026-03-11 12:47:42 +01:00
bigbear
88f8549d37
cmd/geth: correct misleading flag description in removedb command (#33984)
The `--remove.chain` flag incorrectly described itself as selecting
"state data" for removal, which could mislead operators into removing
the wrong data category. This corrects the description to accurately
reflect that the flag targets chain data (block bodies and receipts).
2026-03-11 16:33:10 +08:00
jwasinger
32f05d68a2
core: end telemetry span for ApplyTransactionWithEVM if error is returned (#33955) 2026-03-11 14:41:43 +08:00
georgehao
f6068e3fb2
eth/tracers: fix accessList StorageKeys return null (#33976)
2026-03-11 11:46:49 +08:00
rjl493456442
27c4ca9df0
eth: resolve finalized from disk if it's not recently announced (#33150)
This PR contains two changes:

First, the finalized header is now resolved from the local chain if it
was not recently announced via `engine_newPayload`.

More importantly, the downloader originally had two code paths for
advancing the pivot block: one in the beacon header fetcher
(`fetchHeaders`), and another in the snap content processor
(`processSnapSyncContent`).

Usually, when new blocks arrive and the local pivot block becomes stale,
this is detected first by `fetchHeaders`. `processSnapSyncContent` is
fully driven by the beacon headers and only detects the stale pivot
block after synchronizing the corresponding chain segment, so the
detection there is redundant.
2026-03-11 11:23:00 +08:00
Sina M
aa417b03a6
core/tracing: fix nonce revert edge case (#33978)
We got a report of a bug in the tracing journal, which is responsible
for emitting events for all state that must be reverted.

The edge case is as follows: on CREATE operations the nonce is
incremented. When a create frame reverts, the nonce increment associated
with it does **not** revert; this works fine on master. One step
further, though: if the parent frame reverts, the nonce **should**
revert, and that is where the bug lies.
2026-03-10 16:53:21 +01:00
rjl493456442
91cec92bf3
core, miner, tests: introduce codedb and simplify cachingDB (#33816)
2026-03-10 08:29:21 +01:00
rjl493456442
b8a3fa7d06
cmd/utils, eth/ethconfig: change default cache settings (#33975)
This PR fixes a regression introduced in https://github.com/ethereum/go-ethereum/pull/33836/changes

Before PR 33836, running mainnet would automatically bump the cache size
to 4GB and trigger a cache re-calculation, specifically setting the key-value 
database cache to 2GB.
 
After PR 33836, this logic was removed, and the cache value is no longer
recomputed if no command line flags are specified. The default key-value 
database cache is 512MB.

This PR bumps the default key-value database cache size alongside the
default cache size for other components (such as snapshot) accordingly.
2026-03-09 23:18:18 +08:00
Muzry
b08aac1dbc
eth/catalyst: allow getPayloadV2 for pre-shanghai payloads (#33932)
I observed failing tests in Hive `engine-withdrawals`:

-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=146
-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=147

```shell
  DEBUG (Withdrawals Fork on Block 2): NextPayloadID before getPayloadV2:
  id=0x01487547e54e8abe version=1
  >> engine_getPayloadV2("0x01487547e54e8abe")
  << error: {"code":-38005,"message":"Unsupported fork"}
  FAIL: Expected no error on EngineGetPayloadV2: error=Unsupported fork
```
 
The same failure pattern occurred for Block 3.

Per Shanghai engine_getPayloadV2 spec, pre-Shanghai payloads should be
accepted via V2 and returned as ExecutionPayloadV1:
- executionPayload: ExecutionPayloadV1 | ExecutionPayloadV2
- ExecutionPayloadV1 MUST be returned if payload timestamp < Shanghai
timestamp
- ExecutionPayloadV2 MUST be returned if payload timestamp >= Shanghai
timestamp

Reference:
-
https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#engine_getpayloadv2

The current implementation only allows GetPayloadV2 in the Shanghai fork
window (`[]forks.Fork{forks.Shanghai}`), so pre-Shanghai payloads are
rejected with "Unsupported fork".

If my interpretation of the spec is incorrect, please let me know and I
can adjust accordingly.
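The spec's version-selection rule reduces to a timestamp comparison; a minimal sketch (the constants and function name are illustrative, not the catalyst package's actual code):

```go
package main

import "fmt"

const (
	payloadV1 = 1 // ExecutionPayloadV1
	payloadV2 = 2 // ExecutionPayloadV2
)

// payloadVersion picks the response encoding per the Shanghai
// engine_getPayloadV2 spec: V1 for pre-Shanghai payloads, V2 otherwise.
// The handler itself must accept both, rather than rejecting
// pre-Shanghai payloads with "Unsupported fork".
func payloadVersion(payloadTime, shanghaiTime uint64) int {
	if payloadTime < shanghaiTime {
		return payloadV1
	}
	return payloadV2
}

func main() {
	fmt.Println(payloadVersion(100, 1000), payloadVersion(1000, 1000))
}
```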

---------

Co-authored-by: muzry.li <muzry.li1@ambergroup.io>
2026-03-09 11:22:58 +01:00
Marius van der Wijden
00540f9469
go.mod: update go-eth-kzg (#33963)
Updates go-eth-kzg to
https://github.com/crate-crypto/go-eth-kzg/releases/tag/v1.5.0
Significantly reduces the allocations in VerifyCellProofBatch, which
account for around 5% of all allocations on my node.

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-08 11:44:29 +01:00
cui
e15d4ccc01
core/types: reduce alloc in hot code path (#33523)
Reduce allocations in calculation of tx cost.

---------

Co-authored-by: weixie.cui <weixie.cui@okg.com>
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
2026-03-07 14:31:36 +01:00
marukai67
0d043d071e
signer/core: prevent nil pointer panics in keystore operations (#33829)
Add nil checks to prevent potential panics when keystore backend is
unavailable in the Clef signer API.
2026-03-06 21:50:30 +01:00
Guillaume Ballet
ecee64ecdc
core: fix TestProcessVerkle flaky test (#33971)
`GenerateChain` commits trie nodes asynchronously, and it can happen
that some nodes aren't making it to the db in time for `GenerateChain`
to open it and find the data it is looking for.
2026-03-06 19:03:05 +01:00
Guillaume Ballet
3f1871524f
trie/bintrie: cache hashes of clean nodes so as not to rehash the whole tree (#33961)
This is an optimization that existed for verkle and the MPT, but that
got dropped during the rebase.

Mark the nodes that were modified as needing recomputation, and skip the
hash computation if this is not needed. Otherwise, the whole tree is
hashed, which kills performance.
2026-03-06 18:06:24 +01:00
Guillaume Ballet
a0fb8102fe
trie/bintrie: fix overflow management in slot key computation (#33951)
The computation of `MAIN_STORAGE_OFFSET` was incorrect, causing the last
byte of the stem to be dropped. This means that there would be a
collision in the hash computation (at the preimage level, not a hash
collision of course) if two keys were only differing at byte 31.
2026-03-05 14:43:31 +01:00
Bosul Mun
344ce84a43
eth/fetcher: fix flaky test by improving event unsubscription (#33950)
The eth package currently has a flaky test related to the tx fetcher.

The issue seems to occur when Unsubscribe is called while sub is nil.
chain.Stop() may be invoked before the loop starts in some tests, but
the exact cause is still under investigation through repeated runs. This
change should at least prevent the error.
2026-03-05 11:48:44 +08:00
Sina M
ce64ab44ed
internal/ethapi: fix gas cap for eth_simulateV1 (#33952)
Fixes a regression in #33593 where a block gas limit > gasCap resulted
in more execution than the gas cap.
2026-03-05 09:09:07 +08:00
J
fc8c10476d
internal/ethapi: add MaxUsedGas field to eth_simulateV1 response (#32789)
closes #32741
2026-03-04 12:37:47 +01:00
Jonny Rhea
402c71f2e2
internal/telemetry: fix undersized span queue causing dropped spans (#33927)
The BatchSpanProcessor queue size was incorrectly set to
DefaultMaxExportBatchSize (512) instead of DefaultMaxQueueSize (2048).

I noticed the issue on bloatnet when analyzing the block building
traces. During a particular run, the miner was including 1000
transactions in a single block. When telemetry is enabled, the miner
creates a span for each transaction added to the block. With the queue
capped at 512, spans were silently dropped when production outpaced the
span export, resulting in incomplete traces with orphaned spans. While
this doesn't eliminate the possibility of drops under extreme
load, using the correct default restores the 4x buffer between queue
capacity and export batch size that the SDK was designed around.
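The failure mode is the classic bounded-queue drop. A toy illustration of why queue capacity matters (the numbers mirror the SDK defaults; the channel-based queue is a simplification of the BatchSpanProcessor, not its actual implementation):

```go
package main

import "fmt"

// enqueue performs a non-blocking send, dropping the span when the
// queue is full: the behavior a span processor falls back to rather
// than blocking the hot path.
func enqueue(q chan int, span int) bool {
	select {
	case q <- span:
		return true
	default:
		return false // silently dropped
	}
}

func main() {
	small := make(chan int, 512)  // the buggy queue size
	right := make(chan int, 2048) // DefaultMaxQueueSize

	dropped := 0
	for i := 0; i < 1000; i++ { // 1000 tx spans in one block
		if !enqueue(small, i) {
			dropped++
		}
		enqueue(right, i)
	}
	// With no consumer draining, the 512-slot queue drops 488 spans
	// while the correctly sized queue drops none.
	fmt.Println(dropped, 1000-len(right))
}
```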
2026-03-04 11:47:10 +01:00
Jonny Rhea
28dad943f6
cmd/geth: set default cache to 4096 (#33836)
Mainnet was already overriding --cache to 4096. This PR just makes this
the default.
2026-03-04 11:21:11 +01:00
Marius van der Wijden
6d0dd08860
core: implement eip-7778: block gas accounting without refunds (#33593)
Implements https://eips.ethereum.org/EIPS/eip-7778

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-04 18:18:18 +08:00
rjl493456442
dd202d4283
core, ethdb, triedb: add batch close (#33708)
Pebble maintains a batch pool to recycle batch objects, but a batch must
be explicitly returned to the pool via the `batch.Close` function. This
PR extends the batch interface with a Close function and invokes
`batch.Close` on some critical code paths.

Memory allocation should be measured before merging this change. It is
also an open question whether `batch.Close` should be applied as widely
as possible, on every invocation.
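The extension amounts to adding Close to the batch interface and returning the object to a pool; a minimal sketch with hypothetical names, not the actual ethdb API:

```go
package main

import (
	"fmt"
	"sync"
)

// Batch is a write batch that must be explicitly released so its
// backing object can be recycled, mirroring Pebble's batch pool.
type Batch struct {
	ops []string
}

var batchPool = sync.Pool{
	New: func() any { return new(Batch) },
}

func NewBatch() *Batch { return batchPool.Get().(*Batch) }

func (b *Batch) Put(k, v string) { b.ops = append(b.ops, k+"="+v) }

// Close resets the batch and returns it to the pool. Callers defer it
// on critical paths, much like closing a file.
func (b *Batch) Close() {
	b.ops = b.ops[:0]
	batchPool.Put(b)
}

func main() {
	b := NewBatch()
	defer b.Close()
	b.Put("key", "value")
	fmt.Println(len(b.ops))
}
```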
2026-03-04 11:17:47 +01:00
Jonny Rhea
814edc5308
core/vm: Switch to branchless normalization and extend EXCHANGE (#33869)
For bal-devnet-3 we need to update the EIP-8024 implementation to the
latest spec changes: https://github.com/ethereum/EIPs/pull/11306

> Note: I deleted tests not specified in the EIP because maintaining
them through EIP changes is too error-prone.
2026-03-04 10:34:27 +01:00
rjl493456442
6d99759f01
cmd, core, eth, tests: prevent state flushing in RPC (#33931)
Fixes https://github.com/ethereum/go-ethereum/issues/33572
2026-03-04 14:40:45 +08:00
DeFi Junkie
fe3a74e610
core/vm: use amsterdam jump table in lookup (#33947)
Return the Amsterdam instruction set from `LookupInstructionSet` when
`IsAmsterdam` is true, so Amsterdam rules no longer fall through to the
Osaka jump table.

---------

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2026-03-04 13:42:25 +08:00
Jonny Rhea
4f75049ea0
miner: avoid unnecessary work after payload resolution (#33943)
In `buildPayload()`, the background goroutine uses a `select` to wait on
the recommit timer, the stop channel, and the end timer. When both
`timer.C` and `payload.stop` are ready simultaneously, Go's `select`
picks a case non-deterministically. This means the loop can enter the
`timer.C` case and perform an unnecessary `generateWork` call even after
the payload has been resolved.

Add a non-blocking check of `payload.stop` at the top of the `timer.C`
case to exit immediately when the payload has already been delivered.
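The non-blocking guard at the top of the timer case looks like this in miniature (the loop and channel names are illustrative, not the miner's actual code):

```go
package main

import (
	"fmt"
	"time"
)

func buildLoop(stop chan struct{}, work func()) {
	timer := time.NewTimer(10 * time.Millisecond)
	defer timer.Stop()
	for {
		select {
		case <-timer.C:
			// Both timer.C and stop may be ready at once; select picks
			// non-deterministically, so re-check stop before doing
			// redundant work.
			select {
			case <-stop:
				return
			default:
			}
			work()
			timer.Reset(10 * time.Millisecond)
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	close(stop) // payload already resolved
	buildLoop(stop, func() { fmt.Println("generateWork") })
	fmt.Println("exited without extra work")
}
```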
2026-03-04 11:58:51 +08:00
Jonny Rhea
773f71bb9e
miner: enable trie prefetcher in block builder (#33945)
2026-03-04 10:17:07 +08:00
Jonny Rhea
856e4d55d8
go.mod: bump go.opentelemetry.io/otel/sdk from 1.39.0 to 1.40.0 (#33946)
https://github.com/ethereum/go-ethereum/pull/33916 + cmd/keeper go mod
tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 22:28:09 +01:00
Felix Lange
db7d3a4e0e version: begin v1.17.2 release cycle
2026-03-03 13:49:09 +01:00
Felix Lange
16783c167c version: release go-ethereum v1.17.1 stable 2026-03-03 13:41:41 +01:00
Felix Lange
9962e2c9f3
p2p/tracker: fix crash in clean when tracker is stopped (#33940) 2026-03-03 12:54:24 +01:00
Sina M
d318e8eba9
node: disable http2 for auth API (#33922)
We got a report that after v1.17.0 a geth-teku node starts to time out
on engine_getBlobsV2 after around 3h of operation. The culprit seems to
be our optional http2 service which Teku attempts first. The exact cause
of the timeout is still unclear.

This PR is more of a workaround than a proper fix until we figure out
the underlying issue. But I don't expect http2 to particularly benefit
engine API throughput and latency, so it should be fine to disable it
for now.
2026-03-03 00:02:44 +01:00
vickkkkkyy
b25080cac0
miner: account for generateWork elapsed time in payload rebuild timer (#33908)
The payload rebuild loop resets the timer with the full Recommit
duration after generateWork returns, making the actual interval
generateWork_elapsed + Recommit instead of Recommit alone.

Since fillTransactions uses Recommit (2s) as its timeout ceiling, the
effective rebuild interval can reach ~4s under heavy blob workloads —
only 1–2 rebuilds in a 6s half-slot window instead of the intended 3.

Fix by subtracting elapsed time from the timer reset.

### Before this fix

```
t=0s  timer fires, generateWork starts
t=2s  fillTransactions times out, timer.Reset(2s)
t=4s  second rebuild starts
t=6s  CL calls getPayload — gets the t=2s result (1 effective rebuild)
```

### After

```
t=0s  timer fires, generateWork starts
t=2s  fillTransactions times out, timer.Reset(2s - 2s = 0)
t=2s  second rebuild starts immediately
t=4s  timer.Reset(0), third rebuild starts
t=6s  CL calls getPayload — gets the t=4s result (3 effective rebuilds)
```
2026-03-03 00:01:55 +01:00
Csaba Kiraly
48cfc97776
core/txpool/blobpool: delay announcement of low fee txs (#33893)
This PR introduces a threshold (relative to current market base fees),
below which we suppress the diffusion of low fee transactions. Once base
fees go down, and if the transactions were not evicted in the meantime,
we release these transactions.

The PR also updates the bucketing logic to be more sensitive, removing
the extra logarithm. The blobpool description is also updated to
reflect the new behavior.

EIP-7918 changed the maximum blob fee decrease that can happen in a
slot. The PR also updates the fee jump calculation to reflect this.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
2026-03-02 23:59:33 +01:00
Csaba Kiraly
1eead2ec33
core/types: fix transaction pool price-heap comparison (#33923)
Fixes priceheap comparison in some edge cases.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
2026-03-02 23:42:39 +01:00
jwasinger
2726c9ef9e
core/vm: enable 8024 instructions in Amsterdam (#33928) 2026-03-02 17:01:06 -05:00
Guillaume Ballet
5695fbc156
.github: set @gballet as codeowner for keeper (#33920)
2026-03-02 06:43:21 -07:00
Guillaume Ballet
825436f043
AGENTS.md: add instruction not to commit binaries (#33921)
I noticed that some autonomous agents have a tendency to commit binaries
if asked to create a PR.
2026-03-02 06:42:38 -07:00
Bosul Mun
723aae2b4e
eth/protocols/eth: drop protocol version eth/68 (#33511)
With this, we are dropping support for protocol version eth/68. The only supported
version is eth/69 now. The p2p receipt encoding logic can be simplified a lot, and
processing of receipts during sync gets a little faster because we now transform
the network encoding into the database encoding directly, without decoding the
receipts first.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-28 21:43:40 +01:00
Delweng
cee751a1ed
eth: fix the flaky test of TestSnapSyncDisabling68 (#33896)
Fix the flaky test found in
https://ci.appveyor.com/project/ethereum/go-ethereum/builds/53601688/job/af5ccvufpm9usq39

1. Increase the timeout from 3+1s to 15s, and use a timer instead of
sleep (in the CI environment, syncing the 1024 blocks may need more time).
2. Add `synced.Load()` to ensure the full async chain sync is finished.

Signed-off-by: Delweng <delweng@gmail.com>
2026-02-27 12:51:01 +01:00
Guillaume Ballet
95c6b05806
trie/bintrie: fix endianness in code chunk key computation (#33900)
The endianness was wrong, which means that the code chunks were stored
in the wrong location in the tree.
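Endianness bugs in key derivation are invisible until lookup; a toy illustration of how the byte order changes where a chunk lands (treating big-endian as the intended order is an assumption for the sketch):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// chunkKey derives an 8-byte location suffix for code chunk i.
// Using the wrong byte order stores chunks at entirely different
// tree positions than the intended ones.
func chunkKey(i uint64, bigEndian bool) [8]byte {
	var k [8]byte
	if bigEndian {
		binary.BigEndian.PutUint64(k[:], i)
	} else {
		binary.LittleEndian.PutUint64(k[:], i)
	}
	return k
}

func main() {
	fmt.Printf("big:    %x\n", chunkKey(1, true))  // 0000000000000001
	fmt.Printf("little: %x\n", chunkKey(1, false)) // 0100000000000000
}
```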
2026-02-27 11:35:13 +01:00
Felix Lange
7793e00f0d
Dockerfile: upgrade to Go 1.26 (#33899)
We didn't upgrade to 1.25, so this jumps over one version. I want to
upgrade all builds to Go 1.26 soon, but let's start with the Docker
build to get a sense of any possible issues.
2026-02-26 21:18:00 +01:00
Marius van der Wijden
1b1133d669
go.mod: update ckzg (#33901) 2026-02-26 20:04:25 +01:00
rjl493456442
be92f5487e
trie: error out for unexpected key-value pairs preceding the range (#33898)
2026-02-26 23:00:02 +08:00
Sina M
8a4345611d
build: update ubuntu distros list (#33864)
The `plucky` and `oracular` distros have reached end of life. That's why
launchpad isn't building them anymore:
https://launchpad.net/~ethereum/+archive/ubuntu/ethereum/+packages.
2026-02-26 13:55:53 +01:00
Marius van der Wijden
f811bfe4fd
core/vm: implement eip-7843: SLOTNUM (#33589)
Implements the slotnum opcode as specified here:
https://eips.ethereum.org/EIPS/eip-7843
2026-02-26 13:53:46 +01:00
Guillaume Ballet
406a852ec8
AGENTS.md: add AGENTS.md (#33890)
Co-authored-by: tellabg <249254436+tellabg@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
2026-02-24 23:08:23 -07:00
ANtutov
2a45272408
eth/protocols/eth: fix handshake timeout metrics classification (#33539)
Previously, handshake timeouts were recorded as generic peer errors
instead of timeout errors. waitForHandshake passed a raw
p2p.DiscReadTimeout into markError, but markError classified errors only
via errors.Unwrap(err), which returns nil for non-wrapped errors. As a
result, the timeoutError meter was never incremented and all such
failures fell into the peerError bucket.

This change makes markError switch on the base error, using
errors.Unwrap(err) when available and falling back to the original error
otherwise. With this adjustment, p2p.DiscReadTimeout is correctly mapped
to timeoutError, while existing behaviour for the other wrapped sentinel
errors remains unchanged.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
2026-02-24 21:50:26 -07:00
Fynn
8450e40798
cmd/geth: add inspect trie tool to analysis trie storage (#28892)
This PR adds a tool named `inspect-trie`, aimed at analyzing the MPT
and its node storage more efficiently.

## Example
 ./geth db inspect-trie --datadir server/data-seed/ latest 4000

## Result

- MPT shape
- Account Trie 
- Top N Storage Trie
```
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |      76      |     32      |      74      |
|   -   |   3   |      66      |      1      |      66      |
|   -   |   4   |      2       |      0      |      2       |
| Total |  144  |      50      |     142     |
+-------+-------+--------------+-------------+--------------+
AccountTrie
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |     108      |     84      |     104      |
|   -   |   3   |     195      |      5      |     195      |
|   -   |   4   |      10      |      0      |      10      |
| Total |  313  |     106      |     309     |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xc874e65ccffb133d9db4ff637e62532ef6ecef3223845d02f522c55786782911
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |      57      |     14      |      56      |
|   -   |   3   |      33      |      0      |      33      |
| Total |  90   |      31      |     89      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x1d7dcb6a0ce5227c5379fc5b0e004561d7833b063355f69bfea3178f08fbaab4
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      5       |      8      |      5       |
|   -   |   2   |      16      |      1      |      16      |
|   -   |   3   |      2       |      0      |      2       |
| Total |  23   |      10      |     23      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xaa8a4783ebbb3bec45d3e804b3c59bfd486edfa39cbeda1d42bf86c08a0ebc0f
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      9       |      3      |      9       |
|   -   |   2   |      7       |      1      |      7       |
|   -   |   3   |      2       |      0      |      2       |
| Total |  18   |      5       |     18      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x9d2804d0562391d7cfcfaf0013f0352e176a94403a58577ebf82168a21514441
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      6       |      4      |      6       |
|   -   |   2   |      8       |      0      |      8       |
| Total |  14   |      5       |     14      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x17e3eb95d0e6e92b42c0b3e95c6e75080c9fcd83e706344712e9587375de96e1
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      5       |      3      |      5       |
|   -   |   2   |      7       |      0      |      7       |
| Total |  12   |      4       |     12      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xc017ca90c8aa37693c38f80436bb15bde46d7b30a503aa808cb7814127468a44
Contract Trie, total trie num: 142, ShortNodeCnt: 620, FullNodeCnt: 204, ValueNodeCnt: 615
```

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
2026-02-24 10:56:00 -07:00
cui
9ecb6c4ae6
core: reduce alloc (#33576)
tx.GasPrice()/GasFeeCap()/GasTipCap() already allocate and return a new big.Int, so the extra defensive copy can be dropped.  
bench result:  
```
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/core
cpu: Apple M4
                        │   old.txt   │               new.txt               │
                        │   sec/op    │   sec/op     vs base                │
TransactionToMessage-10   240.1n ± 7%   175.1n ± 7%  -27.09% (p=0.000 n=10)

                        │  old.txt   │              new.txt               │
                        │    B/op    │    B/op     vs base                │
TransactionToMessage-10   544.0 ± 0%   424.0 ± 0%  -22.06% (p=0.000 n=10)

                        │  old.txt   │              new.txt               │
                        │ allocs/op  │ allocs/op   vs base                │
TransactionToMessage-10   17.00 ± 0%   11.00 ± 0%  -35.29% (p=0.000 n=10)
```
benchmark code:  

```
// Copyright 2025 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package core

import (
	"math/big"
	"testing"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/params"
)

// BenchmarkTransactionToMessage benchmarks the TransactionToMessage function.
func BenchmarkTransactionToMessage(b *testing.B) {
	key, _ := crypto.GenerateKey()
	signer := types.LatestSigner(params.TestChainConfig)
	to := common.HexToAddress("0x000000000000000000000000000000000000dead")
	
	// Create a DynamicFeeTx transaction
	txdata := &types.DynamicFeeTx{
		ChainID:   big.NewInt(1),
		Nonce:     42,
		GasTipCap: big.NewInt(1000000000),  // 1 gwei
		GasFeeCap: big.NewInt(2000000000),  // 2 gwei
		Gas:       21000,
		To:        &to,
		Value:     big.NewInt(1000000000000000000), // 1 ether
		Data:      []byte{0x12, 0x34, 0x56, 0x78},
		AccessList: types.AccessList{
			types.AccessTuple{
				Address:     common.HexToAddress("0x0000000000000000000000000000000000000001"),
				StorageKeys: []common.Hash{
					common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000001"),
				},
			},
		},
	}
	tx, _ := types.SignNewTx(key, signer, txdata)
	baseFee := big.NewInt(1500000000) // 1.5 gwei

	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_, err := TransactionToMessage(tx, signer, baseFee)
		if err != nil {
			b.Fatal(err)
		}
	}
}
```
2026-02-24 07:40:01 -07:00
rjl493456442
e636e4e3c1
core/state: track slot reads for empty storage (#33743)
From https://eips.ethereum.org/EIPS/eip-7928:

> SELFDESTRUCT (in-transaction): Accounts destroyed within a transaction
   MUST be included in AccountChanges without nonce or code changes. 
   However, if the account had a positive balance pre-transaction, the
   balance change to zero MUST be recorded. Storage keys within the self-destructed
   contracts that were modified or read MUST be included as a storage_reads
   entry.

The storage read against the empty contract (zero storage) should also
be recorded in the BAL's readlist.
2026-02-24 21:57:50 +08:00
rjl493456442
cbf3d8fed2
core/vm: touch precompile object with Amsterdam enabled (#33742)
https://eips.ethereum.org/EIPS/eip-7928 spec:

> Precompiled contracts: Precompiles MUST be included when accessed. 
   If a precompile receives value, it is recorded with a balance change.
   Otherwise, it is included with empty change lists.

Since the Amsterdam fork, precompiled contracts were not explicitly touched when invoked; this change fixes that.
2026-02-24 21:55:10 +08:00
rjl493456442
199ac16e07
core/types/bal: change code change type to list (#33774)
To align with the latest spec of EIP-7928:

```
# CodeChange: [block_access_index, new_code]
CodeChange = [BlockAccessIndex, Bytecode]
```
2026-02-24 21:53:20 +08:00
IONode Online
01083736c8
core/txpool/blobpool: remove unused adds slice in Add() (#33887) 2026-02-24 20:24:16 +08:00
Csaba Kiraly
59ad40e562
eth: check for tx on chain as well (#33607)
The fetcher should not fetch transactions that are already on chain.
Until now we were only checking the txpool, but that does not contain old transactions. This led to extra fetches of transactions that were announced by a peer but are already on chain.

Here we extend the check to the chain as well.
2026-02-24 11:21:03 +01:00
CPerezz
c2e1785a48
eth/protocols/snap: restore peers to idle pool on request revert (#33790)
All five `revert*Request` functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked set
but never restore the peer to its corresponding idle pool. When a
request times out and no response arrives, the peer is permanently lost
from the idle pool, preventing new work from being assigned to it.

In normal operation mode (snap-sync full state) this bug is masked by
pivot movement (which resets idle pools via new Sync() cycles every ~15
minutes) and peer churn (reconnections re-add peers via Register()).
However, in scenarios like my [partial-stateful node](https://github.com/ethereum/go-ethereum/pull/33764) with
long-running sync cycles and few peers, all peers can eventually leak
out of the idle pools, stalling sync entirely.

Fix: after deleting from the request map, restore the peer to its idle
pool if it is still registered (guards against the peer-drop path where
Unregister already removed the peer). This mirrors the pattern used in
all five On* response handlers.


This only seems to manifest in peer-starved scenarios, such as the one I
hit while testing snap sync for the partial-stateful node. Still, it
seemed worth raising, even if it needs no further discussion.
2026-02-24 09:14:11 +08:00
Nakanishi Hiro
82fad31540
internal/ethapi: add eth_getStorageValues method (#32591)
Implements the new eth_getStorageValues method. It returns storage
values for a list of contracts.

Spec: https://github.com/ethereum/execution-apis/pull/756

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-02-23 20:47:30 +01:00
vickkkkkyy
1625064c68
internal/ethapi: include AuthorizationList in gas estimation (#33849)
Fixes an issue where AuthorizationList wasn't copied over when
estimating gas for a user-provided transaction.
2026-02-23 18:07:26 +01:00
Marius van der Wijden
1d1a094d51
beacon/blsync: ignore beacon syncer reorging errors (#33628)
Downgrades beacon syncer reorging from Error to Debug
closes https://github.com/ethereum/go-ethereum/issues/29916
2026-02-23 16:02:23 +01:00
Marius van der Wijden
e40aa46e88
eth/catalyst: implement testing_buildBlockV1 (#33656)
implements
https://github.com/ethereum/execution-apis/pull/710/changes#r2712256529

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-23 15:56:31 +01:00
Csaba Kiraly
d3dd48e59d
metrics: allow changing influxdb interval (#33767)
The PR exposes the InfluxDB reporting interval as a CLI parameter; it
was previously fixed at 10s. The default is kept at 10s.
Note that decreasing the interval comes with notable extra traffic and
load on InfluxDB.
2026-02-23 14:27:25 +01:00
Felix Lange
00cbd2e6f4
p2p/discover/v5wire: use Whoareyou.ChallengeData instead of storing encoded packet (#31547)
This changes the challenge resend logic again to use the existing
`ChallengeData` field of `v5wire.Whoareyou` instead of storing a second
copy of the packet in `Whoareyou.Encoded`. It's more correct this way
since `ChallengeData` is supposed to be the data that is used by the ID
verification procedure.

Also adapts the cross-client test to verify this behavior.

Follow-up to #31543
2026-02-22 21:58:47 +01:00
Felix Lange
453d0f9299
build: upgrade to golangci-lint v2.10.1 (#33875)
2026-02-21 20:53:02 +01:00
Felix Lange
6d865ccd30
build: upgrade -dlgo version to 1.25.7 (#33874) 2026-02-21 20:52:43 +01:00
rjl493456442
54f72c796f
core/rawdb: revert "check pruning tail in HasBody and HasReceipts" (#33865)
Reverts ethereum/go-ethereum#33747.

This change suffers an unexpected issue during the sync with
`history.chain=postmerge`.
2026-02-19 11:43:44 +01:00
Sina M
2a62df3815
.github: fix actions 32bit test (#33866)
2026-02-18 18:28:53 +01:00
rjl493456442
01fe1d716c
core/vm: disable the value transfer in syscall (#33741)
In src/ethereum/forks/amsterdam/vm/interpreter.py:299-304, the caller
address is
only tracked for block level accessList when there's a value transfer:

```python
if message.should_transfer_value and message.value != 0:  
    # Track value transfer  
    sender_balance = get_account(state, message.caller).balance  
    recipient_balance = get_account(state, message.current_target).balance  

    track_address(message.state_changes, message.caller)  # Line 304
```

Since system transactions have should_transfer_value=False and value=0, 
this condition is never met, so the caller (SYSTEM_ADDRESS) is not
tracked.

The same condition is now applied to the syscall in the geth
implementation, aligning with the EIP-7928 spec.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-18 08:40:23 +08:00
spencer
3eed0580d4
cmd/evm: add --opcode.count flag to t8n (#33800)
Adds `--opcode.count=<file>` flag to `evm t8n` that writes per-opcode
execution frequency counts to a JSON file (relative to
`--output.basedir`).

---------

Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-02-17 20:42:53 +01:00
Felix Lange
1054276906 version: begin v1.17.1 release cycle
2026-02-17 17:17:00 +01:00
307 changed files with 20281 additions and 12336 deletions

1
.github/CODEOWNERS vendored
View file

@ -10,6 +10,7 @@ beacon/merkle/ @zsfelfoldi
beacon/types/ @zsfelfoldi @fjl beacon/types/ @zsfelfoldi @fjl
beacon/params/ @zsfelfoldi @fjl beacon/params/ @zsfelfoldi @fjl
cmd/evm/ @MariusVanDerWijden @lightclient cmd/evm/ @MariusVanDerWijden @lightclient
cmd/keeper/ @gballet
core/state/ @rjl493456442 core/state/ @rjl493456442
crypto/ @gballet @jwasinger @fjl crypto/ @gballet @jwasinger @fjl
core/ @rjl493456442 core/ @rjl493456442

View file

@ -69,8 +69,8 @@ jobs:
- name: Install cross toolchain - name: Install cross toolchain
run: | run: |
apt-get update sudo apt-get update
apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib sudo apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib
- name: Build - name: Build
run: go run build/ci.go test -arch 386 -short -p 8 run: go run build/ci.go test -arch 386 -short -p 8

102
AGENTS.md Normal file
View file

@ -0,0 +1,102 @@
# AGENTS
## Guidelines
- **Keep changes minimal and focused.** Only modify code directly related to the task at hand. Do not refactor unrelated code, rename existing variables or functions for style, or bundle unrelated fixes into the same commit or PR.
- **Do not add, remove, or update dependencies** unless the task explicitly requires it.
## Pre-Commit Checklist
Before every commit, run **all** of the following checks and ensure they pass:
### 1. Formatting
Before committing, always run `gofmt` and `goimports` on all modified files:
```sh
gofmt -w <modified files>
goimports -w <modified files>
```
### 2. Build All Commands
Verify that all tools compile successfully:
```sh
make all
```
This builds all executables under `cmd/`, including `keeper` which has special build requirements.
### 3. Tests
While iterating during development, use `-short` for faster feedback:
```sh
go run ./build/ci.go test -short
```
Before committing, run the full test suite **without** `-short` to ensure all tests pass, including the Ethereum execution-spec tests and all state/block test permutations:
```sh
go run ./build/ci.go test
```
### 4. Linting
```sh
go run ./build/ci.go lint
```
This runs additional style checks. Fix any issues before committing.
### 5. Generated Code
```sh
go run ./build/ci.go check_generate
```
Ensures that all generated files (e.g., `gen_*.go`) are up to date. If this fails, first install the required code generators by running `make devtools`, then run the appropriate `go generate` commands and include the updated files in your commit.
### 6. Dependency Hygiene
```sh
go run ./build/ci.go check_baddeps
```
Verifies that no forbidden dependencies have been introduced.
## What to include in commits
Do not commit binaries, whether they are produced by the main build or byproducts of investigations.
## Commit Message Format
Commit messages must be prefixed with the package(s) they modify, followed by a short lowercase description:
```
<package(s)>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `eth, rpc: make trace configs optional`
- `cmd/geth: add new flag for sync mode`
Use comma-separated package names when multiple areas are affected. Keep the description concise.
## Pull Request Title Format
PR titles follow the same convention as commit messages:
```
<list of modified paths>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `core, eth: add arena allocator support`
- `cmd/geth, internal/ethapi: refactor transaction args`
- `trie/archiver: streaming subtree archival to fix OOM`
Use the top-level package paths, comma-separated if multiple areas are affected. Only mention the directories with functional changes; interface changes that trickle all over the codebase should not generate an exhaustive list. The description should be a short, lowercase summary of the change.

View file

@ -4,7 +4,7 @@ ARG VERSION=""
ARG BUILDNUM="" ARG BUILDNUM=""
# Build Geth in a stock Go builder container # Build Geth in a stock Go builder container
FROM golang:1.24-alpine AS builder FROM golang:1.26-alpine AS builder
RUN apk add --no-cache gcc musl-dev linux-headers git RUN apk add --no-cache gcc musl-dev linux-headers git

View file

@ -4,7 +4,7 @@ ARG VERSION=""
ARG BUILDNUM="" ARG BUILDNUM=""
# Build Geth in a stock Go builder container # Build Geth in a stock Go builder container
FROM golang:1.24-alpine AS builder FROM golang:1.26-alpine AS builder
RUN apk add --no-cache gcc musl-dev linux-headers git RUN apk add --no-cache gcc musl-dev linux-headers git

View file

@ -87,6 +87,10 @@ func (ec *engineClient) updateLoop(headCh <-chan types.ChainHeadEvent) {
if status, err := ec.callForkchoiceUpdated(forkName, event); err == nil { if status, err := ec.callForkchoiceUpdated(forkName, event); err == nil {
log.Info("Successful ForkchoiceUpdated", "head", event.Block.Hash(), "status", status) log.Info("Successful ForkchoiceUpdated", "head", event.Block.Hash(), "status", status)
} else { } else {
if err.Error() == "beacon syncer reorging" {
log.Debug("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
continue // ignore beacon syncer reorging errors, this error can occur if the blsync is skipping a block
}
log.Error("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err) log.Error("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
} }
} }

View file

@ -21,6 +21,7 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"` SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"` BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
} }
var enc PayloadAttributes var enc PayloadAttributes
enc.Timestamp = hexutil.Uint64(p.Timestamp) enc.Timestamp = hexutil.Uint64(p.Timestamp)
@ -28,6 +29,7 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient
enc.Withdrawals = p.Withdrawals enc.Withdrawals = p.Withdrawals
enc.BeaconRoot = p.BeaconRoot enc.BeaconRoot = p.BeaconRoot
enc.SlotNumber = (*hexutil.Uint64)(p.SlotNumber)
return json.Marshal(&enc) return json.Marshal(&enc)
} }
@ -39,6 +41,7 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
SuggestedFeeRecipient *common.Address `json:"suggestedFeeRecipient" gencodec:"required"` SuggestedFeeRecipient *common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"` BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
} }
var dec PayloadAttributes var dec PayloadAttributes
if err := json.Unmarshal(input, &dec); err != nil { if err := json.Unmarshal(input, &dec); err != nil {
@ -62,5 +65,8 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
if dec.BeaconRoot != nil { if dec.BeaconRoot != nil {
p.BeaconRoot = dec.BeaconRoot p.BeaconRoot = dec.BeaconRoot
} }
if dec.SlotNumber != nil {
p.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil return nil
} }

View file

@ -34,6 +34,7 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"` BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"`
ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"` ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
} }
var enc ExecutableData var enc ExecutableData
enc.ParentHash = e.ParentHash enc.ParentHash = e.ParentHash
@ -58,6 +59,7 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
enc.Withdrawals = e.Withdrawals enc.Withdrawals = e.Withdrawals
enc.BlobGasUsed = (*hexutil.Uint64)(e.BlobGasUsed) enc.BlobGasUsed = (*hexutil.Uint64)(e.BlobGasUsed)
enc.ExcessBlobGas = (*hexutil.Uint64)(e.ExcessBlobGas) enc.ExcessBlobGas = (*hexutil.Uint64)(e.ExcessBlobGas)
enc.SlotNumber = (*hexutil.Uint64)(e.SlotNumber)
return json.Marshal(&enc) return json.Marshal(&enc)
} }
@ -81,6 +83,7 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"` BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"`
ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"` ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
} }
var dec ExecutableData var dec ExecutableData
if err := json.Unmarshal(input, &dec); err != nil { if err := json.Unmarshal(input, &dec); err != nil {
@ -154,5 +157,8 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
if dec.ExcessBlobGas != nil { if dec.ExcessBlobGas != nil {
e.ExcessBlobGas = (*uint64)(dec.ExcessBlobGas) e.ExcessBlobGas = (*uint64)(dec.ExcessBlobGas)
} }
if dec.SlotNumber != nil {
e.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil return nil
} }

View file

@ -50,6 +50,13 @@ var (
// ExecutionPayloadV3 has the syntax of ExecutionPayloadV2 and appends the new // ExecutionPayloadV3 has the syntax of ExecutionPayloadV2 and appends the new
// fields: blobGasUsed and excessBlobGas. // fields: blobGasUsed and excessBlobGas.
PayloadV3 PayloadVersion = 0x3 PayloadV3 PayloadVersion = 0x3
// PayloadV4 is the identifier of ExecutionPayloadV4 introduced in amsterdam fork.
//
// https://github.com/ethereum/execution-apis/blob/main/src/engine/amsterdam.md#executionpayloadv4
// ExecutionPayloadV4 has the syntax of ExecutionPayloadV3 and appends the new
// field slotNumber.
PayloadV4 PayloadVersion = 0x4
) )
//go:generate go run github.com/fjl/gencodec -type PayloadAttributes -field-override payloadAttributesMarshaling -out gen_blockparams.go //go:generate go run github.com/fjl/gencodec -type PayloadAttributes -field-override payloadAttributesMarshaling -out gen_blockparams.go
@ -62,11 +69,13 @@ type PayloadAttributes struct {
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"` SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"` BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *uint64 `json:"slotNumber"`
} }
// JSON type overrides for PayloadAttributes. // JSON type overrides for PayloadAttributes.
type payloadAttributesMarshaling struct { type payloadAttributesMarshaling struct {
Timestamp hexutil.Uint64 Timestamp hexutil.Uint64
SlotNumber *hexutil.Uint64
} }
//go:generate go run github.com/fjl/gencodec -type ExecutableData -field-override executableDataMarshaling -out gen_ed.go //go:generate go run github.com/fjl/gencodec -type ExecutableData -field-override executableDataMarshaling -out gen_ed.go
@ -90,6 +99,7 @@ type ExecutableData struct {
Withdrawals []*types.Withdrawal `json:"withdrawals"` Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *uint64 `json:"blobGasUsed"` BlobGasUsed *uint64 `json:"blobGasUsed"`
ExcessBlobGas *uint64 `json:"excessBlobGas"` ExcessBlobGas *uint64 `json:"excessBlobGas"`
SlotNumber *uint64 `json:"slotNumber"`
} }
// JSON type overrides for executableData. // JSON type overrides for executableData.
@ -104,6 +114,7 @@ type executableDataMarshaling struct {
Transactions []hexutil.Bytes Transactions []hexutil.Bytes
BlobGasUsed *hexutil.Uint64 BlobGasUsed *hexutil.Uint64
ExcessBlobGas *hexutil.Uint64 ExcessBlobGas *hexutil.Uint64
SlotNumber *hexutil.Uint64
} }
// StatelessPayloadStatusV1 is the result of a stateless payload execution. // StatelessPayloadStatusV1 is the result of a stateless payload execution.
@ -213,7 +224,7 @@ func encodeTransactions(txs []*types.Transaction) [][]byte {
return enc return enc
} }
func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) { func DecodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
var txs = make([]*types.Transaction, len(enc)) var txs = make([]*types.Transaction, len(enc))
for i, encTx := range enc { for i, encTx := range enc {
var tx types.Transaction var tx types.Transaction
@ -251,7 +262,7 @@ func ExecutableDataToBlock(data ExecutableData, versionedHashes []common.Hash, b
// for stateless execution, so it skips checking if the executable data hashes to // for stateless execution, so it skips checking if the executable data hashes to
// the requested hash (stateless has to *compute* the root hash, it's not given). // the requested hash (stateless has to *compute* the root hash, it's not given).
func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.Hash, beaconRoot *common.Hash, requests [][]byte) (*types.Block, error) { func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.Hash, beaconRoot *common.Hash, requests [][]byte) (*types.Block, error) {
txs, err := decodeTransactions(data.Transactions) txs, err := DecodeTransactions(data.Transactions)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -313,6 +324,7 @@ func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.H
BlobGasUsed: data.BlobGasUsed, BlobGasUsed: data.BlobGasUsed,
ParentBeaconRoot: beaconRoot, ParentBeaconRoot: beaconRoot,
RequestsHash: requestsHash, RequestsHash: requestsHash,
SlotNumber: data.SlotNumber,
} }
return types.NewBlockWithHeader(header). return types.NewBlockWithHeader(header).
WithBody(types.Body{Transactions: txs, Uncles: nil, Withdrawals: data.Withdrawals}), WithBody(types.Body{Transactions: txs, Uncles: nil, Withdrawals: data.Withdrawals}),
@ -340,6 +352,7 @@ func BlockToExecutableData(block *types.Block, fees *big.Int, sidecars []*types.
Withdrawals: block.Withdrawals(), Withdrawals: block.Withdrawals(),
BlobGasUsed: block.BlobGasUsed(), BlobGasUsed: block.BlobGasUsed(),
ExcessBlobGas: block.ExcessBlobGas(), ExcessBlobGas: block.ExcessBlobGas(),
SlotNumber: block.SlotNumber(),
} }
// Add blobs. // Add blobs.

View file

@ -438,14 +438,11 @@ func (s *serverWithLimits) fail(desc string) {
// failLocked calculates the dynamic failure delay and applies it. // failLocked calculates the dynamic failure delay and applies it.
func (s *serverWithLimits) failLocked(desc string) { func (s *serverWithLimits) failLocked(desc string) {
log.Debug("Server error", "description", desc) log.Debug("Server error", "description", desc)
s.failureDelay *= 2
now := s.clock.Now() now := s.clock.Now()
if now > s.failureDelayEnd { if now > s.failureDelayEnd {
s.failureDelay *= math.Pow(2, -float64(now-s.failureDelayEnd)/float64(maxFailureDelay)) s.failureDelay *= math.Pow(2, -float64(now-s.failureDelayEnd)/float64(maxFailureDelay))
} }
if s.failureDelay < float64(minFailureDelay) { s.failureDelay = max(min(s.failureDelay*2, float64(maxFailureDelay)), float64(minFailureDelay))
s.failureDelay = float64(minFailureDelay)
}
s.failureDelayEnd = now + mclock.AbsTime(s.failureDelay) s.failureDelayEnd = now + mclock.AbsTime(s.failureDelay)
s.delay(time.Duration(s.failureDelay)) s.delay(time.Duration(s.failureDelay))
} }

View file

@ -62,7 +62,6 @@ const (
ssNeedParent // cp header slot %32 != 0, need parent to check epoch boundary ssNeedParent // cp header slot %32 != 0, need parent to check epoch boundary
ssParentRequested // cp parent header requested ssParentRequested // cp parent header requested
ssPrintStatus // has all necessary info, print log message if init still not successful ssPrintStatus // has all necessary info, print log message if init still not successful
ssDone // log message printed, no more action required
) )
type serverState struct { type serverState struct {
@ -180,7 +179,8 @@ func (s *CheckpointInit) Process(requester request.Requester, events []request.E
default: default:
log.Error("blsync: checkpoint not available, but reported as finalized; specified checkpoint hash might be too old", "server", server.Name()) log.Error("blsync: checkpoint not available, but reported as finalized; specified checkpoint hash might be too old", "server", server.Name())
} }
s.serverState[server] = serverState{state: ssDone} s.serverState[server] = serverState{state: ssDefault}
requester.Fail(server, "checkpoint init failed")
} }
} }

View file

@ -5,81 +5,102 @@
# https://github.com/ethereum/execution-spec-tests/releases/download/v5.1.0 # https://github.com/ethereum/execution-spec-tests/releases/download/v5.1.0
a3192784375acec7eaec492799d5c5d0c47a2909a3cc40178898e4ecd20cc416 fixtures_develop.tar.gz a3192784375acec7eaec492799d5c5d0c47a2909a3cc40178898e4ecd20cc416 fixtures_develop.tar.gz
# version:golang 1.25.1 # version:golang 1.25.7
# https://go.dev/dl/ # https://go.dev/dl/
d010c109cee94d80efe681eab46bdea491ac906bf46583c32e9f0dbb0bd1a594 go1.25.1.src.tar.gz 178f2832820274b43e177d32f06a3ebb0129e427dd20a5e4c88df2c1763cf10a go1.25.7.src.tar.gz
1d622468f767a1b9fe1e1e67bd6ce6744d04e0c68712adc689748bbeccb126bb go1.25.1.darwin-amd64.tar.gz 81bf2a1f20633f62d55d826d82dde3b0570cf1408a91e15781b266037299285b go1.25.7.aix-ppc64.tar.gz
68deebb214f39d542e518ebb0598a406ab1b5a22bba8ec9ade9f55fb4dd94a6c go1.25.1.darwin-arm64.tar.gz bf5050a2152f4053837b886e8d9640c829dbacbc3370f913351eb0904cb706f5 go1.25.7.darwin-amd64.tar.gz
d03cdcbc9bd8baf5cf028de390478e9e2b3e4d0afe5a6582dedc19bfe6a263b2 go1.25.1.linux-386.tar.gz ff18369ffad05c57d5bed888b660b31385f3c913670a83ef557cdfd98ea9ae1b go1.25.7.darwin-arm64.tar.gz
7716a0d940a0f6ae8e1f3b3f4f36299dc53e31b16840dbd171254312c41ca12e go1.25.1.linux-amd64.tar.gz c5dccd7f192dd7b305dc209fb316ac1917776d74bd8e4d532ef2772f305bf42a go1.25.7.dragonfly-amd64.tar.gz
65a3e34fb2126f55b34e1edfc709121660e1be2dee6bdf405fc399a63a95a87d go1.25.1.linux-arm64.tar.gz a2de97c8ac74bf64b0ae73fe9d379e61af530e061bc7f8f825044172ffe61a8b go1.25.7.freebsd-386.tar.gz
eb949be683e82a99e9861dafd7057e31ea40b161eae6c4cd18fdc0e8c4ae6225 go1.25.1.linux-armv6l.tar.gz 055f9e138787dcafa81eb0314c8ff70c6dd0f6dba1e8a6957fef5d5efd1ab8fd go1.25.7.freebsd-amd64.tar.gz
be13d5479b8c75438f2efcaa8c191fba3af684b3228abc9c99c7aa8502f34424 go1.25.1.windows-386.zip 60e7f7a7c990f0b9539ac8ed668155746997d404643a4eecd47b3dee1b7e710b go1.25.7.freebsd-arm.tar.gz
4a974de310e7ee1d523d2fcedb114ba5fa75408c98eb3652023e55ccf3fa7cab go1.25.1.windows-amd64.zip 631e03d5fd4c526e2f499154d8c6bf4cb081afb2fff171c428722afc9539d53a go1.25.7.freebsd-arm64.tar.gz
45ab4290adbd6ee9e7f18f0d57eaa9008fdbef590882778ed93eac3c8cca06c5 go1.25.1.aix-ppc64.tar.gz 8a264fd685823808140672812e3ad9c43f6ad59444c0dc14cdd3a1351839ddd5 go1.25.7.freebsd-riscv64.tar.gz
2e3c1549bed3124763774d648f291ac42611232f48320ebbd23517c909c09b81 go1.25.1.dragonfly-amd64.tar.gz 57c672447d906a1bcab98f2b11492d54521a791aacbb4994a25169e59cbe289a go1.25.7.illumos-amd64.tar.gz
dc0198dd4ec520e13f26798def8750544edf6448d8e9c43fd2a814e4885932af go1.25.1.freebsd-386.tar.gz 2866517e9ca81e6a2e85a930e9b11bc8a05cfeb2fc6dc6cb2765e7fb3c14b715 go1.25.7.linux-386.tar.gz
c4f1a7e7b258406e6f3b677ecdbd97bbb23ff9c0d44be4eb238a07d360f69ac8 go1.25.1.freebsd-amd64.tar.gz 12e6d6a191091ae27dc31f6efc630e3a3b8ba409baf3573d955b196fdf086005 go1.25.7.linux-amd64.tar.gz
7772fc5ff71ed39297ec0c1599fc54e399642c9b848eac989601040923b0de9c go1.25.1.freebsd-arm.tar.gz ba611a53534135a81067240eff9508cd7e256c560edd5d8c2fef54f083c07129 go1.25.7.linux-arm64.tar.gz
5bb011d5d5b6218b12189f07aa0be618ab2002662fff1ca40afba7389735c207 go1.25.1.freebsd-arm64.tar.gz 1ba07e0eb86b839e72467f4b5c7a5597d07f30bcf5563c951410454f7cda5266 go1.25.7.linux-armv6l.tar.gz
ccac716240cb049bebfafcb7eebc3758512178a4c51fc26da9cc032035d850c8 go1.25.1.freebsd-riscv64.tar.gz 775753fc5952a334c415f08768df2f0b73a3228a16e8f5f63d545daacb4e3357 go1.25.7.linux-loong64.tar.gz
cc53910ffb9fcfdd988a9fa25b5423bae1cfa01b19616be646700e1f5453b466 go1.25.1.illumos-amd64.tar.gz 1a023bb367c5fbb4c637a2f6dc23ff17c6591ad929ce16ea88c74d857153b307 go1.25.7.linux-mips.tar.gz
efe809f923bcedab44bf7be2b3af8d182b512b1bf9c07d302e0c45d26c8f56f3 go1.25.1.linux-loong64.tar.gz a8e97223d8aa6fdfd45f132a4784d2f536bbac5f3d63a24b63d33b6bfe1549af go1.25.7.linux-mips64.tar.gz
c0de33679f6ed68991dc42dc4a602e74a666e3e166c1748ee1b5d1a7ea2ffbb2 go1.25.1.linux-mips.tar.gz eb9edb6223330d5e20275667c65dea076b064c08e595fe4eba5d7d6055cfaccf go1.25.7.linux-mips64le.tar.gz
c270f7b0c0bdfbcd54fef4481227c40d41bb518f9ae38ee930870f04a0a6a589 go1.25.1.linux-mips64.tar.gz 9c1e693552a5f9bb9e0012d1c5e01456ecefbc59bef53a77305222ce10aba368 go1.25.7.linux-mipsle.tar.gz
80be871ba9c944f34d1868cdf5047e1cf2e1289fe08cdb90e2453d2f0d6965ae go1.25.1.linux-mips64le.tar.gz 28a788798e7329acbbc0ac2caa5e4368b1e5ede646cc24429c991214cfb45c63 go1.25.7.linux-ppc64.tar.gz
9f09defa9bb22ebf2cde76162f40958564e57ce5c2b3649bc063bebcbc9294c1 go1.25.1.linux-mipsle.tar.gz 42124c0edc92464e2b37b2d7fcd3658f0c47ebd6a098732415a522be8cb88e3f go1.25.7.linux-ppc64le.tar.gz
2c76b7d278c1d43ad19d478ad3f0f05e7b782b64b90870701b314fa48b5f43c6 go1.25.1.linux-ppc64.tar.gz 88d59c6893c8425875d6eef8e3434bc2fa2552e5ad4c058c6cd8cd710a0301c8 go1.25.7.linux-riscv64.tar.gz
8b0c8d3ee5b1b5c28b6bd63dc4438792012e01d03b4bf7a61d985c87edab7d1f go1.25.1.linux-ppc64le.tar.gz c6b77facf666dc68195ecab05dbf0ebb4e755b2a8b7734c759880557f1c29b0c go1.25.7.linux-s390x.tar.gz
22fe934a9d0c9c57275716c55b92d46ebd887cec3177c9140705efa9f84ba1e2 go1.25.1.linux-riscv64.tar.gz f14c184d9ade0ee04c7735d4071257b90896ecbde1b32adae84135f055e6399b go1.25.7.netbsd-386.tar.gz
9cfe517ba423f59f3738ca5c3d907c103253cffbbcc2987142f79c5de8c1bf93 go1.25.1.linux-s390x.tar.gz 7e7389e404dca1088c31f0fc07f1dd60891d7182bcd621469c14f7e79eceb3ff go1.25.7.netbsd-amd64.tar.gz
6af8a08353e76205d5b743dd7a3f0126684f96f62be0a31b75daf9837e512c46 go1.25.1.netbsd-386.tar.gz 70388bb3ef2f03dbf1357e9056bd09034a67e018262557354f8cf549766b3f9d go1.25.7.netbsd-arm.tar.gz
e5d534ff362edb1bd8c8e10892b6a027c4c1482454245d1529167676498684c7 go1.25.1.netbsd-amd64.tar.gz 8c1cda9d25bfc9b18d24d5f95fc23949dd3ff99fa408a6cfa40e2cf12b07e362 go1.25.7.netbsd-arm64.tar.gz
88bcf39254fdcea6a199c1c27d787831b652427ce60851ae9e41a3d7eb477f45 go1.25.1.netbsd-arm.tar.gz 42f0d1bfbe39b8401cccb84dd66b30795b97bfc9620dfdc17c5cd4fcf6495cb0 go1.25.7.openbsd-386.tar.gz
d7c2eabe1d04ee47bcaea2816fdd90dbd25d90d4dfa756faa9786c788e4f3a4e go1.25.1.netbsd-arm64.tar.gz e514879c0a28bc32123cd52c4c093de912477fe83f36a6d07517d066ef55391a go1.25.7.openbsd-amd64.tar.gz
14a2845977eb4dde11d929858c437a043467c427db87899935e90cee04a38d72 go1.25.1.openbsd-386.tar.gz 8cd22530695a0218232bf7efea8f162df1697a3106942ac4129b8c3de39ce4ef go1.25.7.openbsd-arm.tar.gz
d27ac54b38a13a09c81e67c82ac70d387037341c85c3399291c73e13e83fdd8c go1.25.1.openbsd-amd64.tar.gz 938720f6ebc0d1c53d7840321d3a31f29fd02496e84a6538f442a9311dc1cc9a go1.25.7.openbsd-arm64.tar.gz
0f4ab5f02500afa4befd51fed1e8b45e4d07ca050f641cc3acc76eaa4027b2c3 go1.25.1.openbsd-arm.tar.gz a4c378b73b98f89a3596c2ef51aabbb28783d9ca29f7e317d8ca07939660ce6f go1.25.7.openbsd-ppc64.tar.gz
d46c3bd156843656f7f3cb0dec27ea51cd926ec3f7b80744bf8156e67c1c812f go1.25.1.openbsd-arm64.tar.gz 937b58734fbeaa8c7941a0e4285e7e84b7885396e8d11c23f9ab1a8ff10ff20e go1.25.7.openbsd-riscv64.tar.gz
c550514c67f22e409be10e40eace761e2e43069f4ef086ae6e60aac736c2b679 go1.25.1.openbsd-ppc64.tar.gz 61a093c8c5244916f25740316386bb9f141545dcf01b06a79d1c78ece488403e go1.25.7.plan9-386.tar.gz
8a09a8714a2556eb13fc1f10b7ce2553fcea4971e3330fc3be0efd24aab45734 go1.25.1.openbsd-riscv64.tar.gz 7fc8f6689c9de8ccb7689d2278035fa83c2d601409101840df6ddfe09ba58699 go1.25.7.plan9-amd64.tar.gz
b0e1fefaf0c7abd71f139a54eee9767944aff5f0bc9d69c968234804884e552f go1.25.1.plan9-386.tar.gz 9661dff8eaeeb62f1c3aadbc5ff189a2e6744e1ec885e32dbcb438f58a34def5 go1.25.7.plan9-arm.tar.gz
e94732c94f149690aa0ab11c26090577211b4a988137cb2c03ec0b54e750402e go1.25.1.plan9-amd64.tar.gz 28ecba0e1d7950c8b29a4a04962dd49c3bf5221f55a44f17d98f369f82859cf4 go1.25.7.solaris-amd64.tar.gz
7eb80e9de1e817d9089a54e8c7c5c8d8ed9e5fb4d4a012fc0f18fc422a484f0c go1.25.1.plan9-arm.tar.gz baa6b488291801642fa620026169e38bec2da2ac187cd3ae2145721cf826bbc3 go1.25.7.windows-386.zip
1261dfad7c4953c0ab90381bc1242dc54e394db7485c59349428d532b2273343 go1.25.1.solaris-amd64.tar.gz c75e5f4ff62d085cc0017be3ad19d5536f46825fa05db06ec468941f847e3228 go1.25.7.windows-amd64.zip
04bc3c078e9e904c4d58d6ac2532a5bdd402bd36a9ff0b5949b3c5e6006a05ee go1.25.1.windows-arm64.zip 807033f85931bc4a589ca8497535dcbeb1f30d506e47fa200f5f04c4a71c3d9f go1.25.7.windows-arm64.zip
# version:golangci 2.4.0 # version:golangci 2.10.1
# https://github.com/golangci/golangci-lint/releases/ # https://github.com/golangci/golangci-lint/releases/
# https://github.com/golangci/golangci-lint/releases/download/v2.4.0/ # https://github.com/golangci/golangci-lint/releases/download/v2.10.1
7904ce63f79db44934939cf7a063086ea0ea98e9b19eba0a9d52ccdd0d21951c golangci-lint-2.4.0-darwin-amd64.tar.gz 66fb0da81b8033b477f97eea420d4b46b230ca172b8bb87c6610109f3772b6b6 golangci-lint-2.10.1-darwin-amd64.tar.gz
cd4dd53fa09b6646baff5fd22b8c64d91db02c21c7496df27992d75d34feec59 golangci-lint-2.4.0-darwin-arm64.tar.gz 03bfadf67e52b441b7ec21305e501c717df93c959836d66c7f97312654acb297 golangci-lint-2.10.1-darwin-arm64.tar.gz
d58f426ebe14cc257e81562b4bf37a488ffb4ffbbb3ec73041eb3b38bb25c0e1 golangci-lint-2.4.0-freebsd-386.tar.gz c9a44658ccc8f7b8dbbd4ae6020ba91c1a5d3987f4d91ced0f7d2bea013e57ca golangci-lint-2.10.1-freebsd-386.tar.gz
6ec4a6177fc6c0dd541fbcb3a7612845266d020d35cc6fa92959220cdf64ca39 golangci-lint-2.4.0-freebsd-amd64.tar.gz a513c5cb4e0f5bd5767001af9d5e97e7868cfc2d9c46739a4df93e713cfb24af golangci-lint-2.10.1-freebsd-amd64.tar.gz
4d473e3e71c01feaa915a0604fb35758b41284fb976cdeac3f842118d9ee7e17 golangci-lint-2.4.0-freebsd-armv6.tar.gz 2ef38eefc4b5cee2febacb75a30579526e5656c16338a921d80e59a8e87d4425 golangci-lint-2.10.1-freebsd-arm64.tar.gz
58727746c6530801a3f9a702a5945556a5eb7e88809222536dd9f9d54cafaeff golangci-lint-2.4.0-freebsd-armv7.tar.gz 8fea6766318b4829e766bbe325f10191d75297dcc44ae35bf374816037878e38 golangci-lint-2.10.1-freebsd-armv6.tar.gz
fbf28c662760e24c32f82f8d16dffdb4a82de7726a52ba1fad94f890c22997ea golangci-lint-2.4.0-illumos-amd64.tar.gz 30b629870574d6254f3e8804e5a74b34f98e1263c9d55465830d739c88b862ed golangci-lint-2.10.1-freebsd-armv7.tar.gz
a15a000a8981ef665e971e0f67e2acda9066a9e37a59344393b7351d8fb49c81 golangci-lint-2.4.0-linux-386.tar.gz c0db839f866ce80b1b6c96167aa101cfe50d9c936f42d942a3c1cbdc1801af68 golangci-lint-2.10.1-illumos-amd64.tar.gz
fae792524c04424c0ac369f5b8076f04b45cf29fc945a370e55d369a8dc11840 golangci-lint-2.4.0-linux-amd64.tar.gz 280eb56636e9175f671cd7b755d7d67f628ae2ed00a164d1e443c43c112034e5 golangci-lint-2.10.1-linux-386.deb
70ac11f55b80ec78fd3a879249cc9255121b8dfd7f7ed4fc46ed137f4abf17e7 golangci-lint-2.4.0-linux-arm64.tar.gz 065a7d99da61dc7dfbfef2e2d7053dd3fa6672598f2747117aa4bb5f45e7df7f golangci-lint-2.10.1-linux-386.rpm
4acdc40e5cebe99e4e7ced358a05b2e71789f409b41cb4f39bbb86ccfa14b1dc golangci-lint-2.4.0-linux-armv6.tar.gz a55918c03bb413b2662287653ab2ae2fef4e37428b247dad6348724adde9d770 golangci-lint-2.10.1-linux-386.tar.gz
2a68749568fa22b4a97cb88dbea655595563c795076536aa6c087f7968784bf3 golangci-lint-2.4.0-linux-armv7.tar.gz 8aa9b3aa14f39745eeb7fc7ff50bcac683e785397d1e4bc9afd2184b12c4ce86 golangci-lint-2.10.1-linux-amd64.deb
9e3369afb023711036dcb0b4f45c9fe2792af962fa1df050c9f6ac101a6c5d73 golangci-lint-2.4.0-linux-loong64.tar.gz 62a111688e9e305032334a2cbc84f4d971b64bb3bffc99d3f80081d57fb25e32 golangci-lint-2.10.1-linux-amd64.rpm
bb9143d6329be2c4dbfffef9564078e7da7d88e7dde6c829b6263d98e072229e golangci-lint-2.4.0-linux-mips64.tar.gz dfa775874cf0561b404a02a8f4481fc69b28091da95aa697259820d429b09c99 golangci-lint-2.10.1-linux-amd64.tar.gz
5ad1765b40d56cd04d4afd805b3ba6f4bfd9b36181da93c31e9b17e483d8608d golangci-lint-2.4.0-linux-mips64le.tar.gz b3f36937e8ea1660739dc0f5c892ea59c9c21ed4e75a91a25957c561f7f79a55 golangci-lint-2.10.1-linux-arm64.deb
918936fb9c0d5ba96bef03cf4348b03938634cfcced49be1e9bb29cb5094fa73 golangci-lint-2.4.0-linux-ppc64le.tar.gz 36d50314d53683b1f1a2a6cedfb5a9468451b481c64ab9e97a8e843ea088074d golangci-lint-2.10.1-linux-arm64.rpm
f7474c638e1fb67ebbdc654b55ca0125377ea0bc88e8fee8d964a4f24eacf828 golangci-lint-2.4.0-linux-riscv64.tar.gz 6652b42ae02915eb2f9cb2a2e0cac99514c8eded8388d88ae3e06e1a52c00de8 golangci-lint-2.10.1-linux-arm64.tar.gz
b617a9543997c8bfceaffa88a75d4e595030c6add69fba800c1e4d8f5fe253dd golangci-lint-2.4.0-linux-s390x.tar.gz a32d8d318e803496812dd3461f250e52ccc7f53c47b95ce404a9cf55778ceb6a golangci-lint-2.10.1-linux-armv6.deb
7db027b03a9ba328f795215b04f594036837bc7dd0dd7cd16776b02a6167981c golangci-lint-2.4.0-netbsd-386.tar.gz 41d065f4c8ea165a1531abea644988ee2e973e4f0b49f9725ed3b979dac45112 golangci-lint-2.10.1-linux-armv6.rpm
52d8f9393f4313df0a62b752c37775e3af0b818e43e8dd28954351542d7c60bc golangci-lint-2.4.0-netbsd-amd64.tar.gz 59159a4df03aabbde69d15c7b7b3df143363cbb41f4bd4b200caffb8e34fb734 golangci-lint-2.10.1-linux-armv6.tar.gz
5c0086027fb5a4af3829e530c8115db4b35d11afe1914322eef528eb8cd38c69 golangci-lint-2.4.0-netbsd-arm64.tar.gz b2e8ec0e050a1e2251dfe1561434999d202f5a3f9fa47ce94378b0fd1662ea5a golangci-lint-2.10.1-linux-armv7.deb
6b779d6ed1aed87cefe195cc11759902b97a76551b593312c6833f2635a3488f golangci-lint-2.4.0-netbsd-armv6.tar.gz 28c9331429a497da27e9c77846063bd0e8275e878ffedb4eb9e9f21d24771cc0 golangci-lint-2.10.1-linux-armv7.rpm
f00d1f4b7ec3468a0f9fffd0d9ea036248b029b7621cbc9a59c449ef94356d09 golangci-lint-2.4.0-netbsd-armv7.tar.gz 818f33e95b273e3769284b25563b51ef6a294e9e25acf140fda5830c075a1a59 golangci-lint-2.10.1-linux-armv7.tar.gz
3ce671b0b42b58e35066493aab75a7e2826c9e079988f1ba5d814a4029faaf87 golangci-lint-2.4.0-windows-386.zip 6b6b85ed4b7c27f51097dd681523000409dde835e86e6e314e87be4bb013e2ab golangci-lint-2.10.1-linux-loong64.deb
003112f7a56746feaabf20b744054bf9acdf900c9e77176383623c4b1d76aaa9 golangci-lint-2.4.0-windows-amd64.zip 94050a0cf06169e2ae44afb307dcaafa7d7c3b38c0c23b5652cf9cb60f0c337f golangci-lint-2.10.1-linux-loong64.rpm
dc0c2092af5d47fc2cd31a1dfe7b4c7e765fab22de98bd21ef2ffcc53ad9f54f golangci-lint-2.4.0-windows-arm64.zip 25820300fccb8c961c1cdcb1f77928040c079e04c43a3a5ceb34b1cb4a1c5c8d golangci-lint-2.10.1-linux-loong64.tar.gz
0263d23e20a260cb1592d35e12a388f99efe2c51b3611fdc66fbd9db1fce664d golangci-lint-2.4.0-windows-armv6.zip 98bf39d10139fdcaa37f94950e9bbb8888660ae468847ae0bf1cb5bf67c1f68b golangci-lint-2.10.1-linux-mips64.deb
9403c03bf648e6313036e0273149d44bad1b9ad53889b6d00e4ccb842ba3c058 golangci-lint-2.4.0-windows-armv7.zip df3ce5f03808dcceaa8b683d1d06e95c885f09b59dc8e15deb840fbe2b3e3299 golangci-lint-2.10.1-linux-mips64.rpm
972508dda523067e6e6a1c8e6609d63bc7c4153819c11b947d439235cf17bac2 golangci-lint-2.10.1-linux-mips64.tar.gz
1d37f2919e183b5bf8b1777ed8c4b163d3b491d0158355a7999d647655cbbeb6 golangci-lint-2.10.1-linux-mips64le.deb
e341d031002cd09a416329ed40f674231051a38544b8f94deb2d1708ce1f4a6f golangci-lint-2.10.1-linux-mips64le.rpm
393560122b9cb5538df0c357d30eb27b6ee563533fbb9b138c8db4fd264002af golangci-lint-2.10.1-linux-mips64le.tar.gz
21ca46b6a96442e8957677a3ca059c6b93674a68a01b1c71f4e5df0ea2e96d19 golangci-lint-2.10.1-linux-ppc64le.deb
57fe0cbca0a9bbdf1547c5e8aa7d278e6896b438d72a541bae6bc62c38b43d1e golangci-lint-2.10.1-linux-ppc64le.rpm
e2883db9fa51584e5e203c64456f29993550a7faadc84e3faccdb48f0669992e golangci-lint-2.10.1-linux-ppc64le.tar.gz
aa6da0e98ab0ba3bb7582e112174c349907d5edfeff90a551dca3c6eecf92fc0 golangci-lint-2.10.1-linux-riscv64.deb
3c68d76cd884a7aad206223a980b9c20bb9ea74b560fa27ed02baf2389189234 golangci-lint-2.10.1-linux-riscv64.rpm
3bca11bfac4197205639cbd4676a5415054e629ac6c12ea10fcbe33ef852d9c3 golangci-lint-2.10.1-linux-riscv64.tar.gz
0c6aed2ce49db2586adbac72c80d871f06feb1caf4c0763a5ca98fec809a8f0b golangci-lint-2.10.1-linux-s390x.deb
16c285adfe1061d69dd8e503be69f87c7202857c6f4add74ac02e3571158fbec golangci-lint-2.10.1-linux-s390x.rpm
21011ad368eb04f024201b832095c6b5f96d0888de194cca5bfe4d9307d6364b golangci-lint-2.10.1-linux-s390x.tar.gz
7b5191e77a70485918712e31ed55159956323e4911bab1b67569c9d86e1b75eb golangci-lint-2.10.1-netbsd-386.tar.gz
07801fd38d293ebad10826f8285525a39ea91ce5ddad77d05bfa90bda9c884a9 golangci-lint-2.10.1-netbsd-amd64.tar.gz
7e7219d71c1bf33b98c328c93dc0560706dd896a1c43c44696e5222fc9d7446e golangci-lint-2.10.1-netbsd-arm64.tar.gz
92fbc90b9eec0e572269b0f5492a2895c426b086a68372fde49b7e4d4020863e golangci-lint-2.10.1-netbsd-armv6.tar.gz
f67b3ae1f47caeefa507a4ebb0c8336958a19011fe48766443212030f75d004b golangci-lint-2.10.1-netbsd-armv7.tar.gz
a40bc091c10cea84eaee1a90b84b65f5e8652113b0a600bb099e4e4d9d7caddb golangci-lint-2.10.1-windows-386.zip
c60c87695e79db8e320f0e5be885059859de52bb5ee5f11be5577828570bc2a3 golangci-lint-2.10.1-windows-amd64.zip
636ab790c8dcea8034aa34aba6031ca3893d68f7eda000460ab534341fadbab1 golangci-lint-2.10.1-windows-arm64.zip
# This is the builder on PPA that will build Go itself (inception-y), don't modify! # This is the builder on PPA that will build Go itself (inception-y), don't modify!
# #


@@ -107,17 +107,21 @@ var (
Tags: "ziren", Tags: "ziren",
Env: map[string]string{"GOMIPS": "softfloat", "CGO_ENABLED": "0"}, Env: map[string]string{"GOMIPS": "softfloat", "CGO_ENABLED": "0"},
}, },
{
Name: "womir",
GOOS: "wasip1",
GOARCH: "wasm",
Tags: "womir",
},
{ {
Name: "wasm-js", Name: "wasm-js",
GOOS: "js", GOOS: "js",
GOARCH: "wasm", GOARCH: "wasm",
Tags: "example",
}, },
{ {
Name: "wasm-wasi", Name: "wasm-wasi",
GOOS: "wasip1", GOOS: "wasip1",
GOARCH: "wasm", GOARCH: "wasm",
Tags: "example",
}, },
{ {
Name: "example", Name: "example",
@@ -168,8 +172,6 @@ var (
"focal", // 20.04, EOL: 04/2030 "focal", // 20.04, EOL: 04/2030
"jammy", // 22.04, EOL: 04/2032 "jammy", // 22.04, EOL: 04/2032
"noble", // 24.04, EOL: 04/2034 "noble", // 24.04, EOL: 04/2034
"oracular", // 24.10, EOL: 07/2025
"plucky", // 25.04, EOL: 01/2026
} }
// This is where the tests should be unpacked. // This is where the tests should be unpacked.
@@ -307,7 +309,7 @@ func doInstallKeeper(cmdline []string) {
args := slices.Clone(gobuild.Args) args := slices.Clone(gobuild.Args)
args = append(args, "-o", executablePath(outputName)) args = append(args, "-o", executablePath(outputName))
args = append(args, ".") args = append(args, ".")
build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env}) build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env, Dir: gobuild.Dir})
} }
} }
@@ -1198,7 +1200,7 @@ func doWindowsInstaller(cmdline []string) {
var ( var (
arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging") arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`) signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
signify = flag.String("signify key", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`) signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`) upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`) workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
) )


@@ -51,6 +51,12 @@ type Chain struct {
state map[common.Address]state.DumpAccount // state of head block state map[common.Address]state.DumpAccount // state of head block
senders map[common.Address]*senderInfo senders map[common.Address]*senderInfo
config *params.ChainConfig config *params.ChainConfig
txInfo txInfo
}
type txInfo struct {
LargeReceiptBlock *uint64 `json:"tx-largereceipt"`
} }
// NewChain takes the given chain.rlp file, and decodes and returns // NewChain takes the given chain.rlp file, and decodes and returns
@@ -74,12 +80,20 @@ func NewChain(dir string) (*Chain, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
var txInfo txInfo
err = common.LoadJSON(filepath.Join(dir, "txinfo.json"), &txInfo)
if err != nil {
return nil, err
}
return &Chain{ return &Chain{
genesis: gen, genesis: gen,
blocks: blocks, blocks: blocks,
state: state, state: state,
senders: accounts, senders: accounts,
config: gen.Config, config: gen.Config,
txInfo: txInfo,
}, nil }, nil
} }


@@ -66,9 +66,10 @@ func (s *Suite) dialAs(key *ecdsa.PrivateKey) (*Conn, error) {
return nil, err return nil, err
} }
conn.caps = []p2p.Cap{ conn.caps = []p2p.Cap{
{Name: "eth", Version: 70},
{Name: "eth", Version: 69}, {Name: "eth", Version: 69},
} }
conn.ourHighestProtoVersion = 69 conn.ourHighestProtoVersion = 70
return &conn, nil return &conn, nil
} }
@@ -155,7 +156,7 @@ func (c *Conn) ReadEth() (any, error) {
var msg any var msg any
switch int(code) { switch int(code) {
case eth.StatusMsg: case eth.StatusMsg:
msg = new(eth.StatusPacket69) msg = new(eth.StatusPacket)
case eth.GetBlockHeadersMsg: case eth.GetBlockHeadersMsg:
msg = new(eth.GetBlockHeadersPacket) msg = new(eth.GetBlockHeadersPacket)
case eth.BlockHeadersMsg: case eth.BlockHeadersMsg:
@@ -164,10 +165,6 @@ func (c *Conn) ReadEth() (any, error) {
msg = new(eth.GetBlockBodiesPacket) msg = new(eth.GetBlockBodiesPacket)
case eth.BlockBodiesMsg: case eth.BlockBodiesMsg:
msg = new(eth.BlockBodiesPacket) msg = new(eth.BlockBodiesPacket)
case eth.NewBlockMsg:
msg = new(eth.NewBlockPacket)
case eth.NewBlockHashesMsg:
msg = new(eth.NewBlockHashesPacket)
case eth.TransactionsMsg: case eth.TransactionsMsg:
msg = new(eth.TransactionsPacket) msg = new(eth.TransactionsPacket)
case eth.NewPooledTransactionHashesMsg: case eth.NewPooledTransactionHashesMsg:
@@ -229,7 +226,7 @@ func (c *Conn) ReadSnap() (any, error) {
} }
// dialAndPeer creates a peer connection and runs the handshake. // dialAndPeer creates a peer connection and runs the handshake.
func (s *Suite) dialAndPeer(status *eth.StatusPacket69) (*Conn, error) { func (s *Suite) dialAndPeer(status *eth.StatusPacket) (*Conn, error) {
c, err := s.dial() c, err := s.dial()
if err != nil { if err != nil {
return nil, err return nil, err
@@ -242,7 +239,7 @@ func (s *Suite) dialAndPeer(status *eth.StatusPacket69) (*Conn, error) {
// peer performs both the protocol handshake and the status message // peer performs both the protocol handshake and the status message
// exchange with the node in order to peer with it. // exchange with the node in order to peer with it.
func (c *Conn) peer(chain *Chain, status *eth.StatusPacket69) error { func (c *Conn) peer(chain *Chain, status *eth.StatusPacket) error {
if err := c.handshake(); err != nil { if err := c.handshake(); err != nil {
return fmt.Errorf("handshake failed: %v", err) return fmt.Errorf("handshake failed: %v", err)
} }
@@ -315,7 +312,7 @@ func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
} }
// statusExchange performs a `Status` message exchange with the given node. // statusExchange performs a `Status` message exchange with the given node.
func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket69) error { func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket) error {
loop: loop:
for { for {
code, data, err := c.Read() code, data, err := c.Read()
@@ -324,7 +321,7 @@ loop:
} }
switch code { switch code {
case eth.StatusMsg + protoOffset(ethProto): case eth.StatusMsg + protoOffset(ethProto):
msg := new(eth.StatusPacket69) msg := new(eth.StatusPacket)
if err := rlp.DecodeBytes(data, &msg); err != nil { if err := rlp.DecodeBytes(data, &msg); err != nil {
return fmt.Errorf("error decoding status packet: %w", err) return fmt.Errorf("error decoding status packet: %w", err)
} }
@@ -339,10 +336,12 @@ loop:
if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) { if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) {
return fmt.Errorf("wrong fork ID in status: have %v, want %v", have, want) return fmt.Errorf("wrong fork ID in status: have %v, want %v", have, want)
} }
if have, want := msg.ProtocolVersion, c.ourHighestProtoVersion; have != uint32(want) { for _, cap := range c.caps {
return fmt.Errorf("wrong protocol version: have %v, want %v", have, want) if cap.Name == "eth" && cap.Version == uint(msg.ProtocolVersion) {
}
break loop break loop
}
}
return fmt.Errorf("wrong protocol version: have %v, want %v", msg.ProtocolVersion, c.caps)
case discMsg: case discMsg:
var msg []p2p.DiscReason var msg []p2p.DiscReason
if rlp.DecodeBytes(data, &msg); len(msg) == 0 { if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
@@ -363,7 +362,7 @@ loop:
} }
if status == nil { if status == nil {
// default status message // default status message
status = &eth.StatusPacket69{ status = &eth.StatusPacket{
ProtocolVersion: uint32(c.negotiatedProtoVersion), ProtocolVersion: uint32(c.negotiatedProtoVersion),
NetworkID: chain.config.ChainID.Uint64(), NetworkID: chain.config.ChainID.Uint64(),
Genesis: chain.blocks[0].Hash(), Genesis: chain.blocks[0].Hash(),


@@ -87,9 +87,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root, root: root,
startingHash: zero, startingHash: zero,
limitHash: ffHash, limitHash: ffHash,
expAccounts: 67, expAccounts: 68,
expFirst: firstKey, expFirst: firstKey,
expLast: common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"), expLast: common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
desc: "In this test, we request the entire state range, but limit the response to 4000 bytes.", desc: "In this test, we request the entire state range, but limit the response to 4000 bytes.",
}, },
{ {
@@ -97,9 +97,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root, root: root,
startingHash: zero, startingHash: zero,
limitHash: ffHash, limitHash: ffHash,
expAccounts: 49, expAccounts: 50,
expFirst: firstKey, expFirst: firstKey,
expLast: common.HexToHash("0x445cb5c1278fdce2f9cbdb681bdd76c52f8e50e41dbd9e220242a69ba99ac099"), expLast: common.HexToHash("0x4615e5f5df5b25349a00ad313c6cd0436b6c08ee5826e33a018661997f85ebaa"),
desc: "In this test, we request the entire state range, but limit the response to 3000 bytes.", desc: "In this test, we request the entire state range, but limit the response to 3000 bytes.",
}, },
{ {
@@ -107,9 +107,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root, root: root,
startingHash: zero, startingHash: zero,
limitHash: ffHash, limitHash: ffHash,
expAccounts: 34, expAccounts: 35,
expFirst: firstKey, expFirst: firstKey,
expLast: common.HexToHash("0x2ef46ebd2073cecde499c2e8df028ad79a26d57bfaa812c4c6f7eb4c9617b913"), expLast: common.HexToHash("0x2de4bdbddcfbb9c3e195dae6b45f9c38daff897e926764bf34887fb0db5c3284"),
desc: "In this test, we request the entire state range, but limit the response to 2000 bytes.", desc: "In this test, we request the entire state range, but limit the response to 2000 bytes.",
}, },
{ {
@@ -178,9 +178,9 @@ The server should return the first available account.`,
root: root, root: root,
startingHash: firstKey, startingHash: firstKey,
limitHash: ffHash, limitHash: ffHash,
expAccounts: 67, expAccounts: 68,
expFirst: firstKey, expFirst: firstKey,
expLast: common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"), expLast: common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
desc: `In this test, startingHash is exactly the first available account key. desc: `In this test, startingHash is exactly the first available account key.
The server should return the first available account of the state as the first item.`, The server should return the first available account of the state as the first item.`,
}, },
@@ -189,9 +189,9 @@ The server should return the first available account of the state as the first i
root: root, root: root,
startingHash: hashAdd(firstKey, 1), startingHash: hashAdd(firstKey, 1),
limitHash: ffHash, limitHash: ffHash,
expAccounts: 67, expAccounts: 68,
expFirst: secondKey, expFirst: secondKey,
expLast: common.HexToHash("0x66192e4c757fba1cdc776e6737008f42d50370d3cd801db3624274283bf7cd63"), expLast: common.HexToHash("0x59a7c8818f1c16b298a054020dc7c3f403a970d1d1db33f9478b1c36e3a2e509"),
desc: `In this test, startingHash is after the first available key. desc: `In this test, startingHash is after the first available key.
The server should return the second account of the state as the first item.`, The server should return the second account of the state as the first item.`,
}, },
@@ -227,9 +227,9 @@ server to return no data because genesis is older than 127 blocks.`,
root: s.chain.RootAt(int(s.chain.Head().Number().Uint64()) - 127), root: s.chain.RootAt(int(s.chain.Head().Number().Uint64()) - 127),
startingHash: zero, startingHash: zero,
limitHash: ffHash, limitHash: ffHash,
expAccounts: 66, expAccounts: 68,
expFirst: firstKey, expFirst: firstKey,
expLast: common.HexToHash("0x729953a43ed6c913df957172680a17e5735143ad767bda8f58ac84ec62fbec5e"), expLast: common.HexToHash("0x683b6c03cc32afe5db8cb96050f711fdaff8f8ff44c7587a9a848f921d02815e"),
desc: `This test requests data at a state root that is 127 blocks old. desc: `This test requests data at a state root that is 127 blocks old.
We expect the server to have this state available.`, We expect the server to have this state available.`,
}, },
@@ -658,8 +658,8 @@ The server should reject the request.`,
// It's a bit unfortunate these are hard-coded, but the result depends on // It's a bit unfortunate these are hard-coded, but the result depends on
// a lot of aspects of the state trie and can't be guessed in a simple // a lot of aspects of the state trie and can't be guessed in a simple
// way. So you'll have to update this when the test chain is changed. // way. So you'll have to update this when the test chain is changed.
common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"), common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"), common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
@@ -679,8 +679,8 @@ The server should reject the request.`,
// be updated when the test chain is changed. // be updated when the test chain is changed.
expHashes: []common.Hash{ expHashes: []common.Hash{
empty, empty,
common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"), common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"), common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
}, },
}, },


@@ -35,6 +35,7 @@ import (
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/rlp" "github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/holiman/uint256" "github.com/holiman/uint256"
) )
@@ -83,6 +84,7 @@ func (s *Suite) EthTests() []utesting.Test {
// get history // get history
{Name: "GetBlockBodies", Fn: s.TestGetBlockBodies}, {Name: "GetBlockBodies", Fn: s.TestGetBlockBodies},
{Name: "GetReceipts", Fn: s.TestGetReceipts}, {Name: "GetReceipts", Fn: s.TestGetReceipts},
{Name: "GetLargeReceipts", Fn: s.TestGetLargeReceipts},
// test transactions // test transactions
{Name: "LargeTxRequest", Fn: s.TestLargeTxRequest, Slow: true}, {Name: "LargeTxRequest", Fn: s.TestLargeTxRequest, Slow: true},
{Name: "Transaction", Fn: s.TestTransaction}, {Name: "Transaction", Fn: s.TestTransaction},
@@ -429,6 +431,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
// Find some blocks containing receipts. // Find some blocks containing receipts.
var hashes = make([]common.Hash, 0, 3) var hashes = make([]common.Hash, 0, 3)
for i := range s.chain.Len() { for i := range s.chain.Len() {
if s.chain.txInfo.LargeReceiptBlock != nil && uint64(i) == *s.chain.txInfo.LargeReceiptBlock {
continue
}
block := s.chain.GetBlock(i) block := s.chain.GetBlock(i)
if len(block.Transactions()) > 0 { if len(block.Transactions()) > 0 {
hashes = append(hashes, block.Hash()) hashes = append(hashes, block.Hash())
@@ -437,9 +442,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
break break
} }
} }
if conn.negotiatedProtoVersion < eth.ETH70 {
// Create receipts request. // Create block bodies request.
req := &eth.GetReceiptsPacket{ req := &eth.GetReceiptsPacket69{
RequestId: 66, RequestId: 66,
GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes), GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
} }
@@ -447,9 +452,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
t.Fatalf("could not write to connection: %v", err) t.Fatalf("could not write to connection: %v", err)
} }
// Wait for response. // Wait for response.
resp := new(eth.ReceiptsPacket[*eth.ReceiptList69]) resp := new(eth.ReceiptsPacket69)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil { if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block bodies msg: %v", err) t.Fatalf("error reading block receipts msg: %v", err)
} }
if got, want := resp.RequestId, req.RequestId; got != want { if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in respond", got, want) t.Fatalf("unexpected request id in respond", got, want)
@@ -457,6 +462,102 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
if resp.List.Len() != len(req.GetReceiptsRequest) { if resp.List.Len() != len(req.GetReceiptsRequest) {
t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len()) t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
} }
} else {
// Create receipts request.
req := &eth.GetReceiptsPacket70{
RequestId: 66,
FirstBlockReceiptIndex: 0,
GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Wait for response.
resp := new(eth.ReceiptsPacket70)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block receipts msg: %v", err)
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
}
if resp.List.Len() != len(req.GetReceiptsRequest) {
t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
}
}
}
func (s *Suite) TestGetLargeReceipts(t *utesting.T) {
t.Log(`This test sends GetReceipts requests to the node for large receipt (>10MiB) in the test chain.
This test is meaningful only if the client supports protocol version ETH70 or higher
and LargeReceiptBlock is configured in txInfo.json.`)
conn, err := s.dialAndPeer(nil)
if err != nil {
t.Fatalf("peering failed: %v", err)
}
defer conn.Close()
if conn.negotiatedProtoVersion < eth.ETH70 || s.chain.txInfo.LargeReceiptBlock == nil {
return
}
// Find block with large receipt.
// Place the large receipt block hash in the middle of the query
start := max(int(*s.chain.txInfo.LargeReceiptBlock)-2, 0)
end := min(*s.chain.txInfo.LargeReceiptBlock+2, uint64(len(s.chain.blocks)))
var blocks []common.Hash
var receiptHashes []common.Hash
var receipts []*eth.ReceiptList
for i := uint64(start); i < end; i++ {
block := s.chain.GetBlock(int(i))
blocks = append(blocks, block.Hash())
receiptHashes = append(receiptHashes, block.Header().ReceiptHash)
receipts = append(receipts, &eth.ReceiptList{})
}
incomplete := false
lastBlock := 0
for incomplete || lastBlock != len(blocks)-1 {
// Create get receipt request.
req := &eth.GetReceiptsPacket70{
RequestId: 66,
FirstBlockReceiptIndex: uint64(receipts[lastBlock].Derivable().Len()),
GetReceiptsRequest: blocks[lastBlock:],
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Wait for response.
resp := new(eth.ReceiptsPacket70)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block receipts msg: %v", err)
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
}
receiptLists, _ := resp.List.Items()
for i, rc := range receiptLists {
receipts[lastBlock+i].Append(rc)
}
lastBlock += len(receiptLists) - 1
incomplete = resp.LastBlockIncomplete
}
hasher := trie.NewStackTrie(nil)
hashes := make([]common.Hash, len(receipts))
for i := range receipts {
hashes[i] = types.DeriveSha(receipts[i].Derivable(), hasher)
}
for i, hash := range hashes {
if receiptHashes[i] != hash {
t.Fatalf("wrong receipt root: want %x, got %x", receiptHashes[i], hash)
}
}
}
// randBuf makes a random buffer size kilobytes large.

Binary file not shown.


@@ -37,7 +37,7 @@
"nonce": "0x0",
"timestamp": "0x0",
"extraData": "0x68697665636861696e",
"gasLimit": "0x11e1a300",
"difficulty": "0x20000",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
@@ -119,6 +119,10 @@
"balance": "0x1",
"nonce": "0x1"
},
"8dcd17433742f4c0ca53122ab541d0ba67fc27ff": {
"code": "0x6202e6306000a0",
"balance": "0x0"
},
"c7b99a164efd027a93f147376cc7da7c67c6bbe0": {
"balance": "0xc097ce7bc90715b34b9f1000000000"
},


@@ -1,24 +1,24 @@
{
"parentHash": "0x7e80093a491eba0e5b2c1895837902f64f514100221801318fe391e1e09c96a6",
"sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"miner": "0x0000000000000000000000000000000000000000",
"stateRoot": "0x8fcfb02cfca007773bd55bc1c3e50a3c8612a59c87ce057e5957e8bf17c1728b",
"transactionsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"receiptsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"difficulty": "0x0",
"number": "0x258",
"gasLimit": "0x11e1a300",
"gasUsed": "0x0",
"timestamp": "0x1770",
"extraData": "0x",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"nonce": "0x0000000000000000",
"baseFeePerGas": "0x7",
"withdrawalsRoot": "0x92abfda39de7df7d705c5a8f30386802ad59d31e782a06d5c5b0f9a260056cf0",
"blobGasUsed": "0x0",
"excessBlobGas": "0x0",
"parentBeaconBlockRoot": "0xf5003fc8f92358e790a114bce93ce1d9c283c85e1787f8d7d56714d3489b49e6",
"requestsHash": "0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"hash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
}


@@ -4,9 +4,9 @@
"method": "engine_forkchoiceUpdatedV3",
"params": [
{
"headBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
"safeBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
"finalizedBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
},
null
]

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -52,7 +52,7 @@ func (s *Suite) AllTests() []utesting.Test {
{Name: "Ping", Fn: s.TestPing},
{Name: "PingLargeRequestID", Fn: s.TestPingLargeRequestID},
{Name: "PingMultiIP", Fn: s.TestPingMultiIP},
{Name: "HandshakeResend", Fn: s.TestHandshakeResend},
{Name: "TalkRequest", Fn: s.TestTalkRequest},
{Name: "FindnodeZeroDistance", Fn: s.TestFindnodeZeroDistance},
{Name: "FindnodeResults", Fn: s.TestFindnodeResults},
@@ -158,22 +158,20 @@ the attempt from a different IP.`)
}
}
// TestHandshakeResend starts a handshake, but doesn't finish it and sends a second ordinary message
// packet instead of a handshake message packet. The remote node should repeat the previous WHOAREYOU
// challenge for the first PING.
func (s *Suite) TestHandshakeResend(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// First PING triggers challenge.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping, nil)
var challenge1 *v5wire.Whoareyou
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
challenge1 = resp
t.Logf("got WHOAREYOU for PING")
default:
t.Fatal("expected WHOAREYOU, got", resp)
@@ -181,9 +179,16 @@ another WHOAREYOU challenge for the second packet.`)
// Send second PING.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping2, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
if resp.Nonce != challenge1.Nonce {
t.Fatalf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], challenge1.Nonce[:])
}
if !bytes.Equal(resp.ChallengeData, challenge1.ChallengeData) {
t.Fatalf("wrong ChallengeData %x in resent WHOAREYOU (want %x)", resp.ChallengeData, challenge1.ChallengeData)
}
resp.Node = conn.remote
default:
t.Fatal("expected WHOAREYOU, got", resp)
}


@@ -218,11 +218,15 @@ func (tc *conn) read(c net.PacketConn) v5wire.Packet {
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return &readError{err}
}
n, _, err := c.ReadFrom(buf)
if err != nil {
return &readError{err}
}
// Always use tc.remoteAddr for session lookup. The actual source address of
// the packet may differ from tc.remoteAddr when the remote node is reachable
// via multiple networks (e.g. Docker bridge vs. overlay), but the codec's
// session cache is keyed by the address used during Encode.
_, _, p, err := tc.codec.Decode(buf[:n], tc.remoteAddr.String())
if err != nil {
return &readError{err}
}
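The comment above describes a cache-key identity problem: sessions are stored under whatever address string Encode was given, so Decode must be called with that exact string even if the packet arrived from a different source address. A toy sketch (illustrative names, not the real v5wire API):

```go
package main

import "fmt"

// codec mimics a wire codec whose handshake sessions are cached per
// remote-address string.
type codec struct {
	sessions map[string]string // addr -> session key
}

// Encode establishes (or reuses) a session under the address it was given.
func (c *codec) Encode(addr, msg string) string {
	if _, ok := c.sessions[addr]; !ok {
		c.sessions[addr] = "session-for-" + addr
	}
	return msg + "|" + c.sessions[addr]
}

// Decode only finds the session when called with the same address string
// Encode used, regardless of which source address the packet came from.
func (c *codec) Decode(packet, addr string) (string, error) {
	if _, ok := c.sessions[addr]; !ok {
		return "", fmt.Errorf("no session for %s", addr)
	}
	return packet, nil
}

func main() {
	c := &codec{sessions: map[string]string{}}
	remoteAddr := "10.0.0.5:30303"
	pkt := c.Encode(remoteAddr, "ping")

	// Packet arrives via a different network path of the same node:
	if _, err := c.Decode(pkt, "172.17.0.2:30303"); err != nil {
		fmt.Println("decode by source address fails:", err)
	}
	if msg, err := c.Decode(pkt, remoteAddr); err == nil {
		fmt.Println("decode by remoteAddr succeeds:", msg)
	}
}
```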


@@ -56,6 +56,7 @@ type header struct {
BlobGasUsed *uint64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *uint64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *uint64 `json:"slotNumber" rlp:"optional"`
}
type headerMarshaling struct {
@@ -68,6 +69,7 @@ type headerMarshaling struct {
BaseFee *math.HexOrDecimal256
BlobGasUsed *math.HexOrDecimal64
ExcessBlobGas *math.HexOrDecimal64
SlotNumber *math.HexOrDecimal64
}
type bbInput struct {
@@ -136,6 +138,7 @@ func (i *bbInput) ToBlock() *types.Block {
BlobGasUsed: i.Header.BlobGasUsed,
ExcessBlobGas: i.Header.ExcessBlobGas,
ParentBeaconRoot: i.Header.ParentBeaconBlockRoot,
SlotNumber: i.Header.SlotNumber,
}
// Fill optional values.


@@ -102,6 +102,7 @@ type stEnv struct {
ParentExcessBlobGas *uint64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *uint64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *uint64 `json:"slotNumber"`
}
type stEnvMarshaling struct {
@@ -120,6 +121,7 @@ type stEnvMarshaling struct {
ExcessBlobGas *math.HexOrDecimal64
ParentExcessBlobGas *math.HexOrDecimal64
ParentBlobGasUsed *math.HexOrDecimal64
SlotNumber *math.HexOrDecimal64
}
type rejectedTx struct {
@@ -147,15 +149,13 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
isEIP4762 = chainConfig.IsVerkle(big.NewInt(int64(pre.Env.Number)), pre.Env.Timestamp)
statedb = MakePreState(rawdb.NewMemoryDatabase(), pre.Pre, isEIP4762)
signer = types.MakeSigner(chainConfig, new(big.Int).SetUint64(pre.Env.Number), pre.Env.Timestamp)
gaspool = core.NewGasPool(pre.Env.GasLimit)
blockHash = common.Hash{0x13, 0x37}
rejectedTxs []*rejectedTx
includedTxs types.Transactions
blobGasUsed = uint64(0)
receipts = make(types.Receipts, 0)
)
vmContext := vm.BlockContext{
CanTransfer: core.CanTransfer,
Transfer: core.Transfer,
@@ -195,6 +195,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
ExcessBlobGas: pre.Env.ParentExcessBlobGas,
BlobGasUsed: pre.Env.ParentBlobGasUsed,
BaseFee: pre.Env.ParentBaseFee,
SlotNumber: pre.Env.SlotNumber,
}
header := &types.Header{
Time: pre.Env.Timestamp,
@@ -255,16 +256,19 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
statedb.SetTxContext(tx.Hash(), len(receipts))
var (
snapshot = statedb.Snapshot()
gp = gaspool.Snapshot()
)
receipt, err := core.ApplyTransactionWithEVM(msg, gaspool, statedb, vmContext.BlockNumber, blockHash, pre.Env.Timestamp, tx, evm)
if err != nil {
statedb.RevertToSnapshot(snapshot)
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "from", msg.From, "error", err)
rejectedTxs = append(rejectedTxs, &rejectedTx{i, err.Error()})
gaspool.Set(gp)
continue
}
if receipt.Logs == nil {
receipt.Logs = []*types.Log{}
}
includedTxs = append(includedTxs, tx)
if hashError != nil {
return nil, nil, nil, NewError(ErrorMissingBlockhash, hashError)
@@ -346,7 +350,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
Receipts: receipts,
Rejected: rejectedTxs,
Difficulty: (*math.HexOrDecimal256)(vmContext.Difficulty),
GasUsed: (math.HexOrDecimal64)(gaspool.Used()),
BaseFee: (*math.HexOrDecimal256)(vmContext.BaseFee),
}
if pre.Env.Withdrawals != nil {
@@ -361,10 +365,6 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
// Set requestsHash on block.
h := types.CalcRequestsHash(requests)
execRs.RequestsHash = &h
execRs.Requests = requests
}


@@ -56,27 +56,35 @@ func (l *fileWritingTracer) Write(p []byte) (n int, err error) {
return n, nil
}
// newFileWriter creates a tracer which wraps inner hooks (typically a logger),
// and writes the output to a file, one file per transaction.
func newFileWriter(baseDir string, innerFn func(out io.Writer) *tracing.Hooks) *tracers.Tracer {
t := &fileWritingTracer{
baseDir: baseDir,
suffix: "jsonl",
}
t.inner = innerFn(t) // instantiate the inner tracer
return &tracers.Tracer{
Hooks: t.hooks(),
GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
Stop: func(err error) {},
}
}
// newResultWriter creates a tracer that wraps and invokes an underlying tracer,
// and writes the result (getResult-output) to file, one per transaction.
func newResultWriter(baseDir string, tracer *tracers.Tracer) *tracers.Tracer {
t := &fileWritingTracer{
baseDir: baseDir,
getResult: tracer.GetResult,
inner: tracer.Hooks,
suffix: "json",
}
return &tracers.Tracer{
Hooks: t.hooks(),
GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
Stop: func(err error) {},
}
}
// OnTxStart creates a new output-file specific for this transaction, and invokes

@@ -162,6 +162,11 @@ var (
strings.Join(vm.ActivateableEips(), ", ")),
Value: "GrayGlacier",
}
OpcodeCountFlag = &cli.StringFlag{
Name: "opcode.count",
Usage: "If set, opcode execution counts will be written to this file (relative to output.basedir).",
Value: "",
}
VerbosityFlag = &cli.IntFlag{
Name: "verbosity",
Usage: "sets the verbosity level",


@@ -38,6 +38,7 @@ func (h header) MarshalJSON() ([]byte, error) {
BlobGasUsed *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
}
var enc header
enc.ParentHash = h.ParentHash
@@ -60,6 +61,7 @@ func (h header) MarshalJSON() ([]byte, error) {
enc.BlobGasUsed = (*math.HexOrDecimal64)(h.BlobGasUsed)
enc.ExcessBlobGas = (*math.HexOrDecimal64)(h.ExcessBlobGas)
enc.ParentBeaconBlockRoot = h.ParentBeaconBlockRoot
enc.SlotNumber = (*math.HexOrDecimal64)(h.SlotNumber)
return json.Marshal(&enc)
}
@@ -86,6 +88,7 @@ func (h *header) UnmarshalJSON(input []byte) error {
BlobGasUsed *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
}
var dec header
if err := json.Unmarshal(input, &dec); err != nil {
@@ -155,5 +158,8 @@ func (h *header) UnmarshalJSON(input []byte) error {
if dec.ParentBeaconBlockRoot != nil {
h.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
}
if dec.SlotNumber != nil {
h.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}


@@ -37,6 +37,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
ParentExcessBlobGas *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber"`
}
var enc stEnv
enc.Coinbase = common.UnprefixedAddress(s.Coinbase)
@@ -59,6 +60,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
enc.ParentExcessBlobGas = (*math.HexOrDecimal64)(s.ParentExcessBlobGas)
enc.ParentBlobGasUsed = (*math.HexOrDecimal64)(s.ParentBlobGasUsed)
enc.ParentBeaconBlockRoot = s.ParentBeaconBlockRoot
enc.SlotNumber = (*math.HexOrDecimal64)(s.SlotNumber)
return json.Marshal(&enc)
}
@@ -85,6 +87,7 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
ParentExcessBlobGas *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber"`
}
var dec stEnv
if err := json.Unmarshal(input, &dec); err != nil {
@@ -154,5 +157,8 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
if dec.ParentBeaconBlockRoot != nil {
s.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
}
if dec.SlotNumber != nil {
s.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}


@@ -27,7 +27,9 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/tests"
@@ -177,9 +179,12 @@ func Transaction(ctx *cli.Context) error {
r.Error = errors.New("gas * maxFeePerGas exceeds 256 bits")
}
// Check whether the init code size has been exceeded.
if tx.To() == nil {
if err := vm.CheckMaxInitCodeSize(&rules, uint64(len(tx.Data()))); err != nil {
r.Error = err
}
}
if chainConfig.IsOsaka(new(big.Int), 0) && tx.Gas() > params.MaxTxGas {
r.Error = errors.New("gas limit exceeds maximum")
}


@@ -37,6 +37,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/tracers"
"github.com/ethereum/go-ethereum/eth/tracers/logger"
"github.com/ethereum/go-ethereum/eth/tracers/native"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/tests"
@@ -167,14 +168,15 @@ func Transition(ctx *cli.Context) error {
}
// Configure tracer
var tracer *tracers.Tracer
if ctx.IsSet(TraceTracerFlag.Name) { // Custom tracing
config := json.RawMessage(ctx.String(TraceTracerConfigFlag.Name))
innerTracer, err := tracers.DefaultDirectory.New(ctx.String(TraceTracerFlag.Name),
nil, config, chainConfig)
if err != nil {
return NewError(ErrorConfig, fmt.Errorf("failed instantiating tracer: %v", err))
}
tracer = newResultWriter(baseDir, innerTracer)
} else if ctx.Bool(TraceFlag.Name) { // JSON opcode tracing
logConfig := &logger.Config{
DisableStack: ctx.Bool(TraceDisableStackFlag.Name),
@@ -182,20 +184,45 @@ func Transition(ctx *cli.Context) error {
EnableReturnData: ctx.Bool(TraceEnableReturnDataFlag.Name),
}
if ctx.Bool(TraceEnableCallFramesFlag.Name) {
tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
return logger.NewJSONLoggerWithCallFrames(logConfig, out)
})
} else {
tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
return logger.NewJSONLogger(logConfig, out)
})
}
}
// Configure opcode counter
var opcodeTracer *tracers.Tracer
if ctx.IsSet(OpcodeCountFlag.Name) && ctx.String(OpcodeCountFlag.Name) != "" {
opcodeTracer = native.NewOpcodeCounter()
if tracer != nil {
// If we have an existing tracer, multiplex with the opcode tracer
mux, _ := native.NewMuxTracer([]string{"trace", "opcode"}, []*tracers.Tracer{tracer, opcodeTracer})
vmConfig.Tracer = mux.Hooks
} else {
vmConfig.Tracer = opcodeTracer.Hooks
}
} else if tracer != nil {
vmConfig.Tracer = tracer.Hooks
}
// Run the test and aggregate the result
s, result, body, err := prestate.Apply(vmConfig, chainConfig, txIt, ctx.Int64(RewardFlag.Name))
if err != nil {
return err
}
// Write opcode counts if enabled
if opcodeTracer != nil {
fname := ctx.String(OpcodeCountFlag.Name)
result, err := opcodeTracer.GetResult()
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed getting opcode counts: %v", err))
}
if err := saveFile(baseDir, fname, result); err != nil {
return err
}
}
// Dump the execution result
var (
collector = make(Alloc)
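The mux step above fans the EVM's event stream out to several hook sets at once (the trace writer and the opcode counter). A self-contained sketch of that fan-out idea, with a deliberately tiny stand-in for the real hooks struct (names are illustrative, not the go-ethereum tracing API):

```go
package main

import "fmt"

// hooks is a minimal stand-in for a tracer's callback set; the real
// tracing.Hooks struct carries many more optional callbacks.
type hooks struct {
	onOpcode func(op byte)
}

// mux returns a hook set that forwards every event to all given hook sets,
// skipping callbacks a member leaves unset.
func mux(hs ...*hooks) *hooks {
	return &hooks{
		onOpcode: func(op byte) {
			for _, h := range hs {
				if h.onOpcode != nil {
					h.onOpcode(op)
				}
			}
		},
	}
}

func main() {
	counts := map[byte]int{}
	counter := &hooks{onOpcode: func(op byte) { counts[op]++ }}
	printer := &hooks{onOpcode: func(op byte) { fmt.Printf("op %#x\n", op) }}

	m := mux(printer, counter)
	for _, op := range []byte{0x01, 0x01, 0x60} {
		m.onOpcode(op) // both the printer and the counter observe each op
	}
	fmt.Println(counts)
}
```

Multiplexing at the hooks layer keeps each tracer oblivious to the others, which is why the opcode counter can be bolted on without touching the trace or result writers.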


@@ -161,6 +161,7 @@ var (
t8ntool.ForknameFlag,
t8ntool.ChainIDFlag,
t8ntool.RewardFlag,
t8ntool.OpcodeCountFlag,
},
}


@@ -24,7 +24,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": [],
"transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",

View file

@@ -12,7 +12,7 @@
 "status": "0x0",
 "cumulativeGasUsed": "0x84d0",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x84d0",
@@ -27,7 +27,7 @@
 "status": "0x0",
 "cumulativeGasUsed": "0x109a0",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x84d0",

View file

@@ -11,7 +11,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x520b",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x520b",

View file

@@ -27,7 +27,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa861",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0xa861",
@@ -41,7 +41,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x10306",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x16b1d912f1d664f3f60f4e1b5f296f3c82a64a1a253117b4851d18bc03c4f1da",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5aa5",

View file

@@ -23,7 +23,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",

View file

@@ -28,7 +28,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa865",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x7508d7139d002a4b3a26a4f12dec0d87cb46075c78bf77a38b569a133b509262",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0xa865",

View file

@@ -26,7 +26,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x84f70aba406a55628a0620f26d260f90aeb6ccc55fed6ec2ac13dd4f727032ed",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",

View file

@@ -24,7 +24,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x521f",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x521f",

View file

@@ -25,7 +25,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
@@ -40,7 +40,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa410",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",

View file

@@ -44,7 +44,7 @@
 "root": "0x",
 "status": "0x1",
 "cumulativeGasUsed": "0x15fa9",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": null,
+"logs": [],
 "transactionHash": "0x0417aab7c1d8a3989190c3167c132876ce9b8afd99262c5a0f9d06802de3d7ef",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x15fa9",
 "effectiveGasPrice": null,

cmd/fetchpayload/main.go Normal file
View file

@ -0,0 +1,177 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.

// fetchpayload queries an Ethereum node over RPC, fetches a block and its
// execution witness, and writes the combined Payload (ChainID + Block +
// Witness) to disk in the format consumed by cmd/keeper.
package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"math/big"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/core/stateless"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/ethereum/go-ethereum/rlp"
	"github.com/ethereum/go-ethereum/rpc"
)

// Payload is duplicated from cmd/keeper/main.go (package main, not importable).
type Payload struct {
	ChainID uint64
	Block   *types.Block
	Witness *stateless.Witness
}

func main() {
	var (
		rpcURL   = flag.String("rpc", "http://localhost:8545", "RPC endpoint URL")
		blockArg = flag.String("block", "latest", `Block number: decimal, 0x-hex, or "latest"`)
		format   = flag.String("format", "rlp", "Comma-separated output formats: rlp, hex, json")
		outDir   = flag.String("out", "", "Output directory (default: current directory)")
	)
	flag.Parse()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Parse block number (nil means "latest" in ethclient).
	blockNum, err := parseBlockNumber(*blockArg)
	if err != nil {
		fatal("invalid block number %q: %v", *blockArg, err)
	}
	// Connect to the node.
	client, err := ethclient.DialContext(ctx, *rpcURL)
	if err != nil {
		fatal("failed to connect to %s: %v", *rpcURL, err)
	}
	defer client.Close()

	chainID, err := client.ChainID(ctx)
	if err != nil {
		fatal("failed to get chain ID: %v", err)
	}
	// Fetch the block first so we have a concrete number for the witness call,
	// avoiding a race where "latest" advances between the two RPCs.
	block, err := client.BlockByNumber(ctx, blockNum)
	if err != nil {
		fatal("failed to fetch block: %v", err)
	}
	fmt.Printf("Fetched block %d (%#x)\n", block.NumberU64(), block.Hash())

	// Fetch the execution witness via the debug namespace.
	var extWitness stateless.ExtWitness
	err = client.Client().CallContext(ctx, &extWitness, "debug_executionWitness", rpc.BlockNumber(block.NumberU64()))
	if err != nil {
		fatal("failed to fetch execution witness: %v", err)
	}
	witness := new(stateless.Witness)
	err = witness.FromExtWitness(&extWitness)
	if err != nil {
		fatal("failed to convert witness: %v", err)
	}
	payload := Payload{
		ChainID: chainID.Uint64(),
		Block:   block,
		Witness: witness,
	}
	// Encode payload as RLP (shared by "rlp" and "hex" formats).
	rlpBytes, err := rlp.EncodeToBytes(payload)
	if err != nil {
		fatal("failed to RLP-encode payload: %v", err)
	}
	// Write one output file per requested format.
	blockHex := fmt.Sprintf("%x", block.NumberU64())
	for f := range strings.SplitSeq(*format, ",") {
		f = strings.TrimSpace(f)
		outPath := filepath.Join(*outDir, fmt.Sprintf("%s_payload.%s", blockHex, f))

		var data []byte
		switch f {
		case "rlp":
			data = rlpBytes
		case "hex":
			data = []byte(hexutil.Encode(rlpBytes))
		case "json":
			data, err = marshalJSONPayload(chainID, block, &extWitness)
			if err != nil {
				fatal("failed to JSON-encode payload: %v", err)
			}
		default:
			fatal("unknown format %q (valid: rlp, hex, json)", f)
		}
		if err := os.WriteFile(outPath, data, 0644); err != nil {
			fatal("failed to write %s: %v", outPath, err)
		}
		fmt.Printf("Wrote %s (%d bytes)\n", outPath, len(data))
	}
}

// parseBlockNumber converts a CLI string to *big.Int.
// Returns nil for "latest" (ethclient convention for the head block).
func parseBlockNumber(s string) (*big.Int, error) {
	if strings.EqualFold(s, "latest") {
		return nil, nil
	}
	n := new(big.Int)
	if strings.HasPrefix(s, "0x") || strings.HasPrefix(s, "0X") {
		if _, ok := n.SetString(s[2:], 16); !ok {
			return nil, fmt.Errorf("invalid hex number")
		}
		return n, nil
	}
	if _, ok := n.SetString(s, 10); !ok {
		return nil, fmt.Errorf("invalid decimal number")
	}
	return n, nil
}

// jsonPayload is a JSON-friendly representation of Payload. It uses ExtWitness
// instead of the internal Witness (which has no JSON marshaling).
type jsonPayload struct {
	ChainID uint64                `json:"chainId"`
	Block   *types.Block          `json:"block"`
	Witness *stateless.ExtWitness `json:"witness"`
}

func marshalJSONPayload(chainID *big.Int, block *types.Block, ext *stateless.ExtWitness) ([]byte, error) {
	return json.MarshalIndent(jsonPayload{
		ChainID: chainID.Uint64(),
		Block:   block,
		Witness: ext,
	}, "", " ")
}

func fatal(format string, args ...any) {
	fmt.Fprintf(os.Stderr, format+"\n", args...)
	os.Exit(1)
}

View file

@@ -111,6 +111,7 @@ if one is set. Otherwise it prints the genesis from the datadir.`,
 		utils.MetricsInfluxDBUsernameFlag,
 		utils.MetricsInfluxDBPasswordFlag,
 		utils.MetricsInfluxDBTagsFlag,
+		utils.MetricsInfluxDBIntervalFlag,
 		utils.MetricsInfluxDBTokenFlag,
 		utils.MetricsInfluxDBBucketFlag,
 		utils.MetricsInfluxDBOrganizationFlag,
@@ -207,13 +208,19 @@ This command dumps out the state for a given block (or latest, if none provided)
 	pruneHistoryCommand = &cli.Command{
 		Action:    pruneHistory,
 		Name:      "prune-history",
-		Usage:     "Prune blockchain history (block bodies and receipts) up to the merge block",
+		Usage:     "Prune blockchain history (block bodies and receipts) up to a specified point",
 		ArgsUsage: "",
-		Flags:     utils.DatabaseFlags,
+		Flags: slices.Concat(utils.DatabaseFlags, []cli.Flag{
+			utils.ChainHistoryFlag,
+		}),
 		Description: `
 The prune-history command removes historical block bodies and receipts from the
-blockchain database up to the merge block, while preserving block headers. This
-helps reduce storage requirements for nodes that don't need full historical data.`,
+blockchain database up to a specified point, while preserving block headers. This
+helps reduce storage requirements for nodes that don't need full historical data.
+
+The --history.chain flag is required to specify the pruning target:
+  - postmerge: Prune up to the merge block. The node will keep the merge block and everything thereafter.
+  - postprague: Prune up to the Prague (Pectra) upgrade block. The node will keep the Prague block and everything thereafter.`,
 	}
downloadEraCommand = &cli.Command{ downloadEraCommand = &cli.Command{
@@ -702,47 +709,77 @@ func hashish(x string) bool {
 }

 func pruneHistory(ctx *cli.Context) error {
+	// Parse and validate the history mode flag.
+	if !ctx.IsSet(utils.ChainHistoryFlag.Name) {
+		return errors.New("--history.chain flag is required")
+	}
+	var mode history.HistoryMode
+	if err := mode.UnmarshalText([]byte(ctx.String(utils.ChainHistoryFlag.Name))); err != nil {
+		return err
+	}
+	if mode == history.KeepAll {
+		return errors.New("--history.chain=all is not valid for pruning. To restore history, use 'geth import-history'")
+	}
 	stack, _ := makeConfigNode(ctx)
 	defer stack.Close()

-	// Open the chain database
+	// Open the chain database.
 	chain, chaindb := utils.MakeChain(ctx, stack, false)
 	defer chaindb.Close()
 	defer chain.Stop()

-	// Determine the prune point. This will be the first PoS block.
-	prunePoint, ok := history.PrunePoints[chain.Genesis().Hash()]
-	if !ok || prunePoint == nil {
-		return errors.New("prune point not found")
+	// Determine the prune point based on the history mode.
+	genesisHash := chain.Genesis().Hash()
+	policy, err := history.NewPolicy(mode, genesisHash)
+	if err != nil {
+		return err
+	}
+	if policy.Target == nil {
+		return fmt.Errorf("prune point for %q not found for this network", mode.String())
 	}
 	var (
-		mergeBlock     = prunePoint.BlockNumber
-		mergeBlockHash = prunePoint.BlockHash.Hex()
+		targetBlock     = policy.Target.BlockNumber
+		targetBlockHash = policy.Target.BlockHash
 	)
-	// Check we're far enough past merge to ensure all data is in freezer
+	// Check the current freezer tail to see if pruning is needed/possible.
+	freezerTail, _ := chaindb.Tail()
+	if freezerTail > 0 {
+		if freezerTail == targetBlock {
+			log.Info("Database already pruned to target block", "tail", freezerTail)
+			return nil
+		}
+		if freezerTail > targetBlock {
+			// Database is pruned beyond the target - can't unprune.
+			return fmt.Errorf("database is already pruned to block %d, which is beyond target %d. Cannot unprune. To restore history, use 'geth import-history'", freezerTail, targetBlock)
+		}
+		// freezerTail < targetBlock: we can prune further, continue below.
+	}
+	// Check we're far enough past the target to ensure all data is in freezer.
 	currentHeader := chain.CurrentHeader()
 	if currentHeader == nil {
 		return errors.New("current header not found")
 	}
-	if currentHeader.Number.Uint64() < mergeBlock+params.FullImmutabilityThreshold {
-		return fmt.Errorf("chain not far enough past merge block, need %d more blocks",
-			mergeBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
+	if currentHeader.Number.Uint64() < targetBlock+params.FullImmutabilityThreshold {
+		return fmt.Errorf("chain not far enough past target block %d, need %d more blocks",
+			targetBlock, targetBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
 	}
-	// Double-check the prune block in db has the expected hash.
-	hash := rawdb.ReadCanonicalHash(chaindb, mergeBlock)
-	if hash != common.HexToHash(mergeBlockHash) {
-		return fmt.Errorf("merge block hash mismatch: got %s, want %s", hash.Hex(), mergeBlockHash)
+	// Double-check the target block in db has the expected hash.
+	hash := rawdb.ReadCanonicalHash(chaindb, targetBlock)
+	if hash != targetBlockHash {
+		return fmt.Errorf("target block hash mismatch: got %s, want %s", hash.Hex(), targetBlockHash.Hex())
 	}
-	log.Info("Starting history pruning", "head", currentHeader.Number, "tail", mergeBlock, "tailHash", mergeBlockHash)
+	log.Info("Starting history pruning", "head", currentHeader.Number, "target", targetBlock, "targetHash", targetBlockHash.Hex())
 	start := time.Now()
-	rawdb.PruneTransactionIndex(chaindb, mergeBlock)
-	if _, err := chaindb.TruncateTail(mergeBlock); err != nil {
+	rawdb.PruneTransactionIndex(chaindb, targetBlock)
+	if _, err := chaindb.TruncateTail(targetBlock); err != nil {
 		return fmt.Errorf("failed to truncate ancient data: %v", err)
 	}
-	log.Info("History pruning completed", "tail", mergeBlock, "elapsed", common.PrettyDuration(time.Since(start)))
+	log.Info("History pruning completed", "tail", targetBlock, "elapsed", common.PrettyDuration(time.Since(start)))
 	// TODO(s1na): what if there is a crash between the two prune operations?

View file

@@ -377,6 +377,9 @@ func applyMetricConfig(ctx *cli.Context, cfg *gethConfig) {
 	if ctx.IsSet(utils.MetricsInfluxDBTagsFlag.Name) {
 		cfg.Metrics.InfluxDBTags = ctx.String(utils.MetricsInfluxDBTagsFlag.Name)
 	}
+	if ctx.IsSet(utils.MetricsInfluxDBIntervalFlag.Name) {
+		cfg.Metrics.InfluxDBInterval = ctx.Duration(utils.MetricsInfluxDBIntervalFlag.Name)
+	}
 	if ctx.IsSet(utils.MetricsEnableInfluxDBV2Flag.Name) {
 		cfg.Metrics.EnableInfluxDBV2 = ctx.Bool(utils.MetricsEnableInfluxDBV2Flag.Name)
 	}

View file

@@ -30,7 +30,7 @@ import (
 )

 const (
-	ipcAPIs  = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 txpool:1.0 web3:1.0"
+	ipcAPIs  = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 testing:1.0 txpool:1.0 web3:1.0"
 	httpAPIs = "eth:1.0 net:1.0 rpc:1.0 web3:1.0"
 )


@@ -19,6 +19,7 @@ package main
 import (
 	"bytes"
 	"fmt"
+	"math"
 	"os"
 	"os/signal"
 	"path/filepath"
@@ -37,6 +38,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb"
+	"github.com/ethereum/go-ethereum/internal/tablewriter"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
@@ -51,7 +53,24 @@ var (
 	}
 	removeChainDataFlag = &cli.BoolFlag{
 		Name:  "remove.chain",
-		Usage: "If set, selects the state data for removal",
+		Usage: "If set, selects the chain data for removal",
+	}
+	inspectTrieTopFlag = &cli.IntFlag{
+		Name:  "top",
+		Usage: "Print the top N results per ranking category",
+		Value: 10,
+	}
+	inspectTrieDumpPathFlag = &cli.StringFlag{
+		Name:  "dump-path",
+		Usage: "Path for the trie statistics dump file",
+	}
+	inspectTrieSummarizeFlag = &cli.StringFlag{
+		Name:  "summarize",
+		Usage: "Summarize an existing trie dump file (skip trie traversal)",
+	}
+	inspectTrieContractFlag = &cli.StringFlag{
+		Name:  "contract",
+		Usage: "Inspect only the storage of the given contract address (skips full account trie walk)",
 	}
 	removedbCommand = &cli.Command{
@@ -74,6 +93,7 @@ Remove blockchain and state databases`,
 		dbCompactCmd,
 		dbGetCmd,
 		dbDeleteCmd,
+		dbInspectTrieCmd,
 		dbPutCmd,
 		dbGetSlotsCmd,
 		dbDumpFreezerIndex,
@@ -92,6 +112,22 @@ Remove blockchain and state databases`,
 		Usage:       "Inspect the storage size for each type of data in the database",
 		Description: `This commands iterates the entire database. If the optional 'prefix' and 'start' arguments are provided, then the iteration is limited to the given subset of data.`,
 	}
+	dbInspectTrieCmd = &cli.Command{
+		Action:    inspectTrie,
+		Name:      "inspect-trie",
+		ArgsUsage: "<blocknum>",
+		Flags: slices.Concat([]cli.Flag{
+			utils.ExcludeStorageFlag,
+			inspectTrieTopFlag,
+			utils.OutputFileFlag,
+			inspectTrieDumpPathFlag,
+			inspectTrieSummarizeFlag,
+			inspectTrieContractFlag,
+		}, utils.NetworkFlags, utils.DatabaseFlags),
+		Usage: "Print detailed trie information about the structure of account trie and storage tries.",
+		Description: `This commands iterates the entrie trie-backed state. If the 'blocknum' is not specified,
+the latest block number will be used by default.`,
+	}
 	dbCheckStateContentCmd = &cli.Command{
 		Action: checkStateContent,
 		Name:   "check-state-content",
@@ -385,6 +421,88 @@ func checkStateContent(ctx *cli.Context) error {
 	return nil
 }
+
+func inspectTrie(ctx *cli.Context) error {
+	topN := ctx.Int(inspectTrieTopFlag.Name)
+	if topN <= 0 {
+		return fmt.Errorf("invalid --%s value %d (must be > 0)", inspectTrieTopFlag.Name, topN)
+	}
+	config := &trie.InspectConfig{
+		NoStorage: ctx.Bool(utils.ExcludeStorageFlag.Name),
+		TopN:      topN,
+		Path:      ctx.String(utils.OutputFileFlag.Name),
+	}
+	if summarizePath := ctx.String(inspectTrieSummarizeFlag.Name); summarizePath != "" {
+		if ctx.NArg() > 0 {
+			return fmt.Errorf("block number argument is not supported with --%s", inspectTrieSummarizeFlag.Name)
+		}
+		config.DumpPath = summarizePath
+		log.Info("Summarizing trie dump", "path", summarizePath, "top", topN)
+		return trie.Summarize(summarizePath, config)
+	}
+	if ctx.NArg() > 1 {
+		return fmt.Errorf("excessive number of arguments: %v", ctx.Command.ArgsUsage)
+	}
+	stack, _ := makeConfigNode(ctx)
+	db := utils.MakeChainDatabase(ctx, stack, false)
+	defer stack.Close()
+	defer db.Close()
+
+	var (
+		trieRoot common.Hash
+		hash     common.Hash
+		number   uint64
+	)
+	switch {
+	case ctx.NArg() == 0 || ctx.Args().Get(0) == "latest":
+		head := rawdb.ReadHeadHeaderHash(db)
+		n, ok := rawdb.ReadHeaderNumber(db, head)
+		if !ok {
+			return fmt.Errorf("could not load head block hash")
+		}
+		number = n
+	case ctx.Args().Get(0) == "snapshot":
+		trieRoot = rawdb.ReadSnapshotRoot(db)
+		number = math.MaxUint64
+	default:
+		var err error
+		number, err = strconv.ParseUint(ctx.Args().Get(0), 10, 64)
+		if err != nil {
+			return fmt.Errorf("failed to parse blocknum, Args[0]: %v, err: %v", ctx.Args().Get(0), err)
+		}
+	}
+	if number != math.MaxUint64 {
+		hash = rawdb.ReadCanonicalHash(db, number)
+		if hash == (common.Hash{}) {
+			return fmt.Errorf("canonical hash for block %d not found", number)
+		}
+		blockHeader := rawdb.ReadHeader(db, hash, number)
+		trieRoot = blockHeader.Root
+	}
+	if trieRoot == (common.Hash{}) {
+		log.Error("Empty root hash")
+	}
+	config.DumpPath = ctx.String(inspectTrieDumpPathFlag.Name)
+	if config.DumpPath == "" {
+		config.DumpPath = stack.ResolvePath("trie-dump.bin")
+	}
+	triedb := utils.MakeTrieDatabase(ctx, stack, db, false, true, false)
+	defer triedb.Close()
+
+	if contractAddr := ctx.String(inspectTrieContractFlag.Name); contractAddr != "" {
+		address := common.HexToAddress(contractAddr)
+		log.Info("Inspecting contract", "address", address, "root", trieRoot, "block", number)
+		return trie.InspectContract(triedb, db, trieRoot, address)
+	}
+	log.Info("Inspecting trie", "root", trieRoot, "block", number, "dump", config.DumpPath, "top", topN)
+	return trie.Inspect(triedb, trieRoot, config)
+}
+
 func showDBStats(db ethdb.KeyValueStater) {
 	stats, err := db.Stat()
 	if err != nil {
@@ -759,7 +877,7 @@ func showMetaData(ctx *cli.Context) error {
 		data = append(data, []string{"headHeader.Root", fmt.Sprintf("%v", h.Root)})
 		data = append(data, []string{"headHeader.Number", fmt.Sprintf("%d (%#x)", h.Number, h.Number)})
 	}
-	table := rawdb.NewTableWriter(os.Stdout)
+	table := tablewriter.NewWriter(os.Stdout)
 	table.SetHeader([]string{"Field", "Value"})
 	table.AppendBulk(data)
 	table.Render()


@@ -22,7 +22,6 @@ import (
 	"os"
 	"slices"
 	"sort"
-	"strconv"
 	"time"

 	"github.com/ethereum/go-ethereum/accounts"
@@ -216,6 +215,7 @@ var (
 		utils.MetricsInfluxDBUsernameFlag,
 		utils.MetricsInfluxDBPasswordFlag,
 		utils.MetricsInfluxDBTagsFlag,
+		utils.MetricsInfluxDBIntervalFlag,
 		utils.MetricsEnableInfluxDBV2Flag,
 		utils.MetricsInfluxDBTokenFlag,
 		utils.MetricsInfluxDBBucketFlag,
@@ -315,18 +315,6 @@ func prepare(ctx *cli.Context) {
 	case !ctx.IsSet(utils.NetworkIdFlag.Name):
 		log.Info("Starting Geth on Ethereum mainnet...")
 	}
-	// If we're a full node on mainnet without --cache specified, bump default cache allowance
-	if !ctx.IsSet(utils.CacheFlag.Name) && !ctx.IsSet(utils.NetworkIdFlag.Name) {
-		// Make sure we're not on any supported preconfigured testnet either
-		if !ctx.IsSet(utils.HoleskyFlag.Name) &&
-			!ctx.IsSet(utils.SepoliaFlag.Name) &&
-			!ctx.IsSet(utils.HoodiFlag.Name) &&
-			!ctx.IsSet(utils.DeveloperFlag.Name) {
-			// Nope, we're really on mainnet. Bump that cache up!
-			log.Info("Bumping default cache on mainnet", "provided", ctx.Int(utils.CacheFlag.Name), "updated", 4096)
-			ctx.Set(utils.CacheFlag.Name, strconv.Itoa(4096))
-		}
-	}
 }
 // geth is the main entry point into the system if no special subcommand is run.


@@ -36,6 +36,7 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
+	"github.com/ethereum/go-ethereum/triedb"
 	"github.com/urfave/cli/v2"
 )
@@ -105,7 +106,9 @@ information about the specified address.
 			Usage:     "Traverse the state with given root hash and perform quick verification",
 			ArgsUsage: "<root>",
 			Action:    traverseState,
-			Flags:     slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
+			Flags: slices.Concat([]cli.Flag{
+				utils.AccountFlag,
+			}, utils.NetworkFlags, utils.DatabaseFlags),
 			Description: `
geth snapshot traverse-state <state-root>
will traverse the whole state from the given state root and will abort if any
@@ -113,6 +116,8 @@ referenced trie node or contract code is missing. This command can be used for
state integrity verification. The default checking target is the HEAD state.
It's also usable without snapshot enabled.
+
+If --account is specified, only the storage trie of that account is traversed.
`,
 		},
 		{
@@ -120,7 +125,9 @@ It's also usable without snapshot enabled.
 			Usage:     "Traverse the state with given root hash and perform detailed verification",
 			ArgsUsage: "<root>",
 			Action:    traverseRawState,
-			Flags:     slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
+			Flags: slices.Concat([]cli.Flag{
+				utils.AccountFlag,
+			}, utils.NetworkFlags, utils.DatabaseFlags),
 			Description: `
geth snapshot traverse-rawstate <state-root>
will traverse the whole state from the given root and will abort if any referenced
@@ -129,6 +136,8 @@ verification. The default checking target is the HEAD state. It's basically identical
to traverse-state, but the check granularity is smaller.
It's also usable without snapshot enabled.
+
+If --account is specified, only the storage trie of that account is traversed.
`,
 		},
 		{
@@ -272,6 +281,120 @@ func checkDanglingStorage(ctx *cli.Context) error {
 	return snapshot.CheckDanglingStorage(db)
 }
+
+// parseAccount parses the account flag value as either an address (20 bytes)
+// or an account hash (32 bytes) and returns the hashed account key.
+func parseAccount(input string) (common.Hash, error) {
+	switch len(input) {
+	case 40, 42: // address
+		return crypto.Keccak256Hash(common.HexToAddress(input).Bytes()), nil
+	case 64, 66: // hash
+		return common.HexToHash(input), nil
+	default:
+		return common.Hash{}, errors.New("malformed account address or hash")
+	}
+}
+
+// lookupAccount resolves the account from the state trie using the given
+// account hash.
+func lookupAccount(accountHash common.Hash, tr *trie.Trie) (*types.StateAccount, error) {
+	accData, err := tr.Get(accountHash.Bytes())
+	if err != nil {
+		return nil, fmt.Errorf("failed to get account %s: %w", accountHash, err)
+	}
+	if accData == nil {
+		return nil, fmt.Errorf("account not found: %s", accountHash)
+	}
+	var acc types.StateAccount
+	if err := rlp.DecodeBytes(accData, &acc); err != nil {
+		return nil, fmt.Errorf("invalid account data %s: %w", accountHash, err)
+	}
+	return &acc, nil
+}
+
+func traverseStorage(id *trie.ID, db *triedb.Database, report bool, detail bool) error {
+	tr, err := trie.NewStateTrie(id, db)
+	if err != nil {
+		log.Error("Failed to open storage trie", "account", id.Owner, "root", id.Root, "err", err)
+		return err
+	}
+	var (
+		slots      int
+		nodes      int
+		lastReport time.Time
+		start      = time.Now()
+	)
+	it, err := tr.NodeIterator(nil)
+	if err != nil {
+		log.Error("Failed to open storage iterator", "account", id.Owner, "root", id.Root, "err", err)
+		return err
+	}
+	logger := log.Debug
+	if report {
+		logger = log.Info
+	}
+	logger("Start traversing storage trie", "account", id.Owner, "storageRoot", id.Root)
+	if !detail {
+		iter := trie.NewIterator(it)
+		for iter.Next() {
+			slots += 1
+			if time.Since(lastReport) > time.Second*8 {
+				logger("Traversing storage", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
+				lastReport = time.Now()
+			}
+		}
+		if iter.Err != nil {
+			log.Error("Failed to traverse storage trie", "root", id.Root, "err", iter.Err)
+			return iter.Err
+		}
+		logger("Storage is complete", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
+	} else {
+		reader, err := db.NodeReader(id.StateRoot)
+		if err != nil {
+			log.Error("Failed to open state reader", "err", err)
+			return err
+		}
+		var (
+			buffer = make([]byte, 32)
+			hasher = crypto.NewKeccakState()
+		)
+		for it.Next(true) {
+			nodes += 1
+			node := it.Hash()
+			// Check the presence for non-empty hash node(embedded node doesn't
+			// have their own hash).
+			if node != (common.Hash{}) {
+				blob, _ := reader.Node(id.Owner, it.Path(), node)
+				if len(blob) == 0 {
+					log.Error("Missing trie node(storage)", "hash", node)
+					return errors.New("missing storage")
+				}
+				hasher.Reset()
+				hasher.Write(blob)
+				hasher.Read(buffer)
+				if !bytes.Equal(buffer, node.Bytes()) {
+					log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
+					return errors.New("invalid storage node")
+				}
+			}
+			if it.Leaf() {
+				slots += 1
+			}
+			if time.Since(lastReport) > time.Second*8 {
+				logger("Traversing storage", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
+				lastReport = time.Now()
+			}
+		}
+		if err := it.Error(); err != nil {
+			log.Error("Failed to traverse storage trie", "root", id.Root, "err", err)
+			return err
+		}
+		logger("Storage is complete", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
+	}
+	return nil
+}
+
 // traverseState is a helper function used for pruning verification.
 // Basically it just iterates the trie, ensure all nodes and associated
 // contract codes are present.
@@ -309,6 +432,30 @@ func traverseState(ctx *cli.Context) error {
 		root = headBlock.Root()
 		log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
 	}
+	// If --account is specified, only traverse the storage trie of that account.
+	if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
+		accountHash, err := parseAccount(accountStr)
+		if err != nil {
+			log.Error("Failed to parse account", "err", err)
+			return err
+		}
+		// Use raw trie since the account key is already hashed.
+		t, err := trie.New(trie.StateTrieID(root), triedb)
+		if err != nil {
+			log.Error("Failed to open state trie", "root", root, "err", err)
+			return err
+		}
+		acc, err := lookupAccount(accountHash, t)
+		if err != nil {
+			log.Error("Failed to look up account", "hash", accountHash, "err", err)
+			return err
+		}
+		if acc.Root == types.EmptyRootHash {
+			log.Info("Account has no storage", "hash", accountHash)
+			return nil
+		}
+		return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, false)
+	}
 	t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
 	if err != nil {
 		log.Error("Failed to open trie", "root", root, "err", err)
@@ -335,30 +482,10 @@ func traverseState(ctx *cli.Context) error {
 			return err
 		}
 		if acc.Root != types.EmptyRootHash {
-			id := trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root)
-			storageTrie, err := trie.NewStateTrie(id, triedb)
+			err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root), triedb, false, false)
 			if err != nil {
-				log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
 				return err
 			}
-			storageIt, err := storageTrie.NodeIterator(nil)
-			if err != nil {
-				log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
-				return err
-			}
-			storageIter := trie.NewIterator(storageIt)
-			for storageIter.Next() {
-				slots += 1
-				if time.Since(lastReport) > time.Second*8 {
-					log.Info("Traversing state", "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
-					lastReport = time.Now()
-				}
-			}
-			if storageIter.Err != nil {
-				log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Err)
-				return storageIter.Err
-			}
 		}
 		if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
 			if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
@@ -418,6 +545,30 @@ func traverseRawState(ctx *cli.Context) error {
 		root = headBlock.Root()
 		log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
 	}
+	// If --account is specified, only traverse the storage trie of that account.
+	if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
+		accountHash, err := parseAccount(accountStr)
+		if err != nil {
+			log.Error("Failed to parse account", "err", err)
+			return err
+		}
+		// Use raw trie since the account key is already hashed.
+		t, err := trie.New(trie.StateTrieID(root), triedb)
+		if err != nil {
+			log.Error("Failed to open state trie", "root", root, "err", err)
+			return err
+		}
+		acc, err := lookupAccount(accountHash, t)
+		if err != nil {
+			log.Error("Failed to look up account", "hash", accountHash, "err", err)
+			return err
+		}
+		if acc.Root == types.EmptyRootHash {
+			log.Info("Account has no storage", "hash", accountHash)
+			return nil
+		}
+		return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, true)
+	}
 	t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
 	if err != nil {
 		log.Error("Failed to open trie", "root", root, "err", err)
@@ -473,50 +624,10 @@ func traverseRawState(ctx *cli.Context) error {
 			return errors.New("invalid account")
 		}
 		if acc.Root != types.EmptyRootHash {
-			id := trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root)
-			storageTrie, err := trie.NewStateTrie(id, triedb)
+			err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root), triedb, false, true)
 			if err != nil {
-				log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
-				return errors.New("missing storage trie")
-			}
-			storageIter, err := storageTrie.NodeIterator(nil)
-			if err != nil {
-				log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
 				return err
 			}
-			for storageIter.Next(true) {
-				nodes += 1
-				node := storageIter.Hash()
-				// Check the presence for non-empty hash node(embedded node doesn't
-				// have their own hash).
-				if node != (common.Hash{}) {
-					blob, _ := reader.Node(common.BytesToHash(accIter.LeafKey()), storageIter.Path(), node)
-					if len(blob) == 0 {
-						log.Error("Missing trie node(storage)", "hash", node)
-						return errors.New("missing storage")
-					}
-					hasher.Reset()
-					hasher.Write(blob)
-					hasher.Read(got)
-					if !bytes.Equal(got, node.Bytes()) {
-						log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
-						return errors.New("invalid storage node")
-					}
-				}
-				// Bump the counter if it's leaf node.
-				if storageIter.Leaf() {
-					slots += 1
-				}
-				if time.Since(lastReport) > time.Second*8 {
-					log.Info("Traversing state", "nodes", nodes, "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
-					lastReport = time.Now()
-				}
-			}
-			if storageIter.Error() != nil {
-				log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Error())
-				return storageIter.Error()
-			}
 		}
 		if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
 			if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {


@@ -15,6 +15,7 @@
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
 //go:build example
+// +build example
 package main


@@ -14,8 +14,8 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-//go:build wasm
+//go:build wasm && !womir
-// +build wasm
+// +build wasm,!womir
 package main


@@ -0,0 +1,49 @@
+// Copyright 2026 The go-ethereum Authors
+// This file is part of the go-ethereum library.
+//
+// The go-ethereum library is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Lesser General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// The go-ethereum library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Lesser General Public License for more details.
+//
+// You should have received a copy of the GNU Lesser General Public License
+// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
+
+//go:build womir
+
+package main
+
+import "unsafe"
+
+// These match the WOMIR guest-io imports (env module).
+// Protocol: __hint_input prepares next item, __hint_buffer reads words.
+// Each item has format: [byte_len_u32_le, ...data_words_padded_to_4bytes]
+//
+//go:wasmimport env __hint_input
+func hintInput()
+
+//go:wasmimport env __hint_buffer
+func hintBuffer(ptr unsafe.Pointer, numWords uint32)
+
+func readWord() uint32 {
+	var buf [4]byte
+	hintBuffer(unsafe.Pointer(&buf[0]), 1)
+	return uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
+}
+
+func readBytes() []byte {
+	hintInput()
+	byteLen := readWord()
+	numWords := (byteLen + 3) / 4
+	data := make([]byte, numWords*4)
+	hintBuffer(unsafe.Pointer(&data[0]), numWords)
+	return data[:byteLen]
+}
+
+// getInput reads the RLP-encoded Payload from the WOMIR hint stream.
+func getInput() []byte {
+	return readBytes()
+}


@@ -13,11 +13,11 @@ require (
 	github.com/bits-and-blooms/bitset v1.20.0 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/consensys/gnark-crypto v0.18.1 // indirect
-	github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
+	github.com/crate-crypto/go-eth-kzg v1.5.0 // indirect
 	github.com/deckarep/golang-set/v2 v2.6.0 // indirect
 	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
 	github.com/emicklei/dot v1.6.2 // indirect
-	github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
+	github.com/ethereum/c-kzg-4844/v2 v2.1.6 // indirect
 	github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab // indirect
 	github.com/ferranbt/fastssz v0.1.4 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
@@ -31,16 +31,16 @@ require (
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/mitchellh/mapstructure v1.4.1 // indirect
 	github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible // indirect
-	github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe // indirect
+	github.com/supranational/blst v0.3.16 // indirect
 	github.com/tklauser/go-sysconf v0.3.12 // indirect
 	github.com/tklauser/numcpus v0.6.1 // indirect
 	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
-	go.opentelemetry.io/otel v1.39.0 // indirect
+	go.opentelemetry.io/otel v1.40.0 // indirect
-	go.opentelemetry.io/otel/metric v1.39.0 // indirect
+	go.opentelemetry.io/otel/metric v1.40.0 // indirect
-	go.opentelemetry.io/otel/trace v1.39.0 // indirect
+	go.opentelemetry.io/otel/trace v1.40.0 // indirect
 	golang.org/x/crypto v0.44.0 // indirect
 	golang.org/x/sync v0.18.0 // indirect
-	golang.org/x/sys v0.39.0 // indirect
+	golang.org/x/sys v0.40.0 // indirect
 	gopkg.in/yaml.v2 v2.4.0 // indirect
 )


@@ -28,8 +28,8 @@ github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAK
 github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
 github.com/consensys/gnark-crypto v0.18.1 h1:RyLV6UhPRoYYzaFnPQA4qK3DyuDgkTgskDdoGqFt3fI=
 github.com/consensys/gnark-crypto v0.18.1/go.mod h1:L3mXGFTe1ZN+RSJ+CLjUt9x7PNdx8ubaYfDROyp2Z8c=
-github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
+github.com/crate-crypto/go-eth-kzg v1.5.0 h1:FYRiJMJG2iv+2Dy3fi14SVGjcPteZ5HAAUe4YWlJygc=
-github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
+github.com/crate-crypto/go-eth-kzg v1.5.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/deckarep/golang-set/v2 v2.6.0 h1:XfcQbWM1LlMB8BsJ8N9vW5ehnnPVIw0je80NsVHagjM=
@@ -40,8 +40,8 @@ github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 h1:YLtO71vCjJRCBcrPMtQ9nqBsqpA1
 github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
 github.com/emicklei/dot v1.6.2 h1:08GN+DD79cy/tzN6uLCT84+2Wk9u+wvqP+Hkx/dIR8A=
 github.com/emicklei/dot v1.6.2/go.mod h1:DeV7GvQtIw4h2u73RKBkkFdvVAz0D9fzeJrgPW6gy/s=
-github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
+github.com/ethereum/c-kzg-4844/v2 v2.1.6 h1:xQymkKCT5E2Jiaoqf3v4wsNgjZLY0lRSkZn27fRjSls=
-github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
+github.com/ethereum/c-kzg-4844/v2 v2.1.6/go.mod h1:8HMkUZ5JRv4hpw/XUrYWSQNAUzhHMg2UDb/U+5m+XNw=
 github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab h1:rvv6MJhy07IMfEKuARQ9TKojGqLVNxQajaXEp/BoqSk=
 github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab/go.mod h1:IuLm4IsPipXKF7CW5Lzf68PIbZ5yl7FFd74l/E0o9A8=
 github.com/ferranbt/fastssz v0.1.4 h1:OCDB+dYDEQDvAgtAGnTSidK1Pe2tW3nFV40XyMkTeDY=
@@ -111,22 +111,22 @@ github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible h1
 github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
-github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe h1:nbdqkIGOGfUAD54q1s2YBcBz/WcsxCO9HUQ4aGV5hUw=
+github.com/supranational/blst v0.3.16 h1:bTDadT+3fK497EvLdWRQEjiGnUtzJ7jjIUMF0jqwYhE=
-github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
+github.com/supranational/blst v0.3.16/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
 github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
 github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
 github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
 github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
+go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
-go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
+go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
-go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
+go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
-go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
+go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
-go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
+go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
-go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
+go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
-go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
+go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
-go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
+go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
 golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
 golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
 golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df h1:UA2aFVmmsIlefxMk29Dp2juaUSth8Pyn3Tq5Y5mJGME=
@@ -137,8 +137,8 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM= golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM= golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=

View file

@@ -14,8 +14,8 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

-//go:build !example && !ziren && !wasm
-// +build !example,!ziren,!wasm
+//go:build !example && !ziren && !wasm && !womir
+// +build !example,!ziren,!wasm,!womir

 package main

View file

@@ -274,40 +274,66 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
 		reported = time.Now()
 		imported = 0
 		h        = sha256.New()
-		scratch  = bytes.NewBuffer(nil)
+		buf      = bytes.NewBuffer(nil)
 	)
 	for i, file := range entries {
 		err := func() error {
 			path := filepath.Join(dir, file)

-			// validate against checksum file in directory
+			// Validate against checksum file in directory.
 			f, err := os.Open(path)
 			if err != nil {
 				return fmt.Errorf("open %s: %w", path, err)
 			}
 			defer f.Close()
 			if _, err := io.Copy(h, f); err != nil {
 				return fmt.Errorf("checksum %s: %w", path, err)
 			}
-			got := common.BytesToHash(h.Sum(scratch.Bytes()[:])).Hex()
-			want := checksums[i]
+			got := common.BytesToHash(h.Sum(buf.Bytes()[:])).Hex()
 			h.Reset()
-			scratch.Reset()
-			if got != want {
-				return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, want)
+			buf.Reset()
+			if got != checksums[i] {
+				return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, checksums[i])
 			}

 			// Import all block data from Era1.
 			e, err := from(f)
 			if err != nil {
 				return fmt.Errorf("error opening era: %w", err)
 			}
+			defer e.Close()
 			it, err := e.Iterator()
 			if err != nil {
 				return fmt.Errorf("error creating iterator: %w", err)
 			}
+			var (
+				blocks       = make([]*types.Block, 0, importBatchSize)
+				receiptsList = make([]types.Receipts, 0, importBatchSize)
+				flush        = func() error {
+					if len(blocks) == 0 {
+						return nil
+					}
+					enc := types.EncodeBlockReceiptLists(receiptsList)
+					if _, err := chain.InsertReceiptChain(blocks, enc, math.MaxUint64); err != nil {
+						return fmt.Errorf("error inserting blocks %d-%d: %w",
+							blocks[0].NumberU64(), blocks[len(blocks)-1].NumberU64(), err)
+					}
+					imported += len(blocks)
+					if time.Since(reported) >= 8*time.Second {
+						head := blocks[len(blocks)-1].NumberU64()
+						log.Info("Importing Era files", "head", head, "imported", imported,
+							"elapsed", common.PrettyDuration(time.Since(start)))
+						imported = 0
+						reported = time.Now()
+					}
+					blocks = blocks[:0]
+					receiptsList = receiptsList[:0]
+					return nil
+				}
+			)
 			for it.Next() {
 				block, err := it.Block()
 				if err != nil {
@@ -320,23 +346,18 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
 				if err != nil {
 					return fmt.Errorf("error reading receipts %d: %w", it.Number(), err)
 				}
-				enc := types.EncodeBlockReceiptLists([]types.Receipts{receipts})
-				if _, err := chain.InsertReceiptChain([]*types.Block{block}, enc, math.MaxUint64); err != nil {
-					return fmt.Errorf("error inserting body %d: %w", it.Number(), err)
-				}
-				imported++
-				if time.Since(reported) >= 8*time.Second {
-					log.Info("Importing Era files", "head", it.Number(), "imported", imported,
-						"elapsed", common.PrettyDuration(time.Since(start)))
-					imported = 0
-					reported = time.Now()
+				blocks = append(blocks, block)
+				receiptsList = append(receiptsList, receipts)
+				if len(blocks) == importBatchSize {
+					if err := flush(); err != nil {
+						return err
+					}
 				}
 			}
 			if err := it.Error(); err != nil {
 				return err
 			}
-			return nil
+			return flush()
 		}()
 		if err != nil {
 			return err

View file

@@ -218,7 +218,15 @@ var (
 		Usage: "Max number of elements (0 = no limit)",
 		Value: 0,
 	}
+	AccountFlag = &cli.StringFlag{
+		Name:  "account",
+		Usage: "Specifies the account address or hash to traverse a single storage trie",
+	}
+	OutputFileFlag = &cli.StringFlag{
+		Name:  "output",
+		Usage: "Writes the result in json to the output",
+		Value: "",
+	}
 	SnapshotFlag = &cli.BoolFlag{
 		Name:  "snapshot",
 		Usage: `Enables snapshot-database mode (default = enable)`,
@@ -315,7 +323,7 @@ var (
 	}
 	ChainHistoryFlag = &cli.StringFlag{
 		Name:     "history.chain",
-		Usage:    `Blockchain history retention ("all" or "postmerge")`,
+		Usage:    `Blockchain history retention ("all", "postmerge", or "postprague")`,
 		Value:    ethconfig.Defaults.HistoryMode.String(),
 		Category: flags.StateCategory,
 	}
@@ -480,8 +488,8 @@ var (
 	// Performance tuning settings
 	CacheFlag = &cli.IntFlag{
 		Name:     "cache",
-		Usage:    "Megabytes of memory allocated to internal caching (default = 4096 mainnet full node, 128 light mode)",
-		Value:    1024,
+		Usage:    "Megabytes of memory allocated to internal caching",
+		Value:    4096,
 		Category: flags.PerfCategory,
 	}
 	CacheDatabaseFlag = &cli.IntFlag{
@@ -1016,6 +1024,13 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
 		Category: flags.MetricsCategory,
 	}
+	MetricsInfluxDBIntervalFlag = &cli.DurationFlag{
+		Name:     "metrics.influxdb.interval",
+		Usage:    "Interval between metrics reports to InfluxDB (with time unit, e.g. 10s)",
+		Value:    metrics.DefaultConfig.InfluxDBInterval,
+		Category: flags.MetricsCategory,
+	}
 	MetricsEnableInfluxDBV2Flag = &cli.BoolFlag{
 		Name:  "metrics.influxdbv2",
 		Usage: "Enable metrics export/push to an external InfluxDB v2 database",
@@ -1569,7 +1584,9 @@ func setOpenTelemetry(ctx *cli.Context, cfg *node.Config) {
 	if ctx.IsSet(RPCTelemetryTagsFlag.Name) {
 		tcfg.Tags = ctx.String(RPCTelemetryTagsFlag.Name)
 	}
-	tcfg.SampleRatio = ctx.Float64(RPCTelemetrySampleRatioFlag.Name)
+	if ctx.IsSet(RPCTelemetrySampleRatioFlag.Name) {
+		tcfg.SampleRatio = ctx.Float64(RPCTelemetrySampleRatioFlag.Name)
+	}
 	if tcfg.Endpoint != "" && !tcfg.Enabled {
 		log.Warn(fmt.Sprintf("OpenTelemetry endpoint configured but telemetry is not enabled, use --%s to enable.", RPCTelemetryFlag.Name))
@@ -2246,13 +2263,14 @@ func SetupMetrics(cfg *metrics.Config) {
 		bucket       = cfg.InfluxDBBucket
 		organization = cfg.InfluxDBOrganization
 		tagsMap      = SplitTagsFlag(cfg.InfluxDBTags)
+		interval     = cfg.InfluxDBInterval
 	)
 	if enableExport {
-		log.Info("Enabling metrics export to InfluxDB")
-		go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
+		log.Info("Enabling metrics export to InfluxDB", "interval", interval)
+		go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, interval, endpoint, database, username, password, "geth.", tagsMap)
 	} else if enableExportV2 {
-		log.Info("Enabling metrics export to InfluxDB (v2)")
-		go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, token, bucket, organization, "geth.", tagsMap)
+		log.Info("Enabling metrics export to InfluxDB (v2)", "interval", interval)
+		go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, interval, endpoint, token, bucket, organization, "geth.", tagsMap)
 	}
 	// Expvar exporter.
@@ -2457,8 +2475,6 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
 	}
 	vmcfg := vm.Config{
 		EnablePreimageRecording: ctx.Bool(VMEnableDebugFlag.Name),
-		EnableWitnessStats:      ctx.Bool(VMWitnessStatsFlag.Name),
-		StatelessSelfValidation: ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name),
 	}
 	if ctx.IsSet(VMTraceFlag.Name) {
 		if name := ctx.String(VMTraceFlag.Name); name != "" {
@@ -2472,6 +2488,9 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
 	}
 	options.VmConfig = vmcfg
+	options.StatelessSelfValidation = ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name)
+	options.EnableWitnessStats = ctx.Bool(VMWitnessStatsFlag.Name)
 	chain, err := core.NewBlockChain(chainDb, gspec, engine, options)
 	if err != nil {
 		Fatalf("Can't create BlockChain: %v", err)

View file

@@ -155,7 +155,9 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
 		}
 		cfg.historyPruneBlock = new(uint64)
-		*cfg.historyPruneBlock = history.PrunePoints[params.MainnetGenesisHash].BlockNumber
+		if p, err := history.NewPolicy(history.KeepPostMerge, params.MainnetGenesisHash); err == nil {
+			*cfg.historyPruneBlock = p.Target.BlockNumber
+		}
 	case ctx.Bool(testSepoliaFlag.Name):
 		cfg.fsys = builtinTestFiles
 		if ctx.IsSet(filterQueryFileFlag.Name) {
@@ -180,7 +182,9 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
 		}
 		cfg.historyPruneBlock = new(uint64)
-		*cfg.historyPruneBlock = history.PrunePoints[params.SepoliaGenesisHash].BlockNumber
+		if p, err := history.NewPolicy(history.KeepPostMerge, params.SepoliaGenesisHash); err == nil {
+			*cfg.historyPruneBlock = p.Target.BlockNumber
+		}
 	default:
 		cfg.fsys = os.DirFS(".")
 		cfg.filterQueryFile = ctx.String(filterQueryFileFlag.Name)

View file

@@ -17,6 +17,7 @@
 package beacon

 import (
+	"context"
 	"errors"
 	"fmt"
 	"math/big"
@@ -29,6 +30,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/tracing"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/core/vm"
+	"github.com/ethereum/go-ethereum/internal/telemetry"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/trie"
 	"github.com/holiman/uint256"
@@ -272,6 +274,24 @@ func (beacon *Beacon) verifyHeader(chain consensus.ChainHeaderReader, header, pa
 			return err
 		}
 	}
+	// Verify the existence / non-existence of Amsterdam-specific header fields
+	amsterdam := chain.Config().IsAmsterdam(header.Number, header.Time)
+	if amsterdam {
+		if header.BlockAccessListHash == nil {
+			return errors.New("header is missing block access list hash")
+		}
+		if header.SlotNumber == nil {
+			return errors.New("header is missing slotNumber")
+		}
+	} else {
+		if header.BlockAccessListHash != nil {
+			return fmt.Errorf("invalid block access list hash: have %x, expected nil", *header.BlockAccessListHash)
+		}
+		if header.SlotNumber != nil {
+			return fmt.Errorf("invalid slotNumber: have %d, expected nil", *header.SlotNumber)
+		}
+	}
 	return nil
 }
@@ -343,9 +363,17 @@ func (beacon *Beacon) Finalize(chain consensus.ChainHeaderReader, header *types.
 // FinalizeAndAssemble implements consensus.Engine, setting the final state and
 // assembling the block.
-func (beacon *Beacon) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
+func (beacon *Beacon) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (result *types.Block, err error) {
+	ctx, _, spanEnd := telemetry.StartSpan(ctx, "consensus.beacon.FinalizeAndAssemble",
+		telemetry.Int64Attribute("block.number", int64(header.Number.Uint64())),
+		telemetry.Int64Attribute("txs.count", int64(len(body.Transactions))),
+		telemetry.Int64Attribute("withdrawals.count", int64(len(body.Withdrawals))),
+	)
+	defer spanEnd(&err)
 	if !beacon.IsPoSHeader(header) {
-		return beacon.ethone.FinalizeAndAssemble(chain, header, state, body, receipts)
+		block, delegateErr := beacon.ethone.FinalizeAndAssemble(ctx, chain, header, state, body, receipts)
+		return block, delegateErr
 	}
 	shanghai := chain.Config().IsShanghai(header.Number, header.Time)
 	if shanghai {
@@ -359,13 +387,20 @@ func (beacon *Beacon) FinalizeAndAssemble(ctx context.Context, chain consensus.C
 		}
 	}
 	// Finalize and assemble the block.
+	_, _, finalizeSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.Finalize")
 	beacon.Finalize(chain, header, state, body)
+	finalizeSpanEnd(nil)

 	// Assign the final state root to header.
+	_, _, rootSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.IntermediateRoot")
 	header.Root = state.IntermediateRoot(true)
+	rootSpanEnd(nil)

 	// Assemble the final block.
-	return types.NewBlock(header, body, receipts, trie.NewStackTrie(nil)), nil
+	_, _, blockSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.NewBlock")
+	block := types.NewBlock(header, body, receipts, trie.NewStackTrie(nil))
+	blockSpanEnd(nil)
+	return block, nil
 }

 // Seal generates a new sealing request for the given input block and pushes

View file

@@ -19,6 +19,7 @@ package clique

 import (
 	"bytes"
+	"context"
 	"errors"
 	"fmt"
 	"io"
@@ -310,6 +311,8 @@ func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
 		return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
 	case header.ParentBeaconRoot != nil:
 		return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
+	case header.SlotNumber != nil:
+		return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
 	}
 	// All basic checks passed, verify cascading fields
 	return c.verifyCascadingFields(chain, header, parents)
@@ -579,7 +582,7 @@ func (c *Clique) Finalize(chain consensus.ChainHeaderReader, header *types.Heade
 // FinalizeAndAssemble implements consensus.Engine, ensuring no uncles are set,
 // nor block rewards given, and returns the final block.
-func (c *Clique) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
+func (c *Clique) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
 	if len(body.Withdrawals) > 0 {
 		return nil, errors.New("clique does not support withdrawals")
 	}
@@ -694,6 +697,9 @@ func encodeSigHeader(w io.Writer, header *types.Header) {
 	if header.ParentBeaconRoot != nil {
 		panic("unexpected parent beacon root value in clique")
 	}
+	if header.SlotNumber != nil {
+		panic("unexpected slot number value in clique")
+	}
 	if err := rlp.Encode(w, enc); err != nil {
 		panic("can't encode: " + err.Error())
 	}

View file

@@ -18,6 +18,7 @@
 package consensus

 import (
+	"context"
 	"math/big"

 	"github.com/ethereum/go-ethereum/common"
@@ -92,7 +93,7 @@ type Engine interface {
 	//
 	// Note: The block header and state database might be updated to reflect any
 	// consensus rules that happen at finalization (e.g. block rewards).
-	FinalizeAndAssemble(chain ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error)
+	FinalizeAndAssemble(ctx context.Context, chain ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error)

 	// Seal generates a new sealing request for the given input block and pushes
 	// the result into the given channel.

View file

@@ -17,6 +17,7 @@
 package ethash

 import (
+	"context"
 	"errors"
 	"fmt"
 	"math/big"
@@ -283,6 +284,8 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, pa
 		return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
 	case header.ParentBeaconRoot != nil:
 		return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
+	case header.SlotNumber != nil:
+		return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
 	}
 	// Add some fake checks for tests
 	if ethash.fakeDelay != nil {
@@ -511,7 +514,7 @@ func (ethash *Ethash) Finalize(chain consensus.ChainHeaderReader, header *types.
 // FinalizeAndAssemble implements consensus.Engine, accumulating the block and
 // uncle rewards, setting the final state and assembling the block.
-func (ethash *Ethash) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
+func (ethash *Ethash) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
 	if len(body.Withdrawals) > 0 {
 		return nil, errors.New("ethash does not support withdrawals")
 	}
@@ -559,6 +562,9 @@ func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
 	if header.ParentBeaconRoot != nil {
 		panic("parent beacon root set on ethash")
 	}
+	if header.SlotNumber != nil {
+		panic("slot number set on ethash")
+	}
 	rlp.Encode(hasher, enc)
 	hasher.Sum(hash[:0])
 	return hash

View file

@@ -282,7 +282,7 @@ func (c *Console) AutoCompleteInput(line string, pos int) (string, []string, str
 	for ; start > 0; start-- {
 		// Skip all methods and namespaces (i.e. including the dot)
 		c := line[start]
-		if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '1' && c <= '9') {
+		if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') {
 			continue
 		}
 		// We've hit an unexpected character, autocomplete form here

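The one-character fix above widens the digit class from `'1'..'9'` to `'0'..'9'`; before it, the backward scan stopped at any `0` inside an identifier and truncated the autocompletion prefix. A tiny sketch of the corrected character class (the `isIdentChar` helper is hypothetical, extracted here for illustration):

```go
package main

import "fmt"

// isIdentChar reports whether c may appear in a dotted method/namespace path,
// using the corrected '0'..'9' digit range from the diff above.
func isIdentChar(c byte) bool {
	return c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9')
}

func main() {
	// '0' was the only digit rejected by the old '1'..'9' range.
	fmt.Println(isIdentChar('0'), isIdentChar('9'), isIdentChar(' ')) // → true true false
}
```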
View file

@ -93,8 +93,6 @@ var (
accountReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/account/single/reads", nil) accountReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/account/single/reads", nil)
storageReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/storage/single/reads", nil) storageReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/storage/single/reads", nil)
codeReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/code/single/reads", nil) codeReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/code/single/reads", nil)
snapshotCommitTimer = metrics.NewRegisteredResettingTimer("chain/snapshot/commits", nil)
triedbCommitTimer = metrics.NewRegisteredResettingTimer("chain/triedb/commits", nil) triedbCommitTimer = metrics.NewRegisteredResettingTimer("chain/triedb/commits", nil)
blockInsertTimer = metrics.NewRegisteredResettingTimer("chain/inserts", nil) blockInsertTimer = metrics.NewRegisteredResettingTimer("chain/inserts", nil)
@ -196,9 +194,8 @@ type BlockChainConfig struct {
SnapshotNoBuild bool // Whether the background generation is allowed SnapshotNoBuild bool // Whether the background generation is allowed
SnapshotWait bool // Wait for snapshot construction on startup. TODO(karalabe): This is a dirty hack for testing, nuke it SnapshotWait bool // Wait for snapshot construction on startup. TODO(karalabe): This is a dirty hack for testing, nuke it
// This defines the cutoff block for history expiry. // HistoryPolicy defines the chain history pruning intent.
// Blocks before this number may be unavailable in the chain database. HistoryPolicy history.HistoryPolicy
ChainHistoryMode history.HistoryMode
// Misc options // Misc options
NoPrefetch bool // Whether to disable heuristic state prefetching when processing blocks NoPrefetch bool // Whether to disable heuristic state prefetching when processing blocks
@ -219,6 +216,10 @@ type BlockChainConfig struct {
// detailed statistics will be logged. Negative value means disabled (default), // detailed statistics will be logged. Negative value means disabled (default),
// zero logs all blocks, positive value filters blocks by execution time. // zero logs all blocks, positive value filters blocks by execution time.
SlowBlockThreshold time.Duration SlowBlockThreshold time.Duration
// Execution configs
StatelessSelfValidation bool // Generate execution witnesses and self-check against them (testing purpose)
EnableWitnessStats bool // Whether trie access statistics collection is enabled
} }
// DefaultConfig returns the default config. // DefaultConfig returns the default config.
@ -231,7 +232,7 @@ func DefaultConfig() *BlockChainConfig {
StateScheme: rawdb.HashScheme, StateScheme: rawdb.HashScheme,
SnapshotLimit: 256, SnapshotLimit: 256,
SnapshotWait: true, SnapshotWait: true,
ChainHistoryMode: history.KeepAll, HistoryPolicy: history.HistoryPolicy{Mode: history.KeepAll},
// Transaction indexing is disabled by default. // Transaction indexing is disabled by default.
// This is appropriate for most unit tests. // This is appropriate for most unit tests.
TxLookupLimit: -1, TxLookupLimit: -1,
@ -322,7 +323,7 @@ type BlockChain struct {
lastWrite uint64 // Last block when the state was flushed lastWrite uint64 // Last block when the state was flushed
flushInterval atomic.Int64 // Time interval (processing time) after which to flush a state flushInterval atomic.Int64 // Time interval (processing time) after which to flush a state
triedb *triedb.Database // The database handler for maintaining trie nodes. triedb *triedb.Database // The database handler for maintaining trie nodes.
statedb *state.CachingDB // State database to reuse between imports (contains state cache) codedb *state.CodeDB // The database handler for maintaining contract codes.
txIndexer *txIndexer // Transaction indexer, might be nil if not enabled txIndexer *txIndexer // Transaction indexer, might be nil if not enabled
hc *HeaderChain hc *HeaderChain
@ -404,6 +405,7 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
cfg: cfg, cfg: cfg,
db: db, db: db,
triedb: triedb, triedb: triedb,
codedb: state.NewCodeDB(db),
triegc: prque.New[int64, common.Hash](nil), triegc: prque.New[int64, common.Hash](nil),
chainmu: syncx.NewClosableMutex(), chainmu: syncx.NewClosableMutex(),
bodyCache: lru.NewCache[common.Hash, *types.Body](bodyCacheLimit), bodyCache: lru.NewCache[common.Hash, *types.Body](bodyCacheLimit),
@ -420,7 +422,6 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
return nil, err return nil, err
} }
bc.flushInterval.Store(int64(cfg.TrieTimeLimit)) bc.flushInterval.Store(int64(cfg.TrieTimeLimit))
bc.statedb = state.NewDatabase(bc.triedb, nil)
bc.validator = NewBlockValidator(chainConfig, bc) bc.validator = NewBlockValidator(chainConfig, bc)
bc.prefetcher = newStatePrefetcher(chainConfig, bc.hc) bc.prefetcher = newStatePrefetcher(chainConfig, bc.hc)
bc.processor = NewStateProcessor(bc.hc) bc.processor = NewStateProcessor(bc.hc)
@ -597,9 +598,6 @@ func (bc *BlockChain) setupSnapshot() {
AsyncBuild: !bc.cfg.SnapshotWait, AsyncBuild: !bc.cfg.SnapshotWait,
} }
bc.snaps, _ = snapshot.New(snapconfig, bc.db, bc.triedb, head.Root) bc.snaps, _ = snapshot.New(snapconfig, bc.db, bc.triedb, head.Root)
// Re-initialize the state database with snapshot
bc.statedb = state.NewDatabase(bc.triedb, bc.snaps)
} }
} }
@ -717,45 +715,43 @@ func (bc *BlockChain) loadLastState() error {
// initializeHistoryPruning sets bc.historyPrunePoint. // initializeHistoryPruning sets bc.historyPrunePoint.
func (bc *BlockChain) initializeHistoryPruning(latest uint64) error { func (bc *BlockChain) initializeHistoryPruning(latest uint64) error {
freezerTail, _ := bc.db.Tail() freezerTail, _ := bc.db.Tail()
policy := bc.cfg.HistoryPolicy
switch bc.cfg.ChainHistoryMode { switch policy.Mode {
 	case history.KeepAll:
-		if freezerTail == 0 {
-			return nil
+		if freezerTail > 0 {
+			// Database was pruned externally. Record the actual state.
+			log.Warn("Chain history database is pruned", "tail", freezerTail, "mode", policy.Mode)
+			bc.historyPrunePoint.Store(&history.PrunePoint{
+				BlockNumber: freezerTail,
+				BlockHash:   bc.GetCanonicalHash(freezerTail),
+			})
 		}
-		// The database was pruned somehow, so we need to figure out if it's a known
-		// configuration or an error.
-		predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
-		if predefinedPoint == nil || freezerTail != predefinedPoint.BlockNumber {
-			log.Error("Chain history database is pruned with unknown configuration", "tail", freezerTail)
-			return errors.New("unexpected database tail")
-		}
-		bc.historyPrunePoint.Store(predefinedPoint)
 		return nil
-	case history.KeepPostMerge:
-		if freezerTail == 0 && latest != 0 {
-			// This is the case where a user is trying to run with --history.chain
-			// postmerge directly on an existing DB. We could just trigger the pruning
-			// here, but it'd be a bit dangerous since they may not have intended this
-			// action to happen. So just tell them how to do it.
-			log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned.", bc.cfg.ChainHistoryMode.String()))
-			log.Error(fmt.Sprintf("Run 'geth prune-history' to prune pre-merge history."))
-			return errors.New("history pruning requested via configuration")
+	case history.KeepPostMerge, history.KeepPostPrague:
+		target := policy.Target
+		// Already at the target.
+		if freezerTail == target.BlockNumber {
+			bc.historyPrunePoint.Store(target)
+			return nil
 		}
-		predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
-		if predefinedPoint == nil {
-			log.Error("Chain history pruning is not supported for this network", "genesis", bc.genesisBlock.Hash())
-			return errors.New("history pruning requested for unknown network")
-		} else if freezerTail > 0 && freezerTail != predefinedPoint.BlockNumber {
-			log.Error("Chain history database is pruned to unknown block", "tail", freezerTail)
-			return errors.New("unexpected database tail")
+		// Database is pruned beyond the target.
+		if freezerTail > target.BlockNumber {
+			return fmt.Errorf("database pruned beyond requested history (tail=%d, target=%d)", freezerTail, target.BlockNumber)
 		}
-		bc.historyPrunePoint.Store(predefinedPoint)
+		// Database needs pruning (freezerTail < target).
+		if latest != 0 {
+			log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned to the target block.", policy.Mode.String()))
+			log.Error(fmt.Sprintf("Run 'geth prune-history --history.chain %s' to prune history.", policy.Mode.String()))
+			return errors.New("history pruning required")
+		}
+		// Fresh database (latest == 0), will sync from target point.
+		bc.historyPrunePoint.Store(target)
 		return nil
 	default:
-		return fmt.Errorf("invalid history mode: %d", bc.cfg.ChainHistoryMode)
+		return fmt.Errorf("invalid history mode: %d", policy.Mode)
 	}
 }
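The reworked switch above replaces table-lookup prune points with a per-policy target, deciding in order: already at target, pruned beyond target, needs explicit pruning, fresh database. A minimal self-contained sketch of that decision order, using simplified stand-in types rather than geth's actual `history.PrunePoint`/policy types:

```go
package main

import "fmt"

// PrunePoint and HistoryPolicy are simplified stand-ins for the types used in
// the diff above, kept only to illustrate the branch ordering.
type PrunePoint struct{ BlockNumber uint64 }

type HistoryPolicy struct{ Target PrunePoint }

// resolvePrunePoint mirrors the decision order of the new switch arm:
// already-at-target, pruned-beyond-target, needs-pruning, fresh database.
func resolvePrunePoint(policy HistoryPolicy, freezerTail, latest uint64) (string, error) {
	target := policy.Target
	switch {
	case freezerTail == target.BlockNumber:
		return "at-target", nil // record the target as the prune point
	case freezerTail > target.BlockNumber:
		return "", fmt.Errorf("database pruned beyond requested history (tail=%d, target=%d)", freezerTail, target.BlockNumber)
	case latest != 0:
		// Existing unpruned database: require an explicit prune-history run.
		return "", fmt.Errorf("history pruning required")
	default:
		return "fresh", nil // fresh database, will sync from the target point
	}
}

func main() {
	fmt.Println(resolvePrunePoint(HistoryPolicy{Target: PrunePoint{BlockNumber: 100}}, 100, 500))
	fmt.Println(resolvePrunePoint(HistoryPolicy{Target: PrunePoint{BlockNumber: 100}}, 200, 500))
}
```

Note the ordering matters: the beyond-target check must run before the needs-pruning check, otherwise an over-pruned database on a non-fresh node would be told to prune further instead of failing.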
@@ -1279,6 +1275,8 @@ func (bc *BlockChain) ExportN(w io.Writer, first uint64, last uint64) error {
 func (bc *BlockChain) writeHeadBlock(block *types.Block) {
 	// Add the block to the canonical chain number scheme and mark as the head
 	batch := bc.db.NewBatch()
+	defer batch.Close()
+
 	rawdb.WriteHeadHeaderHash(batch, block.Hash())
 	rawdb.WriteHeadFastBlockHash(batch, block.Hash())
 	rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64())
@@ -1653,6 +1651,8 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
 		batch = bc.db.NewBatch()
 		start = time.Now()
 	)
+	defer batch.Close()
+
 	rawdb.WriteBlock(batch, block)
 	rawdb.WriteReceipts(batch, block.Hash(), block.NumberU64(), receipts)
 	rawdb.WritePreimages(batch, statedb.Preimages())
@@ -1990,7 +1990,15 @@ func (bc *BlockChain) insertChain(ctx context.Context, chain types.Blocks, setHe
 		}
 		// The traced section of block import.
 		start := time.Now()
-		res, err := bc.ProcessBlock(ctx, parent.Root, block, setHead, makeWitness && len(chain) == 1)
+		config := ExecuteConfig{
+			WriteState:              true,
+			WriteHead:               setHead,
+			EnableTracer:            true,
+			MakeWitness:             makeWitness && len(chain) == 1,
+			StatelessSelfValidation: bc.cfg.StatelessSelfValidation,
+			EnableWitnessStats:      bc.cfg.EnableWitnessStats,
+		}
+		res, err := bc.ProcessBlock(ctx, parent.Root, block, config)
 		if err != nil {
 			return nil, it.index, err
 		}
@@ -2073,19 +2081,47 @@ func (bpr *blockProcessingResult) Stats() *ExecuteStats {
 	return bpr.stats
 }
 
+// ExecuteConfig defines optional behaviors during execution.
+type ExecuteConfig struct {
+	// WriteState controls whether the computed state changes are persisted to
+	// the underlying storage. If false, execution is performed in-memory only.
+	WriteState bool
+
+	// WriteHead indicates whether the execution result should update the canonical
+	// chain head. It's only relevant with WriteState == true.
+	WriteHead bool
+
+	// EnableTracer enables execution tracing. This is typically used for debugging
+	// or analysis and may significantly impact performance.
+	EnableTracer bool
+
+	// MakeWitness indicates whether to generate execution witness data during
+	// execution. Enabling this may introduce additional memory and CPU overhead.
+	MakeWitness bool
+
+	// StatelessSelfValidation indicates whether execution witness generation
+	// and self-validation (for testing purposes) are enabled.
+	StatelessSelfValidation bool
+
+	// EnableWitnessStats indicates whether to enable collection of witness trie
+	// access statistics.
+	EnableWitnessStats bool
+}
+
 // ProcessBlock executes and validates the given block. If there was no error
 // it writes the block and associated state to database.
-func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, setHead bool, makeWitness bool) (result *blockProcessingResult, blockEndErr error) {
+func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, config ExecuteConfig) (result *blockProcessingResult, blockEndErr error) {
 	var (
 		err       error
 		startTime = time.Now()
 		statedb   *state.StateDB
 		interrupt atomic.Bool
+		sdb       = state.NewDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps)
 	)
 	defer interrupt.Store(true) // terminate the prefetch at the end
 
 	if bc.cfg.NoPrefetch {
-		statedb, err = state.New(parentRoot, bc.statedb)
+		statedb, err = state.New(parentRoot, sdb)
 		if err != nil {
 			return nil, err
 		}
@@ -2095,23 +2131,27 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	//
 	// Note: the main processor and prefetcher share the same reader with a local
 	// cache for mitigating the overhead of state access.
-	prefetch, process, err := bc.statedb.ReadersWithCacheStats(parentRoot)
+	prefetch, process, err := sdb.ReadersWithCacheStats(parentRoot)
 	if err != nil {
 		return nil, err
 	}
-	throwaway, err := state.NewWithReader(parentRoot, bc.statedb, prefetch)
+	throwaway, err := state.NewWithReader(parentRoot, sdb, prefetch)
 	if err != nil {
 		return nil, err
 	}
-	statedb, err = state.NewWithReader(parentRoot, bc.statedb, process)
+	statedb, err = state.NewWithReader(parentRoot, sdb, process)
 	if err != nil {
 		return nil, err
 	}
 	// Upload the statistics of reader at the end
 	defer func() {
 		if result != nil {
-			result.stats.StatePrefetchCacheStats = prefetch.GetStats()
-			result.stats.StateReadCacheStats = process.GetStats()
+			if stater, ok := prefetch.(state.ReaderStater); ok {
+				result.stats.StatePrefetchCacheStats = stater.GetStats()
+			}
+			if stater, ok := process.(state.ReaderStater); ok {
+				result.stats.StateReadCacheStats = stater.GetStats()
+			}
 		}
 	}()
 	go func(start time.Time, throwaway *state.StateDB, block *types.Block) {
@@ -2130,27 +2170,23 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	// If we are past Byzantium, enable prefetching to pull in trie node paths
 	// while processing transactions. Before Byzantium the prefetcher is mostly
 	// useless due to the intermediate root hashing after each transaction.
-	var (
-		witness      *stateless.Witness
-		witnessStats *stateless.WitnessStats
-	)
+	var witness *stateless.Witness
 	if bc.chainConfig.IsByzantium(block.Number()) {
 		// Generate witnesses either if we're self-testing, or if it's the
 		// only block being inserted. A bit crude, but witnesses are huge,
 		// so we refuse to make an entire chain of them.
-		if bc.cfg.VmConfig.StatelessSelfValidation || makeWitness {
-			witness, err = stateless.NewWitness(block.Header(), bc)
+		if config.StatelessSelfValidation || config.MakeWitness {
+			witness, err = stateless.NewWitness(block.Header(), bc, config.EnableWitnessStats)
 			if err != nil {
 				return nil, err
 			}
-			if bc.cfg.VmConfig.EnableWitnessStats {
-				witnessStats = stateless.NewWitnessStats()
-			}
 		}
-		statedb.StartPrefetcher("chain", witness, witnessStats)
+		statedb.StartPrefetcher("chain", witness)
 		defer statedb.StopPrefetcher()
 	}
+	// Instrument the blockchain tracing
+	if config.EnableTracer {
 	if bc.logger != nil && bc.logger.OnBlockStart != nil {
 		bc.logger.OnBlockStart(tracing.BlockEvent{
 			Block: block,
@@ -2163,6 +2199,7 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 			bc.logger.OnBlockEnd(blockEndErr)
 		}()
 	}
+	}
 
 	// Process block using the parent state as reference point
 	pstart := time.Now()
@@ -2191,7 +2228,7 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	// witness builder/runner, which would otherwise be impossible due to the
 	// various invalid chain states/behaviors being contained in those tests.
 	xvstart := time.Now()
-	if witness := statedb.Witness(); witness != nil && bc.cfg.VmConfig.StatelessSelfValidation {
+	if witness := statedb.Witness(); witness != nil && config.StatelessSelfValidation {
 		log.Warn("Running stateless self-validation", "block", block.Number(), "hash", block.Hash())
 		// Remove critical computed fields from the block to force true recalculation
@@ -2244,11 +2281,10 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	stats.CrossValidation = xvtime // The time spent on stateless cross validation
 
 	// Write the block to the chain and get the status.
-	var (
-		wstart = time.Now()
-		status WriteStatus
-	)
-	if !setHead {
+	var status WriteStatus
+	if config.WriteState {
+		wstart := time.Now()
+		if !config.WriteHead {
 		// Don't set the head, only insert the block
 		err = bc.writeBlockWithState(block, res.Receipts, statedb)
 	} else {
@@ -2257,18 +2293,16 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	if err != nil {
 		return nil, err
 	}
-	// Report the collected witness statistics
-	if witnessStats != nil {
-		witnessStats.ReportMetrics(block.NumberU64())
-	}
 	// Update the metrics touched during block commit
 	stats.AccountCommits = statedb.AccountCommits // Account commits are complete, we can mark them
 	stats.StorageCommits = statedb.StorageCommits // Storage commits are complete, we can mark them
-	stats.SnapshotCommit = statedb.SnapshotCommits // Snapshot commits are complete, we can mark them
-	stats.TrieDBCommit = statedb.TrieDBCommits // Trie database commits are complete, we can mark them
-	stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.SnapshotCommits - statedb.TrieDBCommits
+	stats.DatabaseCommit = statedb.DatabaseCommits // Database commits are complete, we can mark them
+	stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.DatabaseCommits
+	}
+	// Report the collected witness statistics
+	if witness != nil {
+		witness.ReportMetrics(block.NumberU64())
+	}
 	elapsed := time.Since(startTime) + 1 // prevent zero division
 	stats.TotalTime = elapsed
 	stats.MgasPerSecond = float64(res.GasUsed) * 1000 / float64(elapsed)
@@ -2549,6 +2583,7 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
 	// as the txlookups should be changed atomically, and all subsequent
 	// reads should be blocked until the mutation is complete.
 	bc.txLookupLock.Lock()
+	defer bc.txLookupLock.Unlock()
 
 	// Reorg can be executed, start reducing the chain's old blocks and appending
 	// the new blocks
@@ -2626,6 +2661,8 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
 	// Delete useless indexes right now which includes the non-canonical
 	// transaction indexes, canonical chain indexes which above the head.
 	batch := bc.db.NewBatch()
+	defer batch.Close()
+
 	for _, tx := range types.HashDifference(deletedTxs, rebirthTxs) {
 		rawdb.DeleteTxLookupEntry(batch, tx)
 	}
@@ -2649,9 +2686,6 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
 	// Reset the tx lookup cache to clear stale txlookup cache.
 	bc.txLookupCache.Purge()
 
-	// Release the tx-lookup lock after mutation.
-	bc.txLookupLock.Unlock()
-
 	return nil
 }


@@ -296,6 +296,14 @@ func (bc *BlockChain) GetReceiptsRLP(hash common.Hash) rlp.RawValue {
 	return rawdb.ReadReceiptsRLP(bc.db, hash, number)
 }
 
+func (bc *BlockChain) GetAccessListRLP(hash common.Hash) rlp.RawValue {
+	number, ok := rawdb.ReadHeaderNumber(bc.db, hash)
+	if !ok {
+		return nil
+	}
+	return rawdb.ReadAccessListRLP(bc.db, hash, number)
+}
+
 // GetUnclesInChain retrieves all the uncles from a given block backwards until
 // a specific distance is reached.
 func (bc *BlockChain) GetUnclesInChain(block *types.Block, length int) []*types.Header {
@@ -371,7 +379,7 @@ func (bc *BlockChain) TxIndexDone() bool {
 
 // HasState checks if state trie is fully present in the database or not.
 func (bc *BlockChain) HasState(hash common.Hash) bool {
-	_, err := bc.statedb.OpenTrie(hash)
+	_, err := bc.triedb.NodeReader(hash)
 	return err == nil
 }
@@ -403,7 +411,7 @@ func (bc *BlockChain) stateRecoverable(root common.Hash) bool {
 func (bc *BlockChain) ContractCodeWithPrefix(hash common.Hash) []byte {
 	// TODO(rjl493456442) The associated account address is also required
 	// in Verkle scheme. Fix it once snap-sync is supported for Verkle.
-	return bc.statedb.ContractCodeWithPrefix(common.Address{}, hash)
+	return bc.codedb.Reader().CodeWithPrefix(common.Address{}, hash)
 }

 // State returns a new mutable state based on the current HEAD block.
@@ -413,14 +421,14 @@ func (bc *BlockChain) State() (*state.StateDB, error) {
 
 // StateAt returns a new mutable state based on a particular point in time.
 func (bc *BlockChain) StateAt(root common.Hash) (*state.StateDB, error) {
-	return state.New(root, bc.statedb)
+	return state.New(root, state.NewDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps))
 }
 
 // HistoricState returns a historic state specified by the given root.
 // Live states are not available and won't be served, please use `State`
 // or `StateAt` instead.
 func (bc *BlockChain) HistoricState(root common.Hash) (*state.StateDB, error) {
-	return state.New(root, state.NewHistoricDatabase(bc.db, bc.triedb))
+	return state.New(root, state.NewHistoricDatabase(bc.triedb, bc.codedb))
 }

 // Config retrieves the chain's fork configuration.
@@ -444,11 +452,6 @@ func (bc *BlockChain) Processor() Processor {
 	return bc.processor
 }
 
-// StateCache returns the caching database underpinning the blockchain instance.
-func (bc *BlockChain) StateCache() state.Database {
-	return bc.statedb
-}
-
 // GasLimit returns the gas limit of the current HEAD block.
 func (bc *BlockChain) GasLimit() uint64 {
 	return bc.CurrentBlock().GasLimit
@@ -473,7 +476,7 @@ func (bc *BlockChain) TxIndexProgress() (TxIndexProgress, error) {
 }
 
 // StateIndexProgress returns the historical state indexing progress.
-func (bc *BlockChain) StateIndexProgress() (uint64, error) {
+func (bc *BlockChain) StateIndexProgress() (uint64, uint64, error) {
 	return bc.triedb.IndexProgress()
 }
@@ -492,6 +495,11 @@ func (bc *BlockChain) TrieDB() *triedb.Database {
 	return bc.triedb
 }
 
+// CodeDB retrieves the low level contract code database used for data storage.
+func (bc *BlockChain) CodeDB() *state.CodeDB {
+	return bc.codedb
+}
+
 // HeaderChain returns the underlying header chain.
 func (bc *BlockChain) HeaderChain() *HeaderChain {
 	return bc.hc


@@ -30,7 +30,6 @@ import (
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
 	"github.com/ethereum/go-ethereum/core/rawdb"
-	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/ethdb/pebble"
 	"github.com/ethereum/go-ethereum/params"
@@ -2041,7 +2040,6 @@ func testSetHeadWithScheme(t *testing.T, tt *rewindTest, snapshots bool, scheme
 		dbconfig.HashDB = hashdb.Defaults
 	}
 	chain.triedb = triedb.NewDatabase(chain.db, dbconfig)
-	chain.statedb = state.NewDatabase(chain.triedb, chain.snaps)
 
 	// Force run a freeze cycle
 	type freezer interface {


@@ -52,8 +52,7 @@ type ExecuteStats struct {
 	Execution       time.Duration // Time spent on the EVM execution
 	Validation      time.Duration // Time spent on the block validation
 	CrossValidation time.Duration // Optional, time spent on the block cross validation
-	SnapshotCommit  time.Duration // Time spent on snapshot commit
-	TrieDBCommit    time.Duration // Time spent on database commit
+	DatabaseCommit  time.Duration // Time spent on database commit
 	BlockWrite      time.Duration // Time spent on block write
 	TotalTime       time.Duration // The total time spent on block execution
 	MgasPerSecond   float64       // The million gas processed per second
@@ -87,22 +86,21 @@ func (s *ExecuteStats) reportMetrics() {
 	blockExecutionTimer.Update(s.Execution)             // The time spent on EVM processing
 	blockValidationTimer.Update(s.Validation)           // The time spent on block validation
 	blockCrossValidationTimer.Update(s.CrossValidation) // The time spent on stateless cross validation
-	snapshotCommitTimer.Update(s.SnapshotCommit)        // Snapshot commits are complete, we can mark them
-	triedbCommitTimer.Update(s.TrieDBCommit)            // Trie database commits are complete, we can mark them
+	triedbCommitTimer.Update(s.DatabaseCommit)          // Trie database commits are complete, we can mark them
 	blockWriteTimer.Update(s.BlockWrite)                // The time spent on block write
 	blockInsertTimer.Update(s.TotalTime)                // The total time spent on block execution
 	chainMgaspsMeter.Update(time.Duration(s.MgasPerSecond)) // TODO(rjl493456442) generalize the ResettingTimer
 
 	// Cache hit rates
-	accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheHit)
-	accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheMiss)
-	storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheHit)
-	storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheMiss)
-	accountCacheHitMeter.Mark(s.StateReadCacheStats.AccountCacheHit)
-	accountCacheMissMeter.Mark(s.StateReadCacheStats.AccountCacheMiss)
-	storageCacheHitMeter.Mark(s.StateReadCacheStats.StorageCacheHit)
-	storageCacheMissMeter.Mark(s.StateReadCacheStats.StorageCacheMiss)
+	accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheHit)
+	accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheMiss)
+	storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheHit)
+	storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheMiss)
+	accountCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheHit)
+	accountCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheMiss)
+	storageCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheHit)
+	storageCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheMiss)
 }
@@ -177,14 +175,6 @@ type slowBlockCodeCacheEntry struct {
 	MissBytes int64 `json:"miss_bytes"`
 }
 
-// calculateHitRate computes the cache hit rate as a percentage (0-100).
-func calculateHitRate(hits, misses int64) float64 {
-	if total := hits + misses; total > 0 {
-		return float64(hits) / float64(total) * 100.0
-	}
-	return 0.0
-}
-
 // durationToMs converts a time.Duration to milliseconds as a float64
 // with sub-millisecond precision for accurate cross-client metrics.
 func durationToMs(d time.Duration) float64 {
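The removed `calculateHitRate` helper above is presumably superseded by the `AccountCacheHitRate()`/`StorageCacheHitRate()`/`HitRate()` methods the later hunks call on the stats types; a standalone equivalent of its contract (0–100 percentage, zero rather than NaN when there were no lookups):

```go
package main

import "fmt"

// hitRate computes a cache hit rate as a percentage (0-100), returning 0 when
// there were no lookups at all — the same contract as the removed helper.
func hitRate(hits, misses int64) float64 {
	if total := hits + misses; total > 0 {
		return float64(hits) / float64(total) * 100.0
	}
	return 0.0
}

func main() {
	fmt.Println(hitRate(75, 25)) // 75
	fmt.Println(hitRate(0, 0))   // 0 — guarded against division by zero
}
```

The zero-lookup guard matters because a freshly started node can log a slow block before any cache traffic has occurred.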
@@ -216,7 +206,7 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
 			ExecutionMs: durationToMs(s.Execution),
 			StateReadMs: durationToMs(s.AccountReads + s.StorageReads + s.CodeReads),
 			StateHashMs: durationToMs(s.AccountHashes + s.AccountUpdates + s.StorageUpdates),
-			CommitMs:    durationToMs(max(s.AccountCommits, s.StorageCommits) + s.TrieDBCommit + s.SnapshotCommit + s.BlockWrite),
+			CommitMs:    durationToMs(max(s.AccountCommits, s.StorageCommits) + s.DatabaseCommit + s.BlockWrite),
 			TotalMs:     durationToMs(s.TotalTime),
 		},
 		Throughput: slowBlockThru{
@@ -238,19 +228,19 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
 		},
 		Cache: slowBlockCache{
 			Account: slowBlockCacheEntry{
-				Hits:    s.StateReadCacheStats.AccountCacheHit,
-				Misses:  s.StateReadCacheStats.AccountCacheMiss,
-				HitRate: calculateHitRate(s.StateReadCacheStats.AccountCacheHit, s.StateReadCacheStats.AccountCacheMiss),
+				Hits:    s.StateReadCacheStats.StateStats.AccountCacheHit,
+				Misses:  s.StateReadCacheStats.StateStats.AccountCacheMiss,
+				HitRate: s.StateReadCacheStats.StateStats.AccountCacheHitRate(),
 			},
 			Storage: slowBlockCacheEntry{
-				Hits:    s.StateReadCacheStats.StorageCacheHit,
-				Misses:  s.StateReadCacheStats.StorageCacheMiss,
-				HitRate: calculateHitRate(s.StateReadCacheStats.StorageCacheHit, s.StateReadCacheStats.StorageCacheMiss),
+				Hits:    s.StateReadCacheStats.StateStats.StorageCacheHit,
+				Misses:  s.StateReadCacheStats.StateStats.StorageCacheMiss,
+				HitRate: s.StateReadCacheStats.StateStats.StorageCacheHitRate(),
 			},
 			Code: slowBlockCodeCacheEntry{
 				Hits:      s.StateReadCacheStats.CodeStats.CacheHit,
 				Misses:    s.StateReadCacheStats.CodeStats.CacheMiss,
-				HitRate:   calculateHitRate(s.StateReadCacheStats.CodeStats.CacheHit, s.StateReadCacheStats.CodeStats.CacheMiss),
+				HitRate:   s.StateReadCacheStats.CodeStats.HitRate(),
 				HitBytes:  s.StateReadCacheStats.CodeStats.CacheHitBytes,
 				MissBytes: s.StateReadCacheStats.CodeStats.CacheMissBytes,
 			},


@@ -36,7 +36,6 @@ import (
 	"github.com/ethereum/go-ethereum/consensus"
 	"github.com/ethereum/go-ethereum/consensus/beacon"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
-	"github.com/ethereum/go-ethereum/core/history"
 	"github.com/ethereum/go-ethereum/core/rawdb"
 	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/types"
@@ -157,7 +156,7 @@ func testBlockChainImport(chain types.Blocks, blockchain *BlockChain) error {
 			}
 			return err
 		}
-		statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), blockchain.statedb)
+		statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), state.NewDatabase(blockchain.triedb, blockchain.codedb))
 		if err != nil {
 			return err
 		}
@@ -4337,26 +4336,13 @@ func TestInsertChainWithCutoff(t *testing.T) {
 func testInsertChainWithCutoff(t *testing.T, cutoff uint64, ancientLimit uint64, genesis *Genesis, blocks []*types.Block, receipts []types.Receipts) {
 	// log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.LevelDebug, true)))
 
-	// Add a known pruning point for the duration of the test.
 	ghash := genesis.ToBlock().Hash()
 	cutoffBlock := blocks[cutoff-1]
-	history.PrunePoints[ghash] = &history.PrunePoint{
-		BlockNumber: cutoffBlock.NumberU64(),
-		BlockHash:   cutoffBlock.Hash(),
-	}
-	defer func() {
-		delete(history.PrunePoints, ghash)
-	}()
-
-	// Enable pruning in cache config.
-	config := DefaultConfig().WithStateScheme(rawdb.PathScheme)
-	config.ChainHistoryMode = history.KeepPostMerge
 
 	db, _ := rawdb.Open(rawdb.NewMemoryDatabase(), rawdb.OpenOptions{})
 	defer db.Close()
 
-	options := DefaultConfig().WithStateScheme(rawdb.PathScheme)
-	chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), options)
+	chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), DefaultConfig().WithStateScheme(rawdb.PathScheme))
 	defer chain.Stop()
 
 	var (


@@ -17,6 +17,7 @@
 package core
 
 import (
+	"context"
 	"fmt"
 	"math/big"
@@ -63,7 +64,7 @@ func (b *BlockGen) SetCoinbase(addr common.Address) {
 		panic("coinbase can only be set once")
 	}
 	b.header.Coinbase = addr
-	b.gasPool = new(GasPool).AddGas(b.header.GasLimit)
+	b.gasPool = NewGasPool(b.header.GasLimit)
 }
@@ -117,10 +118,12 @@ func (b *BlockGen) addTx(bc *BlockChain, vmConfig vm.Config, tx *types.Transacti
 		evm = vm.NewEVM(blockContext, b.statedb, b.cm.config, vmConfig)
 	)
 	b.statedb.SetTxContext(tx.Hash(), len(b.txs))
-	receipt, err := ApplyTransaction(evm, b.gasPool, b.statedb, b.header, tx, &b.header.GasUsed)
+	receipt, err := ApplyTransaction(evm, b.gasPool, b.statedb, b.header, tx)
 	if err != nil {
 		panic(err)
 	}
+	b.header.GasUsed = b.gasPool.Used()
+
 	// Merge the tx-local access event into the "block-local" one, in order to collect
 	// all values, so that the witness can be built.
 	if b.statedb.Database().TrieDB().IsVerkle() {
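The change above stops threading a `*uint64` through `ApplyTransaction` and instead reads the accumulated gas back out of the pool via `Used()`. A minimal toy gas pool with that accounting — a simplification for illustration, not geth's actual `core.GasPool`:

```go
package main

import (
	"errors"
	"fmt"
)

// gasPool is a toy stand-in for core.GasPool that tracks both the remaining
// gas and the original limit, so callers can read Used() after applying txs.
type gasPool struct {
	limit     uint64
	remaining uint64
}

func newGasPool(limit uint64) *gasPool {
	return &gasPool{limit: limit, remaining: limit}
}

// SubGas consumes gas from the pool, failing when the block limit is exceeded
// (analogous to ErrGasLimitReached in the diff).
func (gp *gasPool) SubGas(amount uint64) error {
	if gp.remaining < amount {
		return errors.New("gas limit reached")
	}
	gp.remaining -= amount
	return nil
}

// Used reports the gas consumed so far — the value addTx now copies into
// header.GasUsed instead of mutating it through a pointer.
func (gp *gasPool) Used() uint64 {
	return gp.limit - gp.remaining
}

func main() {
	gp := newGasPool(30_000_000)
	gp.SubGas(21_000)
	gp.SubGas(50_000)
	fmt.Println(gp.Used()) // 71000
}
```

Keeping the used-gas total inside the pool means every consumer of the pool sees one consistent counter, rather than each call site maintaining its own running sum.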
@@ -409,7 +412,7 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
 		}
 
 		body := types.Body{Transactions: b.txs, Uncles: b.uncles, Withdrawals: b.withdrawals}
-		block, err := b.engine.FinalizeAndAssemble(cm, b.header, statedb, &body, b.receipts)
+		block, err := b.engine.FinalizeAndAssemble(context.Background(), cm, b.header, statedb, &body, b.receipts)
 		if err != nil {
 			panic(err)
 		}
@@ -479,13 +482,14 @@ func GenerateChainWithGenesis(genesis *Genesis, engine consensus.Engine, n int,
 	if genesis.Config != nil && genesis.Config.IsVerkle(genesis.Config.ChainID, 0) {
 		triedbConfig = triedb.VerkleDefaults
 	}
-	triedb := triedb.NewDatabase(db, triedbConfig)
-	defer triedb.Close()
-	_, err := genesis.Commit(db, triedb, nil)
+	genesisTriedb := triedb.NewDatabase(db, triedbConfig)
+	block, err := genesis.Commit(db, genesisTriedb, nil)
 	if err != nil {
+		genesisTriedb.Close()
 		panic(err)
 	}
-	blocks, receipts := GenerateChain(genesis.Config, genesis.ToBlock(), engine, db, n, gen)
+	genesisTriedb.Close()
+	blocks, receipts := GenerateChain(genesis.Config, block, engine, db, n, gen)
 	return db, blocks, receipts
 }

View file

@ -58,14 +58,14 @@
    // by a transaction is higher than what's left in the block.
    ErrGasLimitReached = errors.New("gas limit reached")

    // ErrGasLimitOverflow is returned by the gas pool if the remaining gas
    // exceeds the maximum value of uint64.
    ErrGasLimitOverflow = errors.New("gas limit overflow")

    // ErrInsufficientFundsForTransfer is returned if the transaction sender doesn't
    // have enough funds for transfer(topmost call only).
    ErrInsufficientFundsForTransfer = errors.New("insufficient funds for transfer")

    // ErrInsufficientBalanceWitness is returned if the transaction sender has enough
    // funds to cover the transfer, but not enough to pay for witness access/modification
    // costs for the transaction

View file

@ -0,0 +1,169 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package core
import (
"encoding/binary"
"math/big"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/beacon"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/params"
)
var ethTransferTestCode = common.FromHex("6080604052600436106100345760003560e01c8063574ffc311461003957806366e41cb714610090578063f8a8fd6d1461009a575b600080fd5b34801561004557600080fd5b5061004e6100a4565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b6100986100ac565b005b6100a26100f5565b005b63deadbeef81565b7f38e80b5c85ba49b7280ccc8f22548faa62ae30d5a008a1b168fba5f47f5d1ee560405160405180910390a1631234567873ffffffffffffffffffffffffffffffffffffffff16ff5b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405160405180910390a163deadbeef73ffffffffffffffffffffffffffffffffffffffff166002348161014657fe5b046040516024016040516020818303038152906040527f66e41cb7000000000000000000000000000000000000000000000000000000007bffffffffffffffffffffffffffffffffffffffffffffffffffffffff19166020820180517bffffffffffffffffffffffffffffffffffffffffffffffffffffffff83818316178352505050506040518082805190602001908083835b602083106101fd57805182526020820191506020810190506020830392506101da565b6001836020036101000a03801982511681845116808217855250505050505090500191505060006040518083038185875af1925050503d806000811461025f576040519150601f19603f3d011682016040523d82523d6000602084013e610264565b606091505b50505056fea265627a7a723158202cce817a434785d8560c200762f972d453ccd30694481be7545f9035a512826364736f6c63430005100032")
/*
pragma solidity >=0.4.22 <0.6.0;
contract TestLogs {
address public constant target_contract = 0x00000000000000000000000000000000DeaDBeef;
address payable constant selfdestruct_addr = 0x0000000000000000000000000000000012345678;
event Response(bool success, bytes data);
event TestEvent();
event TestEvent2();
function test() public payable {
emit TestEvent();
target_contract.call.value(msg.value/2)(abi.encodeWithSignature("test2()"));
}
function test2() public payable {
emit TestEvent2();
selfdestruct(selfdestruct_addr);
}
}
*/
// TestEthTransferLogs tests EIP-7708 ETH transfer log output by simulating a
// scenario including transaction, CALL and SELFDESTRUCT value transfers, and
// also "ordinary" logs emitted. The same scenario is also tested with no value
// transferred.
func TestEthTransferLogs(t *testing.T) {
testEthTransferLogs(t, 1_000_000_000)
testEthTransferLogs(t, 0)
}
func testEthTransferLogs(t *testing.T, value uint64) {
var (
key1, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
addr1 = crypto.PubkeyToAddress(key1.PublicKey)
addr2 = common.HexToAddress("cafebabe") // caller
addr3 = common.HexToAddress("deadbeef") // callee
addr4 = common.HexToAddress("12345678") // selfdestruct target
testEvent = crypto.Keccak256Hash([]byte("TestEvent()"))
testEvent2 = crypto.Keccak256Hash([]byte("TestEvent2()"))
config = *params.MergedTestChainConfig
signer = types.LatestSigner(&config)
engine = beacon.New(ethash.NewFaker())
)
// TODO: remove this hacky config initialization when the final Amsterdam config is available
config.AmsterdamTime = new(uint64)
blobConfig := *config.BlobScheduleConfig
blobConfig.Amsterdam = blobConfig.Osaka
config.BlobScheduleConfig = &blobConfig
gspec := &Genesis{
Config: &config,
Alloc: types.GenesisAlloc{
addr1: {Balance: newGwei(1000000000)},
addr2: {Code: ethTransferTestCode},
addr3: {Code: ethTransferTestCode},
},
}
_, blocks, receipts := GenerateChainWithGenesis(gspec, engine, 1, func(i int, b *BlockGen) {
tx := types.MustSignNewTx(key1, signer, &types.DynamicFeeTx{
ChainID: gspec.Config.ChainID,
Nonce: 0,
To: &addr2,
Gas: 500_000,
GasFeeCap: newGwei(5),
GasTipCap: newGwei(5),
Value: big.NewInt(int64(value)),
Data: common.FromHex("f8a8fd6d"),
})
b.AddTx(tx)
})
blockHash := blocks[0].Hash()
txHash := blocks[0].Transactions()[0].Hash()
addr2hash := func(addr common.Address) (hash common.Hash) {
copy(hash[12:], addr[:])
return
}
u256 := func(amount uint64) []byte {
data := make([]byte, 32)
binary.BigEndian.PutUint64(data[24:], amount)
return data
}
var expLogs = []*types.Log{
{
Address: params.SystemAddress,
Topics: []common.Hash{params.EthTransferLogEvent, addr2hash(addr1), addr2hash(addr2)},
Data: u256(value),
},
{
Address: addr2,
Topics: []common.Hash{testEvent},
Data: nil,
},
{
Address: params.SystemAddress,
Topics: []common.Hash{params.EthTransferLogEvent, addr2hash(addr2), addr2hash(addr3)},
Data: u256(value / 2),
},
{
Address: addr3,
Topics: []common.Hash{testEvent2},
Data: nil,
},
{
Address: params.SystemAddress,
Topics: []common.Hash{params.EthTransferLogEvent, addr2hash(addr3), addr2hash(addr4)},
Data: u256(value / 2),
},
}
if value == 0 {
// no ETH transfer logs expected with zero value
expLogs = []*types.Log{expLogs[1], expLogs[3]}
}
for i, log := range expLogs {
log.BlockNumber = 1
log.BlockHash = blockHash
log.BlockTimestamp = 10
log.TxIndex = 0
log.TxHash = txHash
log.Index = uint(i)
}
if len(expLogs) != len(receipts[0][0].Logs) {
t.Fatalf("Incorrect number of logs (expected: %d, got: %d)", len(expLogs), len(receipts[0][0].Logs))
}
for i, log := range receipts[0][0].Logs {
if !reflect.DeepEqual(expLogs[i], log) {
t.Fatalf("Incorrect log at index %d (expected: %v, got: %v)", i, expLogs[i], log)
}
}
}
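The `addr2hash` and `u256` helpers above pin down the wire shape of the expected EIP-7708 transfer logs: each address is left-padded into a 32-byte topic, and the amount becomes a single 32-byte big-endian data word. A self-contained sketch of that encoding, with hypothetical helper names rather than the geth API:

```go
package main

import (
    "encoding/binary"
    "fmt"
)

// addrToTopic left-pads a 20-byte address into a 32-byte log topic,
// mirroring the addr2hash helper in the test above.
func addrToTopic(addr [20]byte) (topic [32]byte) {
    copy(topic[12:], addr[:])
    return
}

// amountToData big-endian encodes an amount into a full 32-byte word,
// mirroring the u256 helper (only uint64 amounts are handled here).
func amountToData(amount uint64) []byte {
    data := make([]byte, 32)
    binary.BigEndian.PutUint64(data[24:], amount)
    return data
}

func main() {
    var addr [20]byte
    addr[19] = 0xef
    topic := addrToTopic(addr)
    fmt.Printf("%x\n", topic[31])             // address byte lands at the low end of the topic
    fmt.Println(len(amountToData(5)))         // data is always a full 32-byte word
    fmt.Println(amountToData(5)[31])          // amount occupies the trailing bytes
}
```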

View file

@ -25,6 +25,7 @@ import (
    "github.com/ethereum/go-ethereum/core/tracing"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/core/vm"
    "github.com/ethereum/go-ethereum/params"

    "github.com/holiman/uint256"
)
@ -44,6 +45,7 @@ func NewEVMBlockContext(header *types.Header, chain ChainContext, author *common
        baseFee     *big.Int
        blobBaseFee *big.Int
        random      *common.Hash
        slotNum     uint64
    )

    // If we don't have an explicit author (i.e. not mining), extract from the header
@ -61,6 +63,10 @@
    if header.Difficulty.Sign() == 0 {
        random = &header.MixDigest
    }
    if header.SlotNumber != nil {
        slotNum = *header.SlotNumber
    }
    return vm.BlockContext{
        CanTransfer: CanTransfer,
        Transfer:    Transfer,
@ -73,6 +79,7 @@
        BlobBaseFee: blobBaseFee,
        GasLimit:    header.GasLimit,
        Random:      random,
        SlotNum:     slotNum,
    }
}
@ -132,7 +139,10 @@ func CanTransfer(db vm.StateDB, addr common.Address, amount *uint256.Int) bool {
}

// Transfer subtracts amount from sender and adds amount to recipient using the given Db
func Transfer(db vm.StateDB, sender, recipient common.Address, amount *uint256.Int, rules *params.Rules) {
    db.SubBalance(sender, amount, tracing.BalanceChangeTransfer)
    db.AddBalance(recipient, amount, tracing.BalanceChangeTransfer)
    if rules.IsAmsterdam && !amount.IsZero() && sender != recipient {
        db.AddLog(types.EthTransferLog(sender, recipient, amount))
    }
}
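The reworked `Transfer` only emits the synthetic transfer log when three conditions hold at once: Amsterdam rules are active, the amount is non-zero, and the transfer is not a self-send. A minimal standalone sketch of that gate, using illustrative types rather than the geth `vm.StateDB`/`params.Rules` plumbing:

```go
package main

import "fmt"

// shouldEmitTransferLog mirrors the gating condition in Transfer above:
// a synthetic ETH transfer log is emitted only post-Amsterdam, for a
// non-zero amount, and when sender and recipient differ.
func shouldEmitTransferLog(isAmsterdam bool, amount uint64, sender, recipient string) bool {
    return isAmsterdam && amount != 0 && sender != recipient
}

func main() {
    fmt.Println(shouldEmitTransferLog(true, 100, "a", "b"))  // regular transfer: log
    fmt.Println(shouldEmitTransferLog(true, 0, "a", "b"))    // zero value: no log
    fmt.Println(shouldEmitTransferLog(true, 100, "a", "a"))  // self-send: no log
    fmt.Println(shouldEmitTransferLog(false, 100, "a", "b")) // pre-Amsterdam: no log
}
```

This matches the zero-value branch exercised by `testEthTransferLogs(t, 0)`, where only the ordinary contract events are expected.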

View file

@ -21,39 +21,87 @@ import (
    "math"
)

// GasPool tracks the amount of gas available for transaction execution
// within a block, along with the cumulative gas consumed.
type GasPool struct {
    remaining      uint64
    initial        uint64
    cumulativeUsed uint64
}

// NewGasPool initializes the gas pool with the given amount.
func NewGasPool(amount uint64) *GasPool {
    return &GasPool{
        remaining: amount,
        initial:   amount,
    }
}

// SubGas deducts the given amount from the pool if enough gas is
// available and returns an error otherwise.
func (gp *GasPool) SubGas(amount uint64) error {
    if gp.remaining < amount {
        return ErrGasLimitReached
    }
    gp.remaining -= amount
    return nil
}

// ReturnGas adds the refunded gas back to the pool and updates
// the cumulative gas usage accordingly.
func (gp *GasPool) ReturnGas(returned uint64, gasUsed uint64) error {
    if gp.remaining > math.MaxUint64-returned {
        return fmt.Errorf("%w: remaining: %d, returned: %d", ErrGasLimitOverflow, gp.remaining, returned)
    }
    // The returned gas calculation differs across forks.
    //
    // - Pre-Amsterdam:
    //   returned = purchased - remaining (refund included)
    //
    // - Post-Amsterdam:
    //   returned = purchased - gasUsed (refund excluded)
    gp.remaining += returned

    // gasUsed = max(txGasUsed - gasRefund, calldataFloorGasCost)
    // regardless of whether Amsterdam is activated or not.
    gp.cumulativeUsed += gasUsed
    return nil
}

// Gas returns the amount of gas remaining in the pool.
func (gp *GasPool) Gas() uint64 {
    return gp.remaining
}

// CumulativeUsed returns the amount of cumulative consumed gas (refund included).
func (gp *GasPool) CumulativeUsed() uint64 {
    return gp.cumulativeUsed
}

// Used returns the amount of consumed gas.
func (gp *GasPool) Used() uint64 {
    if gp.initial < gp.remaining {
        panic("gas used underflow")
    }
    return gp.initial - gp.remaining
}

// Snapshot returns a deep-copied object as the snapshot.
func (gp *GasPool) Snapshot() *GasPool {
    return &GasPool{
        initial:        gp.initial,
        remaining:      gp.remaining,
        cumulativeUsed: gp.cumulativeUsed,
    }
}

// Set sets the content of the gas pool with the provided one.
func (gp *GasPool) Set(other *GasPool) {
    gp.initial = other.initial
    gp.remaining = other.remaining
    gp.cumulativeUsed = other.cumulativeUsed
}

func (gp *GasPool) String() string {
    return fmt.Sprintf("initial: %d, remaining: %d, cumulative used: %d", gp.initial, gp.remaining, gp.cumulativeUsed)
}
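The accounting above can be sketched standalone: `SubGas` reserves a transaction's full gas limit up front, `ReturnGas` later hands the leftover back while accumulating the consumed amount, and `Used` reports `initial - remaining` for the block header. A simplified model, with the overflow checks and error types left out:

```go
package main

import "fmt"

// pool is a stripped-down model of the reworked GasPool above.
type pool struct {
    initial, remaining, cumulativeUsed uint64
}

func newPool(amount uint64) *pool { return &pool{initial: amount, remaining: amount} }

// subGas reserves gas for a transaction, failing if the block has too little left.
func (p *pool) subGas(amount uint64) bool {
    if p.remaining < amount {
        return false
    }
    p.remaining -= amount
    return true
}

// returnGas hands leftover gas back and accumulates the consumed amount.
func (p *pool) returnGas(returned, gasUsed uint64) {
    p.remaining += returned
    p.cumulativeUsed += gasUsed
}

// used reports how much of the block gas limit is net consumed.
func (p *pool) used() uint64 { return p.initial - p.remaining }

func main() {
    p := newPool(30_000_000)    // block gas limit
    p.subGas(100_000)           // tx gas limit purchased up front
    p.returnGas(40_000, 60_000) // 60k consumed, 40k handed back
    fmt.Println(p.used(), p.cumulativeUsed)
}
```

Note how this removes the need for `addTx` to thread a `GasUsed` pointer around: the header value becomes `b.gasPool.Used()` after the transaction is applied.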

View file

@ -34,6 +34,7 @@ func (g Genesis) MarshalJSON() ([]byte, error) {
        BaseFee       *math.HexOrDecimal256 `json:"baseFeePerGas"`
        ExcessBlobGas *math.HexOrDecimal64  `json:"excessBlobGas"`
        BlobGasUsed   *math.HexOrDecimal64  `json:"blobGasUsed"`
        SlotNumber    *uint64               `json:"slotNumber"`
    }
    var enc Genesis
    enc.Config = g.Config
@ -56,6 +57,7 @@ func (g Genesis) MarshalJSON() ([]byte, error) {
    enc.BaseFee = (*math.HexOrDecimal256)(g.BaseFee)
    enc.ExcessBlobGas = (*math.HexOrDecimal64)(g.ExcessBlobGas)
    enc.BlobGasUsed = (*math.HexOrDecimal64)(g.BlobGasUsed)
    enc.SlotNumber = g.SlotNumber
    return json.Marshal(&enc)
}
@ -77,6 +79,7 @@ func (g *Genesis) UnmarshalJSON(input []byte) error {
        BaseFee       *math.HexOrDecimal256 `json:"baseFeePerGas"`
        ExcessBlobGas *math.HexOrDecimal64  `json:"excessBlobGas"`
        BlobGasUsed   *math.HexOrDecimal64  `json:"blobGasUsed"`
        SlotNumber    *uint64               `json:"slotNumber"`
    }
    var dec Genesis
    if err := json.Unmarshal(input, &dec); err != nil {
@ -133,5 +136,8 @@ func (g *Genesis) UnmarshalJSON(input []byte) error {
    if dec.BlobGasUsed != nil {
        g.BlobGasUsed = (*uint64)(dec.BlobGasUsed)
    }
    if dec.SlotNumber != nil {
        g.SlotNumber = dec.SlotNumber
    }
    return nil
}

View file

@ -73,6 +73,7 @@ type Genesis struct {
    BaseFee       *big.Int `json:"baseFeePerGas"` // EIP-1559
    ExcessBlobGas *uint64  `json:"excessBlobGas"` // EIP-4844
    BlobGasUsed   *uint64  `json:"blobGasUsed"`   // EIP-4844
    SlotNumber    *uint64  `json:"slotNumber"`    // EIP-7843
}

// copy copies the genesis.
@ -122,6 +123,7 @@ func ReadGenesis(db ethdb.Database) (*Genesis, error) {
    genesis.BaseFee = genesisHeader.BaseFee
    genesis.ExcessBlobGas = genesisHeader.ExcessBlobGas
    genesis.BlobGasUsed = genesisHeader.BlobGasUsed
    genesis.SlotNumber = genesisHeader.SlotNumber
    return &genesis, nil
}
@ -547,6 +549,12 @@ func (g *Genesis) toBlockWithRoot(root common.Hash) *types.Block {
        if conf.IsPrague(num, g.Timestamp) {
            head.RequestsHash = &types.EmptyRequestsHash
        }
        if conf.IsAmsterdam(num, g.Timestamp) {
            head.SlotNumber = g.SlotNumber
            if head.SlotNumber == nil {
                head.SlotNumber = new(uint64)
            }
        }
    }
    return types.NewBlock(head, &types.Body{Withdrawals: withdrawals}, nil, trie.NewStackTrie(nil))
}

View file

@ -308,7 +308,7 @@ func TestVerkleGenesisCommit(t *testing.T) {
        },
    }
    expected := common.FromHex("1fd154971d9a386c4ec75fe7138c17efb569bfc2962e46e94a376ba997e3fadc")
    got := genesis.ToBlock().Root().Bytes()
    if !bytes.Equal(got, expected) {
        t.Fatalf("invalid genesis state root, expected %x, got %x", expected, got)

View file

@ -32,10 +32,13 @@ const (
    // KeepPostMerge sets the history pruning point to the merge activation block.
    KeepPostMerge

    // KeepPostPrague sets the history pruning point to the Prague (Pectra) activation block.
    KeepPostPrague
)

func (m HistoryMode) IsValid() bool {
    return m <= KeepPostPrague
}

func (m HistoryMode) String() string {
@ -44,6 +47,8 @@ func (m HistoryMode) String() string {
        return "all"
    case KeepPostMerge:
        return "postmerge"
    case KeepPostPrague:
        return "postprague"
    default:
        return fmt.Sprintf("invalid HistoryMode(%d)", m)
    }
@ -64,31 +69,71 @@ func (m *HistoryMode) UnmarshalText(text []byte) error {
        *m = KeepAll
    case "postmerge":
        *m = KeepPostMerge
    case "postprague":
        *m = KeepPostPrague
    default:
        return fmt.Errorf(`unknown history mode %q, want "all", "postmerge", or "postprague"`, text)
    }
    return nil
}

// PrunePoint identifies a specific block for history pruning.
type PrunePoint struct {
    BlockNumber uint64
    BlockHash   common.Hash
}

// staticPrunePoints contains the pre-defined history pruning cutoff blocks for
// known networks, keyed by history mode and genesis hash. They point to the first
// block after the respective fork. Any pruning should truncate *up to* but
// excluding the given block.
var staticPrunePoints = map[HistoryMode]map[common.Hash]*PrunePoint{
    KeepPostMerge: {
        params.MainnetGenesisHash: {
            BlockNumber: 15537393,
            BlockHash:   common.HexToHash("0x55b11b918355b1ef9c5db810302ebad0bf2544255b530cdce90674d5887bb286"),
        },
        params.SepoliaGenesisHash: {
            BlockNumber: 1450409,
            BlockHash:   common.HexToHash("0x229f6b18ca1552f1d5146deceb5387333f40dc6275aebee3f2c5c4ece07d02db"),
        },
    },
    KeepPostPrague: {
        params.MainnetGenesisHash: {
            BlockNumber: 22431084,
            BlockHash:   common.HexToHash("0x50c8cab760b2948349c590461b166773c45d8f4858cccf5a43025ab2960152e8"),
        },
        params.SepoliaGenesisHash: {
            BlockNumber: 7836331,
            BlockHash:   common.HexToHash("0xe6571beb68bf24dbd8a6ba354518996920c55a3f8d8fdca423e391b8ad071f22"),
        },
    },
}

// HistoryPolicy describes the configured history pruning strategy. It captures
// user intent as opposed to the actual DB state.
type HistoryPolicy struct {
    Mode HistoryMode

    // Target is the static prune point for PostMerge/PostPrague, nil otherwise.
    Target *PrunePoint
}

// NewPolicy constructs a HistoryPolicy from the given mode and genesis hash.
func NewPolicy(mode HistoryMode, genesisHash common.Hash) (HistoryPolicy, error) {
    switch mode {
    case KeepAll:
        return HistoryPolicy{Mode: KeepAll}, nil
    case KeepPostMerge, KeepPostPrague:
        point := staticPrunePoints[mode][genesisHash]
        if point == nil {
            return HistoryPolicy{}, fmt.Errorf("%s history pruning not available for network %s", mode, genesisHash.Hex())
        }
        return HistoryPolicy{Mode: mode, Target: point}, nil
    default:
        return HistoryPolicy{}, fmt.Errorf("invalid history mode: %d", mode)
    }
}

// PrunedHistoryError is returned by APIs when the requested history is pruned.

View file

@ -0,0 +1,58 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package history
import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/params"
)
func TestNewPolicy(t *testing.T) {
// KeepAll: no target.
p, err := NewPolicy(KeepAll, params.MainnetGenesisHash)
if err != nil {
t.Fatalf("KeepAll: %v", err)
}
if p.Mode != KeepAll || p.Target != nil {
t.Errorf("KeepAll: unexpected policy %+v", p)
}
// PostMerge: resolves known mainnet prune point.
p, err = NewPolicy(KeepPostMerge, params.MainnetGenesisHash)
if err != nil {
t.Fatalf("PostMerge: %v", err)
}
if p.Target == nil || p.Target.BlockNumber != 15537393 {
t.Errorf("PostMerge: unexpected target %+v", p.Target)
}
// PostPrague: resolves known mainnet prune point.
p, err = NewPolicy(KeepPostPrague, params.MainnetGenesisHash)
if err != nil {
t.Fatalf("PostPrague: %v", err)
}
if p.Target == nil || p.Target.BlockNumber != 22431084 {
t.Errorf("PostPrague: unexpected target %+v", p.Target)
}
// PostMerge on unknown network: error.
if _, err = NewPolicy(KeepPostMerge, common.HexToHash("0xdeadbeef")); err == nil {
t.Fatal("PostMerge unknown network: expected error")
}
}

View file

@ -26,6 +26,7 @@ import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/consensus/misc/eip4844"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/core/types/bal"
    "github.com/ethereum/go-ethereum/crypto"
    "github.com/ethereum/go-ethereum/ethdb"
    "github.com/ethereum/go-ethereum/log"
@ -424,14 +425,8 @@ func WriteBodyRLP(db ethdb.KeyValueWriter, hash common.Hash, number uint64, rlp
// HasBody verifies the existence of a block body corresponding to the hash.
func HasBody(db ethdb.Reader, hash common.Hash, number uint64) bool {
    if isCanon(db, number, hash) {
        return true
    }
    if has, err := db.Has(blockBodyKey(number, hash)); !has || err != nil {
        return false
    }
@ -472,14 +467,8 @@ func DeleteBody(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
// to a block.
func HasReceipts(db ethdb.Reader, hash common.Hash, number uint64) bool {
    if isCanon(db, number, hash) {
        return true
    }
    if has, err := db.Has(blockReceiptsKey(number, hash)); !has || err != nil {
        return false
    }
@ -617,6 +606,55 @@ func DeleteReceipts(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
    }
}
// HasAccessList verifies the existence of a block access list for a block.
func HasAccessList(db ethdb.Reader, hash common.Hash, number uint64) bool {
has, _ := db.Has(accessListKey(number, hash))
return has
}
// ReadAccessListRLP retrieves the RLP-encoded block access list for a block from KV.
func ReadAccessListRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
data, _ := db.Get(accessListKey(number, hash))
return data
}
// ReadAccessList retrieves and decodes the block access list for a block.
func ReadAccessList(db ethdb.Reader, hash common.Hash, number uint64) *bal.BlockAccessList {
data := ReadAccessListRLP(db, hash, number)
if len(data) == 0 {
return nil
}
b := new(bal.BlockAccessList)
if err := rlp.DecodeBytes(data, b); err != nil {
log.Error("Invalid BAL RLP", "hash", hash, "err", err)
return nil
}
return b
}
// WriteAccessList RLP-encodes and stores a block access list in the active KV store.
func WriteAccessList(db ethdb.KeyValueWriter, hash common.Hash, number uint64, b *bal.BlockAccessList) {
bytes, err := rlp.EncodeToBytes(b)
if err != nil {
log.Crit("Failed to encode BAL", "err", err)
}
WriteAccessListRLP(db, hash, number, bytes)
}
// WriteAccessListRLP stores a pre-encoded block access list in the active KV store.
func WriteAccessListRLP(db ethdb.KeyValueWriter, hash common.Hash, number uint64, encoded rlp.RawValue) {
if err := db.Put(accessListKey(number, hash), encoded); err != nil {
log.Crit("Failed to store BAL", "err", err)
}
}
// DeleteAccessList removes a block access list from the active KV store.
func DeleteAccessList(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
if err := db.Delete(accessListKey(number, hash)); err != nil {
log.Crit("Failed to delete BAL", "err", err)
}
}
// ReceiptLogs is a barebone version of ReceiptForStorage which only keeps
// the list of logs. When decoding a stored receipt into this object we
// avoid creating the bloom filter.
@ -671,13 +709,25 @@ func ReadBlock(db ethdb.Reader, hash common.Hash, number uint64) *types.Block {
    if body == nil {
        return nil
    }
    block := types.NewBlockWithHeader(header).WithBody(*body)

    // Best-effort assembly of the block access list from the database.
    if header.BlockAccessListHash != nil {
        al := ReadAccessList(db, hash, number)
        block = block.WithAccessListUnsafe(al)
    }
    return block
}

// WriteBlock serializes a block into the database, header and body separately.
func WriteBlock(db ethdb.KeyValueWriter, block *types.Block) {
    hash, number := block.Hash(), block.NumberU64()
    WriteBody(db, hash, number, block.Body())
    WriteHeader(db, block.Header())
    if accessList := block.AccessList(); accessList != nil {
        WriteAccessList(db, hash, number, accessList)
    }
}

// WriteAncientBlocks writes entire block data into ancient store and returns the total written size.

View file

@ -27,10 +27,12 @@ import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/core/types/bal"
    "github.com/ethereum/go-ethereum/crypto"
    "github.com/ethereum/go-ethereum/crypto/keccak"
    "github.com/ethereum/go-ethereum/params"
    "github.com/ethereum/go-ethereum/rlp"

    "github.com/holiman/uint256"
)

// Tests block header storage and retrieval operations.
@ -899,3 +901,78 @@ func TestHeadersRLPStorage(t *testing.T) {
    checkSequence(1, 1) // Only block 1
    checkSequence(1, 2) // Genesis + block 1
}
func makeTestBAL(t *testing.T) (rlp.RawValue, *bal.BlockAccessList) {
t.Helper()
cb := bal.NewConstructionBlockAccessList()
addr := common.HexToAddress("0x1111111111111111111111111111111111111111")
cb.AccountRead(addr)
cb.StorageRead(addr, common.BytesToHash([]byte{0x01}))
cb.StorageWrite(0, addr, common.BytesToHash([]byte{0x02}), common.BytesToHash([]byte{0xaa}))
cb.BalanceChange(0, addr, uint256.NewInt(100))
cb.NonceChange(addr, 0, 1)
var buf bytes.Buffer
if err := cb.EncodeRLP(&buf); err != nil {
t.Fatalf("failed to encode BAL: %v", err)
}
encoded := buf.Bytes()
var decoded bal.BlockAccessList
if err := rlp.DecodeBytes(encoded, &decoded); err != nil {
t.Fatalf("failed to decode BAL: %v", err)
}
return encoded, &decoded
}
// TestBALStorage tests write/read/delete of BALs in the KV store.
func TestBALStorage(t *testing.T) {
db := NewMemoryDatabase()
hash := common.BytesToHash([]byte{0x03, 0x14})
number := uint64(42)
// Check that no BAL exists in a new database.
if HasAccessList(db, hash, number) {
t.Fatal("BAL found in new database")
}
if HasAccessList(db, hash, number) {
t.Fatal("BAL found in new database")
}
if b := ReadAccessList(db, hash, number); b != nil {
t.Fatalf("non-existent BAL returned: %v", b)
}
// Write a BAL and verify it can be read back.
encoded, testBAL := makeTestBAL(t)
WriteAccessList(db, hash, number, testBAL)
if !HasAccessList(db, hash, number) {
t.Fatal("HasAccessList returned false after write")
}
if blob := ReadAccessListRLP(db, hash, number); len(blob) == 0 {
t.Fatal("ReadAccessListRLP returned empty after write")
}
if b := ReadAccessList(db, hash, number); b == nil {
t.Fatal("ReadAccessList returned nil after write")
} else if b.Hash() != testBAL.Hash() {
t.Fatalf("BAL hash mismatch: got %x, want %x", b.Hash(), testBAL.Hash())
}
// Also test WriteAccessListRLP with pre-encoded data.
hash2 := common.BytesToHash([]byte{0x03, 0x15})
WriteAccessListRLP(db, hash2, number, encoded)
if b := ReadAccessList(db, hash2, number); b == nil {
t.Fatal("ReadAccessList returned nil after WriteAccessListRLP")
} else if b.Hash() != testBAL.Hash() {
t.Fatalf("BAL hash mismatch after WriteAccessListRLP: got %x, want %x", b.Hash(), testBAL.Hash())
}
// Delete the BAL and verify it's gone.
DeleteAccessList(db, hash, number)
if HasAccessList(db, hash, number) {
t.Fatal("HasAccessList returned true after delete")
}
if b := ReadAccessList(db, hash, number); b != nil {
t.Fatalf("deleted BAL returned: %v", b)
}
}


@@ -260,6 +260,46 @@ func basicWrite(t *testing.T, newFn func(kinds []string) ethdb.AncientStore) {
if err != nil {
t.Fatalf("Failed to write ancient data %v", err)
}
// Writes should still succeed after truncating the tail beyond the current head
db.TruncateTail(200)
head, err := db.Ancients()
if err != nil {
t.Fatalf("Failed to retrieve head ancients %v", err)
}
tail, err := db.Tail()
if err != nil {
t.Fatalf("Failed to retrieve tail ancients %v", err)
}
if head != 200 || tail != 200 {
t.Fatalf("unexpected ancient head/tail: head %d, tail %d", head, tail)
}
_, err = db.ModifyAncients(func(op ethdb.AncientWriteOp) error {
offset := uint64(200)
for i := 0; i < 100; i++ {
if err := op.AppendRaw("a", offset+uint64(i), dataA[i]); err != nil {
return err
}
if err := op.AppendRaw("b", offset+uint64(i), dataB[i]); err != nil {
return err
}
}
return nil
})
if err != nil {
t.Fatalf("Failed to write ancient data %v", err)
}
head, err = db.Ancients()
if err != nil {
t.Fatalf("Failed to retrieve head ancients %v", err)
}
tail, err = db.Tail()
if err != nil {
t.Fatalf("Failed to retrieve tail ancients %v", err)
}
if head != 300 || tail != 200 {
t.Fatalf("unexpected ancient head/tail: head %d, tail %d", head, tail)
}
}
func nonMutable(t *testing.T, newFn func(kinds []string) ethdb.AncientStore) {


@@ -35,6 +35,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/ethdb/memorydb"
+ "github.com/ethereum/go-ethereum/internal/tablewriter"
"github.com/ethereum/go-ethereum/log"
"golang.org/x/sync/errgroup"
)
@@ -412,6 +413,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
tds stat
numHashPairings stat
hashNumPairings stat
+ blockAccessList stat
legacyTries stat
stateLookups stat
accountTries stat
@@ -477,12 +479,15 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
bodies.add(size)
case bytes.HasPrefix(key, blockReceiptsPrefix) && len(key) == (len(blockReceiptsPrefix)+8+common.HashLength):
receipts.add(size)
- case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix):
+ case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix) && len(key) == (len(headerPrefix)+8+common.HashLength+len(headerTDSuffix)):
tds.add(size)
- case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix):
+ case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix) && len(key) == (len(headerPrefix)+8+len(headerHashSuffix)):
numHashPairings.add(size)
case bytes.HasPrefix(key, headerNumberPrefix) && len(key) == (len(headerNumberPrefix)+common.HashLength):
hashNumPairings.add(size)
+ case bytes.HasPrefix(key, accessListPrefix) && len(key) == len(accessListPrefix)+8+common.HashLength:
+ blockAccessList.add(size)
case IsLegacyTrieNode(key, it.Value()):
legacyTries.add(size)
case bytes.HasPrefix(key, stateIDPrefix) && len(key) == len(stateIDPrefix)+common.HashLength:
@@ -624,6 +629,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
{"Key-Value store", "Difficulties (deprecated)", tds.sizeString(), tds.countString()},
{"Key-Value store", "Block number->hash", numHashPairings.sizeString(), numHashPairings.countString()},
{"Key-Value store", "Block hash->number", hashNumPairings.sizeString(), hashNumPairings.countString()},
+ {"Key-Value store", "Block accessList", blockAccessList.sizeString(), blockAccessList.countString()},
{"Key-Value store", "Transaction index", txLookups.sizeString(), txLookups.countString()},
{"Key-Value store", "Log index filter-map rows", filterMapRows.sizeString(), filterMapRows.countString()},
{"Key-Value store", "Log index last-block-of-map", filterMapLastBlock.sizeString(), filterMapLastBlock.countString()},
@@ -663,7 +669,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
total.Add(uint64(ancient.size()))
}
- table := NewTableWriter(os.Stdout)
+ table := tablewriter.NewWriter(os.Stdout)
table.SetHeader([]string{"Database", "Category", "Size", "Items"})
table.SetFooter([]string{"", "Total", common.StorageSize(total.Load()).String(), fmt.Sprintf("%d", count.Load())})
table.AppendBulk(stats)


@@ -59,7 +59,7 @@ const freezerTableSize = 2 * 1000 * 1000 * 1000
// - The in-order data ensures that disk reads are always optimized.
type Freezer struct {
datadir string
- frozen atomic.Uint64 // Number of items already frozen
+ head atomic.Uint64 // Number of items stored (including items removed from tail)
tail atomic.Uint64 // Number of the first stored item in the freezer
// This lock synchronizes writers and the truncate operation, as well as
@@ -97,12 +97,12 @@ func NewFreezer(datadir string, namespace string, readonly bool, maxTableSize ui
return nil, errSymlinkDatadir
}
}
+ // Leveldb/Pebble uses LOCK as the filelock filename. To prevent the
+ // name collision, we use FLOCK as the lock name.
flockFile := filepath.Join(datadir, "FLOCK")
if err := os.MkdirAll(filepath.Dir(flockFile), 0755); err != nil {
return nil, err
}
- // Leveldb uses LOCK as the filelock filename. To prevent the
- // name collision, we use FLOCK as the lock name.
lock := flock.New(flockFile)
tryLock := lock.TryLock
if readonly {
@@ -213,7 +213,7 @@ func (f *Freezer) AncientBytes(kind string, id, offset, length uint64) ([]byte,
// Ancients returns the length of the frozen items.
func (f *Freezer) Ancients() (uint64, error) {
- return f.frozen.Load(), nil
+ return f.head.Load(), nil
}
// Tail returns the number of first stored item in the freezer.
@@ -252,7 +252,7 @@ func (f *Freezer) ModifyAncients(fn func(ethdb.AncientWriteOp) error) (writeSize
defer f.writeLock.Unlock()
// Roll back all tables to the starting position in case of error.
- prevItem := f.frozen.Load()
+ prevItem := f.head.Load()
defer func() {
if err != nil {
// The write operation has failed. Go back to the previous item position.
@@ -273,7 +273,7 @@ func (f *Freezer) ModifyAncients(fn func(ethdb.AncientWriteOp) error) (writeSize
if err != nil {
return 0, err
}
- f.frozen.Store(item)
+ f.head.Store(item)
return writeSize, nil
}
@@ -286,7 +286,7 @@ func (f *Freezer) TruncateHead(items uint64) (uint64, error) {
f.writeLock.Lock()
defer f.writeLock.Unlock()
- oitems := f.frozen.Load()
+ oitems := f.head.Load()
if oitems <= items {
return oitems, nil
}
@@ -295,7 +295,7 @@ func (f *Freezer) TruncateHead(items uint64) (uint64, error) {
return 0, err
}
}
- f.frozen.Store(items)
+ f.head.Store(items)
return oitems, nil
}
@@ -320,6 +320,11 @@ func (f *Freezer) TruncateTail(tail uint64) (uint64, error) {
}
}
f.tail.Store(tail)
+ // Update the head if the requested tail exceeds the current head
+ if f.head.Load() < tail {
+ f.head.Store(tail)
+ }
return old, nil
}
@@ -379,7 +384,7 @@ func (f *Freezer) validate() error {
prunedTail = &tmp
}
- f.frozen.Store(head)
+ f.head.Store(head)
f.tail.Store(*prunedTail)
return nil
}
@@ -414,7 +419,7 @@ func (f *Freezer) repair() error {
}
}
- f.frozen.Store(head)
+ f.head.Store(head)
f.tail.Store(prunedTail)
return nil
}
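The `frozen` to `head` rename above goes with a semantic change: truncating the tail past the current head no longer returns a "truncation above head" error but instead resets the store to empty at the requested offset. A minimal standalone sketch of those counter rules (the type, field, and method names here are illustrative assumptions distilled from the diff, not the real `rawdb.Freezer`):

```go
package main

import "fmt"

// freezer models just the head/tail counters of the ancient store.
type freezer struct {
	head uint64 // number of items ever stored, including ones removed from the tail
	tail uint64 // number of the first retained item
}

// truncateTail deletes items below n. A stale target (n <= tail) is a
// no-op; a target beyond the head empties the store at offset n.
func (f *freezer) truncateTail(n uint64) {
	if n <= f.tail {
		return
	}
	f.tail = n
	if f.head < n {
		f.head = n
	}
}

func main() {
	f := &freezer{head: 100, tail: 0}
	f.truncateTail(200) // beyond the head: store is now empty at offset 200
	fmt.Println(f.head, f.tail) // prints "200 200"
}
```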


@@ -113,7 +113,7 @@ func (t *memoryTable) truncateTail(items uint64) error {
return nil
}
if t.items < items {
- return errors.New("truncation above head")
+ return t.reset(items)
}
for i := uint64(0); i < items-t.offset; i++ {
if t.size > uint64(len(t.data[i])) {
@@ -127,6 +127,16 @@ func (t *memoryTable) truncateTail(items uint64) error {
return nil
}
// reset clears the entire table and sets both the head and tail to the given
// value. It assumes the caller holds the lock and that offset > t.items.
func (t *memoryTable) reset(offset uint64) error {
t.size = 0
t.data = nil
t.items = offset
t.offset = offset
return nil
}
// commit merges the given item batch into table. It's presumed that the
// batch is ordered and continuous with table.
func (t *memoryTable) commit(batch [][]byte) error {
@@ -387,6 +397,9 @@ func (f *MemoryFreezer) TruncateTail(tail uint64) (uint64, error) {
}
}
f.tail = tail
+ if f.items < tail {
+ f.items = tail
+ }
return old, nil
}


@@ -707,12 +707,13 @@ func (t *freezerTable) truncateTail(items uint64) error {
t.lock.Lock()
defer t.lock.Unlock()
- // Ensure the given truncate target falls in the correct range
+ // Short-circuit if the requested tail deletion points to a stale position
if t.itemHidden.Load() >= items {
return nil
}
+ // If the requested tail exceeds the current head, reset the entire table
if t.items.Load() < items {
- return errors.New("truncation above head")
+ return t.resetTo(items)
}
// Load the new tail index by the given new tail position
var (
@@ -822,11 +823,10 @@ func (t *freezerTable) truncateTail(items uint64) error {
shorten := indexEntrySize * int64(newDeleted-deleted)
if t.metadata.flushOffset <= shorten {
return fmt.Errorf("invalid index flush offset: %d, shorten: %d", t.metadata.flushOffset, shorten)
- } else {
- if err := t.metadata.setFlushOffset(t.metadata.flushOffset-shorten, true); err != nil {
- return err
- }
- }
+ }
+ if err := t.metadata.setFlushOffset(t.metadata.flushOffset-shorten, true); err != nil {
+ return err
+ }
// Retrieve the new size and update the total size counter
newSize, err := t.sizeNolock()
if err != nil {
@@ -836,6 +836,59 @@ func (t *freezerTable) truncateTail(items uint64) error {
return nil
}
// resetTo clears the entire table and sets both the head and tail to the given
// value. It assumes the caller holds the lock and that tail > t.items.
func (t *freezerTable) resetTo(tail uint64) error {
// Sync the entire table before resetting to eliminate potential
// data corruption.
err := t.doSync()
if err != nil {
return err
}
// Update the index file to reflect the new offset
if err := t.index.Close(); err != nil {
return err
}
entry := &indexEntry{
filenum: t.headId + 1,
offset: uint32(tail),
}
if err := reset(t.index.Name(), entry.append(nil)); err != nil {
return err
}
if err := t.metadata.setVirtualTail(tail, true); err != nil {
return err
}
if err := t.metadata.setFlushOffset(indexEntrySize, true); err != nil {
return err
}
t.index, err = openFreezerFileForAppend(t.index.Name())
if err != nil {
return err
}
// Purge all the existing data files
if err := t.head.Close(); err != nil {
return err
}
t.headId = t.headId + 1
t.tailId = t.headId
t.headBytes = 0
t.head, err = t.openFile(t.headId, openFreezerFileTruncated)
if err != nil {
return err
}
t.releaseFilesBefore(t.headId, true)
t.items.Store(tail)
t.itemOffset.Store(tail)
t.itemHidden.Store(tail)
t.sizeGauge.Update(0)
return nil
}
// Close closes all opened files and finalizes the freezer table for use.
// This operation must be completed before shutdown to prevent the loss of
// recent writes.
@@ -1247,25 +1300,20 @@ func (t *freezerTable) doSync() error {
if t.index == nil || t.head == nil || t.metadata.file == nil {
return errClosed
}
- var err error
- trackError := func(e error) {
- if e != nil && err == nil {
- err = e
- }
- }
- trackError(t.index.Sync())
- trackError(t.head.Sync())
+ if err := t.index.Sync(); err != nil {
+ return err
+ }
+ if err := t.head.Sync(); err != nil {
+ return err
+ }
// A crash may occur before the offset is updated, leaving the offset
- // points to a old position. If so, the extra items above the offset
+ // points to an old position. If so, the extra items above the offset
// will be truncated during the next run.
stat, err := t.index.Stat()
if err != nil {
return err
}
- offset := stat.Size()
- trackError(t.metadata.setFlushOffset(offset, true))
- return err
+ return t.metadata.setFlushOffset(stat.Size(), true)
}
func (t *freezerTable) dumpIndexStdout(start, stop int64) {


@@ -1139,6 +1139,7 @@ const (
opTruncateHeadAll
opTruncateTail
opTruncateTailAll
+ opTruncateTailOverHead
opCheckAll
opMax // boundary value, not an actual op
)
@@ -1226,6 +1227,11 @@ func (randTest) Generate(r *rand.Rand, size int) reflect.Value {
step.target = deleted + uint64(len(items))
items = items[:0]
deleted = step.target
+ case opTruncateTailOverHead:
+ newDeleted := deleted + uint64(len(items)) + 10
+ step.target = newDeleted
+ deleted = newDeleted
+ items = items[:0]
}
steps = append(steps, step)
}
@@ -1268,7 +1274,7 @@ func runRandTest(rt randTest) bool {
for i := 0; i < len(step.items); i++ {
batch.AppendRaw(step.items[i], step.blobs[i])
}
- batch.commit()
+ rt[i].err = batch.commit()
values = append(values, step.blobs...)
case opRetrieve:
@@ -1290,24 +1296,28 @@ func runRandTest(rt randTest) bool {
}
case opTruncateHead:
- f.truncateHead(step.target)
+ rt[i].err = f.truncateHead(step.target)
length := f.items.Load() - f.itemHidden.Load()
values = values[:length]
case opTruncateHeadAll:
- f.truncateHead(step.target)
+ rt[i].err = f.truncateHead(step.target)
values = nil
case opTruncateTail:
prev := f.itemHidden.Load()
- f.truncateTail(step.target)
+ rt[i].err = f.truncateTail(step.target)
truncated := f.itemHidden.Load() - prev
values = values[truncated:]
case opTruncateTailAll:
- f.truncateTail(step.target)
+ rt[i].err = f.truncateTail(step.target)
values = nil
+ case opTruncateTailOverHead:
+ rt[i].err = f.truncateTail(step.target)
+ values = nil
}
// Abort the test on error.
@@ -1633,3 +1643,43 @@ func TestFreezerAncientBytes(t *testing.T) {
})
}
}
func TestTruncateOverHead(t *testing.T) {
t.Parallel()
fn := fmt.Sprintf("t-%d", rand.Uint64())
f, err := newTable(os.TempDir(), fn, metrics.NewMeter(), metrics.NewMeter(), metrics.NewGauge(), 100, freezerTableConfig{noSnappy: true}, false)
if err != nil {
t.Fatal(err)
}
// Tail truncation on an empty table
if err := f.truncateTail(10); err != nil {
t.Fatal(err)
}
batch := f.newBatch()
data := getChunk(10, 1)
require.NoError(t, batch.AppendRaw(uint64(10), data))
require.NoError(t, batch.commit())
got, err := f.RetrieveItems(uint64(10), 1, 0)
require.NoError(t, err)
if !bytes.Equal(got[0], data) {
t.Fatalf("Unexpected bytes, want: %v, got: %v", data, got[0])
}
// Tail truncation on the non-empty table
if err := f.truncateTail(20); err != nil {
t.Fatal(err)
}
batch = f.newBatch()
data = getChunk(10, 1)
require.NoError(t, batch.AppendRaw(uint64(20), data))
require.NoError(t, batch.commit())
got, err = f.RetrieveItems(uint64(20), 1, 0)
require.NoError(t, err)
if !bytes.Equal(got[0], data) {
t.Fatalf("Unexpected bytes, want: %v, got: %v", data, got[0])
}
}


@@ -22,6 +22,13 @@ import (
"path/filepath"
)
func atomicRename(src, dest string) error {
if err := os.Rename(src, dest); err != nil {
return err
}
return syncDir(filepath.Dir(src))
}
// copyFrom copies data from 'srcPath' at offset 'offset' into 'destPath'.
// The 'destPath' is created if it doesn't exist, otherwise it is overwritten.
// Before the copy is executed, a callback can be registered to
@@ -73,13 +80,48 @@ func copyFrom(srcPath, destPath string, offset uint64, before func(f *os.File) e
return err
}
f = nil
- return os.Rename(fname, destPath)
+ return atomicRename(fname, destPath)
}
// reset atomically replaces the file at the given path with the provided content.
func reset(path string, content []byte) error {
// Create a temp file in the same dir where we want it to wind up
f, err := os.CreateTemp(filepath.Dir(path), "*")
if err != nil {
return err
}
fname := f.Name()
// Clean up the leftover file
defer func() {
if f != nil {
f.Close()
}
os.Remove(fname)
}()
// Write the content into the temp file
_, err = f.Write(content)
if err != nil {
return err
}
// Permanently persist the content into disk
if err := f.Sync(); err != nil {
return err
}
if err := f.Close(); err != nil {
return err
}
f = nil
return atomicRename(fname, path)
}
// openFreezerFileForAppend opens a freezer table file and seeks to the end
func openFreezerFileForAppend(filename string) (*os.File, error) {
// Open the file without the O_APPEND flag
- // because it has differing behaviour during Truncate operations
+ // because it has differing behavior during Truncate operations
// on different OS's
file, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE, 0644)
if err != nil {


@@ -0,0 +1,49 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build !windows
// +build !windows
package rawdb
import (
"errors"
"os"
"syscall"
)
// syncDir ensures that the directory metadata (e.g. newly renamed files)
// is flushed to durable storage.
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
// Some file systems do not support fsyncing directories (e.g. some FUSE
// mounts). Ignore EINVAL in those cases.
if err := f.Sync(); err != nil {
if errors.Is(err, os.ErrInvalid) {
return nil
}
if patherr, ok := err.(*os.PathError); ok && patherr.Err == syscall.EINVAL {
return nil
}
return err
}
return nil
}


@@ -0,0 +1,26 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build windows
// +build windows
package rawdb
// syncDir is a no-op on Windows. Fsyncing a directory handle is not
// supported and returns "Access is denied".
func syncDir(name string) error {
return nil
}


@@ -112,6 +112,7 @@ var (
blockBodyPrefix = []byte("b") // blockBodyPrefix + num (uint64 big endian) + hash -> block body
blockReceiptsPrefix = []byte("r") // blockReceiptsPrefix + num (uint64 big endian) + hash -> block receipts
+ accessListPrefix = []byte("j") // accessListPrefix + num (uint64 big endian) + hash -> block access list
txLookupPrefix = []byte("l") // txLookupPrefix + hash -> transaction/receipt lookup metadata
bloomBitsPrefix = []byte("B") // bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash -> bloom bits
@@ -214,6 +215,11 @@ func blockReceiptsKey(number uint64, hash common.Hash) []byte {
return append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}
// accessListKey = accessListPrefix + num (uint64 big endian) + hash
func accessListKey(number uint64, hash common.Hash) []byte {
return append(append(accessListPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}
// txLookupKey = txLookupPrefix + hash
func txLookupKey(hash common.Hash) []byte {
return append(txLookupPrefix, hash.Bytes()...)


@@ -253,6 +253,11 @@ func (b *tableBatch) Reset() {
b.batch.Reset()
}
// Close closes the batch and releases all associated resources.
func (b *tableBatch) Close() {
b.batch.Close()
}
// tableReplayer is a wrapper around a batch replayer which truncates
// the added prefix.
type tableReplayer struct {


@@ -20,13 +20,13 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
- "github.com/ethereum/go-ethereum/common/lru"
"github.com/ethereum/go-ethereum/core/overlay"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state/snapshot"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
- "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"github.com/ethereum/go-ethereum/trie/bintrie"
"github.com/ethereum/go-ethereum/trie/transitiontrie"
@@ -34,19 +34,15 @@ import (
"github.com/ethereum/go-ethereum/triedb"
)
- const (
- // Number of codehash->size associations to keep.
- codeSizeCacheSize = 1_000_000 // 4 megabytes in total
- // Cache size granted for caching clean code.
- codeCacheSize = 256 * 1024 * 1024
- )
// Database wraps access to tries and contract code.
type Database interface {
// Reader returns a state reader associated with the specified state root.
Reader(root common.Hash) (Reader, error)
+ // Iteratee returns a state iteratee associated with the specified state root,
+ // through which the account iterator and storage iterator can be created.
+ Iteratee(root common.Hash) (Iteratee, error)
// OpenTrie opens the main account trie.
OpenTrie(root common.Hash) (Trie, error)
@@ -56,8 +52,10 @@ type Database interface {
// TrieDB returns the underlying trie database for managing trie nodes.
TrieDB() *triedb.Database
- // Snapshot returns the underlying state snapshot.
- Snapshot() *snapshot.Tree
+ // Commit flushes all pending writes and finalizes the state transition,
+ // committing the changes to the underlying storage. It returns an error
+ // if the commit fails.
+ Commit(update *stateUpdate) error
}
// Trie is an Ethereum Merkle Patricia trie.
@@ -149,32 +147,34 @@
// state snapshot to provide functionalities for state access. It's meant to be a
// long-lived object and has a few caches inside for sharing between blocks.
type CachingDB struct {
- disk ethdb.KeyValueStore
triedb *triedb.Database
+ codedb *CodeDB
snap *snapshot.Tree
- codeCache *lru.SizeConstrainedCache[common.Hash, []byte]
- codeSizeCache *lru.Cache[common.Hash, int]
- // Transition-specific fields
- TransitionStatePerRoot *lru.Cache[common.Hash, *overlay.TransitionState]
}

// NewDatabase creates a state database with the provided data sources.
- func NewDatabase(triedb *triedb.Database, snap *snapshot.Tree) *CachingDB {
+ func NewDatabase(triedb *triedb.Database, codedb *CodeDB) *CachingDB {
+ if codedb == nil {
+ codedb = NewCodeDB(triedb.Disk())
+ }
return &CachingDB{
- disk: triedb.Disk(),
triedb: triedb,
- snap: snap,
- codeCache: lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
- codeSizeCache: lru.NewCache[common.Hash, int](codeSizeCacheSize),
- TransitionStatePerRoot: lru.NewCache[common.Hash, *overlay.TransitionState](1000),
+ codedb: codedb,
}
}

// NewDatabaseForTesting is similar to NewDatabase, but it initializes the caching
// db by using an ephemeral memory db with default config for testing.
func NewDatabaseForTesting() *CachingDB {
- return NewDatabase(triedb.NewDatabase(rawdb.NewMemoryDatabase(), nil), nil)
+ db := rawdb.NewMemoryDatabase()
+ return NewDatabase(triedb.NewDatabase(db, nil), NewCodeDB(db))
}

// WithSnapshot configures the state snapshot of the database. Note that this
// registration must be performed before the cachingDB is used.
func (db *CachingDB) WithSnapshot(snapshot *snapshot.Tree) *CachingDB {
db.snap = snapshot
return db
}
// StateReader returns a state reader associated with the specified state root. // StateReader returns a state reader associated with the specified state root.
@ -218,21 +218,20 @@ func (db *CachingDB) Reader(stateRoot common.Hash) (Reader, error) {
	if err != nil {
		return nil, err
	}
	return newReader(db.codedb.Reader(), sr), nil
}
// ReadersWithCacheStats creates a pair of state readers that share the same
// underlying state reader and internal state cache, while maintaining separate
// statistics respectively.
func (db *CachingDB) ReadersWithCacheStats(stateRoot common.Hash) (Reader, Reader, error) {
	r, err := db.StateReader(stateRoot)
	if err != nil {
		return nil, nil, err
	}
	sr := newStateReaderWithCache(r)
	ra := newReader(db.codedb.Reader(), newStateReaderWithStats(sr))
	rb := newReader(db.codedb.Reader(), newStateReaderWithStats(sr))
	return ra, rb, nil
}
@ -268,22 +267,6 @@ func (db *CachingDB) OpenStorageTrie(stateRoot common.Hash, address common.Addre
	return tr, nil
}
// ContractCodeWithPrefix retrieves a particular contract's code. If the
// code can't be found in the cache, then check the existence with **new**
// db scheme.
func (db *CachingDB) ContractCodeWithPrefix(address common.Address, codeHash common.Hash) []byte {
code, _ := db.codeCache.Get(codeHash)
if len(code) > 0 {
return code
}
code = rawdb.ReadCodeWithPrefix(db.disk, codeHash)
if len(code) > 0 {
db.codeCache.Add(codeHash, code)
db.codeSizeCache.Add(codeHash, len(code))
}
return code
}
// TrieDB retrieves any intermediate trie-node caching layer.
func (db *CachingDB) TrieDB() *triedb.Database {
	return db.triedb
@ -294,6 +277,46 @@ func (db *CachingDB) Snapshot() *snapshot.Tree {
	return db.snap
}
// Commit flushes all pending writes and finalizes the state transition,
// committing the changes to the underlying storage. It returns an error
// if the commit fails.
func (db *CachingDB) Commit(update *stateUpdate) error {
// Short circuit if nothing to commit
if update.empty() {
return nil
}
// Commit dirty contract code if any exists
if len(update.codes) > 0 {
batch := db.codedb.NewBatchWithSize(len(update.codes))
for _, code := range update.codes {
batch.Put(code.hash, code.blob)
}
if err := batch.Commit(); err != nil {
return err
}
}
// If snapshotting is enabled, update the snapshot tree with this new version
if db.snap != nil && db.snap.Snapshot(update.originRoot) != nil {
if err := db.snap.Update(update.root, update.originRoot, update.accounts, update.storages); err != nil {
log.Warn("Failed to update snapshot tree", "from", update.originRoot, "to", update.root, "err", err)
}
// Keep 128 diff layers in the memory, persistent layer is 129th.
// - head layer is paired with HEAD state
// - head-1 layer is paired with HEAD-1 state
// - head-127 layer(bottom-most diff layer) is paired with HEAD-127 state
if err := db.snap.Cap(update.root, TriesInMemory); err != nil {
log.Warn("Failed to cap snapshot tree", "root", update.root, "layers", TriesInMemory, "err", err)
}
}
return db.triedb.Update(update.root, update.originRoot, update.blockNumber, update.nodes, update.stateSet())
}
// Iteratee returns a state iteratee associated with the specified state root,
// through which the account iterator and storage iterator can be created.
func (db *CachingDB) Iteratee(root common.Hash) (Iteratee, error) {
return newStateIteratee(!db.triedb.IsVerkle(), root, db.triedb, db.snap)
}
// mustCopyTrie returns a deep-copied trie.
func mustCopyTrie(t Trie) Trie {
	switch t := t.(type) {

core/state/database_code.go (new file, 231 lines)

@ -0,0 +1,231 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package state
import (
"sync/atomic"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/lru"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/ethdb"
)
const (
// Number of codeHash->size associations to keep.
codeSizeCacheSize = 1_000_000
// Cache size granted for caching clean code.
codeCacheSize = 256 * 1024 * 1024
)
// codeCache maintains cached contract code that is shared across blocks, enabling
// fast access for external callers such as RPC handlers and for state transitions.
//
// It is thread-safe and has a bounded size.
type codeCache struct {
codeCache *lru.SizeConstrainedCache[common.Hash, []byte]
codeSizeCache *lru.Cache[common.Hash, int]
}
// newCodeCache initializes the contract code cache with the predefined capacity.
func newCodeCache() *codeCache {
return &codeCache{
codeCache: lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
codeSizeCache: lru.NewCache[common.Hash, int](codeSizeCacheSize),
}
}
// Get returns the contract code associated with the provided code hash.
func (c *codeCache) Get(hash common.Hash) ([]byte, bool) {
return c.codeCache.Get(hash)
}
// GetSize returns the contract code size associated with the provided code hash.
func (c *codeCache) GetSize(hash common.Hash) (int, bool) {
return c.codeSizeCache.Get(hash)
}
// Put adds the provided contract code along with its size information into the cache.
func (c *codeCache) Put(hash common.Hash, code []byte) {
c.codeCache.Add(hash, code)
c.codeSizeCache.Add(hash, len(code))
}
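The cache keeps two structures in lockstep because they are bounded differently: the blob cache is limited by total bytes, while the size cache is limited by entry count, so code sizes stay answerable even after a large blob has been evicted. A minimal stdlib-only sketch of the Put/Get/GetSize contract (a toy map-backed stand-in, not the lru-backed implementation above):

```go
package main

import "fmt"

// toyCodeCache mirrors the codeCache contract with unbounded maps; the real
// implementation bounds the blob cache by total bytes and the size cache by
// entry count.
type toyCodeCache struct {
	blobs map[string][]byte
	sizes map[string]int
}

func newToyCodeCache() *toyCodeCache {
	return &toyCodeCache{blobs: map[string][]byte{}, sizes: map[string]int{}}
}

// Put stores the code blob and records its size in the same step, so GetSize
// can answer without touching the (larger) blob cache.
func (c *toyCodeCache) Put(hash string, code []byte) {
	c.blobs[hash] = code
	c.sizes[hash] = len(code)
}

func (c *toyCodeCache) Get(hash string) ([]byte, bool) {
	b, ok := c.blobs[hash]
	return b, ok
}

func (c *toyCodeCache) GetSize(hash string) (int, bool) {
	n, ok := c.sizes[hash]
	return n, ok
}

func main() {
	c := newToyCodeCache()
	c.Put("h1", []byte{0x60, 0x00})
	if n, ok := c.GetSize("h1"); ok {
		fmt.Println(n) // 2
	}
}
```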
// CodeReader implements state.ContractCodeReader, accessing contract code either
// from the local key-value store or from the shared code cache.
//
// CodeReader is safe for concurrent access.
type CodeReader struct {
db ethdb.KeyValueReader
cache *codeCache
// Cache statistics
hit atomic.Int64 // Number of code lookups found in the cache
miss atomic.Int64 // Number of code lookups not found in the cache
hitBytes atomic.Int64 // Total number of bytes read from cache
missBytes atomic.Int64 // Total number of bytes read from database
}
// newCodeReader constructs the code reader with the provided key-value store and cache.
func newCodeReader(db ethdb.KeyValueReader, cache *codeCache) *CodeReader {
return &CodeReader{
db: db,
cache: cache,
}
}
// Has reports whether the contract code with the specified address
// and hash exists.
func (r *CodeReader) Has(addr common.Address, codeHash common.Hash) bool {
return len(r.Code(addr, codeHash)) > 0
}
// Code implements state.ContractCodeReader, retrieving a particular contract's code.
// Nil is returned if the contract code is not present.
func (r *CodeReader) Code(addr common.Address, codeHash common.Hash) []byte {
code, _ := r.cache.Get(codeHash)
if len(code) > 0 {
r.hit.Add(1)
r.hitBytes.Add(int64(len(code)))
return code
}
r.miss.Add(1)
code = rawdb.ReadCode(r.db, codeHash)
if len(code) > 0 {
r.cache.Put(codeHash, code)
r.missBytes.Add(int64(len(code)))
}
return code
}
// CodeSize implements state.ContractCodeReader, retrieving a particular contract
// code's size. Zero is returned if the contract code is not present.
func (r *CodeReader) CodeSize(addr common.Address, codeHash common.Hash) int {
if cached, ok := r.cache.GetSize(codeHash); ok {
r.hit.Add(1)
return cached
}
return len(r.Code(addr, codeHash))
}
// CodeWithPrefix retrieves the contract code for the specified account address
// and code hash. It is almost identical to Code, but uses rawdb.ReadCodeWithPrefix
// for database lookups. The intention is to gradually deprecate the old
// contract code scheme.
func (r *CodeReader) CodeWithPrefix(addr common.Address, codeHash common.Hash) []byte {
code, _ := r.cache.Get(codeHash)
if len(code) > 0 {
r.hit.Add(1)
r.hitBytes.Add(int64(len(code)))
return code
}
r.miss.Add(1)
code = rawdb.ReadCodeWithPrefix(r.db, codeHash)
if len(code) > 0 {
r.cache.Put(codeHash, code)
r.missBytes.Add(int64(len(code)))
}
return code
}
// GetCodeStats implements ContractCodeReaderStater, returning the statistics
// of the code reader.
func (r *CodeReader) GetCodeStats() ContractCodeReaderStats {
return ContractCodeReaderStats{
CacheHit: r.hit.Load(),
CacheMiss: r.miss.Load(),
CacheHitBytes: r.hitBytes.Load(),
CacheMissBytes: r.missBytes.Load(),
}
}
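The four counters exposed by GetCodeStats make it straightforward to derive a cache hit rate for monitoring. A self-contained sketch, using a local stand-in for the ContractCodeReaderStats struct above:

```go
package main

import "fmt"

// stats mirrors the four counters exposed by ContractCodeReaderStats
// (a local stand-in so the example is self-contained).
type stats struct {
	CacheHit, CacheMiss, CacheHitBytes, CacheMissBytes int64
}

// hitRate returns the fraction of code lookups served from the cache,
// guarding against the zero-lookup case.
func hitRate(s stats) float64 {
	total := s.CacheHit + s.CacheMiss
	if total == 0 {
		return 0
	}
	return float64(s.CacheHit) / float64(total)
}

func main() {
	fmt.Println(hitRate(stats{CacheHit: 75, CacheMiss: 25})) // 0.75
}
```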
// CodeBatch accumulates contract code writes and flushes them to the
// database in a single batch on Commit.
type CodeBatch struct {
db *CodeDB
codes [][]byte
codeHashes []common.Hash
}
// newCodeBatch constructs the batch for writing contract code.
func newCodeBatch(db *CodeDB) *CodeBatch {
return &CodeBatch{
db: db,
}
}
// newCodeBatchWithSize constructs the batch with a pre-allocated capacity.
func newCodeBatchWithSize(db *CodeDB, size int) *CodeBatch {
return &CodeBatch{
db: db,
codes: make([][]byte, 0, size),
codeHashes: make([]common.Hash, 0, size),
}
}
// Put inserts the given contract code into the writer, waiting for commit.
func (b *CodeBatch) Put(codeHash common.Hash, code []byte) {
b.codes = append(b.codes, code)
b.codeHashes = append(b.codeHashes, codeHash)
}
// Commit flushes the accumulated dirty contract code into the database and
// also places it in the cache.
func (b *CodeBatch) Commit() error {
batch := b.db.db.NewBatch()
for i, code := range b.codes {
rawdb.WriteCode(batch, b.codeHashes[i], code)
b.db.cache.Put(b.codeHashes[i], code)
}
if err := batch.Write(); err != nil {
return err
}
b.codes = b.codes[:0]
b.codeHashes = b.codeHashes[:0]
return nil
}
// CodeDB is responsible for managing the contract code and provides the access
// to it. It can be used as a global object, sharing it between multiple entities.
type CodeDB struct {
db ethdb.KeyValueStore
cache *codeCache
}
// NewCodeDB constructs the contract code database with the provided key-value store.
func NewCodeDB(db ethdb.KeyValueStore) *CodeDB {
return &CodeDB{
db: db,
cache: newCodeCache(),
}
}
// Reader returns the contract code reader.
func (d *CodeDB) Reader() *CodeReader {
return newCodeReader(d.db, d.cache)
}
// NewBatch returns the batch for flushing contract codes.
func (d *CodeDB) NewBatch() *CodeBatch {
return newCodeBatch(d)
}
// NewBatchWithSize returns the batch with pre-allocated capacity.
func (d *CodeDB) NewBatchWithSize(size int) *CodeBatch {
return newCodeBatchWithSize(d, size)
}
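A CodeBatch only stages writes: nothing becomes visible in the backing store until Commit, which flushes everything at once and resets the batch for reuse. A stdlib-only sketch of those semantics with a toy map-backed store (hypothetical stand-ins, not the rawdb-backed implementation above):

```go
package main

import "fmt"

// toyBatch mirrors the CodeBatch semantics: Put only stages entries; Commit
// flushes them to the backing store and resets the batch for reuse.
type toyBatch struct {
	store  map[string][]byte // stand-in for the key-value database
	hashes []string
	codes  [][]byte
}

func (b *toyBatch) Put(hash string, code []byte) {
	b.hashes = append(b.hashes, hash)
	b.codes = append(b.codes, code)
}

func (b *toyBatch) Commit() {
	for i, h := range b.hashes {
		b.store[h] = b.codes[i]
	}
	// Reset the slices (keeping capacity) so the batch can be reused.
	b.hashes = b.hashes[:0]
	b.codes = b.codes[:0]
}

func main() {
	b := &toyBatch{store: map[string][]byte{}}
	b.Put("h1", []byte{1})
	fmt.Println(len(b.store)) // 0: nothing visible before Commit
	b.Commit()
	fmt.Println(len(b.store)) // 1
}
```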


@ -17,15 +17,13 @@
package state

import (
	"errors"
	"fmt"
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/rlp"
	"github.com/ethereum/go-ethereum/trie"
	"github.com/ethereum/go-ethereum/triedb"
)
@ -221,19 +219,15 @@ func (r *historicalTrieReader) Storage(addr common.Address, key common.Hash) (co
// HistoricDB is the implementation of Database interface, with the ability to
// access historical state.
type HistoricDB struct {
	triedb *triedb.Database
	codedb *CodeDB
}

// NewHistoricDatabase creates a historic state database.
func NewHistoricDatabase(triedb *triedb.Database, codedb *CodeDB) *HistoricDB {
	return &HistoricDB{
		triedb: triedb,
		codedb: codedb,
	}
}
@ -258,7 +252,7 @@ func (db *HistoricDB) Reader(stateRoot common.Hash) (Reader, error) {
	if err != nil {
		return nil, err
	}
	return newReader(db.codedb.Reader(), combined), nil
}
// OpenTrie opens the main account trie. It's not supported by historic database.
@ -294,7 +288,15 @@ func (db *HistoricDB) TrieDB() *triedb.Database {
	return db.triedb
}
// Commit flushes all pending writes and finalizes the state transition,
// committing the changes to the underlying storage. It returns an error
// if the commit fails.
func (db *HistoricDB) Commit(update *stateUpdate) error {
	return errors.New("not implemented")
}

// Iteratee returns a state iteratee associated with the specified state root,
// through which the account iterator and storage iterator can be created.
func (db *HistoricDB) Iteratee(root common.Hash) (Iteratee, error) {
	return nil, errors.New("not implemented")
}


@ -0,0 +1,435 @@
// Copyright 2025 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package state
import (
"errors"
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state/snapshot"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/trie"
"github.com/ethereum/go-ethereum/triedb"
)
// Iterator is an iterator to step over all the accounts or the specific
// storage in the specific state.
type Iterator interface {
// Next steps the iterator forward one element. It returns false if the iterator
// is exhausted or if an error occurs. Any error encountered is retained and
// can be retrieved via Error().
Next() bool
// Error returns any failure that occurred during iteration, which might have
// caused a premature iteration exit.
Error() error
// Hash returns the hash of the account or storage slot the iterator is
// currently at.
Hash() common.Hash
// Release releases associated resources. Release should always succeed and
// can be called multiple times without causing error.
Release()
}
// AccountIterator is an iterator to step over all the accounts in the
// specific state.
type AccountIterator interface {
Iterator
// Address returns the raw account address the iterator is currently at.
// An error will be returned if the preimage is not available.
Address() (common.Address, error)
// Account returns the RLP encoded account the iterator is currently at.
// An error will be retained if the iterator becomes invalid.
Account() []byte
}
// StorageIterator is an iterator to step over the specific storage in the
// specific state.
type StorageIterator interface {
Iterator
// Key returns the raw storage slot key the iterator is currently at.
// An error will be returned if the preimage is not available.
Key() (common.Hash, error)
// Slot returns the storage slot the iterator is currently at. An error will
// be retained if the iterator becomes invalid.
Slot() []byte
}
// Iteratee wraps the NewIterator methods for traversing the accounts and
// storages of the specific state.
type Iteratee interface {
// NewAccountIterator creates an account iterator for the state specified by
// the given root. It begins at a specified starting position, corresponding
// to a particular initial key (or the next key if the specified one does
// not exist).
//
// The starting position here refers to the hash of the account address.
NewAccountIterator(start common.Hash) (AccountIterator, error)
// NewStorageIterator creates a storage iterator for the state specified by
// the address hash. It begins at a specified starting position, corresponding
// to a particular initial key (or the next key if the specified one does
// not exist).
//
// The starting position here refers to the hash of the slot key.
NewStorageIterator(addressHash common.Hash, start common.Hash) (StorageIterator, error)
}
// PreimageReader wraps the function Preimage for accessing the preimage of
// a given hash.
type PreimageReader interface {
// Preimage returns the preimage of associated hash.
Preimage(hash common.Hash) []byte
}
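The Iterator contract above is the familiar go-ethereum style: Next advances and returns false on exhaustion or error, any error is retained for a post-loop Error check rather than returned from Next, and Release is idempotent. A stdlib-only sketch of a slice-backed iterator consumed with that loop (toy types, not the interfaces above):

```go
package main

import "fmt"

// sliceIterator implements the Next/Error/Hash/Release loop contract over a
// pre-sorted slice of keys. Errors would be latched into err and surfaced
// only via Error(), mirroring the retained-error convention.
type sliceIterator struct {
	keys []string
	pos  int // 0 means "before the first element"
	err  error
}

func (it *sliceIterator) Next() bool {
	if it.err != nil || it.pos >= len(it.keys) {
		return false
	}
	it.pos++
	return true
}

func (it *sliceIterator) Hash() string { return it.keys[it.pos-1] }
func (it *sliceIterator) Error() error { return it.err }
func (it *sliceIterator) Release()     {} // idempotent no-op here

func main() {
	it := &sliceIterator{keys: []string{"a", "b", "c"}}
	defer it.Release()
	var visited []string
	for it.Next() { // the canonical consumption loop
		visited = append(visited, it.Hash())
	}
	if err := it.Error(); err != nil { // check only after the loop exits
		panic(err)
	}
	fmt.Println(visited) // [a b c]
}
```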
// flatAccountIterator is a wrapper around the underlying flat state iterator.
// Before returning data from the iterator, it performs an additional conversion
// to bridge the slim encoding with the full encoding format.
type flatAccountIterator struct {
err error
it snapshot.AccountIterator
preimage PreimageReader
}
// newFlatAccountIterator constructs the account iterator with the provided
// flat state iterator.
func newFlatAccountIterator(it snapshot.AccountIterator, preimage PreimageReader) *flatAccountIterator {
return &flatAccountIterator{it: it, preimage: preimage}
}
// Next steps the iterator forward one element. It returns false if the iterator
// is exhausted or if an error occurs. Any error encountered is retained and
// can be retrieved via Error().
func (ai *flatAccountIterator) Next() bool {
if ai.err != nil {
return false
}
return ai.it.Next()
}
// Error returns any failure that occurred during iteration, which might have
// caused a premature iteration exit.
func (ai *flatAccountIterator) Error() error {
if ai.err != nil {
return ai.err
}
return ai.it.Error()
}
// Hash returns the hash of the account or storage slot the iterator is
// currently at.
func (ai *flatAccountIterator) Hash() common.Hash {
return ai.it.Hash()
}
// Release releases associated resources. Release should always succeed and
// can be called multiple times without causing error.
func (ai *flatAccountIterator) Release() {
ai.it.Release()
}
// Address returns the raw account address the iterator is currently at.
// An error will be returned if the preimage is not available.
func (ai *flatAccountIterator) Address() (common.Address, error) {
if ai.preimage == nil {
return common.Address{}, errors.New("account address is not available")
}
preimage := ai.preimage.Preimage(ai.Hash())
if preimage == nil {
return common.Address{}, errors.New("account address is not available")
}
return common.BytesToAddress(preimage), nil
}
// Account returns the account data the iterator is currently at. The underlying
// iterator yields accounts in slim format, so a conversion to the full RLP
// encoding is required.
func (ai *flatAccountIterator) Account() []byte {
data, err := types.FullAccountRLP(ai.it.Account())
if err != nil {
ai.err = err
return nil
}
return data
}
// flatStorageIterator is a wrapper around the underlying flat state iterator.
type flatStorageIterator struct {
it snapshot.StorageIterator
preimage PreimageReader
}
// newFlatStorageIterator constructs the storage iterator with the provided
// flat state iterator.
func newFlatStorageIterator(it snapshot.StorageIterator, preimage PreimageReader) *flatStorageIterator {
return &flatStorageIterator{it: it, preimage: preimage}
}
// Next steps the iterator forward one element. It returns false if the iterator
// is exhausted or if an error occurs. Any error encountered is retained and
// can be retrieved via Error().
func (si *flatStorageIterator) Next() bool {
return si.it.Next()
}
// Error returns any failure that occurred during iteration, which might have
// caused a premature iteration exit.
func (si *flatStorageIterator) Error() error {
return si.it.Error()
}
// Hash returns the hash of the account or storage slot the iterator is
// currently at.
func (si *flatStorageIterator) Hash() common.Hash {
return si.it.Hash()
}
// Release releases associated resources. Release should always succeed and
// can be called multiple times without causing error.
func (si *flatStorageIterator) Release() {
si.it.Release()
}
// Key returns the raw storage slot key the iterator is currently at.
// An error will be returned if the preimage is not available.
func (si *flatStorageIterator) Key() (common.Hash, error) {
if si.preimage == nil {
return common.Hash{}, errors.New("slot key is not available")
}
preimage := si.preimage.Preimage(si.Hash())
if preimage == nil {
return common.Hash{}, errors.New("slot key is not available")
}
return common.BytesToHash(preimage), nil
}
// Slot returns the storage slot data the iterator is currently at.
func (si *flatStorageIterator) Slot() []byte {
return si.it.Slot()
}
// merkleIterator implements the Iterator interface, providing functions to traverse
// the accounts or storages with the manner of Merkle-Patricia-Trie.
type merkleIterator struct {
tr Trie
it *trie.Iterator
account bool
}
// newMerkleTrieIterator constructs the iterator with the given trie and starting position.
func newMerkleTrieIterator(tr Trie, start common.Hash, account bool) (*merkleIterator, error) {
it, err := tr.NodeIterator(start.Bytes())
if err != nil {
return nil, err
}
return &merkleIterator{
tr: tr,
it: trie.NewIterator(it),
account: account,
}, nil
}
// Next steps the iterator forward one element. It returns false if the iterator
// is exhausted or if an error occurs. Any error encountered is retained and
// can be retrieved via Error().
func (ti *merkleIterator) Next() bool {
return ti.it.Next()
}
// Error returns any failure that occurred during iteration, which might have
// caused a premature iteration exit.
func (ti *merkleIterator) Error() error {
return ti.it.Err
}
// Hash returns the hash of the account or storage slot the iterator is
// currently at.
func (ti *merkleIterator) Hash() common.Hash {
return common.BytesToHash(ti.it.Key)
}
// Release releases associated resources. Release should always succeed and
// can be called multiple times without causing error.
func (ti *merkleIterator) Release() {}
// Address returns the raw account address the iterator is currently at.
// An error will be returned if the preimage is not available.
func (ti *merkleIterator) Address() (common.Address, error) {
if !ti.account {
return common.Address{}, errors.New("account address is not available")
}
preimage := ti.tr.GetKey(ti.it.Key)
if preimage == nil {
return common.Address{}, errors.New("account address is not available")
}
return common.BytesToAddress(preimage), nil
}
// Account returns the account data the iterator is currently at.
func (ti *merkleIterator) Account() []byte {
if !ti.account {
return nil
}
return ti.it.Value
}
// Key returns the raw storage slot key the iterator is currently at.
// An error will be returned if the preimage is not available.
func (ti *merkleIterator) Key() (common.Hash, error) {
if ti.account {
return common.Hash{}, errors.New("slot key is not available")
}
preimage := ti.tr.GetKey(ti.it.Key)
if preimage == nil {
return common.Hash{}, errors.New("slot key is not available")
}
return common.BytesToHash(preimage), nil
}
// Slot returns the storage slot the iterator is currently at.
func (ti *merkleIterator) Slot() []byte {
if ti.account {
return nil
}
return ti.it.Value
}
// stateIteratee implements Iteratee interface, providing the state traversal
// functionalities of a specific state.
type stateIteratee struct {
merkle bool
root common.Hash
triedb *triedb.Database
snap *snapshot.Tree
}
func newStateIteratee(merkle bool, root common.Hash, triedb *triedb.Database, snap *snapshot.Tree) (*stateIteratee, error) {
return &stateIteratee{
merkle: merkle,
root: root,
triedb: triedb,
snap: snap,
}, nil
}
// NewAccountIterator creates an account iterator for the state specified by
// the given root. It begins at a specified starting position, corresponding
// to a particular initial key (or the next key if the specified one does
// not exist).
//
// The starting position here refers to the hash of the account address.
func (si *stateIteratee) NewAccountIterator(start common.Hash) (AccountIterator, error) {
// If the external snapshot is available (hash scheme), try to initialize
// the account iterator from there first.
if si.snap != nil {
it, err := si.snap.AccountIterator(si.root, start)
if err == nil {
return newFlatAccountIterator(it, si.triedb), nil
}
}
// If the external snapshot is not available, try to initialize the
// account iterator from the trie database (path scheme)
it, err := si.triedb.AccountIterator(si.root, start)
if err == nil {
return newFlatAccountIterator(it, si.triedb), nil
}
if !si.merkle {
return nil, fmt.Errorf("state %x is not available for account traversal", si.root)
}
// The snapshot is not usable so far, construct the account iterator from
// the trie as the fallback. It's not as efficient as the flat state iterator.
tr, err := trie.NewStateTrie(trie.StateTrieID(si.root), si.triedb)
if err != nil {
return nil, err
}
return newMerkleTrieIterator(tr, start, true)
}
// NewStorageIterator creates a storage iterator for the state specified by
// the address hash. It begins at a specified starting position, corresponding
// to a particular initial key (or the next key if the specified one does not exist).
//
// The starting position here refers to the hash of the slot key.
func (si *stateIteratee) NewStorageIterator(addressHash common.Hash, start common.Hash) (StorageIterator, error) {
// If the external snapshot is available (hash scheme), try to initialize
// the storage iterator from there first.
if si.snap != nil {
it, err := si.snap.StorageIterator(si.root, addressHash, start)
if err == nil {
return newFlatStorageIterator(it, si.triedb), nil
}
}
// If the external snapshot is not available, try to initialize the
// storage iterator from the trie database (path scheme)
it, err := si.triedb.StorageIterator(si.root, addressHash, start)
if err == nil {
return newFlatStorageIterator(it, si.triedb), nil
}
if !si.merkle {
return nil, fmt.Errorf("state %x is not available for storage traversal", si.root)
}
// The snapshot is not usable so far, construct the storage iterator from
// the trie as the fallback. It's not as efficient as the flat state iterator.
tr, err := trie.NewStateTrie(trie.StateTrieID(si.root), si.triedb)
if err != nil {
return nil, err
}
acct, err := tr.GetAccountByHash(addressHash)
if err != nil {
return nil, err
}
if acct == nil || acct.Root == types.EmptyRootHash {
return &exhaustedIterator{}, nil
}
storageTr, err := trie.NewStateTrie(trie.StorageTrieID(si.root, addressHash, acct.Root), si.triedb)
if err != nil {
return nil, err
}
return newMerkleTrieIterator(storageTr, start, false)
}
type exhaustedIterator struct{}
func (e exhaustedIterator) Next() bool {
return false
}
func (e exhaustedIterator) Error() error {
return nil
}
func (e exhaustedIterator) Hash() common.Hash {
return common.Hash{}
}
func (e exhaustedIterator) Release() {
}
func (e exhaustedIterator) Key() (common.Hash, error) {
return common.Hash{}, nil
}
func (e exhaustedIterator) Slot() []byte {
return nil
}


@ -0,0 +1,262 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package state
import (
"bytes"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
)
// TestExhaustedIterator verifies the exhaustedIterator sentinel: Next is false,
// Error is nil, Hash/Key are zero, Slot is nil, and double Release is safe.
func TestExhaustedIterator(t *testing.T) {
var it exhaustedIterator
if it.Next() {
t.Fatal("Next() returned true")
}
if err := it.Error(); err != nil {
t.Fatalf("Error() = %v, want nil", err)
}
if hash := it.Hash(); hash != (common.Hash{}) {
t.Fatalf("Hash() = %x, want zero", hash)
}
if key, err := it.Key(); key != (common.Hash{}) || err != nil {
t.Fatalf("Key() = %x, %v; want zero, nil", key, err)
}
if slot := it.Slot(); slot != nil {
t.Fatalf("Slot() = %x, want nil", slot)
}
it.Release()
it.Release()
}
// TestAccountIterator tests the account iterator: correct count, ascending
// hash order, valid full-format RLP, data integrity, address preimage
// resolution, and seek behavior.
func TestAccountIterator(t *testing.T) {
testAccountIterator(t, rawdb.HashScheme)
testAccountIterator(t, rawdb.PathScheme)
}

func testAccountIterator(t *testing.T, scheme string) {
	_, sdb, ndb, root, accounts := makeTestState(scheme)
	ndb.Commit(root, false)

	iteratee, err := sdb.Iteratee(root)
	if err != nil {
		t.Fatalf("(%s) failed to create iteratee: %v", scheme, err)
	}
	// Build lookups from address hash.
	addrByHash := make(map[common.Hash]*testAccount)
	for _, acc := range accounts {
		addrByHash[crypto.Keccak256Hash(acc.address.Bytes())] = acc
	}
	// --- Full iteration: count, ordering, RLP validity, data integrity, address resolution ---
	acctIt, err := iteratee.NewAccountIterator(common.Hash{})
	if err != nil {
		t.Fatalf("(%s) failed to create account iterator: %v", scheme, err)
	}
	var (
		hashes   []common.Hash
		prevHash common.Hash
	)
	for acctIt.Next() {
		hash := acctIt.Hash()
		if hash == (common.Hash{}) {
			t.Fatalf("(%s) zero hash at position %d", scheme, len(hashes))
		}
		if len(hashes) > 0 && bytes.Compare(prevHash.Bytes(), hash.Bytes()) >= 0 {
			t.Fatalf("(%s) hashes not ascending: %x >= %x", scheme, prevHash, hash)
		}
		prevHash = hash
		hashes = append(hashes, hash)

		// Decode and verify account data.
		blob := acctIt.Account()
		if blob == nil {
			t.Fatalf("(%s) nil account at %x", scheme, hash)
		}
		var decoded types.StateAccount
		if err := rlp.DecodeBytes(blob, &decoded); err != nil {
			t.Fatalf("(%s) bad RLP at %x: %v", scheme, hash, err)
		}
		acc := addrByHash[hash]
		if decoded.Nonce != acc.nonce {
			t.Fatalf("(%s) nonce %x: got %d, want %d", scheme, hash, decoded.Nonce, acc.nonce)
		}
		if decoded.Balance.Cmp(acc.balance) != 0 {
			t.Fatalf("(%s) balance %x: got %v, want %v", scheme, hash, decoded.Balance, acc.balance)
		}
		// Verify address preimage resolution.
		addr, err := acctIt.Address()
		if err != nil {
			t.Fatalf("(%s) failed to resolve address: %v", scheme, err)
		}
		if addr != acc.address {
			t.Fatalf("(%s) Address() = %x, want %x", scheme, addr, acc.address)
		}
	}
	acctIt.Release()
	if err := acctIt.Error(); err != nil {
		t.Fatalf("(%s) iteration error: %v", scheme, err)
	}
	if len(hashes) != len(accounts) {
		t.Fatalf("(%s) iterated %d accounts, want %d", scheme, len(hashes), len(accounts))
	}
	// --- Seek: starting from midpoint should skip earlier entries ---
	mid := hashes[len(hashes)/2]
	seekIt, err := iteratee.NewAccountIterator(mid)
	if err != nil {
		t.Fatalf("(%s) failed to create seeked iterator: %v", scheme, err)
	}
	seekCount := 0
	for seekIt.Next() {
		if bytes.Compare(seekIt.Hash().Bytes(), mid.Bytes()) < 0 {
			t.Fatalf("(%s) seeked iterator returned hash before start", scheme)
		}
		seekCount++
	}
	seekIt.Release()
	if seekCount != len(hashes)/2 {
		t.Fatalf("(%s) unexpected seeked count, %d != %d", scheme, seekCount, len(hashes)/2)
	}
}

// TestStorageIterator tests the storage iterator: correct slot counts against
// the trie, ascending hash order, non-nil slot data, key preimage resolution,
// seek behavior, and empty-storage accounts.
func TestStorageIterator(t *testing.T) {
	testStorageIterator(t, rawdb.HashScheme)
	testStorageIterator(t, rawdb.PathScheme)
}

func testStorageIterator(t *testing.T, scheme string) {
	_, sdb, ndb, root, accounts := makeTestState(scheme)
	ndb.Commit(root, false)

	iteratee, err := sdb.Iteratee(root)
	if err != nil {
		t.Fatalf("(%s) failed to create iteratee: %v", scheme, err)
	}
	// --- Slot count and ordering for every account ---
	var withStorage common.Hash // remember an account that has storage for the seek test
	for _, acc := range accounts {
		addrHash := crypto.Keccak256Hash(acc.address.Bytes())
		expected := countStorageSlots(t, scheme, sdb, root, addrHash)

		storageIt, err := iteratee.NewStorageIterator(addrHash, common.Hash{})
		if err != nil {
			t.Fatalf("(%s) failed to create storage iterator for %x: %v", scheme, acc.address, err)
		}
		count := 0
		var prevHash common.Hash
		for storageIt.Next() {
			hash := storageIt.Hash()
			if count > 0 && bytes.Compare(prevHash.Bytes(), hash.Bytes()) >= 0 {
				t.Fatalf("(%s) storage hashes not ascending for %x", scheme, acc.address)
			}
			prevHash = hash
			if storageIt.Slot() == nil {
				t.Fatalf("(%s) nil slot at %x", scheme, hash)
			}
			// Check key preimage resolution on every slot.
			if _, err := storageIt.Key(); err != nil {
				t.Fatalf("(%s) Key() failed to resolve: %v", scheme, err)
			}
			count++
		}
		if err := storageIt.Error(); err != nil {
			t.Fatalf("(%s) storage iteration error for %x: %v", scheme, acc.address, err)
		}
		storageIt.Release()
		if count != expected {
			t.Fatalf("(%s) account %x: %d slots, want %d", scheme, acc.address, count, expected)
		}
		if count > 0 {
			withStorage = addrHash
		}
	}
	// --- Seek: starting from second slot should skip the first ---
	if withStorage == (common.Hash{}) {
		t.Fatalf("(%s) no account with storage found", scheme)
	}
	fullIt, err := iteratee.NewStorageIterator(withStorage, common.Hash{})
	if err != nil {
		t.Fatalf("(%s) failed to create full storage iterator: %v", scheme, err)
	}
	var slotHashes []common.Hash
	for fullIt.Next() {
		slotHashes = append(slotHashes, fullIt.Hash())
	}
	fullIt.Release()

	seekIt, err := iteratee.NewStorageIterator(withStorage, slotHashes[1])
	if err != nil {
		t.Fatalf("(%s) failed to create seeked storage iterator: %v", scheme, err)
	}
	seekCount := 0
	for seekIt.Next() {
		if bytes.Compare(seekIt.Hash().Bytes(), slotHashes[1].Bytes()) < 0 {
			t.Fatalf("(%s) seeked storage iterator returned hash before start", scheme)
		}
		seekCount++
	}
	seekIt.Release()
	if seekCount != len(slotHashes)-1 {
		t.Fatalf("(%s) unexpected seeked storage count %d != %d", scheme, seekCount, len(slotHashes)-1)
	}
}

// countStorageSlots counts storage slots for an account by opening the
// storage trie directly.
func countStorageSlots(t *testing.T, scheme string, sdb Database, root common.Hash, addrHash common.Hash) int {
	t.Helper()

	accTrie, err := trie.NewStateTrie(trie.StateTrieID(root), sdb.TrieDB())
	if err != nil {
		t.Fatalf("(%s) failed to open account trie: %v", scheme, err)
	}
	acct, err := accTrie.GetAccountByHash(addrHash)
	if err != nil || acct == nil || acct.Root == types.EmptyRootHash {
		return 0
	}
	storageTrie, err := trie.NewStateTrie(trie.StorageTrieID(root, addrHash, acct.Root), sdb.TrieDB())
	if err != nil {
		t.Fatalf("(%s) failed to open storage trie for %x: %v", scheme, addrHash, err)
	}
	it := trie.NewIterator(storageTrie.MustNodeIterator(nil))
	count := 0
	for it.Next() {
		count++
	}
	return count
}

@@ -27,7 +27,6 @@ import (
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
-	"github.com/ethereum/go-ethereum/trie"
 	"github.com/ethereum/go-ethereum/trie/bintrie"
 )
@@ -45,6 +44,7 @@ type DumpConfig struct {
 type DumpCollector interface {
 	// OnRoot is called with the state root
 	OnRoot(common.Hash)
+
 	// OnAccount is called once for each account in the trie
 	OnAccount(*common.Address, DumpAccount)
 }
@@ -65,9 +65,10 @@ type DumpAccount struct {
 type Dump struct {
 	Root     string                 `json:"root"`
 	Accounts map[string]DumpAccount `json:"accounts"`
+
 	// Next can be set to represent that this dump is only partial, and Next
 	// is where an iterator should be positioned in order to continue the dump.
-	Next []byte `json:"next,omitempty"` // nil if no more accounts
+	Next hexutil.Bytes `json:"next,omitempty"` // nil if no more accounts
 }

 // OnRoot implements DumpCollector interface
@@ -114,9 +115,6 @@ func (d iterativeDump) OnRoot(root common.Hash) {
 // DumpToCollector iterates the state according to the given options and inserts
 // the items into a collector for aggregation or serialization.
-//
-// The state iterator is still trie-based and can be converted to snapshot-based
-// once the state snapshot is fully integrated into database. TODO(rjl493456442).
 func (s *StateDB) DumpToCollector(c DumpCollector, conf *DumpConfig) (nextKey []byte) {
 	// Sanitize the input to allow nil configs
 	if conf == nil {
@@ -131,20 +129,23 @@ func (s *StateDB) DumpToCollector(c DumpCollector, conf *DumpConfig) (nextKey []
 	log.Info("Trie dumping started", "root", s.originalRoot)
 	c.OnRoot(s.originalRoot)
-	tr, err := s.db.OpenTrie(s.originalRoot)
+	iteratee, err := s.db.Iteratee(s.originalRoot)
 	if err != nil {
 		return nil
 	}
-	trieIt, err := tr.NodeIterator(conf.Start)
+	var startHash common.Hash
+	if conf.Start != nil {
+		startHash = common.BytesToHash(conf.Start)
+	}
+	acctIt, err := iteratee.NewAccountIterator(startHash)
 	if err != nil {
+		log.Error("Trie dumping error", "err", err)
 		return nil
 	}
-	it := trie.NewIterator(trieIt)
-	for it.Next() {
+	defer acctIt.Release()
+
+	for acctIt.Next() {
 		var data types.StateAccount
-		if err := rlp.DecodeBytes(it.Value, &data); err != nil {
+		if err := rlp.DecodeBytes(acctIt.Account(), &data); err != nil {
 			panic(err)
 		}
 		var (
@@ -153,63 +154,55 @@ func (s *StateDB) DumpToCollector(c DumpCollector, conf *DumpConfig) (nextKey []
 				Nonce:       data.Nonce,
 				Root:        data.Root[:],
 				CodeHash:    data.CodeHash,
-				AddressHash: it.Key,
+				AddressHash: acctIt.Hash().Bytes(),
 			}
 			address *common.Address
-			addr      common.Address
-			addrBytes = tr.GetKey(it.Key)
 		)
-		if addrBytes == nil {
+		addrBytes, err := acctIt.Address()
+		if err != nil {
 			missingPreimages++
 			if conf.OnlyWithAddresses {
 				continue
 			}
 		} else {
-			addr = common.BytesToAddress(addrBytes)
-			address = &addr
+			address = &addrBytes
 			account.Address = address
 		}
-		obj := newObject(s, addr, &data)
+		obj := newObject(s, addrBytes, &data)
 		if !conf.SkipCode {
 			account.Code = obj.Code()
 		}
 		if !conf.SkipStorage {
 			account.Storage = make(map[common.Hash]string)
-			storageTr, err := s.db.OpenStorageTrie(s.originalRoot, addr, obj.Root(), tr)
+			storageIt, err := iteratee.NewStorageIterator(acctIt.Hash(), common.Hash{})
 			if err != nil {
 				log.Error("Failed to load storage trie", "err", err)
 				continue
 			}
-			trieIt, err := storageTr.NodeIterator(nil)
-			if err != nil {
-				log.Error("Failed to create trie iterator", "err", err)
-				continue
-			}
-			storageIt := trie.NewIterator(trieIt)
 			for storageIt.Next() {
-				_, content, _, err := rlp.Split(storageIt.Value)
+				_, content, _, err := rlp.Split(storageIt.Slot())
 				if err != nil {
 					log.Error("Failed to decode the value returned by iterator", "error", err)
 					continue
 				}
-				key := storageTr.GetKey(storageIt.Key)
-				if key == nil {
+				key, err := storageIt.Key()
+				if err != nil {
 					continue
 				}
-				account.Storage[common.BytesToHash(key)] = common.Bytes2Hex(content)
+				account.Storage[key] = common.Bytes2Hex(content)
 			}
+			storageIt.Release()
 		}
 		c.OnAccount(address, account)
 		accounts++
 		if time.Since(logged) > 8*time.Second {
-			log.Info("Trie dumping in progress", "at", common.Bytes2Hex(it.Key), "accounts", accounts,
-				"elapsed", common.PrettyDuration(time.Since(start)))
+			log.Info("Trie dumping in progress", "at", acctIt.Hash().Hex(), "accounts", accounts, "elapsed", common.PrettyDuration(time.Since(start)))
 			logged = time.Now()
 		}
 		if conf.Max > 0 && accounts >= conf.Max {
-			if it.Next() {
-				nextKey = it.Key
+			if acctIt.Next() {
+				nextKey = acctIt.Hash().Bytes()
 			}
 			break
 		}
@@ -217,9 +210,7 @@ func (s *StateDB) DumpToCollector(c DumpCollector, conf *DumpConfig) (nextKey []
 	if missingPreimages > 0 {
 		log.Warn("Dump incomplete due to missing preimages", "missing", missingPreimages)
 	}
-	log.Info("Trie dumping complete", "accounts", accounts,
-		"elapsed", common.PrettyDuration(time.Since(start)))
+	log.Info("Trie dumping complete", "accounts", accounts, "elapsed", common.PrettyDuration(time.Since(start)))

 	return nextKey
 }
