Compare commits

...

217 commits

Author SHA1 Message Date
Guillaume Ballet
a15778c52f
trie: group 2^N binary trie nodes in serialization (#34794)
This PR addresses one of the biggest performance issues with binary
tries: storing each internal node individually bloats the index and the
disk, and causes a lot of write amplification. To fix this issue,
this PR serializes groups of nodes together.

Because we are still looking for the ideal group size, the "depth" of
the group tree is made a parameter, but that will be removed in the
future, once the perfect size is known.


This is a rebase of #33658

---------

Co-authored-by: Copilot <copilot@github.com>
2026-05-01 15:28:19 +02:00
cui
68646229a0
internal/era/onedb: return false if err (#34816)
The Next() function in RawIterator returned true on decompression errors. It
now returns false in those cases. A redundant error check in cmd/era/main.go
is also removed.
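The pattern can be sketched as a standalone iterator (hypothetical names, not the actual internal/era types): Next stores the error and returns false so callers stop iterating and inspect Error().

```go
package main

import (
	"errors"
	"fmt"
)

// rawIterator is an illustrative stand-in for an iterator over compressed
// entries; "corrupt" simulates a decompression failure.
type rawIterator struct {
	items []string
	pos   int
	err   error
}

func newRawIterator(items []string) *rawIterator {
	return &rawIterator{items: items, pos: -1}
}

// Next advances the iterator. On a decompression failure it records the
// error and returns false (previously it returned true, hiding the failure).
func (it *rawIterator) Next() bool {
	if it.err != nil || it.pos+1 >= len(it.items) {
		return false
	}
	it.pos++
	if it.items[it.pos] == "corrupt" {
		it.err = errors.New("decompression failed")
		return false
	}
	return true
}

func (it *rawIterator) Error() error { return it.err }

func main() {
	it := newRawIterator([]string{"a", "corrupt", "b"})
	for it.Next() {
		fmt.Println("entry:", it.items[it.pos])
	}
	fmt.Println("stopped:", it.Error())
}
```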

---------

Co-authored-by: Bosul Mun <bsbs8645@snu.ac.kr>
2026-05-01 14:10:41 +02:00
cui
19dc690af8
triedb/pathdb: fix layer 5 key range in account iterator traversal test (#34639)
The layer-5 diff condition used `i > 50 || i < 85`, which is true for
almost all keys in the 0..255 loop. Use `i > 50 && i < 85` so layer 5
only covers the intended band (51..84), consistent with the snapshot
iterator test fix.
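Counting matches over the loop's 0..255 range makes the difference obvious; a standalone sketch of the two conditions (not the actual test code):

```go
package main

import "fmt"

// buggyBand reproduces the broken layer-5 condition: i > 50 || i < 85 holds
// for every i in 0..255, since any value satisfies at least one side.
func buggyBand(i int) bool { return i > 50 || i < 85 }

// fixedBand is the corrected condition, covering only the band 51..84.
func fixedBand(i int) bool { return i > 50 && i < 85 }

func main() {
	buggy, fixed := 0, 0
	for i := 0; i < 256; i++ {
		if buggyBand(i) {
			buggy++
		}
		if fixedBand(i) {
			fixed++
		}
	}
	fmt.Println(buggy, fixed) // 256 34
}
```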
2026-05-01 00:24:22 +08:00
Bosul Mun
75a64ee341
eth/downloader: drop peers sending invalid bodies or receipts (#34745)
- Fixes an error shadowing issue in the deliver() function, where a
stale result from GetDeliverySlot caused the original failure to be
overwritten by errStaleDelivery.
- Adds errInvalidBody and errInvalidReceipt to the downloader error
checks to properly drop peers who sent invalid responses.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-30 17:55:26 +02:00
Giulio rebuffo
01036bed83
core: skip tx gas cap after Amsterdam (#34841)
EIP-7825 caps the transaction gas limit at `MaxTxGas`, but after
Amsterdam/EIP-8037 the transaction gas limit can include state gas
reservoir in addition to the regular gas dimension. Applying the Osaka
cap to the full `tx.Gas()` rejects otherwise valid Amsterdam
transactions that need more than `MaxTxGas` total gas because of state
gas, while their regular gas use remains within the intended limit.

This changes geth to stop applying the full transaction gas cap once
Amsterdam is active:

- txpool stateless validation no longer rejects `tx.Gas() > MaxTxGas`
under Amsterdam
- legacy pool reorg cleanup does not purge high-total-gas transactions
at the Osaka transition if Amsterdam is also active
- execution precheck mirrors the txpool behavior and does not reject
high-total-gas messages under Amsterdam

The block gas limit check remains in place, so transactions still cannot
request more total gas than the current block gas limit.
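The changed precheck can be sketched as follows; function name, parameters, and structure are illustrative, not geth's actual code, though the 2^24 cap value comes from EIP-7825:

```go
package main

import (
	"errors"
	"fmt"
)

const maxTxGas = 16_777_216 // EIP-7825 transaction gas cap (2^24)

// checkTxGasCap sketches the new behavior: the EIP-7825 cap on tx.Gas()
// applies from Osaka, but is skipped once Amsterdam is active, because
// tx.Gas() may then include state-gas reservoir on top of the regular gas
// dimension. The block gas limit check still applies in all cases.
func checkTxGasCap(txGas, blockGasLimit uint64, isOsaka, isAmsterdam bool) error {
	if txGas > blockGasLimit {
		return errors.New("exceeds block gas limit")
	}
	if isOsaka && !isAmsterdam && txGas > maxTxGas {
		return errors.New("transaction gas limit exceeds EIP-7825 cap")
	}
	return nil
}

func main() {
	// Valid under Amsterdam even though total gas exceeds the Osaka cap.
	fmt.Println(checkTxGasCap(maxTxGas+1_000_000, 60_000_000, true, true))
	// Rejected when only Osaka is active.
	fmt.Println(checkTxGasCap(maxTxGas+1_000_000, 60_000_000, true, false))
}
```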

Validation run:

```
go test ./core/txpool ./core/txpool/legacypool
go test ./core -run TestStateProcessorErrors
```

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-04-28 17:25:16 +02:00
Miki Noir
db8d6abced
accounts/keystore: enable fsnotify watcher on linux/arm64 (#34834) 2026-04-28 15:36:01 +02:00
cui
0c0d299c52
core/state: opt stateObject.GetState (#34825) 2026-04-28 20:33:40 +08:00
rjl493456442
b5d9c8d1c2
core: implement BAL reader for prefetching (#33737)
2026-04-28 13:10:15 +02:00
Guillaume Ballet
4dc7d46155
core/vm: implement stack arena (#33960)
Here, we change the EVM stack implementation to use an 'arena', i.e.
a shared allocation pool for sub-call stacks. The stack is now more
GC-friendly, since it is a slice of uint256 values instead of a slice of pointers.

Code that pushes an item to the stack has been changed to get() the top
item, then overwrite it.

The PR is a rewrite/rebase of #30362.
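The push-by-overwrite pattern can be sketched with a value-backed stack (a minimal illustration only, with uint64 standing in for the uint256 word type; this is not the shared arena itself):

```go
package main

import "fmt"

// stack stores words by value in a slice, so the GC sees no per-item
// pointers to scan.
type stack struct {
	data []uint64
}

// push extends the stack by one slot and returns a pointer to the new top,
// which the caller overwrites with the value -- the "get() the top item,
// then overwrite it" pattern described above.
func (s *stack) push() *uint64 {
	s.data = append(s.data, 0)
	return &s.data[len(s.data)-1]
}

func (s *stack) pop() uint64 {
	v := s.data[len(s.data)-1]
	s.data = s.data[:len(s.data)-1]
	return v
}

func main() {
	s := &stack{data: make([]uint64, 0, 16)}
	*s.push() = 42 // get the top slot, then overwrite it
	*s.push() = 7
	fmt.Println(s.pop(), s.pop()) // 7 42
}
```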

---------

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2026-04-28 11:10:44 +02:00
Rahman
51c97216c5
p2p/discover: fix timeout loop early exit when removing expired matchers (#34743)
Save `el.Next()` before calling `plist.Remove(el)` so iteration
continues correctly. Previously the loop exited after removing the first
expired matcher because `Remove` invalidates the element's links.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-28 10:57:58 +02:00
cui
822e7c6486
accounts/scwallet: truncate before write (#34815)
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-04-27 16:13:42 +02:00
felipe
442bd28b0b
cmd/evm/internal/t8ntool: stream t8n alloc to ease heavy memory cases (#34785) 2026-04-27 20:35:49 +08:00
Rahman
a065580422
triedb/pathdb: compute size in StateSetWithOrigin.decode (#34828)
`StateSetWithOrigin.decode()` was missing size computation after
deserializing origin data, causing `size` to remain zero after journal
reload. Added the same calculation logic used in
`NewStateSetWithOrigin()`.
2026-04-27 15:25:57 +08:00
rjl493456442
2d5da60371
core/types/bal: update the BAL definition to the latest spec (#34799)
This PR updates the BAL structure definition to the latest spec:

- Balance has been changed from [16]byte to uint256
- Storage keys and values have been changed from [32]byte to uint256
- BlockAccessList has been changed from a struct to a slice of
AccountChanges
- TxIndex has been changed from uint16 to uint32
2026-04-26 23:32:39 +08:00
cui
b26391773d
core/state: and instead of or (#34819)
2026-04-26 11:54:07 +02:00
rayoo
b70d9a4b8e
core/state,core/types/bal: copy stateReadList in StateDB.Copy
The stateReadList field introduced by #34776 to track the state access
footprint for EIP-7928 was not propagated by StateDB.Copy. Every other
per-transaction field that lives alongside it (accessList,
transientStorage, journal, witness, accessEvents) is copied explicitly,
so this field was simply missed.

After Copy the copy's stateReadList is nil while the original keeps its
entries, so the nil-safe guards on StateAccessList.AddAccount / AddState
silently drop every access recorded on the copy. For any post-Amsterdam
code path that copies a prepared state and keeps reading from the copy,
the BAL footprint becomes incomplete.

Add a Copy method on bal.StateAccessList and invoke it from
StateDB.Copy, matching the pattern used for accessList and accessEvents.

---------

Co-authored-by: jwasinger <j-wasinger@hotmail.com>
2026-04-24 17:30:03 +02:00
rayoo
8091994e7b
eth/protocols/snap: fix data race on testPeer counters (#34802)
The testPeer request counters (nAccountRequests, nStorageRequests,
nBytecodeRequests, nTrienodeRequests) were plain int fields incremented
with ++. These increments happen in Request* methods that are invoked
concurrently by the Syncer from multiple goroutines
(assignBytecodeTasks, assignStorageTasks, etc.), causing a data race
reliably detected by go test -race.

Change the counters to atomic.Int64 so increments and reads are
synchronized without introducing a mutex.

Fixes races detected in TestMultiSyncManyUseless,
TestMultiSyncManyUselessWithLowTimeout,
TestMultiSyncManyUnresponsive, TestSyncWithStorageAndOneCappedPeer,
TestSyncWithStorageAndCorruptPeer, and
TestSyncWithStorageAndNonProvingPeer.
2026-04-24 13:37:34 +02:00
Bosul Mun
0da22dee45
eth/fetcher: lazy-allocate hashes slice in scheduleFetches
scheduleFetches.func1 is the biggest allocator in the long-duration
profile of node (11% of total alloc_space).
Each peer-iteration pre-allocated make([]common.Hash, 0, maxTxRetrievals),
even for peers that end up collecting no new hashes (all their announces
were already being fetched by someone else).

Defer the slice allocation to the first append. Peers that collect zero hashes
now pay zero allocation, which is the common case on the timeoutTrigger
path where all peers with any announces are iterated.
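The lazy-allocation change can be sketched as below (string keys stand in for common.Hash; names are illustrative, not the fetcher's actual code):

```go
package main

import "fmt"

const maxTxRetrievals = 256

// collect gathers hashes not already being fetched. Instead of
// pre-allocating make([]..., 0, maxTxRetrievals) for every peer, the slice
// stays nil until the first append, so peers that collect nothing allocate
// nothing -- the common case on the timeout path.
func collect(announces []string, alreadyFetching map[string]bool) []string {
	var hashes []string // nil until the first append
	for _, h := range announces {
		if alreadyFetching[h] {
			continue
		}
		if hashes == nil {
			hashes = make([]string, 0, maxTxRetrievals)
		}
		hashes = append(hashes, h)
		if len(hashes) == maxTxRetrievals {
			break
		}
	}
	return hashes
}

func main() {
	fetching := map[string]bool{"a": true, "b": true}
	fmt.Println(collect([]string{"a", "b"}, fetching)) // nil: zero allocation
	fmt.Println(collect([]string{"a", "c"}, fetching)) // [c]
}
```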
2026-04-24 13:24:52 +02:00
Sina M
c876755839
Update eth/fetcher/tx_fetcher.go
Co-authored-by: jwasinger <j-wasinger@hotmail.com>
2026-04-24 12:12:26 +02:00
YQ
33c1bd59ff
rpc: send WebSocket close frame on client disconnect (#33909)
When `rpc.Client.Close()` is called, the TCP connection is torn down
without sending a WebSocket Close frame. The server sees `websocket:
close 1006 (abnormal closure): unexpected EOF` instead of a clean 1000
(normal closure).

### Root cause

`websocketCodec.close()` delegates to `jsonCodec.close()` which calls
`c.conn.Close()` — gorilla/websocket's `Conn.Close` explicitly "[closes
the underlying network connection without sending or waiting for a close
message](https://pkg.go.dev/github.com/gorilla/websocket#Conn.Close)"
(per RFC 6455).

### Fix

Send a WebSocket Close control frame (opcode 0x8, status 1000) before
closing the underlying connection. Uses `WriteControl` with the same
`encMu` mutex pattern already used by `pingLoop` for write
serialization, and reuses the existing `wsPingWriteTimeout` (5s)
constant.

`WriteControl` errors are safe to ignore — the connection may already be
broken by the time we attempt the close frame.

Fixes #30482
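Per RFC 6455, the payload of a close frame (opcode 0x8) begins with a 2-byte big-endian status code; 1000 means normal closure. A dependency-free sketch of the bytes that gorilla/websocket's FormatCloseMessage produces and WriteControl sends:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// closePayload builds a WebSocket close-frame payload: 2-byte big-endian
// status code followed by an optional UTF-8 reason.
func closePayload(status uint16, reason string) []byte {
	buf := make([]byte, 2+len(reason))
	binary.BigEndian.PutUint16(buf, status)
	copy(buf[2:], reason)
	return buf
}

func main() {
	p := closePayload(1000, "") // normal closure
	fmt.Printf("% x\n", p)      // 03 e8
}
```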
2026-04-24 11:27:39 +02:00
cui
6ece4cd143
crypto: fix unit test (#34811)
2026-04-24 10:02:34 +08:00
Bosul Mun
526ad4f6f1
crypto/kzg4844: add cell-related functions (#34766)
This PR adds three cell-level kzg functions required for the sparse
blobpool (eth/72).

- VerifyCells: Verifies cells corresponding to proofs. This is used to
verify cells received from eth/72 peers.
- ComputeCells: Computes cells from blobs. This is needed because user
submissions and eth/71 transaction deliveries contain blobs, while
eth/72 peers expect cells.
- RecoverBlobs: Recovers blobs from partial cells. This is needed to
support both eth/71 and eth/72.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-23 15:39:07 +02:00
Sina Mahmoodi
2ca74d2ef9
eth/fetcher: lazy-allocate hashes slice in scheduleFetches
scheduleFetches.func1 is the single biggest allocator in the Pyroscope
profile of a busy node (~13.5 GB/hr, 8% of total alloc_space). Each
peer-iteration pre-allocated 'make([]common.Hash, 0, maxTxRetrievals)'
= 8 KB, even for peers that end up collecting no new hashes (all their
announces were already being fetched by someone else).

Defer the slice allocation to the first append. Peers that collect zero
hashes now pay zero allocation, which is the common case on the
timeoutTrigger path where all peers with any announces are iterated.

New benchmarks BenchmarkScheduleFetches_{100peers_10new,
100peers_allFetching, 500peers_3new} (benchstat, 6 samples):

  scenario            ns/op       B/op        allocs/op
  100p/10new          unchanged   unchanged   unchanged   (fast path)
  100p/allFetching   -62%        -92%        -20%
  500p/3new          -22%        -44%         -7%
  geomean            -33%        -65%         -9%
2026-04-23 08:38:40 +00:00
Matus Kysel
8e2107dc39
cmd/devp2p: fix disconnect decoding in rlpx ping (#34781)
The rlpx ping command mishandled disconnect responses on two counts:
the error return from rlp.DecodeBytes was ignored, so decode failures
silently produced an "invalid disconnect message" error with no context;
and the decoder assumed the spec-compliant list form exclusively, while
older geth and some other implementations send the reason as a bare
byte.

Accept both wire forms (matching the legacy-tolerant behavior already
in p2p.decodeDisconnectMessage), and on decode failure include the raw
payload so operators can see exactly what the peer sent. Add a unit
test for the decoder covering both forms plus the empty-payload error
path.
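A simplified, dependency-free sketch of the legacy-tolerant decoding (real geth uses the rlp package; here the two simplest encodings are pattern-matched directly, and the helper name is illustrative):

```go
package main

import "fmt"

// decodeDisconnect accepts both wire forms: the spec-compliant RLP list
// containing the reason (0xc1 <reason>), and the legacy bare-byte reason
// sent by older implementations. On failure it includes the raw payload
// in the error so operators can see exactly what the peer sent.
func decodeDisconnect(payload []byte) (byte, error) {
	switch {
	case len(payload) == 2 && payload[0] == 0xc1 && payload[1] <= 0x7f:
		return payload[1], nil // spec-compliant list form: [reason]
	case len(payload) == 1 && payload[0] <= 0x7f:
		return payload[0], nil // legacy bare-byte form
	default:
		return 0, fmt.Errorf("invalid disconnect message: %x", payload)
	}
}

func main() {
	r1, _ := decodeDisconnect([]byte{0xc1, 0x04}) // list form
	r2, _ := decodeDisconnect([]byte{0x04})       // bare byte
	_, err := decodeDisconnect(nil)               // empty payload
	fmt.Println(r1, r2, err != nil) // 4 4 true
}
```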
2026-04-22 16:48:38 +02:00
Sina M
b0ead5e17b
.gitea: add installer and archive steps for windows (#34793)
Adds the installer + archive steps that were done on appveyor to gitea
builder.
2026-04-22 16:18:29 +02:00
Guillaume Ballet
eb3283fb2e
accounts/usbwallet: revert github.com/karalabe/hid to fix freebsd build (#34784)
This PR reverts the last change to the freebsd build, and it fixes the
_direct_ FreeBSD build.

Here, we change the upstream of github.com/karalabe/hid to its new home,
github.com/ethereum/hid. The new dependency includes a dummy.go file
that makes `go mod vendor` work.

##### Origin of the problem

Enrique is maintaining the FreeBSD ports, and FreeBSD ports only support
vendored go modules. It turns out that `go mod vendor` will not include
C files if there is no `.go` file in the directory. Since the C files
were missing for `karalabe/hid`, the ports maintainer tried to use the
version of `hidapi` that is provided by the ports. To do so, he had to
modify the way things are included. This broke the _out of ports_
FreeBSD build.
2026-04-22 12:32:19 +02:00
Sina M
87b030780e
.github: add windows runner (#34742)
Difference to Appveyor:

- Missing 386 build. Hit some issue because user-space memory there is
around 2Gbs. Also seems generally extremely niche.
- Not doing the archive step and NSIS installer and uploads (those are
done on the builder).
2026-04-22 12:18:56 +02:00
rjl493456442
6f02965aab
core: track the state access footprint (#34776)
This is a pre-requisite PR for landing the BAL construction
2026-04-22 13:42:49 +08:00
cui
3abc4cea35
core/state: use address hash cache if available (#34780) 2026-04-22 11:05:59 +08:00
cui
dca3cf02a2
core: pre-allocate the receipt slice (#34786) 2026-04-22 11:02:44 +08:00
rjl493456442
d422ab39d5
consensus, core, internal, miner: remove FinalizeAndAssemble (#34726)
This PR removes `FinalizeAndAssemble` from the consensus engine
interface and relocates block assembly logic outside of the consensus
engine.

Block assembly is consensus-agnostic. Most validations can be performed
by the caller. For example:

- Withdrawals must be nil prior to Shanghai
- After Shanghai upgrade, withdrawals must be non-nil, even if empty.

The only notable consensus-specific validation is related to uncles. In
clique, the concept of uncles does not exist, and any block containing
uncles should be considered invalid.

Within the block production package, the policy is to produce blocks
according to the latest chain specification. As a result,
Clique-specific block production is no longer supported. This tradeoff
is considered acceptable.
2026-04-21 20:58:21 +02:00
Guillaume Ballet
c374e74ee1
trie/bintrie: print todot path in binary (#34777)
The nodes were named using the byte representation of the path, instead
of the binary representation. This was confusing to other client devs
trying to achieve interop.
2026-04-21 14:50:09 +02:00
Barnabas Busa
f568ab9931
internal/telemetry: add gRPC transport for OTLP trace export (#33941)
## Summary
- Add `grpc://` and `grpcs://` URL scheme support for OTLP trace export
alongside existing `http://`/`https://`
- The OTLP spec defines two transports: HTTP (port 4318) and gRPC (port
4317). Many observability backends (Jaeger, Tempo, Datadog) prefer gRPC
for lower overhead
- Both `otlptracehttp` and `otlptracegrpc` return `*otlptrace.Exporter`,
so only exporter construction changes — everything downstream (batch
processor, tracer provider, lifecycle) is untouched
- Update flag usage strings to be transport-agnostic

## Example usage
```
geth --rpc.telemetry --rpc.telemetry.endpoint grpc://localhost:4317
geth --rpc.telemetry --rpc.telemetry.endpoint grpcs://tempo-grpc.example.com:443
```

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-21 14:48:21 +02:00
cui
077d83387a
metrics: reset internal value slice in Clear (#34761) 2026-04-21 13:58:49 +02:00
Marius van der Wijden
ac406c2fe7
core: implement eip-7976: Increase Calldata Floor Cost (#34748)
Increases calldata floor cost from 10/40 to 64/64
2026-04-21 16:20:02 +08:00
Toni Wahrstätter
e447a2696d
core/rawdb: clarify ReadLastPivotNumber comment (#34773)
clarify that `ReadLastPivotNumber` returns `nil` only when snap sync has
never been attempted, since the marker is written during snap sync and
never cleared.
2026-04-21 09:19:03 +08:00
rjl493456442
acbf699c33
core/state: export StateUpdate struct (#34724)
In the recent refactoring, the state commit logic has been abstracted, 
making it more flexible to design state databases for various use cases.
For example, execution-only modes where state mutation is disabled.

As part of this change, the database interface was extended with a 
Commit function. However, it currently accepts an unexported struct
`stateUpdate`, which prevents downstream projects from customizing
the state commit behavior.

To address this limitation, the stateUpdate type is now exported.
2026-04-20 17:12:10 +02:00
rjl493456442
7e388fd09e
core/state: separate trie reader to mptReader and ubtReader (#34763)
This PR separates the trie reader to mptTrieReader and ubtTrieReader for
improved readability and extensibility.

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-20 15:04:42 +02:00
CPerezz
b6d415c88d
trie/bintrie: replace BinaryNode interface with GC-free NodeRef arena (#34055)
## Summary

Replace the `BinaryNode` interface with `NodeRef uint32` indices into
typed arena pools, eliminating GC-scanned pointers from binary trie
nodes.

Inspired by [fjl's
observation](https://github.com/ethereum/go-ethereum/pull/34034#issuecomment-4075176446):
> *"if the binary trie produces such a large graph, it should probably
be changed so that the trie node type does not contain pointers. The
runtime does not scan objects that do not contain pointers, so it can
really help with the performance to build it this way."*

### The problem

CPU profiling of the binary trie (EIP-7864) showed **44% of CPU time in
garbage collection**. Each `InternalNode` held two `BinaryNode`
interface values (2 pointer-words each), and the GC scanned every one.
With ~25K `InternalNode`s in memory during block processing, this
created enormous GC pressure.

### The solution

`NodeRef` is a compact `uint32` (2-bit kind tag + 30-bit pool index).
`NodeStore` manages chunked typed pools per node kind:
- **InternalNode pool**: ZERO Go pointers (children are `NodeRef`, hash
is `[32]byte`) → noscan spans
- **HashedNode pool**: ZERO Go pointers → noscan spans
- **StemNode pool**: retains `Values [][]byte` (matching existing
format)

The serialization format is unchanged — flat InternalNode
`[type][leftHash][rightHash]` = 65 bytes.

## Benchmark: Apple M4 Pro (`--benchtime=10s --count=3`, on top of
#34021)

| Metric | Baseline | Arena | Delta |
|--------|----------|-------|-------|
| Approve (Mgas/s) | 374 | 382 | **+2.1%** |
| BalanceOf (Mgas/s) | 885 | 901 | **+1.8%** |
| Approve allocs/op | 775K | **607K** | **-21.7%** |
| BalanceOf allocs/op | 265K | **228K** | **-14.0%** |

## Benchmark: AMD EPYC 48-core (50GB state, execution-specs ERC-20, on
top of #34021 + #34032)

| Benchmark | Baseline | Arena | Delta |
|-----------|----------|-------|-------|
| erc20_approve (write) | 22.4 Mgas/s | **27.0 Mgas/s** | **+20.5%** |
| mixed_sload_sstore | 62.9 Mgas/s | **97.3 Mgas/s** | **+54.7%** |
| erc20_balanceof (read) | 180.8 Mgas/s | 167.6 Mgas/s | -7.3% (cold
cache variance) |

The arena benefit scales with heap size — the EPYC (larger heap, more GC
pressure) shows much larger gains than the M4 Pro (efficient unified
memory). The mixed workload baseline was unstable (62.9 vs 16.3 Mgas/s
between runs due to GC-induced throughput collapse); the arena
eliminates this entirely (95-97 Mgas/s, stable).

## Dependencies

Benchmarked with #34021 (H01 N+1 fix) + #34032 (R14 parallel hashing).
No code dependency — applies independently to master.

All test suites pass (`trie/bintrie` with `-race`, `core/state`,
`triedb/pathdb`, `cmd/geth`).

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-04-20 14:08:30 +02:00
rjl493456442
29e0a6f404
core/vm, eth, tests: introduce gas budget (#34712)
This PR introduces a gasBudget struct to track the available gas for EVM
execution.

With the upcoming EIP-8037, multi-dimensional gas accounting will be
introduced, requiring multiple gas budget counters to be tracked 
simultaneously. To support this, the counters are grouped into a gasBudget 
structure.

This change is a prerequisite for internal refactoring in preparation
for EIP-8037.

---------

Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
2026-04-20 15:33:29 +08:00
rayoo
5af5510b1e
core/rawdb: fix file descriptor leak in freezer error paths (#34735)
In openFreezerFileForAppend, if Seek fails after the file is
successfully opened, the file handle is not closed, leaking a
descriptor.

Similarly in newTable, if opening the meta file fails, the
already-opened index file is not closed. And if newMetadata fails, both
the index and meta files are leaked.

Under repeated error conditions (e.g., corrupted filesystem), these
leaks accumulate and may exhaust the OS file descriptor limit, causing
cascading failures.
2026-04-20 11:06:17 +08:00
cui
8c7d61fcfe
tests: fix invalid eip parse error (#34750) 2026-04-20 11:02:42 +08:00
cui
78505e48dd
triedb/pathdb: fix typo (#34762) 2026-04-20 10:07:41 +08:00
CPerezz
53ff723cc7
core/state: handle *bintrie.BinaryTrie in mustCopyTrie (#34758)
## Problem

`mustCopyTrie` in `core/state/database.go` panics on any trie type not
in its type switch:

```go
func mustCopyTrie(t Trie) Trie {
    switch t := t.(type) {
    case *trie.StateTrie:
        return t.Copy()
    case *transitiontrie.TransitionTrie:
        return t.Copy()
    default:
        panic(fmt.Errorf("unknown trie type %T", t))
    }
}
```

On UBT-backed databases (`state.NewUBTDatabase(...)`, used by
`blockchain.go:2124` when the triedb is configured for binary trie),
`StateDB.trie` is `*bintrie.BinaryTrie` — so every `StateDB.Copy()` call
(hit from `statedb.go:699` and the `*trie.StateTrie` branch of
`state_object.go:546`) crashes with `unknown trie type
*bintrie.BinaryTrie`.

## Fix

Add the `*bintrie.BinaryTrie` case. `BinaryTrie.Copy()` already exists
at `trie/bintrie/trie.go:372` and produces a correct deep copy — this
just wires it into the switch.
2026-04-18 18:47:22 +02:00
CPerezz
61bfacc52f
trie/bintrie: skip clean nodes in CollectNodes to reduce commit write amplification (#34754)
## Problem

`BinaryTrie.Commit` unconditionally walked every resolved in-memory node
and flushed it into the `NodeSet`, producing one Pebble write per
resolved internal + stem node on every block — even when the node's
on-disk blob was bitwise identical to the previous commit. On a warm
400M-state workload this meant tens of thousands of redundant 65-byte
writes per block, compounding Pebble compaction pressure on every
commit.

The existing `mustRecompute` flag tracks *hash* staleness, not
*disk-blob* staleness: after `Hash()` completes, `mustRecompute` is
cleared even though the fresh blob has not been persisted. It is
therefore insufficient for a skip-flush optimization.

## Fix

Mirror the MPT committer pattern (`trie/committer.go:51-56`) by adding a
`dirty` flag on `InternalNode` and `StemNode` with the semantics *the
on-disk blob is stale*. The flag is:

- set to `true` wherever the node is created or structurally modified
(the same call sites that already set `mustRecompute = true`);
- set to `false` only after the node has been passed to the `flushfn`
inside `CollectNodes`;
- left `false` on nodes produced by `DeserializeNodeWithHash`, matching
the *loaded from disk, already persisted* semantics.

`CollectNodes` short-circuits on `!dirty` subtrees. The propagation
invariant (an ancestor of any dirty node is itself dirty) is already
maintained by the existing `InsertValuesAtStem` / `Insert` paths, which
now mirror every `mustRecompute = true` setter with a `dirty = true`
setter.
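The dirty-flag short-circuit can be sketched on a toy binary tree (this mirrors the pattern described above, not geth's actual types): nodes loaded from disk start clean, mutation marks a node and all its ancestors dirty, and the collector skips clean subtrees and clears the flag after flushing.

```go
package main

import "fmt"

type node struct {
	left, right *node
	dirty       bool // true means the on-disk blob is stale
}

// collect flushes dirty nodes bottom-up and returns how many were flushed.
// The short-circuit relies on the propagation invariant: an ancestor of any
// dirty node is itself dirty, so a clean node has no dirty descendants.
func collect(n *node, flush func(*node)) int {
	if n == nil || !n.dirty {
		return 0 // clean subtree: skip entirely
	}
	count := collect(n.left, flush) + collect(n.right, flush)
	flush(n)
	n.dirty = false // on-disk blob is now fresh
	return count + 1
}

func main() {
	// Three-level tree where only the left spine was modified.
	leaf := &node{dirty: true}
	root := &node{
		dirty: true,
		left:  &node{dirty: true, left: leaf},
		right: &node{}, // untouched, already persisted
	}
	fmt.Println(collect(root, func(*node) {})) // 3: the dirty path only
}
```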

## Benchmark

New `BenchmarkCollectNodes_SparseWrite` measures commit cost when only
one leaf changes between blocks — the common case for state updates.
10,000-stem trie, one-leaf modification + Commit per iteration, Apple M4
Pro:

| | before | after | delta |
|---|---|---|---|
| time / op | 12,653,000 ns | 7,336 ns | **~1,725×** |
| bytes / op | 107,224,740 B | 37,774 B | **~2,839×** |
| allocs / op | 80,953 | 134 | **~604×** |

End-to-end impact on a real workload depends on the
resolved-footprint-to-dirty-path ratio; the new
`TestBinaryTrieCommitIncremental` provides a structural regression guard
(asserts that a Commit following a single-leaf modification flushes a
root-to-leaf path, not the whole tree).

---

Found all of this stuff while bloating my #34706 DB to make some
benchmarks. And saw we were spending A LOT OF TIME on hashing.
Hope this helps the perf a bit. Will rebase the flat-state PR on top of
this once merged.
2026-04-18 11:42:58 +02:00
Snehendu Roy
573d94013c
core/rawdb: fix incorrect fsync ordering for index file truncation (#34728)
2026-04-17 19:45:03 +08:00
Daniel Liu
4c03a0631e
cmd/evm: compare errors by value in timedExec instead of interface identity (#34733)
`timedExec` compares errors by direct interface inequality (haveErr !=
err). If execFunc returns newly constructed errors with the same message
each run, this will panic even though behavior is equivalent.
2026-04-17 19:43:53 +08:00
Edgar
89c1c16a46
cmd/geth: add code exporter for db export (#34696)
Adds a 'code' exporter to 'geth db export' that iterates over all
contract bytecode entries (CodePrefix + code_hash -> bytecode).

Usage: geth --datadir <dir> db export code code.rlp

This enables exporting contract bytecode.
2026-04-17 09:53:00 +08:00
Guillaume Ballet
ba215fd927
cmd, core, trie, triedb: split CachingDB into merkle + binary dbs. (#34700)
This PR implements some prerequisite changes for #34004: split the
`CachingDB` into a `MerkleDB` and a `UBTDB`, so that very different
behaviors don't clash as much.

The transition isn't handled by this PR, but after talking to Gary we
agreed that `UBTDB` should receive another `triedb`, which will only be
loaded if the `Ended` flag is set to false in the conversion contract.
If this is too hard to achieve, it makes sense to load it regardless,
and then loading can be prevented at a later stage by adding a
`UBTTransitionFinalizationTime` in `ChainConfig`.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-04-17 08:55:54 +08:00
Sina M
f63e9f3a80
eth/tracers: fix codehash in prestate diffmode (#34675)
Fixes https://github.com/ethereum/go-ethereum/issues/34648.
2026-04-16 16:51:26 +08:00
Daniel Liu
b1baab4427
cmd/evm: fix trace.noreturndata usage string (#34731)
`trace.noreturndata` is documented as "enable return data output" but
the flag name/value imply it disables return data. This is confusing for
users and likely inverted wording. Update the Usage string to reflect
the actual behavior (disable return data output).
2026-04-16 16:16:53 +08:00
Jonny Rhea
d07a946a5b
log: allow --vmodule to downgrade log levels (#33111)
Changes the log handler to check for vmodule level overrides
even for messages above the current level. This enables the user to selectively
hide messages from certain packages, among other things.

Also fixes a bug where handler instances created by WithAttr would not follow
the level setting anymore. The WithAttrs method is called by slog.Logger.With,
which we also use in go-ethereum to create context specific loggers with
pre-filled attributes. Under the previous implementation of WithAttrs, if the
application created a long-lived logger (for example, for a specific peer), then
that logger would not be affected by later level changes done on the top-level
logger, leading to potentially missed events.

Closes: #30717

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-16 08:53:08 +02:00
Rahman
0b35ad95f5
cmd/utils: fix witness stats auto-enable to respect config file (#34729)
Auto-enable logic for `StatelessSelfValidation` was reading CLI flag
directly via `ctx.Bool()`, bypassing the merged `cfg.EnableWitnessStats`
value. Now uses `cfg.EnableWitnessStats` so config file settings trigger
the same auto-enable behavior as CLI flags.
2026-04-16 11:17:01 +08:00
rjl493456442
ef0f1f96f9
core/state: ignore the root returned in Commit function for simplicity (#34723)
StateDB.Commit first commits all storage changes into the storage trie,
then updates the account metadata with the new storage root into the 
account trie.

Within StateDB.Commit, the new storage trie root has already been
computed and applied as the storage root. This PR explicitly skips the 
redundant storage trie root assignment for readability.
2026-04-15 11:15:43 +08:00
felipe
c9fea44616
eth/catalyst: respect slot num if specified in payload attributes for testing_buildBlockV1 (#34722)
This is a copy of #34721 but against `master` (rather than
`bal-devnet-3`), as requested by @jwasinger, since the slotnum logic now
exists on `master` as well.
2026-04-14 19:00:29 -04:00
cui
2253fce48d
core/types: remove redundant ')' (#34719)
2026-04-14 22:09:17 +08:00
rjl493456442
eb67d61933
cmd/geth, core/state, tests: rework EIP7610 check (#34718)
This PR simplifies the implementation of EIP-7610 by eliminating the
need to check storage emptiness during contract deployment.

EIP-7610 specifies that contract creation must be rejected if the
destination account has a non-zero nonce, non-empty runtime code, or 
**non-empty storage**.

After EIP-161, all newly deployed contracts are initialized with a nonce
of one. As a result, such accounts are no longer eligible as deployment 
targets unless they are explicitly cleared.

However, prior to EIP-161, contracts were initialized with a nonce of
zero. This made it possible to end up with accounts that have:

- zero nonce
- empty runtime code
- non-empty storage (created during constructor execution)
- non-zero balance

These edge-case accounts complicate the storage emptiness check.

In practice, contract addresses are derived using one of the following
formulas:
- `Keccak256(rlp({sender, nonce}))[12:]`
- `Keccak256([]byte{0xff}, sender, salt[:], initHash)[12:]`

As such, an existing address is not selected as a deployment target
unless a collision occurs, which is extremely unlikely.

---

Previously, verifying storage emptiness relied on GetStorageRoot.
However, with the transition to the block-based access list (BAL), 
the storage root is no longer available, as computing it would require 
reconstructing the full storage trie from all mutations of preceding 
transactions.

To address this, this PR introduces a simplified approach: it hardcodes
the set of known accounts that have zero nonce, empty runtime code, 
but non-empty storage and non-zero balance. During contract deployment, 
if the destination address belongs to this set, the deployment is
rejected.

This check is applied retroactively back to genesis. Since no address
collision events have occurred in Ethereum’s history, this change does
not alter existing behavior. Instead, it serves as a safeguard for
future state transitions.
2026-04-14 15:54:36 +02:00
cui
2414861d36
core/state: optimize transient storage (#33695)
Optimizes the transient storage. Turns it from a map of maps into a single map keyed by <account,slot>.
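The single-map layout described above can be sketched as follows. This is a minimal illustration of the idea, not the actual `core/state` types — the type and method names here are made up for the example:

```go
package main

import "fmt"

type addr [20]byte
type hash [32]byte

// transientKey combines account and slot into one comparable map key,
// replacing the nested map[address]map[slot]value layout.
type transientKey struct {
	acct addr
	slot hash
}

type transientStorage map[transientKey]hash

// Set stores a value; writing the zero hash removes the entry, since a
// missing entry already reads as zero.
func (t transientStorage) Set(a addr, slot, val hash) {
	if val == (hash{}) {
		delete(t, transientKey{a, slot})
		return
	}
	t[transientKey{a, slot}] = val
}

// Get returns the stored value, or the zero hash if unset.
func (t transientStorage) Get(a addr, slot hash) hash {
	return t[transientKey{a, slot}]
}

func main() {
	ts := make(transientStorage)
	var a addr
	var k, v hash
	k[0], v[0] = 1, 42
	ts.Set(a, k, v)
	fmt.Println(ts.Get(a, k) == v) // true
}
```

One map means one allocation path and one lookup per access, instead of allocating an inner map per touched account.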
2026-04-14 15:39:42 +02:00
jvn
c690d6041e
cmd/geth: add Prague pruning points for hoodi (#34714)
Adds the Prague pruning point configuration for the Hoodi testnet.
2026-04-14 14:58:27 +02:00
Guillaume Ballet
c2234462a8
.github/workflows: add freebsd test github action (#34078)
This is meant to be run daily, in order to verify the FreeBSD build
wasn't broken like last time.
2026-04-14 14:47:43 +02:00
Csaba Kiraly
c453b99a57
cmd/devp2p/internal/v5test: fix hive test for discv5 findnode results (#34043)
This fixes the remaining Hive discv5/FindnodeResults failures in the
cmd/devp2p/internal/v5test fixture.

The issue was in the simulator-side bystander behavior, not in
production discovery logic. The existing fixture could get bystanders
inserted into the remote table, but under current geth behavior they
were not stable enough to remain valid FINDNODE results. In
particular, the fixture still had a few protocol/behavior mismatches:

- incomplete WHOAREYOU recovery
- replies not consistently following the UDP envelope source
- incorrect endpoint echoing in PONG
- fixture-originated PING using the wrong ENR sequence
- bystanders answering background FINDNODE with empty NODES

That last point was important because current lookup accounting can
treat repeatedly unhelpful FINDNODE interactions as failures. As a
result, a bystander could become live via PING/PONG and still later be
dropped from the table before the final FindnodeResults assertion.
This change updates the fixture so that bystanders behave more like
stable discv5 peers:

- perform one explicit initial handshake, then switch to passive response handling
- resend the exact challenged packet when handling WHOAREYOU
- reply to the actual UDP packet source and mirror that source in PONG.ToIP / PONG.ToPort
- use the bystander’s own ENR sequence in fixture-originated PING
- prefill each bystander with the bystander ENR set and answer FINDNODE from that set

The result is that the fixture now forms a small self-consistent lookup
environment instead of a set of peers that are live but systematically
poor lookup participants.
2026-04-14 12:15:39 +02:00
Charles Dusek
e1fe4a1a98
p2p/discover: fix flaky TestUDPv5_findnodeHandling (#34109)
Fixes #34108

The UDPv5 test harness (`newUDPV5Test`) uses the default `PingInterval`
of 3 seconds. When tests like `TestUDPv5_findnodeHandling` insert nodes
into the routing table via `fillTable`, the table's revalidation loop
may schedule PING packets for those nodes. Under the race detector or on
slow CI runners, the test runs long enough for revalidation to fire,
causing background pings to be written to the test pipe. The `close()`
method then finds these as unmatched packets and fails.

The fix sets `PingInterval` to a very large value in the test harness so
revalidation never fires during tests.

Verified locally: 100 iterations with `-race -count=100` pass reliably,
where previously the test would fail within ~50 iterations.
2026-04-14 09:43:44 +02:00
Marius Kjærstad
01e33d14be
build: upgrade -dlgo version to Go 1.25.9 (#34707)
2026-04-13 16:32:40 +02:00
phrwlk
527ea11e50
core/vm/runtime: don't overwrite user input with default value (#33510)
runtime.setDefaults was unconditionally assigning cfg.Random =
&common.Hash{}, which silently overwrote any caller-provided Random
value. This made it impossible to simulate a specific PREVRANDAO and
also forced post-merge rules whenever London was active, regardless of
the intended environment.

This change only initializes cfg.Random when it is nil, matching how
other fields in Config are defaulted. Existing callers that did not set
Random keep the same behavior (a non-nil zero hash still enables
post-merge semantics), while callers that explicitly set Random now get
their value respected.
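The nil-guard pattern described above can be sketched like this. The `Config` here is a stripped-down stand-in for `runtime.Config`, keeping only the field relevant to the fix:

```go
package main

import "fmt"

type Hash [32]byte

// Config mirrors the relevant field of the runtime config; all other
// fields are elided for this sketch.
type Config struct {
	Random *Hash
}

// setDefaults only assigns a default when the caller left Random nil,
// so an explicitly provided PREVRANDAO value is respected.
func setDefaults(cfg *Config) {
	if cfg.Random == nil {
		cfg.Random = new(Hash) // non-nil zero hash keeps post-merge semantics
	}
}

func main() {
	want := Hash{0xaa}
	cfg := &Config{Random: &want}
	setDefaults(cfg)
	fmt.Println(*cfg.Random == want) // caller value preserved: true

	empty := &Config{}
	setDefaults(empty)
	fmt.Println(empty.Random != nil) // default applied: true
}
```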
2026-04-13 15:46:13 +02:00
Conor Patrick
4da1e29320
signer/core/apitypes: fix encoding of opening parenthesis (#33702)
This fixes a truncation bug that results in an invalid serialization of
empty EIP712.

For example:

```json
{
    "method": "eth_signTypedData_v4",
    "request": {
        "types": {
            "EIP712Domain": [
                {
                    "name": "version",
                    "type": "string"
                }
            ],
            "Empty": []
        },
        "primaryType": "Empty",
        "domain": {
            "version": "0"
        },
        "message": {}
    }
}
``` 

When calculating the type-hash for the struct-hash, it will incorrectly
use `Empty)` instead of `Empty()`
2026-04-13 15:30:36 +02:00
Vadim Tertilov
289826fefb
cmd/abigen/v2: add package-level errors (#34076)
# Summary

Replaces the inline `errors.New("event signature mismatch")` in
generated `UnpackXxxEvent` methods with per-event package-level sentinel
errors (e.g. `ErrTransferSignatureMismatch`,
`ErrApprovalSignatureMismatch`), allowing callers to reliably
distinguish a topic mismatch from a genuine decoding failure via
`errors.Is`.

Each event gets its own sentinel, generated via the abigen template:

```go
var ErrTransferSignatureMismatch = errors.New("event signature mismatch")
```

This scoping is intentional — it allows callers to be precise about
*which* event was mismatched, which is useful when routing logs across
multiple unpackers.

# Motivation
Previously, all errors returned from `UnpackXxxEvent` were
indistinguishable without string matching. This is especially
problematic when processing logs sourced from `eth_getBlockReceipts`,
where a caller receives the full set of logs for a block across all
contracts and event types. In that context, a signature mismatch is
expected and should be skipped, while any other error (malformed data,
topic parsing failure) indicates something is genuinely wrong and should
halt execution:

```go
for _, log := range blockLogs {
    event, err := contract.UnpackTransferEvent(log)
    if errors.Is(err, gen.ErrTransferSignatureMismatch) {
        continue // not our event, expected
    }
    if err != nil {
        return fmt.Errorf("unexpected decode failure: %w", err) // alert
    }
    // process event
}
```

**Changes:**
- `abigen` template: generates a `ErrXxxSignatureMismatch` sentinel per
event and returns it on topic mismatch instead of an inline error
- Existing generated bindings & testdata: regenerated to reflect the
update

Implements #34075
2026-04-13 14:42:34 +02:00
Daniel Liu
5b7511eeed
core/vm: include operand in error message (#34635)
Return ErrInvalidOpCode with the executing opcode and offending
immediate for forbidden DUPN, SWAPN, and EXCHANGE operands. Extend
TestEIP8024_Execution to assert both opcode and operand for all
invalid-immediate paths.
2026-04-13 14:13:33 +02:00
Daniel Liu
7d463aedd3
accounts/keystore: fix flaky TestUpdatedKeyfileContents (#34084)
TestUpdatedKeyfileContents was intermittently failing with:

- Emptying account file failed
- wasn't notified of new accounts

Root cause: waitForAccounts required the account list match and an
immediately readable ks.changes notification in the same instant,
creating a timing race between cache update visibility and channel
delivery.

This change keeps the same timeout window but waits until both
conditions are observed, which preserves test intent while removing the
flaky timing dependency.

Validation:
- go test ./accounts/keystore -run '^TestUpdatedKeyfileContents$'
-count=100
2026-04-13 14:10:56 +02:00
bigbear
f7f57d29d4
crypto/bn256: fix comment in MulXi (#34659)
The comment formula showed (i+3) but the code multiplies by 9
(Lsh by 3 gives ×8, then adding the original value gives ×9).
This was an error when porting from upstream golang.org/x/crypto/bn256,
where ξ = i+3. Go-ethereum changed the constant to ξ = i+9 but forgot to
update the inner formula.
2026-04-13 13:57:11 +02:00
Gaurav Dhiman
ecae519972
beacon/engine, miner: fix testing_buildBlockV1 response (#34704)
Two fixes for `testing_buildBlockV1`:

1. Add `omitempty` to `SlotNumber` in `ExecutableData` so it is omitted
for pre-Amsterdam payloads. The spec defines the response as
`ExecutionPayloadV3` which does not include `slotNumber`.

2. Pass `res.fees` instead of `new(big.Int)` in `BuildTestingPayload` so
`blockValue` reflects actual priority fees instead of always being zero.

Corresponding fixture update: ethereum/execution-apis#783
2026-04-13 13:45:35 +02:00
Guillaume Ballet
735bfd121a
trie/bintrie: spec change, big endian hashing of slot key (#34670)
The spec has been changed during SIC #49, the offset is encoded as a
big-endian number.
2026-04-13 09:42:37 +02:00
Marius van der Wijden
6333855163
core: turn gas into a vector <regularGas, stateGas> (#34691)
Pre-refactor PR to get 8037 upstreamed in chunks

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-04-13 14:09:42 +08:00
CPerezz
deda47f6a1
trie/bintrie: fix GetAccount/GetStorage non-membership — verify stem before returning values (#34690)
Fix `GetAccount` returning **wrong account data** for non-existent
addresses when the trie root is a `StemNode` (single-account trie) — the
`StemNode` branch returned `r.Values` without verifying the queried
address's stem matches.

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-04-10 19:43:48 +02:00
CPerezz
f71a884e37
trie/bintrie: fix DeleteAccount no-op (#34676)
`BinaryTrie.DeleteAccount` was a no-op, silently ignoring the caller's
deletion request and leaving the old `BasicData` and `CodeHash` in the
trie.

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-04-10 19:23:44 +02:00
cui
ea5448814f
core/filtermaps: remove dead condition check (#34695)
The check was already performed on line 40 above, making this condition dead code.
2026-04-10 17:41:59 +02:00
Guillaume Ballet
58557cb463
cmd/geth: add subcommand for offline binary tree conversion (#33740)
This tool is designed for the offline translation of an MPT database to
a binary trie. This is to be used for users who e.g. want to prove
equivalence of a binary tree chain shadowing the MPT chain.

It adds a `bintrie` command, cleanly separating the concerns.
2026-04-09 10:27:19 +02:00
CPerezz
3772bb536a
triedb/pathdb: fix lookup sentinel collision with zero disk layer root (#34680) 2026-04-09 13:39:38 +08:00
Sina M
68c7058a80
core/stateless: fix parsing an empty witness (#34683)
This is to fix a crasher in keeper.
2026-04-09 09:19:54 +08:00
Felföldi Zsolt
21b19362c2
core/state: fix tracer hook for EIP-7708 burn logs (#34688)
This PR fixes https://github.com/ethereum/go-ethereum/issues/34623 by
changing the `vm.StateDB` interface: 

Instead of `EmitLogsForBurnAccounts()` emitting burn logs, `LogsForBurnAccounts()
[]*types.Log` just returns these logs which are then emitted by the caller. 

This way when tracing is used, `hookedStateDB.AddLog` will be used 
automatically and there is no need to duplicate either the burn log
logic or the `OnLog` tracing hook.
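The caller-side emission can be sketched as below. Types here are stand-ins, not the real `vm.StateDB` interface; the point is that `AddLog` is invoked through the interface, so a hooked implementation fires its tracing hook without duplicated burn-log logic:

```go
package main

import "fmt"

type Log struct{ Index int }

// StateDB sketches the changed interface: LogsForBurnAccounts returns
// the burn logs instead of emitting them itself.
type StateDB interface {
	LogsForBurnAccounts() []*Log
	AddLog(*Log)
}

// finalizeBurns emits the returned burn logs through AddLog, so a
// hooked StateDB's OnLog tracing hook runs automatically.
func finalizeBurns(db StateDB) {
	for _, l := range db.LogsForBurnAccounts() {
		db.AddLog(l)
	}
}

type mockDB struct{ added []*Log }

func (m *mockDB) LogsForBurnAccounts() []*Log { return []*Log{{Index: 0}, {Index: 1}} }
func (m *mockDB) AddLog(l *Log)               { m.added = append(m.added, l) }

func main() {
	m := &mockDB{}
	finalizeBurns(m)
	fmt.Println(len(m.added)) // 2
}
```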
2026-04-09 09:12:35 +08:00
Mael Regnery
a8ea6319f1
eth/filters: return -32602 when exceeding the block range limit (#34647)
Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-08 12:57:29 +02:00
DELENE-TCHIO
04e40995d9
core: merge access events for all system calls (#34637)
ProcessBeaconBlockRoot (EIP-4788) and processRequestsSystemCall
(EIP-7002/7251) do not merge the EVM access events into the state after
execution. ProcessParentBlockHash (EIP-2935) already does this correctly
at lines 290-291.

Without this merge, the Verkle witness will be missing the storage
accesses from the beacon root and request system calls, leading to
incomplete witnesses and potential consensus issues when Verkle
activates.
2026-04-07 21:55:09 +02:00
locoholy
9878ef926d
ethclient: omit empty address/topics fields in RPC filter requests (#33884)
Changes JSON serialization of FilterCriteria to exclude "address" when it is empty.
2026-04-07 18:01:26 +02:00
cui
0bafb29490
core/types: add accessList to WithSeal and WithBody (#34651)
Co-authored-by: Felix Lange <fjl@twurst.com>
2026-04-07 22:04:07 +08:00
Diego López León
52b8c09fdf
triedb/pathdb: skip duplicate-root layer insertion (#34642)
PathDB keys diff layers by state root, not by block hash. That means a
side-chain block can legitimately collide with an existing canonical diff layer
when both blocks produce the same post-state (for example same parent, 
same coinbase, no txs).

Today `layerTree.add` blindly inserts that second layer. If the root
already exists, this overwrites `tree.layers[root]` and appends the same 
root to the mutation lookup again. Later account/storage lookups resolve 
that root to the wrong diff layer, which can corrupt reads for descendant 
canonical states.

At runtime, the corruption is silent: no error is logged and no invariant check
fires. State reads against affected descendants simply return stale data
from the wrong diff layer (for example, an account balance that reflects one
fewer block reward), which can propagate into RPC responses and block 
validation.

This change makes duplicate-root inserts idempotent. A second layer with
the same state root does not add any new retrievable state to a tree that is
already keyed by root; keeping the original layer preserves the existing parent 
chain and avoids polluting the lookup history with duplicate roots.

The regression test imports a canonical chain of two layers followed by
a fork layer at height 1 with the same state root but a different block hash. 
Before the fix, account and storage lookups at the head resolve the fork 
layer instead of the canonical one. After the fix, the duplicate insert is 
skipped and lookups remain correct.
2026-04-07 21:31:41 +08:00
rjl493456442
b5d322000c
eth/protocols/snap: fix block accessList encoding rule (#34644)
This PR refactors the encoding rules for `AccessListsPacket` in the wire
protocol. Specifically:

- The response is now encoded as a list of `rlp.RawValue`
- `rlp.EmptyString` is used as a placeholder for unavailable BAL objects
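The placeholder rule can be sketched as below. `rawValue` and `emptyString` stand in for `rlp.RawValue` and `rlp.EmptyString` (the canonical RLP empty string, the single byte 0x80); the surrounding lookup code is illustrative:

```go
package main

import "fmt"

// rawValue stands in for rlp.RawValue: already-encoded RLP bytes.
type rawValue []byte

// emptyString mimics rlp.EmptyString, the RLP encoding of "".
var emptyString = rawValue{0x80}

// buildResponse assembles the reply as a list of raw values, using the
// empty-string placeholder so positions stay aligned with the request.
func buildResponse(hashes []byte, lookup func(byte) rawValue) []rawValue {
	resp := make([]rawValue, 0, len(hashes))
	for _, h := range hashes {
		if bal := lookup(h); bal != nil {
			resp = append(resp, bal)
		} else {
			resp = append(resp, emptyString) // BAL unavailable
		}
	}
	return resp
}

func main() {
	lookup := func(h byte) rawValue {
		if h == 1 {
			return rawValue{0xc1, 0x01} // some encoded BAL
		}
		return nil
	}
	resp := buildResponse([]byte{1, 2}, lookup)
	fmt.Println(len(resp), resp[1][0] == 0x80) // 2 true
}
```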
2026-04-07 20:13:19 +08:00
Jonny Rhea
bd6530a1d4
triedb, triedb/internal, triedb/pathdb: add GenerateTrie + extract shared pipeline into triedb/internal (#34654)
This PR adds `GenerateTrie(db, scheme, root)` to the `triedb` package,
which rebuilds all tries from flat snapshot KV data. This is needed by
snap/2 sync so it can rebuild the trie after downloading the flat state.
The shared trie generation pipeline from `pathdb/verifier.go` was moved
into `triedb/internal/conversion.go` so both `GenerateTrie` and
`VerifyState` reuse the same code.
2026-04-07 14:36:53 +08:00
Martin HS
44257950f1
tests: enable execution of amsterdam statetests (#34671)
👋

This PR makes it possible to run "Amsterdam" in statetests. I'm aware
that they'll be failing and not in consensus with other clients yet,
but it's nice to be able to run the tests and see what works and what
doesn't.

Before the change: 
```
$ go run ./cmd/evm statetest ./amsterdam.json 
[
  {
    "name": "00000019-mixed-1",
    "pass": false,
    "fork": "Amsterdam",
    "error": "unexpected error: unsupported fork \"Amsterdam\""
  }
]
```
After
```
$ go run ./cmd/evm statetest ./amsterdam.json 
{"stateRoot": "0x25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e"}
[
  {
    "name": "00000019-mixed-1",
    "pass": false,
    "stateRoot": "0x25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e",
    "fork": "Amsterdam",
    "error": "post state root mismatch: got 25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e, want 0000000000000000000000000000000000000000000000000000000000000000"
  }
]
```
2026-04-07 14:13:25 +08:00
rjl493456442
d8cb8a962b
core, eth, ethclient, triedb: report trienode index progress (#34633)
The trienode history indexing progress is also exposed via an RPC 
endpoint and contributes to the eth_syncing status.
2026-04-04 21:00:07 +08:00
Jonny Rhea
a608ac94ec
eth/protocols/snap: restore Bytes soft limit to GetAccessListsPacket (#34649)
This PR adds the Bytes field back to `GetAccessListsPacket`.
2026-04-04 20:53:54 +08:00
Jonny Rhea
00da4f51ff
core, eth/protocols/snap: Snap/2 Protocol + BAL Serving (#34083)
Implement the snap/2 wire protocol with BAL serving

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-04-03 14:10:32 +08:00
rjl493456442
0ba4314321
core/state: introduce state iterator interface (#33102)
In this PR, the Database interface in `core/state` has been extended
with one more function:

```go
	// Iteratee returns a state iteratee associated with the specified state root,
	// through which the account iterator and storage iterator can be created.
	Iteratee(root common.Hash) (Iteratee, error)
```

With this additional abstraction layer, the implementation details can be hidden
behind the interface. For example, state traversal can now operate directly on 
the flat state for Verkle or binary trees, which do not natively support traversal.

Moreover, state dumping will now prefer using the flat state iterator as
the primary option, offering better efficiency.


Edit: this PR also fixes a tiny issue in the state dump, marshalling the
next field in the correct way.
2026-04-03 10:35:32 +08:00
cui
bcb0efd756
core/types: copy block access list hash in CopyHeader (#34636)
2026-04-02 20:40:45 +08:00
rjl493456442
db6c7d06a2
triedb/pathdb: implement history index pruner (#33999)
This PR implements the missing functionality for archive nodes by 
pruning stale index data.

The current mechanism is relatively simple but sufficient for now: 
it periodically iterates over index entries and deletes outdated data 
on a per-block basis. 

The pruning process is triggered every 90,000 new blocks (approximately 
every 12 days), and the iteration typically takes ~30 minutes on a 
mainnet node.

This mechanism is only applied with `gcmode=archive` enabled, and has
no impact on normal full nodes.
2026-04-02 00:21:58 +02:00
Daniel Liu
14a26d9ccc
eth/gasestimator: fix block overrides in estimate gas (#34081)
Block overrides were largely ignored by `gasestimator`. This PR
fixes that.
2026-04-01 20:32:17 +02:00
Felföldi Zsolt
fc43170cdd
beacon/light: keep retrying checkpoint init if failed (#33966)
This PR changes the blsync checkpoint init logic so that even if the
initialization fails with a certain server and an error log message is
printed, the server goes back to its initial state and is allowed to
retry initialization after the failure delay period. The previous logic
had an `ssDone` server state that put the server in a permanently
unusable state once the checkpoint init failed for an apparently
permanent reason. This was not the correct behavior because different
servers behave differently in case of overload and sometimes the
response to a permanently missing item is not clearly distinguishable
from an overload response. A safer logic is to never assume anything to
be permanent and always give a chance to retry.
The failure delay formula is also fixed; now it is properly capped at
`maxFailureDelay`. The previous formula allowed the delay to grow without
bound if a retry was attempted immediately after each delay period.
2026-04-01 16:05:57 +02:00
Chase Wright
92b4cb2663
eth/tracers/logger: conform structLog tracing to spec (#34093)
This is a breaking change in the opcode (structLog) tracer. Several fields
will have a slight formatting difference to conform to the newly established
spec at: https://github.com/ethereum/execution-apis/pull/762. The differences
include:

- `memory`: words will have the 0x prefix. Also last word of memory will be padded to 32-bytes.
- `storage`: keys and values will have the 0x prefix.

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
2026-03-31 16:02:40 +02:00
CPerezz
3da517e239
core/state: fix storage counters in binary trie IntermediateRoot (#34110)
Add missing `StorageUpdated` and `StorageDeleted` counter increments
in the binary trie fast path of `IntermediateRoot()`.
2026-03-31 15:47:07 +02:00
Jonny Rhea
dc3794e3dc
core/rawdb: BAL storage layer (#34064)
Add persistent storage for Block Access Lists (BALs) in `core/rawdb/`.
This provides read/write/delete accessors for BALs in the active
key-value store.

---------

Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-31 15:05:31 +08:00
Bosul Mun
965bd6b6a0
eth: implement EIP-7975 (eth/70 - partial block receipt lists) (#33153)
In this PR, we add support for protocol version eth/70, defined by EIP-7975.

Overall changes:

- Each response is buffered in the peer’s receipt buffer when the
`lastBlockIncomplete` field is true.
- A continued request uses the same request id as its original
  request (`RequestPartialReceipts`).
- Partial responses are verified in `validateLastBlockReceipt`.
- Even if all receipts for partial blocks of the request are collected,
  those partial results are not forwarded to the downloader, to avoid
  complexity. This assumes that partial responses and buffering occur
  only in exceptional cases.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2026-03-30 15:17:37 +02:00
rjl493456442
fe47c39903
version: start v1.17.3 release cycle (#34619) 2026-03-30 15:01:29 +02:00
rjl493456442
be4dc0c4be
version: release go-ethereum v1.17.2 stable (#34618) 2026-03-30 18:42:40 +08:00
Sina M
95705e8b7b
internal/ethapi: limit number of getProofs keys (#34617)
We can consider making this limit configurable if ever the need arose.
2026-03-30 16:01:30 +08:00
Sina M
ceabc39304
internal/ethapi: limit number of calls to eth_simulateV1 (#34616)
Later on we can consider making these limits configurable if the
use-case arose.
2026-03-30 16:01:12 +08:00
Charles Dusek
e585ad3b42
core/rawdb: fix freezer dir.Sync() failure on Windows (#34115) 2026-03-30 15:34:23 +08:00
Daniel Liu
d1369b69f5
core/txpool/legacypool: use types.Sender instead of signer.Sender (#34059)
`pool.signer.Sender(tx)` bypasses the sender cache used by types.Sender,
which can force an extra signature recovery for every promotable tx
(promotion runs frequently). Use `types.Sender(pool.signer, tx)` here to
keep sender derivation cached and consistent.
2026-03-28 11:46:09 +01:00
Guillaume Ballet
bd3c8431d9
build, cmd/keeper: add "womir" target (#34079)
This PR enables the block validation of keeper in the womir/openvm zkvm.

It also fixes some issues related to building the executables in CI.
Namely, it activates the build which was actually disabled, and also
resolves some resulting build conflicts by fixing the tags.

Co-authored-by: Leo <leo@powdrlabs.com>
2026-03-28 11:39:44 +01:00
Charles Dusek
a2496852e9
p2p/discover: resolve DNS hostnames for bootstrap nodes (#34101)
Fixes #31208
2026-03-28 11:37:39 +01:00
rjl493456442
c3467dd8b5
core, miner, trie: relocate witness stats (#34106)
This PR relocates the witness statistics into the witness itself, making
it more self-contained.
2026-03-27 17:06:46 +01:00
jwasinger
acdd139717
miner: set slot number when building test payload (#34094)
2026-03-27 09:45:49 +08:00
Daniel Liu
1b3b028d1d
miner: fix txFitsSize comment (#34100)
Rename the comment so it matches the helper name.
2026-03-27 09:41:56 +08:00
Lessa
8a3a309fa9
core/txpool/legacypool: remove redundant nil check in Get (#34092)
Leftover from d40a255 when return type changed from *txpool.Transaction
to *types.Transaction.
2026-03-26 14:02:31 +01:00
bigbear
5d0e18f775
core/tracing: fix NonceChangeAuthorization comment (#34085)
Comment referenced NonceChangeTransaction which doesn't exist, should be
NonceChangeAuthorization.
2026-03-25 09:16:09 +01:00
Andrew Davis
8f9061f937
cmd/utils: optimize history import with batched insertion (#33894)
Improve speed of import-history command by two orders of magnitude.

Rework ImportHistory to collect up to 2500 blocks per flush instead of
flushing after each block, reducing database commit overhead.
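The collect-then-flush structure can be sketched as below; the store type and names are illustrative, with only the 2500-blocks-per-flush figure taken from the description above:

```go
package main

import "fmt"

// store counts commits; committing once per batch amortizes the
// per-commit overhead that dominates one-block-at-a-time imports.
type store struct {
	commits int
	blocks  int
}

func (s *store) flushBatch(batch []int) {
	s.commits++
	s.blocks += len(batch)
}

// importBlocks collects up to batchSize blocks before each flush,
// mirroring the batched-insertion approach described above.
func importBlocks(s *store, blocks []int, batchSize int) {
	batch := make([]int, 0, batchSize)
	for _, b := range blocks {
		batch = append(batch, b)
		if len(batch) == batchSize {
			s.flushBatch(batch)
			batch = batch[:0]
		}
	}
	if len(batch) > 0 {
		s.flushBatch(batch) // flush the remainder
	}
}

func main() {
	s := &store{}
	importBlocks(s, make([]int, 6000), 2500)
	fmt.Println(s.commits, s.blocks) // 3 6000: three commits instead of 6000
}
```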

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-03-24 21:47:18 +01:00
Csaba Kiraly
e951bcbff7
cmd/devp2p: fix discv5 PingMultiIP test session key mismatch (#34031)
conn.read() used the actual UDP packet source address for
codec.Decode(), but conn.write() always used tc.remoteAddr. When the
remote node is reachable via multiple Docker networks, the packet source
IP differs from tc.remoteAddr, causing a session key lookup failure in
the codec.

Use tc.remoteAddr.String() consistently in conn.read() so the session
cache key matches what was used during Encode.
2026-03-24 07:57:11 -06:00
vickkkkkyy
745b0a8c09
cmd/utils: guard SampleRatio flag with IsSet check (#34062)
In `setOpenTelemetry`, all other fields (Enabled, Endpoint, AuthUser,
AuthPassword, InstanceID, Tags) are guarded by `ctx.IsSet()` checks, so
they only override the config file when explicitly set via CLI flags.
`SampleRatio` was the only field missing this guard, causing the flag
default (`1.0`) to always overwrite whatever was loaded from the config
file.

- Fix OpenTelemetry `SampleRatio` being unconditionally overwritten by
the CLI flag default value (`1.0`), even when the user did not pass
`--rpc.telemetry.sample-ratio`
- This caused config file values for `SampleRatio` to be silently
ignored
2026-03-23 13:14:28 -04:00
Felföldi Zsolt
b87340a856
core, core/vm: implement EIP-7708 (#33645)
This PR implements EIP-7708 according to the latest "rough consensus":

https://github.com/ethereum/EIPs/pull/9003
https://github.com/etan-status/EIPs/blob/fl-ethlogs/EIPS/eip-7708.md

---------

Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: raxhvl <raxhvl@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-23 22:29:53 +08:00
Daniel Liu
a61e5ccb1e
core, internal/ethapi: fix incorrect max-initcode RPC error mapping (#34067)
Problem:

The max-initcode sentinel moved from core to vm, but RPC pre-check
mapping still depended on core.ErrMaxInitCodeSizeExceeded. This mismatch
could surface inconsistent error mapping when oversized initcode is
submitted through JSON-RPC.

Solution:

- Remove core.ErrMaxInitCodeSizeExceeded from the core pre-check error
set.
- Map max-initcode validation errors in RPC from
vm.ErrMaxInitCodeSizeExceeded.
- Keep the RPC error code mapping unchanged (-38025).

Impact:

- Restores consistent max-initcode error mapping after the sentinel
move.
- Preserves existing JSON-RPC client expectations for error code -38025.
- No consensus, state, or protocol behavior changes.
2026-03-23 22:10:32 +08:00
Lessa
e23b0cbc22
core/rawdb: fix key length check for num->hash in db inspect (#34074)
Fix incorrect key length calculation for `numHashPairings` in
`InspectDatabase`, introduced in #34000.

The `headerHashKey` format is `headerPrefix + num + headerHashSuffix`
(10 bytes), but the check incorrectly included `common.HashLength`,
expecting 42 bytes.

This caused all number->hash entries to be misclassified as
unaccounted data.
2026-03-23 21:54:30 +08:00
Guillaume Ballet
305cd7b9eb
trie/bintrie: fix NodeIterator Empty node handling and expose tree accessors (#34056)
Fix three issues in the binary trie NodeIterator:

1. Empty nodes now properly backtrack to parent and continue iteration
instead of terminating the entire walk early.

2. `HashedNode` resolver handles `nil` data (all-zeros hash) gracefully
by treating it as Empty rather than panicking.

3. Parent update after node resolution guards against stack underflow
when resolving the root node itself.

---------

Co-authored-by: tellabg <249254436+tellabg@users.noreply.github.com>
2026-03-20 13:53:14 -04:00
CPerezz
77779d1098
core/state: bypass per-account updateTrie in IntermediateRoot for binary trie (#34022)
## Summary

In binary trie mode, `IntermediateRoot` calls `updateTrie()` once per
dirty account. But with the binary trie there is only one unified trie
(`OpenStorageTrie` returns `self`), so each call redundantly does
per-account trie setup: `getPrefetchedTrie`, `getTrie`, slice
allocations for deletions/used, and `prefetcher.used` — all for the same
trie pointer.

This PR replaces the per-account `updateTrie()` calls with a single flat
loop that applies all storage updates directly to `s.trie`. The MPT path
is unchanged. The prefetcher trie replacement is guarded to avoid
overwriting the binary trie that received updates.

This is the phase-1 counterpart to #34021 (H01). H01 fixes the commit
phase (`trie.Commit()` called N+1 times). This PR fixes the update phase
(`updateTrie()` called N times with redundant setup). Same root cause —
unified binary trie operated on per-account — different phases.

## Benchmark (Apple M4 Pro, 500K entries, `--benchtime=10s --count=3`,
on top of #34021)

| Metric | H01 baseline | H01 + this PR | Delta |
|--------|:------------:|:-------------:|:-----:|
| Approve (Mgas/s) | 368 | **414** | **+12.5%** |
| BalanceOf (Mgas/s) | 870 | 875 | +0.6% |

Should be rebased after #34021 is merged.
2026-03-20 15:40:04 +01:00
jvn
59ce2cb6a1
p2p: track in-progress inbound node IDs (#33198)
Avoid dialing a node while we have an inbound
connection request from them in progress.

Closes #33197
2026-03-20 05:52:15 +01:00
Felix Lange
35b91092c5
rlp: add Size method to EncoderBuffer (#34052)
The new method returns the size of the written data, excluding any
unfinished list structure.
2026-03-19 18:26:00 +01:00
jwasinger
fd859638bd
core/vm: rework gas measurement for call variants (#33648)
EIP-7928 brings state reads into consensus by recording accounts and storage accessed during execution in the block access list. As part of the spec, we need to check that there is enough gas available to cover the cost component which doesn't depend on looking up state. If this component can't be covered by the available gas, we exit immediately.

The portion of the call dynamic cost which doesn't depend on state look ups:

- EIP2929 call costs
- value transfer cost
- memory expansion cost

This PR:

- breaks up the "inner" gas calculation for each call variant into a pair of stateless/stateful cost methods
- modifies the gas calculation logic of calls to check stateless cost component first, and go out of gas immediately if it is not covered.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-19 10:02:49 -06:00
rjl493456442
a3083ff5d0
cmd: add support for enumerating a single storage trie (#34051)
2026-03-19 09:52:10 +01:00
Bosul Mun
4faadf17fb
rlp: add AppendList method to RawList (#34048)
This adds the AppendList method to merge two RawList instances by
appending the raw content.
2026-03-19 09:51:03 +01:00
vickkkkkyy
3341d8ace0
eth/filters: rangeLogs should error on invalid block range (#33763)
Fixes log filter to reject out of order block ranges.
2026-03-18 23:31:40 +01:00
haoyu-haoyu
b35645bdf7
build: fix missing '!' in shebang of generated oss-fuzz scripts (#34044)
`oss-fuzz.sh` line 38 writes `#/bin/sh` instead of `#!/bin/sh` as
the shebang of generated fuzz test runner scripts.

```diff
-#/bin/sh
+#!/bin/sh
```

Without the `!`, the kernel does not recognize the interpreter
directive.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 13:56:26 +01:00
Sina M
6ae3f9fa56
core/history: refactor pruning configuration (#34036)
This PR introduces a new type HistoryPolicy which captures user intent,
as opposed to the pruning point stored in the blockchain, which persists
the actual tail of data in the database.

It is in preparation for the rolling history expiry feature.

It comes with a semantic change: if the database was pruned and geth is
running without a history mode flag (or an explicit keep-all flag), geth
will emit a warning but continue running, as opposed to stopping the
world.
2026-03-18 13:54:29 +01:00
CPerezz
6138a11c39
trie/bintrie: parallelize InternalNode.Hash at shallow tree depths (#34032)
## Summary

At tree depths below `log2(NumCPU)` (clamped to [2, 8]), hash the left
subtree in a goroutine while hashing the right subtree inline. This
exploits available CPU cores for the top levels of the tree where
subtree hashing is most expensive. On single-core machines, the parallel
path is disabled entirely.

Deeper nodes use sequential hashing with the existing `sync.Pool` hasher
where goroutine overhead would exceed the hash computation cost. The
parallel path uses `sha256.Sum256` with a stack-allocated buffer to
avoid pool contention across goroutines.

**Safety:**
- Left/right subtrees are disjoint — no shared mutable state
- `sync.WaitGroup` provides happens-before guarantee for the result
- `defer wg.Done()` + `recover()` prevents goroutine panics from
crashing the process
- `!bt.mustRecompute` early return means clean nodes never enter the
parallel path
- Hash results are deterministic regardless of computation order — no
consensus risk

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s
--count=3`, post-H01 baseline)

| Metric | Baseline | Parallel | Delta |
|--------|----------|----------|-------|
| Approve (Mgas/s) | 224.5 ± 7.1 | **259.6 ± 2.4** | **+15.6%** |
| BalanceOf (Mgas/s) | 982.9 ± 5.1 | 954.3 ± 10.8 | -2.9% (noise, clean
nodes skip parallel path) |
| Allocs/op (approve) | ~810K | ~700K | -13.6% |
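The left-in-goroutine/right-inline scheme with the WaitGroup happens-before guarantee can be sketched on a toy tree; the node type and depth threshold below are illustrative, not the bintrie types:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// node is a toy binary-tree node.
type node struct {
	left, right *node
	value       []byte
}

// hash computes sha256(leftHash || rightHash). Above parallelDepth the
// left subtree is hashed in a goroutine while the right is hashed
// inline; deeper levels stay sequential, as in the PR.
func (n *node) hash(depth, parallelDepth int) [32]byte {
	if n == nil {
		return [32]byte{} // empty subtree: all-zeros hash
	}
	if n.left == nil && n.right == nil {
		return sha256.Sum256(n.value)
	}
	var lh, rh [32]byte
	if depth < parallelDepth {
		var wg sync.WaitGroup
		wg.Add(1)
		go func() {
			defer wg.Done()
			lh = n.left.hash(depth+1, parallelDepth)
		}()
		rh = n.right.hash(depth+1, parallelDepth)
		wg.Wait() // happens-before: lh is visible after Wait returns
	} else {
		lh = n.left.hash(depth+1, parallelDepth)
		rh = n.right.hash(depth+1, parallelDepth)
	}
	return sha256.Sum256(append(lh[:], rh[:]...))
}

func main() {
	root := &node{
		left:  &node{value: []byte("a")},
		right: &node{value: []byte("b")},
	}
	seq := root.hash(0, 0)  // fully sequential
	par := root.hash(0, 2)  // parallel top levels
	fmt.Println(seq == par) // deterministic regardless of scheduling
}
```

The disjointness of the two subtrees is what makes the goroutine safe: neither closure touches the other's results before `wg.Wait`.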
2026-03-18 13:54:23 +01:00
Mayveskii
b6115e9a30
core: fix txLookupLock mutex leak on error returns in reorg() (#34039)
2026-03-18 15:43:24 +08:00
felipe
ab357151da
cmd/evm: don't strip prefixes on requests over t8n (#33997)
Found this bug while implementing the Amsterdam t8n changes for
benchmark test filling in EELS.

Prefixes were incorrectly being stripped on requests over t8n and this
was leading to `fill` correctly catching hash mismatches on the EELS
side for some BAL tests. Though this was caught there, I think this
change might as well be cherry-picked there instead and merged to
`master`.

This PR brings this behavior to parity with EELS for Osaka filling.
There are still some quirks with regards to invalid block tests but I
did not investigate this further.
2026-03-17 16:07:28 +01:00
rjl493456442
9b2ce121dc
triedb/pathdb: enhance history index initer (#33640)
This PR improves the pbss archive mode. Initial sync of an archive
node with the --gcmode archive flag enabled will be significantly
sped up.

It achieves that with the following changes:

The indexer now attempts to process histories in batches whenever
possible. Batch indexing is enforced when the node is still syncing and
the local chain head is behind the network chain head.

In this scenario, instead of scheduling indexing frequently alongside
block insertion, the indexer waits until a sufficient amount of history
has accumulated and then processes it in a batch, which is significantly
more efficient.

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 15:29:30 +01:00
Sina M
fc1b0c0b83
internal/ethapi: warn on reaching global gas cap for eth_simulateV1 (#34016)
Warn the user when the gas limit of a tx is capped because the RPC
server's gas cap has been reached.
2026-03-17 13:52:04 +01:00
CPerezz
519a450c43
core/state: skip redundant trie Commit for Verkle in stateObject.commit (#34021)
## Summary

**Bug fix.** In Verkle mode, all state objects share a single unified
trie (`OpenStorageTrie` returns `self`). During `stateDB.commit()`, the
main account trie is committed via `s.trie.Commit(true)`, which calls
`CollectNodes` to traverse and serialize the entire tree. However, each
dirty account's `obj.commit()` also calls `s.trie.Commit(false)` on the
**same trie object**, redundantly traversing and serializing the full
tree once per dirty account.

With N dirty accounts per block, this causes **N+1 full-tree
traversals** instead of 1. On a write-heavy workload (2250 SSTOREs),
this produces ~131 GB of allocations per block from duplicate NodeSet
creation and serialization. It also causes a latent data race from N+1
goroutines concurrently calling `CollectNodes` on shared `InternalNode`
objects.

This commit adds an `IsVerkle()` early return in `stateObject.commit()`
to skip the redundant `trie.Commit()` call.

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s
--count=3`)

| Metric | Baseline | Fixed | Delta |
|--------|----------|-------|-------|
| Approve (Mgas/s) | 4.16 ± 0.37 | **220.2 ± 10.1** | **+5190%** |
| BalanceOf (Mgas/s) | 966.2 ± 8.1 | 971.0 ± 3.0 | +0.5% |
| Allocs/op (approve) | 136.4M | 792K | **-99.4%** |

Resolves the TODO in statedb.go about the account trie commit being
"very heavy" and "something's wonky".

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-17 12:27:29 +01:00
CPerezz
4b915af2c3
core/state: avoid Bytes() allocation in flatReader hash computations (#34025)
## Summary

Replace `addr.Bytes()` and `key.Bytes()` with `addr[:]` and `key[:]` in
`flatReader`'s `Account` and `Storage` methods. The former allocates a
copy while the latter creates a zero-allocation slice header over the
existing backing array.

## Benchmark (AMD EPYC 48-core, 500K entries, screening
`--benchtime=1x`)

| Metric | Baseline | Slice syntax | Delta |
|--------|----------|--------------|-------|
| Approve (Mgas/s) | 4.13 | 4.22 | +2.2% |
| BalanceOf (Mgas/s) | 168.3 | 190.0 | **+12.9%** |
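The allocation difference can be demonstrated on a plain fixed-size array; the `address` type and `bytesCopy` helper below are illustrative stand-ins for `common.Address` and its `Bytes()` method:

```go
package main

import "fmt"

type address [20]byte

// bytesCopy mimics a Bytes()-style accessor: it allocates and returns
// a copy of the backing array.
func (a address) bytesCopy() []byte {
	b := make([]byte, len(a))
	copy(b, a[:])
	return b
}

func main() {
	var a address
	a[0] = 0x11

	copied := a.bytesCopy() // fresh allocation, detached from the array
	view := a[:]            // zero-alloc slice header over the array

	a[0] = 0x22
	fmt.Println(copied[0], view[0]) // 17 34: the copy is stale, the view aliases
}
```

The aliasing is also why the slice form is only safe when the callee does not retain or mutate the buffer, which holds for hash computations.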
2026-03-17 11:42:42 +01:00
Jonny Rhea
98b13f342f
miner: add OpenTelemetry spans for block building path (#33773)
Instruments the block building path with OpenTelemetry tracing spans.

- added spans in forkchoiceUpdated -> buildPayload -> background payload
loop -> generateWork iterations. Spans should look something like this:

```
jsonrpc.engine/forkchoiceUpdatedV3
|- rpc.runMethod
|  |- engine.forkchoiceUpdated
|     |- miner.buildPayload [payload.id, parent.hash, timestamp]
|        |- miner.generateWork [txs.count, gas.used, fees] (empty block)
|        |  |- miner.prepareWork
|        |  |- miner.FinalizeAndAssemble
|        |     |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|        |        |- consensus.beacon.Finalize
|        |        |- consensus.beacon.IntermediateRoot
|        |        |- consensus.beacon.NewBlock
|        |- miner.background [block.number, iterations.total, exit.reason, empty.delivered]
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- miner.generateWork [txs.count, gas.used, fees]
|           |     |- miner.prepareWork
|           |     |- miner.fillTransactions [pending.plain.count, pending.blob.count]
|           |     |  |- miner.commitTransactions.priority (if prio txs exist)
|           |     |  |  |- miner.commitTransactions
|           |     |  |     |- miner.commitTransaction (per tx)
|           |     |  |- miner.commitTransactions.normal (if normal txs exist)
|           |     |     |- miner.commitTransactions
|           |     |        |- miner.commitTransaction (per tx)
|           |     |- miner.FinalizeAndAssemble
|           |        |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|           |           |- consensus.beacon.Finalize
|           |           |- consensus.beacon.IntermediateRoot
|           |           |- consensus.beacon.NewBlock
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- ...
|           |- ...

```

- added simulated server spans in SimulatedBeacon.sealBlock so dev mode
(geth --dev) produces traces that mirror production Engine API calls
from a real consensus client.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-03-16 19:24:41 +01:00
rjl493456442
a7d09cc14f
core: fix code database initialization in stateless mode (#34011)
This PR fixes the statedb initialization, ensuring the data source is
bound with the stateless input.
2026-03-16 09:45:26 +01:00
Guillaume Ballet
77e7e5ad1a
go.mod, go.sum: update karalabe/hid to fix broken FreeBSD ports build (#34008)
cgo builds have been broken in FreeBSD ports because of the hid lib.
@enriquefynn has made a temporary patch, but the fix has been merged in
the master branch, so let's reflect that here.
2026-03-16 14:35:07 +08:00
vickkkkkyy
24025c2bd0
build: fix signify flag name in doWindowsInstaller (#34006)
The signify flag in `doWindowsInstaller` was defined as "signify key"
(with a space), making it impossible to pass via CLI (`-signify
<value>`). This meant the Windows installer signify signing was silently
never executed.

Fix by renaming the flag to "signify", consistent with `doArchive` and
`doKeeperArchive`.
2026-03-14 10:22:50 +01:00
jwasinger
ede376af8e
internal/ethapi: encode slotNumber as hex in RPCMarshalHeader (#34005)
The slotNumber field was being passed as a raw *uint64 to the JSON
marshaler, which serializes it as a plain decimal integer (e.g. 159).
All Ethereum JSON-RPC quantity fields must be hex-encoded per spec.

Wrap with hexutil.Uint64 to match the encoding of other numeric header
fields like blobGasUsed and excessBlobGas.
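The QUANTITY encoding that `hexutil.Uint64` produces can be sketched with the stdlib; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"strconv"
)

// encodeQuantity renders a uint64 as a JSON-RPC QUANTITY:
// 0x-prefixed, minimal-length hex, which is what hexutil.Uint64
// emits instead of a bare decimal integer.
func encodeQuantity(v uint64) string {
	return "0x" + strconv.FormatUint(v, 16)
}

func main() {
	fmt.Println(encodeQuantity(159)) // 0x9f, not the bare decimal 159
	fmt.Println(encodeQuantity(0))   // 0x0
}
```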

Co-authored-by: qu0b <stefan@starflinger.eu>
2026-03-13 17:09:32 +01:00
vickkkkkyy
189f9d0b17
eth/filters: check history pruning cutoff in GetFilterLogs (#33823)
Return proper error for the log filters going beyond pruning
point on a node with expired history.
2026-03-13 13:26:20 +01:00
Lessa
dba741fd31
console: fix autocomplete digit range to include 0 (#34003)
PR #26518 added digit support but used '1'-'9' instead of '0'-'9'. This
breaks autocomplete for identifiers containing 0 like account0.
2026-03-13 12:39:45 +01:00
Lee Gyumin
eaa9418ac1
core/rawdb: enforce exact key length for num->hash and td in db inspect (#34000)
This PR improves `db inspect` classification accuracy in
`core/rawdb/database.go` by tightening key-shape checks for:

- `Block number->hash`
- `Difficulties (deprecated)`

Previously, both categories used prefix/suffix heuristics and could
mis-bucket unrelated entries.
2026-03-13 09:45:14 +01:00
Guillaume Ballet
1c9ddee16f
trie/bintrie: use a sync.Pool when hashing binary tree nodes (#33989)
Binary tree hashing is quite slow, owing to many factors. One of them is
the GC pressure that is the consequence of allocating many hashers, as a
binary tree has 4x the size of an MPT. This PR introduces an
optimization that already exists for the MPT: keep a pool of hashers, in
order to reduce the amount of allocations.
2026-03-12 10:20:12 +01:00
jvn
95b9a2ed77
core: Implement eip-7954 increase Maximum Contract Size (#33832)
Implements EIP-7954: raises the maximum contract code size to 32KiB
and the initcode size to 64KiB, following https://eips.ethereum.org/EIPS/eip-7954

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2026-03-12 10:23:49 +08:00
Copilot
de0a452f7d
eth/filters: fix race in pending tx and new heads subscriptions (#33990)
`TestSubscribePendingTxHashes` hangs indefinitely because pending tx
events are permanently missed due to a race condition in
`NewPendingTransactions` (and `NewHeads`). Both handlers called their
event subscription functions (`SubscribePendingTxs`,
`SubscribeNewHeads`) inside goroutines, so the RPC handler returned the
subscription ID to the client before the filter was installed in the
event loop. When the client then sent a transaction, the event fired but
no filter existed to catch it — the event was silently lost.

- Move `SubscribePendingTxs` and `SubscribeNewHeads` calls out of
goroutines so filters are installed synchronously before the RPC
response is sent, matching the pattern already used by `Logs` and
`TransactionReceipts`
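The subscribe-before-return pattern can be sketched with a toy event hub; the `broker` type and names below are illustrative, not the eth/filters API:

```go
package main

import "fmt"

// broker is a toy event hub standing in for the filter event loop.
type broker struct {
	subs []chan int
}

// Subscribe installs the subscription synchronously, so by the time
// the caller holds its channel (the "subscription ID"), no event can
// slip past. The buggy pattern ran this inside a goroutine, letting
// the caller's first event race ahead of the installation.
func (b *broker) Subscribe() chan int {
	ch := make(chan int, 1)
	b.subs = append(b.subs, ch)
	return ch
}

func (b *broker) Publish(v int) {
	for _, ch := range b.subs {
		ch <- v
	}
}

func main() {
	b := &broker{}
	sub := b.Subscribe() // installed before any event can fire
	b.Publish(42)
	fmt.Println(<-sub) // 42
}
```

Only the delivery loop belongs in a goroutine; the installation itself must complete before the RPC response is written.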

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: s1na <1591639+s1na@users.noreply.github.com>
2026-03-12 10:21:45 +08:00
rjl493456442
7d13acd030
core/rawdb, triedb/pathdb: enable trienode history alongside existing data (#33934)
Fixes https://github.com/ethereum/go-ethereum/issues/33907

Notably there is a behavioral change:
- Previously Geth will refuse to restart if the existing trienode
history is gapped with the state data
- With this PR, the gapped trienode history will be entirely reset and
being constructed from scratch
2026-03-12 09:21:54 +08:00
Guillaume Ballet
59512b1849
cmd/fetchpayload: add payload-building utility (#33919)
This PR adds a command-line tool, fetchpayload, which connects to a
node and gathers all the information needed to create a serialized
payload that can then be passed to the zkvm.
2026-03-11 16:18:42 +01:00
Sina M
3c20e08cba
cmd/geth: add Prague pruning points (#33657)
This PR allows users to prune their nodes up to the Prague fork. It
indirectly depends on #32157 and can't really be merged before eraE
files are widely available for download.

The `--history.chain` flag becomes mandatory for `prune-history`
command. Here I've listed all the edge cases that can happen and how we
behave:

## prune-history Behavior

| From        | To           | Result                   |
|-------------|--------------|--------------------------|
| full        | postmerge    |  prunes                |
| full        | postprague   |  prunes                |
| postmerge   | postprague   |  prunes further        |
| postprague  | postmerge    |  can't unprune         |
| any         | all          |  use import-history    |


## Node Startup Behavior

| DB State | Flag | Result |

|-------------|--------------|----------------------------------------------------------------|
| fresh | postprague |  syncs from Prague |
| full | postprague |  "run prune-history first" |
| postmerge | postprague |  "run prune-history first" |
| postprague | postmerge |  "can't unprune, use import-history or fix
flag" |
| pruned | all |  accepts known prune points |
2026-03-11 12:47:42 +01:00
bigbear
88f8549d37
cmd/geth: correct misleading flag description in removedb command (#33984)
The `--remove.chain` flag incorrectly described itself as selecting
"state data" for removal, which could mislead operators into removing
the wrong data category. This corrects the description to accurately
reflect that the flag targets chain data (block bodies and receipts).
2026-03-11 16:33:10 +08:00
jwasinger
32f05d68a2
core: end telemetry span for ApplyTransactionWithEVM if error is returned (#33955) 2026-03-11 14:41:43 +08:00
georgehao
f6068e3fb2
eth/tracers: fix accessList StorageKeys return null (#33976)
2026-03-11 11:46:49 +08:00
rjl493456442
27c4ca9df0
eth: resolve finalized from disk if it's not recently announced (#33150)
This PR contains two changes:

Firstly, the finalized header will be resolved from local chain if it's
not recently announced via the `engine_newPayload`. 

More importantly, in the downloader there were originally two
code paths to push forward the pivot block: one in the beacon
header fetcher (`fetchHeaders`), and another one in the snap content
processor (`processSnapSyncContent`).

Usually, if there are new blocks and the local pivot block becomes stale, it
will first be detected by `fetchHeaders`. `processSnapSyncContent`
is fully driven by the beacon headers and will only detect the stale pivot
block after synchronizing the corresponding chain segment. I think the
detection here is redundant and useless.
2026-03-11 11:23:00 +08:00
Sina M
aa417b03a6
core/tracing: fix nonce revert edge case (#33978)
We got a report for a bug in the tracing journal which has the
responsibility to emit events for all state that must be reverted.

The edge case is as follows: on CREATE operations the nonce is
incremented. When a create frame reverts, the nonce increment associated
with it does **not** revert. This works fine on master. Now one step
further: if the parent frame reverts though, the nonce **should** revert,
and there is the bug.
2026-03-10 16:53:21 +01:00
rjl493456442
91cec92bf3
core, miner, tests: introduce codedb and simplify cachingDB (#33816)
2026-03-10 08:29:21 +01:00
rjl493456442
b8a3fa7d06
cmd/utils, eth/ethconfig: change default cache settings (#33975)
This PR fixes a regression introduced in https://github.com/ethereum/go-ethereum/pull/33836/changes

Before PR 33836, running mainnet would automatically bump the cache size
to 4GB and trigger a cache re-calculation, specifically setting the key-value 
database cache to 2GB.
 
After PR 33836, this logic was removed, and the cache value is no longer
recomputed if no command line flags are specified. The default key-value 
database cache is 512MB.

This PR bumps the default key-value database cache size alongside the
default cache size for other components (such as snapshot) accordingly.
2026-03-09 23:18:18 +08:00
Muzry
b08aac1dbc
eth/catalyst: allow getPayloadV2 for pre-shanghai payloads (#33932)
I observed failing tests in Hive `engine-withdrawals`:

-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=146
-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=147

```shell
  DEBUG (Withdrawals Fork on Block 2): NextPayloadID before getPayloadV2:
  id=0x01487547e54e8abe version=1
  >> engine_getPayloadV2("0x01487547e54e8abe")
  << error: {"code":-38005,"message":"Unsupported fork"}
  FAIL: Expected no error on EngineGetPayloadV2: error=Unsupported fork
```
 
The same failure pattern occurred for Block 3.

Per Shanghai engine_getPayloadV2 spec, pre-Shanghai payloads should be
accepted via V2 and returned as ExecutionPayloadV1:
- executionPayload: ExecutionPayloadV1 | ExecutionPayloadV2
- ExecutionPayloadV1 MUST be returned if payload timestamp < Shanghai
timestamp
- ExecutionPayloadV2 MUST be returned if payload timestamp >= Shanghai
timestamp

Reference:
-
https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#engine_getpayloadv2

Current implementation only allows GetPayloadV2 on the Shanghai fork
window (`[]forks.Fork{forks.Shanghai}`), so pre-Shanghai payloads are
rejected with Unsupported fork.

If my interpretation of the spec is incorrect, please let me know and I
can adjust accordingly.

---------

Co-authored-by: muzry.li <muzry.li1@ambergroup.io>
2026-03-09 11:22:58 +01:00
Marius van der Wijden
00540f9469
go.mod: update go-eth-kzg (#33963)
Updates go-eth-kzg to
https://github.com/crate-crypto/go-eth-kzg/releases/tag/v1.5.0
Significantly reduces the allocations in VerifyCellProofBatch which is
around ~5% of all allocations on my node

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-08 11:44:29 +01:00
cui
e15d4ccc01
core/types: reduce alloc in hot code path (#33523)
Reduce allocations in calculation of tx cost.

---------

Co-authored-by: weixie.cui <weixie.cui@okg.com>
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
2026-03-07 14:31:36 +01:00
marukai67
0d043d071e
signer/core: prevent nil pointer panics in keystore operations (#33829)
Add nil checks to prevent potential panics when keystore backend is
unavailable in the Clef signer API.
2026-03-06 21:50:30 +01:00
Guillaume Ballet
ecee64ecdc
core: fix TestProcessVerkle flaky test (#33971)
`GenerateChain` commits trie nodes asynchronously, and it can happen
that some nodes aren't making it to the db in time for `GenerateChain`
to open it and find the data it is looking for.
2026-03-06 19:03:05 +01:00
Guillaume Ballet
3f1871524f
trie/bintrie: cache hashes of clean nodes so as not to rehash the whole tree (#33961)
This is an optimization that existed for verkle and the MPT, but that
got dropped during the rebase.

Mark the nodes that were modified as needing recomputation, and skip the
hash computation if this is not needed. Otherwise, the whole tree is
hashed, which kills performance.
2026-03-06 18:06:24 +01:00
Guillaume Ballet
a0fb8102fe
trie/bintrie: fix overflow management in slot key computation (#33951)
The computation of `MAIN_STORAGE_OFFSET` was incorrect, causing the last
byte of the stem to be dropped. This means that there would be a
collision in the hash computation (at the preimage level, not a hash
collision of course) if two keys were only differing at byte 31.
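The failure mode is easy to show in isolation — a sketch of what dropping the last stem byte does (function name is illustrative, not the bintrie code):

```go
package main

import "fmt"

// buggyStem models the overflow bug: the 32-byte key is truncated to 31
// bytes, silently discarding byte 31. Illustrative only.
func buggyStem(key [32]byte) [31]byte {
	var s [31]byte
	copy(s[:], key[:31])
	return s
}

func main() {
	var a, b [32]byte
	b[31] = 0xff // a and b differ only at byte 31
	// Both map to the same preimage, so hashing them collides.
	fmt.Println(buggyStem(a) == buggyStem(b))
}
```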
2026-03-05 14:43:31 +01:00
Bosul Mun
344ce84a43
eth/fetcher: fix flaky test by improving event unsubscription (#33950)
Eth currently has a flaky test, related to the tx fetcher.

The issue seems to happen when Unsubscribe is called while sub is nil.
It seems that chain.Stop() may be invoked before the loop starts in some
tests, but the exact cause is still under investigation through repeated
runs. I think this change will at least prevent the error.
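The guard itself is small — a minimal sketch (hypothetical types, not the fetcher test's actual ones) of tolerating Stop before the subscription exists:

```go
package main

import (
	"fmt"
	"sync"
)

type subscription struct{ closed bool }

func (s *subscription) Unsubscribe() { s.closed = true }

type chain struct {
	mu  sync.Mutex
	sub *subscription
}

// Stop may run before the event loop ever subscribed, so guard the nil
// before calling Unsubscribe.
func (c *chain) Stop() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.sub != nil {
		c.sub.Unsubscribe()
		c.sub = nil
	}
}

func main() {
	var c chain
	c.Stop() // no subscription yet: must not panic
	fmt.Println("ok")
}
```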
2026-03-05 11:48:44 +08:00
Sina M
ce64ab44ed
internal/ethapi: fix gas cap for eth_simulateV1 (#33952)
Fixes a regression in #33593 where a block gas limit > gasCap resulted
in more execution than the gas cap.
2026-03-05 09:09:07 +08:00
J
fc8c10476d
internal/ethapi: add MaxUsedGas field to eth_simulateV1 response (#32789)
closes #32741
2026-03-04 12:37:47 +01:00
Jonny Rhea
402c71f2e2
internal/telemetry: fix undersized span queue causing dropped spans (#33927)
The BatchSpanProcessor queue size was incorrectly set to
DefaultMaxExportBatchSize (512) instead of DefaultMaxQueueSize (2048).

I noticed the issue on bloatnet when analyzing the block building
traces. During a particular run, the miner was including 1000
transactions in a single block. When telemetry is enabled, the miner
creates a span for each transaction added to the block. With the queue
capped at 512, spans were silently dropped when production outpaced the
span export, resulting in incomplete traces with orphaned spans. While
this doesn't eliminate the possibility of drops under extreme
load, using the correct default restores the 4x buffer between queue
capacity and export batch size that the SDK was designed around.
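The drop behavior boils down to a non-blocking enqueue on a bounded queue — a stdlib sketch (not the otel SDK's code) of how an undersized buffer silently loses items during a burst:

```go
package main

import "fmt"

// enqueue is a non-blocking send: when the buffered channel (the queue)
// is full, the item is silently dropped — the same failure mode as a
// batch processor whose producers outpace its exporter.
func enqueue(q chan int, v int) bool {
	select {
	case q <- v:
		return true
	default:
		return false
	}
}

// produce simulates a worst-case burst with nothing draining the queue.
func produce(queueSize, spans int) (dropped int) {
	q := make(chan int, queueSize)
	for i := 0; i < spans; i++ {
		if !enqueue(q, i) {
			dropped++
		}
	}
	return
}

func main() {
	fmt.Println(produce(512, 1000))  // undersized queue: 488 dropped
	fmt.Println(produce(2048, 1000)) // correct default: 0 dropped
}
```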
2026-03-04 11:47:10 +01:00
Jonny Rhea
28dad943f6
cmd/geth: set default cache to 4096 (#33836)
Mainnet was already overriding --cache to 4096. This PR just makes this
the default.
2026-03-04 11:21:11 +01:00
Marius van der Wijden
6d0dd08860
core: implement eip-7778: block gas accounting without refunds (#33593)
Implements https://eips.ethereum.org/EIPS/eip-7778

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-04 18:18:18 +08:00
rjl493456442
dd202d4283
core, ethdb, triedb: add batch close (#33708)
Pebble maintains a batch pool to recycle batch objects. Unfortunately, a
batch object must be explicitly returned via the `batch.Close` function.
This PR extends the batch interface by adding the close function and also
invokes batch.Close in some critical code paths.

Memory allocation must be measured before merging this change. What's
more, it's an open question whether we should apply batch.Close as much
as possible in every invocation.
2026-03-04 11:17:47 +01:00
Jonny Rhea
814edc5308
core/vm: Switch to branchless normalization and extend EXCHANGE (#33869)
For bal-devnet-3 we need to update the EIP-8024 implementation to the
latest spec changes: https://github.com/ethereum/EIPs/pull/11306

> Note: I deleted tests not specified in the EIP because maintaining them
through EIP changes is too error prone.
2026-03-04 10:34:27 +01:00
rjl493456442
6d99759f01
cmd, core, eth, tests: prevent state flushing in RPC (#33931)
Fixes https://github.com/ethereum/go-ethereum/issues/33572
2026-03-04 14:40:45 +08:00
DeFi Junkie
fe3a74e610
core/vm: use amsterdam jump table in lookup (#33947)
Return the Amsterdam instruction set from `LookupInstructionSet` when
`IsAmsterdam` is true, so Amsterdam rules no longer fall through to the
Osaka jump table.

---------

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2026-03-04 13:42:25 +08:00
Jonny Rhea
4f75049ea0
miner: avoid unnecessary work after payload resolution (#33943)
In `buildPayload()`, the background goroutine uses a `select` to wait on
the recommit timer, the stop channel, and the end timer. When both
`timer.C` and `payload.stop` are ready simultaneously, Go's `select`
picks a case non-deterministically. This means the loop can enter the
`timer.C` case and perform an unnecessary `generateWork` call even after
the payload has been resolved.

Add a non-blocking check of `payload.stop` at the top of the `timer.C`
case to exit immediately when the payload has already been delivered.
2026-03-04 11:58:51 +08:00
Jonny Rhea
773f71bb9e
miner: enable trie prefetcher in block builder (#33945)
2026-03-04 10:17:07 +08:00
Jonny Rhea
856e4d55d8
go.mod: bump go.opentelemetry.io/otel/sdk from 1.39.0 to 1.40.0 (#33946)
https://github.com/ethereum/go-ethereum/pull/33916 + cmd/keeper go mod
tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 22:28:09 +01:00
Felix Lange
db7d3a4e0e version: begin v1.17.2 release cycle
2026-03-03 13:49:09 +01:00
Felix Lange
16783c167c version: release go-ethereum v1.17.1 stable 2026-03-03 13:41:41 +01:00
Felix Lange
9962e2c9f3
p2p/tracker: fix crash in clean when tracker is stopped (#33940) 2026-03-03 12:54:24 +01:00
Sina M
d318e8eba9
node: disable http2 for auth API (#33922)
We got a report that after v1.17.0 a geth-teku node starts to time out
on engine_getBlobsV2 after around 3h of operation. The culprit seems to
be our optional http2 service which Teku attempts first. The exact cause
of the timeout is still unclear.

This PR is more of a workaround than a proper fix until we figure out the
underlying issue. But I don't expect http2 to particularly benefit
engine API throughput and latency. Hence it should be fine to disable it
for now.
2026-03-03 00:02:44 +01:00
vickkkkkyy
b25080cac0
miner: account for generateWork elapsed time in payload rebuild timer (#33908)
The payload rebuild loop resets the timer with the full Recommit
duration after generateWork returns, making the actual interval
generateWork_elapsed + Recommit instead of Recommit alone.

Since fillTransactions uses Recommit (2s) as its timeout ceiling, the
effective rebuild interval can reach ~4s under heavy blob workloads —
only 1–2 rebuilds in a 6s half-slot window instead of the intended 3.

Fix by subtracting elapsed time from the timer reset.

### Before this fix

```
t=0s  timer fires, generateWork starts
t=2s  fillTransactions times out, timer.Reset(2s)
t=4s  second rebuild starts
t=6s  CL calls getPayload — gets the t=2s result (1 effective rebuild)
```

### After

```
t=0s  timer fires, generateWork starts
t=2s  fillTransactions times out, timer.Reset(2s - 2s = 0)
t=2s  second rebuild starts immediately
t=4s  timer.Reset(0), third rebuild starts
t=6s  CL calls getPayload — gets the t=4s result (3 effective rebuilds)
```
2026-03-03 00:01:55 +01:00
Csaba Kiraly
48cfc97776
core/txpool/blobpool: delay announcement of low fee txs (#33893)
This PR introduces a threshold (relative to current market base fees),
below which we suppress the diffusion of low fee transactions. Once base
fees go down, and if the transactions were not evicted in the meantime,
we release these transactions.

The PR also updates the bucketing logic to be more sensitive, removing
the extra logarithm. Blobpool description is also
updated to reflect the new behavior.

EIP-7918 changed the maximum blob fee decrease that can happen in a
slot. The PR also updates fee jump calculation to reflect this.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
2026-03-02 23:59:33 +01:00
Csaba Kiraly
1eead2ec33
core/types: fix transaction pool price-heap comparison (#33923)
Fixes priceheap comparison in some edge cases.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
2026-03-02 23:42:39 +01:00
jwasinger
2726c9ef9e
core/vm: enable 8024 instructions in Amsterdam (#33928) 2026-03-02 17:01:06 -05:00
Guillaume Ballet
5695fbc156
.github: set @gballet as codeowner for keeper (#33920)
2026-03-02 06:43:21 -07:00
Guillaume Ballet
825436f043
AGENTS.md: add instruction not to commit binaries (#33921)
I noticed that some autonomous agents have a tendency to commit binaries
if asked to create a PR.
2026-03-02 06:42:38 -07:00
Bosul Mun
723aae2b4e
eth/protocols/eth: drop protocol version eth/68 (#33511)
With this, we are dropping support for protocol version eth/68. The only supported
version is eth/69 now. The p2p receipt encoding logic can be simplified a lot, and
processing of receipts during sync gets a little faster because we now transform
the network encoding into the database encoding directly, without decoding the
receipts first.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-28 21:43:40 +01:00
Delweng
cee751a1ed
eth: fix the flaky test of TestSnapSyncDisabling68 (#33896)
fix the flaky test found in
https://ci.appveyor.com/project/ethereum/go-ethereum/builds/53601688/job/af5ccvufpm9usq39

1. increase the timeout from 3+1s to 15s, and use a timer instead of
sleep (in the CI env, it may need more time to sync the 1024 blocks)
2. add `synced.Load()` to ensure the full async chain is finished

Signed-off-by: Delweng <delweng@gmail.com>
2026-02-27 12:51:01 +01:00
Guillaume Ballet
95c6b05806
trie/bintrie: fix endianness in code chunk key computation (#33900)
The endianness was wrong, which means that the code chunks were stored
in the wrong location in the tree.
2026-02-27 11:35:13 +01:00
Felix Lange
7793e00f0d
Dockerfile: upgrade to Go 1.26 (#33899)
We didn't upgrade to 1.25, so this jumps over one version. I want to
upgrade all builds to Go 1.26 soon, but let's start with the Docker
build to get a sense of any possible issues.
2026-02-26 21:18:00 +01:00
Marius van der Wijden
1b1133d669
go.mod: update ckzg (#33901) 2026-02-26 20:04:25 +01:00
rjl493456442
be92f5487e
trie: error out for unexpected key-value pairs preceding the range (#33898)
2026-02-26 23:00:02 +08:00
Sina M
8a4345611d
build: update ubuntu distros list (#33864)
The `plucky` and `oracular` releases have reached end of life. That's why
launchpad isn't building them anymore:
https://launchpad.net/~ethereum/+archive/ubuntu/ethereum/+packages.
2026-02-26 13:55:53 +01:00
Marius van der Wijden
f811bfe4fd
core/vm: implement eip-7843: SLOTNUM (#33589)
Implements the slotnum opcode as specified here:
https://eips.ethereum.org/EIPS/eip-7843
2026-02-26 13:53:46 +01:00
Guillaume Ballet
406a852ec8
AGENTS.md: add AGENTS.md (#33890)
Co-authored-by: tellabg <249254436+tellabg@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
2026-02-24 23:08:23 -07:00
ANtutov
2a45272408
eth/protocols/eth: fix handshake timeout metrics classification (#33539)
Previously, handshake timeouts were recorded as generic peer errors
instead of timeout errors. waitForHandshake passed a raw
p2p.DiscReadTimeout into markError, but markError classified errors only
via errors.Unwrap(err), which returns nil for non-wrapped errors. As a
result, the timeoutError meter was never incremented and all such
failures fell into the peerError bucket.

This change makes markError switch on the base error, using
errors.Unwrap(err) when available and falling back to the original error
otherwise. With this adjustment, p2p.DiscReadTimeout is correctly mapped
to timeoutError, while existing behaviour for the other wrapped sentinel
errors remains unchanged.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
2026-02-24 21:50:26 -07:00
Fynn
8450e40798
cmd/geth: add inspect trie tool to analysis trie storage (#28892)
This PR adds a tool named `inspect-trie`, aimed at analyzing the MPT and
its node storage more efficiently.

## Example
 ./geth db inspect-trie --datadir server/data-seed/ latest 4000

## Result

- MPT shape
- Account Trie 
- Top N Storage Trie
```
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |      76      |     32      |      74      |
|   -   |   3   |      66      |      1      |      66      |
|   -   |   4   |      2       |      0      |      2       |
| Total |  144  |      50      |     142     |
+-------+-------+--------------+-------------+--------------+
AccountTrie
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |     108      |     84      |     104      |
|   -   |   3   |     195      |      5      |     195      |
|   -   |   4   |      10      |      0      |      10      |
| Total |  313  |     106      |     309     |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xc874e65ccffb133d9db4ff637e62532ef6ecef3223845d02f522c55786782911
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      0       |     16      |      0       |
|   -   |   2   |      57      |     14      |      56      |
|   -   |   3   |      33      |      0      |      33      |
| Total |  90   |      31      |     89      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x1d7dcb6a0ce5227c5379fc5b0e004561d7833b063355f69bfea3178f08fbaab4
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      5       |      8      |      5       |
|   -   |   2   |      16      |      1      |      16      |
|   -   |   3   |      2       |      0      |      2       |
| Total |  23   |      10      |     23      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xaa8a4783ebbb3bec45d3e804b3c59bfd486edfa39cbeda1d42bf86c08a0ebc0f
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      9       |      3      |      9       |
|   -   |   2   |      7       |      1      |      7       |
|   -   |   3   |      2       |      0      |      2       |
| Total |  18   |      5       |     18      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x9d2804d0562391d7cfcfaf0013f0352e176a94403a58577ebf82168a21514441
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      6       |      4      |      6       |
|   -   |   2   |      8       |      0      |      8       |
| Total |  14   |      5       |     14      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0x17e3eb95d0e6e92b42c0b3e95c6e75080c9fcd83e706344712e9587375de96e1
+-------+-------+--------------+-------------+--------------+
|   -   | LEVEL | SHORTNODECNT | FULLNODECNT | VALUENODECNT |
+-------+-------+--------------+-------------+--------------+
|   -   |   0   |      0       |      1      |      0       |
|   -   |   1   |      5       |      3      |      5       |
|   -   |   2   |      7       |      0      |      7       |
| Total |  12   |      4       |     12      |
+-------+-------+--------------+-------------+--------------+
ContractTrie-0xc017ca90c8aa37693c38f80436bb15bde46d7b30a503aa808cb7814127468a44
Contract Trie, total trie num: 142, ShortNodeCnt: 620, FullNodeCnt: 204, ValueNodeCnt: 615
```

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
2026-02-24 10:56:00 -07:00
cui
9ecb6c4ae6
core: reduce alloc (#33576)
tx.GasPrice()/GasFeeCap()/GasTipCap() each already allocate a new big.Int internally.
bench result:  
```
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/core
cpu: Apple M4
                        │   old.txt   │               new.txt               │
                        │   sec/op    │   sec/op     vs base                │
TransactionToMessage-10   240.1n ± 7%   175.1n ± 7%  -27.09% (p=0.000 n=10)

                        │  old.txt   │              new.txt               │
                        │    B/op    │    B/op     vs base                │
TransactionToMessage-10   544.0 ± 0%   424.0 ± 0%  -22.06% (p=0.000 n=10)

                        │  old.txt   │              new.txt               │
                        │ allocs/op  │ allocs/op   vs base                │
TransactionToMessage-10   17.00 ± 0%   11.00 ± 0%  -35.29% (p=0.000 n=10)
```
benchmark code:  

```
// Copyright 2025 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package core

import (
	"math/big"
	"testing"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/params"
)

// BenchmarkTransactionToMessage benchmarks the TransactionToMessage function.
func BenchmarkTransactionToMessage(b *testing.B) {
	key, _ := crypto.GenerateKey()
	signer := types.LatestSigner(params.TestChainConfig)
	to := common.HexToAddress("0x000000000000000000000000000000000000dead")
	
	// Create a DynamicFeeTx transaction
	txdata := &types.DynamicFeeTx{
		ChainID:   big.NewInt(1),
		Nonce:     42,
		GasTipCap: big.NewInt(1000000000),  // 1 gwei
		GasFeeCap: big.NewInt(2000000000),  // 2 gwei
		Gas:       21000,
		To:        &to,
		Value:     big.NewInt(1000000000000000000), // 1 ether
		Data:      []byte{0x12, 0x34, 0x56, 0x78},
		AccessList: types.AccessList{
			types.AccessTuple{
				Address:     common.HexToAddress("0x0000000000000000000000000000000000000001"),
				StorageKeys: []common.Hash{
					common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000001"),
				},
			},
		},
	}
	tx, _ := types.SignNewTx(key, signer, txdata)
	baseFee := big.NewInt(1500000000) // 1.5 gwei

	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_, err := TransactionToMessage(tx, signer, baseFee)
		if err != nil {
			b.Fatal(err)
		}
	}
}
```
2026-02-24 07:40:01 -07:00
rjl493456442
e636e4e3c1
core/state: track slot reads for empty storage (#33743)
From the https://eips.ethereum.org/EIPS/eip-7928

> SELFDESTRUCT (in-transaction): Accounts destroyed within a transaction
   MUST be included in AccountChanges without nonce or code changes. 
   However, if the account had a positive balance pre-transaction, the
   balance change to zero MUST be recorded. Storage keys within the self-destructed
   contracts that were modified or read MUST be included as a storage_reads
   entry.

The storage read against the empty contract (zero storage) should also
be recorded in the BAL's readlist.
2026-02-24 21:57:50 +08:00
rjl493456442
cbf3d8fed2
core/vm: touch precompile object with Amsterdam enabled (#33742)
https://eips.ethereum.org/EIPS/eip-7928 spec:

> Precompiled contracts: Precompiles MUST be included when accessed. 
   If a precompile receives value, it is recorded with a balance change.
   Otherwise, it is included with empty change lists.

The precompiled contracts are not explicitly touched when they are
invoked since Amsterdam fork.
2026-02-24 21:55:10 +08:00
rjl493456442
199ac16e07
core/types/bal: change code change type to list (#33774)
To align with the latest spec of EIP-7928:

```
# CodeChange: [block_access_index, new_code]
CodeChange = [BlockAccessIndex, Bytecode]
```
2026-02-24 21:53:20 +08:00
IONode Online
01083736c8
core/txpool/blobpool: remove unused adds slice in Add() (#33887) 2026-02-24 20:24:16 +08:00
Csaba Kiraly
59ad40e562
eth: check for tx on chain as well (#33607)
The fetcher should not fetch transactions that are already on chain.
Until now we were only checking in the txpool, but that does not have
the old transaction. This was leading to extra fetches of transactions
that were announced by a peer but are already on chain.

Here we extend the check to the chain as well.
2026-02-24 11:21:03 +01:00
CPerezz
c2e1785a48
eth/protocols/snap: restore peers to idle pool on request revert (#33790)
All five `revert*Request` functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked set
but never restore the peer to its corresponding idle pool. When a
request times out and no response arrives, the peer is permanently lost
from the idle pool, preventing new work from being assigned to it.

In normal operation mode (snap-sync full state) this bug is masked by
pivot movement (which resets idle pools via new Sync() cycles every ~15
minutes) and peer churn (reconnections re-add peers via Register()).
However in scenarios like the one I have running my (partial-stateful
node)[https://github.com/ethereum/go-ethereum/pull/33764] with
long-running sync cycles and few peers, all peers can eventually leak
out of the idle pools, stalling sync entirely.

Fix: after deleting from the request map, restore the peer to its idle
pool if it is still registered (guards against the peer-drop path where
Unregister already removed the peer). This mirrors the pattern used in
all five On* response handlers.


This only seems to manifest in peer-starved scenarios, such as the one I
find myself in when testing snap sync for the partial-stateful node.
Still, I thought it was at least good to raise this point; unsure whether
it needs discussion or not.
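The restore pattern can be sketched with plain maps — all names here are illustrative, not the snap syncer's actual fields:

```go
package main

import "fmt"

// syncer is a toy model of the snap syncer's bookkeeping: in-flight
// requests, registered peers, and the idle pool work is assigned from.
type syncer struct {
	reqs   map[uint64]string   // request id -> peer id
	peers  map[string]struct{} // registered peers
	idlers map[string]struct{} // peers available for new work
}

// revertRequest drops a timed-out request and restores the peer to the
// idle pool only if it is still registered (the Unregister path may have
// already removed it) — mirroring the On* response handlers.
func (s *syncer) revertRequest(id uint64) {
	peer, ok := s.reqs[id]
	if !ok {
		return
	}
	delete(s.reqs, id)
	if _, registered := s.peers[peer]; registered {
		s.idlers[peer] = struct{}{}
	}
}

func main() {
	s := &syncer{
		reqs:   map[uint64]string{1: "a", 2: "b"},
		peers:  map[string]struct{}{"a": {}}, // "b" already unregistered
		idlers: map[string]struct{}{},
	}
	s.revertRequest(1)
	s.revertRequest(2)
	fmt.Println(len(s.idlers)) // only the registered peer returns to idle
}
```

Without the restore step, every timeout permanently shrinks the idle pool, which is exactly the stall described above.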
2026-02-24 09:14:11 +08:00
Nakanishi Hiro
82fad31540
internal/ethapi: add eth_getStorageValues method (#32591)
Implements the new eth_getStorageValues method. It returns storage
values for a list of contracts.

Spec: https://github.com/ethereum/execution-apis/pull/756

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-02-23 20:47:30 +01:00
vickkkkkyy
1625064c68
internal/ethapi: include AuthorizationList in gas estimation (#33849)
Fixes an issue where AuthorizationList wasn't copied over when
estimating gas for a user-provided transaction.
2026-02-23 18:07:26 +01:00
Marius van der Wijden
1d1a094d51
beacon/blsync: ignore beacon syncer reorging errors (#33628)
Downgrades beacon syncer reorging from Error to Debug
closes https://github.com/ethereum/go-ethereum/issues/29916
2026-02-23 16:02:23 +01:00
Marius van der Wijden
e40aa46e88
eth/catalyst: implement testing_buildBlockV1 (#33656)
implements
https://github.com/ethereum/execution-apis/pull/710/changes#r2712256529

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-23 15:56:31 +01:00
Csaba Kiraly
d3dd48e59d
metrics: allow changing influxdb interval (#33767)
This PR exposes the InfluxDB reporting interval as a CLI parameter; it was
previously fixed at 10s. The default remains 10s.
Note that decreasing the interval comes with notable extra traffic and
load on InfluxDB.
2026-02-23 14:27:25 +01:00
Felix Lange
00cbd2e6f4
p2p/discover/v5wire: use Whoareyou.ChallengeData instead of storing encoded packet (#31547)
This changes the challenge resend logic again to use the existing
`ChallengeData` field of `v5wire.Whoareyou` instead of storing a second
copy of the packet in `Whoareyou.Encoded`. It's more correct this way
since `ChallengeData` is supposed to be the data that is used by the ID
verification procedure.

Also adapts the cross-client test to verify this behavior.

Follow-up to #31543
2026-02-22 21:58:47 +01:00
Felix Lange
453d0f9299
build: upgrade to golangci-lint v2.10.1 (#33875)
2026-02-21 20:53:02 +01:00
Felix Lange
6d865ccd30
build: upgrade -dlgo version to 1.25.7 (#33874) 2026-02-21 20:52:43 +01:00
rjl493456442
54f72c796f
core/rawdb: revert "check pruning tail in HasBody and HasReceipts" (#33865)
Reverts ethereum/go-ethereum#33747.

This change caused an unexpected issue during sync with
`history.chain=postmerge`.
2026-02-19 11:43:44 +01:00
Sina M
2a62df3815
.github: fix actions 32bit test (#33866)
2026-02-18 18:28:53 +01:00
rjl493456442
01fe1d716c
core/vm: disable the value transfer in syscall (#33741)
In src/ethereum/forks/amsterdam/vm/interpreter.py:299-304, the caller address
is only tracked for the block-level access list when there is a value transfer:

```python
if message.should_transfer_value and message.value != 0:  
    # Track value transfer  
    sender_balance = get_account(state, message.caller).balance  
    recipient_balance = get_account(state, message.current_target).balance  

    track_address(message.state_changes, message.caller)  # Line 304
```

Since system transactions have should_transfer_value=False and value=0, 
this condition is never met, so the caller (SYSTEM_ADDRESS) is not
tracked.

This PR applies the same condition to the syscall path in the geth
implementation, aligning it with the EIP-7928 spec.
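The condition quoted from the spec above boils down to a single guard, sketched here with a hypothetical helper name: the caller lands in the block-level access list only when an actual value transfer takes place, so syscalls (transfer disabled, value zero) never track SYSTEM_ADDRESS.

```go
package main

import "fmt"

// tracksCaller mirrors the spec condition quoted above (hypothetical
// helper name): the caller is tracked only when a value transfer happens.
func tracksCaller(shouldTransferValue bool, value uint64) bool {
	return shouldTransferValue && value != 0
}

func main() {
	fmt.Println(tracksCaller(true, 1))  // regular value-carrying call: tracked
	fmt.Println(tracksCaller(false, 0)) // syscall: SYSTEM_ADDRESS not tracked
}
```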

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-02-18 08:40:23 +08:00
spencer
3eed0580d4
cmd/evm: add --opcode.count flag to t8n (#33800)
Adds `--opcode.count=<file>` flag to `evm t8n` that writes per-opcode
execution frequency counts to a JSON file (relative to
`--output.basedir`).
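The core of such a counter is a per-opcode frequency map serialized to JSON. The sketch below is illustrative; opcode names are standard EVM mnemonics, but the exact JSON schema written by `--opcode.count` is defined by the geth change, not by this example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// countOpcodes tallies per-opcode execution frequency from a trace of
// executed opcode names.
func countOpcodes(trace []string) map[string]int {
	counts := make(map[string]int)
	for _, op := range trace {
		counts[op]++
	}
	return counts
}

func main() {
	trace := []string{"PUSH1", "PUSH1", "ADD", "PUSH1", "SSTORE", "STOP"}
	out, err := json.Marshal(countOpcodes(trace)) // map keys are emitted sorted
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"ADD":1,"PUSH1":3,"SSTORE":1,"STOP":1}
}
```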

---------

Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2026-02-17 20:42:53 +01:00
Felix Lange
1054276906 version: begin v1.17.1 release cycle
2026-02-17 17:17:00 +01:00
416 changed files with 29385 additions and 16584 deletions


@ -166,6 +166,24 @@ jobs:
env:
GETH_MINGW: 'C:\msys64\mingw64'
- name: "Create/upload archive (amd64)"
shell: cmd
run: |
go run build/ci.go archive -arch amd64 -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
env:
WINDOWS_SIGNING_KEY: ${{ secrets.WINDOWS_SIGNING_KEY }}
AZURE_BLOBSTORE_TOKEN: ${{ secrets.AZURE_BLOBSTORE_TOKEN }}
- name: "Create/upload NSIS installer (amd64)"
shell: cmd
run: |
set "PATH=C:\Program Files (x86)\NSIS;%PATH%"
go run build/ci.go nsis -arch amd64 -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
del /Q build\bin\*
env:
WINDOWS_SIGNING_KEY: ${{ secrets.WINDOWS_SIGNING_KEY }}
AZURE_BLOBSTORE_TOKEN: ${{ secrets.AZURE_BLOBSTORE_TOKEN }}
- name: "Build (386)"
shell: cmd
run: |
@ -174,6 +192,24 @@ jobs:
env:
GETH_MINGW: 'C:\msys64\mingw32'
- name: "Create/upload archive (386)"
shell: cmd
run: |
go run build/ci.go archive -arch 386 -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
env:
WINDOWS_SIGNING_KEY: ${{ secrets.WINDOWS_SIGNING_KEY }}
AZURE_BLOBSTORE_TOKEN: ${{ secrets.AZURE_BLOBSTORE_TOKEN }}
- name: "Create/upload NSIS installer (386)"
shell: cmd
run: |
set "PATH=C:\Program Files (x86)\NSIS;%PATH%"
go run build/ci.go nsis -arch 386 -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
del /Q build\bin\*
env:
WINDOWS_SIGNING_KEY: ${{ secrets.WINDOWS_SIGNING_KEY }}
AZURE_BLOBSTORE_TOKEN: ${{ secrets.AZURE_BLOBSTORE_TOKEN }}
docker:
name: Docker Image
runs-on: ubuntu-latest

.github/CODEOWNERS

@ -10,6 +10,7 @@ beacon/merkle/ @zsfelfoldi
beacon/types/ @zsfelfoldi @fjl
beacon/params/ @zsfelfoldi @fjl
cmd/evm/ @MariusVanDerWijden @lightclient
cmd/keeper/ @gballet
core/state/ @rjl493456442
crypto/ @gballet @jwasinger @fjl
core/ @rjl493456442

.github/workflows/freebsd.yml

@ -0,0 +1,29 @@
on:
push:
branches:
- freebsd-github-action
workflow_dispatch:
jobs:
build:
name: FreeBSD-build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
with:
submodules: false
- name: Test in FreeBSD
id: test
uses: vmactions/freebsd-vm@v1
with:
release: "15.0"
usesh: true
prepare: |
pkg install -y go
run: |
freebsd-version
uname -a
go version
go run ./build/ci.go test -p 8


@ -69,8 +69,8 @@ jobs:
- name: Install cross toolchain
run: |
apt-get update
apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib
sudo apt-get update
sudo apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib
- name: Build
run: go run build/ci.go test -arch 386 -short -p 8
@ -97,3 +97,44 @@ jobs:
- name: Run tests
run: go run build/ci.go test -p 8
windows:
name: Windows ${{ matrix.arch }}
needs: lint
runs-on: [self-hosted, windows, x64]
strategy:
fail-fast: false
matrix:
include:
- arch: amd64
mingw: 'C:\msys64\mingw64'
test: true
- arch: '386'
mingw: 'C:\msys64\mingw32'
test: false
env:
GETH_MINGW: ${{ matrix.mingw }}
GETH_CC: ${{ matrix.mingw }}\bin\gcc.exe
steps:
- uses: actions/checkout@v4
with:
submodules: true
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.25'
cache: false
- name: Build
shell: cmd
run: |
set PATH=%GETH_MINGW%\bin;%PATH%
go run build/ci.go install -arch ${{ matrix.arch }} -cc %GETH_CC%
- name: Run tests
if: matrix.test
shell: cmd
run: |
set PATH=%GETH_MINGW%\bin;%PATH%
go run build/ci.go test -arch ${{ matrix.arch }} -cc %GETH_CC% -short -p 8

AGENTS.md

@ -0,0 +1,102 @@
# AGENTS
## Guidelines
- **Keep changes minimal and focused.** Only modify code directly related to the task at hand. Do not refactor unrelated code, rename existing variables or functions for style, or bundle unrelated fixes into the same commit or PR.
- **Do not add, remove, or update dependencies** unless the task explicitly requires it.
## Pre-Commit Checklist
Before every commit, run **all** of the following checks and ensure they pass:
### 1. Formatting
Before committing, always run `gofmt` and `goimports` on all modified files:
```sh
gofmt -w <modified files>
goimports -w <modified files>
```
### 2. Build All Commands
Verify that all tools compile successfully:
```sh
make all
```
This builds all executables under `cmd/`, including `keeper` which has special build requirements.
### 3. Tests
While iterating during development, use `-short` for faster feedback:
```sh
go run ./build/ci.go test -short
```
Before committing, run the full test suite **without** `-short` to ensure all tests pass, including the Ethereum execution-spec tests and all state/block test permutations:
```sh
go run ./build/ci.go test
```
### 4. Linting
```sh
go run ./build/ci.go lint
```
This runs additional style checks. Fix any issues before committing.
### 5. Generated Code
```sh
go run ./build/ci.go check_generate
```
Ensures that all generated files (e.g., `gen_*.go`) are up to date. If this fails, first install the required code generators by running `make devtools`, then run the appropriate `go generate` commands and include the updated files in your commit.
### 6. Dependency Hygiene
```sh
go run ./build/ci.go check_baddeps
```
Verifies that no forbidden dependencies have been introduced.
## What to include in commits
Do not commit binaries, whether they are produced by the main build or byproducts of investigations.
## Commit Message Format
Commit messages must be prefixed with the package(s) they modify, followed by a short lowercase description:
```
<package(s)>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `eth, rpc: make trace configs optional`
- `cmd/geth: add new flag for sync mode`
Use comma-separated package names when multiple areas are affected. Keep the description concise.
## Pull Request Title Format
PR titles follow the same convention as commit messages:
```
<list of modified paths>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `core, eth: add arena allocator support`
- `cmd/geth, internal/ethapi: refactor transaction args`
- `trie/archiver: streaming subtree archival to fix OOM`
Use the top-level package paths, comma-separated if multiple areas are affected. Only mention the directories with functional changes; interface changes that trickle all over the codebase should not generate an exhaustive list. The description should be a short, lowercase summary of the change.


@ -4,7 +4,7 @@ ARG VERSION=""
ARG BUILDNUM=""
# Build Geth in a stock Go builder container
FROM golang:1.24-alpine AS builder
FROM golang:1.26-alpine AS builder
RUN apk add --no-cache gcc musl-dev linux-headers git


@ -4,7 +4,7 @@ ARG VERSION=""
ARG BUILDNUM=""
# Build Geth in a stock Go builder container
FROM golang:1.24-alpine AS builder
FROM golang:1.26-alpine AS builder
RUN apk add --no-cache gcc musl-dev linux-headers git


@ -183,8 +183,11 @@ var (
// Solidity: {{.Original.String}}
func ({{ decapitalise $contract.Type}} *{{$contract.Type}}) Unpack{{.Normalized.Name}}Event(log *types.Log) (*{{$contract.Type}}{{.Normalized.Name}}, error) {
event := "{{.Original.Name}}"
if len(log.Topics) == 0 || log.Topics[0] != {{ decapitalise $contract.Type}}.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != {{ decapitalise $contract.Type}}.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new({{$contract.Type}}{{.Normalized.Name}})
if len(log.Data) > 0 {


@ -360,8 +360,11 @@ func (CrowdsaleFundTransfer) ContractEventName() string {
// Solidity: event FundTransfer(address backer, uint256 amount, bool isContribution)
func (crowdsale *Crowdsale) UnpackFundTransferEvent(log *types.Log) (*CrowdsaleFundTransfer, error) {
event := "FundTransfer"
if len(log.Topics) == 0 || log.Topics[0] != crowdsale.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != crowdsale.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(CrowdsaleFundTransfer)
if len(log.Data) > 0 {


@ -606,8 +606,11 @@ func (DAOChangeOfRules) ContractEventName() string {
// Solidity: event ChangeOfRules(uint256 minimumQuorum, uint256 debatingPeriodInMinutes, int256 majorityMargin)
func (dAO *DAO) UnpackChangeOfRulesEvent(log *types.Log) (*DAOChangeOfRules, error) {
event := "ChangeOfRules"
if len(log.Topics) == 0 || log.Topics[0] != dAO.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dAO.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DAOChangeOfRules)
if len(log.Data) > 0 {
@ -648,8 +651,11 @@ func (DAOMembershipChanged) ContractEventName() string {
// Solidity: event MembershipChanged(address member, bool isMember)
func (dAO *DAO) UnpackMembershipChangedEvent(log *types.Log) (*DAOMembershipChanged, error) {
event := "MembershipChanged"
if len(log.Topics) == 0 || log.Topics[0] != dAO.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dAO.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DAOMembershipChanged)
if len(log.Data) > 0 {
@ -692,8 +698,11 @@ func (DAOProposalAdded) ContractEventName() string {
// Solidity: event ProposalAdded(uint256 proposalID, address recipient, uint256 amount, string description)
func (dAO *DAO) UnpackProposalAddedEvent(log *types.Log) (*DAOProposalAdded, error) {
event := "ProposalAdded"
if len(log.Topics) == 0 || log.Topics[0] != dAO.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dAO.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DAOProposalAdded)
if len(log.Data) > 0 {
@ -736,8 +745,11 @@ func (DAOProposalTallied) ContractEventName() string {
// Solidity: event ProposalTallied(uint256 proposalID, int256 result, uint256 quorum, bool active)
func (dAO *DAO) UnpackProposalTalliedEvent(log *types.Log) (*DAOProposalTallied, error) {
event := "ProposalTallied"
if len(log.Topics) == 0 || log.Topics[0] != dAO.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dAO.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DAOProposalTallied)
if len(log.Data) > 0 {
@ -780,8 +792,11 @@ func (DAOVoted) ContractEventName() string {
// Solidity: event Voted(uint256 proposalID, bool position, address voter, string justification)
func (dAO *DAO) UnpackVotedEvent(log *types.Log) (*DAOVoted, error) {
event := "Voted"
if len(log.Topics) == 0 || log.Topics[0] != dAO.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dAO.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DAOVoted)
if len(log.Data) > 0 {


@ -72,8 +72,11 @@ func (EventCheckerDynamic) ContractEventName() string {
// Solidity: event dynamic(string indexed idxStr, bytes indexed idxDat, string str, bytes dat)
func (eventChecker *EventChecker) UnpackDynamicEvent(log *types.Log) (*EventCheckerDynamic, error) {
event := "dynamic"
if len(log.Topics) == 0 || log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(EventCheckerDynamic)
if len(log.Data) > 0 {
@ -112,8 +115,11 @@ func (EventCheckerEmpty) ContractEventName() string {
// Solidity: event empty()
func (eventChecker *EventChecker) UnpackEmptyEvent(log *types.Log) (*EventCheckerEmpty, error) {
event := "empty"
if len(log.Topics) == 0 || log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(EventCheckerEmpty)
if len(log.Data) > 0 {
@ -154,8 +160,11 @@ func (EventCheckerIndexed) ContractEventName() string {
// Solidity: event indexed(address indexed addr, int256 indexed num)
func (eventChecker *EventChecker) UnpackIndexedEvent(log *types.Log) (*EventCheckerIndexed, error) {
event := "indexed"
if len(log.Topics) == 0 || log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(EventCheckerIndexed)
if len(log.Data) > 0 {
@ -196,8 +205,11 @@ func (EventCheckerMixed) ContractEventName() string {
// Solidity: event mixed(address indexed addr, int256 num)
func (eventChecker *EventChecker) UnpackMixedEvent(log *types.Log) (*EventCheckerMixed, error) {
event := "mixed"
if len(log.Topics) == 0 || log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(EventCheckerMixed)
if len(log.Data) > 0 {
@ -238,8 +250,11 @@ func (EventCheckerUnnamed) ContractEventName() string {
// Solidity: event unnamed(uint256 indexed arg0, uint256 indexed arg1)
func (eventChecker *EventChecker) UnpackUnnamedEvent(log *types.Log) (*EventCheckerUnnamed, error) {
event := "unnamed"
if len(log.Topics) == 0 || log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != eventChecker.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(EventCheckerUnnamed)
if len(log.Data) > 0 {


@ -134,8 +134,11 @@ func (NameConflictLog) ContractEventName() string {
// Solidity: event log(int256 msg, int256 _msg)
func (nameConflict *NameConflict) UnpackLogEvent(log *types.Log) (*NameConflictLog, error) {
event := "log"
if len(log.Topics) == 0 || log.Topics[0] != nameConflict.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != nameConflict.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(NameConflictLog)
if len(log.Data) > 0 {


@ -136,8 +136,11 @@ func (NumericMethodNameE1TestEvent) ContractEventName() string {
// Solidity: event _1TestEvent(address _param)
func (numericMethodName *NumericMethodName) UnpackE1TestEventEvent(log *types.Log) (*NumericMethodNameE1TestEvent, error) {
event := "_1TestEvent"
if len(log.Topics) == 0 || log.Topics[0] != numericMethodName.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != numericMethodName.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(NumericMethodNameE1TestEvent)
if len(log.Data) > 0 {


@ -114,8 +114,11 @@ func (OverloadBar) ContractEventName() string {
// Solidity: event bar(uint256 i)
func (overload *Overload) UnpackBarEvent(log *types.Log) (*OverloadBar, error) {
event := "bar"
if len(log.Topics) == 0 || log.Topics[0] != overload.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != overload.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(OverloadBar)
if len(log.Data) > 0 {
@ -156,8 +159,11 @@ func (OverloadBar0) ContractEventName() string {
// Solidity: event bar(uint256 i, uint256 j)
func (overload *Overload) UnpackBar0Event(log *types.Log) (*OverloadBar0, error) {
event := "bar0"
if len(log.Topics) == 0 || log.Topics[0] != overload.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != overload.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(OverloadBar0)
if len(log.Data) > 0 {


@ -386,8 +386,11 @@ func (TokenTransfer) ContractEventName() string {
// Solidity: event Transfer(address indexed from, address indexed to, uint256 value)
func (token *Token) UnpackTransferEvent(log *types.Log) (*TokenTransfer, error) {
event := "Transfer"
if len(log.Topics) == 0 || log.Topics[0] != token.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != token.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(TokenTransfer)
if len(log.Data) > 0 {


@ -193,8 +193,11 @@ func (TupleTupleEvent) ContractEventName() string {
// Solidity: event TupleEvent((uint256,uint256[],(uint256,uint256)[]) a, (uint256,uint256)[2][] b, (uint256,uint256)[][2] c, (uint256,uint256[],(uint256,uint256)[])[] d, uint256[] e)
func (tuple *Tuple) UnpackTupleEventEvent(log *types.Log) (*TupleTupleEvent, error) {
event := "TupleEvent"
if len(log.Topics) == 0 || log.Topics[0] != tuple.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != tuple.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(TupleTupleEvent)
if len(log.Data) > 0 {
@ -234,8 +237,11 @@ func (TupleTupleEvent2) ContractEventName() string {
// Solidity: event TupleEvent2((uint8,uint8)[] arg0)
func (tuple *Tuple) UnpackTupleEvent2Event(log *types.Log) (*TupleTupleEvent2, error) {
event := "TupleEvent2"
if len(log.Topics) == 0 || log.Topics[0] != tuple.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != tuple.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(TupleTupleEvent2)
if len(log.Data) > 0 {


@ -35,8 +35,8 @@ import (
const basefeeWiggleMultiplier = 2
var (
errNoEventSignature = errors.New("no event signature")
errEventSignatureMismatch = errors.New("event signature mismatch")
ErrNoEventSignature = errors.New("no event signature")
ErrEventSignatureMismatch = errors.New("event signature mismatch")
)
// SignerFn is a signer function callback when a contract requires a method to
@ -536,10 +536,10 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]any)
func (c *BoundContract) UnpackLog(out any, event string, log types.Log) error {
// Anonymous events are not supported.
if len(log.Topics) == 0 {
return errNoEventSignature
return ErrNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID {
return errEventSignatureMismatch
return ErrEventSignatureMismatch
}
if len(log.Data) > 0 {
if err := c.abi.UnpackIntoInterface(out, event, log.Data); err != nil {
@ -559,10 +559,10 @@ func (c *BoundContract) UnpackLog(out any, event string, log types.Log) error {
func (c *BoundContract) UnpackLogIntoMap(out map[string]any, event string, log types.Log) error {
// Anonymous events are not supported.
if len(log.Topics) == 0 {
return errNoEventSignature
return ErrNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID {
return errEventSignatureMismatch
return ErrEventSignatureMismatch
}
if len(log.Data) > 0 {
if err := c.abi.UnpackIntoMap(out, event, log.Data); err != nil {


@ -276,8 +276,11 @@ func (DBInsert) ContractEventName() string {
// Solidity: event Insert(uint256 key, uint256 value, uint256 length)
func (dB *DB) UnpackInsertEvent(log *types.Log) (*DBInsert, error) {
event := "Insert"
if len(log.Topics) == 0 || log.Topics[0] != dB.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dB.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DBInsert)
if len(log.Data) > 0 {
@ -318,8 +321,11 @@ func (DBKeyedInsert) ContractEventName() string {
// Solidity: event KeyedInsert(uint256 indexed key, uint256 value)
func (dB *DB) UnpackKeyedInsertEvent(log *types.Log) (*DBKeyedInsert, error) {
event := "KeyedInsert"
if len(log.Topics) == 0 || log.Topics[0] != dB.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != dB.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(DBKeyedInsert)
if len(log.Data) > 0 {


@ -115,8 +115,11 @@ func (CBasic1) ContractEventName() string {
// Solidity: event basic1(uint256 indexed id, uint256 data)
func (c *C) UnpackBasic1Event(log *types.Log) (*CBasic1, error) {
event := "basic1"
if len(log.Topics) == 0 || log.Topics[0] != c.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(CBasic1)
if len(log.Data) > 0 {
@ -157,8 +160,11 @@ func (CBasic2) ContractEventName() string {
// Solidity: event basic2(bool indexed flag, uint256 data)
func (c *C) UnpackBasic2Event(log *types.Log) (*CBasic2, error) {
event := "basic2"
if len(log.Topics) == 0 || log.Topics[0] != c.abi.Events[event].ID {
return nil, errors.New("event signature mismatch")
if len(log.Topics) == 0 {
return nil, bind.ErrNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID {
return nil, bind.ErrEventSignatureMismatch
}
out := new(CBasic2)
if len(log.Data) > 0 {


@ -379,16 +379,16 @@ func TestEventUnpackEmptyTopics(t *testing.T) {
if err == nil {
t.Fatal("expected error when unpacking event with empty topics, got nil")
}
if err.Error() != "event signature mismatch" {
t.Fatalf("expected 'event signature mismatch' error, got: %v", err)
if err != bind.ErrNoEventSignature {
t.Fatalf("expected 'no event signature' error, got: %v", err)
}
_, err = c.UnpackBasic2Event(log)
if err == nil {
t.Fatal("expected error when unpacking event with empty topics, got nil")
}
if err.Error() != "event signature mismatch" {
t.Fatalf("expected 'event signature mismatch' error, got: %v", err)
if err != bind.ErrNoEventSignature {
t.Fatalf("expected 'no event signature' error, got: %v", err)
}
}
}


@ -68,18 +68,27 @@ func waitWatcherStart(ks *KeyStore) bool {
func waitForAccounts(wantAccounts []accounts.Account, ks *KeyStore) error {
var list []accounts.Account
haveAccounts := false
haveChange := false
for t0 := time.Now(); time.Since(t0) < 5*time.Second; time.Sleep(100 * time.Millisecond) {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
if !haveAccounts {
list = ks.Accounts()
haveAccounts = reflect.DeepEqual(list, wantAccounts)
}
if !haveChange {
select {
case <-ks.changes:
haveChange = true
default:
return errors.New("wasn't notified of new accounts")
}
}
if haveAccounts && haveChange {
return nil
}
}
if haveAccounts {
return errors.New("wasn't notified of new accounts")
}
return fmt.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}


@ -14,8 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build (darwin && !ios && cgo) || freebsd || (linux && !arm64) || netbsd || solaris
// +build darwin,!ios,cgo freebsd linux,!arm64 netbsd solaris
//go:build (darwin && !ios && cgo) || freebsd || linux || netbsd || solaris
// +build darwin,!ios,cgo freebsd linux netbsd solaris
package keystore


@ -14,8 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build (darwin && !cgo) || ios || (linux && arm64) || windows || (!darwin && !freebsd && !linux && !netbsd && !solaris)
// +build darwin,!cgo ios linux,arm64 windows !darwin,!freebsd,!linux,!netbsd,!solaris
//go:build (darwin && !cgo) || ios || windows || (!darwin && !freebsd && !linux && !netbsd && !solaris)
// +build darwin,!cgo ios windows !darwin,!freebsd,!linux,!netbsd,!solaris
// This is the fallback implementation of directory watching.
// It is used on unsupported platforms.


@ -113,7 +113,7 @@ func (hub *Hub) readPairings() error {
}
func (hub *Hub) writePairings() error {
pairingFile, err := os.OpenFile(filepath.Join(hub.datadir, "smartcards.json"), os.O_RDWR|os.O_CREATE, 0755)
pairingFile, err := os.OpenFile(filepath.Join(hub.datadir, "smartcards.json"), os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0755)
if err != nil {
return err
}
@ -129,11 +129,8 @@ func (hub *Hub) writePairings() error {
return err
}
if _, err := pairingFile.Write(pairingData); err != nil {
return err
}
return nil
_, err = pairingFile.Write(pairingData)
return err
}
func (hub *Hub) pairing(wallet *Wallet) *smartcardPairing {


@ -26,7 +26,7 @@ import (
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
"github.com/ethereum/hid"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.


@ -31,7 +31,7 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
"github.com/ethereum/hid"
)
// Maximum time between wallet health checks to detect USB unplugs.


@ -87,6 +87,10 @@ func (ec *engineClient) updateLoop(headCh <-chan types.ChainHeadEvent) {
if status, err := ec.callForkchoiceUpdated(forkName, event); err == nil {
log.Info("Successful ForkchoiceUpdated", "head", event.Block.Hash(), "status", status)
} else {
if err.Error() == "beacon syncer reorging" {
log.Debug("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
continue // ignore beacon syncer reorging errors, this error can occur if the blsync is skipping a block
}
log.Error("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
}
}


@ -21,6 +21,7 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
}
var enc PayloadAttributes
enc.Timestamp = hexutil.Uint64(p.Timestamp)
@ -28,6 +29,7 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient
enc.Withdrawals = p.Withdrawals
enc.BeaconRoot = p.BeaconRoot
enc.SlotNumber = (*hexutil.Uint64)(p.SlotNumber)
return json.Marshal(&enc)
}
@ -39,6 +41,7 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
SuggestedFeeRecipient *common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *hexutil.Uint64 `json:"slotNumber"`
}
var dec PayloadAttributes
if err := json.Unmarshal(input, &dec); err != nil {
@ -62,5 +65,8 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
if dec.BeaconRoot != nil {
p.BeaconRoot = dec.BeaconRoot
}
if dec.SlotNumber != nil {
p.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}


@ -34,6 +34,7 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"`
ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"`
SlotNumber *hexutil.Uint64 `json:"slotNumber,omitempty"`
}
var enc ExecutableData
enc.ParentHash = e.ParentHash
@ -58,6 +59,7 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
enc.Withdrawals = e.Withdrawals
enc.BlobGasUsed = (*hexutil.Uint64)(e.BlobGasUsed)
enc.ExcessBlobGas = (*hexutil.Uint64)(e.ExcessBlobGas)
enc.SlotNumber = (*hexutil.Uint64)(e.SlotNumber)
return json.Marshal(&enc)
}
@ -81,6 +83,7 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *hexutil.Uint64 `json:"blobGasUsed"`
ExcessBlobGas *hexutil.Uint64 `json:"excessBlobGas"`
SlotNumber *hexutil.Uint64 `json:"slotNumber,omitempty"`
}
var dec ExecutableData
if err := json.Unmarshal(input, &dec); err != nil {
@ -154,5 +157,8 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
if dec.ExcessBlobGas != nil {
e.ExcessBlobGas = (*uint64)(dec.ExcessBlobGas)
}
if dec.SlotNumber != nil {
e.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}

View file

@ -50,6 +50,13 @@ var (
// ExecutionPayloadV3 has the syntax of ExecutionPayloadV2 and appends the new
// fields: blobGasUsed and excessBlobGas.
PayloadV3 PayloadVersion = 0x3
// PayloadV4 is the identifier of ExecutionPayloadV4, introduced in the Amsterdam fork.
//
// https://github.com/ethereum/execution-apis/blob/main/src/engine/amsterdam.md#executionpayloadv4
// ExecutionPayloadV4 has the syntax of ExecutionPayloadV3 and appends the new
// field slotNumber.
PayloadV4 PayloadVersion = 0x4
)
//go:generate go run github.com/fjl/gencodec -type PayloadAttributes -field-override payloadAttributesMarshaling -out gen_blockparams.go
@ -62,11 +69,13 @@ type PayloadAttributes struct {
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BeaconRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *uint64 `json:"slotNumber"`
}
// JSON type overrides for PayloadAttributes.
type payloadAttributesMarshaling struct {
Timestamp hexutil.Uint64
Timestamp hexutil.Uint64
SlotNumber *hexutil.Uint64
}
//go:generate go run github.com/fjl/gencodec -type ExecutableData -field-override executableDataMarshaling -out gen_ed.go
@ -90,6 +99,7 @@ type ExecutableData struct {
Withdrawals []*types.Withdrawal `json:"withdrawals"`
BlobGasUsed *uint64 `json:"blobGasUsed"`
ExcessBlobGas *uint64 `json:"excessBlobGas"`
SlotNumber *uint64 `json:"slotNumber,omitempty"`
}
// JSON type overrides for executableData.
@ -104,6 +114,7 @@ type executableDataMarshaling struct {
Transactions []hexutil.Bytes
BlobGasUsed *hexutil.Uint64
ExcessBlobGas *hexutil.Uint64
SlotNumber *hexutil.Uint64
}
// StatelessPayloadStatusV1 is the result of a stateless payload execution.
@ -213,7 +224,7 @@ func encodeTransactions(txs []*types.Transaction) [][]byte {
return enc
}
func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
func DecodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
var txs = make([]*types.Transaction, len(enc))
for i, encTx := range enc {
var tx types.Transaction
@ -251,7 +262,7 @@ func ExecutableDataToBlock(data ExecutableData, versionedHashes []common.Hash, b
// for stateless execution, so it skips checking if the executable data hashes to
// the requested hash (stateless has to *compute* the root hash, it's not given).
func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.Hash, beaconRoot *common.Hash, requests [][]byte) (*types.Block, error) {
txs, err := decodeTransactions(data.Transactions)
txs, err := DecodeTransactions(data.Transactions)
if err != nil {
return nil, err
}
@ -313,6 +324,7 @@ func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.H
BlobGasUsed: data.BlobGasUsed,
ParentBeaconRoot: beaconRoot,
RequestsHash: requestsHash,
SlotNumber: data.SlotNumber,
}
return types.NewBlockWithHeader(header).
WithBody(types.Body{Transactions: txs, Uncles: nil, Withdrawals: data.Withdrawals}),
@ -340,6 +352,7 @@ func BlockToExecutableData(block *types.Block, fees *big.Int, sidecars []*types.
Withdrawals: block.Withdrawals(),
BlobGasUsed: block.BlobGasUsed(),
ExcessBlobGas: block.ExcessBlobGas(),
SlotNumber: block.SlotNumber(),
}
// Add blobs.

View file

@ -438,14 +438,11 @@ func (s *serverWithLimits) fail(desc string) {
// failLocked calculates the dynamic failure delay and applies it.
func (s *serverWithLimits) failLocked(desc string) {
log.Debug("Server error", "description", desc)
s.failureDelay *= 2
now := s.clock.Now()
if now > s.failureDelayEnd {
s.failureDelay *= math.Pow(2, -float64(now-s.failureDelayEnd)/float64(maxFailureDelay))
}
if s.failureDelay < float64(minFailureDelay) {
s.failureDelay = float64(minFailureDelay)
}
s.failureDelay = max(min(s.failureDelay*2, float64(maxFailureDelay)), float64(minFailureDelay))
s.failureDelayEnd = now + mclock.AbsTime(s.failureDelay)
s.delay(time.Duration(s.failureDelay))
}
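
The rewritten failLocked above replaces the open-ended doubling with a single clamp: the delay first decays exponentially for however long the previous delay window has been over, then doubles, bounded to [minFailureDelay, maxFailureDelay]. A standalone sketch with illustrative constants (the real values live in the client package):

```go
package main

import (
	"fmt"
	"math"
)

// Illustrative bounds in milliseconds; placeholders, not geth's values.
const (
	minFailureDelay = 100.0
	maxFailureDelay = 10000.0
)

// nextFailureDelay sketches the updated logic: decay for the idle time
// since the previous delay window ended, then double, then clamp.
func nextFailureDelay(delay, now, delayEnd float64) float64 {
	if now > delayEnd {
		delay *= math.Pow(2, -(now-delayEnd)/maxFailureDelay)
	}
	return math.Max(math.Min(delay*2, maxFailureDelay), minFailureDelay)
}

func main() {
	d := 0.0
	for i := 0; i < 5; i++ {
		d = nextFailureDelay(d, 0, 0)
		fmt.Println(d) // 100 200 400 800 1600: doubling up from the floor
	}
}
```

The clamp guarantees repeated failures converge to maxFailureDelay instead of growing without bound, while a long quiet period shrinks the delay back toward the floor.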

View file

@ -62,7 +62,6 @@ const (
ssNeedParent // cp header slot %32 != 0, need parent to check epoch boundary
ssParentRequested // cp parent header requested
ssPrintStatus // has all necessary info, print log message if init still not successful
ssDone // log message printed, no more action required
)
type serverState struct {
@ -180,7 +179,8 @@ func (s *CheckpointInit) Process(requester request.Requester, events []request.E
default:
log.Error("blsync: checkpoint not available, but reported as finalized; specified checkpoint hash might be too old", "server", server.Name())
}
s.serverState[server] = serverState{state: ssDone}
s.serverState[server] = serverState{state: ssDefault}
requester.Fail(server, "checkpoint init failed")
}
}

View file

@ -5,81 +5,102 @@
# https://github.com/ethereum/execution-spec-tests/releases/download/v5.1.0
a3192784375acec7eaec492799d5c5d0c47a2909a3cc40178898e4ecd20cc416 fixtures_develop.tar.gz
# version:golang 1.25.1
# version:golang 1.25.9
# https://go.dev/dl/
d010c109cee94d80efe681eab46bdea491ac906bf46583c32e9f0dbb0bd1a594 go1.25.1.src.tar.gz
1d622468f767a1b9fe1e1e67bd6ce6744d04e0c68712adc689748bbeccb126bb go1.25.1.darwin-amd64.tar.gz
68deebb214f39d542e518ebb0598a406ab1b5a22bba8ec9ade9f55fb4dd94a6c go1.25.1.darwin-arm64.tar.gz
d03cdcbc9bd8baf5cf028de390478e9e2b3e4d0afe5a6582dedc19bfe6a263b2 go1.25.1.linux-386.tar.gz
7716a0d940a0f6ae8e1f3b3f4f36299dc53e31b16840dbd171254312c41ca12e go1.25.1.linux-amd64.tar.gz
65a3e34fb2126f55b34e1edfc709121660e1be2dee6bdf405fc399a63a95a87d go1.25.1.linux-arm64.tar.gz
eb949be683e82a99e9861dafd7057e31ea40b161eae6c4cd18fdc0e8c4ae6225 go1.25.1.linux-armv6l.tar.gz
be13d5479b8c75438f2efcaa8c191fba3af684b3228abc9c99c7aa8502f34424 go1.25.1.windows-386.zip
4a974de310e7ee1d523d2fcedb114ba5fa75408c98eb3652023e55ccf3fa7cab go1.25.1.windows-amd64.zip
45ab4290adbd6ee9e7f18f0d57eaa9008fdbef590882778ed93eac3c8cca06c5 go1.25.1.aix-ppc64.tar.gz
2e3c1549bed3124763774d648f291ac42611232f48320ebbd23517c909c09b81 go1.25.1.dragonfly-amd64.tar.gz
dc0198dd4ec520e13f26798def8750544edf6448d8e9c43fd2a814e4885932af go1.25.1.freebsd-386.tar.gz
c4f1a7e7b258406e6f3b677ecdbd97bbb23ff9c0d44be4eb238a07d360f69ac8 go1.25.1.freebsd-amd64.tar.gz
7772fc5ff71ed39297ec0c1599fc54e399642c9b848eac989601040923b0de9c go1.25.1.freebsd-arm.tar.gz
5bb011d5d5b6218b12189f07aa0be618ab2002662fff1ca40afba7389735c207 go1.25.1.freebsd-arm64.tar.gz
ccac716240cb049bebfafcb7eebc3758512178a4c51fc26da9cc032035d850c8 go1.25.1.freebsd-riscv64.tar.gz
cc53910ffb9fcfdd988a9fa25b5423bae1cfa01b19616be646700e1f5453b466 go1.25.1.illumos-amd64.tar.gz
efe809f923bcedab44bf7be2b3af8d182b512b1bf9c07d302e0c45d26c8f56f3 go1.25.1.linux-loong64.tar.gz
c0de33679f6ed68991dc42dc4a602e74a666e3e166c1748ee1b5d1a7ea2ffbb2 go1.25.1.linux-mips.tar.gz
c270f7b0c0bdfbcd54fef4481227c40d41bb518f9ae38ee930870f04a0a6a589 go1.25.1.linux-mips64.tar.gz
80be871ba9c944f34d1868cdf5047e1cf2e1289fe08cdb90e2453d2f0d6965ae go1.25.1.linux-mips64le.tar.gz
9f09defa9bb22ebf2cde76162f40958564e57ce5c2b3649bc063bebcbc9294c1 go1.25.1.linux-mipsle.tar.gz
2c76b7d278c1d43ad19d478ad3f0f05e7b782b64b90870701b314fa48b5f43c6 go1.25.1.linux-ppc64.tar.gz
8b0c8d3ee5b1b5c28b6bd63dc4438792012e01d03b4bf7a61d985c87edab7d1f go1.25.1.linux-ppc64le.tar.gz
22fe934a9d0c9c57275716c55b92d46ebd887cec3177c9140705efa9f84ba1e2 go1.25.1.linux-riscv64.tar.gz
9cfe517ba423f59f3738ca5c3d907c103253cffbbcc2987142f79c5de8c1bf93 go1.25.1.linux-s390x.tar.gz
6af8a08353e76205d5b743dd7a3f0126684f96f62be0a31b75daf9837e512c46 go1.25.1.netbsd-386.tar.gz
e5d534ff362edb1bd8c8e10892b6a027c4c1482454245d1529167676498684c7 go1.25.1.netbsd-amd64.tar.gz
88bcf39254fdcea6a199c1c27d787831b652427ce60851ae9e41a3d7eb477f45 go1.25.1.netbsd-arm.tar.gz
d7c2eabe1d04ee47bcaea2816fdd90dbd25d90d4dfa756faa9786c788e4f3a4e go1.25.1.netbsd-arm64.tar.gz
14a2845977eb4dde11d929858c437a043467c427db87899935e90cee04a38d72 go1.25.1.openbsd-386.tar.gz
d27ac54b38a13a09c81e67c82ac70d387037341c85c3399291c73e13e83fdd8c go1.25.1.openbsd-amd64.tar.gz
0f4ab5f02500afa4befd51fed1e8b45e4d07ca050f641cc3acc76eaa4027b2c3 go1.25.1.openbsd-arm.tar.gz
d46c3bd156843656f7f3cb0dec27ea51cd926ec3f7b80744bf8156e67c1c812f go1.25.1.openbsd-arm64.tar.gz
c550514c67f22e409be10e40eace761e2e43069f4ef086ae6e60aac736c2b679 go1.25.1.openbsd-ppc64.tar.gz
8a09a8714a2556eb13fc1f10b7ce2553fcea4971e3330fc3be0efd24aab45734 go1.25.1.openbsd-riscv64.tar.gz
b0e1fefaf0c7abd71f139a54eee9767944aff5f0bc9d69c968234804884e552f go1.25.1.plan9-386.tar.gz
e94732c94f149690aa0ab11c26090577211b4a988137cb2c03ec0b54e750402e go1.25.1.plan9-amd64.tar.gz
7eb80e9de1e817d9089a54e8c7c5c8d8ed9e5fb4d4a012fc0f18fc422a484f0c go1.25.1.plan9-arm.tar.gz
1261dfad7c4953c0ab90381bc1242dc54e394db7485c59349428d532b2273343 go1.25.1.solaris-amd64.tar.gz
04bc3c078e9e904c4d58d6ac2532a5bdd402bd36a9ff0b5949b3c5e6006a05ee go1.25.1.windows-arm64.zip
0ec9ef8ebcea097aac37decae9f09a7218b451cd96be7d6ed513d8e4bcf909cf go1.25.9.src.tar.gz
b9ede6378a8f8d3d22bf52e68beb69ef7abdb65929ab2456020383002da15846 go1.25.9.aix-ppc64.tar.gz
92cb78fba4796e218c1accb0ea0a214ef2094c382049a244ad6505505d015fbe go1.25.9.darwin-amd64.tar.gz
9528be7329b9770631a6bd09ca2f3a73ed7332bec01d87435e75e92d8f130363 go1.25.9.darwin-arm64.tar.gz
918e44a471c5524caa52f74185064240d5eb343aa8023d604776511fc7adffa6 go1.25.9.dragonfly-amd64.tar.gz
2d67dbdfd09c6fcaa0e64485367ef43b8837ea200c663d6417183237bcddf83d go1.25.9.freebsd-386.tar.gz
9152d0c0badbfeb0c0e148e47c12bec28099d8cf2db60958810c879e0b679d07 go1.25.9.freebsd-amd64.tar.gz
437dca59604ad4a806a6a88e3d7ec1cd98ac9b402a3671629f4e553dd8b9888f go1.25.9.freebsd-arm.tar.gz
4c0fe53977412036fc8081e8d0992bbaabe4d3e1926137271ba11c2f5753300f go1.25.9.freebsd-arm64.tar.gz
d6087cdd1c084bd186132f29e0d032852a745f3c7619003d0fd5612c1fa58c8a go1.25.9.freebsd-riscv64.tar.gz
f82e49037e195cb62beae6a6ad83497157b2af5a01bad2f1dcb65df41080aabb go1.25.9.illumos-amd64.tar.gz
1e14a73bc2b19e370e0d4c57ba87aabfe8aef1e435e14d246742d48a13254f36 go1.25.9.linux-386.tar.gz
00859d7bd6defe8bf84d9db9e57b9a4467b2887c18cd93ae7460e713db774bc1 go1.25.9.linux-amd64.tar.gz
ec342e7389b7f489564ed5463c63b16cf8040023dabc7861256677165a8c0e2b go1.25.9.linux-arm64.tar.gz
7d4f0d266d871301e08ef4ac31c56e66048688893b2848392e5c600276351ee8 go1.25.9.linux-armv6l.tar.gz
f3460d901a14496bc609636e4accf9110ee1869d41c64af7e29cd567cffcf49b go1.25.9.linux-loong64.tar.gz
1da96ea449382ff96c09c55cee74815324e01d687d5ac6d2ade58244b8574306 go1.25.9.linux-mips.tar.gz
311a7f5f01f9a4bd51288b575eb619dc8e28e1fbc0cd78256a428b3ca668ff01 go1.25.9.linux-mips64.tar.gz
0b4edaf9e2ba3f0a079547effda70ec6a4b51a6ca3271a1147652c87ebcf3735 go1.25.9.linux-mips64le.tar.gz
42667340df264896f20b12261429d954e736e9772ab83ba289e68c30cf6f9628 go1.25.9.linux-mipsle.tar.gz
b9cbb3a4894b5aca6966c23452608435e8535278ef019b18d8898fbbfab67e74 go1.25.9.linux-ppc64.tar.gz
b0c41c7da1fc8d39020d65296a0dc54167afd9f76d67064e22c31ce3d839a739 go1.25.9.linux-ppc64le.tar.gz
2a630be8f854177c13e5fa75f7812c721369ecb9bd6e4c0fb1bd1c708d08b37c go1.25.9.linux-riscv64.tar.gz
0cf55136ac7eaccfc36d849054f849510ea289c2d959ffbed7b3866b4f484d17 go1.25.9.linux-s390x.tar.gz
eaf8167ff10a6a3e5dd304ef5f2e020b3a7379e76fa1011dc49c895800bf367c go1.25.9.netbsd-386.tar.gz
3cc6a861e62e23feae660984e0f2f14a2efb5d1f655900afee1d51af98919ae4 go1.25.9.netbsd-amd64.tar.gz
c2c44dca10e882c30553f4aa2ab8f6722b670fb12882378c8f461a9105d40188 go1.25.9.netbsd-arm.tar.gz
f301b71a8ec448053a5d2597df2e178120204bc9a33266c81600dd5d020a61b4 go1.25.9.netbsd-arm64.tar.gz
c4543b7fdef9707b4896810c69b4160a43ecec210af45c300f3abd78aa0c9e72 go1.25.9.openbsd-386.tar.gz
37275325e314f5ab7cf8ae65c4efc7cbfdaf20b41c6849549739b57a3ac97544 go1.25.9.openbsd-amd64.tar.gz
f9c05b6b315e979ecdd47354dd287c01708d6a88dc6ae7af74c84df8fa00df94 go1.25.9.openbsd-arm.tar.gz
4e999f42cf959ff95ca84af1ea1db3771000f5e57e157904bc2ffc72c75e29a2 go1.25.9.openbsd-arm64.tar.gz
0c7fa6c7c2b1cc13ad32fa94fc31273b4adf39c1e0f0e5dcedac158ff526af3f go1.25.9.openbsd-ppc64.tar.gz
347b33953a4b6e8df17719296f360f60878fe48a2d482ceb3637a3dfd4950065 go1.25.9.openbsd-riscv64.tar.gz
889f77d567c06832e0d332fe2458653dc66d43cded7ddbca6f72ce0ca60029cc go1.25.9.plan9-386.tar.gz
978b1f931fadec2f2516237d2649ee845d93c8eaf47dd196cfd8d26c7b2706a1 go1.25.9.plan9-amd64.tar.gz
30b9565e5ad0a212fe00990ead700c751b416eb2ef8d7c91a204945a7ff83a48 go1.25.9.plan9-arm.tar.gz
9e9125ff84ab3c3522ec758cab9540a17e9cba12bfcc34b6bf556cb89b522591 go1.25.9.solaris-amd64.tar.gz
bf40515f5f4d834fa9ead31ff75581e61a38ac27bf49840b95c5c998d321c0f6 go1.25.9.windows-386.zip
a7a710e225467b34e9e09fb432b829c86c9b2da5821ee5418f7eb2e8ae1a22cc go1.25.9.windows-amd64.zip
33cd73cf1b3ceee655ef71bc96e94006c02ae3c617fdd67ac9be3dfae3957449 go1.25.9.windows-arm64.zip
# version:golangci 2.4.0
# version:golangci 2.10.1
# https://github.com/golangci/golangci-lint/releases/
# https://github.com/golangci/golangci-lint/releases/download/v2.4.0/
7904ce63f79db44934939cf7a063086ea0ea98e9b19eba0a9d52ccdd0d21951c golangci-lint-2.4.0-darwin-amd64.tar.gz
cd4dd53fa09b6646baff5fd22b8c64d91db02c21c7496df27992d75d34feec59 golangci-lint-2.4.0-darwin-arm64.tar.gz
d58f426ebe14cc257e81562b4bf37a488ffb4ffbbb3ec73041eb3b38bb25c0e1 golangci-lint-2.4.0-freebsd-386.tar.gz
6ec4a6177fc6c0dd541fbcb3a7612845266d020d35cc6fa92959220cdf64ca39 golangci-lint-2.4.0-freebsd-amd64.tar.gz
4d473e3e71c01feaa915a0604fb35758b41284fb976cdeac3f842118d9ee7e17 golangci-lint-2.4.0-freebsd-armv6.tar.gz
58727746c6530801a3f9a702a5945556a5eb7e88809222536dd9f9d54cafaeff golangci-lint-2.4.0-freebsd-armv7.tar.gz
fbf28c662760e24c32f82f8d16dffdb4a82de7726a52ba1fad94f890c22997ea golangci-lint-2.4.0-illumos-amd64.tar.gz
a15a000a8981ef665e971e0f67e2acda9066a9e37a59344393b7351d8fb49c81 golangci-lint-2.4.0-linux-386.tar.gz
fae792524c04424c0ac369f5b8076f04b45cf29fc945a370e55d369a8dc11840 golangci-lint-2.4.0-linux-amd64.tar.gz
70ac11f55b80ec78fd3a879249cc9255121b8dfd7f7ed4fc46ed137f4abf17e7 golangci-lint-2.4.0-linux-arm64.tar.gz
4acdc40e5cebe99e4e7ced358a05b2e71789f409b41cb4f39bbb86ccfa14b1dc golangci-lint-2.4.0-linux-armv6.tar.gz
2a68749568fa22b4a97cb88dbea655595563c795076536aa6c087f7968784bf3 golangci-lint-2.4.0-linux-armv7.tar.gz
9e3369afb023711036dcb0b4f45c9fe2792af962fa1df050c9f6ac101a6c5d73 golangci-lint-2.4.0-linux-loong64.tar.gz
bb9143d6329be2c4dbfffef9564078e7da7d88e7dde6c829b6263d98e072229e golangci-lint-2.4.0-linux-mips64.tar.gz
5ad1765b40d56cd04d4afd805b3ba6f4bfd9b36181da93c31e9b17e483d8608d golangci-lint-2.4.0-linux-mips64le.tar.gz
918936fb9c0d5ba96bef03cf4348b03938634cfcced49be1e9bb29cb5094fa73 golangci-lint-2.4.0-linux-ppc64le.tar.gz
f7474c638e1fb67ebbdc654b55ca0125377ea0bc88e8fee8d964a4f24eacf828 golangci-lint-2.4.0-linux-riscv64.tar.gz
b617a9543997c8bfceaffa88a75d4e595030c6add69fba800c1e4d8f5fe253dd golangci-lint-2.4.0-linux-s390x.tar.gz
7db027b03a9ba328f795215b04f594036837bc7dd0dd7cd16776b02a6167981c golangci-lint-2.4.0-netbsd-386.tar.gz
52d8f9393f4313df0a62b752c37775e3af0b818e43e8dd28954351542d7c60bc golangci-lint-2.4.0-netbsd-amd64.tar.gz
5c0086027fb5a4af3829e530c8115db4b35d11afe1914322eef528eb8cd38c69 golangci-lint-2.4.0-netbsd-arm64.tar.gz
6b779d6ed1aed87cefe195cc11759902b97a76551b593312c6833f2635a3488f golangci-lint-2.4.0-netbsd-armv6.tar.gz
f00d1f4b7ec3468a0f9fffd0d9ea036248b029b7621cbc9a59c449ef94356d09 golangci-lint-2.4.0-netbsd-armv7.tar.gz
3ce671b0b42b58e35066493aab75a7e2826c9e079988f1ba5d814a4029faaf87 golangci-lint-2.4.0-windows-386.zip
003112f7a56746feaabf20b744054bf9acdf900c9e77176383623c4b1d76aaa9 golangci-lint-2.4.0-windows-amd64.zip
dc0c2092af5d47fc2cd31a1dfe7b4c7e765fab22de98bd21ef2ffcc53ad9f54f golangci-lint-2.4.0-windows-arm64.zip
0263d23e20a260cb1592d35e12a388f99efe2c51b3611fdc66fbd9db1fce664d golangci-lint-2.4.0-windows-armv6.zip
9403c03bf648e6313036e0273149d44bad1b9ad53889b6d00e4ccb842ba3c058 golangci-lint-2.4.0-windows-armv7.zip
# https://github.com/golangci/golangci-lint/releases/download/v2.10.1
66fb0da81b8033b477f97eea420d4b46b230ca172b8bb87c6610109f3772b6b6 golangci-lint-2.10.1-darwin-amd64.tar.gz
03bfadf67e52b441b7ec21305e501c717df93c959836d66c7f97312654acb297 golangci-lint-2.10.1-darwin-arm64.tar.gz
c9a44658ccc8f7b8dbbd4ae6020ba91c1a5d3987f4d91ced0f7d2bea013e57ca golangci-lint-2.10.1-freebsd-386.tar.gz
a513c5cb4e0f5bd5767001af9d5e97e7868cfc2d9c46739a4df93e713cfb24af golangci-lint-2.10.1-freebsd-amd64.tar.gz
2ef38eefc4b5cee2febacb75a30579526e5656c16338a921d80e59a8e87d4425 golangci-lint-2.10.1-freebsd-arm64.tar.gz
8fea6766318b4829e766bbe325f10191d75297dcc44ae35bf374816037878e38 golangci-lint-2.10.1-freebsd-armv6.tar.gz
30b629870574d6254f3e8804e5a74b34f98e1263c9d55465830d739c88b862ed golangci-lint-2.10.1-freebsd-armv7.tar.gz
c0db839f866ce80b1b6c96167aa101cfe50d9c936f42d942a3c1cbdc1801af68 golangci-lint-2.10.1-illumos-amd64.tar.gz
280eb56636e9175f671cd7b755d7d67f628ae2ed00a164d1e443c43c112034e5 golangci-lint-2.10.1-linux-386.deb
065a7d99da61dc7dfbfef2e2d7053dd3fa6672598f2747117aa4bb5f45e7df7f golangci-lint-2.10.1-linux-386.rpm
a55918c03bb413b2662287653ab2ae2fef4e37428b247dad6348724adde9d770 golangci-lint-2.10.1-linux-386.tar.gz
8aa9b3aa14f39745eeb7fc7ff50bcac683e785397d1e4bc9afd2184b12c4ce86 golangci-lint-2.10.1-linux-amd64.deb
62a111688e9e305032334a2cbc84f4d971b64bb3bffc99d3f80081d57fb25e32 golangci-lint-2.10.1-linux-amd64.rpm
dfa775874cf0561b404a02a8f4481fc69b28091da95aa697259820d429b09c99 golangci-lint-2.10.1-linux-amd64.tar.gz
b3f36937e8ea1660739dc0f5c892ea59c9c21ed4e75a91a25957c561f7f79a55 golangci-lint-2.10.1-linux-arm64.deb
36d50314d53683b1f1a2a6cedfb5a9468451b481c64ab9e97a8e843ea088074d golangci-lint-2.10.1-linux-arm64.rpm
6652b42ae02915eb2f9cb2a2e0cac99514c8eded8388d88ae3e06e1a52c00de8 golangci-lint-2.10.1-linux-arm64.tar.gz
a32d8d318e803496812dd3461f250e52ccc7f53c47b95ce404a9cf55778ceb6a golangci-lint-2.10.1-linux-armv6.deb
41d065f4c8ea165a1531abea644988ee2e973e4f0b49f9725ed3b979dac45112 golangci-lint-2.10.1-linux-armv6.rpm
59159a4df03aabbde69d15c7b7b3df143363cbb41f4bd4b200caffb8e34fb734 golangci-lint-2.10.1-linux-armv6.tar.gz
b2e8ec0e050a1e2251dfe1561434999d202f5a3f9fa47ce94378b0fd1662ea5a golangci-lint-2.10.1-linux-armv7.deb
28c9331429a497da27e9c77846063bd0e8275e878ffedb4eb9e9f21d24771cc0 golangci-lint-2.10.1-linux-armv7.rpm
818f33e95b273e3769284b25563b51ef6a294e9e25acf140fda5830c075a1a59 golangci-lint-2.10.1-linux-armv7.tar.gz
6b6b85ed4b7c27f51097dd681523000409dde835e86e6e314e87be4bb013e2ab golangci-lint-2.10.1-linux-loong64.deb
94050a0cf06169e2ae44afb307dcaafa7d7c3b38c0c23b5652cf9cb60f0c337f golangci-lint-2.10.1-linux-loong64.rpm
25820300fccb8c961c1cdcb1f77928040c079e04c43a3a5ceb34b1cb4a1c5c8d golangci-lint-2.10.1-linux-loong64.tar.gz
98bf39d10139fdcaa37f94950e9bbb8888660ae468847ae0bf1cb5bf67c1f68b golangci-lint-2.10.1-linux-mips64.deb
df3ce5f03808dcceaa8b683d1d06e95c885f09b59dc8e15deb840fbe2b3e3299 golangci-lint-2.10.1-linux-mips64.rpm
972508dda523067e6e6a1c8e6609d63bc7c4153819c11b947d439235cf17bac2 golangci-lint-2.10.1-linux-mips64.tar.gz
1d37f2919e183b5bf8b1777ed8c4b163d3b491d0158355a7999d647655cbbeb6 golangci-lint-2.10.1-linux-mips64le.deb
e341d031002cd09a416329ed40f674231051a38544b8f94deb2d1708ce1f4a6f golangci-lint-2.10.1-linux-mips64le.rpm
393560122b9cb5538df0c357d30eb27b6ee563533fbb9b138c8db4fd264002af golangci-lint-2.10.1-linux-mips64le.tar.gz
21ca46b6a96442e8957677a3ca059c6b93674a68a01b1c71f4e5df0ea2e96d19 golangci-lint-2.10.1-linux-ppc64le.deb
57fe0cbca0a9bbdf1547c5e8aa7d278e6896b438d72a541bae6bc62c38b43d1e golangci-lint-2.10.1-linux-ppc64le.rpm
e2883db9fa51584e5e203c64456f29993550a7faadc84e3faccdb48f0669992e golangci-lint-2.10.1-linux-ppc64le.tar.gz
aa6da0e98ab0ba3bb7582e112174c349907d5edfeff90a551dca3c6eecf92fc0 golangci-lint-2.10.1-linux-riscv64.deb
3c68d76cd884a7aad206223a980b9c20bb9ea74b560fa27ed02baf2389189234 golangci-lint-2.10.1-linux-riscv64.rpm
3bca11bfac4197205639cbd4676a5415054e629ac6c12ea10fcbe33ef852d9c3 golangci-lint-2.10.1-linux-riscv64.tar.gz
0c6aed2ce49db2586adbac72c80d871f06feb1caf4c0763a5ca98fec809a8f0b golangci-lint-2.10.1-linux-s390x.deb
16c285adfe1061d69dd8e503be69f87c7202857c6f4add74ac02e3571158fbec golangci-lint-2.10.1-linux-s390x.rpm
21011ad368eb04f024201b832095c6b5f96d0888de194cca5bfe4d9307d6364b golangci-lint-2.10.1-linux-s390x.tar.gz
7b5191e77a70485918712e31ed55159956323e4911bab1b67569c9d86e1b75eb golangci-lint-2.10.1-netbsd-386.tar.gz
07801fd38d293ebad10826f8285525a39ea91ce5ddad77d05bfa90bda9c884a9 golangci-lint-2.10.1-netbsd-amd64.tar.gz
7e7219d71c1bf33b98c328c93dc0560706dd896a1c43c44696e5222fc9d7446e golangci-lint-2.10.1-netbsd-arm64.tar.gz
92fbc90b9eec0e572269b0f5492a2895c426b086a68372fde49b7e4d4020863e golangci-lint-2.10.1-netbsd-armv6.tar.gz
f67b3ae1f47caeefa507a4ebb0c8336958a19011fe48766443212030f75d004b golangci-lint-2.10.1-netbsd-armv7.tar.gz
a40bc091c10cea84eaee1a90b84b65f5e8652113b0a600bb099e4e4d9d7caddb golangci-lint-2.10.1-windows-386.zip
c60c87695e79db8e320f0e5be885059859de52bb5ee5f11be5577828570bc2a3 golangci-lint-2.10.1-windows-amd64.zip
636ab790c8dcea8034aa34aba6031ca3893d68f7eda000460ab534341fadbab1 golangci-lint-2.10.1-windows-arm64.zip
# This is the builder on PPA that will build Go itself (inception-y), don't modify!
#

View file

@ -107,17 +107,21 @@ var (
Tags: "ziren",
Env: map[string]string{"GOMIPS": "softfloat", "CGO_ENABLED": "0"},
},
{
Name: "womir",
GOOS: "wasip1",
GOARCH: "wasm",
Tags: "womir",
},
{
Name: "wasm-js",
GOOS: "js",
GOARCH: "wasm",
Tags: "example",
},
{
Name: "wasm-wasi",
GOOS: "wasip1",
GOARCH: "wasm",
Tags: "example",
},
{
Name: "example",
@ -163,13 +167,11 @@ var (
// Distros for which packages are created
debDistros = []string{
"xenial", // 16.04, EOL: 04/2026
"bionic", // 18.04, EOL: 04/2028
"focal", // 20.04, EOL: 04/2030
"jammy", // 22.04, EOL: 04/2032
"noble", // 24.04, EOL: 04/2034
"oracular", // 24.10, EOL: 07/2025
"plucky", // 25.04, EOL: 01/2026
"xenial", // 16.04, EOL: 04/2026
"bionic", // 18.04, EOL: 04/2028
"focal", // 20.04, EOL: 04/2030
"jammy", // 22.04, EOL: 04/2032
"noble", // 24.04, EOL: 04/2034
}
// This is where the tests should be unpacked.
@ -307,7 +309,7 @@ func doInstallKeeper(cmdline []string) {
args := slices.Clone(gobuild.Args)
args = append(args, "-o", executablePath(outputName))
args = append(args, ".")
build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env})
build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env, Dir: gobuild.Dir})
}
}
@ -1198,7 +1200,7 @@ func doWindowsInstaller(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
signify = flag.String("signify key", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
)

View file

@ -51,6 +51,12 @@ type Chain struct {
state map[common.Address]state.DumpAccount // state of head block
senders map[common.Address]*senderInfo
config *params.ChainConfig
txInfo txInfo
}
type txInfo struct {
LargeReceiptBlock *uint64 `json:"tx-largereceipt"`
}
// NewChain takes the given chain.rlp file, and decodes and returns
@ -74,12 +80,20 @@ func NewChain(dir string) (*Chain, error) {
if err != nil {
return nil, err
}
var txInfo txInfo
err = common.LoadJSON(filepath.Join(dir, "txinfo.json"), &txInfo)
if err != nil {
return nil, err
}
return &Chain{
genesis: gen,
blocks: blocks,
state: state,
senders: accounts,
config: gen.Config,
txInfo: txInfo,
}, nil
}

View file

@ -66,9 +66,10 @@ func (s *Suite) dialAs(key *ecdsa.PrivateKey) (*Conn, error) {
return nil, err
}
conn.caps = []p2p.Cap{
{Name: "eth", Version: 70},
{Name: "eth", Version: 69},
}
conn.ourHighestProtoVersion = 69
conn.ourHighestProtoVersion = 70
return &conn, nil
}
@ -155,7 +156,7 @@ func (c *Conn) ReadEth() (any, error) {
var msg any
switch int(code) {
case eth.StatusMsg:
msg = new(eth.StatusPacket69)
msg = new(eth.StatusPacket)
case eth.GetBlockHeadersMsg:
msg = new(eth.GetBlockHeadersPacket)
case eth.BlockHeadersMsg:
@ -164,10 +165,6 @@ func (c *Conn) ReadEth() (any, error) {
msg = new(eth.GetBlockBodiesPacket)
case eth.BlockBodiesMsg:
msg = new(eth.BlockBodiesPacket)
case eth.NewBlockMsg:
msg = new(eth.NewBlockPacket)
case eth.NewBlockHashesMsg:
msg = new(eth.NewBlockHashesPacket)
case eth.TransactionsMsg:
msg = new(eth.TransactionsPacket)
case eth.NewPooledTransactionHashesMsg:
@ -229,7 +226,7 @@ func (c *Conn) ReadSnap() (any, error) {
}
// dialAndPeer creates a peer connection and runs the handshake.
func (s *Suite) dialAndPeer(status *eth.StatusPacket69) (*Conn, error) {
func (s *Suite) dialAndPeer(status *eth.StatusPacket) (*Conn, error) {
c, err := s.dial()
if err != nil {
return nil, err
@ -242,7 +239,7 @@ func (s *Suite) dialAndPeer(status *eth.StatusPacket69) (*Conn, error) {
// peer performs both the protocol handshake and the status message
// exchange with the node in order to peer with it.
func (c *Conn) peer(chain *Chain, status *eth.StatusPacket69) error {
func (c *Conn) peer(chain *Chain, status *eth.StatusPacket) error {
if err := c.handshake(); err != nil {
return fmt.Errorf("handshake failed: %v", err)
}
@ -315,7 +312,7 @@ func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
}
// statusExchange performs a `Status` message exchange with the given node.
func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket69) error {
func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket) error {
loop:
for {
code, data, err := c.Read()
@ -324,7 +321,7 @@ loop:
}
switch code {
case eth.StatusMsg + protoOffset(ethProto):
msg := new(eth.StatusPacket69)
msg := new(eth.StatusPacket)
if err := rlp.DecodeBytes(data, &msg); err != nil {
return fmt.Errorf("error decoding status packet: %w", err)
}
@ -339,10 +336,12 @@ loop:
if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) {
return fmt.Errorf("wrong fork ID in status: have %v, want %v", have, want)
}
if have, want := msg.ProtocolVersion, c.ourHighestProtoVersion; have != uint32(want) {
return fmt.Errorf("wrong protocol version: have %v, want %v", have, want)
for _, cap := range c.caps {
if cap.Name == "eth" && cap.Version == uint(msg.ProtocolVersion) {
break loop
}
}
break loop
return fmt.Errorf("wrong protocol version: have %v, want %v", msg.ProtocolVersion, c.caps)
case discMsg:
var msg []p2p.DiscReason
if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
@ -363,7 +362,7 @@ loop:
}
if status == nil {
// default status message
status = &eth.StatusPacket69{
status = &eth.StatusPacket{
ProtocolVersion: uint32(c.negotiatedProtoVersion),
NetworkID: chain.config.ChainID.Uint64(),
Genesis: chain.blocks[0].Hash(),
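
The statusExchange change above relaxes the version check: instead of requiring the peer to report our single highest protocol version, any version we advertised in our capability list is accepted. A self-contained sketch of that acceptance rule (ethCap stands in for p2p.Cap):

```go
package main

import "fmt"

// ethCap mirrors the shape of p2p.Cap for illustration.
type ethCap struct {
	Name    string
	Version uint
}

// acceptVersion sketches the relaxed check: the reported protocol version
// passes if it matches any advertised "eth" capability, not just the
// highest one.
func acceptVersion(caps []ethCap, reported uint32) bool {
	for _, c := range caps {
		if c.Name == "eth" && c.Version == uint(reported) {
			return true
		}
	}
	return false
}

func main() {
	caps := []ethCap{{Name: "eth", Version: 69}}
	fmt.Println(acceptVersion(caps, 69)) // true
	fmt.Println(acceptVersion(caps, 70)) // false: not advertised
}
```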

View file

@ -87,9 +87,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root,
startingHash: zero,
limitHash: ffHash,
expAccounts: 67,
expAccounts: 68,
expFirst: firstKey,
expLast: common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"),
expLast: common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
desc: "In this test, we request the entire state range, but limit the response to 4000 bytes.",
},
{
@ -97,9 +97,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root,
startingHash: zero,
limitHash: ffHash,
expAccounts: 49,
expAccounts: 50,
expFirst: firstKey,
expLast: common.HexToHash("0x445cb5c1278fdce2f9cbdb681bdd76c52f8e50e41dbd9e220242a69ba99ac099"),
expLast: common.HexToHash("0x4615e5f5df5b25349a00ad313c6cd0436b6c08ee5826e33a018661997f85ebaa"),
desc: "In this test, we request the entire state range, but limit the response to 3000 bytes.",
},
{
@ -107,9 +107,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
root: root,
startingHash: zero,
limitHash: ffHash,
expAccounts: 34,
expAccounts: 35,
expFirst: firstKey,
expLast: common.HexToHash("0x2ef46ebd2073cecde499c2e8df028ad79a26d57bfaa812c4c6f7eb4c9617b913"),
expLast: common.HexToHash("0x2de4bdbddcfbb9c3e195dae6b45f9c38daff897e926764bf34887fb0db5c3284"),
desc: "In this test, we request the entire state range, but limit the response to 2000 bytes.",
},
{
@ -178,9 +178,9 @@ The server should return the first available account.`,
root: root,
startingHash: firstKey,
limitHash: ffHash,
expAccounts: 67,
expAccounts: 68,
expFirst: firstKey,
expLast: common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"),
expLast: common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
desc: `In this test, startingHash is exactly the first available account key.
The server should return the first available account of the state as the first item.`,
},
@ -189,9 +189,9 @@ The server should return the first available account of the state as the first i
root: root,
startingHash: hashAdd(firstKey, 1),
limitHash: ffHash,
expAccounts: 67,
expAccounts: 68,
expFirst: secondKey,
expLast: common.HexToHash("0x66192e4c757fba1cdc776e6737008f42d50370d3cd801db3624274283bf7cd63"),
expLast: common.HexToHash("0x59a7c8818f1c16b298a054020dc7c3f403a970d1d1db33f9478b1c36e3a2e509"),
desc: `In this test, startingHash is after the first available key.
The server should return the second account of the state as the first item.`,
},
@ -227,9 +227,9 @@ server to return no data because genesis is older than 127 blocks.`,
root: s.chain.RootAt(int(s.chain.Head().Number().Uint64()) - 127),
startingHash: zero,
limitHash: ffHash,
expAccounts: 66,
expAccounts: 68,
expFirst: firstKey,
expLast: common.HexToHash("0x729953a43ed6c913df957172680a17e5735143ad767bda8f58ac84ec62fbec5e"),
expLast: common.HexToHash("0x683b6c03cc32afe5db8cb96050f711fdaff8f8ff44c7587a9a848f921d02815e"),
desc: `This test requests data at a state root that is 127 blocks old.
We expect the server to have this state available.`,
},
@ -658,8 +658,8 @@ The server should reject the request.`,
// It's a bit unfortunate these are hard-coded, but the result depends on
// a lot of aspects of the state trie and can't be guessed in a simple
// way. So you'll have to update this when the test chain is changed.
common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"),
common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"),
common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
@ -679,8 +679,8 @@ The server should reject the request.`,
// be updated when the test chain is changed.
expHashes: []common.Hash{
empty,
common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"),
common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"),
common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
},
},


@ -35,6 +35,7 @@ import (
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/holiman/uint256"
)
@ -83,6 +84,7 @@ func (s *Suite) EthTests() []utesting.Test {
// get history
{Name: "GetBlockBodies", Fn: s.TestGetBlockBodies},
{Name: "GetReceipts", Fn: s.TestGetReceipts},
{Name: "GetLargeReceipts", Fn: s.TestGetLargeReceipts},
// test transactions
{Name: "LargeTxRequest", Fn: s.TestLargeTxRequest, Slow: true},
{Name: "Transaction", Fn: s.TestTransaction},
@ -429,6 +431,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
// Find some blocks containing receipts.
var hashes = make([]common.Hash, 0, 3)
for i := range s.chain.Len() {
if s.chain.txInfo.LargeReceiptBlock != nil && uint64(i) == *s.chain.txInfo.LargeReceiptBlock {
continue
}
block := s.chain.GetBlock(i)
if len(block.Transactions()) > 0 {
hashes = append(hashes, block.Hash())
@ -437,25 +442,121 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
break
}
}
if conn.negotiatedProtoVersion < eth.ETH70 {
// Create block bodies request.
req := &eth.GetReceiptsPacket69{
RequestId: 66,
GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Wait for response.
resp := new(eth.ReceiptsPacket69)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block receipts msg: %v", err)
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
}
if resp.List.Len() != len(req.GetReceiptsRequest) {
t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
}
} else {
// Create block bodies request.
req := &eth.GetReceiptsPacket70{
RequestId: 66,
FirstBlockReceiptIndex: 0,
GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Wait for response.
resp := new(eth.ReceiptsPacket70)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block receipts msg: %v", err)
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
}
if resp.List.Len() != len(req.GetReceiptsRequest) {
t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
}
}
}
// Create receipts request.
req := &eth.GetReceiptsPacket{
RequestId: 66,
GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
func (s *Suite) TestGetLargeReceipts(t *utesting.T) {
t.Log(`This test sends GetReceipts requests to the node for a large receipt payload (>10MiB) in the test chain.
This test is meaningful only if the client supports protocol version ETH70 or higher
and LargeReceiptBlock is configured in txInfo.json.`)
conn, err := s.dialAndPeer(nil)
if err != nil {
t.Fatalf("peering failed: %v", err)
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
defer conn.Close()
if conn.negotiatedProtoVersion < eth.ETH70 || s.chain.txInfo.LargeReceiptBlock == nil {
return
}
// Wait for response.
resp := new(eth.ReceiptsPacket[*eth.ReceiptList69])
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block bodies msg: %v", err)
// Find block with large receipt.
// Place the large receipt block hash in the middle of the query
start := max(int(*s.chain.txInfo.LargeReceiptBlock)-2, 0)
end := min(*s.chain.txInfo.LargeReceiptBlock+2, uint64(len(s.chain.blocks)))
var blocks []common.Hash
var receiptHashes []common.Hash
var receipts []*eth.ReceiptList
for i := uint64(start); i < end; i++ {
block := s.chain.GetBlock(int(i))
blocks = append(blocks, block.Hash())
receiptHashes = append(receiptHashes, block.Header().ReceiptHash)
receipts = append(receipts, &eth.ReceiptList{})
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
incomplete := false
lastBlock := 0
for incomplete || lastBlock != len(blocks)-1 {
// Create get receipt request.
req := &eth.GetReceiptsPacket70{
RequestId: 66,
FirstBlockReceiptIndex: uint64(receipts[lastBlock].Derivable().Len()),
GetReceiptsRequest: blocks[lastBlock:],
}
if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Wait for response.
resp := new(eth.ReceiptsPacket70)
if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
t.Fatalf("error reading block receipts msg: %v", err)
}
if got, want := resp.RequestId, req.RequestId; got != want {
t.Fatalf("unexpected request id in response: got %d, want %d", got, want)
}
receiptLists, _ := resp.List.Items()
for i, rc := range receiptLists {
receipts[lastBlock+i].Append(rc)
}
lastBlock += len(receiptLists) - 1
incomplete = resp.LastBlockIncomplete
}
if resp.List.Len() != len(req.GetReceiptsRequest) {
t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
hasher := trie.NewStackTrie(nil)
hashes := make([]common.Hash, len(receipts))
for i := range receipts {
hashes[i] = types.DeriveSha(receipts[i].Derivable(), hasher)
}
for i, hash := range hashes {
if receiptHashes[i] != hash {
t.Fatalf("wrong receipt root: want %x, got %x", receiptHashes[i], hash)
}
}
}

Binary file not shown.


@ -37,7 +37,7 @@
"nonce": "0x0",
"timestamp": "0x0",
"extraData": "0x68697665636861696e",
"gasLimit": "0x23f3e20",
"gasLimit": "0x11e1a300",
"difficulty": "0x20000",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
@ -119,6 +119,10 @@
"balance": "0x1",
"nonce": "0x1"
},
"8dcd17433742f4c0ca53122ab541d0ba67fc27ff": {
"code": "0x6202e6306000a0",
"balance": "0x0"
},
"c7b99a164efd027a93f147376cc7da7c67c6bbe0": {
"balance": "0xc097ce7bc90715b34b9f1000000000"
},


@ -1,24 +1,24 @@
{
"parentHash": "0x65151b101682b54cd08ba226f640c14c86176865ff9bfc57e0147dadaeac34bb",
"parentHash": "0x7e80093a491eba0e5b2c1895837902f64f514100221801318fe391e1e09c96a6",
"sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"miner": "0x0000000000000000000000000000000000000000",
"stateRoot": "0xce423ebc60fc7764a43f09f1fe3ae61eef25e3eb8d09b1108f7e7eb77dfff5e6",
"transactionsRoot": "0x7ec1ae3989efa75d7bcc766e5e2443afa8a89a5fda42ebba90050e7e702980f7",
"receiptsRoot": "0xfe160832b1ca85f38c6674cb0aae3a24693bc49be56e2ecdf3698b71a794de86",
"stateRoot": "0x8fcfb02cfca007773bd55bc1c3e50a3c8612a59c87ce057e5957e8bf17c1728b",
"transactionsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"receiptsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"difficulty": "0x0",
"number": "0x258",
"gasLimit": "0x23f3e20",
"gasUsed": "0x19d36",
"gasLimit": "0x11e1a300",
"gasUsed": "0x0",
"timestamp": "0x1770",
"extraData": "0x",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"nonce": "0x0000000000000000",
"baseFeePerGas": "0x7",
"withdrawalsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"withdrawalsRoot": "0x92abfda39de7df7d705c5a8f30386802ad59d31e782a06d5c5b0f9a260056cf0",
"blobGasUsed": "0x0",
"excessBlobGas": "0x0",
"parentBeaconBlockRoot": "0xf5003fc8f92358e790a114bce93ce1d9c283c85e1787f8d7d56714d3489b49e6",
"requestsHash": "0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"hash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0"
"hash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
}


@ -4,9 +4,9 @@
"method": "engine_forkchoiceUpdatedV3",
"params": [
{
"headBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0",
"safeBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0",
"finalizedBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0"
"headBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
"safeBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
"finalizedBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
},
null
]

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -52,7 +52,7 @@ func (s *Suite) AllTests() []utesting.Test {
{Name: "Ping", Fn: s.TestPing},
{Name: "PingLargeRequestID", Fn: s.TestPingLargeRequestID},
{Name: "PingMultiIP", Fn: s.TestPingMultiIP},
{Name: "PingHandshakeInterrupted", Fn: s.TestPingHandshakeInterrupted},
{Name: "HandshakeResend", Fn: s.TestHandshakeResend},
{Name: "TalkRequest", Fn: s.TestTalkRequest},
{Name: "FindnodeZeroDistance", Fn: s.TestFindnodeZeroDistance},
{Name: "FindnodeResults", Fn: s.TestFindnodeResults},
@ -158,22 +158,20 @@ the attempt from a different IP.`)
}
}
// TestPingHandshakeInterrupted starts a handshake, but doesn't finish it and sends a second ordinary message
// packet instead of a handshake message packet. The remote node should respond with
// another WHOAREYOU challenge for the second packet.
func (s *Suite) TestPingHandshakeInterrupted(t *utesting.T) {
t.Log(`TestPingHandshakeInterrupted starts a handshake, but doesn't finish it and sends a second ordinary message
packet instead of a handshake message packet. The remote node should respond with
another WHOAREYOU challenge for the second packet.`)
// TestHandshakeResend starts a handshake, but doesn't finish it and sends a second ordinary message
// packet instead of a handshake message packet. The remote node should repeat the previous WHOAREYOU
// challenge for the first PING.
func (s *Suite) TestHandshakeResend(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// First PING triggers challenge.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping, nil)
var challenge1 *v5wire.Whoareyou
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
challenge1 = resp
t.Logf("got WHOAREYOU for PING")
default:
t.Fatal("expected WHOAREYOU, got", resp)
@ -181,9 +179,16 @@ another WHOAREYOU challenge for the second packet.`)
// Send second PING.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
switch resp := conn.reqresp(l1, ping2).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping2, l1)
conn.write(l1, ping2, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
if resp.Nonce != challenge1.Nonce {
t.Fatalf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], challenge1.Nonce[:])
}
if !bytes.Equal(resp.ChallengeData, challenge1.ChallengeData) {
t.Fatalf("wrong ChallengeData %x in resent WHOAREYOU (want %x)", resp.ChallengeData, challenge1.ChallengeData)
}
resp.Node = conn.remote
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
@ -252,34 +257,50 @@ that they are returned by FINDNODE.`)
// Create bystanders.
nodes := make([]*bystander, 5)
added := make(chan enode.ID, len(nodes))
liveCh := make(chan enode.ID, len(nodes))
for i := range nodes {
nodes[i] = newBystander(t, s, added)
nodes[i] = newBystander(t, s, liveCh)
defer nodes[i].close()
}
// Get them added to the remote table.
// Prefill each bystander with the full bystander set so background FINDNODE
// lookups see useful routing data instead of empty responses.
known := make([]*enode.Node, 0, len(nodes))
for _, bn := range nodes {
known = append(known, bn.conn.localNode.Node())
}
for _, bn := range nodes {
bn.known = append([]*enode.Node(nil), known...)
}
// Wait until enough bystanders have actually become live, i.e. the remote node
// has revalidated them by sending PING and receiving our PONG.
requiredLiveNodes := len(nodes)
timeout := 60 * time.Second
timeoutCh := time.After(timeout)
for count := 0; count < len(nodes); {
liveSet := make(map[enode.ID]*enode.Node)
for len(liveSet) < requiredLiveNodes {
select {
case id := <-added:
t.Logf("bystander node %v added to remote table", id)
count++
case id := <-liveCh:
for _, bn := range nodes {
if bn.id() == id {
liveSet[id] = bn.conn.localNode.Node()
break
}
}
t.Logf("bystander node %v became live", id)
case <-timeoutCh:
t.Errorf("remote added %d bystander nodes in %v, need %d to continue", count, timeout, len(nodes))
t.Logf("this can happen if the node has a non-empty table from previous runs")
t.Errorf("remote revalidated %d bystander nodes in %v, need %d to continue", len(liveSet), timeout, requiredLiveNodes)
return
}
}
t.Logf("all %d bystander nodes were added", len(nodes))
t.Logf("continuing after all %d bystander nodes became live", len(liveSet))
// Collect our nodes by distance.
// Collect live nodes by distance.
var dists []uint
expect := make(map[enode.ID]*enode.Node)
for _, bn := range nodes {
n := bn.conn.localNode.Node()
expect[n.ID()] = n
for id, n := range liveSet {
expect[id] = n
d := uint(enode.LogDist(n.ID(), s.Dest.ID()))
if !slices.Contains(dists, d) {
dists = append(dists, d)
@ -290,42 +311,63 @@ that they are returned by FINDNODE.`)
t.Log("requesting nodes")
conn, l1 := s.listen1(t)
defer conn.close()
foundNodes, err := conn.findnode(l1, dists)
if err != nil {
t.Fatal(err)
}
t.Logf("remote returned %d nodes for distance list %v", len(foundNodes), dists)
for _, n := range foundNodes {
delete(expect, n.ID())
}
if len(expect) > 0 {
t.Errorf("missing %d nodes in FINDNODE result", len(expect))
t.Logf("this can happen if the test is run multiple times in quick succession")
t.Logf("and the remote node hasn't removed dead nodes from previous runs yet")
} else {
t.Logf("all %d expected nodes were returned", len(nodes))
const maxAttempts = 5
const retryInterval = 2 * time.Second
for attempt := 1; attempt <= maxAttempts; attempt++ {
foundNodes, err := conn.findnode(l1, dists)
if err != nil {
t.Fatal(err)
}
missing := make(map[enode.ID]struct{})
for id := range expect {
missing[id] = struct{}{}
}
for _, n := range foundNodes {
delete(missing, n.ID())
}
t.Logf("attempt %d: remote returned %d nodes for distance list %v, missing %d", attempt, len(foundNodes), dists, len(missing))
if len(missing) == 0 {
t.Logf("all %d expected live nodes were returned", len(expect))
return
}
if attempt < maxAttempts {
time.Sleep(retryInterval)
}
}
t.Errorf("missing nodes in FINDNODE result after %d attempts", maxAttempts)
t.Logf("this can happen if the node has a non-empty table from previous runs")
}
// A bystander is a node whose only purpose is filling a spot in the remote table.
type bystander struct {
dest *enode.Node
conn *conn
l net.PacketConn
dest *enode.Node
conn *conn
l net.PacketConn
known []*enode.Node
addedCh chan enode.ID
done sync.WaitGroup
liveCh chan enode.ID
sent map[v5wire.Nonce]v5wire.Packet
done sync.WaitGroup
}
func newBystander(t *utesting.T, s *Suite, added chan enode.ID) *bystander {
func newBystander(t *utesting.T, s *Suite, live chan enode.ID) *bystander {
conn, l := s.listen1(t)
conn.setEndpoint(l) // bystander nodes need IP/port to get pinged
bn := &bystander{
conn: conn,
l: l,
dest: s.Dest,
addedCh: added,
conn: conn,
l: l,
dest: s.Dest,
liveCh: live,
sent: make(map[v5wire.Nonce]v5wire.Packet),
}
// Establish an initial session and let the remote learn this node before
// switching to the passive responder loop below.
conn.reqresp(l, &v5wire.Ping{
ReqID: conn.nextReqID(),
ENRSeq: conn.localNode.Seq(),
})
bn.done.Add(1)
go bn.loop()
return bn
@ -346,48 +388,57 @@ func (bn *bystander) close() {
func (bn *bystander) loop() {
defer bn.done.Done()
var (
lastPing time.Time
wasAdded bool
)
for {
// Ping the remote node.
if !wasAdded && time.Since(lastPing) > 10*time.Second {
bn.conn.reqresp(bn.l, &v5wire.Ping{
ReqID: bn.conn.nextReqID(),
ENRSeq: bn.dest.Seq(),
})
lastPing = time.Now()
}
// Answer packets.
switch p := bn.conn.read(bn.l).(type) {
case *v5wire.Ping:
bn.conn.write(bn.l, &v5wire.Pong{
ReqID: p.ReqID,
ENRSeq: bn.conn.localNode.Seq(),
ToIP: bn.dest.IP(),
ToPort: uint16(bn.dest.UDP()),
}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.Findnode:
bn.conn.write(bn.l, &v5wire.Nodes{ReqID: p.ReqID, RespCount: 1}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.TalkRequest:
bn.conn.write(bn.l, &v5wire.TalkResponse{ReqID: p.ReqID}, nil)
case *readError:
if !netutil.IsTemporaryError(p.err) {
bn.conn.logf("shutting down: %v", p.err)
return
p, from := bn.conn.readFrom(bn.l)
switch p := p.(type) {
case *v5wire.Whoareyou:
p.Node = bn.dest
if resp, ok := bn.sent[p.Nonce]; ok {
nonce := bn.conn.writeTo(bn.l, resp, p, from)
delete(bn.sent, p.Nonce)
bn.sent[nonce] = resp
} else {
bn.conn.writeTo(bn.l, &v5wire.Ping{
ReqID: bn.conn.nextReqID(),
ENRSeq: bn.conn.localNode.Seq(),
}, p, from)
}
case *v5wire.Ping:
resp := &v5wire.Pong{
ReqID: append([]byte(nil), p.ReqID...),
ENRSeq: bn.conn.localNode.Seq(),
ToIP: from.IP,
ToPort: uint16(from.Port),
}
nonce := bn.conn.writeTo(bn.l, resp, nil, from)
bn.sent[nonce] = resp
bn.notifyLive()
case *v5wire.Findnode:
resp := &v5wire.Nodes{ReqID: append([]byte(nil), p.ReqID...), RespCount: 1}
for _, n := range bn.known {
if slices.Contains(p.Distances, uint(enode.LogDist(n.ID(), bn.id()))) {
resp.Nodes = append(resp.Nodes, n.Record())
}
}
nonce := bn.conn.writeTo(bn.l, resp, nil, from)
bn.sent[nonce] = resp
case *v5wire.TalkRequest:
resp := &v5wire.TalkResponse{ReqID: append([]byte(nil), p.ReqID...)}
nonce := bn.conn.writeTo(bn.l, resp, nil, from)
bn.sent[nonce] = resp
case *readError:
if netutil.IsTemporaryError(p.err) || v5wire.IsInvalidHeader(p.err) {
continue
}
bn.conn.logf("shutting down: %v", p.err)
return
}
}
}
func (bn *bystander) notifyAdded() {
if bn.addedCh != nil {
bn.addedCh <- bn.id()
bn.addedCh = nil
func (bn *bystander) notifyLive() {
if bn.liveCh != nil {
bn.liveCh <- bn.id()
bn.liveCh = nil
}
}


@ -127,14 +127,16 @@ func (tc *conn) nextReqID() []byte {
// The request is retried if a handshake is requested.
func (tc *conn) reqresp(c net.PacketConn, req v5wire.Packet) v5wire.Packet {
reqnonce := tc.write(c, req, nil)
switch resp := tc.read(c).(type) {
resp, from := tc.readFrom(c)
switch resp := resp.(type) {
case *v5wire.Whoareyou:
if resp.Nonce != reqnonce {
return readErrorf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], reqnonce[:])
}
resp.Node = tc.remote
tc.write(c, req, resp)
return tc.read(c)
tc.writeTo(c, req, resp, from)
resp2, _ := tc.readFrom(c)
return resp2
default:
return resp
}
@ -150,21 +152,24 @@ func (tc *conn) findnode(c net.PacketConn, dists []uint) ([]*enode.Node, error)
results []*enode.Node
)
for n := 1; n > 0; {
switch resp := tc.read(c).(type) {
resp, from := tc.readFrom(c)
switch resp := resp.(type) {
case *v5wire.Whoareyou:
// Handle handshake.
if resp.Nonce == reqnonce {
resp.Node = tc.remote
tc.write(c, findnode, resp)
tc.writeTo(c, findnode, resp, from)
} else {
return nil, fmt.Errorf("unexpected WHOAREYOU (nonce %x), waiting for NODES", resp.Nonce[:])
}
case *v5wire.Ping:
// Handle ping from remote.
tc.write(c, &v5wire.Pong{
tc.writeTo(c, &v5wire.Pong{
ReqID: resp.ReqID,
ENRSeq: tc.localNode.Seq(),
}, nil)
ToIP: from.IP,
ToPort: uint16(from.Port),
}, nil, from)
case *v5wire.Nodes:
// Got NODES! Check request ID.
if !bytes.Equal(resp.ReqID, findnode.ReqID) {
@ -200,11 +205,16 @@ func (tc *conn) findnode(c net.PacketConn, dists []uint) ([]*enode.Node, error)
// write sends a packet on the given connection.
func (tc *conn) write(c net.PacketConn, p v5wire.Packet, challenge *v5wire.Whoareyou) v5wire.Nonce {
return tc.writeTo(c, p, challenge, tc.remoteAddr)
}
// writeTo sends a packet on the given connection to the given UDP address.
func (tc *conn) writeTo(c net.PacketConn, p v5wire.Packet, challenge *v5wire.Whoareyou, to *net.UDPAddr) v5wire.Nonce {
packet, nonce, err := tc.codec.Encode(tc.remote.ID(), tc.remoteAddr.String(), p, challenge)
if err != nil {
panic(fmt.Errorf("can't encode %v packet: %v", p.Name(), err))
}
if _, err := c.WriteTo(packet, tc.remoteAddr); err != nil {
if _, err := c.WriteTo(packet, to); err != nil {
tc.logf("Can't send %s: %v", p.Name(), err)
} else {
tc.logf(">> %s", p.Name())
@ -214,20 +224,30 @@ func (tc *conn) write(c net.PacketConn, p v5wire.Packet, challenge *v5wire.Whoar
// read waits for an incoming packet on the given connection.
func (tc *conn) read(c net.PacketConn) v5wire.Packet {
p, _ := tc.readFrom(c)
return p
}
// readFrom waits for an incoming packet and returns its source address.
func (tc *conn) readFrom(c net.PacketConn) (v5wire.Packet, *net.UDPAddr) {
buf := make([]byte, 1280)
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return &readError{err}
return &readError{err}, nil
}
n, fromAddr, err := c.ReadFrom(buf)
n, from, err := c.ReadFrom(buf)
if err != nil {
return &readError{err}
return &readError{err}, nil
}
_, _, p, err := tc.codec.Decode(buf[:n], fromAddr.String())
udpFrom, _ := from.(*net.UDPAddr)
// Use tc.remoteAddr for codec/session lookup because the fixture keys sessions
// by the advertised endpoint, but return the actual UDP source so responses can
// comply with the spec and go back to the request envelope address.
_, _, p, err := tc.codec.Decode(buf[:n], tc.remoteAddr.String())
if err != nil {
return &readError{err}
return &readError{err}, udpFrom
}
tc.logf("<< %s", p.Name())
return p
return p, udpFrom
}
// logf prints to the test log.


@ -17,6 +17,7 @@
package main
import (
"bytes"
"errors"
"fmt"
"net"
@ -30,6 +31,31 @@ import (
"github.com/urfave/cli/v2"
)
// decodeRLPxDisconnect parses a disconnect message payload. Per the RLPx spec
// the payload is a list containing a single reason, but some implementations
// (including older geth) sent the reason as a bare byte. Accept both forms.
func decodeRLPxDisconnect(data []byte) (p2p.DiscReason, error) {
s := rlp.NewStream(bytes.NewReader(data), uint64(len(data)))
k, _, err := s.Kind()
if err != nil {
return 0, err
}
var reason p2p.DiscReason
if k == rlp.List {
if _, err := s.List(); err != nil {
return 0, err
}
if err := s.Decode(&reason); err != nil {
return 0, err
}
return reason, nil
}
if err := s.Decode(&reason); err != nil {
return 0, err
}
return reason, nil
}
var (
rlpxCommand = &cli.Command{
Name: "rlpx",
@ -103,11 +129,15 @@ func rlpxPing(ctx *cli.Context) error {
}
fmt.Printf("%+v\n", h)
case 1:
var msg []p2p.DiscReason
if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
return errors.New("invalid disconnect message")
// The disconnect message is specified as a list containing the reason,
// but some implementations (including older geth) send the reason as a
// single byte. Handle both forms, and on failure include the raw payload
// so the operator can see what was actually sent.
reason, decErr := decodeRLPxDisconnect(data)
if decErr != nil {
return fmt.Errorf("invalid disconnect message: %v (raw=0x%x)", decErr, data)
}
return fmt.Errorf("received disconnect message: %v", msg[0])
return fmt.Errorf("received disconnect message: %v", reason)
default:
return fmt.Errorf("invalid message code %d, expected handshake (code zero) or disconnect (code one)", code)
}


@ -0,0 +1,75 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"testing"
"github.com/ethereum/go-ethereum/p2p"
)
func TestDecodeRLPxDisconnect(t *testing.T) {
tests := []struct {
name string
payload []byte
want p2p.DiscReason
wantErr bool
}{
{
name: "list form (spec-compliant)",
payload: []byte{0xc1, 0x04}, // [4] = TooManyPeers
want: p2p.DiscTooManyPeers,
},
{
name: "list form with reason zero",
payload: []byte{0xc1, 0x80}, // [0] = Requested
want: p2p.DiscRequested,
},
{
name: "bare byte form (legacy geth)",
payload: []byte{0x04}, // 4 = TooManyPeers
want: p2p.DiscTooManyPeers,
},
{
name: "bare byte form zero",
payload: []byte{0x80}, // 0 = Requested
want: p2p.DiscRequested,
},
{
name: "empty payload",
payload: []byte{},
wantErr: true,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got, err := decodeRLPxDisconnect(tc.payload)
if tc.wantErr {
if err == nil {
t.Fatalf("expected error, got reason=%v", got)
}
return
}
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if got != tc.want {
t.Fatalf("got reason %v, want %v", got, tc.want)
}
})
}
}
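The two payload forms exercised by the table above can be sketched without pulling in the go-ethereum `rlp` package. The following is a minimal, self-contained approximation of what `decodeRLPxDisconnect` accepts — the `decodeDisconnectReason` helper is hypothetical and only handles reason codes 0–15, which covers every defined `DiscReason`; it is an illustration, not the PR's implementation:

```go
package main

import "fmt"

// decodeDisconnectReason is a stripped-down sketch of the two disconnect
// payload forms: a one-element RLP list [reason] (spec form, e.g. 0xc1 0x04)
// and a bare RLP byte (legacy form, e.g. 0x04). Reason 0 encodes as the RLP
// empty string 0x80 in both forms.
func decodeDisconnectReason(data []byte) (byte, error) {
	if len(data) == 0 {
		return 0, fmt.Errorf("empty payload")
	}
	b := data[0]
	switch {
	case b == 0x80: // RLP empty string = reason 0 (Requested)
		return 0, nil
	case b <= 0x0f: // bare single-byte reason
		return b, nil
	case b >= 0xc1 && b <= 0xc2 && len(data) > 1: // short list wrapping the reason
		return decodeDisconnectReason(data[1:])
	default:
		return 0, fmt.Errorf("unsupported payload 0x%x", data)
	}
}

func main() {
	for _, p := range [][]byte{{0xc1, 0x04}, {0x04}, {0x80}} {
		r, err := decodeDisconnectReason(p)
		fmt.Println(r, err)
	}
}
```

Both `{0xc1, 0x04}` and `{0x04}` decode to reason 4 (TooManyPeers), matching the list-form and bare-byte test cases.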


@ -337,9 +337,6 @@ func checkAccumulator(e era.Era) error {
// accumulation across the entire set and are verified at the end.
for it.Next() {
// 1) next() walks the block index, so we're able to implicitly verify it.
if it.Error() != nil {
return fmt.Errorf("error reading block %d: %w", it.Number(), it.Error())
}
block, receipts, err := it.BlockAndReceipts()
if err != nil {
return fmt.Errorf("error reading block %d: %w", it.Number(), err)


@ -56,6 +56,7 @@ type header struct {
BlobGasUsed *uint64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *uint64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *uint64 `json:"slotNumber" rlp:"optional"`
}
type headerMarshaling struct {
@ -68,6 +69,7 @@ type headerMarshaling struct {
BaseFee *math.HexOrDecimal256
BlobGasUsed *math.HexOrDecimal64
ExcessBlobGas *math.HexOrDecimal64
SlotNumber *math.HexOrDecimal64
}
type bbInput struct {
@ -136,6 +138,7 @@ func (i *bbInput) ToBlock() *types.Block {
BlobGasUsed: i.Header.BlobGasUsed,
ExcessBlobGas: i.Header.ExcessBlobGas,
ParentBeaconRoot: i.Header.ParentBeaconBlockRoot,
SlotNumber: i.Header.SlotNumber,
}
// Fill optional values.


@ -17,9 +17,11 @@
package t8ntool
import (
"encoding/json"
"fmt"
stdmath "math"
"math/big"
"os"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
@ -47,6 +49,9 @@ type Prestate struct {
Env stEnv `json:"env"`
Pre types.GenesisAlloc `json:"pre"`
TreeLeaves map[common.Hash]hexutil.Bytes `json:"vkt,omitempty"`
// AllocPath, when non-empty, causes Apply to stream the alloc from disk
// instead of reading Pre, so the full map never materializes in memory.
AllocPath string `json:"-"`
}
//go:generate go run github.com/fjl/gencodec -type ExecutionResult -field-override executionResultMarshaling -out gen_execresult.go
@ -102,6 +107,7 @@ type stEnv struct {
ParentExcessBlobGas *uint64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *uint64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *uint64 `json:"slotNumber"`
}
type stEnvMarshaling struct {
@ -120,6 +126,7 @@ type stEnvMarshaling struct {
ExcessBlobGas *math.HexOrDecimal64
ParentExcessBlobGas *math.HexOrDecimal64
ParentBlobGasUsed *math.HexOrDecimal64
SlotNumber *math.HexOrDecimal64
}
type rejectedTx struct {
@ -144,18 +151,27 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
return h
}
var (
isEIP4762 = chainConfig.IsVerkle(big.NewInt(int64(pre.Env.Number)), pre.Env.Timestamp)
statedb = MakePreState(rawdb.NewMemoryDatabase(), pre.Pre, isEIP4762)
isEIP4762 = chainConfig.IsUBT(big.NewInt(int64(pre.Env.Number)), pre.Env.Timestamp)
statedb *state.StateDB
)
if pre.AllocPath != "" {
var err error
statedb, err = MakePreStateStreaming(rawdb.NewMemoryDatabase(), pre.AllocPath, isEIP4762)
if err != nil {
return nil, nil, nil, err
}
} else {
statedb = MakePreState(rawdb.NewMemoryDatabase(), pre.Pre, isEIP4762)
}
var (
signer = types.MakeSigner(chainConfig, new(big.Int).SetUint64(pre.Env.Number), pre.Env.Timestamp)
gaspool = new(core.GasPool)
gaspool = core.NewGasPool(pre.Env.GasLimit)
blockHash = common.Hash{0x13, 0x37}
rejectedTxs []*rejectedTx
includedTxs types.Transactions
gasUsed = uint64(0)
blobGasUsed = uint64(0)
receipts = make(types.Receipts, 0)
)
gaspool.AddGas(pre.Env.GasLimit)
vmContext := vm.BlockContext{
CanTransfer: core.CanTransfer,
Transfer: core.Transfer,
@ -195,6 +211,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
ExcessBlobGas: pre.Env.ParentExcessBlobGas,
BlobGasUsed: pre.Env.ParentBlobGasUsed,
BaseFee: pre.Env.ParentBaseFee,
SlotNumber: pre.Env.SlotNumber,
}
header := &types.Header{
Time: pre.Env.Timestamp,
@ -255,16 +272,19 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
statedb.SetTxContext(tx.Hash(), len(receipts))
var (
snapshot = statedb.Snapshot()
prevGas = gaspool.Gas()
gp = gaspool.Snapshot()
)
receipt, err := core.ApplyTransactionWithEVM(msg, gaspool, statedb, vmContext.BlockNumber, blockHash, pre.Env.Timestamp, tx, &gasUsed, evm)
receipt, err := core.ApplyTransactionWithEVM(msg, gaspool, statedb, vmContext.BlockNumber, blockHash, pre.Env.Timestamp, tx, evm)
if err != nil {
statedb.RevertToSnapshot(snapshot)
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "from", msg.From, "error", err)
rejectedTxs = append(rejectedTxs, &rejectedTx{i, err.Error()})
gaspool.SetGas(prevGas)
gaspool.Set(gp)
continue
}
if receipt.Logs == nil {
receipt.Logs = []*types.Log{}
}
includedTxs = append(includedTxs, tx)
if hashError != nil {
return nil, nil, nil, NewError(ErrorMissingBlockhash, hashError)
@ -346,7 +366,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
Receipts: receipts,
Rejected: rejectedTxs,
Difficulty: (*math.HexOrDecimal256)(vmContext.Difficulty),
GasUsed: (math.HexOrDecimal64)(gasUsed),
GasUsed: (math.HexOrDecimal64)(gaspool.Used()),
BaseFee: (*math.HexOrDecimal256)(vmContext.BaseFee),
}
if pre.Env.Withdrawals != nil {
@ -361,10 +381,6 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
// Set requestsHash on block.
h := types.CalcRequestsHash(requests)
execRs.RequestsHash = &h
for i := range requests {
// remove prefix
requests[i] = requests[i][1:]
}
execRs.Requests = requests
}
@ -378,7 +394,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
}
func MakePreState(db ethdb.Database, accounts types.GenesisAlloc, isBintrie bool) *state.StateDB {
tdb := triedb.NewDatabase(db, &triedb.Config{Preimages: true, IsVerkle: isBintrie})
tdb := triedb.NewDatabase(db, &triedb.Config{Preimages: true, IsUBT: isBintrie})
sdb := state.NewDatabase(tdb, nil)
root := types.EmptyRootHash
@ -414,6 +430,76 @@ func MakePreState(db ethdb.Database, accounts types.GenesisAlloc, isBintrie bool
return statedb
}
// MakePreStateStreaming is like MakePreState, but decodes the alloc from disk
// one account at a time so the full map is never held in memory.
func MakePreStateStreaming(db ethdb.Database, allocPath string, isBintrie bool) (*state.StateDB, error) {
tdb := triedb.NewDatabase(db, &triedb.Config{Preimages: true, IsUBT: isBintrie})
sdb := state.NewDatabase(tdb, nil)
root := types.EmptyRootHash
if isBintrie {
root = types.EmptyBinaryHash
}
statedb, err := state.New(root, sdb)
if err != nil {
return nil, NewError(ErrorEVM, fmt.Errorf("failed to create initial statedb: %v", err))
}
f, err := os.Open(allocPath)
if err != nil {
return nil, NewError(ErrorIO, fmt.Errorf("failed reading alloc file: %v", err))
}
defer f.Close()
dec := json.NewDecoder(f)
tok, err := dec.Token()
if err != nil {
return nil, NewError(ErrorJson, fmt.Errorf("failed reading alloc opening token: %v", err))
}
if d, ok := tok.(json.Delim); !ok || d != '{' {
return nil, NewError(ErrorJson, fmt.Errorf("expected alloc object, got %v", tok))
}
for dec.More() {
keyTok, err := dec.Token()
if err != nil {
return nil, NewError(ErrorJson, fmt.Errorf("failed reading alloc key: %v", err))
}
keyStr, ok := keyTok.(string)
if !ok {
return nil, NewError(ErrorJson, fmt.Errorf("alloc key not a string: %v", keyTok))
}
addr := common.HexToAddress(keyStr)
var acct types.Account
if err := dec.Decode(&acct); err != nil {
return nil, NewError(ErrorJson, fmt.Errorf("failed decoding account %s: %v", keyStr, err))
}
statedb.SetCode(addr, acct.Code, tracing.CodeChangeUnspecified)
statedb.SetNonce(addr, acct.Nonce, tracing.NonceChangeGenesis)
if acct.Balance != nil {
statedb.SetBalance(addr, uint256.MustFromBig(acct.Balance), tracing.BalanceIncreaseGenesisBalance)
}
for k, v := range acct.Storage {
statedb.SetState(addr, k, v)
}
}
if _, err := dec.Token(); err != nil {
return nil, NewError(ErrorJson, fmt.Errorf("failed reading alloc closing token: %v", err))
}
root, err = statedb.Commit(0, false, false)
if err != nil {
return nil, NewError(ErrorEVM, fmt.Errorf("failed to commit initial state: %v", err))
}
if isBintrie {
return statedb, nil
}
statedb, err = state.New(root, sdb)
if err != nil {
return nil, NewError(ErrorEVM, fmt.Errorf("failed to reopen state after commit: %v", err))
}
return statedb, nil
}
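For reference, the token-by-token decoding loop in MakePreStateStreaming above boils down to the standard encoding/json streaming pattern: read the opening `{` delimiter, then alternate key tokens and value decodes while `dec.More()` reports remaining entries. A minimal stdlib-only sketch (the `account` type here is a hypothetical stand-in for `types.Account`, not the real struct):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// account is a stand-in for the real types.Account used above.
type account struct {
	Nonce   uint64 `json:"nonce"`
	Balance string `json:"balance"`
}

// streamObject reads a JSON object one entry at a time, decoding each
// value as it arrives, so the full map is never held in memory.
func streamObject(input string, visit func(key string, acct account)) error {
	dec := json.NewDecoder(strings.NewReader(input))
	tok, err := dec.Token()
	if err != nil {
		return err
	}
	if d, ok := tok.(json.Delim); !ok || d != '{' {
		return fmt.Errorf("expected object, got %v", tok)
	}
	for dec.More() {
		keyTok, err := dec.Token()
		if err != nil {
			return err
		}
		key, ok := keyTok.(string)
		if !ok {
			return fmt.Errorf("key not a string: %v", keyTok)
		}
		var acct account
		if err := dec.Decode(&acct); err != nil {
			return err
		}
		visit(key, acct)
	}
	_, err = dec.Token() // consume the closing '}'
	return err
}

func main() {
	in := `{"0xaa":{"nonce":1,"balance":"0x10"},"0xbb":{"nonce":2,"balance":"0x20"}}`
	n := 0
	if err := streamObject(in, func(k string, a account) { n++ }); err != nil {
		panic(err)
	}
	fmt.Println("accounts:", n)
}
```

Each iteration holds exactly one decoded account, which is the property the streaming prestate loader relies on.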
func rlpHash(x any) (h common.Hash) {
hw := keccak.NewLegacyKeccak256()
rlp.Encode(hw, x)


@ -56,27 +56,35 @@ func (l *fileWritingTracer) Write(p []byte) (n int, err error) {
return n, nil
}
// newFileWriter creates a set of hooks which wraps inner hooks (typically a logger),
// newFileWriter creates a tracer which wraps inner hooks (typically a logger),
// and writes the output to a file, one file per transaction.
func newFileWriter(baseDir string, innerFn func(out io.Writer) *tracing.Hooks) *tracing.Hooks {
func newFileWriter(baseDir string, innerFn func(out io.Writer) *tracing.Hooks) *tracers.Tracer {
t := &fileWritingTracer{
baseDir: baseDir,
suffix: "jsonl",
}
t.inner = innerFn(t) // instantiate the inner tracer
return t.hooks()
return &tracers.Tracer{
Hooks: t.hooks(),
GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
Stop: func(err error) {},
}
}
// newResultWriter creates a set of hooks wraps and invokes an underlying tracer,
// newResultWriter creates a tracer that wraps and invokes an underlying tracer,
// and writes the result (getResult-output) to file, one per transaction.
func newResultWriter(baseDir string, tracer *tracers.Tracer) *tracing.Hooks {
func newResultWriter(baseDir string, tracer *tracers.Tracer) *tracers.Tracer {
t := &fileWritingTracer{
baseDir: baseDir,
getResult: tracer.GetResult,
inner: tracer.Hooks,
suffix: "json",
}
return t.hooks()
return &tracers.Tracer{
Hooks: t.hooks(),
GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
Stop: func(err error) {},
}
}
// OnTxStart creates a new output-file specific for this transaction, and invokes


@ -162,6 +162,11 @@ var (
strings.Join(vm.ActivateableEips(), ", ")),
Value: "GrayGlacier",
}
OpcodeCountFlag = &cli.StringFlag{
Name: "opcode.count",
Usage: "If set, opcode execution counts will be written to this file (relative to output.basedir).",
Value: "",
}
VerbosityFlag = &cli.IntFlag{
Name: "verbosity",
Usage: "sets the verbosity level",


@ -38,6 +38,7 @@ func (h header) MarshalJSON() ([]byte, error) {
BlobGasUsed *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
}
var enc header
enc.ParentHash = h.ParentHash
@ -60,6 +61,7 @@ func (h header) MarshalJSON() ([]byte, error) {
enc.BlobGasUsed = (*math.HexOrDecimal64)(h.BlobGasUsed)
enc.ExcessBlobGas = (*math.HexOrDecimal64)(h.ExcessBlobGas)
enc.ParentBeaconBlockRoot = h.ParentBeaconBlockRoot
enc.SlotNumber = (*math.HexOrDecimal64)(h.SlotNumber)
return json.Marshal(&enc)
}
@ -86,6 +88,7 @@ func (h *header) UnmarshalJSON(input []byte) error {
BlobGasUsed *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
ExcessBlobGas *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
}
var dec header
if err := json.Unmarshal(input, &dec); err != nil {
@ -155,5 +158,8 @@ func (h *header) UnmarshalJSON(input []byte) error {
if dec.ParentBeaconBlockRoot != nil {
h.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
}
if dec.SlotNumber != nil {
h.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}


@ -37,6 +37,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
ParentExcessBlobGas *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber"`
}
var enc stEnv
enc.Coinbase = common.UnprefixedAddress(s.Coinbase)
@ -59,6 +60,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
enc.ParentExcessBlobGas = (*math.HexOrDecimal64)(s.ParentExcessBlobGas)
enc.ParentBlobGasUsed = (*math.HexOrDecimal64)(s.ParentBlobGasUsed)
enc.ParentBeaconBlockRoot = s.ParentBeaconBlockRoot
enc.SlotNumber = (*math.HexOrDecimal64)(s.SlotNumber)
return json.Marshal(&enc)
}
@ -85,6 +87,7 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
ParentExcessBlobGas *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
ParentBlobGasUsed *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
SlotNumber *math.HexOrDecimal64 `json:"slotNumber"`
}
var dec stEnv
if err := json.Unmarshal(input, &dec); err != nil {
@ -154,5 +157,8 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
if dec.ParentBeaconBlockRoot != nil {
s.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
}
if dec.SlotNumber != nil {
s.SlotNumber = (*uint64)(dec.SlotNumber)
}
return nil
}


@ -27,7 +27,9 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/tests"
@ -131,21 +133,21 @@ func Transaction(ctx *cli.Context) error {
}
// Check intrinsic gas
rules := chainConfig.Rules(common.Big0, true, 0)
gas, err := core.IntrinsicGas(tx.Data(), tx.AccessList(), tx.SetCodeAuthorizations(), tx.To() == nil, rules.IsHomestead, rules.IsIstanbul, rules.IsShanghai)
cost, err := core.IntrinsicGas(tx.Data(), tx.AccessList(), tx.SetCodeAuthorizations(), tx.To() == nil, rules.IsHomestead, rules.IsIstanbul, rules.IsShanghai)
if err != nil {
r.Error = err
results = append(results, r)
continue
}
r.IntrinsicGas = gas
if tx.Gas() < gas {
r.Error = fmt.Errorf("%w: have %d, want %d", core.ErrIntrinsicGas, tx.Gas(), gas)
r.IntrinsicGas = cost.RegularGas
if tx.Gas() < cost.RegularGas {
r.Error = fmt.Errorf("%w: have %d, want %d", core.ErrIntrinsicGas, tx.Gas(), cost.RegularGas)
results = append(results, r)
continue
}
// For Prague txs, validate the floor data gas.
if rules.IsPrague {
floorDataGas, err := core.FloorDataGas(tx.Data())
floorDataGas, err := core.FloorDataGas(rules, tx.Data())
if err != nil {
r.Error = err
results = append(results, r)
@ -177,10 +179,15 @@ func Transaction(ctx *cli.Context) error {
r.Error = errors.New("gas * maxFeePerGas exceeds 256 bits")
}
// Check whether the init code size has been exceeded.
if chainConfig.IsShanghai(new(big.Int), 0) && tx.To() == nil && len(tx.Data()) > params.MaxInitCodeSize {
r.Error = errors.New("max initcode size exceeded")
if tx.To() == nil {
if err := vm.CheckMaxInitCodeSize(&rules, uint64(len(tx.Data()))); err != nil {
r.Error = err
}
}
if chainConfig.IsOsaka(new(big.Int), 0) && tx.Gas() > params.MaxTxGas {
isOsaka := chainConfig.IsOsaka(new(big.Int), 0)
isAmsterdam := chainConfig.IsAmsterdam(new(big.Int), 0)
if isOsaka && !isAmsterdam && tx.Gas() > params.MaxTxGas {
r.Error = errors.New("gas limit exceeds maximum")
}
results = append(results, r)


@ -17,6 +17,7 @@
package t8ntool
import (
"bufio"
"encoding/json"
"errors"
"fmt"
@ -37,6 +38,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/tracers"
"github.com/ethereum/go-ethereum/eth/tracers/logger"
"github.com/ethereum/go-ethereum/eth/tracers/native"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/tests"
@ -114,11 +116,10 @@ func Transition(ctx *cli.Context) error {
}
}
if allocStr != stdinSelector {
if err := readFile(allocStr, "alloc", &inputData.Alloc); err != nil {
return err
}
prestate.AllocPath = allocStr
} else {
prestate.Pre = inputData.Alloc
}
prestate.Pre = inputData.Alloc
if btStr != stdinSelector && btStr != "" {
if err := readFile(btStr, "BT", &inputData.BT); err != nil {
@ -167,14 +168,15 @@ func Transition(ctx *cli.Context) error {
}
// Configure tracer
var tracer *tracers.Tracer
if ctx.IsSet(TraceTracerFlag.Name) { // Custom tracing
config := json.RawMessage(ctx.String(TraceTracerConfigFlag.Name))
tracer, err := tracers.DefaultDirectory.New(ctx.String(TraceTracerFlag.Name),
innerTracer, err := tracers.DefaultDirectory.New(ctx.String(TraceTracerFlag.Name),
nil, config, chainConfig)
if err != nil {
return NewError(ErrorConfig, fmt.Errorf("failed instantiating tracer: %v", err))
}
vmConfig.Tracer = newResultWriter(baseDir, tracer)
tracer = newResultWriter(baseDir, innerTracer)
} else if ctx.Bool(TraceFlag.Name) { // JSON opcode tracing
logConfig := &logger.Config{
DisableStack: ctx.Bool(TraceDisableStackFlag.Name),
@ -182,36 +184,96 @@ func Transition(ctx *cli.Context) error {
EnableReturnData: ctx.Bool(TraceEnableReturnDataFlag.Name),
}
if ctx.Bool(TraceEnableCallFramesFlag.Name) {
vmConfig.Tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
return logger.NewJSONLoggerWithCallFrames(logConfig, out)
})
} else {
vmConfig.Tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
return logger.NewJSONLogger(logConfig, out)
})
}
}
// Configure opcode counter
var opcodeTracer *tracers.Tracer
if ctx.IsSet(OpcodeCountFlag.Name) && ctx.String(OpcodeCountFlag.Name) != "" {
opcodeTracer = native.NewOpcodeCounter()
if tracer != nil {
// If we have an existing tracer, multiplex with the opcode tracer
mux, _ := native.NewMuxTracer([]string{"trace", "opcode"}, []*tracers.Tracer{tracer, opcodeTracer})
vmConfig.Tracer = mux.Hooks
} else {
vmConfig.Tracer = opcodeTracer.Hooks
}
} else if tracer != nil {
vmConfig.Tracer = tracer.Hooks
}
// Run the test and aggregate the result
s, result, body, err := prestate.Apply(vmConfig, chainConfig, txIt, ctx.Int64(RewardFlag.Name))
if err != nil {
return err
}
// Dump the execution result
// Write opcode counts if enabled
if opcodeTracer != nil {
fname := ctx.String(OpcodeCountFlag.Name)
result, err := opcodeTracer.GetResult()
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed getting opcode counts: %v", err))
}
if err := saveFile(baseDir, fname, result); err != nil {
return err
}
}
// Dump the execution result.
var (
collector = make(Alloc)
collector Alloc
btleaves map[common.Hash]hexutil.Bytes
)
isBinary := chainConfig.IsVerkle(big.NewInt(int64(prestate.Env.Number)), prestate.Env.Timestamp)
if !isBinary {
isBinary := chainConfig.IsUBT(big.NewInt(int64(prestate.Env.Number)), prestate.Env.Timestamp)
allocOutput := ctx.String(OutputAllocFlag.Name)
switch {
case !isBinary && allocOutput != "" && allocOutput != "stdout" && allocOutput != "stderr":
// Stream directly to the output file to avoid materializing the
// whole post-state in memory. dispatchOutput is told to skip alloc
// by clearing the output name.
if err := writeStreamedAlloc(filepath.Join(baseDir, allocOutput), s); err != nil {
return err
}
allocOutput = ""
case !isBinary:
collector = make(Alloc)
s.DumpToCollector(collector, nil)
} else {
default:
btleaves = make(map[common.Hash]hexutil.Bytes)
if err := s.DumpBinTrieLeaves(btleaves); err != nil {
return err
}
}
return dispatchOutput(ctx, baseDir, result, collector, allocOutput, body, btleaves)
}
return dispatchOutput(ctx, baseDir, result, collector, body, btleaves)
// writeStreamedAlloc writes the post-state alloc to path one account at a
// time, producing the same JSON shape as saveFile on an Alloc map.
func writeStreamedAlloc(path string, s *state.StateDB) error {
f, err := os.Create(path)
if err != nil {
return NewError(ErrorIO, fmt.Errorf("failed creating alloc output file: %v", err))
}
bw := bufio.NewWriter(f)
sa := newStreamingAlloc(bw)
s.DumpToCollector(sa, nil)
if err := sa.Close(); err != nil {
f.Close()
return NewError(ErrorIO, fmt.Errorf("failed writing alloc output: %v", err))
}
if err := bw.Flush(); err != nil {
f.Close()
return NewError(ErrorIO, fmt.Errorf("failed flushing alloc output: %v", err))
}
if err := f.Close(); err != nil {
return NewError(ErrorIO, fmt.Errorf("failed closing alloc output file: %v", err))
}
log.Info("Wrote file", "file", path)
return nil
}
func applyLondonChecks(env *stEnv, chainConfig *params.ChainConfig) error {
@ -300,6 +362,10 @@ func (g Alloc) OnAccount(addr *common.Address, dumpAccount state.DumpAccount) {
if addr == nil {
return
}
g[*addr] = dumpAccountToTypesAccount(dumpAccount)
}
func dumpAccountToTypesAccount(dumpAccount state.DumpAccount) types.Account {
balance, _ := new(big.Int).SetString(dumpAccount.Balance, 0)
var storage map[common.Hash]common.Hash
if dumpAccount.Storage != nil {
@ -308,13 +374,64 @@ func (g Alloc) OnAccount(addr *common.Address, dumpAccount state.DumpAccount) {
storage[k] = common.HexToHash(v)
}
}
genesisAccount := types.Account{
return types.Account{
Code: dumpAccount.Code,
Storage: storage,
Balance: balance,
Nonce: dumpAccount.Nonce,
}
g[*addr] = genesisAccount
}
// streamingAlloc is a DumpCollector that writes each account to w as it is
// visited, emitting a single JSON object keyed by address. Close must be
// called to emit the closing brace.
type streamingAlloc struct {
w io.Writer
wroteOne bool
err error
}
func newStreamingAlloc(w io.Writer) *streamingAlloc {
return &streamingAlloc{w: w}
}
func (s *streamingAlloc) write(b []byte) {
if s.err != nil {
return
}
_, s.err = s.w.Write(b)
}
func (s *streamingAlloc) OnRoot(common.Hash) {
s.write([]byte{'{'})
}
func (s *streamingAlloc) OnAccount(addr *common.Address, dumpAccount state.DumpAccount) {
if s.err != nil || addr == nil {
return
}
keyJSON, err := json.Marshal(*addr)
if err != nil {
s.err = err
return
}
valueJSON, err := json.Marshal(dumpAccountToTypesAccount(dumpAccount))
if err != nil {
s.err = err
return
}
if s.wroteOne {
s.write([]byte{','})
}
s.write(keyJSON)
s.write([]byte{':'})
s.write(valueJSON)
s.wroteOne = true
}
func (s *streamingAlloc) Close() error {
s.write([]byte{'}'})
return s.err
}
// saveFile marshals the object to the given file
@ -332,8 +449,9 @@ func saveFile(baseDir, filename string, data interface{}) error {
}
// dispatchOutput writes the output data to either stderr or stdout, or to the specified
// files
func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, alloc Alloc, body hexutil.Bytes, bt map[common.Hash]hexutil.Bytes) error {
// files. An empty allocOutput skips the alloc dispatch, which is used when the
// alloc has already been streamed to disk by the caller.
func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, alloc Alloc, allocOutput string, body hexutil.Bytes, bt map[common.Hash]hexutil.Bytes) error {
stdOutObject := make(map[string]interface{})
stdErrObject := make(map[string]interface{})
dispatch := func(baseDir, fName, name string, obj interface{}) error {
@ -351,7 +469,7 @@ func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, a
}
return nil
}
if err := dispatch(baseDir, ctx.String(OutputAllocFlag.Name), "alloc", alloc); err != nil {
if err := dispatch(baseDir, allocOutput, "alloc", alloc); err != nil {
return err
}
if err := dispatch(baseDir, ctx.String(OutputResultFlag.Name), "result", result); err != nil {
@ -425,10 +543,10 @@ func BinKeys(ctx *cli.Context) error {
return err
}
}
db := triedb.NewDatabase(rawdb.NewMemoryDatabase(), triedb.VerkleDefaults)
db := triedb.NewDatabase(rawdb.NewMemoryDatabase(), triedb.UBTDefaults)
defer db.Close()
bt, err := genBinTrieFromAlloc(alloc, db)
bt, err := genBinTrieFromAlloc(alloc, db, triedb.UBTDefaults.BinTrieGroupDepth)
if err != nil {
return fmt.Errorf("error generating bt: %w", err)
}
@ -469,10 +587,10 @@ func BinTrieRoot(ctx *cli.Context) error {
return err
}
}
db := triedb.NewDatabase(rawdb.NewMemoryDatabase(), triedb.VerkleDefaults)
db := triedb.NewDatabase(rawdb.NewMemoryDatabase(), triedb.UBTDefaults)
defer db.Close()
bt, err := genBinTrieFromAlloc(alloc, db)
bt, err := genBinTrieFromAlloc(alloc, db, triedb.UBTDefaults.BinTrieGroupDepth)
if err != nil {
return fmt.Errorf("error generating bt: %w", err)
}
@ -482,8 +600,8 @@ func BinTrieRoot(ctx *cli.Context) error {
}
// TODO(@CPerezz): Should this go to `bintrie` module?
func genBinTrieFromAlloc(alloc core.GenesisAlloc, db database.NodeDatabase) (*bintrie.BinaryTrie, error) {
bt, err := bintrie.NewBinaryTrie(types.EmptyBinaryHash, db)
func genBinTrieFromAlloc(alloc core.GenesisAlloc, db database.NodeDatabase, groupDepth int) (*bintrie.BinaryTrie, error) {
bt, err := bintrie.NewBinaryTrie(types.EmptyBinaryHash, db, groupDepth)
if err != nil {
return nil, err
}


@ -115,7 +115,7 @@ var (
Name: "trace.noreturndata",
Aliases: []string{"noreturndata"},
Value: true,
Usage: "enable return data output",
Usage: "disable return data output",
Category: traceCategory,
}
@ -161,6 +161,7 @@ var (
t8ntool.ForknameFlag,
t8ntool.ChainIDFlag,
t8ntool.RewardFlag,
t8ntool.OpcodeCountFlag,
},
}


@ -166,8 +166,11 @@ func timedExec(bench bool, execFunc func() ([]byte, uint64, error)) ([]byte, exe
if haveGasUsed != gasUsed {
panic(fmt.Sprintf("gas differs, have %v want %v", haveGasUsed, gasUsed))
}
if haveErr != err {
panic(fmt.Sprintf("err differs, have %v want %v", haveErr, err))
if (haveErr == nil) != (err == nil) {
panic(fmt.Sprintf("err differs in nil-ness, have %v want %v", haveErr, err))
}
if haveErr != nil && err != nil && haveErr.Error() != err.Error() {
panic(fmt.Sprintf("err differs, have %q want %q", haveErr.Error(), err.Error()))
}
}
})


@ -24,7 +24,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",


@ -12,7 +12,7 @@
"status": "0x0",
"cumulativeGasUsed": "0x84d0",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x84d0",
@ -27,7 +27,7 @@
"status": "0x0",
"cumulativeGasUsed": "0x109a0",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x84d0",


@ -11,7 +11,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x520b",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x520b",


@ -27,7 +27,7 @@
"status": "0x1",
"cumulativeGasUsed": "0xa861",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0xa861",
@ -41,7 +41,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x10306",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x16b1d912f1d664f3f60f4e1b5f296f3c82a64a1a253117b4851d18bc03c4f1da",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5aa5",


@ -23,7 +23,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",


@ -28,7 +28,7 @@
"status": "0x1",
"cumulativeGasUsed": "0xa865",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x7508d7139d002a4b3a26a4f12dec0d87cb46075c78bf77a38b569a133b509262",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0xa865",


@ -26,7 +26,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x84f70aba406a55628a0620f26d260f90aeb6ccc55fed6ec2ac13dd4f727032ed",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",


@ -24,7 +24,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x521f",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x521f",


@ -25,7 +25,7 @@
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",
@ -40,7 +40,7 @@
"status": "0x1",
"cumulativeGasUsed": "0xa410",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"logs": [],
"transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",


@ -44,7 +44,7 @@
"root": "0x",
"status": "0x1",
"cumulativeGasUsed": "0x15fa9",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","logs": null,"transactionHash": "0x0417aab7c1d8a3989190c3167c132876ce9b8afd99262c5a0f9d06802de3d7ef",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","logs": [],"transactionHash": "0x0417aab7c1d8a3989190c3167c132876ce9b8afd99262c5a0f9d06802de3d7ef",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x15fa9",
"effectiveGasPrice": null,

cmd/fetchpayload/main.go (new file, 177 lines)

@ -0,0 +1,177 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// fetchpayload queries an Ethereum node over RPC, fetches a block and its
// execution witness, and writes the combined Payload (ChainID + Block +
// Witness) to disk in the format consumed by cmd/keeper.
package main
import (
"context"
"encoding/json"
"flag"
"fmt"
"math/big"
"os"
"path/filepath"
"strings"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/stateless"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/rpc"
)
// Payload is duplicated from cmd/keeper/main.go (package main, not importable).
type Payload struct {
ChainID uint64
Block *types.Block
Witness *stateless.Witness
}
func main() {
var (
rpcURL = flag.String("rpc", "http://localhost:8545", "RPC endpoint URL")
blockArg = flag.String("block", "latest", `Block number: decimal, 0x-hex, or "latest"`)
format = flag.String("format", "rlp", "Comma-separated output formats: rlp, hex, json")
outDir = flag.String("out", "", "Output directory (default: current directory)")
)
flag.Parse()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Parse block number (nil means "latest" in ethclient).
blockNum, err := parseBlockNumber(*blockArg)
if err != nil {
fatal("invalid block number %q: %v", *blockArg, err)
}
// Connect to the node.
client, err := ethclient.DialContext(ctx, *rpcURL)
if err != nil {
fatal("failed to connect to %s: %v", *rpcURL, err)
}
defer client.Close()
chainID, err := client.ChainID(ctx)
if err != nil {
fatal("failed to get chain ID: %v", err)
}
// Fetch the block first so we have a concrete number for the witness call,
// avoiding a race where "latest" advances between the two RPCs.
block, err := client.BlockByNumber(ctx, blockNum)
if err != nil {
fatal("failed to fetch block: %v", err)
}
fmt.Printf("Fetched block %d (%#x)\n", block.NumberU64(), block.Hash())
// Fetch the execution witness via the debug namespace.
var extWitness stateless.ExtWitness
err = client.Client().CallContext(ctx, &extWitness, "debug_executionWitness", rpc.BlockNumber(block.NumberU64()))
if err != nil {
fatal("failed to fetch execution witness: %v", err)
}
witness := new(stateless.Witness)
err = witness.FromExtWitness(&extWitness)
if err != nil {
fatal("failed to convert witness: %v", err)
}
payload := Payload{
ChainID: chainID.Uint64(),
Block: block,
Witness: witness,
}
// Encode payload as RLP (shared by "rlp" and "hex" formats).
rlpBytes, err := rlp.EncodeToBytes(payload)
if err != nil {
fatal("failed to RLP-encode payload: %v", err)
}
// Write one output file per requested format.
blockHex := fmt.Sprintf("%x", block.NumberU64())
for f := range strings.SplitSeq(*format, ",") {
f = strings.TrimSpace(f)
outPath := filepath.Join(*outDir, fmt.Sprintf("%s_payload.%s", blockHex, f))
var data []byte
switch f {
case "rlp":
data = rlpBytes
case "hex":
data = []byte(hexutil.Encode(rlpBytes))
case "json":
data, err = marshalJSONPayload(chainID, block, &extWitness)
if err != nil {
fatal("failed to JSON-encode payload: %v", err)
}
default:
fatal("unknown format %q (valid: rlp, hex, json)", f)
}
if err := os.WriteFile(outPath, data, 0644); err != nil {
fatal("failed to write %s: %v", outPath, err)
}
fmt.Printf("Wrote %s (%d bytes)\n", outPath, len(data))
}
}
// parseBlockNumber converts a CLI string to *big.Int.
// Returns nil for "latest" (ethclient convention for the head block).
func parseBlockNumber(s string) (*big.Int, error) {
if strings.EqualFold(s, "latest") {
return nil, nil
}
n := new(big.Int)
if strings.HasPrefix(s, "0x") || strings.HasPrefix(s, "0X") {
if _, ok := n.SetString(s[2:], 16); !ok {
return nil, fmt.Errorf("invalid hex number")
}
return n, nil
}
if _, ok := n.SetString(s, 10); !ok {
return nil, fmt.Errorf("invalid decimal number")
}
return n, nil
}
// jsonPayload is a JSON-friendly representation of Payload. It uses ExtWitness
// instead of the internal Witness (which has no JSON marshaling).
type jsonPayload struct {
ChainID uint64 `json:"chainId"`
Block *types.Block `json:"block"`
Witness *stateless.ExtWitness `json:"witness"`
}
func marshalJSONPayload(chainID *big.Int, block *types.Block, ext *stateless.ExtWitness) ([]byte, error) {
return json.MarshalIndent(jsonPayload{
ChainID: chainID.Uint64(),
Block: block,
Witness: ext,
}, "", " ")
}
func fatal(format string, args ...any) {
fmt.Fprintf(os.Stderr, format+"\n", args...)
os.Exit(1)
}
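The decimal/0x-hex dispatch in parseBlockNumber above can be sketched as a standalone program (treating "latest" as a nil *big.Int follows the ethclient convention noted in the file; the folded base selection is an equivalent restructuring, not the exact code above):

```go
package main

import (
	"fmt"
	"math/big"
	"strings"
)

// parseBlockNumber mirrors the helper above: "latest" maps to nil,
// a 0x/0X prefix selects base 16, anything else is parsed as base 10.
func parseBlockNumber(s string) (*big.Int, error) {
	if strings.EqualFold(s, "latest") {
		return nil, nil
	}
	n := new(big.Int)
	base, digits := 10, s
	if strings.HasPrefix(s, "0x") || strings.HasPrefix(s, "0X") {
		base, digits = 16, s[2:]
	}
	if _, ok := n.SetString(digits, base); !ok {
		return nil, fmt.Errorf("invalid number %q", s)
	}
	return n, nil
}

func main() {
	for _, in := range []string{"latest", "1234", "0x4d2", "xyz"} {
		n, err := parseBlockNumber(in)
		fmt.Printf("%-6s -> %v err=%v\n", in, n, err)
	}
}
```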

408
cmd/geth/bintrie_convert.go Normal file

@ -0,0 +1,408 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"errors"
"fmt"
"runtime"
"runtime/debug"
"slices"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/ethereum/go-ethereum/trie/bintrie"
"github.com/ethereum/go-ethereum/trie/trienode"
"github.com/ethereum/go-ethereum/triedb"
"github.com/ethereum/go-ethereum/triedb/pathdb"
"github.com/urfave/cli/v2"
)
var (
deleteSourceFlag = &cli.BoolFlag{
Name: "delete-source",
Usage: "Delete MPT trie nodes after conversion",
}
memoryLimitFlag = &cli.Uint64Flag{
Name: "memory-limit",
Usage: "Max heap allocation in MB before forcing a commit cycle",
Value: 16384,
}
bintrieCommand = &cli.Command{
Name: "bintrie",
Usage: "A set of commands for binary trie operations",
Description: "",
Subcommands: []*cli.Command{
{
Name: "convert",
Usage: "Convert MPT state to binary trie",
ArgsUsage: "[state-root]",
Action: convertToBinaryTrie,
Flags: slices.Concat([]cli.Flag{
deleteSourceFlag,
memoryLimitFlag,
}, utils.NetworkFlags, utils.DatabaseFlags),
Description: `
geth bintrie convert [--delete-source] [--memory-limit MB] [state-root]
Reads all state from the Merkle Patricia Trie and writes it into a Binary Trie,
operating offline. Memory-safe via periodic commit-and-reload cycles.
The optional state-root argument specifies which state root to convert.
If omitted, the head block's state root is used.
Flags:
--delete-source Delete MPT trie nodes after successful conversion
--memory-limit Max heap allocation in MB before forcing a commit (default: 16384)
`,
},
},
}
)
type conversionStats struct {
accounts uint64
slots uint64
codes uint64
commits uint64
start time.Time
lastReport time.Time
lastMemChk time.Time
}
func (s *conversionStats) report(force bool) {
if !force && time.Since(s.lastReport) < 8*time.Second {
return
}
elapsed := time.Since(s.start).Seconds()
acctRate := float64(0)
if elapsed > 0 {
acctRate = float64(s.accounts) / elapsed
}
log.Info("Conversion progress",
"accounts", s.accounts,
"slots", s.slots,
"codes", s.codes,
"commits", s.commits,
"accounts/sec", fmt.Sprintf("%.0f", acctRate),
"elapsed", common.PrettyDuration(time.Since(s.start)),
)
s.lastReport = time.Now()
}
func convertToBinaryTrie(ctx *cli.Context) error {
if ctx.NArg() > 1 {
return errors.New("too many arguments")
}
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chaindb := utils.MakeChainDatabase(ctx, stack, false)
defer chaindb.Close()
headBlock := rawdb.ReadHeadBlock(chaindb)
if headBlock == nil {
return errors.New("no head block found")
}
var (
root common.Hash
err error
)
if ctx.NArg() == 1 {
root, err = parseRoot(ctx.Args().First())
if err != nil {
return fmt.Errorf("invalid state root: %w", err)
}
} else {
root = headBlock.Root()
}
log.Info("Starting MPT to binary trie conversion", "root", root, "block", headBlock.NumberU64())
srcTriedb := utils.MakeTrieDatabase(ctx, stack, chaindb, true, true, false)
defer srcTriedb.Close()
destTriedb := triedb.NewDatabase(chaindb, &triedb.Config{
IsUBT: true,
PathDB: &pathdb.Config{
JournalDirectory: stack.ResolvePath("triedb-bintrie"),
},
})
defer destTriedb.Close()
binTrie, err := bintrie.NewBinaryTrie(types.EmptyBinaryHash, destTriedb, ctx.Int(utils.BinTrieGroupDepthFlag.Name))
if err != nil {
return fmt.Errorf("failed to create binary trie: %w", err)
}
memLimit := ctx.Uint64(memoryLimitFlag.Name) * 1024 * 1024
currentRoot, err := runConversionLoop(chaindb, srcTriedb, destTriedb, binTrie, root, memLimit)
if err != nil {
return err
}
log.Info("Conversion complete", "binaryRoot", currentRoot)
if ctx.Bool(deleteSourceFlag.Name) {
log.Info("Deleting source MPT data")
if err := deleteMPTData(chaindb, srcTriedb, root); err != nil {
return fmt.Errorf("MPT deletion failed: %w", err)
}
log.Info("Source MPT data deleted")
}
return nil
}
func runConversionLoop(chaindb ethdb.Database, srcTriedb *triedb.Database, destTriedb *triedb.Database, binTrie *bintrie.BinaryTrie, root common.Hash, memLimit uint64) (common.Hash, error) {
currentRoot := types.EmptyBinaryHash
stats := &conversionStats{
start: time.Now(),
lastReport: time.Now(),
lastMemChk: time.Now(),
}
srcTrie, err := trie.NewStateTrie(trie.StateTrieID(root), srcTriedb)
if err != nil {
return common.Hash{}, fmt.Errorf("failed to open source trie: %w", err)
}
acctIt, err := srcTrie.NodeIterator(nil)
if err != nil {
return common.Hash{}, fmt.Errorf("failed to create account iterator: %w", err)
}
accIter := trie.NewIterator(acctIt)
for accIter.Next() {
var acc types.StateAccount
if err := rlp.DecodeBytes(accIter.Value, &acc); err != nil {
return common.Hash{}, fmt.Errorf("invalid account RLP: %w", err)
}
addrBytes := srcTrie.GetKey(accIter.Key)
if addrBytes == nil {
return common.Hash{}, fmt.Errorf("missing preimage for account hash %x (run with --cache.preimages)", accIter.Key)
}
addr := common.BytesToAddress(addrBytes)
var code []byte
codeHash := common.BytesToHash(acc.CodeHash)
if codeHash != types.EmptyCodeHash {
code = rawdb.ReadCode(chaindb, codeHash)
if code == nil {
return common.Hash{}, fmt.Errorf("missing code for hash %x (account %x)", codeHash, addr)
}
stats.codes++
}
if err := binTrie.UpdateAccount(addr, &acc, len(code)); err != nil {
return common.Hash{}, fmt.Errorf("failed to update account %x: %w", addr, err)
}
if len(code) > 0 {
if err := binTrie.UpdateContractCode(addr, codeHash, code); err != nil {
return common.Hash{}, fmt.Errorf("failed to update code for %x: %w", addr, err)
}
}
if acc.Root != types.EmptyRootHash {
addrHash := common.BytesToHash(accIter.Key)
storageTrie, err := trie.NewStateTrie(trie.StorageTrieID(root, addrHash, acc.Root), srcTriedb)
if err != nil {
return common.Hash{}, fmt.Errorf("failed to open storage trie for %x: %w", addr, err)
}
storageNodeIt, err := storageTrie.NodeIterator(nil)
if err != nil {
return common.Hash{}, fmt.Errorf("failed to create storage iterator for %x: %w", addr, err)
}
storageIter := trie.NewIterator(storageNodeIt)
slotCount := uint64(0)
for storageIter.Next() {
slotKey := storageTrie.GetKey(storageIter.Key)
if slotKey == nil {
return common.Hash{}, fmt.Errorf("missing preimage for storage key %x (account %x)", storageIter.Key, addr)
}
_, content, _, err := rlp.Split(storageIter.Value)
if err != nil {
return common.Hash{}, fmt.Errorf("invalid storage RLP for key %x (account %x): %w", slotKey, addr, err)
}
if err := binTrie.UpdateStorage(addr, slotKey, content); err != nil {
return common.Hash{}, fmt.Errorf("failed to update storage %x/%x: %w", addr, slotKey, err)
}
stats.slots++
slotCount++
if slotCount%10000 == 0 {
binTrie, currentRoot, err = maybeCommit(binTrie, currentRoot, destTriedb, memLimit, stats)
if err != nil {
return common.Hash{}, err
}
}
}
if storageIter.Err != nil {
return common.Hash{}, fmt.Errorf("storage iteration error for %x: %w", addr, storageIter.Err)
}
}
stats.accounts++
stats.report(false)
if stats.accounts%1000 == 0 {
binTrie, currentRoot, err = maybeCommit(binTrie, currentRoot, destTriedb, memLimit, stats)
if err != nil {
return common.Hash{}, err
}
}
}
if accIter.Err != nil {
return common.Hash{}, fmt.Errorf("account iteration error: %w", accIter.Err)
}
_, currentRoot, err = commitBinaryTrie(binTrie, currentRoot, destTriedb)
if err != nil {
return common.Hash{}, fmt.Errorf("final commit failed: %w", err)
}
stats.commits++
stats.report(true)
return currentRoot, nil
}
func maybeCommit(bt *bintrie.BinaryTrie, currentRoot common.Hash, destDB *triedb.Database, memLimit uint64, stats *conversionStats) (*bintrie.BinaryTrie, common.Hash, error) {
if time.Since(stats.lastMemChk) < 5*time.Second {
return bt, currentRoot, nil
}
stats.lastMemChk = time.Now()
var m runtime.MemStats
runtime.ReadMemStats(&m)
if m.Alloc < memLimit {
return bt, currentRoot, nil
}
log.Info("Memory limit reached, committing", "alloc", common.StorageSize(m.Alloc), "limit", common.StorageSize(memLimit))
bt, currentRoot, err := commitBinaryTrie(bt, currentRoot, destDB)
if err != nil {
return nil, common.Hash{}, err
}
stats.commits++
stats.report(true)
return bt, currentRoot, nil
}
func commitBinaryTrie(bt *bintrie.BinaryTrie, currentRoot common.Hash, destDB *triedb.Database) (*bintrie.BinaryTrie, common.Hash, error) {
newRoot, nodeSet := bt.Commit(false)
if nodeSet != nil {
merged := trienode.NewWithNodeSet(nodeSet)
if err := destDB.Update(newRoot, currentRoot, 0, merged, triedb.NewStateSet()); err != nil {
return nil, common.Hash{}, fmt.Errorf("triedb update failed: %w", err)
}
if err := destDB.Commit(newRoot, false); err != nil {
return nil, common.Hash{}, fmt.Errorf("triedb commit failed: %w", err)
}
}
runtime.GC()
debug.FreeOSMemory()
bt, err := bintrie.NewBinaryTrie(newRoot, destDB, bt.GroupDepth())
if err != nil {
return nil, common.Hash{}, fmt.Errorf("failed to reload binary trie: %w", err)
}
return bt, newRoot, nil
}
func deleteMPTData(chaindb ethdb.Database, srcTriedb *triedb.Database, root common.Hash) error {
isPathDB := srcTriedb.Scheme() == rawdb.PathScheme
srcTrie, err := trie.NewStateTrie(trie.StateTrieID(root), srcTriedb)
if err != nil {
return fmt.Errorf("failed to open source trie for deletion: %w", err)
}
acctIt, err := srcTrie.NodeIterator(nil)
if err != nil {
return fmt.Errorf("failed to create account iterator for deletion: %w", err)
}
batch := chaindb.NewBatch()
deleted := 0
for acctIt.Next(true) {
if isPathDB {
rawdb.DeleteAccountTrieNode(batch, acctIt.Path())
} else {
node := acctIt.Hash()
if node != (common.Hash{}) {
rawdb.DeleteLegacyTrieNode(batch, node)
}
}
deleted++
if acctIt.Leaf() {
var acc types.StateAccount
if err := rlp.DecodeBytes(acctIt.LeafBlob(), &acc); err != nil {
return fmt.Errorf("invalid account during deletion: %w", err)
}
if acc.Root != types.EmptyRootHash {
addrHash := common.BytesToHash(acctIt.LeafKey())
storageTrie, err := trie.NewStateTrie(trie.StorageTrieID(root, addrHash, acc.Root), srcTriedb)
if err != nil {
return fmt.Errorf("failed to open storage trie for deletion: %w", err)
}
storageIt, err := storageTrie.NodeIterator(nil)
if err != nil {
return fmt.Errorf("failed to create storage iterator for deletion: %w", err)
}
for storageIt.Next(true) {
if isPathDB {
rawdb.DeleteStorageTrieNode(batch, addrHash, storageIt.Path())
} else {
node := storageIt.Hash()
if node != (common.Hash{}) {
rawdb.DeleteLegacyTrieNode(batch, node)
}
}
deleted++
if batch.ValueSize() >= ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
return fmt.Errorf("batch write failed: %w", err)
}
batch.Reset()
}
}
if storageIt.Error() != nil {
return fmt.Errorf("storage deletion iterator error: %w", storageIt.Error())
}
}
}
if batch.ValueSize() >= ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
return fmt.Errorf("batch write failed: %w", err)
}
batch.Reset()
}
}
if acctIt.Error() != nil {
return fmt.Errorf("account deletion iterator error: %w", acctIt.Error())
}
if batch.ValueSize() > 0 {
if err := batch.Write(); err != nil {
return fmt.Errorf("final batch write failed: %w", err)
}
}
log.Info("MPT deletion complete", "nodesDeleted", deleted)
return nil
}


@ -0,0 +1,229 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"math"
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie/bintrie"
"github.com/ethereum/go-ethereum/triedb"
"github.com/ethereum/go-ethereum/triedb/pathdb"
"github.com/holiman/uint256"
)
func TestBintrieConvert(t *testing.T) {
var (
addr1 = common.HexToAddress("0x1111111111111111111111111111111111111111")
addr2 = common.HexToAddress("0x2222222222222222222222222222222222222222")
slotKey1 = common.HexToHash("0x01")
slotKey2 = common.HexToHash("0x02")
slotVal1 = common.HexToHash("0xdeadbeef")
slotVal2 = common.HexToHash("0xcafebabe")
code = []byte{0x60, 0x42, 0x60, 0x00, 0x52, 0x60, 0x20, 0x60, 0x00, 0xf3}
)
chaindb := rawdb.NewMemoryDatabase()
srcTriedb := triedb.NewDatabase(chaindb, &triedb.Config{
Preimages: true,
PathDB: pathdb.Defaults,
})
gspec := &core.Genesis{
Config: params.TestChainConfig,
BaseFee: big.NewInt(params.InitialBaseFee),
Alloc: types.GenesisAlloc{
addr1: {
Balance: big.NewInt(1000000),
Nonce: 5,
},
addr2: {
Balance: big.NewInt(2000000),
Nonce: 10,
Code: code,
Storage: map[common.Hash]common.Hash{
slotKey1: slotVal1,
slotKey2: slotVal2,
},
},
},
}
genesisBlock := gspec.MustCommit(chaindb, srcTriedb)
root := genesisBlock.Root()
t.Logf("Genesis root: %x", root)
srcTriedb.Close()
srcTriedb2 := triedb.NewDatabase(chaindb, &triedb.Config{
Preimages: true,
PathDB: &pathdb.Config{ReadOnly: true},
})
defer srcTriedb2.Close()
destTriedb := triedb.NewDatabase(chaindb, &triedb.Config{
IsUBT: true,
PathDB: pathdb.Defaults,
})
defer destTriedb.Close()
bt, err := bintrie.NewBinaryTrie(types.EmptyBinaryHash, destTriedb, 8)
if err != nil {
t.Fatalf("failed to create binary trie: %v", err)
}
currentRoot, err := runConversionLoop(chaindb, srcTriedb2, destTriedb, bt, root, math.MaxUint64)
if err != nil {
t.Fatalf("conversion failed: %v", err)
}
t.Logf("Binary trie root: %x", currentRoot)
bt2, err := bintrie.NewBinaryTrie(currentRoot, destTriedb, 8)
if err != nil {
t.Fatalf("failed to reload binary trie: %v", err)
}
acc1, err := bt2.GetAccount(addr1)
if err != nil {
t.Fatalf("failed to get account1: %v", err)
}
if acc1 == nil {
t.Fatal("account1 not found in binary trie")
}
if acc1.Nonce != 5 {
t.Errorf("account1 nonce: got %d, want 5", acc1.Nonce)
}
wantBal1 := uint256.NewInt(1000000)
if acc1.Balance.Cmp(wantBal1) != 0 {
t.Errorf("account1 balance: got %s, want %s", acc1.Balance, wantBal1)
}
acc2, err := bt2.GetAccount(addr2)
if err != nil {
t.Fatalf("failed to get account2: %v", err)
}
if acc2 == nil {
t.Fatal("account2 not found in binary trie")
}
if acc2.Nonce != 10 {
t.Errorf("account2 nonce: got %d, want 10", acc2.Nonce)
}
wantBal2 := uint256.NewInt(2000000)
if acc2.Balance.Cmp(wantBal2) != 0 {
t.Errorf("account2 balance: got %s, want %s", acc2.Balance, wantBal2)
}
treeKey1 := bintrie.GetBinaryTreeKeyStorageSlot(addr2, slotKey1[:])
val1, err := bt2.GetWithHashedKey(treeKey1)
if err != nil {
t.Fatalf("failed to get storage slot1: %v", err)
}
if len(val1) == 0 {
t.Fatal("storage slot1 not found")
}
got1 := common.BytesToHash(val1)
if got1 != slotVal1 {
t.Errorf("storage slot1: got %x, want %x", got1, slotVal1)
}
treeKey2 := bintrie.GetBinaryTreeKeyStorageSlot(addr2, slotKey2[:])
val2, err := bt2.GetWithHashedKey(treeKey2)
if err != nil {
t.Fatalf("failed to get storage slot2: %v", err)
}
if len(val2) == 0 {
t.Fatal("storage slot2 not found")
}
got2 := common.BytesToHash(val2)
if got2 != slotVal2 {
t.Errorf("storage slot2: got %x, want %x", got2, slotVal2)
}
}
func TestBintrieConvertDeleteSource(t *testing.T) {
addr1 := common.HexToAddress("0x3333333333333333333333333333333333333333")
chaindb := rawdb.NewMemoryDatabase()
srcTriedb := triedb.NewDatabase(chaindb, &triedb.Config{
Preimages: true,
PathDB: pathdb.Defaults,
})
gspec := &core.Genesis{
Config: params.TestChainConfig,
BaseFee: big.NewInt(params.InitialBaseFee),
Alloc: types.GenesisAlloc{
addr1: {
Balance: big.NewInt(1000000),
},
},
}
genesisBlock := gspec.MustCommit(chaindb, srcTriedb)
root := genesisBlock.Root()
srcTriedb.Close()
srcTriedb2 := triedb.NewDatabase(chaindb, &triedb.Config{
Preimages: true,
PathDB: &pathdb.Config{ReadOnly: true},
})
destTriedb := triedb.NewDatabase(chaindb, &triedb.Config{
IsUBT: true,
PathDB: pathdb.Defaults,
})
bt, err := bintrie.NewBinaryTrie(types.EmptyBinaryHash, destTriedb, 8)
if err != nil {
t.Fatalf("failed to create binary trie: %v", err)
}
newRoot, err := runConversionLoop(chaindb, srcTriedb2, destTriedb, bt, root, math.MaxUint64)
if err != nil {
t.Fatalf("conversion failed: %v", err)
}
if err := deleteMPTData(chaindb, srcTriedb2, root); err != nil {
t.Fatalf("deletion failed: %v", err)
}
srcTriedb2.Close()
bt2, err := bintrie.NewBinaryTrie(newRoot, destTriedb, 8)
if err != nil {
t.Fatalf("failed to reload binary trie after deletion: %v", err)
}
acc, err := bt2.GetAccount(addr1)
if err != nil {
t.Fatalf("failed to get account after deletion: %v", err)
}
if acc == nil {
t.Fatal("account not found after MPT deletion")
}
wantBal := uint256.NewInt(1000000)
if acc.Balance.Cmp(wantBal) != 0 {
t.Errorf("balance after deletion: got %s, want %s", acc.Balance, wantBal)
}
destTriedb.Close()
}


@ -64,7 +64,7 @@ var (
utils.OverrideOsaka,
utils.OverrideBPO1,
utils.OverrideBPO2,
utils.OverrideVerkle,
utils.OverrideUBT,
}, utils.DatabaseFlags),
Description: `
The init command initializes a new genesis block and definition for the network.
@ -111,6 +111,7 @@ if one is set. Otherwise it prints the genesis from the datadir.`,
utils.MetricsInfluxDBUsernameFlag,
utils.MetricsInfluxDBPasswordFlag,
utils.MetricsInfluxDBTagsFlag,
utils.MetricsInfluxDBIntervalFlag,
utils.MetricsInfluxDBTokenFlag,
utils.MetricsInfluxDBBucketFlag,
utils.MetricsInfluxDBOrganizationFlag,
@ -207,13 +208,19 @@ This command dumps out the state for a given block (or latest, if none provided)
pruneHistoryCommand = &cli.Command{
Action: pruneHistory,
Name: "prune-history",
Usage: "Prune blockchain history (block bodies and receipts) up to the merge block",
Usage: "Prune blockchain history (block bodies and receipts) up to a specified point",
ArgsUsage: "",
Flags: utils.DatabaseFlags,
Flags: slices.Concat(utils.DatabaseFlags, []cli.Flag{
utils.ChainHistoryFlag,
}),
Description: `
The prune-history command removes historical block bodies and receipts from the
blockchain database up to the merge block, while preserving block headers. This
helps reduce storage requirements for nodes that don't need full historical data.`,
blockchain database up to a specified point, while preserving block headers. This
helps reduce storage requirements for nodes that don't need full historical data.
The --history.chain flag is required to specify the pruning target:
- postmerge: Prune up to the merge block. The node will keep the merge block and everything thereafter.
- postprague: Prune up to the Prague (Pectra) upgrade block. The node will keep the Prague block and everything thereafter.`,
}
downloadEraCommand = &cli.Command{
@ -290,15 +297,15 @@ func initGenesis(ctx *cli.Context) error {
v := ctx.Uint64(utils.OverrideBPO2.Name)
overrides.OverrideBPO2 = &v
}
if ctx.IsSet(utils.OverrideVerkle.Name) {
v := ctx.Uint64(utils.OverrideVerkle.Name)
overrides.OverrideVerkle = &v
if ctx.IsSet(utils.OverrideUBT.Name) {
v := ctx.Uint64(utils.OverrideUBT.Name)
overrides.OverrideUBT = &v
}
chaindb := utils.MakeChainDatabase(ctx, stack, false)
defer chaindb.Close()
triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, ctx.Bool(utils.CachePreimagesFlag.Name), false, genesis.IsVerkle())
triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, ctx.Bool(utils.CachePreimagesFlag.Name), false, genesis.IsUBT())
defer triedb.Close()
_, hash, compatErr, err := core.SetupGenesisBlockWithOverride(chaindb, triedb, genesis, &overrides, nil)
@ -702,47 +709,77 @@ func hashish(x string) bool {
}
func pruneHistory(ctx *cli.Context) error {
// Parse and validate the history mode flag.
if !ctx.IsSet(utils.ChainHistoryFlag.Name) {
return errors.New("--history.chain flag is required")
}
var mode history.HistoryMode
if err := mode.UnmarshalText([]byte(ctx.String(utils.ChainHistoryFlag.Name))); err != nil {
return err
}
if mode == history.KeepAll {
return errors.New("--history.chain=all is not valid for pruning. To restore history, use 'geth import-history'")
}
stack, _ := makeConfigNode(ctx)
defer stack.Close()
// Open the chain database
// Open the chain database.
chain, chaindb := utils.MakeChain(ctx, stack, false)
defer chaindb.Close()
defer chain.Stop()
// Determine the prune point. This will be the first PoS block.
prunePoint, ok := history.PrunePoints[chain.Genesis().Hash()]
if !ok || prunePoint == nil {
return errors.New("prune point not found")
// Determine the prune point based on the history mode.
genesisHash := chain.Genesis().Hash()
policy, err := history.NewPolicy(mode, genesisHash)
if err != nil {
return err
}
if policy.Target == nil {
return fmt.Errorf("prune point for %q not found for this network", mode.String())
}
var (
mergeBlock = prunePoint.BlockNumber
mergeBlockHash = prunePoint.BlockHash.Hex()
targetBlock = policy.Target.BlockNumber
targetBlockHash = policy.Target.BlockHash
)
// Check we're far enough past merge to ensure all data is in freezer
// Check the current freezer tail to see if pruning is needed/possible.
freezerTail, _ := chaindb.Tail()
if freezerTail > 0 {
if freezerTail == targetBlock {
log.Info("Database already pruned to target block", "tail", freezerTail)
return nil
}
if freezerTail > targetBlock {
// Database is pruned beyond the target - can't unprune.
return fmt.Errorf("database is already pruned to block %d, which is beyond target %d. Cannot unprune. To restore history, use 'geth import-history'", freezerTail, targetBlock)
}
// freezerTail < targetBlock: we can prune further, continue below.
}
// Check we're far enough past the target to ensure all data is in freezer.
currentHeader := chain.CurrentHeader()
if currentHeader == nil {
return errors.New("current header not found")
}
if currentHeader.Number.Uint64() < mergeBlock+params.FullImmutabilityThreshold {
return fmt.Errorf("chain not far enough past merge block, need %d more blocks",
mergeBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
if currentHeader.Number.Uint64() < targetBlock+params.FullImmutabilityThreshold {
return fmt.Errorf("chain not far enough past target block %d, need %d more blocks",
targetBlock, targetBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
}
// Double-check the prune block in db has the expected hash.
hash := rawdb.ReadCanonicalHash(chaindb, mergeBlock)
if hash != common.HexToHash(mergeBlockHash) {
return fmt.Errorf("merge block hash mismatch: got %s, want %s", hash.Hex(), mergeBlockHash)
// Double-check the target block in db has the expected hash.
hash := rawdb.ReadCanonicalHash(chaindb, targetBlock)
if hash != targetBlockHash {
return fmt.Errorf("target block hash mismatch: got %s, want %s", hash.Hex(), targetBlockHash.Hex())
}
log.Info("Starting history pruning", "head", currentHeader.Number, "tail", mergeBlock, "tailHash", mergeBlockHash)
log.Info("Starting history pruning", "head", currentHeader.Number, "target", targetBlock, "targetHash", targetBlockHash.Hex())
start := time.Now()
rawdb.PruneTransactionIndex(chaindb, mergeBlock)
if _, err := chaindb.TruncateTail(mergeBlock); err != nil {
rawdb.PruneTransactionIndex(chaindb, targetBlock)
if _, err := chaindb.TruncateTail(targetBlock); err != nil {
return fmt.Errorf("failed to truncate ancient data: %v", err)
}
log.Info("History pruning completed", "tail", mergeBlock, "elapsed", common.PrettyDuration(time.Since(start)))
log.Info("History pruning completed", "tail", targetBlock, "elapsed", common.PrettyDuration(time.Since(start)))
// TODO(s1na): what if there is a crash between the two prune operations?
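The freezer-tail checks added to pruneHistory form a small three-way decision: the tail already equals the target (nothing to do), the tail is past the target (unrecoverable without import-history), or pruning can proceed. A hedged sketch as a pure function (pruneDecision and its result strings are invented for illustration; the block numbers in main are arbitrary):

```go
package main

import "fmt"

// pruneDecision classifies the freezer tail against the prune target,
// mirroring the cases handled in pruneHistory: already done, pruned
// beyond the target, or safe to truncate further. A zero tail means
// the freezer has never been pruned.
func pruneDecision(freezerTail, target uint64) string {
	if freezerTail > 0 {
		if freezerTail == target {
			return "already-pruned"
		}
		if freezerTail > target {
			return "beyond-target" // cannot unprune; restore via import-history
		}
	}
	return "prune" // tail below target (or untouched): truncate up to target
}

func main() {
	fmt.Println(pruneDecision(100, 100)) // already-pruned
	fmt.Println(pruneDecision(200, 100)) // beyond-target
	fmt.Println(pruneDecision(0, 100))   // prune
}
```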


@ -235,9 +235,9 @@ func makeFullNode(ctx *cli.Context) *node.Node {
v := ctx.Uint64(utils.OverrideBPO2.Name)
cfg.Eth.OverrideBPO2 = &v
}
if ctx.IsSet(utils.OverrideVerkle.Name) {
v := ctx.Uint64(utils.OverrideVerkle.Name)
cfg.Eth.OverrideVerkle = &v
if ctx.IsSet(utils.OverrideUBT.Name) {
v := ctx.Uint64(utils.OverrideUBT.Name)
cfg.Eth.OverrideUBT = &v
}
// Start metrics export if enabled.
@ -377,6 +377,9 @@ func applyMetricConfig(ctx *cli.Context, cfg *gethConfig) {
if ctx.IsSet(utils.MetricsInfluxDBTagsFlag.Name) {
cfg.Metrics.InfluxDBTags = ctx.String(utils.MetricsInfluxDBTagsFlag.Name)
}
if ctx.IsSet(utils.MetricsInfluxDBIntervalFlag.Name) {
cfg.Metrics.InfluxDBInterval = ctx.Duration(utils.MetricsInfluxDBIntervalFlag.Name)
}
if ctx.IsSet(utils.MetricsEnableInfluxDBV2Flag.Name) {
cfg.Metrics.EnableInfluxDBV2 = ctx.Bool(utils.MetricsEnableInfluxDBV2Flag.Name)
}


@ -30,7 +30,7 @@ import (
)
const (
ipcAPIs = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 txpool:1.0 web3:1.0"
ipcAPIs = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 testing:1.0 txpool:1.0 web3:1.0"
httpAPIs = "eth:1.0 net:1.0 rpc:1.0 web3:1.0"
)


@ -19,6 +19,7 @@ package main
import (
"bytes"
"fmt"
"math"
"os"
"os/signal"
"path/filepath"
@ -37,6 +38,7 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/internal/tablewriter"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
@ -51,7 +53,24 @@ var (
}
removeChainDataFlag = &cli.BoolFlag{
Name: "remove.chain",
Usage: "If set, selects the state data for removal",
Usage: "If set, selects the chain data for removal",
}
inspectTrieTopFlag = &cli.IntFlag{
Name: "top",
Usage: "Print the top N results per ranking category",
Value: 10,
}
inspectTrieDumpPathFlag = &cli.StringFlag{
Name: "dump-path",
Usage: "Path for the trie statistics dump file",
}
inspectTrieSummarizeFlag = &cli.StringFlag{
Name: "summarize",
Usage: "Summarize an existing trie dump file (skip trie traversal)",
}
inspectTrieContractFlag = &cli.StringFlag{
Name: "contract",
Usage: "Inspect only the storage of the given contract address (skips full account trie walk)",
}
removedbCommand = &cli.Command{
@ -74,6 +93,7 @@ Remove blockchain and state databases`,
dbCompactCmd,
dbGetCmd,
dbDeleteCmd,
dbInspectTrieCmd,
dbPutCmd,
dbGetSlotsCmd,
dbDumpFreezerIndex,
@ -92,6 +112,22 @@ Remove blockchain and state databases`,
Usage: "Inspect the storage size for each type of data in the database",
Description: `This command iterates the entire database. If the optional 'prefix' and 'start' arguments are provided, then the iteration is limited to the given subset of data.`,
}
dbInspectTrieCmd = &cli.Command{
Action: inspectTrie,
Name: "inspect-trie",
ArgsUsage: "<blocknum>",
Flags: slices.Concat([]cli.Flag{
utils.ExcludeStorageFlag,
inspectTrieTopFlag,
utils.OutputFileFlag,
inspectTrieDumpPathFlag,
inspectTrieSummarizeFlag,
inspectTrieContractFlag,
}, utils.NetworkFlags, utils.DatabaseFlags),
Usage: "Print detailed trie information about the structure of account trie and storage tries.",
Description: `This command iterates the entire trie-backed state. If 'blocknum' is not specified,
the latest block number will be used by default.`,
}
dbCheckStateContentCmd = &cli.Command{
Action: checkStateContent,
Name: "check-state-content",
@@ -385,6 +421,88 @@ func checkStateContent(ctx *cli.Context) error {
return nil
}
func inspectTrie(ctx *cli.Context) error {
topN := ctx.Int(inspectTrieTopFlag.Name)
if topN <= 0 {
return fmt.Errorf("invalid --%s value %d (must be > 0)", inspectTrieTopFlag.Name, topN)
}
config := &trie.InspectConfig{
NoStorage: ctx.Bool(utils.ExcludeStorageFlag.Name),
TopN: topN,
Path: ctx.String(utils.OutputFileFlag.Name),
}
if summarizePath := ctx.String(inspectTrieSummarizeFlag.Name); summarizePath != "" {
if ctx.NArg() > 0 {
return fmt.Errorf("block number argument is not supported with --%s", inspectTrieSummarizeFlag.Name)
}
config.DumpPath = summarizePath
log.Info("Summarizing trie dump", "path", summarizePath, "top", topN)
return trie.Summarize(summarizePath, config)
}
if ctx.NArg() > 1 {
return fmt.Errorf("excessive number of arguments: %v", ctx.Command.ArgsUsage)
}
stack, _ := makeConfigNode(ctx)
db := utils.MakeChainDatabase(ctx, stack, false)
defer stack.Close()
defer db.Close()
var (
trieRoot common.Hash
hash common.Hash
number uint64
)
switch {
case ctx.NArg() == 0 || ctx.Args().Get(0) == "latest":
head := rawdb.ReadHeadHeaderHash(db)
n, ok := rawdb.ReadHeaderNumber(db, head)
if !ok {
return fmt.Errorf("could not load head block hash")
}
number = n
case ctx.Args().Get(0) == "snapshot":
trieRoot = rawdb.ReadSnapshotRoot(db)
number = math.MaxUint64
default:
var err error
number, err = strconv.ParseUint(ctx.Args().Get(0), 10, 64)
if err != nil {
return fmt.Errorf("failed to parse blocknum, Args[0]: %v, err: %v", ctx.Args().Get(0), err)
}
}
if number != math.MaxUint64 {
hash = rawdb.ReadCanonicalHash(db, number)
if hash == (common.Hash{}) {
return fmt.Errorf("canonical hash for block %d not found", number)
}
blockHeader := rawdb.ReadHeader(db, hash, number)
trieRoot = blockHeader.Root
}
if trieRoot == (common.Hash{}) {
log.Error("Empty root hash")
}
config.DumpPath = ctx.String(inspectTrieDumpPathFlag.Name)
if config.DumpPath == "" {
config.DumpPath = stack.ResolvePath("trie-dump.bin")
}
triedb := utils.MakeTrieDatabase(ctx, stack, db, false, true, false)
defer triedb.Close()
if contractAddr := ctx.String(inspectTrieContractFlag.Name); contractAddr != "" {
address := common.HexToAddress(contractAddr)
log.Info("Inspecting contract", "address", address, "root", trieRoot, "block", number)
return trie.InspectContract(triedb, db, trieRoot, address)
}
log.Info("Inspecting trie", "root", trieRoot, "block", number, "dump", config.DumpPath, "top", topN)
return trie.Inspect(triedb, trieRoot, config)
}
func showDBStats(db ethdb.KeyValueStater) {
stats, err := db.Stat()
if err != nil {
@@ -688,6 +806,24 @@ func (iter *snapshotIterator) Release() {
iter.storage.Release()
}
type codeIterator struct {
iter ethdb.Iterator
}
func (iter *codeIterator) Next() (byte, []byte, []byte, bool) {
for iter.iter.Next() {
key := iter.iter.Key()
if bytes.HasPrefix(key, rawdb.CodePrefix) && len(key) == (len(rawdb.CodePrefix)+common.HashLength) {
return utils.OpBatchAdd, key, iter.iter.Value(), true
}
}
return 0, nil, nil, false
}
func (iter *codeIterator) Release() {
iter.iter.Release()
}
// chainExporters defines the export scheme for all exportable chain data.
var chainExporters = map[string]func(db ethdb.Database) utils.ChainDataIterator{
"preimage": func(db ethdb.Database) utils.ChainDataIterator {
@@ -699,6 +835,10 @@ var chainExporters = map[string]func(db ethdb.Database) utils.ChainDataIterator{
storage := db.NewIterator(rawdb.SnapshotStoragePrefix, nil)
return &snapshotIterator{account: account, storage: storage}
},
"code": func(db ethdb.Database) utils.ChainDataIterator {
iter := db.NewIterator(rawdb.CodePrefix, nil)
return &codeIterator{iter: iter}
},
}
func exportChaindata(ctx *cli.Context) error {
@@ -759,7 +899,7 @@ func showMetaData(ctx *cli.Context) error {
data = append(data, []string{"headHeader.Root", fmt.Sprintf("%v", h.Root)})
data = append(data, []string{"headHeader.Number", fmt.Sprintf("%d (%#x)", h.Number, h.Number)})
}
table := rawdb.NewTableWriter(os.Stdout)
table := tablewriter.NewWriter(os.Stdout)
table.SetHeader([]string{"Field", "Value"})
table.AppendBulk(data)
table.Render()


@@ -22,7 +22,6 @@ import (
"os"
"slices"
"sort"
"strconv"
"time"
"github.com/ethereum/go-ethereum/accounts"
@@ -65,7 +64,7 @@ var (
utils.OverrideOsaka,
utils.OverrideBPO1,
utils.OverrideBPO2,
utils.OverrideVerkle,
utils.OverrideUBT,
utils.OverrideGenesisFlag,
utils.EnablePersonal, // deprecated
utils.TxPoolLocalsFlag,
@@ -96,6 +95,7 @@ var (
utils.StateHistoryFlag,
utils.TrienodeHistoryFlag,
utils.TrienodeHistoryFullValueCheckpointFlag,
utils.BinTrieGroupDepthFlag,
utils.LightKDFFlag,
utils.EthRequiredBlocksFlag,
utils.LegacyWhitelistFlag, // deprecated
@@ -216,6 +216,7 @@ var (
utils.MetricsInfluxDBUsernameFlag,
utils.MetricsInfluxDBPasswordFlag,
utils.MetricsInfluxDBTagsFlag,
utils.MetricsInfluxDBIntervalFlag,
utils.MetricsEnableInfluxDBV2Flag,
utils.MetricsInfluxDBTokenFlag,
utils.MetricsInfluxDBBucketFlag,
@@ -260,6 +261,8 @@ func init() {
utils.ShowDeprecated,
// See snapshot.go
snapshotCommand,
// See bintrie_convert.go
bintrieCommand,
}
if logTestCommand != nil {
app.Commands = append(app.Commands, logTestCommand)
@@ -315,18 +318,6 @@ func prepare(ctx *cli.Context) {
case !ctx.IsSet(utils.NetworkIdFlag.Name):
log.Info("Starting Geth on Ethereum mainnet...")
}
// If we're a full node on mainnet without --cache specified, bump default cache allowance
if !ctx.IsSet(utils.CacheFlag.Name) && !ctx.IsSet(utils.NetworkIdFlag.Name) {
// Make sure we're not on any supported preconfigured testnet either
if !ctx.IsSet(utils.HoleskyFlag.Name) &&
!ctx.IsSet(utils.SepoliaFlag.Name) &&
!ctx.IsSet(utils.HoodiFlag.Name) &&
!ctx.IsSet(utils.DeveloperFlag.Name) {
// Nope, we're really on mainnet. Bump that cache up!
log.Info("Bumping default cache on mainnet", "provided", ctx.Int(utils.CacheFlag.Name), "updated", 4096)
ctx.Set(utils.CacheFlag.Name, strconv.Itoa(4096))
}
}
}
// geth is the main entry point into the system if no special subcommand is run.


@@ -18,15 +18,18 @@ package main
import (
"bytes"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"os"
"slices"
"sort"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/state/pruner"
@@ -36,6 +39,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/ethereum/go-ethereum/triedb"
"github.com/urfave/cli/v2"
)
@@ -105,7 +109,9 @@ information about the specified address.
Usage: "Traverse the state with given root hash and perform quick verification",
ArgsUsage: "<root>",
Action: traverseState,
Flags: slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
Flags: slices.Concat([]cli.Flag{
utils.AccountFlag,
}, utils.NetworkFlags, utils.DatabaseFlags),
Description: `
geth snapshot traverse-state <state-root>
will traverse the whole state from the given state root and will abort if any
@@ -113,6 +119,8 @@ referenced trie node or contract code is missing. This command can be used for
state integrity verification. The default checking target is the HEAD state.
It's also usable without snapshot enabled.
If --account is specified, only the storage trie of that account is traversed.
`,
},
{
@@ -120,7 +128,9 @@ It's also usable without snapshot enabled.
Usage: "Traverse the state with given root hash and perform detailed verification",
ArgsUsage: "<root>",
Action: traverseRawState,
Flags: slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
Flags: slices.Concat([]cli.Flag{
utils.AccountFlag,
}, utils.NetworkFlags, utils.DatabaseFlags),
Description: `
geth snapshot traverse-rawstate <state-root>
will traverse the whole state from the given root and will abort if any referenced
@@ -129,6 +139,8 @@ verification. The default checking target is the HEAD state. It's basically iden
to traverse-state, but the check granularity is smaller.
It's also usable without snapshot enabled.
If --account is specified, only the storage trie of that account is traversed.
`,
},
{
@@ -159,6 +171,22 @@ block is used.
Description: `
The export-preimages command exports hash preimages to a flat file, in exactly
the expected order for the overlay tree migration.
`,
},
{
Name: "list-eip-7610-accounts",
Aliases: []string{"eip7610"},
Usage: "list EIP7610 eligible accounts",
Action: listEIP7610EligibleAccounts,
Flags: slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
Description: `
geth snapshot list-eip-7610-accounts
traverses the post-EIP-161 state and returns all accounts that are eligible
under EIP-7610: accounts with zero nonce, empty runtime code, and non-empty
storage. The traversal will be aborted immediately if the state is prior to
EIP-161.
The exported accounts are identified by their address.
`,
},
},
@@ -272,6 +300,120 @@ func checkDanglingStorage(ctx *cli.Context) error {
return snapshot.CheckDanglingStorage(db)
}
// parseAccount parses the account flag value as either an address (20 bytes)
// or an account hash (32 bytes) and returns the hashed account key.
func parseAccount(input string) (common.Hash, error) {
switch len(input) {
case 40, 42: // address
return crypto.Keccak256Hash(common.HexToAddress(input).Bytes()), nil
case 64, 66: // hash
return common.HexToHash(input), nil
default:
return common.Hash{}, errors.New("malformed account address or hash")
}
}
// lookupAccount resolves the account from the state trie using the given
// account hash.
func lookupAccount(accountHash common.Hash, tr *trie.Trie) (*types.StateAccount, error) {
accData, err := tr.Get(accountHash.Bytes())
if err != nil {
return nil, fmt.Errorf("failed to get account %s: %w", accountHash, err)
}
if accData == nil {
return nil, fmt.Errorf("account not found: %s", accountHash)
}
var acc types.StateAccount
if err := rlp.DecodeBytes(accData, &acc); err != nil {
return nil, fmt.Errorf("invalid account data %s: %w", accountHash, err)
}
return &acc, nil
}
func traverseStorage(id *trie.ID, db *triedb.Database, report bool, detail bool) error {
tr, err := trie.NewStateTrie(id, db)
if err != nil {
log.Error("Failed to open storage trie", "account", id.Owner, "root", id.Root, "err", err)
return err
}
var (
slots int
nodes int
lastReport time.Time
start = time.Now()
)
it, err := tr.NodeIterator(nil)
if err != nil {
log.Error("Failed to open storage iterator", "account", id.Owner, "root", id.Root, "err", err)
return err
}
logger := log.Debug
if report {
logger = log.Info
}
logger("Start traversing storage trie", "account", id.Owner, "storageRoot", id.Root)
if !detail {
iter := trie.NewIterator(it)
for iter.Next() {
slots += 1
if time.Since(lastReport) > time.Second*8 {
logger("Traversing storage", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
lastReport = time.Now()
}
}
if iter.Err != nil {
log.Error("Failed to traverse storage trie", "root", id.Root, "err", iter.Err)
return iter.Err
}
logger("Storage is complete", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
} else {
reader, err := db.NodeReader(id.StateRoot)
if err != nil {
log.Error("Failed to open state reader", "err", err)
return err
}
var (
buffer = make([]byte, 32)
hasher = crypto.NewKeccakState()
)
for it.Next(true) {
nodes += 1
node := it.Hash()
// Check the presence of non-empty hash nodes (embedded nodes
// don't have their own hash).
if node != (common.Hash{}) {
blob, _ := reader.Node(id.Owner, it.Path(), node)
if len(blob) == 0 {
log.Error("Missing trie node(storage)", "hash", node)
return errors.New("missing storage")
}
hasher.Reset()
hasher.Write(blob)
hasher.Read(buffer)
if !bytes.Equal(buffer, node.Bytes()) {
log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
return errors.New("invalid storage node")
}
}
if it.Leaf() {
slots += 1
}
if time.Since(lastReport) > time.Second*8 {
logger("Traversing storage", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
lastReport = time.Now()
}
}
if err := it.Error(); err != nil {
log.Error("Failed to traverse storage trie", "root", id.Root, "err", err)
return err
}
logger("Storage is complete", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
}
return nil
}
// traverseState is a helper function used for pruning verification.
// Basically it just iterates the trie, ensure all nodes and associated
// contract codes are present.
@@ -309,6 +451,30 @@ func traverseState(ctx *cli.Context) error {
root = headBlock.Root()
log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
}
// If --account is specified, only traverse the storage trie of that account.
if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
accountHash, err := parseAccount(accountStr)
if err != nil {
log.Error("Failed to parse account", "err", err)
return err
}
// Use raw trie since the account key is already hashed.
t, err := trie.New(trie.StateTrieID(root), triedb)
if err != nil {
log.Error("Failed to open state trie", "root", root, "err", err)
return err
}
acc, err := lookupAccount(accountHash, t)
if err != nil {
log.Error("Failed to look up account", "hash", accountHash, "err", err)
return err
}
if acc.Root == types.EmptyRootHash {
log.Info("Account has no storage", "hash", accountHash)
return nil
}
return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, false)
}
t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
if err != nil {
log.Error("Failed to open trie", "root", root, "err", err)
@@ -335,30 +501,10 @@ func traverseState(ctx *cli.Context) error {
return err
}
if acc.Root != types.EmptyRootHash {
id := trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root)
storageTrie, err := trie.NewStateTrie(id, triedb)
err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root), triedb, false, false)
if err != nil {
log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
return err
}
storageIt, err := storageTrie.NodeIterator(nil)
if err != nil {
log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
return err
}
storageIter := trie.NewIterator(storageIt)
for storageIter.Next() {
slots += 1
if time.Since(lastReport) > time.Second*8 {
log.Info("Traversing state", "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
lastReport = time.Now()
}
}
if storageIter.Err != nil {
log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Err)
return storageIter.Err
}
}
if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
@@ -418,6 +564,30 @@ func traverseRawState(ctx *cli.Context) error {
root = headBlock.Root()
log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
}
// If --account is specified, only traverse the storage trie of that account.
if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
accountHash, err := parseAccount(accountStr)
if err != nil {
log.Error("Failed to parse account", "err", err)
return err
}
// Use raw trie since the account key is already hashed.
t, err := trie.New(trie.StateTrieID(root), triedb)
if err != nil {
log.Error("Failed to open state trie", "root", root, "err", err)
return err
}
acc, err := lookupAccount(accountHash, t)
if err != nil {
log.Error("Failed to look up account", "hash", accountHash, "err", err)
return err
}
if acc.Root == types.EmptyRootHash {
log.Info("Account has no storage", "hash", accountHash)
return nil
}
return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, true)
}
t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
if err != nil {
log.Error("Failed to open trie", "root", root, "err", err)
@@ -473,50 +643,10 @@ func traverseRawState(ctx *cli.Context) error {
return errors.New("invalid account")
}
if acc.Root != types.EmptyRootHash {
id := trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root)
storageTrie, err := trie.NewStateTrie(id, triedb)
err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root), triedb, false, true)
if err != nil {
log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
return errors.New("missing storage trie")
}
storageIter, err := storageTrie.NodeIterator(nil)
if err != nil {
log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
return err
}
for storageIter.Next(true) {
nodes += 1
node := storageIter.Hash()
// Check the presence for non-empty hash node(embedded node doesn't
// have their own hash).
if node != (common.Hash{}) {
blob, _ := reader.Node(common.BytesToHash(accIter.LeafKey()), storageIter.Path(), node)
if len(blob) == 0 {
log.Error("Missing trie node(storage)", "hash", node)
return errors.New("missing storage")
}
hasher.Reset()
hasher.Write(blob)
hasher.Read(got)
if !bytes.Equal(got, node.Bytes()) {
log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
return errors.New("invalid storage node")
}
}
// Bump the counter if it's leaf node.
if storageIter.Leaf() {
slots += 1
}
if time.Since(lastReport) > time.Second*8 {
log.Info("Traversing state", "nodes", nodes, "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
lastReport = time.Now()
}
}
if storageIter.Error() != nil {
log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Error())
return storageIter.Error()
}
}
if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
@@ -690,3 +820,92 @@ func checkAccount(ctx *cli.Context) error {
log.Info("Checked the snapshot journalled storage", "time", common.PrettyDuration(time.Since(start)))
return nil
}
// listEIP7610EligibleAccounts traverses the post-EIP-161 state and returns all
// accounts that are eligible under EIP-7610: accounts with zero nonce, empty
// runtime code, and non-empty storage.
//
// Such accounts could only have been created before EIP-161, since after that
// all newly created contracts are initialized with a nonce of one.
//
// This helper should be generally applicable to all networks, including the
// Ethereum mainnet. For most networks where EIP-161 was enabled from genesis,
// the resulting set is expected to be empty. Otherwise, network operators are
// responsible for generating the eligible account set themselves.
//
// Notably, the exported accounts are identified by their address.
func listEIP7610EligibleAccounts(ctx *cli.Context) error {
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chaindb := utils.MakeChainDatabase(ctx, stack, true)
defer chaindb.Close()
headBlock := rawdb.ReadHeadBlock(chaindb)
if headBlock == nil {
log.Error("Failed to load head block")
return nil
}
config, _, err := core.LoadChainConfig(chaindb, utils.MakeGenesis(ctx))
if err != nil {
log.Error("Failed to load chain config", "err", err)
return err
}
if !config.IsEIP158(headBlock.Number()) {
log.Info("Local head is prior to EIP-161", "head", headBlock.Number(), "eip-161", *config.EIP158Block)
return nil
}
triedb := utils.MakeTrieDatabase(ctx, stack, chaindb, false, true, false)
defer triedb.Close()
if triedb.Scheme() != rawdb.PathScheme {
log.Error("Hash scheme is not supported")
return nil
}
iter, err := triedb.AccountIterator(headBlock.Root(), common.Hash{})
if err != nil {
log.Error("Failed to get account iterator", "err", err)
return err
}
var (
start = time.Now()
accounts []common.Address
)
for iter.Next() {
blob := iter.Account()
if blob == nil {
log.Error("Failed to get account blob")
return nil
}
var account types.SlimAccount
if err := rlp.DecodeBytes(blob, &account); err != nil {
log.Error("Failed to decode", "err", err)
return err
}
// EIP-7610 account eligibility:
// - account.nonce == 0
// - account.runtime_code == empty
// - account.storage != empty
if len(account.CodeHash) == 0 && account.Nonce == 0 && len(account.Root) != 0 {
preimage := rawdb.ReadPreimage(chaindb, iter.Hash())
if preimage == nil {
log.Error("Failed to read preimage", "hash", iter.Hash().Hex())
return nil
}
accounts = append(accounts, common.BytesToAddress(preimage))
}
}
if len(accounts) == 0 {
log.Info("Traversed state", "eligible", len(accounts), "elapsed", common.PrettyDuration(time.Since(start)))
} else {
sort.Slice(accounts, func(i, j int) bool {
return accounts[i].Cmp(accounts[j]) < 0
})
buf := make([]byte, len(accounts)*common.AddressLength)
for i, h := range accounts {
copy(buf[i*common.AddressLength:], h[:])
}
log.Info("Traversed state", "eligible", len(accounts), "elapsed", common.PrettyDuration(time.Since(start)), "output", hex.EncodeToString(buf))
}
return nil
}
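The eligibility check applied inside the loop above can be isolated as a small sketch. Note this is an illustration under stated assumptions: `slimAccount` and `eligible` are hypothetical stand-ins, not go-ethereum APIs; the sketch only assumes the slim RLP encoding convention that an empty code hash and an empty storage root are stored as zero-length byte slices.

```go
package main

import "fmt"

// slimAccount is a simplified stand-in for types.SlimAccount. In the slim
// encoding, EmptyCodeHash and EmptyRootHash are represented as nil/empty
// byte slices rather than as full 32-byte hashes.
type slimAccount struct {
	Nonce    uint64
	Root     []byte // empty means no storage
	CodeHash []byte // empty means no runtime code
}

// eligible mirrors the EIP-7610 criteria used above: zero nonce,
// empty runtime code, and non-empty storage.
func eligible(a slimAccount) bool {
	return a.Nonce == 0 && len(a.CodeHash) == 0 && len(a.Root) != 0
}

func main() {
	// A pre-EIP-161 artifact: storage but no nonce and no code.
	fmt.Println(eligible(slimAccount{Root: []byte{0x01}}))
	// A contract created post-EIP-161 starts with nonce one, so it never matches.
	fmt.Println(eligible(slimAccount{Nonce: 1, Root: []byte{0x01}}))
}
```

Accounts matching all three conditions can only predate EIP-161, which is why the command refuses to run against pre-EIP-161 heads.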


@@ -15,6 +15,7 @@
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build example
// +build example
package main


@@ -14,8 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build wasm
// +build wasm
//go:build wasm && !womir
// +build wasm,!womir
package main


@@ -0,0 +1,49 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build womir
package main
import "unsafe"
// These match the WOMIR guest-io imports (env module).
// Protocol: __hint_input prepares next item, __hint_buffer reads words.
// Each item has format: [byte_len_u32_le, ...data_words_padded_to_4bytes]
//
//go:wasmimport env __hint_input
func hintInput()
//go:wasmimport env __hint_buffer
func hintBuffer(ptr unsafe.Pointer, numWords uint32)
func readWord() uint32 {
var buf [4]byte
hintBuffer(unsafe.Pointer(&buf[0]), 1)
return uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
}
func readBytes() []byte {
hintInput()
byteLen := readWord()
numWords := (byteLen + 3) / 4
data := make([]byte, numWords*4)
hintBuffer(unsafe.Pointer(&data[0]), numWords)
return data[:byteLen]
}
// getInput reads the RLP-encoded Payload from the WOMIR hint stream.
func getInput() []byte {
return readBytes()
}


@@ -13,11 +13,11 @@ require (
github.com/bits-and-blooms/bitset v1.20.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/consensys/gnark-crypto v0.18.1 // indirect
github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
github.com/crate-crypto/go-eth-kzg v1.5.0 // indirect
github.com/deckarep/golang-set/v2 v2.6.0 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
github.com/emicklei/dot v1.6.2 // indirect
github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
github.com/ethereum/c-kzg-4844/v2 v2.1.6 // indirect
github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab // indirect
github.com/ferranbt/fastssz v0.1.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
@@ -31,16 +31,16 @@ require (
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/mitchellh/mapstructure v1.4.1 // indirect
github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible // indirect
github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe // indirect
github.com/supranational/blst v0.3.16 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
golang.org/x/crypto v0.44.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.39.0 // indirect
go.opentelemetry.io/otel v1.40.0 // indirect
go.opentelemetry.io/otel/metric v1.40.0 // indirect
go.opentelemetry.io/otel/trace v1.40.0 // indirect
golang.org/x/crypto v0.47.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.40.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)


@@ -28,8 +28,8 @@ github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAK
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
github.com/consensys/gnark-crypto v0.18.1 h1:RyLV6UhPRoYYzaFnPQA4qK3DyuDgkTgskDdoGqFt3fI=
github.com/consensys/gnark-crypto v0.18.1/go.mod h1:L3mXGFTe1ZN+RSJ+CLjUt9x7PNdx8ubaYfDROyp2Z8c=
github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
github.com/crate-crypto/go-eth-kzg v1.5.0 h1:FYRiJMJG2iv+2Dy3fi14SVGjcPteZ5HAAUe4YWlJygc=
github.com/crate-crypto/go-eth-kzg v1.5.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deckarep/golang-set/v2 v2.6.0 h1:XfcQbWM1LlMB8BsJ8N9vW5ehnnPVIw0je80NsVHagjM=
@@ -40,8 +40,8 @@ github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 h1:YLtO71vCjJRCBcrPMtQ9nqBsqpA1
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
github.com/emicklei/dot v1.6.2 h1:08GN+DD79cy/tzN6uLCT84+2Wk9u+wvqP+Hkx/dIR8A=
github.com/emicklei/dot v1.6.2/go.mod h1:DeV7GvQtIw4h2u73RKBkkFdvVAz0D9fzeJrgPW6gy/s=
github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
github.com/ethereum/c-kzg-4844/v2 v2.1.6 h1:xQymkKCT5E2Jiaoqf3v4wsNgjZLY0lRSkZn27fRjSls=
github.com/ethereum/c-kzg-4844/v2 v2.1.6/go.mod h1:8HMkUZ5JRv4hpw/XUrYWSQNAUzhHMg2UDb/U+5m+XNw=
github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab h1:rvv6MJhy07IMfEKuARQ9TKojGqLVNxQajaXEp/BoqSk=
github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab/go.mod h1:IuLm4IsPipXKF7CW5Lzf68PIbZ5yl7FFd74l/E0o9A8=
github.com/ferranbt/fastssz v0.1.4 h1:OCDB+dYDEQDvAgtAGnTSidK1Pe2tW3nFV40XyMkTeDY=
@@ -111,36 +111,36 @@ github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible h1
github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe h1:nbdqkIGOGfUAD54q1s2YBcBz/WcsxCO9HUQ4aGV5hUw=
github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/supranational/blst v0.3.16 h1:bTDadT+3fK497EvLdWRQEjiGnUtzJ7jjIUMF0jqwYhE=
github.com/supranational/blst v0.3.16/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df h1:UA2aFVmmsIlefxMk29Dp2juaUSth8Pyn3Tq5Y5mJGME=
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@ -14,8 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build !example && !ziren && !wasm
// +build !example,!ziren,!wasm
//go:build !example && !ziren && !wasm && !womir
// +build !example,!ziren,!wasm,!womir
package main


@ -274,40 +274,66 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
reported = time.Now()
imported = 0
h = sha256.New()
scratch = bytes.NewBuffer(nil)
buf = bytes.NewBuffer(nil)
)
for i, file := range entries {
err := func() error {
path := filepath.Join(dir, file)
// validate against checksum file in directory
// Validate against checksum file in directory.
f, err := os.Open(path)
if err != nil {
return fmt.Errorf("open %s: %w", path, err)
}
defer f.Close()
if _, err := io.Copy(h, f); err != nil {
return fmt.Errorf("checksum %s: %w", path, err)
}
got := common.BytesToHash(h.Sum(scratch.Bytes()[:])).Hex()
want := checksums[i]
got := common.BytesToHash(h.Sum(buf.Bytes()[:])).Hex()
h.Reset()
scratch.Reset()
if got != want {
return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, want)
buf.Reset()
if got != checksums[i] {
return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, checksums[i])
}
// Import all block data from Era1.
e, err := from(f)
if err != nil {
return fmt.Errorf("error opening era: %w", err)
}
defer e.Close()
it, err := e.Iterator()
if err != nil {
return fmt.Errorf("error creating iterator: %w", err)
}
var (
blocks = make([]*types.Block, 0, importBatchSize)
receiptsList = make([]types.Receipts, 0, importBatchSize)
flush = func() error {
if len(blocks) == 0 {
return nil
}
enc := types.EncodeBlockReceiptLists(receiptsList)
if _, err := chain.InsertReceiptChain(blocks, enc, math.MaxUint64); err != nil {
return fmt.Errorf("error inserting blocks %d-%d: %w",
blocks[0].NumberU64(), blocks[len(blocks)-1].NumberU64(), err)
}
imported += len(blocks)
if time.Since(reported) >= 8*time.Second {
head := blocks[len(blocks)-1].NumberU64()
log.Info("Importing Era files", "head", head, "imported", imported,
"elapsed", common.PrettyDuration(time.Since(start)))
imported = 0
reported = time.Now()
}
blocks = blocks[:0]
receiptsList = receiptsList[:0]
return nil
}
)
for it.Next() {
block, err := it.Block()
if err != nil {
@ -320,23 +346,18 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
if err != nil {
return fmt.Errorf("error reading receipts %d: %w", it.Number(), err)
}
enc := types.EncodeBlockReceiptLists([]types.Receipts{receipts})
if _, err := chain.InsertReceiptChain([]*types.Block{block}, enc, math.MaxUint64); err != nil {
return fmt.Errorf("error inserting body %d: %w", it.Number(), err)
}
imported++
if time.Since(reported) >= 8*time.Second {
log.Info("Importing Era files", "head", it.Number(), "imported", imported,
"elapsed", common.PrettyDuration(time.Since(start)))
imported = 0
reported = time.Now()
blocks = append(blocks, block)
receiptsList = append(receiptsList, receipts)
if len(blocks) == importBatchSize {
if err := flush(); err != nil {
return err
}
}
}
if err := it.Error(); err != nil {
return err
}
return nil
return flush()
}()
if err != nil {
return err


@ -218,7 +218,15 @@ var (
Usage: "Max number of elements (0 = no limit)",
Value: 0,
}
AccountFlag = &cli.StringFlag{
Name: "account",
Usage: "Specifies the account address or hash to traverse a single storage trie",
}
OutputFileFlag = &cli.StringFlag{
Name: "output",
Usage: "Writes the result in json to the output",
Value: "",
}
SnapshotFlag = &cli.BoolFlag{
Name: "snapshot",
Usage: `Enables snapshot-database mode (default = enable)`,
@ -256,9 +264,9 @@ var (
Usage: "Manually specify the bpo2 fork timestamp, overriding the bundled setting",
Category: flags.EthCategory,
}
OverrideVerkle = &cli.Uint64Flag{
Name: "override.verkle",
Usage: "Manually specify the Verkle fork timestamp, overriding the bundled setting",
OverrideUBT = &cli.Uint64Flag{
Name: "override.ubt",
Usage: "Manually specify the UBT fork timestamp, overriding the bundled setting",
Category: flags.EthCategory,
}
OverrideGenesisFlag = &cli.StringFlag{
@ -289,6 +297,12 @@ var (
Value: ethconfig.Defaults.EnableStateSizeTracking,
Category: flags.StateCategory,
}
BinTrieGroupDepthFlag = &cli.IntFlag{
Name: "bintrie.groupdepth",
Usage: "Number of levels per serialized group in binary trie (1-8, default 5). Lower values create smaller groups, and therefore more of them.",
Value: 5,
Category: flags.StateCategory,
}
StateHistoryFlag = &cli.Uint64Flag{
Name: "history.state",
Usage: "Number of recent blocks to retain state history for, only relevant in state.scheme=path (default = 90,000 blocks, 0 = entire chain)",
@ -315,7 +329,7 @@ var (
}
ChainHistoryFlag = &cli.StringFlag{
Name: "history.chain",
Usage: `Blockchain history retention ("all" or "postmerge")`,
Usage: `Blockchain history retention ("all", "postmerge", or "postprague")`,
Value: ethconfig.Defaults.HistoryMode.String(),
Category: flags.StateCategory,
}
@ -480,8 +494,8 @@ var (
// Performance tuning settings
CacheFlag = &cli.IntFlag{
Name: "cache",
Usage: "Megabytes of memory allocated to internal caching (default = 4096 mainnet full node, 128 light mode)",
Value: 1024,
Usage: "Megabytes of memory allocated to internal caching",
Value: 4096,
Category: flags.PerfCategory,
}
CacheDatabaseFlag = &cli.IntFlag{
@ -1016,6 +1030,13 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
Category: flags.MetricsCategory,
}
MetricsInfluxDBIntervalFlag = &cli.DurationFlag{
Name: "metrics.influxdb.interval",
Usage: "Interval between metrics reports to InfluxDB (with time unit, e.g. 10s)",
Value: metrics.DefaultConfig.InfluxDBInterval,
Category: flags.MetricsCategory,
}
MetricsEnableInfluxDBV2Flag = &cli.BoolFlag{
Name: "metrics.influxdbv2",
Usage: "Enable metrics export/push to an external InfluxDB v2 database",
@ -1052,19 +1073,19 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
RPCTelemetryEndpointFlag = &cli.StringFlag{
Name: "rpc.telemetry.endpoint",
Usage: "Defines where RPC telemetry is sent (e.g., http://localhost:4318)",
Usage: "Defines where RPC telemetry is sent (e.g., http://localhost:4318 or grpc://localhost:4317)",
Category: flags.APICategory,
}
RPCTelemetryUserFlag = &cli.StringFlag{
Name: "rpc.telemetry.username",
Usage: "HTTP Basic Auth username for OpenTelemetry",
Usage: "Basic Auth username for OpenTelemetry",
Category: flags.APICategory,
}
RPCTelemetryPasswordFlag = &cli.StringFlag{
Name: "rpc.telemetry.password",
Usage: "HTTP Basic Auth password for OpenTelemetry",
Usage: "Basic Auth password for OpenTelemetry",
Category: flags.APICategory,
}
@ -1569,7 +1590,9 @@ func setOpenTelemetry(ctx *cli.Context, cfg *node.Config) {
if ctx.IsSet(RPCTelemetryTagsFlag.Name) {
tcfg.Tags = ctx.String(RPCTelemetryTagsFlag.Name)
}
tcfg.SampleRatio = ctx.Float64(RPCTelemetrySampleRatioFlag.Name)
if ctx.IsSet(RPCTelemetrySampleRatioFlag.Name) {
tcfg.SampleRatio = ctx.Float64(RPCTelemetrySampleRatioFlag.Name)
}
if tcfg.Endpoint != "" && !tcfg.Enabled {
log.Warn(fmt.Sprintf("OpenTelemetry endpoint configured but telemetry is not enabled, use --%s to enable.", RPCTelemetryFlag.Name))
@ -1800,6 +1823,9 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
if ctx.IsSet(TrienodeHistoryFullValueCheckpointFlag.Name) {
cfg.NodeFullValueCheckpoint = uint32(ctx.Uint(TrienodeHistoryFullValueCheckpointFlag.Name))
}
if ctx.IsSet(BinTrieGroupDepthFlag.Name) {
cfg.BinTrieGroupDepth = ctx.Int(BinTrieGroupDepthFlag.Name)
}
if ctx.IsSet(StateSchemeFlag.Name) {
cfg.StateScheme = ctx.String(StateSchemeFlag.Name)
}
@ -1882,7 +1908,7 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
cfg.StatelessSelfValidation = ctx.Bool(VMStatelessSelfValidationFlag.Name)
}
// Auto-enable StatelessSelfValidation when witness stats are enabled
if ctx.Bool(VMWitnessStatsFlag.Name) {
if cfg.EnableWitnessStats {
cfg.StatelessSelfValidation = true
}
@ -2246,13 +2272,14 @@ func SetupMetrics(cfg *metrics.Config) {
bucket = cfg.InfluxDBBucket
organization = cfg.InfluxDBOrganization
tagsMap = SplitTagsFlag(cfg.InfluxDBTags)
interval = cfg.InfluxDBInterval
)
if enableExport {
log.Info("Enabling metrics export to InfluxDB")
go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
log.Info("Enabling metrics export to InfluxDB", "interval", interval)
go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, interval, endpoint, database, username, password, "geth.", tagsMap)
} else if enableExportV2 {
log.Info("Enabling metrics export to InfluxDB (v2)")
go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, token, bucket, organization, "geth.", tagsMap)
log.Info("Enabling metrics export to InfluxDB (v2)", "interval", interval)
go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, interval, endpoint, token, bucket, organization, "geth.", tagsMap)
}
// Expvar exporter.
@ -2415,6 +2442,7 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
StateHistory: ctx.Uint64(StateHistoryFlag.Name),
TrienodeHistory: ctx.Int64(TrienodeHistoryFlag.Name),
NodeFullValueCheckpoint: uint32(ctx.Uint(TrienodeHistoryFullValueCheckpointFlag.Name)),
BinTrieGroupDepth: ctx.Int(BinTrieGroupDepthFlag.Name),
// Disable transaction indexing/unindexing.
TxLookupLimit: -1,
@ -2457,8 +2485,6 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
}
vmcfg := vm.Config{
EnablePreimageRecording: ctx.Bool(VMEnableDebugFlag.Name),
EnableWitnessStats: ctx.Bool(VMWitnessStatsFlag.Name),
StatelessSelfValidation: ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name),
}
if ctx.IsSet(VMTraceFlag.Name) {
if name := ctx.String(VMTraceFlag.Name); name != "" {
@ -2472,6 +2498,9 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
}
options.VmConfig = vmcfg
options.StatelessSelfValidation = ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name)
options.EnableWitnessStats = ctx.Bool(VMWitnessStatsFlag.Name)
chain, err := core.NewBlockChain(chainDb, gspec, engine, options)
if err != nil {
Fatalf("Can't create BlockChain: %v", err)
@ -2497,10 +2526,10 @@ func MakeConsolePreloads(ctx *cli.Context) []string {
}
// MakeTrieDatabase constructs a trie database based on the configured scheme.
func MakeTrieDatabase(ctx *cli.Context, stack *node.Node, disk ethdb.Database, preimage bool, readOnly bool, isVerkle bool) *triedb.Database {
func MakeTrieDatabase(ctx *cli.Context, stack *node.Node, disk ethdb.Database, preimage bool, readOnly bool, isUBT bool) *triedb.Database {
config := &triedb.Config{
Preimages: preimage,
IsVerkle: isVerkle,
IsUBT: isUBT,
}
scheme, err := rawdb.ParseStateScheme(ctx.String(StateSchemeFlag.Name), disk)
if err != nil {


@ -155,7 +155,9 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
}
cfg.historyPruneBlock = new(uint64)
*cfg.historyPruneBlock = history.PrunePoints[params.MainnetGenesisHash].BlockNumber
if p, err := history.NewPolicy(history.KeepPostMerge, params.MainnetGenesisHash); err == nil {
*cfg.historyPruneBlock = p.Target.BlockNumber
}
case ctx.Bool(testSepoliaFlag.Name):
cfg.fsys = builtinTestFiles
if ctx.IsSet(filterQueryFileFlag.Name) {
@ -180,7 +182,9 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
}
cfg.historyPruneBlock = new(uint64)
*cfg.historyPruneBlock = history.PrunePoints[params.SepoliaGenesisHash].BlockNumber
if p, err := history.NewPolicy(history.KeepPostMerge, params.SepoliaGenesisHash); err == nil {
*cfg.historyPruneBlock = p.Target.BlockNumber
}
default:
cfg.fsys = os.DirFS(".")
cfg.filterQueryFile = ctx.String(filterQueryFileFlag.Name)


@ -25,12 +25,10 @@ import (
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc/eip1559"
"github.com/ethereum/go-ethereum/consensus/misc/eip4844"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/tracing"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie"
"github.com/holiman/uint256"
)
@ -272,6 +270,24 @@ func (beacon *Beacon) verifyHeader(chain consensus.ChainHeaderReader, header, pa
return err
}
}
// Verify the existence / non-existence of Amsterdam-specific header fields
amsterdam := chain.Config().IsAmsterdam(header.Number, header.Time)
if amsterdam {
if header.BlockAccessListHash == nil {
return errors.New("header is missing block access list hash")
}
if header.SlotNumber == nil {
return errors.New("header is missing slotNumber")
}
} else {
if header.BlockAccessListHash != nil {
return fmt.Errorf("invalid block access list hash: have %x, expected nil", *header.BlockAccessListHash)
}
if header.SlotNumber != nil {
return fmt.Errorf("invalid slotNumber: have %d, expected nil", *header.SlotNumber)
}
}
return nil
}
@ -341,33 +357,6 @@ func (beacon *Beacon) Finalize(chain consensus.ChainHeaderReader, header *types.
// No block reward which is issued by consensus layer instead.
}
// FinalizeAndAssemble implements consensus.Engine, setting the final state and
// assembling the block.
func (beacon *Beacon) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
if !beacon.IsPoSHeader(header) {
return beacon.ethone.FinalizeAndAssemble(chain, header, state, body, receipts)
}
shanghai := chain.Config().IsShanghai(header.Number, header.Time)
if shanghai {
// All blocks after Shanghai must include a withdrawals root.
if body.Withdrawals == nil {
body.Withdrawals = make([]*types.Withdrawal, 0)
}
} else {
if len(body.Withdrawals) > 0 {
return nil, errors.New("withdrawals set before Shanghai activation")
}
}
// Finalize and assemble the block.
beacon.Finalize(chain, header, state, body)
// Assign the final state root to header.
header.Root = state.IntermediateRoot(true)
// Assemble the final block.
return types.NewBlock(header, body, receipts, trie.NewStackTrie(nil)), nil
}
// Seal generates a new sealing request for the given input block and pushes
// the result into the given channel.
//


@ -33,7 +33,6 @@ import (
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/consensus/misc/eip1559"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
@ -42,7 +41,6 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
)
const (
@ -310,6 +308,8 @@ func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
case header.ParentBeaconRoot != nil:
return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
case header.SlotNumber != nil:
return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
}
// All basic checks passed, verify cascading fields
return c.verifyCascadingFields(chain, header, parents)
@ -577,22 +577,6 @@ func (c *Clique) Finalize(chain consensus.ChainHeaderReader, header *types.Heade
// No block rewards in PoA, so the state remains as is
}
// FinalizeAndAssemble implements consensus.Engine, ensuring no uncles are set,
// nor block rewards given, and returns the final block.
func (c *Clique) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
if len(body.Withdrawals) > 0 {
return nil, errors.New("clique does not support withdrawals")
}
// Finalize block
c.Finalize(chain, header, state, body)
// Assign the final state root to header.
header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
// Assemble and return the final block for sealing.
return types.NewBlock(header, &types.Body{Transactions: body.Transactions}, receipts, trie.NewStackTrie(nil)), nil
}
// Authorize injects a private key into the consensus engine to mint new blocks
// with.
func (c *Clique) Authorize(signer common.Address) {
@ -694,6 +678,9 @@ func encodeSigHeader(w io.Writer, header *types.Header) {
if header.ParentBeaconRoot != nil {
panic("unexpected parent beacon root value in clique")
}
if header.SlotNumber != nil {
panic("unexpected slot number value in clique")
}
if err := rlp.Encode(w, enc); err != nil {
panic("can't encode: " + err.Error())
}


@ -21,7 +21,6 @@ import (
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/params"
@ -87,13 +86,6 @@ type Engine interface {
// that happen at finalization (e.g. block rewards).
Finalize(chain ChainHeaderReader, header *types.Header, state vm.StateDB, body *types.Body)
// FinalizeAndAssemble runs any post-transaction state modifications (e.g. block
// rewards or process withdrawals) and assembles the final block.
//
// Note: The block header and state database might be updated to reflect any
// consensus rules that happen at finalization (e.g. block rewards).
FinalizeAndAssemble(chain ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error)
// Seal generates a new sealing request for the given input block and pushes
// the result into the given channel.
//


@ -27,14 +27,12 @@ import (
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/consensus/misc/eip1559"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/tracing"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto/keccak"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/holiman/uint256"
)
@ -283,6 +281,8 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, pa
return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
case header.ParentBeaconRoot != nil:
return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
case header.SlotNumber != nil:
return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
}
// Add some fake checks for tests
if ethash.fakeDelay != nil {
@ -509,22 +509,6 @@ func (ethash *Ethash) Finalize(chain consensus.ChainHeaderReader, header *types.
accumulateRewards(chain.Config(), state, header, body.Uncles)
}
// FinalizeAndAssemble implements consensus.Engine, accumulating the block and
// uncle rewards, setting the final state and assembling the block.
func (ethash *Ethash) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
if len(body.Withdrawals) > 0 {
return nil, errors.New("ethash does not support withdrawals")
}
// Finalize block
ethash.Finalize(chain, header, state, body)
// Assign the final state root to header.
header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
// Header seems complete, assemble into a block and return
return types.NewBlock(header, &types.Body{Transactions: body.Transactions, Uncles: body.Uncles}, receipts, trie.NewStackTrie(nil)), nil
}
// SealHash returns the hash of a block prior to it being sealed.
func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
hasher := keccak.NewLegacyKeccak256()
@ -559,6 +543,9 @@ func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
if header.ParentBeaconRoot != nil {
panic("parent beacon root set on ethash")
}
if header.SlotNumber != nil {
panic("slot number set on ethash")
}
rlp.Encode(hasher, enc)
hasher.Sum(hash[:0])
return hash


@ -282,7 +282,7 @@ func (c *Console) AutoCompleteInput(line string, pos int) (string, []string, str
for ; start > 0; start-- {
// Skip all methods and namespaces (i.e. including the dot)
c := line[start]
if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '1' && c <= '9') {
if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') {
continue
}
// We've hit an unexpected character, autocomplete from here


@ -89,7 +89,7 @@ func genValueTx(nbytes int) func(int, *BlockGen) {
data := make([]byte, nbytes)
return func(i int, gen *BlockGen) {
toaddr := common.Address{}
gas, _ := IntrinsicGas(data, nil, nil, false, false, false, false)
cost, _ := IntrinsicGas(data, nil, nil, false, false, false, false)
signer := gen.Signer()
gasPrice := big.NewInt(0)
if gen.header.BaseFee != nil {
@ -99,7 +99,7 @@ func genValueTx(nbytes int) func(int, *BlockGen) {
Nonce: gen.TxNonce(benchRootAddr),
To: &toaddr,
Value: big.NewInt(1),
Gas: gas,
Gas: cost.RegularGas,
Data: data,
GasPrice: gasPrice,
})


@ -36,7 +36,7 @@ import (
)
var (
testVerkleChainConfig = &params.ChainConfig{
testUBTChainConfig = &params.ChainConfig{
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
@ -51,16 +51,16 @@ var (
LondonBlock: big.NewInt(0),
Ethash: new(params.EthashConfig),
ShanghaiTime: u64(0),
VerkleTime: u64(0),
UBTTime: u64(0),
TerminalTotalDifficulty: common.Big0,
EnableVerkleAtGenesis: true,
EnableUBTAtGenesis: true,
BlobScheduleConfig: &params.BlobScheduleConfig{
Verkle: params.DefaultPragueBlobConfig,
UBT: params.DefaultPragueBlobConfig,
},
}
)
func TestProcessVerkle(t *testing.T) {
func TestProcessUBT(t *testing.T) {
var (
code = common.FromHex(`6060604052600a8060106000396000f360606040526008565b00`)
intrinsicContractCreationGas, _ = IntrinsicGas(code, nil, nil, true, true, true, true)
@ -69,12 +69,12 @@ func TestProcessVerkle(t *testing.T) {
// Source: https://gist.github.com/gballet/a23db1e1cb4ed105616b5920feb75985
codeWithExtCodeCopy = common.FromHex(`0x60806040526040516100109061017b565b604051809103906000f08015801561002c573d6000803e3d6000fd5b506000806101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff16021790555034801561007857600080fd5b5060008067ffffffffffffffff8111156100955761009461024a565b5b6040519080825280601f01601f1916602001820160405280156100c75781602001600182028036833780820191505090505b50905060008060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1690506020600083833c81610101906101e3565b60405161010d90610187565b61011791906101a3565b604051809103906000f080158015610133573d6000803e3d6000fd5b50600160006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff160217905550505061029b565b60d58061046783390190565b6102068061053c83390190565b61019d816101d9565b82525050565b60006020820190506101b86000830184610194565b92915050565b6000819050602082019050919050565b600081519050919050565b6000819050919050565b60006101ee826101ce565b826101f8846101be565b905061020381610279565b925060208210156102435761023e7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff8360200360080261028e565b831692505b5050919050565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052604160045260246000fd5b600061028582516101d9565b80915050919050565b600082821b905092915050565b6101bd806102aa6000396000f3fe608060405234801561001057600080fd5b506004361061002b5760003560e01c8063f566852414610030575b600080fd5b61003861004e565b6040516100459190610146565b60405180910390f35b6000600160009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff166381ca91d36040518163ffffffff1660e01b815260040160206040518083038186803b1580156100b857600080fd5b505afa1580156100cc573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906100f0919061010a565b905090565b60008151905061010481610170565b92915050565b6000602082840312156101205761011f61016b565b5b600061012e848285016100f5565b91505092915050565b61014081610161565b82525050565b600060208201905061015b6000830184610137565b92915050565b6000819050919050565b600080fd5b61017981610161565b811461018457600080fd5b5056fea2646970667358221220a6a0e11af79f176f9c421b7b12f441356b25f6489b83d38cc828a701720b41f164736f6c63430008070033608060405234801561001057600080fd5b5060b68061001f6000396000f3fe6080604052348015600f57600080fd5b506004361060285760003560e01c8063ab5ed15014602d575b600080fd5b60336047565b604051603e9190605d565b60405180910390f35b60006001905090565b6057816076565b82525050565b6000602082019050607060008301846050565b92915050565b600081905091905056fea26469706673582212203a14eb0d5cd07c277d3e24912f110ddda3e553245a99afc4eeefb2fbae5327aa64736f6c63430008070033608060405234801561001057600080fd5b5060405161020638038061020683398181016040528101906100329190610063565b60018160001c6100429190610090565b60008190555050610145565b60008151905061005d8161012e565b92915050565b60006020828403121561007957610078610129565b5b60006100878482850161004e565b91505092915050565b600061009b826100f0565b91506100a6836100f0565b9250827fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff038211156100db576100da6100fa565b5b828201905092915050565b6000819050919050565b6000819050919050565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052601160045260246000fd5b600080fd5b610137816100e6565b811461014257600080fd5b50565b60b3806101536000396000f3fe6080604052348015600f57600080fd5b506004361060285760003560e01c806381ca91d314602d575b600080fd5b60336047565b604051603e9190605a565b60405180910390f35b60005481565b6054816073565b82525050565b6000602082019050606d6000830184604d565b92915050565b600081905091905056fea26469706673582212209bff7098a2f526de1ad499866f27d6d0d6f17b74a413036d6063ca6a0998ca4264736f6c63430008070033`)
intrinsicCodeWithExtCodeCopyGas, _ = IntrinsicGas(codeWithExtCodeCopy, nil, nil, true, true, true, true)
signer = types.LatestSigner(testVerkleChainConfig)
signer = types.LatestSigner(testUBTChainConfig)
testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
bcdb = rawdb.NewMemoryDatabase() // Database for the blockchain
coinbase = common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7")
gspec = &Genesis{
Config: testVerkleChainConfig,
Config: testUBTChainConfig,
Alloc: GenesisAlloc{
coinbase: {
Balance: big.NewInt(1000000000000000000), // 1 ether
@ -87,21 +87,22 @@ func TestProcessVerkle(t *testing.T) {
},
}
)
// Verkle trees use the snapshot, which must be enabled before the
// UBTs use the snapshot, which must be enabled before the
// data is saved into the tree+database.
// genesis := gspec.MustCommit(bcdb, triedb)
options := DefaultConfig().WithStateScheme(rawdb.PathScheme)
options.SnapshotLimit = 0
options.BinTrieGroupDepth = triedb.DefaultBinTrieGroupDepth
blockchain, _ := NewBlockChain(bcdb, gspec, beacon.New(ethash.NewFaker()), options)
defer blockchain.Stop()
txCost1 := params.TxGas
txCost2 := params.TxGas
contractCreationCost := intrinsicContractCreationGas +
contractCreationCost := intrinsicContractCreationGas.RegularGas +
params.WitnessChunkReadCost + params.WitnessChunkWriteCost + params.WitnessBranchReadCost + params.WitnessBranchWriteCost + /* creation */
params.WitnessChunkReadCost + params.WitnessChunkWriteCost + /* creation with value */
739 /* execution costs */
codeWithExtCodeCopyGas := intrinsicCodeWithExtCodeCopyGas +
codeWithExtCodeCopyGas := intrinsicCodeWithExtCodeCopyGas.RegularGas +
params.WitnessChunkReadCost + params.WitnessChunkWriteCost + params.WitnessBranchReadCost + params.WitnessBranchWriteCost + /* creation (tx) */
params.WitnessChunkReadCost + params.WitnessChunkWriteCost + params.WitnessBranchReadCost + params.WitnessBranchWriteCost + /* creation (CREATE at pc=0x20) */
params.WitnessChunkReadCost + params.WitnessChunkWriteCost + /* write code hash */
@ -188,7 +189,7 @@ func TestProcessParentBlockHash(t *testing.T) {
// block 1 parent hash is 0x0100....
// block 2 parent hash is 0x0200....
// etc
checkBlockHashes := func(statedb *state.StateDB, isVerkle bool) {
checkBlockHashes := func(statedb *state.StateDB, isUBT bool) {
statedb.SetNonce(params.HistoryStorageAddress, 1, tracing.NonceChangeUnspecified)
statedb.SetCode(params.HistoryStorageAddress, params.HistoryStorageCode, tracing.CodeChangeUnspecified)
// Process n blocks, from 1 .. num
@ -196,8 +197,8 @@ func TestProcessParentBlockHash(t *testing.T) {
for i := 1; i <= num; i++ {
header := &types.Header{ParentHash: common.Hash{byte(i)}, Number: big.NewInt(int64(i)), Difficulty: new(big.Int)}
chainConfig := params.MergedTestChainConfig
if isVerkle {
chainConfig = testVerkleChainConfig
if isUBT {
chainConfig = testUBTChainConfig
}
vmContext := NewEVMBlockContext(header, nil, new(common.Address))
evm := vm.NewEVM(vmContext, statedb, chainConfig, vm.Config{})
@ -205,9 +206,9 @@ func TestProcessParentBlockHash(t *testing.T) {
}
// Read block hashes for block 0 .. num-1
for i := 0; i < num; i++ {
have, want := getContractStoredBlockHash(statedb, uint64(i), isVerkle), common.Hash{byte(i + 1)}
have, want := getContractStoredBlockHash(statedb, uint64(i), isUBT), common.Hash{byte(i + 1)}
if have != want {
t.Errorf("block %d, verkle=%v, have parent hash %v, want %v", i, isVerkle, have, want)
t.Errorf("block %d, ubt=%v, have parent hash %v, want %v", i, isUBT, have, want)
}
}
}
@ -215,22 +216,23 @@ func TestProcessParentBlockHash(t *testing.T) {
statedb, _ := state.New(types.EmptyRootHash, state.NewDatabaseForTesting())
checkBlockHashes(statedb, false)
})
t.Run("Verkle", func(t *testing.T) {
t.Run("UBT", func(t *testing.T) {
db := rawdb.NewMemoryDatabase()
cacheConfig := DefaultConfig().WithStateScheme(rawdb.PathScheme)
cacheConfig.BinTrieGroupDepth = triedb.DefaultBinTrieGroupDepth
cacheConfig.SnapshotLimit = 0
triedb := triedb.NewDatabase(db, cacheConfig.triedbConfig(true))
statedb, _ := state.New(types.EmptyVerkleHash, state.NewDatabase(triedb, nil))
statedb, _ := state.New(types.EmptyBinaryHash, state.NewDatabase(triedb, nil))
checkBlockHashes(statedb, true)
})
}
// getContractStoredBlockHash is a utility method which reads the stored parent blockhash for block 'number'
func getContractStoredBlockHash(statedb *state.StateDB, number uint64, isVerkle bool) common.Hash {
func getContractStoredBlockHash(statedb *state.StateDB, number uint64, isUBT bool) common.Hash {
ringIndex := number % params.HistoryServeWindow
var key common.Hash
binary.BigEndian.PutUint64(key[24:], ringIndex)
if isVerkle {
if isUBT {
return statedb.GetState(params.HistoryStorageAddress, key)
}
return statedb.GetState(params.HistoryStorageAddress, key)


@ -93,9 +93,7 @@ var (
accountReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/account/single/reads", nil)
storageReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/storage/single/reads", nil)
codeReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/code/single/reads", nil)
snapshotCommitTimer = metrics.NewRegisteredResettingTimer("chain/snapshot/commits", nil)
triedbCommitTimer = metrics.NewRegisteredResettingTimer("chain/triedb/commits", nil)
triedbCommitTimer = metrics.NewRegisteredResettingTimer("chain/triedb/commits", nil)
blockInsertTimer = metrics.NewRegisteredResettingTimer("chain/inserts", nil)
blockValidationTimer = metrics.NewRegisteredResettingTimer("chain/validation", nil)
@@ -172,9 +170,10 @@ type BlockChainConfig struct {
TrieNoAsyncFlush bool // Whether the asynchronous buffer flushing is disallowed
TrieJournalDirectory string // Directory path to the journal used for persisting trie data across node restarts
Preimages bool // Whether to store preimage of trie key to the disk
StateScheme string // Scheme used to store ethereum states and merkle tree nodes on top
ArchiveMode bool // Whether to enable the archive mode
Preimages bool // Whether to store preimage of trie key to the disk
StateScheme string // Scheme used to store ethereum states and merkle tree nodes on top
ArchiveMode bool // Whether to enable the archive mode
BinTrieGroupDepth int // Number of levels per serialized group in binary trie (1-8)
// Number of blocks from the chain head for which state histories are retained.
// If set to 0, all state histories across the entire chain will be retained;
@@ -196,9 +195,8 @@ type BlockChainConfig struct {
SnapshotNoBuild bool // Whether the background generation is allowed
SnapshotWait bool // Wait for snapshot construction on startup. TODO(karalabe): This is a dirty hack for testing, nuke it
// This defines the cutoff block for history expiry.
// Blocks before this number may be unavailable in the chain database.
ChainHistoryMode history.HistoryMode
// HistoryPolicy defines the chain history pruning intent.
HistoryPolicy history.HistoryPolicy
// Misc options
NoPrefetch bool // Whether to disable heuristic state prefetching when processing blocks
@@ -219,19 +217,23 @@ type BlockChainConfig struct {
// detailed statistics will be logged. Negative value means disabled (default),
// zero logs all blocks, positive value filters blocks by execution time.
SlowBlockThreshold time.Duration
// Execution configs
StatelessSelfValidation bool // Generate execution witnesses and self-check against them (testing purpose)
EnableWitnessStats bool // Whether trie access statistics collection is enabled
}
// DefaultConfig returns the default config.
// Note the returned object is safe to modify!
func DefaultConfig() *BlockChainConfig {
return &BlockChainConfig{
TrieCleanLimit: 256,
TrieDirtyLimit: 256,
TrieTimeLimit: 5 * time.Minute,
StateScheme: rawdb.HashScheme,
SnapshotLimit: 256,
SnapshotWait: true,
ChainHistoryMode: history.KeepAll,
TrieCleanLimit: 256,
TrieDirtyLimit: 256,
TrieTimeLimit: 5 * time.Minute,
StateScheme: rawdb.HashScheme,
SnapshotLimit: 256,
SnapshotWait: true,
HistoryPolicy: history.HistoryPolicy{Mode: history.KeepAll},
// Transaction indexing is disabled by default.
// This is appropriate for most unit tests.
TxLookupLimit: -1,
@@ -257,10 +259,11 @@ func (cfg BlockChainConfig) WithNoAsyncFlush(on bool) *BlockChainConfig {
}
// triedbConfig derives the configures for trie database.
func (cfg *BlockChainConfig) triedbConfig(isVerkle bool) *triedb.Config {
func (cfg *BlockChainConfig) triedbConfig(isUBT bool) *triedb.Config {
config := &triedb.Config{
Preimages: cfg.Preimages,
IsVerkle: isVerkle,
Preimages: cfg.Preimages,
IsUBT: isUBT,
BinTrieGroupDepth: cfg.BinTrieGroupDepth,
}
if cfg.StateScheme == rawdb.HashScheme {
config.HashDB = &hashdb.Config{
@@ -322,7 +325,7 @@ type BlockChain struct {
lastWrite uint64 // Last block when the state was flushed
flushInterval atomic.Int64 // Time interval (processing time) after which to flush a state
triedb *triedb.Database // The database handler for maintaining trie nodes.
statedb *state.CachingDB // State database to reuse between imports (contains state cache)
codedb *state.CodeDB // The database handler for maintaining contract codes.
txIndexer *txIndexer // Transaction indexer, might be nil if not enabled
hc *HeaderChain
@@ -377,7 +380,7 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
}
// Open trie database with provided config
enableVerkle, err := EnableVerkleAtGenesis(db, genesis)
enableVerkle, err := EnableUBTAtGenesis(db, genesis)
if err != nil {
return nil, err
}
@@ -404,6 +407,7 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
cfg: cfg,
db: db,
triedb: triedb,
codedb: state.NewCodeDB(db),
triegc: prque.New[int64, common.Hash](nil),
chainmu: syncx.NewClosableMutex(),
bodyCache: lru.NewCache[common.Hash, *types.Body](bodyCacheLimit),
@@ -420,7 +424,6 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
return nil, err
}
bc.flushInterval.Store(int64(cfg.TrieTimeLimit))
bc.statedb = state.NewDatabase(bc.triedb, nil)
bc.validator = NewBlockValidator(chainConfig, bc)
bc.prefetcher = newStatePrefetcher(chainConfig, bc.hc)
bc.processor = NewStateProcessor(bc.hc)
@@ -597,9 +600,6 @@ func (bc *BlockChain) setupSnapshot() {
AsyncBuild: !bc.cfg.SnapshotWait,
}
bc.snaps, _ = snapshot.New(snapconfig, bc.db, bc.triedb, head.Root)
// Re-initialize the state database with snapshot
bc.statedb = state.NewDatabase(bc.triedb, bc.snaps)
}
}
@@ -717,45 +717,43 @@ func (bc *BlockChain) loadLastState() error {
// initializeHistoryPruning sets bc.historyPrunePoint.
func (bc *BlockChain) initializeHistoryPruning(latest uint64) error {
freezerTail, _ := bc.db.Tail()
policy := bc.cfg.HistoryPolicy
switch bc.cfg.ChainHistoryMode {
switch policy.Mode {
case history.KeepAll:
if freezerTail == 0 {
return nil
if freezerTail > 0 {
// Database was pruned externally. Record the actual state.
log.Warn("Chain history database is pruned", "tail", freezerTail, "mode", policy.Mode)
bc.historyPrunePoint.Store(&history.PrunePoint{
BlockNumber: freezerTail,
BlockHash: bc.GetCanonicalHash(freezerTail),
})
}
// The database was pruned somehow, so we need to figure out if it's a known
// configuration or an error.
predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
if predefinedPoint == nil || freezerTail != predefinedPoint.BlockNumber {
log.Error("Chain history database is pruned with unknown configuration", "tail", freezerTail)
return errors.New("unexpected database tail")
}
bc.historyPrunePoint.Store(predefinedPoint)
return nil
case history.KeepPostMerge:
if freezerTail == 0 && latest != 0 {
// This is the case where a user is trying to run with --history.chain
// postmerge directly on an existing DB. We could just trigger the pruning
// here, but it'd be a bit dangerous since they may not have intended this
// action to happen. So just tell them how to do it.
log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned.", bc.cfg.ChainHistoryMode.String()))
log.Error(fmt.Sprintf("Run 'geth prune-history' to prune pre-merge history."))
return errors.New("history pruning requested via configuration")
case history.KeepPostMerge, history.KeepPostPrague:
target := policy.Target
// Already at the target.
if freezerTail == target.BlockNumber {
bc.historyPrunePoint.Store(target)
return nil
}
predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
if predefinedPoint == nil {
log.Error("Chain history pruning is not supported for this network", "genesis", bc.genesisBlock.Hash())
return errors.New("history pruning requested for unknown network")
} else if freezerTail > 0 && freezerTail != predefinedPoint.BlockNumber {
log.Error("Chain history database is pruned to unknown block", "tail", freezerTail)
return errors.New("unexpected database tail")
// Database is pruned beyond the target.
if freezerTail > target.BlockNumber {
return fmt.Errorf("database pruned beyond requested history (tail=%d, target=%d)", freezerTail, target.BlockNumber)
}
bc.historyPrunePoint.Store(predefinedPoint)
// Database needs pruning (freezerTail < target).
if latest != 0 {
log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned to the target block.", policy.Mode.String()))
log.Error(fmt.Sprintf("Run 'geth prune-history --history.chain %s' to prune history.", policy.Mode.String()))
return errors.New("history pruning required")
}
// Fresh database (latest == 0), will sync from target point.
bc.historyPrunePoint.Store(target)
return nil
default:
return fmt.Errorf("invalid history mode: %d", bc.cfg.ChainHistoryMode)
return fmt.Errorf("invalid history mode: %d", policy.Mode)
}
}
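The rewritten initializeHistoryPruning above replaces the hard-coded PrunePoints lookup with a decision on the policy target. The branch order can be sketched as a pure function; this is a simplified sketch with flattened uint64 arguments and hypothetical outcome labels, not the real method (which also records the prune point and handles KeepAll separately).

```go
package main

import "fmt"

// pruneAction mirrors the KeepPostMerge/KeepPostPrague decision table:
// compare the freezer tail against the policy target and the chain head.
func pruneAction(freezerTail, target, latest uint64) string {
	switch {
	case freezerTail == target:
		return "at-target" // already pruned to the target, just record it
	case freezerTail > target:
		return "over-pruned" // pruned beyond the request: hard error
	case latest != 0:
		return "needs-pruning" // existing chain: user must run prune-history
	default:
		return "fresh-sync" // empty database: sync from the target point
	}
}

func main() {
	fmt.Println(pruneAction(0, 100, 0)) // prints fresh-sync
}
```

Note that "needs-pruning" is only an error for a non-empty chain; a fresh database simply adopts the target as its prune point.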
@@ -1279,6 +1277,8 @@ func (bc *BlockChain) ExportN(w io.Writer, first uint64, last uint64) error {
func (bc *BlockChain) writeHeadBlock(block *types.Block) {
// Add the block to the canonical chain number scheme and mark as the head
batch := bc.db.NewBatch()
defer batch.Close()
rawdb.WriteHeadHeaderHash(batch, block.Hash())
rawdb.WriteHeadFastBlockHash(batch, block.Hash())
rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64())
@@ -1653,6 +1653,8 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
batch = bc.db.NewBatch()
start = time.Now()
)
defer batch.Close()
rawdb.WriteBlock(batch, block)
rawdb.WriteReceipts(batch, block.Hash(), block.NumberU64(), receipts)
rawdb.WritePreimages(batch, statedb.Preimages())
@@ -1990,7 +1992,15 @@ func (bc *BlockChain) insertChain(ctx context.Context, chain types.Blocks, setHe
}
// The traced section of block import.
start := time.Now()
res, err := bc.ProcessBlock(ctx, parent.Root, block, setHead, makeWitness && len(chain) == 1)
config := ExecuteConfig{
WriteState: true,
WriteHead: setHead,
EnableTracer: true,
MakeWitness: makeWitness && len(chain) == 1,
StatelessSelfValidation: bc.cfg.StatelessSelfValidation,
EnableWitnessStats: bc.cfg.EnableWitnessStats,
}
res, err := bc.ProcessBlock(ctx, parent.Root, block, config)
if err != nil {
return nil, it.index, err
}
@@ -2073,19 +2083,65 @@ func (bpr *blockProcessingResult) Stats() *ExecuteStats {
return bpr.stats
}
// ExecuteConfig defines optional behaviors during execution.
type ExecuteConfig struct {
// WriteState controls whether the computed state changes are persisted to
// the underlying storage. If false, execution is performed in-memory only.
WriteState bool
// WriteHead indicates whether the execution result should update the canonical
// chain head. It is only relevant when WriteState is true.
WriteHead bool
// EnableTracer enables execution tracing. This is typically used for debugging
// or analysis and may significantly impact performance.
EnableTracer bool
// MakeWitness indicates whether to generate execution witness data during
// execution. Enabling this may introduce additional memory and CPU overhead.
MakeWitness bool
// StatelessSelfValidation indicates whether the execution witnesses generation
// and self-validation (testing purpose) is enabled.
StatelessSelfValidation bool
// EnableWitnessStats indicates whether to enable collection of witness trie
// access statistics
EnableWitnessStats bool
}
// ProcessBlock executes and validates the given block. If there was no error
// it writes the block and associated state to database.
func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, setHead bool, makeWitness bool) (result *blockProcessingResult, blockEndErr error) {
func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, config ExecuteConfig) (result *blockProcessingResult, blockEndErr error) {
var (
err error
startTime = time.Now()
statedb *state.StateDB
interrupt atomic.Bool
sdb state.Database
)
defer interrupt.Store(true) // terminate the prefetch at the end
if bc.cfg.NoPrefetch {
statedb, err = state.New(parentRoot, bc.statedb)
if bc.chainConfig.IsUBT(block.Number(), block.Time()) {
sdb = state.NewUBTDatabase(bc.triedb, bc.codedb)
} else {
sdb = state.NewMPTDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps)
}
// If prefetching is enabled, run that against the current state to pre-cache
// transactions and probabilistically some of the account/storage trie nodes.
//
// Note: the main processor and prefetcher share the same reader with a local
// cache for mitigating the overhead of state access.
type prewarmReader interface {
// ReadersWithCacheStats creates a pair of state readers that share the
// same underlying state reader and internal state cache, while maintaining
// separate statistics respectively.
ReadersWithCacheStats(stateRoot common.Hash) (state.Reader, state.Reader, error)
}
warmer, ok := sdb.(prewarmReader)
if bc.cfg.NoPrefetch || !ok {
statedb, err = state.New(parentRoot, sdb)
if err != nil {
return nil, err
}
@@ -2095,23 +2151,27 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
//
// Note: the main processor and prefetcher share the same reader with a local
// cache for mitigating the overhead of state access.
prefetch, process, err := bc.statedb.ReadersWithCacheStats(parentRoot)
prefetch, process, err := warmer.ReadersWithCacheStats(parentRoot)
if err != nil {
return nil, err
}
throwaway, err := state.NewWithReader(parentRoot, bc.statedb, prefetch)
throwaway, err := state.NewWithReader(parentRoot, sdb, prefetch)
if err != nil {
return nil, err
}
statedb, err = state.NewWithReader(parentRoot, bc.statedb, process)
statedb, err = state.NewWithReader(parentRoot, sdb, process)
if err != nil {
return nil, err
}
// Upload the statistics of reader at the end
defer func() {
if result != nil {
result.stats.StatePrefetchCacheStats = prefetch.GetStats()
result.stats.StateReadCacheStats = process.GetStats()
if stater, ok := prefetch.(state.ReaderStater); ok {
result.stats.StatePrefetchCacheStats = stater.GetStats()
}
if stater, ok := process.(state.ReaderStater); ok {
result.stats.StateReadCacheStats = stater.GetStats()
}
}
}()
go func(start time.Time, throwaway *state.StateDB, block *types.Block) {
@@ -2130,38 +2190,35 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
// If we are past Byzantium, enable prefetching to pull in trie node paths
// while processing transactions. Before Byzantium the prefetcher is mostly
// useless due to the intermediate root hashing after each transaction.
var (
witness *stateless.Witness
witnessStats *stateless.WitnessStats
)
var witness *stateless.Witness
if bc.chainConfig.IsByzantium(block.Number()) {
// Generate witnesses either if we're self-testing, or if it's the
// only block being inserted. A bit crude, but witnesses are huge,
// so we refuse to make an entire chain of them.
if bc.cfg.VmConfig.StatelessSelfValidation || makeWitness {
witness, err = stateless.NewWitness(block.Header(), bc)
if config.StatelessSelfValidation || config.MakeWitness {
witness, err = stateless.NewWitness(block.Header(), bc, config.EnableWitnessStats)
if err != nil {
return nil, err
}
if bc.cfg.VmConfig.EnableWitnessStats {
witnessStats = stateless.NewWitnessStats()
}
}
statedb.StartPrefetcher("chain", witness, witnessStats)
statedb.StartPrefetcher("chain", witness)
defer statedb.StopPrefetcher()
}
if bc.logger != nil && bc.logger.OnBlockStart != nil {
bc.logger.OnBlockStart(tracing.BlockEvent{
Block: block,
Finalized: bc.CurrentFinalBlock(),
Safe: bc.CurrentSafeBlock(),
})
}
if bc.logger != nil && bc.logger.OnBlockEnd != nil {
defer func() {
bc.logger.OnBlockEnd(blockEndErr)
}()
// Instrument the blockchain tracing
if config.EnableTracer {
if bc.logger != nil && bc.logger.OnBlockStart != nil {
bc.logger.OnBlockStart(tracing.BlockEvent{
Block: block,
Finalized: bc.CurrentFinalBlock(),
Safe: bc.CurrentSafeBlock(),
})
}
if bc.logger != nil && bc.logger.OnBlockEnd != nil {
defer func() {
bc.logger.OnBlockEnd(blockEndErr)
}()
}
}
// Process block using the parent state as reference point
@@ -2191,7 +2248,7 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
// witness builder/runner, which would otherwise be impossible due to the
// various invalid chain states/behaviors being contained in those tests.
xvstart := time.Now()
if witness := statedb.Witness(); witness != nil && bc.cfg.VmConfig.StatelessSelfValidation {
if witness := statedb.Witness(); witness != nil && config.StatelessSelfValidation {
log.Warn("Running stateless self-validation", "block", block.Number(), "hash", block.Hash())
// Remove critical computed fields from the block to force true recalculation
@@ -2244,31 +2301,28 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
stats.CrossValidation = xvtime // The time spent on stateless cross validation
// Write the block to the chain and get the status.
var (
wstart = time.Now()
status WriteStatus
)
if !setHead {
// Don't set the head, only insert the block
err = bc.writeBlockWithState(block, res.Receipts, statedb)
} else {
status, err = bc.writeBlockAndSetHead(block, res.Receipts, res.Logs, statedb, false)
}
if err != nil {
return nil, err
var status WriteStatus
if config.WriteState {
wstart := time.Now()
if !config.WriteHead {
// Don't set the head, only insert the block
err = bc.writeBlockWithState(block, res.Receipts, statedb)
} else {
status, err = bc.writeBlockAndSetHead(block, res.Receipts, res.Logs, statedb, false)
}
if err != nil {
return nil, err
}
// Update the metrics touched during block commit
stats.AccountCommits = statedb.AccountCommits // Account commits are complete, we can mark them
stats.StorageCommits = statedb.StorageCommits // Storage commits are complete, we can mark them
stats.DatabaseCommit = statedb.DatabaseCommits // Database commits are complete, we can mark them
stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.DatabaseCommits
}
// Report the collected witness statistics
if witnessStats != nil {
witnessStats.ReportMetrics(block.NumberU64())
if witness != nil {
witness.ReportMetrics(block.NumberU64())
}
// Update the metrics touched during block commit
stats.AccountCommits = statedb.AccountCommits // Account commits are complete, we can mark them
stats.StorageCommits = statedb.StorageCommits // Storage commits are complete, we can mark them
stats.SnapshotCommit = statedb.SnapshotCommits // Snapshot commits are complete, we can mark them
stats.TrieDBCommit = statedb.TrieDBCommits // Trie database commits are complete, we can mark them
stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.SnapshotCommits - statedb.TrieDBCommits
elapsed := time.Since(startTime) + 1 // prevent zero division
stats.TotalTime = elapsed
stats.MgasPerSecond = float64(res.GasUsed) * 1000 / float64(elapsed)
@@ -2549,6 +2603,7 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
// as the txlookups should be changed atomically, and all subsequent
// reads should be blocked until the mutation is complete.
bc.txLookupLock.Lock()
defer bc.txLookupLock.Unlock()
// Reorg can be executed, start reducing the chain's old blocks and appending
// the new blocks
@@ -2626,6 +2681,8 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
// Delete useless indexes right now which includes the non-canonical
// transaction indexes, canonical chain indexes which above the head.
batch := bc.db.NewBatch()
defer batch.Close()
for _, tx := range types.HashDifference(deletedTxs, rebirthTxs) {
rawdb.DeleteTxLookupEntry(batch, tx)
}
@@ -2649,9 +2706,6 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
// Reset the tx lookup cache to clear stale txlookup cache.
bc.txLookupCache.Purge()
// Release the tx-lookup lock after mutation.
bc.txLookupLock.Unlock()
return nil
}


@@ -296,6 +296,14 @@ func (bc *BlockChain) GetReceiptsRLP(hash common.Hash) rlp.RawValue {
return rawdb.ReadReceiptsRLP(bc.db, hash, number)
}
func (bc *BlockChain) GetAccessListRLP(hash common.Hash) rlp.RawValue {
number, ok := rawdb.ReadHeaderNumber(bc.db, hash)
if !ok {
return nil
}
return rawdb.ReadAccessListRLP(bc.db, hash, number)
}
// GetUnclesInChain retrieves all the uncles from a given block backwards until
// a specific distance is reached.
func (bc *BlockChain) GetUnclesInChain(block *types.Block, length int) []*types.Header {
@@ -371,7 +379,7 @@ func (bc *BlockChain) TxIndexDone() bool {
// HasState checks if state trie is fully present in the database or not.
func (bc *BlockChain) HasState(hash common.Hash) bool {
_, err := bc.statedb.OpenTrie(hash)
_, err := bc.triedb.NodeReader(hash)
return err == nil
}
@@ -403,24 +411,47 @@ func (bc *BlockChain) stateRecoverable(root common.Hash) bool {
func (bc *BlockChain) ContractCodeWithPrefix(hash common.Hash) []byte {
// TODO(rjl493456442) The associated account address is also required
// in Verkle scheme. Fix it once snap-sync is supported for Verkle.
return bc.statedb.ContractCodeWithPrefix(common.Address{}, hash)
return bc.codedb.Reader().CodeWithPrefix(common.Address{}, hash)
}
// State returns a new mutable state based on the current HEAD block.
func (bc *BlockChain) State() (*state.StateDB, error) {
return bc.StateAt(bc.CurrentBlock().Root)
return bc.StateAt(bc.CurrentBlock())
}
// StateAt returns a new mutable state based on a particular point in time.
func (bc *BlockChain) StateAt(root common.Hash) (*state.StateDB, error) {
return state.New(root, bc.statedb)
func (bc *BlockChain) StateAt(header *types.Header) (*state.StateDB, error) {
if bc.chainConfig.IsUBT(header.Number, header.Time) {
return state.New(header.Root, state.NewUBTDatabase(bc.triedb, bc.codedb))
}
return state.New(header.Root, state.NewMPTDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps))
}
// HistoricState returns a historic state specified by the given root.
// StateAtForkBoundary returns a new mutable state based on the parent state
// and the given header, handling the transition across the UBT fork.
func (bc *BlockChain) StateAtForkBoundary(parent *types.Header, header *types.Header) (*state.StateDB, error) {
// The parent is already in the UBT fork.
if bc.chainConfig.IsUBT(parent.Number, parent.Time) {
return state.New(parent.Root, state.NewUBTDatabase(bc.triedb, bc.codedb))
}
// The current block is the first block in the UBT fork
// (i.e., the parent is the last MPT block).
if bc.chainConfig.IsUBT(header.Number, header.Time) {
// TODO(gballet): register chain context if needed
return state.New(parent.Root, state.NewUBTDatabase(bc.triedb, bc.codedb))
}
// Both the parent and current block are in the MPT fork.
return state.New(parent.Root, state.NewMPTDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps))
}
// HistoricState returns a historic state specified by the given header.
// Live states are not available and won't be served, please use `State`
// or `StateAt` instead.
func (bc *BlockChain) HistoricState(root common.Hash) (*state.StateDB, error) {
return state.New(root, state.NewHistoricDatabase(bc.db, bc.triedb))
func (bc *BlockChain) HistoricState(header *types.Header) (*state.StateDB, error) {
if bc.chainConfig.IsUBT(header.Number, header.Time) {
return nil, errors.New("historical state over ubt is not yet supported")
}
return state.New(header.Root, state.NewHistoricDatabase(bc.triedb, bc.codedb))
}
// Config retrieves the chain's fork configuration.
@@ -444,11 +475,6 @@ func (bc *BlockChain) Processor() Processor {
return bc.processor
}
// StateCache returns the caching database underpinning the blockchain instance.
func (bc *BlockChain) StateCache() state.Database {
return bc.statedb
}
// GasLimit returns the gas limit of the current HEAD block.
func (bc *BlockChain) GasLimit() uint64 {
return bc.CurrentBlock().GasLimit
@@ -473,7 +499,7 @@ func (bc *BlockChain) TxIndexProgress() (TxIndexProgress, error) {
}
// StateIndexProgress returns the historical state indexing progress.
func (bc *BlockChain) StateIndexProgress() (uint64, error) {
func (bc *BlockChain) StateIndexProgress() (uint64, uint64, error) {
return bc.triedb.IndexProgress()
}
@@ -492,6 +518,11 @@ func (bc *BlockChain) TrieDB() *triedb.Database {
return bc.triedb
}
// CodeDB retrieves the low level contract code database used for data storage.
func (bc *BlockChain) CodeDB() *state.CodeDB {
return bc.codedb
}
// HeaderChain returns the underlying header chain.
func (bc *BlockChain) HeaderChain() *HeaderChain {
return bc.hc


@@ -30,7 +30,6 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethdb/pebble"
"github.com/ethereum/go-ethereum/params"
@@ -2041,7 +2040,6 @@ func testSetHeadWithScheme(t *testing.T, tt *rewindTest, snapshots bool, scheme
dbconfig.HashDB = hashdb.Defaults
}
chain.triedb = triedb.NewDatabase(chain.db, dbconfig)
chain.statedb = state.NewDatabase(chain.triedb, chain.snaps)
// Force run a freeze cycle
type freezer interface {


@@ -52,8 +52,7 @@ type ExecuteStats struct {
Execution time.Duration // Time spent on the EVM execution
Validation time.Duration // Time spent on the block validation
CrossValidation time.Duration // Optional, time spent on the block cross validation
SnapshotCommit time.Duration // Time spent on snapshot commit
TrieDBCommit time.Duration // Time spent on database commit
DatabaseCommit time.Duration // Time spent on database commit
BlockWrite time.Duration // Time spent on block write
TotalTime time.Duration // The total time spent on block execution
MgasPerSecond float64 // The million gas processed per second
@@ -87,22 +86,21 @@ func (s *ExecuteStats) reportMetrics() {
blockExecutionTimer.Update(s.Execution) // The time spent on EVM processing
blockValidationTimer.Update(s.Validation) // The time spent on block validation
blockCrossValidationTimer.Update(s.CrossValidation) // The time spent on stateless cross validation
snapshotCommitTimer.Update(s.SnapshotCommit) // Snapshot commits are complete, we can mark them
triedbCommitTimer.Update(s.TrieDBCommit) // Trie database commits are complete, we can mark them
triedbCommitTimer.Update(s.DatabaseCommit) // Trie database commits are complete, we can mark them
blockWriteTimer.Update(s.BlockWrite) // The time spent on block write
blockInsertTimer.Update(s.TotalTime) // The total time spent on block execution
chainMgaspsMeter.Update(time.Duration(s.MgasPerSecond)) // TODO(rjl493456442) generalize the ResettingTimer
// Cache hit rates
accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheHit)
accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheMiss)
storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheHit)
storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheMiss)
accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheHit)
accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheMiss)
storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheHit)
storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheMiss)
accountCacheHitMeter.Mark(s.StateReadCacheStats.AccountCacheHit)
accountCacheMissMeter.Mark(s.StateReadCacheStats.AccountCacheMiss)
storageCacheHitMeter.Mark(s.StateReadCacheStats.StorageCacheHit)
storageCacheMissMeter.Mark(s.StateReadCacheStats.StorageCacheMiss)
accountCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheHit)
accountCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheMiss)
storageCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheHit)
storageCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheMiss)
}
// slowBlockLog represents the JSON structure for slow block logging.
@@ -177,14 +175,6 @@ type slowBlockCodeCacheEntry struct {
MissBytes int64 `json:"miss_bytes"`
}
// calculateHitRate computes the cache hit rate as a percentage (0-100).
func calculateHitRate(hits, misses int64) float64 {
if total := hits + misses; total > 0 {
return float64(hits) / float64(total) * 100.0
}
return 0.0
}
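The calculateHitRate helper deleted here is replaced by HitRate()-style methods on the stats types (per the call sites in this diff). The arithmetic itself is unchanged and can be sketched as a method; the codeStats struct below is a hypothetical stand-in, not the real stats type.

```go
package main

import "fmt"

// codeStats stands in for the real stats struct; only the counters the
// deleted calculateHitRate helper needed are reproduced here.
type codeStats struct {
	CacheHit, CacheMiss int64
}

// HitRate reports the percentage of hits out of all lookups, returning 0
// when there were no lookups to avoid division by zero, exactly as the
// removed free function did.
func (s codeStats) HitRate() float64 {
	if total := s.CacheHit + s.CacheMiss; total > 0 {
		return float64(s.CacheHit) / float64(total) * 100.0
	}
	return 0.0
}

func main() {
	s := codeStats{CacheHit: 3, CacheMiss: 1}
	fmt.Println(s.HitRate()) // prints 75
}
```

Moving the computation onto the stats type keeps the hit/miss pair and the rate derivation in one place instead of threading raw counters through logSlow.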
// durationToMs converts a time.Duration to milliseconds as a float64
// with sub-millisecond precision for accurate cross-client metrics.
func durationToMs(d time.Duration) float64 {
@@ -216,7 +206,7 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
ExecutionMs: durationToMs(s.Execution),
StateReadMs: durationToMs(s.AccountReads + s.StorageReads + s.CodeReads),
StateHashMs: durationToMs(s.AccountHashes + s.AccountUpdates + s.StorageUpdates),
CommitMs: durationToMs(max(s.AccountCommits, s.StorageCommits) + s.TrieDBCommit + s.SnapshotCommit + s.BlockWrite),
CommitMs: durationToMs(max(s.AccountCommits, s.StorageCommits) + s.DatabaseCommit + s.BlockWrite),
TotalMs: durationToMs(s.TotalTime),
},
Throughput: slowBlockThru{
@@ -238,19 +228,19 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
},
Cache: slowBlockCache{
Account: slowBlockCacheEntry{
Hits: s.StateReadCacheStats.AccountCacheHit,
Misses: s.StateReadCacheStats.AccountCacheMiss,
HitRate: calculateHitRate(s.StateReadCacheStats.AccountCacheHit, s.StateReadCacheStats.AccountCacheMiss),
Hits: s.StateReadCacheStats.StateStats.AccountCacheHit,
Misses: s.StateReadCacheStats.StateStats.AccountCacheMiss,
HitRate: s.StateReadCacheStats.StateStats.AccountCacheHitRate(),
},
Storage: slowBlockCacheEntry{
Hits: s.StateReadCacheStats.StorageCacheHit,
Misses: s.StateReadCacheStats.StorageCacheMiss,
HitRate: calculateHitRate(s.StateReadCacheStats.StorageCacheHit, s.StateReadCacheStats.StorageCacheMiss),
Hits: s.StateReadCacheStats.StateStats.StorageCacheHit,
Misses: s.StateReadCacheStats.StateStats.StorageCacheMiss,
HitRate: s.StateReadCacheStats.StateStats.StorageCacheHitRate(),
},
Code: slowBlockCodeCacheEntry{
Hits: s.StateReadCacheStats.CodeStats.CacheHit,
Misses: s.StateReadCacheStats.CodeStats.CacheMiss,
HitRate: calculateHitRate(s.StateReadCacheStats.CodeStats.CacheHit, s.StateReadCacheStats.CodeStats.CacheMiss),
HitRate: s.StateReadCacheStats.CodeStats.HitRate(),
HitBytes: s.StateReadCacheStats.CodeStats.CacheHitBytes,
MissBytes: s.StateReadCacheStats.CodeStats.CacheMissBytes,
},


@@ -36,7 +36,6 @@ import (
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/beacon"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core/history"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
@@ -157,7 +156,7 @@ func testBlockChainImport(chain types.Blocks, blockchain *BlockChain) error {
}
return err
}
statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), blockchain.statedb)
statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), state.NewDatabase(blockchain.triedb, blockchain.codedb))
if err != nil {
return err
}
@@ -3891,7 +3890,7 @@ func TestTransientStorageReset(t *testing.T) {
t.Fatalf("failed to insert into chain: %v", err)
}
// Check the storage
state, err := chain.StateAt(chain.CurrentHeader().Root)
state, err := chain.StateAt(chain.CurrentHeader())
if err != nil {
t.Fatalf("Failed to load state %v", err)
}
@@ -4337,26 +4336,13 @@ func TestInsertChainWithCutoff(t *testing.T) {
func testInsertChainWithCutoff(t *testing.T, cutoff uint64, ancientLimit uint64, genesis *Genesis, blocks []*types.Block, receipts []types.Receipts) {
// log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.LevelDebug, true)))
// Add a known pruning point for the duration of the test.
ghash := genesis.ToBlock().Hash()
cutoffBlock := blocks[cutoff-1]
history.PrunePoints[ghash] = &history.PrunePoint{
BlockNumber: cutoffBlock.NumberU64(),
BlockHash: cutoffBlock.Hash(),
}
defer func() {
delete(history.PrunePoints, ghash)
}()
// Enable pruning in cache config.
config := DefaultConfig().WithStateScheme(rawdb.PathScheme)
config.ChainHistoryMode = history.KeepPostMerge
db, _ := rawdb.Open(rawdb.NewMemoryDatabase(), rawdb.OpenOptions{})
defer db.Close()
options := DefaultConfig().WithStateScheme(rawdb.PathScheme)
chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), options)
chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), DefaultConfig().WithStateScheme(rawdb.PathScheme))
defer chain.Stop()
var (
