Commit graph

16833 commits

Guillaume Ballet
305cd7b9eb
trie/bintrie: fix NodeIterator Empty node handling and expose tree accessors (#34056)
Fix three issues in the binary trie NodeIterator:

1. Empty nodes now properly backtrack to parent and continue iteration
instead of terminating the entire walk early.

2. `HashedNode` resolver handles `nil` data (all-zeros hash) gracefully
by treating it as Empty rather than panicking.

3. Parent update after node resolution guards against stack underflow
when resolving the root node itself.
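The backtracking fix in point 1 can be pictured with a toy iterator; the types below are invented for illustration, not the actual bintrie code. The essential change: an Empty child returns control to the parent so the walk continues, instead of terminating.

```go
package main

import "fmt"

// Hypothetical miniature of the bintrie node kinds.
type node interface{}

type empty struct{}
type leaf struct{ key string }
type internal struct{ left, right node }

// iterate walks the tree depth-first. The behavior mirrored here: an Empty
// node backtracks (returns) instead of aborting the whole walk.
func iterate(n node, out *[]string) {
	switch t := n.(type) {
	case empty:
		return // backtrack to parent, keep iterating siblings
	case leaf:
		*out = append(*out, t.key)
	case internal:
		iterate(t.left, out)
		iterate(t.right, out)
	}
}

func main() {
	tree := internal{
		left:  internal{left: empty{}, right: leaf{"a"}},
		right: leaf{"b"},
	}
	var keys []string
	iterate(tree, &keys)
	fmt.Println(keys) // an Empty left child no longer hides "a" or "b"
}
```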

---------

Co-authored-by: tellabg <249254436+tellabg@users.noreply.github.com>
2026-03-20 13:53:14 -04:00
CPerezz
77779d1098
core/state: bypass per-account updateTrie in IntermediateRoot for binary trie (#34022)
## Summary

In binary trie mode, `IntermediateRoot` calls `updateTrie()` once per
dirty account. But with the binary trie there is only one unified trie
(`OpenStorageTrie` returns `self`), so each call redundantly does
per-account trie setup: `getPrefetchedTrie`, `getTrie`, slice
allocations for deletions/used, and `prefetcher.used` — all for the same
trie pointer.

This PR replaces the per-account `updateTrie()` calls with a single flat
loop that applies all storage updates directly to `s.trie`. The MPT path
is unchanged. The prefetcher trie replacement is guarded to avoid
overwriting the binary trie that received updates.

This is the phase-1 counterpart to #34021 (H01). H01 fixes the commit
phase (`trie.Commit()` called N+1 times). This PR fixes the update phase
(`updateTrie()` called N times with redundant setup). Same root cause —
unified binary trie operated on per-account — different phases.
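A minimal sketch of the flat-loop idea, with invented types standing in for the state database and the unified trie: one pass over all dirty accounts writes straight into the shared trie, with no per-account setup.

```go
package main

import "fmt"

// trie is a stand-in for the single unified binary trie.
type trie map[string]string

type account struct {
	addr    string
	storage map[string]string
}

// applyFlat illustrates the pattern (names hypothetical): a single loop
// applying every storage update directly to the shared trie.
func applyFlat(shared trie, dirty []account) {
	for _, acct := range dirty {
		for slot, val := range acct.storage {
			shared[acct.addr+"/"+slot] = val
		}
	}
}

func main() {
	shared := trie{}
	dirty := []account{
		{"0xaa", map[string]string{"s0": "1"}},
		{"0xbb", map[string]string{"s0": "2", "s1": "3"}},
	}
	applyFlat(shared, dirty)
	fmt.Println(len(shared)) // 3
}
```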

## Benchmark (Apple M4 Pro, 500K entries, `--benchtime=10s --count=3`,
on top of #34021)

| Metric | H01 baseline | H01 + this PR | Delta |
|--------|:------------:|:-------------:|:-----:|
| Approve (Mgas/s) | 368 | **414** | **+12.5%** |
| BalanceOf (Mgas/s) | 870 | 875 | +0.6% |

Should be rebased after #34021 is merged.
2026-03-20 15:40:04 +01:00
jvn
59ce2cb6a1
p2p: track in-progress inbound node IDs (#33198)
Avoid dialing a node while we have an inbound
connection request from them in progress.

Closes #33197
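The guard this implies can be sketched as below; the types are invented (the real p2p server tracks enode IDs, not strings), but the shape is the same: remember node IDs whose inbound connection is still in flight, and skip dialing them.

```go
package main

import (
	"fmt"
	"sync"
)

// dialGuard is an illustrative stand-in for the dial scheduler's bookkeeping.
type dialGuard struct {
	mu      sync.Mutex
	inbound map[string]bool // node IDs with an in-progress inbound conn
}

func (g *dialGuard) inboundStarted(id string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.inbound[id] = true
}

func (g *dialGuard) inboundDone(id string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.inbound, id)
}

// canDial reports whether we may dial: not while an inbound handshake
// from the same node is still in progress.
func (g *dialGuard) canDial(id string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	return !g.inbound[id]
}

func main() {
	g := &dialGuard{inbound: make(map[string]bool)}
	g.inboundStarted("node-a")
	fmt.Println(g.canDial("node-a"), g.canDial("node-b"))
	g.inboundDone("node-a")
	fmt.Println(g.canDial("node-a"))
}
```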
2026-03-20 05:52:15 +01:00
Felix Lange
35b91092c5
rlp: add Size method to EncoderBuffer (#34052)
The new method returns the size of the written data, excluding any
unfinished list structure.
2026-03-19 18:26:00 +01:00
jwasinger
fd859638bd
core/vm: rework gas measurement for call variants (#33648)
EIP-7928 brings state reads into consensus by recording accounts and storage accessed during execution in the block access list. As part of the spec, we need to check that there is enough gas available to cover the cost component which doesn't depend on looking up state. If this component can't be covered by the available gas, we exit immediately.

The portion of the call dynamic cost which doesn't depend on state look ups:

- EIP2929 call costs
- value transfer cost
- memory expansion cost

This PR:

- breaks up the "inner" gas calculation for each call variant into a pair of stateless/stateful cost methods
- modifies the gas calculation logic of calls to check stateless cost component first, and go out of gas immediately if it is not covered.
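The two-phase check can be sketched as follows. The constants and names are illustrative, not the real core/vm code: the point is that the state-independent component is charged first, so execution can go out of gas before any state lookup happens.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative flat components of the call dynamic cost that need no state
// lookup (values are EIP-2929-style stand-ins, not authoritative).
const (
	coldAccessCost = 2600
	valueXferCost  = 9000
)

var errOutOfGas = errors.New("out of gas")

// statelessCost covers the components listed above: access cost, value
// transfer cost, and memory expansion cost.
func statelessCost(transfersValue bool, memExpansion uint64) uint64 {
	cost := uint64(coldAccessCost) + memExpansion
	if transfersValue {
		cost += valueXferCost
	}
	return cost
}

// callGas checks the stateless component first and exits immediately if it
// cannot be covered; only then would the stateful component be consulted.
func callGas(available uint64, transfersValue bool, memExpansion uint64) (uint64, error) {
	sc := statelessCost(transfersValue, memExpansion)
	if available < sc {
		return 0, errOutOfGas // fail fast, before any state lookup
	}
	// ... stateful cost component (account lookups etc.) would go here
	return available - sc, nil
}

func main() {
	if _, err := callGas(1000, true, 0); err != nil {
		fmt.Println("fails fast:", err)
	}
	rem, _ := callGas(20000, true, 100)
	fmt.Println(rem) // 20000 - 2600 - 9000 - 100 = 8300
}
```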

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2026-03-19 10:02:49 -06:00
rjl493456442
a3083ff5d0
cmd: add support for enumerating a single storage trie (#34051)
2026-03-19 09:52:10 +01:00
Bosul Mun
4faadf17fb
rlp: add AppendList method to RawList (#34048)
This adds the AppendList method, which merges two RawList instances by
appending the raw content.
2026-03-19 09:51:03 +01:00
vickkkkkyy
3341d8ace0
eth/filters: rangeLogs should error on invalid block range (#33763)
Fixes the log filter to reject out-of-order block ranges.
2026-03-18 23:31:40 +01:00
haoyu-haoyu
b35645bdf7
build: fix missing '!' in shebang of generated oss-fuzz scripts (#34044)
`oss-fuzz.sh` line 38 writes `#/bin/sh` instead of `#!/bin/sh` as
the shebang of generated fuzz test runner scripts.

```diff
-#/bin/sh
+#!/bin/sh
```

Without the `!`, the kernel does not recognize the interpreter
directive.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 13:56:26 +01:00
Sina M
6ae3f9fa56
core/history: refactor pruning configuration (#34036)
This PR introduces a new type, HistoryPolicy, which captures user
intent, as opposed to the pruning point stored in the blockchain, which
persists the actual tail of data in the database.

It is in preparation for the rolling history expiry feature.

It comes with a semantic change: if the database was pruned and geth is
running without a history mode flag (or an explicit keep-all flag), geth
will emit a warning but continue running, as opposed to stopping the
world.
2026-03-18 13:54:29 +01:00
CPerezz
6138a11c39
trie/bintrie: parallelize InternalNode.Hash at shallow tree depths (#34032)
## Summary

At tree depths below `log2(NumCPU)` (clamped to [2, 8]), hash the left
subtree in a goroutine while hashing the right subtree inline. This
exploits available CPU cores for the top levels of the tree where
subtree hashing is most expensive. On single-core machines, the parallel
path is disabled entirely.

Deeper nodes use sequential hashing with the existing `sync.Pool` hasher
where goroutine overhead would exceed the hash computation cost. The
parallel path uses `sha256.Sum256` with a stack-allocated buffer to
avoid pool contention across goroutines.

**Safety:**
- Left/right subtrees are disjoint — no shared mutable state
- `sync.WaitGroup` provides happens-before guarantee for the result
- `defer wg.Done()` + `recover()` prevents goroutine panics from
crashing the process
- `!bt.mustRecompute` early return means clean nodes never enter the
parallel path
- Hash results are deterministic regardless of computation order — no
consensus risk
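A minimal, self-contained sketch of the depth-gated parallel hashing described above, with an invented node type and sha256 standing in for the real hasher: the left subtree is hashed in a goroutine while the right is hashed inline, and deeper levels stay sequential.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// node is an illustrative binary tree node, not the bintrie type.
type node struct {
	left, right *node
	value       []byte
}

// maxParallelDepth stands in for the clamped log2(NumCPU) bound.
const maxParallelDepth = 2

func hashNode(n *node, depth int) [32]byte {
	if n.left == nil && n.right == nil {
		return sha256.Sum256(n.value)
	}
	var lh, rh [32]byte
	if depth < maxParallelDepth {
		var wg sync.WaitGroup
		wg.Add(1)
		go func() {
			defer wg.Done()
			lh = hashNode(n.left, depth+1) // disjoint subtree: no shared state
		}()
		rh = hashNode(n.right, depth+1) // right subtree hashed inline
		wg.Wait()                       // happens-before: lh visible after Wait
	} else {
		lh = hashNode(n.left, depth+1)
		rh = hashNode(n.right, depth+1)
	}
	return sha256.Sum256(append(lh[:], rh[:]...))
}

func main() {
	root := &node{
		left:  &node{value: []byte("a")},
		right: &node{value: []byte("b")},
	}
	// Determinism: the parallel and sequential paths yield the same digest.
	fmt.Println(hashNode(root, 0) == hashNode(root, maxParallelDepth))
}
```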

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s
--count=3`, post-H01 baseline)

| Metric | Baseline | Parallel | Delta |
|--------|----------|----------|-------|
| Approve (Mgas/s) | 224.5 ± 7.1 | **259.6 ± 2.4** | **+15.6%** |
| BalanceOf (Mgas/s) | 982.9 ± 5.1 | 954.3 ± 10.8 | -2.9% (noise, clean nodes skip parallel path) |
| Allocs/op (approve) | ~810K | ~700K | -13.6% |
2026-03-18 13:54:23 +01:00
Mayveskii
b6115e9a30
core: fix txLookupLock mutex leak on error returns in reorg() (#34039)
2026-03-18 15:43:24 +08:00
felipe
ab357151da
cmd/evm: don't strip prefixes on requests over t8n (#33997)
Found this bug while implementing the Amsterdam t8n changes for
benchmark test filling in EELS.

Prefixes were incorrectly being stripped on requests over t8n, which led
to `fill` correctly catching hash mismatches on the EELS side for some
BAL tests. Although the issue was caught there, I think this change
might as well be cherry-picked there instead and merged to `master`.

This PR brings this behavior to parity with EELS for Osaka filling.
There are still some quirks with regards to invalid block tests but I
did not investigate this further.
2026-03-17 16:07:28 +01:00
rjl493456442
9b2ce121dc
triedb/pathdb: enhance history index initer (#33640)
This PR improves the pbss archive mode. Initial sync of an archive node
with the --gcmode archive flag enabled will be significantly sped up.

It achieves that with the following changes:

The indexer now attempts to process histories in batches whenever
possible. Batch indexing is enforced when the node is still syncing and
the local chain head is behind the network chain head.

In this scenario, instead of scheduling indexing frequently alongside
block insertion, the indexer waits until a sufficient amount of history
has accumulated and then processes it in a batch, which is significantly
more efficient.

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 15:29:30 +01:00
Sina M
fc1b0c0b83
internal/ethapi: warn on reaching global gas cap for eth_simulateV1 (#34016)
Warn the user when the gas limit of a tx is capped due to the RPC
server's gas cap being reached.
2026-03-17 13:52:04 +01:00
CPerezz
519a450c43
core/state: skip redundant trie Commit for Verkle in stateObject.commit (#34021)
## Summary

**Bug fix.** In Verkle mode, all state objects share a single unified
trie (`OpenStorageTrie` returns `self`). During `stateDB.commit()`, the
main account trie is committed via `s.trie.Commit(true)`, which calls
`CollectNodes` to traverse and serialize the entire tree. However, each
dirty account's `obj.commit()` also calls `s.trie.Commit(false)` on the
**same trie object**, redundantly traversing and serializing the full
tree once per dirty account.

With N dirty accounts per block, this causes **N+1 full-tree
traversals** instead of 1. On a write-heavy workload (2250 SSTOREs),
this produces ~131 GB of allocations per block from duplicate NodeSet
creation and serialization. It also causes a latent data race from N+1
goroutines concurrently calling `CollectNodes` on shared `InternalNode`
objects.

This commit adds an `IsVerkle()` early return in `stateObject.commit()`
to skip the redundant `trie.Commit()` call.
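The effect of the early return can be sketched with toy types (nothing below is the real state package): with a unified trie, each object's commit used to re-commit the shared trie, and the guard reduces N+1 commits to 1.

```go
package main

import "fmt"

// sharedTrie stands in for the single unified trie; it counts commits.
type sharedTrie struct{ commits int }

func (t *sharedTrie) Commit() { t.commits++ }

type stateObject struct {
	trie    *sharedTrie
	unified bool // stand-in for the IsVerkle() check
}

// commit mirrors the fix: in unified-trie mode, skip the per-object
// trie commit entirely.
func (o *stateObject) commit() {
	if o.unified {
		return // shared trie is committed once by the database, not per object
	}
	o.trie.Commit()
}

func commitAll(t *sharedTrie, objs []*stateObject) {
	for _, o := range objs {
		o.commit()
	}
	t.Commit() // the single main-trie commit
}

func main() {
	t := &sharedTrie{}
	objs := make([]*stateObject, 5)
	for i := range objs {
		objs[i] = &stateObject{trie: t, unified: true}
	}
	commitAll(t, objs)
	fmt.Println(t.commits) // 1 instead of N+1 = 6
}
```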

## Benchmark (AMD EPYC 48-core, 500K entries, `--benchtime=10s
--count=3`)

| Metric | Baseline | Fixed | Delta |
|--------|----------|-------|-------|
| Approve (Mgas/s) | 4.16 ± 0.37 | **220.2 ± 10.1** | **+5190%** |
| BalanceOf (Mgas/s) | 966.2 ± 8.1 | 971.0 ± 3.0 | +0.5% |
| Allocs/op (approve) | 136.4M | 792K | **-99.4%** |

Resolves the TODO in statedb.go about the account trie commit being
"very heavy" and "something's wonky".

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-17 12:27:29 +01:00
CPerezz
4b915af2c3
core/state: avoid Bytes() allocation in flatReader hash computations (#34025)
## Summary

Replace `addr.Bytes()` and `key.Bytes()` with `addr[:]` and `key[:]` in
`flatReader`'s `Account` and `Storage` methods. The former allocates a
copy while the latter creates a zero-allocation slice header over the
existing backing array.
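A small illustration of the difference, using a stand-in Address type (not the real common.Address, whose exact implementation is not shown here): slicing a fixed-size array in place allocates nothing, while a Bytes()-style accessor that returns a fresh copy does.

```go
package main

import "fmt"

// Address is an illustrative fixed-size array type.
type Address [20]byte

// Bytes returns a freshly allocated copy of the address bytes.
func (a Address) Bytes() []byte {
	b := make([]byte, len(a)) // allocation + copy
	copy(b, a[:])
	return b
}

func main() {
	var a Address
	a[0] = 0xff

	copied := a.Bytes()
	view := a[:] // zero-allocation slice header over the backing array

	copied[0] = 0 // mutating the copy leaves the original intact
	fmt.Println(a[0] == 0xff, view[0] == 0xff)
}
```

Note the trade-off: the `a[:]` view aliases the array, so it must only be used where the callee does not mutate or retain the slice, which is why it fits read paths like hash computations.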

## Benchmark (AMD EPYC 48-core, 500K entries, screening
`--benchtime=1x`)

| Metric | Baseline | Slice syntax | Delta |
|--------|----------|--------------|-------|
| Approve (Mgas/s) | 4.13 | 4.22 | +2.2% |
| BalanceOf (Mgas/s) | 168.3 | 190.0 | **+12.9%** |
2026-03-17 11:42:42 +01:00
Jonny Rhea
98b13f342f
miner: add OpenTelemetry spans for block building path (#33773)
Instruments the block building path with OpenTelemetry tracing spans.

- added spans in forkchoiceUpdated -> buildPayload -> background payload
loop -> generateWork iterations. Spans should look something like this:

```
jsonrpc.engine/forkchoiceUpdatedV3
|- rpc.runMethod
|  |- engine.forkchoiceUpdated
|     |- miner.buildPayload [payload.id, parent.hash, timestamp]
|        |- miner.generateWork [txs.count, gas.used, fees] (empty block)
|        |  |- miner.prepareWork
|        |  |- miner.FinalizeAndAssemble
|        |     |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|        |        |- consensus.beacon.Finalize
|        |        |- consensus.beacon.IntermediateRoot
|        |        |- consensus.beacon.NewBlock
|        |- miner.background [block.number, iterations.total, exit.reason, empty.delivered]
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- miner.generateWork [txs.count, gas.used, fees]
|           |     |- miner.prepareWork
|           |     |- miner.fillTransactions [pending.plain.count, pending.blob.count]
|           |     |  |- miner.commitTransactions.priority (if prio txs exist)
|           |     |  |  |- miner.commitTransactions
|           |     |  |     |- miner.commitTransaction (per tx)
|           |     |  |- miner.commitTransactions.normal (if normal txs exist)
|           |     |     |- miner.commitTransactions
|           |     |        |- miner.commitTransaction (per tx)
|           |     |- miner.FinalizeAndAssemble
|           |        |- consensus.beacon.FinalizeAndAssemble [block.number, txs.count, withdrawals.count]
|           |           |- consensus.beacon.Finalize
|           |           |- consensus.beacon.IntermediateRoot
|           |           |- consensus.beacon.NewBlock
|           |- miner.buildIteration [iteration, update.accepted]
|           |  |- ...
|           |- ...

```

- added simulated server spans in SimulatedBeacon.sealBlock so dev mode
(geth --dev) produces traces that mirror production Engine API calls
from a real consensus client.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2026-03-16 19:24:41 +01:00
rjl493456442
a7d09cc14f
core: fix code database initialization in stateless mode (#34011)
This PR fixes the statedb initialization, ensuring the data source is
bound with the stateless input.
2026-03-16 09:45:26 +01:00
Guillaume Ballet
77e7e5ad1a
go.mod, go.sum: update karalabe/hid to fix broken FreeBSD ports build (#34008)
cgo builds have been broken in FreeBSD ports because of the hid lib.
@enriquefynn made a temporary patch, but the fix has since been merged
into the master branch, so let's reflect that here.
2026-03-16 14:35:07 +08:00
vickkkkkyy
24025c2bd0
build: fix signify flag name in doWindowsInstaller (#34006)
The signify flag in `doWindowsInstaller` was defined as "signify key"
(with a space), making it impossible to pass via CLI (`-signify
<value>`). This meant the Windows installer signify signing was silently
never executed.

Fix by renaming the flag to "signify", consistent with `doArchive` and
`doKeeperArchive`.
2026-03-14 10:22:50 +01:00
jwasinger
ede376af8e
internal/ethapi: encode slotNumber as hex in RPCMarshalHeader (#34005)
The slotNumber field was being passed as a raw *uint64 to the JSON
marshaler, which serializes it as a plain decimal integer (e.g. 159).
All Ethereum JSON-RPC quantity fields must be hex-encoded per spec.

Wrap with hexutil.Uint64 to match the encoding of other numeric header
fields like blobGasUsed and excessBlobGas.
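Why the wrapper matters, sketched with a local stand-in for hexutil.Uint64 (HexUint64 below is illustrative, not the real go-ethereum type): a plain *uint64 marshals as a decimal integer, while the wrapper emits the 0x-prefixed quantity the spec requires.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// HexUint64 marshals as a 0x-prefixed hex quantity string.
type HexUint64 uint64

func (v HexUint64) MarshalJSON() ([]byte, error) {
	return []byte(strconv.Quote("0x" + strconv.FormatUint(uint64(v), 16))), nil
}

func main() {
	slot := uint64(159)

	plain, _ := json.Marshal(&slot)             // decimal: wrong for JSON-RPC
	wrapped, _ := json.Marshal(HexUint64(slot)) // hex quantity

	fmt.Println(string(plain), string(wrapped)) // 159 "0x9f"
}
```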

Co-authored-by: qu0b <stefan@starflinger.eu>
2026-03-13 17:09:32 +01:00
vickkkkkyy
189f9d0b17
eth/filters: check history pruning cutoff in GetFilterLogs (#33823)
Return a proper error for log filters going beyond the pruning
point on a node with expired history.
2026-03-13 13:26:20 +01:00
Lessa
dba741fd31
console: fix autocomplete digit range to include 0 (#34003)
PR #26518 added digit support but used '1'-'9' instead of '0'-'9'. This
breaks autocomplete for identifiers containing 0, like `account0`.
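The off-by-one can be shown with a tiny predicate (illustrative, not the console's actual matcher): starting the digit range at '1' rejects '0'.

```go
package main

import "fmt"

// isIdentChar reports whether c is an identifier character, with the lower
// bound of the digit range passed in so both variants can be compared.
func isIdentChar(c byte, digitLow byte) bool {
	return c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
		c >= digitLow && c <= '9' || c == '_'
}

func main() {
	fmt.Println(isIdentChar('0', '1')) // buggy range '1'-'9': false
	fmt.Println(isIdentChar('0', '0')) // fixed range '0'-'9': true
}
```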
2026-03-13 12:39:45 +01:00
Lee Gyumin
eaa9418ac1
core/rawdb: enforce exact key length for num->hash and td in db inspect (#34000)
This PR improves `db inspect` classification accuracy in
`core/rawdb/database.go` by tightening key-shape checks for:

- `Block number->hash`
- `Difficulties (deprecated)`

Previously, both categories used prefix/suffix heuristics and could
mis-bucket unrelated entries.
2026-03-13 09:45:14 +01:00
Guillaume Ballet
1c9ddee16f
trie/bintrie: use a sync.Pool when hashing binary tree nodes (#33989)
Binary tree hashing is quite slow, owing to many factors. One of them is
the GC pressure caused by allocating many hashers, as a binary tree is
4x the size of an MPT. This PR introduces an optimization that already
exists for the MPT: keep a pool of hashers, in order to reduce the
number of allocations.
2026-03-12 10:20:12 +01:00
jvn
95b9a2ed77
core: Implement eip-7954 increase Maximum Contract Size (#33832)
Implements EIP-7954. This PR raises the maximum contract code size
to 32KiB and the initcode size to 64KiB, following https://eips.ethereum.org/EIPS/eip-7954

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2026-03-12 10:23:49 +08:00
Copilot
de0a452f7d
eth/filters: fix race in pending tx and new heads subscriptions (#33990)
`TestSubscribePendingTxHashes` hangs indefinitely because pending tx
events are permanently missed due to a race condition in
`NewPendingTransactions` (and `NewHeads`). Both handlers called their
event subscription functions (`SubscribePendingTxs`,
`SubscribeNewHeads`) inside goroutines, so the RPC handler returned the
subscription ID to the client before the filter was installed in the
event loop. When the client then sent a transaction, the event fired but
no filter existed to catch it — the event was silently lost.

- Move `SubscribePendingTxs` and `SubscribeNewHeads` calls out of
goroutines so filters are installed synchronously before the RPC
response is sent, matching the pattern already used by `Logs` and
`TransactionReceipts`
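The race and its fix, reduced to a toy event bus (all names invented): if the filter were installed inside a goroutine, the handler could return the subscription ID before the filter exists, and an event fired in that window would be lost. Installing it synchronously closes the window.

```go
package main

import "fmt"

// bus is a minimal stand-in for the filter system's event loop.
type bus struct{ filters []chan string }

func (b *bus) subscribe() chan string {
	ch := make(chan string, 1)
	b.filters = append(b.filters, ch)
	return ch
}

func (b *bus) publish(ev string) {
	for _, ch := range b.filters {
		ch <- ev
	}
}

// fixedHandler mirrors the corrected pattern: the filter is installed
// synchronously, before the subscription ID (the "RPC response") is
// returned; only event delivery would happen in a background goroutine.
func fixedHandler(b *bus) (id string, events chan string) {
	events = b.subscribe() // installed before the handler returns
	return "sub-1", events
}

func main() {
	b := &bus{}
	id, events := fixedHandler(b)
	b.publish("tx") // client acts immediately after receiving the ID
	fmt.Println(id, <-events)
}
```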

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: s1na <1591639+s1na@users.noreply.github.com>
2026-03-12 10:21:45 +08:00
rjl493456442
7d13acd030
core/rawdb, triedb/pathdb: enable trienode history alongside existing data (#33934)
Fixes https://github.com/ethereum/go-ethereum/issues/33907

Notably there is a behavioral change:
- Previously, Geth would refuse to restart if the existing trienode
history was gapped with respect to the state data
- With this PR, a gapped trienode history will be entirely reset and
reconstructed from scratch
2026-03-12 09:21:54 +08:00
Guillaume Ballet
59512b1849
cmd/fetchpayload: add payload-building utility (#33919)
This PR adds a cmd tool, fetchpayload, which connects to a node and
gathers all the information needed to create a serialized payload that
can then be passed to the zkvm.
2026-03-11 16:18:42 +01:00
Sina M
3c20e08cba
cmd/geth: add Prague pruning points (#33657)
This PR allows users to prune their nodes up to the Prague fork. It
indirectly depends on #32157 and can't really be merged before eraE
files are widely available for download.

The `--history.chain` flag becomes mandatory for the `prune-history`
command. Here I've listed all the edge cases that can happen and how we
behave:

## prune-history Behavior

| From       | To         | Result             |
|------------|------------|--------------------|
| full       | postmerge  | prunes             |
| full       | postprague | prunes             |
| postmerge  | postprague | prunes further     |
| postprague | postmerge  | can't unprune      |
| any        | all        | use import-history |

## Node Startup Behavior

| DB State   | Flag       | Result                                          |
|------------|------------|-------------------------------------------------|
| fresh      | postprague | syncs from Prague                               |
| full       | postprague | "run prune-history first"                       |
| postmerge  | postprague | "run prune-history first"                       |
| postprague | postmerge  | "can't unprune, use import-history or fix flag" |
| pruned     | all        | accepts known prune points                      |
2026-03-11 12:47:42 +01:00
bigbear
88f8549d37
cmd/geth: correct misleading flag description in removedb command (#33984)
The `--remove.chain` flag incorrectly described itself as selecting
"state data" for removal, which could mislead operators into removing
the wrong data category. This corrects the description to accurately
reflect that the flag targets chain data (block bodies and receipts).
2026-03-11 16:33:10 +08:00
jwasinger
32f05d68a2
core: end telemetry span for ApplyTransactionWithEVM if error is returned (#33955)
2026-03-11 14:41:43 +08:00
georgehao
f6068e3fb2
eth/tracers: fix accessList StorageKeys return null (#33976)
2026-03-11 11:46:49 +08:00
rjl493456442
27c4ca9df0
eth: resolve finalized from disk if it's not recently announced (#33150)
This PR contains two changes:

Firstly, the finalized header will be resolved from the local chain if
it has not recently been announced via `engine_newPayload`.

More importantly, in the downloader there were originally two code paths
to push forward the pivot block: one in the beacon header fetcher
(`fetchHeaders`), and another in the snap content processor
(`processSnapSyncContent`).

Usually, if there are new blocks and the local pivot block becomes
stale, this is first detected by `fetchHeaders`. `processSnapSyncContent`
is fully driven by the beacon headers and will only detect the stale
pivot block after synchronizing the corresponding chain segment. I think
the detection there is redundant and useless.
2026-03-11 11:23:00 +08:00
Sina M
aa417b03a6
core/tracing: fix nonce revert edge case (#33978)
We got a report of a bug in the tracing journal, which has the
responsibility of emitting events for all state that must be reverted.

The edge case is as follows: on CREATE operations the nonce is
incremented. When a create frame reverts, the nonce increment associated
with it does **not** revert. This works fine on master. One step
further, though: if the parent frame reverts, the nonce **should**
revert, and there lies the bug.
2026-03-10 16:53:21 +01:00
rjl493456442
91cec92bf3
core, miner, tests: introduce codedb and simplify cachingDB (#33816)
2026-03-10 08:29:21 +01:00
rjl493456442
b8a3fa7d06
cmd/utils, eth/ethconfig: change default cache settings (#33975)
This PR fixes a regression introduced in https://github.com/ethereum/go-ethereum/pull/33836/changes

Before PR 33836, running mainnet would automatically bump the cache size
to 4GB and trigger a cache re-calculation, specifically setting the key-value 
database cache to 2GB.
 
After PR 33836, this logic was removed, and the cache value is no longer
recomputed if no command line flags are specified. The default key-value 
database cache is 512MB.

This PR bumps the default key-value database cache size alongside the
default cache size for other components (such as snapshot) accordingly.
2026-03-09 23:18:18 +08:00
Muzry
b08aac1dbc
eth/catalyst: allow getPayloadV2 for pre-shanghai payloads (#33932)
I observed failing tests in Hive `engine-withdrawals`:

-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=146
-
https://hive.ethpandaops.io/#/test/generic/1772351960-ad3e3e460605c670efe1b4f4178eb422?testnumber=147

```shell
  DEBUG (Withdrawals Fork on Block 2): NextPayloadID before getPayloadV2:
  id=0x01487547e54e8abe version=1
  >> engine_getPayloadV2("0x01487547e54e8abe")
  << error: {"code":-38005,"message":"Unsupported fork"}
  FAIL: Expected no error on EngineGetPayloadV2: error=Unsupported fork
```
 
The same failure pattern occurred for Block 3.

Per Shanghai engine_getPayloadV2 spec, pre-Shanghai payloads should be
accepted via V2 and returned as ExecutionPayloadV1:
- executionPayload: ExecutionPayloadV1 | ExecutionPayloadV2
- ExecutionPayloadV1 MUST be returned if payload timestamp < Shanghai
timestamp
- ExecutionPayloadV2 MUST be returned if payload timestamp >= Shanghai
timestamp

Reference:
-
https://github.com/ethereum/execution-apis/blob/main/src/engine/shanghai.md#engine_getpayloadv2

Current implementation only allows GetPayloadV2 on the Shanghai fork
window (`[]forks.Fork{forks.Shanghai}`), so pre-Shanghai payloads are
rejected with Unsupported fork.

If my interpretation of the spec is incorrect, please let me know and I
can adjust accordingly.

---------

Co-authored-by: muzry.li <muzry.li1@ambergroup.io>
2026-03-09 11:22:58 +01:00
Marius van der Wijden
00540f9469
go.mod: update go-eth-kzg (#33963)
Updates go-eth-kzg to
https://github.com/crate-crypto/go-eth-kzg/releases/tag/v1.5.0
Significantly reduces the allocations in VerifyCellProofBatch, which
accounts for around 5% of all allocations on my node.

---------

Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
2026-03-08 11:44:29 +01:00
cui
e15d4ccc01
core/types: reduce alloc in hot code path (#33523)
Reduce allocations in the calculation of tx cost.

---------

Co-authored-by: weixie.cui <weixie.cui@okg.com>
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
2026-03-07 14:31:36 +01:00
marukai67
0d043d071e
signer/core: prevent nil pointer panics in keystore operations (#33829)
Add nil checks to prevent potential panics when keystore backend is
unavailable in the Clef signer API.
2026-03-06 21:50:30 +01:00
Guillaume Ballet
ecee64ecdc
core: fix TestProcessVerkle flaky test (#33971)
`GenerateChain` commits trie nodes asynchronously, and it can happen
that some nodes don't make it to the db in time for `GenerateChain`
to open it and find the data it is looking for.
2026-03-06 19:03:05 +01:00
Guillaume Ballet
3f1871524f
trie/bintrie: cache hashes of clean nodes so as not to rehash the whole tree (#33961)
This is an optimization that existed for verkle and the MPT, but that
got dropped during the rebase.

Mark the nodes that were modified as needing recomputation, and skip the
hash computation if this is not needed. Otherwise, the whole tree is
hashed, which kills performance.
2026-03-06 18:06:24 +01:00
Guillaume Ballet
a0fb8102fe
trie/bintrie: fix overflow management in slot key computation (#33951)
The computation of `MAIN_STORAGE_OFFSET` was incorrect, causing the last
byte of the stem to be dropped. This meant there would be a collision in
the hash computation (at the preimage level, not a hash collision of
course) if two keys differed only at byte 31.
2026-03-05 14:43:31 +01:00
Bosul Mun
344ce84a43
eth/fetcher: fix flaky test by improving event unsubscription (#33950)
Eth currently has a flaky test related to the tx fetcher.

The issue seems to happen when Unsubscribe is called while sub is nil.
It seems that chain.Stop() may be invoked before the loop starts in some
tests, but the exact cause is still under investigation through repeated
runs. I think this change will at least prevent the error.
2026-03-05 11:48:44 +08:00
Sina M
ce64ab44ed
internal/ethapi: fix gas cap for eth_simulateV1 (#33952)
Fixes a regression in #33593 where a block gas limit > gasCap resulted
in more execution than the gas cap.
2026-03-05 09:09:07 +08:00
J
fc8c10476d
internal/ethapi: add MaxUsedGas field to eth_simulateV1 response (#32789)
closes #32741
2026-03-04 12:37:47 +01:00
Jonny Rhea
402c71f2e2
internal/telemetry: fix undersized span queue causing dropped spans (#33927)
The BatchSpanProcessor queue size was incorrectly set to
DefaultMaxExportBatchSize (512) instead of DefaultMaxQueueSize (2048).

I noticed the issue on bloatnet when analyzing the block building
traces. During a particular run, the miner was including 1000
transactions in a single block. When telemetry is enabled, the miner
creates a span for each transaction added to the block. With the queue
capped at 512, spans were silently dropped when production outpaced the
span export, resulting in incomplete traces with orphaned spans. While
this doesn't eliminate the possibility of drops under extreme
load, using the correct default restores the 4x buffer between queue
capacity and export batch size that the SDK was designed around.
2026-03-04 11:47:10 +01:00
Jonny Rhea
28dad943f6
cmd/geth: set default cache to 4096 (#33836)
Mainnet was already overriding --cache to 4096. This PR just makes this
the default.
2026-03-04 11:21:11 +01:00