Previously, handshake timeouts were recorded as generic peer errors
instead of timeout errors. waitForHandshake passed a raw
p2p.DiscReadTimeout into markError, but markError classified errors only
via errors.Unwrap(err), which returns nil for non-wrapped errors. As a
result, the timeoutError meter was never incremented and all such
failures fell into the peerError bucket.
This change makes markError switch on the base error, using
errors.Unwrap(err) when available and falling back to the original error
otherwise. With this adjustment, p2p.DiscReadTimeout is correctly mapped
to timeoutError, while existing behaviour for the other wrapped sentinel
errors remains unchanged.
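A minimal, self-contained sketch of the adjusted classification, using stand-in error and meter names rather than the actual ones:
```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for the real sentinel error and meters (illustrative only).
var errReadTimeout = errors.New("read timeout")

var timeoutErrors, peerErrors int

// markError unwraps the error when possible and falls back to the original,
// so raw sentinels like p2p.DiscReadTimeout are classified correctly.
func markError(err error) {
	base := errors.Unwrap(err)
	if base == nil {
		base = err
	}
	switch base {
	case errReadTimeout:
		timeoutErrors++
	default:
		peerErrors++
	}
}

func main() {
	markError(errReadTimeout)                              // raw sentinel
	markError(fmt.Errorf("handshake: %w", errReadTimeout)) // wrapped
	fmt.Println(timeoutErrors, peerErrors)                 // 2 0
}
```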
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
tx.GasPrice()/GasFeeCap()/GasTipCap() already allocate a new big.Int internally,
so TransactionToMessage does not need to copy these values again.
bench result:
```
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/core
cpu: Apple M4
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
TransactionToMessage-10 240.1n ± 7% 175.1n ± 7% -27.09% (p=0.000 n=10)
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
TransactionToMessage-10 544.0 ± 0% 424.0 ± 0% -22.06% (p=0.000 n=10)
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
TransactionToMessage-10 17.00 ± 0% 11.00 ± 0% -35.29% (p=0.000 n=10)
```
benchmark code:
```
// Copyright 2025 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package core

import (
	"math/big"
	"testing"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/params"
)

// BenchmarkTransactionToMessage benchmarks the TransactionToMessage function.
func BenchmarkTransactionToMessage(b *testing.B) {
	key, _ := crypto.GenerateKey()
	signer := types.LatestSigner(params.TestChainConfig)
	to := common.HexToAddress("0x000000000000000000000000000000000000dead")

	// Create a DynamicFeeTx transaction
	txdata := &types.DynamicFeeTx{
		ChainID:   big.NewInt(1),
		Nonce:     42,
		GasTipCap: big.NewInt(1000000000), // 1 gwei
		GasFeeCap: big.NewInt(2000000000), // 2 gwei
		Gas:       21000,
		To:        &to,
		Value:     big.NewInt(1000000000000000000), // 1 ether
		Data:      []byte{0x12, 0x34, 0x56, 0x78},
		AccessList: types.AccessList{
			types.AccessTuple{
				Address: common.HexToAddress("0x0000000000000000000000000000000000000001"),
				StorageKeys: []common.Hash{
					common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000001"),
				},
			},
		},
	}
	tx, _ := types.SignNewTx(key, signer, txdata)
	baseFee := big.NewInt(1500000000) // 1.5 gwei

	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_, err := TransactionToMessage(tx, signer, baseFee)
		if err != nil {
			b.Fatal(err)
		}
	}
}
```
From the https://eips.ethereum.org/EIPS/eip-7928
> SELFDESTRUCT (in-transaction): Accounts destroyed within a transaction
MUST be included in AccountChanges without nonce or code changes.
However, if the account had a positive balance pre-transaction, the
balance change to zero MUST be recorded. Storage keys within the self-destructed
contracts that were modified or read MUST be included as a storage_reads
entry.
The storage read against the empty contract (zero storage) should also
be recorded in the BAL's readlist.
https://eips.ethereum.org/EIPS/eip-7928 spec:
> Precompiled contracts: Precompiles MUST be included when accessed.
If a precompile receives value, it is recorded with a balance change.
Otherwise, it is included with empty change lists.
Since the Amsterdam fork, the precompiled contracts are no longer
explicitly touched when they are invoked.
The fetcher should not fetch transactions that are already on chain.
Until now we were only checking in the txpool, but that does not have
the old transaction. This was leading to extra fetches of transactions
that were announced by a peer but are already on chain.
Here we extend the check to the chain as well.
All five `revert*Request` functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked set
but never restore the peer to its corresponding idle pool. When a
request times out and no response arrives, the peer is permanently lost
from the idle pool, preventing new work from being assigned to it.
In normal operation mode (snap-sync full state) this bug is masked by
pivot movement (which resets idle pools via new Sync() cycles every ~15
minutes) and peer churn (reconnections re-add peers via Register()).
However in scenarios like the one I have running my [partial-stateful
node](https://github.com/ethereum/go-ethereum/pull/33764) with
long-running sync cycles and few peers, all peers can eventually leak
out of the idle pools, stalling sync entirely.
Fix: after deleting from the request map, restore the peer to its idle
pool if it is still registered (guards against the peer-drop path where
Unregister already removed the peer). This mirrors the pattern used in
all five On* response handlers.
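A minimal sketch of the described pattern, with illustrative names rather than the real syncer internals:
```go
package sketch

import "sync"

// Minimal stand-ins for the snap syncer's bookkeeping (illustrative only).
type accountRequest struct {
	id   uint64
	peer string
}

type syncer struct {
	lock          sync.Mutex
	peers         map[string]struct{}        // currently registered peers
	accountReqs   map[uint64]*accountRequest // in-flight requests
	accountIdlers map[string]struct{}        // idle pool for account requests
}

// revertAccountRequest drops a timed-out request and, crucially, returns the
// peer to its idle pool if it is still registered. Without the second step
// the peer leaks out of the idle pool and is never assigned work again.
func (s *syncer) revertAccountRequest(req *accountRequest) {
	s.lock.Lock()
	defer s.lock.Unlock()

	delete(s.accountReqs, req.id)
	if _, ok := s.peers[req.peer]; ok {
		s.accountIdlers[req.peer] = struct{}{}
	}
}
```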
This only seems to manifest in peer-starved scenarios, which is where I find
myself when testing snap sync for the partial-stateful node.
Still, I thought it was worth raising this point; I'm unsure whether it needs
further discussion.
Implements the new eth_getStorageValues method. It returns storage
values for a list of contracts.
Spec: https://github.com/ethereum/execution-apis/pull/756
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
The PR exposes the InfluxDB reporting interval as a CLI parameter, which
was previously fixed at 10s. The default is still kept at 10s.
Note that decreasing the interval comes with notable extra traffic and
load on InfluxDB.
This changes the challenge resend logic again to use the existing
`ChallengeData` field of `v5wire.Whoareyou` instead of storing a second
copy of the packet in `Whoareyou.Encoded`. It's more correct this way
since `ChallengeData` is supposed to be the data that is used by the ID
verification procedure.
Also adapts the cross-client test to verify this behavior.
Follow-up to #31543
In src/ethereum/forks/amsterdam/vm/interpreter.py:299-304, the caller
address is
only tracked for block level accessList when there's a value transfer:
```python
if message.should_transfer_value and message.value != 0:
    # Track value transfer
    sender_balance = get_account(state, message.caller).balance
    recipient_balance = get_account(state, message.current_target).balance
    track_address(message.state_changes, message.caller)  # Line 304
```
Since system transactions have should_transfer_value=False and value=0,
this condition is never met, so the caller (SYSTEM_ADDRESS) is not
tracked.
This condition is applied to the syscall path in the geth implementation
as well, aligning it with the EIP-7928 spec.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Adds `--opcode.count=<file>` flag to `evm t8n` that writes per-opcode
execution frequency counts to a JSON file (relative to
`--output.basedir`).
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This changes the p2p protocol handlers to delay message decoding. It's
the first part of a larger change that will delay decoding all the way
through message processing. For responses, we delay the decoding until
it is confirmed that the response matches an active request and does not
exceed its limits.
In order to make this work, all messages have been changed to use
rlp.RawList instead of a slice of the decoded item type. For block
bodies specifically, the decoding has been delayed all the way until
after verification of the response hash.
The role of p2p/tracker.Tracker changes significantly in this PR. The
Tracker's original purpose was to maintain metrics about requests and
responses in the peer-to-peer protocols. Each protocol maintained a
single global Tracker instance. As of this change, the Tracker is now
always active (regardless of metrics collection), and there is a
separate instance of it for each peer. Whenever a response arrives, it
is first verified that a request exists for it in the tracker. The
tracker is also the place where limits are kept.
This PR adds OpenTelemetry tracing configuration to geth via
command-line flags. When enabled, geth initializes the global
OpenTelemetry TracerProvider and installs standard trace context
propagation. When disabled (the default), tracing remains a no-op and
behavior is unchanged.
Co-authored-by: Felix Lange <fjl@twurst.com>
The endSpan closure accepted error by value, meaning deferred calls like
defer spanEnd(err) captured the error at defer-time (always nil), not at
function-return time. This meant errors were never recorded on spans.
- Changed endSpan to accept *error
- Updated all call sites in rpc/handler.go to pass error pointers, and
adjusted handleCall to avoid propagating child-span errors to the parent
- Added TestTracingHTTPErrorRecording to verify that errors from RPC
methods are properly recorded on the rpc.runMethod span
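A minimal sketch of the fix (names are hypothetical); the point is that a named return value plus a pointer lets the deferred call observe the final error:
```go
package sketch

import "fmt"

// endSpan reads the error at return time because it receives a pointer.
// Printing stands in for span.RecordError / span.SetStatus.
func endSpan(err *error) {
	if *err != nil {
		fmt.Println("recording span error:", *err)
	}
}

// runMethod sketches the corrected call site. With the old signature,
// `defer endSpan(err)` captured err's value at defer time, which was nil.
func runMethod() (err error) {
	defer endSpan(&err)
	err = fmt.Errorf("method failed")
	return err
}
```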
I removed `Iterator.Count` in #33840, because it appeared to be unused
and did not provide the documented invariant: the returned count should
always be an upper bound on the number of iterations allowed by `Next`.
In order to make `Count` work, the semantics of `CountValues` have to
change to return the number of items up to and including the invalid one. I
have reviewed all callsites of `CountValues` to assess if changing this
is safe. There aren't that many, and the only call that doesn't check
the error and return is in the trie node parser,
`trie.decodeNodeUnsafe`. There, we distinguish the node type based on
the number of items, and it previously returned an error for item count
zero. In order to avoid any potential issue that could result from this
change, I'm adding an error check in that function, though it isn't
necessary.
This changes `RawList` to ensure the count of items is always valid.
Lists with invalid structure, i.e. ones where an element exceeds the
size of the container, are now detected during decoding of the `RawList`
and thus cannot exist.
Also remove `RawList.Empty` since it is now fully redundant, and
`Iterator.Count` since it returns incorrect results in the presence of
invalid input. There are no callers of these methods (yet).
The reasoning for using the cleartext format here is that the JSON-RPC
API is internal only. Providers which expose it publicly already put it
behind a proxy, which also handles encryption.
This fixes two cases where `Iterator.Err()` was misused. The method will
only return an error after `Next()` has returned false, so it makes no
sense to check for the error within the loop itself.
Most uses of the iterator are like this:
it, _ := rlp.NewListIterator(data)
for it.Next() {
	do(it.Value())
}
This doesn't require the iterator to be a pointer and it's better to
have it stack-allocated. AFAIK the compiler cannot prove it is OK to
stack-allocate when it is returned as a pointer because the methods of
`Iterator` use pointer receiver and also mutate the object.
The iterator type was not exported until very recently, so I think it is
still OK to change this API.
GetStorage and DeleteStorage used GetBinaryTreeKey to compute the tree
key, while UpdateStorage used GetBinaryTreeKeyStorageSlot. The latter
applies storage slot remapping (header offset for slots <64, main
storage prefix for the rest), so reads and deletes were targeting
different tree locations than writes.
Replace GetBinaryTreeKey with GetBinaryTreeKeyStorageSlot in both
GetStorage and DeleteStorage to match UpdateStorage. Add a regression
test that verifies the write→read→delete→read round-trip for main
storage slots.
The `decodeRef` function used `size > hashLen` to reject oversized
embedded nodes, but this incorrectly allowed nodes of exactly 32 bytes
through. The encoding side (hasher.go, stacktrie.go) consistently uses
`len(enc) < 32` to decide whether to embed a node inline, meaning nodes
of 32+ bytes are always hash-referenced. The error message itself
already stated `want size < 32`, confirming the intended threshold.
Changed `size > hashLen` to `size >= hashLen` in `decodeRef` to align
the decoding validation with the encoding logic, the Yellow Paper spec,
and the surrounding comments.
This PR fixes a panic in a corner case situation when a `ChainEvent` is
received by `eth.Ethereum.updateFilterMapsHeads()` but the given chain
section does not exist in `BlockChain` any more. This can happen during
chain rewind because chain events are processed asynchronously. Ignoring
the event in this case is OK: the final event will point to the final
rewound head and the indexer will be updated.
Note that similar issues will not happen once we transition to
https://github.com/ethereum/go-ethereum/pull/32292 and the new indexer
built on top of this. Until then, the current fix should be fine.
This PR makes `TestEIP8024_Execution` verify explicit error types (e.g.,
`ErrStackUnderflow` vs `ErrInvalidOpCode`) rather than accepting any
error. It also fails fast on unexpected opcodes in the mini-interpreter
to avoid false positives from missing opcode handling.
Here is a draft of the new EraE implementation. The code follows along
with the spec listed at https://hackmd.io/pIZlxnitSciV5wUgW6W20w.
---------
Co-authored-by: shantichanal <158101918+shantichanal@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
### Problem
`HasBody` and `HasReceipts` returned `true` for pruned blocks because
they only checked `isCanon()` which verifies the hash table — but
hash/header tables have `prunable: false` while body/receipt tables have
`prunable: true`.
After `TruncateTail()`, hashes still exist but bodies/receipts are gone.
This caused inconsistency: `HasBody()` returns `true`, but `ReadBody()`
returns `nil`.
### Changes
Both functions now check `db.Tail()` when the block is in ancient store.
If `number < tail`, the data has been pruned and the function correctly
returns `false`.
This aligns `HasBody`/`HasReceipts` behavior with
`ReadBody`/`ReadReceipts` and fixes potential issues in
`skeleton.linked()` which relies on these checks during sync.
Follow-up to https://github.com/ethereum/go-ethereum/pull/33748
Same issue - ResettingTimer can be registered via loadOrRegister() but
GetAll() silently drops it during JSON export. The prometheus exporter
handles it fine (collector.go:70), so this is just an oversight in the
JSON path.
Note: ResettingTimer.Snapshot() resets the timer by design, which is
consistent with how the prometheus exporter uses it.
This adds a new type wrapper that decodes as a list, but does not
actually decode the contents of the list. The type parameter exists as a
marker, and enables decoding the elements lazily. RawList can also be
used for building a list incrementally.
The upstream library has removed the assembly-based implementation of
keccak. We need to maintain our own library to avoid a performance
regression.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
kzg4844.Blob is 131072 bytes. Using `for _, blob := range` copies the
entire blob on each iteration. With up to 6 blobs per transaction, this
wastes ~768KB of memory copies.
Switch to index-based iteration and pass pointers directly.
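A small sketch of the change, using a stand-in type for kzg4844.Blob:
```go
package sketch

// Blob stands in for kzg4844.Blob, a 131072-byte array type.
type Blob [131072]byte

func process(b *Blob) { _ = b }

func handleBlobs(blobs []Blob) {
	// for _, blob := range blobs { process(&blob) } // copies 128 KiB per iteration
	for i := range blobs {
		process(&blobs[i]) // pointer into the backing array, no copy
	}
}
```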
The `Witness` method was not implemented for the binary tree, which
caused `debug_executionWitness` to panic. This PR fixes that.
Note that the `TransitionTrie` version isn't implemented, and that's on
purpose: more thought must be given to what should go in the global
witness.
I recently went on a longer flight and started profiling the geth block
production pipeline.
This PR contains a bunch of individual fixes split into separate
commits.
I can drop some if necessary.
Benchmarking is not super easy; the benchmark I wrote is a bit
non-deterministic.
I will try to write a better benchmark later.
```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/miner
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
BuildPayload-14 141.5µ ± 3% 146.0µ ± 6% ~ (p=0.346 n=200)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
BuildPayload-14 188.2Ki ± 4% 177.4Ki ± 4% -5.71% (p=0.018 n=200)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
BuildPayload-14 2.703k ± 4% 2.453k ± 5% -9.25% (p=0.000 n=200)
```
Preallocates slices with known capacity in `stateSet.encode()` and
`StateSetWithOrigin.encode()` methods to eliminate redundant
reallocations during serialization.
Preallocate capacity for `keyOffsets` and `valOffsets` slices in
`decodeRestartTrailer` since the exact size (`nRestarts`) is known
upfront.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
core/state: add bounds check in heap eviction loop
Add len(h) > 0 check before accessing h[0] to prevent potential panic
and align with existing heap access patterns in txpool, p2p, and mclock
packages.
Fix ECIES invalid-curve handling in RLPx handshake (reject invalid
ephemeral pubkeys early)
- Add curve validation in crypto/ecies.GenerateShared to reject invalid
public keys before ECDH.
- Update RLPx PoC test to assert invalid curve points fail with
ErrInvalidPublicKey.
Motivation / Context
RLPx handshake uses ECIES decryption on unauthenticated network input.
Prior to this change, an invalid-curve ephemeral public key would
proceed into ECDH and only fail at MAC verification, returning
ErrInvalidMessage. This allows an oracle on decrypt success/failure and
leaves the code path vulnerable to invalid-curve/small-subgroup attacks.
The fix enforces IsOnCurve validation up front.
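A rough sketch of the early validation (not the actual ecies API):
```go
package sketch

import (
	"crypto/elliptic"
	"errors"
	"math/big"
)

var ErrInvalidPublicKey = errors.New("ecies: invalid public key")

// generateShared rejects points that are not on the curve before running
// ECDH, so invalid-curve inputs fail uniformly with ErrInvalidPublicKey
// instead of leaking information through a later MAC failure.
func generateShared(curve elliptic.Curve, x, y, priv *big.Int) ([]byte, error) {
	if x == nil || y == nil || !curve.IsOnCurve(x, y) {
		return nil, ErrInvalidPublicKey
	}
	sx, _ := curve.ScalarMult(x, y, priv.Bytes())
	return sx.Bytes(), nil
}
```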
Heartbeats are used to drop non-executable transactions from the queue.
The timeout mechanism was not clearly documented, and it was also updated
when not necessary.
This PR restores the previous Pebble configuration, disabling seek compaction.
Keeping it disabled is still needed for hash-mode archive nodes, as it
mitigates the overhead of frequent compaction.
Implement standardized JSON format for slow block logging to enable
cross-client performance analysis and protocol research.
This change is part of the Cross-Client Execution Metrics initiative
proposed by Gary Rong: https://hackmd.io/dg7rizTyTXuCf2LSa2LsyQ
The standardized metrics enabled data-driven analysis like the EIP-7907
research: https://ethresear.ch/t/data-driven-analysis-on-eip-7907/23850
JSON format includes:
- block: number, hash, gas_used, tx_count
- timing: execution_ms, total_ms
- throughput: mgas_per_sec
- state_reads: accounts, storage_slots, bytecodes, code_bytes
- state_writes: accounts, storage_slots, bytecodes
- cache: account/storage/code hits, misses, hit_rate
This should come after merging #33522
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Based on [EIP-7864](https://eips.ethereum.org/EIPS/eip-7864), the tree
index should be 32 bytes instead of 31 bytes.
```
def get_tree_key(address: Address32, tree_index: int, sub_index: int):
    # Assumes STEM_SUBTREE_WIDTH = 256
    return tree_hash(address + tree_index.to_bytes(32, "little"))[:31] + bytes(
        [sub_index]
    )
```
This is a tweak to the wasm build so that it expects a `geth_io` module
providing `len` and `read` methods. These will be provided by the WASM
interface in sp1. This forces an API change
on the OpenVM side, but the interface on their side is still being
designed, so we should proceed with this change, and we'll make a
different tag for OpenVM if this can't work for them.
Co-authored-by: wakabat <wakabat@protonmail.com>
This PR optimizes the historical trie node reader by reworking how data
is accessed and memory is managed, reducing allocation overhead
significantly.
Specifically:
- Instead of decoding an entire history object to locate a specific trie node,
the reader now searches directly within the history.
- In addition, pre-allocating slices avoids a significant amount of unnecessary deep copying.
This PR optimizes memory allocation in StateTrie.PrefetchAccount() and
StateTrie.PrefetchStorage() by preallocating slice capacity when the
final size is known.
This PR extends the statistics of contract code read by adding these
fields:
- **CacheHitBytes**: the total number of bytes served by cache
- **CacheMissBytes**: the total number of bytes read on cache miss
- **CodeReadBytes**: the total number of bytes for contract code read
Calling `pool.priced.Removed` is needed to keep it in sync with
`pool.all.Remove`.
It was called in other occurrences, but not here.
The counter is used for internal heap management. Things were working even without this, just not calling reheap at the intended frequency.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR adds metrics that count the number of accounts having transactions
in the txpool. Together with the transaction count this can be used as a
simple indicator of the diversity of transactions in the pool.
Note: as an alternative implementation, we could use a periodic or event
driven update of these Gauges using len.
I've preferred this implementation to match what we have for the pool
sizes.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Fixes #33630
Sort self-destructed addresses before emitting hooks in Finalise() to
ensure deterministic ordering and fix flaky test
TestHooks_OnCodeChangeV2.
---------
Co-authored-by: jwasinger <j-wasinger@hotmail.com>
This adds support for Grafana Pyroscope, a continuous profiling solution.
The client is configured similarly to metrics, i.e. run
geth --pyroscope --pyroscope.server=https://...
This commit is a resubmit of #33261 with some changes.
---------
Co-authored-by: Carlos Bermudez Porto <cbermudez.dev@gmail.com>
This PR reverts a part of the changes brought by https://github.com/ethereum/go-ethereum/pull/33281/changes
Specifically, read-only protection should always be enforced at the opcode level,
regardless of whether the check has already been performed during gas metering.
It should act as a gatekeeper; otherwise, it is easy to introduce errors by adding
new gas measurement logic without consistently applying the read-only protection.
Adding an RPC flag to limit the block range size for eth_getLogs and
eth_newFilter requests.
closing https://github.com/ethereum/go-ethereum/issues/24508
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
The core part of this PR that we need to adopt is to move the code and
nonce change hook invocations to occur at tx finalization, instead of
when the selfdestruct opcode is called.
Additionally:
* remove `SelfDestruct6780` now that it is essentially the same as
`SelfDestruct` just gated by `is new contract`
* don't duplicate `BalanceIncreaseSelfdestruct` (transfer to recipient
of selfdestruct) in the hooked statedb and in the opcode handler for the
selfdestruct opcode.
* balance is burned immediately when the beneficiary of the selfdestruct
is the sender, and the contract was created in the same transaction.
Previously we emitted two balance increases to the recipient (see the
point above) and a balance decrease from the sender.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
There's no need to perform the subsequent state access on the target if
we already know that we are out of gas.
This aligns the state access behavior of selfdestruct with EIP-7928.
This PR causes execution to terminate at the gas handler in the case of
sstore/call if they are invoked in a static execution context.
This aligns the behavior with EIP 7928 by ensuring that we don't record
any state reads in the access list from an SSTORE/CALL in this
circumstance.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Remove a large amount of duplicate code from the tx_fetcher tests.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
The bitmap is used in compact-encoded trie nodes to indicate which elements
have been modified. The bitmap format has been updated to use big-endian
encoding.
Bit positions are numbered from 0 to 15, where position 0 corresponds to
the most significant bit of b[0], and position 15 corresponds to the least
significant bit of b[1].
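A small illustration of this numbering; the helpers below are a sketch, not the exact functions used:
```go
package sketch

// setBit sets position pos in a big-endian 16-bit bitmap: position 0 is the
// most significant bit of b[0], position 15 the least significant bit of b[1].
func setBit(b *[2]byte, pos uint) {
	b[pos/8] |= 1 << (7 - pos%8)
}

// hasBit reports whether position pos is set.
func hasBit(b [2]byte, pos uint) bool {
	return b[pos/8]&(1<<(7-pos%8)) != 0
}
```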
This PR adds support for the extraction of OpenTelemetry trace context
from incoming JSON-RPC request headers, allowing geth spans to be linked
to upstream traces when present.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Add Open Telemetry tracing inside the RPC server to help attribute runtime costs within `handler.handleCall()`. In particular, it allows us to distinguish time spent decoding arguments, invoking methods via reflection, and actually executing the method and constructing/encoding JSON responses.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Updated the `avail` calculation to correctly compute remaining capacity:
`buf.limit - len(buf.output)`, ensuring the buffer never exceeds its
configured limit regardless of how many times `Write()` is called.
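A sketch of the corrected accounting, with an illustrative buffer type:
```go
package sketch

// limitBuffer stands in for the capped output buffer described above.
type limitBuffer struct {
	limit  int
	output []byte
}

// Write appends at most the remaining capacity. Computing avail from the
// current output length (rather than the fixed limit) keeps repeated calls
// from growing output beyond limit.
func (b *limitBuffer) Write(p []byte) (int, error) {
	avail := b.limit - len(b.output)
	if avail <= 0 {
		return 0, nil
	}
	if len(p) > avail {
		p = p[:avail]
	}
	b.output = append(b.output, p...)
	return len(p), nil
}
```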
The coverage build path was generating go test commands with a bogus
-tags flag that held the coverpkg value, so the run kept failing. I
switched coverbuild to treat the optional argument as an override for
-coverpkg and stopped passing coverpkg from the caller. Now the script
emits a clean go test invocation that should actually succeed.
This PR fixes an issue where the tx indexer would repeatedly try to
“unindex” a block with a missing body, causing a spike in CPU usage.
This change skips these blocks and advances the index tail. The fix was
verified both manually on a local development chain and with a new test.
resolves #33371
This PR fixes an issue where `evm statetest` would not verify the
post-state root hash if the test case expected an exception (e.g.
invalid transaction).
The fix involves:
1. Modifying `tests/state_test_util.go` in the `Run` method.
2. When an expected error occurs (`err != nil`), we now check if
`post.Root` is defined.
3. If defined, we recalculate the intermediate root from the current
state (which is reverted to the pre-transaction snapshot upon error).
4. We use `GetChainConfig` and `IsEIP158` to ensure the correct state
clearing rules are applied when calculating the root, avoiding
regressions on forks that require EIP-158 state clearing.
5. If the calculated root mismatches the expected root, the test now
fails.
This ensures that state tests are strictly verified against their
expected post-state, even for failure scenarios.
Fixes issue #33527
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
- pass `rpc.BlockNumberOrHash` directly to `eth_getBlockReceipts` so
`requireCanonical` and other fields survive
- aligns `BlockReceipts` with other `ethclient` methods and re-enables
canonical-only receipt queries
Allow the blobpool to accept blobs out of nonce order
Previously, we were dropping blobs that arrived out-of-order. However,
since fetch decisions are done on receiver side,
out-of-order delivery can happen, leading to inefficiencies.
This PR:
- adds an in-memory blob tx storage, similar to the queue in the
legacypool
- a limited number of received txs can be added to this per account
- txs waiting in the gapped queue are not processed further and not
propagated further until they are unblocked by adding the previous nonce
to the blobpool
The size of the in-memory storage is currently limited per account,
following a slow-start logic.
An overall size limit, and a TTL is also enforced for DoS protection.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
This pull request introduces a mechanism to compress trienode history by
storing only the node diffs between consecutive versions.
- For full nodes, only the modified children are recorded in the history;
- For short nodes, only the modified value is stored;
If the node type has changed, or if the node is newly created or
deleted, the entire node value is stored instead.
To mitigate the overhead of reassembling nodes from diffs during history
reads, checkpoints are introduced by periodically storing full node values.
The current checkpoint interval is set to every 16 mutations, though
this parameter may be made configurable in the future.
Fixes #33369
This omits "topics" and "addresses" from the filter when they are unspecified.
It is required for interoperability with some server implementations that cannot
handle `null` for these fields.
It's a PR based on #33303 and introduces an approach for trienode
history indexing.
---
In the current archive node design, resolving a historical trie node at
a specific block
involves the following steps:
- Look up the corresponding trie node index and locate the first entry
whose state ID
is greater than the target state ID.
- Resolve the trie node from the associated trienode history object.
A naive approach would be to store mutation records for every trie node,
similar to
how flat state mutations are recorded. However, the total number of trie
nodes is
extremely large (approximately 2.4 billion), and the vast majority of
them are rarely
modified. Creating an index entry for each individual trie node would be
very wasteful
in both storage and indexing overhead. To address this, we aggregate
multiple trie
nodes into chunks and index mutations at the chunk level instead.
---
For a storage trie, the trie is vertically partitioned into multiple sub
tries, each spanning
three consecutive levels. The top three levels (1 + 16 + 256 nodes) form
the first chunk,
and every subsequent three-level segment forms another chunk.
```
Original trie structure
Level 0 [ ROOT ] 1 node
Level 1 [0] [1] [2] ... [f] 16 nodes
Level 2 [00] [01] ... [0f] [10] ... [ff] 256 nodes
Level 3 [000] [001] ... [00f] [010] ... [fff] 4096 nodes
Level 4 [0000] ... [000f] [0010] ... [001f] ... [ffff] 65536 nodes
Vertical split into chunks (3 levels per chunk)
Level0 [ ROOT ] 1 chunk
Level3 [000] ... [fff] 4096 chunks
Level6 [000000] ... [ffffff] 16777216 chunks
```
Within each chunk, there are 273 nodes in total, regardless of the
chunk's depth in the trie.
```
Level 0 [ 0 ] 1 node
Level 1 [ 1 ] … [ 16 ] 16 nodes
Level 2 [ 17 ] … … [ 272 ] 256 nodes
```
Each chunk is uniquely identified by the path prefix of the root node of
its corresponding
sub-trie. Within a chunk, nodes are identified by a numeric index
ranging from 0 to 272.
For example, suppose that at block 100, the nodes with paths `[]`,
`[0]`, `[f]`, `[00]`, and `[ff]`
are modified. The mutation record for chunk 0 is then appended with the
following entry:
`[100 → [0, 1, 16, 17, 272]]`, where `272` is the numeric ID of path `[ff]`.
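One way the per-chunk numbering could be computed; this is a sketch matching the example above, not necessarily the actual implementation:
```go
package sketch

// chunkNodeID maps a node's path relative to its chunk root (0, 1 or 2
// nibbles) to the numeric ID 0..272 used in the mutation records:
// [] -> 0, [0x0] -> 1, [0xf] -> 16, [0x0 0x0] -> 17, [0xf 0xf] -> 272.
func chunkNodeID(subpath []byte) int {
	switch len(subpath) {
	case 0:
		return 0
	case 1:
		return 1 + int(subpath[0])
	case 2:
		return 17 + int(subpath[0])*16 + int(subpath[1])
	default:
		panic("subpath deeper than one chunk")
	}
}
```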
Furthermore, due to the structural properties of the Merkle Patricia
Trie, if a child node
is modified, all of its ancestors along the same path must also be
updated. As a result,
in the above example, recording mutations for nodes `00` and `ff` alone
is sufficient,
as this implicitly indicates that their ancestor nodes `[]`, `[0]` and
`[f]` were also
modified at block 100.
---
Query processing is slightly more complicated. Since trie nodes are
indexed at the chunk
level, each individual trie node lookup requires an additional filtering
step to ensure that
a given mutation record actually corresponds to the target trie node.
As mentioned earlier, mutation records store only the numeric
identifiers of leaf nodes,
while ancestor nodes are omitted for storage efficiency. Consequently,
when querying
an ancestor node, additional checks are required to determine whether
the mutation
record implicitly represents a modification to that ancestor.
Moreover, since trie nodes are indexed at the chunk level, some trie
nodes may be
updated frequently, causing their mutation records to dominate the
index. Queries
targeting rarely modified trie nodes would then scan a large amount of
irrelevant
index data, significantly degrading performance.
To address this issue, a bitmap is introduced for each index block and
stored in the
chunk's metadata. Before loading a specific index block, the bitmap is
checked to
determine whether the block contains mutation records relevant to the
target trie node.
If the bitmap indicates that the block does not contain such records,
the block is skipped entirely.
Adds BlobTxType and SetCodeTxType to GasPrice switch case, aligning with
`MaxFeePerGas` and `MaxPriorityFeePerGas` handling.
Co-authored-by: m6xwzzz <maskk.weller@gmail.com>
### Description
Add a new `OnStateUpdate` hook which gets invoked after state is
committed.
### Rationale
For our particular use case, we need to obtain the state size metrics at
every single block when fully syncing from genesis. With the current
state sizer, whenever the node is stopped, the background process must
be freshly initialized. During this re-initialization, it can skip some
blocks while the node continues executing blocks, causing gaps in the
recorded metrics.
Using this state update hook allows us to customize our own data
persistence logic, and we would never skip blocks upon node restart.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Adds missing trienode freezer case to InspectFreezerTable, making it
consistent with InspectFreezer which already supports it.
Co-authored-by: m6xwzzz <maskk.weller@gmail.com>
Fix #33390
`setHeadBeyondRoot` was failing to invalidate finalized blocks because
it compared against the original head instead of the rewound root. This
fix updates the comparison to use the post-rewind block number,
preventing the node from reporting a finalized block that no longer
exists. Also added relevant test cases for it.
This PR removes the version-check command and its associated logic as
discussed in issue #31222.
Removed versionCheckCommand from misccmd.go and main.go.
Deleted version_check.go and its corresponding tests.
Cleaned up testdata/vcheck directory (~800 lines of JSON/signatures
removed).
Verified build with make geth
HeadSync kept reqFinalityEpoch entries for servers after receiving
EvUnregistered, while other per-server maps were cleared. This left
stale request.Server keys reachable from HeadSync, which can lead to a
slow memory leak in setups that dynamically register and unregister
servers.
The fix adds deletion of the reqFinalityEpoch entry in the
EvUnregistered handler. This aligns HeadSync with the cleanup pattern
used by other sync modules and keeps the finality request bookkeeping
strictly limited to currently registered servers.
This pull request optimizes history indexing by splitting a single large
database
batch into multiple smaller chunks.
Originally, the indexer would resolve a batch of state histories and
commit all
corresponding index entries atomically together with the indexing
marker.
While indexing more state histories in a single batch improves
efficiency, excessively
large batches can cause significant memory issues.
To mitigate this, the pull request splits the mega-batch into several
smaller batches
and flushes them independently during indexing. However, this introduces
a potential
inconsistency that some index entries may be flushed while the indexing
marker is not,
and an unclean shutdown may leave the database in a partially updated
state.
This can corrupt index data.
To address this, head truncation is introduced. After a restart, any
excessive index
entries beyond the expected indexing marker are removed, ensuring the
index remains
consistent after an unclean shutdown.
This is a new step in my crusade against the braindead fad of starting
PR titles with a word that is completely redundant with github labels,
thus wasting prime first-line real-estate for something that isn't
necessary.
I noticed that every single one of these PRs is low-quality AI-slop, so
I think there is a strong case to be made for these PRs to be
auto-closed. A message is added before closing the PR, redirecting to
our contribution guidelines, so I expect quality first-time contributors
to read them and reopen the PR. In the case of spam PRs, the author is
unlikely to revisit a given PR, and so auto-closing might have a
positive impact. That's an experiment worth trying, imo.
In order to reduce the amount of code that is embedded into the keeper
binary, I am removing all the verkle code that uses go-verkle and
go-ipa. This will be followed by further PRs that are more like stubs to
replace code when the keeper build is detected.
I'm keeping the binary tree of course. This means that you will still
see `isVerkle` variables all over the codebase, but they will be renamed
when code is touched (i.e. this is not an invitation for 30+ AI slop
PRs).
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
In this PR, two things have been fixed:
---
(a) truncate the stale beacon headers with latest snap block
Originally, b.filled is used as the indicator for deleting stale beacon headers.
This field is set only after synchronization has been scheduled, under the
assumption that the skeleton chain is already linked to the local chain.
However, the local chain can be mutated via `debug_setHead`, which may
leave `b.filled` outdated. For instance, `b.filled` refers to the head snap block
of the last sync cycle, while after `debug_setHead` the head snap block may have been
rewound to 1.
As a result, Geth can enter an unintended loop: it repeatedly downloads
the missing beacon headers for the skeleton chain and attempts to schedule the
actual synchronization, but in the final step, all recently fetched headers are removed
by `cleanStales` due to the stale `b.filled` value.
This issue is addressed by always using the latest snap block as the indicator,
without relying on any cached value. However, note that before the skeleton
chain is linked to the local chain, the latest snap block will always be below
skeleton.tail, and this condition should not be treated as an error.
---
(b) merge the subchains once the skeleton chain links to local chain
Once the skeleton chain links with local one, it will try to schedule the
synchronization by fetching the missing blocks and import them then.
It's possible the last subchain already overwrites the previous subchain,
resulting in two leftover subchains. As a result, an error log will be printed:
https://github.com/ethereum/go-ethereum/blob/master/eth/downloader/skeleton.go#L1074
Blobs are stored per transaction in the pool, so we need billy to handle
up to the per-tx limit, not to the per-block limit.
The per-block limit was larger than the per-tx limit, so it was not a bug;
we just created and handled a few billy files for no reason.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR removes the legacy sidecar conversion logic.
After the Osaka fork, the blobpool will accept only blob sidecar version
1.
Any remaining version 0 blob transactions, if they still exist, will no
longer
be eligible for inclusion.
Note that conversion at the RPC layer is still supported, and version 0
blob
transactions will be automatically converted to version 1 there.
When iterating over a map with value types in Go, the loop variable is a
copy. In `markCodeExistence`, assigning to `code.exists` modified only
the local copy, not the actual map entry, causing the existence flag to
always remain false.
This resulted in overcounting contract codes in state size statistics,
as codes that already existed in the database were incorrectly counted
as new.
Fix by changing `codes` from `map[common.Address]contractCode` to
`map[common.Address]*contractCode`, so mutations apply directly to the
struct.
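A minimal illustration of the pitfall and the fix, using stand-in types:
```go
package sketch

type contractCode struct {
	size   int
	exists bool
}

// Before the fix, codes was a map with value types and the loop variable was
// a copy, so `code.exists = ...` never reached the map entry. With pointer
// values, the write lands on the shared struct.
func markCodeExistence(codes map[string]*contractCode, has func(string) bool) {
	for addr, code := range codes {
		code.exists = has(addr)
	}
}
```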
The fuzz test file has been broken for a while - it doesn't compile with
the `gofuzz` build tag.
Two issues:
- Line 59: called `SignifySignFile` which doesn't exist (should be
`SignFile`)
- Line 71: used `:=` instead of `=` for already declared `err` variable
This PR fixes the bug reported in #33365.
The impact of the bug is not catastrophic. After a transaction is
ultimately fetched, validation and propagation will be performed based
on the fetched body, and any response with a mismatched type is treated
as a protocol violation. An attacker could only waste the limited
portion of victim’s bandwidth at most.
However, the reasons for submitting this PR are as follows:
1. Fetching a transaction announced with an arbitrary type is a weird
behavior.
2. It aligns with efforts such as EIP-8077 and #33119 to make the
fetcher smarter and reduce bandwidth waste.
Regarding the `FilterType` function, it could potentially be implemented
by modifying the Filter function's parameter itself, but I wasn’t sure
whether changing that function is acceptable, so I left it as is.
The simulator computed active precompiles from the base header, which is
incorrect when simulations cross fork boundaries. This change selects
precompiles using the current simulated header so the precompile set
matches the block’s number/time. It brings simulate in line with doCall,
tracing, and mining, and keeps precompile state overrides applied on the
correct epoch set.
## Description
This PR fixes incorrect contract code state metrics by ensuring
duplicate codes are not counted towards the reported results.
## Rationale
The contract code metrics don't consider database deduplication. The
current implementation assumes that the results are only **slightly
inaccurate**, but this is not true, especially for data collection
efforts that started from the genesis block.
Fixes an issue where HashFolder skipped the root directory upon hitting
the first file in the excludes list. This happened because the walk function
returned SkipDir even for regular files.
This moves the tracking of the current syncmode into the downloader, fixing an
issue where the syncmode being requested through the engine API could go
out-of-sync with the actual mode being performed by downloader.
Fixes #32629
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The EIP says to increment PC by 2 _instead of_ the standard increment by
1. The opcode handlers added in #33095 result in incrementing PC by 3,
because they ignored the increment already present in `interpreter.go`.
Does this need to be better specified in the EIP? I've added a [new test
case](https://github.com/ethereum/EIPs/pull/10859) for it anyway.
Found by @0xriptide.
XORBytes was added to package crypto/subtle in Go 1.20, and it's faster
than our bitutil.XORBytes. There is only one use of this function
across go-ethereum so we can simply deprecate the custom implementation.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The random-port retry loop in addAnyPortMapping shadowed the err
variable, causing the function to return (0, nil) when all attempts
failed. This change removes the shadowing and preserves the last error
across both the fixed-port and random-port retries, ensuring failures
are reported to callers correctly.
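A minimal sketch of the shadowing issue and fix; the names are illustrative, not the actual nat package code:
```go
package sketch

// addAnyPortMapping retries a few ports and reports the last failure.
// The buggy variant used `if err := try(port); err == nil { ... }`, declaring
// a new err inside the loop, so the outer err stayed nil and the function
// returned (0, nil) even when every attempt failed.
func addAnyPortMapping(try func(port int) error) (int, error) {
	var err error
	for port := 1024; port < 1034; port++ {
		if err = try(port); err == nil {
			return port, nil
		}
	}
	return 0, err // the last error is preserved and reported to the caller
}
```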
This PR changes the Pebble configurations as below:
- increase the MemTableStopWritesThreshold for handling temporary spike
- decrease the L0CompactionConcurrency and CompactionDebtConcurrency to
scale up compaction readily
The original condition `deleted && !logPrinted || time.Since(...)` was
incorrectly grouping due to operator precedence, causing logs to print
every 10 seconds even when no deletion was happening (deleted=false).
According to SafeDeleteRange documentation, the 'deleted' parameter is
"true if entries have actually been deleted already". The logging should
only happen when deletion is active.
Fixed by adding parentheses: `deleted && (!logPrinted ||
time.Since(...))`. Now logs print only when items are being deleted AND
either it's the first log or 10+ seconds have passed since the last one.
This improves the error code for cases where invalid query parameters
are submitted to `eth_getLogs`. I also improved the error message that
is emitted when querying into the future.
This is to benchmark how much the internal parts of GetBlobsV2 take.
This is not an RPC-level benchmark, so JSON-RPC overhead is not
included.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR exposes the state size statistics to the metrics, making them
easier to demonstrate.
Note that the contract code included in the metrics is not
de-duplicated, so the reported size
will appear larger than the actual storage footprint.
This introduces two main changes to Pebble's configuration:
(a) Remove the Bloom filter at Level 6
The Bloom filter is never used at the bottom-most level, so keeping it
serves no purpose. Removing it saves storage without affecting read
performance.
(b) Re-enable read-sampling compaction
Read-sampling compaction was previously disabled in the hash-based
scheme because all data was identified by hashes and there were essentially
no data overwrites, so read-sampling compaction made no sense.
After switching to the path-based scheme, data overwrites are much more
common, making read-sampling compaction beneficial and reasonable to re-enable.
This PR introduces a new debug feature, logging the slow blocks with
detailed performance statistics, such as state read, EVM execution and
so on.
Notably, the detailed performance statistics of slow blocks won't be
logged during the sync to not overwhelm users. Specifically, the statistics
are only logged if there is a single block processed.
Example output
```
########## SLOW BLOCK #########
Block: 23537063 (0xa7f878611c2dd27f245fc41107d12ebcf06b4e289f1d6acf44d49a169554ee09) txs: 248, mgasps: 202.99
EVM execution: 63.295ms
Validation: 1.130ms
Account read: 6.634ms(648)
Storage read: 17.391ms(1434)
State hash: 6.722ms
DB commit: 3.260ms
Block write: 1.954ms
Total: 99.094ms
State read cache: account (hit: 622, miss: 26), storage (hit: 1325, miss: 109)
##############################
```
We still default to legacy txes for methods like eth_sendTransaction,
eth_signTransaction. We can default to 0x2 and if someone would like to
stay on legacy they can do so by setting the `gasPrice` field.
cc @deffrian
Recently in #31630 we removed support for overriding the network id in
preset networks. While this feature is niche, it is useful for shadow
forks. This PR proposes we add the functionality back, but in a simpler
way.
Instead of checking whether the flag is set in each branch of the
network switch statement, simply apply the network flag after the switch
statement is complete. This retains the following behavior:
1. Auto network id based on chain id still works, because `IsSet` only
returns true if the flag is _actually_ set. Not if it just has a default
set.
2. The preset networks will set their network id directly and only if
the network id flag is set is it overridden. This, combined with the
override genesis flag is what allows the shadow forks.
3. Setting the network id to the same network id that the preset _would
have_ set causes no issues and simply emits the `WARN` that the flag is
being set explicitly. I don't think people explicitly set the network id
flag often.
```
WARN [10-22|09:36:15.052] Setting network id with flag id=10
```
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This adds checks into getPayload to ensure the correct version is called
for the fork which applies to the payload.
---------
Co-authored-by: jsvisa <delweng@gmail.com>
This was found because other clients are failing RPC tests generated by
Geth. Nethermind and Besu return the correct error code, -32602, in this
situation.
Enable blocktest to read filenames from stdin when no path argument is
provided, matching the existing statetest behavior. This allows
efficient batch processing of blockchain tests.
Usage:
- Single file: evm blocktest <path>
- Batch mode: find tests/ -name "*.json" | evm blocktest
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
No matter what value of P2P.DiscoveryV4 or DiscoveryV5 is set in the config file,
it will be overwritten by the CLI flag, even if the flag is not set. This fixes it
to apply the flag only if set.
Fixes error messages to print the actual blob gas value instead of the
pointer address by dereferencing `ExcessBlobGas`, `BlobGasUsed` and
`ParentBeaconRoot`.
Bumps
[github.com/consensys/gnark-crypto](https://github.com/consensys/gnark-crypto)
from 0.18.0 to 0.18.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/consensys/gnark-crypto/releases">github.com/consensys/gnark-crypto's
releases</a>.</em></p>
<blockquote>
<h2>v0.18.1</h2>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Consensys/gnark-crypto/compare/v0.18.0...v0.18.1">https://github.com/Consensys/gnark-crypto/compare/v0.18.0...v0.18.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Consensys/gnark-crypto/blob/master/CHANGELOG.md">github.com/consensys/gnark-crypto's
changelog</a>.</em></p>
<blockquote>
<h2>[v0.18.1] - 2025-10-28</h2>
<h3>Docs</h3>
<ul>
<li>add CHANGELOG for 0.18.1</li>
</ul>
<h3>Perf</h3>
<ul>
<li>limit memory allocation during Vector deserialization (<a
href="https://redirect.github.com/Consensys/gnark-crypto/issues/759">#759</a>)</li>
</ul>
<p><!-- raw HTML omitted --><!-- raw HTML omitted --></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="fb04e95c3b"><code>fb04e95</code></a>
docs: add CHANGELOG for 0.18.1</li>
<li><a
href="0a4d04ae62"><code>0a4d04a</code></a>
perf: limit memory allocation during Vector deserialization (<a
href="https://redirect.github.com/consensys/gnark-crypto/issues/759">#759</a>)</li>
<li>See full diff in <a
href="https://github.com/consensys/gnark-crypto/compare/v0.18.0...v0.18.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This change fixes a stall in the legacy blob sidecar conversion pipeline
where tasks that arrived during an active batch could remain unprocessed
indefinitely after that batch completed, unless a new external event
arrived.
The root cause was that the loop did not restart processing in
the `case <-done:` branch even when txTasks had accumulated work, relying
instead on a future event to retrigger the scheduler. This behavior is
inconsistent with the billy task pipeline, which immediately chains to
the next task via runNextBillyTask() without requiring an external trigger.
The fix adds a symmetric restart path in `case <-done`: that checks
`len(txTasks) > 0`, clones the accumulated tasks, clears the queue, and
launches a new run with a fresh done and interrupt.
This preserves batching semantics, prevents indefinite blocking of callers
of convert(), and remains safe during shutdown since the quit path
still interrupts and awaits the active batch. No public interfaces or logging
were changed.
Fixes error messages to print the actual blob gas value instead of the
pointer address by dereferencing `ExcessBlobGas`, `BlobGasUsed` and
`ParentBeaconRoot`.
A new pointless fad appeared recently where people just create a fairly
low information tag at the beginning of their github PR titles.
Something like `feat` or other keywords.
This seems to originate from the angular community and to be used for
automation scripts over there. We do not use any of those scripts and if
we did we would be using the github labels, which offer strictly
equivalent functionalities without wasting useful PR title space.
In order for these keywords to fail the validation, I am adding a check
that the directories listed in the title indeed exist in the repository.
Looks like (in some very EVM specific tests) we spent a lot of time
resizing memory. If the underlying array is big enough, we can speed it
up a bit by simply slicing the memory.
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/vm
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
Resize-14 6.145n ± 9% 1.854n ± 14% -69.83% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
Resize-14 5.000 ± 0% 5.000 ± 0% ~ (p=1.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
Resize-14 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) ¹
From the blocktest benchmark:
620ms 10.93s (flat, cum) 9.92% of Total
. . 80:func (m *Memory) Resize(size uint64) {
30ms 60ms 81: if uint64(m.Len()) < size {
590ms 10.87s 82: m.store = append(m.store, make([]byte, size-uint64(m.Len()))...)
. . 83: }
. . 84:}
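A sketch of the reslicing idea (details may differ from the actual change); clearing the exposed region keeps it reading as zero:
```go
package sketch

// Memory is a stand-in for the EVM memory backing store.
type Memory struct{ store []byte }

func (m *Memory) Len() int { return len(m.store) }

// Resize grows the memory to size bytes. If the backing array is already big
// enough, reslicing avoids allocating the temporary zero-filled slice that
// append would need.
func (m *Memory) Resize(size uint64) {
	if uint64(m.Len()) >= size {
		return
	}
	if size <= uint64(cap(m.store)) {
		old := m.Len()
		m.store = m.store[:size]
		clear(m.store[old:]) // make sure the newly exposed bytes read as zero
		return
	}
	m.store = append(m.store, make([]byte, size-uint64(m.Len()))...)
}
```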
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
EIP-8024: Backward compatible SWAPN, DUPN, EXCHANGE
Introduces additional instructions for manipulating the stack which
allow accessing the stack at higher depths. This is an initial implementation
of the EIP, which is still in Review stage.
Adds a flag to specify how many blobs a node is willing to include in
their locally built block as specified in
https://eips.ethereum.org/EIPS/eip-7872
I deviated from the EIP in one case: I allowed specifying 0 as the
minimum blobs/block.
The list iterator previously returned true on parse errors without
advancing the input, which could lead to non-advancing infinite loops
for callers that do not check Err() inside the loop. To make iteration
safe while preserving error visibility, Next() now marks the iterator as
finished when readKind fails: it returns true for the error step so
existing users that check Err() can handle it, and then false on
subsequent calls. The function comment was updated to document this
behavior and the need to check Err().
This PR adds the "FULU" beacon chain config entries for all networks and
fixes the select statements that choose the appropriate engine API call
versions (no new version there but the "default" was always the first
version; now it's the latest version so no need to change unless there
is actually a new version).
New beacon checkpoints are also added for mainnet, sepolia and hoodi
(not for holesky because it's not finalizing at the moment).
Note that though unrelated to fusaka, the log indexer checkpoints are
also updated for mainnet (not for the other testnets, mainly because I
only have mainnet synced here on my travel SSD; this should be fine
though because the index is also reverse generated for a year by default
so it does not really affect the indexing time)
Links for the new checkpoints:
https://beaconcha.in/slot/13108192
https://light-sepolia.beaconcha.in/slot/9032384
https://hoodi.beaconcha.in/slot/1825728
This change introduces an iterator for the history index in the pathdb.
It provides sequential access to historical entries, enabling efficient
scanning and future features built on top of historical state traversal.
Fixes #33212.
This PR removes `github.com/olekukonko/tablewriter` from the dependencies and
uses a naive stub implementation.
`github.com/olekukonko/tablewriter` is used to format database inspection
output neatly. However, it requires custom adjustments for TinyGo and is
incompatible with the latest version.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
This is broken off of #31730 to only focus on testing networks that
start with verkle at genesis.
The PR has seen a lot of work since its creation, and it now targets
creating and re-executing tests for a binary tree testnet without the
transition (so it starts at genesis). The transition tree has been moved
to its own package. It also replaces verkle with the binary tree for
this specific application.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
The iterator loop in findTxInBlockBody returned the outer-scoped err
when iter.Err() was non-nil, which could incorrectly propagate a nil or
stale error and hide actual RLP decoding issues. This patch returns
iter.Err() as intended by the rlp list iterator API, matching
established patterns elsewhere in the codebase and improving diagnostics
when encountering malformed transaction entries.
While updating to the latest Geth, I noticed `OnCodeChangeV2` was not
properly handled in `SelfDestruct/6780`; this PR fixes this and brings a
unit test. Let me know if it's deemed more appropriate to merge the tests
with the other one.
[powdr](github.com/powdr-labs/powdr) has tested keeper in their womir
system and managed to get it to work. This PR adds wasm as a keeper
target. There's another plan by the zkevm team to support wasm with wasi
as well, so this PR adds both targets.
These currently use the `example` tag, as there is no precompile
interface defined for either target yet. Nonetheless, this is useful for
testing these zkvms, so it makes sense to support these experimental
targets already.
The periodic sealing loop failed to reset its timer when sealBlock
returned an error, causing the timer to never fire again and effectively
halting block production in developer periodic mode after the first
failure. This is a bug because the loop relies on the timer to trigger
subsequent sealing attempts, and transient errors (e.g., pool races or
chain rewinds) should not permanently stop the loop. The change moves
timer.Reset after the sealing attempt unconditionally, ensuring the loop
continues ticking and retrying even when sealing fails, which matches
how other periodic timers in the codebase behave and preserves forward
progress.
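Roughly, the loop shape after the fix looks like the following sketch (sealBlock, the period and the quit channel are stand-ins, not the actual worker code):

```go
package main

import (
	"errors"
	"log"
	"time"
)

var calls int

// sealBlock stands in for the dev-mode sealing attempt; it fails once here
// to demonstrate that the loop keeps retrying.
func sealBlock() error {
	calls++
	if calls == 1 {
		return errors.New("transient failure")
	}
	return nil
}

func main() {
	period := 100 * time.Millisecond
	timer := time.NewTimer(period)
	quit := time.After(time.Second)
	for {
		select {
		case <-timer.C:
			if err := sealBlock(); err != nil {
				log.Printf("sealing failed, will retry: %v", err)
			}
			// Reset unconditionally, even on error, so the loop keeps ticking.
			timer.Reset(period)
		case <-quit:
			return
		}
	}
}
```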
The version check incorrectly used `&&` instead of `||`, causing
versions like v1.0.x through v1.4.x to be allowed when they should be
rejected. These versions don't support EIP-712 signing which was
introduced in firmware v1.5.0.
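For reference, one plausible shape of the corrected gate (the function name and version triple are assumptions, not the actual wallet driver code):

```go
package main

import "fmt"

// supportsEIP712 reports whether firmware major.minor.patch is at least
// 1.5.0. The rejection condition needs '||': a check joined with '&&' only
// rejects versions where both parts are too low, letting v1.0.x-v1.4.x
// slip through.
func supportsEIP712(major, minor, patch int) bool {
	if major < 1 || (major == 1 && minor < 5) {
		return false
	}
	return true
}

func main() {
	fmt.Println(supportsEIP712(1, 4, 9)) // false
	fmt.Println(supportsEIP712(1, 5, 0)) // true
}
```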
Because the map iteration is unstable, we need to order logs by tx index
and keep the same order with receipts and their logs, so we can still
get the same `LogsHash` across runs.
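As a rough sketch of the idea, with a stand-in log type rather than types.Log:

```go
package main

import (
	"fmt"
	"sort"
)

// logEntry is a stand-in for types.Log; only the tx index matters here.
type logEntry struct {
	TxIndex uint
	Data    string
}

func main() {
	// Logs gathered from an (unordered) map iteration.
	logs := []logEntry{{2, "c"}, {0, "a"}, {1, "b"}}
	// Sort by transaction index so the derived LogsHash is stable across runs.
	sort.Slice(logs, func(i, j int) bool { return logs[i].TxIndex < logs[j].TxIndex })
	fmt.Println(logs)
}
```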
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
This pull request updates `PrettyAge.String` so that the age formatter
now treats exact unit boundaries (like a full day or week) as that unit
instead of spilling into smaller components, keeping duration output
aligned with human expectations.
This adds two new CI targets. One is for building all supported keeper
executables, the other is for running unit tests on 32-bit Linux.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
## Description
- Summary: Correct the JS timer callback argument forwarding to match
standard JS semantics.
- What changed: In `internal/jsre/jsre.go`, the callback is now invoked
with only the arguments after the callback and delay.
- Why: Previously, the callback received the function and delay as
parameters, causing unexpected behavior and logic bugs for consumers.
Equal is called every time the transaction sender is accessed,
even when the sender is cached, so it is worth optimizing.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
In this PR, several changes have been made:
(a) restructure the trienode history header section
Previously, the offsets of the key and value sections were recorded before
encoding data into these sections. As a result, these offsets referred to the
start position of each chunk rather than the end position.
This caused an issue where the end position of the last chunk was
unknown, making it incompatible with the freezer partial-read APIs.
With this update, all offsets now refer to the end position, and the
start position of the first chunk is always 0.
(b) Enable partial freezer read for trienode data retrieval
The partial freezer read feature is now utilized in trienode data
retrieval, improving efficiency.
At the time keeper support was added into ci.go, we were using a go.work
file to make ./cmd/keeper accessible from within the main go-ethereum
module. The workspace file has since been removed, so we need to build
keeper from within its own module instead.
Found in
https://github.com/ethereum/go-ethereum/actions/runs/17803828253/job/50611300621?pr=32585
```
--- FAIL: TestClientCancelWebsocket (0.33s)
panic: read tcp 127.0.0.1:36048->127.0.0.1:38643: read: connection reset by peer [recovered, repanicked]
goroutine 15 [running]:
testing.tRunner.func1.2({0x98dd20, 0xc0005b0100})
/opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1872 +0x237
testing.tRunner.func1()
/opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1875 +0x35b
panic({0x98dd20?, 0xc0005b0100?})
/opt/actions-runner/_work/_tool/go/1.25.1/x64/src/runtime/panic.go:783 +0x132
github.com/ethereum/go-ethereum/rpc.httpTestClient(0xc0001dc1c0?, {0x9d5e40, 0x2}, 0xc0002bc1c0)
/opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:932 +0x2b1
github.com/ethereum/go-ethereum/rpc.testClientCancel({0x9d5e40, 0x2}, 0xc0001dc1c0)
/opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:356 +0x15f
github.com/ethereum/go-ethereum/rpc.TestClientCancelWebsocket(0xc0001dc1c0?)
/opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:319 +0x25
testing.tRunner(0xc0001dc1c0, 0xa07370)
/opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1934 +0xea
created by testing.(*T).Run in goroutine 1
/opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1997 +0x465
FAIL github.com/ethereum/go-ethereum/rpc 0.371s
```
In `testClientCancel` we wrap the server listener in `flakeyListener`,
which schedules an unconditional close of every accepted connection
after a random delay, if the random delay is zero then the timer fires
immediately, and then the http client paniced of connection reset by
peer.
Here we add a minimum 10ms to ensure the timeout won't fire immediately.
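A minimal sketch of the adjustment (the helper name and timeout values are assumed, not the actual test code):

```go
package main

import (
	"math/rand"
	"time"
)

// closeDelay returns the artificial connection lifetime used by the flaky
// listener. The 10ms floor ensures the close timer never fires immediately,
// which previously reset the connection before the HTTP client had read the
// response.
func closeDelay(max time.Duration) time.Duration {
	return 10*time.Millisecond + time.Duration(rand.Int63n(int64(max)))
}

func main() {
	_ = closeDelay(500 * time.Millisecond)
}
```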
Signed-off-by: jsvisa <delweng@gmail.com>
This PR is an alternative to #32556.
Instead of trying to be smart and reuse `geth init`, we can introduce a
new flag `--genesis` that loads the `genesis.json` from file into the
`Genesis` object in the same path that the other network flags currently
work in.
Question: is something like `--genesis` enough to start deprecating
`geth init`?
--
```console
$ geth --datadir data --hoodi
..
INFO [10-06|22:37:11.202] - BPO2: @1762955544
..
$ geth --datadir data --genesis genesis.json
..
INFO [10-06|22:37:27.988] - BPO2: @1862955544
..
```
Pull the genesis [from the
specs](https://raw.githubusercontent.com/eth-clients/hoodi/refs/heads/main/metadata/genesis.json)
and modify one of the BPO timestamps to simulate a shadow fork.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
In this PR, the database batch for writing the history index data is
pre-allocated.
It's observed that the database batch repeatedly grows the size of the
mega-batch, causing significant memory allocation pressure. This approach
can effectively mitigate the overhead.
This PR prevents the SetCode hook from being called when the contract
code
remains unchanged.
This situation can occur in the following cases:
- The deployed runtime code has zero length
- An EIP-7702 authorization attempt tries to unset a non-delegated
account
- An EIP-7702 authorization attempt tries to delegate to the same
account
Previously, the journal writer was nil until the first rejournal
(default 1h), which means that during this period, txs submitted to this node
were not written into the journal file (transactions.rlp). If the node was
shut down before the first rejournal, txs in pending or queue would get lost.
Here, this PR initializes the journal writer soon after launch to solve
this issue.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Uses the go module's `replace` directive to delegate keccak computation
to precompiles.
This is still in draft because it needs more testing. Also, it relies on
a PR that I created, that hasn't been merged yet.
_Note that this PR doesn't implement the stateful keccak state
structure, and it reverts to the current behavior. This is a bit silly
since this is what is used in the tree root computation. The runtime
doesn't currently export the sponge. I will see if I can fix that in a
further PR, but it is going to take more time. In the meantime, this is
a useful first step_
This PR removes dangling peers in `alternates` map
In the current code, a dropped peer is removed from alternates for only
the specific transaction hash it was requesting. If that peer is listed
as an alternate for other transaction hashes, those entries still stick
around in alternates/announced even though that peer already got
dropped.
This PR introduces two new metrics to monitor slow peers
- One tracks the number of slow peers.
- The other measures the time it takes for those peers to become
"unfrozen"
These metrics help with monitoring and evaluating the need for future
optimization of the transaction fetcher and peer management, for example
in peer scoring and prioritization.
Additionally, this PR moves the fetcher metrics into a separate file,
`eth/fetcher/metrics.go`.
This change addresses critical issues in the state object duplication
process specific to Verkle trie implementations. Without these
modifications, updates to state objects fail to propagate correctly
through the trie structure after a statedb copy operation, leading to
inaccuracies in the computation of the state root hash.
---------
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
New RPC method eth_sendRawTransactionSync(rawTx, timeoutMs?) that
submits a signed tx and blocks until a receipt is available or a timeout
elapses.
Two CLI flags to tune server-side limits:
--rpc.txsync.defaulttimeout (default wait window)
--rpc.txsync.maxtimeout (upper bound; requests are clamped)
closes https://github.com/ethereum/go-ethereum/issues/32094
---------
Co-authored-by: aodhgan <gawnieg@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Adds ethclient support for the eth_simulateV1 RPC method, which allows
simulating transactions on top of a base state without making changes to
the blockchain.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Fix logging in the verkle dump path to report the actual key being
processed.
Previously, the loop always logged keylist[0], which misled users when
expanding multiple keys and made debugging harder. This change aligns
the
log with the key passed to root.Get, improving traceability and
diagnostics.
This fixes a regression introduced in #32518. In that PR, we removed the
slowdown logic that would throttle lookups when the table runs empty.
Said logic was originally added in #20389.
Usually it's fine, but there exist pathological cases, such as hive
tests, where the node can only discover one other node, so it can only
ever query that node and won't get any results. In cases like these, we
need to throttle the creation of lookups to avoid crazy CPU usage.
Drop the peer if it sends the same transaction multiple times in a single message.
Fixes https://github.com/ethereum/go-ethereum/issues/32724
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This adds a temporary conversion path for blob transactions with legacy
proof sidecar. This feature will activate after Fusaka. We will phase
this out when the fork has sufficiently settled and client side
libraries have been upgraded to send the new proofs.
This PR addresses a few issues brought up by #32270
- Add updates to pricedList after dropping transactions.
- Remove redundant deletions in queue.evictList, since
pool.removeTx(hash, true, true) already performs the removal.
- Prevent duplicate addresses during promotion when Reset is not nil.
The limit check for `MaxUint32` is done after the cast to `int`. On 64-bit
machines, that works without a problem. On 32-bit machines, it will
always fail: the compiler catches it and refuses to build.
Note that this only fixes the compiler build. ~~If the limit is above
`MaxInt32` but strictly below `MaxUint32` then this will fail at runtime
and we have another issue.~~ I checked and this should not happen during
regular execution, although it might happen in tests.
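A small sketch of the corrected ordering, checking the bound while the value is still a uint64 (function name assumed):

```go
package main

import (
	"fmt"
	"math"
)

// toInt validates the bound before converting. Comparing after the cast
// breaks the build on 32-bit targets, where math.MaxUint32 does not fit in
// int. Note: values in (MaxInt32, MaxUint32] would still overflow int on
// 32-bit targets; per the PR notes this does not occur in regular execution.
func toInt(limit uint64) (int, error) {
	if limit > math.MaxUint32 {
		return 0, fmt.Errorf("limit %d exceeds MaxUint32", limit)
	}
	return int(limit), nil
}

func main() {
	fmt.Println(toInt(1 << 40))
}
```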
This PR adds a `filterfuzz` subcommand to the workload tester that
generates requests similarly to `filtergen` (though with a much smaller
block length limit) and also verifies the results by retrieving all
block receipts in the range and locally filtering out relevant results.
Unlike `filtergen`, which operates on the finalized chain range only,
`filterfuzz` does check the head region; in fact, it seeds a new query
at every new chain head.
This PR moves the queue out of the main transaction pool.
For now there should be no functional changes.
I see this as a first step to refactor the legacypool and make the queue
a fully separate concept from the main pending pool.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR implements the partial read functionalities in the freezer, optimizing
the state history reader by resolving less data from the freezer.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
invalidTxMeter was counting txs, while validTxMeter was counting
accounts. Better make the two comparable.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
- Introduce a new subscription kind `transactionReceipts` to allow clients to
receive transaction receipts over WebSocket as soon as they are available.
- Accept optional `transactionHashes` filter to subscribe to receipts for specific
transactions; an empty or omitted filter subscribes to all receipts.
- Preserve the same receipt format as returned by `eth_getTransactionReceipt`.
- Avoid additional HTTP polling, reducing RPC load and latency.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Fixes issue #32793. When the pending tx subscription ends, the filter
is removed from `api.filters`, but it is not terminated. There is no other
way to terminate it, so the subscription will leak, and potentially block
the producer side.
In both `TestSimultaneousRequests` and `TestSameRequestID`, we send two
concurrent requests. The client under test is free to respond in either
order, so we need to handle responses both ways.
Also fixes an issue where some generated blob transactions didn't have
any blobs.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR does a few things:
- Sets the gh actions runner sizes for lint (s) and test (l) workflows
- Runs the tests on gh actions in parallel
- Skips fetching the spec tests when unnecessary (on windows in
appveyor)
- Removes ubuntu appveyor runner since it's essentially duplicate of the
gh action workflow now
The gh test seems to go down from ~35min to ~13min.
This pr implements https://github.com/ethereum/go-ethereum/issues/32733
to make StateProcessor more customisable.
## Compatibility notes
This introduces a breaking change to users using geth EVM as a library.
The `NewStateProcessor` function now takes one parameter which has the
chainConfig embedded instead of 2 parameters.
Description:
We found an occasional node hang issue on BSC. I think Geth may
also have the issue, so I'm picking the fix patch here.
The fix on BSC repo: https://github.com/bnb-chain/bsc/pull/3347
When the hang occurs, there are two routines stuck.
- routine 1: AsyncFilter(...)
On node start, it will run part of the DiscoveryV4 protocol, which could
take considerable time. Here is its hang callstack:
```
goroutine 9711 [chan receive]: // this routine was stuck on read channel: `<-f.slots`
github.com/ethereum/go-ethereum/p2p/enode.AsyncFilter.func1()
github.com/ethereum/go-ethereum/p2p/enode/iter.go:206 +0x125
created by github.com/ethereum/go-ethereum/p2p/enode.AsyncFilter in goroutine 1
github.com/ethereum/go-ethereum/p2p/enode/iter.go:192 +0x205
```
- Routine 2: Node Stop
It is the main routine to shut down the process, but it got stuck when it
tries to shut down the discovery components, as it tries to drain the
`<-f.slots` channel, but the extra slot will never have a chance to
be returned.
```
goroutine 11796 [chan receive]:
github.com/ethereum/go-ethereum/p2p/enode.(*asyncFilterIter).Close.func1()
github.com/ethereum/go-ethereum/p2p/enode/iter.go:248 +0x5c
sync.(*Once).doSlow(0xc032a97cb8?, 0xc032a97d18?)
sync/once.go:78 +0xab
sync.(*Once).Do(...)
sync/once.go:69
github.com/ethereum/go-ethereum/p2p/enode.(*asyncFilterIter).Close(0xc092ff8d00?)
github.com/ethereum/go-ethereum/p2p/enode/iter.go:244 +0x36
github.com/ethereum/go-ethereum/p2p/enode.(*bufferIter).Close.func1()
github.com/ethereum/go-ethereum/p2p/enode/iter.go:299 +0x24
sync.(*Once).doSlow(0x11a175f?, 0x2bfe63e?)
sync/once.go:78 +0xab
sync.(*Once).Do(...)
sync/once.go:69
github.com/ethereum/go-ethereum/p2p/enode.(*bufferIter).Close(0x30?)
github.com/ethereum/go-ethereum/p2p/enode/iter.go:298 +0x36
github.com/ethereum/go-ethereum/p2p/enode.(*FairMix).Close(0xc0004bfea0)
github.com/ethereum/go-ethereum/p2p/enode/iter.go:379 +0xb7
github.com/ethereum/go-ethereum/eth.(*Ethereum).Stop(0xc000997b00)
github.com/ethereum/go-ethereum/eth/backend.go:960 +0x4a
github.com/ethereum/go-ethereum/node.(*Node).stopServices(0xc0001362a0, {0xc012e16330, 0x1, 0xc000111410?})
github.com/ethereum/go-ethereum/node/node.go:333 +0xb3
github.com/ethereum/go-ethereum/node.(*Node).Close(0xc0001362a0)
github.com/ethereum/go-ethereum/node/node.go:263 +0x167
created by github.com/ethereum/go-ethereum/cmd/utils.StartNode.func1.1 in goroutine 9729
github.com/ethereum/go-ethereum/cmd/utils/cmd.go:101 +0x78
```
The root cause of the hang is the extra slot, which was designed to make
sure the routines in `AsyncFilter(...)` can finish. This PR fixes it by
making sure the extra slot can always be returned when the node shuts down.
This PR updates the `payloadVersion` function in `simulated_beacon.go`
to handle the additional forks after Osaka that are used during
development and testing phases.
This change ensures that the simulated beacon correctly resolves the
payload version for these forks, enabling consistent and valid execution
payload handling during local testing or simulation.
The TxPool.signer field was never read and each subpool (legacy/blob)
maintains its own signer instance. This field remained after txpool
refactoring into subpools and is dead code. Removing it reduces
confusion and simplifies the constructor.
- Replace the outdated NewFreezer doc that referenced the map[string]bool/snappy
toggle with an accurate description of map[string]freezerTableConfig
(noSnappy, prunable).
- Fix misleading field comment on freezerTable.config that spoke as if
it were a boolean (“if true”), clarifying it’s a struct and noting
compression is non-retroactive.
These functions were previously ignoring the error returned by both
`statedb.Commit()` and the subsequent `state.New()`,
which could silently fail and cause panics later when the `statedb` is
used.
This change adds proper error checking and panics with a descriptive error
message if state creation fails.
While unlikely in normal operation, this can occur if there are database
corruption issues or if invalid root hashes are provided, making debugging
significantly easier when such issues do occur.
This issue was encountered and fixed in
https://github.com/gballet/go-ethereum/pull/552
where the error handling proved essential for debugging
cc: @gballet as this was discussed in a call already.
Introduces a new tracer which returns the preimages
of evm KECCAK256 hashes.
See #32570.
---------
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Bail out of decodeHash when the raw hex string is longer than 32 bytes, before actually decoding.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This is a small improvement on #32656 in case Add was called with
multiple type 3 transactions, adding transactions to the pool one-by-one
as they are converted.
Announcement to peers is still done in a batch.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Fixes race in WaitDeploy test where the backend is closed before goroutine using it wraps up.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Replace time.After with a long‑lived time.Ticker in KeyStore.updater, preventing per‑iteration timer allocations and potential timer buildup.
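Roughly, the pattern looks like this sketch (the updater signature is simplified, not the actual keystore code):

```go
package main

import "time"

// updater uses a single long-lived Ticker instead of allocating a fresh
// timer via time.After on every loop iteration.
func updater(interval time.Duration, quit <-chan struct{}, scan func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			scan()
		case <-quit:
			return
		}
	}
}

func main() {
	quit := make(chan struct{})
	go updater(10*time.Millisecond, quit, func() {})
	time.Sleep(50 * time.Millisecond)
	close(quit)
}
```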
Co-authored-by: lightclient <lightclient@protonmail.com>
- Correct the error message in TestOneElementProof to expect 'v' instead
of 'k'.
- The trie is updated with key "k" and value "v"; on mismatch the
expected value must be 'v'.
- Aligns the message with the actual test logic and other similar checks
in this file, reducing confusion during test failures. No behavioral
changes.
This implements the conversion of existing blob transactions to the new proof
version. Conversion is triggered at the Osaka fork boundary. The conversion is
designed to be idempotent, and may be triggered multiple times in case of a reorg
around the fork boundary.
This change is the last missing piece that completes our strategy for the blobpool
conversion. After the Osaka fork,
- new transactions will be converted on-the-fly upon entry to the pool
- reorged transactions will be converted while being reinjected
- (this change) existing transactions will be converted in the background
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
This fixes `go run build/ci.go install`. It was failing because we
resolved all main packages by parsing sources, which fails when the
source directory contains multiple modules.
The parent header was missing the BaseFee field when calculating the
reserve price for EIP-7918 in the Osaka fork, causing a nil pointer
dereference. This fix ensures BaseFee is properly set from ParentBaseFee
in the environment.
Added regression test case 34 to verify Osaka fork blob gas calculation
works correctly with parent base fee.
This change replaces wrapping a stale outer err with the iterator’s own
error after Next(), and switches the post-BlockAndReceipts() check to
use the returned err. According to internal/era iterator contract,
Error() should be consulted immediately after Next() to surface
iteration errors, while decoding errors from Block/Receipts are returned
directly. The previous code could hide the real failure (using nil or
unrelated err), leading to misleading diagnostics and missed iteration
errors.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
https://go.dev/ref/mod#go-work-file advises against checking `go.work`
files because they can interfere with local development. We added the
workspace file in order to make `go test` and other tools work across
multiple modules. But it seems to cause weird issues with the
`go.work.sum` file being modified, etc.
So with this PR, we instead run all the `ci.go` commands for all modules
in the workspace manually.
This pull request introduces a queue for legacy sidecar conversion to
handle transactions that persist after the Osaka fork. Simply dropping
these transactions would significantly harm the user experience.
To balance usability with system complexity, we have introduced a
conversion time window of two hours post Osaka fork. During this period,
the system will accept legacy blob transactions and convert them in a
background process.
After the window, all legacy transactions will be rejected. Notably, all
the blob transactions will be validated statically before the conversion,
and all conversions are performed in a single thread, minimizing the risk
of DoS.
We believe this two-hour window provides sufficient time to process
in-flight legacy transactions and allows submitters to migrate to the
new format.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Fix the t.Fatalf format arguments in TestBadBlockStorage to match the
intended #index output. Previously, the left number used i+1 and the
right index used the block number, producing misleading diagnostics.
Correct mapping improves test failure clarity and debuggability.
Remove redundant duplicate test vectors. The two entries were identical
and back-to-back, providing no additional coverage while adding noise.
Keeping a single instance maintains test intent and clarity without
altering behavior.
Fix typo in test error message where "MustParseBig" was incorrectly
used instead of "MustParseUint64" in the TestMustParseUint64Panic
function.
The test still functions correctly, but now the error message
accurately reflects the function being tested.
before:
go test -run=^$ -bench=. ./crypto/... 94.83s user 2.68s system 138% cpu
1:10.55 tota
after:
go test -run=^$ -bench=. ./crypto/... 75.43s user 2.58s system 123% cpu
1:03.01 total
before:
go test -run=^$ -bench=. ./core/types 47.80s user 2.18s system 102% cpu
48.936 tota
after:
go test -run=^$ -bench=. ./core/types 42.42s user 2.27s system 112% cpu
39.593 total
before:
go test -run=^$ -bench=. ./core/vm/... -timeout=1h 1841.87s user 40.96s
system 124% cpu 25:15.76 total
after:
go test -run=^$ -bench=. ./core/vm/... -timeout=1h 1588.65s user 33.79s
system 123% cpu 21:53.25 total
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
before:
go test -run=^$ -bench=. ./log -timeout=1h 12.19s user 2.19s system 89%
cpu 16.025 total
after:
go test -run=^$ -bench=. ./log -timeout=1h 10.64s user 1.53s system 89%
cpu 13.607 total
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
before:
go test -run=^$ -bench=. ./eth/... 827.57s user 23.80s system 361% cpu
3:55.49 total
after:
go test -run=^$ -bench=. ./eth/... 281.62s user 13.62s system 245% cpu
2:00.49 total
before:
go test -run=^$ -bench=. ./core/state/... 120.85s user 7.96s system 129%
cpu 1:39.13 tota
after:
go test -run=^$ -bench=. ./core/state/... 21.32s user 2.12s system 97%
cpu 24.006 total
This PR improves `TestBlockSync` so that it also tests the finality
update validation.
Note: to date, four long and complex (at least partly AI generated)
PRs have arrived that did something related to testing finality, but honestly
we do not need a bloated "comprehensive" test for a trivial feature,
because maintaining these tests can also be a pain over the long term.
This PR adds sufficient sanity checks to detect if finality ever
gets broken by a future change.
This disables the tx gaslimit cap for eth_call and related RPC operations.
I don't like how this fix works. Ideally we'd be checking the tx
gaslimit somewhere else, like in the block validator, or any other place
that considers block transactions. Doing the check in StateTransition
means it affects all possible ways of executing a message.
The challenge is finding a place for this check that also triggers
correctly in tests where it is wanted. So for now, we are just combining
this with the EOA sender check for transactions. Both are disabled for
call-type messages.
Fixes a crash when loading the beacon chain config if new fields like
`BLOB_SCHEDULE: []` are present.
Previously, the config loader assumed all values were strings, causing
errors such as:
```
Fatal: Could not load beacon chain config '/network-configs/config.yaml': failed to parse beacon chain config file: yaml: unmarshal errors:
line 242: cannot unmarshal !!seq into string
```
This PR updates the parsing logic to handle non-string values correctly
and adds explicit validation for fork fields.
Add cli configurable limit for the number of addresses allowed in
eth_getLogs filter criteria:
https://github.com/ethereum/go-ethereum/issues/32264
Key changes:
- Added --rpc.getlogmaxaddrs CLI flag (default: 1000) to configure the
maximum number of addresses
- Updated ethconfig.Config with FilterMaxAddresses field for
configuration management
- Modified filter system to use the configurable limit instead of the
hardcoded maxAddresses constant
- Enhanced test coverage with new test cases for address limit
validation
- Removed hardcoded validation from JSON unmarshaling, moving it to
runtime validation
Please note that I removed the check in FilterCriteria's UnmarshalJSON
because the runtime config cannot be passed into that validation.
Please help review this change!
---------
Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Addresses https://github.com/ethereum/go-ethereum/issues/32630
This pull request enables the stateless engine APIs for Osaka and the
following BPOs. Apart from that, a few more descriptions have been added
in the engine APIs, making it easier to follow the spec change.
https://github.com/ethereum/execution-spec-tests/releases/tag/v5.0.0
As of this release, execution-spec-tests also contains all state tests
that were previously in ethereum/tests. We can probably remove the tests
submodule now. However, this would mean we are missing the pre-cancun
tests. Still need to figure out how to resolve this.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
When I implemented in #31340 I didn't expect multiple forks to be
configured at once, but this is exactly how BPOs are defined. This
updates the method to determine the next scheduled fork rather than the
last fork.
ZKVMs are constrained environments that liberally allocate memory and
never release it. In this context, using the GC is only going to cause
issues down the road, and slow things down in any case.
- Adds `NodeIteratorWithPrefix()` method to support iterating only nodes
within a specific key prefix
- Adds `NodeIteratorWithRange()` method to support iterating only nodes
within a specific key range
Current `NodeIterator` always traverses the entire remaining trie from a
start position. For non-ethereum applications using the trie implementation,
there's no way to limit iteration to just a subtree with a specific prefix.
**Usage:**
```go
// Only iterate nodes with prefix "key1"
iter, err := trie.NodeIteratorWithPrefix([]byte("key1"))
```
Testing: Comprehensive test suite covering edge cases and boundary conditions.
Closes #32484
---------
Co-authored-by: gballet <guillaume.ballet@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request is based on #32306 and is the second part for shipping
trienode history.
Specifically, this pull request generalizes the existing index mechanism,
making it usable by both state history and trienode history in the near
future.
The format that is currently reported by the chain isn't very useful, as
it gives an average for ALL the nodes, and not only the leaves, which
skews the results.
Also, until now there was no way to activate the reporting of errors.
We also decided that metrics weren't the right tool to report this data,
so we decided to dump it to the console if the flag is enabled. A better
system should be built, but for now, printing to the logs does the job.
This improves the latency of lookups in small networks and test setups. When the local node table runs empty, the lookupIterator will trigger refresh to try and fill the table again.
The behaviour of lookup in case of an empty table is changed:
- Previously, lookup waited fixed 1 second before trying to continue the lookup
- Now, lookup on an empty table returns immediately, and a better wait implementation is part of the LookupIterator. It reinitialises the table, and continues the iterator as soon as a node becomes available.
This PR adds a new RPC call, which re-executes a block with stateless
mode activated, so that the witness data are collected and returned.
They are `debug_executionWitnessByHash` which takes in a block hash
and `debug_executionWitness` which takes in a block number.
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
ApplyTransaction calls the hooks and builds the receipt, so some
duplicated code can be removed from t8ntool. Test cases have been
changed to add the `blockNumber` and `blockHash` in receipts, since
they were previously not filled in.
Keeper is a zkvm guest program that runs the block transition.
It relies on the zkvm maker implementing `getInput`. For now, we only
provide a single implementation for the 'ziren' VM.
Why keeper?
In the _Mass Effect_ lore, the keepers are animals (?) who maintain the
Citadel. Nothing is known about them, and attempts at tampering with them
have failed, as they self-destruct upon inquiry. They have a secret,
nefarious purpose that is only revealed later in the game series; I don't
want any spoilers so I didn't dig deeper. All in all, a good metaphor
for zkvms.
---------
Co-authored-by: weilzkm <140377101+weilzkm@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
Implements a migration path for the blobpool slotter
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change ensures TransitionState.Copy preserves BaseRoot. During a
Verkle transition, ts.BaseRoot is required to construct the overlay MPT
when ts.InTransition() is true. Previously, copies dropped BaseRoot,
risking an invalid zero-hash base trie and inconsistent behavior.
Fixes an issue I accidentally introduced in #32579. Essentially, we gate
the engine methods based on particular forks, and I did not add the BPOs
as allowed forks to the method.
Ensure Database.namespace is initialized in pebble.New(...). Without
this, the write-stall metrics registered in onWriteStallBegin/End are
emitted without the intended namespace prefix, while other Pebble
metrics use the provided constructor parameter. This aligns stall
metrics with the rest of the Pebble metric set and fixes inconsistent
metric naming.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The lookup would add self into the replyBuffer if returned by another node.
Avoid doing that by marking self as seen.
With the changed initialization behavior of lookup, the lookupIterator needs to yield the
buffer right after creation. This fixes the smallNetConvergence test, where all results
are straight out of the local table.
Refresh is doing some lookups and thus it could block for some time. We
do not want the initializer of an iterator to block. If there is
something blocking, it should happen when calling Next.
Here, Next will start a lookup, which will wait if needed (when there are
no nodes), making sure the iterator's Next does not create a busy loop.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR removes the conversion of legacy sidecars after Osaka and instead rejects them at the pool.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This pull request addresses the corrupted path database with log
indicating:
`history head truncation out of range, tail: 122557, head: 212208,
target: 212557`
This is a rare edge case where the in-memory layers, including the write
buffer in the disk layer, are fully persisted (e.g., written to file), but
the state history freezer is not properly closed (e.g., Geth is terminated
after journaling but before freezer.Close). In this situation, the recent
state history writes will be truncated on the next startup, while the
in-memory layers resolve correctly.
As a result, the state history falls behind the disk layer (including
the write buffer).
In this pull request, the state history freezer is always synced before
the journal, ensuring the state history writes are always persisted before
the others.
Edit:
It's confirmed that the devops team has a 10s container termination
setting. It explains why Geth didn't finish the entire termination and
the state history freezer was not properly closed.
https://github.com/ethpandaops/fusaka-devnets/pull/63/files
The TestBlobTxWithoutSidecar test could run infinitely in case a client
does not request the right transaction. This adds a timeout to make the
test fail in this case.
Fixes https://github.com/ethereum/go-ethereum/issues/32422
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Add state size tracking and retrieve api, start geth with `--state.size-tracking`,
the initial bootstrap is required (around 1h on mainnet), after the bootstrap,
use `debug_stateSize()` RPC to retrieve the state size:
```
> debug.stateSize()
{
accountBytes: "0x39681967b",
accountTrienodeBytes: "0xc57939f0c",
accountTrienodes: "0x198b36ac",
accounts: "0x129da14a",
blockNumber: "0x1635e90",
contractCodeBytes: "0x2b63ef481",
contractCodes: "0x1c7b45",
stateRoot: "0x9c36a3ec3745d72eea8700bd27b90dcaa66de0494b187c5600750044151e620a",
storageBytes: "0x18a6e7d3f1",
storageTrienodeBytes: "0x2e7f53fae6",
storageTrienodes: "0x6e49a234",
storages: "0x517859c5"
}
```
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
As in #32060 we introduced the file-based journal path; for the other
subcommands (e.g. snapshot, db), we should also pass the directory to the
triedb, otherwise the subcommand (e.g. `geth snapshot`) fails to run:
```bash
geth snapshot verify-state --datadir /geth-data
...
INFO [09-02|02:12:29.493] Allocated cache and file handles database=/geth-data/geth/chaindata cache=512.00MiB handles=524,288
INFO [09-02|02:12:32.746] Opened ancient database database=/geth-data/geth/chaindata/ancient/chain readonly=true
INFO [09-02|02:12:32.746] Opened Era store datadir=/geth-data/geth/chaindata/ancient/chain/era
INFO [09-02|02:12:32.758] State scheme set to already existing scheme=path
INFO [09-02|02:12:32.760] Load database journal from disk
INFO [09-02|02:12:32.764] Failed to load journal, discard it err="journal not found"
INFO [09-02|02:12:32.789] Opened ancient database database=/geth-data/geth/chaindata/ancient/state readonly=true
INFO [09-02|02:12:32.790] Initialized path database readonly=true triecache=0.00B statecache=0.00B buffer=0.00B history="entire chain"
ERROR[09-02|02:12:32.791] Failed to verify state root=c5458d..4cc785 err="unknown layer: c5458d476da0136a67ef24a93b909aa5c29efa5c5b885dbd1fbaed4e784cc785"
```
This PR is the first step in the trienode history series.
It introduces the `nodeWithOrigin` struct in the path database, which tracks
the original values of dirty nodes to support trienode history construction.
Note, the original value is always empty in this PR, so it won't break the
existing journal for encoding and decoding. The compatibility of journal
should be handled in the following PR.
Another getBlobs PR on top of
https://github.com/ethereum/go-ethereum/pull/32190 to avoid some minor
regressions.
- bring back more log messages from before
- continue processing also on some internal errors
- ensure v2 complies with spec even if there are internal errors
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
This pull request fixes a regression introduced in #32190.
Specifically, in the GetBlobsV1 engine API, if any blob is missing, null
should be placed in the response; unfortunately a behavioral change was
introduced in #32190, returning an error instead.
What's more, a more comprehensive test suite is added to cover both
`GetBlobsV1` and `GetBlobsV2` endpoints.
Switches to using counters so that the gauges don't cause any
information to be lost. Counters can be used to calculate all sorts of
metrics on Grafana. Which is also why min/avg/max logic is removed to
make things simple and small here.
~Will probably be mostly supplanted by #32224, but this should do for
now for devnet 3.~
Seems like #32224 is going to take some more time, so I have completed
the implementation of eth_config here. It is quite a bit simpler to
implement now that the config hashing was removed.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Implement the binary tree as specified in [eip-7864](https://eips.ethereum.org/EIPS/eip-7864).
This will gradually replace verkle trees in the codebase. This is only
running the tests and will not be executed in production, but will help
me rebase some of my work, so that it doesn't bitrot as much.
---------
Signed-off-by: Guillaume Ballet
Co-authored-by: Parithosh Jayanthi <parithosh.jayanthi@ethereum.org>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Filtering for leaf nodes was missing from #32388, which means that even
the root node was reported, which made little sense for the bloatnet
data processing we want to do.
### Summary
Fixes long-standing ETA calculation errors in progress indicators that
have been present since February 2021. The current implementation
produces increasingly inaccurate estimates due to integer division
precision loss.
### Problem
3aeccadd04/triedb/pathdb/history_indexer.go (L541-L553)
The ETA calculation has two critical issues:
1. **Integer division precision loss**: `speed` is calculated as
`uint64`
2. **Off-by-one**: `speed` uses `+ 1`(2 times) to avoid division by
zero, however it makes mistake in the final calculation
This results in wildly inaccurate time estimates that don't improve as
progress continues.
### Example
Current output during state history indexing:
```
lvl=info msg="Indexing state history" processed=16858580 left=41802252 elapsed=18h22m59.848s eta=11h36m42.252s
```
**Expected calculation:**
- Speed: 16858580 ÷ 66179848ms = 0.255 blocks/ms
- ETA: 41802252 ÷ 0.255 = ~45.6 hours
**Current buggy calculation:**
- Speed: rounds to 1 block/ms
- ETA: 41802252 ÷ 1 = ~11.6 hours ❌
### Solution
- Created centralized `CalculateETA()` function in common package
- Replaced all 8 duplicate code copies across the codebase
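A minimal sketch of what such a helper could look like, using floating point so the speed is not rounded to whole items per millisecond (this is the idea, not the exact code added in the PR):

```go
package common

import "time"

// CalculateETA estimates the remaining time from work done, work left and
// the elapsed duration, avoiding the precision loss of an integer
// items-per-millisecond speed.
func CalculateETA(done, left uint64, elapsed time.Duration) time.Duration {
	if done == 0 {
		return 0 // no data yet, cannot estimate
	}
	speed := float64(done) / float64(elapsed) // items per nanosecond
	return time.Duration(float64(left) / speed)
}
```

With the numbers from the example above (16,858,580 processed in 66,179,848ms, 41,802,252 left), this yields roughly 45.6 hours, matching the expected calculation.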
### Testing
Verified accurate ETA calculations during archive node reindexing with
significantly improved time estimates.
`db inspect` on the full database currently takes **30min+**, because
the db iteration runs in one thread. I propose to split the key-space into
256 sub-ranges and assign them to a worker pool to speed it up.
After the change, running `db inspect --workers 16` takes
**10min** (the keyspace is not evenly distributed).
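A minimal sketch of the key-space split (iterator and worker APIs omitted; this only shows how the 256 sub-ranges could be derived):

```go
package main

import "fmt"

// prefixRanges splits the flat key space into 256 sub-ranges, one per
// leading byte, so each range can be handed to a separate inspection worker.
func prefixRanges() [][2][]byte {
	ranges := make([][2][]byte, 0, 256)
	for i := 0; i < 256; i++ {
		start := []byte{byte(i)}
		var end []byte // nil means "until the end of the key space"
		if i < 255 {
			end = []byte{byte(i + 1)} // exclusive upper bound
		}
		ranges = append(ranges, [2][]byte{start, end})
	}
	return ranges
}

func main() {
	fmt.Println(len(prefixRanges())) // 256
}
```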
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request preserves the root->ID mappings in the path database
even after the associated state histories are truncated, regardless of
whether the truncation occurs at the head or the tail.
The motivation is to support an additional history type, trienode history.
Since the root->ID mappings are shared between two history instances,
they must not be removed by either one.
As a consequence, the root->ID mappings remain in the database even
after the corresponding histories are pruned. While these mappings may
become dangling, it is safe and cheap to keep them.
Additionally, this pull request enhances validation during historical
reader construction, ensuring that only canonical historical state will be
served.
When maxPeers was just above some perfect square and a few peers
dropped for some reason, the peer selection changed.
When new peers were acquired, it changed again.
This PR improves the selection function in two ways. First, it will always select
sqrt(peers) peers to broadcast to. Second, the selection now uses siphash with a secret
key, to guard against information leaks about the tx source.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
Fixes a prestateTracer test case covering 7702 delegation.
---------
Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This pull request implements #32235, constructing blob sidecars in the new
format (cell proof) if Osaka has been activated.
Apart from that, it introduces a pre-conversion step in the blob pool
before adding the txs. This mechanism is essential for handling remote
**legacy** blob txs from the network.
One thing is still missing and is probably worth highlighting here: the
blobpool may contain several legacy blob txs from before Osaka, and these
txs should be converted once Osaka is activated. While the `GetBlob` API
in the blobpool is capable of generating cell proofs at runtime, converting
legacy txs in one pass is much cheaper overall.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: lightclient <lightclient@protonmail.com>
Replace hardcoded 5-second sleep with polling loop that actively checks
snap sync state. This approach is already used in other project tests
(like account_cache_test.go) and provides better reliability by:
- Reducing flaky behavior on slower systems
- Finishing early when sync completes quickly
- Using 1-second timeout with 100ms polling intervals
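In that spirit, a minimal polling helper could look like this (names assumed, not the actual test helper):

```go
package main

import (
	"fmt"
	"time"
)

// waitFor polls cond every interval until it returns true or the timeout
// expires, replacing a fixed sleep.
func waitFor(cond func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !cond() {
		if time.Now().After(deadline) {
			return fmt.Errorf("condition not met within %v", timeout)
		}
		time.Sleep(interval)
	}
	return nil
}

func main() {
	start := time.Now()
	_ = waitFor(func() bool { return time.Since(start) > 200*time.Millisecond },
		time.Second, 100*time.Millisecond)
}
```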
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Closes #32240 #32232
The main cause for the time out is the slow json encoding of large data.
In #32240 they tried to resolve the issue by reducing the size of the
test. However as Felix pointed out, the test is still kind of confusing.
I've refactored the test so it is more understandable and have reduced
the amount of data needed to be json encoded. I think it is still
important to ensure that the default read limit is not active, so I have
retained one large (~32 MB) test case, but it's at least smaller than
the existing ~64 MB test case.
This pull request refactors the internal implementation in path database
a bit, specifically:
- purge the state index data in batch
- simplify the logic of state history construction and index, make it more readable
This is an internal refactoring PR, renaming the history to stateHistory.
It's a prerequisite PR for merging trienode history, avoiding the name
conflict.
The TestTraceChain function was missing a defer backend.teardown() call,
which is required to properly release blockchain resources after test
completion.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Full disclosure: this has been generated by AI. The goal is to have a
quick check that the PR format is correct, before we merge it. This is
to avoid the periodical case when someone forgets to add a milestone or
check the title matches our preferred format.
Supersedes #32470.
### What
- snap: shorten stall watchdog in `eth/protocols/snap/sync_test.go` from
1m to 10s.
- discover/v5: consolidate FINDNODE negative tests into a single
table-driven test:
- `TestUDPv5_findnodeCall_InvalidNodes` covers:
- invalid IP (unspecified `0.0.0.0`) → ignored
- low UDP port (`<=1024`) → ignored
### Why
- Addresses TODOs:
- “Make tests smaller” (reduce long 1m timeout).
- “check invalid IPs”; also cover low port per `verifyResponseNode`
rules (UDP must be >1024).
### How it’s validated
- Test-only changes; no production code touched.
- Local runs:
- `go test ./p2p/discover -count=1 -timeout=300s` → ok
- `go test ./eth/protocols/snap -count=1 -timeout=600s` → ok
- Lint:
- `go run build/ci.go lint` → 0 issues on modified files.
### Notes
- The test harness uses `enode.ValidSchemesForTesting` (which includes
the “null” scheme), so records signed with `enode.SignNull` are
signature-valid; failures here are due to IP/port validation in
`verifyResponseNode` and `netutil.CheckRelayAddr`.
- Tests are written as a single table-driven function for clarity; no
helpers or environment switching.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This PR should reduce overall allocations of a running node by ~10
percent, since most allocations are coming from the re-heaping of the
transaction pool.
```
(pprof) list EffectiveGasTipCmp
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core/types.(*Transaction).EffectiveGasTipCmp in github.com/ethereum/go-ethereum/core/types/transaction.go
0 3766837369 (flat, cum) 9.86% of Total
. . 386:func (tx *Transaction) EffectiveGasTipCmp(other *Transaction, baseFee *big.Int) int {
. . 387: if baseFee == nil {
. . 388: return tx.GasTipCapCmp(other)
. . 389: }
. . 390: // Use more efficient internal method.
. . 391: txTip, otherTip := new(big.Int), new(big.Int)
. 1796172553 392: tx.calcEffectiveGasTip(txTip, baseFee)
. 1970664816 393: other.calcEffectiveGasTip(otherTip, baseFee)
. . 394: return txTip.Cmp(otherTip)
. . 395:}
. . 396:
. . 397:// EffectiveGasTipIntCmp compares the effective gasTipCap of a transaction to the given gasTipCap.
. . 398:func (tx *Transaction) EffectiveGasTipIntCmp(other *big.Int, baseFee *big.Int) int {
```
This PR reduces the allocations for comparing two transactions from 2 to
0:
```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/types
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
EffectiveGasTipCmp/Original-14 64.67n ± 2% 25.13n ± 9% -61.13% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
EffectiveGasTipCmp/Original-14 16.00 ± 0% 0.00 ± 0% -100.00% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
EffectiveGasTipCmp/Original-14 2.000 ± 0% 0.000 ± 0% -100.00% (p=0.000 n=10)
```
It also speeds up the process by ~60%
There are two minor caveats with this PR:
- We change the API for `EffectiveGasTipCmp` and `EffectiveGasTipIntCmp`
(which are probably not used by much)
- We slightly change the behavior of `tx.EffectiveGasTip` when it
returns an error. It would previously return a negative number on error,
now it does not (since uint256 does not allow for negative numbers)
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
## Description
This PR fixes a bug in the Ledger hardware wallet version validation
logic for EIP-155 transaction signing. The original condition
incorrectly allowed older versions that don't support EIP-155 such as
0.9.9 and 0.1.5 to proceed.
This pull introduces a `Prefetch` operation in the trie to prefetch trie
nodes in parallel. It is used by the `triePrefetcher` to accelerate state
loading and improve overall chain processing performance.
## Summary
This PR addresses a DoS vulnerability in the GraphQL service by
implementing a maximum query depth limit. While #26026 introduced
timeout handling, it didn't fully mitigate the attack vector where
deeply nested queries can still consume excessive CPU and memory
resources before the timeout is reached.
## Changes
- Added `maxQueryDepth` constant (set to 20) to limit the maximum
nesting depth of GraphQL queries
- Applied the depth limit using `graphql.MaxDepth()` option when parsing
the schema
- Added test case `TestGraphQLMaxDepth` to verify that queries exceeding
the depth limit are properly rejected
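For reference, a minimal sketch of how such an option is typically applied when parsing a schema with graph-gophers/graphql-go (the surrounding function, package and resolver are assumptions, not the exact code in this PR):

```go
package graphqlservice

import (
	graphql "github.com/graph-gophers/graphql-go"
)

// maxQueryDepth bounds how deeply nested an incoming query may be.
const maxQueryDepth = 20

// newSchema parses the schema with the depth limit applied, so overly
// nested queries are rejected before execution starts.
func newSchema(schemaText string, resolver interface{}) (*graphql.Schema, error) {
	return graphql.ParseSchema(schemaText, resolver,
		graphql.UseFieldResolvers(),
		graphql.MaxDepth(maxQueryDepth),
	)
}
```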
## Security Impact
Without query depth limits, malicious actors could craft deeply nested
queries that:
- Consume excessive CPU cycles during query parsing and execution
- Allocate large amounts of memory for nested result structures
- Potentially cause service degradation or outages even with timeout
protection
This fix complements the existing timeout mechanism by preventing
resource-intensive queries from being executed in the first place.
## Testing
Added `TestGraphQLMaxDepth` which verifies that queries with nesting
depth > 20 are rejected with a `MaxDepthExceeded` error.
## References
- Original issue: #26026
- Related security best practices:
https://www.howtographql.com/advanced/4-security/
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Exposing a public method to set read limits for the WebSocket RPC to
prevent OOM.
Currently, the Geth server uses a default 32MB max read limit (message
size) for websocket, which is prone to OOM attacks. Anyone
can easily launch a client that sends a bunch of concurrent large requests
to crash the node with OOM. One example of such a script that can
easily crash a Geth node running a websocket server:
ec830979ac/poc.go
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Continuation of https://github.com/ethereum/go-ethereum/issues/32022
tablewriter assumes unix or windows, which may not be the case for
embedded targets.
For v0.0.5 of tablewriter, it is noted in table.go: "The protocols were
written in pure Go and works on windows and unix systems"
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The order of the checks was wrong which would have allowed a call to
modexp with `baseLen == 0 && modLen == 0` post fusaka.
Also handles an edge case where base/mod/exp length >= 2**64
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This changes the implementation to resolve the blob parameters according
to the current header timestamp. This matters for EIP-7918, where we
would previously resolve the UpdateFraction according to the parent
header fork, leading to a confusing situation at the fork transition
block.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
This add some of the changes that were missing from #31634. It
introduces the `TransitionTrie`, which is a façade pattern between the
current MPT trie and the overlay tree.
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
**Problem:** Including full account code in prestateTracer response
significantly increases response payload size.
**Solution:** Add codeHash field to the response. This will allow
client-side bytecode caching and is a non-breaking change.
**Note:** codeHash for EoAs is excluded to save space.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
## Description
Correct symmetric tolerance in gas limit validation:
Replace ambiguous "+-=" with standard "+/-" in the error message.
Logic rejects when |header − parent| ≥ limit, so allowed range is |Δ| ≤
limit − 1.
No logic or functionality has been modified.
The trie tracer is split into two distinct structs: opTracer and prevalueTracer.
The former is specific to MPT, while the latter is generic and applicable to all
trie implementations.
The original values of dirty nodes are tracked in a NodeSet. This serves
as the foundation for both full archive node implementations and the state live
tracer.
The GetHeader function was incorrectly returning an error when
encountering nil peers in the peers list, which contradicted the comment
"keep retrying if none are yet available".
Changed the logic to skip nil peers with 'continue' instead of returning
an error, allowing the function to properly iterate through all
available peers and attempt to retrieve the target header from each valid peer.
This ensures the function behaves as intended - trying all available
peers before giving up, rather than failing on the first nil peer encountered.
The previous comment stated that every 3rd block has a tx and every 5th
has an uncle.
The implementation actually adds one transaction to every second block
and does not add uncles.
Updated the comment to reflect the real behavior to avoid confusion when
reading tests.
Add missing it.Error() check after iteration in Database.DeleteRange to
avoid silently ignoring iterator errors before writing the batch.
Aligns behavior with batch.DeleteRange, which already validates iterator
errors. No other functional changes; existing tests pass (TestLevelDB).
This PR makes 2 changes to how
[EIP-7825](https://github.com/ethereum/go-ethereum/pull/31824) behaves.
When `eth_estimateGas` or `eth_createAccessList` is called without any
gas limit in the payload, geth will choose the block's gas limit or the
`RPCGasCap`, which can be larger than the `maxTxGas`.
When this happens for `estimateGas`, the gas estimation simply errors out
and ends, when it should instead continue the binary search to find the
lowest possible gas limit.
This PR will:
- Add a check to see if `hi` is larger than `maxTxGas` and cap it to
`maxTxGas` if it is, and add special-case handling in the gas-estimation
execution when it errs with `ErrGasLimitTooHigh` (see the sketch below).
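A minimal sketch of the capping step, assuming `maxTxGas` is the EIP-7825 per-transaction cap; the surrounding estimator logic is omitted:
```go
// clampEstimateGasLimit caps the upper bound of the estimation binary search
// so it never starts above the per-transaction gas cap.
func clampEstimateGasLimit(hi, maxTxGas uint64) uint64 {
	if hi > maxTxGas {
		return maxTxGas
	}
	return hi
}
```
With the cap in place, an execution that still errs with `ErrGasLimitTooHigh` can be treated like any other too-high guess, letting the binary search continue instead of aborting.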
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
- If all the `vhashes` are in the same `sidecar`, the same blob tx is
loaded many times. This PR fixes that.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This is the first part of #31532
It maintains a series of conversion markers which are to be updated by the
conversion code (in a follow-up PR, this is a breakdown of a larger PR
to make things easier to review). They can be used in this way:
- During the conversion, by storing the conversion markers when the
block has been processed. This is meant to be written in a function that
isn't currently present, hence [this
TODO](https://github.com/ethereum/go-ethereum/pull/31634/files#diff-89272f61e115723833d498a0acbe59fa2286e3dc7276a676a7f7816f21e248b7R384).
Part of https://github.com/ethereum/go-ethereum/issues/31583
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This introduces an error when the filter has both `blockHash` and
`fromBlock`/`toBlock`, since these are mutually exclusive. It seems the
tests were actually returning a `not found` error, which went undetected
because there was no check on the actual returned error in the test.
This adds a method on vm.EVM to set the jumpdest cache implementation.
It can be used to maintain an analysis cache across VM invocations, to improve
performance by skipping the analysis for already known contracts.
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This pull request optimizes trie hashing by reducing memory allocation
overhead. Specifically:
- define a fullNodeEncoder pool to reuse encoders and avoid memory
allocations (see the sketch below).
- simplify the encoding logic for shortNode and fullNode by getting rid
of the Go interfaces.
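The pooling part is the standard `sync.Pool` pattern; a hedged sketch with illustrative fields rather than the actual trie encoder types:
```go
package main

import "sync"

// fullNodeEncoder is only an illustration of the pooling pattern; the real
// encoder type and its fields live in the trie package.
type fullNodeEncoder struct {
	buf []byte
}

var fullNodeEncoderPool = sync.Pool{
	New: func() any { return &fullNodeEncoder{buf: make([]byte, 0, 1024)} },
}

func encodeFullNode(children [][]byte) []byte {
	enc := fullNodeEncoderPool.Get().(*fullNodeEncoder)
	defer func() {
		enc.buf = enc.buf[:0] // reset the scratch buffer before pooling it again
		fullNodeEncoderPool.Put(enc)
	}()
	for _, child := range children {
		enc.buf = append(enc.buf, child...)
	}
	return append([]byte(nil), enc.buf...) // copy out, the scratch buffer is reused
}
```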
This PR addresses flakiness in the rollback test discussed in
https://github.com/ethereum/go-ethereum/issues/32252
I found that a `nonce` collision caused transactions to occasionally fail to send.
I changed the error message in the failing test like this:
```
if err = client.SendTransaction(ctx, signedTx); err != nil {
	t.Fatalf("failed to send transaction: %v, nonce: %d", err, signedTx.Nonce())
}
```
and I occasionally got test failure with this message:
```
=== CONT TestFlakyFunction/Run_#100
rollback_test.go:44: failed to send transaction: already known, nonce: 0
--- FAIL: TestFlakyFunction/Run_#100 (0.07s)
```
Although `nonces` are obtained via `PendingNonceAt`, we observed that,
in rare cases (approximately 1 in 1000), two transactions from the same
sender end up with the same nonce. This likely happens because `tx0` has
not yet propagated to the transaction pool before `tx1` requests its
nonce. When the test succeeds, `tx0` and `tx1` have nonces `0` and `1`,
respectively. However, in rare failures, both transactions end up with
nonce `0`.
We modified the test to explicitly assign nonces to each transaction. By
controlling the nonce values manually, we eliminated the race condition
and ensured consistent behavior. After several thousand runs, the
flakiness was no longer reproducible in my local environment.
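A hedged sketch of the approach, assuming legacy transactions and a helper invented for illustration (`sendSequential`); the essential part is that nonces come from a local counter rather than repeated `PendingNonceAt` calls:
```go
package main

import (
	"context"
	"crypto/ecdsa"
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// sendSequential assigns nonces explicitly so that two transactions from the
// same sender can never race for the same pending nonce.
func sendSequential(ctx context.Context, client *ethclient.Client, key *ecdsa.PrivateKey,
	signer types.Signer, txs []*types.LegacyTx, startNonce uint64) error {
	for i, txdata := range txs {
		txdata.Nonce = startNonce + uint64(i) // deterministic, no PendingNonceAt race
		signed, err := types.SignNewTx(key, signer, txdata)
		if err != nil {
			return err
		}
		if err := client.SendTransaction(ctx, signed); err != nil {
			return fmt.Errorf("failed to send transaction: %w, nonce: %d", err, signed.Nonce())
		}
	}
	return nil
}
```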
Reduced internal polling interval in `pendingStateHasTx()` to speed up
test execution without impacting stability. It reduces test time for
`TestTransactionRollbackBehavior` from about 7 seconds to 2 seconds.
Correct the error message in the ExecuteStatelessPayloadV4 function to
reference newPayloadV4 and the Prague fork, instead of incorrectly
referencing newPayloadV3 and Cancun.
This improves clarity during debugging and aligns the error message with
the actual function and fork being validated. No logic is changed.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
It seems `signal.result` was not sent back in the shortened case, which
would cause a deadlock.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Replace manual byte-by-byte XOR implementation with the optimized
bitutil.XORBytes function. This improves performance by using word-sized
operations on supported architectures while maintaining the same
functionality. The optimized version processes data in bulk rather than
one byte at a time.
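For reference, a small usage example of the helper; `XORBytes` writes `a XOR b` into `dst` and returns the number of bytes processed:
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/bitutil"
)

func main() {
	a := []byte{0xff, 0x0f, 0xf0, 0x00}
	b := []byte{0x0f, 0x0f, 0x0f, 0x0f}
	dst := make([]byte, len(a))

	// XORBytes processes word-sized chunks where the architecture allows it,
	// falling back to byte-wise XOR otherwise.
	n := bitutil.XORBytes(dst, a, b)
	fmt.Printf("xored %d bytes: %x\n", n, dst) // xored 4 bytes: f000ff0f
}
```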
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
`binary.AppendUvarint` offers better performance than using append
directly, because it avoids unnecessary memory allocation and copying.
In our case, it can increase the performance by +35.8% for the
`blockWriter.append` function:
```
benchmark old ns/op new ns/op delta
BenchmarkBlockWriterAppend-8 5.97 3.83 -35.80%
```
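For reference, a hedged comparison of the stdlib helper against one manual encode-then-append pattern it can replace (the PR's original code path is not shown here):
```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Old pattern: encode into a scratch array, then copy into the buffer.
	var scratch [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(scratch[:], 300)
	old := append([]byte(nil), scratch[:n]...)

	// New pattern: append the varint straight into the buffer, no scratch copy.
	appended := binary.AppendUvarint(nil, 300)

	fmt.Printf("old: %x, new: %x\n", old, appended) // both print ac02
}
```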
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
[EIP-7594](https://eips.ethereum.org/EIPS/eip-7594) defines a limit of at
most 6 blobs per transaction. We need to enforce this limit during block
processing.
> Additionally, a limit of 6 blobs per transaction is introduced.
Clients MUST enforce this limit when validating blob transactions at
submission time, when received from the network, and during block
production and processing.
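A hedged sketch of the per-transaction check during block processing; the constant is local for illustration, the real code may take the limit from chain parameters:
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
)

// maxBlobsPerTx is the EIP-7594 per-transaction limit; a placeholder here.
const maxBlobsPerTx = 6

// validateBlobTx rejects blob transactions carrying more versioned hashes
// than the per-transaction limit allows.
func validateBlobTx(tx *types.Transaction) error {
	if n := len(tx.BlobHashes()); n > maxBlobsPerTx {
		return fmt.Errorf("too many blobs in transaction: have %d, permitted %d", n, maxBlobsPerTx)
	}
	return nil
}
```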
The main purpose of this change is to enforce the version setting when
constructing the blobSidecar, avoiding the creation of sidecars with a
wrong or default version tag.
The implementation of `parseIndexBlock` used a reverse loop with slice
appends to build the restart points, which was less cache-friendly and
involved unnecessary allocations and operations. In this PR we change
the implementation to read and validate the restart points in a single
forward loop.
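A hedged sketch of the forward pass, assuming the restart points are stored as consecutive 4-byte big-endian offsets that must be strictly increasing; the actual pathdb layout may differ:
```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// parseRestarts reads restart points in one forward pass, validating that the
// offsets are strictly increasing as it goes.
func parseRestarts(data []byte, count int) ([]uint32, error) {
	if len(data) < count*4 {
		return nil, errors.New("truncated restart section")
	}
	restarts := make([]uint32, count) // single up-front allocation, no appends
	var prev uint32
	for i := 0; i < count; i++ {
		off := binary.BigEndian.Uint32(data[i*4:])
		if i > 0 && off <= prev {
			return nil, fmt.Errorf("restart point out of order: %d <= %d", off, prev)
		}
		restarts[i] = off
		prev = off
	}
	return restarts, nil
}
```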
Here is the benchmark test:
```bash
go test -benchmem -bench=BenchmarkParseIndexBlock ./triedb/pathdb/
```
The result is as follows:
```
benchmark old ns/op new ns/op delta
BenchmarkParseIndexBlock-8 52.9 37.5 -29.05%
```
about a 29% improvement
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Fixes #32175.
This fixes the scenario where the blockhash opcode would return 0x0
during RPC simulations when using BlockOverrides with a future block
number. The root cause was that BlockOverrides.Apply() only modified the
vm.BlockContext, but GetHashFn() depends on the actual
types.Header.Number to resolve valid historical block hashes. This
caused a mismatch and resulted in incorrect behavior during trace and
call simulations.
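A hedged sketch of the shape of the fix; the override plumbing is an assumption, the point is that the overridden number is mirrored into the header that `GetHashFn` reads as well as into the `vm.BlockContext`:
```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
)

// applyNumberOverride mirrors an overridden block number into both the header
// (which core.GetHashFn walks back from to resolve BLOCKHASH) and the EVM
// block context (what the NUMBER opcode sees).
func applyNumberOverride(header *types.Header, blockCtx *vm.BlockContext, number *big.Int) {
	if number == nil {
		return
	}
	header.Number = new(big.Int).Set(number)
	blockCtx.BlockNumber = new(big.Int).Set(number)
}
```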
---------
Co-authored-by: shantichanal <158101918+shantichanal@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
The root cause of the flaky test was a nonce conflict caused by async
contract deployments.
This solution defines a custom deployer with automatic nonce management.
This is something interesting I came across during my benchmarks: we
spend ~3.8% of all allocations allocating the header number on the heap.
```
(pprof) list GetHeaderByHash
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*BlockChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/blockchain_reader.go
0 5786566117 (flat, cum) 15.15% of Total
. . 79:func (bc *BlockChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 5786566117 80: return bc.hc.GetHeaderByHash(hash)
. . 81:}
. . 82:
. . 83:// GetHeaderByNumber retrieves a block header from the database by number,
. . 84:// caching it (associated with its hash) if found.
. . 85:func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/headerchain.go
0 5786566117 (flat, cum) 15.15% of Total
. . 404:func (hc *HeaderChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 1471264309 405: number := hc.GetBlockNumber(hash)
. . 406: if number == nil {
. . 407: return nil
. . 408: }
. 4315301808 409: return hc.GetHeader(hash, *number)
. . 410:}
. . 411:
. . 412:// HasHeader checks if a block header is present in the database or not.
. . 413:// In theory, if header is present in the database, all relative components
. . 414:// like td and hash->number should be present too.
(pprof) list GetBlockNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetBlockNumber in github.com/ethereum/go-ethereum/core/headerchain.go
94438817 1471264309 (flat, cum) 3.85% of Total
. . 100:func (hc *HeaderChain) GetBlockNumber(hash common.Hash) *uint64 {
94438817 94438817 101: if cached, ok := hc.numberCache.Get(hash); ok {
. . 102: return &cached
. . 103: }
. 1376270828 104: number := rawdb.ReadHeaderNumber(hc.chainDb, hash)
. . 105: if number != nil {
. 554664 106: hc.numberCache.Add(hash, *number)
. . 107: }
. . 108: return number
. . 109:}
. . 110:
. . 111:type headerWriteResult struct {
(pprof) list ReadHeaderNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core/rawdb.ReadHeaderNumber in github.com/ethereum/go-ethereum/core/rawdb/accessors_chain.go
204606513 1376270828 (flat, cum) 3.60% of Total
. . 146:func ReadHeaderNumber(db ethdb.KeyValueReader, hash common.Hash) *uint64 {
109577863 1281242178 147: data, _ := db.Get(headerNumberKey(hash))
. . 148: if len(data) != 8 {
. . 149: return nil
. . 150: }
95028650 95028650 151: number := binary.BigEndian.Uint64(data)
. . 152: return &number
. . 153:}
. . 154:
. . 155:// WriteHeaderNumber stores the hash->number mapping.
. . 156:func WriteHeaderNumber(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
```
Opening this to discuss the idea. I know that rawdb.EmptyNumber is not a
great name for the variable; open to suggestions.
---
**Description:**
- Replaced outdated GitHub wiki links with current, official
documentation URLs.
- Removed links that redirect or are no longer relevant.
- Ensured all references point to up-to-date and reliable sources.
---
This pull request slightly improves the freezer fsync mechanism by scheduling
the Sync operation based on the number of uncommitted items in addition to
the original time interval.
Originally, freezer.Sync was triggered every 30 seconds, which worked well during
active chain synchronization. However, once the initial state sync is complete,
the fixed interval causes Sync to be scheduled too frequently.
To address this, the scheduling logic has been improved to consider both the time
interval and the number of uncommitted items. This additional condition helps
avoid unnecessary Sync operations when the chain is idle.
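A hedged sketch of the combined condition; the threshold value is a placeholder, not the one used in the PR:
```go
package main

import "time"

// syncDue reports whether the freezer should fsync now. The original trigger
// was purely time-based; this couples it with the amount of uncommitted data
// so that an idle chain does not schedule needless Sync calls.
func syncDue(lastSync time.Time, uncommitted uint64) bool {
	const (
		syncInterval   = 30 * time.Second // original fixed interval
		minUncommitted = 64               // placeholder threshold, not the real value
	)
	return uncommitted >= minUncommitted ||
		(uncommitted > 0 && time.Since(lastSync) >= syncInterval)
}
```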
Introduce a file-based state journal in the path database, fixing
the Pebble restriction when the journal size exceeds 4GB.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR fixes an issue in the tx_fetcher DoS prevention logic where the
code keeps the overflow amount (`want - maxTxAnnounces`) instead of the
allowed amount (`maxTxAnnounces - used`). The specific changes are:
- Correct slice indexing in the announcement drop logic
- Extend the overflow test case to cover the inversion scenario
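An illustration of the inversion with simplified names (the real code operates on announcement metadata, not bare hashes):
```go
package main

import "github.com/ethereum/go-ethereum/common"

// capAnnouncements drops the part of a batch that would exceed the per-peer
// limit. The bug kept `want - maxTxAnnounces` entries (the overflow) instead
// of the `maxTxAnnounces - used` entries that are still allowed.
func capAnnouncements(hashes []common.Hash, used, maxTxAnnounces int) []common.Hash {
	want := used + len(hashes)
	if want <= maxTxAnnounces {
		return hashes
	}
	keep := maxTxAnnounces - used // the allowed amount, not the overflow
	if keep < 0 {
		keep = 0
	}
	return hashes[:keep]
}
```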
This is a resubmit of https://github.com/ethereum/go-ethereum/pull/31820
against the `master` branch.
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
---
**Description:**
- Replaced outdated GitHub wiki links with the official Ethereum
documentation for Web3 Secret Storage.
- Updated references in `keystore.go` and `passphrase.go` for improved
accuracy and reliability.
---
This pull request fixes an issue in disabling direct-ancient mode in
snap sync.
Specifically, if `origin >= frozen && origin != 0`, part of the chain data
has already been written into the key-value store, and all subsequent
writes into the ancient store scheduled by the downloader will be rejected
with the error:
`ERROR[07-10|03:46:57.924] Error importing chain data to ancients
err="can't add block 1166 hash: the append operation is out-order: have
1166 want 0"`.
This issue is detected by the https://github.com/ethpandaops/kurtosis-sync-test,
which initiates the first snap sync cycle without the finalized header and
implicitly disables the direct-ancient mode. A few seconds later the second
snap sync cycle is initiated with the finalized information and direct-ancient mode
is enabled incorrectly.
This adds the SSZ types from the
[EIP-7928](https://eips.ethereum.org/EIPS/eip-7928) and also adds
encoder/decoder generation using https://github.com/ferranbt/fastssz.
The fastssz dependency is updated because the generation will not work
properly with the master branch version due to a bug in fastssz.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR updates the outdated documentation URL from docs.gnosis.io to
the new official docs.safe.global domain. The change reflects the
rebranding from Gnosis Safe to Safe and ensures that users are directed
to the current API documentation for transaction service reference.
This PR adds a block validation check for the maximum block size, as required by
EIP-7934, and also applies a slightly lower size limit during block building.
---------
Co-authored-by: spencer-tb <spencer@spencertaylorbrown.uk>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
alternate approach to https://github.com/ethereum/go-ethereum/pull/31328
suggested by @MariusVanDerWijden. This prevents Geth from outputting a
lot of logs when trying to commit on-demand dev mode blocks while the
client is shutting down.
The issue is hard to reproduce, but I've seen it myself and it is
annoying when it happens. I think this is a reasonably simple solution,
and we can revisit if we find that the output is still too large (i.e.
there is a large delay between initiating shut down and the simulated
beacon receiving the signal, while in this loop).
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This change enables more tests to run on GitHub actions. First, it
removes the `-short` flag passed to `go test`, unskipping some longer
running tests. We also enable the full consensus tests to run by
enabling submodules during git clone.
The EF now operates org-wide runners with the `self-hosted-ghr` label.
These are auto-scaling runners which should ideally allow us to process
any amount of testing load we throw at them. The new runners have `HOME`
configured differently from the actual user home directory, so our
internal test for resolving `~` had to be adapted to work in this scenario.
This is a follow-up PR to #32128. It seems I missed adding
--txlookuplimit as hidden. In the meantime, I also added the other
deprecated flags to the output of `show-deprecated-flags`.
- **Keep changes minimal and focused.** Only modify code directly related to the task at hand. Do not refactor unrelated code, rename existing variables or functions for style, or bundle unrelated fixes into the same commit or PR.
- **Do not add, remove, or update dependencies** unless the task explicitly requires it.
## Pre-Commit Checklist
Before every commit, run **all** of the following checks and ensure they pass:
### 1. Formatting
Before committing, always run `gofmt` and `goimports` on all modified files:
```sh
gofmt -w <modified files>
goimports -w <modified files>
```
### 2. Build All Commands
Verify that all tools compile successfully:
```sh
make all
```
This builds all executables under `cmd/`, including `keeper` which has special build requirements.
### 3. Tests
While iterating during development, use `-short` for faster feedback:
```sh
go run ./build/ci.go test -short
```
Before committing, run the full test suite **without** `-short` to ensure all tests pass, including the Ethereum execution-spec tests and all state/block test permutations:
```sh
go run ./build/ci.go test
```
### 4. Linting
```sh
go run ./build/ci.go lint
```
This runs additional style checks. Fix any issues before committing.
### 5. Generated Code
```sh
go run ./build/ci.go check_generate
```
Ensures that all generated files (e.g., `gen_*.go`) are up to date. If this fails, first install the required code generators by running `make devtools`, then run the appropriate `go generate` commands and include the updated files in your commit.
### 6. Dependency Hygiene
```sh
go run ./build/ci.go check_baddeps
```
Verifies that no forbidden dependencies have been introduced.
## Commit Message Format
Commit messages must be prefixed with the package(s) they modify, followed by a short lowercase description:
```
<package(s)>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `eth, rpc: make trace configs optional`
- `cmd/geth: add new flag for sync mode`
Use comma-separated package names when multiple areas are affected. Keep the description concise.
## Pull Request Title Format
PR titles follow the same convention as commit messages:
```
<list of modified paths>: description
```
Examples:
- `core/vm: fix stack overflow in PUSH instruction`
- `trie/archiver: streaming subtree archival to fix OOM`
Use the top-level package paths, comma-separated if multiple areas are affected. Only mention the directories with functional changes; interface changes that trickle all over the codebase should not generate an exhaustive list. The description should be a short, lowercase summary of the change.