Correct the error message in the ExecuteStatelessPayloadV4 function to
reference newPayloadV4 and the Prague fork, instead of incorrectly
referencing newPayloadV3 and Cancun.
This improves clarity during debugging and aligns the error message with
the actual function and fork being validated. No logic is changed.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The main purpose of this change is to enforce the version setting when
constructing the blobSidecar, avoiding the creation of sidecars with a
wrong or default version tag.
Fixes #32175.
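A hedged sketch of what the enforced construction looks like from the
caller's side; the constructor name and version constant here are
assumptions for illustration, not the exact API:
```go
// Sketch only: constructor name and version constant are assumed.
// Making the version an explicit argument means a sidecar can no
// longer silently inherit the zero/default version tag.
sidecar := types.NewBlobTxSidecar(types.BlobSidecarVersion0, blobs, commitments, proofs)
```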
This fixes the scenario where the blockhash opcode would return 0x0
during RPC simulations when using BlockOverrides with a future block
number. The root cause was that BlockOverrides.Apply() only modified the
vm.BlockContext, but GetHashFn() depends on the actual
types.Header.Number to resolve valid historical block hashes. This
caused a mismatch and resulted in incorrect behavior during trace and
call simulations.
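A minimal sketch of the fix's shape, using a simplified override type
(the real `BlockOverrides.Apply()` lives in the RPC layer, but the point
is the same):
```go
package example

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
)

// applyNumberOverride mirrors the idea of the fix: the overridden
// number must reach both the vm.BlockContext (which backs the NUMBER
// opcode) and the types.Header (which core.GetHashFn consults to
// resolve BLOCKHASH). Updating only the former leaves the two out of
// sync, so BLOCKHASH resolves against the stale header and returns 0x0.
func applyNumberOverride(number *big.Int, blockCtx *vm.BlockContext, header *types.Header) {
	if number == nil {
		return
	}
	blockCtx.BlockNumber = new(big.Int).Set(number) // what Apply() already did
	header.Number = new(big.Int).Set(number)        // the missing half
}
```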
---------
Co-authored-by: shantichanal <158101918+shantichanal@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
This is something interesting I came across during my benchmarks: we
spend ~3.8% of all allocations on heap-allocating the header number.
```
(pprof) list GetHeaderByHash
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*BlockChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/blockchain_reader.go
0 5786566117 (flat, cum) 15.15% of Total
. . 79:func (bc *BlockChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 5786566117 80: return bc.hc.GetHeaderByHash(hash)
. . 81:}
. . 82:
. . 83:// GetHeaderByNumber retrieves a block header from the database by number,
. . 84:// caching it (associated with its hash) if found.
. . 85:func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/headerchain.go
0 5786566117 (flat, cum) 15.15% of Total
. . 404:func (hc *HeaderChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 1471264309 405: number := hc.GetBlockNumber(hash)
. . 406: if number == nil {
. . 407: return nil
. . 408: }
. 4315301808 409: return hc.GetHeader(hash, *number)
. . 410:}
. . 411:
. . 412:// HasHeader checks if a block header is present in the database or not.
. . 413:// In theory, if header is present in the database, all relative components
. . 414:// like td and hash->number should be present too.
(pprof) list GetBlockNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetBlockNumber in github.com/ethereum/go-ethereum/core/headerchain.go
94438817 1471264309 (flat, cum) 3.85% of Total
. . 100:func (hc *HeaderChain) GetBlockNumber(hash common.Hash) *uint64 {
94438817 94438817 101: if cached, ok := hc.numberCache.Get(hash); ok {
. . 102: return &cached
. . 103: }
. 1376270828 104: number := rawdb.ReadHeaderNumber(hc.chainDb, hash)
. . 105: if number != nil {
. 554664 106: hc.numberCache.Add(hash, *number)
. . 107: }
. . 108: return number
. . 109:}
. . 110:
. . 111:type headerWriteResult struct {
(pprof) list ReadHeaderNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core/rawdb.ReadHeaderNumber in github.com/ethereum/go-ethereum/core/rawdb/accessors_chain.go
204606513 1376270828 (flat, cum) 3.60% of Total
. . 146:func ReadHeaderNumber(db ethdb.KeyValueReader, hash common.Hash) *uint64 {
109577863 1281242178 147: data, _ := db.Get(headerNumberKey(hash))
. . 148: if len(data) != 8 {
. . 149: return nil
. . 150: }
95028650 95028650 151: number := binary.BigEndian.Uint64(data)
. . 152: return &number
. . 153:}
. . 154:
. . 155:// WriteHeaderNumber stores the hash->number mapping.
. . 156:func WriteHeaderNumber(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
```
Opening this to discuss the idea. I know that rawdb.EmptyNumber is not a
great name for the variable; I'm open to suggestions.
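For concreteness, a sketch of the sentinel idea under discussion, with
rawdb's 'H'-prefixed hash->number key helper inlined (names up for
debate as noted):
```go
package example

import (
	"encoding/binary"
	"math"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb"
)

// EmptyNumber is the proposed sentinel for a missing hash->number
// mapping, letting the read path return a plain uint64 so the number
// no longer escapes to the heap as a *uint64.
const EmptyNumber = math.MaxUint64

// headerNumberKey mirrors rawdb's key scheme: 'H' + hash.
func headerNumberKey(hash common.Hash) []byte {
	return append([]byte("H"), hash.Bytes()...)
}

// readHeaderNumber is ReadHeaderNumber rewritten to return by value.
func readHeaderNumber(db ethdb.KeyValueReader, hash common.Hash) uint64 {
	data, _ := db.Get(headerNumberKey(hash))
	if len(data) != 8 {
		return EmptyNumber
	}
	return binary.BigEndian.Uint64(data)
}
```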
Introduce a file-based state journal in the path database, fixing the
Pebble restriction that is hit when the journal size exceeds 4GB.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR fixes an issue in the tx_fetcher DoS prevention logic where the
code keeps the overflow amount (`want - maxTxAnnounces`) instead of the
allowed amount (`maxTxAnnounces - used`). The specific changes are:
- Correct slice indexing in the announcement drop logic
- Extend the overflow test case to cover the inversion scenario
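In simplified form (the real logic lives in eth/fetcher's announcement
handling), the corrected capping looks like this:
```go
package fetcher

import "github.com/ethereum/go-ethereum/common"

// capAnnouncements keeps only the allowed prefix of a batch of new
// announcements. The buggy version sliced by the overflow amount
// (want - maxTxAnnounces) instead of the allowed amount
// (maxTxAnnounces - used).
func capAnnouncements(hashes []common.Hash, used, maxTxAnnounces int) []common.Hash {
	if want := used + len(hashes); want > maxTxAnnounces {
		hashes = hashes[:maxTxAnnounces-used]
	}
	return hashes
}
```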
This pull request fixes an issue in disabling direct-ancient mode in
snap sync.
Specifically, if `origin >= frozen && origin != 0`, it implies that a part of
the chain data has already been written into the key-value store; all
subsequent writes into the ancient store scheduled by the downloader will
then be rejected with the error
`ERROR[07-10|03:46:57.924] Error importing chain data to ancients
err="can't add block 1166 hash: the append operation is out-order: have
1166 want 0"`.
This issue was detected by https://github.com/ethpandaops/kurtosis-sync-test,
which initiates the first snap sync cycle without the finalized header and
implicitly disables direct-ancient mode. A few seconds later, the second
snap sync cycle is initiated with the finalized information, and
direct-ancient mode is incorrectly enabled.
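The resulting guard, sketched with simplified names:
```go
// Direct-ancient mode is only safe while nothing beyond the frozen
// range has landed in the key-value store; once origin >= frozen (and
// is non-zero), later freezer appends would arrive out of order.
directAncient := origin == 0 || origin < frozen
```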
Alternate approach to https://github.com/ethereum/go-ethereum/pull/31328,
suggested by @MariusVanDerWijden. This prevents Geth from outputting a
lot of logs when trying to commit on-demand dev mode blocks while the
client is shutting down.
The issue is hard to reproduce, but I've seen it myself and it is
annoying when it happens. I think this is a reasonably simple solution,
and we can revisit if we find that the output is still too large (i.e.
there is a large delay between initiating shutdown and the simulated
beacon receiving the signal while in this loop).
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Improves the SSTORE gas calculation a bit. Previously we would pull up
the state object twice. This is okay for existing objects, since they
are cached; non-existing objects, however, are not cached, so we had
to go through all 128 diff layers as well as the disk layer twice, just
for the gas calculation.
```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/vm
cpu: AMD Ryzen 9 5900X 12-Core Processor
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
Interpreter-24 1118.0n ± 2% 602.8n ± 1% -46.09% (p=0.000 n=10)
```
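Conceptually, the change amounts to resolving the slot's current and
original value in one pass; the helper name below is hypothetical, not
the exact API:
```go
// Before: two independent lookups, each potentially walking all 128
// diff layers plus the disk layer for a non-existing account.
//   current  := statedb.GetState(addr, slot)
//   original := statedb.GetCommittedState(addr, slot)
// After (helper name hypothetical): one resolution serving both reads.
current, original := statedb.GetStateAndCommittedState(addr, slot)
gas := sstoreGasCost(current, original, newValue)
```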
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Small update for logs when syncing with blsync. Downgrades the "latest
filled block is not available" message to warning level.
Co-authored-by: shantichanal <158101918+shantichanal@users.noreply.github.com>
geth cmd: `geth --dev --dev.period 5`
call: `debug.setHead` to rollback several blocks.
If the `debug.setHead` call is delayed, it can trigger a panic with a
small probability, due to dereferencing a nil `fcResponse.PayloadID`.
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This pull request refines the filtermap implementation, defining key
APIs for map and epoch calculations to improve readability.
It doesn't change any logic; it's a pure cleanup.
---------
Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
The address filter was never checked against a maximum limit, which
leaves API nodes open to abuse. This PR adds a limit similar to the
existing one for topics.
## Description (AI generated)
This pull request introduces a new validation to enforce a maximum limit
on the number of addresses allowed in filter criteria for Ethereum logs.
It includes updates to the `FilterAPI` and `EventSystem` logic, as well
as corresponding test cases to ensure the new constraint is properly
enforced.
### Core functionality changes:
* **Validation for maximum addresses in filter criteria**:
- Added a new constant, `maxAddresses`, set to 100, to define the
maximum allowable addresses in a filter.
- Introduced a new error, `errExceedMaxAddresses`, to handle cases where
the number of addresses exceeds the limit.
- Updated the `GetLogs` method in `FilterAPI` to validate the number of
addresses against `maxAddresses`.
- Modified the `UnmarshalJSON` method to return an error if the number
of addresses in the input JSON exceeds `maxAddresses`.
- Added similar validation to the `SubscribeLogs` method in
`EventSystem`.
### Test updates:
* **New test cases for address limit validation**:
- Added a test in `TestUnmarshalJSONNewFilterArgs` to verify that
exceeding the maximum number of addresses triggers the
`errExceedMaxAddresses` error.
- Updated `TestInvalidLogFilterCreation` to include a test case for an
invalid filter with more than `maxAddresses` addresses.
- Updated `TestInvalidGetLogsRequest` to test for invalid log requests
with excessive addresses.
These changes ensure that the system enforces a reasonable limit on the
number of addresses in filter criteria, improving robustness and
preventing potential performance issues.
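A minimal sketch of the check, as described above (not the verbatim
geth code):
```go
package filters

import (
	"errors"

	"github.com/ethereum/go-ethereum/common"
)

// maxAddresses caps the address list of a filter, mirroring the
// existing topics limit.
const maxAddresses = 100

var errExceedMaxAddresses = errors.New("exceeds the maximum number of addresses per filter")

// validateAddresses is applied wherever filter criteria enter the
// system: GetLogs, UnmarshalJSON and SubscribeLogs.
func validateAddresses(addrs []common.Address) error {
	if len(addrs) > maxAddresses {
		return errExceedMaxAddresses
	}
	return nil
}
```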
---------
Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
core.BlockChainConfig.VmConfig is not a pointer, so setting the Tracer
on the `vmConfig` object after it was passed to options does *not* apply
it to options.VmConfig.
This fixes the issue by setting the value directly inside the `options`
object and removing the confusing `vmConfig` variable to prevent further
mistakes.
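The pitfall in miniature; since `VmConfig` is a value field, the
assignment copies it (`hooks` stands in for the tracer hooks being set):
```go
vmConfig := vm.Config{}
options := core.BlockChainConfig{VmConfig: vmConfig} // copies vmConfig
vmConfig.Tracer = hooks                              // mutates the local copy only
options.VmConfig.Tracer = hooks                      // the fix: set it on options directly
```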
This pull request tracks the state indexing progress in the eth_syncing
RPC response, i.e. we will return a non-null syncing status until indexing
has finished.
This pull request fixes a flaw in PBSS archive mode that significantly
degrades performance when the mode is enabled.
Originally, in hash mode, the dirty trie cache was completely disabled
when archive mode was active, in order to turn off the in-memory garbage
collection mechanism. However, the internal logic in path mode differs
significantly, and the dirty trie node cache is essential for maintaining
chain insertion performance. Therefore, the cache is now retained in
path mode.
If Geth is engaged in a long-running block synchronization, such as a
full sync over a large number of blocks, invoking `debug_setHead` will
cause `downloader.Cancel` to wait for all fetchers to stop first.
This can be time-consuming, particularly for the block processing
thread.
To address this, we manually call `blockchain.StopInsert` to interrupt
the blocking processing thread and allow it to exit immediately, and
after that call `blockchain.ResumeInsert` to resume the block
downloading process.
Additionally, we add a sanity check for the input block number of
`debug_setHead` to ensure its validity.
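The sequence, sketched:
```go
// Interrupt the in-flight block insertion so the processing thread
// exits immediately, let the downloader wind down, then re-enable
// insertion for whatever follows the setHead.
bc.StopInsert()
downloader.Cancel()
bc.ResumeInsert()
```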
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
In this pull request, the original `CacheConfig` has been renamed to `BlockChainConfig`.
Over time, more fields have been added to `CacheConfig` to support
blockchain configuration, such as `ChainHistoryMode`, which clearly
extends beyond caching concerns.
Additionally, adding new parameters to the blockchain constructor has
become increasingly complicated, since it’s initialized across multiple
places in the codebase. A natural solution is to consolidate these arguments
into a dedicated configuration struct.
As a result, the existing `CacheConfig` has been redefined as `BlockChainConfig`.
Some parameters, such as `VmConfig`, `TxLookupLimit`, and `ChainOverrides`
have been moved into `BlockChainConfig`. In addition, a few fields in
`BlockChainConfig` were renamed, specifically:
- `TrieCleanNoPrefetch` -> `NoPrefetch`
- `TrieDirtyDisabled` -> `ArchiveMode`
Notably, this change won't affect the command line flags or the toml
configuration file. It's just an internal refactoring and fully backward-compatible.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The prestateTracer was intended to exclude accounts that were empty
prior to execution from the prestate, but this was only being done for
created contracts. This PR makes it so all such empty accounts are
excluded. The behavior is configurable using the `includeEmpty: true`
flag introduced in #31855.
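Sketched (field names assumed), the rule now applies uniformly rather
than only to created contracts:
```go
// An account that was empty before execution (zero nonce, zero
// balance, no code) is dropped from the prestate unless the user
// opted in with includeEmpty.
if !t.config.IncludeEmpty && acct.empty() {
	delete(t.pre, addr)
}
```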
---------
Signed-off-by: Ignacio Hagopian <jsign.uy@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR introduces a flag that enables returning newly created state
objects in the prestateTracer.
**Rationale**
Having this information is useful because local execution can more
easily distinguish between newly created objects and system contracts.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This pull request introduces a new test suite in the workload framework
for transaction tracing.
**test generation**
`go run . tracegen --trace-tests trace-test.json http://host:8545`
and you can choose to store the trace result in a specific folder
`go run . tracegen --trace-tests trace-test.json --trace-output
./trace-result http://host:8545`
**test run**
`./workload test -run Trace/Transaction --trace-invalid ./trace-invalid
http://host:8545`
Mismatched trace results will be saved in the specified folder for
further investigation.
This PR improves the speed of Disc/v4 and Disc/v5 based discovery by
adding a prefetch buffer to discovery sources, eliminating slowdowns
due to timeouts and rate mismatch between the two processes.
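A self-contained sketch of the prefetch idea (not the actual geth type,
just the shape of it): a goroutine eagerly drains the source iterator
into a buffered channel, so slow discovery lookups and the consumer's
dial loop no longer run in lock-step:
```go
package example

import "github.com/ethereum/go-ethereum/p2p/enode"

// bufferIter wraps a source iterator with a prefetch buffer.
type bufferIter struct {
	ch   chan *enode.Node
	cur  *enode.Node
	quit chan struct{}
}

func newBufferIter(src enode.Iterator, size int) *bufferIter {
	it := &bufferIter{ch: make(chan *enode.Node, size), quit: make(chan struct{})}
	go func() {
		defer close(it.ch)
		defer src.Close()
		for src.Next() {
			select {
			case it.ch <- src.Node(): // buffer the node, keep fetching
			case <-it.quit:
				return
			}
		}
	}()
	return it
}

func (it *bufferIter) Next() bool {
	var ok bool
	it.cur, ok = <-it.ch
	return ok
}

func (it *bufferIter) Node() *enode.Node { return it.cur }
func (it *bufferIter) Close()            { close(it.quit) }
```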
Since we now want to filter the discv4 nodes iterator, it is being removed
from the default discovery mix in p2p.Server. To keep backwards-compatibility,
the default unfiltered discovery iterator will be utilized by the server when
no protocol-specific discovery is configured.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This fixes a regression introduced by #29158 where receipts of empty blocks
were stored into the database as an empty byte array instead of as an RLP
empty list.
Fixes #31938.
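The distinction, for reference: an empty list still has a one-byte RLP
encoding, whereas the regression wrote zero bytes:
```go
enc, _ := rlp.EncodeToBytes([]*types.Receipt{})
fmt.Printf("%x\n", enc) // c0 — the canonical RLP empty list, not ""
```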
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Fixes an issue where querying logs for block ranges starting from 0 would fail with an irrelevant
error on a pruned node. Now the correct "history is pruned" error will be returned.
This situation was failing quietly for me recently when I had a partial
data corruption issue. Changing the log level to Error would increase
visibility for me.
This implements a backing store for chain history based on era1 files.
The new store is integrated with the freezer. Queries for blocks and receipts
below the current freezer tail are handled by the era store.
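The resulting read path, sketched with assumed names:
```go
// Queries below the freezer tail fall through to the era1-backed
// store; everything else is served by the freezer as before.
if number < tail {
	return eraStore.ReadBlock(number)
}
return freezer.ReadBlock(number)
```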
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
This changes the API backend to return null for not-found blocks. This
behavior is required by the RPC spec, yet `BlockByNumberOrHash` has always
returned an error for this case, ever since being added in
https://github.com/ethereum/go-ethereum/pull/19491.
The backend method has a couple of call sites, and all of them handle a `nil`
block result because `BlockByNumber` returns `nil` for not-found.
The only case where this makes a real difference is for `eth_getBlockReceipts`,
which was changed in #31361 to actually forward the error from `BlockByNumberOrHash`
to the caller.
This PR introduces a new native tracer for AA bundlers. Bundlers participating in the alternative
mempool will need to validate userops. This tracer will return sufficient information for them to
decide whether griefing is possible. Resolves #30546.
---------
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>