This PR fixes the bug reported in #33365.
The impact of the bug is not catastrophic. After a transaction is
ultimately fetched, validation and propagation will be performed based
on the fetched body, and any response with a mismatched type is treated
as a protocol violation. At most, an attacker could waste a limited
portion of the victim's bandwidth.
However, the reasons for submitting this PR are as follows:
1. Fetching a transaction announced with an arbitrary type is a weird
behavior.
2. It aligns with efforts such as EIP-8077 and #33119 to make the
fetcher smarter and reduce bandwidth waste.
Regarding the `FilterType` function: it could potentially be implemented
by modifying the `Filter` function's parameter itself, but I wasn't sure
whether changing that function is acceptable, so I left it as is.
## Description
This PR fixes incorrect contract code state metrics by ensuring
duplicate codes are not counted towards the reported results.
## Rationale
The contract code metrics don't consider database deduplication. The
current implementation assumes that the results are only **slightly
inaccurate**, but this is not true, especially for data collection
efforts that started from the genesis block.
The EIP says to increment PC by 2 _instead of_ the standard increment by
1. The opcode handlers added in #33095 result in incrementing PC by 3,
because they ignored the increment already present in `interpreter.go`.
Does this need to be better specified in the EIP? I've added a [new test
case](https://github.com/ethereum/EIPs/pull/10859) for it anyway.
Found by @0xriptide.
The original condition `deleted && !logPrinted || time.Since(...)` was
incorrectly grouped due to operator precedence (`&&` binds tighter than
`||`), causing logs to print every 10 seconds even when no deletion was
happening (`deleted == false`).
According to SafeDeleteRange documentation, the 'deleted' parameter is
"true if entries have actually been deleted already". The logging should
only happen when deletion is active.
Fixed by adding parentheses: `deleted && (!logPrinted ||
time.Since(...))`. Now logs print only when items are being deleted AND
either it's the first log or 10+ seconds have passed since the last one.
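For illustration, a minimal stand-alone sketch of the precedence issue; the variable names mirror the description above but the snippet is not the actual code:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	deleted, logPrinted := false, true
	lastLog := time.Now().Add(-20 * time.Second)

	// Original form: && binds tighter than ||, so this parses as
	// (deleted && !logPrinted) || time.Since(lastLog) > 10*time.Second
	// and fires every 10s even when nothing is being deleted.
	buggy := deleted && !logPrinted || time.Since(lastLog) > 10*time.Second

	// Fixed form: only log while deletion is actually active.
	fixed := deleted && (!logPrinted || time.Since(lastLog) > 10*time.Second)

	fmt.Println(buggy, fixed) // prints: true false
}
```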
This PR exposes the state size statistics via metrics, making them
easier to observe.
Note that the contract code included in the metrics is not
de-duplicated, so the reported size
will appear larger than the actual storage footprint.
This PR introduces a new debug feature: logging slow blocks with
detailed performance statistics, such as state read, EVM execution and
so on.
Notably, the detailed statistics of slow blocks are not logged during
sync, to avoid overwhelming users. Specifically, the statistics are only
logged when a single block is processed.
Example output:
```
########## SLOW BLOCK #########
Block: 23537063 (0xa7f878611c2dd27f245fc41107d12ebcf06b4e289f1d6acf44d49a169554ee09) txs: 248, mgasps: 202.99
EVM execution: 63.295ms
Validation: 1.130ms
Account read: 6.634ms(648)
Storage read: 17.391ms(1434)
State hash: 6.722ms
DB commit: 3.260ms
Block write: 1.954ms
Total: 99.094ms
State read cache: account (hit: 622, miss: 26), storage (hit: 1325, miss: 109)
##############################
```
This change fixes a stall in the legacy blob sidecar conversion pipeline
where tasks that arrived during an active batch could remain unprocessed
indefinitely after that batch completed, unless a new external event
arrived.
The root cause was that the loop did not restart processing in
the `case <-done:` branch even when `txTasks` had accumulated work, relying
instead on a future event to retrigger the scheduler. This behavior is
inconsistent with the billy task pipeline, which immediately chains to
the next task via `runNextBillyTask()` without requiring an external trigger.
The fix adds a symmetric restart path in `case <-done:` that checks
`len(txTasks) > 0`, clones the accumulated tasks, clears the queue, and
launches a new run with a fresh `done` and `interrupt`.
This preserves batching semantics, prevents indefinite blocking of callers
of convert(), and remains safe during shutdown since the quit path
still interrupts and awaits the active batch. No public interfaces or logging
were changed.
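For illustration, a minimal sketch of the restart pattern described above; the `conversionQueue` type, the `startBatch` helper and all field names other than `txTasks`, `done` and `interrupt` are hypothetical stand-ins rather than the actual pipeline code:
```go
package convertsketch

// txTask is a placeholder for a pending conversion task.
type txTask struct{}

// conversionQueue is a hypothetical stand-in for the legacy sidecar
// conversion scheduler described above.
type conversionQueue struct {
	newTask   chan *txTask
	quit      chan struct{}
	txTasks   []*txTask
	done      chan struct{} // closed by the active batch when it finishes
	interrupt chan struct{} // closed to abort the active batch
}

func (q *conversionQueue) loop() {
	for {
		select {
		case task := <-q.newTask:
			q.txTasks = append(q.txTasks, task)
			if q.done == nil { // no batch in flight, start one right away
				q.startBatch()
			}
		case <-q.done:
			q.done = nil
			// Symmetric restart: drain work accumulated while the previous
			// batch was running instead of waiting for the next external event.
			if len(q.txTasks) > 0 {
				q.startBatch()
			}
		case <-q.quit:
			if q.done != nil {
				close(q.interrupt) // stop the active batch
				<-q.done           // and wait for it to wind down
			}
			return
		}
	}
}

func (q *conversionQueue) startBatch() {
	tasks := append([]*txTask(nil), q.txTasks...) // clone the accumulated tasks
	q.txTasks = q.txTasks[:0]                     // clear the queue
	q.done = make(chan struct{})                  // fresh done channel
	q.interrupt = make(chan struct{})             // fresh interrupt channel
	go func(tasks []*txTask, done, interrupt chan struct{}) {
		defer close(done)
		// Convert tasks here, checking interrupt periodically to abort early.
		_ = tasks
		_ = interrupt
	}(tasks, q.done, q.interrupt)
}
```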
Looks like (in some very EVM-specific tests) we spent a lot of time
resizing memory. If the underlying array is big enough, we can speed it
up a bit by simply slicing the memory.
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/vm
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
Resize-14 6.145n ± 9% 1.854n ± 14% -69.83% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
Resize-14 5.000 ± 0% 5.000 ± 0% ~ (p=1.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
Resize-14 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=10) ¹
From the blocktest benchmark:
620ms 10.93s (flat, cum) 9.92% of Total
. . 80:func (m *Memory) Resize(size uint64) {
30ms 60ms 81: if uint64(m.Len()) < size {
590ms 10.87s 82: m.store = append(m.store, make([]byte, size-uint64(m.Len()))...)
. . 83: }
. . 84:}
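A minimal stand-alone sketch of the idea, assuming spare capacity must be re-zeroed when it is re-sliced; the real `Memory` type lives in core/vm and may differ in detail:
```go
package vmsketch

// Memory is a simplified stand-in for the EVM memory model in core/vm.
type Memory struct {
	store []byte
}

// Len returns the current memory size in bytes.
func (m *Memory) Len() int { return len(m.store) }

// Resize grows the memory to the requested size. If the backing array already
// has enough capacity, it is re-sliced and the newly exposed bytes are zeroed,
// instead of allocating and appending as the old version did.
func (m *Memory) Resize(size uint64) {
	if uint64(m.Len()) >= size {
		return
	}
	if uint64(cap(m.store)) >= size {
		prev := m.Len()
		m.store = m.store[:size]
		clear(m.store[prev:]) // grown regions must read as zero
		return
	}
	m.store = append(m.store, make([]byte, size-uint64(m.Len()))...)
}
```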
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
EIP-8024: Backward compatible SWAPN, DUPN, EXCHANGE
Introduces additional instructions for manipulating the stack which
allow accessing the stack at higher depths. This is an initial implementation
of the EIP, which is still in the Review stage.
This PR adds the "FULU" beacon chain config entries for all networks and
fixes the select statements that choose the appropriate engine API call
versions (no new version there but the "default" was always the first
version; now it's the latest version so no need to change unless there
is actually a new version).
New beacon checkpoints are also added for mainnet, sepolia and hoodi
(not for holesky because it's not finalizing at the moment).
Note that, though unrelated to Fusaka, the log indexer checkpoints are
also updated for mainnet (not for the other testnets, mainly because I
only have mainnet synced here on my travel SSD; this should be fine
though, because the index is also reverse-generated for a year by default,
so it does not really affect the indexing time).
Links for the new checkpoints:
https://beaconcha.in/slot/13108192
https://light-sepolia.beaconcha.in/slot/9032384
https://hoodi.beaconcha.in/slot/1825728
Fixes #33212.
This PR removes `github.com/olekukonko/tablewriter` from the dependencies and
uses a naive stub implementation.
`github.com/olekukonko/tablewriter` is used to format database inspection
output neatly. However, it requires custom adjustments for TinyGo and is
incompatible with the latest version.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
This is broken out of #31730 to focus only on testing networks that
start with verkle at genesis.
The PR has seen a lot of work since its creation, and it now targets
creating and re-executing tests for a binary tree testnet without the
transition (so it starts at genesis). The transition tree has been moved
to its own package. It also replaces verkle with the binary tree for
this specific application.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
The iterator loop in `findTxInBlockBody` returned the outer-scoped `err`
when `iter.Err()` was non-nil, which could incorrectly propagate a nil or
stale error and hide actual RLP decoding issues. This patch returns
`iter.Err()` as intended by the rlp list iterator API, matching
established patterns elsewhere in the codebase and improving diagnostics
when encountering malformed transaction entries.
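A minimal sketch of the error-return pattern, using a hypothetical iterator interface rather than the actual rlp package API:
```go
package rlpsketch

import "bytes"

// listIter is a hypothetical stand-in for an RLP list iterator.
type listIter interface {
	Next() bool
	Value() []byte
	Err() error
}

// findTx illustrates the fix: once iteration stops, return the iterator's own
// error instead of an outer-scoped err that may be nil or stale.
func findTx(iter listIter, want []byte) ([]byte, error) {
	for iter.Next() {
		if bytes.Equal(iter.Value(), want) {
			return iter.Value(), nil
		}
	}
	// return nil, err        <- wrong: outer err may be nil or stale
	// return nil, iter.Err() <- right: surfaces malformed RLP entries
	return nil, iter.Err()
}
```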
While updating to the latest Geth, I noticed `OnCodeChangeV2` was not
properly handled in `SelfDestruct`/6780. This PR fixes this and brings a
unit test. Let me know if it's deemed more appropriate to merge the tests
with the other one.
Because map iteration order is unstable, we need to order logs by tx index
and keep them in the same order as the receipts and their logs, so we can
still get the same `LogsHash` across runs.
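A minimal sketch of the ordering step, assuming the logs are `*types.Log` values with `TxIndex`/`Index` fields; the surrounding collection code is not shown:
```go
package logsort

import (
	"sort"

	"github.com/ethereum/go-ethereum/core/types"
)

// orderLogs sorts the collected logs by transaction index (and by log index
// within a transaction) so that iterating an unordered map still yields a
// deterministic order, keeping LogsHash stable across runs.
func orderLogs(logs []*types.Log) []*types.Log {
	sort.SliceStable(logs, func(i, j int) bool {
		if logs[i].TxIndex != logs[j].TxIndex {
			return logs[i].TxIndex < logs[j].TxIndex
		}
		return logs[i].Index < logs[j].Index
	})
	return logs
}
```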
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Equal is called every time the transaction sender is accessed,
even when the sender is cached, so it is worth optimizing.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
In this PR, several changes have been made:
(a) restructure the trienode history header section
Previously, the offsets of the key and value sections were recorded before
encoding data into these sections. As a result, these offsets referred to the
start position of each chunk rather than the end position.
This caused an issue where the end position of the last chunk was
unknown, making it incompatible with the freezer partial-read APIs.
With this update, all offsets now refer to the end position, and the
start position of the first chunk is always 0.
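A minimal sketch of how end-position offsets translate back into chunk boundaries; the names are illustrative, not the actual pathdb encoding:
```go
package historysketch

// chunkBounds converts a list of end offsets into (start, end) pairs.
// With end-position offsets, the first chunk implicitly starts at 0 and the
// last offset marks the end of the final chunk, so the total encoded length
// is always known - the property the freezer partial-read APIs rely on.
func chunkBounds(endOffsets []uint32) [][2]uint32 {
	bounds := make([][2]uint32, 0, len(endOffsets))
	var start uint32
	for _, end := range endOffsets {
		bounds = append(bounds, [2]uint32{start, end})
		start = end
	}
	return bounds
}
```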
(b) Enable partial freezer read for trienode data retrieval
The partial freezer read feature is now utilized in trienode data
retrieval, improving efficiency.
This PR prevents the SetCode hook from being called when the contract
code remains unchanged.
This situation can occur in the following cases:
- The deployed runtime code has zero length
- An EIP-7702 authorization attempt tries to unset a non-delegated
account
- An EIP-7702 authorization attempt tries to delegate to the same
account
Previously, the journal writer was nil until the first rejournal
(default 1h), which means that during this period, txs submitted to this
node were not written into the journal file (transactions.rlp). If the
node was shut down before the first rejournal, txs in the pending or
queue would get lost.
This PR initializes the journal writer right after launch to solve
this issue.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change addresses critical issues in the state object duplication
process specific to Verkle trie implementations. Without these
modifications, updates to state objects fail to propagate correctly
through the trie structure after a statedb copy operation, leading to
inaccuracies in the computation of the state root hash.
---------
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
This PR addresses a few issues introduced by #32270:
- Add updates to `pricedList` after dropping transactions.
- Remove redundant deletions in `queue.evictList`, since
`pool.removeTx(hash, true, true)` already performs the removal.
- Prevent duplicate addresses during promotion when `Reset` is not nil.
This PR moves the queue out of the main transaction pool.
For now, there should be no functional changes.
I see this as a first step toward refactoring the legacypool and making
the queue a fully separate concept from the main pending pool.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR implements partial-read functionality in the freezer, optimizing
the state history reader by resolving less data from the freezer.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
`invalidTxMeter` was counting txs, while `validTxMeter` was counting
accounts. Better to make the two comparable.
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
- Introduce a new subscription kind `transactionReceipts` to allow clients to
receive transaction receipts over WebSocket as soon as they are available.
- Accept optional `transactionHashes` filter to subscribe to receipts for specific
transactions; an empty or omitted filter subscribes to all receipts.
- Preserve the same receipt format as returned by `eth_getTransactionReceipt`.
- Avoid additional HTTP polling, reducing RPC load and latency.
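A minimal sketch of how a client might consume the new subscription over WebSocket, assuming the subscription name `transactionReceipts` described above and treating the receipt payload as raw JSON; the endpoint URL is a placeholder:
```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("ws://localhost:8546")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	receipts := make(chan json.RawMessage)
	// Passing no hash filter subscribes to all receipts; a list of tx hashes
	// could be passed as an extra argument to restrict the subscription.
	sub, err := client.EthSubscribe(context.Background(), receipts, "transactionReceipts")
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	for {
		select {
		case r := <-receipts:
			fmt.Printf("receipt: %s\n", r)
		case err := <-sub.Err():
			log.Fatal(err)
		}
	}
}
```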
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR implements https://github.com/ethereum/go-ethereum/issues/32733
to make StateProcessor more customisable.
## Compatibility notes
This introduces a breaking change for users using the geth EVM as a library.
The `NewStateProcessor` function now takes a single parameter with the
chainConfig embedded, instead of two parameters.
The TxPool.signer field was never read and each subpool (legacy/blob)
maintains its own signer instance. This field remained after txpool
refactoring into subpools and is dead code. Removing it reduces
confusion and simplifies the constructor.
- Replace the outdated `NewFreezer` doc that referenced the `map[string]bool`
snappy toggle with an accurate description of `map[string]freezerTableConfig`
(noSnappy, prunable).
- Fix the misleading field comment on `freezerTable.config` that spoke as if
it were a boolean ("if true"), clarifying that it's a struct and noting
compression is non-retroactive.
This is a small improvement on #32656 for the case where `Add` is called
with multiple type-3 transactions: transactions are now added to the pool
one by one as they are converted.
Announcement to peers is still done in a batch.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This implements the conversion of existing blob transactions to the new proof
version. Conversion is triggered at the Osaka fork boundary. The conversion is
designed to be idempotent, and may be triggered multiple times in case of a reorg
around the fork boundary.
This change is the last missing piece that completes our strategy for the blobpool
conversion. After the Osaka fork,
- new transactions will be converted on-the-fly upon entry to the pool
- reorged transactions will be converted while being reinjected
- (this change) existing transactions will be converted in the background
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
This pull request introduces a queue for legacy sidecar conversion to
handle transactions that persist after the Osaka fork. Simply dropping
these transactions would significantly harm the user experience.
To balance usability with system complexity, we have introduced a
conversion time window of two hours after the Osaka fork. During this
period, the system will accept legacy blob transactions and convert them
in a background process.
After the window, all legacy transactions will be rejected. Notably, all
blob transactions are validated statically before conversion, and all
conversions are performed in a single thread, minimizing the risk of DoS.
We believe this two-hour window provides sufficient time to process
in-flight legacy transactions and allows submitters to migrate to the
new format.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Fix the `t.Fatalf` format arguments in TestBadBlockStorage to match the
intended #index output. Previously, the left number used `i+1` and the
right index used the block number, producing misleading diagnostics.
Correct mapping improves test failure clarity and debuggability.
before:
go test -run=^$ -bench=. ./core/types 47.80s user 2.18s system 102% cpu
48.936 total
after:
go test -run=^$ -bench=. ./core/types 42.42s user 2.27s system 112% cpu
39.593 total
before:
go test -run=^$ -bench=. ./core/vm/... -timeout=1h 1841.87s user 40.96s
system 124% cpu 25:15.76 total
after:
go test -run=^$ -bench=. ./core/vm/... -timeout=1h 1588.65s user 33.79s
system 123% cpu 21:53.25 total
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
before:
go test -run=^$ -bench=. ./core/state/... 120.85s user 7.96s system 129%
cpu 1:39.13 total
after:
go test -run=^$ -bench=. ./core/state/... 21.32s user 2.12s system 97%
cpu 24.006 total
This disables the tx gaslimit cap for eth_call and related RPC operations.
I don't like how this fix works. Ideally we'd be checking the tx
gaslimit somewhere else, like in the block validator, or any other place
that considers block transactions. Doing the check in StateTransition
means it affects all possible ways of executing a message.
The challenge is finding a place for this check that also triggers
correctly in tests where it is wanted. So for now, we are just combining
this with the EOA sender check for transactions. Both are disabled for
call-type messages.
https://github.com/ethereum/execution-spec-tests/releases/tag/v5.0.0
As of this release, execution-spec-tests also contains all state tests
that were previously in ethereum/tests. We can probably remove the tests
submodule now. However, this would mean we are missing the pre-cancun
tests. Still need to figure out how to resolve this.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
When I implemented this in #31340, I didn't expect multiple forks to be
configured at once, but this is exactly how BPOs are defined. This
updates the method to determine the next scheduled fork rather than the
last fork.
The format that is currently reported by the chain isn't very useful, as
it gives an average for ALL the nodes, and not only the leaves, which
skews the results.
Also, until now there was no way to activate the reporting of errors.
We also decided that metrics weren't the right tool to report this data,
so we decided to dump it to the console if the flag is enabled. A better
system should be built, but for now, printing to the logs does the job.
This PR adds new RPC calls, which re-execute a block with stateless
mode activated, so that the witness data is collected and returned.
They are `debug_executionWitnessByHash`, which takes a block hash,
and `debug_executionWitness`, which takes a block number.
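A minimal sketch of calling the new endpoints, assuming the method names above, a hex-encoded block number argument, and a raw-JSON result (the actual witness type is not shown here):
```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// By block number (hex-encoded here; this encoding is an assumption).
	var witness json.RawMessage
	if err := client.CallContext(context.Background(), &witness, "debug_executionWitness", "0x1635e90"); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("witness by number: %d bytes\n", len(witness))

	// By block hash.
	hash := common.HexToHash("0xa7f878611c2dd27f245fc41107d12ebcf06b4e289f1d6acf44d49a169554ee09")
	if err := client.CallContext(context.Background(), &witness, "debug_executionWitnessByHash", hash); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("witness by hash: %d bytes\n", len(witness))
}
```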
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Implements a migration path for the blobpool slotter
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change ensures TransitionState.Copy preserves BaseRoot. During a
Verkle transition, ts.BaseRoot is required to construct the overlay MPT
when ts.InTransition() is true. Previously, copies dropped BaseRoot,
risking an invalid zero-hash base trie and inconsistent behavior.
This PR removes the conversion of legacy sidecars after Osaka and instead rejects them at the pool.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Add state size tracking and a retrieval API. Start geth with `--state.size-tracking`;
an initial bootstrap is required (around 1h on mainnet). After the bootstrap,
use the `debug_stateSize()` RPC to retrieve the state size:
```
> debug.stateSize()
{
accountBytes: "0x39681967b",
accountTrienodeBytes: "0xc57939f0c",
accountTrienodes: "0x198b36ac",
accounts: "0x129da14a",
blockNumber: "0x1635e90",
contractCodeBytes: "0x2b63ef481",
contractCodes: "0x1c7b45",
stateRoot: "0x9c36a3ec3745d72eea8700bd27b90dcaa66de0494b187c5600750044151e620a",
storageBytes: "0x18a6e7d3f1",
storageTrienodeBytes: "0x2e7f53fae6",
storageTrienodes: "0x6e49a234",
storages: "0x517859c5"
}
```
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR is the first step in the trienode history series.
It introduces the `nodeWithOrigin` struct in the path database, which tracks
the original values of dirty nodes to support trienode history construction.
Note, the original value is always empty in this PR, so it won't break the
existing journal encoding and decoding. Journal compatibility will be
handled in a follow-up PR.
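A minimal sketch of what such a wrapper can look like, reusing `trienode.Node`; the field names are illustrative rather than the actual pathdb definition:
```go
package pathdbsketch

import "github.com/ethereum/go-ethereum/trie/trienode"

// nodeWithOrigin pairs a dirty trie node with the value it had before the
// change, so that trienode history can later be constructed from the diff.
type nodeWithOrigin struct {
	node   *trienode.Node // the new (dirty) node content
	origin []byte         // prior RLP-encoded value; always empty in this PR
}
```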
Another getBlobs PR on top of
https://github.com/ethereum/go-ethereum/pull/32190 to avoid some minor
regressions.
- bring back more log messages from before
- continue processing also on some internal errors
- ensure v2 complies with spec even if there are internal errors
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
This pull request fixes a regression introduced in #32190.
Specifically, in the GetBlobsV1 engine API, if any blob is missing, null
should be placed in the response; unfortunately, a behavioral change was
introduced in #32190, returning an error instead.
What's more, a more comprehensive test suite is added to cover both the
`GetBlobsV1` and `GetBlobsV2` endpoints.
Switches to using counters so that the gauges don't cause any
information to be lost. Counters can be used to calculate all sorts of
metrics on Grafana, which is also why the min/avg/max logic is removed,
to keep things simple and small here.
~Will probably be mostly supplanted by #32224, but this should do for
now for devnet 3.~
Seems like #32224 is going to take some more time, so I have completed
the implementation of eth_config here. It is quite a bit simpler to
implement now that the config hashing was removed.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Implement the binary tree as specified in [eip-7864](https://eips.ethereum.org/EIPS/eip-7864).
This will gradually replace verkle trees in the codebase. This is only
running the tests and will not be executed in production, but will help
me rebase some of my work, so that it doesn't bitrot as much.
---------
Signed-off-by: Guillaume Ballet
Co-authored-by: Parithosh Jayanthi <parithosh.jayanthi@ethereum.org>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Filtering for leaf nodes was missing from #32388, which means that even
the root node was reported, which made little sense for the bloatnet
data processing we want to do.
### Summary
Fixes long-standing ETA calculation errors in progress indicators that
have been present since February 2021. The current implementation
produces increasingly inaccurate estimates due to integer division
precision loss.
### Problem
3aeccadd04/triedb/pathdb/history_indexer.go (L541-L553)
The ETA calculation has two critical issues:
1. **Integer division precision loss**: `speed` is calculated as `uint64`
2. **Off-by-one**: `speed` uses `+ 1` (twice) to avoid division by zero,
however this skews the final calculation
This results in wildly inaccurate time estimates that don't improve as
progress continues.
### Example
Current output during state history indexing:
```
lvl=info msg="Indexing state history" processed=16858580 left=41802252 elapsed=18h22m59.848s eta=11h36m42.252s
```
**Expected calculation:**
- Speed: 16858580 ÷ 66179848ms = 0.255 blocks/ms
- ETA: 41802252 ÷ 0.255 = ~45.6 hours
**Current buggy calculation:**
- Speed: rounds to 1 block/ms
- ETA: 41802252 ÷ 1 = ~11.6 hours ❌
### Solution
- Created centralized `CalculateETA()` function in common package
- Replaced all 8 duplicate code copies across the codebase
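A float-based sketch of such a helper, matching the expected calculation above; the actual signature of `CalculateETA` in the common package may differ:
```go
package common

import "time"

// CalculateETA estimates the remaining time from the items processed so far,
// avoiding the integer-division precision loss of the old per-call copies.
func CalculateETA(processed, remaining uint64, elapsed time.Duration) time.Duration {
	if processed == 0 {
		return 0 // no data yet, no meaningful estimate
	}
	speed := float64(processed) / float64(elapsed) // items per nanosecond
	return time.Duration(float64(remaining) / speed)
}
```
With the numbers from the example above (16858580 processed, 41802252 left, ~18.4h elapsed), this yields roughly 45.6 hours instead of ~11.6.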
### Testing
Verified accurate ETA calculations during archive node reindexing with
significantly improved time estimates.
`db inspect` on the full database currently takes **30min+**, because
the db iteration runs in one thread. This PR proposes splitting the
key-space into 256 sub-ranges and assigning them to a worker pool to
speed it up.
After the change, the time of running `db inspect --workers 16` is reduced
to **10min** (the keyspace is not evenly distributed).
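A minimal sketch of the key-space split, assuming ranges are cut on the first key byte and dispatched to a worker pool; the per-range iteration is left as a callback:
```go
package inspectsketch

import "sync"

// inspectParallel splits the key space into 256 sub-ranges on the first key
// byte and lets `workers` goroutines drain them; iterate is a placeholder for
// the per-range database iteration.
func inspectParallel(workers int, iterate func(start, end []byte)) {
	type keyRange struct{ start, end []byte }

	ranges := make(chan keyRange, 256)
	for b := 0; b < 256; b++ {
		r := keyRange{start: []byte{byte(b)}}
		if b < 255 {
			r.end = []byte{byte(b + 1)} // exclusive upper bound
		} // the last range is open-ended
		ranges <- r
	}
	close(ranges)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range ranges {
				iterate(r.start, r.end)
			}
		}()
	}
	wg.Wait()
}
```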
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request preserves the root->ID mappings in the path database
even after the associated state histories are truncated, regardless of
whether the truncation occurs at the head or the tail.
The motivation is to support an additional history type, trienode history.
Since the root->ID mappings are shared between two history instances,
they must not be removed by either one.
As a consequence, the root->ID mappings remain in the database even
after the corresponding histories are pruned. While these mappings may
become dangling, it is safe and cheap to keep them.
Additionally, this pull request enhances validation during historical
reader construction, ensuring that only canonical historical state will be
served.
This pull request implements #32235, constructing blob sidecars in the new
format (cell proof) if Osaka has been activated.
Apart from that, it introduces a pre-conversion step in the blob pool
before adding the txs. This mechanism is essential for handling the remote
**legacy** blob txs from the network.
One thing is still missing and probably worth highlighting here: the
blobpool may contain several legacy blob txs from before Osaka, and these
txs should be converted once Osaka is activated. While the `GetBlob` API
in the blobpool is capable of generating cell proofs at runtime, converting
legacy txs all at once is much cheaper overall.
---------
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: lightclient <lightclient@protonmail.com>
This PR should reduce overall allocations of a running node by ~10
percent, since most allocations are coming from the re-heaping of the
transaction pool.
```
(pprof) list EffectiveGasTipCmp
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core/types.(*Transaction).EffectiveGasTipCmp in github.com/ethereum/go-ethereum/core/types/transaction.go
0 3766837369 (flat, cum) 9.86% of Total
. . 386:func (tx *Transaction) EffectiveGasTipCmp(other *Transaction, baseFee *big.Int) int {
. . 387: if baseFee == nil {
. . 388: return tx.GasTipCapCmp(other)
. . 389: }
. . 390: // Use more efficient internal method.
. . 391: txTip, otherTip := new(big.Int), new(big.Int)
. 1796172553 392: tx.calcEffectiveGasTip(txTip, baseFee)
. 1970664816 393: other.calcEffectiveGasTip(otherTip, baseFee)
. . 394: return txTip.Cmp(otherTip)
. . 395:}
. . 396:
. . 397:// EffectiveGasTipIntCmp compares the effective gasTipCap of a transaction to the given gasTipCap.
. . 398:func (tx *Transaction) EffectiveGasTipIntCmp(other *big.Int, baseFee *big.Int) int {
```
This PR reduces the allocations for comparing two transactions from 2 to
0:
```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/types
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
EffectiveGasTipCmp/Original-14 64.67n ± 2% 25.13n ± 9% -61.13% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
EffectiveGasTipCmp/Original-14 16.00 ± 0% 0.00 ± 0% -100.00% (p=0.000 n=10)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
EffectiveGasTipCmp/Original-14 2.000 ± 0% 0.000 ± 0% -100.00% (p=0.000 n=10)
```
It also speeds up the process by ~60%.
There are two minor caveats with this PR:
- We change the API for `EffectiveGasTipCmp` and `EffectiveGasTipIntCmp`
(which are probably not used much)
- We slightly change the behavior of `tx.EffectiveGasTip` when it
returns an error. It would previously return a negative number on error,
now it does not (since uint256 does not allow for negative numbers)
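A minimal sketch of the allocation-free comparison idea using stack-allocated uint256 values; the helper names and parameters here are illustrative, not the actual transaction accessors:
```go
package txsketch

import "github.com/holiman/uint256"

// effectiveGasTip writes min(tipCap, feeCap-baseFee) into dst without heap
// allocations. If feeCap < baseFee, the tip is clamped to zero rather than
// going negative (the error case mentioned above).
func effectiveGasTip(dst, tipCap, feeCap, baseFee *uint256.Int) {
	if baseFee == nil {
		dst.Set(tipCap)
		return
	}
	if feeCap.Lt(baseFee) {
		dst.Clear()
		return
	}
	dst.Sub(feeCap, baseFee)
	if dst.Gt(tipCap) {
		dst.Set(tipCap)
	}
}

// effectiveGasTipCmp compares the effective tips of two transactions using
// stack-allocated uint256 values, so the comparison itself allocates nothing.
func effectiveGasTipCmp(aTip, aFeeCap, bTip, bFeeCap, baseFee *uint256.Int) int {
	var x, y uint256.Int
	effectiveGasTip(&x, aTip, aFeeCap, baseFee)
	effectiveGasTip(&y, bTip, bFeeCap, baseFee)
	return x.Cmp(&y)
}
```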
---------
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This pull request introduces a `Prefetch` operation in the trie to prefetch trie
nodes in parallel. It is used by the `triePrefetcher` to accelerate state
loading and improve overall chain processing performance.
Continuation of https://github.com/ethereum/go-ethereum/issues/32022
tablewriter assumes unix or windows, which may not be the case for
embedded targets.
For v0.0.5 of tablewriter, it is noted in table.go: "The protocols were
written in pure Go and works on windows and unix systems"
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The order of the checks was wrong, which would have allowed a call to
modexp with `baseLen == 0 && modLen == 0` post-Fusaka.
Also handles an edge case where the base/mod/exp length is >= 2**64.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This adds some of the changes that were missing from #31634. It
introduces the `TransitionTrie`, which is a façade pattern between the
current MPT trie and the overlay tree.
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The following changes made in this PR should be highlighted:
The trie tracer is split into two distinct structs: opTracer and prevalueTracer.
The former is specific to MPT, while the latter is generic and applicable to all
trie implementations.
The original values of dirty nodes are tracked in a NodeSet. This serves
as the foundation for both full archive node implementations and the state live
tracer.
- If all the `vhashes` are in the same `sidecar`, then it will load the
same blob tx many times. This PR aims to improve this.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This is the first part of #31532.
It maintains a series of conversion markers which are to be updated by the
conversion code (in a follow-up PR; this is a breakdown of a larger PR
to make things easier to review). They can be used in this way:
- During the conversion, by storing the conversion markers when the
block has been processed. This is meant to be written in a function that
isn't currently present, hence [this
TODO](https://github.com/ethereum/go-ethereum/pull/31634/files#diff-89272f61e115723833d498a0acbe59fa2286e3dc7276a676a7f7816f21e248b7R384).
Part of https://github.com/ethereum/go-ethereum/issues/31583
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This adds a method on vm.EVM to set the jumpdest cache implementation.
It can be used to maintain an analysis cache across VM invocations, to improve
performance by skipping the analysis for already known contracts.
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
[EIP-7594](https://eips.ethereum.org/EIPS/eip-7594) defines a limit of
max 6 blobs per transaction. We need to enforce this limit during block
processing.
> Additionally, a limit of 6 blobs per transaction is introduced.
Clients MUST enforce this limit when validating blob transactions at
submission time, when received from the network, and during block
production and processing.
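A minimal sketch of the per-transaction check; the constant name and the place where the check is wired in are illustrative:
```go
package validatesketch

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
)

// maxBlobsPerTx mirrors the EIP-7594 per-transaction blob limit.
const maxBlobsPerTx = 6

// checkBlobLimit rejects blob transactions carrying more than the allowed
// number of blob hashes; per the EIP, the same limit applies at submission,
// on the network, and during block production and processing.
func checkBlobLimit(tx *types.Transaction) error {
	if n := len(tx.BlobHashes()); n > maxBlobsPerTx {
		return fmt.Errorf("too many blobs in transaction: have %d, max %d", n, maxBlobsPerTx)
	}
	return nil
}
```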
The main purpose of this change is to enforce the version setting when
constructing the blobSidecar, avoiding creating a sidecar with a
wrong/default version tag.
This is something interesting I came across during my benchmarks: we
spend ~3.8% of all allocations allocating the header number on the heap.
```
(pprof) list GetHeaderByHash
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*BlockChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/blockchain_reader.go
0 5786566117 (flat, cum) 15.15% of Total
. . 79:func (bc *BlockChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 5786566117 80: return bc.hc.GetHeaderByHash(hash)
. . 81:}
. . 82:
. . 83:// GetHeaderByNumber retrieves a block header from the database by number,
. . 84:// caching it (associated with its hash) if found.
. . 85:func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetHeaderByHash in github.com/ethereum/go-ethereum/core/headerchain.go
0 5786566117 (flat, cum) 15.15% of Total
. . 404:func (hc *HeaderChain) GetHeaderByHash(hash common.Hash) *types.Header {
. 1471264309 405: number := hc.GetBlockNumber(hash)
. . 406: if number == nil {
. . 407: return nil
. . 408: }
. 4315301808 409: return hc.GetHeader(hash, *number)
. . 410:}
. . 411:
. . 412:// HasHeader checks if a block header is present in the database or not.
. . 413:// In theory, if header is present in the database, all relative components
. . 414:// like td and hash->number should be present too.
(pprof) list GetBlockNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core.(*HeaderChain).GetBlockNumber in github.com/ethereum/go-ethereum/core/headerchain.go
94438817 1471264309 (flat, cum) 3.85% of Total
. . 100:func (hc *HeaderChain) GetBlockNumber(hash common.Hash) *uint64 {
94438817 94438817 101: if cached, ok := hc.numberCache.Get(hash); ok {
. . 102: return &cached
. . 103: }
. 1376270828 104: number := rawdb.ReadHeaderNumber(hc.chainDb, hash)
. . 105: if number != nil {
. 554664 106: hc.numberCache.Add(hash, *number)
. . 107: }
. . 108: return number
. . 109:}
. . 110:
. . 111:type headerWriteResult struct {
(pprof) list ReadHeaderNumber
Total: 38197204475
ROUTINE ======================== github.com/ethereum/go-ethereum/core/rawdb.ReadHeaderNumber in github.com/ethereum/go-ethereum/core/rawdb/accessors_chain.go
204606513 1376270828 (flat, cum) 3.60% of Total
. . 146:func ReadHeaderNumber(db ethdb.KeyValueReader, hash common.Hash) *uint64 {
109577863 1281242178 147: data, _ := db.Get(headerNumberKey(hash))
. . 148: if len(data) != 8 {
. . 149: return nil
. . 150: }
95028650 95028650 151: number := binary.BigEndian.Uint64(data)
. . 152: return &number
. . 153:}
. . 154:
. . 155:// WriteHeaderNumber stores the hash->number mapping.
. . 156:func WriteHeaderNumber(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
```
Opening this to discuss the idea. I know that `rawdb.EmptyNumber` is not a
great name for the variable; open to suggestions.
This pull request slightly improves the freezer fsync mechanism by scheduling
the Sync operation based on the number of uncommitted items and the original
time interval.
Originally, freezer.Sync was triggered every 30 seconds, which worked well during
active chain synchronization. However, once the initial state sync is complete,
the fixed interval causes Sync to be scheduled too frequently.
To address this, the scheduling logic has been improved to consider both the time
interval and the number of uncommitted items. This additional condition helps
avoid unnecessary Sync operations when the chain is idle.
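A minimal sketch of the combined condition, with an illustrative item threshold; the real field names and values in the freezer differ:
```go
package freezersketch

import "time"

const (
	syncInterval      = 30 * time.Second // illustrative, matches the old fixed interval
	syncItemThreshold = 64               // illustrative uncommitted-item threshold
)

// shouldSync triggers an fsync only when there is actually uncommitted data,
// and either a sizable backlog has built up or the usual interval has passed,
// so an idle chain no longer schedules pointless Sync operations.
func shouldSync(lastSync time.Time, uncommitted uint64) bool {
	if uncommitted == 0 {
		return false
	}
	return uncommitted >= syncItemThreshold || time.Since(lastSync) >= syncInterval
}
```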
Introduce a file-based state journal in the path database, fixing
the Pebble restriction when the journal size exceeds 4GB.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This adds the SSZ types from the
[EIP-7928](https://eips.ethereum.org/EIPS/eip-7928) and also adds
encoder/decoder generation using https://github.com/ferranbt/fastssz.
The fastssz dependency is updated because the generation will not work
properly with the master branch version due to a bug in fastssz.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR adds a block validation check for the maximum block size, as required by
EIP-7934, and also applies a slightly lower size limit during block building.
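A minimal sketch of the validation side, using EIP-7934's 10 MiB cap and 2 MiB safety margin; the function and constant names are illustrative, not the actual validator code:
```go
package blocksizesketch

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

const (
	maxBlockSize    = 10 * 1024 * 1024 // EIP-7934 MAX_BLOCK_SIZE
	safetyMargin    = 2 * 1024 * 1024  // EIP-7934 SAFETY_MARGIN (beacon block headroom)
	maxRLPBlockSize = maxBlockSize - safetyMargin
)

// checkBlockSize rejects blocks whose RLP encoding exceeds the EIP-7934 limit.
// Block building would target a slightly lower size to leave room for error.
func checkBlockSize(block *types.Block) error {
	enc, err := rlp.EncodeToBytes(block)
	if err != nil {
		return err
	}
	if len(enc) > maxRLPBlockSize {
		return fmt.Errorf("block size %d exceeds limit %d", len(enc), maxRLPBlockSize)
	}
	return nil
}
```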
---------
Co-authored-by: spencer-tb <spencer@spencertaylorbrown.uk>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>