As the node hash schemes in verkle and merkle are totally different, the
original default node hasher in pathdb is no longer suitable. Therefore,
this pull request configures a different node hasher for each scheme.
Here I am proposing two small changes to the exported API for EIP-7702:
(1) `Authorization` has a very generic name, but it is in fact only used
for one niche use case: authorizing code in a `SetCodeTx`. So I propose
calling it `SetCodeAuthorization` instead. The signing function is
renamed to `SignSetCode` instead of `SignAuth`.
(2) The signing function for authorizations should take the key as the first
parameter and the authorization second. The key will almost always be
in a variable, while the authorization can be given as a literal.
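A minimal usage sketch of the resulting call shape (field names are assumed from the EIP tuple; only the renamed identifiers and the argument order come from this proposal):
```go
// The key comes first (usually already held in a variable); the
// authorization can then be given as a literal.
auth, err := types.SignSetCode(key, types.SetCodeAuthorization{
	ChainID: chainID,      // field names assumed, mirroring the EIP tuple
	Address: delegateAddr,
	Nonce:   nonce,
})
```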
Fixing some issues I found while regenerating RPC tests for Prague:
- Authorization signature values were not encoded as hex
- `requestsRoot` in block should be `requestsHash`
- `authorizationList` should work for `eth_call`
Noticed this omission while doing some work on goevmlab. We don't
properly type some of the opcodes, but apparently implicit casting works
in all the internal use cases.
Adding some missing functionality I noticed while updating the hivechain
tool for the Prague fork:
- we forgot to process the parent block hash
- added `ConsensusLayerRequests` to get the requests list of the block
This PR implements EIP-7702: "Set EOA account code".
Specification: https://eips.ethereum.org/EIPS/eip-7702
> Add a new transaction type that adds a list of `[chain_id, address,
nonce, y_parity, r, s]` authorization tuples. For each tuple, write a
delegation designator `(0xef0100 ++ address)` to the signing account’s
code. All code reading operations must load the code pointed to by the
designator.
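As a rough illustration of the designator format quoted above (helper name and checks are illustrative, not this PR's actual code):
```go
// parseDelegation recognises the 0xef0100 ++ address designator (23 bytes).
func parseDelegation(code []byte) (common.Address, bool) {
	if len(code) != 23 || code[0] != 0xef || code[1] != 0x01 || code[2] != 0x00 {
		return common.Address{}, false
	}
	return common.BytesToAddress(code[3:]), true
}
```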
---------
Co-authored-by: Mario Vega <marioevz@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR modifies how the metrics library handles `Enabled`: previously,
the package `init` decided whether to serve real metrics or just
dummy-types.
This has several drawbacks:
- During pkg init, we need to determine whether metrics are enabled or
not. So we first hacked in a check for whether certain geth-specific
commandline flags were set. Then we added a similar check for
geth env vars. Then we almost added a very elaborate check for a
toml config file, plus toml parsing.
- Using "real" types and dummy types interchangeably means that
everything is hidden behind interfaces. This has a performance penalty,
and also it just adds a lot of code.
This PR removes the interface stuff, uses concrete types, and allows for
the setting of Enabled to happen later. It is still assumed that
`metrics.Enable()` is invoked early on.
The somewhat 'heavy' operations, such as ticking meters and exp-decay,
now check the enable flag to prevent resource leaks.
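A self-contained sketch of that pattern (names are illustrative, not the actual metrics internals; assumes `sync/atomic` and a stand-in `Meter` type):
```go
var enabled atomic.Bool // set once via Enable(), assumed to happen early at startup

// Enable turns metrics collection on.
func Enable() { enabled.Store(true) }

// Meter stands in for one of the heavier metric types.
type Meter struct{ /* rate estimators elided */ }

// tick is the periodic update; it bails out early when metrics are
// disabled so no work is done and nothing leaks.
func (m *Meter) tick() {
	if !enabled.Load() {
		return
	}
	// ... EWMA / exp-decay updates elided ...
}
```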
The change may be large, but it's mostly pretty trivial, and from the
last time I gutted the metrics, I ensured that we have fairly good test
coverage.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This is a pull request based on https://github.com/ethereum/go-ethereum/pull/30643
In this pull request, the partial functional state reader is enabled if the **legacy snapshot
is not enabled**. The tracked flat states in pathdb will be used to serve state
retrievals, as a second implementation to speed up state access.
This pull request should be a no-op change in normal cases.
This PR extends the Hooks interface with a new method,
`OnSystemCallStartV2`, which takes `VMContext` as its parameter.
Motivation
By including `VMContext` as a parameter, the `OnSystemCallStartV2` hook
achieves parity with the `OnTxStart` hook in terms of provided insights.
This alignment simplifies the inner tracer logic, enabling consistent
handling of state changes and internal calls within the same framework.
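A hedged sketch of the extended hook surface (surrounding fields elided; only the new method's parameter is taken from this description):
```go
type Hooks struct {
	// ...
	OnSystemCallStart   func()              // existing hook, carries no context
	OnSystemCallStartV2 func(vm *VMContext) // new: same context as OnTxStart provides
	OnSystemCallEnd     func()
	// ...
}
```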
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR refactors the structlog a bit, making it so that it can be used
in a streaming mode.
-------------
OBS: this PR makes a change in the input `config`, the third
input parameter of `debug.traceCall`. Previously, setting it to e.g.
` {"enableMemory": true, "limit": 1024}` would mean that the response
was limited to `1024` items. Since an 'item' may include both memory and
storage, the actual size of the response was undetermined.
After this change, the response will be limited to `1024` __`bytes`__
(or thereabouts).
-----------
The commandline usage of structlog now uses the streaming mode, leaving
the non-streaming mode of operation for the `eth_call` path.
There are two benefits of streaming mode:
1. Not having to maintain a long list of operations,
2. Not having to duplicate / n-plicate data, e.g. memory / stack /
returndata, so that each entry has its own private slice.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* unify `staterunner` and `blockrunner` CLI flags, especially around
tracing
* added support for struct logger or json logging (although having issue
#30658)
* new --cross-check flag to validate the stateless witness collection
/ execution matches stateful
* adds support for tracing the stateless execution when a tracer is set
(to more easily debug differences)
* --human for more readable test summary
* directory or file input, so if you pass tests/spec-tests/fixtures/blockchain_tests it will execute all
blockchain tests
This change relocates the EVM tx context switching to the ApplyMessage function.
With this change, we can remove a lot of EVM.SetTxContext calls before
message execution.
### Tracing API changes
- This PR replaces the `GasPrice` field of the `VMContext` struct with
`BaseFee`. Users may instead take the effective gas price from
`tx.EffectiveGasTipValue(env.BaseFee)`.
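For example, a tracer that previously read `GasPrice` can recover the same value roughly as follows (a sketch; `env` stands for the `VMContext` handed to the hook):
```go
tip := tx.EffectiveGasTipValue(env.BaseFee)             // tip actually paid on top of the base fee
effectiveGasPrice := new(big.Int).Add(env.BaseFee, tip) // what the old GasPrice field reported
```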
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR introduces a `ContractCodeReader` interface with functions defined:
```go
type ContractCodeReader interface {
	Code(addr common.Address, codeHash common.Hash) ([]byte, error)
	CodeSize(addr common.Address, codeHash common.Hash) (int, error)
}
```
This interface can be implemented in various ways. Although the codebase
currently includes only one implementation, additional implementations
could be created for different purposes and scenarios, such as a code
reader designed for the Verkle tree approach or one that reads code from
the witness.
*Notably, this interface modifies the function’s semantics. If the
contract code is not found, no error will be returned. An error should
only be returned in the event of an unexpected issue, primarily for
future implementations.*
The original state.Reader interface is extended with the ContractCodeReader
methods, which gives us more flexibility to manipulate the reader with additional
logic on top, e.g. Hooks.
```go
type Reader interface {
	ContractCodeReader
	StateReader
}
```
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The existing implementation is correct when building and verifying
blocks, since we will only collect non-empty requests into the block
requests list.
But it isn't correct for cases where a requests list containing empty
items is sent by the consensus layer on the engine API. We want to
ensure that empty requests do not cause a difference in validation
there, so the commitment computation should explicitly skip them.
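A minimal sketch of the skip, assuming an EIP-7685 style commitment over per-request SHA-256 hashes (function name and exact shape are illustrative; uses `crypto/sha256`):
```go
func calcRequestsHash(requests [][]byte) [32]byte {
	outer := sha256.New()
	for _, req := range requests {
		if len(req) <= 1 { // only the type byte present: empty request, skip it
			continue
		}
		inner := sha256.Sum256(req)
		outer.Write(inner[:])
	}
	var out [32]byte
	copy(out[:], outer.Sum(nil))
	return out
}
```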
This workaround is meant to minimize the possibility of state snapshot regeneration
once the geth node upgrades to the new version (specifically #30752).
In #30752, the journal format in the state snapshot is modified by removing
the destruct set. Therefore, the existing old format (version = 0) will be
discarded and all in-memory layers will be lost. Unfortunately, the lost
in-memory layers can't be recovered by other means, and the
entire state snapshot will be regenerated (which takes about 2.5 hours).
This pull request introduces a workaround to adopt the legacy journal if
the destruct set it contains is empty. Since self-destruction has been
deprecated following the Cancun fork, the destruct set is expected to be nil for
layers above the fork block. However, an exception occurs during contract
deployment: pre-funded accounts may self-destruct, causing accounts with
non-zero balances to be removed from the state. For example,
https://etherscan.io/tx/0xa087333d83f0cd63b96bdafb686462e1622ce25f40bd499e03efb1051f31fe49.
For nodes with a fully synced state, the legacy journal is likely compatible with
the updated definition, eliminating the need for regeneration. Unfortunately,
nodes performing a full sync of historical chain segments or encountering
pre-funded account deletions may face incompatibilities, leading to automatic
snapshot regeneration.
This reverts commit 23800122b3.
The original pull request introduced a bug, and some flaky tests were
detected because of this flaw.
```
--- FAIL: TestRecoverSnapshotFromWipingCrash (0.27s)
blockchain_snapshot_test.go:158: The disk layer is not integrated snapshot is not constructed
{"pc":0,"op":88,"gas":"0x7148","gasCost":"0x2","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PC"}
{"pc":1,"op":255,"gas":"0x7146","gasCost":"0x1db0","memSize":0,"stack":["0x0"],"depth":1,"refund":0,"opName":"SELFDESTRUCT"}
{"output":"","gasUsed":"0x0"}
{"output":"","gasUsed":"0x1db2"}
{"pc":0,"op":116,"gas":"0x13498","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH21"}
```
Before the original PR, the snapshot would block the function until the
disk layer was fully generated under the following conditions:
(a) explicitly required by users with `AsyncBuild = false`;
(b) the snapshot was being fully rebuilt or *the disk layer generation
had resumed*.
Unfortunately, with the changes introduced in that PR, the snapshot no
longer waits for disk layer generation to complete if the generation is
resumed. This introduces a lot of uncertainty and breaks this tiny debug
feature.
This PR is purely for improved readability; I was doing work involving
the file and think this may help others who are trying to understand
what's going on.
1. `snapshot.Tree.Rebuild()` now returns a function that blocks until
regeneration is complete, allowing `Tree.waitBuild()` to be removed
entirely as all it did was search for the `done` channel behind this new
function.
2. Its usage inside `New()` is also simplified by (a) only waiting if
`!AsyncBuild`; and (b) avoiding the double negative of `if !NoBuild`.
---------
Co-authored-by: Martin HS <martin@swende.se>
This pull request removes the destruct flag from the state snapshot to
simplify the code.
Previously, this flag indicated that an account was removed during a
state transition, making all associated storage slots inaccessible.
Because storage deletion can involve a large number of slots, the actual
deletion is deferred until the end of the process, where it is handled
in batches.
With the deprecation of self-destruct in the Cancun fork, storage
deletions are no longer expected. Historically, the largest storage
deletion event in Ethereum was around 15 megabytes—manageable in memory.
In this pull request, the single destruct flag is replaced by a set of
deletion markers for individual storage slots. Each deleted storage slot
will now appear in the Storage set with a nil value.
This change simplifies a lot of logic, such as storage access,
storage flushing, storage iteration and so on.
This pull request refactors the EVM constructor by removing the
TxContext parameter.
The EVM object is frequently overused. Ideally, only a single EVM
instance should be created and reused throughout the entire state
transition of a block, with the transaction context switched as needed
by calling evm.SetTxContext.
Unfortunately, in some parts of the code, the EVM object is repeatedly
created, resulting in unnecessary complexity. This pull request is the
first step towards gradually improving and simplifying this setup.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
In many cases, there is a need to create somewhat nontrivial bytecode. A
recent example is the verkle statetests, where we want a `CREATE2` op
to create a contract, which can then be invoked, and when invoked does a
selfdestruct-to-self.
It is overkill to go full Solidity, but it is also a bit tricky to
assemble this by concatenating bytes. This PR takes an approach that
has been used in goevmlab for several years.
Using this utility, the case can be expressed as:
```golang
// Some runtime code
runtime := program.New().Ops(vm.ADDRESS, vm.SELFDESTRUCT).Bytecode()
// A constructor returning the runtime code
initcode := program.New().ReturnData(runtime).Bytecode()
// A factory invoking the constructor
outer := program.New().Create2AndCall(initcode, nil).Bytecode()
```
We have a lot of places in the codebase where we concatenate bytes cast
from `vm.OpCode`. By taking this approach instead, those places can be made a
bit more maintainable/robust.
This adds an API method `DropTransactions` to legacy pool, blob pool and
txpool interface. This method removes all txs currently tracked in the
pools.
It modifies the simulated beacon to use the new method in `Rollback`
which removes previous hacky implementation that also erroneously reset
the gas tip to 1 gwei.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This change invokes the OnCodeChange hook when selfdestruct operation is performed, and a contract is removed. This is an event which can be consumed by tracers.
This PR fixes some issues with benchmarks
- [x] Removes log output from a log-test
- [x] Avoids a `nil`-defer in `triedb/pathdb`
- [x] Fixes some crashes related to tracers
- [x] Refactors a very resource-expensive benchmark for blobpool.
**NOTE**: this rewrite touches live production code (a little bit), as
it makes the validator-function used by the blobpool configurable.
- [x] Switch some benches over to use pebble over leveldb
- [x] reduce mem overhead in the setup-phase of some tests
- [x] Marks some tests with a long setup-phase to be skipped if `-short`
is specified (where long is on the order of tens of seconds). Ideally,
in my opinion, one should be able to run with `-benchtime 10ms -short`
and sanity-check all tests very quickly.
- [x] Drops some metrics-benchmarks which time the speed of `copy`.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Tests that are crucial for verifying that the verkle testnet functions properly.
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Ignacio Hagopian <jsign.uy@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
I think the core code should generally be agnostic about the witness and
the statedb layer should determine what elements need to be included in
the witness. Because code is accessed via `GetCode`, and
`GetCodeLength`, the statedb will always know when it needs to add that
code into the witness.
The edge case is block hashes, so we continue to add them manually in
the implementation of `BLOCKHASH`.
It probably makes sense to refactor statedb so we have a wrapped
implementation that accumulates the witness, but this is a simpler
change that makes #30078 less aggressive.
Looking at the cpu profile of a burntpix benchmark, I noticed that a lot
of time was spent in gas-used, in the interpreter loop. It's an actual
call (not inlined), which explicitly wants to be ignored by tracing
("tracing.GasChangeIgnored"), so it can be safely and simply inlined.
The other change is in `pushX`. These also do a call to
`common.RightPadBytes`. I replaced that by doing a corresponding `Lsh`
on the `u256` if needed. Note: it's needed only to make the stack output
look right, for fuzzers. It technically doesn't matter what we put
there: if code ends on a pushdata immediate, nothing will consume the
stack element. We could just as well ignore it, if we didn't care
about fuzzers (which I do).
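Roughly, the idea looks like this (a sketch, not the exact interpreter code): when the push immediate is truncated by the end of the code, right-padding with zeros is equivalent to shifting the partially-filled word left.
```go
func pushValue(code []byte, pos, size uint64) *uint256.Int {
	end := min(pos+size, uint64(len(code)))
	v := new(uint256.Int).SetBytes(code[pos:end])
	if missing := size - (end - pos); missing > 0 {
		v.Lsh(v, uint(8*missing)) // same effect as common.RightPadBytes, without the copy
	}
	return v
}
```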
Seems quite a lot faster on burntpix, according to my runs.
This PR:
```
EVM gas used: 5642735088
execution time: 34.84609475s
allocations: 915683
allocated bytes: 175334088
```
```
EVM gas used: 5642735088
execution time: 36.671958278s
allocations: 915701
allocated bytes: 175340528
```
Master
```
EVM gas used: 5642735088
execution time: 49.349209526s
allocations: 915684
allocated bytes: 175333368
```
```
EVM gas used: 5642735088
execution time: 46.581006598s
allocations: 915681
allocated bytes: 175330728
```
---------
Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds `DeleteRange` to `ethdb.KeyValueWriter`. While range
deletion using an iterator can be really slow, `DeleteRange` is natively
supported by pebble and apparently runs in O(1) time (typically 20-30ms
in my tests for removing hundreds of millions of keys and gigabytes of
data). For leveldb and memorydb an iterator based fallback is
implemented. Note that since the iterator method can be slow and a
database function should not unexpectedly block for a very long time,
the number of deleted keys is limited to 10000, which should ensure that
it does not block for more than a second. ErrTooManyKeys is returned if
the range has only been partially deleted. In this case the caller can
repeat the call until it finally succeeds.
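A hedged usage sketch of that retry behaviour (the error's exact package location is assumed):
```go
func deleteRangeAll(db ethdb.KeyValueWriter, start, end []byte) error {
	for {
		err := db.DeleteRange(start, end)
		if err == nil {
			return nil // the whole range is gone
		}
		if errors.Is(err, ethdb.ErrTooManyKeys) {
			continue // only partially deleted; retry the remainder
		}
		return err // unexpected database failure
	}
}
```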
rebased https://github.com/ethereum/go-ethereum/pull/29766 . The
downstream branch appears to have been deleted and I don't have perms to
push to that fork.
`TerminalTotalDifficultyPassed` is removed. `TerminalTotalDifficulty`
must now be non-nil, and it is expected that networks are already
merged: we can only import PoW/Clique chains, not produce blocks on
them.
---------
Co-authored-by: stevemilk <wangpeculiar@gmail.com>
This PR moves the logging/tracing-facilities out of `*state.StateDB`,
in to a wrapping struct which implements `vm.StateDB` instead.
In most places, it is a pretty straight-forward change:
- First, hoisting the invocations from state objects up to the statedb.
- Then making the mutation-methods simply return the previous value, so
that the external logging layer could log everything.
Some internal code uses the direct object-accessors to mutate the state,
particularly in testing and in setting up state overrides, which means
that these changes are unobservable by the hooked layer. Thus, configuring
the overrides is not necessarily part of the API we want to publish.
The trickiest part about the layering is that when the selfdestructs are
finally deleted during `Finalise`, there's the possibility that someone
sent some ether to it, which is burnt at that point, and thus needs to
be logged. The hooked layer reaches into the inner layer to figure out
these events.
In package `vm`, the conversion from `state.StateDB + hooks` into a
hooked `vm.StateDB` is performed where needed.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Way back we added `common.math.BigMin` and `common.math.BigMax`.
These were kind of cute helpers, but unfortunate ones, because packages
all over our codebase added dependencies on this package just to avoid
having to write out 3 lines of code.
Because of this, we also started having package name clashes with the
stdlib `math`, which got solved even more badly by moving some helpers
over ***from*** the stdlib into our custom lib (e.g. MaxUint64). The
latter ones were nuked out in a previous PR, and this PR nukes out BigMin
and BigMax, inlining them at all call sites.
As we're transitioning to uint256, if need be, we can add a min and max
to that.
Breaking changes:
- The ChainConfig was exposed to tracers via the VMContext passed in
`OnTxStart`. This is unnecessary, especially looking through the lens of
live tracers, as the chain config remains the same throughout the lifetime of
the program. It was there so that native API-invoked tracers could
access it. So instead we moved it to the constructor of API tracers.
Non-breaking:
- Change the default config of the tracers to be `{}` instead of nil.
This way an extra nil check can be avoided.
Refactoring:
- Rename `supply` struct to `supplyTracer`.
- Un-export some hook definitions.
~~Opening this as a draft to have a discussion.~~ Pressed the wrong
button
I had [a previous PR](https://github.com/ethereum/go-ethereum/pull/24616)
a long time ago which reduced the peak memory used during reorgs by not
accumulating all transactions and logs.
This PR reduces the peak memory further by not storing the blocks in
memory.
However this means we need to pull the blocks back up from storage
multiple times during the reorg.
I collected the following numbers on peak memory usage:
```
// Master:          BenchmarkReorg-8  10000   899591 ns/op  820154 B/op  1440 allocs/op  1549443072 bytes of heap used
// WithoutOldChain: BenchmarkReorg-8  10000  1147281 ns/op  943163 B/op  1564 allocs/op  1163870208 bytes of heap used
// WithoutNewChain: BenchmarkReorg-8  10000  1018922 ns/op  943580 B/op  1564 allocs/op  1171890176 bytes of heap used
```
Each block contains a transaction with ~50k bytes and we're doing a 10k
block reorg, so the chain should be ~500MB in size
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Changelog: https://golangci-lint.run/product/changelog/#1610
Removes `exportloopref` (no longer needed), replaces it with
`copyloopvar` which is basically the opposite.
Also adds:
- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This is a redo of #29052 based on newer specs. Here we implement EIPs
scheduled for the Prague fork:
- EIP-7002: Execution layer triggerable withdrawals
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE
Co-authored-by: lightclient <lightclient@protonmail.com>
A couple of tests set the debug level to `TRACE` on stdout,
and all subsequent tests in the same package are also affected
by that, resulting in outputs of tens of megabytes.
This PR removes such calls from two packages where it was prevalent.
This makes getting a summary of failing tests simpler, and possibly
reduces some strain on the CI pipeline.
This implements recent changes to EIP-7685, EIP-6110, and
execution-apis.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
The bulk of this PR is authored by @lightclient, in the original
EOF work. More recently, the code has been picked up and reworked for the new EOF
specification by @MariusVanDerWijden in https://github.com/ethereum/go-ethereum/pull/29518, and @shemnon has also contributed fixes.
This PR is an attempt to start eating the elephant one small bite at a
time, by selecting only the eof-validation as a standalone piece which
can be merged without interfering too much in the core stuff.
In this PR:
- [x] Validation of eof containers, lifted from #29518, along with
test-vectors from consensus-tests and fuzzing, to ensure that the move
did not lose any functionality.
- [x] Definition of eof opcodes, which is a prerequisite for validation
- [x] Addition of `undefined` to a jumptable entry item. I'm not
super-happy with this, but for the moment it seems the least invasive
way to do it. A better way might be to go back and allow nil items or
nil execute-functions to denote "undefined".
- [x] benchmarks of eof validation speed
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Danno Ferrin <danno.ferrin@shemnon.com>
This pull request removes the `fsync` of index files in the freezer.ModifyAncients function for
a performance gain.
Originally, fsync was added after each freezer write operation to ensure
the written data was truly transferred to disk. Unfortunately, it turns
out `fsync` can be relatively slow, especially on
macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more
information).
In this pull request, the fsync for the index file is removed, as it turns out the
index file can be recovered even after an unclean shutdown. But the fsync for the data file is still kept, as
we have no meaningful way to validate the data correctness after an unclean shutdown.
---
**But why do we need the `fsync` in the first place?**
It's necessary for the freezer to survive/recover after a machine crash
(e.g. power failure).
In Linux, whenever a file write is performed, the file metadata update
and data update are
not necessarily performed at the same time. Typically, the metadata will
be flushed/journalled
ahead of the file data. Therefore, we make the pessimistic assumption
that the file is first
extended with invalid "garbage" data (normally zero bytes) and that
afterwards the correct
data replaces the garbage.
We have observed that the index file of the freezer often contains
garbage entries with zero values
(filenumber = 0, offset = 0) after a machine power failure. This proves
that the index file is extended
without the data being flushed. And this corruption can eventually destroy
the whole freezer data.
Performing fsync after each write operation can reduce the time window
for data to be transferred
to the disk and ensure the correctness of the data in the disk to the
greatest extent.
---
**How can we maintain this guarantee without relying on fsync?**
Because the items in the index file are strictly in order, we can
leverage this characteristic to
detect the corruption and truncate them when freezer is opened.
Specifically these validation
rules are performed for each index file:
For two consecutive index items:
- If their file numbers are the same, then the offset of the latter one
MUST not be less than that of the former.
- If the file number of the latter one is equal to that of the former
plus one, then the offset of the latter one MUST not be 0.
- If their file numbers are not equal, and the latter's file number is
not equal to the former plus 1, the latter one is invalid.
Also, the first non-head item must refer to the earliest
data file, or to the next file if the
earliest file is not sufficient to place the first item (a very special
case, only theoretically possible
in tests).
With these validation rules, we can detect invalid items in the index
file with high probability.
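The consecutive-item rules translate roughly into a check like this (a sketch with assumed names, not the actual freezer code):
```go
type indexEntry struct {
	filenum uint32 // number of the data file the item lives in
	offset  uint32 // end offset of the item within that file
}

// validSuccessor reports whether index entry b may legitimately follow a.
func validSuccessor(a, b indexEntry) bool {
	switch {
	case b.filenum == a.filenum:
		return b.offset >= a.offset // same file: offsets must not go backwards
	case b.filenum == a.filenum+1:
		return b.offset != 0 // next file: a zero offset indicates an unwritten entry
	default:
		return false // any other jump in file numbers is treated as corruption
	}
}
```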
---
Unfortunately, the following scenarios are not covered and could still lead
to freezer corruption if they occur:
**All items in the index file have zero values**
It's impossible to distinguish whether they are truly zero (e.g. all the data
entries maintained in the freezer
are zero size) or just garbage left by the OS. In this case, these index
items will be kept while the
entire data file is truncated, i.e. the freezer is corrupted.
However, the probability of this situation
occurring is quite low, and even
if it occurs, the freezer can be considered to be close to an empty
state. Rerunning the state sync
should be acceptable.
**Index file is intact while the corresponding data file is corrupted**
It is possible that the data file is corrupted: its file size is
extended correctly but filled with garbage
(e.g. zero bytes). In this case, it's impossible to detect the
corruption by index validation.
We can either choose to `fsync` the data file, or blindly believe that
if the index file is intact then
the data file is very likely intact as well. In this pull
request, the first option is taken.
Reverts ethereum/go-ethereum#30495
You are free to create a proper Clear method if that's the best way. But
one that does a proper cleanup, not some hacky call to set gas which
screws up logs, metrics and everything along the way. It also doesn't
work for legacy pool local transactions.
The current code had a hack in the simulated code; now we have a hack in
live txpool code. No, that's not acceptable. I want the live code to
be a proper, meaningful API, with meaningful comments and a meaningful
implementation.
Here we move the method that drops all transactions by temporarily increasing the fee
into the TxPool itself. It's better to have it there because we can set it back to the
configured value afterwards. This resolves a TODO in the simulated backend.
Extends the opcontext interface to include an accessor for the code being executed in the current context. While it is possible to get the code via `statedb.GetCode`, that approach doesn't work for initcode.
This pull request skips the state snapshot update if the base layer is
not existent, eliminating the numerous warning logs after an unclean
shutdown.
Specifically, Geth will rewind its chain head to a historical block
after an unclean shutdown, and the state snapshot will remain unchanged
while waiting for recovery. During this period of time, the snapshot is unusable
and all state updates should be ignored/skipped for the state snapshot update.
This PR integrates witness-enabled block production, witness-creating
payload execution and stateless cross-validation into the `engine` API.
The purpose of the PR is to enable the following use-cases (for API
details, please see next section):
- Cross validating locally created blocks:
- Call `forkchoiceUpdatedWithWitness` instead of `forkchoiceUpdated` to
trigger witness creation too.
- Call `getPayload` as before to retrieve the new block and also the
above created witness.
- Call `executeStatelessPayload` against another client to
cross-validate the block.
- Cross validating locally processed blocks:
- Call `newPayloadWithWitness` instead of `newPayload` to trigger
witness creation too.
- Call `executeStatelessPayload` against another client to
cross-validate the block.
- Block production for stateless clients (local or MEV builders):
- Call `forkchoiceUpdatedWithWitness` instead of `forkchoiceUpdated` to
trigger witness creation too.
- Call `getPayload` as before to retrieve the new block and also the
above created witness.
- Propagate witnesses across the consensus libp2p network for stateless
Ethereum.
- Stateless validator validation:
- Call `executeStatelessPayload` with the propagated witness to
statelessly validate the block.
*Note, the various `WithWitness` methods could also *just be* an
additional boolean flag on the base methods, but this PR wanted to keep
the methods separate until a final consensus is reached on how to
integrate in production.*
---
The following `engine` API types are introduced:
```go
// StatelessPayloadStatusV1 is the result of a stateless payload execution.
type StatelessPayloadStatusV1 struct {
	Status          string      `json:"status"`
	StateRoot       common.Hash `json:"stateRoot"`
	ReceiptsRoot    common.Hash `json:"receiptsRoot"`
	ValidationError *string     `json:"validationError"`
}
```
- Add `forkchoiceUpdatedWithWitnessV1,2,3` with same params and returns
as `forkchoiceUpdatedV1,2,3`, but triggering a stateless witness
building if block production is requested.
- Extend `getPayloadV2,3` to return `executionPayloadEnvelope` with an
additional `witness` field of type `bytes` iff created via
`forkchoiceUpdatedWithWitnessV2,3`.
- Add `newPayloadWithWitnessV1,2,3,4` with same params and returns as
`newPayloadV1,2,3,4`, but triggering a stateless witness creation during
payload execution to allow cross validating it.
- Extend `payloadStatusV1` with a `witness` field of type `bytes` if
returned by `newPayloadWithWitnessV1,2,3,4`.
- Add `executeStatelessPayloadV1,2,3,4` with same base params as
`newPayloadV1,2,3,4` and one more additional param (`witness`) of type
`bytes`. The method returns `statelessPayloadStatusV1`, which mirrors
`payloadStatusV1` but replaces `latestValidHash` with `stateRoot` and
`receiptRoot`.
After this PR, https://github.com/ethereum/go-ethereum/pull/28187, the
way to set the default logger is different. This PR only updates the way
the logger is set in some test cases' comments that existed in the codebase
(since this commit
https://github.com/ethereum/go-ethereum/commit/b63e3c37a6). Although I
am not sure whether leaving the code in the comments is a good practice, it truly
makes it more efficient for me to debug and fix the failing test cases.
Add changes from #30409 and #29338 to changelog.
---------
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
This change makes the code slightly easier for downstream projects to extend with more signer types, but is functionally equivalent to the previous code.
Remove redundant address presence check in `makeGasSStoreFunc`.
This PR simplifies the `makeGasSStoreFunc` function by removing the
redundant check for address presence in the access list. The updated
code now only checks for slot presence, streamlining the logic and
eliminating unnecessary panic conditions.
This change removes the unnecessary address presence check, simplifying
the code and improving maintainability without affecting functionality.
The previous panic condition was intended as a canary during the testing
phases (i.e. _YOLOv2_) and is no longer needed.
#29995 has been reverted due to an unexpected flaw in the state snapshot
process.
Specifically, it attempts to stop the state snapshot generation, which
could potentially
cause the system to halt if the generation is not currently running.
This pull request ports the changes made in #29995 and fixes the flaw.
This is a successor PR to #25743. This PR is based on a new iteration of
the spec: https://github.com/ethereum/execution-apis/pull/484.
`eth_multicall` takes in a list of blocks, each optionally overriding
fields like number, timestamp, etc. of a base block. Each block can
include calls. At each block users can override the state. There are
extra features, such as:
- Include ether transfers as part of the logs
- Overriding precompile codes with evm bytecode
- Redirecting accounts to another address
## Breaking changes
This PR includes the following breaking changes:
- Block override fields of eth_call and debug_traceCall have had the
following fields renamed
- `coinbase` -> `feeRecipient`
- `random` -> `prevRandao`
- `baseFee` -> `baseFeePerGas`
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This pull request replaces the field pointer in the journal entry with the
field itself, specifically the address of the mutated account.
While it introduces extra allocation cost, it makes the code easier to
read. Let's measure the overall overhead to see whether the change is
acceptable.
This pull request introduces a state.Reader interface for state
accessing.
The interface can be implemented in various ways: it can be a pure
trie-only reader, or a combination of trie and state snapshot. What's more,
this interface allows us more flexibility in the future, e.g.
an
archive reader (for accessing archive state).
Additionally, this pull request removes the following metrics
- `chain/snapshot/account/reads`
- `chain/snapshot/storage/reads`
This PR changes how sidechains are handled.
Before the merge, it was possible to import a chain with lower td and not set it as canonical. After the merge, we expect every chain that we get via InsertChain to be canonical. Non-canonical blocks can still be inserted
with InsertBlockWithoutSetHead.
If, during InsertChain, the existing chain is no longer canonical, we mark it as a sidechain and send the SideChainEvents normally.
This pull request fixes a flaw in the prefetcher.
In the verkle tree world, both accounts and storage slots are committed into
a single tree instance for state hashing. If the prefetcher is activated, we
try to pull the trie from the prefetcher for a performance speedup.
However, we had special logic to skip pulling the storage trie if the
storage root is empty. While that's fine for merkle, as we have nothing to
do with an empty storage trie, it's totally wrong for verkle. The consequence
of skipping the pull is that the storage changes are committed into trie A, while the
account changes are committed into trie B (pulled from the prefetcher): boom.
This PR implements changes related to
[EIP-6800](https://eips.ethereum.org/EIPS/eip-6800) and
[EIP-4762](https://eips.ethereum.org/EIPS/eip-4762) spec updates.
A TL;DR of the changes is that `Version`, `Balance`, `Nonce` and
`CodeSize` are encoded in a single leaf named `BasicData`. For more
details, see the [_Header Values_ table in
EIP-6800](https://eips.ethereum.org/EIPS/eip-6800#header-values).
The motivation for this was simplifying access event patterns, reducing
code complexity, and, as a side effect, saving gas since fewer leaf
nodes must be accessed.
---------
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds the bulk verkle witness+proof production at the end of block
production. It reads all data from the tree in one swoop and produces
a verkle proof.
Co-authored-by: Felix Lange <fjl@twurst.com>
This is a follow-up to #29520, and a preparatory PR to a more thorough
change in the journalling system.
### API methods instead of `append` operations
This PR hides the journal-implementation details away, so that the
statedb invokes methods like `JournalCreate`, instead of explicitly
appending journal-events in a list. This means that it's up to the
journal whether to implement it as a sequence of events or
aggregate/merge events.
### Snapshot-management inside the journal
This PR also makes it so that management of valid snapshots is moved
inside the journal, exposed via the methods `Snapshot() int` and
`RevertToSnapshot(revid int, s *StateDB)`.
### SetCode
JournalSetCode journals the setting of code: it is implicit that the
previous values were "no code" and emptyCodeHash. Therefore, we can
simplify the setCode journal.
### Selfdestruct
The self-destruct journalling is a bit strange: we allow the
selfdestruct operation to be journalled several times. This makes it so
that we also are forced to store whether the account was already
destructed.
What we can do instead, is to only journal the first destruction, and
after that only journal balance-changes, but not journal the
selfdestruct itself.
This simplifies the journalling, so that internals about state
management do not leak into the journal API.
### Preimages
Preimages were, for some reason, integrated into the journal management,
despite not being a consensus-critical data structure. This PR undoes
that.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request adds a few more performance metrics, specifically:
- The average time cost of an account read
- The average time cost of a storage read
- The rate of account reads
- The rate of storage reads
To allow all error paths in `vm.EVM.create()` to consume the necessary
gas, there is currently a pattern of gating code on `if err == nil`
instead of returning as soon as the error occurs. The same behaviour can
be achieved by abstracting the gated code into a method that returns
immediately on error, improving readability and thus making it easier to
understand and maintain.
This PR refactors the genesis initialization a bit, so that we only
compute the block hash once instead of twice as before (during hashAlloc
and flushAlloc).
This will significantly reduce the amount of memory allocated during
genesis init.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Fixes a slight miscalculation in the downloader queue, which was not accurately taking block withdrawals into account when calculating the size of the items in the queue
Consistently use `uint64` for indices in `Memory` and drop lots of type
conversions from `uint64` to `int64`.
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
The struct-based tracing added in #29189 seems to have caused an issue
with the benchmark `BenchmarkTracerStepVsCallFrame`. On master we see
the following panic:
```console
BenchmarkTracerStepVsCallFrame
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x40 pc=0x1019782f0]
goroutine 37 [running]:
github.com/ethereum/go-ethereum/eth/tracers/js.(*jsTracer).OnOpcode(0x140004c4000, 0x0, 0x10?, 0x989680, 0x1, {0x101ea2298, 0x1400000e258}, {0x1400000e258?, 0x14000155928?, 0x10173020c?}, ...)
/Users/matt/dev/go-ethereum/eth/tracers/js/goja.go:328 +0x140
github.com/ethereum/go-ethereum/core/vm.(*EVMInterpreter).Run(0x14000307da0, 0x140003cc0d0, {0x0, 0x0, 0x0}, 0x0)
...
FAIL github.com/ethereum/go-ethereum/core/vm/runtime 0.420s
FAIL
```
The issue seems to be that `OnOpcode` expects that `OnTxStart` has
already been called to initialize the `env` value in the tracer. The JS
tracer uses it in `OnOpcode` for the `GetRefund()` method.
This patch resolves the issue by reusing the `Call` method already
defined in `runtime_test.go` which correctly calls `OnTxStart`.
This pull request fixes the broken feature where the entire storage set is overridden.
Originally, the storage set override was achieved by marking the associated account
as deleted, preventing access to the storage slot on disk. However, since #29520, this
flag is also checked when accessing the account, rendering the account unreachable.
A fix has been applied in this pull request, which re-creates a new state object with all
account metadata inherited.
This pull request adds an additional error check after statedb.IntermediateRoot,
ensuring that no errors occur during this call. This step is essential, as the call might
encounter database errors.
The address recovery is executed and cached in ValidateTransaction already, and it's
expected that the cached result is returned by ValidateTransaction. However,
currently, we use the wrong function, signer.Sender, instead of types.Sender, which
will do the whole address recovery again.
Originally, these metrics were added to track the largest storage wiping.
Since account self-destruction was deprecated with the Cancun fork,
these metrics have become meaningless.
This does not change the behavior here as the nonce in the argument is
tx.Nonce(). This commit helps to make the function easier to read and avoid
capturing the tx in the function.
* all: add stateless verifications
* all: simplify witness and integrate it into live geth
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* cmd/geth, ethdb/pebble: polish method naming and code comment
* implement db stat for pebble
* cmd, core, ethdb, internal, trie: remove db property selector
* cmd, core, ethdb: fix function description
---------
Co-authored-by: prpeh <prpeh@proton.me>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* .golangci.yml: enable check for consistent receiver name
* beacon/light/sync: fix receiver name
* core/txpool/blobpool: fix receiver name
* core/types: fix receiver name
* internal/ethapi: use consistent receiver name 'api' for handler object
* signer/core/apitypes: fix receiver name
* signer/core: use consistent receiver name 'api' for handler object
* log: fix receiver name
Always prefetch the account trie while starting the prefetcher.
Co-authored-by: steven <steven@stevendeMacBook-Pro.local>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Introduces the first built-in live tracer. The supply tracer tracks ETH supply changes across blocks
and writes the output to disk. This will need to be enabled through CLI using the `--vmtrace supply` flag.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
It's a bit confusing to add msg.value into the balanceCheck within the conditional.
No impact on block validation since GasFeeCap is always set when processing transactions.
* core/state: trie prefetcher change: calling trie() doesn't stop the associated subfetcher
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/state: improve prefetcher
* core/state: restore async prefetcher task scheduling
* core/state: finish prefetching async and process storage updates async
* core/state: don't use the prefetcher for missing snapshot items
* core/state: remove update concurrency for Verkle tries
* core/state: add some termination checks to prefetcher async shutdowns
* core/state: differentiate db tries and prefetched tries
* core/state: teh teh teh
---------
Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Added a start/end system where the tracer can be notified that processing of some Ethereum system calls is starting, and also be notified when the processing has completed.
Doing a start/end for system calls will enable tracers to "route" incoming tracing events to a separate bucket from other EVM calls. Those not interested in this can simply avoid registering the hooks.
The EVM call is traced normally afterwards, between the signals provided by those 2 new hooks, but outside of a transaction context (OnTxStart/End). That is something implementors of live tracers will need to be aware of (since only "trx tracers" are not concerned by ProcessBeaconRoot).
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* core/state, internal/workerpool: parallelize parts of state commit
* core, internal: move workerpool into syncx
* core/state: use errgroups, commit accounts concurrently
* core: resurrect detailed commit timers to almost-accuracy
* all: refactor so NewBlock(..) and WithBody(..) take a types.Body
* core: fixup comments, remove txs != receipts panic
* core/types: add empty withdrawals to body if len == 0
This PR fixes some flaws with the existing tests.
The randomized testing (TestSnapshotRandom) executes a series of steps which modify the state and create journal-events. Later on, we compare the forward-going-states against the backwards-unrolling-journal-states, and check that they are identical.
The "identical" check is performed using various accessors. It turned out that we failed to check some things:
- the accesslist contents
- the transient storage contents
- the 'newContract' flag
- the dirty storage map
This change adds these new checks
Currently our state journal tracks each storage update to a contract, having the ability to revert those changes to the previously set value.
For the very first modification however, it behaves a bit wonky. Reverting the update doesn't actually remove the dirty-ness of the slot, rather leaves it as "change this slot to its original value". This can cause issues down the line with, for example, write witnesses needing to gather an unneeded proof.
This PR modifies the storageChange journal entry to not only track the previous value of a slot, but also whether there was any previous value at all set in the current execution context. In essence, the PR changes the semantic of storageChange so it does not simply track storage changes, rather it tracks dirty storage changes, an important distinction for being able to cleanly revert the journal item.
This PR updates the bls contracts from our internal implementation, which is an unmaintained fork of the kilic library, to the gnark-crypto library that is actively maintained by Consensys.
It also updates the gas costs according to the EIP.
This pull request defines a gentrie for snap sync purpose.
The stackTrie is used to generate the merkle tree nodes upon receiving a state batch. Several additional options have been added into stackTrie to handle incomplete states (either missing states before or after).
In this pull request, these options have been relocated from stackTrie to genTrie, which serves as a wrapper for stackTrie specifically for snap sync purposes.
Further, the logic for managing incomplete state has been enhanced in this change. Originally, two cases were handled:
- boundary node filtering
- internal (covered by extension node) node clearing
This change adds one more:
- Clearing leftover nodes on the boundaries.
This feature is necessary if there are leftover trie nodes in the database; otherwise, node inconsistency may break the state healing.
time.After is equivalent to NewTimer(d).C and does not call Stop if the timer is no longer needed, which can cause memory leaks. This change converts many such occurrences to use NewTimer instead, calling Stop once the timer is no longer needed.
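The pattern being applied, as a small sketch:
```go
func waitWithTimeout(done <-chan struct{}, d time.Duration) bool {
	timer := time.NewTimer(d)
	defer timer.Stop() // released promptly even if done fires first, unlike time.After
	select {
	case <-done:
		return true
	case <-timer.C:
		return false
	}
}
```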
* use generic atomic types in tx caches
* use generic atomic types in block caches
* eth/catalyst: avoid copying tx in test
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This addresses an edge-case (detailed in the code comment) where the computation of the intermediate trie root would force the unnecessary resolution of a hash node. The change makes it so that when we process changes from a block, we first process trie-updates and afterwards process trie-deletions.
Here we add a Go API for running tracing plugins within the main block import process.
As an advanced user of geth, you can now create a Go file in eth/tracers/live/, and within
that file register your custom tracer implementation. Then recompile geth and select your tracer
on the command line. Hooks defined in the tracer will run whenever a block is processed.
The hook system is defined in package core/tracing. It uses a struct with callbacks, instead of
requiring an interface, for several reasons:
- We plan to keep this API stable long-term. The core/tracing hook API does not depend on
on deep geth internals.
- There are a lot of hooks, and tracers will only need some of them. Using a struct allows you
to implement only the hooks you want to actually use.
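As a rough sketch of what a minimal tracer looks like under this model (hook signatures abbreviated and assumed; only the struct-of-callbacks idea is taken from the description above):
```go
func newMyTracer() *tracing.Hooks {
	return &tracing.Hooks{
		// Fill in only what you need; unused hooks stay nil and cost nothing.
		OnTxStart: func(vm *tracing.VMContext, tx *types.Transaction, from common.Address) {
			// called once at the start of every transaction
		},
		OnOpcode: func(pc uint64, op byte, gas, cost uint64, scope tracing.OpContext, rData []byte, depth int, err error) {
			// called for every executed opcode
		},
	}
}
```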
All existing tracers in eth/tracers/native have been rewritten to use the new hook system.
This change breaks compatibility with the vm.EVMLogger interface that we used to have.
If you are a user of vm.EVMLogger, please migrate to core/tracing, and sorry for breaking
your stuff. But we just couldn't have both the old and new tracing APIs coexist in the EVM.
---------
Co-authored-by: Matthieu Vachon <matthieu.o.vachon@gmail.com>
Co-authored-by: Delweng <delweng@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
Package filepath implements utility routines for manipulating filename paths in a way compatible with the target operating system-defined file paths.
Package path implements utility routines for manipulating slash-separated paths.
The path package should only be used for paths separated by forward slashes, such as the paths in URLs
* eth: drop support for forward sync triggers and head block packets
* consensus, eth: enforce always merged network
* eth: fix tx looper startup and shutdown
* cmd, core: fix some tests
* core: remove notion of future blocks
* core, eth: drop unused methods and types
As the SELFDESTRUCT opcode is disabled in the Cancun fork (unless the
account is created within the same transaction, in which case there is
nothing to delete), the account will only be deleted in the following
cases:
- The account is created within the same transaction. In this case
the original storage was empty.
- The account is empty (zero nonce, zero balance, zero code) and
is touched within the transaction. Fortunately, this kind of account
is non-existent on ethereum-mainnet.
All in all, after Cancun, we are pretty sure there is no large contract
deletion and we don't need this mechanism for oom protection.
* core/txpool: no need to run rotate if no local txs
Signed-off-by: jsvisa <delweng@gmail.com>
* Revert "core/txpool: no need to run rotate if no local txs"
This reverts commit 17fab17388.
Signed-off-by: jsvisa <delweng@gmail.com>
* use Debug if todo is empty
Signed-off-by: jsvisa <delweng@gmail.com>
---------
Signed-off-by: jsvisa <delweng@gmail.com>
* make blobpool reject blob transactions with fee below the minimum
* core/txpool: some minor nitpick polishes and unified error formats
* core/txpool: do less big.Int constructions with the min blob cap
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This PR fixes an overflow which could happen if inconsistent blockchain rules were configured. Additionally, it tries to prevent such inconsistencies from occurring by making sure that the merge cannot be enabled unless previous fork(s) are also enabled.
* core/txpool, miner: speed up blob pool pending retrievals
* miner: fix test merge issue
* eth: same same
* core/txpool/blobpool: speed up blobtx creation in benchmark a bit
* core/txpool/blobpool: fix linter
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change makes the legacy transaction pool use `uint256.Int` instead of `big.Int`. The changes are made primarily on the internal functions of legacypool.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change adds support for blob-transaction in certain API-endpoints, e.g. eth_fillTransaction. A follow-up PR will add support for signing such transactions.
* all: implement era format, add history importer/export
* internal/era/e2store: refactor e2store to provide ReadAt interface
* internal/era/e2store: export HeaderSize
* internal/era: refactor era to use ReadAt interface
* internal/era: elevate anonymous func to named
* cmd/utils: don't store entire era file in-memory during import / export
* internal/era: better abstraction between era and e2store
* cmd/era: properly close era files
* cmd/era: don't let defers stack
* cmd/geth: add description for import-history
* cmd/utils: better bytes buffer
* internal/era: error if accumulator has more records than max allowed
* internal/era: better doc comment
* internal/era/e2store: rm superfluous reader, rm superfluous testcases, add fuzzer
* internal/era: avoid some repetition
* internal/era: simplify clauses
* internal/era: unexport things
* internal/era,cmd/utils,cmd/era: change to iterator interface for reading era entries
* cmd/utils: better defer handling in history test
* internal/era,cmd: add number method to era iterator to get the current block number
* internal/era/e2store: avoid double allocation during write
* internal/era,cmd/utils: fix lint issues
* internal/era: add ReaderAt func so entry value can be read lazily
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* internal/era: improve iterator interface
* internal/era: fix rlp decode of header and correctly read total difficulty
* cmd/era: fix rebase errors
* cmd/era: clearer comments
* cmd,internal: fix comment typos
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/txpool/blobpool: clean up resurrected junk after a crash
* core/txpool/blobpool: track transaction insertions and rejections
* core/txpool/blobpool: linnnnnnnt
This PR fixes an issue in the new simulated backend. The root cause is the fact that the transaction pool has an internal reset operation that runs on a background thread.
When a new transaction is added to the pool via the RPC, the transaction is added to a non-executable queue and will be moved to its final location on a background thread. If the machine is overloaded (or simply due to timing issues), it can happen that the simulated backend will try to produce the next block, whilst the pool has not yet marked the newly added transaction executable. This will cause the block to not contain the transaction. This is an issue because we want determinism from the simulator: add a tx, mine a block. It should be in there.
The PR fixes it by adding a Sync function to the txpool, which waits for the current reset operation (if any) to finish, and then runs an entire round of reset on top. The new round is needed because resets are only triggered by new head events, so newly added transactions will not trigger the outer resets that we can wait on. The transaction pool would eventually internally do a reset even on transaction addition, but there's no easy way to wait on that and there's no meaningful reason to bubble that across everything. A clean outer reset will at worse be a small noop goroutine.
This change switches from using the `Hasher` interface to add/query the bloomfilter to implementing it as methods.
This significantly reduces the allocations for Search and Rebloom.
This change makes use of uint256 to represent balance in state. It touches primarily upon statedb, stateobject and state processing, trying to avoid changes in transaction pools, core types, rpc and tracers.
This change simplifies the logic for indexing transactions and enhances the UX when transaction is not found by returning more information to users.
Transaction indexing is now considered as a part of the initial sync, and `eth.syncing` will thus be `true` if transaction indexing is not yet finished. API consumers can use the syncing status to determine if the node is ready to serve users.
The code to compute a versioned hash was duplicated a couple times, and also had a small
issue: if we ever change params.BlobTxHashVersion, it will most likely also cause changes
to the actual hash computation. So it's a bit useless to have this constant in params.
EIP-4844 adds a new transaction type for blobs. Users can submit such transactions via `eth_sendRawTransaction`. In this PR we refrain from adding support to `eth_sendTransaction` and in fact it will fail if the user passes in a blob hash.
However, since the chain can handle such transactions, it makes sense to allow simulating them. E.g. an L2 operator should be able to simulate submitting a rollup blob and updating the L2 state. Most methods that take in a transaction object should recognize blobs. The change boils down to adding `blobVersionedHashes` and `maxFeePerBlobGas` to `TransactionArgs`. In summary:
- `eth_sendTransaction`: will fail for blob txes
- `eth_signTransaction`: will fail for blob txes
As of this PR, the methods that sign transactions do not add support for the new EIP-4844 transaction type. Resuming the summary:
- `eth_sendRawTransaction`: can send blob txes
- `eth_fillTransaction`: will fill in a blob tx. Note: here we simply fill in normal transaction fields + possibly `maxFeePerBlobGas` when blobs are present. One can imagine a more elaborate set-up where users can submit blobs themselves and we fill in proofs and commitments and such. Left for future PRs if desired.
- `eth_call`: can simulate blob messages (see the sketch after this list)
- `eth_estimateGas`: blobs have no effect here. They have a separate unit of gas which is not tunable in the transaction.
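A hedged sketch of what simulating a blob message can look like over raw JSON-RPC, using the two new `TransactionArgs` fields; the addresses, values and hash below are placeholders:
```golang
package blobcall

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

// simulateBlobMessage issues an eth_call carrying blobVersionedHashes and
// maxFeePerBlobGas, the fields added to TransactionArgs in this change.
func simulateBlobMessage(ctx context.Context, client *rpc.Client) (hexutil.Bytes, error) {
	args := map[string]interface{}{
		"from":             common.HexToAddress("0x0000000000000000000000000000000000000001"),
		"to":               common.HexToAddress("0x0000000000000000000000000000000000000002"),
		"maxFeePerBlobGas": (*hexutil.Big)(big.NewInt(1_000_000_000)),
		"blobVersionedHashes": []common.Hash{
			common.HexToHash("0x0100000000000000000000000000000000000000000000000000000000000000"),
		},
	}
	var result hexutil.Bytes
	err := client.CallContext(ctx, &result, "eth_call", args, "latest")
	return result, err
}
```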
This pull request improves the condition to check if path state scheme is in use.
Originally, the presence of the root node was used as the indicator of whether the path scheme is in use. However, because the root node is deleted during the initial snap sync, this condition is no longer reliable.
If the PersistentStateID is present, it shows that the path scheme has already been configured.
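A minimal sketch of the new check; the accessor name `rawdb.ReadPersistentStateID` is an assumption here, used only to illustrate the idea:
```golang
package schemecheck

import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// isPathScheme reports whether pathdb has been configured: a non-zero
// persistent state ID survives the snap-sync root-node deletion, so it is a
// more robust indicator than root-node presence.
func isPathScheme(db ethdb.Database) bool {
	return rawdb.ReadPersistentStateID(db) != 0
}
```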
The original problem was caused by #28595, where we made it so that as soon as we start to sync, the root of the disk layer is deleted. That is not wrong per se, but another part of the code uses the "presence of the root" as an init-check for the pathdb. And, since the init-check now failed, the code tried to re-initialize it, which failed since a sync was already ongoing.
The total impact: after a state sync has begun, if the node is shut down for some reason, it will refuse to start up again with the error message: `Fatal: Failed to register the Ethereum service: waiting for sync.`.
This change also modifies how `geth removedb` works, so that the user is prompted for two things: `state data` and `ancient chain`. The former includes both the chaindb as well as any state history stored in ancients.
---------
Co-authored-by: Martin HS <martin@swende.se>
Here we update the eth and snap protocol test suites with a new test chain,
created by the hivechain tool. The new test chain uses proof-of-stake. As such,
tests using PoW block propagation in the eth protocol are removed. The test suite
now connects to the node under test using the engine API in order to make it
accept transactions.
The snap protocol test suite has been rewritten to output test descriptions and
log requests more verbosely.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This is the fix for issue #27483. A new hiddenBytes() is introduced to calculate the byte size of hidden items in the freezer table. When reporting the size of the freezer table, the size of the hidden items is subtracted from the total size.
---------
Co-authored-by: Yifan <Yifan Wang>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change implements CommitteeChain which is a key component of the beacon light client. It is a passive data structure that can validate, hold and update a chain of beacon light sync committees and updates, starting from a checkpoint that proves the starting committee through a beacon block hash, header and corresponding state. Once synced to the current sync period, CommitteeChain can also validate signed beacon headers.
The dump after a state test didn't work; the problem was an "Already committed" error which was silently ignored.
This change re-initialises the state, so the dumping works again.
This PR replaces Geth's logger package (a fork of [log15](https://github.com/inconshreveable/log15)) with an implementation using slog, a logging library included as part of the Go standard library as of Go1.21.
Main changes are as follows:
* Removes any log handlers that were unused in the Geth codebase.
* JSON, logfmt, and terminal formatters are now slog handlers.
* Verbosity level constants are changed to match slog constant values. Internal translation is done to make this opaque to the user and backwards compatible with existing `--verbosity` and `--vmodule` options.
* `--log.backtraceat` and `--log.debug` are removed.
The external-facing API is largely the same as the existing Geth logger. Logger method signatures remain unchanged.
A small semantic difference is that a `Handler` can only be set once per `Logger` and not changed dynamically. This just means that a new logger must be instantiated every time the handler of the root logger is changed.
----
For users of the `go-ethereum/log` module: if you were using this module for your own project, you will need to change the initialization. If you previously did
```golang
log.Root().SetHandler(log.LvlFilterHandler(log.LvlInfo, log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
```
You now instead need to do
```golang
log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.LevelInfo, true)))
```
See more about reasoning here: https://github.com/ethereum/go-ethereum/issues/28558#issuecomment-1820606613
* eth/gasestimator: early exit for plain transfer and error allowance
* core, eth/gasestimator: hard guess at a possible required gas
* internal/ethapi: update estimation tests with the error ratio
* eth/gasestimator: I hate you linter
* graphql: fix gas estimation test
---------
Co-authored-by: Oren <orenyomtov@users.noreply.github.com>
There were several problems related to dumping state.
- If a preimage was missing, entries were lost even when `OnlyWithAddresses` was set to `false` to export them anyway: because the mapping was keyed by `common.Address`, entries without a preimage were dropped. This concerns both state and blockchain tests.
- Blockchain test execution was not configured to store preimages.
This change makes it so that the block test executor takes a callback, just like the state test executor already does. This callback can be used to examine the post-execution state, e.g. to aid debugging of test failures.
geth --dev can be used with an existing data directory and genesis block. Since
dev mode only works with PoS, we need to verify that the merge has happened.
Co-authored-by: Felix Lange <fjl@twurst.com>
* rpc: make subscription test faster
reduces time for TestClientSubscriptionChannelClose
from 25 sec to < 1 sec.
* trie: cache trie nodes for faster sanity check
This reduces the time spent on TestIncompleteSyncHash
from ~25s to ~16s.
* core/forkid: speed up validation test
This takes the validation test from > 5s to sub 1 sec
* core/state: improve snapshot test run
brings the time for TestSnapshotRandom from 13s down to 6s
* accounts/keystore: improve keyfile test
This removes some unnecessary waits and reduces the
runtime of TestUpdatedKeyfileContents from 5 to 3 seconds
* trie: remove resolver
* trie: only check ~5% of all trie nodes
This change adds a check to ensure that transactions added to the legacy pool are not treated as 'locals' if the global locals-management has been disabled.
This change makes the pool enforce the --txpool.pricelimit setting.
This PR moves our fuzzers from tests/fuzzers into whatever their respective 'native' package is.
The historical reason why they were placed in an external location is that when they were based on go-fuzz, they could not be "hidden" via the _test.go suffix. So in order to shove them away from the go-ethereum "production code", they were put aside.
But now we've rewritten them to be based on golang testing, and thus they can be brought back. I've left in tests/ the ones that are not production code (bls12381), or that require non-standard imports (secp requires btcec, bn256 requires gnark/google/cloudflare deps).
This PR also adds a fuzzer for precompiled contracts, because why not.
This PR utilizes a newly rewritten replacement for go-118-fuzz-build, namely gofuzz-shim, which utilises the inputs from the fuzzing engine better.
This change allows the creation of a genesis block for verkle testnets. This makes for a chunk of code that is easier to review and still touches many discussion points.
* core/vm: set basefee to 0 internally on eth_call
* core: nicer 0-basefee, make it work for blob fees too
* internal/ethapi: make tests a bit more complex
* core: fix blob fee checker
* core: make code a bit more readable
* core: fix some test error strings
* core/vm: Get rid of weird comment
* core: dict wrong typo
This change improves GenerateChain to support internal chain history access (ChainReader)
for the consensus engine and EVM.
GenerateChain takes a `parent` block and the number of blocks to create. With my changes,
the consensus engine and EVM can now access blocks from `parent` up to the block currently
being generated. This is required to make the BLOCKHASH instruction work, and also needed
to create real clique chains. Clique uses chain history to figure out if the current signer is in-turn,
for example.
I've also added some more accessors to BlockGen. These are helpful when creating transactions (see the sketch after this list):
- g.Signer returns a signer instance for the current block
- g.Difficulty returns the current block difficulty
- g.Gas returns the remaining gas amount
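A short sketch of the new accessors in use inside a GenerateChain callback; `key` and `recipient` are assumed inputs, and the other BlockGen helpers (`TxNonce`, `AddTx`) are pre-existing ones:
```golang
package chaingen

import (
	"crypto/ecdsa"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

// blockFiller returns a GenerateChain callback that signs a simple transfer
// with the per-block signer exposed by the new g.Signer accessor.
func blockFiller(key *ecdsa.PrivateKey, recipient common.Address) func(i int, g *core.BlockGen) {
	sender := crypto.PubkeyToAddress(key.PublicKey)
	return func(i int, g *core.BlockGen) {
		tx, err := types.SignNewTx(key, g.Signer(), &types.LegacyTx{
			Nonce:    g.TxNonce(sender),
			To:       &recipient,
			Gas:      21000,
			GasPrice: big.NewInt(1_000_000_000),
			Value:    big.NewInt(1),
		})
		if err != nil {
			panic(err)
		}
		g.AddTx(tx)
		_ = g.Difficulty() // current block difficulty
		_ = g.Gas()        // gas remaining in the block being generated
	}
}
```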
Another fix in this commit concerns the receipts returned by GenerateChain. The receipts now
have properly derived fields (BlockHash, etc.) and should generally match what would be
returned by the RPC API.
This adds warning logs when the read does not match the expected count.
We can also remove the size limit since the function documentation explicitly states
that callers should limit the count.
This PR removes panics from stacktrie (mostly), and makes the Update return errors instead. While adding tests for this, I also found that one case of possible corruption was not caught, which is now fixed.
This change enhances the stacktrie constructor by introducing an option struct. It also simplifies the `Hash` and `Commit` operations, getting rid of the special handling around the root node.
During snap-sync, we request ranges of values: either a range of accounts or a range of storage values. For any large trie, e.g. the main account trie or a large storage trie, we cannot fetch everything at once.
Short version: we split it up and request it in multiple stages. To do so, we use an origin field to say "Give me all storage key/values where key > 0x20000000000000000". When the server fulfils this, it provides the first key after the origin, let's say 0x2e030000000000000 -- never providing the exact origin. However, the client side needs to be able to verify that 0x2e03.. indeed is the first key after 0x2000.., and therefore the attached proof concerns the origin, not the first key.
So, short-short version: the left-hand side of the proof relates to the origin, and is free-standing from the first leaf.
On the other hand (pun intended), on the right-hand side there is no such 'gap' between "along what path does the proof walk" and the last provided leaf. The proof must prove the last element (unless there are no elements).
Therefore, we can simplify the semantics for trie.VerifyRangeProof by removing an argument. This doesn't make much difference in practice, but makes it so that we can remove some tests. The reason I am raising this is that the upcoming stacktrie-based verifier does not support such fancy features as standalone right-hand borders.
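A hedged sketch of the simplified call, assuming the post-change signature in which only the origin (left boundary) is passed explicitly and the right boundary is implicitly the last supplied key:
```golang
package rangecheck

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/trie"
)

// verifyRange checks a snap-sync range response against the trie root. The
// returned bool indicates whether entries exist beyond the last supplied key.
func verifyRange(root common.Hash, origin []byte, keys, values [][]byte, proof ethdb.KeyValueReader) (bool, error) {
	return trie.VerifyRangeProof(root, origin, keys, values, proof)
}
```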
This change addresses an issue in snap sync where the entire sync process can be halted by an encountered empty storage range.
Currently, on the snap sync client side, the response to an empty (partial) storage range is discarded as a non-delivery. However, it can be a valid response when the requested range does not contain any slots.
For instance, consider a large contract where the entire key space is divided into 16 chunks, and there are no available slots in the last chunk [0xf] -> [end]. When the node receives a request for this particular range, the response includes:
- The proof with origin [0xf]
- A nil storage slot set
If we simply discard this response, the finalization of the last range will be skipped, halting the entire sync process indefinitely. The test case TestSyncWithUnevenStorage can reproduce the scenario described above.
In addition, this change also defines the common variables MaxAddress and MaxHash.
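Illustrative definitions of the two new sentinels; in geth they live in the common package itself:
```golang
package limits

import "github.com/ethereum/go-ethereum/common"

var (
	// MaxAddress and MaxHash mark the upper bound of the account and storage
	// key spaces, used as the implicit end of "open" range requests.
	MaxAddress = common.HexToAddress("0xffffffffffffffffffffffffffffffffffffffff")
	MaxHash    = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")
)
```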
* cmd, core: resolve scheme from a read-write database
* cmd, core, eth: move the scheme check in the ethereum constructor
* cmd/geth: dump should run in read-only mode
* cmd: reverts
This change
- Removes the owner-notion from a stacktrie; the owner is only ever needed for committing to the database, but the commit function, the `writeFn`, is provided by the caller, so the caller can just bind the owner into the `writeFn` instead of having it passed through the stacktrie (see the sketch after this list).
- Removes the `encoding.BinaryMarshaler`/`encoding.BinaryUnmarshaler` interface from stacktrie. We're not using it, and it is doubtful whether anyone downstream is either.
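A minimal sketch of the closure pattern enabled by the first item: `writeFn` here is a stand-in type, since the exact hook signature lives in the trie package, but the point is that the owner is captured by the caller:
```golang
package stacktriecommit

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// writeFn is a stand-in for the stacktrie commit hook.
type writeFn func(path []byte, hash common.Hash, blob []byte)

// storageNodeWriter binds the trie owner into the hook, so the stacktrie no
// longer needs to carry the owner itself.
func storageNodeWriter(db ethdb.KeyValueWriter, owner common.Hash) writeFn {
	return func(path []byte, hash common.Hash, blob []byte) {
		rawdb.WriteTrieNode(db, owner, path, hash, blob, rawdb.PathScheme)
	}
}
```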