Mirror of https://github.com/ethereum/go-ethereum.git (synced 2026-04-02 08:05:55 +00:00)
Compare commits
No commits in common. "master" and "v1.17.0" have entirely different histories.
295 changed files with 11523 additions and 18450 deletions
1	.github/CODEOWNERS (vendored)

@@ -10,7 +10,6 @@ beacon/merkle/ @zsfelfoldi
 beacon/types/ @zsfelfoldi @fjl
 beacon/params/ @zsfelfoldi @fjl
 cmd/evm/ @MariusVanDerWijden @lightclient
-cmd/keeper/ @gballet
 core/state/ @rjl493456442
 crypto/ @gballet @jwasinger @fjl
 core/ @rjl493456442
4	.github/workflows/go.yml (vendored)

@@ -69,8 +69,8 @@ jobs:
       - name: Install cross toolchain
         run: |
-          sudo apt-get update
-          sudo apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib
+          apt-get update
+          apt-get -yq --no-install-suggests --no-install-recommends install gcc-multilib
       - name: Build
         run: go run build/ci.go test -arch 386 -short -p 8
102	AGENTS.md (removed)

@@ -1,102 +0,0 @@

# AGENTS

## Guidelines

- **Keep changes minimal and focused.** Only modify code directly related to the task at hand. Do not refactor unrelated code, rename existing variables or functions for style, or bundle unrelated fixes into the same commit or PR.
- **Do not add, remove, or update dependencies** unless the task explicitly requires it.

## Pre-Commit Checklist

Before every commit, run **all** of the following checks and ensure they pass:

### 1. Formatting

Before committing, always run `gofmt` and `goimports` on all modified files:

```sh
gofmt -w <modified files>
goimports -w <modified files>
```

### 2. Build All Commands

Verify that all tools compile successfully:

```sh
make all
```

This builds all executables under `cmd/`, including `keeper`, which has special build requirements.

### 3. Tests

While iterating during development, use `-short` for faster feedback:

```sh
go run ./build/ci.go test -short
```

Before committing, run the full test suite **without** `-short` to ensure all tests pass, including the Ethereum execution-spec tests and all state/block test permutations:

```sh
go run ./build/ci.go test
```

### 4. Linting

```sh
go run ./build/ci.go lint
```

This runs additional style checks. Fix any issues before committing.

### 5. Generated Code

```sh
go run ./build/ci.go check_generate
```

Ensures that all generated files (e.g., `gen_*.go`) are up to date. If this fails, first install the required code generators by running `make devtools`, then run the appropriate `go generate` commands and include the updated files in your commit.

### 6. Dependency Hygiene

```sh
go run ./build/ci.go check_baddeps
```

Verifies that no forbidden dependencies have been introduced.

## What to include in commits

Do not commit binaries, whether they are produced by the main build or byproducts of investigations.

## Commit Message Format

Commit messages must be prefixed with the package(s) they modify, followed by a short lowercase description:

```
<package(s)>: description
```

Examples:

- `core/vm: fix stack overflow in PUSH instruction`
- `eth, rpc: make trace configs optional`
- `cmd/geth: add new flag for sync mode`

Use comma-separated package names when multiple areas are affected. Keep the description concise.

## Pull Request Title Format

PR titles follow the same convention as commit messages:

```
<list of modified paths>: description
```

Examples:

- `core/vm: fix stack overflow in PUSH instruction`
- `core, eth: add arena allocator support`
- `cmd/geth, internal/ethapi: refactor transaction args`
- `trie/archiver: streaming subtree archival to fix OOM`

Use the top-level package paths, comma-separated if multiple areas are affected. Only mention the directories with functional changes; interface changes that trickle all over the codebase should not generate an exhaustive list. The description should be a short, lowercase summary of the change.
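The commit-message convention described above (`<package(s)>: description`) can be checked mechanically. A minimal sketch, assuming a simplified pattern — the regular expression and function name here are illustrative, not part of the repository:

```go
package main

import (
	"fmt"
	"regexp"
)

// commitMsgRe is a hypothetical approximation of the convention: one or more
// comma-separated lowercase package paths, a colon, then a lowercase description.
var commitMsgRe = regexp.MustCompile(`^[a-z0-9_./-]+(, [a-z0-9_./-]+)*: [a-z].*$`)

// validCommitMessage reports whether msg follows the convention.
func validCommitMessage(msg string) bool {
	return commitMsgRe.MatchString(msg)
}

func main() {
	for _, msg := range []string{
		"core/vm: fix stack overflow in PUSH instruction", // valid
		"eth, rpc: make trace configs optional",           // valid
		"Fixed a bug",                                     // no package prefix
	} {
		fmt.Printf("%q %v\n", msg, validCommitMessage(msg))
	}
}
```

A real hook would be stricter (e.g. rejecting trailing periods), but the prefix check captures the bulk of the convention.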
@@ -4,7 +4,7 @@ ARG VERSION=""
 ARG BUILDNUM=""
 
 # Build Geth in a stock Go builder container
-FROM golang:1.26-alpine AS builder
+FROM golang:1.24-alpine AS builder
 
 RUN apk add --no-cache gcc musl-dev linux-headers git

@@ -4,7 +4,7 @@ ARG VERSION=""
 ARG BUILDNUM=""
 
 # Build Geth in a stock Go builder container
-FROM golang:1.26-alpine AS builder
+FROM golang:1.24-alpine AS builder
 
 RUN apk add --no-cache gcc musl-dev linux-headers git
@@ -87,10 +87,6 @@ func (ec *engineClient) updateLoop(headCh <-chan types.ChainHeadEvent) {
 		if status, err := ec.callForkchoiceUpdated(forkName, event); err == nil {
 			log.Info("Successful ForkchoiceUpdated", "head", event.Block.Hash(), "status", status)
 		} else {
-			if err.Error() == "beacon syncer reorging" {
-				log.Debug("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
-				continue // ignore beacon syncer reorging errors, this error can occur if the blsync is skipping a block
-			}
 			log.Error("Failed ForkchoiceUpdated", "head", event.Block.Hash(), "error", err)
 		}
 	}
@@ -21,7 +21,6 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
 		SuggestedFeeRecipient common.Address      `json:"suggestedFeeRecipient" gencodec:"required"`
 		Withdrawals           []*types.Withdrawal `json:"withdrawals"`
 		BeaconRoot            *common.Hash        `json:"parentBeaconBlockRoot"`
-		SlotNumber            *hexutil.Uint64     `json:"slotNumber"`
 	}
 	var enc PayloadAttributes
 	enc.Timestamp = hexutil.Uint64(p.Timestamp)
@@ -29,7 +28,6 @@ func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
 	enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient
 	enc.Withdrawals = p.Withdrawals
 	enc.BeaconRoot = p.BeaconRoot
-	enc.SlotNumber = (*hexutil.Uint64)(p.SlotNumber)
 	return json.Marshal(&enc)
 }
@@ -41,7 +39,6 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
 		SuggestedFeeRecipient *common.Address     `json:"suggestedFeeRecipient" gencodec:"required"`
 		Withdrawals           []*types.Withdrawal `json:"withdrawals"`
 		BeaconRoot            *common.Hash        `json:"parentBeaconBlockRoot"`
-		SlotNumber            *hexutil.Uint64     `json:"slotNumber"`
 	}
 	var dec PayloadAttributes
 	if err := json.Unmarshal(input, &dec); err != nil {
@@ -65,8 +62,5 @@ func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
 	if dec.BeaconRoot != nil {
 		p.BeaconRoot = dec.BeaconRoot
 	}
-	if dec.SlotNumber != nil {
-		p.SlotNumber = (*uint64)(dec.SlotNumber)
-	}
 	return nil
 }
@@ -34,7 +34,6 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
 		Withdrawals   []*types.Withdrawal `json:"withdrawals"`
 		BlobGasUsed   *hexutil.Uint64     `json:"blobGasUsed"`
 		ExcessBlobGas *hexutil.Uint64     `json:"excessBlobGas"`
-		SlotNumber    *hexutil.Uint64     `json:"slotNumber"`
 	}
 	var enc ExecutableData
 	enc.ParentHash = e.ParentHash
@@ -59,7 +58,6 @@ func (e ExecutableData) MarshalJSON() ([]byte, error) {
 	enc.Withdrawals = e.Withdrawals
 	enc.BlobGasUsed = (*hexutil.Uint64)(e.BlobGasUsed)
 	enc.ExcessBlobGas = (*hexutil.Uint64)(e.ExcessBlobGas)
-	enc.SlotNumber = (*hexutil.Uint64)(e.SlotNumber)
 	return json.Marshal(&enc)
 }
@@ -83,7 +81,6 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
 		Withdrawals   []*types.Withdrawal `json:"withdrawals"`
 		BlobGasUsed   *hexutil.Uint64     `json:"blobGasUsed"`
 		ExcessBlobGas *hexutil.Uint64     `json:"excessBlobGas"`
-		SlotNumber    *hexutil.Uint64     `json:"slotNumber"`
 	}
 	var dec ExecutableData
 	if err := json.Unmarshal(input, &dec); err != nil {
@@ -157,8 +154,5 @@ func (e *ExecutableData) UnmarshalJSON(input []byte) error {
 	if dec.ExcessBlobGas != nil {
 		e.ExcessBlobGas = (*uint64)(dec.ExcessBlobGas)
 	}
-	if dec.SlotNumber != nil {
-		e.SlotNumber = (*uint64)(dec.SlotNumber)
-	}
 	return nil
 }
@@ -50,13 +50,6 @@ var (
 	// ExecutionPayloadV3 has the syntax of ExecutionPayloadV2 and appends the new
 	// fields: blobGasUsed and excessBlobGas.
 	PayloadV3 PayloadVersion = 0x3
-
-	// PayloadV4 is the identifier of ExecutionPayloadV4 introduced in amsterdam fork.
-	//
-	// https://github.com/ethereum/execution-apis/blob/main/src/engine/amsterdam.md#executionpayloadv4
-	// ExecutionPayloadV4 has the syntax of ExecutionPayloadV3 and appends the new
-	// field slotNumber.
-	PayloadV4 PayloadVersion = 0x4
 )
 
 //go:generate go run github.com/fjl/gencodec -type PayloadAttributes -field-override payloadAttributesMarshaling -out gen_blockparams.go
@@ -69,13 +62,11 @@ type PayloadAttributes struct {
 	SuggestedFeeRecipient common.Address      `json:"suggestedFeeRecipient" gencodec:"required"`
 	Withdrawals           []*types.Withdrawal `json:"withdrawals"`
 	BeaconRoot            *common.Hash        `json:"parentBeaconBlockRoot"`
-	SlotNumber            *uint64             `json:"slotNumber"`
 }
 
 // JSON type overrides for PayloadAttributes.
 type payloadAttributesMarshaling struct {
 	Timestamp  hexutil.Uint64
-	SlotNumber *hexutil.Uint64
 }
 
 //go:generate go run github.com/fjl/gencodec -type ExecutableData -field-override executableDataMarshaling -out gen_ed.go
@@ -99,7 +90,6 @@ type ExecutableData struct {
 	Withdrawals   []*types.Withdrawal `json:"withdrawals"`
 	BlobGasUsed   *uint64             `json:"blobGasUsed"`
 	ExcessBlobGas *uint64             `json:"excessBlobGas"`
-	SlotNumber    *uint64             `json:"slotNumber"`
 }
 
 // JSON type overrides for executableData.
@@ -114,7 +104,6 @@ type executableDataMarshaling struct {
 	Transactions  []hexutil.Bytes
 	BlobGasUsed   *hexutil.Uint64
 	ExcessBlobGas *hexutil.Uint64
-	SlotNumber    *hexutil.Uint64
 }
 
 // StatelessPayloadStatusV1 is the result of a stateless payload execution.
@@ -224,7 +213,7 @@ func encodeTransactions(txs []*types.Transaction) [][]byte {
 	return enc
 }
 
-func DecodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
+func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
 	var txs = make([]*types.Transaction, len(enc))
 	for i, encTx := range enc {
 		var tx types.Transaction
@@ -262,7 +251,7 @@ func ExecutableDataToBlock(data ExecutableData, versionedHashes []common.Hash, b
 // for stateless execution, so it skips checking if the executable data hashes to
 // the requested hash (stateless has to *compute* the root hash, it's not given).
 func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.Hash, beaconRoot *common.Hash, requests [][]byte) (*types.Block, error) {
-	txs, err := DecodeTransactions(data.Transactions)
+	txs, err := decodeTransactions(data.Transactions)
 	if err != nil {
 		return nil, err
 	}
@@ -324,7 +313,6 @@ func ExecutableDataToBlockNoHash(data ExecutableData, versionedHashes []common.H
 		BlobGasUsed:      data.BlobGasUsed,
 		ParentBeaconRoot: beaconRoot,
 		RequestsHash:     requestsHash,
-		SlotNumber:       data.SlotNumber,
 	}
 	return types.NewBlockWithHeader(header).
 		WithBody(types.Body{Transactions: txs, Uncles: nil, Withdrawals: data.Withdrawals}),
@@ -352,7 +340,6 @@ func BlockToExecutableData(block *types.Block, fees *big.Int, sidecars []*types.
 		Withdrawals:   block.Withdrawals(),
 		BlobGasUsed:   block.BlobGasUsed(),
 		ExcessBlobGas: block.ExcessBlobGas(),
-		SlotNumber:    block.SlotNumber(),
 	}
 
 	// Add blobs.
@@ -438,11 +438,14 @@ func (s *serverWithLimits) fail(desc string) {
 // failLocked calculates the dynamic failure delay and applies it.
 func (s *serverWithLimits) failLocked(desc string) {
 	log.Debug("Server error", "description", desc)
+	s.failureDelay *= 2
 	now := s.clock.Now()
 	if now > s.failureDelayEnd {
 		s.failureDelay *= math.Pow(2, -float64(now-s.failureDelayEnd)/float64(maxFailureDelay))
 	}
-	s.failureDelay = max(min(s.failureDelay*2, float64(maxFailureDelay)), float64(minFailureDelay))
+	if s.failureDelay < float64(minFailureDelay) {
+		s.failureDelay = float64(minFailureDelay)
+	}
 	s.failureDelayEnd = now + mclock.AbsTime(s.failureDelay)
 	s.delay(time.Duration(s.failureDelay))
}
@@ -62,6 +62,7 @@ const (
 	ssNeedParent      // cp header slot %32 != 0, need parent to check epoch boundary
 	ssParentRequested // cp parent header requested
 	ssPrintStatus     // has all necessary info, print log message if init still not successful
+	ssDone            // log message printed, no more action required
 )
 
 type serverState struct {
@@ -179,8 +180,7 @@ func (s *CheckpointInit) Process(requester request.Requester, events []request.E
 			default:
 				log.Error("blsync: checkpoint not available, but reported as finalized; specified checkpoint hash might be too old", "server", server.Name())
 			}
-			s.serverState[server] = serverState{state: ssDefault}
-			requester.Fail(server, "checkpoint init failed")
+			s.serverState[server] = serverState{state: ssDone}
 		}
 	}
@@ -5,102 +5,81 @@
 # https://github.com/ethereum/execution-spec-tests/releases/download/v5.1.0
 a3192784375acec7eaec492799d5c5d0c47a2909a3cc40178898e4ecd20cc416 fixtures_develop.tar.gz
 
-# version:golang 1.25.7
+# version:golang 1.25.1
 # https://go.dev/dl/
-178f2832820274b43e177d32f06a3ebb0129e427dd20a5e4c88df2c1763cf10a go1.25.7.src.tar.gz
+d010c109cee94d80efe681eab46bdea491ac906bf46583c32e9f0dbb0bd1a594 go1.25.1.src.tar.gz
-81bf2a1f20633f62d55d826d82dde3b0570cf1408a91e15781b266037299285b go1.25.7.aix-ppc64.tar.gz
+1d622468f767a1b9fe1e1e67bd6ce6744d04e0c68712adc689748bbeccb126bb go1.25.1.darwin-amd64.tar.gz
-bf5050a2152f4053837b886e8d9640c829dbacbc3370f913351eb0904cb706f5 go1.25.7.darwin-amd64.tar.gz
+68deebb214f39d542e518ebb0598a406ab1b5a22bba8ec9ade9f55fb4dd94a6c go1.25.1.darwin-arm64.tar.gz
-ff18369ffad05c57d5bed888b660b31385f3c913670a83ef557cdfd98ea9ae1b go1.25.7.darwin-arm64.tar.gz
+d03cdcbc9bd8baf5cf028de390478e9e2b3e4d0afe5a6582dedc19bfe6a263b2 go1.25.1.linux-386.tar.gz
-c5dccd7f192dd7b305dc209fb316ac1917776d74bd8e4d532ef2772f305bf42a go1.25.7.dragonfly-amd64.tar.gz
+7716a0d940a0f6ae8e1f3b3f4f36299dc53e31b16840dbd171254312c41ca12e go1.25.1.linux-amd64.tar.gz
-a2de97c8ac74bf64b0ae73fe9d379e61af530e061bc7f8f825044172ffe61a8b go1.25.7.freebsd-386.tar.gz
+65a3e34fb2126f55b34e1edfc709121660e1be2dee6bdf405fc399a63a95a87d go1.25.1.linux-arm64.tar.gz
-055f9e138787dcafa81eb0314c8ff70c6dd0f6dba1e8a6957fef5d5efd1ab8fd go1.25.7.freebsd-amd64.tar.gz
+eb949be683e82a99e9861dafd7057e31ea40b161eae6c4cd18fdc0e8c4ae6225 go1.25.1.linux-armv6l.tar.gz
-60e7f7a7c990f0b9539ac8ed668155746997d404643a4eecd47b3dee1b7e710b go1.25.7.freebsd-arm.tar.gz
+be13d5479b8c75438f2efcaa8c191fba3af684b3228abc9c99c7aa8502f34424 go1.25.1.windows-386.zip
-631e03d5fd4c526e2f499154d8c6bf4cb081afb2fff171c428722afc9539d53a go1.25.7.freebsd-arm64.tar.gz
+4a974de310e7ee1d523d2fcedb114ba5fa75408c98eb3652023e55ccf3fa7cab go1.25.1.windows-amd64.zip
-8a264fd685823808140672812e3ad9c43f6ad59444c0dc14cdd3a1351839ddd5 go1.25.7.freebsd-riscv64.tar.gz
+45ab4290adbd6ee9e7f18f0d57eaa9008fdbef590882778ed93eac3c8cca06c5 go1.25.1.aix-ppc64.tar.gz
-57c672447d906a1bcab98f2b11492d54521a791aacbb4994a25169e59cbe289a go1.25.7.illumos-amd64.tar.gz
+2e3c1549bed3124763774d648f291ac42611232f48320ebbd23517c909c09b81 go1.25.1.dragonfly-amd64.tar.gz
-2866517e9ca81e6a2e85a930e9b11bc8a05cfeb2fc6dc6cb2765e7fb3c14b715 go1.25.7.linux-386.tar.gz
+dc0198dd4ec520e13f26798def8750544edf6448d8e9c43fd2a814e4885932af go1.25.1.freebsd-386.tar.gz
-12e6d6a191091ae27dc31f6efc630e3a3b8ba409baf3573d955b196fdf086005 go1.25.7.linux-amd64.tar.gz
+c4f1a7e7b258406e6f3b677ecdbd97bbb23ff9c0d44be4eb238a07d360f69ac8 go1.25.1.freebsd-amd64.tar.gz
-ba611a53534135a81067240eff9508cd7e256c560edd5d8c2fef54f083c07129 go1.25.7.linux-arm64.tar.gz
+7772fc5ff71ed39297ec0c1599fc54e399642c9b848eac989601040923b0de9c go1.25.1.freebsd-arm.tar.gz
-1ba07e0eb86b839e72467f4b5c7a5597d07f30bcf5563c951410454f7cda5266 go1.25.7.linux-armv6l.tar.gz
+5bb011d5d5b6218b12189f07aa0be618ab2002662fff1ca40afba7389735c207 go1.25.1.freebsd-arm64.tar.gz
-775753fc5952a334c415f08768df2f0b73a3228a16e8f5f63d545daacb4e3357 go1.25.7.linux-loong64.tar.gz
+ccac716240cb049bebfafcb7eebc3758512178a4c51fc26da9cc032035d850c8 go1.25.1.freebsd-riscv64.tar.gz
-1a023bb367c5fbb4c637a2f6dc23ff17c6591ad929ce16ea88c74d857153b307 go1.25.7.linux-mips.tar.gz
+cc53910ffb9fcfdd988a9fa25b5423bae1cfa01b19616be646700e1f5453b466 go1.25.1.illumos-amd64.tar.gz
-a8e97223d8aa6fdfd45f132a4784d2f536bbac5f3d63a24b63d33b6bfe1549af go1.25.7.linux-mips64.tar.gz
+efe809f923bcedab44bf7be2b3af8d182b512b1bf9c07d302e0c45d26c8f56f3 go1.25.1.linux-loong64.tar.gz
-eb9edb6223330d5e20275667c65dea076b064c08e595fe4eba5d7d6055cfaccf go1.25.7.linux-mips64le.tar.gz
+c0de33679f6ed68991dc42dc4a602e74a666e3e166c1748ee1b5d1a7ea2ffbb2 go1.25.1.linux-mips.tar.gz
-9c1e693552a5f9bb9e0012d1c5e01456ecefbc59bef53a77305222ce10aba368 go1.25.7.linux-mipsle.tar.gz
+c270f7b0c0bdfbcd54fef4481227c40d41bb518f9ae38ee930870f04a0a6a589 go1.25.1.linux-mips64.tar.gz
-28a788798e7329acbbc0ac2caa5e4368b1e5ede646cc24429c991214cfb45c63 go1.25.7.linux-ppc64.tar.gz
+80be871ba9c944f34d1868cdf5047e1cf2e1289fe08cdb90e2453d2f0d6965ae go1.25.1.linux-mips64le.tar.gz
-42124c0edc92464e2b37b2d7fcd3658f0c47ebd6a098732415a522be8cb88e3f go1.25.7.linux-ppc64le.tar.gz
+9f09defa9bb22ebf2cde76162f40958564e57ce5c2b3649bc063bebcbc9294c1 go1.25.1.linux-mipsle.tar.gz
-88d59c6893c8425875d6eef8e3434bc2fa2552e5ad4c058c6cd8cd710a0301c8 go1.25.7.linux-riscv64.tar.gz
+2c76b7d278c1d43ad19d478ad3f0f05e7b782b64b90870701b314fa48b5f43c6 go1.25.1.linux-ppc64.tar.gz
-c6b77facf666dc68195ecab05dbf0ebb4e755b2a8b7734c759880557f1c29b0c go1.25.7.linux-s390x.tar.gz
+8b0c8d3ee5b1b5c28b6bd63dc4438792012e01d03b4bf7a61d985c87edab7d1f go1.25.1.linux-ppc64le.tar.gz
-f14c184d9ade0ee04c7735d4071257b90896ecbde1b32adae84135f055e6399b go1.25.7.netbsd-386.tar.gz
+22fe934a9d0c9c57275716c55b92d46ebd887cec3177c9140705efa9f84ba1e2 go1.25.1.linux-riscv64.tar.gz
-7e7389e404dca1088c31f0fc07f1dd60891d7182bcd621469c14f7e79eceb3ff go1.25.7.netbsd-amd64.tar.gz
+9cfe517ba423f59f3738ca5c3d907c103253cffbbcc2987142f79c5de8c1bf93 go1.25.1.linux-s390x.tar.gz
-70388bb3ef2f03dbf1357e9056bd09034a67e018262557354f8cf549766b3f9d go1.25.7.netbsd-arm.tar.gz
+6af8a08353e76205d5b743dd7a3f0126684f96f62be0a31b75daf9837e512c46 go1.25.1.netbsd-386.tar.gz
-8c1cda9d25bfc9b18d24d5f95fc23949dd3ff99fa408a6cfa40e2cf12b07e362 go1.25.7.netbsd-arm64.tar.gz
+e5d534ff362edb1bd8c8e10892b6a027c4c1482454245d1529167676498684c7 go1.25.1.netbsd-amd64.tar.gz
-42f0d1bfbe39b8401cccb84dd66b30795b97bfc9620dfdc17c5cd4fcf6495cb0 go1.25.7.openbsd-386.tar.gz
+88bcf39254fdcea6a199c1c27d787831b652427ce60851ae9e41a3d7eb477f45 go1.25.1.netbsd-arm.tar.gz
-e514879c0a28bc32123cd52c4c093de912477fe83f36a6d07517d066ef55391a go1.25.7.openbsd-amd64.tar.gz
+d7c2eabe1d04ee47bcaea2816fdd90dbd25d90d4dfa756faa9786c788e4f3a4e go1.25.1.netbsd-arm64.tar.gz
-8cd22530695a0218232bf7efea8f162df1697a3106942ac4129b8c3de39ce4ef go1.25.7.openbsd-arm.tar.gz
+14a2845977eb4dde11d929858c437a043467c427db87899935e90cee04a38d72 go1.25.1.openbsd-386.tar.gz
-938720f6ebc0d1c53d7840321d3a31f29fd02496e84a6538f442a9311dc1cc9a go1.25.7.openbsd-arm64.tar.gz
+d27ac54b38a13a09c81e67c82ac70d387037341c85c3399291c73e13e83fdd8c go1.25.1.openbsd-amd64.tar.gz
-a4c378b73b98f89a3596c2ef51aabbb28783d9ca29f7e317d8ca07939660ce6f go1.25.7.openbsd-ppc64.tar.gz
+0f4ab5f02500afa4befd51fed1e8b45e4d07ca050f641cc3acc76eaa4027b2c3 go1.25.1.openbsd-arm.tar.gz
-937b58734fbeaa8c7941a0e4285e7e84b7885396e8d11c23f9ab1a8ff10ff20e go1.25.7.openbsd-riscv64.tar.gz
+d46c3bd156843656f7f3cb0dec27ea51cd926ec3f7b80744bf8156e67c1c812f go1.25.1.openbsd-arm64.tar.gz
-61a093c8c5244916f25740316386bb9f141545dcf01b06a79d1c78ece488403e go1.25.7.plan9-386.tar.gz
+c550514c67f22e409be10e40eace761e2e43069f4ef086ae6e60aac736c2b679 go1.25.1.openbsd-ppc64.tar.gz
-7fc8f6689c9de8ccb7689d2278035fa83c2d601409101840df6ddfe09ba58699 go1.25.7.plan9-amd64.tar.gz
+8a09a8714a2556eb13fc1f10b7ce2553fcea4971e3330fc3be0efd24aab45734 go1.25.1.openbsd-riscv64.tar.gz
-9661dff8eaeeb62f1c3aadbc5ff189a2e6744e1ec885e32dbcb438f58a34def5 go1.25.7.plan9-arm.tar.gz
+b0e1fefaf0c7abd71f139a54eee9767944aff5f0bc9d69c968234804884e552f go1.25.1.plan9-386.tar.gz
-28ecba0e1d7950c8b29a4a04962dd49c3bf5221f55a44f17d98f369f82859cf4 go1.25.7.solaris-amd64.tar.gz
+e94732c94f149690aa0ab11c26090577211b4a988137cb2c03ec0b54e750402e go1.25.1.plan9-amd64.tar.gz
-baa6b488291801642fa620026169e38bec2da2ac187cd3ae2145721cf826bbc3 go1.25.7.windows-386.zip
+7eb80e9de1e817d9089a54e8c7c5c8d8ed9e5fb4d4a012fc0f18fc422a484f0c go1.25.1.plan9-arm.tar.gz
-c75e5f4ff62d085cc0017be3ad19d5536f46825fa05db06ec468941f847e3228 go1.25.7.windows-amd64.zip
+1261dfad7c4953c0ab90381bc1242dc54e394db7485c59349428d532b2273343 go1.25.1.solaris-amd64.tar.gz
-807033f85931bc4a589ca8497535dcbeb1f30d506e47fa200f5f04c4a71c3d9f go1.25.7.windows-arm64.zip
+04bc3c078e9e904c4d58d6ac2532a5bdd402bd36a9ff0b5949b3c5e6006a05ee go1.25.1.windows-arm64.zip
 
-# version:golangci 2.10.1
+# version:golangci 2.4.0
 # https://github.com/golangci/golangci-lint/releases/
-# https://github.com/golangci/golangci-lint/releases/download/v2.10.1
+# https://github.com/golangci/golangci-lint/releases/download/v2.4.0/
-66fb0da81b8033b477f97eea420d4b46b230ca172b8bb87c6610109f3772b6b6 golangci-lint-2.10.1-darwin-amd64.tar.gz
+7904ce63f79db44934939cf7a063086ea0ea98e9b19eba0a9d52ccdd0d21951c golangci-lint-2.4.0-darwin-amd64.tar.gz
-03bfadf67e52b441b7ec21305e501c717df93c959836d66c7f97312654acb297 golangci-lint-2.10.1-darwin-arm64.tar.gz
+cd4dd53fa09b6646baff5fd22b8c64d91db02c21c7496df27992d75d34feec59 golangci-lint-2.4.0-darwin-arm64.tar.gz
-c9a44658ccc8f7b8dbbd4ae6020ba91c1a5d3987f4d91ced0f7d2bea013e57ca golangci-lint-2.10.1-freebsd-386.tar.gz
+d58f426ebe14cc257e81562b4bf37a488ffb4ffbbb3ec73041eb3b38bb25c0e1 golangci-lint-2.4.0-freebsd-386.tar.gz
-a513c5cb4e0f5bd5767001af9d5e97e7868cfc2d9c46739a4df93e713cfb24af golangci-lint-2.10.1-freebsd-amd64.tar.gz
+6ec4a6177fc6c0dd541fbcb3a7612845266d020d35cc6fa92959220cdf64ca39 golangci-lint-2.4.0-freebsd-amd64.tar.gz
-2ef38eefc4b5cee2febacb75a30579526e5656c16338a921d80e59a8e87d4425 golangci-lint-2.10.1-freebsd-arm64.tar.gz
+4d473e3e71c01feaa915a0604fb35758b41284fb976cdeac3f842118d9ee7e17 golangci-lint-2.4.0-freebsd-armv6.tar.gz
-8fea6766318b4829e766bbe325f10191d75297dcc44ae35bf374816037878e38 golangci-lint-2.10.1-freebsd-armv6.tar.gz
+58727746c6530801a3f9a702a5945556a5eb7e88809222536dd9f9d54cafaeff golangci-lint-2.4.0-freebsd-armv7.tar.gz
-30b629870574d6254f3e8804e5a74b34f98e1263c9d55465830d739c88b862ed golangci-lint-2.10.1-freebsd-armv7.tar.gz
+fbf28c662760e24c32f82f8d16dffdb4a82de7726a52ba1fad94f890c22997ea golangci-lint-2.4.0-illumos-amd64.tar.gz
-c0db839f866ce80b1b6c96167aa101cfe50d9c936f42d942a3c1cbdc1801af68 golangci-lint-2.10.1-illumos-amd64.tar.gz
+a15a000a8981ef665e971e0f67e2acda9066a9e37a59344393b7351d8fb49c81 golangci-lint-2.4.0-linux-386.tar.gz
-280eb56636e9175f671cd7b755d7d67f628ae2ed00a164d1e443c43c112034e5 golangci-lint-2.10.1-linux-386.deb
+fae792524c04424c0ac369f5b8076f04b45cf29fc945a370e55d369a8dc11840 golangci-lint-2.4.0-linux-amd64.tar.gz
-065a7d99da61dc7dfbfef2e2d7053dd3fa6672598f2747117aa4bb5f45e7df7f golangci-lint-2.10.1-linux-386.rpm
+70ac11f55b80ec78fd3a879249cc9255121b8dfd7f7ed4fc46ed137f4abf17e7 golangci-lint-2.4.0-linux-arm64.tar.gz
-a55918c03bb413b2662287653ab2ae2fef4e37428b247dad6348724adde9d770 golangci-lint-2.10.1-linux-386.tar.gz
+4acdc40e5cebe99e4e7ced358a05b2e71789f409b41cb4f39bbb86ccfa14b1dc golangci-lint-2.4.0-linux-armv6.tar.gz
-8aa9b3aa14f39745eeb7fc7ff50bcac683e785397d1e4bc9afd2184b12c4ce86 golangci-lint-2.10.1-linux-amd64.deb
+2a68749568fa22b4a97cb88dbea655595563c795076536aa6c087f7968784bf3 golangci-lint-2.4.0-linux-armv7.tar.gz
-62a111688e9e305032334a2cbc84f4d971b64bb3bffc99d3f80081d57fb25e32 golangci-lint-2.10.1-linux-amd64.rpm
+9e3369afb023711036dcb0b4f45c9fe2792af962fa1df050c9f6ac101a6c5d73 golangci-lint-2.4.0-linux-loong64.tar.gz
-dfa775874cf0561b404a02a8f4481fc69b28091da95aa697259820d429b09c99 golangci-lint-2.10.1-linux-amd64.tar.gz
+bb9143d6329be2c4dbfffef9564078e7da7d88e7dde6c829b6263d98e072229e golangci-lint-2.4.0-linux-mips64.tar.gz
-b3f36937e8ea1660739dc0f5c892ea59c9c21ed4e75a91a25957c561f7f79a55 golangci-lint-2.10.1-linux-arm64.deb
+5ad1765b40d56cd04d4afd805b3ba6f4bfd9b36181da93c31e9b17e483d8608d golangci-lint-2.4.0-linux-mips64le.tar.gz
-36d50314d53683b1f1a2a6cedfb5a9468451b481c64ab9e97a8e843ea088074d golangci-lint-2.10.1-linux-arm64.rpm
+918936fb9c0d5ba96bef03cf4348b03938634cfcced49be1e9bb29cb5094fa73 golangci-lint-2.4.0-linux-ppc64le.tar.gz
-6652b42ae02915eb2f9cb2a2e0cac99514c8eded8388d88ae3e06e1a52c00de8 golangci-lint-2.10.1-linux-arm64.tar.gz
+f7474c638e1fb67ebbdc654b55ca0125377ea0bc88e8fee8d964a4f24eacf828 golangci-lint-2.4.0-linux-riscv64.tar.gz
-a32d8d318e803496812dd3461f250e52ccc7f53c47b95ce404a9cf55778ceb6a golangci-lint-2.10.1-linux-armv6.deb
+b617a9543997c8bfceaffa88a75d4e595030c6add69fba800c1e4d8f5fe253dd golangci-lint-2.4.0-linux-s390x.tar.gz
-41d065f4c8ea165a1531abea644988ee2e973e4f0b49f9725ed3b979dac45112 golangci-lint-2.10.1-linux-armv6.rpm
+7db027b03a9ba328f795215b04f594036837bc7dd0dd7cd16776b02a6167981c golangci-lint-2.4.0-netbsd-386.tar.gz
-59159a4df03aabbde69d15c7b7b3df143363cbb41f4bd4b200caffb8e34fb734 golangci-lint-2.10.1-linux-armv6.tar.gz
+52d8f9393f4313df0a62b752c37775e3af0b818e43e8dd28954351542d7c60bc golangci-lint-2.4.0-netbsd-amd64.tar.gz
-b2e8ec0e050a1e2251dfe1561434999d202f5a3f9fa47ce94378b0fd1662ea5a golangci-lint-2.10.1-linux-armv7.deb
|
5c0086027fb5a4af3829e530c8115db4b35d11afe1914322eef528eb8cd38c69 golangci-lint-2.4.0-netbsd-arm64.tar.gz
|
||||||
28c9331429a497da27e9c77846063bd0e8275e878ffedb4eb9e9f21d24771cc0 golangci-lint-2.10.1-linux-armv7.rpm
|
6b779d6ed1aed87cefe195cc11759902b97a76551b593312c6833f2635a3488f golangci-lint-2.4.0-netbsd-armv6.tar.gz
|
||||||
818f33e95b273e3769284b25563b51ef6a294e9e25acf140fda5830c075a1a59 golangci-lint-2.10.1-linux-armv7.tar.gz
|
f00d1f4b7ec3468a0f9fffd0d9ea036248b029b7621cbc9a59c449ef94356d09 golangci-lint-2.4.0-netbsd-armv7.tar.gz
|
||||||
6b6b85ed4b7c27f51097dd681523000409dde835e86e6e314e87be4bb013e2ab golangci-lint-2.10.1-linux-loong64.deb
|
3ce671b0b42b58e35066493aab75a7e2826c9e079988f1ba5d814a4029faaf87 golangci-lint-2.4.0-windows-386.zip
|
||||||
94050a0cf06169e2ae44afb307dcaafa7d7c3b38c0c23b5652cf9cb60f0c337f golangci-lint-2.10.1-linux-loong64.rpm
|
003112f7a56746feaabf20b744054bf9acdf900c9e77176383623c4b1d76aaa9 golangci-lint-2.4.0-windows-amd64.zip
|
||||||
25820300fccb8c961c1cdcb1f77928040c079e04c43a3a5ceb34b1cb4a1c5c8d golangci-lint-2.10.1-linux-loong64.tar.gz
|
dc0c2092af5d47fc2cd31a1dfe7b4c7e765fab22de98bd21ef2ffcc53ad9f54f golangci-lint-2.4.0-windows-arm64.zip
|
||||||
98bf39d10139fdcaa37f94950e9bbb8888660ae468847ae0bf1cb5bf67c1f68b golangci-lint-2.10.1-linux-mips64.deb
|
0263d23e20a260cb1592d35e12a388f99efe2c51b3611fdc66fbd9db1fce664d golangci-lint-2.4.0-windows-armv6.zip
|
||||||
df3ce5f03808dcceaa8b683d1d06e95c885f09b59dc8e15deb840fbe2b3e3299 golangci-lint-2.10.1-linux-mips64.rpm
|
9403c03bf648e6313036e0273149d44bad1b9ad53889b6d00e4ccb842ba3c058 golangci-lint-2.4.0-windows-armv7.zip
|
||||||
972508dda523067e6e6a1c8e6609d63bc7c4153819c11b947d439235cf17bac2 golangci-lint-2.10.1-linux-mips64.tar.gz
|
|
||||||
1d37f2919e183b5bf8b1777ed8c4b163d3b491d0158355a7999d647655cbbeb6 golangci-lint-2.10.1-linux-mips64le.deb
|
|
||||||
e341d031002cd09a416329ed40f674231051a38544b8f94deb2d1708ce1f4a6f golangci-lint-2.10.1-linux-mips64le.rpm
|
|
||||||
393560122b9cb5538df0c357d30eb27b6ee563533fbb9b138c8db4fd264002af golangci-lint-2.10.1-linux-mips64le.tar.gz
|
|
||||||
21ca46b6a96442e8957677a3ca059c6b93674a68a01b1c71f4e5df0ea2e96d19 golangci-lint-2.10.1-linux-ppc64le.deb
|
|
||||||
57fe0cbca0a9bbdf1547c5e8aa7d278e6896b438d72a541bae6bc62c38b43d1e golangci-lint-2.10.1-linux-ppc64le.rpm
|
|
||||||
e2883db9fa51584e5e203c64456f29993550a7faadc84e3faccdb48f0669992e golangci-lint-2.10.1-linux-ppc64le.tar.gz
|
|
||||||
aa6da0e98ab0ba3bb7582e112174c349907d5edfeff90a551dca3c6eecf92fc0 golangci-lint-2.10.1-linux-riscv64.deb
|
|
||||||
3c68d76cd884a7aad206223a980b9c20bb9ea74b560fa27ed02baf2389189234 golangci-lint-2.10.1-linux-riscv64.rpm
|
|
||||||
3bca11bfac4197205639cbd4676a5415054e629ac6c12ea10fcbe33ef852d9c3 golangci-lint-2.10.1-linux-riscv64.tar.gz
|
|
||||||
0c6aed2ce49db2586adbac72c80d871f06feb1caf4c0763a5ca98fec809a8f0b golangci-lint-2.10.1-linux-s390x.deb
|
|
||||||
16c285adfe1061d69dd8e503be69f87c7202857c6f4add74ac02e3571158fbec golangci-lint-2.10.1-linux-s390x.rpm
|
|
||||||
21011ad368eb04f024201b832095c6b5f96d0888de194cca5bfe4d9307d6364b golangci-lint-2.10.1-linux-s390x.tar.gz
|
|
||||||
7b5191e77a70485918712e31ed55159956323e4911bab1b67569c9d86e1b75eb golangci-lint-2.10.1-netbsd-386.tar.gz
|
|
||||||
07801fd38d293ebad10826f8285525a39ea91ce5ddad77d05bfa90bda9c884a9 golangci-lint-2.10.1-netbsd-amd64.tar.gz
|
|
||||||
7e7219d71c1bf33b98c328c93dc0560706dd896a1c43c44696e5222fc9d7446e golangci-lint-2.10.1-netbsd-arm64.tar.gz
|
|
||||||
92fbc90b9eec0e572269b0f5492a2895c426b086a68372fde49b7e4d4020863e golangci-lint-2.10.1-netbsd-armv6.tar.gz
|
|
||||||
f67b3ae1f47caeefa507a4ebb0c8336958a19011fe48766443212030f75d004b golangci-lint-2.10.1-netbsd-armv7.tar.gz
|
|
||||||
a40bc091c10cea84eaee1a90b84b65f5e8652113b0a600bb099e4e4d9d7caddb golangci-lint-2.10.1-windows-386.zip
|
|
||||||
c60c87695e79db8e320f0e5be885059859de52bb5ee5f11be5577828570bc2a3 golangci-lint-2.10.1-windows-amd64.zip
|
|
||||||
636ab790c8dcea8034aa34aba6031ca3893d68f7eda000460ab534341fadbab1 golangci-lint-2.10.1-windows-arm64.zip
|
|
||||||
|
|
||||||
# This is the builder on PPA that will build Go itself (inception-y), don't modify!
|
# This is the builder on PPA that will build Go itself (inception-y), don't modify!
|
||||||
#
|
#
|
||||||
|
|
|
||||||
14
build/ci.go
@@ -107,21 +107,17 @@ var (
            Tags: "ziren",
            Env:  map[string]string{"GOMIPS": "softfloat", "CGO_ENABLED": "0"},
        },
-       {
-           Name:   "womir",
-           GOOS:   "wasip1",
-           GOARCH: "wasm",
-           Tags:   "womir",
-       },
        {
            Name:   "wasm-js",
            GOOS:   "js",
            GOARCH: "wasm",
+           Tags:   "example",
        },
        {
            Name:   "wasm-wasi",
            GOOS:   "wasip1",
            GOARCH: "wasm",
+           Tags:   "example",
        },
        {
            Name: "example",
@@ -172,6 +168,8 @@ var (
        "focal", // 20.04, EOL: 04/2030
        "jammy", // 22.04, EOL: 04/2032
        "noble", // 24.04, EOL: 04/2034
+       "oracular", // 24.10, EOL: 07/2025
+       "plucky",   // 25.04, EOL: 01/2026
    }
 
    // This is where the tests should be unpacked.
@@ -309,7 +307,7 @@ func doInstallKeeper(cmdline []string) {
        args := slices.Clone(gobuild.Args)
        args = append(args, "-o", executablePath(outputName))
        args = append(args, ".")
-       build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env, Dir: gobuild.Dir})
+       build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env})
    }
 }
 
@@ -1200,7 +1198,7 @@ func doWindowsInstaller(cmdline []string) {
    var (
        arch    = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
        signer  = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
-       signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
+       signify = flag.String("signify key", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
        upload  = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
        workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
    )
@@ -51,12 +51,6 @@ type Chain struct {
    state   map[common.Address]state.DumpAccount // state of head block
    senders map[common.Address]*senderInfo
    config  *params.ChainConfig
-
-   txInfo txInfo
-}
-
-type txInfo struct {
-   LargeReceiptBlock *uint64 `json:"tx-largereceipt"`
 }
 
 // NewChain takes the given chain.rlp file, and decodes and returns
@@ -80,20 +74,12 @@ func NewChain(dir string) (*Chain, error) {
    if err != nil {
        return nil, err
    }
-
-   var txInfo txInfo
-   err = common.LoadJSON(filepath.Join(dir, "txinfo.json"), &txInfo)
-   if err != nil {
-       return nil, err
-   }
 
    return &Chain{
        genesis: gen,
        blocks:  blocks,
        state:   state,
        senders: accounts,
        config:  gen.Config,
-       txInfo:  txInfo,
    }, nil
 }
@@ -66,10 +66,9 @@ func (s *Suite) dialAs(key *ecdsa.PrivateKey) (*Conn, error) {
        return nil, err
    }
    conn.caps = []p2p.Cap{
-       {Name: "eth", Version: 70},
        {Name: "eth", Version: 69},
    }
-   conn.ourHighestProtoVersion = 70
+   conn.ourHighestProtoVersion = 69
    return &conn, nil
 }
 
@@ -156,7 +155,7 @@ func (c *Conn) ReadEth() (any, error) {
    var msg any
    switch int(code) {
    case eth.StatusMsg:
-       msg = new(eth.StatusPacket)
+       msg = new(eth.StatusPacket69)
    case eth.GetBlockHeadersMsg:
        msg = new(eth.GetBlockHeadersPacket)
    case eth.BlockHeadersMsg:
@@ -165,6 +164,10 @@ func (c *Conn) ReadEth() (any, error) {
        msg = new(eth.GetBlockBodiesPacket)
    case eth.BlockBodiesMsg:
        msg = new(eth.BlockBodiesPacket)
+   case eth.NewBlockMsg:
+       msg = new(eth.NewBlockPacket)
+   case eth.NewBlockHashesMsg:
+       msg = new(eth.NewBlockHashesPacket)
    case eth.TransactionsMsg:
        msg = new(eth.TransactionsPacket)
    case eth.NewPooledTransactionHashesMsg:
@@ -226,7 +229,7 @@ func (c *Conn) ReadSnap() (any, error) {
 }
 
 // dialAndPeer creates a peer connection and runs the handshake.
-func (s *Suite) dialAndPeer(status *eth.StatusPacket) (*Conn, error) {
+func (s *Suite) dialAndPeer(status *eth.StatusPacket69) (*Conn, error) {
    c, err := s.dial()
    if err != nil {
        return nil, err
@@ -239,7 +242,7 @@ func (s *Suite) dialAndPeer(status *eth.StatusPacket) (*Conn, error) {
 
 // peer performs both the protocol handshake and the status message
 // exchange with the node in order to peer with it.
-func (c *Conn) peer(chain *Chain, status *eth.StatusPacket) error {
+func (c *Conn) peer(chain *Chain, status *eth.StatusPacket69) error {
    if err := c.handshake(); err != nil {
        return fmt.Errorf("handshake failed: %v", err)
    }
@@ -312,7 +315,7 @@ func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
 }
 
 // statusExchange performs a `Status` message exchange with the given node.
-func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket) error {
+func (c *Conn) statusExchange(chain *Chain, status *eth.StatusPacket69) error {
 loop:
    for {
        code, data, err := c.Read()
@@ -321,7 +324,7 @@ loop:
        }
        switch code {
        case eth.StatusMsg + protoOffset(ethProto):
-           msg := new(eth.StatusPacket)
+           msg := new(eth.StatusPacket69)
            if err := rlp.DecodeBytes(data, &msg); err != nil {
                return fmt.Errorf("error decoding status packet: %w", err)
            }
@@ -336,12 +339,10 @@ loop:
            if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) {
                return fmt.Errorf("wrong fork ID in status: have %v, want %v", have, want)
            }
-           for _, cap := range c.caps {
-               if cap.Name == "eth" && cap.Version == uint(msg.ProtocolVersion) {
+           if have, want := msg.ProtocolVersion, c.ourHighestProtoVersion; have != uint32(want) {
+               return fmt.Errorf("wrong protocol version: have %v, want %v", have, want)
+           }
            break loop
-               }
-           }
-           return fmt.Errorf("wrong protocol version: have %v, want %v", msg.ProtocolVersion, c.caps)
        case discMsg:
            var msg []p2p.DiscReason
            if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
@@ -362,7 +363,7 @@ loop:
    }
    if status == nil {
        // default status message
-       status = &eth.StatusPacket{
+       status = &eth.StatusPacket69{
            ProtocolVersion: uint32(c.negotiatedProtoVersion),
            NetworkID:       chain.config.ChainID.Uint64(),
            Genesis:         chain.blocks[0].Hash(),
@@ -87,9 +87,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
            root:         root,
            startingHash: zero,
            limitHash:    ffHash,
-           expAccounts:  68,
+           expAccounts:  67,
            expFirst:     firstKey,
-           expLast:      common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
+           expLast:      common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"),
            desc:         "In this test, we request the entire state range, but limit the response to 4000 bytes.",
        },
        {
@@ -97,9 +97,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
            root:         root,
            startingHash: zero,
            limitHash:    ffHash,
-           expAccounts:  50,
+           expAccounts:  49,
            expFirst:     firstKey,
-           expLast:      common.HexToHash("0x4615e5f5df5b25349a00ad313c6cd0436b6c08ee5826e33a018661997f85ebaa"),
+           expLast:      common.HexToHash("0x445cb5c1278fdce2f9cbdb681bdd76c52f8e50e41dbd9e220242a69ba99ac099"),
            desc:         "In this test, we request the entire state range, but limit the response to 3000 bytes.",
        },
        {
@@ -107,9 +107,9 @@ func (s *Suite) TestSnapGetAccountRange(t *utesting.T) {
            root:         root,
            startingHash: zero,
            limitHash:    ffHash,
-           expAccounts:  35,
+           expAccounts:  34,
            expFirst:     firstKey,
-           expLast:      common.HexToHash("0x2de4bdbddcfbb9c3e195dae6b45f9c38daff897e926764bf34887fb0db5c3284"),
+           expLast:      common.HexToHash("0x2ef46ebd2073cecde499c2e8df028ad79a26d57bfaa812c4c6f7eb4c9617b913"),
            desc:         "In this test, we request the entire state range, but limit the response to 2000 bytes.",
        },
        {
@@ -178,9 +178,9 @@ The server should return the first available account.`,
            root:         root,
            startingHash: firstKey,
            limitHash:    ffHash,
-           expAccounts:  68,
+           expAccounts:  67,
            expFirst:     firstKey,
-           expLast:      common.HexToHash("0x59312f89c13e9e24c1cb8b103aa39a9b2800348d97a92c2c9e2a78fa02b70025"),
+           expLast:      common.HexToHash("0x622e662246601dd04f996289ce8b85e86db7bb15bb17f86487ec9d543ddb6f9a"),
            desc: `In this test, startingHash is exactly the first available account key.
 The server should return the first available account of the state as the first item.`,
        },
@@ -189,9 +189,9 @@ The server should return the first available account of the state as the first i
            root:         root,
            startingHash: hashAdd(firstKey, 1),
            limitHash:    ffHash,
-           expAccounts:  68,
+           expAccounts:  67,
            expFirst:     secondKey,
-           expLast:      common.HexToHash("0x59a7c8818f1c16b298a054020dc7c3f403a970d1d1db33f9478b1c36e3a2e509"),
+           expLast:      common.HexToHash("0x66192e4c757fba1cdc776e6737008f42d50370d3cd801db3624274283bf7cd63"),
            desc: `In this test, startingHash is after the first available key.
 The server should return the second account of the state as the first item.`,
        },
@@ -227,9 +227,9 @@ server to return no data because genesis is older than 127 blocks.`,
            root:         s.chain.RootAt(int(s.chain.Head().Number().Uint64()) - 127),
            startingHash: zero,
            limitHash:    ffHash,
-           expAccounts:  68,
+           expAccounts:  66,
            expFirst:     firstKey,
-           expLast:      common.HexToHash("0x683b6c03cc32afe5db8cb96050f711fdaff8f8ff44c7587a9a848f921d02815e"),
+           expLast:      common.HexToHash("0x729953a43ed6c913df957172680a17e5735143ad767bda8f58ac84ec62fbec5e"),
            desc: `This test requests data at a state root that is 127 blocks old.
 We expect the server to have this state available.`,
        },
@@ -658,8 +658,8 @@ The server should reject the request.`,
            // It's a bit unfortunate these are hard-coded, but the result depends on
            // a lot of aspects of the state trie and can't be guessed in a simple
            // way. So you'll have to update this when the test chain is changed.
-           common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
+           common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"),
-           common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
+           common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"),
            empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
            empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
            empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty, empty,
@@ -679,8 +679,8 @@ The server should reject the request.`,
            // be updated when the test chain is changed.
            expHashes: []common.Hash{
                empty,
-               common.HexToHash("0x4178696465d4514ff5924ef8c28ce64d41a669634b63184c2c093e252d6b4bc4"),
+               common.HexToHash("0x0a76c9812ca90ffed8ee4d191e683f93386b6e50cfe3679c0760d27510aa7fc5"),
-               common.HexToHash("0x4bdecec09691ad38113eebee2df94fadefdff5841c0f182bae1be3c8a6d60bf3"),
+               common.HexToHash("0x5bdc0d6057b35642a16d27223ea5454e5a17a400e28f7328971a5f2a87773b76"),
            },
        },
@@ -35,7 +35,6 @@ import (
    "github.com/ethereum/go-ethereum/p2p"
    "github.com/ethereum/go-ethereum/p2p/enode"
    "github.com/ethereum/go-ethereum/rlp"
-   "github.com/ethereum/go-ethereum/trie"
    "github.com/holiman/uint256"
 )
 
@@ -84,7 +83,6 @@ func (s *Suite) EthTests() []utesting.Test {
        // get history
        {Name: "GetBlockBodies", Fn: s.TestGetBlockBodies},
        {Name: "GetReceipts", Fn: s.TestGetReceipts},
-       {Name: "GetLargeReceipts", Fn: s.TestGetLargeReceipts},
        // test transactions
        {Name: "LargeTxRequest", Fn: s.TestLargeTxRequest, Slow: true},
        {Name: "Transaction", Fn: s.TestTransaction},
@@ -431,9 +429,6 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
    // Find some blocks containing receipts.
    var hashes = make([]common.Hash, 0, 3)
    for i := range s.chain.Len() {
-       if s.chain.txInfo.LargeReceiptBlock != nil && uint64(i) == *s.chain.txInfo.LargeReceiptBlock {
-           continue
-       }
        block := s.chain.GetBlock(i)
        if len(block.Transactions()) > 0 {
            hashes = append(hashes, block.Hash())
@@ -442,9 +437,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
            break
        }
    }
-   if conn.negotiatedProtoVersion < eth.ETH70 {
-       // Create block bodies request.
-       req := &eth.GetReceiptsPacket69{
+   // Create receipts request.
+   req := &eth.GetReceiptsPacket{
        RequestId:          66,
        GetReceiptsRequest: (eth.GetReceiptsRequest)(hashes),
    }
@@ -452,9 +447,9 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
        t.Fatalf("could not write to connection: %v", err)
    }
    // Wait for response.
-   resp := new(eth.ReceiptsPacket69)
+   resp := new(eth.ReceiptsPacket[*eth.ReceiptList69])
    if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
-       t.Fatalf("error reading block receipts msg: %v", err)
+       t.Fatalf("error reading block bodies msg: %v", err)
    }
    if got, want := resp.RequestId, req.RequestId; got != want {
        t.Fatalf("unexpected request id in respond", got, want)
@@ -462,102 +457,6 @@ func (s *Suite) TestGetReceipts(t *utesting.T) {
    if resp.List.Len() != len(req.GetReceiptsRequest) {
        t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
    }
-   } else {
-       // Create block bodies request.
-       req := &eth.GetReceiptsPacket70{
-           RequestId:              66,
-           FirstBlockReceiptIndex: 0,
-           GetReceiptsRequest:     (eth.GetReceiptsRequest)(hashes),
-       }
-       if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
-           t.Fatalf("could not write to connection: %v", err)
-       }
-       // Wait for response.
-       resp := new(eth.ReceiptsPacket70)
-       if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
-           t.Fatalf("error reading block receipts msg: %v", err)
-       }
-       if got, want := resp.RequestId, req.RequestId; got != want {
-           t.Fatalf("unexpected request id in respond", got, want)
-       }
-       if resp.List.Len() != len(req.GetReceiptsRequest) {
-           t.Fatalf("wrong receipts in response: expected %d receipts, got %d", len(req.GetReceiptsRequest), resp.List.Len())
-       }
-   }
-}
-
-func (s *Suite) TestGetLargeReceipts(t *utesting.T) {
-   t.Log(`This test sends GetReceipts requests to the node for large receipt (>10MiB) in the test chain.
-This test is meaningful only if the client supports protocol version ETH70 or higher
-and LargeReceiptBlock is configured in txInfo.json.`)
-   conn, err := s.dialAndPeer(nil)
-   if err != nil {
-       t.Fatalf("peering failed: %v", err)
-   }
-   defer conn.Close()
-
-   if conn.negotiatedProtoVersion < eth.ETH70 || s.chain.txInfo.LargeReceiptBlock == nil {
-       return
-   }
-
-   // Find block with large receipt.
-   // Place the large receipt block hash in the middle of the query
-   start := max(int(*s.chain.txInfo.LargeReceiptBlock)-2, 0)
-   end := min(*s.chain.txInfo.LargeReceiptBlock+2, uint64(len(s.chain.blocks)))
-
-   var blocks []common.Hash
-   var receiptHashes []common.Hash
-   var receipts []*eth.ReceiptList
-
-   for i := uint64(start); i < end; i++ {
-       block := s.chain.GetBlock(int(i))
-       blocks = append(blocks, block.Hash())
-       receiptHashes = append(receiptHashes, block.Header().ReceiptHash)
-       receipts = append(receipts, &eth.ReceiptList{})
-   }
-
-   incomplete := false
-   lastBlock := 0
-
-   for incomplete || lastBlock != len(blocks)-1 {
-       // Create get receipt request.
-       req := &eth.GetReceiptsPacket70{
-           RequestId:              66,
-           FirstBlockReceiptIndex: uint64(receipts[lastBlock].Derivable().Len()),
-           GetReceiptsRequest:     blocks[lastBlock:],
-       }
-       if err := conn.Write(ethProto, eth.GetReceiptsMsg, req); err != nil {
-           t.Fatalf("could not write to connection: %v", err)
-       }
-       // Wait for response.
-       resp := new(eth.ReceiptsPacket70)
-       if err := conn.ReadMsg(ethProto, eth.ReceiptsMsg, &resp); err != nil {
-           t.Fatalf("error reading block receipts msg: %v", err)
-       }
-       if got, want := resp.RequestId, req.RequestId; got != want {
-           t.Fatalf("unexpected request id in respond, want: %d, got: %d", got, want)
-       }
-
-       receiptLists, _ := resp.List.Items()
-       for i, rc := range receiptLists {
-           receipts[lastBlock+i].Append(rc)
-       }
-       lastBlock += len(receiptLists) - 1
-
-       incomplete = resp.LastBlockIncomplete
-   }
-
-   hasher := trie.NewStackTrie(nil)
-   hashes := make([]common.Hash, len(receipts))
-   for i := range receipts {
-       hashes[i] = types.DeriveSha(receipts[i].Derivable(), hasher)
-   }
-
-   for i, hash := range hashes {
-       if receiptHashes[i] != hash {
-           t.Fatalf("wrong receipt root: want %x, got %x", receiptHashes[i], hash)
-       }
-   }
-}
 }
 
 // randBuf makes a random buffer size kilobytes large.
BIN
cmd/devp2p/internal/ethtest/testdata/chain.rlp
vendored
Binary file not shown.
@@ -37,7 +37,7 @@
    "nonce": "0x0",
    "timestamp": "0x0",
    "extraData": "0x68697665636861696e",
-   "gasLimit": "0x11e1a300",
+   "gasLimit": "0x23f3e20",
    "difficulty": "0x20000",
    "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "coinbase": "0x0000000000000000000000000000000000000000",
@@ -119,10 +119,6 @@
        "balance": "0x1",
        "nonce": "0x1"
    },
-   "8dcd17433742f4c0ca53122ab541d0ba67fc27ff": {
-       "code": "0x6202e6306000a0",
-       "balance": "0x0"
-   },
    "c7b99a164efd027a93f147376cc7da7c67c6bbe0": {
        "balance": "0xc097ce7bc90715b34b9f1000000000"
    },
@@ -1,24 +1,24 @@
 {
-  "parentHash": "0x7e80093a491eba0e5b2c1895837902f64f514100221801318fe391e1e09c96a6",
+  "parentHash": "0x65151b101682b54cd08ba226f640c14c86176865ff9bfc57e0147dadaeac34bb",
   "sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
   "miner": "0x0000000000000000000000000000000000000000",
-  "stateRoot": "0x8fcfb02cfca007773bd55bc1c3e50a3c8612a59c87ce057e5957e8bf17c1728b",
+  "stateRoot": "0xce423ebc60fc7764a43f09f1fe3ae61eef25e3eb8d09b1108f7e7eb77dfff5e6",
-  "transactionsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
+  "transactionsRoot": "0x7ec1ae3989efa75d7bcc766e5e2443afa8a89a5fda42ebba90050e7e702980f7",
-  "receiptsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
+  "receiptsRoot": "0xfe160832b1ca85f38c6674cb0aae3a24693bc49be56e2ecdf3698b71a794de86",
   "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
   "difficulty": "0x0",
   "number": "0x258",
-  "gasLimit": "0x11e1a300",
+  "gasLimit": "0x23f3e20",
-  "gasUsed": "0x0",
+  "gasUsed": "0x19d36",
   "timestamp": "0x1770",
   "extraData": "0x",
   "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   "nonce": "0x0000000000000000",
   "baseFeePerGas": "0x7",
-  "withdrawalsRoot": "0x92abfda39de7df7d705c5a8f30386802ad59d31e782a06d5c5b0f9a260056cf0",
+  "withdrawalsRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
   "blobGasUsed": "0x0",
   "excessBlobGas": "0x0",
   "parentBeaconBlockRoot": "0xf5003fc8f92358e790a114bce93ce1d9c283c85e1787f8d7d56714d3489b49e6",
   "requestsHash": "0xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
-  "hash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
+  "hash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0"
 }
@@ -4,9 +4,9 @@
   "method": "engine_forkchoiceUpdatedV3",
   "params": [
     {
-      "headBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
-      "safeBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a",
-      "finalizedBlockHash": "0x44e3809c9a3cda717f00aea3a9da336d149612c8d5657fbc0028176ef8d94d2a"
+      "headBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0",
+      "safeBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0",
+      "finalizedBlockHash": "0xce8d86ba17a2ec303155f0e264c58a4b8f94ce3436274cf1924f91acdb7502d0"
     },
     null
   ]
 4220	cmd/devp2p/internal/ethtest/testdata/headstate.json (vendored): file diff suppressed because it is too large
10313	cmd/devp2p/internal/ethtest/testdata/newpayload.json (vendored): file diff suppressed because it is too large
 2653	cmd/devp2p/internal/ethtest/testdata/txinfo.json (vendored): file diff suppressed because it is too large
@@ -52,7 +52,7 @@ func (s *Suite) AllTests() []utesting.Test {
 		{Name: "Ping", Fn: s.TestPing},
 		{Name: "PingLargeRequestID", Fn: s.TestPingLargeRequestID},
 		{Name: "PingMultiIP", Fn: s.TestPingMultiIP},
-		{Name: "HandshakeResend", Fn: s.TestHandshakeResend},
+		{Name: "PingHandshakeInterrupted", Fn: s.TestPingHandshakeInterrupted},
 		{Name: "TalkRequest", Fn: s.TestTalkRequest},
 		{Name: "FindnodeZeroDistance", Fn: s.TestFindnodeZeroDistance},
 		{Name: "FindnodeResults", Fn: s.TestFindnodeResults},
@@ -158,20 +158,22 @@ the attempt from a different IP.`)
 	}
 }
 
-// TestHandshakeResend starts a handshake, but doesn't finish it and sends a second ordinary message
-// packet instead of a handshake message packet. The remote node should repeat the previous WHOAREYOU
-// challenge for the first PING.
-func (s *Suite) TestHandshakeResend(t *utesting.T) {
+// TestPingHandshakeInterrupted starts a handshake, but doesn't finish it and sends a second ordinary message
+// packet instead of a handshake message packet. The remote node should respond with
+// another WHOAREYOU challenge for the second packet.
+func (s *Suite) TestPingHandshakeInterrupted(t *utesting.T) {
+	t.Log(`TestPingHandshakeInterrupted starts a handshake, but doesn't finish it and sends a second ordinary message
+packet instead of a handshake message packet. The remote node should respond with
+another WHOAREYOU challenge for the second packet.`)
+
 	conn, l1 := s.listen1(t)
 	defer conn.close()
 
 	// First PING triggers challenge.
 	ping := &v5wire.Ping{ReqID: conn.nextReqID()}
 	conn.write(l1, ping, nil)
-	var challenge1 *v5wire.Whoareyou
 	switch resp := conn.read(l1).(type) {
 	case *v5wire.Whoareyou:
-		challenge1 = resp
 		t.Logf("got WHOAREYOU for PING")
 	default:
 		t.Fatal("expected WHOAREYOU, got", resp)
@@ -179,16 +181,9 @@ func (s *Suite) TestHandshakeResend(t *utesting.T) {
 
 	// Send second PING.
 	ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
-	conn.write(l1, ping2, nil)
-	switch resp := conn.read(l1).(type) {
-	case *v5wire.Whoareyou:
-		if resp.Nonce != challenge1.Nonce {
-			t.Fatalf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], challenge1.Nonce[:])
-		}
-		if !bytes.Equal(resp.ChallengeData, challenge1.ChallengeData) {
-			t.Fatalf("wrong ChallengeData %x in resent WHOAREYOU (want %x)", resp.ChallengeData, challenge1.ChallengeData)
-		}
-		resp.Node = conn.remote
+	switch resp := conn.reqresp(l1, ping2).(type) {
+	case *v5wire.Pong:
+		checkPong(t, resp, ping2, l1)
 	default:
 		t.Fatal("expected WHOAREYOU, got", resp)
 	}
@@ -218,15 +218,11 @@ func (tc *conn) read(c net.PacketConn) v5wire.Packet {
 	if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
 		return &readError{err}
 	}
-	n, _, err := c.ReadFrom(buf)
+	n, fromAddr, err := c.ReadFrom(buf)
 	if err != nil {
 		return &readError{err}
 	}
-	// Always use tc.remoteAddr for session lookup. The actual source address of
-	// the packet may differ from tc.remoteAddr when the remote node is reachable
-	// via multiple networks (e.g. Docker bridge vs. overlay), but the codec's
-	// session cache is keyed by the address used during Encode.
-	_, _, p, err := tc.codec.Decode(buf[:n], tc.remoteAddr.String())
+	_, _, p, err := tc.codec.Decode(buf[:n], fromAddr.String())
 	if err != nil {
 		return &readError{err}
 	}
@@ -56,7 +56,6 @@ type header struct {
 	BlobGasUsed           *uint64      `json:"blobGasUsed" rlp:"optional"`
 	ExcessBlobGas         *uint64      `json:"excessBlobGas" rlp:"optional"`
 	ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot" rlp:"optional"`
-	SlotNumber            *uint64      `json:"slotNumber" rlp:"optional"`
 }
 
 type headerMarshaling struct {
@@ -69,7 +68,6 @@ type headerMarshaling struct {
 	BaseFee       *math.HexOrDecimal256
 	BlobGasUsed   *math.HexOrDecimal64
 	ExcessBlobGas *math.HexOrDecimal64
-	SlotNumber    *math.HexOrDecimal64
 }
 
 type bbInput struct {
@@ -138,7 +136,6 @@ func (i *bbInput) ToBlock() *types.Block {
 		BlobGasUsed:      i.Header.BlobGasUsed,
 		ExcessBlobGas:    i.Header.ExcessBlobGas,
 		ParentBeaconRoot: i.Header.ParentBeaconBlockRoot,
-		SlotNumber:       i.Header.SlotNumber,
 	}
 
 	// Fill optional values.
@@ -102,7 +102,6 @@ type stEnv struct {
 	ParentExcessBlobGas   *uint64      `json:"parentExcessBlobGas,omitempty"`
 	ParentBlobGasUsed     *uint64      `json:"parentBlobGasUsed,omitempty"`
 	ParentBeaconBlockRoot *common.Hash `json:"parentBeaconBlockRoot"`
-	SlotNumber            *uint64      `json:"slotNumber"`
 }
 
 type stEnvMarshaling struct {
@@ -121,7 +120,6 @@ type stEnvMarshaling struct {
 	ExcessBlobGas       *math.HexOrDecimal64
 	ParentExcessBlobGas *math.HexOrDecimal64
 	ParentBlobGasUsed   *math.HexOrDecimal64
-	SlotNumber          *math.HexOrDecimal64
 }
 
 type rejectedTx struct {
@@ -149,13 +147,15 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
 		isEIP4762   = chainConfig.IsVerkle(big.NewInt(int64(pre.Env.Number)), pre.Env.Timestamp)
 		statedb     = MakePreState(rawdb.NewMemoryDatabase(), pre.Pre, isEIP4762)
 		signer      = types.MakeSigner(chainConfig, new(big.Int).SetUint64(pre.Env.Number), pre.Env.Timestamp)
-		gaspool     = core.NewGasPool(pre.Env.GasLimit)
+		gaspool     = new(core.GasPool)
 		blockHash   = common.Hash{0x13, 0x37}
 		rejectedTxs []*rejectedTx
 		includedTxs types.Transactions
+		gasUsed     = uint64(0)
 		blobGasUsed = uint64(0)
 		receipts    = make(types.Receipts, 0)
 	)
+	gaspool.AddGas(pre.Env.GasLimit)
 	vmContext := vm.BlockContext{
 		CanTransfer: core.CanTransfer,
 		Transfer:    core.Transfer,
@@ -195,7 +195,6 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
 		ExcessBlobGas: pre.Env.ParentExcessBlobGas,
 		BlobGasUsed:   pre.Env.ParentBlobGasUsed,
 		BaseFee:       pre.Env.ParentBaseFee,
-		SlotNumber:    pre.Env.SlotNumber,
 	}
 	header := &types.Header{
 		Time: pre.Env.Timestamp,
@@ -256,19 +255,16 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
 		statedb.SetTxContext(tx.Hash(), len(receipts))
 		var (
 			snapshot = statedb.Snapshot()
-			gp       = gaspool.Snapshot()
+			prevGas  = gaspool.Gas()
 		)
-		receipt, err := core.ApplyTransactionWithEVM(msg, gaspool, statedb, vmContext.BlockNumber, blockHash, pre.Env.Timestamp, tx, evm)
+		receipt, err := core.ApplyTransactionWithEVM(msg, gaspool, statedb, vmContext.BlockNumber, blockHash, pre.Env.Timestamp, tx, &gasUsed, evm)
 		if err != nil {
 			statedb.RevertToSnapshot(snapshot)
 			log.Info("rejected tx", "index", i, "hash", tx.Hash(), "from", msg.From, "error", err)
 			rejectedTxs = append(rejectedTxs, &rejectedTx{i, err.Error()})
-			gaspool.Set(gp)
+			gaspool.SetGas(prevGas)
 			continue
 		}
-		if receipt.Logs == nil {
-			receipt.Logs = []*types.Log{}
-		}
 		includedTxs = append(includedTxs, tx)
 		if hashError != nil {
 			return nil, nil, nil, NewError(ErrorMissingBlockhash, hashError)
@@ -350,7 +346,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
 		Receipts:   receipts,
 		Rejected:   rejectedTxs,
 		Difficulty: (*math.HexOrDecimal256)(vmContext.Difficulty),
-		GasUsed:    (math.HexOrDecimal64)(gaspool.Used()),
+		GasUsed:    (math.HexOrDecimal64)(gasUsed),
 		BaseFee:    (*math.HexOrDecimal256)(vmContext.BaseFee),
 	}
 	if pre.Env.Withdrawals != nil {
@@ -365,6 +361,10 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
 		// Set requestsHash on block.
 		h := types.CalcRequestsHash(requests)
 		execRs.RequestsHash = &h
+		for i := range requests {
+			// remove prefix
+			requests[i] = requests[i][1:]
+		}
 		execRs.Requests = requests
 	}
@@ -56,35 +56,27 @@ func (l *fileWritingTracer) Write(p []byte) (n int, err error) {
 	return n, nil
 }
 
-// newFileWriter creates a tracer which wraps inner hooks (typically a logger),
+// newFileWriter creates a set of hooks which wraps inner hooks (typically a logger),
 // and writes the output to a file, one file per transaction.
-func newFileWriter(baseDir string, innerFn func(out io.Writer) *tracing.Hooks) *tracers.Tracer {
+func newFileWriter(baseDir string, innerFn func(out io.Writer) *tracing.Hooks) *tracing.Hooks {
 	t := &fileWritingTracer{
 		baseDir: baseDir,
 		suffix:  "jsonl",
 	}
 	t.inner = innerFn(t) // instantiate the inner tracer
-	return &tracers.Tracer{
-		Hooks:     t.hooks(),
-		GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
-		Stop:      func(err error) {},
-	}
+	return t.hooks()
 }
 
-// newResultWriter creates a tracer that wraps and invokes an underlying tracer,
+// newResultWriter creates a set of hooks that wraps and invokes an underlying tracer,
 // and writes the result (getResult-output) to file, one per transaction.
-func newResultWriter(baseDir string, tracer *tracers.Tracer) *tracers.Tracer {
+func newResultWriter(baseDir string, tracer *tracers.Tracer) *tracing.Hooks {
 	t := &fileWritingTracer{
 		baseDir:   baseDir,
 		getResult: tracer.GetResult,
 		inner:     tracer.Hooks,
 		suffix:    "json",
 	}
-	return &tracers.Tracer{
-		Hooks:     t.hooks(),
-		GetResult: func() (json.RawMessage, error) { return json.RawMessage("{}"), nil },
-		Stop:      func(err error) {},
-	}
+	return t.hooks()
 }
 
 // OnTxStart creates a new output-file specific for this transaction, and invokes
@@ -162,11 +162,6 @@ var (
 			strings.Join(vm.ActivateableEips(), ", ")),
 		Value: "GrayGlacier",
 	}
-	OpcodeCountFlag = &cli.StringFlag{
-		Name:  "opcode.count",
-		Usage: "If set, opcode execution counts will be written to this file (relative to output.basedir).",
-		Value: "",
-	}
 	VerbosityFlag = &cli.IntFlag{
 		Name:  "verbosity",
 		Usage: "sets the verbosity level",
@@ -38,7 +38,6 @@ func (h header) MarshalJSON() ([]byte, error) {
 		BlobGasUsed           *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
 		ExcessBlobGas         *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
 		ParentBeaconBlockRoot *common.Hash         `json:"parentBeaconBlockRoot" rlp:"optional"`
-		SlotNumber            *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
 	}
 	var enc header
 	enc.ParentHash = h.ParentHash
@@ -61,7 +60,6 @@ func (h header) MarshalJSON() ([]byte, error) {
 	enc.BlobGasUsed = (*math.HexOrDecimal64)(h.BlobGasUsed)
 	enc.ExcessBlobGas = (*math.HexOrDecimal64)(h.ExcessBlobGas)
 	enc.ParentBeaconBlockRoot = h.ParentBeaconBlockRoot
-	enc.SlotNumber = (*math.HexOrDecimal64)(h.SlotNumber)
 	return json.Marshal(&enc)
 }
 
@@ -88,7 +86,6 @@ func (h *header) UnmarshalJSON(input []byte) error {
 		BlobGasUsed           *math.HexOrDecimal64 `json:"blobGasUsed" rlp:"optional"`
 		ExcessBlobGas         *math.HexOrDecimal64 `json:"excessBlobGas" rlp:"optional"`
 		ParentBeaconBlockRoot *common.Hash         `json:"parentBeaconBlockRoot" rlp:"optional"`
-		SlotNumber            *math.HexOrDecimal64 `json:"slotNumber" rlp:"optional"`
 	}
 	var dec header
 	if err := json.Unmarshal(input, &dec); err != nil {
@@ -158,8 +155,5 @@ func (h *header) UnmarshalJSON(input []byte) error {
 	if dec.ParentBeaconBlockRoot != nil {
 		h.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
 	}
-	if dec.SlotNumber != nil {
-		h.SlotNumber = (*uint64)(dec.SlotNumber)
-	}
 	return nil
 }
@@ -37,7 +37,6 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
 		ParentExcessBlobGas   *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
 		ParentBlobGasUsed     *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
 		ParentBeaconBlockRoot *common.Hash         `json:"parentBeaconBlockRoot"`
-		SlotNumber            *math.HexOrDecimal64 `json:"slotNumber"`
 	}
 	var enc stEnv
 	enc.Coinbase = common.UnprefixedAddress(s.Coinbase)
@@ -60,7 +59,6 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
 	enc.ParentExcessBlobGas = (*math.HexOrDecimal64)(s.ParentExcessBlobGas)
 	enc.ParentBlobGasUsed = (*math.HexOrDecimal64)(s.ParentBlobGasUsed)
 	enc.ParentBeaconBlockRoot = s.ParentBeaconBlockRoot
-	enc.SlotNumber = (*math.HexOrDecimal64)(s.SlotNumber)
 	return json.Marshal(&enc)
 }
 
@@ -87,7 +85,6 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
 		ParentExcessBlobGas   *math.HexOrDecimal64 `json:"parentExcessBlobGas,omitempty"`
 		ParentBlobGasUsed     *math.HexOrDecimal64 `json:"parentBlobGasUsed,omitempty"`
 		ParentBeaconBlockRoot *common.Hash         `json:"parentBeaconBlockRoot"`
-		SlotNumber            *math.HexOrDecimal64 `json:"slotNumber"`
 	}
 	var dec stEnv
 	if err := json.Unmarshal(input, &dec); err != nil {
@@ -157,8 +154,5 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
 	if dec.ParentBeaconBlockRoot != nil {
 		s.ParentBeaconBlockRoot = dec.ParentBeaconBlockRoot
 	}
-	if dec.SlotNumber != nil {
-		s.SlotNumber = (*uint64)(dec.SlotNumber)
-	}
 	return nil
 }
@@ -27,9 +27,7 @@ import (
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/common/hexutil"
 	"github.com/ethereum/go-ethereum/core"
-
 	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/core/vm"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/tests"
@@ -179,12 +177,9 @@ func Transaction(ctx *cli.Context) error {
 		r.Error = errors.New("gas * maxFeePerGas exceeds 256 bits")
 	}
 	// Check whether the init code size has been exceeded.
-	if tx.To() == nil {
-		if err := vm.CheckMaxInitCodeSize(&rules, uint64(len(tx.Data()))); err != nil {
-			r.Error = err
-		}
+	if chainConfig.IsShanghai(new(big.Int), 0) && tx.To() == nil && len(tx.Data()) > params.MaxInitCodeSize {
+		r.Error = errors.New("max initcode size exceeded")
 	}
-	}
 
 	if chainConfig.IsOsaka(new(big.Int), 0) && tx.Gas() > params.MaxTxGas {
 		r.Error = errors.New("gas limit exceeds maximum")
 	}
@@ -37,7 +37,6 @@ import (
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/eth/tracers"
 	"github.com/ethereum/go-ethereum/eth/tracers/logger"
-	"github.com/ethereum/go-ethereum/eth/tracers/native"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/tests"
@@ -168,15 +167,14 @@ func Transition(ctx *cli.Context) error {
 	}
 
 	// Configure tracer
-	var tracer *tracers.Tracer
 	if ctx.IsSet(TraceTracerFlag.Name) { // Custom tracing
 		config := json.RawMessage(ctx.String(TraceTracerConfigFlag.Name))
-		innerTracer, err := tracers.DefaultDirectory.New(ctx.String(TraceTracerFlag.Name),
+		tracer, err := tracers.DefaultDirectory.New(ctx.String(TraceTracerFlag.Name),
 			nil, config, chainConfig)
 		if err != nil {
 			return NewError(ErrorConfig, fmt.Errorf("failed instantiating tracer: %v", err))
 		}
-		tracer = newResultWriter(baseDir, innerTracer)
+		vmConfig.Tracer = newResultWriter(baseDir, tracer)
 	} else if ctx.Bool(TraceFlag.Name) { // JSON opcode tracing
 		logConfig := &logger.Config{
 			DisableStack: ctx.Bool(TraceDisableStackFlag.Name),
@@ -184,45 +182,20 @@ func Transition(ctx *cli.Context) error {
 			EnableReturnData: ctx.Bool(TraceEnableReturnDataFlag.Name),
 		}
 		if ctx.Bool(TraceEnableCallFramesFlag.Name) {
-			tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
+			vmConfig.Tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
 				return logger.NewJSONLoggerWithCallFrames(logConfig, out)
 			})
 		} else {
-			tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
+			vmConfig.Tracer = newFileWriter(baseDir, func(out io.Writer) *tracing.Hooks {
 				return logger.NewJSONLogger(logConfig, out)
 			})
 		}
 	}
-	// Configure opcode counter
-	var opcodeTracer *tracers.Tracer
-	if ctx.IsSet(OpcodeCountFlag.Name) && ctx.String(OpcodeCountFlag.Name) != "" {
-		opcodeTracer = native.NewOpcodeCounter()
-		if tracer != nil {
-			// If we have an existing tracer, multiplex with the opcode tracer
-			mux, _ := native.NewMuxTracer([]string{"trace", "opcode"}, []*tracers.Tracer{tracer, opcodeTracer})
-			vmConfig.Tracer = mux.Hooks
-		} else {
-			vmConfig.Tracer = opcodeTracer.Hooks
-		}
-	} else if tracer != nil {
-		vmConfig.Tracer = tracer.Hooks
-	}
 	// Run the test and aggregate the result
 	s, result, body, err := prestate.Apply(vmConfig, chainConfig, txIt, ctx.Int64(RewardFlag.Name))
 	if err != nil {
 		return err
 	}
-	// Write opcode counts if enabled
-	if opcodeTracer != nil {
-		fname := ctx.String(OpcodeCountFlag.Name)
-		counts, err := opcodeTracer.GetResult()
-		if err != nil {
-			return NewError(ErrorJson, fmt.Errorf("failed getting opcode counts: %v", err))
-		}
-		if err := saveFile(baseDir, fname, counts); err != nil {
-			return err
-		}
-	}
 	// Dump the execution result
 	var (
 		collector = make(Alloc)
@ -161,7 +161,6 @@ var (
|
||||||
t8ntool.ForknameFlag,
|
t8ntool.ForknameFlag,
|
||||||
t8ntool.ChainIDFlag,
|
t8ntool.ChainIDFlag,
|
||||||
t8ntool.RewardFlag,
|
t8ntool.RewardFlag,
|
||||||
t8ntool.OpcodeCountFlag,
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
||||||
2 cmd/evm/testdata/1/exp.json vendored
@@ -24,7 +24,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
4 cmd/evm/testdata/13/exp2.json vendored
@@ -12,7 +12,7 @@
 "status": "0x0",
 "cumulativeGasUsed": "0x84d0",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x84d0",
@@ -27,7 +27,7 @@
 "status": "0x0",
 "cumulativeGasUsed": "0x109a0",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x84d0",
2 cmd/evm/testdata/23/exp.json vendored
@@ -11,7 +11,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x520b",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x520b",
4 cmd/evm/testdata/24/exp.json vendored
@@ -27,7 +27,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa861",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0xa861",
@@ -41,7 +41,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x10306",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x16b1d912f1d664f3f60f4e1b5f296f3c82a64a1a253117b4851d18bc03c4f1da",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5aa5",
2 cmd/evm/testdata/25/exp.json vendored
@@ -23,7 +23,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x92ea4a28224d033afb20e0cc2b290d4c7c2d61f6a4800a680e4e19ac962ee941",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
2 cmd/evm/testdata/28/exp.json vendored
@@ -28,7 +28,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa865",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x7508d7139d002a4b3a26a4f12dec0d87cb46075c78bf77a38b569a133b509262",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0xa865",
2 cmd/evm/testdata/29/exp.json vendored
@@ -26,7 +26,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x84f70aba406a55628a0620f26d260f90aeb6ccc55fed6ec2ac13dd4f727032ed",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
2 cmd/evm/testdata/3/exp.json vendored
@@ -24,7 +24,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x521f",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x521f",
4 cmd/evm/testdata/30/exp.json vendored
@@ -25,7 +25,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0x5208",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0xa98a24882ea90916c6a86da650fbc6b14238e46f0af04a131ce92be897507476",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
@@ -40,7 +40,7 @@
 "status": "0x1",
 "cumulativeGasUsed": "0xa410",
 "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-"logs": [],
+"logs": null,
 "transactionHash": "0x36bad80acce7040c45fd32764b5c2b2d2e6f778669fb41791f73f546d56e739a",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x5208",
2 cmd/evm/testdata/33/exp.json vendored
@@ -44,7 +44,7 @@
 "root": "0x",
 "status": "0x1",
 "cumulativeGasUsed": "0x15fa9",
-"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","logs": [],"transactionHash": "0x0417aab7c1d8a3989190c3167c132876ce9b8afd99262c5a0f9d06802de3d7ef",
+"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","logs": null,"transactionHash": "0x0417aab7c1d8a3989190c3167c132876ce9b8afd99262c5a0f9d06802de3d7ef",
 "contractAddress": "0x0000000000000000000000000000000000000000",
 "gasUsed": "0x15fa9",
 "effectiveGasPrice": null,
@@ -1,177 +0,0 @@
-// Copyright 2026 The go-ethereum Authors
-// This file is part of go-ethereum.
-//
-// go-ethereum is free software: you can redistribute it and/or modify
-// it under the terms of the GNU General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// go-ethereum is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
-
-// fetchpayload queries an Ethereum node over RPC, fetches a block and its
-// execution witness, and writes the combined Payload (ChainID + Block +
-// Witness) to disk in the format consumed by cmd/keeper.
-package main
-
-import (
-	"context"
-	"encoding/json"
-	"flag"
-	"fmt"
-	"math/big"
-	"os"
-	"path/filepath"
-	"strings"
-	"time"
-
-	"github.com/ethereum/go-ethereum/common/hexutil"
-	"github.com/ethereum/go-ethereum/core/stateless"
-	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/ethclient"
-	"github.com/ethereum/go-ethereum/rlp"
-	"github.com/ethereum/go-ethereum/rpc"
-)
-
-// Payload is duplicated from cmd/keeper/main.go (package main, not importable).
-type Payload struct {
-	ChainID uint64
-	Block   *types.Block
-	Witness *stateless.Witness
-}
-
-func main() {
-	var (
-		rpcURL   = flag.String("rpc", "http://localhost:8545", "RPC endpoint URL")
-		blockArg = flag.String("block", "latest", `Block number: decimal, 0x-hex, or "latest"`)
-		format   = flag.String("format", "rlp", "Comma-separated output formats: rlp, hex, json")
-		outDir   = flag.String("out", "", "Output directory (default: current directory)")
-	)
-	flag.Parse()
-
-	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
-	defer cancel()
-
-	// Parse block number (nil means "latest" in ethclient).
-	blockNum, err := parseBlockNumber(*blockArg)
-	if err != nil {
-		fatal("invalid block number %q: %v", *blockArg, err)
-	}
-
-	// Connect to the node.
-	client, err := ethclient.DialContext(ctx, *rpcURL)
-	if err != nil {
-		fatal("failed to connect to %s: %v", *rpcURL, err)
-	}
-	defer client.Close()
-
-	chainID, err := client.ChainID(ctx)
-	if err != nil {
-		fatal("failed to get chain ID: %v", err)
-	}
-
-	// Fetch the block first so we have a concrete number for the witness call,
-	// avoiding a race where "latest" advances between the two RPCs.
-	block, err := client.BlockByNumber(ctx, blockNum)
-	if err != nil {
-		fatal("failed to fetch block: %v", err)
-	}
-	fmt.Printf("Fetched block %d (%#x)\n", block.NumberU64(), block.Hash())
-
-	// Fetch the execution witness via the debug namespace.
-	var extWitness stateless.ExtWitness
-	err = client.Client().CallContext(ctx, &extWitness, "debug_executionWitness", rpc.BlockNumber(block.NumberU64()))
-	if err != nil {
-		fatal("failed to fetch execution witness: %v", err)
-	}
-
-	witness := new(stateless.Witness)
-	err = witness.FromExtWitness(&extWitness)
-	if err != nil {
-		fatal("failed to convert witness: %v", err)
-	}
-
-	payload := Payload{
-		ChainID: chainID.Uint64(),
-		Block:   block,
-		Witness: witness,
-	}
-
-	// Encode payload as RLP (shared by "rlp" and "hex" formats).
-	rlpBytes, err := rlp.EncodeToBytes(payload)
-	if err != nil {
-		fatal("failed to RLP-encode payload: %v", err)
-	}
-
-	// Write one output file per requested format.
-	blockHex := fmt.Sprintf("%x", block.NumberU64())
-	for f := range strings.SplitSeq(*format, ",") {
-		f = strings.TrimSpace(f)
-		outPath := filepath.Join(*outDir, fmt.Sprintf("%s_payload.%s", blockHex, f))
-
-		var data []byte
-		switch f {
-		case "rlp":
-			data = rlpBytes
-		case "hex":
-			data = []byte(hexutil.Encode(rlpBytes))
-		case "json":
-			data, err = marshalJSONPayload(chainID, block, &extWitness)
-			if err != nil {
-				fatal("failed to JSON-encode payload: %v", err)
-			}
-		default:
-			fatal("unknown format %q (valid: rlp, hex, json)", f)
-		}
-
-		if err := os.WriteFile(outPath, data, 0644); err != nil {
-			fatal("failed to write %s: %v", outPath, err)
-		}
-		fmt.Printf("Wrote %s (%d bytes)\n", outPath, len(data))
-	}
-}
-
-// parseBlockNumber converts a CLI string to *big.Int.
-// Returns nil for "latest" (ethclient convention for the head block).
-func parseBlockNumber(s string) (*big.Int, error) {
-	if strings.EqualFold(s, "latest") {
-		return nil, nil
-	}
-	n := new(big.Int)
-	if strings.HasPrefix(s, "0x") || strings.HasPrefix(s, "0X") {
-		if _, ok := n.SetString(s[2:], 16); !ok {
-			return nil, fmt.Errorf("invalid hex number")
-		}
-		return n, nil
-	}
-	if _, ok := n.SetString(s, 10); !ok {
-		return nil, fmt.Errorf("invalid decimal number")
-	}
-	return n, nil
-}
-
-// jsonPayload is a JSON-friendly representation of Payload. It uses ExtWitness
-// instead of the internal Witness (which has no JSON marshaling).
-type jsonPayload struct {
-	ChainID uint64                `json:"chainId"`
-	Block   *types.Block          `json:"block"`
-	Witness *stateless.ExtWitness `json:"witness"`
-}
-
-func marshalJSONPayload(chainID *big.Int, block *types.Block, ext *stateless.ExtWitness) ([]byte, error) {
-	return json.MarshalIndent(jsonPayload{
-		ChainID: chainID.Uint64(),
-		Block:   block,
-		Witness: ext,
-	}, "", "  ")
-}
-
-func fatal(format string, args ...any) {
-	fmt.Fprintf(os.Stderr, format+"\n", args...)
-	os.Exit(1)
-}
@ -111,7 +111,6 @@ if one is set. Otherwise it prints the genesis from the datadir.`,
|
||||||
utils.MetricsInfluxDBUsernameFlag,
|
utils.MetricsInfluxDBUsernameFlag,
|
||||||
utils.MetricsInfluxDBPasswordFlag,
|
utils.MetricsInfluxDBPasswordFlag,
|
||||||
utils.MetricsInfluxDBTagsFlag,
|
utils.MetricsInfluxDBTagsFlag,
|
||||||
utils.MetricsInfluxDBIntervalFlag,
|
|
||||||
utils.MetricsInfluxDBTokenFlag,
|
utils.MetricsInfluxDBTokenFlag,
|
||||||
utils.MetricsInfluxDBBucketFlag,
|
utils.MetricsInfluxDBBucketFlag,
|
||||||
utils.MetricsInfluxDBOrganizationFlag,
|
utils.MetricsInfluxDBOrganizationFlag,
|
||||||
|
|
@ -208,19 +207,13 @@ This command dumps out the state for a given block (or latest, if none provided)
 	pruneHistoryCommand = &cli.Command{
 		Action:    pruneHistory,
 		Name:      "prune-history",
-		Usage:     "Prune blockchain history (block bodies and receipts) up to a specified point",
+		Usage:     "Prune blockchain history (block bodies and receipts) up to the merge block",
 		ArgsUsage: "",
-		Flags: slices.Concat(utils.DatabaseFlags, []cli.Flag{
-			utils.ChainHistoryFlag,
-		}),
+		Flags:     utils.DatabaseFlags,
 		Description: `
 The prune-history command removes historical block bodies and receipts from the
-blockchain database up to a specified point, while preserving block headers. This
-helps reduce storage requirements for nodes that don't need full historical data.
-
-The --history.chain flag is required to specify the pruning target:
-- postmerge: Prune up to the merge block. The node will keep the merge block and everything thereafter.
-- postprague: Prune up to the Prague (Pectra) upgrade block. The node will keep the prague block and everything thereafter.`,
+blockchain database up to the merge block, while preserving block headers. This
+helps reduce storage requirements for nodes that don't need full historical data.`,
 	}

 	downloadEraCommand = &cli.Command{
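On the master side of this hunk, pruning is driven by a `--history.chain` mode ("postmerge" or "postprague") that is parsed via `UnmarshalText`, with "all" rejected as a prune target. That flag-parsing pattern can be sketched standalone roughly like this (the type and names below are illustrative stand-ins, not go-ethereum's actual `history` package):

```go
package main

import (
	"errors"
	"fmt"
)

// HistoryMode is an illustrative stand-in for a text-parsed pruning mode.
type HistoryMode int

const (
	KeepAll HistoryMode = iota
	KeepPostMerge
	KeepPostPrague
)

// UnmarshalText maps --history.chain flag values onto modes.
func (m *HistoryMode) UnmarshalText(text []byte) error {
	switch string(text) {
	case "all":
		*m = KeepAll
	case "postmerge":
		*m = KeepPostMerge
	case "postprague":
		*m = KeepPostPrague
	default:
		return fmt.Errorf("unknown history mode %q", text)
	}
	return nil
}

// validateForPruning rejects modes that make no sense as a prune target.
func validateForPruning(m HistoryMode) error {
	if m == KeepAll {
		return errors.New("'all' is not a valid pruning target")
	}
	return nil
}

func main() {
	var m HistoryMode
	if err := m.UnmarshalText([]byte("postmerge")); err != nil {
		panic(err)
	}
	fmt.Println(m == KeepPostMerge)                 // true
	fmt.Println(validateForPruning(KeepAll) != nil) // true
}
```

Implementing `encoding.TextUnmarshaler` this way is what lets a mode type be filled directly from a CLI flag string.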
@@ -709,77 +702,47 @@ func hashish(x string) bool {
 }

 func pruneHistory(ctx *cli.Context) error {
-	// Parse and validate the history mode flag.
-	if !ctx.IsSet(utils.ChainHistoryFlag.Name) {
-		return errors.New("--history.chain flag is required")
-	}
-	var mode history.HistoryMode
-	if err := mode.UnmarshalText([]byte(ctx.String(utils.ChainHistoryFlag.Name))); err != nil {
-		return err
-	}
-	if mode == history.KeepAll {
-		return errors.New("--history.chain=all is not valid for pruning. To restore history, use 'geth import-history'")
-	}
-
 	stack, _ := makeConfigNode(ctx)
 	defer stack.Close()

-	// Open the chain database.
+	// Open the chain database
 	chain, chaindb := utils.MakeChain(ctx, stack, false)
 	defer chaindb.Close()
 	defer chain.Stop()

-	// Determine the prune point based on the history mode.
-	genesisHash := chain.Genesis().Hash()
-	policy, err := history.NewPolicy(mode, genesisHash)
-	if err != nil {
-		return err
-	}
-	if policy.Target == nil {
-		return fmt.Errorf("prune point for %q not found for this network", mode.String())
+	// Determine the prune point. This will be the first PoS block.
+	prunePoint, ok := history.PrunePoints[chain.Genesis().Hash()]
+	if !ok || prunePoint == nil {
+		return errors.New("prune point not found")
 	}
 	var (
-		targetBlock     = policy.Target.BlockNumber
-		targetBlockHash = policy.Target.BlockHash
+		mergeBlock     = prunePoint.BlockNumber
+		mergeBlockHash = prunePoint.BlockHash.Hex()
 	)

-	// Check the current freezer tail to see if pruning is needed/possible.
-	freezerTail, _ := chaindb.Tail()
-	if freezerTail > 0 {
-		if freezerTail == targetBlock {
-			log.Info("Database already pruned to target block", "tail", freezerTail)
-			return nil
-		}
-		if freezerTail > targetBlock {
-			// Database is pruned beyond the target - can't unprune.
-			return fmt.Errorf("database is already pruned to block %d, which is beyond target %d. Cannot unprune. To restore history, use 'geth import-history'", freezerTail, targetBlock)
-		}
-		// freezerTail < targetBlock: we can prune further, continue below.
-	}
-
-	// Check we're far enough past the target to ensure all data is in freezer.
+	// Check we're far enough past merge to ensure all data is in freezer
 	currentHeader := chain.CurrentHeader()
 	if currentHeader == nil {
 		return errors.New("current header not found")
 	}
-	if currentHeader.Number.Uint64() < targetBlock+params.FullImmutabilityThreshold {
-		return fmt.Errorf("chain not far enough past target block %d, need %d more blocks",
-			targetBlock, targetBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
+	if currentHeader.Number.Uint64() < mergeBlock+params.FullImmutabilityThreshold {
+		return fmt.Errorf("chain not far enough past merge block, need %d more blocks",
+			mergeBlock+params.FullImmutabilityThreshold-currentHeader.Number.Uint64())
 	}

-	// Double-check the target block in db has the expected hash.
-	hash := rawdb.ReadCanonicalHash(chaindb, targetBlock)
-	if hash != targetBlockHash {
-		return fmt.Errorf("target block hash mismatch: got %s, want %s", hash.Hex(), targetBlockHash.Hex())
+	// Double-check the prune block in db has the expected hash.
+	hash := rawdb.ReadCanonicalHash(chaindb, mergeBlock)
+	if hash != common.HexToHash(mergeBlockHash) {
+		return fmt.Errorf("merge block hash mismatch: got %s, want %s", hash.Hex(), mergeBlockHash)
 	}

-	log.Info("Starting history pruning", "head", currentHeader.Number, "target", targetBlock, "targetHash", targetBlockHash.Hex())
+	log.Info("Starting history pruning", "head", currentHeader.Number, "tail", mergeBlock, "tailHash", mergeBlockHash)
 	start := time.Now()
-	rawdb.PruneTransactionIndex(chaindb, targetBlock)
-	if _, err := chaindb.TruncateTail(targetBlock); err != nil {
+	rawdb.PruneTransactionIndex(chaindb, mergeBlock)
+	if _, err := chaindb.TruncateTail(mergeBlock); err != nil {
 		return fmt.Errorf("failed to truncate ancient data: %v", err)
 	}
-	log.Info("History pruning completed", "tail", targetBlock, "elapsed", common.PrettyDuration(time.Since(start)))
+	log.Info("History pruning completed", "tail", mergeBlock, "elapsed", common.PrettyDuration(time.Since(start)))

 	// TODO(s1na): what if there is a crash between the two prune operations?
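The master branch of `pruneHistory` above also inspects the current freezer tail before truncating: a tail equal to the target means there is nothing to do, a tail beyond the target cannot be un-pruned in place, and a lower tail means pruning can proceed. A minimal standalone sketch of that three-way decision (the helper name and return values are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// pruneAction decides what to do given the freezer tail and the prune
// target, mirroring the three-way check on the master side of the hunk.
func pruneAction(freezerTail, targetBlock uint64) (string, error) {
	switch {
	case freezerTail == targetBlock:
		return "noop", nil // already pruned exactly to the target
	case freezerTail > targetBlock:
		// Pruned beyond the target: cannot un-prune in place.
		return "", errors.New("already pruned past target; use import-history to restore")
	default:
		return "truncate", nil // tail < target: prune further
	}
}

func main() {
	a, _ := pruneAction(100, 100)
	fmt.Println(a) // noop
	a, _ = pruneAction(50, 100)
	fmt.Println(a) // truncate
	_, err := pruneAction(150, 100)
	fmt.Println(err != nil) // true
}
```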
@@ -377,9 +377,6 @@ func applyMetricConfig(ctx *cli.Context, cfg *gethConfig) {
 	if ctx.IsSet(utils.MetricsInfluxDBTagsFlag.Name) {
 		cfg.Metrics.InfluxDBTags = ctx.String(utils.MetricsInfluxDBTagsFlag.Name)
 	}
-	if ctx.IsSet(utils.MetricsInfluxDBIntervalFlag.Name) {
-		cfg.Metrics.InfluxDBInterval = ctx.Duration(utils.MetricsInfluxDBIntervalFlag.Name)
-	}
 	if ctx.IsSet(utils.MetricsEnableInfluxDBV2Flag.Name) {
 		cfg.Metrics.EnableInfluxDBV2 = ctx.Bool(utils.MetricsEnableInfluxDBV2Flag.Name)
 	}
@@ -30,7 +30,7 @@ import (
 )

 const (
-	ipcAPIs  = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 testing:1.0 txpool:1.0 web3:1.0"
+	ipcAPIs  = "admin:1.0 debug:1.0 engine:1.0 eth:1.0 miner:1.0 net:1.0 rpc:1.0 txpool:1.0 web3:1.0"
 	httpAPIs = "eth:1.0 net:1.0 rpc:1.0 web3:1.0"
 )
@@ -19,7 +19,6 @@ package main
 import (
 	"bytes"
 	"fmt"
-	"math"
 	"os"
 	"os/signal"
 	"path/filepath"
@@ -38,7 +37,6 @@ import (
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb"
-	"github.com/ethereum/go-ethereum/internal/tablewriter"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
@@ -53,24 +51,7 @@ var (
 	}
 	removeChainDataFlag = &cli.BoolFlag{
 		Name:  "remove.chain",
-		Usage: "If set, selects the chain data for removal",
-	}
-	inspectTrieTopFlag = &cli.IntFlag{
-		Name:  "top",
-		Usage: "Print the top N results per ranking category",
-		Value: 10,
-	}
-	inspectTrieDumpPathFlag = &cli.StringFlag{
-		Name:  "dump-path",
-		Usage: "Path for the trie statistics dump file",
-	}
-	inspectTrieSummarizeFlag = &cli.StringFlag{
-		Name:  "summarize",
-		Usage: "Summarize an existing trie dump file (skip trie traversal)",
-	}
-	inspectTrieContractFlag = &cli.StringFlag{
-		Name:  "contract",
-		Usage: "Inspect only the storage of the given contract address (skips full account trie walk)",
+		Usage: "If set, selects the state data for removal",
 	}

 	removedbCommand = &cli.Command{
@@ -93,7 +74,6 @@ Remove blockchain and state databases`,
 		dbCompactCmd,
 		dbGetCmd,
 		dbDeleteCmd,
-		dbInspectTrieCmd,
 		dbPutCmd,
 		dbGetSlotsCmd,
 		dbDumpFreezerIndex,
@@ -112,22 +92,6 @@ Remove blockchain and state databases`,
 		Usage: "Inspect the storage size for each type of data in the database",
 		Description: `This commands iterates the entire database. If the optional 'prefix' and 'start' arguments are provided, then the iteration is limited to the given subset of data.`,
 	}
-	dbInspectTrieCmd = &cli.Command{
-		Action:    inspectTrie,
-		Name:      "inspect-trie",
-		ArgsUsage: "<blocknum>",
-		Flags: slices.Concat([]cli.Flag{
-			utils.ExcludeStorageFlag,
-			inspectTrieTopFlag,
-			utils.OutputFileFlag,
-			inspectTrieDumpPathFlag,
-			inspectTrieSummarizeFlag,
-			inspectTrieContractFlag,
-		}, utils.NetworkFlags, utils.DatabaseFlags),
-		Usage:       "Print detailed trie information about the structure of account trie and storage tries.",
-		Description: `This commands iterates the entrie trie-backed state. If the 'blocknum' is not specified,
-the latest block number will be used by default.`,
-	}
 	dbCheckStateContentCmd = &cli.Command{
 		Action: checkStateContent,
 		Name:   "check-state-content",
@@ -421,88 +385,6 @@ func checkStateContent(ctx *cli.Context) error {
 	return nil
 }

-func inspectTrie(ctx *cli.Context) error {
-	topN := ctx.Int(inspectTrieTopFlag.Name)
-	if topN <= 0 {
-		return fmt.Errorf("invalid --%s value %d (must be > 0)", inspectTrieTopFlag.Name, topN)
-	}
-	config := &trie.InspectConfig{
-		NoStorage: ctx.Bool(utils.ExcludeStorageFlag.Name),
-		TopN:      topN,
-		Path:      ctx.String(utils.OutputFileFlag.Name),
-	}
-
-	if summarizePath := ctx.String(inspectTrieSummarizeFlag.Name); summarizePath != "" {
-		if ctx.NArg() > 0 {
-			return fmt.Errorf("block number argument is not supported with --%s", inspectTrieSummarizeFlag.Name)
-		}
-		config.DumpPath = summarizePath
-		log.Info("Summarizing trie dump", "path", summarizePath, "top", topN)
-		return trie.Summarize(summarizePath, config)
-	}
-	if ctx.NArg() > 1 {
-		return fmt.Errorf("excessive number of arguments: %v", ctx.Command.ArgsUsage)
-	}
-
-	stack, _ := makeConfigNode(ctx)
-	db := utils.MakeChainDatabase(ctx, stack, false)
-	defer stack.Close()
-	defer db.Close()
-
-	var (
-		trieRoot common.Hash
-		hash     common.Hash
-		number   uint64
-	)
-	switch {
-	case ctx.NArg() == 0 || ctx.Args().Get(0) == "latest":
-		head := rawdb.ReadHeadHeaderHash(db)
-		n, ok := rawdb.ReadHeaderNumber(db, head)
-		if !ok {
-			return fmt.Errorf("could not load head block hash")
-		}
-		number = n
-	case ctx.Args().Get(0) == "snapshot":
-		trieRoot = rawdb.ReadSnapshotRoot(db)
-		number = math.MaxUint64
-	default:
-		var err error
-		number, err = strconv.ParseUint(ctx.Args().Get(0), 10, 64)
-		if err != nil {
-			return fmt.Errorf("failed to parse blocknum, Args[0]: %v, err: %v", ctx.Args().Get(0), err)
-		}
-	}
-
-	if number != math.MaxUint64 {
-		hash = rawdb.ReadCanonicalHash(db, number)
-		if hash == (common.Hash{}) {
-			return fmt.Errorf("canonical hash for block %d not found", number)
-		}
-		blockHeader := rawdb.ReadHeader(db, hash, number)
-		trieRoot = blockHeader.Root
-	}
-	if trieRoot == (common.Hash{}) {
-		log.Error("Empty root hash")
-	}
-
-	config.DumpPath = ctx.String(inspectTrieDumpPathFlag.Name)
-	if config.DumpPath == "" {
-		config.DumpPath = stack.ResolvePath("trie-dump.bin")
-	}
-
-	triedb := utils.MakeTrieDatabase(ctx, stack, db, false, true, false)
-	defer triedb.Close()
-
-	if contractAddr := ctx.String(inspectTrieContractFlag.Name); contractAddr != "" {
-		address := common.HexToAddress(contractAddr)
-		log.Info("Inspecting contract", "address", address, "root", trieRoot, "block", number)
-		return trie.InspectContract(triedb, db, trieRoot, address)
-	}
-
-	log.Info("Inspecting trie", "root", trieRoot, "block", number, "dump", config.DumpPath, "top", topN)
-	return trie.Inspect(triedb, trieRoot, config)
-}
-
 func showDBStats(db ethdb.KeyValueStater) {
 	stats, err := db.Stat()
 	if err != nil {
@@ -877,7 +759,7 @@ func showMetaData(ctx *cli.Context) error {
 		data = append(data, []string{"headHeader.Root", fmt.Sprintf("%v", h.Root)})
 		data = append(data, []string{"headHeader.Number", fmt.Sprintf("%d (%#x)", h.Number, h.Number)})
 	}
-	table := tablewriter.NewWriter(os.Stdout)
+	table := rawdb.NewTableWriter(os.Stdout)
 	table.SetHeader([]string{"Field", "Value"})
 	table.AppendBulk(data)
 	table.Render()
@@ -22,6 +22,7 @@ import (
 	"os"
 	"slices"
 	"sort"
+	"strconv"
 	"time"

 	"github.com/ethereum/go-ethereum/accounts"
@@ -215,7 +216,6 @@
 		utils.MetricsInfluxDBUsernameFlag,
 		utils.MetricsInfluxDBPasswordFlag,
 		utils.MetricsInfluxDBTagsFlag,
-		utils.MetricsInfluxDBIntervalFlag,
 		utils.MetricsEnableInfluxDBV2Flag,
 		utils.MetricsInfluxDBTokenFlag,
 		utils.MetricsInfluxDBBucketFlag,
@@ -315,6 +315,18 @@ func prepare(ctx *cli.Context) {
 	case !ctx.IsSet(utils.NetworkIdFlag.Name):
 		log.Info("Starting Geth on Ethereum mainnet...")
 	}
+	// If we're a full node on mainnet without --cache specified, bump default cache allowance
+	if !ctx.IsSet(utils.CacheFlag.Name) && !ctx.IsSet(utils.NetworkIdFlag.Name) {
+		// Make sure we're not on any supported preconfigured testnet either
+		if !ctx.IsSet(utils.HoleskyFlag.Name) &&
+			!ctx.IsSet(utils.SepoliaFlag.Name) &&
+			!ctx.IsSet(utils.HoodiFlag.Name) &&
+			!ctx.IsSet(utils.DeveloperFlag.Name) {
+			// Nope, we're really on mainnet. Bump that cache up!
+			log.Info("Bumping default cache on mainnet", "provided", ctx.Int(utils.CacheFlag.Name), "updated", 4096)
+			ctx.Set(utils.CacheFlag.Name, strconv.Itoa(4096))
+		}
+	}
 }

 // geth is the main entry point into the system if no special subcommand is run.
@@ -36,7 +36,6 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
-	"github.com/ethereum/go-ethereum/triedb"
 	"github.com/urfave/cli/v2"
 )
@@ -106,9 +105,7 @@ information about the specified address.
 			Usage:     "Traverse the state with given root hash and perform quick verification",
 			ArgsUsage: "<root>",
 			Action:    traverseState,
-			Flags: slices.Concat([]cli.Flag{
-				utils.AccountFlag,
-			}, utils.NetworkFlags, utils.DatabaseFlags),
+			Flags:     slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
 			Description: `
 geth snapshot traverse-state <state-root>
 will traverse the whole state from the given state root and will abort if any
@@ -116,8 +113,6 @@ referenced trie node or contract code is missing. This command can be used for
 state integrity verification. The default checking target is the HEAD state.

 It's also usable without snapshot enabled.
-
-If --account is specified, only the storage trie of that account is traversed.
 `,
 		},
 		{
@@ -125,9 +120,7 @@ If --account is specified, only the storage trie of that account is traversed.
 			Usage:     "Traverse the state with given root hash and perform detailed verification",
 			ArgsUsage: "<root>",
 			Action:    traverseRawState,
-			Flags: slices.Concat([]cli.Flag{
-				utils.AccountFlag,
-			}, utils.NetworkFlags, utils.DatabaseFlags),
+			Flags:     slices.Concat(utils.NetworkFlags, utils.DatabaseFlags),
 			Description: `
 geth snapshot traverse-rawstate <state-root>
 will traverse the whole state from the given root and will abort if any referenced
@@ -136,8 +129,6 @@ verification. The default checking target is the HEAD state. It's basically identical
 to traverse-state, but the check granularity is smaller.

 It's also usable without snapshot enabled.
-
-If --account is specified, only the storage trie of that account is traversed.
 `,
 		},
 		{
@@ -281,120 +272,6 @@ func checkDanglingStorage(ctx *cli.Context) error {
 	return snapshot.CheckDanglingStorage(db)
 }

-// parseAccount parses the account flag value as either an address (20 bytes)
-// or an account hash (32 bytes) and returns the hashed account key.
-func parseAccount(input string) (common.Hash, error) {
-	switch len(input) {
-	case 40, 42: // address
-		return crypto.Keccak256Hash(common.HexToAddress(input).Bytes()), nil
-	case 64, 66: // hash
-		return common.HexToHash(input), nil
-	default:
-		return common.Hash{}, errors.New("malformed account address or hash")
-	}
-}
-
-// lookupAccount resolves the account from the state trie using the given
-// account hash.
-func lookupAccount(accountHash common.Hash, tr *trie.Trie) (*types.StateAccount, error) {
-	accData, err := tr.Get(accountHash.Bytes())
-	if err != nil {
-		return nil, fmt.Errorf("failed to get account %s: %w", accountHash, err)
-	}
-	if accData == nil {
-		return nil, fmt.Errorf("account not found: %s", accountHash)
-	}
-	var acc types.StateAccount
-	if err := rlp.DecodeBytes(accData, &acc); err != nil {
-		return nil, fmt.Errorf("invalid account data %s: %w", accountHash, err)
-	}
-	return &acc, nil
-}
-
-func traverseStorage(id *trie.ID, db *triedb.Database, report bool, detail bool) error {
-	tr, err := trie.NewStateTrie(id, db)
-	if err != nil {
-		log.Error("Failed to open storage trie", "account", id.Owner, "root", id.Root, "err", err)
-		return err
-	}
-	var (
-		slots      int
-		nodes      int
-		lastReport time.Time
-		start      = time.Now()
-	)
-	it, err := tr.NodeIterator(nil)
-	if err != nil {
-		log.Error("Failed to open storage iterator", "account", id.Owner, "root", id.Root, "err", err)
-		return err
-	}
-	logger := log.Debug
-	if report {
-		logger = log.Info
-	}
-	logger("Start traversing storage trie", "account", id.Owner, "storageRoot", id.Root)
-
-	if !detail {
-		iter := trie.NewIterator(it)
-		for iter.Next() {
-			slots += 1
-			if time.Since(lastReport) > time.Second*8 {
-				logger("Traversing storage", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
-				lastReport = time.Now()
-			}
-		}
-		if iter.Err != nil {
-			log.Error("Failed to traverse storage trie", "root", id.Root, "err", iter.Err)
-			return iter.Err
-		}
-		logger("Storage is complete", "account", id.Owner, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
-	} else {
-		reader, err := db.NodeReader(id.StateRoot)
-		if err != nil {
-			log.Error("Failed to open state reader", "err", err)
-			return err
-		}
-		var (
-			buffer = make([]byte, 32)
-			hasher = crypto.NewKeccakState()
-		)
-		for it.Next(true) {
-			nodes += 1
-			node := it.Hash()
-
-			// Check the presence for non-empty hash node(embedded node doesn't
-			// have their own hash).
-			if node != (common.Hash{}) {
-				blob, _ := reader.Node(id.Owner, it.Path(), node)
-				if len(blob) == 0 {
-					log.Error("Missing trie node(storage)", "hash", node)
-					return errors.New("missing storage")
-				}
-				hasher.Reset()
-				hasher.Write(blob)
-				hasher.Read(buffer)
-				if !bytes.Equal(buffer, node.Bytes()) {
-					log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
-					return errors.New("invalid storage node")
-				}
-			}
-			if it.Leaf() {
-				slots += 1
-			}
-			if time.Since(lastReport) > time.Second*8 {
-				logger("Traversing storage", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
-				lastReport = time.Now()
-			}
-		}
-		if err := it.Error(); err != nil {
-			log.Error("Failed to traverse storage trie", "root", id.Root, "err", err)
-			return err
-		}
-		logger("Storage is complete", "account", id.Owner, "nodes", nodes, "slots", slots, "elapsed", common.PrettyDuration(time.Since(start)))
-	}
-	return nil
-}
-
 // traverseState is a helper function used for pruning verification.
 // Basically it just iterates the trie, ensure all nodes and associated
 // contract codes are present.
@@ -432,30 +309,6 @@ func traverseState(ctx *cli.Context) error {
 		root = headBlock.Root()
 		log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
 	}
-	// If --account is specified, only traverse the storage trie of that account.
-	if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
-		accountHash, err := parseAccount(accountStr)
-		if err != nil {
-			log.Error("Failed to parse account", "err", err)
-			return err
-		}
-		// Use raw trie since the account key is already hashed.
-		t, err := trie.New(trie.StateTrieID(root), triedb)
-		if err != nil {
-			log.Error("Failed to open state trie", "root", root, "err", err)
-			return err
-		}
-		acc, err := lookupAccount(accountHash, t)
-		if err != nil {
-			log.Error("Failed to look up account", "hash", accountHash, "err", err)
-			return err
-		}
-		if acc.Root == types.EmptyRootHash {
-			log.Info("Account has no storage", "hash", accountHash)
-			return nil
-		}
-		return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, false)
-	}
 	t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
 	if err != nil {
 		log.Error("Failed to open trie", "root", root, "err", err)
@@ -482,10 +335,30 @@ func traverseState(ctx *cli.Context) error {
 			return err
 		}
 		if acc.Root != types.EmptyRootHash {
-			err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root), triedb, false, false)
+			id := trie.StorageTrieID(root, common.BytesToHash(accIter.Key), acc.Root)
+			storageTrie, err := trie.NewStateTrie(id, triedb)
 			if err != nil {
+				log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
 				return err
 			}
+			storageIt, err := storageTrie.NodeIterator(nil)
+			if err != nil {
+				log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
+				return err
+			}
+			storageIter := trie.NewIterator(storageIt)
+			for storageIter.Next() {
+				slots += 1
+
+				if time.Since(lastReport) > time.Second*8 {
+					log.Info("Traversing state", "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
+					lastReport = time.Now()
+				}
+			}
+			if storageIter.Err != nil {
+				log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Err)
+				return storageIter.Err
+			}
 		}
 		if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
 			if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
@@ -545,30 +418,6 @@ func traverseRawState(ctx *cli.Context) error {
 		root = headBlock.Root()
 		log.Info("Start traversing the state", "root", root, "number", headBlock.NumberU64())
 	}
-	// If --account is specified, only traverse the storage trie of that account.
-	if accountStr := ctx.String(utils.AccountFlag.Name); accountStr != "" {
-		accountHash, err := parseAccount(accountStr)
-		if err != nil {
-			log.Error("Failed to parse account", "err", err)
-			return err
-		}
-		// Use raw trie since the account key is already hashed.
-		t, err := trie.New(trie.StateTrieID(root), triedb)
-		if err != nil {
-			log.Error("Failed to open state trie", "root", root, "err", err)
-			return err
-		}
-		acc, err := lookupAccount(accountHash, t)
-		if err != nil {
-			log.Error("Failed to look up account", "hash", accountHash, "err", err)
-			return err
-		}
-		if acc.Root == types.EmptyRootHash {
-			log.Info("Account has no storage", "hash", accountHash)
-			return nil
-		}
-		return traverseStorage(trie.StorageTrieID(root, accountHash, acc.Root), triedb, true, true)
-	}
 	t, err := trie.NewStateTrie(trie.StateTrieID(root), triedb)
 	if err != nil {
 		log.Error("Failed to open trie", "root", root, "err", err)
@@ -624,10 +473,50 @@ func traverseRawState(ctx *cli.Context) error {
 			return errors.New("invalid account")
 		}
 		if acc.Root != types.EmptyRootHash {
-			err := traverseStorage(trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root), triedb, false, true)
+			id := trie.StorageTrieID(root, common.BytesToHash(accIter.LeafKey()), acc.Root)
+			storageTrie, err := trie.NewStateTrie(id, triedb)
 			if err != nil {
+				log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
+				return errors.New("missing storage trie")
+			}
+			storageIter, err := storageTrie.NodeIterator(nil)
+			if err != nil {
+				log.Error("Failed to open storage iterator", "root", acc.Root, "err", err)
 				return err
 			}
+			for storageIter.Next(true) {
+				nodes += 1
+				node := storageIter.Hash()
+
+				// Check the presence for non-empty hash node(embedded node doesn't
+				// have their own hash).
+				if node != (common.Hash{}) {
+					blob, _ := reader.Node(common.BytesToHash(accIter.LeafKey()), storageIter.Path(), node)
+					if len(blob) == 0 {
+						log.Error("Missing trie node(storage)", "hash", node)
+						return errors.New("missing storage")
+					}
+					hasher.Reset()
+					hasher.Write(blob)
+					hasher.Read(got)
+					if !bytes.Equal(got, node.Bytes()) {
+						log.Error("Invalid trie node(storage)", "hash", node.Hex(), "value", blob)
+						return errors.New("invalid storage node")
+					}
+				}
+				// Bump the counter if it's leaf node.
+				if storageIter.Leaf() {
+					slots += 1
+				}
+				if time.Since(lastReport) > time.Second*8 {
+					log.Info("Traversing state", "nodes", nodes, "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
+					lastReport = time.Now()
+				}
+			}
+			if storageIter.Error() != nil {
+				log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Error())
+				return storageIter.Error()
+			}
 		}
 		if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
|
if !bytes.Equal(acc.CodeHash, types.EmptyCodeHash.Bytes()) {
|
||||||
if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
|
if !rawdb.HasCode(chaindb, common.BytesToHash(acc.CodeHash)) {
|
||||||
|
|
|
||||||
|
|
@@ -15,7 +15,6 @@
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
 
 //go:build example
-// +build example
 
 package main
 
@@ -14,8 +14,8 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
 
-//go:build wasm && !womir
-// +build wasm,!womir
+//go:build wasm
+// +build wasm
 
 package main
 
@@ -1,49 +0,0 @@
-// Copyright 2026 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-//go:build womir
-
-package main
-
-import "unsafe"
-
-// These match the WOMIR guest-io imports (env module).
-// Protocol: __hint_input prepares next item, __hint_buffer reads words.
-// Each item has format: [byte_len_u32_le, ...data_words_padded_to_4bytes]
-//
-//go:wasmimport env __hint_input
-func hintInput()
-
-//go:wasmimport env __hint_buffer
-func hintBuffer(ptr unsafe.Pointer, numWords uint32)
-func readWord() uint32 {
-	var buf [4]byte
-	hintBuffer(unsafe.Pointer(&buf[0]), 1)
-	return uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
-}
-func readBytes() []byte {
-	hintInput()
-	byteLen := readWord()
-	numWords := (byteLen + 3) / 4
-	data := make([]byte, numWords*4)
-	hintBuffer(unsafe.Pointer(&data[0]), numWords)
-	return data[:byteLen]
-}
-
-// getInput reads the RLP-encoded Payload from the WOMIR hint stream.
-func getInput() []byte {
-	return readBytes()
-}
@@ -13,11 +13,11 @@ require (
 	github.com/bits-and-blooms/bitset v1.20.0 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/consensys/gnark-crypto v0.18.1 // indirect
-	github.com/crate-crypto/go-eth-kzg v1.5.0 // indirect
+	github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
 	github.com/deckarep/golang-set/v2 v2.6.0 // indirect
 	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
 	github.com/emicklei/dot v1.6.2 // indirect
-	github.com/ethereum/c-kzg-4844/v2 v2.1.6 // indirect
+	github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
 	github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab // indirect
 	github.com/ferranbt/fastssz v0.1.4 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
@@ -31,16 +31,16 @@ require (
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/mitchellh/mapstructure v1.4.1 // indirect
 	github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible // indirect
-	github.com/supranational/blst v0.3.16 // indirect
+	github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe // indirect
 	github.com/tklauser/go-sysconf v0.3.12 // indirect
 	github.com/tklauser/numcpus v0.6.1 // indirect
 	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
-	go.opentelemetry.io/otel v1.40.0 // indirect
-	go.opentelemetry.io/otel/metric v1.40.0 // indirect
-	go.opentelemetry.io/otel/trace v1.40.0 // indirect
+	go.opentelemetry.io/otel v1.39.0 // indirect
+	go.opentelemetry.io/otel/metric v1.39.0 // indirect
+	go.opentelemetry.io/otel/trace v1.39.0 // indirect
 	golang.org/x/crypto v0.44.0 // indirect
 	golang.org/x/sync v0.18.0 // indirect
-	golang.org/x/sys v0.40.0 // indirect
+	golang.org/x/sys v0.39.0 // indirect
 	gopkg.in/yaml.v2 v2.4.0 // indirect
 )
 
@@ -28,8 +28,8 @@ github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAK
 github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
 github.com/consensys/gnark-crypto v0.18.1 h1:RyLV6UhPRoYYzaFnPQA4qK3DyuDgkTgskDdoGqFt3fI=
 github.com/consensys/gnark-crypto v0.18.1/go.mod h1:L3mXGFTe1ZN+RSJ+CLjUt9x7PNdx8ubaYfDROyp2Z8c=
-github.com/crate-crypto/go-eth-kzg v1.5.0 h1:FYRiJMJG2iv+2Dy3fi14SVGjcPteZ5HAAUe4YWlJygc=
-github.com/crate-crypto/go-eth-kzg v1.5.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
+github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
+github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/deckarep/golang-set/v2 v2.6.0 h1:XfcQbWM1LlMB8BsJ8N9vW5ehnnPVIw0je80NsVHagjM=
@@ -40,8 +40,8 @@ github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 h1:YLtO71vCjJRCBcrPMtQ9nqBsqpA1
 github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
 github.com/emicklei/dot v1.6.2 h1:08GN+DD79cy/tzN6uLCT84+2Wk9u+wvqP+Hkx/dIR8A=
 github.com/emicklei/dot v1.6.2/go.mod h1:DeV7GvQtIw4h2u73RKBkkFdvVAz0D9fzeJrgPW6gy/s=
-github.com/ethereum/c-kzg-4844/v2 v2.1.6 h1:xQymkKCT5E2Jiaoqf3v4wsNgjZLY0lRSkZn27fRjSls=
-github.com/ethereum/c-kzg-4844/v2 v2.1.6/go.mod h1:8HMkUZ5JRv4hpw/XUrYWSQNAUzhHMg2UDb/U+5m+XNw=
+github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
+github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
 github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab h1:rvv6MJhy07IMfEKuARQ9TKojGqLVNxQajaXEp/BoqSk=
 github.com/ethereum/go-bigmodexpfix v0.0.0-20250911101455-f9e208c548ab/go.mod h1:IuLm4IsPipXKF7CW5Lzf68PIbZ5yl7FFd74l/E0o9A8=
 github.com/ferranbt/fastssz v0.1.4 h1:OCDB+dYDEQDvAgtAGnTSidK1Pe2tW3nFV40XyMkTeDY=
@@ -111,22 +111,22 @@ github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible h1
 github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
-github.com/supranational/blst v0.3.16 h1:bTDadT+3fK497EvLdWRQEjiGnUtzJ7jjIUMF0jqwYhE=
-github.com/supranational/blst v0.3.16/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
+github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe h1:nbdqkIGOGfUAD54q1s2YBcBz/WcsxCO9HUQ4aGV5hUw=
+github.com/supranational/blst v0.3.16-0.20250831170142-f48500c1fdbe/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
 github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
 github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
 github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
 github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
-go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
-go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
-go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
-go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
-go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
-go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
-go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
+go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
+go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
+go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
+go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
+go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
+go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
+go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
+go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
 golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
 golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
 golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df h1:UA2aFVmmsIlefxMk29Dp2juaUSth8Pyn3Tq5Y5mJGME=
@@ -137,8 +137,8 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
-golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
+golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
 golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
@@ -14,8 +14,8 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
 
-//go:build !example && !ziren && !wasm && !womir
-// +build !example,!ziren,!wasm,!womir
+//go:build !example && !ziren && !wasm
+// +build !example,!ziren,!wasm
 
 package main
 
@@ -274,66 +274,40 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
 		reported = time.Now()
 		imported = 0
 		h = sha256.New()
-		buf = bytes.NewBuffer(nil)
+		scratch = bytes.NewBuffer(nil)
 	)
 
 	for i, file := range entries {
 		err := func() error {
 			path := filepath.Join(dir, file)
 
-			// Validate against checksum file in directory.
+			// validate against checksum file in directory
 			f, err := os.Open(path)
 			if err != nil {
 				return fmt.Errorf("open %s: %w", path, err)
 			}
 			defer f.Close()
 
 			if _, err := io.Copy(h, f); err != nil {
 				return fmt.Errorf("checksum %s: %w", path, err)
 			}
-			got := common.BytesToHash(h.Sum(buf.Bytes()[:])).Hex()
+			got := common.BytesToHash(h.Sum(scratch.Bytes()[:])).Hex()
+			want := checksums[i]
 			h.Reset()
-			buf.Reset()
-			if got != checksums[i] {
-				return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, checksums[i])
+			scratch.Reset()
+			if got != want {
+				return fmt.Errorf("%s checksum mismatch: have %s want %s", file, got, want)
 			}
 			// Import all block data from Era1.
 			e, err := from(f)
 			if err != nil {
 				return fmt.Errorf("error opening era: %w", err)
 			}
-			defer e.Close()
 
 			it, err := e.Iterator()
 			if err != nil {
 				return fmt.Errorf("error creating iterator: %w", err)
 			}
 
-			var (
-				blocks = make([]*types.Block, 0, importBatchSize)
-				receiptsList = make([]types.Receipts, 0, importBatchSize)
-				flush = func() error {
-					if len(blocks) == 0 {
-						return nil
-					}
-					enc := types.EncodeBlockReceiptLists(receiptsList)
-					if _, err := chain.InsertReceiptChain(blocks, enc, math.MaxUint64); err != nil {
-						return fmt.Errorf("error inserting blocks %d-%d: %w",
-							blocks[0].NumberU64(), blocks[len(blocks)-1].NumberU64(), err)
-					}
-					imported += len(blocks)
-					if time.Since(reported) >= 8*time.Second {
-						head := blocks[len(blocks)-1].NumberU64()
-						log.Info("Importing Era files", "head", head, "imported", imported,
-							"elapsed", common.PrettyDuration(time.Since(start)))
-						imported = 0
-						reported = time.Now()
-					}
-					blocks = blocks[:0]
-					receiptsList = receiptsList[:0]
-					return nil
-				}
-			)
 			for it.Next() {
 				block, err := it.Block()
 				if err != nil {
@@ -346,18 +320,23 @@ func ImportHistory(chain *core.BlockChain, dir string, network string, from func
 				if err != nil {
 					return fmt.Errorf("error reading receipts %d: %w", it.Number(), err)
 				}
-				blocks = append(blocks, block)
-				receiptsList = append(receiptsList, receipts)
-				if len(blocks) == importBatchSize {
-					if err := flush(); err != nil {
-						return err
-					}
+				enc := types.EncodeBlockReceiptLists([]types.Receipts{receipts})
+				if _, err := chain.InsertReceiptChain([]*types.Block{block}, enc, math.MaxUint64); err != nil {
+					return fmt.Errorf("error inserting body %d: %w", it.Number(), err)
 				}
+				imported++
+
+				if time.Since(reported) >= 8*time.Second {
+					log.Info("Importing Era files", "head", it.Number(), "imported", imported,
+						"elapsed", common.PrettyDuration(time.Since(start)))
+					imported = 0
+					reported = time.Now()
 				}
 			}
 			if err := it.Error(); err != nil {
 				return err
 			}
-			return flush()
+			return nil
 		}()
 		if err != nil {
 			return err
@@ -218,15 +218,7 @@ var (
 		Usage: "Max number of elements (0 = no limit)",
 		Value: 0,
 	}
-	AccountFlag = &cli.StringFlag{
-		Name: "account",
-		Usage: "Specifies the account address or hash to traverse a single storage trie",
-	}
-	OutputFileFlag = &cli.StringFlag{
-		Name: "output",
-		Usage: "Writes the result in json to the output",
-		Value: "",
-	}
 	SnapshotFlag = &cli.BoolFlag{
 		Name: "snapshot",
 		Usage: `Enables snapshot-database mode (default = enable)`,
@@ -323,7 +315,7 @@ var (
 	}
 	ChainHistoryFlag = &cli.StringFlag{
 		Name: "history.chain",
-		Usage: `Blockchain history retention ("all", "postmerge", or "postprague")`,
+		Usage: `Blockchain history retention ("all" or "postmerge")`,
 		Value: ethconfig.Defaults.HistoryMode.String(),
 		Category: flags.StateCategory,
 	}
@@ -488,8 +480,8 @@ var (
 	// Performance tuning settings
 	CacheFlag = &cli.IntFlag{
 		Name: "cache",
-		Usage: "Megabytes of memory allocated to internal caching",
-		Value: 4096,
+		Usage: "Megabytes of memory allocated to internal caching (default = 4096 mainnet full node, 128 light mode)",
+		Value: 1024,
 		Category: flags.PerfCategory,
 	}
 	CacheDatabaseFlag = &cli.IntFlag{
@@ -1024,13 +1016,6 @@ Please note that --` + MetricsHTTPFlag.Name + ` must be set to start the server.
 		Category: flags.MetricsCategory,
 	}
 
-	MetricsInfluxDBIntervalFlag = &cli.DurationFlag{
-		Name: "metrics.influxdb.interval",
-		Usage: "Interval between metrics reports to InfluxDB (with time unit, e.g. 10s)",
-		Value: metrics.DefaultConfig.InfluxDBInterval,
-		Category: flags.MetricsCategory,
-	}
-
 	MetricsEnableInfluxDBV2Flag = &cli.BoolFlag{
 		Name: "metrics.influxdbv2",
 		Usage: "Enable metrics export/push to an external InfluxDB v2 database",
@@ -1584,9 +1569,7 @@ func setOpenTelemetry(ctx *cli.Context, cfg *node.Config) {
 	if ctx.IsSet(RPCTelemetryTagsFlag.Name) {
 		tcfg.Tags = ctx.String(RPCTelemetryTagsFlag.Name)
 	}
-	if ctx.IsSet(RPCTelemetrySampleRatioFlag.Name) {
 	tcfg.SampleRatio = ctx.Float64(RPCTelemetrySampleRatioFlag.Name)
-	}
 
 	if tcfg.Endpoint != "" && !tcfg.Enabled {
 		log.Warn(fmt.Sprintf("OpenTelemetry endpoint configured but telemetry is not enabled, use --%s to enable.", RPCTelemetryFlag.Name))
@@ -2263,14 +2246,13 @@ func SetupMetrics(cfg *metrics.Config) {
 		bucket = cfg.InfluxDBBucket
 		organization = cfg.InfluxDBOrganization
 		tagsMap = SplitTagsFlag(cfg.InfluxDBTags)
-		interval = cfg.InfluxDBInterval
 	)
 	if enableExport {
-		log.Info("Enabling metrics export to InfluxDB", "interval", interval)
-		go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, interval, endpoint, database, username, password, "geth.", tagsMap)
+		log.Info("Enabling metrics export to InfluxDB")
+		go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
 	} else if enableExportV2 {
-		log.Info("Enabling metrics export to InfluxDB (v2)", "interval", interval)
-		go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, interval, endpoint, token, bucket, organization, "geth.", tagsMap)
+		log.Info("Enabling metrics export to InfluxDB (v2)")
+		go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, token, bucket, organization, "geth.", tagsMap)
 	}
 
 	// Expvar exporter.
@@ -2475,6 +2457,8 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
 	}
 	vmcfg := vm.Config{
 		EnablePreimageRecording: ctx.Bool(VMEnableDebugFlag.Name),
+		EnableWitnessStats: ctx.Bool(VMWitnessStatsFlag.Name),
+		StatelessSelfValidation: ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name),
 	}
 	if ctx.IsSet(VMTraceFlag.Name) {
 		if name := ctx.String(VMTraceFlag.Name); name != "" {
@@ -2488,9 +2472,6 @@ func MakeChain(ctx *cli.Context, stack *node.Node, readonly bool) (*core.BlockCh
 	}
 	options.VmConfig = vmcfg
 
-	options.StatelessSelfValidation = ctx.Bool(VMStatelessSelfValidationFlag.Name) || ctx.Bool(VMWitnessStatsFlag.Name)
-	options.EnableWitnessStats = ctx.Bool(VMWitnessStatsFlag.Name)
-
 	chain, err := core.NewBlockChain(chainDb, gspec, engine, options)
 	if err != nil {
 		Fatalf("Can't create BlockChain: %v", err)
@@ -155,9 +155,7 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
 		}
 
 		cfg.historyPruneBlock = new(uint64)
-		if p, err := history.NewPolicy(history.KeepPostMerge, params.MainnetGenesisHash); err == nil {
-			*cfg.historyPruneBlock = p.Target.BlockNumber
-		}
+		*cfg.historyPruneBlock = history.PrunePoints[params.MainnetGenesisHash].BlockNumber
 	case ctx.Bool(testSepoliaFlag.Name):
 		cfg.fsys = builtinTestFiles
 		if ctx.IsSet(filterQueryFileFlag.Name) {
@@ -182,9 +180,7 @@ func testConfigFromCLI(ctx *cli.Context) (cfg testConfig) {
 		}
 
 		cfg.historyPruneBlock = new(uint64)
-		if p, err := history.NewPolicy(history.KeepPostMerge, params.SepoliaGenesisHash); err == nil {
-			*cfg.historyPruneBlock = p.Target.BlockNumber
-		}
+		*cfg.historyPruneBlock = history.PrunePoints[params.SepoliaGenesisHash].BlockNumber
 	default:
 		cfg.fsys = os.DirFS(".")
 		cfg.filterQueryFile = ctx.String(filterQueryFileFlag.Name)
@@ -17,7 +17,6 @@
 package beacon

 import (
-	"context"
 	"errors"
 	"fmt"
 	"math/big"
@@ -30,7 +29,6 @@ import (
 	"github.com/ethereum/go-ethereum/core/tracing"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/core/vm"
-	"github.com/ethereum/go-ethereum/internal/telemetry"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/trie"
 	"github.com/holiman/uint256"
@@ -274,24 +272,6 @@ func (beacon *Beacon) verifyHeader(chain consensus.ChainHeaderReader, header, pa
 			return err
 		}
 	}

-	// Verify the existence / non-existence of Amsterdam-specific header fields
-	amsterdam := chain.Config().IsAmsterdam(header.Number, header.Time)
-	if amsterdam {
-		if header.BlockAccessListHash == nil {
-			return errors.New("header is missing block access list hash")
-		}
-		if header.SlotNumber == nil {
-			return errors.New("header is missing slotNumber")
-		}
-	} else {
-		if header.BlockAccessListHash != nil {
-			return fmt.Errorf("invalid block access list hash: have %x, expected nil", *header.BlockAccessListHash)
-		}
-		if header.SlotNumber != nil {
-			return fmt.Errorf("invalid slotNumber: have %d, expected nil", *header.SlotNumber)
-		}
-	}
-
 	return nil
 }
@@ -363,17 +343,9 @@ func (beacon *Beacon) Finalize(chain consensus.ChainHeaderReader, header *types.

 // FinalizeAndAssemble implements consensus.Engine, setting the final state and
 // assembling the block.
-func (beacon *Beacon) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (result *types.Block, err error) {
-	ctx, _, spanEnd := telemetry.StartSpan(ctx, "consensus.beacon.FinalizeAndAssemble",
-		telemetry.Int64Attribute("block.number", int64(header.Number.Uint64())),
-		telemetry.Int64Attribute("txs.count", int64(len(body.Transactions))),
-		telemetry.Int64Attribute("withdrawals.count", int64(len(body.Withdrawals))),
-	)
-	defer spanEnd(&err)
-
+func (beacon *Beacon) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
 	if !beacon.IsPoSHeader(header) {
-		block, delegateErr := beacon.ethone.FinalizeAndAssemble(ctx, chain, header, state, body, receipts)
-		return block, delegateErr
+		return beacon.ethone.FinalizeAndAssemble(chain, header, state, body, receipts)
 	}
 	shanghai := chain.Config().IsShanghai(header.Number, header.Time)
 	if shanghai {
@@ -387,20 +359,13 @@ func (beacon *Beacon) FinalizeAndAssemble(ctx context.Context, chain consensus.C
 		}
 	}
 	// Finalize and assemble the block.
-	_, _, finalizeSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.Finalize")
 	beacon.Finalize(chain, header, state, body)
-	finalizeSpanEnd(nil)

 	// Assign the final state root to header.
-	_, _, rootSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.IntermediateRoot")
 	header.Root = state.IntermediateRoot(true)
-	rootSpanEnd(nil)

 	// Assemble the final block.
-	_, _, blockSpanEnd := telemetry.StartSpan(ctx, "consensus.beacon.NewBlock")
-	block := types.NewBlock(header, body, receipts, trie.NewStackTrie(nil))
-	blockSpanEnd(nil)
-	return block, nil
+	return types.NewBlock(header, body, receipts, trie.NewStackTrie(nil)), nil
 }

 // Seal generates a new sealing request for the given input block and pushes
@@ -19,7 +19,6 @@ package clique

 import (
 	"bytes"
-	"context"
 	"errors"
 	"fmt"
 	"io"
@@ -311,8 +310,6 @@ func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
 		return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
 	case header.ParentBeaconRoot != nil:
 		return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
-	case header.SlotNumber != nil:
-		return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
 	}
 	// All basic checks passed, verify cascading fields
 	return c.verifyCascadingFields(chain, header, parents)
@@ -582,7 +579,7 @@ func (c *Clique) Finalize(chain consensus.ChainHeaderReader, header *types.Heade

 // FinalizeAndAssemble implements consensus.Engine, ensuring no uncles are set,
 // nor block rewards given, and returns the final block.
-func (c *Clique) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
+func (c *Clique) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
 	if len(body.Withdrawals) > 0 {
 		return nil, errors.New("clique does not support withdrawals")
 	}
@@ -697,9 +694,6 @@ func encodeSigHeader(w io.Writer, header *types.Header) {
 	if header.ParentBeaconRoot != nil {
 		panic("unexpected parent beacon root value in clique")
 	}
-	if header.SlotNumber != nil {
-		panic("unexpected slot number value in clique")
-	}
 	if err := rlp.Encode(w, enc); err != nil {
 		panic("can't encode: " + err.Error())
 	}
@@ -18,7 +18,6 @@
 package consensus

 import (
-	"context"
 	"math/big"

 	"github.com/ethereum/go-ethereum/common"
@@ -93,7 +92,7 @@ type Engine interface {
 	//
 	// Note: The block header and state database might be updated to reflect any
 	// consensus rules that happen at finalization (e.g. block rewards).
-	FinalizeAndAssemble(ctx context.Context, chain ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error)
+	FinalizeAndAssemble(chain ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error)

 	// Seal generates a new sealing request for the given input block and pushes
 	// the result into the given channel.
@@ -17,7 +17,6 @@
 package ethash

 import (
-	"context"
 	"errors"
 	"fmt"
 	"math/big"
@@ -284,8 +283,6 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, pa
 		return fmt.Errorf("invalid blobGasUsed: have %d, expected nil", *header.BlobGasUsed)
 	case header.ParentBeaconRoot != nil:
 		return fmt.Errorf("invalid parentBeaconRoot, have %#x, expected nil", *header.ParentBeaconRoot)
-	case header.SlotNumber != nil:
-		return fmt.Errorf("invalid slotNumber, have %#x, expected nil", *header.SlotNumber)
 	}
 	// Add some fake checks for tests
 	if ethash.fakeDelay != nil {
@@ -514,7 +511,7 @@ func (ethash *Ethash) Finalize(chain consensus.ChainHeaderReader, header *types.

 // FinalizeAndAssemble implements consensus.Engine, accumulating the block and
 // uncle rewards, setting the final state and assembling the block.
-func (ethash *Ethash) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
+func (ethash *Ethash) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, body *types.Body, receipts []*types.Receipt) (*types.Block, error) {
 	if len(body.Withdrawals) > 0 {
 		return nil, errors.New("ethash does not support withdrawals")
 	}
@@ -562,9 +559,6 @@ func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
 	if header.ParentBeaconRoot != nil {
 		panic("parent beacon root set on ethash")
 	}
-	if header.SlotNumber != nil {
-		panic("slot number set on ethash")
-	}
 	rlp.Encode(hasher, enc)
 	hasher.Sum(hash[:0])
 	return hash
@@ -282,7 +282,7 @@ func (c *Console) AutoCompleteInput(line string, pos int) (string, []string, str
 	for ; start > 0; start-- {
 		// Skip all methods and namespaces (i.e. including the dot)
 		c := line[start]
-		if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') {
+		if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '1' && c <= '9') {
 			continue
 		}
 		// We've hit an unexpected character, autocomplete form here
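The one-character change above widens (or narrows) the digit range in the backward scan that finds where the expression under the cursor begins. A simplified, self-contained sketch of that scan (function name `scanStart` is illustrative, not geth's API); note that with the `'1'..'9'` variant, an identifier like `web3` would be cut at the `3`:

```go
package main

import "fmt"

// scanStart walks backwards from pos to find where the current
// method/namespace expression begins: dots, letters and digits are
// treated as part of the expression, anything else terminates it.
func scanStart(line string, pos int) int {
	start := pos - 1
	for ; start > 0; start-- {
		c := line[start]
		if c == '.' || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') {
			continue
		}
		// Hit a non-identifier character: the expression starts just after it.
		start++
		break
	}
	return start
}

func main() {
	line := "x = web3.eth"
	start := scanStart(line, len(line))
	fmt.Println(line[start:]) // → web3.eth
}
```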
@@ -93,6 +93,8 @@ var (
 	accountReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/account/single/reads", nil)
 	storageReadSingleTimer = metrics.NewRegisteredResettingTimer("chain/storage/single/reads", nil)
 	codeReadSingleTimer    = metrics.NewRegisteredResettingTimer("chain/code/single/reads", nil)
+
+	snapshotCommitTimer = metrics.NewRegisteredResettingTimer("chain/snapshot/commits", nil)
 	triedbCommitTimer   = metrics.NewRegisteredResettingTimer("chain/triedb/commits", nil)

 	blockInsertTimer = metrics.NewRegisteredResettingTimer("chain/inserts", nil)
@@ -194,8 +196,9 @@ type BlockChainConfig struct {
 	SnapshotNoBuild bool // Whether the background generation is allowed
 	SnapshotWait    bool // Wait for snapshot construction on startup. TODO(karalabe): This is a dirty hack for testing, nuke it

-	// HistoryPolicy defines the chain history pruning intent.
-	HistoryPolicy history.HistoryPolicy
+	// This defines the cutoff block for history expiry.
+	// Blocks before this number may be unavailable in the chain database.
+	ChainHistoryMode history.HistoryMode

 	// Misc options
 	NoPrefetch bool // Whether to disable heuristic state prefetching when processing blocks
@@ -216,10 +219,6 @@ type BlockChainConfig struct {
 	// detailed statistics will be logged. Negative value means disabled (default),
 	// zero logs all blocks, positive value filters blocks by execution time.
 	SlowBlockThreshold time.Duration
-
-	// Execution configs
-	StatelessSelfValidation bool // Generate execution witnesses and self-check against them (testing purpose)
-	EnableWitnessStats      bool // Whether trie access statistics collection is enabled
 }

 // DefaultConfig returns the default config.
@@ -232,7 +231,7 @@ func DefaultConfig() *BlockChainConfig {
 		StateScheme:   rawdb.HashScheme,
 		SnapshotLimit: 256,
 		SnapshotWait:  true,
-		HistoryPolicy: history.HistoryPolicy{Mode: history.KeepAll},
+		ChainHistoryMode: history.KeepAll,
 		// Transaction indexing is disabled by default.
 		// This is appropriate for most unit tests.
 		TxLookupLimit: -1,
@@ -323,7 +322,7 @@ type BlockChain struct {
 	lastWrite     uint64           // Last block when the state was flushed
 	flushInterval atomic.Int64     // Time interval (processing time) after which to flush a state
 	triedb        *triedb.Database // The database handler for maintaining trie nodes.
-	codedb        *state.CodeDB    // The database handler for maintaining contract codes.
+	statedb       *state.CachingDB // State database to reuse between imports (contains state cache)
 	txIndexer     *txIndexer       // Transaction indexer, might be nil if not enabled

 	hc *HeaderChain
@@ -405,7 +404,6 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
 		cfg:       cfg,
 		db:        db,
 		triedb:    triedb,
-		codedb:    state.NewCodeDB(db),
 		triegc:    prque.New[int64, common.Hash](nil),
 		chainmu:   syncx.NewClosableMutex(),
 		bodyCache: lru.NewCache[common.Hash, *types.Body](bodyCacheLimit),
@@ -422,6 +420,7 @@ func NewBlockChain(db ethdb.Database, genesis *Genesis, engine consensus.Engine,
 		return nil, err
 	}
 	bc.flushInterval.Store(int64(cfg.TrieTimeLimit))
+	bc.statedb = state.NewDatabase(bc.triedb, nil)
 	bc.validator = NewBlockValidator(chainConfig, bc)
 	bc.prefetcher = newStatePrefetcher(chainConfig, bc.hc)
 	bc.processor = NewStateProcessor(bc.hc)
@@ -598,6 +597,9 @@ func (bc *BlockChain) setupSnapshot() {
 			AsyncBuild: !bc.cfg.SnapshotWait,
 		}
 		bc.snaps, _ = snapshot.New(snapconfig, bc.db, bc.triedb, head.Root)
+
+		// Re-initialize the state database with snapshot
+		bc.statedb = state.NewDatabase(bc.triedb, bc.snaps)
 	}
 }

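One side of the diff keeps a single `statedb` (a caching state database) on the `BlockChain` object and reuses it across imports, while the other constructs a fresh database per block. A toy illustration, under illustrative names (`cachingDB`, `read`), of why the shared wiring saves repeated reads:

```go
package main

import "fmt"

// cachingDB caches values the first time they are read and counts
// cache misses, standing in for a state database backed by disk.
type cachingDB struct {
	cache  map[string]string
	misses int
}

func newCachingDB() *cachingDB { return &cachingDB{cache: map[string]string{}} }

func (db *cachingDB) read(key string) string {
	if v, ok := db.cache[key]; ok {
		return v
	}
	db.misses++ // simulate a disk read on a cache miss
	v := "val:" + key
	db.cache[key] = v
	return v
}

func main() {
	// Shared database: the second import hits the warm cache.
	shared := newCachingDB()
	for i := 0; i < 2; i++ {
		shared.read("account-a")
	}
	fmt.Println("shared misses:", shared.misses) // → shared misses: 1

	// Fresh database per import: every import pays the read again.
	misses := 0
	for i := 0; i < 2; i++ {
		db := newCachingDB()
		db.read("account-a")
		misses += db.misses
	}
	fmt.Println("per-import misses:", misses) // → per-import misses: 2
}
```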
@@ -715,43 +717,45 @@ func (bc *BlockChain) loadLastState() error {
 // initializeHistoryPruning sets bc.historyPrunePoint.
 func (bc *BlockChain) initializeHistoryPruning(latest uint64) error {
 	freezerTail, _ := bc.db.Tail()
-	policy := bc.cfg.HistoryPolicy

-	switch policy.Mode {
+	switch bc.cfg.ChainHistoryMode {
 	case history.KeepAll:
-		if freezerTail > 0 {
-			// Database was pruned externally. Record the actual state.
-			log.Warn("Chain history database is pruned", "tail", freezerTail, "mode", policy.Mode)
-			bc.historyPrunePoint.Store(&history.PrunePoint{
-				BlockNumber: freezerTail,
-				BlockHash:   bc.GetCanonicalHash(freezerTail),
-			})
+		if freezerTail == 0 {
+			return nil
 		}
+		// The database was pruned somehow, so we need to figure out if it's a known
+		// configuration or an error.
+		predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
+		if predefinedPoint == nil || freezerTail != predefinedPoint.BlockNumber {
+			log.Error("Chain history database is pruned with unknown configuration", "tail", freezerTail)
+			return errors.New("unexpected database tail")
+		}
+		bc.historyPrunePoint.Store(predefinedPoint)
 		return nil

-	case history.KeepPostMerge, history.KeepPostPrague:
-		target := policy.Target
-		// Already at the target.
-		if freezerTail == target.BlockNumber {
-			bc.historyPrunePoint.Store(target)
-			return nil
+	case history.KeepPostMerge:
+		if freezerTail == 0 && latest != 0 {
+			// This is the case where a user is trying to run with --history.chain
+			// postmerge directly on an existing DB. We could just trigger the pruning
+			// here, but it'd be a bit dangerous since they may not have intended this
+			// action to happen. So just tell them how to do it.
+			log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned.", bc.cfg.ChainHistoryMode.String()))
+			log.Error(fmt.Sprintf("Run 'geth prune-history' to prune pre-merge history."))
+			return errors.New("history pruning requested via configuration")
 		}
-		// Database is pruned beyond the target.
-		if freezerTail > target.BlockNumber {
-			return fmt.Errorf("database pruned beyond requested history (tail=%d, target=%d)", freezerTail, target.BlockNumber)
+		predefinedPoint := history.PrunePoints[bc.genesisBlock.Hash()]
+		if predefinedPoint == nil {
+			log.Error("Chain history pruning is not supported for this network", "genesis", bc.genesisBlock.Hash())
+			return errors.New("history pruning requested for unknown network")
+		} else if freezerTail > 0 && freezerTail != predefinedPoint.BlockNumber {
+			log.Error("Chain history database is pruned to unknown block", "tail", freezerTail)
+			return errors.New("unexpected database tail")
 		}
-		// Database needs pruning (freezerTail < target).
-		if latest != 0 {
-			log.Error(fmt.Sprintf("Chain history mode is configured as %q, but database is not pruned to the target block.", policy.Mode.String()))
-			log.Error(fmt.Sprintf("Run 'geth prune-history --history.chain %s' to prune history.", policy.Mode.String()))
-			return errors.New("history pruning required")
-		}
-		// Fresh database (latest == 0), will sync from target point.
-		bc.historyPrunePoint.Store(target)
+		bc.historyPrunePoint.Store(predefinedPoint)
 		return nil

 	default:
-		return fmt.Errorf("invalid history mode: %d", policy.Mode)
+		return fmt.Errorf("invalid history mode: %d", bc.cfg.ChainHistoryMode)
 	}
 }

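The `KeepAll` branch above accepts an untouched freezer, and accepts a pruned one only when its tail matches a predefined prune point for the network. A minimal standalone sketch of that decision (the `resolveKeepAll` helper and the block number used are illustrative, not geth's API):

```go
package main

import (
	"errors"
	"fmt"
)

// PrunePoint marks the first block kept in a pruned chain database.
type PrunePoint struct{ BlockNumber uint64 }

// resolveKeepAll mirrors the KeepAll logic from the diff: tail == 0 means
// nothing was pruned; a non-zero tail is only valid when it matches the
// known prune point for the network, otherwise it is an error.
func resolveKeepAll(freezerTail uint64, predefined *PrunePoint) (*PrunePoint, error) {
	if freezerTail == 0 {
		return nil, nil
	}
	if predefined == nil || freezerTail != predefined.BlockNumber {
		return nil, errors.New("unexpected database tail")
	}
	return predefined, nil
}

func main() {
	// Illustrative prune point (roughly the mainnet merge block).
	mergePoint := &PrunePoint{BlockNumber: 15537394}

	if p, err := resolveKeepAll(15537394, mergePoint); err == nil {
		fmt.Println("accepted prune point:", p.BlockNumber)
	}
	if _, err := resolveKeepAll(42, mergePoint); err != nil {
		fmt.Println("rejected:", err)
	}
}
```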
@@ -1275,8 +1279,6 @@ func (bc *BlockChain) ExportN(w io.Writer, first uint64, last uint64) error {
 func (bc *BlockChain) writeHeadBlock(block *types.Block) {
 	// Add the block to the canonical chain number scheme and mark as the head
 	batch := bc.db.NewBatch()
-	defer batch.Close()
-
 	rawdb.WriteHeadHeaderHash(batch, block.Hash())
 	rawdb.WriteHeadFastBlockHash(batch, block.Hash())
 	rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64())
@@ -1651,8 +1653,6 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
 		batch = bc.db.NewBatch()
 		start = time.Now()
 	)
-	defer batch.Close()
-
 	rawdb.WriteBlock(batch, block)
 	rawdb.WriteReceipts(batch, block.Hash(), block.NumberU64(), receipts)
 	rawdb.WritePreimages(batch, statedb.Preimages())
@@ -1990,15 +1990,7 @@ func (bc *BlockChain) insertChain(ctx context.Context, chain types.Blocks, setHe
 		}
 		// The traced section of block import.
 		start := time.Now()
-		config := ExecuteConfig{
-			WriteState:              true,
-			WriteHead:               setHead,
-			EnableTracer:            true,
-			MakeWitness:             makeWitness && len(chain) == 1,
-			StatelessSelfValidation: bc.cfg.StatelessSelfValidation,
-			EnableWitnessStats:      bc.cfg.EnableWitnessStats,
-		}
-		res, err := bc.ProcessBlock(ctx, parent.Root, block, config)
+		res, err := bc.ProcessBlock(ctx, parent.Root, block, setHead, makeWitness && len(chain) == 1)
 		if err != nil {
 			return nil, it.index, err
 		}
@@ -2081,47 +2073,19 @@ func (bpr *blockProcessingResult) Stats() *ExecuteStats {
 	return bpr.stats
 }

-// ExecuteConfig defines optional behaviors during execution.
-type ExecuteConfig struct {
-	// WriteState controls whether the computed state changes are persisted to
-	// the underlying storage. If false, execution is performed in-memory only.
-	WriteState bool
-
-	// WriteHead indicates whether the execution result should update the canonical
-	// chain head. It's only relevant with WriteState == True.
-	WriteHead bool
-
-	// EnableTracer enables execution tracing. This is typically used for debugging
-	// or analysis and may significantly impact performance.
-	EnableTracer bool
-
-	// MakeWitness indicates whether to generate execution witness data during
-	// execution. Enabling this may introduce additional memory and CPU overhead.
-	MakeWitness bool
-
-	// StatelessSelfValidation indicates whether the execution witnesses generation
-	// and self-validation (testing purpose) is enabled.
-	StatelessSelfValidation bool
-
-	// EnableWitnessStats indicates whether to enable collection of witness trie
-	// access statistics
-	EnableWitnessStats bool
-}
-
 // ProcessBlock executes and validates the given block. If there was no error
 // it writes the block and associated state to database.
-func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, config ExecuteConfig) (result *blockProcessingResult, blockEndErr error) {
+func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash, block *types.Block, setHead bool, makeWitness bool) (result *blockProcessingResult, blockEndErr error) {
 	var (
 		err       error
 		startTime = time.Now()
 		statedb   *state.StateDB
 		interrupt atomic.Bool
-		sdb       = state.NewDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps)
 	)
 	defer interrupt.Store(true) // terminate the prefetch at the end

 	if bc.cfg.NoPrefetch {
-		statedb, err = state.New(parentRoot, sdb)
+		statedb, err = state.New(parentRoot, bc.statedb)
 		if err != nil {
 			return nil, err
 		}
@@ -2131,27 +2095,23 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	//
 	// Note: the main processor and prefetcher share the same reader with a local
 	// cache for mitigating the overhead of state access.
-	prefetch, process, err := sdb.ReadersWithCacheStats(parentRoot)
+	prefetch, process, err := bc.statedb.ReadersWithCacheStats(parentRoot)
 	if err != nil {
 		return nil, err
 	}
-	throwaway, err := state.NewWithReader(parentRoot, sdb, prefetch)
+	throwaway, err := state.NewWithReader(parentRoot, bc.statedb, prefetch)
 	if err != nil {
 		return nil, err
 	}
-	statedb, err = state.NewWithReader(parentRoot, sdb, process)
+	statedb, err = state.NewWithReader(parentRoot, bc.statedb, process)
 	if err != nil {
 		return nil, err
 	}
 	// Upload the statistics of reader at the end
 	defer func() {
 		if result != nil {
-			if stater, ok := prefetch.(state.ReaderStater); ok {
-				result.stats.StatePrefetchCacheStats = stater.GetStats()
-			}
-			if stater, ok := process.(state.ReaderStater); ok {
-				result.stats.StateReadCacheStats = stater.GetStats()
-			}
+			result.stats.StatePrefetchCacheStats = prefetch.GetStats()
+			result.stats.StateReadCacheStats = process.GetStats()
 		}
 	}()
 	go func(start time.Time, throwaway *state.StateDB, block *types.Block) {
@@ -2170,23 +2130,27 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	// If we are past Byzantium, enable prefetching to pull in trie node paths
 	// while processing transactions. Before Byzantium the prefetcher is mostly
 	// useless due to the intermediate root hashing after each transaction.
-	var witness *stateless.Witness
+	var (
+		witness      *stateless.Witness
+		witnessStats *stateless.WitnessStats
+	)
 	if bc.chainConfig.IsByzantium(block.Number()) {
 		// Generate witnesses either if we're self-testing, or if it's the
 		// only block being inserted. A bit crude, but witnesses are huge,
 		// so we refuse to make an entire chain of them.
-		if config.StatelessSelfValidation || config.MakeWitness {
-			witness, err = stateless.NewWitness(block.Header(), bc, config.EnableWitnessStats)
+		if bc.cfg.VmConfig.StatelessSelfValidation || makeWitness {
+			witness, err = stateless.NewWitness(block.Header(), bc)
 			if err != nil {
 				return nil, err
 			}
+			if bc.cfg.VmConfig.EnableWitnessStats {
+				witnessStats = stateless.NewWitnessStats()
+			}
 		}
-		statedb.StartPrefetcher("chain", witness)
+		statedb.StartPrefetcher("chain", witness, witnessStats)
 		defer statedb.StopPrefetcher()
 	}

-	// Instrument the blockchain tracing
-	if config.EnableTracer {
 	if bc.logger != nil && bc.logger.OnBlockStart != nil {
 		bc.logger.OnBlockStart(tracing.BlockEvent{
 			Block: block,
@@ -2199,7 +2163,6 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 			bc.logger.OnBlockEnd(blockEndErr)
 		}()
 	}
-	}

 	// Process block using the parent state as reference point
 	pstart := time.Now()
@@ -2228,7 +2191,7 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	// witness builder/runner, which would otherwise be impossible due to the
 	// various invalid chain states/behaviors being contained in those tests.
 	xvstart := time.Now()
-	if witness := statedb.Witness(); witness != nil && config.StatelessSelfValidation {
+	if witness := statedb.Witness(); witness != nil && bc.cfg.VmConfig.StatelessSelfValidation {
 		log.Warn("Running stateless self-validation", "block", block.Number(), "hash", block.Hash())

 		// Remove critical computed fields from the block to force true recalculation
@@ -2281,10 +2244,11 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 	stats.CrossValidation = xvtime // The time spent on stateless cross validation

 	// Write the block to the chain and get the status.
-	var status WriteStatus
-	if config.WriteState {
-		wstart := time.Now()
-		if !config.WriteHead {
+	var (
+		wstart = time.Now()
+		status WriteStatus
+	)
+	if !setHead {
 		// Don't set the head, only insert the block
 		err = bc.writeBlockWithState(block, res.Receipts, statedb)
 	} else {
@@ -2293,16 +2257,18 @@ func (bc *BlockChain) ProcessBlock(ctx context.Context, parentRoot common.Hash,
 		if err != nil {
 			return nil, err
 		}
+		// Report the collected witness statistics
|
||||||
|
if witnessStats != nil {
|
||||||
|
witnessStats.ReportMetrics(block.NumberU64())
|
||||||
|
}
|
||||||
|
|
||||||
// Update the metrics touched during block commit
|
// Update the metrics touched during block commit
|
||||||
stats.AccountCommits = statedb.AccountCommits // Account commits are complete, we can mark them
|
stats.AccountCommits = statedb.AccountCommits // Account commits are complete, we can mark them
|
||||||
stats.StorageCommits = statedb.StorageCommits // Storage commits are complete, we can mark them
|
stats.StorageCommits = statedb.StorageCommits // Storage commits are complete, we can mark them
|
||||||
stats.DatabaseCommit = statedb.DatabaseCommits // Database commits are complete, we can mark them
|
stats.SnapshotCommit = statedb.SnapshotCommits // Snapshot commits are complete, we can mark them
|
||||||
stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.DatabaseCommits
|
stats.TrieDBCommit = statedb.TrieDBCommits // Trie database commits are complete, we can mark them
|
||||||
}
|
stats.BlockWrite = time.Since(wstart) - max(statedb.AccountCommits, statedb.StorageCommits) /* concurrent */ - statedb.SnapshotCommits - statedb.TrieDBCommits
|
||||||
// Report the collected witness statistics
|
|
||||||
if witness != nil {
|
|
||||||
witness.ReportMetrics(block.NumberU64())
|
|
||||||
}
|
|
||||||
elapsed := time.Since(startTime) + 1 // prevent zero division
|
elapsed := time.Since(startTime) + 1 // prevent zero division
|
||||||
stats.TotalTime = elapsed
|
stats.TotalTime = elapsed
|
||||||
stats.MgasPerSecond = float64(res.GasUsed) * 1000 / float64(elapsed)
|
stats.MgasPerSecond = float64(res.GasUsed) * 1000 / float64(elapsed)
|
||||||
|
|
@ -2583,7 +2549,6 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
|
||||||
// as the txlookups should be changed atomically, and all subsequent
|
// as the txlookups should be changed atomically, and all subsequent
|
||||||
// reads should be blocked until the mutation is complete.
|
// reads should be blocked until the mutation is complete.
|
||||||
bc.txLookupLock.Lock()
|
bc.txLookupLock.Lock()
|
||||||
defer bc.txLookupLock.Unlock()
|
|
||||||
|
|
||||||
// Reorg can be executed, start reducing the chain's old blocks and appending
|
// Reorg can be executed, start reducing the chain's old blocks and appending
|
||||||
// the new blocks
|
// the new blocks
|
||||||
|
|
@ -2661,8 +2626,6 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
|
||||||
// Delete useless indexes right now which includes the non-canonical
|
// Delete useless indexes right now which includes the non-canonical
|
||||||
// transaction indexes, canonical chain indexes which above the head.
|
// transaction indexes, canonical chain indexes which above the head.
|
||||||
batch := bc.db.NewBatch()
|
batch := bc.db.NewBatch()
|
||||||
defer batch.Close()
|
|
||||||
|
|
||||||
for _, tx := range types.HashDifference(deletedTxs, rebirthTxs) {
|
for _, tx := range types.HashDifference(deletedTxs, rebirthTxs) {
|
||||||
rawdb.DeleteTxLookupEntry(batch, tx)
|
rawdb.DeleteTxLookupEntry(batch, tx)
|
||||||
}
|
}
|
||||||
|
|
@ -2686,6 +2649,9 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Header) error
|
||||||
// Reset the tx lookup cache to clear stale txlookup cache.
|
// Reset the tx lookup cache to clear stale txlookup cache.
|
||||||
bc.txLookupCache.Purge()
|
bc.txLookupCache.Purge()
|
||||||
|
|
||||||
|
// Release the tx-lookup lock after mutation.
|
||||||
|
bc.txLookupLock.Unlock()
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
||||||
@@ -371,7 +371,7 @@ func (bc *BlockChain) TxIndexDone() bool {
 
 // HasState checks if state trie is fully present in the database or not.
 func (bc *BlockChain) HasState(hash common.Hash) bool {
-	_, err := bc.triedb.NodeReader(hash)
+	_, err := bc.statedb.OpenTrie(hash)
 	return err == nil
 }
 
@@ -403,7 +403,7 @@ func (bc *BlockChain) stateRecoverable(root common.Hash) bool {
 func (bc *BlockChain) ContractCodeWithPrefix(hash common.Hash) []byte {
 	// TODO(rjl493456442) The associated account address is also required
 	// in Verkle scheme. Fix it once snap-sync is supported for Verkle.
-	return bc.codedb.Reader().CodeWithPrefix(common.Address{}, hash)
+	return bc.statedb.ContractCodeWithPrefix(common.Address{}, hash)
 }
 
 // State returns a new mutable state based on the current HEAD block.
@@ -413,14 +413,14 @@ func (bc *BlockChain) State() (*state.StateDB, error) {
 
 // StateAt returns a new mutable state based on a particular point in time.
 func (bc *BlockChain) StateAt(root common.Hash) (*state.StateDB, error) {
-	return state.New(root, state.NewDatabase(bc.triedb, bc.codedb).WithSnapshot(bc.snaps))
+	return state.New(root, bc.statedb)
 }
 
 // HistoricState returns a historic state specified by the given root.
 // Live states are not available and won't be served, please use `State`
 // or `StateAt` instead.
 func (bc *BlockChain) HistoricState(root common.Hash) (*state.StateDB, error) {
-	return state.New(root, state.NewHistoricDatabase(bc.triedb, bc.codedb))
+	return state.New(root, state.NewHistoricDatabase(bc.db, bc.triedb))
 }
 
 // Config retrieves the chain's fork configuration.
@@ -444,6 +444,11 @@ func (bc *BlockChain) Processor() Processor {
 	return bc.processor
 }
 
+// StateCache returns the caching database underpinning the blockchain instance.
+func (bc *BlockChain) StateCache() state.Database {
+	return bc.statedb
+}
+
 // GasLimit returns the gas limit of the current HEAD block.
 func (bc *BlockChain) GasLimit() uint64 {
 	return bc.CurrentBlock().GasLimit
@@ -487,11 +492,6 @@ func (bc *BlockChain) TrieDB() *triedb.Database {
 	return bc.triedb
 }
 
-// CodeDB retrieves the low level contract code database used for data storage.
-func (bc *BlockChain) CodeDB() *state.CodeDB {
-	return bc.codedb
-}
-
 // HeaderChain returns the underlying header chain.
 func (bc *BlockChain) HeaderChain() *HeaderChain {
 	return bc.hc
@@ -30,6 +30,7 @@ import (
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
 	"github.com/ethereum/go-ethereum/core/rawdb"
+	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/ethdb/pebble"
 	"github.com/ethereum/go-ethereum/params"
@@ -2040,6 +2041,7 @@ func testSetHeadWithScheme(t *testing.T, tt *rewindTest, snapshots bool, scheme
 		dbconfig.HashDB = hashdb.Defaults
 	}
 	chain.triedb = triedb.NewDatabase(chain.db, dbconfig)
+	chain.statedb = state.NewDatabase(chain.triedb, chain.snaps)
 
 	// Force run a freeze cycle
 	type freezer interface {
@@ -52,7 +52,8 @@ type ExecuteStats struct {
 	Execution       time.Duration // Time spent on the EVM execution
 	Validation      time.Duration // Time spent on the block validation
 	CrossValidation time.Duration // Optional, time spent on the block cross validation
-	DatabaseCommit  time.Duration // Time spent on database commit
+	SnapshotCommit  time.Duration // Time spent on snapshot commit
+	TrieDBCommit    time.Duration // Time spent on database commit
 	BlockWrite      time.Duration // Time spent on block write
 	TotalTime       time.Duration // The total time spent on block execution
 	MgasPerSecond   float64       // The million gas processed per second
@@ -86,21 +87,22 @@ func (s *ExecuteStats) reportMetrics() {
 	blockExecutionTimer.Update(s.Execution)             // The time spent on EVM processing
 	blockValidationTimer.Update(s.Validation)           // The time spent on block validation
 	blockCrossValidationTimer.Update(s.CrossValidation) // The time spent on stateless cross validation
-	triedbCommitTimer.Update(s.DatabaseCommit)          // Trie database commits are complete, we can mark them
+	snapshotCommitTimer.Update(s.SnapshotCommit)        // Snapshot commits are complete, we can mark them
+	triedbCommitTimer.Update(s.TrieDBCommit)            // Trie database commits are complete, we can mark them
 	blockWriteTimer.Update(s.BlockWrite)                // The time spent on block write
 	blockInsertTimer.Update(s.TotalTime)                // The total time spent on block execution
 	chainMgaspsMeter.Update(time.Duration(s.MgasPerSecond)) // TODO(rjl493456442) generalize the ResettingTimer
 
 	// Cache hit rates
-	accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheHit)
-	accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.AccountCacheMiss)
-	storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheHit)
-	storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StateStats.StorageCacheMiss)
+	accountCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheHit)
+	accountCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.AccountCacheMiss)
+	storageCacheHitPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheHit)
+	storageCacheMissPrefetchMeter.Mark(s.StatePrefetchCacheStats.StorageCacheMiss)
 
-	accountCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheHit)
-	accountCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.AccountCacheMiss)
-	storageCacheHitMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheHit)
-	storageCacheMissMeter.Mark(s.StateReadCacheStats.StateStats.StorageCacheMiss)
+	accountCacheHitMeter.Mark(s.StateReadCacheStats.AccountCacheHit)
+	accountCacheMissMeter.Mark(s.StateReadCacheStats.AccountCacheMiss)
+	storageCacheHitMeter.Mark(s.StateReadCacheStats.StorageCacheHit)
+	storageCacheMissMeter.Mark(s.StateReadCacheStats.StorageCacheMiss)
 }
 
 // slowBlockLog represents the JSON structure for slow block logging.
@@ -175,6 +177,14 @@ type slowBlockCodeCacheEntry struct {
 	MissBytes int64 `json:"miss_bytes"`
 }
 
+// calculateHitRate computes the cache hit rate as a percentage (0-100).
+func calculateHitRate(hits, misses int64) float64 {
+	if total := hits + misses; total > 0 {
+		return float64(hits) / float64(total) * 100.0
+	}
+	return 0.0
+}
+
 // durationToMs converts a time.Duration to milliseconds as a float64
 // with sub-millisecond precision for accurate cross-client metrics.
 func durationToMs(d time.Duration) float64 {
@@ -206,7 +216,7 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
 			ExecutionMs: durationToMs(s.Execution),
 			StateReadMs: durationToMs(s.AccountReads + s.StorageReads + s.CodeReads),
 			StateHashMs: durationToMs(s.AccountHashes + s.AccountUpdates + s.StorageUpdates),
-			CommitMs:    durationToMs(max(s.AccountCommits, s.StorageCommits) + s.DatabaseCommit + s.BlockWrite),
+			CommitMs:    durationToMs(max(s.AccountCommits, s.StorageCommits) + s.TrieDBCommit + s.SnapshotCommit + s.BlockWrite),
 			TotalMs:     durationToMs(s.TotalTime),
 		},
 		Throughput: slowBlockThru{
@@ -228,19 +238,19 @@ func (s *ExecuteStats) logSlow(block *types.Block, slowBlockThreshold time.Durat
 		},
 		Cache: slowBlockCache{
 			Account: slowBlockCacheEntry{
-				Hits:    s.StateReadCacheStats.StateStats.AccountCacheHit,
-				Misses:  s.StateReadCacheStats.StateStats.AccountCacheMiss,
-				HitRate: s.StateReadCacheStats.StateStats.AccountCacheHitRate(),
+				Hits:    s.StateReadCacheStats.AccountCacheHit,
+				Misses:  s.StateReadCacheStats.AccountCacheMiss,
+				HitRate: calculateHitRate(s.StateReadCacheStats.AccountCacheHit, s.StateReadCacheStats.AccountCacheMiss),
 			},
 			Storage: slowBlockCacheEntry{
-				Hits:    s.StateReadCacheStats.StateStats.StorageCacheHit,
-				Misses:  s.StateReadCacheStats.StateStats.StorageCacheMiss,
-				HitRate: s.StateReadCacheStats.StateStats.StorageCacheHitRate(),
+				Hits:    s.StateReadCacheStats.StorageCacheHit,
+				Misses:  s.StateReadCacheStats.StorageCacheMiss,
+				HitRate: calculateHitRate(s.StateReadCacheStats.StorageCacheHit, s.StateReadCacheStats.StorageCacheMiss),
 			},
 			Code: slowBlockCodeCacheEntry{
 				Hits:      s.StateReadCacheStats.CodeStats.CacheHit,
 				Misses:    s.StateReadCacheStats.CodeStats.CacheMiss,
-				HitRate:   s.StateReadCacheStats.CodeStats.HitRate(),
+				HitRate:   calculateHitRate(s.StateReadCacheStats.CodeStats.CacheHit, s.StateReadCacheStats.CodeStats.CacheMiss),
 				HitBytes:  s.StateReadCacheStats.CodeStats.CacheHitBytes,
 				MissBytes: s.StateReadCacheStats.CodeStats.CacheMissBytes,
 			},
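The `calculateHitRate` helper introduced in the diff above replaces per-struct hit-rate methods with one shared function that guards against division by zero when no lookups occurred. A self-contained demonstration of that exact logic:

```go
package main

import "fmt"

// calculateHitRate computes the cache hit rate as a percentage (0-100),
// returning 0 when there were no samples, as in the diff above.
func calculateHitRate(hits, misses int64) float64 {
	if total := hits + misses; total > 0 {
		return float64(hits) / float64(total) * 100.0
	}
	return 0.0
}

func main() {
	fmt.Println(calculateHitRate(3, 1)) // 75
	fmt.Println(calculateHitRate(0, 0)) // 0 (no samples, no division by zero)
}
```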
@@ -36,6 +36,7 @@ import (
 	"github.com/ethereum/go-ethereum/consensus"
 	"github.com/ethereum/go-ethereum/consensus/beacon"
 	"github.com/ethereum/go-ethereum/consensus/ethash"
+	"github.com/ethereum/go-ethereum/core/history"
 	"github.com/ethereum/go-ethereum/core/rawdb"
 	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/types"
@@ -156,7 +157,7 @@ func testBlockChainImport(chain types.Blocks, blockchain *BlockChain) error {
 			}
 			return err
 		}
-		statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), state.NewDatabase(blockchain.triedb, blockchain.codedb))
+		statedb, err := state.New(blockchain.GetBlockByHash(block.ParentHash()).Root(), blockchain.statedb)
 		if err != nil {
 			return err
 		}
@@ -4336,13 +4337,26 @@ func TestInsertChainWithCutoff(t *testing.T) {
 func testInsertChainWithCutoff(t *testing.T, cutoff uint64, ancientLimit uint64, genesis *Genesis, blocks []*types.Block, receipts []types.Receipts) {
 	// log.SetDefault(log.NewLogger(log.NewTerminalHandlerWithLevel(os.Stderr, log.LevelDebug, true)))
 
+	// Add a known pruning point for the duration of the test.
 	ghash := genesis.ToBlock().Hash()
 	cutoffBlock := blocks[cutoff-1]
+	history.PrunePoints[ghash] = &history.PrunePoint{
+		BlockNumber: cutoffBlock.NumberU64(),
+		BlockHash:   cutoffBlock.Hash(),
+	}
+	defer func() {
+		delete(history.PrunePoints, ghash)
+	}()
+
+	// Enable pruning in cache config.
+	config := DefaultConfig().WithStateScheme(rawdb.PathScheme)
+	config.ChainHistoryMode = history.KeepPostMerge
+
 	db, _ := rawdb.Open(rawdb.NewMemoryDatabase(), rawdb.OpenOptions{})
 	defer db.Close()
 
-	chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), DefaultConfig().WithStateScheme(rawdb.PathScheme))
+	options := DefaultConfig().WithStateScheme(rawdb.PathScheme)
+	chain, _ := NewBlockChain(db, genesis, beacon.New(ethash.NewFaker()), options)
 	defer chain.Stop()
 
 	var (
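The test change above registers an entry in the package-level `history.PrunePoints` map and removes it with a deferred `delete` so the fixture never leaks into other tests. A minimal sketch of that pattern, with `PrunePoint`/`prunePoints`/`withPrunePoint` as illustrative stand-ins rather than the real `history` package types:

```go
package main

import "fmt"

// PrunePoint is a stand-in for the fixture value stored in the diff above.
type PrunePoint struct {
	BlockNumber uint64
}

// prunePoints plays the role of the package-level registry being mutated.
var prunePoints = map[string]*PrunePoint{}

// withPrunePoint installs a fixture for the duration of fn and cleans it up
// with a deferred delete, mirroring the defer-based teardown in the test.
func withPrunePoint(key string, p *PrunePoint, fn func()) {
	prunePoints[key] = p
	defer delete(prunePoints, key)
	fn()
}

func main() {
	withPrunePoint("genesis", &PrunePoint{BlockNumber: 42}, func() {
		fmt.Println(len(prunePoints)) // 1: fixture visible inside the test body
	})
	fmt.Println(len(prunePoints)) // 0: cleaned up afterwards
}
```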
@@ -17,7 +17,6 @@
 package core
 
 import (
-	"context"
 	"fmt"
 	"math/big"
 
@@ -64,7 +63,7 @@ func (b *BlockGen) SetCoinbase(addr common.Address) {
 		panic("coinbase can only be set once")
 	}
 	b.header.Coinbase = addr
-	b.gasPool = NewGasPool(b.header.GasLimit)
+	b.gasPool = new(GasPool).AddGas(b.header.GasLimit)
 }
 
 // SetExtra sets the extra data field of the generated block.
@@ -118,12 +117,10 @@ func (b *BlockGen) addTx(bc *BlockChain, vmConfig vm.Config, tx *types.Transacti
 		evm = vm.NewEVM(blockContext, b.statedb, b.cm.config, vmConfig)
 	)
 	b.statedb.SetTxContext(tx.Hash(), len(b.txs))
-	receipt, err := ApplyTransaction(evm, b.gasPool, b.statedb, b.header, tx)
+	receipt, err := ApplyTransaction(evm, b.gasPool, b.statedb, b.header, tx, &b.header.GasUsed)
 	if err != nil {
 		panic(err)
 	}
-	b.header.GasUsed = b.gasPool.Used()
 
 	// Merge the tx-local access event into the "block-local" one, in order to collect
 	// all values, so that the witness can be built.
 	if b.statedb.Database().TrieDB().IsVerkle() {
@@ -412,7 +409,7 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
 	}
 
 	body := types.Body{Transactions: b.txs, Uncles: b.uncles, Withdrawals: b.withdrawals}
-	block, err := b.engine.FinalizeAndAssemble(context.Background(), cm, b.header, statedb, &body, b.receipts)
+	block, err := b.engine.FinalizeAndAssemble(cm, b.header, statedb, &body, b.receipts)
 	if err != nil {
 		panic(err)
 	}
@@ -482,14 +479,13 @@ func GenerateChainWithGenesis(genesis *Genesis, engine consensus.Engine, n int,
 	if genesis.Config != nil && genesis.Config.IsVerkle(genesis.Config.ChainID, 0) {
 		triedbConfig = triedb.VerkleDefaults
 	}
-	genesisTriedb := triedb.NewDatabase(db, triedbConfig)
-	block, err := genesis.Commit(db, genesisTriedb, nil)
+	triedb := triedb.NewDatabase(db, triedbConfig)
+	defer triedb.Close()
+	_, err := genesis.Commit(db, triedb, nil)
 	if err != nil {
-		genesisTriedb.Close()
 		panic(err)
 	}
-	genesisTriedb.Close()
-	blocks, receipts := GenerateChain(genesis.Config, block, engine, db, n, gen)
+	blocks, receipts := GenerateChain(genesis.Config, genesis.ToBlock(), engine, db, n, gen)
 	return db, blocks, receipts
 }
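One side of the diff above seeds the block generator's pool with `NewGasPool(b.header.GasLimit)` and later copies `b.gasPool.Used()` into `header.GasUsed`, while the other initializes via `new(GasPool).AddGas(...)` and lets `ApplyTransaction` accumulate gas through a pointer. A minimal sketch of the pool-with-`Used()` idea; this is an illustrative stand-in, not go-ethereum's actual `GasPool` implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// GasPool tracks the gas still available in a block under construction.
type GasPool struct{ limit, left uint64 }

// NewGasPool seeds the pool with the block gas limit.
func NewGasPool(limit uint64) *GasPool { return &GasPool{limit: limit, left: limit} }

var ErrGasLimitReached = errors.New("gas limit reached")

// SubGas deducts gas from the pool, failing once the limit is exhausted.
func (gp *GasPool) SubGas(amount uint64) error {
	if gp.left < amount {
		return ErrGasLimitReached
	}
	gp.left -= amount
	return nil
}

// Used reports how much gas has been drawn from the pool so far; the block
// generator can copy this into header.GasUsed after each transaction.
func (gp *GasPool) Used() uint64 { return gp.limit - gp.left }

func main() {
	gp := NewGasPool(30_000_000)
	_ = gp.SubGas(21_000) // cost of a plain value transfer
	fmt.Println(gp.Used()) // 21000
}
```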
@@ -58,14 +58,14 @@ var (
	// by a transaction is higher than what's left in the block.
	ErrGasLimitReached = errors.New("gas limit reached")
 
-	// ErrGasLimitOverflow is returned by the gas pool if the remaining gas
-	// exceeds the maximum value of uint64.
-	ErrGasLimitOverflow = errors.New("gas limit overflow")
-
	// ErrInsufficientFundsForTransfer is returned if the transaction sender doesn't
	// have enough funds for transfer(topmost call only).
	ErrInsufficientFundsForTransfer = errors.New("insufficient funds for transfer")
 
+	// ErrMaxInitCodeSizeExceeded is returned if creation transaction provides the init code bigger
+	// than init code size limit.
+	ErrMaxInitCodeSizeExceeded = errors.New("max initcode size exceeded")
+
	// ErrInsufficientBalanceWitness is returned if the transaction sender has enough
	// funds to cover the transfer, but not enough to pay for witness access/modification
	// costs for the transaction
@@ -1,169 +0,0 @@
-// Copyright 2026 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-package core
-
-import (
-	"encoding/binary"
-	"math/big"
-	"reflect"
-	"testing"
-
-	"github.com/ethereum/go-ethereum/common"
-	"github.com/ethereum/go-ethereum/consensus/beacon"
-	"github.com/ethereum/go-ethereum/consensus/ethash"
-	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/crypto"
-	"github.com/ethereum/go-ethereum/params"
-)
-
-var ethTransferTestCode = common.FromHex("6080604052600436106100345760003560e01c8063574ffc311461003957806366e41cb714610090578063f8a8fd6d1461009a575b600080fd5b34801561004557600080fd5b5061004e6100a4565b604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b6100986100ac565b005b6100a26100f5565b005b63deadbeef81565b7f38e80b5c85ba49b7280ccc8f22548faa62ae30d5a008a1b168fba5f47f5d1ee560405160405180910390a1631234567873ffffffffffffffffffffffffffffffffffffffff16ff5b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405160405180910390a163deadbeef73ffffffffffffffffffffffffffffffffffffffff166002348161014657fe5b046040516024016040516020818303038152906040527f66e41cb7000000000000000000000000000000000000000000000000000000007bffffffffffffffffffffffffffffffffffffffffffffffffffffffff19166020820180517bffffffffffffffffffffffffffffffffffffffffffffffffffffffff83818316178352505050506040518082805190602001908083835b602083106101fd57805182526020820191506020810190506020830392506101da565b6001836020036101000a03801982511681845116808217855250505050505090500191505060006040518083038185875af1925050503d806000811461025f576040519150601f19603f3d011682016040523d82523d6000602084013e610264565b606091505b50505056fea265627a7a723158202cce817a434785d8560c200762f972d453ccd30694481be7545f9035a512826364736f6c63430005100032")
-
-/*
-pragma solidity >=0.4.22 <0.6.0;
-
-contract TestLogs {
-
-	address public constant target_contract = 0x00000000000000000000000000000000DeaDBeef;
-	address payable constant selfdestruct_addr = 0x0000000000000000000000000000000012345678;
-
-	event Response(bool success, bytes data);
-	event TestEvent();
-	event TestEvent2();
-
-	function test() public payable {
-		emit TestEvent();
-		target_contract.call.value(msg.value/2)(abi.encodeWithSignature("test2()"));
-	}
-	function test2() public payable {
-		emit TestEvent2();
-		selfdestruct(selfdestruct_addr);
-	}
-}
-*/
-
-// TestEthTransferLogs tests EIP-7708 ETH transfer log output by simulating a
-// scenario including transaction, CALL and SELFDESTRUCT value transfers, and
-// also "ordinary" logs emitted. The same scenario is also tested with no value
-// transferred.
-func TestEthTransferLogs(t *testing.T) {
-	testEthTransferLogs(t, 1_000_000_000)
-	testEthTransferLogs(t, 0)
-}
-
-func testEthTransferLogs(t *testing.T, value uint64) {
-	var (
-		key1, _    = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
-		addr1      = crypto.PubkeyToAddress(key1.PublicKey)
-		addr2      = common.HexToAddress("cafebabe") // caller
-		addr3      = common.HexToAddress("deadbeef") // callee
-		addr4      = common.HexToAddress("12345678") // selfdestruct target
-		testEvent  = crypto.Keccak256Hash([]byte("TestEvent()"))
-		testEvent2 = crypto.Keccak256Hash([]byte("TestEvent2()"))
-		config     = *params.MergedTestChainConfig
-		signer     = types.LatestSigner(&config)
-		engine     = beacon.New(ethash.NewFaker())
-	)
-
-	//TODO remove this hacky config initialization when final Amsterdam config is available
-	config.AmsterdamTime = new(uint64)
-	blobConfig := *config.BlobScheduleConfig
-	blobConfig.Amsterdam = blobConfig.Osaka
-	config.BlobScheduleConfig = &blobConfig
-
-	gspec := &Genesis{
-		Config: &config,
-		Alloc: types.GenesisAlloc{
-			addr1: {Balance: newGwei(1000000000)},
-			addr2: {Code: ethTransferTestCode},
-			addr3: {Code: ethTransferTestCode},
-		},
-	}
-	_, blocks, receipts := GenerateChainWithGenesis(gspec, engine, 1, func(i int, b *BlockGen) {
-		tx := types.MustSignNewTx(key1, signer, &types.DynamicFeeTx{
-			ChainID:   gspec.Config.ChainID,
-			Nonce:     0,
-			To:        &addr2,
-			Gas:       500_000,
-			GasFeeCap: newGwei(5),
-			GasTipCap: newGwei(5),
-			Value:     big.NewInt(int64(value)),
-			Data:      common.FromHex("f8a8fd6d"),
-		})
-		b.AddTx(tx)
-	})
-
-	blockHash := blocks[0].Hash()
-	txHash := blocks[0].Transactions()[0].Hash()
-	addr2hash := func(addr common.Address) (hash common.Hash) {
-		copy(hash[12:], addr[:])
-		return
-	}
-	u256 := func(amount uint64) []byte {
-		data := make([]byte, 32)
-		binary.BigEndian.PutUint64(data[24:], amount)
-		return data
-	}
-
-	var expLogs = []*types.Log{
-		{
-			Address: params.SystemAddress,
-			Topics:  []common.Hash{params.EthTransferLogEvent, addr2hash(addr1), addr2hash(addr2)},
-			Data:    u256(value),
-		},
-		{
-			Address: addr2,
-			Topics:  []common.Hash{testEvent},
|
||||||
Data: nil,
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Address: params.SystemAddress,
|
|
||||||
Topics: []common.Hash{params.EthTransferLogEvent, addr2hash(addr2), addr2hash(addr3)},
|
|
||||||
Data: u256(value / 2),
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Address: addr3,
|
|
||||||
Topics: []common.Hash{testEvent2},
|
|
||||||
Data: nil,
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Address: params.SystemAddress,
|
|
||||||
Topics: []common.Hash{params.EthTransferLogEvent, addr2hash(addr3), addr2hash(addr4)},
|
|
||||||
Data: u256(value / 2),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
if value == 0 {
|
|
||||||
// no ETH transfer logs expected with zero value
|
|
||||||
expLogs = []*types.Log{expLogs[1], expLogs[3]}
|
|
||||||
}
|
|
||||||
for i, log := range expLogs {
|
|
||||||
log.BlockNumber = 1
|
|
||||||
log.BlockHash = blockHash
|
|
||||||
log.BlockTimestamp = 10
|
|
||||||
log.TxIndex = 0
|
|
||||||
log.TxHash = txHash
|
|
||||||
log.Index = uint(i)
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(expLogs) != len(receipts[0][0].Logs) {
|
|
||||||
t.Fatalf("Incorrect number of logs (expected: %d, got: %d)", len(expLogs), len(receipts[0][0].Logs))
|
|
||||||
}
|
|
||||||
for i, log := range receipts[0][0].Logs {
|
|
||||||
if !reflect.DeepEqual(expLogs[i], log) {
|
|
||||||
t.Fatalf("Incorrect log at index %d (expected: %v, got: %v)", i, expLogs[i], log)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
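The `addr2hash` and `u256` helpers above show how EIP-7708 transfer log fields are encoded: a 20-byte address becomes a 32-byte topic by left-padding with zeros, and the transferred amount becomes a 32-byte big-endian data word. A standalone sketch of just that encoding (helper names are local to this example, not go-ethereum API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// addrToTopic left-pads a 20-byte address into a 32-byte log topic,
// mirroring the addr2hash helper in the test above.
func addrToTopic(addr [20]byte) [32]byte {
	var topic [32]byte
	copy(topic[12:], addr[:])
	return topic
}

// u256 encodes an amount as a 32-byte big-endian word, as the test's
// u256 helper does for the transfer log data field.
func u256(amount uint64) []byte {
	data := make([]byte, 32)
	binary.BigEndian.PutUint64(data[24:], amount)
	return data
}

func main() {
	var addr [20]byte
	addr[19] = 0xef
	fmt.Printf("topic: %x\n", addrToTopic(addr))
	fmt.Printf("data:  %x\n", u256(1_000_000_000))
}
```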
12	core/evm.go

@@ -25,7 +25,6 @@ import (
 	"github.com/ethereum/go-ethereum/core/tracing"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/core/vm"
-	"github.com/ethereum/go-ethereum/params"
 	"github.com/holiman/uint256"
 )
@@ -45,7 +44,6 @@ func NewEVMBlockContext(header *types.Header, chain ChainContext, author *common
 		baseFee     *big.Int
 		blobBaseFee *big.Int
 		random      *common.Hash
-		slotNum     uint64
 	)

 	// If we don't have an explicit author (i.e. not mining), extract from the header
@@ -63,10 +61,6 @@ func NewEVMBlockContext(header *types.Header, chain ChainContext, author *common
 	if header.Difficulty.Sign() == 0 {
 		random = &header.MixDigest
 	}
-	if header.SlotNumber != nil {
-		slotNum = *header.SlotNumber
-	}
-
 	return vm.BlockContext{
 		CanTransfer: CanTransfer,
 		Transfer:    Transfer,
@@ -79,7 +73,6 @@ func NewEVMBlockContext(header *types.Header, chain ChainContext, author *common
 		BlobBaseFee: blobBaseFee,
 		GasLimit:    header.GasLimit,
 		Random:      random,
-		SlotNum:     slotNum,
 	}
 }
@@ -139,10 +132,7 @@ func CanTransfer(db vm.StateDB, addr common.Address, amount *uint256.Int) bool {
 }

 // Transfer subtracts amount from sender and adds amount to recipient using the given Db
-func Transfer(db vm.StateDB, sender, recipient common.Address, amount *uint256.Int, rules *params.Rules) {
+func Transfer(db vm.StateDB, sender, recipient common.Address, amount *uint256.Int) {
 	db.SubBalance(sender, amount, tracing.BalanceChangeTransfer)
 	db.AddBalance(recipient, amount, tracing.BalanceChangeTransfer)
-	if rules.IsAmsterdam && !amount.IsZero() && sender != recipient {
-		db.AddLog(types.EthTransferLog(sender, recipient, amount))
-	}
 }
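The condition guarding the master-side `AddLog` call in `Transfer` can be isolated: a transfer log is only emitted once Amsterdam rules are active, the amount is non-zero, and the transfer is not a self-transfer. A minimal sketch of just that guard (local names, not the go-ethereum API):

```go
package main

import "fmt"

// shouldEmitTransferLog mirrors the guard added to Transfer in the diff above:
// an EIP-7708 ETH transfer log is emitted only when Amsterdam is active, the
// amount is non-zero, and sender differs from recipient.
func shouldEmitTransferLog(isAmsterdam bool, amount uint64, sender, recipient [20]byte) bool {
	return isAmsterdam && amount != 0 && sender != recipient
}

func main() {
	var a, b [20]byte
	b[0] = 1
	fmt.Println(shouldEmitTransferLog(true, 100, a, b))  // true
	fmt.Println(shouldEmitTransferLog(true, 0, a, b))    // false: zero value
	fmt.Println(shouldEmitTransferLog(true, 100, a, a))  // false: self-transfer
	fmt.Println(shouldEmitTransferLog(false, 100, a, b)) // false: pre-Amsterdam
}
```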
@@ -21,87 +21,39 @@ import (
 	"math"
 )

-// GasPool tracks the amount of gas available for transaction execution
-// within a block, along with the cumulative gas consumed.
-type GasPool struct {
-	remaining      uint64
-	initial        uint64
-	cumulativeUsed uint64
-}
+// GasPool tracks the amount of gas available during execution of the transactions
+// in a block. The zero value is a pool with zero gas available.
+type GasPool uint64

-// NewGasPool initializes the gasPool with the given amount.
-func NewGasPool(amount uint64) *GasPool {
-	return &GasPool{
-		remaining: amount,
-		initial:   amount,
-	}
+// AddGas makes gas available for execution.
+func (gp *GasPool) AddGas(amount uint64) *GasPool {
+	if uint64(*gp) > math.MaxUint64-amount {
+		panic("gas pool pushed above uint64")
+	}
+	*(*uint64)(gp) += amount
+	return gp
 }

 // SubGas deducts the given amount from the pool if enough gas is
 // available and returns an error otherwise.
 func (gp *GasPool) SubGas(amount uint64) error {
-	if gp.remaining < amount {
+	if uint64(*gp) < amount {
 		return ErrGasLimitReached
 	}
-	gp.remaining -= amount
-	return nil
-}
-
-// ReturnGas adds the refunded gas back to the pool and updates
-// the cumulative gas usage accordingly.
-func (gp *GasPool) ReturnGas(returned uint64, gasUsed uint64) error {
-	if gp.remaining > math.MaxUint64-returned {
-		return fmt.Errorf("%w: remaining: %d, returned: %d", ErrGasLimitOverflow, gp.remaining, returned)
-	}
-	// The returned gas calculation differs across forks.
-	//
-	// - Pre-Amsterdam:
-	//   returned = purchased - remaining (refund included)
-	//
-	// - Post-Amsterdam:
-	//   returned = purchased - gasUsed (refund excluded)
-	gp.remaining += returned
-
-	// gasUsed = max(txGasUsed - gasRefund, calldataFloorGasCost)
-	// regardless of whether Amsterdam is activated or not.
-	gp.cumulativeUsed += gasUsed
+	*(*uint64)(gp) -= amount
 	return nil
 }

 // Gas returns the amount of gas remaining in the pool.
 func (gp *GasPool) Gas() uint64 {
-	return gp.remaining
+	return uint64(*gp)
 }

-// CumulativeUsed returns the amount of cumulative consumed gas (refunded included).
-func (gp *GasPool) CumulativeUsed() uint64 {
-	return gp.cumulativeUsed
-}
-
-// Used returns the amount of consumed gas.
-func (gp *GasPool) Used() uint64 {
-	if gp.initial < gp.remaining {
-		panic("gas used underflow")
-	}
-	return gp.initial - gp.remaining
-}
-
-// Snapshot returns the deep-copied object as the snapshot.
-func (gp *GasPool) Snapshot() *GasPool {
-	return &GasPool{
-		initial:        gp.initial,
-		remaining:      gp.remaining,
-		cumulativeUsed: gp.cumulativeUsed,
-	}
-}
-
-// Set sets the content of gasPool with the provided one.
-func (gp *GasPool) Set(other *GasPool) {
-	gp.initial = other.initial
-	gp.remaining = other.remaining
-	gp.cumulativeUsed = other.cumulativeUsed
+// SetGas sets the amount of gas with the provided number.
+func (gp *GasPool) SetGas(gas uint64) {
+	*(*uint64)(gp) = gas
 }

 func (gp *GasPool) String() string {
-	return fmt.Sprintf("initial: %d, remaining: %d, cumulative used: %d", gp.initial, gp.remaining, gp.cumulativeUsed)
+	return fmt.Sprintf("%d", *gp)
 }
@@ -34,7 +34,6 @@ func (g Genesis) MarshalJSON() ([]byte, error) {
 		BaseFee       *math.HexOrDecimal256 `json:"baseFeePerGas"`
 		ExcessBlobGas *math.HexOrDecimal64  `json:"excessBlobGas"`
 		BlobGasUsed   *math.HexOrDecimal64  `json:"blobGasUsed"`
-		SlotNumber    *uint64               `json:"slotNumber"`
 	}
 	var enc Genesis
 	enc.Config = g.Config
@@ -57,7 +56,6 @@ func (g Genesis) MarshalJSON() ([]byte, error) {
 	enc.BaseFee = (*math.HexOrDecimal256)(g.BaseFee)
 	enc.ExcessBlobGas = (*math.HexOrDecimal64)(g.ExcessBlobGas)
 	enc.BlobGasUsed = (*math.HexOrDecimal64)(g.BlobGasUsed)
-	enc.SlotNumber = g.SlotNumber
 	return json.Marshal(&enc)
 }
@@ -79,7 +77,6 @@ func (g *Genesis) UnmarshalJSON(input []byte) error {
 		BaseFee       *math.HexOrDecimal256 `json:"baseFeePerGas"`
 		ExcessBlobGas *math.HexOrDecimal64  `json:"excessBlobGas"`
 		BlobGasUsed   *math.HexOrDecimal64  `json:"blobGasUsed"`
-		SlotNumber    *uint64               `json:"slotNumber"`
 	}
 	var dec Genesis
 	if err := json.Unmarshal(input, &dec); err != nil {
@@ -136,8 +133,5 @@ func (g *Genesis) UnmarshalJSON(input []byte) error {
 	if dec.BlobGasUsed != nil {
 		g.BlobGasUsed = (*uint64)(dec.BlobGasUsed)
 	}
-	if dec.SlotNumber != nil {
-		g.SlotNumber = dec.SlotNumber
-	}
 	return nil
 }
@@ -73,7 +73,6 @@ type Genesis struct {
 	BaseFee       *big.Int `json:"baseFeePerGas"` // EIP-1559
 	ExcessBlobGas *uint64  `json:"excessBlobGas"` // EIP-4844
 	BlobGasUsed   *uint64  `json:"blobGasUsed"`   // EIP-4844
-	SlotNumber    *uint64  `json:"slotNumber"`    // EIP-7843
 }

 // copy copies the genesis.
@@ -123,7 +122,6 @@ func ReadGenesis(db ethdb.Database) (*Genesis, error) {
 	genesis.BaseFee = genesisHeader.BaseFee
 	genesis.ExcessBlobGas = genesisHeader.ExcessBlobGas
 	genesis.BlobGasUsed = genesisHeader.BlobGasUsed
-	genesis.SlotNumber = genesisHeader.SlotNumber

 	return &genesis, nil
 }
@@ -549,12 +547,6 @@ func (g *Genesis) toBlockWithRoot(root common.Hash) *types.Block {
 		if conf.IsPrague(num, g.Timestamp) {
 			head.RequestsHash = &types.EmptyRequestsHash
 		}
-		if conf.IsAmsterdam(num, g.Timestamp) {
-			head.SlotNumber = g.SlotNumber
-			if head.SlotNumber == nil {
-				head.SlotNumber = new(uint64)
-			}
-		}
 	}
 	return types.NewBlock(head, &types.Body{Withdrawals: withdrawals}, nil, trie.NewStackTrie(nil))
 }
@@ -308,7 +308,7 @@ func TestVerkleGenesisCommit(t *testing.T) {
 		},
 	}

-	expected := common.FromHex("1fd154971d9a386c4ec75fe7138c17efb569bfc2962e46e94a376ba997e3fadc")
+	expected := common.FromHex("b94812c1674dcf4f2bc98f4503d15f4cc674265135bcf3be6e4417b60881042a")
 	got := genesis.ToBlock().Root().Bytes()
 	if !bytes.Equal(got, expected) {
 		t.Fatalf("invalid genesis state root, expected %x, got %x", expected, got)
@@ -32,13 +32,10 @@ const (

 	// KeepPostMerge sets the history pruning point to the merge activation block.
 	KeepPostMerge
-
-	// KeepPostPrague sets the history pruning point to the Prague (Pectra) activation block.
-	KeepPostPrague
 )

 func (m HistoryMode) IsValid() bool {
-	return m <= KeepPostPrague
+	return m <= KeepPostMerge
 }

 func (m HistoryMode) String() string {
@@ -47,8 +44,6 @@ func (m HistoryMode) String() string {
 		return "all"
 	case KeepPostMerge:
 		return "postmerge"
-	case KeepPostPrague:
-		return "postprague"
 	default:
 		return fmt.Sprintf("invalid HistoryMode(%d)", m)
 	}
@@ -69,71 +64,31 @@ func (m *HistoryMode) UnmarshalText(text []byte) error {
 		*m = KeepAll
 	case "postmerge":
 		*m = KeepPostMerge
-	case "postprague":
-		*m = KeepPostPrague
 	default:
-		return fmt.Errorf(`unknown history mode %q, want "all", "postmerge", or "postprague"`, text)
+		return fmt.Errorf(`unknown sync mode %q, want "all" or "postmerge"`, text)
 	}
 	return nil
 }

-// PrunePoint identifies a specific block for history pruning.
 type PrunePoint struct {
 	BlockNumber uint64
 	BlockHash   common.Hash
 }

-// staticPrunePoints contains the pre-defined history pruning cutoff blocks for
-// known networks, keyed by history mode and genesis hash. They point to the first
-// block after the respective fork. Any pruning should truncate *up to* but
-// excluding the given block.
-var staticPrunePoints = map[HistoryMode]map[common.Hash]*PrunePoint{
-	KeepPostMerge: {
-		params.MainnetGenesisHash: {
-			BlockNumber: 15537393,
-			BlockHash:   common.HexToHash("0x55b11b918355b1ef9c5db810302ebad0bf2544255b530cdce90674d5887bb286"),
-		},
-		params.SepoliaGenesisHash: {
-			BlockNumber: 1450409,
-			BlockHash:   common.HexToHash("0x229f6b18ca1552f1d5146deceb5387333f40dc6275aebee3f2c5c4ece07d02db"),
-		},
-	},
-	KeepPostPrague: {
-		params.MainnetGenesisHash: {
-			BlockNumber: 22431084,
-			BlockHash:   common.HexToHash("0x50c8cab760b2948349c590461b166773c45d8f4858cccf5a43025ab2960152e8"),
-		},
-		params.SepoliaGenesisHash: {
-			BlockNumber: 7836331,
-			BlockHash:   common.HexToHash("0xe6571beb68bf24dbd8a6ba354518996920c55a3f8d8fdca423e391b8ad071f22"),
-		},
-	},
-}
-
-// HistoryPolicy describes the configured history pruning strategy. It captures
-// user intent as opposed to the actual DB state.
-type HistoryPolicy struct {
-	Mode HistoryMode
-	// Static prune point for PostMerge/PostPrague, nil otherwise.
-	Target *PrunePoint
-}
-
-// NewPolicy constructs a HistoryPolicy from the given mode and genesis hash.
-func NewPolicy(mode HistoryMode, genesisHash common.Hash) (HistoryPolicy, error) {
-	switch mode {
-	case KeepAll:
-		return HistoryPolicy{Mode: KeepAll}, nil
-
-	case KeepPostMerge, KeepPostPrague:
-		point := staticPrunePoints[mode][genesisHash]
-		if point == nil {
-			return HistoryPolicy{}, fmt.Errorf("%s history pruning not available for network %s", mode, genesisHash.Hex())
-		}
-		return HistoryPolicy{Mode: mode, Target: point}, nil
-
-	default:
-		return HistoryPolicy{}, fmt.Errorf("invalid history mode: %d", mode)
-	}
-}
+// PrunePoints contains the pre-defined history pruning cutoff blocks for known
+// networks. They point to the first post-merge block. Any pruning should
+// truncate *up to* but excluding the given block.
+var PrunePoints = map[common.Hash]*PrunePoint{
+	// mainnet
+	params.MainnetGenesisHash: {
+		BlockNumber: 15537393,
+		BlockHash:   common.HexToHash("0x55b11b918355b1ef9c5db810302ebad0bf2544255b530cdce90674d5887bb286"),
+	},
+	// sepolia
+	params.SepoliaGenesisHash: {
+		BlockNumber: 1450409,
+		BlockHash:   common.HexToHash("0x229f6b18ca1552f1d5146deceb5387333f40dc6275aebee3f2c5c4ece07d02db"),
+	},
+}

 // PrunedHistoryError is returned by APIs when the requested history is pruned.
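The `HistoryMode` changes above hinge on text marshalling staying in sync with the enum: every mode name must round-trip through `String` and `UnmarshalText`. A local sketch of that round-trip, including the `KeepPostPrague` value present on master (type and function names are local to this example):

```go
package main

import "fmt"

// historyMode is a local stand-in for the HistoryMode enum from the diff above.
type historyMode int

const (
	keepAll historyMode = iota
	keepPostMerge
	keepPostPrague
)

func (m historyMode) String() string {
	switch m {
	case keepAll:
		return "all"
	case keepPostMerge:
		return "postmerge"
	case keepPostPrague:
		return "postprague"
	default:
		return fmt.Sprintf("invalid historyMode(%d)", int(m))
	}
}

// parseHistoryMode is the inverse of String, mirroring UnmarshalText.
func parseHistoryMode(text string) (historyMode, error) {
	switch text {
	case "all":
		return keepAll, nil
	case "postmerge":
		return keepPostMerge, nil
	case "postprague":
		return keepPostPrague, nil
	default:
		return 0, fmt.Errorf("unknown history mode %q", text)
	}
}

func main() {
	for _, m := range []historyMode{keepAll, keepPostMerge, keepPostPrague} {
		back, err := parseHistoryMode(m.String())
		fmt.Println(m, back == m, err)
	}
}
```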
@@ -1,58 +0,0 @@
// Copyright 2026 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package history

import (
	"testing"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/params"
)

func TestNewPolicy(t *testing.T) {
	// KeepAll: no target.
	p, err := NewPolicy(KeepAll, params.MainnetGenesisHash)
	if err != nil {
		t.Fatalf("KeepAll: %v", err)
	}
	if p.Mode != KeepAll || p.Target != nil {
		t.Errorf("KeepAll: unexpected policy %+v", p)
	}

	// PostMerge: resolves known mainnet prune point.
	p, err = NewPolicy(KeepPostMerge, params.MainnetGenesisHash)
	if err != nil {
		t.Fatalf("PostMerge: %v", err)
	}
	if p.Target == nil || p.Target.BlockNumber != 15537393 {
		t.Errorf("PostMerge: unexpected target %+v", p.Target)
	}

	// PostPrague: resolves known mainnet prune point.
	p, err = NewPolicy(KeepPostPrague, params.MainnetGenesisHash)
	if err != nil {
		t.Fatalf("PostPrague: %v", err)
	}
	if p.Target == nil || p.Target.BlockNumber != 22431084 {
		t.Errorf("PostPrague: unexpected target %+v", p.Target)
	}

	// PostMerge on unknown network: error.
	if _, err = NewPolicy(KeepPostMerge, common.HexToHash("0xdeadbeef")); err == nil {
		t.Fatal("PostMerge unknown network: expected error")
	}
}
@@ -26,7 +26,6 @@ import (
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/consensus/misc/eip4844"
 	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/core/types/bal"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb"
 	"github.com/ethereum/go-ethereum/log"
@@ -425,8 +424,14 @@ func WriteBodyRLP(db ethdb.KeyValueWriter, hash common.Hash, number uint64, rlp
 // HasBody verifies the existence of a block body corresponding to the hash.
 func HasBody(db ethdb.Reader, hash common.Hash, number uint64) bool {
 	if isCanon(db, number, hash) {
+		// Block is in ancient store, but bodies can be pruned.
+		// Check if the block number is above the pruning tail.
+		tail, _ := db.Tail()
+		if number >= tail {
 			return true
 		}
+		return false
+	}
 	if has, err := db.Has(blockBodyKey(number, hash)); !has || err != nil {
 		return false
 	}
@@ -467,8 +472,14 @@ func DeleteBody(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
 // to a block.
 func HasReceipts(db ethdb.Reader, hash common.Hash, number uint64) bool {
 	if isCanon(db, number, hash) {
+		// Block is in ancient store, but receipts can be pruned.
+		// Check if the block number is above the pruning tail.
+		tail, _ := db.Tail()
+		if number >= tail {
 			return true
 		}
+		return false
+	}
 	if has, err := db.Has(blockReceiptsKey(number, hash)); !has || err != nil {
 		return false
 	}
@@ -606,55 +617,6 @@ func DeleteReceipts(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
 	}
 }

-// HasAccessList verifies the existence of a block access list for a block.
-func HasAccessList(db ethdb.Reader, hash common.Hash, number uint64) bool {
-	has, _ := db.Has(accessListKey(number, hash))
-	return has
-}
-
-// ReadAccessListRLP retrieves the RLP-encoded block access list for a block from KV.
-func ReadAccessListRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
-	data, _ := db.Get(accessListKey(number, hash))
-	return data
-}
-
-// ReadAccessList retrieves and decodes the block access list for a block.
-func ReadAccessList(db ethdb.Reader, hash common.Hash, number uint64) *bal.BlockAccessList {
-	data := ReadAccessListRLP(db, hash, number)
-	if len(data) == 0 {
-		return nil
-	}
-	b := new(bal.BlockAccessList)
-	if err := rlp.DecodeBytes(data, b); err != nil {
-		log.Error("Invalid BAL RLP", "hash", hash, "err", err)
-		return nil
-	}
-	return b
-}
-
-// WriteAccessList RLP-encodes and stores a block access list in the active KV store.
-func WriteAccessList(db ethdb.KeyValueWriter, hash common.Hash, number uint64, b *bal.BlockAccessList) {
-	bytes, err := rlp.EncodeToBytes(b)
-	if err != nil {
-		log.Crit("Failed to encode BAL", "err", err)
-	}
-	WriteAccessListRLP(db, hash, number, bytes)
-}
-
-// WriteAccessListRLP stores a pre-encoded block access list in the active KV store.
-func WriteAccessListRLP(db ethdb.KeyValueWriter, hash common.Hash, number uint64, encoded rlp.RawValue) {
-	if err := db.Put(accessListKey(number, hash), encoded); err != nil {
-		log.Crit("Failed to store BAL", "err", err)
-	}
-}
-
-// DeleteAccessList removes a block access list from the active KV store.
-func DeleteAccessList(db ethdb.KeyValueWriter, hash common.Hash, number uint64) {
-	if err := db.Delete(accessListKey(number, hash)); err != nil {
-		log.Crit("Failed to delete BAL", "err", err)
-	}
-}
-
 // ReceiptLogs is a barebone version of ReceiptForStorage which only keeps
 // the list of logs. When decoding a stored receipt into this object we
 // avoid creating the bloom filter.
@@ -709,25 +671,13 @@ func ReadBlock(db ethdb.Reader, hash common.Hash, number uint64) *types.Block {
 	if body == nil {
 		return nil
 	}
-	block := types.NewBlockWithHeader(header).WithBody(*body)
-
-	// Best-effort assembly of the block access list from the database.
-	if header.BlockAccessListHash != nil {
-		al := ReadAccessList(db, hash, number)
-		block = block.WithAccessListUnsafe(al)
-	}
-	return block
+	return types.NewBlockWithHeader(header).WithBody(*body)
 }

 // WriteBlock serializes a block into the database, header and body separately.
 func WriteBlock(db ethdb.KeyValueWriter, block *types.Block) {
-	hash, number := block.Hash(), block.NumberU64()
-	WriteBody(db, hash, number, block.Body())
+	WriteBody(db, block.Hash(), block.NumberU64(), block.Body())
 	WriteHeader(db, block.Header())
-
-	if accessList := block.AccessList(); accessList != nil {
-		WriteAccessList(db, hash, number, accessList)
-	}
 }

 // WriteAncientBlocks writes entire block data into ancient store and returns the total written size.
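The tail check added to `HasBody` and `HasReceipts` reduces to a single comparison: a canonical block's data is still in the ancient store only if its number is at or above the pruning tail. A minimal sketch of that membership rule (function name is local to this example):

```go
package main

import "fmt"

// hasAncientData captures the tail check from the diff above: ancient-store
// data for a canonical block is present iff the block number has not been
// pruned away, i.e. it is at or above the pruning tail.
func hasAncientData(tail, number uint64) bool {
	return number >= tail
}

func main() {
	fmt.Println(hasAncientData(1000, 1500)) // true: above the tail
	fmt.Println(hasAncientData(1000, 1000)) // true: the tail block itself
	fmt.Println(hasAncientData(1000, 999))  // false: pruned away
}
```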
@@ -27,12 +27,10 @@ import (

 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/core/types/bal"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/crypto/keccak"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/rlp"
-	"github.com/holiman/uint256"
 )

 // Tests block header storage and retrieval operations.
@@ -901,78 +899,3 @@ func TestHeadersRLPStorage(t *testing.T) {
 	checkSequence(1, 1) // Only block 1
 	checkSequence(1, 2) // Genesis + block 1
 }
-
-func makeTestBAL(t *testing.T) (rlp.RawValue, *bal.BlockAccessList) {
-	t.Helper()
-
-	cb := bal.NewConstructionBlockAccessList()
-	addr := common.HexToAddress("0x1111111111111111111111111111111111111111")
-	cb.AccountRead(addr)
-	cb.StorageRead(addr, common.BytesToHash([]byte{0x01}))
-	cb.StorageWrite(0, addr, common.BytesToHash([]byte{0x02}), common.BytesToHash([]byte{0xaa}))
-	cb.BalanceChange(0, addr, uint256.NewInt(100))
-	cb.NonceChange(addr, 0, 1)
-
-	var buf bytes.Buffer
-	if err := cb.EncodeRLP(&buf); err != nil {
-		t.Fatalf("failed to encode BAL: %v", err)
-	}
-	encoded := buf.Bytes()
-
-	var decoded bal.BlockAccessList
-	if err := rlp.DecodeBytes(encoded, &decoded); err != nil {
-		t.Fatalf("failed to decode BAL: %v", err)
-	}
-	return encoded, &decoded
-}
-
-// TestBALStorage tests write/read/delete of BALs in the KV store.
-func TestBALStorage(t *testing.T) {
-	db := NewMemoryDatabase()
-
-	hash := common.BytesToHash([]byte{0x03, 0x14})
-	number := uint64(42)
-
-	// Check that no BAL exists in a new database.
-	if HasAccessList(db, hash, number) {
-		t.Fatal("BAL found in new database")
-	}
-	if b := ReadAccessList(db, hash, number); b != nil {
-		t.Fatalf("non existent BAL returned: %v", b)
-	}
-
-	// Write a BAL and verify it can be read back.
-	encoded, testBAL := makeTestBAL(t)
-	WriteAccessList(db, hash, number, testBAL)
-
-	if !HasAccessList(db, hash, number) {
-		t.Fatal("HasAccessList returned false after write")
-	}
-	if blob := ReadAccessListRLP(db, hash, number); len(blob) == 0 {
-		t.Fatal("ReadAccessListRLP returned empty after write")
-	}
-	if b := ReadAccessList(db, hash, number); b == nil {
-		t.Fatal("ReadAccessList returned nil after write")
-	} else if b.Hash() != testBAL.Hash() {
-		t.Fatalf("BAL hash mismatch: got %x, want %x", b.Hash(), testBAL.Hash())
-	}
-
-	// Also test WriteAccessListRLP with pre-encoded data.
-	hash2 := common.BytesToHash([]byte{0x03, 0x15})
-	WriteAccessListRLP(db, hash2, number, encoded)
-	if b := ReadAccessList(db, hash2, number); b == nil {
-		t.Fatal("ReadAccessList returned nil after WriteAccessListRLP")
-	} else if b.Hash() != testBAL.Hash() {
-		t.Fatalf("BAL hash mismatch after WriteAccessListRLP: got %x, want %x", b.Hash(), testBAL.Hash())
|
|
||||||
}
|
|
||||||
|
|
||||||
// Delete the BAL and verify it's gone.
|
|
||||||
DeleteAccessList(db, hash, number)
|
|
||||||
|
|
||||||
if HasAccessList(db, hash, number) {
|
|
||||||
t.Fatal("HasAccessList returned true after delete")
|
|
||||||
}
|
|
||||||
if b := ReadAccessList(db, hash, number); b != nil {
|
|
||||||
t.Fatalf("deleted BAL returned: %v", b)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
|
||||||
@@ -260,46 +260,6 @@ func basicWrite(t *testing.T, newFn func(kinds []string) ethdb.AncientStore) {
 	if err != nil {
 		t.Fatalf("Failed to write ancient data %v", err)
 	}
-
-	// Write should work after truncating from tail but over the head
-	db.TruncateTail(200)
-	head, err := db.Ancients()
-	if err != nil {
-		t.Fatalf("Failed to retrieve head ancients %v", err)
-	}
-	tail, err := db.Tail()
-	if err != nil {
-		t.Fatalf("Failed to retrieve tail ancients %v", err)
-	}
-	if head != 200 || tail != 200 {
-		t.Fatalf("Ancient head and tail are not expected")
-	}
-	_, err = db.ModifyAncients(func(op ethdb.AncientWriteOp) error {
-		offset := uint64(200)
-		for i := 0; i < 100; i++ {
-			if err := op.AppendRaw("a", offset+uint64(i), dataA[i]); err != nil {
-				return err
-			}
-			if err := op.AppendRaw("b", offset+uint64(i), dataB[i]); err != nil {
-				return err
-			}
-		}
-		return nil
-	})
-	if err != nil {
-		t.Fatalf("Failed to write ancient data %v", err)
-	}
-	head, err = db.Ancients()
-	if err != nil {
-		t.Fatalf("Failed to retrieve head ancients %v", err)
-	}
-	tail, err = db.Tail()
-	if err != nil {
-		t.Fatalf("Failed to retrieve tail ancients %v", err)
-	}
-	if head != 300 || tail != 200 {
-		t.Fatalf("Ancient head and tail are not expected")
-	}
 }
 
 func nonMutable(t *testing.T, newFn func(kinds []string) ethdb.AncientStore) {
@@ -35,7 +35,6 @@ import (
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb"
 	"github.com/ethereum/go-ethereum/ethdb/memorydb"
-	"github.com/ethereum/go-ethereum/internal/tablewriter"
 	"github.com/ethereum/go-ethereum/log"
 	"golang.org/x/sync/errgroup"
 )
@@ -413,7 +412,6 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
 		tds             stat
 		numHashPairings stat
 		hashNumPairings stat
-		blockAccessList stat
 		legacyTries     stat
 		stateLookups    stat
 		accountTries    stat
@@ -479,15 +477,12 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
 			bodies.add(size)
 		case bytes.HasPrefix(key, blockReceiptsPrefix) && len(key) == (len(blockReceiptsPrefix)+8+common.HashLength):
 			receipts.add(size)
-		case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix) && len(key) == (len(headerPrefix)+8+common.HashLength+len(headerTDSuffix)):
+		case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerTDSuffix):
 			tds.add(size)
-		case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix) && len(key) == (len(headerPrefix)+8+len(headerHashSuffix)):
+		case bytes.HasPrefix(key, headerPrefix) && bytes.HasSuffix(key, headerHashSuffix):
 			numHashPairings.add(size)
 		case bytes.HasPrefix(key, headerNumberPrefix) && len(key) == (len(headerNumberPrefix)+common.HashLength):
 			hashNumPairings.add(size)
-		case bytes.HasPrefix(key, accessListPrefix) && len(key) == len(accessListPrefix)+8+common.HashLength:
-			blockAccessList.add(size)
-
 		case IsLegacyTrieNode(key, it.Value()):
 			legacyTries.add(size)
 		case bytes.HasPrefix(key, stateIDPrefix) && len(key) == len(stateIDPrefix)+common.HashLength:
@@ -629,7 +624,6 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
 		{"Key-Value store", "Difficulties (deprecated)", tds.sizeString(), tds.countString()},
 		{"Key-Value store", "Block number->hash", numHashPairings.sizeString(), numHashPairings.countString()},
 		{"Key-Value store", "Block hash->number", hashNumPairings.sizeString(), hashNumPairings.countString()},
-		{"Key-Value store", "Block accessList", blockAccessList.sizeString(), blockAccessList.countString()},
 		{"Key-Value store", "Transaction index", txLookups.sizeString(), txLookups.countString()},
 		{"Key-Value store", "Log index filter-map rows", filterMapRows.sizeString(), filterMapRows.countString()},
 		{"Key-Value store", "Log index last-block-of-map", filterMapLastBlock.sizeString(), filterMapLastBlock.countString()},
@@ -669,7 +663,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
 		total.Add(uint64(ancient.size()))
 	}
 
-	table := tablewriter.NewWriter(os.Stdout)
+	table := NewTableWriter(os.Stdout)
 	table.SetHeader([]string{"Database", "Category", "Size", "Items"})
 	table.SetFooter([]string{"", "Total", common.StorageSize(total.Load()).String(), fmt.Sprintf("%d", count.Load())})
 	table.AppendBulk(stats)
@@ -16,7 +16,7 @@
 
 // Naive stub implementation for tablewriter
 
-package tablewriter
+package rawdb
 
 import (
 	"errors"
@@ -37,7 +37,7 @@ type Table struct {
 	rows   [][]string
 }
 
-func NewWriter(w io.Writer) *Table {
+func NewTableWriter(w io.Writer) *Table {
 	return &Table{out: w}
 }
 
@@ -58,12 +58,12 @@ func (t *Table) SetFooter(footer []string) {
 	t.footer = footer
 }
 
-// AppendBulk appends one or more data rows to the table.
+// AppendBulk sets all data rows for the table at once, replacing any existing rows.
 //
 // Each row must have the same number of columns as the headers, or validation
 // will fail during Render().
 func (t *Table) AppendBulk(rows [][]string) {
-	t.rows = append(t.rows, rows...)
+	t.rows = rows
 }
 
 // Render outputs the complete table to the configured writer. The table is rendered
@@ -14,7 +14,7 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
 
-package tablewriter
+package rawdb
 
 import (
 	"bytes"
@@ -24,7 +24,7 @@ import (
 
 func TestTableWriterTinyGo(t *testing.T) {
 	var buf bytes.Buffer
-	table := NewWriter(&buf)
+	table := NewTableWriter(&buf)
 
 	headers := []string{"Database", "Size", "Items", "Status"}
 	rows := [][]string{
@@ -48,7 +48,7 @@ func TestTableWriterValidationErrors(t *testing.T) {
 	// Test missing headers
 	t.Run("MissingHeaders", func(t *testing.T) {
 		var buf bytes.Buffer
-		table := NewWriter(&buf)
+		table := NewTableWriter(&buf)
 
 		rows := [][]string{{"x", "y", "z"}}
 
@@ -63,7 +63,7 @@ func TestTableWriterValidationErrors(t *testing.T) {
 
 	t.Run("NotEnoughRowColumns", func(t *testing.T) {
 		var buf bytes.Buffer
-		table := NewWriter(&buf)
+		table := NewTableWriter(&buf)
 
 		headers := []string{"A", "B", "C"}
 		badRows := [][]string{
@@ -82,7 +82,7 @@ func TestTableWriterValidationErrors(t *testing.T) {
 
 	t.Run("TooManyRowColumns", func(t *testing.T) {
 		var buf bytes.Buffer
-		table := NewWriter(&buf)
+		table := NewTableWriter(&buf)
 
 		headers := []string{"A", "B", "C"}
 		badRows := [][]string{
@@ -102,7 +102,7 @@ func TestTableWriterValidationErrors(t *testing.T) {
 	// Test mismatched footer columns
 	t.Run("MismatchedFooterColumns", func(t *testing.T) {
 		var buf bytes.Buffer
-		table := NewWriter(&buf)
+		table := NewTableWriter(&buf)
 
 		headers := []string{"A", "B", "C"}
 		rows := [][]string{{"x", "y", "z"}}
@@ -59,7 +59,7 @@ const freezerTableSize = 2 * 1000 * 1000 * 1000
 // - The in-order data ensures that disk reads are always optimized.
 type Freezer struct {
 	datadir string
-	head    atomic.Uint64 // Number of items stored (including items removed from tail)
+	frozen  atomic.Uint64 // Number of items already frozen
 	tail    atomic.Uint64 // Number of the first stored item in the freezer
 
 	// This lock synchronizes writers and the truncate operation, as well as
@@ -97,12 +97,12 @@ func NewFreezer(datadir string, namespace string, readonly bool, maxTableSize ui
 			return nil, errSymlinkDatadir
 		}
 	}
-	// Leveldb/Pebble uses LOCK as the filelock filename. To prevent the
-	// name collision, we use FLOCK as the lock name.
 	flockFile := filepath.Join(datadir, "FLOCK")
 	if err := os.MkdirAll(filepath.Dir(flockFile), 0755); err != nil {
 		return nil, err
 	}
+	// Leveldb uses LOCK as the filelock filename. To prevent the
+	// name collision, we use FLOCK as the lock name.
 	lock := flock.New(flockFile)
 	tryLock := lock.TryLock
 	if readonly {
@@ -213,7 +213,7 @@ func (f *Freezer) AncientBytes(kind string, id, offset, length uint64) ([]byte,
 
 // Ancients returns the length of the frozen items.
 func (f *Freezer) Ancients() (uint64, error) {
-	return f.head.Load(), nil
+	return f.frozen.Load(), nil
 }
 
 // Tail returns the number of first stored item in the freezer.
@@ -252,7 +252,7 @@ func (f *Freezer) ModifyAncients(fn func(ethdb.AncientWriteOp) error) (writeSize
 	defer f.writeLock.Unlock()
 
 	// Roll back all tables to the starting position in case of error.
-	prevItem := f.head.Load()
+	prevItem := f.frozen.Load()
 	defer func() {
 		if err != nil {
 			// The write operation has failed. Go back to the previous item position.
@@ -273,7 +273,7 @@ func (f *Freezer) ModifyAncients(fn func(ethdb.AncientWriteOp) error) (writeSize
 	if err != nil {
 		return 0, err
 	}
-	f.head.Store(item)
+	f.frozen.Store(item)
 	return writeSize, nil
 }
 
@@ -286,7 +286,7 @@ func (f *Freezer) TruncateHead(items uint64) (uint64, error) {
 	f.writeLock.Lock()
 	defer f.writeLock.Unlock()
 
-	oitems := f.head.Load()
+	oitems := f.frozen.Load()
 	if oitems <= items {
 		return oitems, nil
 	}
@@ -295,7 +295,7 @@ func (f *Freezer) TruncateHead(items uint64) (uint64, error) {
 			return 0, err
 		}
 	}
-	f.head.Store(items)
+	f.frozen.Store(items)
 	return oitems, nil
 }
 
@@ -320,11 +320,6 @@ func (f *Freezer) TruncateTail(tail uint64) (uint64, error) {
 		}
 	}
 	f.tail.Store(tail)
-
-	// Update the head if the requested tail exceeds the current head
-	if f.head.Load() < tail {
-		f.head.Store(tail)
-	}
 	return old, nil
 }
 
@@ -384,7 +379,7 @@ func (f *Freezer) validate() error {
 		prunedTail = &tmp
 	}
 
-	f.head.Store(head)
+	f.frozen.Store(head)
 	f.tail.Store(*prunedTail)
 	return nil
 }
@@ -419,7 +414,7 @@ func (f *Freezer) repair() error {
 		}
 	}
 
-	f.head.Store(head)
+	f.frozen.Store(head)
 	f.tail.Store(prunedTail)
 	return nil
 }
@@ -113,7 +113,7 @@ func (t *memoryTable) truncateTail(items uint64) error {
 		return nil
 	}
 	if t.items < items {
-		return t.reset(items)
+		return errors.New("truncation above head")
 	}
 	for i := uint64(0); i < items-t.offset; i++ {
 		if t.size > uint64(len(t.data[i])) {
@@ -127,16 +127,6 @@ func (t *memoryTable) truncateTail(items uint64) error {
 	return nil
 }
 
-// reset clears the entire table and sets both the head and tail to the given
-// value. It assumes the caller holds the lock and that tail > t.items.
-func (t *memoryTable) reset(offset uint64) error {
-	t.size = 0
-	t.data = nil
-	t.items = offset
-	t.offset = offset
-	return nil
-}
-
 // commit merges the given item batch into table. It's presumed that the
 // batch is ordered and continuous with table.
 func (t *memoryTable) commit(batch [][]byte) error {
@@ -397,9 +387,6 @@ func (f *MemoryFreezer) TruncateTail(tail uint64) (uint64, error) {
 		}
 	}
 	f.tail = tail
-	if f.items < tail {
-		f.items = tail
-	}
 	return old, nil
 }
 
@@ -707,13 +707,12 @@ func (t *freezerTable) truncateTail(items uint64) error {
 	t.lock.Lock()
 	defer t.lock.Unlock()
 
-	// Short-circuit if the requested tail deletion points to a stale position
+	// Ensure the given truncate target falls in the correct range
 	if t.itemHidden.Load() >= items {
 		return nil
 	}
-	// If the requested tail exceeds the current head, reset the entire table
 	if t.items.Load() < items {
-		return t.resetTo(items)
+		return errors.New("truncation above head")
 	}
 	// Load the new tail index by the given new tail position
 	var (
@@ -823,10 +822,11 @@ func (t *freezerTable) truncateTail(items uint64) error {
 	shorten := indexEntrySize * int64(newDeleted-deleted)
 	if t.metadata.flushOffset <= shorten {
 		return fmt.Errorf("invalid index flush offset: %d, shorten: %d", t.metadata.flushOffset, shorten)
-	}
-	if err := t.metadata.setFlushOffset(t.metadata.flushOffset-shorten, true); err != nil {
-		return err
+	} else {
+		if err := t.metadata.setFlushOffset(t.metadata.flushOffset-shorten, true); err != nil {
+			return err
+		}
 	}
 	// Retrieve the new size and update the total size counter
 	newSize, err := t.sizeNolock()
 	if err != nil {
@@ -836,59 +836,6 @@ func (t *freezerTable) truncateTail(items uint64) error {
 	return nil
 }
 
-// resetTo clears the entire table and sets both the head and tail to the given
-// value. It assumes the caller holds the lock and that tail > t.items.
-func (t *freezerTable) resetTo(tail uint64) error {
-	// Sync the entire table before resetting, eliminating the potential
-	// data corruption.
-	err := t.doSync()
-	if err != nil {
-		return err
-	}
-	// Update the index file to reflect the new offset
-	if err := t.index.Close(); err != nil {
-		return err
-	}
-	entry := &indexEntry{
-		filenum: t.headId + 1,
-		offset:  uint32(tail),
-	}
-	if err := reset(t.index.Name(), entry.append(nil)); err != nil {
-		return err
-	}
-	if err := t.metadata.setVirtualTail(tail, true); err != nil {
-		return err
-	}
-	if err := t.metadata.setFlushOffset(indexEntrySize, true); err != nil {
-		return err
-	}
-	t.index, err = openFreezerFileForAppend(t.index.Name())
-	if err != nil {
-		return err
-	}
-
-	// Purge all the existing data file
-	if err := t.head.Close(); err != nil {
-		return err
-	}
-	t.headId = t.headId + 1
-	t.tailId = t.headId
-	t.headBytes = 0
-
-	t.head, err = t.openFile(t.headId, openFreezerFileTruncated)
-	if err != nil {
-		return err
-	}
-	t.releaseFilesBefore(t.headId, true)
-
-	t.items.Store(tail)
-	t.itemOffset.Store(tail)
-	t.itemHidden.Store(tail)
-	t.sizeGauge.Update(0)
-
-	return nil
-}
-
 // Close closes all opened files and finalizes the freezer table for use.
 // This operation must be completed before shutdown to prevent the loss of
 // recent writes.
@@ -1300,20 +1247,25 @@ func (t *freezerTable) doSync() error {
 	if t.index == nil || t.head == nil || t.metadata.file == nil {
 		return errClosed
 	}
-	if err := t.index.Sync(); err != nil {
-		return err
-	}
-	if err := t.head.Sync(); err != nil {
-		return err
+	var err error
+	trackError := func(e error) {
+		if e != nil && err == nil {
+			err = e
+		}
 	}
+	trackError(t.index.Sync())
+	trackError(t.head.Sync())
+
 	// A crash may occur before the offset is updated, leaving the offset
-	// points to an old position. If so, the extra items above the offset
+	// points to a old position. If so, the extra items above the offset
 	// will be truncated during the next run.
 	stat, err := t.index.Stat()
 	if err != nil {
 		return err
 	}
-	return t.metadata.setFlushOffset(stat.Size(), true)
+	offset := stat.Size()
+	trackError(t.metadata.setFlushOffset(offset, true))
+	return err
 }
 
 func (t *freezerTable) dumpIndexStdout(start, stop int64) {
@@ -1139,7 +1139,6 @@ const (
 	opTruncateHeadAll
 	opTruncateTail
 	opTruncateTailAll
-	opTruncateTailOverHead
 	opCheckAll
 	opMax // boundary value, not an actual op
 )
@@ -1227,11 +1226,6 @@ func (randTest) Generate(r *rand.Rand, size int) reflect.Value {
 			step.target = deleted + uint64(len(items))
 			items = items[:0]
 			deleted = step.target
-		case opTruncateTailOverHead:
-			newDeleted := deleted + uint64(len(items)) + 10
-			step.target = newDeleted
-			deleted = newDeleted
-			items = items[:0]
 		}
 		steps = append(steps, step)
 	}
@@ -1274,7 +1268,7 @@ func runRandTest(rt randTest) bool {
 			for i := 0; i < len(step.items); i++ {
 				batch.AppendRaw(step.items[i], step.blobs[i])
 			}
-			rt[i].err = batch.commit()
+			batch.commit()
 			values = append(values, step.blobs...)
 
 		case opRetrieve:
@@ -1296,28 +1290,24 @@ func runRandTest(rt randTest) bool {
 			}
 
 		case opTruncateHead:
-			rt[i].err = f.truncateHead(step.target)
+			f.truncateHead(step.target)
 
 			length := f.items.Load() - f.itemHidden.Load()
 			values = values[:length]
 
 		case opTruncateHeadAll:
-			rt[i].err = f.truncateHead(step.target)
+			f.truncateHead(step.target)
 			values = nil
 
 		case opTruncateTail:
 			prev := f.itemHidden.Load()
-			rt[i].err = f.truncateTail(step.target)
+			f.truncateTail(step.target)
 
 			truncated := f.itemHidden.Load() - prev
 			values = values[truncated:]
 
 		case opTruncateTailAll:
-			rt[i].err = f.truncateTail(step.target)
-			values = nil
-
-		case opTruncateTailOverHead:
-			rt[i].err = f.truncateTail(step.target)
+			f.truncateTail(step.target)
 			values = nil
 		}
 		// Abort the test on error.
@@ -1643,43 +1633,3 @@ func TestFreezerAncientBytes(t *testing.T) {
 		})
 	}
 }
-
-func TestTruncateOverHead(t *testing.T) {
-	t.Parallel()
-
-	fn := fmt.Sprintf("t-%d", rand.Uint64())
-	f, err := newTable(os.TempDir(), fn, metrics.NewMeter(), metrics.NewMeter(), metrics.NewGauge(), 100, freezerTableConfig{noSnappy: true}, false)
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	// Tail truncation on an empty table
-	if err := f.truncateTail(10); err != nil {
-		t.Fatal(err)
-	}
-	batch := f.newBatch()
-	data := getChunk(10, 1)
-	require.NoError(t, batch.AppendRaw(uint64(10), data))
-	require.NoError(t, batch.commit())
-
-	got, err := f.RetrieveItems(uint64(10), 1, 0)
-	require.NoError(t, err)
-	if !bytes.Equal(got[0], data) {
-		t.Fatalf("Unexpected bytes, want: %v, got: %v", data, got[0])
-	}
-
-	// Tail truncation on the non-empty table
-	if err := f.truncateTail(20); err != nil {
-		t.Fatal(err)
-	}
-	batch = f.newBatch()
-	data = getChunk(10, 1)
-	require.NoError(t, batch.AppendRaw(uint64(20), data))
-	require.NoError(t, batch.commit())
-
-	got, err = f.RetrieveItems(uint64(20), 1, 0)
-	require.NoError(t, err)
-	if !bytes.Equal(got[0], data) {
-		t.Fatalf("Unexpected bytes, want: %v, got: %v", data, got[0])
-	}
-}
@@ -22,13 +22,6 @@ import (
 	"path/filepath"
 )
 
-func atomicRename(src, dest string) error {
-	if err := os.Rename(src, dest); err != nil {
-		return err
-	}
-	return syncDir(filepath.Dir(src))
-}
-
 // copyFrom copies data from 'srcPath' at offset 'offset' into 'destPath'.
 // The 'destPath' is created if it doesn't exist, otherwise it is overwritten.
 // Before the copy is executed, there is a callback can be registered to
@@ -80,48 +73,13 @@ func copyFrom(srcPath, destPath string, offset uint64, before func(f *os.File) e
 		return err
 	}
 	f = nil
-	return atomicRename(fname, destPath)
-}
-
-// reset atomically replaces the file at the given path with the provided content.
-func reset(path string, content []byte) error {
-	// Create a temp file in the same dir where we want it to wind up
-	f, err := os.CreateTemp(filepath.Dir(path), "*")
-	if err != nil {
-		return err
-	}
-	fname := f.Name()
-
-	// Clean up the leftover file
-	defer func() {
-		if f != nil {
-			f.Close()
-		}
-		os.Remove(fname)
-	}()
-
-	// Write the content into the temp file
-	_, err = f.Write(content)
-	if err != nil {
-		return err
-	}
-	// Permanently persist the content into disk
-	if err := f.Sync(); err != nil {
-		return err
-	}
-	if err := f.Close(); err != nil {
-		return err
-	}
-	f = nil
-
-	return atomicRename(fname, path)
+	return os.Rename(fname, destPath)
 }
 
 // openFreezerFileForAppend opens a freezer table file and seeks to the end
 func openFreezerFileForAppend(filename string) (*os.File, error) {
 	// Open the file without the O_APPEND flag
-	// because it has differing behavior during Truncate operations
+	// because it has differing behaviour during Truncate operations
 	// on different OS's
 	file, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE, 0644)
 	if err != nil {
@@ -1,49 +0,0 @@
-// Copyright 2022 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-//go:build !windows
-// +build !windows
-
-package rawdb
-
-import (
-	"errors"
-	"os"
-	"syscall"
-)
-
-// syncDir ensures that the directory metadata (e.g. newly renamed files)
-// is flushed to durable storage.
-func syncDir(name string) error {
-	f, err := os.Open(name)
-	if err != nil {
-		return err
-	}
-	defer f.Close()
-
-	// Some file systems do not support fsyncing directories (e.g. some FUSE
-	// mounts). Ignore EINVAL in those cases.
-	if err := f.Sync(); err != nil {
-		if errors.Is(err, os.ErrInvalid) {
-			return nil
-		}
-		if patherr, ok := err.(*os.PathError); ok && patherr.Err == syscall.EINVAL {
-			return nil
-		}
-		return err
-	}
-	return nil
-}
@@ -1,26 +0,0 @@
-// Copyright 2022 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-//go:build windows
-// +build windows
-
-package rawdb
-
-// syncDir is a no-op on Windows. Fsyncing a directory handle is not
-// supported and returns "Access is denied".
-func syncDir(name string) error {
-	return nil
-}
@@ -112,7 +112,6 @@ var (
 	blockBodyPrefix     = []byte("b") // blockBodyPrefix + num (uint64 big endian) + hash -> block body
 	blockReceiptsPrefix = []byte("r") // blockReceiptsPrefix + num (uint64 big endian) + hash -> block receipts
-	accessListPrefix    = []byte("j") // accessListPrefix + num (uint64 big endian) + hash -> block access list
 
 	txLookupPrefix  = []byte("l") // txLookupPrefix + hash -> transaction/receipt lookup metadata
 	bloomBitsPrefix = []byte("B") // bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash -> bloom bits
@@ -215,11 +214,6 @@ func blockReceiptsKey(number uint64, hash common.Hash) []byte {
 	return append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
 }
 
-// accessListKey = accessListPrefix + num (uint64 big endian) + hash
-func accessListKey(number uint64, hash common.Hash) []byte {
-	return append(append(accessListPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
-}
-
 // txLookupKey = txLookupPrefix + hash
 func txLookupKey(hash common.Hash) []byte {
 	return append(txLookupPrefix, hash.Bytes()...)
@ -253,11 +253,6 @@ func (b *tableBatch) Reset() {
|
||||||
b.batch.Reset()
|
b.batch.Reset()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Close closes the batch and releases all associated resources.
|
|
||||||
func (b *tableBatch) Close() {
|
|
||||||
b.batch.Close()
|
|
||||||
}
|
|
||||||
|
|
||||||
// tableReplayer is a wrapper around a batch replayer which truncates
|
// tableReplayer is a wrapper around a batch replayer which truncates
|
||||||
// the added prefix.
|
// the added prefix.
|
||||||
type tableReplayer struct {
|
type tableReplayer struct {
|
||||||
|
|
|
||||||
|
|
@ -20,13 +20,13 @@ import (
|
||||||
"fmt"
|
"fmt"
|
||||||
|
|
||||||
"github.com/ethereum/go-ethereum/common"
|
"github.com/ethereum/go-ethereum/common"
|
||||||
|
"github.com/ethereum/go-ethereum/common/lru"
|
||||||
"github.com/ethereum/go-ethereum/core/overlay"
|
"github.com/ethereum/go-ethereum/core/overlay"
|
||||||
"github.com/ethereum/go-ethereum/core/rawdb"
|
"github.com/ethereum/go-ethereum/core/rawdb"
|
||||||
"github.com/ethereum/go-ethereum/core/state/snapshot"
|
"github.com/ethereum/go-ethereum/core/state/snapshot"
|
||||||
"github.com/ethereum/go-ethereum/core/types"
|
"github.com/ethereum/go-ethereum/core/types"
|
||||||
"github.com/ethereum/go-ethereum/crypto"
|
"github.com/ethereum/go-ethereum/crypto"
|
||||||
"github.com/ethereum/go-ethereum/ethdb"
|
"github.com/ethereum/go-ethereum/ethdb"
|
||||||
"github.com/ethereum/go-ethereum/log"
|
|
||||||
"github.com/ethereum/go-ethereum/trie"
|
"github.com/ethereum/go-ethereum/trie"
|
||||||
"github.com/ethereum/go-ethereum/trie/bintrie"
|
"github.com/ethereum/go-ethereum/trie/bintrie"
|
||||||
"github.com/ethereum/go-ethereum/trie/transitiontrie"
|
"github.com/ethereum/go-ethereum/trie/transitiontrie"
|
||||||
|
|
@ -34,6 +34,14 @@ import (
|
||||||
"github.com/ethereum/go-ethereum/triedb"
|
"github.com/ethereum/go-ethereum/triedb"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// Number of codehash->size associations to keep.
|
||||||
|
codeSizeCacheSize = 1_000_000 // 4 megabytes in total
|
||||||
|
|
||||||
|
// Cache size granted for caching clean code.
|
||||||
|
codeCacheSize = 256 * 1024 * 1024
|
||||||
|
)
|
||||||
|
|
||||||
// Database wraps access to tries and contract code.
|
// Database wraps access to tries and contract code.
|
||||||
type Database interface {
|
type Database interface {
|
||||||
// Reader returns a state reader associated with the specified state root.
|
// Reader returns a state reader associated with the specified state root.
|
||||||
|
|
@ -50,11 +58,6 @@ type Database interface {
|
||||||
|
|
||||||
// Snapshot returns the underlying state snapshot.
|
// Snapshot returns the underlying state snapshot.
|
||||||
Snapshot() *snapshot.Tree
|
Snapshot() *snapshot.Tree
|
||||||
|
|
||||||
// Commit flushes all pending writes and finalizes the state transition,
|
|
||||||
// committing the changes to the underlying storage. It returns an error
|
|
||||||
// if the commit fails.
|
|
||||||
Commit(update *stateUpdate) error
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Trie is a Ethereum Merkle Patricia trie.
|
// Trie is a Ethereum Merkle Patricia trie.
|
||||||
|
|
@ -146,34 +149,32 @@ type Trie interface {
|
||||||
// state snapshot to provide functionalities for state access. It's meant to be a
|
// state snapshot to provide functionalities for state access. It's meant to be a
|
||||||
// long-live object and has a few caches inside for sharing between blocks.
|
// long-live object and has a few caches inside for sharing between blocks.
|
||||||
type CachingDB struct {
|
type CachingDB struct {
|
||||||
|
disk ethdb.KeyValueStore
|
||||||
triedb *triedb.Database
|
triedb *triedb.Database
|
||||||
codedb *CodeDB
|
|
||||||
snap *snapshot.Tree
|
snap *snapshot.Tree
|
||||||
|
codeCache *lru.SizeConstrainedCache[common.Hash, []byte]
|
||||||
|
codeSizeCache *lru.Cache[common.Hash, int]
|
||||||
|
|
||||||
|
// Transition-specific fields
|
||||||
|
TransitionStatePerRoot *lru.Cache[common.Hash, *overlay.TransitionState]
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewDatabase creates a state database with the provided data sources.
|
// NewDatabase creates a state database with the provided data sources.
|
||||||
func NewDatabase(triedb *triedb.Database, codedb *CodeDB) *CachingDB {
|
func NewDatabase(triedb *triedb.Database, snap *snapshot.Tree) *CachingDB {
|
||||||
if codedb == nil {
|
|
||||||
codedb = NewCodeDB(triedb.Disk())
|
|
||||||
}
|
|
||||||
return &CachingDB{
|
return &CachingDB{
|
||||||
|
disk: triedb.Disk(),
|
||||||
triedb: triedb,
|
triedb: triedb,
|
||||||
codedb: codedb,
|
snap: snap,
|
||||||
|
codeCache: lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
|
||||||
|
codeSizeCache: lru.NewCache[common.Hash, int](codeSizeCacheSize),
|
||||||
|
TransitionStatePerRoot: lru.NewCache[common.Hash, *overlay.TransitionState](1000),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewDatabaseForTesting is similar to NewDatabase, but it initializes the caching
|
// NewDatabaseForTesting is similar to NewDatabase, but it initializes the caching
|
||||||
// db by using an ephemeral memory db with default config for testing.
|
// db by using an ephemeral memory db with default config for testing.
|
||||||
func NewDatabaseForTesting() *CachingDB {
|
func NewDatabaseForTesting() *CachingDB {
|
||||||
db := rawdb.NewMemoryDatabase()
|
return NewDatabase(triedb.NewDatabase(rawdb.NewMemoryDatabase(), nil), nil)
|
||||||
return NewDatabase(triedb.NewDatabase(db, nil), NewCodeDB(db))
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithSnapshot configures the provided contract code cache. Note that this
|
|
||||||
// registration must be performed before the cachingDB is used.
|
|
||||||
func (db *CachingDB) WithSnapshot(snapshot *snapshot.Tree) *CachingDB {
|
|
||||||
db.snap = snapshot
|
|
||||||
return db
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// StateReader returns a state reader associated with the specified state root.
|
// StateReader returns a state reader associated with the specified state root.
|
||||||
|
|
@ -217,20 +218,21 @@ func (db *CachingDB) Reader(stateRoot common.Hash) (Reader, error) {
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
return newReader(db.codedb.Reader(), sr), nil
|
return newReader(newCachingCodeReader(db.disk, db.codeCache, db.codeSizeCache), sr), nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReadersWithCacheStats creates a pair of state readers that share the same
|
// ReadersWithCacheStats creates a pair of state readers that share the same
|
||||||
// underlying state reader and internal state cache, while maintaining separate
|
// underlying state reader and internal state cache, while maintaining separate
|
||||||
// statistics respectively.
|
// statistics respectively.
|
||||||
func (db *CachingDB) ReadersWithCacheStats(stateRoot common.Hash) (Reader, Reader, error) {
|
func (db *CachingDB) ReadersWithCacheStats(stateRoot common.Hash) (ReaderWithStats, ReaderWithStats, error) {
|
||||||
r, err := db.StateReader(stateRoot)
|
r, err := db.StateReader(stateRoot)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, nil, err
|
return nil, nil, err
|
||||||
}
|
}
|
||||||
sr := newStateReaderWithCache(r)
|
sr := newStateReaderWithCache(r)
|
||||||
ra := newReader(db.codedb.Reader(), newStateReaderWithStats(sr))
|
|
||||||
rb := newReader(db.codedb.Reader(), newStateReaderWithStats(sr))
|
ra := newReaderWithStats(sr, newCachingCodeReader(db.disk, db.codeCache, db.codeSizeCache))
|
||||||
|
rb := newReaderWithStats(sr, newCachingCodeReader(db.disk, db.codeCache, db.codeSizeCache))
|
||||||
return ra, rb, nil
|
return ra, rb, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -266,6 +268,22 @@ func (db *CachingDB) OpenStorageTrie(stateRoot common.Hash, address common.Addre
|
||||||
return tr, nil
|
return tr, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// ContractCodeWithPrefix retrieves a particular contract's code. If the
|
||||||
|
// code can't be found in the cache, then check the existence with **new**
|
||||||
|
// db scheme.
|
||||||
|
func (db *CachingDB) ContractCodeWithPrefix(address common.Address, codeHash common.Hash) []byte {
|
||||||
|
code, _ := db.codeCache.Get(codeHash)
|
||||||
|
if len(code) > 0 {
|
||||||
|
return code
|
||||||
|
}
|
||||||
|
code = rawdb.ReadCodeWithPrefix(db.disk, codeHash)
|
||||||
|
if len(code) > 0 {
|
||||||
|
db.codeCache.Add(codeHash, code)
|
||||||
|
db.codeSizeCache.Add(codeHash, len(code))
|
||||||
|
}
|
||||||
|
return code
|
||||||
|
}
|
||||||
|
|
||||||
// TrieDB retrieves any intermediate trie-node caching layer.
|
// TrieDB retrieves any intermediate trie-node caching layer.
|
||||||
func (db *CachingDB) TrieDB() *triedb.Database {
|
func (db *CachingDB) TrieDB() *triedb.Database {
|
||||||
return db.triedb
|
return db.triedb
|
||||||
|
|
@ -276,40 +294,6 @@ func (db *CachingDB) Snapshot() *snapshot.Tree {
|
||||||
return db.snap
|
return db.snap
|
||||||
}
|
}
|
||||||
|
|
||||||
// Commit flushes all pending writes and finalizes the state transition,
|
|
||||||
// committing the changes to the underlying storage. It returns an error
|
|
||||||
// if the commit fails.
|
|
||||||
func (db *CachingDB) Commit(update *stateUpdate) error {
|
|
||||||
// Short circuit if nothing to commit
|
|
||||||
if update.empty() {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
// Commit dirty contract code if any exists
|
|
||||||
if len(update.codes) > 0 {
|
|
||||||
batch := db.codedb.NewBatchWithSize(len(update.codes))
|
|
||||||
for _, code := range update.codes {
|
|
||||||
batch.Put(code.hash, code.blob)
|
|
||||||
}
|
|
||||||
if err := batch.Commit(); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// If snapshotting is enabled, update the snapshot tree with this new version
|
|
||||||
if db.snap != nil && db.snap.Snapshot(update.originRoot) != nil {
|
|
||||||
if err := db.snap.Update(update.root, update.originRoot, update.accounts, update.storages); err != nil {
|
|
||||||
log.Warn("Failed to update snapshot tree", "from", update.originRoot, "to", update.root, "err", err)
|
|
||||||
}
|
|
||||||
// Keep 128 diff layers in the memory, persistent layer is 129th.
|
|
||||||
// - head layer is paired with HEAD state
|
|
||||||
// - head-1 layer is paired with HEAD-1 state
|
|
||||||
// - head-127 layer(bottom-most diff layer) is paired with HEAD-127 state
|
|
||||||
if err := db.snap.Cap(update.root, TriesInMemory); err != nil {
|
|
||||||
log.Warn("Failed to cap snapshot tree", "root", update.root, "layers", TriesInMemory, "err", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return db.triedb.Update(update.root, update.originRoot, update.blockNumber, update.nodes, update.stateSet())
|
|
||||||
}
|
|
||||||
|
|
||||||
// mustCopyTrie returns a deep-copied trie.
|
// mustCopyTrie returns a deep-copied trie.
|
||||||
func mustCopyTrie(t Trie) Trie {
|
func mustCopyTrie(t Trie) Trie {
|
||||||
switch t := t.(type) {
|
switch t := t.(type) {
|
||||||
|
|
|
||||||
|
|
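The `codeCache`/`codeSizeCache` pair in the hunk above caps memory by total stored bytes rather than by entry count, which is the right budget for contract code whose blobs vary from a few bytes to 24 KB. A minimal standalone sketch of a size-constrained cache with that behavior (FIFO eviction for brevity; geth's `lru.SizeConstrainedCache` evicts least-recently-used entries):

```go
package main

import "fmt"

// sizeCache keeps total stored bytes under maxBytes, evicting entries in
// insertion order (a simple stand-in for LRU eviction).
type sizeCache struct {
	maxBytes int
	curBytes int
	order    []string
	items    map[string][]byte
}

func newSizeCache(maxBytes int) *sizeCache {
	return &sizeCache{maxBytes: maxBytes, items: make(map[string][]byte)}
}

// Add inserts val under key, evicting oldest entries until it fits.
func (c *sizeCache) Add(key string, val []byte) {
	if _, ok := c.items[key]; ok {
		return // keep the sketch simple: no in-place updates
	}
	for c.curBytes+len(val) > c.maxBytes && len(c.order) > 0 {
		old := c.order[0]
		c.order = c.order[1:]
		c.curBytes -= len(c.items[old])
		delete(c.items, old)
	}
	c.items[key] = val
	c.order = append(c.order, key)
	c.curBytes += len(val)
}

func (c *sizeCache) Get(key string) ([]byte, bool) {
	v, ok := c.items[key]
	return v, ok
}

func main() {
	c := newSizeCache(8)
	c.Add("a", []byte("aaaa")) // 4 bytes used
	c.Add("b", []byte("bbbb")) // 8 bytes used, at capacity
	c.Add("c", []byte("cc"))   // evicts "a" to make room
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println(okA, okB) // false true
}
```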
@@ -1,231 +0,0 @@
-// Copyright 2026 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-package state
-
-import (
-	"sync/atomic"
-
-	"github.com/ethereum/go-ethereum/common"
-	"github.com/ethereum/go-ethereum/common/lru"
-	"github.com/ethereum/go-ethereum/core/rawdb"
-	"github.com/ethereum/go-ethereum/ethdb"
-)
-
-const (
-	// Number of codeHash->size associations to keep.
-	codeSizeCacheSize = 1_000_000
-
-	// Cache size granted for caching clean code.
-	codeCacheSize = 256 * 1024 * 1024
-)
-
-// CodeCache maintains cached contract code that is shared across blocks, enabling
-// fast access for external calls such as RPCs and state transitions.
-//
-// It is thread-safe and has a bounded size.
-type codeCache struct {
-	codeCache     *lru.SizeConstrainedCache[common.Hash, []byte]
-	codeSizeCache *lru.Cache[common.Hash, int]
-}
-
-// newCodeCache initializes the contract code cache with the predefined capacity.
-func newCodeCache() *codeCache {
-	return &codeCache{
-		codeCache:     lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
-		codeSizeCache: lru.NewCache[common.Hash, int](codeSizeCacheSize),
-	}
-}
-
-// Get returns the contract code associated with the provided code hash.
-func (c *codeCache) Get(hash common.Hash) ([]byte, bool) {
-	return c.codeCache.Get(hash)
-}
-
-// GetSize returns the contract code size associated with the provided code hash.
-func (c *codeCache) GetSize(hash common.Hash) (int, bool) {
-	return c.codeSizeCache.Get(hash)
-}
-
-// Put adds the provided contract code along with its size information into the cache.
-func (c *codeCache) Put(hash common.Hash, code []byte) {
-	c.codeCache.Add(hash, code)
-	c.codeSizeCache.Add(hash, len(code))
-}
-
-// CodeReader implements state.ContractCodeReader, accessing contract code either in
-// local key-value store or the shared code cache.
-//
-// Reader is safe for concurrent access.
-type CodeReader struct {
-	db    ethdb.KeyValueReader
-	cache *codeCache
-
-	// Cache statistics
-	hit       atomic.Int64 // Number of code lookups found in the cache
-	miss      atomic.Int64 // Number of code lookups not found in the cache
-	hitBytes  atomic.Int64 // Total number of bytes read from cache
-	missBytes atomic.Int64 // Total number of bytes read from database
-}
-
-// newCodeReader constructs the code reader with provided key value store and the cache.
-func newCodeReader(db ethdb.KeyValueReader, cache *codeCache) *CodeReader {
-	return &CodeReader{
-		db:    db,
-		cache: cache,
-	}
-}
-
-// Has returns the flag indicating whether the contract code with
-// specified address and hash exists or not.
-func (r *CodeReader) Has(addr common.Address, codeHash common.Hash) bool {
-	return len(r.Code(addr, codeHash)) > 0
-}
-
-// Code implements state.ContractCodeReader, retrieving a particular contract's code.
-// Null is returned if the contract code is not present.
-func (r *CodeReader) Code(addr common.Address, codeHash common.Hash) []byte {
-	code, _ := r.cache.Get(codeHash)
-	if len(code) > 0 {
-		r.hit.Add(1)
-		r.hitBytes.Add(int64(len(code)))
-		return code
-	}
-	r.miss.Add(1)
-
-	code = rawdb.ReadCode(r.db, codeHash)
-	if len(code) > 0 {
-		r.cache.Put(codeHash, code)
-		r.missBytes.Add(int64(len(code)))
-	}
-	return code
-}
-
-// CodeSize implements state.ContractCodeReader, retrieving a particular contract
-// code's size. Zero is returned if the contract code is not present.
-func (r *CodeReader) CodeSize(addr common.Address, codeHash common.Hash) int {
-	if cached, ok := r.cache.GetSize(codeHash); ok {
-		r.hit.Add(1)
-		return cached
-	}
-	return len(r.Code(addr, codeHash))
-}
-
-// CodeWithPrefix retrieves the contract code for the specified account address
-// and code hash. It is almost identical to Code, but uses rawdb.ReadCodeWithPrefix
-// for database lookups. The intention is to gradually deprecate the old
-// contract code scheme.
-func (r *CodeReader) CodeWithPrefix(addr common.Address, codeHash common.Hash) []byte {
-	code, _ := r.cache.Get(codeHash)
-	if len(code) > 0 {
-		r.hit.Add(1)
-		r.hitBytes.Add(int64(len(code)))
-		return code
-	}
-	r.miss.Add(1)
-
-	code = rawdb.ReadCodeWithPrefix(r.db, codeHash)
-	if len(code) > 0 {
-		r.cache.Put(codeHash, code)
-		r.missBytes.Add(int64(len(code)))
-	}
-	return code
-}
-
-// GetCodeStats implements ContractCodeReaderStater, returning the statistics
-// of the code reader.
-func (r *CodeReader) GetCodeStats() ContractCodeReaderStats {
-	return ContractCodeReaderStats{
-		CacheHit:       r.hit.Load(),
-		CacheMiss:      r.miss.Load(),
-		CacheHitBytes:  r.hitBytes.Load(),
-		CacheMissBytes: r.missBytes.Load(),
-	}
-}
-
-type CodeBatch struct {
-	db         *CodeDB
-	codes      [][]byte
-	codeHashes []common.Hash
-}
-
-// newCodeBatch constructs the batch for writing contract code.
-func newCodeBatch(db *CodeDB) *CodeBatch {
-	return &CodeBatch{
-		db: db,
-	}
-}
-
-// newCodeBatchWithSize constructs the batch with a pre-allocated capacity.
-func newCodeBatchWithSize(db *CodeDB, size int) *CodeBatch {
-	return &CodeBatch{
-		db:         db,
-		codes:      make([][]byte, 0, size),
-		codeHashes: make([]common.Hash, 0, size),
-	}
-}
-
-// Put inserts the given contract code into the writer, waiting for commit.
-func (b *CodeBatch) Put(codeHash common.Hash, code []byte) {
-	b.codes = append(b.codes, code)
-	b.codeHashes = append(b.codeHashes, codeHash)
-}
-
-// Commit flushes the accumulated dirty contract code into the database and
-// also place them in the cache.
-func (b *CodeBatch) Commit() error {
-	batch := b.db.db.NewBatch()
-	for i, code := range b.codes {
-		rawdb.WriteCode(batch, b.codeHashes[i], code)
-		b.db.cache.Put(b.codeHashes[i], code)
-	}
-	if err := batch.Write(); err != nil {
-		return err
-	}
-	b.codes = b.codes[:0]
-	b.codeHashes = b.codeHashes[:0]
-	return nil
-}
-
-// CodeDB is responsible for managing the contract code and provides the access
-// to it. It can be used as a global object, sharing it between multiple entities.
-type CodeDB struct {
-	db    ethdb.KeyValueStore
-	cache *codeCache
-}
-
-// NewCodeDB constructs the contract code database with the provided key value store.
-func NewCodeDB(db ethdb.KeyValueStore) *CodeDB {
-	return &CodeDB{
-		db:    db,
-		cache: newCodeCache(),
-	}
-}
-
-// Reader returns the contract code reader.
-func (d *CodeDB) Reader() *CodeReader {
-	return newCodeReader(d.db, d.cache)
-}
-
-// NewBatch returns the batch for flushing contract codes.
-func (d *CodeDB) NewBatch() *CodeBatch {
-	return newCodeBatch(d)
-}
-
-// NewBatchWithSize returns the batch with pre-allocated capacity.
-func (d *CodeDB) NewBatchWithSize(size int) *CodeBatch {
-	return newCodeBatchWithSize(d, size)
-}
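The removed `CodeReader` above tracks its hit/miss statistics with `atomic.Int64` counters, so many concurrent readers can update them without a mutex. A small standalone sketch of that pattern (the `stats` type and `record` helper are illustrative, not the geth API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// stats counts cache hits and misses; atomic counters make updates safe
// from many goroutines without locking.
type stats struct {
	hit  atomic.Int64
	miss atomic.Int64
}

// record bumps the appropriate counter for one lookup.
func (s *stats) record(found bool) {
	if found {
		s.hit.Add(1)
	} else {
		s.miss.Add(1)
	}
}

func main() {
	var s stats
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			s.record(n%4 != 0) // pretend every 4th lookup misses
		}(i)
	}
	wg.Wait()
	fmt.Println(s.hit.Load(), s.miss.Load()) // 75 25
}
```

Reads via `Load()` are also atomic, which is why the removed `GetCodeStats` could snapshot the counters while lookups were still in flight.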
@ -17,14 +17,15 @@
|
||||||
package state
|
package state
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"errors"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
"github.com/ethereum/go-ethereum/common"
|
"github.com/ethereum/go-ethereum/common"
|
||||||
|
"github.com/ethereum/go-ethereum/common/lru"
|
||||||
"github.com/ethereum/go-ethereum/core/state/snapshot"
|
"github.com/ethereum/go-ethereum/core/state/snapshot"
|
||||||
"github.com/ethereum/go-ethereum/core/types"
|
"github.com/ethereum/go-ethereum/core/types"
|
||||||
"github.com/ethereum/go-ethereum/crypto"
|
"github.com/ethereum/go-ethereum/crypto"
|
||||||
|
"github.com/ethereum/go-ethereum/ethdb"
|
||||||
"github.com/ethereum/go-ethereum/rlp"
|
"github.com/ethereum/go-ethereum/rlp"
|
||||||
"github.com/ethereum/go-ethereum/trie"
|
"github.com/ethereum/go-ethereum/trie"
|
||||||
"github.com/ethereum/go-ethereum/triedb"
|
"github.com/ethereum/go-ethereum/triedb"
|
||||||
|
|
@ -220,15 +221,19 @@ func (r *historicalTrieReader) Storage(addr common.Address, key common.Hash) (co
|
||||||
// HistoricDB is the implementation of Database interface, with the ability to
|
// HistoricDB is the implementation of Database interface, with the ability to
|
||||||
// access historical state.
|
// access historical state.
|
||||||
type HistoricDB struct {
|
type HistoricDB struct {
|
||||||
|
disk ethdb.KeyValueStore
|
||||||
triedb *triedb.Database
|
triedb *triedb.Database
|
||||||
codedb *CodeDB
|
codeCache *lru.SizeConstrainedCache[common.Hash, []byte]
|
||||||
|
codeSizeCache *lru.Cache[common.Hash, int]
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewHistoricDatabase creates a historic state database.
|
// NewHistoricDatabase creates a historic state database.
|
||||||
func NewHistoricDatabase(triedb *triedb.Database, codedb *CodeDB) *HistoricDB {
|
func NewHistoricDatabase(disk ethdb.KeyValueStore, triedb *triedb.Database) *HistoricDB {
|
||||||
return &HistoricDB{
|
return &HistoricDB{
|
||||||
|
disk: disk,
|
||||||
triedb: triedb,
|
triedb: triedb,
|
||||||
codedb: codedb,
|
codeCache: lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
|
||||||
|
codeSizeCache: lru.NewCache[common.Hash, int](codeSizeCacheSize),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
@ -253,7 +258,7 @@ func (db *HistoricDB) Reader(stateRoot common.Hash) (Reader, error) {
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
return newReader(db.codedb.Reader(), combined), nil
|
return newReader(newCachingCodeReader(db.disk, db.codeCache, db.codeSizeCache), combined), nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// OpenTrie opens the main account trie. It's not supported by historic database.
|
// OpenTrie opens the main account trie. It's not supported by historic database.
|
||||||
|
|
@ -293,10 +298,3 @@ func (db *HistoricDB) TrieDB() *triedb.Database {
|
||||||
func (db *HistoricDB) Snapshot() *snapshot.Tree {
|
func (db *HistoricDB) Snapshot() *snapshot.Tree {
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// Commit flushes all pending writes and finalizes the state transition,
|
|
||||||
// committing the changes to the underlying storage. It returns an error
|
|
||||||
// if the commit fails.
|
|
||||||
func (db *HistoricDB) Commit(update *stateUpdate) error {
|
|
||||||
return errors.New("not implemented")
|
|
||||||
}
|
|
||||||
|
|
|
||||||
|
|
@ -144,7 +144,10 @@ func (it *nodeIterator) step() error {
|
||||||
}
|
}
|
||||||
if !bytes.Equal(account.CodeHash, types.EmptyCodeHash.Bytes()) {
|
if !bytes.Equal(account.CodeHash, types.EmptyCodeHash.Bytes()) {
|
||||||
it.codeHash = common.BytesToHash(account.CodeHash)
|
it.codeHash = common.BytesToHash(account.CodeHash)
|
||||||
it.code = it.state.reader.Code(address, common.BytesToHash(account.CodeHash))
|
it.code, err = it.state.reader.Code(address, common.BytesToHash(account.CodeHash))
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("code %x: %v", account.CodeHash, err)
|
||||||
|
}
|
||||||
if len(it.code) == 0 {
|
if len(it.code) == 0 {
|
||||||
return fmt.Errorf("code is not found: %x", account.CodeHash)
|
return fmt.Errorf("code is not found: %x", account.CodeHash)
|
||||||
}
|
}
|
||||||
|
|
|
||||||
Some files were not shown because too many files have changed in this diff.