This changes the filtermaps to only pull up the raw receipts, not the
derived receipts, which saves a lot of allocations. During normal
execution this reduces the allocations of the whole geth node by ~15%.
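Roughly, the difference looks like this (a minimal sketch assuming the `rawdb` helpers; raw receipts keep their logs, which is all the filter maps need):
```go
// Raw receipts are returned as stored, without re-deriving the per-receipt
// fields (tx hash, contract address, ...), avoiding those allocations.
receipts := rawdb.ReadRawReceipts(db, blockHash, blockNumber)
for _, receipt := range receipts {
	for _, log := range receipt.Logs {
		_ = log // only address and topics are needed for indexing
	}
}
```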
This PR applies the config overrides to the new config as well;
otherwise they will not be applied to already-defined configs, making
shadowforks impossible.
To test:
```
> ./build/bin/geth --override.prague 123 --dev --datadir /tmp/geth
INFO [04-28|21:20:47.009] - Prague: @123
> ./build/bin/geth --override.prague 321 --dev --datadir /tmp/geth
INFO [04-28|21:23:59.760] - Prague: @321
```
TruncatePending shows up bright red on our nodes, because it computes
the length of a map multiple times.
I don't know why this is so expensive, but around 20% of our time is
spent on this, which is super weird.
```
//PR: BenchmarkTruncatePending-24 17498 69397 ns/op 32872 B/op 3 allocs/op
//Master: BenchmarkTruncatePending-24 9960 123954 ns/op 32872 B/op 3 allocs/op
```
```
benchmark old ns/op new ns/op delta
BenchmarkTruncatePending-24 123954 69397 -44.01%
benchmark old allocs new allocs delta
BenchmarkTruncatePending-24 3 3 +0.00%
benchmark old bytes new bytes delta
BenchmarkTruncatePending-24 32872 32872 +0.00%
```
This simple PR is a 44% improvement over the old state:
```
ROUTINE ======================== github.com/ethereum/go-ethereum/core/txpool/legacypool.(*LegacyPool).truncatePending in github.com/ethereum/go-ethereum/core/txpool/legacypool/legacypool.go
1.96s 18.02s (flat, cum) 19.57% of Total
. . 1495:func (pool *LegacyPool) truncatePending() {
. . 1496: pending := uint64(0)
60ms 2.99s 1497: for _, list := range pool.pending {
250ms 5.48s 1498: pending += uint64(list.Len())
. . 1499: }
. . 1500: if pending <= pool.config.GlobalSlots {
. . 1501: return
. . 1502: }
. . 1503:
. . 1504: pendingBeforeCap := pending
. . 1505: // Assemble a spam order to penalize large transactors first
. 510ms 1506: spammers := prque.New[int64, common.Address](nil)
140ms 2.50s 1507: for addr, list := range pool.pending {
. . 1508: // Only evict transactions from high rollers
50ms 5.08s 1509: if uint64(list.Len()) > pool.config.AccountSlots {
. . 1510: spammers.Push(addr, int64(list.Len()))
. . 1511: }
. . 1512: }
. . 1513: // Gradually drop transactions from offenders
. . 1514: offenders := []common.Address{}
```
```go
// Benchmarks the speed of truncating the pending pool in case of multiple accounts.
func BenchmarkTruncatePending(b *testing.B) {
	// Generate a batch of transactions to enqueue into the pool
	pool, _ := setupPool()
	defer pool.Close()
	b.ReportAllocs()
	batches := make(types.Transactions, 4096+1024+1)
	for i := range batches {
		key, _ := crypto.GenerateKey()
		account := crypto.PubkeyToAddress(key.PublicKey)
		pool.currentState.AddBalance(account, uint256.NewInt(1000000), tracing.BalanceChangeUnspecified)
		tx := transaction(uint64(0), 100000, key)
		batches[i] = tx
	}
	for _, tx := range batches {
		pool.addRemotesSync([]*types.Transaction{tx})
	}
	b.ResetTimer()
	// Benchmark truncating the pending pool
	for range b.N {
		pool.truncatePending()
	}
}
```
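The change itself is small; a hedged sketch of the idea, assuming the cost sits in the repeated `list.Len()` calls visible in the profile (the merged fix may cache the counts differently):
```go
// Count each pending list exactly once and reuse the counts when picking
// spammers, instead of calling list.Len() again in the second loop.
pending := uint64(0)
counts := make(map[common.Address]int, len(pool.pending))
for addr, list := range pool.pending {
	n := list.Len()
	counts[addr] = n
	pending += uint64(n)
}
if pending <= pool.config.GlobalSlots {
	return
}
// Assemble a spam order to penalize large transactors first
spammers := prque.New[int64, common.Address](nil)
for addr, n := range counts {
	if uint64(n) > pool.config.AccountSlots {
		spammers.Push(addr, int64(n))
	}
}
```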
This PR fixes a deadlock in deleteTailEpoch that might arise when range
delete is running in iterator-based fallback mode (either using a
leveldb database or the hashdb state storage scheme). In this case a
stopCb callback is called periodically that checks events, including
matcher sync requests; handling those tries to acquire indexLock for
read access while deleteTailEpoch already holds it for write access.
This pull request removes the indexLock acquisition in
`FilterMapsMatcherBackend.synced`, as this function is only called from
the indexLoop.
Fixes https://github.com/ethereum/go-ethereum/issues/31700
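For illustration, the lock ordering that deadlocked looked roughly like this (a simplified sketch, not the literal code):
```go
func (f *FilterMaps) deleteTailEpoch() {
	f.indexLock.Lock() // held for write during the whole range delete
	defer f.indexLock.Unlock()

	stopCb := func() bool {
		// Event processing here included matcher sync requests; synced()
		// used to take f.indexLock.RLock(), which never returns while the
		// write lock is held, so the indexer deadlocked.
		return f.checkEvents() // hypothetical periodic event check
	}
	f.deleteRangeWithCallback(stopCb) // hypothetical iterator-based delete
}
```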
This PR adds the `AuthorizationList` field to the `CallMsg` interface to support `eth_call`
and `eth_estimateGas` for set-code transactions.
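A rough usage sketch (`auth` stands for a pre-signed `types.SetCodeAuthorization`; treat the surrounding details as assumptions):
```go
// Simulate a call against an account as if its set-code delegation were
// already in effect.
msg := ethereum.CallMsg{
	From:              sender,
	To:                &delegatedAccount,
	Gas:               1_000_000,
	Data:              calldata,
	AuthorizationList: []types.SetCodeAuthorization{auth},
}
result, err := client.CallContract(context.Background(), msg, nil) // nil = latest block
```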
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR ensures that caching a slice or a slice of slices will never
affect the original version, by always cloning a slice fetched from the
cache unless it is used in a guaranteed read-only way.
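A hedged sketch of the pattern (names are illustrative):
```go
// cachedRows returns a deep copy of the cached rows so that callers can
// mutate the result without corrupting the cached original.
func (f *FilterMaps) cachedRows(key uint64) [][]uint32 {
	cached, ok := f.rowCache.Get(key)
	if !ok {
		return nil
	}
	rows := make([][]uint32, len(cached))
	for i, row := range cached {
		rows[i] = slices.Clone(row) // clone each inner slice as well
	}
	return rows
}
```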
This PR changes the chain view update mechanism of the log filter.
Previously the head updates were all wired through the indexer, even in
unindexed mode. This was both a bit weird and also unsafe as the
indexer's chain view was updated asynchronously with some delay, making
some log related tests flaky. Also, the reorg safety of the indexed
search was integrated with unindexed search in a weird way, relying on
`syncRange.ValidBlocks` in the unindexed case too, with a special
condition added to only consider the head of the valid range but not the
tail in the unindexed case.
In this PR the current chain view is directly accessible through the
filter backend and unindexed search is also chain view based, making it
inherently safe. The matcher sync mechanism is now only used for indexed
search as originally intended, removing a few ugly special conditions.
The PR is currently based on top of
https://github.com/ethereum/go-ethereum/pull/31642
Together they fix https://github.com/ethereum/go-ethereum/issues/31518
and replace https://github.com/ethereum/go-ethereum/pull/31542
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR makes `filtermaps.ChainView` thread safe because it is used
concurrently both by the indexer and multiple matcher threads. Even
though it represents an immutable view of the chain, adding a mutex lock
to the `blockHash` function is necessary because that function may
extend the view's list of non-canonical hashes when the underlying
blockchain changes.
The unsafe concurrency did cause a panic once after running the unit
tests for several hours, and it could also happen during live operation.
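A sketch of the shape of the fix (field and method names are illustrative):
```go
// blockHash returns the hash of the given block within this view. The view
// is logically immutable, but the lookup may lazily extend the cached list
// of non-canonical hashes, so concurrent callers must be serialized.
func (cv *ChainView) blockHash(number uint64) common.Hash {
	cv.lock.Lock()
	defer cv.lock.Unlock()

	if number >= uint64(len(cv.hashes)) {
		cv.extendNonCanonical(number) // mutates cv.hashes under the lock
	}
	return cv.hashes[number]
}
```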
This PR makes the conditions for using a map rendering snapshot stricter
so that whenever a reorg happens, only a snapshot of a common ancestor
block can be used. The issue fixed in
https://github.com/ethereum/go-ethereum/pull/31642 originated from using
a snapshot that wasn't a common ancestor. For example in the following
reorg scenario: `A->B`, then `A->B2`, then `A->B2->C2`, then `A->B->C`
the last reorg triggered a render from snapshot `B` saved earlier. Now
this is possible under certain conditions but extra care is needed, for
example if block `B` crosses a map boundary then it should not be
allowed. With the latest fix the checks are sufficient, but I realized I
would just feel safer if we disallowed this rare and risky scenario
altogether and just rendered from snapshot `A` after the last reorg in
the example above. The performance difference is a few milliseconds, and
the situation occurs rarely (about once a day on Holesky, probably much
more rarely on Mainnet).
Note that this PR only makes the snapshot conditions stricter and
`TestIndexerRandomRange` does check that snapshots are still used
whenever it's obviously possible (adding blocks after the current head
without a reorg) so this change can be considered safe. Also I am
running the unit tests and the fuzzer and everything seems to be fine.
This pull request improves error handling for local transaction submissions.
Specifically, if a transaction fails with a temporary error but might be
accepted later, the error will not be returned to the user; instead, the
transaction will be tracked locally for resubmission.
However, if the transaction fails with a permanent error (e.g., invalid
transaction or insufficient balance), the error will be propagated to the user.
The following errors returned by the legacyPool are regarded as temporary failures:
- `ErrOutOfOrderTxFromDelegated`
- `txpool.ErrInflightTxLimitReached`
- `ErrAuthorityReserved`
- `txpool.ErrUnderpriced`
- `ErrTxPoolOverflow`
- `ErrFutureReplacePending`
Notably, InsufficientBalance is also treated as a permanent error, as
it's highly unlikely that users will transfer funds into the sender
account after submitting the transaction. Otherwise, users might be
confused: they would see their transaction submitted, be unaware that
the sender lacks sufficient funds, and keep waiting for it to be
included.
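A hedged sketch of the classification, assuming a helper along these lines (the error values are exactly the ones listed above):
```go
// isTemporaryReject reports whether a rejected local transaction should be
// kept for resubmission instead of surfacing the error to the user.
func isTemporaryReject(err error) bool {
	switch {
	case errors.Is(err, ErrOutOfOrderTxFromDelegated),
		errors.Is(err, txpool.ErrInflightTxLimitReached),
		errors.Is(err, ErrAuthorityReserved),
		errors.Is(err, txpool.ErrUnderpriced),
		errors.Is(err, ErrTxPoolOverflow),
		errors.Is(err, ErrFutureReplacePending):
		return true // track locally and retry later
	default:
		return false // permanent: propagate to the user
	}
}
```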
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Closes #31310
This has been requested a few times in the past and I think it is a nice
quality-of-life improvement for users. At a predetermined interval,
there will now be a "Fork ready" log when a future fork is scheduled,
but not yet active.
It can only print after block import, which avoids the scenario where
the client isn't progressing or is still syncing and the user thinks
it's "ready" just because a ready log appeared.
New output:
```console
INFO [03-08|21:32:57.472] Imported new potential chain segment number=7 hash=aa24ee..f09e62 blocks=1 txs=0 mgas=0.000 elapsed="874.916µs" mgasps=0.000 snapdiffs=973.00B triediffs=7.05KiB triedirty=0.00B
INFO [03-08|21:32:57.473] Ready for fork activation fork=Prague date="18 Mar 25 19:29 CET" remaining=237h57m0s timestamp=1,742,322,597
INFO [03-08|21:32:57.475] Chain head was updated number=7 hash=aa24ee..f09e62 root=19b0de..8d32f2 elapsed="129.125µs"
```
Easiest way to verify this behavior is to apply this patch and run `geth
--dev --dev.period=12`
```patch
diff --git a/params/config.go b/params/config.go
index 9c7719d901..030c4f80e7 100644
--- a/params/config.go
+++ b/params/config.go
@@ -174,7 +174,7 @@ var (
ShanghaiTime: newUint64(0),
CancunTime: newUint64(0),
TerminalTotalDifficulty: big.NewInt(0),
- PragueTime: newUint64(0),
+ PragueTime: newUint64(uint64(time.Now().Add(time.Hour * 300).Unix())),
BlobScheduleConfig: &BlobScheduleConfig{
Cancun: DefaultCancunBlobConfig,
Prague: DefaultPragueBlobConfig,
```
This is an attempt at fixing #31601. I think what happens is that the
startup logic tries to get the full block body (in `bc.loadLastState`)
and fails because the genesis block has been pruned from the freezer.
This causes it to keep repeating the reset logic, resulting in a
deadlock.
This can happen when, due to an unsuccessful sync, we don't have the
complete state for the head (or any other state) and try to redo the
snap sync.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This fixes an issue where running geth with `--history.chain postmerge`
would not work on an empty database.
```
ERROR[04-16|23:11:12.913] Chain history database is pruned to unknown block tail=0
Fatal: Failed to register the Ethereum service: unexpected database tail
```
This PR fixes a bug in the map renderer that sometimes used an obsolete
block log value pointer to initialize the iterator for rendering from a
snapshot. This bug was triggered by chain reorgs and sometimes caused
indexing errors and invalid search results. A few other conditions that
were not reported to cause issues yet, but could potentially be unsafe
in corner cases, are also made safer. A new unit test is added that
reproduces the bug and passes with the new fixes.
Fixes https://github.com/ethereum/go-ethereum/issues/31593
Might also fix https://github.com/ethereum/go-ethereum/issues/31589;
though that issue has not been reproduced yet, it appears to be related
to a log index database corruption around a specific block, similarly
to the other issue.
Note that running this branch resets and regenerates the log index
database. For this purpose a `Version` field has been added to
`rawdb.FilterMapsRange` which will also make this easier in the future
if a breaking database change is needed or the existing one is
considered potentially broken due to a bug, like in this case.
I added the history mode configuration in eth/ethconfig initially, since
it seemed like the logical place. But it turns out we need access to the
intended pruning setting at a deeper level, and it actually needs to be
integrated with the blockchain startup procedure.
With this change applied, if a node previously had its history pruned,
and is subsequently restarted **without** the `--history.chain
postmerge` flag, the `BlockChain` initialization code will now verify
the freezer tail against the known pruning point of the predefined
network and will restore the pruning status. Note that this logic is
quite restrictive: we allow a non-zero tail only for known networks, and
only for the specific pruning point that is defined.
This PR proposes a change to the authorizations' validation introduced
in commit cdb66c8. These changes make the expected behavior independent
of the order of admission of authorizations, improving the
predictability of the resulting state and the usability of the system.
The current implementation's behavior depends on the transaction
submission order. The issue involves authorities and the sender of a
transaction, and can be reproduced while respecting the normal nonce
rules.
The issue can be reproduced by the two following cases:
**First case**
- Given an empty pool.
- Submit transaction `{ from: B, auths [ A ] }`: is accepted.
- Submit transaction `{ from: A }`: is accepted; it becomes the one
in-flight transaction allowed.
**Second case**
- Given an empty pool.
- Submit transaction `{ from: A }`: is accepted
- Submit transaction `{ from: B, auths [ A ] }`: is rejected since there
is already a queued/pending transaction from A.
The expected behavior is that both sequences of events would lead to the
same sets of accepted and rejected transactions.
**Proposed changes**
The queued/pending transactions issued from any authority of the
transaction being validated have to be counted, allowing one transaction
from accounts submitting an authorization.
- Notice that the expected behavior was explicitly forbidden by the case
"reject-delegation-from-pending-account"; I believe that behavior
conflicts with the definition of the limitation, so it is removed in
this PR. The expected behavior is tested in
"accept-authorization-from-sender-of-one-inflight-tx".
- Replacement tests have been separated to improve readability of the
acceptance test.
- The test "allow-more-than-one-tx-from-replaced-authority" has been
extended with one extra transaction, since the system would always have
accepted one transaction (but not two).
- The test "accept-one-inflight-tx-of-delegated-account" is extended to
clean-up state, avoiding leaking the delegation used into the other
tests. Additionally, replacement check is removed to be tested in its
own test case.
**Expected behavior**
The expected behavior of the authorizations' validation shall be as
follows:

Notice that replacement shall be allowed, and behavior shall remain
coherent with the table, according to the replaced transaction.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
During my benchmarks on Holesky, around 10% of all CPU time was spent in
PUSH2:
```
ROUTINE ======================== github.com/ethereum/go-ethereum/core/vm.newFrontierInstructionSet.makePush.func1 in github.com/ethereum/go-ethereum/core/vm/instructions.go
16.38s 20.35s (flat, cum) 10.31% of Total
740ms 740ms 976: return func(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
. . 977: var (
40ms 40ms 978: codeLen = len(scope.Contract.Code)
970ms 970ms 979: start = min(codeLen, int(*pc+1))
200ms 200ms 980: end = min(codeLen, start+pushByteSize)
. . 981: )
670ms 2.39s 982: a := new(uint256.Int).SetBytes(scope.Contract.Code[start:end])
. . 983:
. . 984: // Missing bytes: pushByteSize - len(pushData)
410ms 410ms 985: if missing := pushByteSize - (end - start); missing > 0 {
. . 986: a.Lsh(a, uint(8*missing))
. . 987: }
12.69s 14.94s 988: scope.Stack.push2(*a)
10ms 10ms 989: *pc += size
650ms 650ms 990: return nil, nil
. . 991: }
. . 992:}
```
Which is quite crazy. We already have a handwritten handler for PUSH1;
this PR adds one for PUSH2.
PUSH2 is the second most used opcode, as shown here:
https://gist.github.com/shemnon/fb9b292a103abb02d98d64df6fbd35c8, since
solidity uses it quite heavily. It's used ~20 times as often as PUSH20
and PUSH32.
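A sketch of what the dedicated handler can look like, mirroring the generic closure above but hardcoding the two-byte immediate (the merged version may differ in details):
```go
// opPush2 pushes the two bytes following the opcode onto the stack,
// zero-padding when the code ends mid-immediate.
func opPush2(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
	var (
		codeLen = uint64(len(scope.Contract.Code))
		integer = new(uint256.Int)
	)
	if *pc+2 < codeLen {
		// Fast path: both immediate bytes are present in the code.
		scope.Stack.push(integer.SetBytes2(scope.Contract.Code[*pc+1 : *pc+3]))
	} else if *pc+1 < codeLen {
		// Only one byte is available: it becomes the high byte.
		scope.Stack.push(integer.SetUint64(uint64(scope.Contract.Code[*pc+1]) << 8))
	} else {
		scope.Stack.push(integer.Clear())
	}
	*pc += 2
	return nil, nil
}
```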
# Benchmarks
```
BenchmarkPush/makePush-14 94196547 12.27 ns/op 0 B/op 0 allocs/op
BenchmarkPush/push-14 429976924 2.829 ns/op 0 B/op 0 allocs/op
```
---------
Co-authored-by: jwasinger <j-wasinger@hotmail.com>
This pull request introduces two constraints in the blobPool:
(a) If the sender has a pending authorization or delegation, only one
in-flight executable transaction can be cached.
(b) If the authority address in a SetCode transaction is already
reserved by the blobPool, the transaction will be rejected.
These constraints mitigate an attack where an attacker spams the pool
with numerous blob transactions, evicts other transactions, and then
cancels all pending blob transactions by draining the sender's funds if
they have a delegation.
Note, because there is no exclusive lock held between different subpools
when processing transactions, it's entirely possible for a SetCode
transaction and blob transactions with conflicting senders and
authorities to be accepted simultaneously. I think that's acceptable, as
it's very hard to exploit.
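A hedged sketch of constraint (b) as a validation step (`p.reserved` is a hypothetical reservation check):
```go
// Reject a SetCode transaction whose authority address is already
// reserved by the blob pool.
for _, auth := range tx.SetCodeAuthorizations() {
	if addr, err := auth.Authority(); err == nil && p.reserved(addr) {
		return ErrAuthorityReserved
	}
}
```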
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This pull request introduces new sync logic for pruning mode. The downloader will now skip
insertion of block bodies and receipts before the configured history cutoff point.
Originally, in snap sync, the header chain and other components (bodies and receipts) were
inserted separately. However, in Proof-of-Stake, this separation is unnecessary since the
sync target is already verified by the CL.
To simplify the process, this pull request modifies `InsertReceiptChain` to insert headers
along with block bodies and receipts together. Besides, `InsertReceiptChain` has no
notion of reorg: the common ancestor is always found before the sync, and any extra
side chain is truncated at the beginning if it falls within the ancient store. The stale
canonical chain flags will always be rewritten by the new chain, so explicit reorg logic
is no longer required in `InsertReceiptChain`.
This is an alternative to #31309
With eth/68, a transaction announcement must include the transaction
type and size. So in announceTransactions, we need to query the
transaction from the transaction pool by its hash. This creates overhead
for blob transactions, which need to be loaded from billy and
RLP-decoded. This commit creates a lightweight lookup from transaction
hash to transaction type and size, and a function GetMetadata to query
both given the transaction hash.
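A hedged sketch of the lookup (names are illustrative; the real change wires this through the pool interface):
```go
// txMetadata caches exactly what an eth/68 announcement needs, so the
// announcer never has to load and decode the full blob transaction.
type txMetadata struct {
	kind byte   // transaction type
	size uint64 // RLP-encoded size
}

// GetMetadata returns the type and size of the transaction with the given
// hash, or nil if it is not known to the pool.
func (p *BlobPool) GetMetadata(hash common.Hash) *txMetadata {
	p.lock.RLock()
	defer p.lock.RUnlock()

	meta, ok := p.metadata[hash]
	if !ok {
		return nil
	}
	return &meta
}
```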
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR adds `rawdb.SafeDeleteRange` and uses it for range deletion in
`core/filtermaps`. This includes deleting the old bloombits database,
resetting the log index database and removing index data for unindexed
tail epochs (which previously weren't properly implemented for the
fallback case).
`SafeDeleteRange` either calls `ethdb.DeleteRange` if the node uses the
new path based state scheme or uses an iterator based fallback method
that safely skips trie nodes in the range if the old hash based state
scheme is used. Note that `ethdb.DeleteRange` also has its own iterator
based fallback implementation in `ethdb/leveldb`. If a path based state
scheme is used and the backing db is pebble (as it is on the majority of
new nodes) then `rawdb.SafeDeleteRange` uses the fast native range
delete.
Also note that `rawdb.SafeDeleteRange` has different semantics from
`ethdb.DeleteRange`: it does not automatically return if the operation
takes a long time. Instead it receives a `stopCallback` that can
interrupt the process if necessary. This is because in safe mode
potentially a lot of entries are iterated without being deleted
(definitely the case when deleting the old bloombits database, which has
a single-byte prefix), and therefore restarting the process every time a
fixed number of entries has been iterated would result in a run time
quadratic in the number of skipped entries.
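A hedged sketch of the fallback loop (simplified; the `skip` predicate stands in for the trie-node check in the hash scheme case):
```go
// safeDeleteRange deletes keys in [start, end) with an iterator, skipping
// keys the caller wants preserved and polling stopCb between entries, so
// the run time stays linear in the number of iterated entries.
func safeDeleteRange(db ethdb.KeyValueStore, start, end []byte, skip func(key []byte) bool, stopCb func() bool) error {
	it := db.NewIterator(nil, start)
	defer it.Release()

	batch := db.NewBatch()
	for it.Next() {
		if bytes.Compare(it.Key(), end) >= 0 {
			break
		}
		if !skip(it.Key()) {
			if err := batch.Delete(it.Key()); err != nil {
				return err
			}
			if batch.ValueSize() >= ethdb.IdealBatchSize {
				if err := batch.Write(); err != nil {
					return err
				}
				batch.Reset()
			}
		}
		if stopCb() {
			return errors.New("safe delete range interrupted")
		}
	}
	return batch.Write()
}
```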
When running in safe mode, unindexing an epoch takes about a second,
removing bloombits takes around 10s while resetting a full log index
might take a few minutes. If a range delete operation takes a
significant amount of time then log messages are printed. Also, any
range delete operation can be interrupted by shutdown (tail unindexing
can also be interrupted by head indexing, similarly to how tail indexing
works). If the last unindexed epoch might have "dirty" index data left
then the indexed map range points to the first valid epoch and
`cleanedEpochsBefore` points to the previous, potentially dirty one. At
startup it is always assumed that the epoch before the first fully
indexed one might be dirty. New tail maps are never rendered and also no
further maps are unindexed before the previous unindexing is properly
cleaned up.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR changes log indexer error handling so that if an indexing error
happens, it disables the indexer and reverts to unindexed mode without
resetting the database (except in the case of a failed database init).
Resetting the database on the first error would probably be overkill as
a client update might fix this without having to reindex the entire
history. It would also make debugging very hard. On the other hand,
these errors do not resolve themselves automatically so constantly
retrying makes no sense either. With these changes a new attempt to
resume indexing is made every time the client is restarted.
The PR also fixes https://github.com/ethereum/go-ethereum/issues/31491
which originated from the tail indexer trying to resume processing a
failed map renderer.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds an extra condition to the log indexer initialization in
order to avoid initializing with block 0 as target head. Previously this
caused the indexer to initialize without a checkpoint. Later, when the
real chain head was set, it indexed the entire history, then unindexed
most of it if only the recent history was supposed to be indexed. Now
the init only happens when there is an actual synced chain head and
therefore the index is initialized at the most recent checkpoint and
only the last year is indexed according to the default parameters.
During checkpoint initialization the best available checkpoint is also
checked against the history cutoff point, and init fails if the indexing
would have to start from a block older than the cutoff. If
initialization fails then the indexer reverts to unindexed mode instead
of retrying, because the failure conditions cannot be expected to
recover later.
Instead of reporting all filtermaps stuff in one line, I'm breaking it
down into the three separate kinds of entries here.
```
+-----------------------+-----------------------------+------------+------------+
| DATABASE | CATEGORY | SIZE | ITEMS |
+-----------------------+-----------------------------+------------+------------+
| Key-Value store | Log index filter-map rows | 59.21 GiB | 616077345 |
| Key-Value store | Log index last-block-of-map | 12.35 MiB | 269755 |
| Key-Value store | Log index block-lv | 421.70 MiB | 22109169 |
```
Also added some other changes to make it easier to debug:
- restored bloombits into the inspect output, so we notice if it doesn't
get deleted for some reason
- tracking of unaccounted key examples
This adds a new subcommand 'geth prune-history' that removes the pre-merge history
on supported networks. Geth is not fully ready to work in this mode; please do not run
this command on your production node.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
In #31384 we unindex TXes prior to the merge block. However, when the
node starts up it will try to re-index them if the config is to index
the whole chain. This change makes the indexer aware of the history
cutoff block, avoiding reindexing within that segment.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This pull request improves the protection mechanism in the txpool for
senders with delegation. A sender with either delegation or pending
delegation is now limited to a maximum of one in-flight executable
transaction, while gapped transactions will be rejected.
Reason:
If nonce-gapped transactions from delegated or pending-delegated senders
were acceptable, then it would no longer be possible to send another
"executable" transaction with the correct nonce, due to the policy of at
most one in-flight tx. The gapped transaction would be stuck in the
txpool, with no meaningful way to unlock the sender.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This PR changes the matcher syncing conditions so that it is possible to
run a search while head indexing is in progress. Previously it was a
requirement to have the head indexed in order to perform matcher sync
before and after a search. This was unnecessarily strict, as the purpose
was just to avoid syncing the valid range while the temporarily
shortened indexed range is applied during updates of existing head maps.
Now the sync condition explicitly checks whether the indexer has a
temporary indexed range with some head maps partially updated.
It also fixes a deadlock that happened when matcher synchronization was
attempted in the event handler called from the `writeFinishedMaps`
periodic callback.
This removes the signer type-train in favor of defining a single object
that can handle all tx types. Supported types are enabled via a map.
Notably, the new signer also supports disabling legacy transactions.
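A hedged sketch of the shape of such a signer (illustrative, not the merged code):
```go
// modernSigner replaces the chain of embedded per-fork signers with a
// single object and an explicit set of enabled transaction types.
type modernSigner struct {
	chainID *big.Int
	txTypes map[byte]struct{} // legacy txs can simply be left out
}

func (s *modernSigner) Sender(tx *types.Transaction) (common.Address, error) {
	if _, ok := s.txTypes[tx.Type()]; !ok {
		return common.Address{}, types.ErrTxTypeNotSupported
	}
	// Signature recovery itself is shared across all enabled types (elided).
	return common.Address{}, nil
}
```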
This PR roughly halves the number of allocations needed to compute the
sigHash for a transaction.
This sigHash is used whenever we recover a signature of a transaction,
so quite often. During a recent full-sync benchmark on Holesky, roughly
2.8% of all allocations happened here, because the fields of the
transaction would be copied multiple times.
```
66168733 153175654 (flat, cum) 2.80% of Total
. . 368:func (s londonSigner) Hash(tx *Transaction) common.Hash {
. . 369: if tx.Type() != DynamicFeeTxType {
. . 370: return s.eip2930Signer.Hash(tx)
. . 371: }
. 19169966 372: return prefixedRlpHash(
. . 373: tx.Type(),
26442187 26442187 374: []interface{}{
. . 375: s.chainId,
6848616 6848616 376: tx.Nonce(),
. 19694077 377: tx.GasTipCap(),
. 18956774 378: tx.GasFeeCap(),
6357089 6357089 379: tx.Gas(),
. 12321050 380: tx.To(),
. 16865054 381: tx.Value(),
13435187 13435187 382: tx.Data(),
13085654 13085654 383: tx.AccessList(),
. . 384: })
. . 385:}
```
This PR reduces the allocations and speeds up the computation of the
sigHash by ~22%, which is quite significant given that this operation
involves a call to Keccak:
```
// BenchmarkHash-8 440082 2639 ns/op 384 B/op 13 allocs/op
// BenchmarkHash-8 493566 2033 ns/op 240 B/op 6 allocs/op
```
```
Hash-8 2.691µ ± 8% 2.097µ ± 9% -22.07% (p=0.000 n=10)
```
It also kinda cleans up stuff in my opinion, since the transaction
itself should know best how to compute its sighash.
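A hedged sketch of one way to get there: hash a concrete struct whose fields mirror the signed payload, instead of building an `[]interface{}` whose elements each escape to the heap (the merged change may differ in detail):
```go
// dynamicFeeSigHash mirrors the EIP-1559 signing payload; encoding a typed
// struct lets the rlp package write the fields without boxing each one.
type dynamicFeeSigHash struct {
	ChainID    *big.Int
	Nonce      uint64
	GasTipCap  *big.Int
	GasFeeCap  *big.Int
	Gas        uint64
	To         *common.Address `rlp:"nil"`
	Value      *big.Int
	Data       []byte
	AccessList AccessList
}

// sigHash lives on the tx data itself, so each type knows how to build
// its own signing hash.
func (tx *DynamicFeeTx) sigHash(chainID *big.Int) common.Hash {
	return prefixedRlpHash(DynamicFeeTxType, &dynamicFeeSigHash{
		ChainID:    chainID,
		Nonce:      tx.Nonce,
		GasTipCap:  tx.GasTipCap,
		GasFeeCap:  tx.GasFeeCap,
		Gas:        tx.Gas,
		To:         tx.To,
		Value:      tx.Value,
		Data:       tx.Data,
		AccessList: tx.AccessList,
	})
}
```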

---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Currently, when answering a GetPooledTransactions request, txpool.Get()
is used. When the requested hash is a blob transaction, blobpool.Get()
is called. This function loads the RLP-encoded transaction from limbo,
then decodes and returns it. Later, in answerGetPooledTransactions, we
need to RLP-encode it again. This decode-then-encode round trip is
wasteful. This commit adds GetRLP to the transaction pool interface so
that answerGetPooledTransactions can use the RLP encoding from limbo
directly.
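A hedged sketch of the blob pool side (the store-id lookup is hypothetical):
```go
// GetRLP returns the RLP-encoded transaction with the given hash straight
// from the underlying store, skipping the decode/re-encode round trip.
func (p *BlobPool) GetRLP(hash common.Hash) []byte {
	p.lock.RLock()
	defer p.lock.RUnlock()

	id, ok := p.lookup.storeidOfTx(hash) // hypothetical store-id lookup
	if !ok {
		return nil
	}
	blob, err := p.store.Get(id)
	if err != nil {
		return nil
	}
	return blob // already RLP-encoded, serve as-is
}
```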
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>