
eth/consensus : implement eccpow consensus engine#10

Open
mmingyeomm wants to merge 3340 commits into cryptoecc:worldland from ethereum:master

Conversation

@mmingyeomm

Implements the eccpow consensus engine for the Worldland Network.

s1na and others added 30 commits February 6, 2026 07:57
The error code for revert should be consistent with eth_call and be 3.
Clear was only used in tests, but it was missing some of the cleanup.

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Follow-up to #33748

Same issue - ResettingTimer can be registered via loadOrRegister() but
GetAll() silently drops it during JSON export. The prometheus exporter
handles it fine (collector.go:70), so this is just an oversight in the
JSON path.

Note: ResettingTimer.Snapshot() resets the timer by design, which is
consistent with how the prometheus exporter uses it.
### Problem

`HasBody` and `HasReceipts` returned `true` for pruned blocks because
they only checked `isCanon()` which verifies the hash table — but
hash/header tables have `prunable: false` while body/receipt tables have
`prunable: true`.

After `TruncateTail()`, hashes still exist but bodies/receipts are gone.
This caused inconsistency: `HasBody()` returns `true`, but `ReadBody()`
returns `nil`.

### Changes

Both functions now check `db.Tail()` when the block is in ancient store.
If `number < tail`, the data has been pruned and the function correctly
returns `false`.

This aligns `HasBody`/`HasReceipts` behavior with
`ReadBody`/`ReadReceipts` and fixes potential issues in
`skeleton.linked()` which relies on these checks during sync.
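The tail check described above can be sketched as follows (a minimal sketch with illustrative names, not the actual go-ethereum API):

```go
package main

import "fmt"

// hasBody sketches the corrected check: a block whose hash is still canonical
// may nevertheless have lost its body, because TruncateTail() prunes the
// prunable body/receipt tables while the hash/header tables survive. The data
// is only present when the block number is at or above the freezer tail.
func hasBody(number, tail uint64, isCanon bool) bool {
	if !isCanon {
		return false
	}
	// Below the tail the body has been pruned even though the hash exists.
	return number >= tail
}

func main() {
	fmt.Println(hasBody(5, 10, true))  // false: hash exists, body pruned
	fmt.Println(hasBody(15, 10, true)) // true: still in the ancient store
}
```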
Here is a draft for the New EraE implementation. The code follows along
with the spec listed at https://hackmd.io/pIZlxnitSciV5wUgW6W20w.

---------

Co-authored-by: shantichanal <158101918+shantichanal@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR makes `TestEIP8024_Execution` verify explicit error types (e.g.,
`ErrStackUnderflow` vs `ErrInvalidOpCode`) rather than accepting any
error. It also fails fast on unexpected opcodes in the mini-interpreter
to avoid false positives from missing opcode handling.
This PR fixes a panic in a corner case situation when a `ChainEvent` is
received by `eth.Ethereum.updateFilterMapsHeads()` but the given chain
section does not exist in `BlockChain` any more. This can happen during
chain rewind because chain events are processed asynchronously. Ignoring
the event in this case is fine: the final event will point to the final
rewound head and the indexer will be updated accordingly.
Note that similar issues will not happen once we transition to
#32292 and the new indexer
built on top of this. Until then, the current fix should be fine.
The `decodeRef` function used `size > hashLen` to reject oversized
embedded nodes, but this incorrectly allowed nodes of exactly 32 bytes
through. The encoding side (hasher.go, stacktrie.go) consistently uses
`len(enc) < 32` to decide whether to embed a node inline, meaning nodes
of 32+ bytes are always hash-referenced. The error message itself
already stated `want size < 32`, confirming the intended threshold.
Changed `size > hashLen` to `size >= hashLen` in `decodeRef` to align
the decoding validation with the encoding logic, the Yellow Paper spec,
and the surrounding comments.
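The off-by-one boundary can be illustrated in isolation (a simplified sketch, not the actual trie decoder):

```go
package main

import (
	"errors"
	"fmt"
)

const hashLen = 32

// checkEmbeddedSize mirrors the corrected validation: encoders only embed a
// node inline when its encoding is strictly shorter than 32 bytes, so a
// 32-byte "embedded" node must be rejected. The old check `size > hashLen`
// wrongly admitted size == 32.
func checkEmbeddedSize(size int) error {
	if size >= hashLen {
		return errors.New("oversized embedded node, want size < 32")
	}
	return nil
}

func main() {
	fmt.Println(checkEmbeddedSize(31)) // <nil>
	fmt.Println(checkEmbeddedSize(32)) // rejected after the fix
}
```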
Added methods `TraceCallWithCallTracer` and `TraceTransactionWithCallTracer`.

Fixes #28182

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
…#33807)

GetStorage and DeleteStorage used GetBinaryTreeKey to compute the tree
key, while UpdateStorage used GetBinaryTreeKeyStorageSlot. The latter
applies storage slot remapping (header offset for slots <64, main
storage prefix for the rest), so reads and deletes were targeting
different tree locations than writes.

Replace GetBinaryTreeKey with GetBinaryTreeKeyStorageSlot in both
GetStorage and DeleteStorage to match UpdateStorage. Add a regression
test that verifies the write→read→delete→read round-trip for main
storage slots.
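The mismatch matters because the slot-remapping step changes where a key lands. A toy sketch of such a remapping (the constants and layout here are assumptions for illustration, not the real binary-tree key scheme):

```go
package main

import "fmt"

const headerStorageOffset = 64 // illustrative constant only

// storageKeyRegion sketches why reads and writes must share one mapping:
// small slots are remapped into the account header region, larger slots into
// the main storage area. A reader using the unmapped key would look in the
// wrong place for any slot the writer remapped.
func storageKeyRegion(slot uint64) (region string, index uint64) {
	if slot < 64 {
		return "header", headerStorageOffset + slot
	}
	return "main", slot
}

func main() {
	r, i := storageKeyRegion(3)
	fmt.Println(r, i) // header 67
	r, i = storageKeyRegion(100)
	fmt.Println(r, i) // main 100
}
```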
Most uses of the iterator are like this:

    it, _ := rlp.NewListIterator(data)
    for it.Next() {
        do(it.Value())
    }

This doesn't require the iterator to be a pointer and it's better to
have it stack-allocated. AFAIK the compiler cannot prove it is OK to
stack-allocate when it is returned as a pointer because the methods of
`Iterator` use pointer receiver and also mutate the object.

The iterator type was not exported until very recently, so I think it is
still OK to change this API.
…#33820)

This fixes two cases where `Iterator.Err()` was misused. The method will
only return an error after `Next()` has returned false, so it makes no
sense to check for the error within the loop itself.
Update to match the spec:
eth-clients/e2store-format-specs#16

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
The reasoning for using the cleartext format here is that the JSON-RPC
API is internal only. Providers that expose it publicly already put it
behind a proxy, which also handles the encryption.
This is helpful when building a list from already-encoded elements.
This changes `RawList` to ensure the count of items is always valid.
Lists with invalid structure, i.e. ones where an element exceeds the
size of the container, are now detected during decoding of the `RawList`
and thus cannot exist.

Also remove `RawList.Empty` since it is now fully redundant, and
`Iterator.Count` since it returns incorrect results in the presence of
invalid input. There are no callers of these methods (yet).
I removed `Iterator.Count` in #33840, because it appeared to be unused
and did not provide the documented invariant: the returned count should
always be an upper bound on the number of iterations allowed by `Next`.

In order to make `Count` work, the semantics of `CountValues` has to
change to return the number of items up to and including the invalid one. I
have reviewed all callsites of `CountValues` to assess if changing this
is safe. There aren't that many, and the only call that doesn't check
the error and return is in the trie node parser,
`trie.decodeNodeUnsafe`. There, we distinguish the node type based on
the number of items, and it previously returned an error for item count
zero. In order to avoid any potential issue that could result from this
change, I'm adding an error check in that function, though it isn't
necessary.
…inter (#33772)

The endSpan closure accepted error by value, meaning deferred calls like
defer spanEnd(err) captured the error at defer-time (always nil), not at
function-return time. This meant errors were never recorded on spans.

- Changed endSpan to accept *error
- Updated all call sites in rpc/handler.go to pass error pointers, and
adjusted handleCall to avoid propagating child-span errors to the parent
- Added TestTracingHTTPErrorRecording to verify that errors from RPC
methods are properly recorded on the rpc.runMethod span
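The defer-time capture pitfall behind this fix can be reproduced in a few lines (a standalone sketch, not the rpc package code):

```go
package main

import (
	"errors"
	"fmt"
)

// record receives err by value: the deferred call evaluates its arguments at
// defer time, so it sees whatever err was when defer ran (here: nil).
func record(label string, err error) { fmt.Println(label, err) }

// recordPtr receives a pointer, so the error is read only when the deferred
// function actually executes, i.e. at function-return time.
func recordPtr(label string, err *error) { fmt.Println(label, *err) }

func run() (err error) {
	defer record("by value:", err)       // captures nil now
	defer recordPtr("by pointer:", &err) // reads err at return time
	err = errors.New("boom")
	return err
}

func main() {
	run()
	// Output (defers run last-in-first-out):
	// by pointer: boom
	// by value: <nil>
}
```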
#33484)

This PR adds OpenTelemetry tracing configuration to geth via
command-line flags. When enabled, geth initializes the global
OpenTelemetry TracerProvider and installs standard trace context
propagation. When disabled (the default), tracing remains a no-op and
behavior is unchanged.

Co-authored-by: Felix Lange <fjl@twurst.com>
…33835)

This changes the p2p protocol handlers to delay message decoding. It's
the first part of a larger change that will delay decoding all the way
through message processing. For responses, we delay the decoding until
it is confirmed that the response matches an active request and does not
exceed its limits.

In order to make this work, all messages have been changed to use
rlp.RawList instead of a slice of the decoded item type. For block
bodies specifically, the decoding has been delayed all the way until
after verification of the response hash.

The role of p2p/tracker.Tracker changes significantly in this PR. The
Tracker's original purpose was to maintain metrics about requests and
responses in the peer-to-peer protocols. Each protocol maintained a
single global Tracker instance. As of this change, the Tracker is now
always active (regardless of metrics collection), and there is a
separate instance of it for each peer. Whenever a response arrives, it
is first verified that a request exists for it in the tracker. The
tracker is also the place where limits are kept.
gballet and others added 30 commits April 21, 2026 14:50
The nodes were named using the byte representation of the path, instead
of the binary representation. This was confusing to other client devs
trying to achieve interop.
This PR removes `FinalizeAndAssemble` from the consensus engine interface
and relocates block assembly logic outside of the consensus engine.

Block assembly is consensus-agnostic. Most validations can be performed
by the caller. For example:

- Withdrawals must be nil prior to Shanghai.
- After the Shanghai upgrade, withdrawals must be non-nil, even if empty.

The only notable consensus-specific validation is related to uncles. In
clique, the concept of uncles does not exist, and any block containing
uncles should be considered invalid.

Within the block production package, the policy is to produce blocks
according to the latest chain specification. As a result, Clique-specific
block production is no longer supported. This tradeoff is considered
acceptable.
This is a prerequisite PR for landing the BAL construction.
Differences from AppVeyor:

- Missing 386 build. Hit some issues because user-space memory there is
around 2 GB. Also seems generally extremely niche.
- Not doing the archive step, NSIS installer, and uploads (those are
done on the builder).
…ld (#34784)

This PR reverts the last change to the freebsd build, and it fixes the
_direct_ FreeBSD build.

Here, we change the upstream of github.com/karalabe/hid to its new home,
github.com/ethereum/hid. The new dependency includes a dummy.go file
that makes `go mod vendor` work.

##### Origin of the problem

Enrique is maintaining the FreeBSD ports, and FreeBSD ports only support
vendored go modules. It turns out that `go mod vendor` will not include
C files if there is no `.go` file in the directory. Since the C files
were missing for `karalabe/hid`, the ports maintainer tried to use the
version of `hidapi` that is provided by the ports. To do so, he had to
modify the way things are included. This broke the _out of ports_
FreeBSD build.
Adds the installer and archive steps that were done on AppVeyor to the
gitea builder.
The rlpx ping command mishandled disconnect responses on two counts:
the error return from rlp.DecodeBytes was ignored, so decode failures
silently produced an "invalid disconnect message" error with no context;
and the decoder assumed the spec-compliant list form exclusively, while
older geth and some other implementations send the reason as a bare
byte.
                                                                  
Accept both wire forms (matching the legacy-tolerant behavior already
in p2p.decodeDisconnectMessage), and on decode failure include the raw
payload so operators can see exactly what the peer sent. Add a unit
test for the decoder covering both forms plus the empty-payload error
path.
scheduleFetches.func1 is the single biggest allocator in the Pyroscope
profile of a busy node (~13.5 GB/hr, 8% of total alloc_space). Each
peer-iteration pre-allocated 'make([]common.Hash, 0, maxTxRetrievals)'
= 8 KB, even for peers that end up collecting no new hashes (all their
announces were already being fetched by someone else).

Defer the slice allocation to the first append. Peers that collect zero
hashes now pay zero allocation, which is the common case on the
timeoutTrigger path where all peers with any announces are iterated.

New benchmarks BenchmarkScheduleFetches_{100peers_10new,
100peers_allFetching, 500peers_3new} (benchstat, 6 samples):

  scenario            ns/op       B/op        allocs/op
  100p/10new          unchanged   unchanged   unchanged   (fast path)
  100p/allFetching    -62%        -92%        -20%
  500p/3new           -22%        -44%        -7%
  geomean             -33%        -65%        -9%
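The lazy-allocation pattern described above can be sketched like this (illustrative names and types, not the actual fetcher code):

```go
package main

import "fmt"

const maxTxRetrievals = 1024 // illustrative cap

// collect defers the slice allocation to the first append: peers whose
// candidates are all already being fetched return nil without allocating
// the 8 KB backing array, which is the common case on the timeout path.
func collect(candidates []uint64, alreadyFetching map[uint64]bool) []uint64 {
	var hashes []uint64 // stays nil until something is actually collected
	for _, h := range candidates {
		if alreadyFetching[h] {
			continue
		}
		if hashes == nil {
			hashes = make([]uint64, 0, maxTxRetrievals)
		}
		hashes = append(hashes, h)
	}
	return hashes
}

func main() {
	fetching := map[uint64]bool{1: true, 2: true}
	fmt.Println(collect([]uint64{1, 2}, fetching)) // [] (nil, no allocation)
	fmt.Println(collect([]uint64{1, 3}, fetching)) // [3]
}
```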
This PR adds three cell-level kzg functions required for the sparse
blobpool (eth/72).

- VerifyCells: Verifies cells corresponding to proofs. This is used to
verify cells received from eth/72 peers.
- ComputeCells: Computes cells from blobs. This is needed because user
submissions and eth/71 transaction deliveries contain blobs, while
eth/72 peers expect cells.
- RecoverBlobs: Recovers blobs from partial cells. This is needed to
support both eth/71 and eth/72.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
When `rpc.Client.Close()` is called, the TCP connection is torn down
without sending a WebSocket Close frame. The server sees `websocket:
close 1006 (abnormal closure): unexpected EOF` instead of a clean 1000
(normal closure).

### Root cause

`websocketCodec.close()` delegates to `jsonCodec.close()` which calls
`c.conn.Close()` — gorilla/websocket's `Conn.Close` explicitly "[closes
the underlying network connection without sending or waiting for a close
message](https://pkg.go.dev/github.com/gorilla/websocket#Conn.Close)"
(per RFC 6455).

### Fix

Send a WebSocket Close control frame (opcode 0x8, status 1000) before
closing the underlying connection. Uses `WriteControl` with the same
`encMu` mutex pattern already used by `pingLoop` for write
serialization, and reuses the existing `wsPingWriteTimeout` (5s)
constant.

`WriteControl` errors are safe to ignore — the connection may already be
broken by the time we attempt the close frame.

Fixes #30482
Co-authored-by: jwasinger <j-wasinger@hotmail.com>
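For reference, the body of the Close frame being sent is small and fully specified by RFC 6455: a 2-byte big-endian status code plus an optional UTF-8 reason. A sketch of just that wire format (the fix itself sends it via gorilla/websocket's `WriteControl`, which is not shown here):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// closePayload builds the payload of a WebSocket Close control frame per
// RFC 6455: 2-byte big-endian status code, then an optional UTF-8 reason.
func closePayload(status uint16, reason string) []byte {
	buf := make([]byte, 2+len(reason))
	binary.BigEndian.PutUint16(buf, status)
	copy(buf[2:], reason)
	return buf
}

func main() {
	// Status 1000 (normal closure) encodes as 0x03E8.
	fmt.Printf("% x\n", closePayload(1000, "")) // 03 e8
}
```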
scheduleFetches.func1 is the biggest allocator in the long-duration
profile of node (11% of total alloc_space).
Each peer-iteration pre-allocated make([]common.Hash, 0, maxTxRetrievals),
even for peers that end up collecting no new hashes (all their announces
were already being fetched by someone else).

Defer the slice allocation to the first append. Peers that collect zero hashes
now pay zero allocation, which is the common case on the timeoutTrigger
path where all peers with any announces are iterated.
The testPeer request counters (nAccountRequests, nStorageRequests,
nBytecodeRequests, nTrienodeRequests) were plain int fields incremented
with ++. These increments happen in Request* methods that are invoked
concurrently by the Syncer from multiple goroutines
(assignBytecodeTasks, assignStorageTasks, etc.), causing a data race
reliably detected by go test -race.

Change the counters to atomic.Int64 so increments and reads are
synchronized without introducing a mutex.

Fixes races detected in TestMultiSyncManyUseless,
TestMultiSyncManyUselessWithLowTimeout,
TestMultiSyncManyUnresponsive, TestSyncWithStorageAndOneCappedPeer,
TestSyncWithStorageAndCorruptPeer, and
TestSyncWithStorageAndNonProvingPeer.
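The before/after of that counter fix boils down to this standalone pattern (a sketch, not the snap test code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// peer sketches the fix: a plain int field incremented with ++ from multiple
// goroutines is a data race; atomic.Int64 makes the increments and reads
// safe without introducing a mutex.
type peer struct {
	nRequests atomic.Int64
}

func main() {
	var p peer
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p.nRequests.Add(1) // race-free, unlike `p.nRequests++` on an int
		}()
	}
	wg.Wait()
	fmt.Println(p.nRequests.Load()) // 100
}
```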
The stateReadList field introduced by #34776 to track the state access
footprint for EIP-7928 was not propagated by StateDB.Copy. Every other
per-transaction field that lives alongside it (accessList,
transientStorage, journal, witness, accessEvents) is copied explicitly,
so this field was simply missed.

After Copy the copy's stateReadList is nil while the original keeps its
entries, so the nil-safe guards on StateAccessList.AddAccount / AddState
silently drop every access recorded on the copy. For any post-Amsterdam
code path that copies a prepared state and keeps reading from the copy,
the BAL footprint becomes incomplete.

Add a Copy method on bal.StateAccessList and invoke it from
StateDB.Copy, matching the pattern used for accessList and accessEvents.

---------

Co-authored-by: jwasinger <j-wasinger@hotmail.com>
This PR updates the BAL structure definition to the latest spec:

- Balance has been changed from [16]byte to uint256
- Storage key and value have been changed from [32]byte to uint256
- BlockAccessList has been changed from a struct to a slice of
AccountChanges
- TxIndex has been changed from uint16 to uint32
`StateSetWithOrigin.decode()` was missing size computation after
deserializing origin data, causing `size` to remain zero after journal
reload. Added the same calculation logic used in
`NewStateSetWithOrigin()`.
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
…ers (#34743)

Save `el.Next()` before calling `plist.Remove(el)` so iteration
continues correctly. Previously the loop exited after removing the first
expired matcher because `Remove` invalidates the element's links.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
Here, we change the EVM stack implementation to use an 'arena', i.e.
a shared allocation pool for sub-call stacks. The stack is now more
GC-friendly, since it is a slice of uint256 values instead of a slice of pointers.

Code that pushes an item to the stack has been changed to get() the top
item, then overwrite it.

The PR is a rewrite/rebase of #30362.

---------

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
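A minimal sketch of the arena-style idea, assuming a value-typed backing slice and the get-then-overwrite push described above (uint64 stands in for uint256, and names are illustrative, not the actual EVM stack API):

```go
package main

import "fmt"

// stack keeps its items as values in one growable slice, so the garbage
// collector sees a single pointer-free allocation instead of a slice of
// pointers.
type stack struct {
	data []uint64
}

// pushSlot grows the stack by one and returns the new top slot for the
// caller to overwrite in place, mirroring the get-then-overwrite pattern.
func (s *stack) pushSlot() *uint64 {
	s.data = append(s.data, 0)
	return &s.data[len(s.data)-1]
}

func (s *stack) pop() uint64 {
	v := s.data[len(s.data)-1]
	s.data = s.data[:len(s.data)-1]
	return v
}

func main() {
	var s stack
	*s.pushSlot() = 42 // push expressed as: grow, then overwrite the top
	fmt.Println(s.pop()) // 42
}
```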
EIP-7825 caps the transaction gas limit at `MaxTxGas`, but after
Amsterdam/EIP-8037 the transaction gas limit can include state gas
reservoir in addition to the regular gas dimension. Applying the Osaka
cap to the full `tx.Gas()` rejects otherwise valid Amsterdam
transactions that need more than `MaxTxGas` total gas because of state
gas, while their regular gas use remains within the intended limit.

This changes geth to stop applying the full transaction gas cap once
Amsterdam is active:

- txpool stateless validation no longer rejects `tx.Gas() > MaxTxGas`
under Amsterdam
- legacy pool reorg cleanup does not purge high-total-gas transactions
at the Osaka transition if Amsterdam is also active
- execution precheck mirrors the txpool behavior and does not reject
high-total-gas messages under Amsterdam

The block gas limit check remains in place, so transactions still cannot
request more total gas than the current block gas limit.

Validation run:

```
go test ./core/txpool ./core/txpool/legacypool
go test ./core -run TestStateProcessorErrors
```

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
- Fixes an error shadowing issue in the deliver() function, where a
stale result from GetDeliverySlot caused the original failure to be
overwritten by errStaleDelivery.
- Adds errInvalidBody and errInvalidReceipt to the downloader error
checks to properly drop peers who sent invalid responses.

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
…st (#34639)

The layer-5 diff condition used `i > 50 || i < 85`, which is true for
almost all keys in the 0..255 loop. Use `i > 50 && i < 85` so layer 5
only covers the intended band (51..84), consistent with the snapshot
iterator test fix.