# Lighthouse Update #21

A brief collection of testnet updates and development progress.

# The quick summary

Since the last update, Lighthouse has undergone a number of experimental testnets and significant code updates. In December we launched a mainnet-spec'd testnet with 16k validators. The results of this testnet are documented in our last post.

Following this, we weren't happy with the block processing speeds, which manifested as slow syncing (around 1-3 blocks per second on an average-grade computer). This led to a number of significant performance improvements (the specifics are documented below) and a new experimental testnet to verify these updates.

The newer testnet showed significant improvements over its predecessor. We managed to shrink the active hot database size (the on-disk database size whilst actively synced on the network) from over 3GB down to a few hundred megabytes. This, in turn, increased the read/write speeds by an order of magnitude. Further, we improved block processing speeds and saw syncing speeds of over 10 blocks per second on an average-grade computer.

This experimental testnet was not without its issues. During the performance improvements, we introduced some concurrency issues in block processing which affected our fork choice algorithm, and we saw some nodes finding different head blocks once synced. As the design of Eth 2.0 is resilient, this bug did not halt the network. We chose to keep this testnet alive despite the bug, and saw an opportunity to upgrade our fork-choice algorithm whilst we were modifying this portion of the code. We also compiled a range of syncing issues reported by community members (thank you everyone!) which had to be rectified.

As an overall summary: the latest experimental testnet has now been terminated (details are given in the following section), our syncing algorithm has been improved and updated, we have a shiny new (and faster) fork-choice algorithm, and we are adding the final touches to bring all the code up to the latest v0.10.1 specification. Expect another experimental testnet very soon. With it, we will be aiming to test the scalability of Lighthouse with a large number of nodes and validators and, as always, we encourage community members to join and help out by reporting any issues observed in the process.

## The end of the latest experimental testnet

Just as we were adding the final touches to our new fork-choice algorithm (see below), which would correct the known bug introduced during our performance updates, we saw the majority of our validator nodes crash simultaneously. Although crashing typically sounds bad, events such as this are exactly what we are looking for in our testnets: they identify serious errors in the code that could otherwise arise once we hit mainnet.

After a testnet post-mortem, we identified that the fork-choice bug compounded with an oversight in the message propagation logic to create a cyclic amplification of attestations across the gossipsub network, flooding and overloading all the nodes with duplicate attestations. The oversight involved checking only the last 256 attestations for duplicates; the fork-choice bug, however, produced nodes with competing head blocks and hence more unique attestations for different head blocks. With over 16k validators, a cycle of more than 256 attestation messages allowed duplicates to slip past the check, re-propagate and flood the network. The fix is simple: the cache of last-seen attestations should be at least as large as the number of validators we expect to see attestations from.
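The failure mode can be illustrated with a toy duplicate-suppression cache (this is a hypothetical sketch, not Lighthouse's actual gossip code): when the number of distinct attestations in flight exceeds the cache capacity, old entries are evicted and their duplicates are treated as new, so they re-propagate.

```rust
use std::collections::{HashSet, VecDeque};

/// Illustrative duplicate-suppression cache: remembers the IDs of the last
/// `capacity` attestations seen on gossip and drops re-broadcasts of them.
struct SeenCache {
    capacity: usize,
    order: VecDeque<u64>, // eviction order (oldest at the front)
    seen: HashSet<u64>,   // fast membership test
}

impl SeenCache {
    fn new(capacity: usize) -> Self {
        Self {
            capacity,
            order: VecDeque::new(),
            seen: HashSet::new(),
        }
    }

    /// Returns `true` if the attestation is new (and should be propagated),
    /// `false` if it is a known duplicate that must be dropped.
    fn observe(&mut self, attestation_id: u64) -> bool {
        if self.seen.contains(&attestation_id) {
            return false;
        }
        if self.order.len() == self.capacity {
            // Evict the oldest entry; its duplicates now look "new" again,
            // which is exactly how the flood started.
            if let Some(old) = self.order.pop_front() {
                self.seen.remove(&old);
            }
        }
        self.order.push_back(attestation_id);
        self.seen.insert(attestation_id);
        true
    }
}
```

With a cycle of, say, 300 unique attestations and a 256-entry cache, every attestation has been evicted by the time its duplicate returns, so the network re-propagates it indefinitely; sizing the cache to at least the validator count breaks the cycle.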

We had already finished updating the syncing logic, the new fork choice and the specification updates, and the crash had stopped finality, creating a large number of skip slots (not to mention it happened on a weekend). We therefore decided to let this testnet end and spin up a fresh new one with all the updates.

This is a win for all new testnet joiners, who now no longer have to sync a month's worth of blocks.

## The new fork-choice algorithm

The Ethereum researcher Diederik (more affectionately known as @protolambda or "Proto") proposed a new method for finding the head of the chain, i.e. a new fork-choice algorithm. (For more information on fork choice, check out our #08 update).

Previously we used the "reduced tree" fork choice optimization developed at IC3 2019 at Cornell University. Whilst the reduced tree was nice from a theoretical perspective, it spent CPU cycles minimizing memory usage during catastrophic network conditions. In practice, we found that we really needed to optimize for CPU cycles (sync speed) and that memory is relatively abundant.

With this new perspective, we went shopping for a new fork choice optimization and settled on Proto's innovative proto_array data structure. Whilst Proto had provided a clean and elegant reference implementation, it was written several months ago, before some additional complexity had been added to the Eth2 fork choice specification to mitigate the bounce attack on FFG.

Paul (Sigma Prime) spent several days reworking Proto's implementation to be suitable for the current Eth2 spec and he eventually produced a panic-free Rust implementation which then served as the reference for Prysm's own proto-array implementation. As always, Proto's help during the process was invaluable.

This week Paul's proto-array implementation was merged into master on Lighthouse and it will serve as our sole fork choice algorithm moving forward. It has proven to run in orders of magnitude less time and to perform significantly fewer database reads. Success all round.
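The core idea behind proto_array can be sketched in a few lines (a simplified illustration, not Lighthouse's implementation, and it omits the justification/finalization filtering the real spec requires): blocks live in a flat vector in insertion order, each recording its parent's index. Since children always appear after their parents, a single backwards pass can accumulate attestation weights into parent subtrees and record each node's best descendant, making head-finding a cheap array walk rather than a tree traversal with database reads.

```rust
/// One block in the flat array. Children always appear at higher indices
/// than their parents, because blocks are appended in insertion order.
struct ProtoNode {
    parent: Option<usize>, // index of the parent block, None for the root
    own_weight: u64,       // attestation weight voting for this exact block
}

/// Returns the index of the head block: the best descendant of the root.
fn find_head(nodes: &[ProtoNode]) -> usize {
    let n = nodes.len();
    let mut subtree = vec![0u64; n]; // total weight of each node's subtree
    let mut best: Vec<usize> = (0..n).collect(); // best descendant per node
    let mut best_weight = vec![0u64; n]; // heaviest child subtree seen so far
    // One backwards pass: every subtree is fully summed before its parent
    // is visited, so no recursion or repeated traversal is needed.
    for i in (0..n).rev() {
        subtree[i] += nodes[i].own_weight;
        let (w, b) = (subtree[i], best[i]);
        if let Some(p) = nodes[i].parent {
            subtree[p] += w;
            if w > best_weight[p] {
                best_weight[p] = w;
                best[p] = b;
            }
        }
    }
    best[0] // node 0 is the finalized root; its best descendant is the head
}
```

The appeal is that applying a batch of attestation changes and re-computing the head both touch a contiguous array in memory, which is exactly the CPU-over-memory tradeoff described above.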

## Syncing improvements

Throughout our testnets there have been some issues with syncing that prevented some nodes from reaching the head and required restarts. A range of edge cases and various bugs caused syncing to halt, preventing nodes from reaching the head slot. A number of these were corrected in the performance updates.

One of the biggest sources of these issues (known to us for a while) was that syncing performed the block processing task on our global executor. Block processing is heavy and blocks other tasks (such as libp2p negotiations and RPC calls), which caused lags and timeouts in various parts of the client. We were hoping to correct this during a client-wide upgrade to Rust's new stable futures, which in turn gives us added features and a faster runtime (thanks to tokio updates). As this update is still a little way off, we have instead updated the syncing mechanism to spawn a dedicated block-processing thread, freeing up the rest of the client's tasks. Additionally, some more advanced error detection, handling and processing have been baked into our syncing methodology, which should make syncing significantly more stable and reliable than in any previous version of Lighthouse.
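The shape of the fix looks roughly like the following (a minimal sketch with hypothetical names, not Lighthouse's actual API): sync hands blocks to a dedicated OS thread over a channel, so expensive state-transition work never stalls the executor driving libp2p negotiations and RPC calls.

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn a dedicated block-processing thread. The returned sender is used
/// by sync to enqueue raw blocks; the join handle yields a count of
/// processed blocks once every sender has been dropped.
fn spawn_block_processor() -> (mpsc::Sender<Vec<u8>>, thread::JoinHandle<usize>) {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    let handle = thread::spawn(move || {
        let mut processed = 0usize;
        // Blocks arrive until all senders are dropped, then the loop ends.
        for _block in rx {
            // ...expensive block processing would happen here, safely off
            // the async executor...
            processed += 1;
        }
        processed
    });
    (tx, handle)
}
```

The key property is that `tx.send(...)` returns immediately, so the networking tasks never wait on block processing.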

## Database improvements

We have greatly reduced Lighthouse's disk usage, prompted by the runaway disk usage we saw on the previous testnet. The main change we've made is to store BeaconStates in the hot database less frequently. We use a scheme similar to the freezer database, whereby full states are stored at the start of each epoch and all other states are reconstructed by quickly replaying blocks. This has reduced the on-disk size of the hot database by around 32x, and has had the positive side-effect of accelerating block processing and therefore sync. During a loss of finality, this should allow nodes to last around 32 times longer before exhausting their disk space.
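The reconstruction scheme can be sketched with toy types (everything here is illustrative: the real `BeaconState`, store layout and block processing are far richer): a full state is only persisted at epoch boundaries, and any other state is rebuilt by cloning the preceding boundary state and replaying at most an epoch's worth of blocks onto it.

```rust
use std::collections::HashMap;

const SLOTS_PER_EPOCH: u64 = 32;

/// Toy stand-in for a beacon state: just records which slots' blocks
/// have been applied to it.
#[derive(Clone)]
struct State {
    applied: Vec<u64>,
}

/// Toy stand-in for the hot database.
struct Store {
    boundary_states: HashMap<u64, State>, // full states at epoch starts only
    blocks: HashMap<u64, ()>,             // slot -> block (payload elided)
}

impl Store {
    /// Rebuild the state at an arbitrary slot: load the preceding
    /// epoch-boundary state, then replay the intervening blocks.
    fn load_state(&self, slot: u64) -> State {
        let boundary = slot - (slot % SLOTS_PER_EPOCH);
        let mut state = self.boundary_states[&boundary].clone();
        for s in (boundary + 1)..=slot {
            // Skip slots have no block, so nothing is applied for them.
            if self.blocks.contains_key(&s) {
                state.applied.push(s); // stand-in for real block processing
            }
        }
        state
    }
}
```

Storing one full state per 32-slot epoch instead of one per slot is where the roughly 32x reduction in hot-database size comes from, at the cost of replaying up to 31 blocks on a miss.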

The improvement to the hot database wasn't without its complications. Much of Lighthouse's code had been written under the assumption that loading states was fast. Performance regressions in fork choice and attestation processing led to the adoption of the new fork choice algorithm described above, and a new attestation validation strategy which makes use of a quick-to-load epoch boundary state.

The other major improvement was a simple tweak to the freezer database's default parameters. Rather than storing a full state every 64 slots, we now store one only every 2048 slots. This leads to a 32x reduction in the on-disk size of the freezer DB, which neatly matches the 32x reduction for the hot database. This space improvement doesn't come for free, however: we're in a classic space-time tradeoff, whereby halving the disk usage doubles the time required to fetch historical states. Given that historical states are never loaded during normal validation, we believe this is a reasonable default. Tips on configuring Lighthouse for other purposes can be found in the new Advanced section of the Lighthouse Book.
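The tradeoff is easy to sanity-check with back-of-envelope arithmetic; `freezer_cost` below is a hypothetical helper, not a Lighthouse function:

```rust
/// For a given restore-point spacing, estimate how many full states the
/// freezer holds over `total_slots` of history, and the worst-case number
/// of blocks to replay when loading one historical state.
fn freezer_cost(slots_per_restore_point: u64, total_slots: u64) -> (u64, u64) {
    let stored_states = total_slots / slots_per_restore_point + 1;
    let worst_case_replay = slots_per_restore_point - 1;
    (stored_states, worst_case_replay)
}
```

Over the same span of history, spacing restore points at 2048 slots instead of 64 stores roughly 32x fewer full states, but a historical-state query may now replay up to 2047 blocks instead of 63.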

## Things to Come

A new large-scale testnet is imminent, which will include all of the previously mentioned developments. We will use this experimental testnet to stress-test the network structure, code and design. This will also help stress-test our processing times and database implementation, and hopefully catch any bugs that could arise from a widely distributed pool of validator clients.

In parallel, we are completing an extensive update to our networking infrastructure, which will bring us largely up to the v0.10 specification for mainnet (from a networking perspective) and will introduce shard subnets. This will land in a subsequent testnet: although it should alleviate some network stress (which we want in our large-scale stress-testnet), it also introduces breaking changes.

We'll post updates on the status of our new testnets as they launch and report any interesting findings. As always, we encourage community participation in our bug-hunting endeavours and we promise to try and make it easy for everyone to join.