This repository was archived by the owner on Aug 23, 2020. It is now read-only.
Description
Currently RocksDB caches data as raw bytes, which means that two separate Transaction objects may be created for the same entry and manipulated in different threads.
We want to create a cache layer above the DB implementation that holds X thousand transactions and only stores them to the actual DB when they are evicted. This should allow us to avoid most reads from the DB.
Motivation
Reducing I/O overhead.
Requirements
We store the transaction object in the cache.
Benchmark disabling the RocksDB block cache
Every time we save or read a transaction, we store it in the cache, not in the DB.
Only write to disk when we evict from the cache.
Eviction policy: when the cache is full, we evict a block of X transactions, FIFO.
The cache will hold Y transactions.
Flush the cache to the DB on shutdown.
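The requirements above can be sketched as a small write-back cache. This is only a sketch: the class name, the generic key/value types, and the `persistToDb` placeholder are hypothetical, with a simple list standing in for the actual RocksDB write.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Write-back FIFO cache sketch: reads and writes go to the cache;
 *  the DB is touched only on eviction and on shutdown. */
public class TransactionCache<K, V> {
    private final int capacity;       // Y: total transactions held
    private final int evictionBlock;  // X: transactions written out per eviction
    // insertion-ordered map gives us FIFO eviction order
    private final LinkedHashMap<K, V> cache = new LinkedHashMap<>();
    private final List<Map.Entry<K, V>> persisted = new ArrayList<>(); // stand-in for RocksDB

    public TransactionCache(int capacity, int evictionBlock) {
        this.capacity = capacity;
        this.evictionBlock = evictionBlock;
    }

    /** Saves go to the cache only, evicting a FIFO block first if full. */
    public synchronized void put(K key, V tx) {
        if (!cache.containsKey(key) && cache.size() >= capacity) {
            evictBlock();
        }
        cache.put(key, tx);
    }

    /** Reads are served from the cache; a real implementation would
     *  fall back to the DB (and re-cache the result) on a miss. */
    public synchronized V get(K key) {
        return cache.get(key);
    }

    /** Evict the X oldest entries and persist them as one block. */
    private void evictBlock() {
        Iterator<Map.Entry<K, V>> it = cache.entrySet().iterator();
        for (int i = 0; i < evictionBlock && it.hasNext(); i++) {
            persistToDb(it.next());
            it.remove();
        }
    }

    /** Flush everything to the DB on shutdown. */
    public synchronized void shutdown() {
        cache.entrySet().forEach(this::persistToDb);
        cache.clear();
    }

    private void persistToDb(Map.Entry<K, V> entry) {
        persisted.add(entry); // placeholder for the actual RocksDB write
    }

    public synchronized int size() { return cache.size(); }
    public synchronized int persistedCount() { return persisted.size(); }
}
```

Because a single object per key lives in the cache, every thread that reads a key gets the same Transaction instance, which also addresses the duplication and race issues described above.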
Open Questions (optional)
We need to decide on the number of transactions that we want to store and calculate the amount of memory that it would occupy on the node. Without any calculations, I'd like to be able to store enough transactions to support 1000 TPS, but that is likely a ton of memory. We can start with 50-100 and see what that gets us. The eviction policy should evict a fraction of the pool, for example 1%, 3%, 10%, ... whatever makes sense for the given size.
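The sizing trade-off can be made concrete with some back-of-the-envelope arithmetic. The per-transaction footprint and the "one minute of inflow" horizon below are assumptions for illustration only; the real object size would need to be measured.

```java
public class CacheSizing {
    public static void main(String[] args) {
        final int tps = 1000;              // target throughput from the issue
        final int secondsOfInflow = 60;    // assumption: buffer one minute of traffic
        final int bytesPerTx = 2 * 1024;   // assumption: ~2 KB per cached object; measure in practice

        int cacheEntries = tps * secondsOfInflow;            // 60,000 transactions
        long cacheBytes = (long) cacheEntries * bytesPerTx;  // 122,880,000 bytes, ~123 MB
        int evictionBlock = cacheEntries * 3 / 100;          // 3% of the pool -> 1,800 per eviction

        System.out.printf("entries=%d bytes=%d evictionBlock=%d%n",
                cacheEntries, cacheBytes, evictionBlock);
    }
}
```

Under these assumptions, a full minute of 1000 TPS traffic already costs on the order of 100+ MB, which supports starting smaller and tuning upward.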
I'm open to configurability, unless we can squeeze a sufficient number of TXs into a very small memory footprint (which I reckon we can't). In that case I'd recommend adding a minimum value to the configuration parameter, e.g. at least 100-200 MB worth of transactions, which should help even low-resource nodes somewhat.
I'm a bit reserved about making the cache size dynamic, as we'd have to monitor/count the inflow of TXs and react based on that, meaning that if a large jump in TPS happened, we'd have to evict a lot of transactions very quickly before the cache mechanism adjusts. But I'm happy to be proven wrong with an approach that would work here.
The current approach causes several problems:
The data is replicated from the RocksDB block cache to the Java application layer, wasting memory.
If a transaction is read from the cache more than once, several Transaction objects are created, consuming more memory.
There can be race conditions between the objects created. Currently this is not too bad, since "Solidity" and "Validity" can only be changed from false to true and not the other way around, but we may still perform needless calculations.
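The object-duplication problem can be illustrated with a toy byte-backed store (all names hypothetical): two reads of the same key each deserialize a fresh object, so a flag set on one copy is invisible to the other.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class ByteStoreDemo {
    // Toy stand-in for a byte-level cache like RocksDB's block cache.
    static final Map<String, byte[]> store = new HashMap<>();

    static class Transaction {
        final String hash;
        boolean solid = false;           // may later flip from false to true
        Transaction(String hash) { this.hash = hash; }
    }

    // Each read deserializes a brand-new Transaction from the stored bytes.
    static Transaction read(String key) {
        return new Transaction(new String(store.get(key), StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        store.put("tx1", "tx1".getBytes(StandardCharsets.UTF_8));
        Transaction a = read("tx1");
        Transaction b = read("tx1");   // second read -> second, distinct object
        a.solid = true;                // update one copy...
        System.out.println(a != b);    // prints "true": two distinct objects
        System.out.println(b.solid);   // prints "false": ...the other never sees it
    }
}
```

An object-level cache that hands every caller the same instance per key removes both the duplicated memory and the stale-flag window shown here.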