Peer store refacto #700
Conversation
libp2p/peerstore.nim
Outdated
    peerStore: PeerStore,
    peerId: PeerId) =

  if peerStore.capacity <= 0:
Usually -1 would mean infinite capacity, seems to be reversed here?
Added this (see the sketch below):
> 0: regular capacity
0: no capacity
< 0: infinite
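For illustration, a minimal sketch of how these capacity values might be interpreted; the pruningMode name and the return strings are invented for this example and are not part of nim-libp2p:

```nim
# Hedged illustration only: maps the capacity values above to behaviours.
proc pruningMode(capacity: int): string =
  if capacity < 0:
    "infinite: keep every disconnected peer"
  elif capacity == 0:
    "no capacity: drop disconnected peers immediately"
  else:
    "regular: keep at most " & $capacity & " disconnected peers"

assert pruningMode(-1) == "infinite: keep every disconnected peer"
assert pruningMode(0) == "no capacity: drop disconnected peers immediately"
```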
@@ -307,6 +307,9 @@ proc peerCleanup(c: ConnManager, conn: Connection) {.async.} =
    await c.triggerConnEvent(
      peerId, ConnEvent(kind: ConnEventKind.Disconnected))
    await c.triggerPeerEvents(peerId, PeerEvent(kind: PeerEventKind.Left))

    if not(c.peerStore.isNil):
      c.peerStore.cleanup(peerId)
It makes sense to clean up peers that were disconnected. If Waku were to use the native switch peer store, we'd need to change some peer persistence logic though - we do currently track disconnected peers in our peer store (a DisconnectBook), mostly because it's a convenient way to determine a back-off period before attempting to reconnect to such peers after a restart. I think we need to do something else here in any case, though, and can't think of any other good reasons to keep peers in the store after disconnection.
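For context, a hypothetical sketch of that back-off use case; the DisconnectBook name comes from this comment, but the fields, the canReconnect proc, and the 5-minute default below are assumptions, not Waku's actual implementation:

```nim
import std/[tables, times]

type
  PeerId = string                         # stand-in for libp2p's PeerId
  DisconnectBook = object
    disconnectedAt: Table[PeerId, Time]   # when the peer was last seen disconnecting

proc canReconnect(book: DisconnectBook, peerId: PeerId,
                  backoff = initDuration(minutes = 5)): bool =
  ## A peer becomes reconnectable once the back-off period has elapsed
  ## (or if no disconnection was ever recorded for it).
  if peerId notin book.disconnectedAt:
    return true
  getTime() - book.disconnectedAt[peerId] > backoff
```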
Yes, currently this will keep a certain number of disconnected peers (peerStore.capacity, which the app can choose), and then remove them from every book.
I was wondering if you needed more sophisticated logic (eg, per-book settings for how many dead peers to save).
Happy to brainstorm a bit on your needs :)
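A minimal, self-contained sketch of that pruning behaviour, assuming a capacity counter and an eviction list; the type and field names below are illustrative stand-ins, not the actual nim-libp2p API:

```nim
import std/sequtils

type
  PeerId = string
  MiniPeerStore = object
    capacity: int             # > 0: keep that many dead peers, 0: keep none, < 0: keep all
    toPrune: seq[PeerId]      # disconnected peers, oldest first
    addressBook: seq[PeerId]  # stand-in for the per-peer books

proc del(store: var MiniPeerStore, peerId: PeerId) =
  ## Remove a peer from every book (here, just the stand-in one).
  store.addressBook.keepItIf(it != peerId)

proc cleanup(store: var MiniPeerStore, peerId: PeerId) =
  ## Called when a peer disconnects: evict the oldest dead peers
  ## until we are back under capacity.
  if store.capacity < 0:
    return                        # infinite capacity: never prune
  store.toPrune.add(peerId)
  while store.toPrune.len > store.capacity:
    store.del(store.toPrune[0])   # drop the oldest disconnected peer first
    store.toPrune.delete(0)
```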
Makes sense to me. Thanks!
What's missing here?
libp2p/builders.nim
Outdated
@@ -187,6 +192,9 @@ proc build*(b: SwitchBuilder): Switch
  if isNil(b.rng):
    b.rng = newRng()

  if isNil(b.peerStore):
If PeerStore is required now, we don't need a withPeerStore anymore, at least not one that takes an instance rather than only the non-default params needed to construct it? Otherwise, this construction flow doesn't make much sense.
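To illustrate the flow being suggested, a hedged sketch where the builder takes only the non-default parameter and build() creates the store itself; all names below are stand-ins, not the real nim-libp2p builder API:

```nim
import std/options

type
  MiniPeerStore = ref object
    capacity: int

  MiniSwitchBuilder = ref object
    peerStoreCapacity: Option[int]
    peerStore: MiniPeerStore

proc withPeerStore(b: MiniSwitchBuilder, capacity: int): MiniSwitchBuilder =
  ## Take only the non-default parameter, not a ready-made instance.
  b.peerStoreCapacity = some(capacity)
  b

proc build(b: MiniSwitchBuilder): MiniPeerStore =
  ## The builder, not the caller, instantiates the store.
  if b.peerStore.isNil:
    b.peerStore = MiniPeerStore(capacity: b.peerStoreCapacity.get(1000))
  b.peerStore

let store = MiniSwitchBuilder().withPeerStore(capacity = 50).build()
assert store.capacity == 50
```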
LGTM, besides the convoluted construction flow. Needs to be changed before merging.
Improves the Peer Store:
- There is now a global PeerStore structure (instead of having one for libp2p, one for waku, etc)
- To create and use a PeerBook, the operators have been nim-ified ([], []= & del); see the sketch after this list
- A pruner system will only keep a certain number of dead peers in memory
- The inner books now use seqs instead of HashSets (eg PeerBook[seq[MultiAddress]]), because we generally don't have enough elements to justify the overhead of a HashSet

LMKWYT
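A self-contained sketch of the nim-ified book operators and the seq-based values mentioned above; the real nim-libp2p types differ, this only illustrates the shape of the API:

```nim
import std/tables

type
  PeerId = string         # stand-in for libp2p's PeerId
  MultiAddress = string   # stand-in for libp2p's MultiAddress
  PeerBook[T] = object
    book: Table[PeerId, T]

proc `[]`[T](b: PeerBook[T], peerId: PeerId): T =
  ## Read a peer's entry; returns an empty value if the peer is unknown.
  b.book.getOrDefault(peerId)

proc `[]=`[T](b: var PeerBook[T], peerId: PeerId, value: T) =
  ## Write a peer's entry.
  b.book[peerId] = value

proc del[T](b: var PeerBook[T], peerId: PeerId) =
  ## Drop a peer's entry.
  b.book.del(peerId)

var addressBook: PeerBook[seq[MultiAddress]]   # seq instead of HashSet
addressBook["peerA"] = @["/ip4/127.0.0.1/tcp/4001"]
echo addressBook["peerA"]   # -> @["/ip4/127.0.0.1/tcp/4001"]
addressBook.del("peerA")
```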