
potential memory trample in Mempool reactor #121

Closed
unclezoro opened this issue Oct 4, 2019 · 0 comments

In the mempool reactor, Receive does not process the message inline; it pushes msgBytes onto recvCh for asynchronous handling:

func (memR *MempoolReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	memR.recvCh <- &MempoolPacket{chID: chID, src: src, msgBytes: msgBytes}
}

Meanwhile, in MConnection, the slice backing msgBytes is being reused:

func (ch *Channel) recvPacketMsg(packet PacketMsg) ([]byte, error) {
	ch.Logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
	var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving) + len(packet.Bytes)
	if recvCap < recvReceived {
		return nil, fmt.Errorf("Received message exceeds available capacity: %v < %v", recvCap, recvReceived)
	}
	ch.recving = append(ch.recving, packet.Bytes...)
	if packet.EOF == byte(0x01) {
		msgBytes := ch.recving

		// clear the slice without re-allocating.
		// http://stackoverflow.com/questions/16971741/how-do-you-clear-a-slice-in-go
		//   suggests this could be a memory leak, but we might as well keep the memory for the channel until it closes,
		//	at which point the recving slice stops being used and should be garbage collected
		ch.recving = ch.recving[:0] // make([]byte, 0, ch.desc.RecvBufferCapacity)
		return msgBytes, nil
	}
	return nil, nil
}
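
The msgBytes returned from recvPacketMsg aliases ch.recving's backing array, so once ch.recving is truncated with ch.recving[:0], the next packet appended into the channel can overwrite bytes that the mempool reactor has not yet drained from recvCh. A minimal, self-contained sketch of the aliasing (hypothetical names, shown sequentially rather than racing across goroutines):

package main

import "fmt"

func main() {
	// Stand-in for ch.recving: a reusable buffer with spare capacity.
	recving := make([]byte, 0, 64)

	// Packet 1 arrives and is handed off, like msgBytes := ch.recving.
	recving = append(recving, []byte("packet-1")...)
	msgBytes := recving // the reactor would push this onto recvCh

	// "Clear the slice without re-allocating": the backing array is kept.
	recving = recving[:0]

	// Packet 2 reuses the same backing array and tramples the bytes the
	// asynchronous consumer has not read yet.
	recving = append(recving, []byte("packet-2")...)

	fmt.Printf("consumer sees: %s\n", msgBytes) // prints "packet-2", not "packet-1"
}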
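
One possible mitigation (a sketch only, not necessarily the fix the project adopted) is to copy msgBytes before the asynchronous handoff, so the reactor no longer aliases MConnection's reusable buffer:

func (memR *MempoolReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	// Copy before enqueueing so later packets appended into ch.recving
	// cannot overwrite bytes still sitting in recvCh.
	buf := make([]byte, len(msgBytes))
	copy(buf, msgBytes)
	memR.recvCh <- &MempoolPacket{chID: chID, src: src, msgBytes: buf}
}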