bblfshd crashed and does not start after parsing millions of files #286
Comments
Try removing the
It also shows a giant stack trace at the end. From a quick look, it may have been affected by #264.
bblfshd v2.13.0 is out and should fix the crash, the memory consumption, and hopefully the CPU consumption as well. Leaving this issue open, since we still need to solve the issue with the socket left behind after a bblfshd crash.
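A rough workaround sketch for the leftover socket, assuming the default control-socket location (/var/run/bblfshctl.sock inside the container) and a volume-mounted /var/lib/bblfshd; this is an illustration, not a fix confirmed in this thread:

# Workaround sketch: the stopped container still holds the stale control socket
# (assumed default path: /var/run/bblfshctl.sock), so remove and recreate the container.
# Drivers and state survive if /var/lib/bblfshd is volume-mounted on the host.
docker rm bblfshd
docker run -d --name bblfshd --privileged -p 9432:9432 \
    -v /var/lib/bblfshd:/var/lib/bblfshd \
    bblfsh/bblfshd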
dennwc pushed a commit to dennwc/bblfshd that referenced this issue on May 3, 2019 (Signed-off-by: Denys Smirnov <[email protected]>)
dennwc pushed a commit to dennwc/bblfshd that referenced this issue on May 3, 2019 (Signed-off-by: Denys Smirnov <[email protected]>)
dennwc pushed a commit to dennwc/bblfshd that referenced this issue on May 3, 2019 (Signed-off-by: Denys Smirnov <[email protected]>)
dennwc pushed a commit to dennwc/bblfshd that referenced this issue on May 6, 2019 (Signed-off-by: Denys Smirnov <[email protected]>)
I ran a massive file-parsing job on 4 machines:
typos-1.infra.mining.prod.srcd.host
typos-2.infra.mining.prod.srcd.host
typos-3.infra.mining.prod.srcd.host
typos-4.infra.mining.prod.srcd.host
The pipeline was the same as in #270 (comment). How I started bblfshd:
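The exact command was not captured in this report; a typical invocation, following the upstream bblfshd README, looks roughly like this (the image tag, host volume path, and driver choice are assumptions and should match the versions pinned in #281):

# Illustrative only; not the exact command used on these machines.
docker run -d --name bblfshd --privileged -p 9432:9432 \
    -v /var/lib/bblfshd:/var/lib/bblfshd \
    bblfsh/bblfshd
# Language drivers are then installed through bblfshctl inside the container, e.g.:
docker exec -it bblfshd bblfshctl driver install python bblfsh/python-driver:latest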
All the versions correspond to #281.
Those instances ran for several days and then crashed.
docker start bblfshd
starts the server, but it immediately stops again. Each of the 4 machines shows the same symptoms. The free space is fine: 75% of a 200GB volume is free. Here are the logs:
Each file is ~4 gigs uncompressed.
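For anyone retracing the triage, a minimal way to confirm the immediate exit and rule out disk pressure could look like the following (generic Docker and coreutils commands, not the exact ones run on these machines):

docker logs --tail 100 bblfshd    # last daemon output before it stopped
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' bblfshd    # exit code and error, if any
df -h /var/lib/docker             # confirm free space on the volume backing the containers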
I noticed that #270 persists. Access to my machines can be granted by @rporres upon request.