
Allow async appenders to block when buffer is full instead of dropping event #559

Closed
brenuart opened this issue Jul 7, 2021 · 4 comments · Fixed by #565

@brenuart
Collaborator

brenuart commented Jul 7, 2021

Async appenders currently drop events when the queue is full. Instead, there are cases where it might be useful for the appender to block until some space is available. The idea is to use the async feature as a buffer to handle spikes of events without losing anything when they can't all fit in the buffer. In this case the logging thread is blocked until some free space becomes available...

This usage pattern is a mix between async (event processing offloaded to a separate thread) and sync (not losing anything).
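The two behaviors being contrasted map directly onto two standard queue primitives. The sketch below is illustrative only: logstash-logback-encoder's async appenders are actually backed by the LMAX Disruptor ring buffer, not a `BlockingQueue`, and the class and method names here are hypothetical.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of the two append strategies discussed in this issue.
public class AppendStrategies {
    // Tiny capacity so the "buffer full" case is easy to demonstrate.
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(2);

    // Current behavior: drop the event immediately when the buffer is full.
    boolean appendOrDrop(String event) {
        return buffer.offer(event); // returns false instead of blocking
    }

    // Proposed behavior: block the logging thread until space frees up.
    void appendBlocking(String event) throws InterruptedException {
        buffer.put(event); // waits for free space; no event is ever dropped
    }

    public static void main(String[] args) {
        AppendStrategies s = new AppendStrategies();
        System.out.println(s.appendOrDrop("e1")); // true
        System.out.println(s.appendOrDrop("e2")); // true
        System.out.println(s.appendOrDrop("e3")); // false: buffer full, dropped
    }
}
```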

@philsttr
Collaborator

philsttr commented Jul 7, 2021

Seems reasonable.

In addition to block-forever, I would also like to see block-with-timeout as an option.

@brenuart
Collaborator Author

brenuart commented Jul 7, 2021

What would be the use case? I mean, if the buffer is full, it means you are logging faster than the destination can cope with... To me, adding a timeout is like increasing the buffer: it gives you more chances to have the event enqueued and not lost. If the timeout expires, the event is lost just as if there were no timeout. Well, I don't see any valid use case where it would help... Do you have some in mind?

brenuart added a commit to brenuart/logstash-logback-encoder that referenced this issue Jul 8, 2021
@philsttr
Collaborator

> I mean, if the buffer is full, it means you are logging faster than the destination can cope with...

It could also mean that the application is having transient connection issues to the log destination.

> To me, adding a timeout is like increasing the buffer: it gives you more chances to have the event enqueued and not lost.

Yeah, it's like increasing the buffer, but without the memory usage side effect.

> If the timeout expires, the event is lost just as if there were no timeout. Well, I don't see any valid use case where it would help... Do you have some in mind?

What I was trying to avoid is blocking the application forever if the connection to the downstream log destination fails and continues to fail (mainly thinking about the tcp appender). But maybe that's what some applications need (?). I personally do not want my production systems to lock up if the log destination has an issue. But I guess there are some apps where locking up is desirable (maybe for batch processing or something not handling active requests).

I see your point regarding a timeout though. Also, if the buffer is consistently full (like when the connection to the log destination is failing), and every log message has to wait for a timeout, the app will just grind slowly, which is not much better than blocking forever.

Having said all that, a low timeout is roughly equivalent to just dropping immediately. And a high timeout is roughly equivalent to blocking forever. So, I'll drop the timeout as a requirement. We can just offer two options, to block indefinitely or to drop immediately.
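The three policies weighed above (drop immediately, block with a timeout, block forever) can all be expressed through one timed-offer primitive, which is likely why #565 later found it easy to support all three. A minimal sketch, again using a plain `BlockingQueue` as a stand-in for the disruptor ring buffer (the `append` helper is hypothetical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: one timeout parameter covers all three policies discussed above.
public class TimeoutAppend {
    // Capacity 1 so a second append with no consumer always finds it full.
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1);

    // timeoutMillis < 0  -> block forever
    // timeoutMillis == 0 -> drop immediately
    // timeoutMillis > 0  -> block up to the timeout, then drop
    boolean append(String event, long timeoutMillis) throws InterruptedException {
        if (timeoutMillis < 0) {
            buffer.put(event); // blocks until space is available
            return true;
        }
        return buffer.offer(event, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        TimeoutAppend t = new TimeoutAppend();
        System.out.println(t.append("e1", 0));  // true: buffer had room
        System.out.println(t.append("e2", 50)); // false: still full after 50 ms
    }
}
```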

@brenuart
Collaborator Author

brenuart commented Jul 10, 2021

See #565 - I think we can easily support drop, block forever, and block with timeout...

Unless offering all that flexibility is confusing to the user, in which case we could revert to a simple boolean flag: "block" or "drop".
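For reference, the flexibility brenuart describes can be exposed as a single duration-valued appender property. The configuration sketch below assumes the `appendTimeout` property name from PR #565 and Logback's usual duration syntax; verify the exact property name, default value, and accepted formats against the project's README before relying on it.

```xml
<!-- Sketch only: property names taken from PR #565; verify against the README. -->
<appender name="tcp" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logs.example.com:5000</destination>
    <!-- -1 = block forever, 0 = drop immediately, >0 = block up to this long -->
    <appendTimeout>100 milliseconds</appendTimeout>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```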

brenuart added a commit to brenuart/logstash-logback-encoder that referenced this issue Jul 10, 2021
@philsttr philsttr linked a pull request Jul 10, 2021 that will close this issue
@brenuart brenuart added this to the 7.0 milestone Jul 14, 2021
philsttr pushed a commit that referenced this issue Jul 14, 2021
…ut (#565)

* Add ability to block with optional timeout when appending to async appenders
* Wait for the buffer to drain before calling Disruptor#shutdown

Calling Disruptor#shutdown(timeout) while the buffer is not empty causes the disruptor to wait in a busy-loop consuming a lot of CPU. Instead, wait during the grace period before asking the disruptor to shut down immediately.

Fixes #559 
Fixes #566 
Fixes #569
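The drain-before-shutdown fix in the commit message above can be sketched as follows, with a plain queue standing in for the disruptor's ring buffer: poll cheaply through a grace period until the buffer empties, then shut down immediately, instead of handing a timeout to a shutdown call that busy-spins while events remain. The `awaitDrain` helper is hypothetical, not the library's actual code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of waiting out a grace period before a hard shutdown.
public class GracefulStop {
    // Returns true if the buffer drained within the grace period.
    static boolean awaitDrain(BlockingQueue<?> buffer, long graceMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + graceMillis;
        while (!buffer.isEmpty() && System.currentTimeMillis() < deadline) {
            Thread.sleep(10); // cheap wait, unlike a busy-loop
        }
        return buffer.isEmpty();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(4);
        System.out.println(awaitDrain(buffer, 100)); // true: nothing pending
        buffer.add("pending");
        System.out.println(awaitDrain(buffer, 50));  // false: event never consumed
    }
}
```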