Fix ack handling #73
Conversation
You only need to do that when you're passing the value of this to another function. Also remove wrapper function from `FluentSender.emit`
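The reviewer's point about `this` binding can be sketched as follows (a minimal illustration, not the PR's actual code — `FluentSender` here is a stand-in class): binding is only needed when the method is handed around as a bare function, not when it is called directly on the instance.

```javascript
class FluentSender {
  constructor() { this.queue = []; }
  emit(record) { this.queue.push(record); }
}

const sender = new FluentSender();

// Direct call: no wrapper or bind needed.
sender.emit({ msg: 'a' });

// Passed as a callback: the bare function loses `this`, so bind it here.
const emitCb = sender.emit.bind(sender);
emitCb({ msg: 'b' });
```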
Force-pushed from 2fcee25 to a3049e9
Previously, receiving multiple acks within a 100 ms window caused all but the last one to be dropped. This fixes the issue and can even handle unordered acks.
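The fix described in the commit message can be sketched like this (names are illustrative, not the PR's actual code): track every outstanding chunk id in a Map instead of remembering only the latest one, so acks that arrive in a burst, or out of order, can each be matched to their own callback.

```javascript
// Hypothetical sketch: one pending entry per outstanding chunk, so no
// ack is dropped even when several arrive close together or unordered.
class AckTracker {
  constructor() {
    this.pending = new Map(); // chunk id -> callback waiting for its ack
  }
  expect(chunkId, callback) {
    this.pending.set(chunkId, callback);
  }
  handleAck(chunkId) {
    const callback = this.pending.get(chunkId);
    if (!callback) return false; // duplicate or unknown ack
    this.pending.delete(chunkId);
    callback();
    return true;
  }
}
```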
Force-pushed from a3049e9 to 59dd351
We must follow the Fluentd forward protocol specification, and we should follow the Fluentd out_forward implementation and other logger implementations. Fluentd out_forward and the other loggers block after sending a require-ack-response request, so we must block until the ack response is received.
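The blocking style this comment argues for can be sketched as follows (a hypothetical illustration; `socket` and `readAck` are stand-ins for the real transport, and this is not the project's actual code): write one chunk, then wait for the matching ack before reporting completion, so nothing else is sent in the meantime.

```javascript
// Hypothetical sketch: send a chunk, then hold off until its ack arrives.
function sendWithAck(socket, packet, chunkId, readAck, done) {
  socket.write(packet);
  readAck((ackId) => {
    // Nothing else is sent until this callback fires.
    done(ackId === chunkId ? null : new Error('ack mismatch: ' + ackId));
  });
}
```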
Where does the spec say we need to block until an ack is received?
The spec does not say we need to block until an ack is received. BTW, fluent-logger-python and fluent-logger-java don't support require-ack-response. fluent-logger-ruby doesn't support it yet, either.
If you need more performance, we can support PackedForward or CompressedPackedForward to send many events at once.
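In the forward protocol, PackedForward concatenates many individually msgpack-encoded `[time, record]` events into one binary blob and sends it as a single `["tag", entries, options]` message. A rough sketch of that buffering, assuming a msgpack encoder is injected as `pack` (illustrative only, not the project's actual code):

```javascript
// Hypothetical sketch of PackedForward buffering: pack each event,
// concatenate the packed bytes, and flush them as one message.
class PackedBuffer {
  constructor(pack) {
    this.pack = pack;   // function: value -> Buffer (e.g. a msgpack encoder)
    this.chunks = [];
    this.count = 0;
  }
  add(time, record) {
    this.chunks.push(this.pack([time, record]));
    this.count += 1;
  }
  flush(tag) {
    const entries = Buffer.concat(this.chunks);
    const message = [tag, entries, { size: this.count }];
    this.chunks = [];
    this.count = 0;
    return message; // the outer array is also msgpack-encoded on the wire
  }
}
```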
Over a high-latency connection, not waiting for an ack should have a significant performance advantage. And yeah, I'm working on packed forward support right now, but that also benefits from asynchronous sending.
Could you run a benchmark to compare packed forward mode and forward mode?
It depends a lot on how you're doing packed forward mode. If you're trying to send messages as soon as possible, then you're not going to be building up a backlog until you get well over 250 msg/sec (even if you limit to 1 outstanding ack at any given time), so messages will be sent individually up until that point. You could wait until you have a backlog of …
Are you sure that's a good thing? You don't need to look any further than the buffer compression and compressed forward implementation to see why that codebase shouldn't be treated as gospel. Plus, the fluentd daemon has on-disk storage, meaning that in the event of a crash the backlog can be recovered, unlike this implementation, where the backlog is lost. They can afford delays that cause logs to build up because they're dealing with a fundamentally different problem.
I think out_forward in fluentd v0.14 doesn't block sending data to wait for an ack. It creates another thread to handle it: https://github.com/fluent/fluentd/blob/master/lib/fluent/plugin/out_forward.rb#L254
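The non-blocking style this comment points at translates to Node.js roughly like this (a hypothetical sketch, not the linked Ruby code or this project's actual code; JSON stands in for msgpack decoding): the writer keeps sending while a separate `'data'` listener retires in-flight chunks as their acks come back.

```javascript
// Hypothetical sketch: acks are consumed by a reader attached to the
// socket, so sending never blocks waiting for them.
function attachAckReader(socket, inflight, onDrained) {
  socket.on('data', (buf) => {
    const { ack } = JSON.parse(buf.toString());
    if (inflight.delete(ack) && inflight.size === 0) {
      onDrained(); // every outstanding chunk has been acknowledged
    }
  });
}
```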
So far I've got this running at 250 msg/sec without issues. If I go faster than that I start missing acks, even though it seems that the messages got accepted. Not sure if that's an issue in the fluentd daemon, or something I've done wrong, but I'll keep polishing this up.
Previously, receiving multiple acks within a 100 ms window caused all but the last one to be dropped. This fixes the issue and can even handle unordered acks.
Closes #67