Rework the pipeline: batch allocation + deterministic backoff #1769
base: main
Conversation
The performance regression on Linux has been fixed. This PR gives the same throughput as …

The current PR version embeds a batch-reuse mechanism: a single batch is kept in memory, and if it is not available (meaning it is currently owned by the tx task), a new one is allocated on demand, respecting the limit fixed by the configuration. The batch is refilled by the tx task after use, and the next one is dropped if the slot is already refilled.

Here are the flamegraphs of …
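Below is a minimal Rust sketch of the batch-reuse idea described above, not the actual pipeline code: the names (`Batch`, `BatchSlot`, `try_acquire`, `refill`) and the locking scheme are assumptions made for illustration.

```rust
use std::sync::Mutex;

struct Batch {
    buf: Vec<u8>,
}

struct BatchSlot {
    /// The single cached batch; `None` while the tx task owns it.
    cached: Mutex<Option<Batch>>,
    /// Upper bound on batches that may exist at once, taken from the configuration.
    capacity: usize,
    batch_size: usize,
}

impl BatchSlot {
    fn new(capacity: usize, batch_size: usize) -> Self {
        Self {
            cached: Mutex::new(Some(Batch {
                buf: Vec::with_capacity(batch_size),
            })),
            capacity,
            batch_size,
        }
    }

    /// Take the cached batch if it is free; otherwise allocate a new one on
    /// demand, respecting the configured limit (`allocated` counts live batches).
    fn try_acquire(&self, allocated: &mut usize) -> Option<Batch> {
        if let Some(batch) = self.cached.lock().unwrap().take() {
            return Some(batch);
        }
        if *allocated < self.capacity {
            *allocated += 1;
            return Some(Batch {
                buf: Vec::with_capacity(self.batch_size),
            });
        }
        None // limit reached: the caller has to back off
    }

    /// The tx task hands a used batch back after emitting it; if the slot has
    /// already been refilled, the extra batch is simply dropped.
    fn refill(&self, mut batch: Batch) {
        batch.buf.clear();
        let mut slot = self.cached.lock().unwrap();
        if slot.is_none() {
            *slot = Some(batch);
        }
        // else: drop `batch`, the slot already holds one
    }
}

fn main() {
    let slot = BatchSlot::new(4, 1 << 11);
    let mut allocated = 1; // the initially cached batch counts as one
    let batch = slot.try_acquire(&mut allocated).expect("cached batch is free");
    slot.refill(batch);
}
```

The key property is that only one batch stays resident per slot; extra batches exist only while the configured limit allows and are dropped on refill if the slot is already full.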
deadline: None,
wait_time,

impl StageIn {
    const SMALL_BATCH_SIZE: BatchSize = 1 << 11;
This constant is of course arbitrary. It can be exposed in configuration if needed.
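For reference, 1 << 11 is 2048 bytes. If it were exposed, a hypothetical configuration hook could look like the sketch below; the `TxQueueConf` struct and `small_batch_size` field are invented for illustration and are not part of zenoh-config.

```rust
/// Default for the small-batch threshold currently hard-coded as SMALL_BATCH_SIZE.
const DEFAULT_SMALL_BATCH_SIZE: u16 = 1 << 11; // 2048 bytes

/// Hypothetical tx-queue configuration section (illustrative only).
#[derive(Debug, Clone)]
struct TxQueueConf {
    /// Threshold under which a batch is considered "small".
    small_batch_size: u16,
}

impl Default for TxQueueConf {
    fn default() -> Self {
        Self {
            small_batch_size: DEFAULT_SMALL_BATCH_SIZE,
        }
    }
}

fn main() {
    let conf = TxQueueConf::default();
    println!("small_batch_size = {}", conf.small_batch_size);
}
```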
This reverts commit 6360eb6
Force-pushed from fd29946 to 524cf7b
data: 2,
data_low: 2,
background: 2,
data: 4,
I've reverted the previous commit, so the numbers are back to the original ones. Throughput of big messages was indeed heavily impacted by a smaller batch number: 130 kmsg/s -> 50 kmsg/s for 70 kB messages.
@Mallets Do you think we should modify the numbers, for example increasing the high priority to 2? In fact this PR does lazy allocation and keeps only one buffer in memory, so we could put any numbers; it would not change memory consumption at all, it is just a matter of throughput/congestion.
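As a back-of-the-envelope illustration of that point, assuming lazy allocation and a 65535-byte maximum batch (the batch size used here is an assumption): the configured queue size only bounds how many batches may exist under congestion, while an idle queue keeps a single cached batch.

```rust
/// Maximum batch size in bytes (assumption; used only to scale the numbers).
const BATCH_SIZE: usize = 65_535;

/// Memory held while a queue is idle: one cached batch, whatever the queue size.
fn steady_state_memory() -> usize {
    BATCH_SIZE
}

/// Memory bound reached only under congestion, when every allowed batch is allocated.
fn worst_case_memory(queue_size: usize) -> usize {
    queue_size * BATCH_SIZE
}

fn main() {
    for queue_size in [2, 4, 8] {
        println!(
            "queue size {queue_size}: steady state {} B, worst case {} B",
            steady_state_memory(),
            worst_case_memory(queue_size)
        );
    }
}
```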
DEFAULT_CONFIG.json5 (outdated)
data_low: 2,
background: 2,
data: 4,
data_low: 4,
The numbers are not coherent with commons/zenoh-config/src/defaults.rs.
You're right, good catch!
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@           Coverage Diff           @@
##             main   #1769   +/-   ##
=======================================
  Coverage        ?   70.45%
=======================================
  Files           ?      359
  Lines           ?    64735
  Branches        ?        0
=======================================
  Hits            ?    45608
  Misses          ?    19127
  Partials        ?        0

View full report in Codecov by Sentry.
See more details in #1769 (comment)