[release-3.5] Fix a performance regression due to uncertain compaction sleep interval #19405
Conversation
Hi @miancheng7. Thanks for your PR. I'm waiting for an etcd-io member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The compaction behavior was changed in commit [02635](etcd-io@0263597) and introduces a latency issue. To be more specific, the `ticker.C` acts as a fixed timer that triggers every 10ms, regardless of how long each batch of compaction takes. This means that if a previous compaction batch takes longer than 10ms, the next batch starts immediately, making compaction a blocking operation for etcd. To fix the issue, this commit reverts the compaction to the previous behavior, which ensures a 10ms delay between each batch of compaction, allowing other read and write operations to proceed smoothly. Signed-off-by: Miancheng Lin <iml@amazon.com>
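A minimal, self-contained Go sketch (illustration only, not etcd's actual compaction code) of the difference described above: a fixed `time.Ticker` keeps firing on schedule, so a batch that overruns the interval leaves no idle time before the next one, whereas `time.After` starts counting only after the batch finishes and therefore always yields a full pause.

```go
package main

import (
	"fmt"
	"time"
)

// doBatch simulates one compaction batch that may take longer than the interval.
func doBatch(d time.Duration) { time.Sleep(d) }

func main() {
	const interval = 10 * time.Millisecond

	// Ticker-based loop: the ticker keeps firing every 10ms regardless of how
	// long doBatch takes, so a slow batch leaves no idle time before the next one.
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	start := time.Now()
	for i := 0; i < 3; i++ {
		doBatch(15 * time.Millisecond) // batch overruns the interval
		<-ticker.C                     // tick already buffered; returns immediately
	}
	fmt.Println("ticker loop:", time.Since(start)) // roughly 3 x 15ms, no pauses

	// time.After-based loop: the timer starts after the batch finishes, so
	// there is always a full 10ms pause for other reads/writes to proceed.
	start = time.Now()
	for i := 0; i < 3; i++ {
		doBatch(15 * time.Millisecond)
		<-time.After(interval)
	}
	fmt.Println("time.After loop:", time.Since(start)) // roughly 3 x (15ms + 10ms)
}
```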
Force-pushed from 07e9fd6 to da930c7.
cc @chaochn47 for visibility
@@ -90,7 +88,7 @@ func (s *store) scheduleCompaction(compactMainRev, prevCompactRev int64) (KeyVal
 	dbCompactionPauseMs.Observe(float64(time.Since(start) / time.Millisecond))

 	select {
-	case <-batchTicker.C:
+	case <-time.After(s.cfg.CompactionSleepInterval):
While I understand you are doing a revert of the change, I was wondering a couple of things:
- Curious why it's defaulted to `10ms` today in the first place?
- As I understand it, compaction takes longer than `10ms` in clusters with large data sets, leading to noticeable latency increases on the client side (that's how this issue surfaced, IIUC). So I was wondering if we could adjust the sleep interval dynamically based on pending apply requests instead of reserving a fixed `10ms`. The more apply requests pending, the longer the sleep interval, which would reduce client latency (for reads/writes) during high-load scenarios and could be beneficial in large clusters.

WDYT?
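A hypothetical sketch of the adaptive-sleep idea above; etcd does not implement this, and the names (`pendingApplyRequests`, `baseInterval`, `maxInterval`) and the scaling heuristic are assumptions made up purely for illustration.

```go
package main

import "time"

// adaptiveSleepInterval scales the pause between compaction batches with the
// number of pending apply requests: the larger the backlog, the longer the
// pause, capped at maxInterval so compaction still makes progress.
func adaptiveSleepInterval(pendingApplyRequests int, baseInterval, maxInterval time.Duration) time.Duration {
	d := baseInterval * time.Duration(1+pendingApplyRequests)
	if d > maxInterval {
		return maxInterval
	}
	return d
}

func main() {
	base := 10 * time.Millisecond
	max := 100 * time.Millisecond
	for _, pending := range []int{0, 3, 50} {
		_ = adaptiveSleepInterval(pending, base, max) // 10ms, 40ms, 100ms (capped)
	}
}
```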
For question 1, the 10ms was introduced by #11034, and it seems to be an empirical parameter; see #11021 (comment) (not the PR author's comment).
For question 2, regardless of the duration of a compaction batch, the current implementation does not guarantee a full CompactionSleepInterval sleep. Given that CompactionSleepInterval is already configurable, I think we could address this issue first and then introduce an adaptive sleep mode in a separate pull request 🤔
For the backport, please stick to the minimal change that addresses the issue.
+1 to keep the minimal change in this PR.
For any improvement, let's address & discuss it separately.
Yes, I am totally for merging this to fix the problem at hand.
> let's address & discuss it separately.
I just left this comment to see what the community thinks and whether it makes sense to go in that direction. I can probably cut a separate issue to channel this discussion.
Sorry, I haven't had time to dig into your proposal yet. Please feel free to raise a separate ticket to present your thought/idea if you want. But you need to convince us that it's indeed a real issue and demonstrate a real benefit to go in that direction. Overall, I do not see a high priority to change it for now.
Please start from sending the PR to the `main` branch.
It would also be nice to have a test/benchmark to confirm the fix and prevent regressions in the future.
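A rough sketch of what such a regression test could look like, assuming we only want to assert that a full pause happens between batches; it uses a stand-in `simulateCompaction` helper rather than etcd's real test harness, so names and structure here are illustrative.

```go
package main

import (
	"testing"
	"time"
)

// simulateCompaction runs one "batch" of the given duration, then waits the
// configured sleep interval the way the fixed code does: time.After starts
// counting only after the batch finishes. It returns the observed pause.
func simulateCompaction(batch, interval time.Duration) time.Duration {
	time.Sleep(batch) // stand-in for deleting one batch of revisions
	start := time.Now()
	<-time.After(interval)
	return time.Since(start)
}

func TestPauseIsAtLeastSleepInterval(t *testing.T) {
	interval := 10 * time.Millisecond
	// Batch deliberately longer than the interval: the case that regressed.
	pause := simulateCompaction(15*time.Millisecond, interval)
	if pause < interval {
		t.Fatalf("expected pause >= %v between batches, got %v", interval, pause)
	}
}
```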
LGTM
Thank you!
/ok-to-test
+1. Sorry, I did not notice that this PR is on 3.5.
Let me send a PR to main now.
Link to the PR on main: #19410
Changes look good.
Just thinking about how to handle backports in the future. Since we have merged the change into the main branch, should we cherry-pick it or just merge this one? My suggestion is that we should address the issue in the main branch first and then backport if applicable.
However, this change came first. We can merge this.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: ahrtr, fuweid, miancheng7
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Usually a fix goes into main first, then gets backported to stable releases. But it's not a big problem. It's OK as long as we ensure the fix is merged on the higher versions first.
Thanks for the information. Will go with this approach next time.
The compaction behavior here was changed in commit 02635 and introduces a latency issue. To be more specific, `ticker.C` acts as a fixed timer that triggers every 10ms, regardless of how long each batch of compaction takes. This means that if a previous compaction batch takes longer than 10ms, the next batch starts immediately, making compaction a blocking operation for etcd. To fix the issue, this commit reverts the compaction to the previous behavior, which ensures a 10ms delay between each batch of compaction, allowing other read and write operations to proceed smoothly.
issue #19406
Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.