
[release-3.5] Fix a performance regression due to uncertain compaction sleep interval #19405

Merged
merged 1 commit into etcd-io:release-3.5 from miancheng7:fix-compaction-induce-latency
Feb 13, 2025

Conversation


@miancheng7 miancheng7 commented Feb 12, 2025

The compaction behavior here was changed in commit
02635 and introduced a latency issue. To be more specific, the ticker.C acts as a fixed timer that triggers every 10ms, regardless of how long each batch of compaction takes. This means that if a previous compaction batch takes longer than 10ms, the next batch starts immediately, making compaction a blocking operation for etcd.

To fix the issue, this commit reverts the compaction to the previous behavior, which ensures a 10ms delay between each batch of compaction, allowing other read and write operations to proceed smoothly.
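
To illustrate the difference, here is a minimal, self-contained Go sketch (not the actual etcd code; the 10ms interval and the slow 25ms batch are made up for illustration). A ticker keeps firing on its fixed schedule while a batch runs, so when a batch overruns the interval a tick is already buffered and the select returns immediately, whereas time.After starts a fresh timer after each batch and always yields the full pause:

```go
// Minimal sketch (assumed shapes, not etcd source) contrasting the two
// pacing strategies between compaction batches.
package main

import (
	"fmt"
	"time"
)

// compactBatch stands in for one batch of compaction work that takes
// longer than the pacing interval.
func compactBatch() { time.Sleep(25 * time.Millisecond) }

func main() {
	const interval = 10 * time.Millisecond

	// Ticker-based pacing: the ticker keeps firing every 10ms while the
	// batch runs, so a tick is already buffered and the pause is ~0.
	ticker := time.NewTicker(interval)
	for i := 0; i < 3; i++ {
		compactBatch()
		start := time.Now()
		<-ticker.C
		fmt.Printf("ticker pause:     %v\n", time.Since(start))
	}
	ticker.Stop()

	// time.After-based pacing: a fresh timer starts after each batch, so
	// other operations always get a full 10ms window between batches.
	for i := 0; i < 3; i++ {
		compactBatch()
		start := time.Now()
		<-time.After(interval)
		fmt.Printf("time.After pause: %v\n", time.Since(start))
	}
}
```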

issue #19406

Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.

@k8s-ci-robot

Hi @miancheng7. Thanks for your PR.

I'm waiting for an etcd-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

The compaction behavior was changed in commit
[02635](etcd-io@0263597) and introduced a latency issue.
To be more specific, the `ticker.C` acts as a fixed timer that triggers every 10ms, regardless of how long each batch of compaction takes.
This means that if a previous compaction batch takes longer than 10ms, the next batch starts immediately, making compaction a blocking operation for etcd.

To fix the issue, this commit reverts the compaction to the previous behavior, which ensures a 10ms delay between each batch of compaction, allowing other read and write operations to proceed smoothly.

Signed-off-by: Miancheng Lin <iml@amazon.com>
@miancheng7 miancheng7 force-pushed the fix-compaction-induce-latency branch from 07e9fd6 to da930c7 on February 12, 2025 22:55
@miancheng7
Author

cc @chaochn47 for visibility

@@ -90,7 +88,7 @@ func (s *store) scheduleCompaction(compactMainRev, prevCompactRev int64) (KeyVal
 dbCompactionPauseMs.Observe(float64(time.Since(start) / time.Millisecond))

 select {
-case <-batchTicker.C:
+case <-time.After(s.cfg.CompactionSleepInterval):
@hakuna-matatah hakuna-matatah Feb 12, 2025

While I understand you are doing a revert of the change, I was wondering a couple of things:

  • Curious why it's defaulted to 10ms today in the first place?

  • As I understand it, compaction is taking longer than 10ms in clusters with large data sets, leading to noticeable latency increases on the client side (that's how this issue surfaced, IIUC).
    So I was wondering if we could adjust the sleep interval dynamically based on pending apply requests, instead of leaving a fixed 10ms of room (a rough sketch of this idea follows below). The more apply requests pending, the longer the sleep interval, thus reducing client latency (for reads/writes) during high-load scenarios; this could be beneficial in large clusters.
    WDYT?
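
For what it's worth, a purely hypothetical sketch of the adaptive-sleep idea above (nothing like this exists in etcd; pendingApplies, the per-request scaling, and the 100ms cap are invented for illustration):

```go
// Hypothetical sketch only: stretch the pause between compaction batches as
// the backlog of pending apply requests grows, capped at an upper bound.
package main

import (
	"fmt"
	"time"
)

func adaptiveSleepInterval(base time.Duration, pendingApplies int) time.Duration {
	const maxInterval = 100 * time.Millisecond // invented cap
	// More pending apply requests -> longer pause, leaving more room for
	// client reads/writes between compaction batches under load.
	interval := base + time.Duration(pendingApplies)*time.Millisecond
	if interval > maxInterval {
		interval = maxInterval
	}
	return interval
}

func main() {
	base := 10 * time.Millisecond
	for _, pending := range []int{0, 20, 500} {
		fmt.Printf("pending=%d -> sleep %v\n", pending, adaptiveSleepInterval(base, pending))
	}
}
```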

Contributor

For q1, the 10ms was introduced by #11034, and it seems like an empirical parameter: see #11021 (comment) (not the PR author's comment).

Contributor
@JalinWang JalinWang Feb 13, 2025

For question 2, regardless of the duration of a compaction batch, the current implementation does not guarantee a full CompactionSleepInterval sleep. Given that CompactionSleepInterval is already configurable, I think we could address this issue first and then introduce an adaptive sleep mode in a separate pull request 🤔

Member

For the backport, please stick to the minimal change that addresses the issue.

Member

+1 to keep the minimal change in this PR.

For any improvement, let's address & discuss it separately.


Yes, I am totally for merging this to fix the problem at hand.

let's address & discuss it separately.

I just raised this to understand what the community thinks about it, and to see if it makes sense to go in that direction. I can probably cut a separate issue to channel this discussion.

Member

Sorry, I haven't had time to dig into your proposal yet. Please feel free to raise a separate ticket to present your thoughts/ideas if you want. But you will need to convince us that it's indeed a real issue and demonstrate the real benefit of going in that direction. Overall I do not see a high priority to change it for now.

@serathius
Member

Please start by sending the PR to the main branch, get it merged, and then backport it to the release-3.6 and release-3.5 branches.

@serathius
Member

serathius commented Feb 13, 2025

It would also be nice to have a test/benchmark to confirm the fix and prevent regressions in the future.
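
A rough sketch of the kind of regression test being suggested here (not etcd's actual test code; the slow-batch simulation, package name, and timings are invented), asserting that the pause between batches is never shorter than the configured sleep interval even when a batch overruns it:

```go
// Sketch of a pacing regression test; names and timings are illustrative only.
package compaction

import (
	"testing"
	"time"
)

func TestCompactionPauseBetweenBatches(t *testing.T) {
	const interval = 10 * time.Millisecond           // stand-in for CompactionSleepInterval
	slowBatch := func() { time.Sleep(3 * interval) } // a batch that overruns the interval

	for i := 0; i < 5; i++ {
		slowBatch()
		start := time.Now()
		<-time.After(interval) // the fixed behavior: a fresh timer per batch
		if pause := time.Since(start); pause < interval {
			t.Fatalf("batch %d: pause %v is shorter than the %v sleep interval", i, pause, interval)
		}
	}
}
```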

Member
@ahrtr ahrtr left a comment

LGTM

Thank you!


@ahrtr
Member

ahrtr commented Feb 13, 2025

/ok-to-test

@ahrtr
Member

ahrtr commented Feb 13, 2025

It would also be nice to have a test/benchmark to confirm the fix and prevent regressions in the future.

+1.

Sorry, I did not notice that this PR targets release-3.5.

@ahrtr
Member

ahrtr commented Feb 13, 2025

Let me send a PR to main now.

@ahrtr
Member

ahrtr commented Feb 13, 2025

Link to the PR on main #19410

@ahrtr ahrtr changed the title fix a compaction induce latency issue [release-3.5] fix a compaction induce latency issue Feb 13, 2025
@ahrtr ahrtr changed the title [release-3.5] fix a compaction induce latency issue [release-3.5] Fix a performance regression due to uncertain compaction sleep interval Feb 13, 2025
@ahrtr
Member

ahrtr commented Feb 13, 2025

cc @fuweid @serathius

Member
@fuweid fuweid left a comment

Changes look good.

Just thinking about how to handle backports in the future. Since we have merged the change into the main branch, should we cherry-pick it or just merge this one? My suggestion is that we should address issues in the main branch first and then backport them if applicable.

However, this change came first, so we can merge this.

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahrtr, fuweid, miancheng7

The full list of commands accepted by this bot can be found here.

The pull request process is described here


Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ahrtr
Member

ahrtr commented Feb 13, 2025

Just thinking about how to handle backports in the future. Since we have merged the change into the main branch, should we cherry-pick it or just merge this one?

Usually a fix goes into main first, then is backported to stable releases. But it's not a big problem; it's OK as long as we ensure the fix is merged into higher versions first.

@ahrtr ahrtr merged commit 4fb86eb into etcd-io:release-3.5 Feb 13, 2025
26 checks passed
@miancheng7
Author

Usually a fix goes into main first, then is backported to stable releases. But it's not a big problem; it's OK as long as we ensure the fix is merged into higher versions first.

Thanks for the information. I will go with this approach next time.
