Fix flaky upgrade test #1368
Conversation
Signed-off-by: Luis Rascao <luis.rascao@gmail.com>
Codecov Report

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #1368      +/-   ##
==========================================
- Coverage   22.76%   22.55%   -0.22%
==========================================
  Files          40       40
  Lines        4713     4758      +45
==========================================
  Hits         1073     1073
- Misses       3562     3607      +45
  Partials       78       78

☔ View full report in Codecov by Sentry.
@lrascao
Thanks for the contribution.
Please update the README documentation to note that the upgrade client now retries 5 times on failure.
done, added a note to the README. lmk if something else is needed
Force-pushed from f3fefdb to f946c34
@mukundansundar looks like the
Agreed ...
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Description
There's a flake in the upgrade test from 1.11.0 to 1.12.0. This is a version where we upgrade the configuration CRD by adding two new fields: controlPlaneTrustDomain and sentryAddress. The flake stems from a known race condition that arises when a CRD and a matching CR are both updated close together (described here). This patch introduces a basic retry mechanism; it needs a fresh client on each try due to the OpenAPI schema caching that happens in the kubectl client.
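The retry approach described above can be sketched as follows. This is a minimal illustration with hypothetical names (fakeClient, upgradeWithRetry, and the simulated failure count are all stand-ins, not the actual patch): the key point is that every attempt constructs a brand-new client, so a stale cached OpenAPI schema from a previous attempt is never reused.

```go
package main

import (
	"errors"
	"fmt"
)

const maxRetries = 5

// fakeClient stands in for a freshly constructed kubectl/Kubernetes client.
// Hypothetical: the real code would build a new client (and with it a new
// OpenAPI schema cache) on every attempt.
type fakeClient struct{}

func newFakeClient() *fakeClient { return &fakeClient{} }

// apply simulates applying the upgraded CR. It fails while *failures > 0,
// mimicking the CRD/CR race window, then succeeds once the simulated
// schema update has propagated.
func (c *fakeClient) apply(failures *int) error {
	if *failures > 0 {
		*failures--
		return errors.New("apply failed: CRD schema not yet propagated") // simulated race error
	}
	return nil
}

// upgradeWithRetry retries up to maxRetries times, constructing a fresh
// client on each attempt so no cached schema is carried over.
// It returns the attempt number that succeeded.
func upgradeWithRetry(failures int) (int, error) {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		c := newFakeClient() // fresh client: discards any cached OpenAPI schema
		if lastErr = c.apply(&failures); lastErr == nil {
			return attempt, nil
		}
	}
	return 0, fmt.Errorf("upgrade failed after %d attempts: %w", maxRetries, lastErr)
}

func main() {
	// Simulate the race resolving after 3 failed attempts.
	if attempt, err := upgradeWithRetry(3); err == nil {
		fmt.Printf("succeeded on attempt %d\n", attempt)
	}
}
```

With 3 simulated failures the loop succeeds on the fourth attempt; if the error persisted through all 5 attempts, the last error would be surfaced to the caller instead of being swallowed.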
Issue reference
No issue referenced so far, but a few occurrences of the flake can be seen in the GHA history.
The error is always the same:
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list: