Migrate to sitrep mechanism for T5X and PAXML MGMN tests #401
Conversation
Force-pushed from 5cd10b8 to 0a2958b
LGTM. CC: @yhtang to confirm the sitrep impl.
Cool work! Please see some thoughts and suggestions below 😄
```diff
 outputs:
   TEST_STATUS:
     description: 'Summary pass/fail value indicating if results from tests are acceptable'
-    value: ${{ jobs.publish-test.outputs.STATUS }}
+    value: ${{ jobs.sitrep.outputs.STATUS }}
```
This is similar to our current practice, which uses a final postprocessing job to determine whether the overall MGMN test suite succeeds. It has the problem that error feedback is delayed, which makes it difficult to investigate which individual tests failed.

For the new sitrep reporting system, let's do the following:

- Individual jobs should fail immediately if the tests they ran do not succeed. However, they should still collect and generate test artifacts regardless of whether the tests pass or fail. The `continue-on-error` option can be helpful here (a sketch follows below).
- The sitrep/postprocessing step should then run regardless of the success/fail status of the individual test jobs.

This helps to localize the error status for easier debugging and tracking.
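A minimal sketch of this pattern, assuming hypothetical job names (`run-test`, `sitrep`) and a placeholder test command rather than the actual workflow in this repo:

```yaml
jobs:
  run-test:
    runs-on: ubuntu-latest
    steps:
      - name: Run MGMN test                # placeholder for the real test step
        id: test
        continue-on-error: true            # keep the job alive so artifacts can be collected
        run: ./run_test.sh                 # hypothetical test command
      - name: Collect test artifacts       # runs even if the test step failed
        uses: actions/upload-artifact@v3
        with:
          name: test-logs
          path: logs/
      - name: Fail the job if the test step failed
        if: steps.test.outcome == 'failure'
        run: exit 1

  sitrep:
    needs: run-test
    if: always()                           # postprocess regardless of test job status
    runs-on: ubuntu-latest
    steps:
      - run: echo "aggregate results and generate the sitrep here"
```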
If a single job fails, what should be the outcome of the entire workflow? Also, I think we still need this output for downstream jobs like `triage` or `publish container`, since those depend on this overall outcome.
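For context, a hedged sketch of how a caller workflow might gate such downstream jobs on the overall `TEST_STATUS` output; the reusable-workflow path and job names below are placeholders, not the actual ones in this repo:

```yaml
jobs:
  t5x-mgmn-test:
    uses: ./.github/workflows/_test_t5x_mgmn.yaml   # placeholder reusable-workflow path
    secrets: inherit

  publish-container:
    needs: t5x-mgmn-test
    if: needs.t5x-mgmn-test.outputs.TEST_STATUS == 'success'   # gate on the overall status
    runs-on: ubuntu-latest
    steps:
      - run: echo "retag and publish the container here"
```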
For the MGMN use case, the tests are submitted to Slurm via SSH. To mark an individual job as a failure, I can inspect the exit code and mark it accordingly. We would still need the overall status, though; if there's another recommended way to get this output, I'm happy to incorporate it.
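For illustration, one possible way to do that inspection, assuming placeholder variables (`SLURM_LOGIN_HOST`, `SLURM_JOB_ID`) and that `sacct` is available on the login node — a sketch, not the actual implementation:

```yaml
      - name: Propagate the Slurm exit code            # hypothetical step
        shell: bash
        run: |
          # Query the final exit code of the submitted Slurm job over SSH,
          # then fail this GitHub Actions job if the Slurm job failed.
          EXIT_CODE=$(ssh "$SLURM_LOGIN_HOST" \
            "sacct -j $SLURM_JOB_ID -X --noheader --format=ExitCode" | cut -d: -f1 | xargs)
          if [ "$EXIT_CODE" -ne 0 ]; then
            echo "Slurm job $SLURM_JOB_ID exited with code $EXIT_CODE"
            exit 1
          fi
```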
Makes sense. How about we make individual jobs fail while also letting the overall status be the Boolean AND of the individual job states?
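A sketch of what that Boolean AND could look like using the `needs` context (job names are hypothetical); the follow-up below explains why job states alone turn out not to be sufficient here:

```yaml
  sitrep:
    needs: [run-t5x-tests, run-paxml-tests]   # hypothetical upstream test jobs
    if: always()                              # run even if some test jobs failed
    runs-on: ubuntu-latest
    outputs:
      STATUS: ${{ steps.overall.outputs.STATUS }}
    steps:
      - name: AND together the upstream job states
        id: overall
        run: |
          # 'success' only if no upstream job failed or was cancelled
          if ${{ !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') }}; then
            echo "STATUS=success" >> "$GITHUB_OUTPUT"
          else
            echo "STATUS=failure" >> "$GITHUB_OUTPUT"
          fi
```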
Actually, a single job failure is already indicated (ref: https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7053172928/job/19199718748). The STATUS has to be derived not only from the job result but also from its metrics, so it is not possible to set the status based on the job states alone.
At face value, the specific error message from the job that you provided looks coincidental. I understand that it may be the result of the true underlying error, but the message itself does not clearly indicate that. Could we improve on it?
Also could you please add a natural language description of the logic regarding

> The STATUS has to be derived not only from the job result, but also from its metrics

so that it serves as a form of documentation?
> At face value, the specific error message from the job that you provided looks coincidental. I understand that it may be the result of the true underlying error, but the message itself does not clearly indicate that. Could we improve on it?

Not sure if there's a way to provide a more informative error based on the failure; I might have to look at it outside the scope of this PR.
> Also could you please add a natural language description of the logic regarding *The STATUS has to be derived not only from the job result, but also from its metrics* so that it serves as a form of documentation?

Done in 40aebf5
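For readers of this thread, a hypothetical illustration of what such documented logic might look like (the real version is in 40aebf5; the job name, metrics file, and schema below are made up):

```yaml
      - name: Derive STATUS from job result and metrics
        id: status
        shell: bash
        run: |
          # STATUS is 'success' only if the test job itself succeeded AND every
          # metric collected from the run is within its tolerance.
          JOB_RESULT="${{ needs.run-test.result }}"                              # placeholder job name
          METRICS_OK=$(jq -r '[.metrics[].within_tolerance] | all' metrics.json) # placeholder file/schema
          if [ "$JOB_RESULT" = "success" ] && [ "$METRICS_OK" = "true" ]; then
            echo "STATUS=success" >> "$GITHUB_OUTPUT"
          else
            echo "STATUS=failure" >> "$GITHUB_OUTPUT"
          fi
```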
```diff
@@ -518,12 +481,12 @@ jobs:
         ) | tee $GITHUB_STEP_SUMMARY

   outcome:
-    needs: publish-test
+    needs: sitrep
```
This job would be unnecessary if we localize the error status to individual test jobs. However, there may still be situations where it is needed, e.g. if some checks use the collective results of many or all test jobs.
I think this job is still needed to publish the overall badge since that's a collection of the results of all tests.
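As a rough sketch of that idea, the badge job could render the overall status into a shields.io endpoint JSON; the label, file name, and publishing step below are assumptions, not the repo's actual badge mechanism:

```yaml
  outcome:
    needs: sitrep
    if: always()
    runs-on: ubuntu-latest
    steps:
      - name: Generate overall badge JSON (shields.io endpoint format)
        run: |
          STATUS="${{ needs.sitrep.outputs.STATUS }}"
          if [ "$STATUS" = "success" ]; then COLOR=brightgreen; else COLOR=red; fi
          printf '{"schemaVersion": 1, "label": "MGMN tests", "message": "%s", "color": "%s"}\n' \
            "$STATUS" "$COLOR" > badge-overall.json
      - name: Upload badge            # or push to a gist/branch that the README badge points at
        uses: actions/upload-artifact@v3
        with:
          name: badge-overall
          path: badge-overall.json
```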
Got it.
Force-pushed from df917d4 to a4c594a
@yhtang @terrykong I've updated it. For example, if you run the following from the root of the repo:

```bash
summary=`cat README.md`
to_json summary  # fails for the older version, but succeeds for the newer version
```

Please take a look and let me know if it looks good.
@yhtang All feedback should be addressed now. Please take another look to see if any more changes are required.
I'm working on some light final touches on this PR. A quick question regarding the logic above: if I use a different mechanism to cancel the workflow when its upstream workflow did not succeed, do we still need that logic at all? It appears that we always want to run triage as long as 1) the upstream workflow succeeded, or 2) the workflow is manually run via web GUI dispatch.
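For concreteness, a sketch of that condition, assuming the triage workflow is triggered by `workflow_run` and `workflow_dispatch` events (the upstream workflow name is a placeholder):

```yaml
on:
  workflow_dispatch:
  workflow_run:
    workflows: ["Nightly T5X MGMN test"]     # placeholder upstream workflow name
    types: [completed]

jobs:
  triage:
    if: >-
      github.event_name == 'workflow_dispatch' ||
      github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - run: echo "run triage here"
```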
Listing the requirements for the perf data dump here for tracking purposes @yhtang
@yhtang Yeah, then it should be fine to remove this condition.
@hemildesai I made some changes to the two
@yhtang Yes, the changes look good. Thanks for the updates.
A recent job https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7127530028/job/19408352254 failed with the following error:

Although it is not caused by this PR, the fact that the job status is green seems to defeat our expectation that individual job states should reflect the SLURM job exit states. The job status was:

Our job did successfully retrieve this information, but did not convert it into the status of the job step:
@yhtang This is because the status is retrieved after the job step is complete. Not sure if there's a way to retroactively set the job status. Either way, I think the exit code of individual jobs is probably outside the scope of this PR. I'm happy to look into it and work on it in a separate PR (to prevent this PR from getting too verbose).
Addresses #399, #235 and #236
Example badge:
Completed run: https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7039302468
Changes
TODO
Closes #399
Closes #235
Closes #236