Migrate to sitrep mechanism for T5X and PAXML MGMN tests #401

Merged: 54 commits into main from hemil/fix-badge-mgmn-tests on Dec 8, 2023

Conversation

@hemildesai (Contributor) commented Nov 29, 2023:

Addresses #399, #235 and #236

Example badge:

Completed run: https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7039302468

Changes

  • Create a reusable job for publishing sitrep for MGMN workflows
  • Integrate it in T5X MGMN
  • Integrate it in PAX MGMN

TODO

  • Change badge endpoint in README for T5X
  • Change badge endpoint in README for PAX

Closes #399
Closes #235
Closes #236

@hemildesai force-pushed the hemil/fix-badge-mgmn-tests branch from 5cd10b8 to 0a2958b on November 29, 2023 21:49
@hemildesai marked this pull request as ready for review on November 29, 2023 23:18
@terrykong (Contributor) commented:

LGTM.

CC: @yhtang to confirm the sitrep impl.

@yhtang (Collaborator) left a comment:

Cool work! Please see some thoughts and suggestions below 😄

Review thread on .github/workflows/_sitrep_mgmn.yaml (outdated comments resolved):
```diff
 outputs:
   TEST_STATUS:
     description: 'Summary pass/fail value indicating if results from tests are acceptable'
-    value: ${{ jobs.publish-test.outputs.STATUS }}
+    value: ${{ jobs.sitrep.outputs.STATUS }}
```
@yhtang (Collaborator) commented Nov 30, 2023:

This is similar to our current practice, which uses a final postprocessing job to determine whether the overall MGMN test suite succeeds.

It has the problem that error feedback is delayed, which makes it difficult to investigate which individual tests failed.

For the new sitrep reporting system, let's do the following:

  • Individual jobs should fail immediately if the tests they ran do not succeed. However, they should still collect and generate test artifacts regardless of whether the tests pass or fail. The continue-on-error option can be helpful here.
  • The sitrep/postprocessing step should then run regardless of the success/fail status of the individual test jobs.

This helps localize the error status for easier debugging and tracking; a sketch of the pattern follows.
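A minimal sketch of this pattern, assuming hypothetical job names and a placeholder test script (not the actual workflow in this PR):

```yaml
jobs:
  run-test:
    runs-on: ubuntu-latest
    steps:
      - name: Run MGMN test
        run: ./test-t5x.sh          # hypothetical script; a non-zero exit turns this job red immediately
      - name: Collect test artifacts
        if: always()                # upload logs/metrics even when the test step failed
        uses: actions/upload-artifact@v3
        with:
          name: test-logs
          path: logs/

  sitrep:
    needs: run-test
    if: always()                    # postprocess regardless of the test job's outcome
    runs-on: ubuntu-latest
    steps:
      - run: echo "generate sitrep from the collected artifacts"
```

An alternative is `continue-on-error: true` on the test step, followed by an explicit step that re-raises the failure once the artifacts have been collected.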

@hemildesai (Contributor, Author) replied:

If a single job fails, what should be the outcome of the entire workflow? Also, I think we still need this output for downstream jobs like triage or publish container since those depend on this overall outcome.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

For the MGMN use case, the tests are submitted to Slurm via SSH. To mark an individual job as failed, I can inspect the exit code and mark it accordingly. We would still need the overall status, though; if there's another recommended way to get this output, I'm happy to incorporate it.

@yhtang (Collaborator) replied:

Makes sense. How about we make individual jobs fail while also letting the overall status be the Boolean AND of the individual job states?
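A rough sketch of such an aggregation, with hypothetical job names (as discussed below, the PR ultimately derives STATUS from metrics as well):

```yaml
outcome:
  needs: [t5x-mgmn-test, pax-mgmn-test]   # hypothetical test job names
  if: always()                            # run even when some test jobs failed
  runs-on: ubuntu-latest
  outputs:
    # Boolean AND of the individual job states: true only if no job failed or was cancelled
    STATUS: ${{ !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') }}
  steps:
    - run: echo "overall status = ${{ !contains(needs.*.result, 'failure') }}"
```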

@hemildesai (Contributor, Author) replied:

Actually, a single job failure is already indicated (ref: https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7053172928/job/19199718748). The STATUS has to be derived not only from the job result but also from its metrics, so it is not possible to set the status based on the job states alone.
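To illustrate the derivation (the metric name and threshold below are invented for the example, not the PR's actual checks):

```bash
# Hypothetical sketch: STATUS is success only if the Slurm job exited
# cleanly AND the extracted metrics pass their checks.
exit_code=$(sacct -j "$SLURM_JOB_ID" --format=ExitCode --noheader | head -n 1 | cut -d: -f1)
metrics_ok=$(jq '.loss < 10.0' metrics.json)   # placeholder metric check

if [ "$exit_code" -eq 0 ] && [ "$metrics_ok" = "true" ]; then
  echo "STATUS=success" >> "$GITHUB_OUTPUT"
else
  echo "STATUS=failure" >> "$GITHUB_OUTPUT"
fi
```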

@yhtang (Collaborator) replied:

At face value, the specific error message from the job that you provided looks coincidental. I understand that it may be the result of the true underlying error, but the message itself does not clearly indicate that. Could we improve on it?

@yhtang (Collaborator) commented Dec 4, 2023:

Also, could you please add a natural language description of the logic regarding

> The STATUS has to be derived not only from the job result, but also from its metrics

so that it serves as a form of documentation?

@hemildesai (Contributor, Author) replied:

> At face value, the specific error message from the job that you provided looks coincidental. I understand that it may be the result of the true underlying error, but the message itself does not clearly indicate that. Could we improve on it?

Not sure if there's a way to provide a more informative error based on the failure; might have to look at it outside the scope of this PR.

@hemildesai (Contributor, Author) commented Dec 4, 2023:

> Also, could you please add a natural language description of the logic regarding
>
> > The STATUS has to be derived not only from the job result, but also from its metrics
>
> so that it serves as a form of documentation?

Done in 40aebf5

```diff
@@ -518,12 +481,12 @@ jobs:
       ) | tee $GITHUB_STEP_SUMMARY

   outcome:
-    needs: publish-test
+    needs: sitrep
```
@yhtang (Collaborator) commented Nov 30, 2023:

This job would be unnecessary if we localize the error status to individual test jobs. However, there may still be situations where it is needed, e.g., if some checks use the collective results of many/all test jobs.

@hemildesai (Contributor, Author) replied:

I think this job is still needed to publish the overall badge since that's a collection of the results of all tests.

@yhtang (Collaborator) replied:

Got it.

Review thread on .github/workflows/nightly-t5x-test-mgmn.yaml (outdated comment resolved).
@hemildesai force-pushed the hemil/fix-badge-mgmn-tests branch 3 times, most recently from df917d4 to a4c594a, on November 30, 2023 22:33
@hemildesai (Contributor, Author) commented:

@yhtang @terrykong I've updated to_json.sh in a4c594a to make it work better with multi-line variables.

For example, from the root of the repo:

```bash
summary=$(cat README.md)
to_json summary   # fails with the older version, but succeeds with the newer version
```

Please take a look and let me know if it looks good.
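For context, the general idea of a jq-based helper that survives multi-line values looks roughly like this; a sketch under that assumption, not the contents of the actual to_json.sh:

```bash
# Hypothetical sketch: jq --arg escapes newlines, quotes, and other special
# characters, so a multi-line variable round-trips to valid JSON.
to_json() {
  local var_name=$1
  jq -n --arg k "$var_name" --arg v "${!var_name}" '{($k): $v}'
}

summary=$(cat README.md)
to_json summary   # emits {"summary": "...full multi-line file contents..."}
```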

@hemildesai (Contributor, Author) commented:

@yhtang All feedback should be addressed now. Please take another look to see if any more changes are required.

@hemildesai requested a review from ashors1 on December 5, 2023 19:14
@yhtang (Collaborator) commented Dec 6, 2023:

I'm working on some light final touches on this PR.

A quick question regarding

echo "SHOULD_TRIAGE=${{ ((github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success') || github.event_name == 'workflow_dispatch') }} >> $GITHUB_OUTPUT"

If I use a different mechanism to cancel the workflow if its upstream workflow did not succeed, do we still need the logic above at all? It appears that we always want to run triage as long as 1) the upstream workflow succeeded or 2) the workflow is manually run via web GUI dispatch.

@hemildesai
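For reference, one shape that "different mechanism" could take is gating the triage job with a job-level if condition instead of computing a SHOULD_TRIAGE output (a hypothetical sketch, with a made-up job name):

```yaml
triage:
  # run only when the upstream workflow succeeded or on manual dispatch
  if: (github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success') || github.event_name == 'workflow_dispatch'
  runs-on: ubuntu-latest
  steps:
    - run: echo "running triage"
```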

@hmonishN (Contributor) commented Dec 6, 2023:

Listing the requirements for the perf data dump here for tracking purposes, @yhtang:

  1. The job name should be dumped in the artifacts (as part of the sitrep.json file, explaining the acronyms, i.e. TP, DP, FSDP, etc. for Pax and G, N for T5X; it would be helpful to add pax/t5x at the beginning of the job name).
  2. The _metric.json files should be dumped in the artifacts.

@hemildesai (Contributor, Author) commented Dec 6, 2023:

> I'm working on some light final touches on this PR.
>
> A quick question regarding
>
> ```bash
> echo "SHOULD_TRIAGE=${{ ((github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success') || github.event_name == 'workflow_dispatch') }} >> $GITHUB_OUTPUT"
> ```
>
> If I use a different mechanism to cancel the workflow if its upstream workflow did not succeed, do we still need the logic above at all? It appears that we always want to run triage as long as 1) the upstream workflow succeeded or 2) the workflow is manually run via web GUI dispatch.
>
> @hemildesai

@yhtang Yeah, then it should be fine to remove this condition.

@yhtang (Collaborator) commented Dec 7, 2023:

@hemildesai I made some changes to the two nightly-*-mgmn.yaml files. Could you please verify whether the changes are good?

yhtang previously approved these changes on Dec 7, 2023
@hemildesai (Contributor, Author) replied:

@yhtang Yes the changes look good. Thanks for the updates.

@yhtang (Collaborator) commented Dec 7, 2023:

The latest job https://github.com/NVIDIA/JAX-Toolbox/actions/runs/7127530028/job/19408352254 failed with the following error:

```
2023-12-07 12:39:40.366819: F external/xla/xla/stream_executor/cuda/cuda_dnn.cc:7393] Failed to generate cuDNN execution plan for opGraph Mul_Matmul_Add_Reduction_Sub_Exp_Reduction_Div_Matmul_. Status of final plan: CUDNN_STATUS_NOT_SUPPORTED
```

Although it is not caused by this PR, the fact that the job status is green seems to defeat our expectation that individual job states should reflect the SLURM job exit states.

The job status was:

```
$ sacct -j 23766
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
23766        T5X-71275+    compute                   256     FAILED      6:0
23766.batch       batch                              128     FAILED      6:0
23766.extern     extern                              256  COMPLETED      0:0
23766.0      test-t5x.+                              256     FAILED      6:0
```

Our job did successfully retrieve this information, but did not convert it into the status of the job step:

```
SLRUM Job 23766 finished.
SLURM Job state is FAILED
SLURM Job exit code is 6
```

@hemildesai (Contributor, Author) replied:

@yhtang This is because the status is retrieved after the job step is complete. Not sure if there's a way to retroactively set the job status. Either way, I think the exit code of individual jobs is probably outside the scope of this PR. I'm happy to look into it and work on it in a separate PR (to keep this PR from getting too large).
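A sketch of what that follow-up could look like, propagating the Slurm state into the step's own exit status (hypothetical, not this PR's code):

```bash
# Hypothetical sketch: after polling Slurm, mirror the job's exit state so
# the GitHub Actions step (and thus the job) goes red on failure.
state=$(sacct -j "$SLURM_JOB_ID" --format=State --noheader | head -n 1 | xargs)
exit_code=$(sacct -j "$SLURM_JOB_ID" --format=ExitCode --noheader | head -n 1 | cut -d: -f1)

echo "SLURM Job $SLURM_JOB_ID finished."
echo "SLURM Job state is $state"
echo "SLURM Job exit code is $exit_code"

if [ "$state" != "COMPLETED" ]; then
  # fall back to 1 in case Slurm reports a failed state with exit code 0
  [ "$exit_code" -eq 0 ] && exit 1
  exit "$exit_code"
fi
```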

@yhtang merged commit fe2b422 into main on Dec 8, 2023 (96 of 99 checks passed).
@yhtang deleted the hemil/fix-badge-mgmn-tests branch on December 8, 2023 07:44.