ROCm workaround: Use ParallelFor instead of Reduce #2749

Merged: 5 commits into AMReX-Astro:main on Feb 10, 2024

Conversation

@WeiqunZhang (Member)

Assuming failures are infrequent, we can use ParallelFor with atomicAdd to count the number of failures. With this change, the ROCm memory issue seems to be gone.
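
For reference, here is a minimal sketch of the ParallelFor-plus-atomicAdd counting pattern described above. It is not the actual diff in this PR: `count_failures` and `do_cell` are illustrative names, and the per-cell check is a trivial stand-in for the real work.

```cpp
#include <AMReX_MultiFab.H>
#include <AMReX_Gpu.H>

// Stand-in for the real per-cell work (e.g. an ODE integration); here a cell
// "fails" if its first component is negative.
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
bool do_cell (int i, int j, int k, amrex::Array4<amrex::Real const> const& a)
{
    return a(i,j,k,0) >= 0.0;
}

// Count failures with ParallelFor + atomicAdd instead of a Reduce operation.
int count_failures (const amrex::MultiFab& mf)
{
    amrex::Gpu::DeviceScalar<int> nfail(0);   // device-side failure counter
    int* p_nfail = nfail.dataPtr();

    for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.validbox();
        auto const& a = mf.const_array(mfi);
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            if (!do_cell(i, j, k, a)) {
                amrex::Gpu::Atomic::Add(p_nfail, 1);  // failures are rare, so the atomic stays cheap
            }
        });
    }
    amrex::Gpu::streamSynchronize();   // make sure all kernels have finished
    return nfail.dataValue();          // copy the counter back to the host
}
```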
@WeiqunZhang WeiqunZhang marked this pull request as ready for review February 9, 2024 23:15
@zingale (Member) commented Feb 9, 2024

This fixes #2569.

@BenWibking commented Feb 9, 2024

@WeiqunZhang (Member, Author)

No, not for the functions in your links. Those kernels are small. Only if a kernel is very big and you see an issue should you try atomics. Note that atomics are very slow if they need to be done many times.

@BenWibking commented Feb 9, 2024

> No, not for the functions in your links. Those kernels are small. Only if a kernel is very big and you see an issue should you try atomics. Note that atomics are very slow if they need to be done many times.

Is there a way to identify kernels that might be affected by the same issue?

The kernel that calls our reaction network is here: https://github.com/quokka-astro/quokka/blob/2d863d655ec1e1f684f55ea745021b9c97a03c60/src/Chemistry.hpp#L43

The reaction network itself (6k LOC, symbolically generated, hundreds of local variables) is here:
https://github.com/AMReX-Astro/Microphysics/blob/main/networks/primordial_chem/actual_rhs.H#L20

We see errors like this when running simulations with this network:

Memory access fault by GPU node-8 (Agent handle: 0x2975b60) on address 0x800033773000. Reason: Unknown.

@WeiqunZhang (Member, Author)

You can try to insert Gpu::streamSynchronize like in this PR. Without it, I saw

:0:rocdevice.cpp            :2724: 441955224434 us: [pid:105317 tid:0x7fffdd5ff700] Callback: Queue 0x7ffec2f00000 Aborting with error : HSA_STATUS_ERROR_OUT_OF_RESOURCES: The runtime failed to allocate the necessary resources. This error may also occur when the core runtime library needs to spawn threads or create internal OS-specific events. Code: 0x1008 Available Free mem : 13506 MB

It might help in your case.
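
For illustration, a sketch of where such a Gpu::streamSynchronize could go, assuming a loop of roughly this shape. `burn_level` and `burn_cell` are placeholders, not code from this PR or from Quokka/Microphysics.

```cpp
#include <AMReX_MultiFab.H>
#include <AMReX_Gpu.H>

// Trivial stub standing in for the large generated reaction-network kernel.
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
void burn_cell (int i, int j, int k, amrex::Array4<amrex::Real> const& state)
{
    state(i,j,k,0) += 1.0;
}

void burn_level (amrex::MultiFab& mf)
{
    for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.validbox();
        auto const& state = mf.array(mfi);
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            burn_cell(i, j, k, state);
        });
        // Explicit sync right after the big kernel, before the next MFIter
        // iteration launches another one; this is the workaround suggested above.
        amrex::Gpu::streamSynchronize();
    }
}
```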

@BenWibking commented Feb 10, 2024

> You can try to insert Gpu::streamSynchronize like in this PR. Without it, I saw
>
> :0:rocdevice.cpp            :2724: 441955224434 us: [pid:105317 tid:0x7fffdd5ff700] Callback: Queue 0x7ffec2f00000 Aborting with error : HSA_STATUS_ERROR_OUT_OF_RESOURCES: The runtime failed to allocate the necessary resources. This error may also occur when the core runtime library needs to spawn threads or create internal OS-specific events. Code: 0x1008 Available Free mem : 13506 MB
>
> It might help in your case.

We can try that. You are referring to the call immediately after the burn kernel on line 417, right?

@WeiqunZhang (Member, Author)

Yes.

@WeiqunZhang (Member, Author)

It might also help to turn on tiling with MFIter mfi(mf, IntVect(32,32,2048)) for the failing MFIter loop. If you do that, you need to use tilebox(), not validbox().
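
For illustration, a minimal sketch of such a tiled loop. `loop_tiled` and the kernel body are placeholders, and a 3D build is assumed for the three-component IntVect.

```cpp
#include <AMReX_MultiFab.H>
#include <AMReX_Gpu.H>

// Tiled MFIter loop: the explicit tile size caps the box handed to each
// kernel launch, which limits the number of blocks per launch.
void loop_tiled (amrex::MultiFab& mf)
{
    for (amrex::MFIter mfi(mf, amrex::IntVect(32,32,2048)); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.tilebox();   // tilebox(), not validbox(), once tiling is on
        auto const& a = mf.array(mfi);
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            a(i,j,k,0) += 1.0;                  // placeholder for the real kernel body
        });
    }
}
```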

@BenWibking

> It might also help to turn on tiling with MFIter mfi(mf, IntVect(32,32,2048)) for the failing MFIter loop. If you do that, you need to use tilebox(), not validbox().

We could try that. Does this change the kernel launch parameters, and do you know why it helps?

@WeiqunZhang (Member, Author)

It changes the number of blocks in the kernel launch.

@zingale merged commit 1f3bfff into AMReX-Astro:main on Feb 10, 2024
14 of 15 checks passed