Hey ANI devs,
I'm trying to run a somewhat large system and am running into memory issues. I have access to multiple A100 (80 GB) GPUs on the same node; is it possible to run the model across multiple GPUs, effectively increasing the available memory? I've read some comments about updates that would reduce the memory requirements significantly, but some of them date back to summer last year. Is there any update on that?
I'm using PBC, and my last attempt was a mixed system with amber-tip3p as the solvation force field. This system currently tries to allocate ~420 GB on the GPU, but I can probably reduce the system size enough that it fits onto 4×80 GB.
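For reference, here is a minimal sketch of the kind of call that hits the allocation for me, assuming the standard TorchANI PBC interface; the atom count, species, and box size below are illustrative placeholders, not my actual mixed system:

```python
import torch
import torchani

# Minimal sketch, assuming the standard TorchANI PBC interface.
# Atom count, species, and box size are placeholders, not the
# actual amber-tip3p system.
device = torch.device('cuda')
model = torchani.models.ANI2x(periodic_table_index=True).to(device)

n_atoms = 100_000  # placeholder; large enough to exhaust one 80 GB A100
species = torch.ones((1, n_atoms), dtype=torch.long, device=device)  # all hydrogens (placeholder)
coordinates = 100.0 * torch.rand((1, n_atoms, 3), device=device)
coordinates.requires_grad_(True)
cell = 100.0 * torch.eye(3, device=device)  # 100 Å cubic box (placeholder)
pbc = torch.tensor([True, True, True], device=device)

energy = model((species, coordinates), cell=cell, pbc=pbc).energies
forces = -torch.autograd.grad(energy.sum(), coordinates)[0]
```

As far as I can tell, batch-level parallelism such as torch.nn.DataParallel wouldn't help here, since this is a single large structure (batch size 1) rather than a batch of small ones, hence the question about spreading one model evaluation across devices.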
/Rune