
cudaErrorMemoryAllocation and troubleshooting memory requirements #830

Closed
silastittes opened this issue Nov 7, 2022 · 2 comments

@silastittes

Hi there!
I've attached the log for my most recent attempt at running cactus on some cephalopod genomes.
The uncompressed genome sizes are 2.6G, 1.7G, 4.9G, 3.9G, 2.3G, 4.9G, 4.6G, 2.6G, 711M, and 4.4G, with
7376, 500, 2139, 1735, 48285, 2762, 583, 41584, 13516, 500, 77681, 500, and 5642 contigs, respectively.

While the memory error itself is straightforward, I'm hoping for some advice on roughly how much memory I should plan on needing, whether this is feasible, etc. I'm currently giving the job 24G and 2 GPUs. As you can see from the log, it runs for a good while before failing.

Thanks for any and all suggestions!

cactus_error.txt

@glennhickey
Collaborator

We've recently discovered a bug in SegAlign that prevents it from masking a cuttlefish genome: gsneha26/SegAlign#58. In my tests, no amount of system or GPU memory was enough to get it through. It used about 85G of system RAM (I forget how much GPU memory) before crashing, so your 24G is indeed too low. But even if you added more memory, I think the odds are you'd run into the same problem as in the issue above. The only workaround is to run cactus-preprocess with the CPU. You can then continue with the GPU by running cactus on the preprocessed output with the --skipPreprocessor option added.
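The workaround above might look roughly like this; a sketch only, where the job-store paths, seqFile names, and output name are hypothetical placeholders, and only cactus-preprocess, cactus, --gpu, and --skipPreprocessor come from the suggestion itself:

```shell
# Step 1: run the preprocessing/masking stage on CPU only (no --gpu flag),
# writing a new seqFile that points at the masked genomes.
cactus-preprocess ./jobstore-prep cephalopods.txt cephalopods-masked.txt

# Step 2: run the main alignment on the masked output with GPU enabled,
# skipping the preprocessor since masking was already done above.
cactus ./jobstore-align cephalopods-masked.txt cephalopods.hal \
    --gpu --skipPreprocessor
```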

@silastittes
Author

Thanks so much for the speedy and helpful reply! I will check out the workaround you suggest.
