computational resource requirements #7
**@xubin04:** I am very interested in conducting experiments on this paper. I noticed that you specified using the NVIDIA RTX A6000 graphics card for training. Could you please tell me how many you used? Also, could you share the computational resource requirements needed for the related experiments?

Hi @xubin04, each model we trained fits on a single NVIDIA RTX A6000 (with batch_size: 16). The reason we used multiple GPUs was to train every model on all five folds, for all baselines and ablations. To speed things up, we ran these experiments in parallel on around 4-6 GPUs, depending on availability. However, this is not a requirement for model development: all experiments can be conducted in series on a single device. Hoping this helps!
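
To make the serial setup concrete, here is a minimal sketch of running the five folds back-to-back on one GPU. Note that `train_model` and its arguments are hypothetical placeholders, not this repository's actual entry point:

```python
# Minimal sketch: five-fold cross-validation run in series on one GPU.
# `train_model` is a hypothetical stand-in for the repo's training script.
import torch

def train_model(fold: int, batch_size: int = 16, device: str = "cuda:0"):
    """Placeholder for one fold's training run."""
    ...

for fold in range(5):
    train_model(fold=fold, batch_size=16, device="cuda:0")
    torch.cuda.empty_cache()  # release cached GPU memory between folds
```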

Hi @xubin04. Thanks for your interest. I would like to add some detail to what Patrick said. All models can be trained on a single machine, provided that machine has enough GPU VRAM and main RAM.

**GPU VRAM:**

**Main RAM:** We provide two flags for keeping the dataset in RAM to make it run faster (see the sketch below):

**Warnings:**
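
Since the exact flag names did not survive above, here is a generic illustration of the keep-the-dataset-in-RAM pattern that comment describes, written as a standard PyTorch `Dataset` wrapper; the class is illustrative, not this repository's implementation:

```python
# Illustrative sketch of caching a dataset in main RAM; not the repo's code.
from torch.utils.data import Dataset

class InMemoryDataset(Dataset):
    """Wraps a map-style dataset and materialises every sample in RAM once,
    trading memory for faster epochs (no repeated disk I/O)."""

    def __init__(self, base: Dataset):
        self.samples = [base[i] for i in range(len(base))]  # eager preload

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx: int):
        return self.samples[idx]
```

The trade-off is the usual one: preloading removes per-batch disk I/O at the cost of holding the entire dataset in main RAM, which is presumably what the warnings above concern.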