Hi!

I have been using DeepInterpolation on our large-scale calcium imaging movies, but even with a state-of-the-art GPU (an RTX 3090 Ti), the inference step takes around 1.5 days on a 512x512x50,000-frame movie. I suspect inference is not using the GPU by default. Which parameter should I change so that inference runs on the GPU and loads data with multiprocessing? In contrast, training takes only 1-2 hours and converges well on the first 2,000 frames, which seems to be enough for these movies.

Also, is there a demo for inference on 1p movies? I was able to find a training script, but not an inference demo. Thank you!
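For reference, here is the minimal check I run to confirm the inference environment can see the GPU at all (assuming DeepInterpolation's inference goes through TensorFlow/Keras; the DeepInterpolation-specific worker/multiprocessing parameter names are exactly what I'm unsure about):

```python
# Sanity check: can TensorFlow (which DeepInterpolation's Keras model runs on)
# actually see the 3090 Ti in this environment? An empty list means inference
# is silently falling back to the CPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Optional: let TensorFlow grow GPU memory as needed instead of
# reserving all of it up front.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

If this prints an empty list, the slowdown is probably a CUDA/cuDNN setup issue rather than a DeepInterpolation parameter.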