Hi,

First, the project is excellent; it has been very helpful even in my setup, which uses the POSIX fallback.
Second, I want to highlight that getting a Zarr chunk to decompress directly onto the desired device_id is a bit baroque, since the device_ordinal used during compression is applied by default unless it is overridden explicitly:
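Concretely, the pattern I ended up with looks roughly like this (a minimal sketch; I am assuming SnappyManager accepts a device_id keyword, which may be named differently across kvikio versions):

```python
import cupy as cp
import kvikio.nvcomp

device_id = 1  # target GPU, not the one the data was compressed on

with cp.cuda.Device(device_id):
    data = cp.arange(1 << 20, dtype=cp.uint8)  # toy payload
    # Passing device_id explicitly, since the manager otherwise falls
    # back to the device_ordinal recorded at compression time.
    manager = kvikio.nvcomp.SnappyManager(device_id=device_id)
    compressed = manager.compress(data)
    decompressed = manager.decompress(compressed)
```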
By the way, I can confirm that with the above the CuPy array ends up on the correct device. But I wanted to double-check that the above really works as expected all the way through, i.e., do the compressed-data transfer and the decompression itself actually happen on the specified device?
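For what it's worth, this is how I checked the placement of the result (pointerGetAttributes reports the device that owns the allocation):

```python
# Both checks should report the target device for the result array.
assert decompressed.device.id == device_id
attrs = cp.cuda.runtime.pointerGetAttributes(decompressed.data.ptr)
print(attrs.device)  # expected: device_id
```

That only confirms where the output buffer lives, though, not where the transfer and the decompression kernels actually ran, hence the question.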
Third, the above fails when multiple Python threads are involved. I am not familiar with CUDA, but it appears that streams are thread-specific? When I try to use the above in a multi-threaded setup (I have a threaded data loader), I get:
File "libnvcomp.pyx", line 246, in kvikio._lib.libnvcomp._SnappyManager.__cinit__
RuntimeError: nvCOMP error: Stream 0 is not associated with device 1
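For context, each loader thread does roughly the following (sketch, with the same device_id assumption as above), and the manager constructor is where the error is raised:

```python
import threading

import cupy as cp
import kvikio.nvcomp

device_id = 1

def load_chunk(compressed_chunk, out):
    # Constructing the manager here raises the nvCOMP error above when
    # device_id is not this thread's current device.
    manager = kvikio.nvcomp.SnappyManager(device_id=device_id)
    out.append(manager.decompress(compressed_chunk))

results = []
t = threading.Thread(target=load_chunk, args=(compressed, results))
t.start()
t.join()
```

Wrapping the thread body in `with cp.cuda.Device(device_id):` seems to avoid the error, presumably because the current device (and hence the device a new stream is associated with) is a per-thread setting in CUDA, but I would appreciate confirmation that this is the intended usage.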
Thank you.