Zarr storage in asynchronous code #2846
Unanswered · jacopoabramo asked this question in Q&A · 1 comment, 3 replies
Welcome @jacopoabramo! Zarr should work well for your use case. You're correct that there is no narrative documentation yet for the asynchronous API. However, there is API documentation. Your code might look something like this:

```python
import zarr.api.asynchronous as zarr

# note that asyncio will have the most benefit with remote storage like S3
store = "s3://bucket/path/to/store"
group = await zarr.group(store)
array = await group.create_array("foo", shape=1_000_000, chunks=100_000)
await array.setitem(slice(200_000), 2)
```

Hope that helps! Feel free to share more about your use case.
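For the streaming case in the question (chunks arriving one at a time, e.g. from a ZMQ socket), one pattern is to push incoming buffers onto an `asyncio.Queue` and write each chunk-aligned slice with `setitem` as it arrives. The following is a minimal sketch that builds only on the calls shown above (`group`, `create_array`, `setitem`); the local store path `data/stream.zarr`, the chunk constants, and the `produce`/`consume` coroutines are illustrative assumptions, with `produce` standing in for a real ZMQ receive loop.

```python
import asyncio

import numpy as np
import zarr.api.asynchronous as zarr

CHUNK = 100_000   # elements per chunk along the single dimension (assumed for illustration)
N_CHUNKS = 10     # total number of chunks we expect to receive (assumed for illustration)


async def produce(queue: asyncio.Queue) -> None:
    # Stand-in for a ZMQ receive loop: put one chunk-sized buffer at a time.
    for i in range(N_CHUNKS):
        await queue.put((i, np.full(CHUNK, i, dtype="float64")))
    await queue.put(None)  # sentinel: no more data


async def consume(queue: asyncio.Queue, array) -> None:
    # Write each chunk-aligned slice to the array as it arrives.
    while (item := await queue.get()) is not None:
        i, data = item
        await array.setitem(slice(i * CHUNK, (i + 1) * CHUNK), data)


async def main() -> None:
    store = "data/stream.zarr"  # local path for illustration; an s3:// URL works the same way
    group = await zarr.group(store)
    array = await group.create_array(
        "stream", shape=CHUNK * N_CHUNKS, chunks=CHUNK, dtype="float64"
    )
    queue: asyncio.Queue = asyncio.Queue(maxsize=4)
    await asyncio.gather(produce(queue), consume(queue, array))


asyncio.run(main())
```

Because each write covers exactly one chunk, `setitem` should not need to read and merge partially written chunks while data is still streaming in.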
The original question, from @jacopoabramo:

I'm working on a project where a set of data acquisition devices are streaming data over ZMQ using an in-process communication protocol. I haven't used Zarr so far, but I would be interested in using it as a primary storage option. From reading the issues a bit, it seems that asyncio is natively supported as of version 3.0; I checked the documentation but couldn't find a relevant example of how this is implemented, or how to implement a custom writer that supports asyncio. Is there any available example of streaming chunked data to file using asyncio?