Request for read access to embargoed assets on DANDI Open Data bucket #7
Hi @satra, following up here. @yarikoptic @waxlamp Do either of you happen to have admin permissions for this AWS account? Thanks.
I believe I do, after a fashion. Let's meet to come up with the right solution here.
Sorry, I think I do not have creds to manage the open data bucket. I do have IAM creds, though, which allow for data access. I guess here we need a dedicated "read-only" backup IAM, provided separately. However, let's also make sure s3invsync doesn't error out on access denied; it should simply report that there were files that were not synced because of permissions and log them somewhere.
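For concreteness, here is a minimal sketch of what such a dedicated read-only policy could look like. The policy and file names are placeholders, and the exact action list would need to be confirmed against what s3invsync actually requires; note `s3:GetObjectVersion` and `s3:ListBucketVersions`, since the sync reads specific object versions.

```sh
# Hypothetical read-only backup policy; names are placeholders, actions are standard S3 ones.
cat > dandi-backup-readonly.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::dandiarchive",
        "arn:aws:s3:::dandiarchive/*"
      ]
    }
  ]
}
EOF
aws iam create-policy --policy-name dandi-backup-readonly \
    --policy-document file://dandi-backup-readonly.json
```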
Thanks. I have started the download to […]
In our use case, I do like that it errors out by default if we don't provide credentials since we need all open and embargoed assets. I will file an issue to add an option to not error out. |
Hi @satra, it looks like the credentials don't allow for accessing the S3 object version. See the error message below:

[error message elided]
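A quick way to confirm that it is specifically the versioned read being denied is to request a single object version with the AWS CLI; the key and version ID below are placeholders:

```sh
# Succeeds only if the credentials grant s3:GetObjectVersion on the bucket.
aws s3api head-object --bucket dandiarchive \
    --key <some-key> --version-id <some-version-id>
```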
Looks like the […]
We can add that to the policy, but check with @jwodder, as I think the unit tests in s3invsync use this same policy. We may want to know why it works there and not for this.
@satra The […]
cc @aaronkanzer
Filed dandi/s3invsync#158
Sync of the DANDI Open Data bucket to the MIT Engaging Cluster (…)
That's underwhelming. Any idea if CPU or network or … is the bottleneck?
Agreed. I am looking into it. I will start it on a node with more cores.

```sh
cd /home/kabi/s3invsync
# --ok-errors access-denied: log and skip objects these credentials cannot read,
# rather than aborting the whole sync (the option requested in dandi/s3invsync#158)
cargo run --release -- --ok-errors access-denied \
    s3://dandiarchive/dandiarchive/dandiarchive/ \
    /orcd/data/dandi/001/s3dandiarchive/
```
@yarikoptic @aaronkanzer Currently using a node with 32 cores and 256 GB of RAM. Download speed is now ~1 GB/minute.
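As a crude check on whether the sync is network-bound, one could poll the size of the destination directory (path taken from the command above) and compare the growth rate against the node's link speed, assuming, say, a 10 GbE link:

```sh
# Re-measure the destination size once a minute; the delta approximates sync throughput.
# ~1 GB/minute is roughly 17 MB/s, far below what a 10 GbE link (~1.25 GB/s) can carry.
watch -n 60 du -sh /orcd/data/dandi/001/s3dandiarchive/
```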
Hi @satra, following up here. Could you please add […]
Try now.
Unfortunately that didn't seem to work. I will test on the LINC private bucket to determine which policies we need, and then get back to you.
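The rough plan is to spot-check each S3 action against the private bucket with the AWS CLI, since an AccessDenied response pinpoints the missing policy statement; the bucket, key, and version ID below are placeholders:

```sh
aws s3api list-object-versions --bucket <linc-private-bucket> --max-items 1   # s3:ListBucketVersions
aws s3api get-object --bucket <linc-private-bucket> --key <key> /tmp/obj.bin  # s3:GetObject
aws s3api get-object --bucket <linc-private-bucket> --key <key> \
    --version-id <version-id> /tmp/obj.bin                                    # s3:GetObjectVersion
```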
Hi @satra, as we are working to download all the data from S3 to MIT Engaging, I need read access to all embargoed data. I am currently encountering the following error with `s3invsync`: [error message elided]. Would you be able to provide me with read permissions to all data on the `s3://dandiarchive/` bucket? Thanks.