
8422 list multipart uploads #8531

Merged · 50 commits · Jan 31, 2025

Commits
All 50 commits are by ItamarYuran.

e7fb7b8 · Revert "docs: clarify auditing availble on enterprise (#8289)" · Oct 23, 2024
cd5b80d · Merge branch 'master' of github.com:treeverse/lakeFS · Jan 2, 2025
299032a · Merge branch 'master' of github.com:treeverse/lakeFS · Jan 19, 2025
d9ea540 · no work no mess · Jan 20, 2025
7b1500d · got it · Jan 21, 2025
08ad93c · f**** auditing · Jan 21, 2025
1f7bf56 · with tests · Jan 21, 2025
2fbe42a · slight adjustment · Jan 21, 2025
11f8d46 · cross ya fingaz · Jan 21, 2025
08977ec · yalla · Jan 21, 2025
749f305 · yalla · Jan 21, 2025
7d121bd · tests dont yet work · Jan 22, 2025
e04cd1b · check output · Jan 22, 2025
dfc6699 · check output · Jan 22, 2025
8aa34f5 · clean · Jan 22, 2025
f8c8215 · no tests · Jan 22, 2025
cff37f2 · tests · Jan 23, 2025
219d4f4 · ooooo baby · Jan 23, 2025
0a37251 · moving further · Jan 23, 2025
52ef352 · check errr · Jan 23, 2025
ea30b22 · check errr · Jan 23, 2025
62f9e44 · err not found · Jan 23, 2025
1bdd16f · err not found · Jan 23, 2025
644a390 · yalla · Jan 23, 2025
22c1756 · check · Jan 23, 2025
3d9225e · check · Jan 23, 2025
b1d588b · schlafen · Jan 23, 2025
f6d95c7 · adapter · Jan 23, 2025
9a4c5e1 · namespace · Jan 26, 2025
033784a · thank yoou · Jan 26, 2025
2542855 · names · Jan 26, 2025
c74b714 · refactor · Jan 27, 2025
deb1fd3 · refactored now · Jan 27, 2025
5922ef8 · no error · Jan 27, 2025
2f740f4 · trimmming · Jan 27, 2025
f91771d · corrections · Jan 27, 2025
b3e5884 · fixin · Jan 27, 2025
5a20568 · yalla · Jan 27, 2025
62f8bbb · testing key marker · Jan 28, 2025
8f55605 · is truncated · Jan 28, 2025
645e5cf · with error not implemented · Jan 28, 2025
4ecb691 · testts will now pass · Jan 28, 2025
58961d1 · truncated for real · Jan 28, 2025
fa64a69 · tests · Jan 29, 2025
82f74a3 · final · Jan 29, 2025
dfbee5c · kadima · Jan 30, 2025
d21b831 · final adjusments · Jan 30, 2025
8ea2f8e · max uploads added · Jan 30, 2025
eaf8972 · kadima · Jan 30, 2025
ad126b7 · nu · Jan 30, 2025
7 changes: 3 additions & 4 deletions esti/s3_gateway_test.go
@@ -291,6 +291,9 @@ func TestS3IfNoneMatch(t *testing.T) {

func TestListMultipartUploads(t *testing.T) {
Contributor:

This test should run only on S3 - I assume the tests didn't pass for Azure.

Contributor Author:

As @idanovo asked, these tests run for other adapters as well, but only until the first list-multipart-uploads request, where they are expected to fail; after that they are skipped.

Contributor:

@nopcoder I wanted to make sure we get the right error for other block stores

Contributor:

Sure, but I don't understand how it works as we currently do not support this flow for all adapters.

Contributor:

	if blockStoreType != "s3" {
		require.Contains(t, err.Error(), "NotImplemented")
		return
	}

Contributor @nopcoder (Jan 30, 2025):

OK, I see the check further down in this function - but why do we even start the upload if we can't test it on a non-S3 implementation?
We have a different test for checking missing functionality.

Contributor:

It can be tested in another test.
@ItamarYuran, my suggestion is to:

  1. Change TestListMultipartUploads TestListMultipartUploadsUnsupported back to t.Skip with a proper message if blockStoreType != "s3"
  2. Change the TestListMultipartUploadsUnsupported name to indicate it checks only unsupported parameters
  3. Add another test that only checks that ListMultipartUploads returns the right error if blockStoreType != "s3"

@nopcoder WDYT? (A rough sketch of this split is shown below.)
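For illustration, a minimal sketch of that split, assuming this file's existing imports (viper, require, the aws s3 client) and its setupTest/tearDownTest helpers; newS3GatewayClient and the bucket wiring are hypothetical placeholders, not the actual esti helpers:

	// Sketch only: the main test skips outright on non-S3 block stores.
	func TestListMultipartUploads(t *testing.T) {
		blockStoreType := viper.GetString(ViperBlockstoreType)
		if blockStoreType != "s3" {
			t.Skip("list multipart uploads is only implemented for the s3 block adapter")
		}
		// ... existing multipart-upload flow continues here ...
	}

	// Sketch only: a separate test asserts the error other adapters surface.
	func TestListMultipartUploadsUnsupportedAdapter(t *testing.T) {
		blockStoreType := viper.GetString(ViperBlockstoreType)
		if blockStoreType == "s3" {
			t.Skip("supported on s3; covered by TestListMultipartUploads")
		}
		ctx, _, repo := setupTest(t)
		defer tearDownTest(repo)
		s3Client := newS3GatewayClient(t) // hypothetical helper
		_, err := s3Client.ListMultipartUploads(ctx, &s3.ListMultipartUploadsInput{Bucket: &repo})
		require.Error(t, err)
		require.Contains(t, err.Error(), "NotImplemented")
	}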

Contributor:

I think that the latest change includes the above.

Contributor:

It doesn't:

  1. We are returning and not skipping.
  2. We don't test the error we get for object stores that are not S3.

blockStoreType := viper.GetString(ViperBlockstoreType)
if blockStoreType != "s3" {
return
}
ctx, logger, repo := setupTest(t)
defer tearDownTest(repo)
s3Endpoint := viper.GetString("s3_endpoint")
@@ -333,10 +336,6 @@ func TestListMultipartUploads(t *testing.T) {
}
// check first mpu appears
output, err := s3Client.ListMultipartUploads(ctx, &s3.ListMultipartUploadsInput{Bucket: resp1.Bucket})
if blockStoreType != "s3" {
require.Contains(t, err.Error(), "NotImplemented")
return
}
require.NoError(t, err, "error listing multiparts")
keys := extractUploadKeys(output)
require.Contains(t, keys, obj1)
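The assertions above rely on an extractUploadKeys helper that sits outside this hunk; a minimal sketch of what such a helper could look like against the aws-sdk-go-v2 response type (the merged helper may differ):

	// extractUploadKeys returns the object keys of the in-progress multipart
	// uploads in a ListMultipartUploads response (sketch, not the merged code).
	func extractUploadKeys(output *s3.ListMultipartUploadsOutput) []string {
		if output == nil {
			return nil
		}
		keys := make([]string, 0, len(output.Uploads))
		for _, upload := range output.Uploads {
			if upload.Key != nil {
				keys = append(keys, *upload.Key)
			}
		}
		return keys
	}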
1 change: 1 addition & 0 deletions pkg/block/adapter.go
@@ -128,6 +128,7 @@ type ListMultipartUploadsResponse struct {
NextUploadIDMarker *string
NextKeyMarker *string
IsTruncated bool
MaxUploads *int32
}

// CreateMultiPartUploadOpts contains optional arguments for
1 change: 1 addition & 0 deletions pkg/block/s3/adapter.go
@@ -882,6 +882,7 @@ func (a *Adapter) ListMultipartUploads(ctx context.Context, obj block.ObjectPoin
NextUploadIDMarker: resp.NextUploadIdMarker,
NextKeyMarker: resp.NextKeyMarker,
IsTruncated: aws.ToBool(resp.IsTruncated),
MaxUploads: resp.MaxUploads,
}
return &mpuResp, nil
}
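For context, the MaxUploads field added above is copied straight from the aws-sdk-go-v2 response; the request side of the same call would forward the caller's optional value roughly like this (bucket/prefix resolution and the opts field names are assumptions, not the adapter's actual code):

	// Sketch of the SDK call this adapter wraps: MaxUploads is an optional
	// *int32 on both the lakeFS options struct and the AWS SDK input.
	input := &s3.ListMultipartUploadsInput{
		Bucket:         &bucket,             // placeholder: resolved from the object's storage namespace
		Prefix:         &uploadPrefix,       // placeholder
		KeyMarker:      opts.KeyMarker,      // assumed field name
		UploadIdMarker: opts.UploadIDMarker, // assumed field name
		MaxUploads:     opts.MaxUploads,
	}
	resp, err := client.ListMultipartUploads(ctx, input)
	if err != nil {
		return nil, err
	}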
1 change: 0 additions & 1 deletion pkg/gateway/errors/errors.go
@@ -60,7 +60,6 @@ const (
ErrInternalError
ErrInvalidAccessKeyID
ErrInvalidBucketName
ErrInvalidArgument
ErrInvalidDigest
ErrInvalidRange
ErrInvalidCopyPartRange
12 changes: 6 additions & 6 deletions pkg/gateway/operations/listobjects.go
@@ -21,8 +21,9 @@ import (

const (
ListObjectMaxKeys = 1000
maxUploadsRange = 2147483647
minUploadsRange = 0
maxUploadsListMPU = 2147483647
minUploadsListMPU = 0
Contributor:

  1. If we leave the verification to the underlying library, you can remove maxUploadsListMPU - it is the MaxInt32 value, and a larger value will not get through since the cast will not allow it.

  2. If we plan to pass 0 as a valid value and let the underlying package handle it, we just need to check if maxUploads32 < 0.


// defaultBucketLocation used to identify if we need to specify the location constraint
defaultBucketLocation = "us-east-1"
QueryParamMaxUploads = "max-uploads"
@@ -425,13 +426,11 @@ func handleListMultipartUploads(w http.ResponseWriter, req *http.Request, o *Rep
opts := block.ListMultipartUploadsOpts{}
if maxUploadsStr != "" {
maxUploads, err := strconv.ParseInt(maxUploadsStr, 10, 32)
if err != nil || maxUploads < minUploadsRange || maxUploads > maxUploadsRange {
o.Log(req).WithField("maxUploadsStr", maxUploadsStr).
WithError(err).Error("malformed query parameter 'maxUploadsStr'")
maxUploads32 := int32(maxUploads)
if err != nil || maxUploads < minUploadsListMPU || maxUploads > maxUploadsListMPU {
_ = o.EncodeError(w, req, err, gatewayerrors.Codes.ToAPIErr(gatewayerrors.ErrInvalidMaxUploads))
return
}
maxUploads32 := int32(maxUploads)
opts.MaxUploads = &maxUploads32
}
if uploadIDMarker != "" {
@@ -476,6 +475,7 @@ func handleListMultipartUploads(w http.ResponseWriter, req *http.Request, o *Rep
NextKeyMarker: aws.StringValue(mpuResp.NextKeyMarker),
NextUploadIDMarker: aws.StringValue(mpuResp.NextUploadIDMarker),
IsTruncated: mpuResp.IsTruncated,
MaxUploads: aws.Int32Value(mpuResp.MaxUploads),
}
o.EncodeResponse(w, req, resp, http.StatusOK)
}
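Following the reviewer's note above: strconv.ParseInt with bitSize 32 already fails on anything above math.MaxInt32, so a simplified version of this validation only needs the lower bound. A sketch of that variant (not the merged code):

	opts := block.ListMultipartUploadsOpts{}
	if maxUploadsStr != "" {
		// bitSize 32 makes ParseInt reject values above math.MaxInt32,
		// so only negative values need an explicit check here.
		maxUploads, err := strconv.ParseInt(maxUploadsStr, 10, 32)
		if err != nil || maxUploads < 0 {
			_ = o.EncodeError(w, req, err, gatewayerrors.Codes.ToAPIErr(gatewayerrors.ErrInvalidMaxUploads))
			return
		}
		maxUploads32 := int32(maxUploads)
		opts.MaxUploads = &maxUploads32
	}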
1 change: 1 addition & 0 deletions pkg/gateway/serde/xml.go
@@ -171,6 +171,7 @@ type ListMultipartUploadsOutput struct {
NextKeyMarker string `xml:"NextKeyMarker,omitempty"`
NextUploadIDMarker string `xml:"NextUploadIDMarker,omitempty"`
IsTruncated bool `xml:"IsTruncated,omitempty"`
MaxUploads int32 `xml:"MaxUploads,omitempty"`
}

type VersioningConfiguration struct {
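One behavior worth noting about the omitempty tag added above: encoding/xml drops an int32 zero value entirely, so a response with MaxUploads set to 0 simply omits the element. A standalone illustration (not lakeFS code):

	package main

	import (
		"encoding/xml"
		"fmt"
	)

	// Trimmed-down stand-in for the serde output struct, just to show how
	// omitempty treats the int32 zero value.
	type listMPUOut struct {
		XMLName    xml.Name `xml:"ListMultipartUploadsResult"`
		MaxUploads int32    `xml:"MaxUploads,omitempty"`
	}

	func main() {
		withValue, _ := xml.Marshal(listMPUOut{MaxUploads: 1000})
		zeroValue, _ := xml.Marshal(listMPUOut{})
		fmt.Println(string(withValue)) // <ListMultipartUploadsResult><MaxUploads>1000</MaxUploads></ListMultipartUploadsResult>
		fmt.Println(string(zeroValue)) // <ListMultipartUploadsResult></ListMultipartUploadsResult>
	}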