Compute an uncompressed digest for chunked layers #2155

Merged (7 commits) on Jan 9, 2025
21 changes: 21 additions & 0 deletions docs/containers-storage.conf.5.md
@@ -124,6 +124,27 @@ The `storage.options.pull_options` table supports the following keys:
It is an expensive operation so it is not enabled by default.
This is a "string bool": "false"|"true" (cannot be native TOML boolean)

**insecure_allow_unpredictable_image_contents="false"|"true"**
This should _almost never_ be set.
It allows partial pulls of images without guaranteeing that "partial
pulls" and non-partial pulls both result in consistent image contents.
Comment on lines +129 to +130

@mtrmac (Collaborator, Author) commented on Jan 7, 2025:

BTW note the phrasing: without guaranteeing consistency. I.e., this option disables the computation of the uncompressed digest on partial pulls.

But images are still required to set correct DiffIDs: if we know the uncompressed digest for some reason (e.g. from a non-partial pull of the same blob, done when the annotations were missing or the registry did not support partial pulls), and we see that it does not match the DiffIDs, we are going to fail any future partial pull of that layer.
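The rule described in this comment can be sketched as follows. This is a minimal illustration, not the containers/storage internals; the function and parameter names are invented for the sketch:

```go
package main

import "fmt"

// allowPartialPull sketches the consistency rule: a partial pull of a layer
// is rejected whenever an uncompressed digest is already known for that
// layer and it disagrees with the DiffID recorded in the image config.
// Names here are illustrative, not the real containers/storage API.
func allowPartialPull(knownUncompressedDigest, configDiffID string) error {
	if knownUncompressedDigest != "" && knownUncompressedDigest != configDiffID {
		return fmt.Errorf("uncompressed digest %q does not match config DiffID %q; refusing partial pull",
			knownUncompressedDigest, configDiffID)
	}
	return nil // nothing known yet, or the digests agree
}

func main() {
	fmt.Println(allowPartialPull("sha256:aaaa", "sha256:aaaa")) // <nil>
	fmt.Println(allowPartialPull("sha256:aaaa", "sha256:bbbb")) // non-nil error
}
```

Note that the option only skips *computing* the uncompressed digest on partial pulls; a digest that is already known is still enforced.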

This allows pulling estargz images and early versions of zstd:chunked images;
otherwise, these layers always use the traditional non-partial pull path.

This option should be enabled _extremely_ rarely, only if _all_ images that could
ever conceivably be pulled on this system are _guaranteed_ (e.g. using a signature policy)
to come from a build system trusted to never attack image integrity.

If this consistency enforcement were disabled, malicious images could be built
in a way designed to evade other audit mechanisms, so presence of most other audit
mechanisms is not a replacement for the above-mentioned need for all images to come
from a trusted build system.

As a side effect, enabling this option will also make image IDs unpredictable
(usually not equal to the traditional value matching the config digest).

This is a "string bool": "false"|"true" (cannot be native TOML boolean)
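For context, such a pull option is set in `containers-storage.conf` like any other key in this table. A sketch, with illustrative values; `enable_partial_images` is assumed here as a neighboring key:

```toml
# /etc/containers/storage.conf (excerpt; illustrative values)
[storage.options.pull_options]
# Keys in this table are "string bools", not native TOML booleans.
enable_partial_images = "true"
# Almost never set this; see the warning above.
insecure_allow_unpredictable_image_contents = "false"
```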

### STORAGE OPTIONS FOR AUFS TABLE

The `storage.options.aufs` table supports the following options:
4 changes: 2 additions & 2 deletions drivers/driver.go
@@ -231,8 +231,8 @@ const (
// DifferOutputFormatDir means the output is a directory and it will
// keep the original layout.
DifferOutputFormatDir = iota
-	// DifferOutputFormatFlat will store the files by their checksum, in the form
-	// checksum[0:2]/checksum[2:]
+	// DifferOutputFormatFlat will store the files by their checksum, per
+	// pkg/chunked/internal/composefs.RegularFilePathForValidatedDigest.
DifferOutputFormatFlat
)
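The flat layout named above can be illustrated with a short sketch, assuming SHA-256 digests of the form `sha256:<64 hex digits>`. The helper name is invented here; the real implementation in this PR is `RegularFilePathForValidatedDigest` in `pkg/chunked/internal/path`:

```go
package main

import (
	"fmt"
	"strings"
)

// flatPathForDigest splits a validated sha256 digest into the two-level
// layout used by DifferOutputFormatFlat: checksum[0:2]/checksum[2:].
// The fan-out into 256 top-level directories keeps any single directory
// from growing too large.
func flatPathForDigest(d string) (string, error) {
	e, ok := strings.CutPrefix(d, "sha256:")
	if !ok {
		return "", fmt.Errorf("unexpected digest algorithm in %q", d)
	}
	if len(e) != 64 {
		return "", fmt.Errorf("unexpected digest length in %q", d)
	}
	return e[0:2] + "/" + e[2:], nil
}

func main() {
	p, err := flatPathForDigest("sha256:0123456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // 01/23456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef
}
```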

2 changes: 1 addition & 1 deletion pkg/chunked/cache_linux.go
@@ -710,7 +710,7 @@ func prepareCacheFile(manifest []byte, format graphdriver.DifferOutputFormat) ([
switch format {
case graphdriver.DifferOutputFormatDir:
case graphdriver.DifferOutputFormatFlat:
-		entries, err = makeEntriesFlat(entries)
+		entries, err = makeEntriesFlat(entries, nil)
if err != nil {
return nil, err
}
15 changes: 10 additions & 5 deletions pkg/chunked/dump/dump.go
@@ -9,11 +9,11 @@ import (
"io"
"path/filepath"
"reflect"
-	"strings"
"time"

"github.com/containers/storage/pkg/chunked/internal/minimal"
storagePath "github.com/containers/storage/pkg/chunked/internal/path"
+	"github.com/opencontainers/go-digest"
"golang.org/x/sys/unix"
)

@@ -165,11 +165,16 @@ func dumpNode(out io.Writer, added map[string]*minimal.FileMetadata, links map[s
} else {
payload = storagePath.CleanAbsPath(entry.Linkname)
}
-	} else {
-		if len(entry.Digest) > 10 {
-			d := strings.Replace(entry.Digest, "sha256:", "", 1)
-			payload = d[:2] + "/" + d[2:]
-		}
+	} else if entry.Digest != "" {
+		d, err := digest.Parse(entry.Digest)
+		if err != nil {
+			return fmt.Errorf("invalid digest %q for %q: %w", entry.Digest, entry.Name, err)
+		}
+		path, err := storagePath.RegularFilePathForValidatedDigest(d)
+		if err != nil {
+			return fmt.Errorf("determining physical file path for %q: %w", entry.Name, err)
+		}
+		payload = path
}

if _, err := fmt.Fprint(out, escapedOptional([]byte(payload), ESCAPE_LONE_DASH)); err != nil {
8 changes: 4 additions & 4 deletions pkg/chunked/dump/dump_test.go
@@ -59,7 +59,7 @@ func TestDumpNode(t *testing.T) {
Devminor: 0,
ModTime: &modTime,
Linkname: "",
-	Digest: "sha256:abcdef1234567890",
+	Digest: "sha256:0123456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef",
Xattrs: map[string]string{
"user.key1": base64.StdEncoding.EncodeToString([]byte("value1")),
},
@@ -150,15 +150,15 @@ func TestDumpNode(t *testing.T) {
entries: []*minimal.FileMetadata{
regularFileEntry,
},
-	expected: "/example.txt 100 100000 1 1000 1000 0 1672531200.0 ab/cdef1234567890 - - user.key1=value1\n",
+	expected: "/example.txt 100 100000 1 1000 1000 0 1672531200.0 01/23456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef - - user.key1=value1\n",
},
{
name: "root entry with file",
entries: []*minimal.FileMetadata{
rootEntry,
regularFileEntry,
},
-	expected: "/ 0 40000 1 0 0 0 1672531200.0 - - -\n/example.txt 100 100000 1 1000 1000 0 1672531200.0 ab/cdef1234567890 - - user.key1=value1\n",
+	expected: "/ 0 40000 1 0 0 0 1672531200.0 - - -\n/example.txt 100 100000 1 1000 1000 0 1672531200.0 01/23456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef - - user.key1=value1\n",
skipAddingRootEntry: true,
},
{
@@ -196,7 +196,7 @@ func TestDumpNode(t *testing.T) {
regularFileEntry,
directoryEntry,
},
-	expected: "/ 0 40000 1 0 0 0 1672531200.0 - - -\n/example.txt 100 100000 1 1000 1000 0 1672531200.0 ab/cdef1234567890 - - user.key1=value1\n/mydir 0 40000 1 1000 1000 0 1672531200.0 - - - user.key2=value2\n",
+	expected: "/ 0 40000 1 0 0 0 1672531200.0 - - -\n/example.txt 100 100000 1 1000 1000 0 1672531200.0 01/23456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef - - user.key1=value1\n/mydir 0 40000 1 1000 1000 0 1672531200.0 - - - user.key2=value2\n",
skipAddingRootEntry: true,
},
}
15 changes: 15 additions & 0 deletions pkg/chunked/internal/path/path.go
@@ -1,7 +1,10 @@
package path

import (
"fmt"
"path/filepath"

"github.com/opencontainers/go-digest"
)

// CleanAbsPath removes any ".." and "." from the path
@@ -10,3 +13,15 @@ import (
func CleanAbsPath(path string) string {
return filepath.Clean("/" + path)
}

// RegularFilePathForValidatedDigest returns the path used in the composefs
// backing store for a regular file with the provided content digest.
//
// The caller MUST ensure d is a valid digest (in particular, that it contains no path separators or ".." entries).
func RegularFilePathForValidatedDigest(d digest.Digest) (string, error) {
if algo := d.Algorithm(); algo != digest.SHA256 {
return "", fmt.Errorf("unexpected digest algorithm %q", algo)
}
e := d.Encoded()
return e[0:2] + "/" + e[2:], nil
}
15 changes: 15 additions & 0 deletions pkg/chunked/internal/path/path_test.go
@@ -4,7 +4,9 @@ import (
"fmt"
"testing"

"github.com/opencontainers/go-digest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func TestCleanAbsPath(t *testing.T) {
@@ -46,3 +48,16 @@ func TestCleanAbsPath(t *testing.T) {
assert.Equal(t, test.expected, CleanAbsPath(test.path), fmt.Sprintf("path %q failed", test.path))
}
}

func TestRegularFilePathForValidatedDigest(t *testing.T) {
d, err := digest.Parse("sha256:0123456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef")
require.NoError(t, err)
res, err := RegularFilePathForValidatedDigest(d)
require.NoError(t, err)
assert.Equal(t, "01/23456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef", res)

d, err = digest.Parse("sha512:0123456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef0123456789abcdef1123456789abcdef2123456789abcdef3123456789abcdef")
require.NoError(t, err)
_, err = RegularFilePathForValidatedDigest(d)
assert.Error(t, err)
}