vendor: update buildkit to 539be170

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Author: Tonis Tiigi
Date: 2021-12-15 22:09:13 -08:00
parent 59533bbb5c
commit 9c3be32bc9
581 changed files with 24648 additions and 16682 deletions

View File

@ -11,12 +11,27 @@ package.
Please see the LICENSE file for licensing information.
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information
see the [Code of Conduct
FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional
questions or comments.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA)
declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR
appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
We also require that contributors sign their commits using git commit -s or git commit --signoff to certify they either authored the work themselves
or otherwise have permission to use it in this project. Please see https://developercertificate.org/ for more info, as well as to make sure that you can
attest to the rules listed. Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Special Thanks
Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
for another named pipe implementation.

191
vendor/github.com/containerd/containerd/api/LICENSE generated vendored Normal file
View File

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright The containerd Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,17 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package content

View File

@ -21,15 +21,16 @@ import (
"bytes"
"compress/gzip"
"context"
"encoding/binary"
"fmt"
"io"
"os"
"os/exec"
"strconv"
"sync"
"github.com/containerd/containerd/log"
"github.com/klauspost/compress/zstd"
exec "golang.org/x/sys/execabs"
)
type (
@ -125,17 +126,52 @@ func (r *bufferedReader) Peek(n int) ([]byte, error) {
return r.buf.Peek(n)
}
const (
zstdMagicSkippableStart = 0x184D2A50
zstdMagicSkippableMask = 0xFFFFFFF0
)
var (
gzipMagic = []byte{0x1F, 0x8B, 0x08}
zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
)
type matcher = func([]byte) bool
func magicNumberMatcher(m []byte) matcher {
return func(source []byte) bool {
return bytes.HasPrefix(source, m)
}
}
// zstdMatcher detects zstd compression algorithm.
// There are two frame formats defined by Zstandard: Zstandard frames and Skippable frames.
// See https://tools.ietf.org/id/draft-kucherawy-dispatch-zstd-00.html#rfc.section.2 for more details.
func zstdMatcher() matcher {
return func(source []byte) bool {
if bytes.HasPrefix(source, zstdMagic) {
// Zstandard frame
return true
}
// skippable frame
if len(source) < 8 {
return false
}
// magic number from 0x184D2A50 to 0x184D2A5F.
if binary.LittleEndian.Uint32(source[:4])&zstdMagicSkippableMask == zstdMagicSkippableStart {
return true
}
return false
}
}
// DetectCompression detects the compression algorithm of the source.
func DetectCompression(source []byte) Compression {
for compression, m := range map[Compression][]byte{
Gzip: {0x1F, 0x8B, 0x08},
Zstd: {0x28, 0xb5, 0x2f, 0xfd},
for compression, fn := range map[Compression]matcher{
Gzip: magicNumberMatcher(gzipMagic),
Zstd: zstdMatcher(),
} {
if len(source) < len(m) {
// Len too short
continue
}
if bytes.Equal(m, source[:len(m)]) {
if fn(source) {
return compression
}
}
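
The detection logic moves from a plain prefix comparison to per-format matchers so that zstd skippable frames (magic numbers 0x184D2A50 through 0x184D2A5F) are also recognized. Below is a self-contained sketch of that check; the names mirror the hunk above but this is illustrative, not containerd's exported API.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Magic numbers mirroring the hunk above.
var (
	gzipMagic = []byte{0x1F, 0x8B, 0x08}
	zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
)

const (
	zstdMagicSkippableStart = 0x184D2A50
	zstdMagicSkippableMask  = 0xFFFFFFF0
)

// isZstd reports whether source begins with a Zstandard frame or a skippable
// frame (little-endian magic in the range 0x184D2A50..0x184D2A5F).
func isZstd(source []byte) bool {
	if bytes.HasPrefix(source, zstdMagic) {
		return true
	}
	if len(source) < 8 {
		return false
	}
	return binary.LittleEndian.Uint32(source[:4])&zstdMagicSkippableMask == zstdMagicSkippableStart
}

func main() {
	skippable := []byte{0x50, 0x2A, 0x4D, 0x18, 0, 0, 0, 0} // 0x184D2A50, little-endian
	fmt.Println(bytes.HasPrefix(skippable, gzipMagic), isZstd(skippable)) // false true
}
```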

View File

@ -19,7 +19,6 @@ package content
import (
"context"
"io"
"io/ioutil"
"math/rand"
"sync"
"time"
@ -144,9 +143,14 @@ func Copy(ctx context.Context, cw Writer, r io.Reader, size int64, expected dige
}
}
if _, err := copyWithBuffer(cw, r); err != nil {
copied, err := copyWithBuffer(cw, r)
if err != nil {
return errors.Wrap(err, "failed to copy")
}
if size != 0 && copied < size-ws.Offset {
// A short write would return its own error; this indicates a read failure
return errors.Wrapf(io.ErrUnexpectedEOF, "failed to read expected number of bytes")
}
if err := cw.Commit(ctx, size, expected, opts...); err != nil {
if !errdefs.IsAlreadyExists(err) {
@ -165,8 +169,15 @@ func CopyReaderAt(cw Writer, ra ReaderAt, n int64) error {
return err
}
_, err = copyWithBuffer(cw, io.NewSectionReader(ra, ws.Offset, n))
return err
copied, err := copyWithBuffer(cw, io.NewSectionReader(ra, ws.Offset, n))
if err != nil {
return errors.Wrap(err, "failed to copy")
}
if copied < n {
// A short write would return its own error; this indicates a read failure
return errors.Wrap(io.ErrUnexpectedEOF, "failed to read expected number of bytes")
}
return nil
}
// CopyReader copies to a writer from a given reader, returning
@ -218,7 +229,7 @@ func seekReader(r io.Reader, offset, size int64) (io.Reader, error) {
}
// well then, let's just discard up to the offset
n, err := copyWithBuffer(ioutil.Discard, io.LimitReader(r, offset))
n, err := copyWithBuffer(io.Discard, io.LimitReader(r, offset))
if err != nil {
return nil, errors.Wrap(err, "failed to discard to offset")
}
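
Both Copy and CopyReaderAt now compare the number of bytes actually copied against the expected size and report a truncated source as io.ErrUnexpectedEOF. A minimal standalone sketch of that pattern, with an illustrative helper name:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// copyExactly sketches the check added above: io.Copy surfaces write errors
// itself, so copying fewer than want bytes without an error means the reader
// ended early, which is reported as io.ErrUnexpectedEOF.
func copyExactly(w io.Writer, r io.Reader, want int64) error {
	copied, err := io.Copy(w, r)
	if err != nil {
		return fmt.Errorf("failed to copy: %w", err)
	}
	if copied < want {
		return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
	}
	return nil
}

func main() {
	var sink strings.Builder
	err := copyExactly(&sink, strings.NewReader("abc"), 10)
	fmt.Println(err) // failed to read expected number of bytes: unexpected EOF
}
```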

View File

@ -41,7 +41,13 @@ func tryLock(ref string) error {
defer locksMu.Unlock()
if v, ok := locks[ref]; ok {
return errors.Wrapf(errdefs.ErrUnavailable, "ref %s locked since %s", ref, v.since)
// Returning the duration may help developers distinguish deadlocks (long duration) from
// lock contention (short duration).
now := time.Now()
return errors.Wrapf(
errdefs.ErrUnavailable,
"ref %s locked for %s (since %s)", ref, now.Sub(v.since), v.since,
)
}
locks[ref] = &lock{time.Now()}

View File

@ -20,7 +20,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
@ -568,7 +567,7 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
// the ingest is new, we need to setup the target location.
// write the ref to a file for later use
if err := ioutil.WriteFile(refp, []byte(ref), 0666); err != nil {
if err := os.WriteFile(refp, []byte(ref), 0666); err != nil {
return nil, err
}
@ -581,7 +580,7 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
}
if total > 0 {
if err := ioutil.WriteFile(filepath.Join(path, "total"), []byte(fmt.Sprint(total)), 0666); err != nil {
if err := os.WriteFile(filepath.Join(path, "total"), []byte(fmt.Sprint(total)), 0666); err != nil {
return nil, err
}
}
@ -656,13 +655,13 @@ func (s *store) ingestPaths(ref string) (string, string, string) {
}
func readFileString(path string) (string, error) {
p, err := ioutil.ReadFile(path)
p, err := os.ReadFile(path)
return string(p), err
}
// readFileTimestamp reads a file with just a timestamp present.
func readFileTimestamp(p string) (time.Time, error) {
b, err := ioutil.ReadFile(p)
b, err := os.ReadFile(p)
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrap(errdefs.ErrNotFound, err.Error())
@ -683,10 +682,10 @@ func writeTimestampFile(p string, t time.Time) error {
if err != nil {
return err
}
return atomicWrite(p, b, 0666)
return writeToCompletion(p, b, 0666)
}
func atomicWrite(path string, data []byte, mode os.FileMode) error {
func writeToCompletion(path string, data []byte, mode os.FileMode) error {
tmp := fmt.Sprintf("%s.tmp", path)
f, err := os.OpenFile(tmp, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, mode)
if err != nil {
@ -695,7 +694,11 @@ func atomicWrite(path string, data []byte, mode os.FileMode) error {
_, err = f.Write(data)
f.Close()
if err != nil {
return errors.Wrap(err, "write atomic data")
return errors.Wrap(err, "write tmp file")
}
return os.Rename(tmp, path)
err = os.Rename(tmp, path)
if err != nil {
return errors.Wrap(err, "rename tmp file")
}
return nil
}
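
The helper formerly called atomicWrite is renamed writeToCompletion and now wraps the rename error as well. A standalone sketch of the write-then-rename pattern it implements; the target path and data here are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeToCompletion writes the data to a sibling ".tmp" file opened with
// O_SYNC, then renames it over the target so readers never observe a
// partially written file. Illustrative, not the containerd code itself.
func writeToCompletion(path string, data []byte, mode os.FileMode) error {
	tmp := path + ".tmp"
	f, err := os.OpenFile(tmp, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, mode)
	if err != nil {
		return fmt.Errorf("create tmp file: %w", err)
	}
	_, werr := f.Write(data)
	f.Close()
	if werr != nil {
		return fmt.Errorf("write tmp file: %w", werr)
	}
	if err := os.Rename(tmp, path); err != nil {
		return fmt.Errorf("rename tmp file: %w", err)
	}
	return nil
}

func main() {
	target := filepath.Join(os.TempDir(), "updatedat")
	if err := writeToCompletion(target, []byte("2021-12-15T22:09:13Z"), 0666); err != nil {
		fmt.Println(err)
	}
}
```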

View File

@ -1,3 +1,4 @@
//go:build darwin || freebsd || netbsd
// +build darwin freebsd netbsd
/*

View File

@ -1,3 +1,4 @@
//go:build openbsd
// +build openbsd
/*

View File

@ -1,3 +1,4 @@
//go:build linux || solaris
// +build linux solaris
/*

View File

@ -0,0 +1,37 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package defaults
const (
// DefaultRootDir is the default location used by containerd to store
// persistent data
DefaultRootDir = "/var/lib/containerd"
// DefaultStateDir is the default location used by containerd to store
// transient data
DefaultStateDir = "/var/run/containerd"
// DefaultAddress is the default unix socket address
DefaultAddress = "/var/run/containerd/containerd.sock"
// DefaultDebugAddress is the default unix socket address for pprof data
DefaultDebugAddress = "/var/run/containerd/debug.sock"
// DefaultFIFODir is the default location used by client-side cio library
// to store FIFOs.
DefaultFIFODir = "/var/run/containerd/fifo"
// DefaultRuntime is left empty because there are multiple possible runtimes
DefaultRuntime = ""
// DefaultConfigDir is the default location for config files.
DefaultConfigDir = "/etc/containerd"
)

View File

@ -1,4 +1,5 @@
// +build !windows
//go:build !windows && !darwin
// +build !windows,!darwin
/*
Copyright The containerd Authors.

View File

@ -1,5 +1,3 @@
// +build windows
/*
Copyright The containerd Authors.

View File

@ -40,6 +40,10 @@ var (
// This applies only to a single descriptor in a handler
// chain and does not apply to descendant descriptors.
ErrStopHandler = fmt.Errorf("stop handler")
// ErrEmptyWalk is used when the WalkNotEmpty handlers return no
// children (e.g.: they were filtered out).
ErrEmptyWalk = fmt.Errorf("image might be filtered out")
)
// Handler handles image manifests
@ -99,6 +103,36 @@ func Walk(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) err
}
}
}
return nil
}
// WalkNotEmpty works the same way Walk does, with the exception that it ensures that
// some children are still found by Walking the descriptors (for example, not all of
// them have been filtered out by one of the handlers). If there are no children,
// then an ErrEmptyWalk error is returned.
func WalkNotEmpty(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) error {
isEmpty := true
var notEmptyHandler HandlerFunc = func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
children, err := handler.Handle(ctx, desc)
if err != nil {
return children, err
}
if len(children) > 0 {
isEmpty = false
}
return children, nil
}
err := Walk(ctx, notEmptyHandler, descs...)
if err != nil {
return err
}
if isEmpty {
return ErrEmptyWalk
}
return nil
}
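
A hedged usage sketch of the new helper, assuming the containerd images package as updated here: when every handler filters out all children (for example, no manifest matches the requested platform), the walk now reports ErrEmptyWalk instead of succeeding silently.

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// dropAll filters out every child, so the walk finds nothing.
	var dropAll images.HandlerFunc = func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
		return nil, nil
	}
	root := ocispec.Descriptor{MediaType: ocispec.MediaTypeImageIndex}
	err := images.WalkNotEmpty(context.Background(), dropAll, root)
	fmt.Println(errors.Is(err, images.ErrEmptyWalk)) // true
}
```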

View File

@ -19,6 +19,7 @@ package images
import (
"context"
"encoding/json"
"fmt"
"sort"
"time"
@ -154,6 +155,10 @@ func Manifest(ctx context.Context, provider content.Provider, image ocispec.Desc
return nil, err
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "manifest: invalid desc %s", desc.Digest)
}
var manifest ocispec.Manifest
if err := json.Unmarshal(p, &manifest); err != nil {
return nil, err
@ -194,6 +199,10 @@ func Manifest(ctx context.Context, provider content.Provider, image ocispec.Desc
return nil, err
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "manifest: invalid desc %s", desc.Digest)
}
var idx ocispec.Index
if err := json.Unmarshal(p, &idx); err != nil {
return nil, err
@ -336,6 +345,10 @@ func Children(ctx context.Context, provider content.Provider, desc ocispec.Descr
return nil, err
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "children: invalid desc %s", desc.Digest)
}
// TODO(stevvooe): We just assume oci manifest, for now. There may be
// subtle differences from the docker version.
var manifest ocispec.Manifest
@ -351,6 +364,10 @@ func Children(ctx context.Context, provider content.Provider, desc ocispec.Descr
return nil, err
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "children: invalid desc %s", desc.Digest)
}
var index ocispec.Index
if err := json.Unmarshal(p, &index); err != nil {
return nil, err
@ -368,6 +385,44 @@ func Children(ctx context.Context, provider content.Provider, desc ocispec.Descr
return descs, nil
}
// unknownDocument represents a manifest, manifest list, or index that has not
// yet been validated.
type unknownDocument struct {
MediaType string `json:"mediaType,omitempty"`
Config json.RawMessage `json:"config,omitempty"`
Layers json.RawMessage `json:"layers,omitempty"`
Manifests json.RawMessage `json:"manifests,omitempty"`
FSLayers json.RawMessage `json:"fsLayers,omitempty"` // schema 1
}
// validateMediaType returns an error if the byte slice is invalid JSON or if
// the media type identifies the blob as one format but it contains elements of
// another format.
func validateMediaType(b []byte, mt string) error {
var doc unknownDocument
if err := json.Unmarshal(b, &doc); err != nil {
return err
}
if len(doc.FSLayers) != 0 {
return fmt.Errorf("media-type: schema 1 not supported")
}
switch mt {
case MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
if len(doc.Manifests) != 0 ||
doc.MediaType == MediaTypeDockerSchema2ManifestList ||
doc.MediaType == ocispec.MediaTypeImageIndex {
return fmt.Errorf("media-type: expected manifest but found index (%s)", mt)
}
case MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
if len(doc.Config) != 0 || len(doc.Layers) != 0 ||
doc.MediaType == MediaTypeDockerSchema2Manifest ||
doc.MediaType == ocispec.MediaTypeImageManifest {
return fmt.Errorf("media-type: expected index but found manifest (%s)", mt)
}
}
return nil
}
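
validateMediaType guards against a blob whose declared media type disagrees with its contents (a manifest body carrying index fields, or the reverse) and rejects schema 1 documents outright. A reduced, self-contained illustration of the kind of mismatch it catches; the struct mirrors the hunk but is not the containerd function itself:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// unknownDocument holds just enough fields to detect a media-type mismatch.
type unknownDocument struct {
	MediaType string          `json:"mediaType,omitempty"`
	Manifests json.RawMessage `json:"manifests,omitempty"`
	FSLayers  json.RawMessage `json:"fsLayers,omitempty"` // schema 1
}

func main() {
	// An image index presented as if it were a single manifest.
	blob := []byte(`{"mediaType":"application/vnd.oci.image.index.v1+json","manifests":[{}]}`)
	var doc unknownDocument
	if err := json.Unmarshal(blob, &doc); err != nil {
		panic(err)
	}
	if len(doc.Manifests) != 0 {
		fmt.Println("media-type: expected manifest but found index")
	}
}
```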
// RootFS returns the unpacked diffids that make up an image's rootfs.
//
// These are used to verify that a set of layers unpacked to the expected

View File

@ -19,3 +19,7 @@ package labels
// LabelUncompressed is added to compressed layer contents.
// The value is digest of the uncompressed content.
const LabelUncompressed = "containerd.io/uncompressed"
// LabelSharedNamespace is added to a namespace to allow that namespace's
// contents to be shared.
const LabelSharedNamespace = "containerd.io/namespace.shareable"

View File

@ -52,7 +52,8 @@ const (
// WithLogger returns a new context with the provided logger. Use in
// combination with logger.WithField(s) for great effect.
func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
return context.WithValue(ctx, loggerKey{}, logger)
e := logger.WithContext(ctx)
return context.WithValue(ctx, loggerKey{}, e)
}
// GetLogger retrieves the current logger from the context. If no logger is
@ -61,7 +62,7 @@ func GetLogger(ctx context.Context) *logrus.Entry {
logger := ctx.Value(loggerKey{})
if logger == nil {
return L
return L.WithContext(ctx)
}
return logger.(*logrus.Entry)
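
A brief usage sketch of the effect of this change, assuming the containerd log package as patched here:

```go
package main

import (
	"context"

	"github.com/containerd/containerd/log"
)

func main() {
	// The entry stored by WithLogger (and the fallback returned by GetLogger)
	// is now bound to the context via logrus WithContext, so hooks that read
	// entry.Context (for example to attach trace IDs) can see it.
	ctx := log.WithLogger(context.Background(), log.L.WithField("module", "example"))
	log.G(ctx).Info("hello")
}
```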

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
/*

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
/*

View File

@ -38,7 +38,7 @@ func isLinuxOS(os string) bool {
// The OS value should be normalized before calling this function.
func isKnownOS(os string) bool {
switch os {
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "ios", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
return true
}
return false
@ -60,7 +60,7 @@ func isArmArch(arch string) bool {
// The arch value should be normalized before being passed to this function.
func isKnownArch(arch string) bool {
switch arch {
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "loong64", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
return true
}
return false

View File

@ -16,27 +16,11 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultString returns the default string specifier for the platform.
func DefaultString() string {
return Format(DefaultSpec())
}
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// DefaultStrict returns strict form of Default.
func DefaultStrict() MatchComparer {
return OnlyStrict(DefaultSpec())

View File

@ -0,0 +1,45 @@
//go:build darwin
// +build darwin
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Ordered(DefaultSpec(), specs.Platform{
// darwin runtime also supports Linux binary via runu/LKL
OS: "linux",
Architecture: runtime.GOARCH,
})
}

View File

@ -1,4 +1,5 @@
// +build !windows
//go:build !windows && !darwin
// +build !windows,!darwin
/*
Copyright The containerd Authors.
@ -18,6 +19,22 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Only(DefaultSpec())

View File

@ -1,5 +1,3 @@
// +build windows
/*
Copyright The containerd Authors.
@ -29,6 +27,18 @@ import (
"golang.org/x/sys/windows"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
major, minor, build := windows.RtlGetNtVersionNumbers()
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
OSVersion: fmt.Sprintf("%d.%d.%d", major, minor, build),
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
type matchComparer struct {
defaults Matcher
osVersionPrefix string

View File

@ -107,6 +107,7 @@
package platforms
import (
"path"
"regexp"
"runtime"
"strconv"
@ -246,20 +247,7 @@ func Format(platform specs.Platform) string {
return "unknown"
}
return joinNotEmpty(platform.OS, platform.Architecture, platform.Variant)
}
func joinNotEmpty(s ...string) string {
var ss []string
for _, s := range s {
if s == "" {
continue
}
ss = append(ss, s)
}
return strings.Join(ss, "/")
return path.Join(platform.OS, platform.Architecture, platform.Variant)
}
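
The hand-rolled joinNotEmpty helper is replaced by path.Join, which already skips empty elements. A quick check of the equivalence for the specifiers Format produces:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join drops empty elements, matching the removed joinNotEmpty helper
	// for the cases Format cares about.
	fmt.Println(path.Join("linux", "arm", "v7")) // linux/arm/v7
	fmt.Println(path.Join("linux", "amd64", "")) // linux/amd64
}
```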
// Normalize validates and translates the platform to the canonical value.

View File

@ -58,7 +58,7 @@ func GenerateTokenOptions(ctx context.Context, host, username, secret string, c
scope, ok := c.Parameters["scope"]
if ok {
to.Scopes = append(to.Scopes, scope)
to.Scopes = append(to.Scopes, strings.Split(scope, " ")...)
} else {
log.G(ctx).WithField("host", host).Debug("no scope specified for token auth challenge")
}
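
Previously the whole scope parameter was appended as a single string; it is now split on spaces. A small illustration of why that matters when a token challenge carries multiple scopes (the scope values are made up):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// One "scope" challenge parameter can carry several space-separated
	// scopes; splitting keeps each one a distinct value in the token request.
	scope := "repository:library/alpine:pull repository:library/busybox:pull"
	scopes := strings.Split(scope, " ")
	fmt.Println(len(scopes), scopes) // 2 [repository:library/alpine:pull repository:library/busybox:pull]
}
```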

View File

@ -21,7 +21,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
@ -60,6 +59,10 @@ func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.R
log.G(ctx).WithError(err).Debug("failed to parse")
continue
}
if u.Scheme != "http" && u.Scheme != "https" {
log.G(ctx).Debug("non-http(s) alternative url is unsupported")
continue
}
log.G(ctx).Debug("trying alternative url")
// Try this first, parse it
@ -197,7 +200,7 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
// Discard up to offset
// Could use buffer pool here but this case should be rare
n, err := io.Copy(ioutil.Discard, io.LimitReader(resp.Body, offset))
n, err := io.Copy(io.Discard, io.LimitReader(resp.Body, offset))
if err != nil {
return nil, errors.Wrap(err, "failed to discard to offset")
}
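
A standalone sketch of the new scheme check on alternative layer URLs; the example URLs are made up:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The fetcher now skips alternative URLs whose scheme is not http or
	// https, so a manifest cannot point the client at local schemes.
	for _, raw := range []string{"https://mirror.example/blobs/sha256/x", "file:///etc/passwd"} {
		u, err := url.Parse(raw)
		if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
			fmt.Println("skipping non-http(s) alternative url:", raw)
			continue
		}
		fmt.Println("trying alternative url:", raw)
	}
}
```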

View File

@ -19,19 +19,22 @@ package docker
import (
"bytes"
"io"
"io/ioutil"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
"github.com/pkg/errors"
)
const maxRetry = 3
type httpReadSeeker struct {
size int64
offset int64
rc io.ReadCloser
open func(offset int64) (io.ReadCloser, error)
closed bool
errsWithNoProgress int
}
func newHTTPReadSeeker(size int64, open func(offset int64) (io.ReadCloser, error)) (io.ReadCloser, error) {
@ -53,6 +56,27 @@ func (hrs *httpReadSeeker) Read(p []byte) (n int, err error) {
n, err = rd.Read(p)
hrs.offset += int64(n)
if n > 0 || err == nil {
hrs.errsWithNoProgress = 0
}
if err == io.ErrUnexpectedEOF {
// connection closed unexpectedly. try reconnecting.
if n == 0 {
hrs.errsWithNoProgress++
if hrs.errsWithNoProgress > maxRetry {
return // too many retries for this offset with no progress
}
}
if hrs.rc != nil {
if clsErr := hrs.rc.Close(); clsErr != nil {
log.L.WithError(clsErr).Errorf("httpReadSeeker: failed to close ReadCloser")
}
hrs.rc = nil
}
if _, err2 := hrs.reader(); err2 == nil {
return n, nil
}
}
return
}
@ -137,7 +161,7 @@ func (hrs *httpReadSeeker) reader() (io.Reader, error) {
// as the length is already satisfied but we just return the empty
// reader instead.
hrs.rc = ioutil.NopCloser(bytes.NewReader([]byte{}))
hrs.rc = io.NopCloser(bytes.NewReader([]byte{}))
}
return hrs.rc, nil
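
The read path now tolerates connections that drop mid-body: on io.ErrUnexpectedEOF it reconnects at the current offset, giving up only after maxRetry consecutive attempts that made no progress. A self-contained sketch of that retry rule; the reader below is a stand-in for a flaky connection, not the real transport:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

const maxRetry = 3

// flakyReader always fails with io.ErrUnexpectedEOF, standing in for a
// registry connection that keeps dropping. Purely illustrative.
type flakyReader struct{}

func (flakyReader) Read([]byte) (int, error) { return 0, io.ErrUnexpectedEOF }

func main() {
	var r io.Reader = flakyReader{}
	buf := make([]byte, 512)
	errsWithNoProgress := 0
	for {
		n, err := r.Read(buf)
		if n > 0 || err == nil {
			errsWithNoProgress = 0 // progress resets the retry budget
		}
		if errors.Is(err, io.ErrUnexpectedEOF) {
			errsWithNoProgress++
			if errsWithNoProgress > maxRetry {
				fmt.Println("giving up after", maxRetry, "attempts with no progress")
				return
			}
			continue // the real code reopens the request at the current offset here
		}
		if err != nil {
			return
		}
	}
}
```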

View File

@ -19,7 +19,6 @@ package docker
import (
"context"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
@ -263,7 +262,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
pr, pw := io.Pipe()
respC := make(chan response, 1)
body := ioutil.NopCloser(pr)
body := io.NopCloser(pr)
req.body = func() (io.ReadCloser, error) {
if body == nil {

View File

@ -20,7 +20,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"path"
@ -359,7 +358,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
return "", ocispec.Descriptor{}, err
}
}
} else if _, err := io.Copy(ioutil.Discard, &bodyReader); err != nil {
} else if _, err := io.Copy(io.Discard, &bodyReader); err != nil {
return "", ocispec.Descriptor{}, err
}
size = bodyReader.bytesRead

View File

@ -23,7 +23,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"strconv"
"strings"
"sync"
@ -230,7 +229,7 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
// ReadStripSignature reads in a schema1 manifest and returns a byte array
// with the "signatures" field stripped
func ReadStripSignature(schema1Blob io.Reader) ([]byte, error) {
b, err := ioutil.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
b, err := io.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
if err != nil {
return nil, err
}
@ -256,6 +255,9 @@ func (c *Converter) fetchManifest(ctx context.Context, desc ocispec.Descriptor)
if err := json.Unmarshal(b, &m); err != nil {
return err
}
if len(m.Manifests) != 0 || len(m.Layers) != 0 {
return errors.New("converter: expected schema1 document but found extra keys")
}
c.pulledManifest = &m
return nil
@ -472,8 +474,10 @@ type history struct {
}
type manifest struct {
FSLayers []fsLayer `json:"fsLayers"`
History []history `json:"history"`
FSLayers []fsLayer `json:"fsLayers"`
History []history `json:"history"`
Layers json.RawMessage `json:"layers,omitempty"` // OCI manifest
Manifests json.RawMessage `json:"manifests,omitempty"` // OCI index
}
type v1History struct {

View File

@ -19,7 +19,6 @@ package errors
import (
"fmt"
"io"
"io/ioutil"
"net/http"
)
@ -41,7 +40,7 @@ func (e ErrUnexpectedStatus) Error() string {
func NewUnexpectedStatusErr(resp *http.Response) error {
var b []byte
if resp.Body != nil {
b, _ = ioutil.ReadAll(io.LimitReader(resp.Body, 64000)) // 64KB
b, _ = io.ReadAll(io.LimitReader(resp.Body, 64000)) // 64KB
}
err := ErrUnexpectedStatus{
Body: b,

View File

@ -423,6 +423,10 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
return err
}
if req.Action == api.WriteActionCommit {
return nil
}
req, err = session.Recv()
if err != nil {
if err == io.EOF {

View File

@ -23,7 +23,7 @@ var (
Package = "github.com/containerd/containerd"
// Version holds the complete version number. Filled in at linking time.
Version = "1.5.5+unknown"
Version = "1.6.0-beta.3+unknown"
// Revision is filled with the VCS (e.g. git) revision being used to build
// the program at linking time.

View File

@ -12,8 +12,11 @@ Darren Stahl <darst@microsoft.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcgstyle.net>
Edward Pilatowicz <edward.pilatowicz@oracle.com>
Fu Wei <fuweid89@gmail.com>
Hajime Tazaki <thehajime@gmail.com>
Ian Campbell <ijc@docker.com>
Ivan Markin <sw@nogoegst.net>
Jacob Blain Christen <jacob@rancher.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cummins <sul3n3t@gmail.com>
Kasper Fabæch Brandt <poizan@poizan.dk>
@ -23,10 +26,11 @@ Michael Crosby <michael@thepasture.io>
Michael Wan <zirenwan@gmail.com>
Mike Brown <brownwm@us.ibm.com>
Niels de Vos <ndevos@redhat.com>
Phil Estes <estesp@amazon.com>
Phil Estes <estesp@gmail.com>
Phil Estes <estesp@linux.vnet.ibm.com>
Samuel Karp <me@samuelkarp.com>
Sam Whited <sam@samwhited.com>
Samuel Karp <me@samuelkarp.com>
Sebastiaan van Stijn <github@gone.nl>
Shengjing Zhu <zhsj@debian.org>
Stephen J Day <stephen.day@docker.com>

View File

@ -1,3 +1,4 @@
//go:build darwin || freebsd || openbsd
// +build darwin freebsd openbsd
/*

View File

@ -1,3 +1,4 @@
//go:build linux || darwin
// +build linux darwin
/*

View File

@ -1,3 +1,4 @@
//go:build !linux && !darwin
// +build !linux,!darwin
/*

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package api // import "github.com/docker/docker/api"

View File

@ -382,11 +382,13 @@ definitions:
type: "string"
description: |
- Empty string means not to restart
- `no` Do not automatically restart
- `always` Always restart
- `unless-stopped` Restart always except when the user has manually stopped the container
- `on-failure` Restart only when the container exit code is non-zero
enum:
- ""
- "no"
- "always"
- "unless-stopped"
- "on-failure"
@ -744,6 +746,7 @@ definitions:
description: |
Health stores information about the container's healthcheck results.
type: "object"
x-nullable: true
properties:
Status:
description: |
@ -769,13 +772,13 @@ definitions:
description: |
Log contains the last few results (oldest first)
items:
x-nullable: true
$ref: "#/definitions/HealthcheckResult"
HealthcheckResult:
description: |
HealthcheckResult stores information about a single run of a healthcheck probe
type: "object"
x-nullable: true
properties:
Start:
description: |
@ -2188,6 +2191,25 @@ definitions:
type: "string"
x-nullable: false
PluginPrivilege:
description: |
Describes a permission the user has to accept upon installing
the plugin.
type: "object"
x-go-name: "PluginPrivilege"
properties:
Name:
type: "string"
example: "network"
Description:
type: "string"
Value:
type: "array"
items:
type: "string"
example:
- "host"
Plugin:
description: "A plugin for the Engine API"
type: "object"
@ -2970,19 +2992,7 @@ definitions:
PluginPrivilege:
type: "array"
items:
description: |
Describes a permission accepted by the user upon installing the
plugin.
type: "object"
properties:
Name:
type: "string"
Description:
type: "string"
Value:
type: "array"
items:
type: "string"
$ref: "#/definitions/PluginPrivilege"
ContainerSpec:
type: "object"
description: |
@ -4022,73 +4032,71 @@ definitions:
Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found"
ContainerSummary:
type: "array"
items:
type: "object"
properties:
Id:
description: "The ID of this container"
type: "object"
properties:
Id:
description: "The ID of this container"
type: "string"
x-go-name: "ID"
Names:
description: "The names that this container has been given"
type: "array"
items:
type: "string"
x-go-name: "ID"
Names:
description: "The names that this container has been given"
type: "array"
items:
Image:
description: "The name of the image used when creating this container"
type: "string"
ImageID:
description: "The ID of the image that this container was created from"
type: "string"
Command:
description: "Command to run when starting the container"
type: "string"
Created:
description: "When the container was created"
type: "integer"
format: "int64"
Ports:
description: "The ports exposed by this container"
type: "array"
items:
$ref: "#/definitions/Port"
SizeRw:
description: "The size of files that have been created or changed by this container"
type: "integer"
format: "int64"
SizeRootFs:
description: "The total size of all the files in this container"
type: "integer"
format: "int64"
Labels:
description: "User-defined key/value metadata."
type: "object"
additionalProperties:
type: "string"
State:
description: "The state of this container (e.g. `Exited`)"
type: "string"
Status:
description: "Additional human-readable status of this container (e.g. `Exit 0`)"
type: "string"
HostConfig:
type: "object"
properties:
NetworkMode:
type: "string"
Image:
description: "The name of the image used when creating this container"
type: "string"
ImageID:
description: "The ID of the image that this container was created from"
type: "string"
Command:
description: "Command to run when starting the container"
type: "string"
Created:
description: "When the container was created"
type: "integer"
format: "int64"
Ports:
description: "The ports exposed by this container"
type: "array"
items:
$ref: "#/definitions/Port"
SizeRw:
description: "The size of files that have been created or changed by this container"
type: "integer"
format: "int64"
SizeRootFs:
description: "The total size of all the files in this container"
type: "integer"
format: "int64"
Labels:
description: "User-defined key/value metadata."
type: "object"
additionalProperties:
type: "string"
State:
description: "The state of this container (e.g. `Exited`)"
type: "string"
Status:
description: "Additional human-readable status of this container (e.g. `Exit 0`)"
type: "string"
HostConfig:
type: "object"
properties:
NetworkMode:
type: "string"
NetworkSettings:
description: "A summary of the container's network settings"
type: "object"
properties:
Networks:
type: "object"
additionalProperties:
$ref: "#/definitions/EndpointSettings"
Mounts:
type: "array"
items:
$ref: "#/definitions/Mount"
NetworkSettings:
description: "A summary of the container's network settings"
type: "object"
properties:
Networks:
type: "object"
additionalProperties:
$ref: "#/definitions/EndpointSettings"
Mounts:
type: "array"
items:
$ref: "#/definitions/Mount"
Driver:
description: "Driver represents a driver (network, logging, secrets)."
@ -4210,6 +4218,7 @@ definitions:
ContainerState stores container's running state. It's part of ContainerJSONBase
and will be returned by the "inspect" command.
type: "object"
x-nullable: true
properties:
Status:
description: |
@ -4267,7 +4276,6 @@ definitions:
type: "string"
example: "2020-01-06T09:07:59.461876391Z"
Health:
x-nullable: true
$ref: "#/definitions/Health"
SystemVersion:
@ -4366,7 +4374,6 @@ definitions:
type: "string"
example: "2020-06-22T15:49:27.000000000+00:00"
SystemInfo:
type: "object"
properties:
@ -5199,6 +5206,158 @@ definitions:
additionalProperties:
type: "string"
EventActor:
description: |
Actor describes something that generates events, like a container, network,
or a volume.
type: "object"
properties:
ID:
description: "The ID of the object emitting the event"
type: "string"
example: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743"
Attributes:
description: |
Various key/value attributes of the object, depending on its type.
type: "object"
additionalProperties:
type: "string"
example:
com.example.some-label: "some-label-value"
image: "alpine:latest"
name: "my-container"
EventMessage:
description: |
EventMessage represents the information an event contains.
type: "object"
title: "SystemEventsResponse"
properties:
Type:
description: "The type of object emitting the event"
type: "string"
enum: ["builder", "config", "container", "daemon", "image", "network", "node", "plugin", "secret", "service", "volume"]
example: "container"
Action:
description: "The type of event"
type: "string"
example: "create"
Actor:
$ref: "#/definitions/EventActor"
scope:
description: |
Scope of the event. Engine events are `local` scope. Cluster (Swarm)
events are `swarm` scope.
type: "string"
enum: ["local", "swarm"]
time:
description: "Timestamp of event"
type: "integer"
format: "int64"
example: 1629574695
timeNano:
description: "Timestamp of event, with nanosecond accuracy"
type: "integer"
format: "int64"
example: 1629574695515050031
OCIDescriptor:
type: "object"
x-go-name: Descriptor
description: |
A descriptor struct containing digest, media type, and size, as defined in
the [OCI Content Descriptors Specification](https://github.com/opencontainers/image-spec/blob/v1.0.1/descriptor.md).
properties:
mediaType:
description: |
The media type of the object this schema refers to.
type: "string"
example: "application/vnd.docker.distribution.manifest.v2+json"
digest:
description: |
The digest of the targeted content.
type: "string"
example: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
size:
description: |
The size in bytes of the blob.
type: "integer"
format: "int64"
example: 3987495
# TODO Not yet including these fields for now, as they are nil / omitted in our response.
# urls:
# description: |
# List of URLs from which this object MAY be downloaded.
# type: "array"
# items:
# type: "string"
# format: "uri"
# annotations:
# description: |
# Arbitrary metadata relating to the targeted content.
# type: "object"
# additionalProperties:
# type: "string"
# platform:
# $ref: "#/definitions/OCIPlatform"
OCIPlatform:
type: "object"
x-go-name: Platform
description: |
Describes the platform which the image in the manifest runs on, as defined
in the [OCI Image Index Specification](https://github.com/opencontainers/image-spec/blob/v1.0.1/image-index.md).
properties:
architecture:
description: |
The CPU architecture, for example `amd64` or `ppc64`.
type: "string"
example: "arm"
os:
description: |
The operating system, for example `linux` or `windows`.
type: "string"
example: "windows"
os.version:
description: |
Optional field specifying the operating system version, for example on
Windows `10.0.19041.1165`.
type: "string"
example: "10.0.19041.1165"
os.features:
description: |
Optional field specifying an array of strings, each listing a required
OS feature (for example on Windows `win32k`).
type: "array"
items:
type: "string"
example:
- "win32k"
variant:
description: |
Optional field specifying a variant of the CPU, for example `v7` to
specify ARMv7 when architecture is `arm`.
type: "string"
example: "v7"
DistributionInspect:
type: "object"
x-go-name: DistributionInspect
title: "DistributionInspectResponse"
required: [Descriptor, Platforms]
description: |
Describes the result obtained from contacting the registry to retrieve
image metadata.
properties:
Descriptor:
$ref: "#/definitions/OCIDescriptor"
Platforms:
type: "array"
description: |
An array containing all platforms supported by the image.
items:
$ref: "#/definitions/OCIPlatform"
paths:
/containers/json:
get:
@ -5261,7 +5420,9 @@ paths:
200:
description: "no error"
schema:
$ref: "#/definitions/ContainerSummary"
type: "array"
items:
$ref: "#/definitions/ContainerSummary"
examples:
application/json:
- Id: "8dfafdbc3a40"
@ -5627,7 +5788,6 @@ paths:
items:
type: "string"
State:
x-nullable: true
$ref: "#/definitions/ContainerState"
Image:
description: "The container's image ID"
@ -7505,6 +7665,18 @@ paths:
Refer to the [authentication section](#section/Authentication) for
details.
type: "string"
- name: "changes"
in: "query"
description: |
Apply `Dockerfile` instructions to the image that is created,
for example: `changes=ENV DEBUG=true`.
Note that `ENV DEBUG=true` should be URI component encoded.
Supported `Dockerfile` instructions:
`CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`ONBUILD`|`USER`|`VOLUME`|`WORKDIR`
type: "array"
items:
type: "string"
- name: "platform"
in: "query"
description: "Platform in the format os[/arch[/variant]]"
@ -8179,44 +8351,7 @@ paths:
200:
description: "no error"
schema:
type: "object"
title: "SystemEventsResponse"
properties:
Type:
description: "The type of object emitting the event"
type: "string"
Action:
description: "The type of event"
type: "string"
Actor:
type: "object"
properties:
ID:
description: "The ID of the object emitting the event"
type: "string"
Attributes:
description: "Various key/value attributes of the object, depending on its type"
type: "object"
additionalProperties:
type: "string"
time:
description: "Timestamp of event"
type: "integer"
timeNano:
description: "Timestamp of event, with nanosecond accuracy"
type: "integer"
format: "int64"
examples:
application/json:
Type: "container"
Action: "create"
Actor:
ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743"
Attributes:
com.example.some-label: "some-label-value"
image: "alpine"
name: "my-container"
time: 1461943101
$ref: "#/definitions/EventMessage"
400:
description: "bad parameter"
schema:
@ -8531,6 +8666,7 @@ paths:
description: "Exec configuration"
schema:
type: "object"
title: "ExecConfig"
properties:
AttachStdin:
type: "boolean"
@ -8621,6 +8757,7 @@ paths:
in: "body"
schema:
type: "object"
title: "ExecStartConfig"
properties:
Detach:
type: "boolean"
@ -9155,6 +9292,7 @@ paths:
required: true
schema:
type: "object"
title: "NetworkCreateRequest"
required: ["Name"]
properties:
Name:
@ -9265,6 +9403,7 @@ paths:
required: true
schema:
type: "object"
title: "NetworkConnectRequest"
properties:
Container:
type: "string"
@ -9311,6 +9450,7 @@ paths:
required: true
schema:
type: "object"
title: "NetworkDisconnectRequest"
properties:
Container:
type: "string"
@ -9395,20 +9535,7 @@ paths:
schema:
type: "array"
items:
description: |
Describes a permission the user has to accept upon installing
the plugin.
type: "object"
title: "PluginPrivilegeItem"
properties:
Name:
type: "string"
Description:
type: "string"
Value:
type: "array"
items:
type: "string"
$ref: "#/definitions/PluginPrivilege"
example:
- Name: "network"
Description: ""
@ -9484,19 +9611,7 @@ paths:
schema:
type: "array"
items:
description: |
Describes a permission accepted by the user upon installing the
plugin.
type: "object"
properties:
Name:
type: "string"
Description:
type: "string"
Value:
type: "array"
items:
type: "string"
$ref: "#/definitions/PluginPrivilege"
example:
- Name: "network"
Description: ""
@ -9668,19 +9783,7 @@ paths:
schema:
type: "array"
items:
description: |
Describes a permission accepted by the user upon installing the
plugin.
type: "object"
properties:
Name:
type: "string"
Description:
type: "string"
Value:
type: "array"
items:
type: "string"
$ref: "#/definitions/PluginPrivilege"
example:
- Name: "network"
Description: ""
@ -9970,6 +10073,7 @@ paths:
required: true
schema:
type: "object"
title: "SwarmInitRequest"
properties:
ListenAddr:
description: |
@ -10068,6 +10172,7 @@ paths:
required: true
schema:
type: "object"
title: "SwarmJoinRequest"
properties:
ListenAddr:
description: |
@ -10228,6 +10333,7 @@ paths:
required: true
schema:
type: "object"
title: "SwarmUnlockRequest"
properties:
UnlockKey:
description: "The swarm's unlock key."
@ -11339,67 +11445,7 @@ paths:
200:
description: "descriptor and platform information"
schema:
type: "object"
x-go-name: DistributionInspect
title: "DistributionInspectResponse"
required: [Descriptor, Platforms]
properties:
Descriptor:
type: "object"
description: |
A descriptor struct containing digest, media type, and size.
properties:
MediaType:
type: "string"
Size:
type: "integer"
format: "int64"
Digest:
type: "string"
URLs:
type: "array"
items:
type: "string"
Platforms:
type: "array"
description: |
An array containing all platforms supported by the image.
items:
type: "object"
properties:
Architecture:
type: "string"
OS:
type: "string"
OSVersion:
type: "string"
OSFeatures:
type: "array"
items:
type: "string"
Variant:
type: "string"
Features:
type: "array"
items:
type: "string"
examples:
application/json:
Descriptor:
MediaType: "application/vnd.docker.distribution.manifest.v2+json"
Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
Size: 3987495
URLs:
- ""
Platforms:
- Architecture: "amd64"
OS: "linux"
OSVersion: ""
OSFeatures:
- ""
Variant: ""
Features:
- ""
$ref: "#/definitions/DistributionInspect"
401:
description: "Failed authentication or no image found"
schema:
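
Several inline response schemas above are consolidated into named definitions (PluginPrivilege, EventActor, EventMessage, OCIDescriptor, OCIPlatform, DistributionInspect). As a reading aid, a hedged Go sketch that decodes an /events message shaped like the new EventMessage definition; the struct names are illustrative, not the docker client types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal structs matching the EventActor / EventMessage swagger definitions.
type EventActor struct {
	ID         string            `json:"ID"`
	Attributes map[string]string `json:"Attributes"`
}

type EventMessage struct {
	Type     string     `json:"Type"`
	Action   string     `json:"Action"`
	Actor    EventActor `json:"Actor"`
	Scope    string     `json:"scope"`
	Time     int64      `json:"time"`
	TimeNano int64      `json:"timeNano"`
}

func main() {
	raw := []byte(`{"Type":"container","Action":"create",
		"Actor":{"ID":"ede54ee1afda","Attributes":{"image":"alpine:latest","name":"my-container"}},
		"scope":"local","time":1629574695,"timeNano":1629574695515050031}`)
	var m EventMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	fmt.Println(m.Type, m.Action, m.Actor.Attributes["name"]) // container create my-container
}
```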

View File

@ -13,19 +13,26 @@ import (
// CgroupnsMode represents the cgroup namespace mode of the container
type CgroupnsMode string
// cgroup namespace modes for containers
const (
CgroupnsModeEmpty CgroupnsMode = ""
CgroupnsModePrivate CgroupnsMode = "private"
CgroupnsModeHost CgroupnsMode = "host"
)
// IsPrivate indicates whether the container uses its own private cgroup namespace
func (c CgroupnsMode) IsPrivate() bool {
return c == "private"
return c == CgroupnsModePrivate
}
// IsHost indicates whether the container shares the host's cgroup namespace
func (c CgroupnsMode) IsHost() bool {
return c == "host"
return c == CgroupnsModeHost
}
// IsEmpty indicates whether the container cgroup namespace mode is unset
func (c CgroupnsMode) IsEmpty() bool {
return c == ""
return c == CgroupnsModeEmpty
}
// Valid indicates whether the cgroup namespace mode is valid
@ -37,60 +44,69 @@ func (c CgroupnsMode) Valid() bool {
// values are platform specific
type Isolation string
// Isolation modes for containers
const (
IsolationEmpty Isolation = "" // IsolationEmpty is unspecified (same behavior as default)
IsolationDefault Isolation = "default" // IsolationDefault is the default isolation mode on current daemon
IsolationProcess Isolation = "process" // IsolationProcess is process isolation mode
IsolationHyperV Isolation = "hyperv" // IsolationHyperV is HyperV isolation mode
)
// IsDefault indicates the default isolation technology of a container. On Linux this
// is the native driver. On Windows, this is a Windows Server Container.
func (i Isolation) IsDefault() bool {
return strings.ToLower(string(i)) == "default" || string(i) == ""
// TODO consider making isolation-mode strict (case-sensitive)
v := Isolation(strings.ToLower(string(i)))
return v == IsolationDefault || v == IsolationEmpty
}
// IsHyperV indicates the use of a Hyper-V partition for isolation
func (i Isolation) IsHyperV() bool {
return strings.ToLower(string(i)) == "hyperv"
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationHyperV
}
// IsProcess indicates the use of process isolation
func (i Isolation) IsProcess() bool {
return strings.ToLower(string(i)) == "process"
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationProcess
}
const (
// IsolationEmpty is unspecified (same behavior as default)
IsolationEmpty = Isolation("")
// IsolationDefault is the default isolation mode on current daemon
IsolationDefault = Isolation("default")
// IsolationProcess is process isolation mode
IsolationProcess = Isolation("process")
// IsolationHyperV is HyperV isolation mode
IsolationHyperV = Isolation("hyperv")
)
// IpcMode represents the container ipc stack.
type IpcMode string
// IpcMode constants
const (
IPCModeNone IpcMode = "none"
IPCModeHost IpcMode = "host"
IPCModeContainer IpcMode = "container"
IPCModePrivate IpcMode = "private"
IPCModeShareable IpcMode = "shareable"
)
// IsPrivate indicates whether the container uses its own private ipc namespace which can not be shared.
func (n IpcMode) IsPrivate() bool {
return n == "private"
return n == IPCModePrivate
}
// IsHost indicates whether the container shares the host's ipc namespace.
func (n IpcMode) IsHost() bool {
return n == "host"
return n == IPCModeHost
}
// IsShareable indicates whether the container's ipc namespace can be shared with another container.
func (n IpcMode) IsShareable() bool {
return n == "shareable"
return n == IPCModeShareable
}
// IsContainer indicates whether the container uses another container's ipc namespace.
func (n IpcMode) IsContainer() bool {
parts := strings.SplitN(string(n), ":", 2)
return len(parts) > 1 && parts[0] == "container"
return strings.HasPrefix(string(n), string(IPCModeContainer)+":")
}
// IsNone indicates whether container IpcMode is set to "none".
func (n IpcMode) IsNone() bool {
return n == "none"
return n == IPCModeNone
}
// IsEmpty indicates whether container IpcMode is empty
@ -105,9 +121,8 @@ func (n IpcMode) Valid() bool {
// Container returns the name of the container whose ipc namespace is going to be used.
func (n IpcMode) Container() string {
parts := strings.SplitN(string(n), ":", 2)
if len(parts) > 1 && parts[0] == "container" {
return parts[1]
if n.IsContainer() {
return strings.TrimPrefix(string(n), string(IPCModeContainer)+":")
}
return ""
}
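A small usage sketch of the "container:<id>" form that IsContainer and Container now parse via the IPCModeContainer prefix; the container name "db" is illustrative.

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	m := container.IpcMode("container:db")
	fmt.Println(m.IsContainer()) // true
	fmt.Println(m.Container())   // "db"
	fmt.Println(container.IpcMode("private").IsPrivate()) // true
}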
@ -326,7 +341,7 @@ type LogMode string
// Available logging modes
const (
LogModeUnset = ""
LogModeUnset LogMode = ""
LogModeBlocking LogMode = "blocking"
LogModeNonBlock LogMode = "non-blocking"
)

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"

View File

@ -1,33 +1,26 @@
package events // import "github.com/docker/docker/api/types/events"
// Type is used for event-types.
type Type = string
// List of known event types.
const (
// BuilderEventType is the event type that the builder generates
BuilderEventType = "builder"
// ContainerEventType is the event type that containers generate
ContainerEventType = "container"
// DaemonEventType is the event type that daemon generate
DaemonEventType = "daemon"
// ImageEventType is the event type that images generate
ImageEventType = "image"
// NetworkEventType is the event type that networks generate
NetworkEventType = "network"
// PluginEventType is the event type that plugins generate
PluginEventType = "plugin"
// VolumeEventType is the event type that volumes generate
VolumeEventType = "volume"
// ServiceEventType is the event type that services generate
ServiceEventType = "service"
// NodeEventType is the event type that nodes generate
NodeEventType = "node"
// SecretEventType is the event type that secrets generate
SecretEventType = "secret"
// ConfigEventType is the event type that configs generate
ConfigEventType = "config"
BuilderEventType Type = "builder" // BuilderEventType is the event type that the builder generates.
ConfigEventType Type = "config" // ConfigEventType is the event type that configs generate.
ContainerEventType Type = "container" // ContainerEventType is the event type that containers generate.
DaemonEventType Type = "daemon" // DaemonEventType is the event type that the daemon generates.
ImageEventType Type = "image" // ImageEventType is the event type that images generate.
NetworkEventType Type = "network" // NetworkEventType is the event type that networks generate.
NodeEventType Type = "node" // NodeEventType is the event type that nodes generate.
PluginEventType Type = "plugin" // PluginEventType is the event type that plugins generate.
SecretEventType Type = "secret" // SecretEventType is the event type that secrets generate.
ServiceEventType Type = "service" // ServiceEventType is the event type that services generate.
VolumeEventType Type = "volume" // VolumeEventType is the event type that volumes generate.
)
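A brief sketch (hypothetical IDs and attributes) of the now-typed event constants in use when building an events.Message with the non-deprecated Actor/Action fields.

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/events"
)

func main() {
	msg := events.Message{
		Type:   events.ContainerEventType,
		Action: "start",
		Actor: events.Actor{
			ID:         "abc123",
			Attributes: map[string]string{"image": "alpine"},
		},
	}
	fmt.Println(msg.Type, msg.Action, msg.Actor.ID)
}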
// Actor describes something that generates events,
// like a container, or a network, or a volume.
// It has a defined name and a set or attributes.
// It has a defined name and a set of attributes.
// The container attributes are its labels, other actors
// can generate these attributes from other properties.
type Actor struct {
@ -39,11 +32,11 @@ type Actor struct {
type Message struct {
// Deprecated information from JSONMessage.
// With data only in container events.
Status string `json:"status,omitempty"`
ID string `json:"id,omitempty"`
From string `json:"from,omitempty"`
Status string `json:"status,omitempty"` // Deprecated: use Action instead.
ID string `json:"id,omitempty"` // Deprecated: use Actor.ID instead.
From string `json:"from,omitempty"` // Deprecated: use Actor.Attributes["image"] instead.
Type string
Type Type
Action string
Actor Actor
// Engine events are local scope. Cluster events are swarm scope.

View File

@ -8,6 +8,9 @@ import (
// compare compares two version strings
// returns -1 if v1 < v2, 1 if v1 > v2, 0 otherwise.
func compare(v1, v2 string) int {
if v1 == v2 {
return 0
}
var (
currTab = strings.Split(v1, ".")
otherTab = strings.Split(v2, ".")
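The hunk above adds a fast path for identical strings before the segment-by-segment comparison. Below is a standalone sketch of such a dotted-version comparison, not the vendored implementation, and it assumes purely numeric segments (parse errors are treated as zero).

package versions

import (
	"strconv"
	"strings"
)

// compareVersions returns -1 if v1 < v2, 1 if v1 > v2, 0 otherwise.
func compareVersions(v1, v2 string) int {
	if v1 == v2 {
		return 0
	}
	a, b := strings.Split(v1, "."), strings.Split(v2, ".")
	for i := 0; i < len(a) || i < len(b); i++ {
		var x, y int
		if i < len(a) {
			x, _ = strconv.Atoi(a[i]) // missing or non-numeric segments count as 0
		}
		if i < len(b) {
			y, _ = strconv.Atoi(b[i])
		}
		if x < y {
			return -1
		}
		if x > y {
			return 1
		}
	}
	return 0
}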

View File

@ -281,21 +281,6 @@ func ParseHostURL(host string) (*url.URL, error) {
}, nil
}
// CustomHTTPHeaders returns the custom http headers stored by the client.
func (cli *Client) CustomHTTPHeaders() map[string]string {
m := make(map[string]string)
for k, v := range cli.customHTTPHeaders {
m[k] = v
}
return m
}
// SetCustomHTTPHeaders that will be set on every HTTP request made by the client.
// Deprecated: use WithHTTPHeaders when creating the client.
func (cli *Client) SetCustomHTTPHeaders(headers map[string]string) {
cli.customHTTPHeaders = headers
}
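With CustomHTTPHeaders and SetCustomHTTPHeaders removed, the deprecation note points to setting headers at construction time. A sketch of that, with an illustrative header value:

package main

import "github.com/docker/docker/client"

func newClient() (*client.Client, error) {
	return client.NewClientWithOpts(
		client.FromEnv,
		client.WithAPIVersionNegotiation(),
		client.WithHTTPHeaders(map[string]string{"User-Agent": "my-tool/1.0"}),
	)
}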
// Dialer returns a dialer for a raw stream connection, with HTTP/1.1 header, that can be used for proxying the daemon connection.
// Used by `docker dial-stdio` (docker/cli#889).
func (cli *Client) Dialer() func(context.Context) (net.Conn, error) {

View File

@ -1,3 +1,4 @@
//go:build linux || freebsd || openbsd || netbsd || darwin || solaris || illumos || dragonfly
// +build linux freebsd openbsd netbsd darwin solaris illumos dragonfly
package client // import "github.com/docker/docker/client"
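Many files in this diff gain a //go:build line above the existing // +build line: Go 1.17's gofmt adds the new constraint syntax while keeping the old form so earlier toolchains still honor it. A hypothetical file header showing both forms together:

// Both lines express the same constraint; gofmt keeps them in sync.
//go:build linux || freebsd
// +build linux freebsd

package example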

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types/swarm"
)
@ -23,7 +23,7 @@ func (cli *Client) ConfigInspectWithRaw(ctx context.Context, id string) (swarm.C
return swarm.Config{}, nil, wrapResponseError(err, resp, "config", id)
}
body, err := ioutil.ReadAll(resp.body)
body, err := io.ReadAll(resp.body)
if err != nil {
return swarm.Config{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"net/url"
"github.com/docker/docker/api/types"
@ -41,7 +41,7 @@ func (cli *Client) ContainerInspectWithRaw(ctx context.Context, containerID stri
return types.ContainerJSON{}, nil, wrapResponseError(err, serverResp, "container", containerID)
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := io.ReadAll(serverResp.body)
if err != nil {
return types.ContainerJSON{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types"
)
@ -20,7 +20,7 @@ func (cli *Client) ImageInspectWithRaw(ctx context.Context, imageID string) (typ
return types.ImageInspect{}, nil, wrapResponseError(err, serverResp, "image", imageID)
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := io.ReadAll(serverResp.body)
if err != nil {
return types.ImageInspect{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"net/url"
"github.com/docker/docker/api/types"
@ -39,7 +39,7 @@ func (cli *Client) NetworkInspectWithRaw(ctx context.Context, networkID string,
return networkResource, nil, wrapResponseError(err, resp, "network", networkID)
}
body, err := ioutil.ReadAll(resp.body)
body, err := io.ReadAll(resp.body)
if err != nil {
return networkResource, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types/swarm"
)
@ -20,7 +20,7 @@ func (cli *Client) NodeInspectWithRaw(ctx context.Context, nodeID string) (swarm
return swarm.Node{}, nil, wrapResponseError(err, serverResp, "node", nodeID)
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := io.ReadAll(serverResp.body)
if err != nil {
return swarm.Node{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types"
)
@ -20,7 +20,7 @@ func (cli *Client) PluginInspectWithRaw(ctx context.Context, name string) (*type
return nil, nil, wrapResponseError(err, resp, "plugin", name)
}
body, err := ioutil.ReadAll(resp.body)
body, err := io.ReadAll(resp.body)
if err != nil {
return nil, nil, err
}

View File

@ -6,7 +6,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
@ -206,7 +205,7 @@ func (cli *Client) checkResponseErr(serverResp serverResponse) error {
R: serverResp.body,
N: int64(bodyMax),
}
body, err = ioutil.ReadAll(bodyR)
body, err = io.ReadAll(bodyR)
if err != nil {
return err
}
@ -266,7 +265,7 @@ func encodeData(data interface{}) (*bytes.Buffer, error) {
func ensureReaderClosed(response serverResponse) {
if response.body != nil {
// Drain up to 512 bytes and close the body to let the Transport reuse the connection
io.CopyN(ioutil.Discard, response.body, 512)
io.CopyN(io.Discard, response.body, 512)
response.body.Close()
}
}
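The drain-then-close idiom above lets the HTTP transport reuse the underlying connection instead of tearing it down. A generic illustration of the same pattern with net/http (function name is illustrative):

package main

import (
	"io"
	"net/http"
)

func discardBody(resp *http.Response) {
	if resp != nil && resp.Body != nil {
		// Drain up to 512 bytes (best effort, error ignored), then close,
		// so the connection can go back into the transport's pool.
		io.CopyN(io.Discard, resp.Body, 512)
		resp.Body.Close()
	}
}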

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types/swarm"
)
@ -23,7 +23,7 @@ func (cli *Client) SecretInspectWithRaw(ctx context.Context, id string) (swarm.S
return swarm.Secret{}, nil, wrapResponseError(err, resp, "secret", id)
}
body, err := ioutil.ReadAll(resp.body)
body, err := io.ReadAll(resp.body)
if err != nil {
return swarm.Secret{}, nil, err
}

View File

@ -5,7 +5,7 @@ import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"net/url"
"github.com/docker/docker/api/types"
@ -25,7 +25,7 @@ func (cli *Client) ServiceInspectWithRaw(ctx context.Context, serviceID string,
return swarm.Service{}, nil, wrapResponseError(err, serverResp, "service", serviceID)
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := io.ReadAll(serverResp.body)
if err != nil {
return swarm.Service{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types/swarm"
)
@ -20,7 +20,7 @@ func (cli *Client) TaskInspectWithRaw(ctx context.Context, taskID string) (swarm
return swarm.Task{}, nil, wrapResponseError(err, serverResp, "task", taskID)
}
body, err := ioutil.ReadAll(serverResp.body)
body, err := io.ReadAll(serverResp.body)
if err != nil {
return swarm.Task{}, nil, err
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"context"
"encoding/json"
"io/ioutil"
"io"
"github.com/docker/docker/api/types"
)
@ -28,7 +28,7 @@ func (cli *Client) VolumeInspectWithRaw(ctx context.Context, volumeID string) (t
return volume, nil, wrapResponseError(err, resp, "volume", volumeID)
}
body, err := ioutil.ReadAll(resp.body)
body, err := io.ReadAll(resp.body)
if err != nil {
return volume, nil, err
}

View File

@ -100,10 +100,10 @@ func FromStatusCode(err error, statusCode int) error {
err = System(err)
}
default:
logrus.WithFields(logrus.Fields{
logrus.WithError(err).WithFields(logrus.Fields{
"module": "api",
"status_code": fmt.Sprintf("%d", statusCode),
}).Debugf("FIXME: Got an status-code for which error does not match any expected type!!!: %d", statusCode)
"status_code": statusCode,
}).Debug("FIXME: Got an status-code for which error does not match any expected type!!!")
switch {
case statusCode >= 200 && statusCode < 400:

View File

@ -7,9 +7,9 @@ import (
"compress/bzip2"
"compress/gzip"
"context"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"runtime"
@ -23,6 +23,7 @@ import (
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/pools"
"github.com/docker/docker/pkg/system"
"github.com/klauspost/compress/zstd"
"github.com/sirupsen/logrus"
exec "golang.org/x/sys/execabs"
)
@ -84,6 +85,8 @@ const (
Gzip
// Xz is xz compression algorithm.
Xz
// Zstd is zstd compression algorithm.
Zstd
)
const (
@ -122,14 +125,59 @@ func IsArchivePath(path string) bool {
return err == nil
}
const (
zstdMagicSkippableStart = 0x184D2A50
zstdMagicSkippableMask = 0xFFFFFFF0
)
var (
bzip2Magic = []byte{0x42, 0x5A, 0x68}
gzipMagic = []byte{0x1F, 0x8B, 0x08}
xzMagic = []byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}
zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
)
type matcher = func([]byte) bool
func magicNumberMatcher(m []byte) matcher {
return func(source []byte) bool {
return bytes.HasPrefix(source, m)
}
}
// zstdMatcher detects zstd compression algorithm.
// Zstandard compressed data is made of one or more frames.
// There are two frame formats defined by Zstandard: Zstandard frames and Skippable frames.
// See https://tools.ietf.org/id/draft-kucherawy-dispatch-zstd-00.html#rfc.section.2 for more details.
func zstdMatcher() matcher {
return func(source []byte) bool {
if bytes.HasPrefix(source, zstdMagic) {
// Zstandard frame
return true
}
// skippable frame
if len(source) < 8 {
return false
}
// magic number from 0x184D2A50 to 0x184D2A5F.
if binary.LittleEndian.Uint32(source[:4])&zstdMagicSkippableMask == zstdMagicSkippableStart {
return true
}
return false
}
}
// DetectCompression detects the compression algorithm of the source.
func DetectCompression(source []byte) Compression {
for compression, m := range map[Compression][]byte{
Bzip2: {0x42, 0x5A, 0x68},
Gzip: {0x1F, 0x8B, 0x08},
Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
} {
if bytes.HasPrefix(source, m) {
compressionMap := map[Compression]matcher{
Bzip2: magicNumberMatcher(bzip2Magic),
Gzip: magicNumberMatcher(gzipMagic),
Xz: magicNumberMatcher(xzMagic),
Zstd: zstdMatcher(),
}
for _, compression := range []Compression{Bzip2, Gzip, Xz, Zstd} {
fn := compressionMap[compression]
if fn(source) {
return compression
}
}
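With the zstd matcher added, DetectCompression recognizes both regular Zstandard frames and skippable frames. A usage sketch that probes a file header; the file name and the 10-byte read size are illustrative (the buffer only needs to cover the longest magic number).

package main

import (
	"fmt"
	"os"

	"github.com/docker/docker/pkg/archive"
)

func main() {
	f, err := os.Open("layer.tar.zst")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	header := make([]byte, 10)
	n, _ := f.Read(header)
	if archive.DetectCompression(header[:n]) == archive.Zstd {
		fmt.Println("zstd-compressed archive")
	}
}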
@ -216,6 +264,13 @@ func DecompressStream(archive io.Reader) (io.ReadCloser, error) {
}
readBufWrapper := p.NewReadCloserWrapper(buf, xzReader)
return wrapReadCloser(readBufWrapper, cancel), nil
case Zstd:
zstdReader, err := zstd.NewReader(buf)
if err != nil {
return nil, err
}
readBufWrapper := p.NewReadCloserWrapper(buf, zstdReader)
return readBufWrapper, nil
default:
return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension())
}
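The new Zstd case is backed by github.com/klauspost/compress/zstd. A minimal standalone sketch of that decoder on its own; the file names are placeholders.

package main

import (
	"io"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	in, err := os.Open("data.zst")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	dec, err := zstd.NewReader(in)
	if err != nil {
		panic(err)
	}
	defer dec.Close()

	out, err := os.Create("data")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, dec); err != nil {
		panic(err)
	}
}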
@ -342,6 +397,8 @@ func (compression *Compression) Extension() string {
return "tar.gz"
case Xz:
return "tar.xz"
case Zstd:
return "tar.zst"
}
return ""
}
@ -809,8 +866,8 @@ func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)
rebaseName := options.RebaseNames[include]
var (
parentMatched []bool
parentDirs []string
parentMatchInfo []fileutils.MatchInfo
parentDirs []string
)
walkRoot := getWalkRoot(srcPath, include)
@ -845,13 +902,14 @@ func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)
break
}
parentDirs = parentDirs[:len(parentDirs)-1]
parentMatched = parentMatched[:len(parentMatched)-1]
parentMatchInfo = parentMatchInfo[:len(parentMatchInfo)-1]
}
if len(parentMatched) != 0 {
skip, err = pm.MatchesUsingParentResult(relFilePath, parentMatched[len(parentMatched)-1])
var matchInfo fileutils.MatchInfo
if len(parentMatchInfo) != 0 {
skip, matchInfo, err = pm.MatchesUsingParentResults(relFilePath, parentMatchInfo[len(parentMatchInfo)-1])
} else {
skip, err = pm.MatchesOrParentMatches(relFilePath)
skip, matchInfo, err = pm.MatchesUsingParentResults(relFilePath, fileutils.MatchInfo{})
}
if err != nil {
logrus.Errorf("Error matching %s: %v", relFilePath, err)
@ -860,7 +918,7 @@ func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)
if f.IsDir() {
parentDirs = append(parentDirs, relFilePath)
parentMatched = append(parentMatched, skip)
parentMatchInfo = append(parentMatchInfo, matchInfo)
}
}
@ -1284,7 +1342,7 @@ func cmdStream(cmd *exec.Cmd, input io.Reader) (io.ReadCloser, error) {
// of that file as an archive. The archive can only be read once - as soon as reading completes,
// the file will be deleted.
func NewTempArchive(src io.Reader, dir string) (*TempArchive, error) {
f, err := ioutil.TempFile(dir, "")
f, err := os.CreateTemp(dir, "")
if err != nil {
return nil, err
}

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -5,7 +5,6 @@ import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
@ -348,7 +347,7 @@ func ChangesDirs(newDir, oldDir string) ([]Change, error) {
oldRoot, newRoot *FileInfo
)
if oldDir == "" {
emptyDir, err := ioutil.TempDir("", "empty")
emptyDir, err := os.MkdirTemp("", "empty")
if err != nil {
return nil, err
}

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -4,7 +4,6 @@ import (
"archive/tar"
"errors"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
@ -261,7 +260,7 @@ func PrepareArchiveCopy(srcContent io.Reader, srcInfo, dstInfo CopyInfo) (dstDir
// The destination exists as a directory. No alteration
// to srcContent is needed as its contents can be
// simply extracted to the destination directory.
return dstInfo.Path, ioutil.NopCloser(srcContent), nil
return dstInfo.Path, io.NopCloser(srcContent), nil
case dstInfo.Exists && srcInfo.IsDir:
// The destination exists as some type of file and the source
// content is a directory. This is an error condition since

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -4,7 +4,6 @@ import (
"archive/tar"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"runtime"
@ -100,7 +99,7 @@ func UnpackLayer(dest string, layer io.Reader, options *TarOptions) (size int64,
basename := filepath.Base(hdr.Name)
aufsHardlinks[basename] = hdr
if aufsTempdir == "" {
if aufsTempdir, err = ioutil.TempDir("", "dockerplnk"); err != nil {
if aufsTempdir, err = os.MkdirTemp("", "dockerplnk"); err != nil {
return 0, err
}
defer os.RemoveAll(aufsTempdir)

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package archive // import "github.com/docker/docker/pkg/archive"

View File

@ -9,8 +9,30 @@ import (
"regexp"
"strings"
"text/scanner"
"unicode/utf8"
)
// escapeBytes is a bitmap used to check whether a character should be escaped when creating the regex.
var escapeBytes [8]byte
// shouldEscape reports whether a rune should be escaped as part of the regex.
//
// This only includes characters that require escaping in regex but are also NOT valid filepath pattern characters.
// Additionally, '\' is not excluded because there is specific logic to properly handle this, as it's a path separator
// on Windows.
//
// Adapted from regexp::QuoteMeta in go stdlib.
// See https://cs.opensource.google/go/go/+/refs/tags/go1.17.2:src/regexp/regexp.go;l=703-715;drc=refs%2Ftags%2Fgo1.17.2
func shouldEscape(b rune) bool {
return b < utf8.RuneSelf && escapeBytes[b%8]&(1<<(b/8)) != 0
}
func init() {
for _, b := range []byte(`.+()|{}$`) {
escapeBytes[b%8] |= 1 << (b / 8)
}
}
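The escapeBytes table packs the "needs escaping" set into a small bitmap indexed by the low bits of the character, the same trick regexp.QuoteMeta uses. Below is a standalone sketch of that idea using the stdlib's [16]byte layout (an assumption for illustration; it differs from the [8]byte layout above).

package main

import "fmt"

var special [16]byte

func init() {
	for _, b := range []byte(`.+()|{}$`) {
		// Store the bit for byte b at index b%16, bit position b/16.
		special[b%16] |= 1 << (b / 16)
	}
}

func needsEscape(b byte) bool {
	return special[b%16]&(1<<(b/16)) != 0
}

func main() {
	fmt.Println(needsEscape('.'), needsEscape('a')) // true false
}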
// PatternMatcher allows checking paths against a list of patterns
type PatternMatcher struct {
patterns []*Pattern
@ -62,9 +84,9 @@ func NewPatternMatcher(patterns []string) (*PatternMatcher, error) {
//
// Matches is not safe to call concurrently.
//
// This implementation is buggy (it only checks a single parent dir against the
// pattern) and will be removed soon. Use either MatchesOrParentMatches or
// MatchesUsingParentResult instead.
// Deprecated: This implementation is buggy (it only checks a single parent dir
// against the pattern) and will be removed soon. Use either
// MatchesOrParentMatches or MatchesUsingParentResults instead.
func (pm *PatternMatcher) Matches(file string) (bool, error) {
matched := false
file = filepath.FromSlash(file)
@ -150,6 +172,11 @@ func (pm *PatternMatcher) MatchesOrParentMatches(file string) (bool, error) {
// The "file" argument should be a slash-delimited path.
//
// MatchesUsingParentResult is not safe to call concurrently.
//
// Deprecated: this function does not behave correctly in some cases (see
// https://github.com/docker/buildx/issues/850).
//
// Use MatchesUsingParentResults instead.
func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bool) (bool, error) {
matched := parentMatched
file = filepath.FromSlash(file)
@ -174,6 +201,78 @@ func (pm *PatternMatcher) MatchesUsingParentResult(file string, parentMatched bo
return matched, nil
}
// MatchInfo tracks information about parent dir matches while traversing a
// filesystem.
type MatchInfo struct {
parentMatched []bool
}
// MatchesUsingParentResults returns true if "file" matches any of the patterns
// and isn't excluded by any of the subsequent patterns. The functionality is
// the same as Matches, but as an optimization, the caller passes in
// intermediate results from matching the parent directory.
//
// The "file" argument should be a slash-delimited path.
//
// MatchesUsingParentResults is not safe to call concurrently.
func (pm *PatternMatcher) MatchesUsingParentResults(file string, parentMatchInfo MatchInfo) (bool, MatchInfo, error) {
parentMatched := parentMatchInfo.parentMatched
if len(parentMatched) != 0 && len(parentMatched) != len(pm.patterns) {
return false, MatchInfo{}, errors.New("wrong number of values in parentMatched")
}
file = filepath.FromSlash(file)
matched := false
matchInfo := MatchInfo{
parentMatched: make([]bool, len(pm.patterns)),
}
for i, pattern := range pm.patterns {
match := false
// If the parent matched this pattern, we don't need to recheck.
if len(parentMatched) != 0 {
match = parentMatched[i]
}
if !match {
// Skip evaluation if this is an inclusion and the filename
// already matched the pattern, or it's an exclusion and it has
// not matched the pattern yet.
if pattern.exclusion != matched {
continue
}
var err error
match, err = pattern.match(file)
if err != nil {
return false, matchInfo, err
}
// If the zero value of MatchInfo was passed in, we don't have
// any information about the parent dir's match results, and we
// apply the same logic as MatchesOrParentMatches.
if !match && len(parentMatched) == 0 {
if parentPath := filepath.Dir(file); parentPath != "." {
parentPathDirs := strings.Split(parentPath, string(os.PathSeparator))
// Check to see if the pattern matches one of our parent dirs.
for i := range parentPathDirs {
match, _ = pattern.match(strings.Join(parentPathDirs[:i+1], string(os.PathSeparator)))
if match {
break
}
}
}
}
}
matchInfo.parentMatched[i] = match
if match {
matched = !pattern.exclusion
}
}
return matched, matchInfo, nil
}
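Callers walking a tree keep one MatchInfo per directory already visited and hand the parent's result to each child, mirroring the TarWithOptions change earlier in this diff. A sketch of driving the new API from a directory walk; the patterns and root are illustrative.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"

	"github.com/docker/docker/pkg/fileutils"
)

func main() {
	pm, err := fileutils.NewPatternMatcher([]string{"**/*.log", "!keep/*.log"})
	if err != nil {
		panic(err)
	}

	// MatchInfo for every directory already visited, keyed by path.
	parents := map[string]fileutils.MatchInfo{}

	walkErr := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || path == "." {
			return err
		}
		// The walk is pre-order, so a parent's result is always present;
		// top-level entries fall back to the zero MatchInfo.
		parentInfo := parents[filepath.Dir(path)]
		matched, info, err := pm.MatchesUsingParentResults(filepath.ToSlash(path), parentInfo)
		if err != nil {
			return err
		}
		if d.IsDir() {
			parents[path] = info // children reuse this result
		}
		fmt.Println(path, "matched:", matched)
		return nil
	})
	if walkErr != nil {
		panic(walkErr)
	}
}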
// Exclusions returns true if any of the patterns define exclusions
func (pm *PatternMatcher) Exclusions() bool {
return pm.exclusions
@ -256,7 +355,7 @@ func (p *Pattern) compile() error {
} else if ch == '?' {
// "?" is any char except "/"
regStr += "[^" + escSL + "]"
} else if ch == '.' || ch == '$' {
} else if shouldEscape(ch) {
// Escape some regexp special chars that have no meaning
// in golang's filepath.Match
regStr += `\` + string(ch)

View File

@ -1,10 +1,10 @@
//go:build linux || freebsd
// +build linux freebsd
package fileutils // import "github.com/docker/docker/pkg/fileutils"
import (
"fmt"
"io/ioutil"
"os"
"github.com/sirupsen/logrus"
@ -13,7 +13,7 @@ import (
// GetTotalUsedFds Returns the number of used File Descriptors by
// reading it via /proc filesystem.
func GetTotalUsedFds() int {
if fds, err := ioutil.ReadDir(fmt.Sprintf("/proc/%d/fd", os.Getpid())); err != nil {
if fds, err := os.ReadDir(fmt.Sprintf("/proc/%d/fd", os.Getpid())); err != nil {
logrus.Errorf("Error opening /proc/%d/fd: %s", os.Getpid(), err)
} else {
return len(fds)

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package homedir // import "github.com/docker/docker/pkg/homedir"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package homedir // import "github.com/docker/docker/pkg/homedir"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package idtools // import "github.com/docker/docker/pkg/idtools"

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package idtools // import "github.com/docker/docker/pkg/idtools"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package idtools // import "github.com/docker/docker/pkg/idtools"

View File

@ -50,12 +50,12 @@ func NewBytesPipe() *BytesPipe {
// It can allocate new []byte slices in a process of writing.
func (bp *BytesPipe) Write(p []byte) (int, error) {
bp.mu.Lock()
defer bp.mu.Unlock()
written := 0
loop0:
for {
if bp.closeErr != nil {
bp.mu.Unlock()
return written, ErrClosed
}
@ -72,7 +72,6 @@ loop0:
// errBufferFull is an error we expect to get if the buffer is full
if err != nil && err != errBufferFull {
bp.wait.Broadcast()
bp.mu.Unlock()
return written, err
}
@ -100,7 +99,6 @@ loop0:
bp.buf = append(bp.buf, getBuffer(nextCap))
}
bp.wait.Broadcast()
bp.mu.Unlock()
return written, nil
}
@ -126,17 +124,14 @@ func (bp *BytesPipe) Close() error {
// Data could be read only once.
func (bp *BytesPipe) Read(p []byte) (n int, err error) {
bp.mu.Lock()
defer bp.mu.Unlock()
if bp.bufLen == 0 {
if bp.closeErr != nil {
err := bp.closeErr
bp.mu.Unlock()
return 0, err
return 0, bp.closeErr
}
bp.wait.Wait()
if bp.bufLen == 0 && bp.closeErr != nil {
err := bp.closeErr
bp.mu.Unlock()
return 0, err
return 0, bp.closeErr
}
}
@ -161,7 +156,6 @@ func (bp *BytesPipe) Read(p []byte) (n int, err error) {
}
bp.wait.Broadcast()
bp.mu.Unlock()
return
}

View File

@ -2,7 +2,6 @@ package ioutils // import "github.com/docker/docker/pkg/ioutils"
import (
"io"
"io/ioutil"
"os"
"path/filepath"
)
@ -11,7 +10,7 @@ import (
// temporary file and closing it atomically changes the temporary file to
// destination path. Writing and closing concurrently is not allowed.
func NewAtomicFileWriter(filename string, perm os.FileMode) (io.WriteCloser, error) {
f, err := ioutil.TempFile(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
f, err := os.CreateTemp(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
if err != nil {
return nil, err
}
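NewAtomicFileWriter writes to a temporary file in the destination directory and moves it into place on Close. A usage sketch; the path, permissions, and contents are placeholders.

package main

import "github.com/docker/docker/pkg/ioutils"

func writeConfig(path string, data []byte) error {
	w, err := ioutils.NewAtomicFileWriter(path, 0o644)
	if err != nil {
		return err
	}
	if _, err := w.Write(data); err != nil {
		w.Close()
		return err
	}
	return w.Close() // the temporary file replaces the destination here
}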
@ -94,7 +93,7 @@ type AtomicWriteSet struct {
// commit. If no temporary directory is given the system
// default is used.
func NewAtomicWriteSet(tmpDir string) (*AtomicWriteSet, error) {
td, err := ioutil.TempDir(tmpDir, "write-set-")
td, err := os.MkdirTemp(tmpDir, "write-set-")
if err != nil {
return nil, err
}

View File

@ -2,9 +2,12 @@ package ioutils // import "github.com/docker/docker/pkg/ioutils"
import (
"context"
"crypto/sha256"
"encoding/hex"
"io"
// make sure crypto.SHA256, crypto.sha512 and crypto.SHA384 are registered
// TODO remove once https://github.com/opencontainers/go-digest/pull/64 is merged.
_ "crypto/sha256"
_ "crypto/sha512"
)
// ReadCloserWrapper wraps an io.Reader, and implements an io.ReadCloser
@ -49,15 +52,6 @@ func NewReaderErrWrapper(r io.Reader, closer func()) io.Reader {
}
}
// HashData returns the sha256 sum of src.
func HashData(src io.Reader) (string, error) {
h := sha256.New()
if _, err := io.Copy(h, src); err != nil {
return "", err
}
return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil
}
// OnEOFReader wraps an io.ReadCloser and a function
// the function will run at the end of file or close the file.
type OnEOFReader struct {

View File

@ -1,10 +1,11 @@
//go:build !windows
// +build !windows
package ioutils // import "github.com/docker/docker/pkg/ioutils"
import "io/ioutil"
import "os"
// TempDir on Unix systems is equivalent to ioutil.TempDir.
// TempDir on Unix systems is equivalent to os.MkdirTemp.
func TempDir(dir, prefix string) (string, error) {
return ioutil.TempDir(dir, prefix)
return os.MkdirTemp(dir, prefix)
}

View File

@ -1,14 +1,14 @@
package ioutils // import "github.com/docker/docker/pkg/ioutils"
import (
"io/ioutil"
"os"
"github.com/docker/docker/pkg/longpath"
)
// TempDir is the equivalent of ioutil.TempDir, except that the result is in Windows longpath format.
// TempDir is the equivalent of os.MkdirTemp, except that the result is in Windows longpath format.
func TempDir(dir, prefix string) (string, error) {
tempDir, err := ioutil.TempDir(dir, prefix)
tempDir, err := os.MkdirTemp(dir, prefix)
if err != nil {
return "", err
}

View File

@ -1,8 +1,8 @@
package namesgenerator // import "github.com/docker/docker/pkg/namesgenerator"
import (
"fmt"
"math/rand"
"strconv"
)
var (
@ -840,13 +840,13 @@ var (
// integer between 0 and 10 will be added to the end of the name, e.g `focused_turing3`
func GetRandomName(retry int) string {
begin:
name := fmt.Sprintf("%s_%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))]) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand)
name := left[rand.Intn(len(left))] + "_" + right[rand.Intn(len(right))] //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand)
if name == "boring_wozniak" /* Steve Wozniak is not boring */ {
goto begin
}
if retry > 0 {
name = fmt.Sprintf("%s%d", name, rand.Intn(10)) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand)
name += strconv.Itoa(rand.Intn(10)) //nolint:gosec // G404: Use of weak random number generator (math/rand instead of crypto/rand)
}
return name
}
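A usage sketch of GetRandomName; a retry value greater than zero appends a single random digit, as in the `focused_turing3` example from the comment above.

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/namesgenerator"
)

func main() {
	fmt.Println(namesgenerator.GetRandomName(0)) // e.g. "focused_turing"
	fmt.Println(namesgenerator.GetRandomName(1)) // e.g. "focused_turing3"
}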

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,9 +1,9 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"
import (
"io/ioutil"
"os"
"path/filepath"
)
@ -63,5 +63,5 @@ func OpenFileSequential(name string, flag int, perm os.FileMode) (*os.File, erro
// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func TempFileSequential(dir, prefix string) (f *os.File, err error) {
return ioutil.TempFile(dir, prefix)
return os.CreateTemp(dir, prefix)
}

View File

@ -258,7 +258,7 @@ func nextSuffix() string {
return strconv.Itoa(int(1e9 + r%1e9))[1:]
}
// TempFileSequential is a copy of ioutil.TempFile, modified to use sequential
// TempFileSequential is a copy of os.CreateTemp, modified to use sequential
// file access. Below is the original comment from golang:
// TempFile creates a new temporary file in the directory dir
// with a name beginning with prefix, opens the file for reading

View File

@ -1,29 +1,18 @@
package system // import "github.com/docker/docker/pkg/system"
import (
"os"
"github.com/sirupsen/logrus"
)
var (
// containerdRuntimeSupported determines if ContainerD should be the runtime.
// As of March 2019, this is an experimental feature.
// containerdRuntimeSupported determines if containerd should be the runtime.
containerdRuntimeSupported = false
)
// InitContainerdRuntime sets whether to use ContainerD for runtime
// on Windows. This is an experimental feature still in development, and
// also requires an environment variable to be set (so as not to turn the
// feature on from simply experimental which would also mean LCOW.
func InitContainerdRuntime(experimental bool, cdPath string) {
if experimental && len(cdPath) > 0 && len(os.Getenv("DOCKER_WINDOWS_CONTAINERD_RUNTIME")) > 0 {
logrus.Warnf("Using ContainerD runtime. This feature is experimental")
// InitContainerdRuntime sets whether to use containerd for runtime on Windows.
func InitContainerdRuntime(cdPath string) {
if len(cdPath) > 0 {
containerdRuntimeSupported = true
}
}
// ContainerdRuntimeSupported returns true if the use of ContainerD runtime is supported.
// ContainerdRuntimeSupported returns true if the use of containerd runtime is supported.
func ContainerdRuntimeSupported() bool {
return containerdRuntimeSupported
}

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !linux && !windows
// +build !linux,!windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"
@ -6,12 +7,6 @@ import (
"golang.org/x/sys/unix"
)
// Mknod creates a filesystem node (file, device special file or named pipe) named path
// with attributes specified by mode and dev.
func Mknod(path string, mode uint32, dev int) error {
return unix.Mknod(path, mode, dev)
}
// Mkdev is used to build the value of linux devices (in /dev/) which specifies major
// and minor number of the newly created device special file.
// Linux device nodes are a bit weird due to backwards compat with 16 bit device nodes.

View File

@ -0,0 +1,14 @@
//go:build freebsd
// +build freebsd
package system // import "github.com/docker/docker/pkg/system"
import (
"golang.org/x/sys/unix"
)
// Mknod creates a filesystem node (file, device special file or named pipe) named path
// with attributes specified by mode and dev.
func Mknod(path string, mode uint32, dev int) error {
return unix.Mknod(path, mode, uint64(dev))
}

View File

@ -0,0 +1,14 @@
//go:build !freebsd && !windows
// +build !freebsd,!windows
package system // import "github.com/docker/docker/pkg/system"
import (
"golang.org/x/sys/unix"
)
// Mknod creates a filesystem node (file, device special file or named pipe) named path
// with attributes specified by mode and dev.
func Mknod(path string, mode uint32, dev int) error {
return unix.Mknod(path, mode, dev)
}
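Mkdev packs the major and minor numbers into the dev argument that Mknod expects. A sketch for Linux (requires root/CAP_MKNOD); the path and the 1,3 pair (the "null" character device) are illustrative.

package main

import (
	"github.com/docker/docker/pkg/system"
	"golang.org/x/sys/unix"
)

func main() {
	// Build the device number, then create a character device node.
	dev := int(unix.Mkdev(1, 3))
	if err := system.Mknod("/tmp/my-null", unix.S_IFCHR|0o600, dev); err != nil {
		panic(err)
	}
}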

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,10 +1,11 @@
//go:build linux || freebsd || darwin
// +build linux freebsd darwin
package system // import "github.com/docker/docker/pkg/system"
import (
"fmt"
"io/ioutil"
"os"
"strings"
"syscall"
@ -30,7 +31,7 @@ func KillProcess(pid int) {
// http://man7.org/linux/man-pages/man5/proc.5.html
func IsProcessZombie(pid int) (bool, error) {
statPath := fmt.Sprintf("/proc/%d/stat", pid)
dataBytes, err := ioutil.ReadFile(statPath)
dataBytes, err := os.ReadFile(statPath)
if err != nil {
return false, err
}

View File

@ -1,3 +1,4 @@
//go:build !darwin && !windows
// +build !darwin,!windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build freebsd || netbsd
// +build freebsd netbsd
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build linux || freebsd
// +build linux freebsd
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !linux && !freebsd
// +build !linux,!freebsd
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package system // import "github.com/docker/docker/pkg/system"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package registry // import "github.com/docker/docker/registry"

Some files were not shown because too many files have changed in this diff.