Compare commits

...

83 Commits

Author SHA1 Message Date
Tõnis Tiigi
18ccba0720 Merge pull request #3068 from crazy-max/GHSA-m4gq-fm9h-8q75
cherry-picks for CVE-2025-0495
2025-03-17 11:37:50 -07:00
CrazyMax
f5196f1167 localstate: remove definition and inputs fields from group
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-17 18:14:55 +01:00
Tonis Tiigi
ef99381eab otel: avoid tracing raw os arguments
A user might pass a value that they don't expect to
be kept in trace storage. For example, some cache backends
allow passing authentication tokens with a flag.

Instead use known primary config values as attributes
of the root span.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-17 18:14:52 +01:00
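The approach described in the commit message above can be sketched as an allowlist filter over configuration values (illustrative only; `safeAttrs` and its key set are assumptions, not the actual buildx code):

```go
package main

import "fmt"

// safeAttrs keeps only allowlisted configuration keys, so values such as
// authentication tokens passed via arbitrary flags never reach trace storage.
// The allowlist below is a made-up example, not the real buildx attribute set.
func safeAttrs(cfg map[string]string) map[string]string {
	allowed := map[string]bool{"builder": true, "provenance": true, "tag": true}
	out := map[string]string{}
	for k, v := range cfg {
		if allowed[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	cfg := map[string]string{
		"builder":    "default",
		"cache-auth": "secret-token", // must not end up in the trace
	}
	fmt.Println(safeAttrs(cfg)) // only the allowlisted key survives
}
```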
CrazyMax
00fdcd38ab Merge pull request #3062 from crazy-max/builder-error-boot
builder: return error if a node fails to boot
2025-03-13 18:02:13 +01:00
Tõnis Tiigi
97f1d47464 Merge pull request #3063 from crazy-max/driver-ctn-gpu-request
driver: request gpu when creating container builder
2025-03-13 09:56:10 -07:00
CrazyMax
337578242d driver: request gpu when creating container builder
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-13 17:36:37 +01:00
CrazyMax
503a8925d2 builder: return error if a node fails to boot
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-12 16:05:16 +01:00
Tõnis Tiigi
0d708c0bc2 Merge pull request #3058 from crazy-max/buildkit-0.20.1
vendor: github.com/moby/buildkit v0.20.1
2025-03-11 09:30:42 -07:00
Tõnis Tiigi
3a7523a117 Merge pull request #3057 from crazy-max/update-compose
vendor: update compose-go to v2.4.8
2025-03-11 09:09:46 -07:00
CrazyMax
5dc1a3308d Merge pull request #3040 from crazy-max/ci-fix-no-space-left
ci: fix faulty bin-image job
2025-03-11 16:04:39 +01:00
CrazyMax
eb78253dfd Merge pull request #3055 from tonistiigi/history-queryrecord
history: generalize query loading
2025-03-11 15:10:00 +01:00
CrazyMax
5f8b78a113 vendor: github.com/moby/buildkit v0.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-11 15:07:47 +01:00
CrazyMax
67d3ed34e4 vendor: update compose-go to v2.4.8
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-11 14:56:19 +01:00
Tõnis Tiigi
b88423be50 Merge pull request #3053 from tonistiigi/modernize-fixes
lint: apply x/tools/modernize fixes and validation
2025-03-10 18:37:51 -07:00
Tonis Tiigi
c1e2ae5636 history: generalize query loading
Some commands (logs/open) were still missing offset handling.
Now all commands use the same reference parsing/sort.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-10 15:51:03 -07:00
Tõnis Tiigi
23afb70e40 Merge pull request #3039 from tonistiigi/history-import
history: add history import command
2025-03-10 10:09:36 -07:00
CrazyMax
812b42b329 history: desktop build backend not yet supported on WSL
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-10 17:12:21 +01:00
Tonis Tiigi
d5d3d3d502 lint: apply x/tools/modernize fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-07 16:37:24 -08:00
Tõnis Tiigi
e19c729d3e Merge pull request #3049 from tonistiigi/history-inspect-index
history: allow index based inspect of builds
2025-03-06 11:09:36 -08:00
CrazyMax
aefa49c4fa Merge pull request #3044 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.8
build(deps): bump peter-evans/create-pull-request from 7.0.7 to 7.0.8
2025-03-06 16:23:26 +01:00
dependabot[bot]
7d927ee604 build(deps): bump peter-evans/create-pull-request from 7.0.7 to 7.0.8
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.7 to 7.0.8.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](dd2324fc52...271a8d0340)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-06 14:58:27 +00:00
Tonis Tiigi
058c098c8c history: allow index based inspect of builds
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-05 21:33:24 -08:00
Tõnis Tiigi
7b7dbe88b1 Merge pull request #3046 from crazy-max/buildkit-0.20.1
dockerfile: update buildkit to 0.20.1
2025-03-05 17:20:14 -08:00
Tonis Tiigi
cadf4a5893 history: add multi-file/stdin import
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-05 11:12:52 -08:00
CrazyMax
6cd9fef556 dockerfile: update buildkit to 0.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 17:13:03 +01:00
Tonis Tiigi
963b9ca30d history: print urls after importing builds
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-04 16:13:49 -08:00
CrazyMax
4636c8051a ci: fix faulty bin-image job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 00:47:17 +01:00
Tõnis Tiigi
e23695d50d Merge pull request #3042 from crazy-max/ci-bump-ubuntu
ci: bump to ubuntu-24.04
2025-03-04 15:41:06 -08:00
CrazyMax
6eff9b2d51 ci: update install-k3s step to fix issue with latest ubuntu runners
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 00:21:09 +01:00
CrazyMax
fcbfc85f42 ci: bump to ubuntu-24.04
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-04 23:20:01 +01:00
Tõnis Tiigi
9a204c44c3 Merge pull request #3031 from crazy-max/bake-set-append
bake: support += operator to append with overrides
2025-03-04 09:33:57 -08:00
CrazyMax
4c6eba5acd bake: support += operator to append with overrides
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-04 13:29:41 +01:00
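The `+=` syntax this commit introduces (e.g. `--set 'webapp.tags+=extra'`) hinges on stripping a trailing `+` from the override key before splitting it. A minimal sketch of that parsing, mirroring the `TrimSuffix`/`HasSuffix` logic visible in the diff below (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOverrideKey splits a --set key into its name and an append flag:
// a trailing "+" (as in "webapp.tags+=foo") means append to the existing
// list value instead of replacing it.
func parseOverrideKey(key string) (name string, appendTo bool) {
	name = strings.TrimSuffix(key, "+")
	appendTo = strings.HasSuffix(key, "+")
	return name, appendTo
}

func main() {
	for _, k := range []string{"webapp.tags", "webapp.tags+"} {
		name, app := parseOverrideKey(k)
		fmt.Printf("%s -> name=%s append=%v\n", k, name, app)
	}
}
```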
Tonis Tiigi
fea7459880 history: add history import command
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-03 22:52:05 -08:00
Tõnis Tiigi
e2d52a8465 Merge pull request #2901 from crazy-max/netbsd
build and test netbsd
2025-03-03 16:43:02 -08:00
Tõnis Tiigi
48a591b1e1 Merge pull request #3032 from crazy-max/bake-secrets-dupes
correctly remove duplicated secrets and ssh keys
2025-03-03 16:40:14 -08:00
CrazyMax
128acdb471 Merge pull request #3027 from LaurentGoderre/fix-attest-extra-args
Fix attest extra arguments
2025-03-03 16:28:02 +01:00
CrazyMax
411d3f8cea Merge pull request #3035 from co63oc/fix1
Fix typos
2025-03-03 14:07:56 +01:00
co63oc
7925a96726 Fix
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
2025-03-02 21:20:50 +08:00
Laurent Goderre
b06bddfee6 Fix handling of attest extra arguments
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2025-02-28 12:09:32 -05:00
CrazyMax
fe17ebda89 correctly remove duplicated secrets and ssh keys
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-28 15:24:16 +01:00
CrazyMax
4ed1e07f16 Merge pull request #3030 from thaJeztah/bump_docker_28.0.1
vendor: github.com/docker/docker, docker/cli v28.0.1
2025-02-28 10:54:35 +01:00
Sebastiaan van Stijn
f49593ce2c vendor: github.com/docker/docker, docker/cli v28.0.1
diffs:

- https://github.com/docker/docker/compare/v28.0.0...v28.0.1
- https://github.com/docker/cli/compare/v28.0.0...v28.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-28 00:50:48 +01:00
Laurent Goderre
4e91fe6507 Add attest extra args tests
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2025-02-27 17:10:30 -05:00
CrazyMax
921b576f3a Merge pull request #3023 from tonistiigi/dockerd-push-fix
avoid double pushing with docker driver with containerd
2025-02-25 16:44:00 +01:00
CrazyMax
548c80ab5a Merge pull request #3024 from tonistiigi/imagetools-push-tag-fix
imagetools: avoid multiple tag pushes on create
2025-02-25 16:36:37 +01:00
CrazyMax
f3a4740d5f Merge pull request #3026 from thaJeztah/bump_engine_28.0
vendor: docker/docker, docker/cli v28.0.0
2025-02-25 16:35:56 +01:00
Sebastiaan van Stijn
89917dc696 vendor: docker/docker, docker/cli v28.0.0
no code changes in vendored code

full diff:

- https://github.com/docker/cli/compare/v28.0.0-rc.3...v28.0.0
- https://github.com/docker/docker/compare/v28.0.0-rc.3...v28.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-25 12:37:44 +01:00
CrazyMax
f7276201ac Merge pull request #3021 from jsternberg/empty-cache-to-override
buildflags: skip empty cache entries when parsing
2025-02-25 10:48:39 +01:00
CrazyMax
beb9f515c0 Merge pull request #3022 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.7
build(deps): bump peter-evans/create-pull-request from 7.0.6 to 7.0.7
2025-02-25 09:54:20 +01:00
Tonis Tiigi
4f7d145c0e avoid double pushing with docker driver with containerd
In this mode BuildKit can push directly, so pushing manually
with docker would result in the image being pushed twice.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-24 16:48:57 -08:00
Tonis Tiigi
ccdf63c644 imagetools: avoid multiple tag pushes on create
Ensure only the final manifest is pushed by tag and intermediate
blobs are only pushed by digest, to avoid the tag temporarily
pointing to the wrong image.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-24 16:48:15 -08:00
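The ordering described above can be sketched with simplified types (`desc` and `pushOrder` are illustrative stand-ins, not the actual imagetools API): child manifests go by digest first, and the tag only flips at the very end, once the complete index exists.

```go
package main

import "fmt"

type desc struct{ digest string }

// pushOrder returns the sequence of references to push: every child
// manifest by digest (the tag is untouched while they upload), then the
// final index by tag, so the tag never points at a partial image.
func pushOrder(repo, tag string, children []desc, index desc) []string {
	var refs []string
	for _, c := range children {
		refs = append(refs, repo+"@"+c.digest) // digest-only reference
	}
	refs = append(refs, repo+":"+tag) // tag flips last, to the full index
	return refs
}

func main() {
	refs := pushOrder("registry.example.com/app", "latest",
		[]desc{{"sha256:aaa"}, {"sha256:bbb"}}, desc{"sha256:idx"})
	fmt.Println(refs)
}
```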
dependabot[bot]
9a6b8754b1 build(deps): bump peter-evans/create-pull-request from 7.0.6 to 7.0.7
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.6 to 7.0.7.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](67ccf781d6...dd2324fc52)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 18:27:58 +00:00
Jonathan A. Sternberg
e75ac22ba6 buildflags: skip empty cache entries when parsing
Broken in 11c84973ef. The section to skip
an empty input was accidentally removed when some code was refactored to
fix a separate issue.

This skips empty cache entries which allows disabling the `cache-from` and
`cache-to` entries from the command line overrides.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-24 10:09:02 -06:00
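The restored skip can be sketched as follows (a simplified stand-in for buildx's `ParseCacheEntry`, which additionally parses `type=...,ref=...` attributes): empty strings coming from command-line overrides are dropped instead of raising an error, which is what lets an override clear an inherited `cache-from`/`cache-to` value.

```go
package main

import "fmt"

// parseCacheEntries drops empty entries so an empty command-line override
// disables a cache entry inherited from the bake file rather than erroring.
func parseCacheEntries(in []string) []string {
	var out []string
	for _, s := range in {
		if s == "" {
			continue // empty override clears the value
		}
		out = append(out, s)
	}
	return out
}

func main() {
	fmt.Println(parseCacheEntries([]string{"", "type=gha"})) // prints [type=gha]
}
```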
Shaun Thompson
62f5cc7c80 Merge pull request #3017 from tonistiigi/remove-debug
remove accidental debug
2025-02-20 20:08:16 -05:00
Tonis Tiigi
6272ae1afa remove accidental debug
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-20 15:41:13 -08:00
CrazyMax
accfbf6e24 Merge pull request #2997 from jsternberg/bake-set-annotations
bake: allow annotations to be set on the command line
2025-02-20 17:53:48 +01:00
CrazyMax
af2d8fe555 build and test netbsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 13:04:48 +01:00
CrazyMax
18f4275a92 Merge pull request #2995 from crazy-max/ci-infer-goversion-bsd
ci: infer go version from workflow for bsd tests
2025-02-20 13:04:19 +01:00
CrazyMax
221a608b3c Merge pull request #3014 from crazy-max/dockerfile-docker-28
Dockerfile: update to docker v28.0.0
2025-02-20 11:36:06 +01:00
CrazyMax
cc0391eba5 ci: infer go version from workflow for bsd tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 11:29:40 +01:00
CrazyMax
aef388bf7a Dockerfile: update to docker v28.0.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 11:19:18 +01:00
CrazyMax
80c16bc28c Merge pull request #3013 from jsternberg/buildkit-bump
ci: update buildkit to 0.20.0
2025-02-20 10:57:02 +01:00
Jonathan A. Sternberg
75160643e1 ci: update buildkit to 0.20.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-19 15:21:14 -06:00
Jonathan A. Sternberg
ad18ffc018 Merge pull request #3010 from jsternberg/vendor-update
vendor: github.com/moby/buildkit v0.20.0
2025-02-19 13:30:37 -06:00
Jonathan A. Sternberg
80c3832c94 vendor: github.com/moby/buildkit v0.20.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-19 13:17:40 -06:00
Jonathan A. Sternberg
7762ab2c38 Merge pull request #3008 from thaJeztah/bump_engine_28.0_rc3
vendor: github.com/docker/docker, docker/cli v28.0.0-rc.3
2025-02-19 11:59:57 -06:00
Sebastiaan van Stijn
b973de2dd3 vendor: github.com/docker/cli v28.0.0-rc.3
no significant changes, only linting fixes

full diff: https://github.com/docker/cli/compare/v28.0.0-rc.2...v28.0.0-rc.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-19 13:39:40 +01:00
Sebastiaan van Stijn
352ce7e875 vendor: github.com/docker/docker v28.0.0-rc.3
no code changes in vendor, only updated swagger file

full diff: https://github.com/docker/docker/compare/v28.0.0-rc.2...v28.0.0-rc.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-19 13:37:43 +01:00
CrazyMax
cdfc1ed750 Merge pull request #2994 from tonistiigi/device-entitlements
support for device entitlement in build and bake
2025-02-18 22:28:23 +01:00
CrazyMax
d0d3433b12 vendor: update buildkit to v0.20.0-rc3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-18 21:59:36 +01:00
CrazyMax
b04d39494f Merge pull request #3001 from crazy-max/fix-gha-cache-v2
cache: enable gha cache backend if cache service v2 detected
2025-02-18 21:24:14 +01:00
CrazyMax
52f503e806 Merge pull request #3003 from tonistiigi/debug-progress-fix
progress: fix race on pausing progress on debug shell
2025-02-18 10:58:51 +01:00
Tonis Tiigi
79a978484d progress: fix race on pausing progress on debug shell
The current progress writer pauses/unpauses the printer
by recreating its internal channels.

This conflicts with a change that added sync.Once to Wait
to allow it to be called multiple times without erroring.

In the debug shell this could mean that a new progress printer
showed up because the old one was never closed.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-17 21:02:49 -08:00
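The Wait idempotency mentioned above can be sketched like this (a simplified shape, not the actual buildx printer): `sync.Once` makes a channel-closing `Wait` safe to call repeatedly, which is exactly what breaks if pause/unpause recreates the channel without resetting the guard.

```go
package main

import "sync"

// printer is a minimal stand-in for a progress printer whose Wait closes
// an internal done channel. sync.Once makes repeated Wait calls safe;
// without it, the second call would panic on a double close.
type printer struct {
	done     chan struct{}
	waitOnce sync.Once
}

func newPrinter() *printer { return &printer{done: make(chan struct{})} }

func (p *printer) Wait() {
	p.waitOnce.Do(func() { close(p.done) })
	<-p.done // returns immediately once closed
}

func main() {
	p := newPrinter()
	p.Wait()
	p.Wait() // safe: the Once already fired
}
```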
CrazyMax
f7992033bf cache: fix gha cache url handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-17 19:01:13 +01:00
CrazyMax
73f61aa338 cache: enable gha cache backend if cache service v2 detected
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-17 18:13:12 +01:00
CrazyMax
faa573f484 Merge pull request #2998 from thaJeztah/bump_docker
vendor: docker/docker, docker/cli v28.0.0-rc.2
2025-02-17 17:08:43 +01:00
Sebastiaan van Stijn
0a4a1babd1 vendor: github.com/docker/cli v28.0.0-rc.2
full diff: https://github.com/docker/cli/compare/v28.0.0-rc.1...v28.0.0-rc.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-17 16:43:59 +01:00
Sebastiaan van Stijn
461bd9e5d1 vendor: github.com/docker/docker v28.0.0-rc.2
full diff: https://github.com/docker/docker/compare/v28.0.0-rc.1...v28.0.0-rc.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-17 16:43:51 +01:00
Jonathan A. Sternberg
d6fdf83f45 bake: allow annotations to be set on the command line
Annotations were not merged correctly. The overrides in `ArrValue` would
be merged, but the section of code setting them from the command line
did not include `annotations` in the list of available attributes so the
command line option was completely discarded.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-14 11:57:30 -06:00
CrazyMax
ef4e9fea83 Merge pull request #2992 from crazy-max/docker-28
vendor: docker, docker/cli v28.0.0-rc.1
2025-02-14 14:06:09 +01:00
Tõnis Tiigi
0c296fe857 support for device entitlement in build and bake
Allow access to CDI devices in BuildKit v0.20.0+ for
devices that are not automatically allowed to be used by
everyone in the BuildKit configuration.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-14 11:51:47 +01:00
Sebastiaan van Stijn
2dc0350ffe vendor: github.com/docker/cli/v28.0.0-rc.1
full diff: https://github.com/docker/cli/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-13 13:53:45 +01:00
Sebastiaan van Stijn
b85fc5c484 vendor: github.com/docker/docker/v28.0.0-rc.1
full diff: https://github.com/docker/docker/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-13 13:53:44 +01:00
383 changed files with 5812 additions and 4902 deletions

View File

@@ -54,9 +54,9 @@ jobs:
 - master
 - latest
 - buildx-stable-1
+- v0.20.1
 - v0.19.0
 - v0.18.2
-- v0.17.2
 worker:
 - docker-container
 - remote
@@ -76,6 +76,16 @@ jobs:
 - worker: docker+containerd # same as docker, but with containerd snapshotter
 pkg: ./tests
 mode: experimental
+- worker: "docker@27.5"
+pkg: ./tests
+- worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
+pkg: ./tests
+- worker: "docker@27.5"
+pkg: ./tests
+mode: experimental
+- worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
+pkg: ./tests
+mode: experimental
 - worker: "docker@26.1"
 pkg: ./tests
 - worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
@@ -248,12 +258,17 @@ jobs:
 matrix:
 os:
 - freebsd
+- netbsd
 - openbsd
 steps:
 -
 name: Prepare
 run: |
 echo "VAGRANT_FILE=hack/Vagrantfile.${{ matrix.os }}" >> $GITHUB_ENV
+# Sets semver Go version to be able to download tarball during vagrant setup
+goVersion=$(curl --silent "https://go.dev/dl/?mode=json&include=all" | jq -r '.[].files[].version' | uniq | sed -e 's/go//' | sort -V | grep $GO_VERSION | tail -1)
+echo "GO_VERSION=$goVersion" >> $GITHUB_ENV
 -
 name: Checkout
 uses: actions/checkout@v4
@@ -396,6 +411,15 @@ jobs:
 - test-unit
 if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
 steps:
+-
+name: Free disk space
+uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
+with:
+android: true
+dotnet: true
+haskell: true
+large-packages: true
+swap-storage: true
 -
 name: Set up QEMU
 uses: docker/setup-qemu-action@v3

View File

@@ -77,7 +77,7 @@ jobs:
 VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
 -
 name: Create PR on docs repo
-uses: peter-evans/create-pull-request@67ccf781d68cd99b580ae25a5c18a1cc84ffff1f # v7.0.6
+uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
 with:
 token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
 push-to-fork: docker-tools-robot/docker.github.io

View File

@@ -29,7 +29,7 @@ env:
 SETUP_BUILDX_VERSION: "edge"
 SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
 DESTDIR: "./bin"
-K3S_VERSION: "v1.21.2-k3s1"
+K3S_VERSION: "v1.32.2+k3s1"
 jobs:
 build:
@@ -65,7 +65,7 @@ jobs:
 retention-days: 7
 driver:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-24.04
 needs:
 - build
 strategy:
@@ -153,7 +153,7 @@ jobs:
 -
 name: Install k3s
 if: matrix.driver == 'kubernetes'
-uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
+uses: crazy-max/.github/.github/actions/install-k3s@7730d1434364d4b9aded32735b078a7ace5ea79a
 with:
 version: ${{ env.K3S_VERSION }}
 -

View File

@@ -5,20 +5,23 @@ ARG ALPINE_VERSION=3.21
 ARG XX_VERSION=1.6.1
 # for testing
-ARG DOCKER_VERSION=28.0.0-rc.1
+ARG DOCKER_VERSION=28.0.0
+ARG DOCKER_VERSION_ALT_27=27.5.1
 ARG DOCKER_VERSION_ALT_26=26.1.3
 ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
 ARG GOTESTSUM_VERSION=v1.12.0
 ARG REGISTRY_VERSION=2.8.3
-ARG BUILDKIT_VERSION=v0.19.0
+ARG BUILDKIT_VERSION=v0.20.1
 ARG UNDOCK_VERSION=0.9.0
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
 FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS golatest
 FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
 FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
-FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt
-FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt
+FROM moby/moby-bin:$DOCKER_VERSION_ALT_27 AS docker-engine-alt27
+FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt26
+FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_27 AS docker-cli-alt27
+FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt26
 FROM registry:$REGISTRY_VERSION AS registry
 FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
 FROM crazymax/undock:$UNDOCK_VERSION AS undock
@@ -102,6 +105,7 @@ COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
 FROM binaries-unix AS binaries-darwin
 FROM binaries-unix AS binaries-freebsd
 FROM binaries-unix AS binaries-linux
+FROM binaries-unix AS binaries-netbsd
 FROM binaries-unix AS binaries-openbsd
 FROM scratch AS binaries-windows
@@ -127,13 +131,15 @@ COPY --link --from=gotestsum /out /usr/bin/
 COPY --link --from=registry /bin/registry /usr/bin/
 COPY --link --from=docker-engine / /usr/bin/
 COPY --link --from=docker-cli / /usr/bin/
-COPY --link --from=docker-engine-alt / /opt/docker-alt-26/
-COPY --link --from=docker-cli-alt / /opt/docker-alt-26/
+COPY --link --from=docker-engine-alt27 / /opt/docker-alt-27/
+COPY --link --from=docker-engine-alt26 / /opt/docker-alt-26/
+COPY --link --from=docker-cli-alt27 / /opt/docker-alt-27/
+COPY --link --from=docker-cli-alt26 / /opt/docker-alt-26/
 COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
 COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
 COPY --link --from=undock /usr/local/bin/undock /usr/bin/
 COPY --link --from=binaries /buildx /usr/bin/
-ENV TEST_DOCKER_EXTRA="docker@26.1=/opt/docker-alt-26"
+ENV TEST_DOCKER_EXTRA="docker@27.5=/opt/docker-alt-27,docker@26.1=/opt/docker-alt-26"
 FROM integration-test-base AS integration-test
 COPY . .

View File

@@ -27,7 +27,6 @@ import (
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
 	"github.com/moby/buildkit/session/auth/authprovider"
-	"github.com/moby/buildkit/util/entitlements"
 	"github.com/pkg/errors"
 	"github.com/zclconf/go-cty/cty"
 	"github.com/zclconf/go-cty/cty/convert"
@@ -46,6 +45,7 @@ type File struct {
 type Override struct {
 	Value    string
 	ArrValue []string
+	Append   bool
 }
 func defaultFilenames() []string {
@@ -486,11 +486,9 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
 	if target == name {
 		return errors.Errorf("target %s cannot link to itself", target)
 	}
-	for _, v := range visited {
-		if v == target {
-			return errors.Errorf("infinite loop from %s to %s", name, target)
-		}
-	}
+	if slices.Contains(visited, target) {
+		return errors.Errorf("infinite loop from %s to %s", name, target)
+	}
 	t2, ok := m[target]
 	if !ok {
 		var err error
@@ -529,9 +527,12 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 	m := map[string]map[string]Override{}
 	for _, v := range v {
 		parts := strings.SplitN(v, "=", 2)
-		keys := strings.SplitN(parts[0], ".", 3)
+		skey := strings.TrimSuffix(parts[0], "+")
+		appendTo := strings.HasSuffix(parts[0], "+")
+		keys := strings.SplitN(skey, ".", 3)
 		if len(keys) < 2 {
-			return nil, errors.Errorf("invalid override key %s, expected target.name", parts[0])
+			return nil, errors.Errorf("invalid override key %s, expected target.name", skey)
 		}
 		pattern := keys[0]
@@ -544,8 +545,7 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 			return nil, err
 		}
-		kk := strings.SplitN(parts[0], ".", 2)
+		okey := strings.Join(keys[1:], ".")
 		for _, name := range names {
 			t, ok := m[name]
 			if !ok {
@@ -553,14 +553,15 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 				m[name] = t
 			}
-			o := t[kk[1]]
+			override := t[okey]
 			// IMPORTANT: if you add more fields here, do not forget to update
-			// docs/bake-reference.md and https://docs.docker.com/build/bake/overrides/
+			// docs/reference/buildx_bake.md (--set) and https://docs.docker.com/build/bake/overrides/
 			switch keys[1] {
-			case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest", "entitlements", "network":
+			case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest", "entitlements", "network", "annotations":
 				if len(parts) == 2 {
-					o.ArrValue = append(o.ArrValue, parts[1])
+					override.Append = appendTo
+					override.ArrValue = append(override.ArrValue, parts[1])
 				}
 			case "args":
 				if len(keys) != 3 {
@@ -571,7 +572,7 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 				if !ok {
 					continue
 				}
-				o.Value = v
+				override.Value = v
 			}
 			fallthrough
 		case "contexts":
@@ -581,11 +582,11 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 			fallthrough
 		default:
 			if len(parts) == 2 {
-				o.Value = parts[1]
+				override.Value = parts[1]
 			}
 		}
-		t[kk[1]] = o
+		t[okey] = override
 	}
 }
 return m, nil
@@ -897,13 +898,21 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
 		}
 		t.Labels[keys[1]] = &value
 	case "tags":
+		if o.Append {
+			t.Tags = append(t.Tags, o.ArrValue...)
+		} else {
 			t.Tags = o.ArrValue
+		}
 	case "cache-from":
 		cacheFrom, err := buildflags.ParseCacheEntry(o.ArrValue)
 		if err != nil {
 			return err
 		}
+		if o.Append {
+			t.CacheFrom = t.CacheFrom.Merge(cacheFrom)
+		} else {
 			t.CacheFrom = cacheFrom
+		}
 		for _, c := range t.CacheFrom {
 			if c.Type == "local" {
 				if v, ok := c.Attrs["src"]; ok {
@@ -916,7 +925,11 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
 		if err != nil {
 			return err
 		}
+		if o.Append {
+			t.CacheTo = t.CacheTo.Merge(cacheTo)
+		} else {
 			t.CacheTo = cacheTo
+		}
 		for _, c := range t.CacheTo {
 			if c.Type == "local" {
 				if v, ok := c.Attrs["dest"]; ok {
@@ -933,7 +946,11 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
 		if err != nil {
 			return errors.Wrap(err, "invalid value for outputs")
 		}
+		if o.Append {
+			t.Secrets = t.Secrets.Merge(secrets)
+		} else {
 			t.Secrets = secrets
+		}
 		for _, s := range t.Secrets {
 			if s.FilePath != "" {
 				ent.FSRead = append(ent.FSRead, s.FilePath)
@@ -944,18 +961,30 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
 		if err != nil {
 			return errors.Wrap(err, "invalid value for outputs")
 		}
+		if o.Append {
+			t.SSH = t.SSH.Merge(ssh)
+		} else {
 			t.SSH = ssh
+		}
 		for _, s := range t.SSH {
 			ent.FSRead = append(ent.FSRead, s.Paths...)
 		}
 	case "platform":
+		if o.Append {
+			t.Platforms = append(t.Platforms, o.ArrValue...)
+		} else {
 			t.Platforms = o.ArrValue
+		}
 	case "output":
 		outputs, err := parseArrValue[buildflags.ExportEntry](o.ArrValue)
 		if err != nil {
 			return errors.Wrap(err, "invalid value for outputs")
 		}
+		if o.Append {
+			t.Outputs = t.Outputs.Merge(outputs)
+		} else {
 			t.Outputs = outputs
+		}
 		for _, o := range t.Outputs {
 			if o.Destination != "" {
 				ent.FSWrite = append(ent.FSWrite, o.Destination)
@@ -985,11 +1014,19 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
 		}
 		t.NoCache = &noCache
 	case "no-cache-filter":
+		if o.Append {
+			t.NoCacheFilter = append(t.NoCacheFilter, o.ArrValue...)
+		} else {
 			t.NoCacheFilter = o.ArrValue
+		}
 	case "shm-size":
 		t.ShmSize = &value
 	case "ulimits":
+		if o.Append {
+			t.Ulimits = append(t.Ulimits, o.ArrValue...)
+		} else {
 			t.Ulimits = o.ArrValue
+		}
 	case "network":
 		t.NetworkMode = &value
 	case "pull":
@@ -1434,9 +1471,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	}
 	bo.Ulimits = ulimits
-	for _, ent := range t.Entitlements {
-		bo.Allow = append(bo.Allow, entitlements.Entitlement(ent))
-	}
+	bo.Allow = append(bo.Allow, t.Entitlements...)
 	return bo, nil
 }

View File

@@ -34,6 +34,18 @@ target "webapp" {
args = {
VAR_BOTH = "webapp"
}
+annotations = [
+	"index,manifest:org.opencontainers.image.authors=dvdksn"
+]
+attest = [
+	"type=provenance,mode=max"
+]
+platforms = [
+	"linux/amd64"
+]
+secret = [
+	"id=FOO,env=FOO"
+]
inherits = ["webDEP"]
}`),
}
@@ -115,6 +127,31 @@ target "webapp" {
})
})
+t.Run("AnnotationsOverrides", func(t *testing.T) {
+	t.Parallel()
+	m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.annotations=index,manifest:org.opencontainers.image.vendor=docker"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Equal(t, []string{"index,manifest:org.opencontainers.image.authors=dvdksn", "index,manifest:org.opencontainers.image.vendor=docker"}, m["webapp"].Annotations)
+	require.Equal(t, 1, len(g))
+	require.Equal(t, []string{"webapp"}, g["default"].Targets)
+})
+t.Run("AttestOverride", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.attest=type=sbom"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Len(t, m["webapp"].Attest, 2)
+	require.Equal(t, "provenance", m["webapp"].Attest[0].Type)
+	require.Equal(t, "sbom", m["webapp"].Attest[1].Type)
+})
+t.Run("AttestAppend", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.attest+=type=sbom"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Len(t, m["webapp"].Attest, 2)
+	require.Equal(t, "provenance", m["webapp"].Attest[0].Type)
+	require.Equal(t, "sbom", m["webapp"].Attest[1].Type)
+})
t.Run("ContextOverride", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil, &EntitlementConf{})
@@ -136,6 +173,49 @@ target "webapp" {
require.Equal(t, []string{"webapp"}, g["default"].Targets)
})
+t.Run("PlatformOverride", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.platform=linux/arm64"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Equal(t, []string{"linux/arm64"}, m["webapp"].Platforms)
+})
+t.Run("PlatformAppend", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.platform+=linux/arm64"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, m["webapp"].Platforms)
+})
+t.Run("PlatformAppendMulti", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.platform+=linux/arm64", "webapp.platform+=linux/riscv64"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Equal(t, []string{"linux/amd64", "linux/arm64", "linux/riscv64"}, m["webapp"].Platforms)
+})
+t.Run("PlatformAppendMultiLastOverride", func(t *testing.T) {
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.platform+=linux/arm64", "webapp.platform=linux/riscv64"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Equal(t, []string{"linux/arm64", "linux/riscv64"}, m["webapp"].Platforms)
+})
+t.Run("SecretsOverride", func(t *testing.T) {
+	t.Setenv("FOO", "foo")
+	t.Setenv("BAR", "bar")
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.secrets=id=BAR,env=BAR"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Len(t, m["webapp"].Secrets, 1)
+	require.Equal(t, "BAR", m["webapp"].Secrets[0].ID)
+})
+t.Run("SecretsAppend", func(t *testing.T) {
+	t.Setenv("FOO", "foo")
+	t.Setenv("BAR", "bar")
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.secrets+=id=BAR,env=BAR"}, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Len(t, m["webapp"].Secrets, 2)
+	require.Equal(t, "FOO", m["webapp"].Secrets[0].ID)
+	require.Equal(t, "BAR", m["webapp"].Secrets[1].ID)
+})
t.Run("ShmSizeOverride", func(t *testing.T) {
m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.shm-size=256m"}, nil, &EntitlementConf{})
require.NoError(t, err)
@@ -1806,8 +1886,8 @@ func TestHCLEntitlements(t *testing.T) {
require.Equal(t, "network.host", m["app"].Entitlements[1])
require.Len(t, bo["app"].Allow, 2)
-require.Equal(t, entitlements.EntitlementSecurityInsecure, bo["app"].Allow[0])
+require.Equal(t, entitlements.EntitlementSecurityInsecure.String(), bo["app"].Allow[0])
-require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[1])
+require.Equal(t, entitlements.EntitlementNetworkHost.String(), bo["app"].Allow[1])
}
func TestEntitlementsForNetHostCompose(t *testing.T) {
@@ -1846,7 +1926,7 @@ func TestEntitlementsForNetHostCompose(t *testing.T) {
require.Equal(t, "host", *m["app"].NetworkMode)
require.Len(t, bo["app"].Allow, 1)
-require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[0])
+require.Equal(t, entitlements.EntitlementNetworkHost.String(), bo["app"].Allow[0])
require.Equal(t, "host", bo["app"].NetworkMode)
}
@@ -1877,7 +1957,7 @@ func TestEntitlementsForNetHost(t *testing.T) {
require.Equal(t, "host", *m["app"].NetworkMode)
require.Len(t, bo["app"].Allow, 1)
-require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[0])
+require.Equal(t, entitlements.EntitlementNetworkHost.String(), bo["app"].Allow[0])
require.Equal(t, "host", bo["app"].NetworkMode)
}


@@ -315,7 +315,7 @@ type (
stringArray []string
)
-func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
+func (sa *stringArray) UnmarshalYAML(unmarshal func(any) error) error {
var multi []string
err := unmarshal(&multi)
if err != nil {
@@ -332,7 +332,7 @@ func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
// composeExtTarget converts Compose build extension x-bake to bake Target
// https://github.com/compose-spec/compose-spec/blob/master/spec.md#extension
-func (t *Target) composeExtTarget(exts map[string]interface{}) error {
+func (t *Target) composeExtTarget(exts map[string]any) error {
var xb xbake
ext, ok := exts["x-bake"]
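The `stringArray` type whose signature changes above lets x-bake fields accept either a single string or a list of strings: its `UnmarshalYAML` first tries to decode a list, then falls back to a scalar. A stdlib-only sketch of that normalization, using decoded `any` values as a stand-in for YAML input (`normalizeStringArray` is a hypothetical helper, not part of buildx):

```go
package main

import "fmt"

// normalizeStringArray mirrors stringArray's fallback: accept either a list
// of strings or a single scalar string, and always return a slice.
func normalizeStringArray(v any) ([]string, error) {
	switch t := v.(type) {
	case []any:
		out := make([]string, 0, len(t))
		for _, e := range t {
			s, ok := e.(string)
			if !ok {
				return nil, fmt.Errorf("expected string, got %T", e)
			}
			out = append(out, s)
		}
		return out, nil
	case string:
		// Scalar form: wrap the single value in a one-element slice.
		return []string{t}, nil
	default:
		return nil, fmt.Errorf("unsupported type %T", v)
	}
}

func main() {
	a, _ := normalizeStringArray("one")
	b, _ := normalizeStringArray([]any{"one", "two"})
	// prints: [one] [one two]
	fmt.Println(a, b)
}
```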


@@ -20,6 +20,7 @@ import (
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
+"github.com/tonistiigi/go-csvvalue"
)
type EntitlementKey string
@@ -27,6 +28,7 @@ type EntitlementKey string
const (
EntitlementKeyNetworkHost EntitlementKey = "network.host"
EntitlementKeySecurityInsecure EntitlementKey = "security.insecure"
+EntitlementKeyDevice EntitlementKey = "device"
EntitlementKeyFSRead EntitlementKey = "fs.read"
EntitlementKeyFSWrite EntitlementKey = "fs.write"
EntitlementKeyFS EntitlementKey = "fs"
@@ -39,6 +41,7 @@ const (
type EntitlementConf struct {
NetworkHost bool
SecurityInsecure bool
+Devices *EntitlementsDevicesConf
FSRead []string
FSWrite []string
ImagePush []string
@@ -46,6 +49,11 @@ type EntitlementConf struct {
SSH bool
}
+type EntitlementsDevicesConf struct {
+	All bool
+	Devices map[string]struct{}
+}
func ParseEntitlements(in []string) (EntitlementConf, error) {
var conf EntitlementConf
for _, e := range in {
@@ -59,6 +67,22 @@ func ParseEntitlements(in []string) (EntitlementConf, error) {
default:
k, v, _ := strings.Cut(e, "=")
switch k {
+case string(EntitlementKeyDevice):
+	if v == "" {
+		conf.Devices = &EntitlementsDevicesConf{All: true}
+		continue
+	}
+	fields, err := csvvalue.Fields(v, nil)
+	if err != nil {
+		return EntitlementConf{}, errors.Wrapf(err, "failed to parse device entitlement %q", v)
+	}
+	if conf.Devices == nil {
+		conf.Devices = &EntitlementsDevicesConf{}
+	}
+	if conf.Devices.Devices == nil {
+		conf.Devices.Devices = make(map[string]struct{}, 0)
+	}
+	conf.Devices.Devices[fields[0]] = struct{}{}
case string(EntitlementKeyFSRead):
conf.FSRead = append(conf.FSRead, v)
case string(EntitlementKeyFSWrite):
@@ -95,12 +119,34 @@ func (c EntitlementConf) Validate(m map[string]build.Options) (EntitlementConf,
func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) error {
for _, e := range bo.Allow {
+k, rest, _ := strings.Cut(e, "=")
+switch k {
+case entitlements.EntitlementDevice.String():
+	if rest == "" {
+		if c.Devices == nil || !c.Devices.All {
+			expected.Devices = &EntitlementsDevicesConf{All: true}
+		}
+		continue
+	}
+	fields, err := csvvalue.Fields(rest, nil)
+	if err != nil {
+		return errors.Wrapf(err, "failed to parse device entitlement %q", rest)
+	}
+	if expected.Devices == nil {
+		expected.Devices = &EntitlementsDevicesConf{}
+	}
+	if expected.Devices.Devices == nil {
+		expected.Devices.Devices = make(map[string]struct{}, 0)
+	}
+	expected.Devices.Devices[fields[0]] = struct{}{}
+}
switch e {
-case entitlements.EntitlementNetworkHost:
+case entitlements.EntitlementNetworkHost.String():
if !c.NetworkHost {
expected.NetworkHost = true
}
-case entitlements.EntitlementSecurityInsecure:
+case entitlements.EntitlementSecurityInsecure.String():
if !c.SecurityInsecure {
expected.SecurityInsecure = true
}
@@ -187,6 +233,18 @@ func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Write
flags = append(flags, string(EntitlementKeySecurityInsecure))
}
+if c.Devices != nil {
+	if c.Devices.All {
+		msgs = append(msgs, " - Access to CDI devices")
+		flags = append(flags, string(EntitlementKeyDevice))
+	} else {
+		for d := range c.Devices.Devices {
+			msgs = append(msgs, fmt.Sprintf(" - Access to device %s", d))
+			flags = append(flags, string(EntitlementKeyDevice)+"="+d)
+		}
+	}
+}
if c.SSH {
msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
flagsFS = append(flagsFS, string(EntitlementKeySSH))
@@ -248,7 +306,7 @@ func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Write
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
}
-args := append([]string(nil), os.Args...)
+args := slices.Clone(os.Args)
if v, ok := os.LookupEnv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND"); ok && v != "" {
args[0] = v
}
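The new `device` entitlement above distinguishes a bare `--allow device` (grant all CDI devices) from `--allow device=<name>` (grant one named device). A minimal sketch of that branching (`devicesConf` and `parseDeviceEntitlement` are simplified stand-ins; the real code uses `go-csvvalue` to honor quoted fields, while plain `strings.Split` is enough here):

```go
package main

import (
	"fmt"
	"strings"
)

// devicesConf is a simplified stand-in for EntitlementsDevicesConf.
type devicesConf struct {
	All     bool
	Devices map[string]struct{}
}

// parseDeviceEntitlement mimics the "device" case above: an empty value
// grants access to all CDI devices, otherwise the first CSV field names
// the device being granted.
func parseDeviceEntitlement(conf *devicesConf, v string) {
	if v == "" {
		conf.All = true
		return
	}
	fields := strings.Split(v, ",")
	if conf.Devices == nil {
		conf.Devices = map[string]struct{}{}
	}
	conf.Devices[fields[0]] = struct{}{}
}

func main() {
	var c devicesConf
	parseDeviceEntitlement(&c, "")
	parseDeviceEntitlement(&c, "vendor.com/gpu=0")
	// prints: true 1
	fmt.Println(c.All, len(c.Devices))
}
```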


@@ -208,8 +208,8 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "NetworkHostMissing",
opt: build.Options{
-Allow: []entitlements.Entitlement{
-	entitlements.EntitlementNetworkHost,
+Allow: []string{
+	entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
@@ -223,8 +223,8 @@ func TestValidateEntitlements(t *testing.T) {
NetworkHost: true,
},
opt: build.Options{
-Allow: []entitlements.Entitlement{
-	entitlements.EntitlementNetworkHost,
+Allow: []string{
+	entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
@@ -234,9 +234,9 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SecurityAndNetworkHostMissing",
opt: build.Options{
-Allow: []entitlements.Entitlement{
-	entitlements.EntitlementNetworkHost,
-	entitlements.EntitlementSecurityInsecure,
+Allow: []string{
+	entitlements.EntitlementNetworkHost.String(),
+	entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{
@@ -251,9 +251,9 @@ func TestValidateEntitlements(t *testing.T) {
NetworkHost: true,
},
opt: build.Options{
-Allow: []entitlements.Entitlement{
-	entitlements.EntitlementNetworkHost,
-	entitlements.EntitlementSecurityInsecure,
+Allow: []string{
+	entitlements.EntitlementNetworkHost.String(),
+	entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{


@@ -608,7 +608,7 @@ func TestHCLAttrsCapsuleType(t *testing.T) {
target "app" {
attest = [
{ type = "provenance", mode = "max" },
-"type=sbom,disabled=true",
+"type=sbom,disabled=true,generator=foo,\"ENV1=bar,baz\",ENV2=hello",
]
cache-from = [
@@ -641,7 +641,7 @@ func TestHCLAttrsCapsuleType(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
-require.Equal(t, []string{"type=provenance,mode=max", "type=sbom,disabled=true"}, stringify(c.Targets[0].Attest))
+require.Equal(t, []string{"type=provenance,mode=max", "type=sbom,disabled=true,\"ENV1=bar,baz\",ENV2=hello,generator=foo"}, stringify(c.Targets[0].Attest))
require.Equal(t, []string{"type=local,dest=../out", "type=oci,dest=../out.tar"}, stringify(c.Targets[0].Outputs))
require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
require.Equal(t, []string{"type=local,dest=path/to/cache"}, stringify(c.Targets[0].CacheTo))
@@ -1645,7 +1645,7 @@ func TestHCLIndexOfFunc(t *testing.T) {
require.Empty(t, c.Targets[1].Tags[1])
}
-func ptrstr(s interface{}) *string {
+func ptrstr(s any) *string {
var n *string
if reflect.ValueOf(s).Kind() == reflect.String {
ss := s.(string)


@@ -15,11 +15,11 @@ import (
// DecodeOptions allows customizing sections of the decoding process.
type DecodeOptions struct {
-ImpliedType func(gv interface{}) (cty.Type, error)
+ImpliedType func(gv any) (cty.Type, error)
Convert func(in cty.Value, want cty.Type) (cty.Value, error)
}
-func (o DecodeOptions) DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
+func (o DecodeOptions) DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
rv := reflect.ValueOf(val)
@@ -46,7 +46,7 @@ func (o DecodeOptions) DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val inter
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
-func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
+func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeBody(body, ctx, val)
}
@@ -282,7 +282,7 @@ func (o DecodeOptions) decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext
return diags
}
-func (o DecodeOptions) DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
+func (o DecodeOptions) DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
srcVal, diags := expr.Value(ctx)
@@ -332,7 +332,7 @@ func (o DecodeOptions) DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContex
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
-func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
+func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeExpression(expr, ctx, val)
}


@@ -16,8 +16,8 @@ import (
)
func TestDecodeBody(t *testing.T) {
-deepEquals := func(other interface{}) func(v interface{}) bool {
-return func(v interface{}) bool {
+deepEquals := func(other any) func(v any) bool {
+return func(v any) bool {
return reflect.DeepEqual(v, other)
}
}
@@ -45,19 +45,19 @@ func TestDecodeBody(t *testing.T) {
}
tests := []struct {
-Body map[string]interface{}
-Target func() interface{}
-Check func(v interface{}) bool
+Body map[string]any
+Target func() any
+Check func(v any) bool
DiagCount int
}{
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(struct{}{}),
deepEquals(struct{}{}),
0,
},
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
@@ -67,7 +67,7 @@ func TestDecodeBody(t *testing.T) {
1, // name is required
},
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(struct {
Name *string `hcl:"name"`
}{}),
@@ -77,7 +77,7 @@ func TestDecodeBody(t *testing.T) {
0,
}, // name nil
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name,optional"`
}{}),
@@ -87,9 +87,9 @@ func TestDecodeBody(t *testing.T) {
0,
}, // name optional
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(withNameExpression{}),
-func(v interface{}) bool {
+func(v any) bool {
if v == nil {
return false
}
@@ -109,11 +109,11 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(withNameExpression{}),
-func(v interface{}) bool {
+func(v any) bool {
if v == nil {
return false
}
@@ -133,7 +133,7 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(struct {
@@ -145,7 +145,7 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 23,
},
@@ -158,7 +158,7 @@ func TestDecodeBody(t *testing.T) {
1, // Extraneous "age" property
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 50,
},
@@ -166,7 +166,7 @@ func TestDecodeBody(t *testing.T) {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
@@ -176,7 +176,7 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 50,
},
@@ -184,7 +184,7 @@ func TestDecodeBody(t *testing.T) {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
@@ -197,7 +197,7 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"living": true,
},
@@ -217,7 +217,7 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 50,
},
@@ -226,7 +226,7 @@ func TestDecodeBody(t *testing.T) {
Body hcl.Body `hcl:",body"`
Remain hcl.Body `hcl:",remain"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Body hcl.Body `hcl:",body"`
@@ -241,76 +241,76 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}},
+map[string]any{
+"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}, {}},
+map[string]any{
+"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
-map[string]interface{}{},
+map[string]any{},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{},
+map[string]any{
+"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
@@ -318,13 +318,13 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}},
+map[string]any{
+"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
@@ -332,13 +332,13 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{},
+map[string]any{
+"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle == nil
@@ -346,26 +346,26 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}, {}},
+map[string]any{
+"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{},
+map[string]any{
+"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
@@ -374,13 +374,13 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}},
+map[string]any{
+"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
@@ -389,13 +389,13 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": []map[string]interface{}{{}, {}},
+map[string]any{
+"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
@@ -404,15 +404,15 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
//nolint:misspell
// Generating two diagnostics is good enough for this one.
// (one for the missing noodle block and the other for
@@ -423,9 +423,9 @@ func TestDecodeBody(t *testing.T) {
2,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{
-"foo_foo": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{
+"foo_foo": map[string]any{},
},
},
makeInstantiateType(struct {
@@ -433,7 +433,7 @@ func TestDecodeBody(t *testing.T) {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
@@ -444,10 +444,10 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{
-"foo_foo": map[string]interface{}{},
-"bar_baz": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{
+"foo_foo": map[string]any{},
+"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
@@ -455,17 +455,17 @@ func TestDecodeBody(t *testing.T) {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
// One diagnostic is enough for this one.
return true
},
1,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{
-"foo_foo": map[string]interface{}{},
-"bar_baz": map[string]interface{}{},
+map[string]any{
+"noodle": map[string]any{
+"foo_foo": map[string]any{},
+"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
@@ -473,7 +473,7 @@ func TestDecodeBody(t *testing.T) {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodles := gotI.(struct {
Noodles []struct {
Name string `hcl:"name,label"`
@@ -484,9 +484,9 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
-"noodle": map[string]interface{}{
-"foo_foo": map[string]interface{}{
+map[string]any{
+"noodle": map[string]any{
+"foo_foo": map[string]any{
"type": "rice",
},
},
@@ -497,7 +497,7 @@ func TestDecodeBody(t *testing.T) {
Type string `hcl:"type"`
} `hcl:"noodle,block"`
}{}),
-func(gotI interface{}) bool {
+func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
@@ -510,7 +510,7 @@ func TestDecodeBody(t *testing.T) {
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 34,
},
@@ -522,31 +522,31 @@ func TestDecodeBody(t *testing.T) {
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 89,
},
makeInstantiateType(map[string]*hcl.Attribute(nil)),
-func(gotI interface{}) bool {
+func(gotI any) bool {
got := gotI.(map[string]*hcl.Attribute)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"age": 13,
},
makeInstantiateType(map[string]hcl.Expression(nil)),
-func(gotI interface{}) bool {
+func(gotI any) bool {
got := gotI.(map[string]hcl.Expression)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
-map[string]interface{}{
+map[string]any{
"name": "Ermintrude",
"living": true,
},
@@ -559,10 +559,10 @@ func TestDecodeBody(t *testing.T) {
},
{
// Retain "nested" block while decoding
-map[string]interface{}{
+map[string]any{
"plain": "foo",
},
-func() interface{} {
+func() any {
return &withNestedBlock{
Plain: "bar",
Nested: &withTwoAttributes{
@@ -570,7 +570,7 @@ func TestDecodeBody(t *testing.T) {
},
}
},
-func(gotI interface{}) bool {
+func(gotI any) bool {
foo := gotI.(withNestedBlock)
return foo.Plain == "foo" && foo.Nested != nil && foo.Nested.A == "bar"
}, },
@@ -578,19 +578,19 @@ func TestDecodeBody(t *testing.T) {
}, },
{ {
// Retain values in "nested" block while decoding // Retain values in "nested" block while decoding
map[string]interface{}{ map[string]any{
"nested": map[string]interface{}{ "nested": map[string]any{
"a": "foo", "a": "foo",
}, },
}, },
func() interface{} { func() any {
return &withNestedBlock{ return &withNestedBlock{
Nested: &withTwoAttributes{ Nested: &withTwoAttributes{
B: "bar", B: "bar",
}, },
} }
}, },
func(gotI interface{}) bool { func(gotI any) bool {
foo := gotI.(withNestedBlock) foo := gotI.(withNestedBlock)
return foo.Nested.A == "foo" && foo.Nested.B == "bar" return foo.Nested.A == "foo" && foo.Nested.B == "bar"
}, },
@@ -598,14 +598,14 @@ func TestDecodeBody(t *testing.T) {
}, },
{ {
// Retain values in "nested" block list while decoding // Retain values in "nested" block list while decoding
map[string]interface{}{ map[string]any{
"nested": []map[string]interface{}{ "nested": []map[string]any{
{ {
"a": "foo", "a": "foo",
}, },
}, },
}, },
func() interface{} { func() any {
return &withListofNestedBlocks{ return &withListofNestedBlocks{
Nested: []*withTwoAttributes{ Nested: []*withTwoAttributes{
{ {
@@ -614,7 +614,7 @@ func TestDecodeBody(t *testing.T) {
}, },
} }
}, },
func(gotI interface{}) bool { func(gotI any) bool {
n := gotI.(withListofNestedBlocks) n := gotI.(withListofNestedBlocks)
return n.Nested[0].A == "foo" && n.Nested[0].B == "bar" return n.Nested[0].A == "foo" && n.Nested[0].B == "bar"
}, },
@@ -622,14 +622,14 @@ func TestDecodeBody(t *testing.T) {
}, },
{ {
// Remove additional elements from the list while decoding nested blocks // Remove additional elements from the list while decoding nested blocks
map[string]interface{}{ map[string]any{
"nested": []map[string]interface{}{ "nested": []map[string]any{
{ {
"a": "foo", "a": "foo",
}, },
}, },
}, },
func() interface{} { func() any {
return &withListofNestedBlocks{ return &withListofNestedBlocks{
Nested: []*withTwoAttributes{ Nested: []*withTwoAttributes{
{ {
@@ -641,7 +641,7 @@ func TestDecodeBody(t *testing.T) {
}, },
} }
}, },
func(gotI interface{}) bool { func(gotI any) bool {
n := gotI.(withListofNestedBlocks) n := gotI.(withListofNestedBlocks)
return len(n.Nested) == 1 return len(n.Nested) == 1
}, },
@@ -649,8 +649,8 @@ func TestDecodeBody(t *testing.T) {
}, },
{ {
// Make sure decoding value slices works the same as pointer slices. // Make sure decoding value slices works the same as pointer slices.
map[string]interface{}{ map[string]any{
"nested": []map[string]interface{}{ "nested": []map[string]any{
{ {
"b": "bar", "b": "bar",
}, },
@@ -659,7 +659,7 @@ func TestDecodeBody(t *testing.T) {
}, },
}, },
}, },
func() interface{} { func() any {
return &withListofNestedBlocksNoPointers{ return &withListofNestedBlocksNoPointers{
Nested: []withTwoAttributes{ Nested: []withTwoAttributes{
{ {
@@ -668,7 +668,7 @@ func TestDecodeBody(t *testing.T) {
}, },
} }
}, },
func(gotI interface{}) bool { func(gotI any) bool {
n := gotI.(withListofNestedBlocksNoPointers) n := gotI.(withListofNestedBlocksNoPointers)
return n.Nested[0].B == "bar" && len(n.Nested) == 2 return n.Nested[0].B == "bar" && len(n.Nested) == 2
}, },
@@ -710,8 +710,8 @@ func TestDecodeBody(t *testing.T) {
func TestDecodeExpression(t *testing.T) { func TestDecodeExpression(t *testing.T) {
tests := []struct { tests := []struct {
Value cty.Value Value cty.Value
Target interface{} Target any
Want interface{} Want any
DiagCount int DiagCount int
}{ }{
{ {
@@ -799,8 +799,8 @@ func (e *fixedExpression) Variables() []hcl.Traversal {
return nil return nil
} }
func makeInstantiateType(target interface{}) func() interface{} { func makeInstantiateType(target any) func() any {
return func() interface{} { return func() any {
return reflect.New(reflect.TypeOf(target)).Interface() return reflect.New(reflect.TypeOf(target)).Interface()
} }
} }
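The sweep above is purely mechanical: since Go 1.18, `any` is a type alias for `interface{}`, so swapping one for the other changes no behavior. A minimal sketch (the `describe` helper is made up for illustration):

```go
package main

import "fmt"

// describe reports the dynamic type of its argument. Because any is an
// alias for interface{}, values of map[string]any and
// map[string]interface{} have the exact same type.
func describe(v any) string {
	return fmt.Sprintf("%T", v)
}

func main() {
	m1 := map[string]interface{}{"noodle": "rice"}
	m2 := map[string]any{"noodle": "rice"}
	fmt.Println(describe(m1) == describe(m2)) // true
}
```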


@@ -34,9 +34,9 @@ import (
// The layout of the resulting HCL source is derived from the ordering of
// the struct fields, with blank lines around nested blocks of different types.
// Fields representing attributes should usually precede those representing
-// blocks so that the attributes can group togather in the result. For more
+// blocks so that the attributes can group together in the result. For more
// control, use the hclwrite API directly.
-func EncodeIntoBody(val interface{}, dst *hclwrite.Body) {
+func EncodeIntoBody(val any, dst *hclwrite.Body) {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
@@ -60,7 +60,7 @@ func EncodeIntoBody(val interface{}, dst *hclwrite.Body) {
//
// This function has the same constraints as EncodeIntoBody and will panic
// if they are violated.
-func EncodeAsBlock(val interface{}, blockType string) *hclwrite.Block {
+func EncodeAsBlock(val any, blockType string) *hclwrite.Block {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
@@ -158,7 +158,7 @@ func populateBody(rv reflect.Value, ty reflect.Type, tags *fieldTags, dst *hclwr
if isSeq {
l := fieldVal.Len()
-for i := 0; i < l; i++ {
+for i := range l {
elemVal := fieldVal.Index(i)
if !elemVal.IsValid() {
continue // ignore (elem value is nil pointer)


@@ -22,7 +22,7 @@ import (
// This uses the tags on the fields of the struct to discover how each
// field's value should be expressed within configuration. If an invalid
// mapping is attempted, this function will panic.
-func ImpliedBodySchema(val interface{}) (schema *hcl.BodySchema, partial bool) {
+func ImpliedBodySchema(val any) (schema *hcl.BodySchema, partial bool) {
ty := reflect.TypeOf(val)
if ty.Kind() == reflect.Ptr {
@@ -134,7 +134,7 @@ func getFieldTags(ty reflect.Type) *fieldTags {
}
ct := ty.NumField()
-for i := 0; i < ct; i++ {
+for i := range ct {
field := ty.Field(i)
tag := field.Tag.Get("hcl")
if tag == "" {


@@ -14,7 +14,7 @@ import (
func TestImpliedBodySchema(t *testing.T) {
tests := []struct {
-val interface{}
+val any
wantSchema *hcl.BodySchema
wantPartial bool
}{


@@ -7,6 +7,7 @@ import (
"math"
"math/big"
"reflect"
+"slices"
"strconv"
"strings"
@@ -589,7 +590,7 @@ type ParseMeta struct {
AllVariables []*Variable
}
-func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
+func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
reserved := map[string]struct{}{}
schema, _ := gohcl.ImpliedBodySchema(val)
@@ -763,7 +764,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
types := map[string]field{}
renamed := map[string]map[string][]string{}
vt := reflect.ValueOf(val).Elem().Type()
-for i := 0; i < vt.NumField(); i++ {
+for i := range vt.NumField() {
tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
@@ -831,7 +832,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
oldValue, exists := t.values[lblName]
if !exists && lblExists {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
-for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
+for i := range v.Elem().Field(t.idx).Len() {
if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
@@ -898,7 +899,7 @@ func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context
func setName(v reflect.Value, name string) {
numFields := v.Elem().Type().NumField()
-for i := 0; i < numFields; i++ {
+for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
@@ -910,27 +911,23 @@ func setName(v reflect.Value, name string) {
func getName(v reflect.Value) (string, bool) {
numFields := v.Elem().Type().NumField()
-for i := 0; i < numFields; i++ {
+for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
-for _, t := range parts[1:] {
-if t == "label" {
+if slices.Contains(parts[1:], "label") {
return v.Elem().Field(i).String(), true
}
}
-}
return "", false
}
func getNameIndex(v reflect.Value) (int, bool) {
numFields := v.Elem().Type().NumField()
-for i := 0; i < numFields; i++ {
+for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
-for _, t := range parts[1:] {
-if t == "label" {
+if slices.Contains(parts[1:], "label") {
return i, true
}
}
-}
return 0, false
}
@@ -988,7 +985,7 @@ func key(ks ...any) uint64 {
return hash.Sum64()
}
-func decodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
+func decodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
dec := gohcl.DecodeOptions{ImpliedType: ImpliedType}
return dec.DecodeBody(body, ctx, val)
}
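The getName/getNameIndex rewrite replaces a hand-written scan over the tag parts with `slices.Contains` from the Go 1.21 standard library. A sketch of the same idea (the `hasLabel` helper is made up; the `hcl:"name,label"` tag format comes from the surrounding code):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// hasLabel reports whether an hcl struct tag such as "name,label" carries
// the "label" option. slices.Contains replaces the inner
// `for _, t := range parts[1:] { if t == "label" { ... } }` loop.
func hasLabel(tag string) bool {
	parts := strings.Split(tag, ",")
	return slices.Contains(parts[1:], "label")
}

func main() {
	fmt.Println(hasLabel("name,label")) // true
	fmt.Println(hasLabel("type"))       // false
}
```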


@@ -43,7 +43,7 @@ import (
// In particular, ImpliedType will never use capsule types in its returned
// type, because it cannot know the capsule types supported by the calling
// program.
-func ImpliedType(gv interface{}) (cty.Type, error) {
+func ImpliedType(gv any) (cty.Type, error) {
rt := reflect.TypeOf(gv)
var path cty.Path
return impliedType(rt, path)
@@ -148,7 +148,7 @@ func structTagIndices(st reflect.Type) map[string]int {
ct := st.NumField()
ret := make(map[string]int, ct)
-for i := 0; i < ct; i++ {
+for i := range ct {
field := st.Field(i)
attrName := field.Tag.Get("cty")
if attrName != "" {


@@ -40,7 +40,6 @@ import (
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
spb "github.com/moby/buildkit/sourcepolicy/pb"
-"github.com/moby/buildkit/util/entitlements"
"github.com/moby/buildkit/util/progress/progresswriter"
"github.com/moby/buildkit/util/tracing"
"github.com/opencontainers/go-digest"
@@ -63,7 +62,7 @@ type Options struct {
Inputs Inputs
Ref string
-Allow []entitlements.Entitlement
+Allow []string
Attests map[string]*string
BuildArgs map[string]string
CacheFrom []client.CacheOptionsEntry
@@ -540,7 +539,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
node := dp.Node().Driver
if node.IsMobyDriver() {
for _, e := range so.Exports {
-if e.Type == "moby" && e.Attrs["push"] != "" {
+if e.Type == "moby" && e.Attrs["push"] != "" && !node.Features(ctx)[driver.DirectPush] {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
pushNames = e.Attrs["name"]
if pushNames == "" {
@@ -623,7 +622,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
// This is fallback for some very old buildkit versions.
// Note that the mediatype isn't really correct as most of the time it is image manifest and
// not manifest list but actually both are handled because for Docker mediatypes the
-// mediatype value in the Accpet header does not seem to matter.
+// mediatype value in the Accept header does not seem to matter.
s, ok = r.ExporterResponse[exptypes.ExporterImageDigestKey]
if ok {
descs = append(descs, specs.Descriptor{
@@ -835,7 +834,7 @@ func remoteDigestWithMoby(ctx context.Context, d *driver.DriverHandle, name stri
if err != nil {
return "", err
}
-img, _, err := api.ImageInspectWithRaw(ctx, name)
+img, err := api.ImageInspect(ctx, name)
if err != nil {
return "", err
}


@@ -4,6 +4,7 @@ import (
"context"
stderrors "errors"
"net"
+"slices"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
@@ -37,15 +38,7 @@ func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platfor
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
-p := *platform
-var found bool
-for _, pp := range rn.platforms {
-if platforms.Only(p).Match(pp) {
-found = true
-break
-}
-}
-if !found {
+if !slices.ContainsFunc(rn.platforms, platforms.Only(*platform).Match) {
continue
}
}
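The Dial refactor collapses a find-first loop into `slices.ContainsFunc`, which returns true as soon as the predicate matches any element. This sketch stands a simple prefix check in for the real predicate (the code above uses `platforms.Only(*platform).Match`; `anyMatches` is a made-up helper):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// anyMatches reports whether any platform string starts with the given
// prefix, mirroring how ContainsFunc replaces the manual found/break loop.
func anyMatches(haystack []string, prefix string) bool {
	return slices.ContainsFunc(haystack, func(s string) bool {
		return strings.HasPrefix(s, prefix)
	})
}

func main() {
	list := []string{"linux/amd64", "linux/arm64"}
	fmt.Println(anyMatches(list, "linux/arm")) // true
	fmt.Println(anyMatches(list, "windows/"))  // false
}
```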


@@ -3,6 +3,7 @@ package build
import (
"context"
"fmt"
+"slices"
"sync"
"github.com/containerd/platforms"
@@ -221,7 +222,7 @@ func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatf
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {
-platforms = append([]specs.Platform{}, platforms...)
+platforms = slices.Clone(platforms)
platforms = append(platforms, additionalPlatforms(i, node)...)
}
for _, p2 := range platforms {
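`slices.Clone(s)` is the idiomatic spelling of `append([]T{}, s...)`: both produce a copy with a fresh backing array, so appending to the copy can never clobber the node's original slice. A sketch (the `withExtra` helper is made up for illustration):

```go
package main

import (
	"fmt"
	"slices"
)

// withExtra returns a new slice containing the original platforms plus one
// more, without mutating the input. This is the defensive-copy pattern the
// resolver uses before appending additional platforms.
func withExtra(platforms []string, extra string) []string {
	out := slices.Clone(platforms) // was: append([]string{}, platforms...)
	return append(out, extra)
}

func main() {
	orig := []string{"linux/amd64"}
	both := withExtra(orig, "linux/arm64")
	fmt.Println(orig) // [linux/amd64]
	fmt.Println(both) // [linux/amd64 linux/arm64]
}
```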


@@ -318,7 +318,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *O
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
-so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
+so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost.String())
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":


@@ -28,11 +28,11 @@ func TestSyncMultiReaderParallel(t *testing.T) {
readers := make([]io.ReadCloser, numReaders)
-for i := 0; i < numReaders; i++ {
+for i := range numReaders {
readers[i] = mr.NewReadCloser()
}
-for i := 0; i < numReaders; i++ {
+for i := range numReaders {
wg.Add(1)
go func(readerId int) {
defer wg.Done()


@@ -5,6 +5,7 @@ import (
"encoding/json"
"net/url"
"os"
+"slices"
"sort"
"strings"
"sync"
@@ -199,7 +200,7 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
err = err1
}
-if err == nil && len(errCh) == len(toBoot) {
+if err == nil && len(errCh) > 0 {
return false, <-errCh
}
return true, err
@@ -656,13 +657,7 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
-var hasNetworkHostEntitlement bool
-for _, e := range allowInsecureEntitlements {
-if e == "network.host" {
-hasNetworkHostEntitlement = true
-break
-}
-}
+hasNetworkHostEntitlement := slices.Contains(allowInsecureEntitlements, "network.host")
var hasNetworkHostEntitlementInConf bool
if buildkitdConfigFile != "" {
@@ -671,11 +666,8 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
return nil, err
} else if btoml != nil {
if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
-for _, e := range ies.([]string) {
-if e == "network.host" {
+if slices.Contains(ies.([]string), "network.host") {
hasNetworkHostEntitlementInConf = true
-break
-}
}
}
}


@@ -169,7 +169,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
// dynamic nodes are used in Kubernetes driver.
// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
-for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
+for i := range di.DriverInfo.DynamicNodes {
diClone := di
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl


@@ -66,7 +66,11 @@ type bakeOptions struct {
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
mp := dockerCli.MeterProvider()
-ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
+ctx, end, err := tracing.TraceCurrentCommand(ctx, append([]string{"bake"}, targets...),
+attribute.String("builder", in.builder),
+attribute.StringSlice("targets", targets),
+attribute.StringSlice("files", in.files),
+)
if err != nil {
return err
}
@@ -283,7 +287,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
}
}
-if err := saveLocalStateGroup(dockerCli, in, targets, bo, overrides, def); err != nil {
+if err := saveLocalStateGroup(dockerCli, in, targets, bo); err != nil {
return err
}
@@ -305,7 +309,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
if len(in.metadataFile) > 0 {
-dt := make(map[string]interface{})
+dt := make(map[string]any)
for t, r := range resp {
dt[t] = decodeExporterResponse(r.ExporterResponse)
}
@@ -488,7 +492,14 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
-func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options, overrides []string, def any) error {
+func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options) error {
+l, err := localstate.New(confutil.NewConfig(dockerCli))
+if err != nil {
+return err
+}
+defer l.MigrateIfNeeded()
prm := confutil.MetadataProvenance()
if len(in.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
@@ -508,19 +519,10 @@ func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string
if len(refs) == 0 {
return nil
}
-l, err := localstate.New(confutil.NewConfig(dockerCli))
-if err != nil {
-return err
-}
-dtdef, err := json.MarshalIndent(def, "", " ")
-if err != nil {
-return err
-}
return l.SaveGroup(groupRef, localstate.StateGroup{
-Definition: dtdef,
-Targets: targets,
-Inputs: overrides,
Refs: refs,
+Targets: targets,
})
}


@@ -11,6 +11,7 @@ import (
"io"
"os"
"path/filepath"
+"slices"
"strconv"
"strings"
"sync"
@@ -41,7 +42,7 @@ import (
"github.com/docker/cli/cli/command"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/versions"
-"github.com/docker/docker/pkg/ioutils"
+"github.com/docker/docker/pkg/atomicwriter"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
"github.com/moby/buildkit/frontend/subrequests"
@@ -156,7 +157,7 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
return nil, err
}
-inAttests := append([]string{}, o.attests...)
+inAttests := slices.Clone(o.attests)
if o.provenance != "" {
inAttests = append(inAttests, buildflags.CanonicalizeAttest("provenance", o.provenance))
}
@@ -285,7 +286,11 @@ func (o *buildOptionsHash) String() string {
func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions) (err error) {
mp := dockerCli.MeterProvider()
-ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
+ctx, end, err := tracing.TraceCurrentCommand(ctx, []string{"build", options.contextPath},
+attribute.String("builder", options.builder),
+attribute.String("context", options.contextPath),
+attribute.String("dockerfile", options.dockerfileName),
+)
if err != nil {
return err
}
@@ -593,7 +598,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
-flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
+flags.StringArrayVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
@@ -740,15 +745,15 @@ func checkWarnedFlags(f *pflag.Flag) {
}
}
-func writeMetadataFile(filename string, dt interface{}) error {
+func writeMetadataFile(filename string, dt any) error {
b, err := json.MarshalIndent(dt, "", " ")
if err != nil {
return err
}
-return ioutils.AtomicWriteFile(filename, b, 0644)
+return atomicwriter.WriteFile(filename, b, 0644)
}
-func decodeExporterResponse(exporterResponse map[string]string) map[string]interface{} {
+func decodeExporterResponse(exporterResponse map[string]string) map[string]any {
decFunc := func(k, v string) ([]byte, error) {
if k == "result.json" {
// result.json is part of metadata response for subrequests which
@@ -757,16 +762,16 @@ func decodeExporterResponse(exporterResponse map[string]string) map[string]inter
}
return base64.StdEncoding.DecodeString(v)
}
-out := make(map[string]interface{})
+out := make(map[string]any)
for k, v := range exporterResponse {
dt, err := decFunc(k, v)
if err != nil {
out[k] = v
continue
}
-var raw map[string]interface{}
+var raw map[string]any
if err = json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
-var rawList []map[string]interface{}
+var rawList []map[string]any
if err = json.Unmarshal(dt, &rawList); err != nil || len(rawList) == 0 {
out[k] = v
continue

@@ -124,7 +124,7 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
-func printKV(w io.Writer, k string, v interface{}) {
+func printKV(w io.Writer, k string, v any) {
fmt.Fprintf(w, "%s:\t%v\n", k, v)
}

commands/history/import.go (new file, 135 lines)

@@ -0,0 +1,135 @@
package history
import (
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"os"
"strings"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type importOptions struct {
file []string
}
func runImport(ctx context.Context, dockerCli command.Cli, opts importOptions) error {
sock, err := desktop.BuildServerAddr()
if err != nil {
return err
}
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.DialContext = func(ctx context.Context, _, _ string) (net.Conn, error) {
network, addr, ok := strings.Cut(sock, "://")
if !ok {
return nil, errors.Errorf("invalid endpoint address: %s", sock)
}
return remoteutil.DialContext(ctx, network, addr)
}
client := &http.Client{
Transport: tr,
}
var urls []string
if len(opts.file) == 0 {
u, err := importFrom(ctx, client, os.Stdin)
if err != nil {
return err
}
urls = append(urls, u...)
} else {
for _, fn := range opts.file {
var f *os.File
var rdr io.Reader = os.Stdin
if fn != "-" {
f, err = os.Open(fn)
if err != nil {
return errors.Wrapf(err, "failed to open file %s", fn)
}
rdr = f
}
u, err := importFrom(ctx, client, rdr)
if err != nil {
return err
}
urls = append(urls, u...)
if f != nil {
f.Close()
}
}
}
if len(urls) == 0 {
return errors.New("no build records found in the bundle")
}
for i, url := range urls {
fmt.Fprintln(dockerCli.Err(), url)
if i == 0 {
err = browser.OpenURL(url)
}
}
return err
}
func importFrom(ctx context.Context, c *http.Client, rdr io.Reader) ([]string, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPost, "http://docker-desktop/upload", rdr)
if err != nil {
return nil, errors.Wrap(err, "failed to create request")
}
resp, err := c.Do(req)
if err != nil {
return nil, errors.Wrap(err, "failed to send request, check if Docker Desktop is running")
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, errors.Errorf("failed to import build: %s", string(body))
}
var refs []string
dec := json.NewDecoder(resp.Body)
if err := dec.Decode(&refs); err != nil {
return nil, errors.Wrap(err, "failed to decode response")
}
var urls []string
for _, ref := range refs {
urls = append(urls, desktop.BuildURL(fmt.Sprintf(".imported/_/%s", ref)))
}
return urls, err
}
func importCmd(dockerCli command.Cli, _ RootOptions) *cobra.Command {
var options importOptions
cmd := &cobra.Command{
Use: "import [OPTIONS] < bundle.dockerbuild",
Short: "Import a build into Docker Desktop",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return runImport(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.file, "file", "f", nil, "Import from a file path")
return cmd
}

View File

@@ -173,7 +173,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, opts inspectOptions)
 	}
 	}
-	recs, err := queryRecords(ctx, opts.ref, nodes)
+	recs, err := queryRecords(ctx, opts.ref, nodes, nil)
 	if err != nil {
 		return err
 	}
@@ -185,14 +185,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, opts inspectOptions)
 		return errors.Errorf("no record found for ref %q", opts.ref)
 	}
-	if opts.ref == "" {
-		slices.SortFunc(recs, func(a, b historyRecord) int {
-			return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
-		})
-	}
 	rec := &recs[0]
 	c, err := rec.node.Driver.Client(ctx)
 	if err != nil {
 		return err
 	}
@@ -353,7 +346,7 @@ workers0:
 		out.Error.Name = name
 		out.Error.Logs = logs
 	}
-	out.Error.Stack = []byte(fmt.Sprintf("%+v", stack.Formatter(retErr)))
+	out.Error.Stack = fmt.Appendf(nil, "%+v", stack.Formatter(retErr))
 	}
 }

View File

@@ -3,7 +3,6 @@ package history
 import (
 	"context"
 	"io"
-	"slices"
 	"github.com/containerd/containerd/v2/core/content/proxy"
 	"github.com/containerd/platforms"
@@ -42,7 +41,7 @@ func runAttachment(ctx context.Context, dockerCli command.Cli, opts attachmentOp
 	}
 	}
-	recs, err := queryRecords(ctx, opts.ref, nodes)
+	recs, err := queryRecords(ctx, opts.ref, nodes, nil)
 	if err != nil {
 		return err
 	}
@@ -54,12 +53,6 @@ func runAttachment(ctx context.Context, dockerCli command.Cli, opts attachmentOp
 		return errors.Errorf("no record found for ref %q", opts.ref)
 	}
-	if opts.ref == "" {
-		slices.SortFunc(recs, func(a, b historyRecord) int {
-			return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
-		})
-	}
 	rec := &recs[0]
 	c, err := rec.node.Driver.Client(ctx)

View File

@@ -4,7 +4,6 @@ import (
 	"context"
 	"io"
 	"os"
-	"slices"
 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/cobrautil/completion"
@@ -39,7 +38,7 @@ func runLogs(ctx context.Context, dockerCli command.Cli, opts logsOptions) error
 	}
 	}
-	recs, err := queryRecords(ctx, opts.ref, nodes)
+	recs, err := queryRecords(ctx, opts.ref, nodes, nil)
 	if err != nil {
 		return err
 	}
@@ -51,12 +50,6 @@ func runLogs(ctx context.Context, dockerCli command.Cli, opts logsOptions) error
 		return errors.Errorf("no record found for ref %q", opts.ref)
 	}
-	if opts.ref == "" {
-		slices.SortFunc(recs, func(a, b historyRecord) int {
-			return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
-		})
-	}
 	rec := &recs[0]
 	c, err := rec.node.Driver.Client(ctx)
 	if err != nil {

View File

@@ -56,7 +56,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, opts lsOptions) error {
 	}
 	}
-	out, err := queryRecords(ctx, "", nodes)
+	out, err := queryRecords(ctx, "", nodes, nil)
 	if err != nil {
 		return err
 	}
@@ -161,7 +161,7 @@ type lsContext struct {
 }
 func (c *lsContext) MarshalJSON() ([]byte, error) {
-	m := map[string]interface{}{
+	m := map[string]any{
 		"ref":    c.FullRef(),
 		"name":   c.Name(),
 		"status": c.Status(),

View File

@@ -3,7 +3,6 @@ package history
 import (
 	"context"
 	"fmt"
-	"slices"
 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/cobrautil/completion"
@@ -35,7 +34,7 @@ func runOpen(ctx context.Context, dockerCli command.Cli, opts openOptions) error
 	}
 	}
-	recs, err := queryRecords(ctx, opts.ref, nodes)
+	recs, err := queryRecords(ctx, opts.ref, nodes, nil)
 	if err != nil {
 		return err
 	}
@@ -47,12 +46,6 @@ func runOpen(ctx context.Context, dockerCli command.Cli, opts openOptions) error
 		return errors.Errorf("no record found for ref %q", opts.ref)
 	}
-	if opts.ref == "" {
-		slices.SortFunc(recs, func(a, b historyRecord) int {
-			return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
-		})
-	}
 	rec := &recs[0]
 	url := desktop.BuildURL(fmt.Sprintf("%s/%s/%s", rec.node.Builder, rec.node.Name, rec.Ref))

View File

@@ -25,6 +25,7 @@ func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *c
 		inspectCmd(dockerCli, opts),
 		openCmd(dockerCli, opts),
 		traceCmd(dockerCli, opts),
+		importCmd(dockerCli, opts),
 	)
 	return cmd

View File

@@ -8,9 +8,6 @@ import (
 	"io"
 	"net"
 	"os"
-	"slices"
-	"strconv"
-	"strings"
 	"time"
 	"github.com/containerd/console"
@@ -37,51 +34,20 @@ type traceOptions struct {
 }
 func loadTrace(ctx context.Context, ref string, nodes []builder.Node) (string, []byte, error) {
-	var offset *int
-	if strings.HasPrefix(ref, "^") {
-		off, err := strconv.Atoi(ref[1:])
-		if err != nil {
-			return "", nil, errors.Wrapf(err, "invalid offset %q", ref)
-		}
-		offset = &off
-		ref = ""
-	}
-	recs, err := queryRecords(ctx, ref, nodes)
+	recs, err := queryRecords(ctx, ref, nodes, &queryOptions{
+		CompletedOnly: true,
+	})
 	if err != nil {
 		return "", nil, err
 	}
-	var rec *historyRecord
-	if ref == "" {
-		slices.SortFunc(recs, func(a, b historyRecord) int {
-			return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
-		})
-		for _, r := range recs {
-			if r.CompletedAt != nil {
-				if offset != nil {
-					if *offset > 0 {
-						*offset--
-						continue
-					}
-				}
-				rec = &r
-				break
-			}
-		}
-		if offset != nil && *offset > 0 {
-			return "", nil, errors.Errorf("no completed build found with offset %d", *offset)
-		}
-	} else {
-		rec = &recs[0]
-	}
-	if rec == nil {
+	if len(recs) == 0 {
 		if ref == "" {
 			return "", nil, errors.New("no records found")
 		}
 		return "", nil, errors.Errorf("no record found for ref %q", ref)
 	}
+	rec := &recs[0]
 	if rec.CompletedAt == nil {
 		return "", nil, errors.Errorf("build %q is not completed, only completed builds can be traced", rec.Ref)
@@ -103,7 +69,9 @@ func loadTrace(ctx context.Context, ref string, nodes []builder.Node) (string, [
 		return "", nil, err
 	}
-	recs, err := queryRecords(ctx, rec.Ref, []builder.Node{*rec.node})
+	recs, err := queryRecords(ctx, rec.Ref, []builder.Node{*rec.node}, &queryOptions{
+		CompletedOnly: true,
+	})
 	if err != nil {
 		return "", nil, err
 	}

View File

@@ -5,6 +5,8 @@ import (
 	"fmt"
 	"io"
 	"path/filepath"
+	"slices"
+	"strconv"
 	"strings"
 	"sync"
 	"time"
@@ -106,10 +108,24 @@ type historyRecord struct {
 	name string
 }
-func queryRecords(ctx context.Context, ref string, nodes []builder.Node) ([]historyRecord, error) {
+type queryOptions struct {
+	CompletedOnly bool
+}
+
+func queryRecords(ctx context.Context, ref string, nodes []builder.Node, opts *queryOptions) ([]historyRecord, error) {
 	var mu sync.Mutex
 	var out []historyRecord
+	var offset *int
+	if strings.HasPrefix(ref, "^") {
+		off, err := strconv.Atoi(ref[1:])
+		if err != nil {
+			return nil, errors.Wrapf(err, "invalid offset %q", ref)
+		}
+		offset = &off
+		ref = ""
+	}
 	eg, ctx := errgroup.WithContext(ctx)
 	for _, node := range nodes {
 		node := node
@@ -153,6 +169,10 @@ func queryRecords(ctx context.Context, ref string, nodes []builder.Node) ([]hist
 			if he.Type == controlapi.BuildHistoryEventType_DELETED || he.Record == nil {
 				continue
 			}
+			if opts != nil && opts.CompletedOnly && he.Type != controlapi.BuildHistoryEventType_COMPLETE {
+				continue
+			}
 			records = append(records, historyRecord{
 				BuildHistoryRecord: he.Record,
 				currentTimestamp:   ts,
@@ -169,6 +189,27 @@ func queryRecords(ctx context.Context, ref string, nodes []builder.Node) ([]hist
 	if err := eg.Wait(); err != nil {
 		return nil, err
 	}
+	slices.SortFunc(out, func(a, b historyRecord) int {
+		return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
+	})
+	if offset != nil {
+		var filtered []historyRecord
+		for _, r := range out {
+			if *offset > 0 {
+				*offset--
+				continue
+			}
+			filtered = append(filtered, r)
+			break
+		}
+		if *offset > 0 {
+			return nil, errors.Errorf("no completed build found with offset %d", *offset)
+		}
+		out = filtered
+	}
 	return out, nil
 }
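The `^N` offset handling that queryRecords now centralizes can be sketched in isolation. A minimal standalone version of the parsing rule follows; the helper name `splitOffsetRef` is illustrative, not part of buildx:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitOffsetRef mirrors the "^N" prefix handling in queryRecords: a ref
// like "^2" is translated into an empty ref plus a numeric offset into the
// list of records sorted newest-first.
func splitOffsetRef(ref string) (string, *int, error) {
	if !strings.HasPrefix(ref, "^") {
		// Plain refs pass through untouched.
		return ref, nil, nil
	}
	off, err := strconv.Atoi(ref[1:])
	if err != nil {
		return "", nil, fmt.Errorf("invalid offset %q: %w", ref, err)
	}
	return "", &off, nil
}

func main() {
	ref, off, _ := splitOffsetRef("^2")
	fmt.Printf("ref=%q offset=%d\n", ref, *off) // prints: ref="" offset=2
}
```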

View File

@@ -194,7 +194,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 			}
 			s := s
 			eg2.Go(func() error {
-				sub.Log(1, []byte(fmt.Sprintf("copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String())))
+				sub.Log(1, fmt.Appendf(nil, "copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String()))
 				return r.Copy(ctx, s, t)
 			})
 		}
@@ -202,7 +202,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 		if err := eg2.Wait(); err != nil {
 			return err
 		}
-		sub.Log(1, []byte(fmt.Sprintf("pushing %s to %s\n", desc.Digest.String(), t.String())))
+		sub.Log(1, fmt.Appendf(nil, "pushing %s to %s\n", desc.Digest.String(), t.String()))
 		return r.Push(ctx, t, desc, dt)
 	})
 })

View File

@@ -13,8 +13,8 @@ import (
 type BuildxController interface {
 	Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error)
 	// Invoke starts an IO session into the specified process.
-	// If pid doesn't matche to any running processes, it starts a new process with the specified config.
-	// If there is no container running or InvokeConfig.Rollback is speicfied, the process will start in a newly created container.
+	// If pid doesn't match to any running processes, it starts a new process with the specified config.
+	// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
 	// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
 	Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
 	Kill(ctx context.Context) error

View File

@@ -24,11 +24,11 @@ func (w *writer) Write(status *client.SolveStatus) {
 func (w *writer) WriteBuildRef(target string, ref string) {}
-func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
+func (w *writer) ValidateLogSource(digest.Digest, any) bool {
 	return true
 }
-func (w *writer) ClearLogSource(interface{}) {}
+func (w *writer) ClearLogSource(any) {}
 func ToControlStatus(s *client.SolveStatus) *StatusResponse {
 	resp := StatusResponse{}

View File

@@ -1,6 +1,8 @@
 package pb
 import (
+	"slices"
 	"github.com/moby/buildkit/session"
 	"github.com/moby/buildkit/session/sshforward/sshprovider"
 )
@@ -10,7 +12,7 @@ func CreateSSH(ssh []*SSH) (session.Attachable, error) {
 	for _, ssh := range ssh {
 		cfg := sshprovider.AgentConfig{
 			ID:    ssh.ID,
-			Paths: append([]string{}, ssh.Paths...),
+			Paths: slices.Clone(ssh.Paths),
 		}
 		configs = append(configs, cfg)
 	}

View File

@@ -39,7 +39,7 @@ func (p *Process) Done() <-chan error {
 	return p.errCh
 }
-// Manager manages a set of proceses.
+// Manager manages a set of processes.
 type Manager struct {
 	container atomic.Value
 	processes sync.Map

View File

@@ -140,7 +140,7 @@ func serveCmd(dockerCli command.Cli) *cobra.Command {
 			return err
 		}
 		pidF := filepath.Join(root, defaultPIDFilename)
-		if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
+		if err := os.WriteFile(pidF, fmt.Appendf(nil, "%d", os.Getpid()), 0600); err != nil {
 			return err
 		}
 		defer func() {

View File

@@ -48,6 +48,8 @@ target "lint" {
     "linux/s390x",
     "linux/ppc64le",
     "linux/riscv64",
+    "netbsd/amd64",
+    "netbsd/arm64",
     "openbsd/amd64",
     "openbsd/arm64",
     "windows/amd64",
@@ -167,6 +169,8 @@ target "binaries-cross" {
     "linux/ppc64le",
     "linux/riscv64",
     "linux/s390x",
+    "netbsd/amd64",
+    "netbsd/arm64",
     "openbsd/amd64",
     "openbsd/arm64",
     "windows/amd64",

View File

@@ -350,15 +350,19 @@ $ docker buildx bake --set target.platform=linux/arm64
 $ docker buildx bake --set foo*.args.mybuildarg=value   # overrides build arg for all targets starting with 'foo'
 $ docker buildx bake --set *.platform=linux/arm64       # overrides platform for all targets
 $ docker buildx bake --set foo*.no-cache                # bypass caching only for targets starting with 'foo'
+$ docker buildx bake --set target.platform+=linux/arm64 # appends 'linux/arm64' to the platform list
 ```
 You can override the following fields:
+* `annotations`
+* `attest`
 * `args`
 * `cache-from`
 * `cache-to`
 * `context`
 * `dockerfile`
+* `entitlements`
 * `labels`
 * `load`
 * `no-cache`
@@ -371,3 +375,20 @@ You can override the following fields:
 * `ssh`
 * `tags`
 * `target`
+
+You can append using the `+=` operator for the following fields:
+
+* `annotations`¹
+* `attest`¹
+* `cache-from`
+* `cache-to`
+* `entitlements`¹
+* `no-cache-filter`
+* `output`
+* `platform`
+* `secrets`
+* `ssh`
+* `tags`
+
+> [!NOTE]
+> ¹ These fields already append by default.

View File

@@ -16,7 +16,7 @@ Start a build
 | Name | Type | Default | Description |
 |:----------------------------------------|:--------------|:----------|:-------------------------------------------------------------------------------------------------------------|
 | [`--add-host`](#add-host) | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
-| [`--allow`](#allow) | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
+| [`--allow`](#allow) | `stringArray` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
 | [`--annotation`](#annotation) | `stringArray` | | Add annotation to the image |
 | [`--attest`](#attest) | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
 | [`--build-arg`](#build-arg) | `stringArray` | | Set build-time variables |

View File

@@ -12,7 +12,7 @@ Start a build
 | Name | Type | Default | Description |
 |:--------------------|:--------------|:----------|:-------------------------------------------------------------------------------------------------------------|
 | `--add-host` | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
-| `--allow` | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
+| `--allow` | `stringArray` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
 | `--annotation` | `stringArray` | | Add annotation to the image |
 | `--attest` | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
 | `--build-arg` | `stringArray` | | Set build-time variables |

View File

@@ -7,6 +7,7 @@ Commands to work on build records
 | Name | Description |
 |:---------------------------------------|:-----------------------------------------------|
+| [`import`](buildx_history_import.md) | Import a build into Docker Desktop |
 | [`inspect`](buildx_history_inspect.md) | Inspect a build |
 | [`logs`](buildx_history_logs.md) | Print the logs of a build |
 | [`ls`](buildx_history_ls.md) | List build records |

View File

@@ -0,0 +1,16 @@
# docker buildx history import
<!---MARKER_GEN_START-->
Import a build into Docker Desktop
### Options
| Name | Type | Default | Description |
|:----------------|:--------------|:--------|:-----------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `-f`, `--file` | `stringArray` | | Import from a file path |
<!---MARKER_GEN_END-->

View File

@@ -56,6 +56,7 @@ type Driver struct {
 	restartPolicy container.RestartPolicy
 	env           []string
 	defaultLoad   bool
+	gpus          []container.DeviceRequest
 }
 func (d *Driver) IsMobyDriver() bool {
@@ -106,8 +107,9 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
 	}); err != nil {
 		// image pulling failed, check if it exists in local image store.
 		// if not, return pulling error. otherwise log it.
-		_, _, errInspect := d.DockerAPI.ImageInspectWithRaw(ctx, imageName)
-		if errInspect != nil {
+		_, errInspect := d.DockerAPI.ImageInspect(ctx, imageName)
+		found := errInspect == nil
+		if !found {
 			return err
 		}
 		l.Wrap("pulling failed, using local image "+imageName, func() error { return nil })
@@ -157,6 +159,9 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
 	if d.cpusetMems != "" {
 		hc.Resources.CpusetMems = d.cpusetMems
 	}
+	if len(d.gpus) > 0 && d.hasGPUCapability(ctx, cfg.Image, d.gpus) {
+		hc.Resources.DeviceRequests = d.gpus
+	}
 	if info, err := d.DockerAPI.Info(ctx); err == nil {
 		if info.CgroupDriver == "cgroupfs" {
 			// Place all buildkit containers inside this cgroup by default so limits can be attached
@@ -419,6 +424,7 @@ func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
 		driver.DockerExporter: true,
 		driver.CacheExport:    true,
 		driver.MultiPlatform:  true,
+		driver.DirectPush:     true,
 		driver.DefaultLoad:    d.defaultLoad,
 	}
 }
@@ -427,6 +433,31 @@ func (d *Driver) HostGatewayIP(ctx context.Context) (net.IP, error) {
 	return nil, errors.New("host-gateway is not supported by the docker-container driver")
 }
+// hasGPUCapability checks if docker daemon has GPU capability. We need to run
+// a dummy container with GPU device to check if the daemon has this capability
+// because there is no API to check it yet.
+func (d *Driver) hasGPUCapability(ctx context.Context, image string, gpus []container.DeviceRequest) bool {
+	cfg := &container.Config{
+		Image:      image,
+		Entrypoint: []string{"/bin/true"},
+	}
+	hc := &container.HostConfig{
+		NetworkMode: container.NetworkMode(container.IPCModeNone),
+		AutoRemove:  true,
+		Resources: container.Resources{
+			DeviceRequests: gpus,
+		},
+	}
+	resp, err := d.DockerAPI.ContainerCreate(ctx, cfg, hc, &network.NetworkingConfig{}, nil, "")
+	if err != nil {
+		return false
+	}
+	if err := d.DockerAPI.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
+		return false
+	}
+	return true
+}
 func demuxConn(c net.Conn) net.Conn {
 	pr, pw := io.Pipe()
 	// TODO: rewrite parser with Reader() to avoid goroutine switch

View File

@@ -51,6 +51,12 @@ func (f *factory) New(ctx context.Context, cfg driver.InitConfig) (driver.Driver
 		InitConfig:    cfg,
 		restartPolicy: rp,
 	}
+	var gpus dockeropts.GpuOpts
+	if err := gpus.Set("all"); err == nil {
+		if v := gpus.Value(); len(v) > 0 {
+			d.gpus = v
+		}
+	}
 	for k, v := range cfg.DriverOpts {
 		switch {
 		case k == "network":
View File

@@ -93,6 +93,7 @@ func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
 			driver.DockerExporter: useContainerdSnapshotter,
 			driver.CacheExport:    useContainerdSnapshotter,
 			driver.MultiPlatform:  useContainerdSnapshotter,
+			driver.DirectPush:     useContainerdSnapshotter,
 			driver.DefaultLoad:    true,
 		}
 	})

View File

@@ -7,5 +7,6 @@ const DockerExporter Feature = "Docker exporter"
 const CacheExport Feature = "Cache export"
 const MultiPlatform Feature = "Multi-platform build"
+const DirectPush Feature = "Direct push"
 const DefaultLoad Feature = "Automatically load images to the Docker Engine image store"

View File

@@ -35,10 +35,10 @@ func testEndpoint(server, defaultNamespace string, ca, cert, key []byte, skipTLS
 }
 var testStoreCfg = store.NewConfig(
-	func() interface{} {
-		return &map[string]interface{}{}
+	func() any {
+		return &map[string]any{}
 	},
-	store.EndpointTypeGetter(KubernetesEndpoint, func() interface{} { return &EndpointMeta{} }),
+	store.EndpointTypeGetter(KubernetesEndpoint, func() any { return &EndpointMeta{} }),
 )
 func TestSaveLoadContexts(t *testing.T) {
@@ -197,7 +197,7 @@ func checkClientConfig(t *testing.T, ep Endpoint, server, namespace string, ca,
 func save(s store.Writer, ep Endpoint, name string) error {
 	meta := store.Metadata{
-		Endpoints: map[string]interface{}{
+		Endpoints: map[string]any{
 			KubernetesEndpoint: ep.EndpointMeta,
 		},
 		Name: name,

View File

@@ -43,7 +43,7 @@ type Endpoint struct {
 func init() {
 	command.RegisterDefaultStoreEndpoints(
-		store.EndpointTypeGetter(KubernetesEndpoint, func() interface{} { return &EndpointMeta{} }),
+		store.EndpointTypeGetter(KubernetesEndpoint, func() any { return &EndpointMeta{} }),
 	)
 }
@@ -96,7 +96,7 @@ func (c *Endpoint) KubernetesConfig() clientcmd.ClientConfig {
 // ResolveDefault returns endpoint metadata for the default Kubernetes
 // endpoint, which is derived from the env-based kubeconfig.
-func (c *EndpointMeta) ResolveDefault() (interface{}, *store.EndpointTLSData, error) {
+func (c *EndpointMeta) ResolveDefault() (any, *store.EndpointTLSData, error) {
 	kubeconfig := os.Getenv("KUBECONFIG")
 	if kubeconfig == "" {
 		kubeconfig = filepath.Join(homedir.Get(), ".kube/config")

View File

@@ -238,6 +238,7 @@ func (d *Driver) Features(_ context.Context) map[driver.Feature]bool {
 		driver.DockerExporter: d.DockerAPI != nil,
 		driver.CacheExport:    true,
 		driver.MultiPlatform:  true, // Untested (needs multiple Driver instances)
+		driver.DirectPush:     true,
 		driver.DefaultLoad:    d.defaultLoad,
 	}
 }

View File

@@ -90,7 +90,7 @@ func ListRunningPods(ctx context.Context, client clientcorev1.PodInterface, depl
 	for i := range podList.Items {
 		pod := &podList.Items[i]
 		if pod.Status.Phase == corev1.PodRunning {
-			logrus.Debugf("pod runnning: %q", pod.Name)
+			logrus.Debugf("pod running: %q", pod.Name)
 			runningPods = append(runningPods, pod)
 		}
 	}

View File

@@ -25,7 +25,7 @@ func GenerateNodeName(builderName string, txn *store.Txn) (string, error) {
 	}
 	var name string
-	for i := 0; i < 6; i++ {
+	for range 6 {
 		name, err = randomName()
 		if err != nil {
 			return "", err

View File

@@ -164,6 +164,7 @@ func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
 		driver.DockerExporter: true,
 		driver.CacheExport:    true,
 		driver.MultiPlatform:  true,
+		driver.DirectPush:     true,
 		driver.DefaultLoad:    d.defaultLoad,
 	}
 }

go.mod
View File

@@ -6,9 +6,9 @@ require (
 	github.com/Masterminds/semver/v3 v3.2.1
 	github.com/Microsoft/go-winio v0.6.2
 	github.com/aws/aws-sdk-go-v2/config v1.27.27
-	github.com/compose-spec/compose-go/v2 v2.4.7
+	github.com/compose-spec/compose-go/v2 v2.4.8
 	github.com/containerd/console v1.0.4
-	github.com/containerd/containerd/v2 v2.0.2
+	github.com/containerd/containerd/v2 v2.0.3
 	github.com/containerd/continuity v0.4.5
 	github.com/containerd/errdefs v1.0.0
 	github.com/containerd/log v0.1.0
@@ -17,9 +17,9 @@ require (
 	github.com/creack/pty v1.1.24
 	github.com/davecgh/go-spew v1.1.1
 	github.com/distribution/reference v0.6.0
-	github.com/docker/cli v27.5.1+incompatible
+	github.com/docker/cli v28.0.1+incompatible
 	github.com/docker/cli-docs-tool v0.9.0
-	github.com/docker/docker v27.5.1+incompatible
+	github.com/docker/docker v28.0.1+incompatible
 	github.com/docker/go-units v0.5.0
 	github.com/gofrs/flock v0.12.1
 	github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
@@ -29,7 +29,7 @@ require (
 	github.com/hashicorp/hcl/v2 v2.23.0
 	github.com/in-toto/in-toto-golang v0.5.0
 	github.com/mitchellh/hashstructure/v2 v2.0.2
-	github.com/moby/buildkit v0.20.0-rc2
+	github.com/moby/buildkit v0.20.1
 	github.com/moby/sys/mountinfo v0.7.2
 	github.com/moby/sys/signal v0.7.1
 	github.com/morikuni/aec v1.0.0

go.sum
View File

@@ -77,16 +77,16 @@ github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004 h1:lkAMpLVBDaj17e
 github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
 github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE=
 github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4=
-github.com/compose-spec/compose-go/v2 v2.4.7 h1:WNpz5bIbKG+G+w9pfu72B1ZXr+Og9jez8TMEo8ecXPk=
-github.com/compose-spec/compose-go/v2 v2.4.7/go.mod h1:lFN0DrMxIncJGYAXTfWuajfwj5haBJqrBkarHcnjJKc=
+github.com/compose-spec/compose-go/v2 v2.4.8 h1:7Myl8wDRl/4mRz77S+eyDJymGGEHu0diQdGSSeyq90A=
+github.com/compose-spec/compose-go/v2 v2.4.8/go.mod h1:lFN0DrMxIncJGYAXTfWuajfwj5haBJqrBkarHcnjJKc=
 github.com/containerd/cgroups/v3 v3.0.5 h1:44na7Ud+VwyE7LIoJ8JTNQOa549a8543BmzaJHo6Bzo=
 github.com/containerd/cgroups/v3 v3.0.5/go.mod h1:SA5DLYnXO8pTGYiAHXz94qvLQTKfVM5GEVisn4jpins=
 github.com/containerd/console v1.0.4 h1:F2g4+oChYvBTsASRTz8NP6iIAi97J3TtSAsLbIFn4ro=
 github.com/containerd/console v1.0.4/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
 github.com/containerd/containerd/api v1.8.0 h1:hVTNJKR8fMc/2Tiw60ZRijntNMd1U+JVMyTRdsD2bS0=
 github.com/containerd/containerd/api v1.8.0/go.mod h1:dFv4lt6S20wTu/hMcP4350RL87qPWLVa/OHOwmmdnYc=
-github.com/containerd/containerd/v2 v2.0.2 h1:GmH/tRBlTvrXOLwSpWE2vNAm8+MqI6nmxKpKBNKY8Wc=
-github.com/containerd/containerd/v2 v2.0.2/go.mod h1:wIqEvQ/6cyPFUGJ5yMFanspPabMLor+bF865OHvNTTI=
+github.com/containerd/containerd/v2 v2.0.3 h1:zBKgwgZsuu+LPCMzCLgA4sC4MiZzZ59ZT31XkmiISQM=
+github.com/containerd/containerd/v2 v2.0.3/go.mod h1:5j9QUUaV/cy9ZeAx4S+8n9ffpf+iYnEj4jiExgcbuLY=
 github.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4=
github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE= github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI= github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
@@ -122,15 +122,15 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU= github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk= github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/cli v27.5.1+incompatible h1:JB9cieUT9YNiMITtIsguaN55PLOHhBSz3LKVc6cqWaY= github.com/docker/cli v28.0.1+incompatible h1:g0h5NQNda3/CxIsaZfH4Tyf6vpxFth7PYl3hgCPOKzs=
github.com/docker/cli v27.5.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= github.com/docker/cli v28.0.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli-docs-tool v0.9.0 h1:CVwQbE+ZziwlPqrJ7LRyUF6GvCA+6gj7MTCsayaK9t0= github.com/docker/cli-docs-tool v0.9.0 h1:CVwQbE+ZziwlPqrJ7LRyUF6GvCA+6gj7MTCsayaK9t0=
github.com/docker/cli-docs-tool v0.9.0/go.mod h1:ClrwlNW+UioiRyH9GiAOe1o3J/TsY3Tr1ipoypjAUtc= github.com/docker/cli-docs-tool v0.9.0/go.mod h1:ClrwlNW+UioiRyH9GiAOe1o3J/TsY3Tr1ipoypjAUtc=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk= github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v27.5.1+incompatible h1:4PYU5dnBYqRQi0294d1FBECqT9ECWeQAIfE8q4YnPY8= github.com/docker/docker v28.0.1+incompatible h1:FCHjSRdXhNRFjlHMTv4jUNlIBbTeRjrWfeFuJp7jpo0=
github.com/docker/docker v27.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker v28.0.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo= github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=
github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M= github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0= github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
@@ -152,8 +152,6 @@ github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fvbommel/sortorder v1.0.1 h1:dSnXLt4mJYH25uDDGa3biZNQsozaUWDSWeKJ0qqFfzE= github.com/fvbommel/sortorder v1.0.1 h1:dSnXLt4mJYH25uDDGa3biZNQsozaUWDSWeKJ0qqFfzE=
github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0= github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
@@ -297,8 +295,8 @@ github.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/z
github.com/mitchellh/mapstructure v0.0.0-20150613213606-2caf8efc9366/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v0.0.0-20150613213606-2caf8efc9366/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/buildkit v0.20.0-rc2 h1:QjACghvG0pSAp7dk9aQMYWioDEOljDWyyoUjyg35qfg= github.com/moby/buildkit v0.20.1 h1:sT0ZXhhNo5rVbMcYfgttma3TdUHfO5JjFA0UAL8p9fY=
github.com/moby/buildkit v0.20.0-rc2/go.mod h1:kMXf90l/f3zygRK8bYbyetfyzoJYntb6Bpi2VsLfXgQ= github.com/moby/buildkit v0.20.1/go.mod h1:Rq9nB/fJImdk6QeM0niKtOHJqwKeYMrK847hTTDVuA4=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0= github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo= github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg= github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
@@ -351,8 +349,6 @@ github.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQ
github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM= github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM=
github.com/opencontainers/runtime-spec v1.2.0 h1:z97+pHb3uELt/yiAWD691HNHQIF07bE7dzrbT927iTk= github.com/opencontainers/runtime-spec v1.2.0 h1:z97+pHb3uELt/yiAWD691HNHQIF07bE7dzrbT927iTk=
github.com/opencontainers/runtime-spec v1.2.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= github.com/opencontainers/runtime-spec v1.2.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626 h1:DmNGcqH3WDbV5k8OJ+esPWbqUOX5rMLR2PMvziDMJi0=
github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626/go.mod h1:BRHJJd0E+cx42OybVYSgUvZmU0B8P9gZuRXlZUP7TKI=
github.com/opencontainers/selinux v1.11.1 h1:nHFvthhM0qY8/m+vfhJylliSshm8G1jJ2jDMcgULaH8= github.com/opencontainers/selinux v1.11.1 h1:nHFvthhM0qY8/m+vfhJylliSshm8G1jJ2jDMcgULaH8=
github.com/opencontainers/selinux v1.11.1/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec= github.com/opencontainers/selinux v1.11.1/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU= github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
@@ -437,8 +433,6 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/theupdateframework/notary v0.7.0 h1:QyagRZ7wlSpjT5N2qQAh/pN+DVqgekv4DzbAiAiEL3c= github.com/theupdateframework/notary v0.7.0 h1:QyagRZ7wlSpjT5N2qQAh/pN+DVqgekv4DzbAiAiEL3c=
github.com/theupdateframework/notary v0.7.0/go.mod h1:c9DRxcmhHmVLDay4/2fUYdISnHqbFDGRSlXPO0AhYWw= github.com/theupdateframework/notary v0.7.0/go.mod h1:c9DRxcmhHmVLDay4/2fUYdISnHqbFDGRSlXPO0AhYWw=
github.com/tonistiigi/dchapes-mode v0.0.0-20241001053921-ca0759fec205 h1:eUk79E1w8yMtXeHSzjKorxuC8qJOnyXQnLaJehxpJaI= github.com/tonistiigi/dchapes-mode v0.0.0-20241001053921-ca0759fec205 h1:eUk79E1w8yMtXeHSzjKorxuC8qJOnyXQnLaJehxpJaI=
@@ -632,7 +626,3 @@ sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+s
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
tags.cncf.io/container-device-interface v0.8.0 h1:8bCFo/g9WODjWx3m6EYl3GfUG31eKJbaggyBDxEldRc=
tags.cncf.io/container-device-interface v0.8.0/go.mod h1:Apb7N4VdILW0EVdEMRYXIDVRZfNJZ+kmEUss2kRRQ6Y=
tags.cncf.io/container-device-interface/specs-go v0.8.0 h1:QYGFzGxvYK/ZLMrjhvY0RjpUavIn4KcmRmVP/JjdBTA=
tags.cncf.io/container-device-interface/specs-go v0.8.0/go.mod h1:BhJIkjjPh4qpys+qm4DAYtUyryaTDg9zris+AczXyws=


@@ -9,10 +9,13 @@ Vagrant.configure("2") do |config|
config.vm.provision "init", type: "shell", run: "once" do |sh| config.vm.provision "init", type: "shell", run: "once" do |sh|
sh.inline = <<~SHELL sh.inline = <<~SHELL
set -x
pkg bootstrap pkg bootstrap
pkg install -y go123 git pkg install -y git
ln -s /usr/local/bin/go123 /usr/local/bin/go
go install gotest.tools/gotestsum@#{ENV['GOTESTSUM_VERSION']} fetch https://go.dev/dl/go#{ENV['GO_VERSION']}.freebsd-amd64.tar.gz
tar -C /usr/local -xzf go#{ENV['GO_VERSION']}.freebsd-amd64.tar.gz
ln -s /usr/local/go/bin/go /usr/local/bin/go
SHELL SHELL
end end
end end

hack/Vagrantfile.netbsd

@@ -0,0 +1,32 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/netbsd9"
config.vm.boot_timeout = 900
config.vm.synced_folder ".", "/vagrant", type: "rsync"
config.ssh.keep_alive = true
config.vm.provision "init", type: "shell", run: "once" do |sh|
sh.inline = <<~SHELL
set -x
mkdir -p /var/tmp
chmod 1777 /var/tmp
pkgin -y install git mozilla-rootcerts
mozilla-rootcerts install
ftp https://go.dev/dl/go#{ENV['GO_VERSION']}.netbsd-amd64.tar.gz
tar -C /var/tmp -xzf go#{ENV['GO_VERSION']}.netbsd-amd64.tar.gz
cat << 'EOF' > /usr/bin/go-wrapper
#!/bin/sh
export TMPDIR="/var/tmp"
exec /var/tmp/go/bin/go "$@"
EOF
chmod +x /usr/bin/go-wrapper
ln -s /usr/bin/go-wrapper /usr/bin/go
SHELL
end
end


@@ -10,12 +10,12 @@ Vagrant.configure("2") do |config|
config.vm.provision "init", type: "shell", run: "once" do |sh| config.vm.provision "init", type: "shell", run: "once" do |sh|
sh.inline = <<~SHELL sh.inline = <<~SHELL
set -x
pkg_add -x git pkg_add -x git
ftp https://go.dev/dl/go1.23.3.openbsd-amd64.tar.gz ftp https://go.dev/dl/go#{ENV['GO_VERSION']}.openbsd-amd64.tar.gz
tar -C /usr/local -xzf go1.23.3.openbsd-amd64.tar.gz tar -C /usr/local -xzf go#{ENV['GO_VERSION']}.openbsd-amd64.tar.gz
ln -s /usr/local/go/bin/go /usr/local/bin/go ln -s /usr/local/go/bin/go /usr/local/bin/go
go install gotest.tools/gotestsum@#{ENV['GOTESTSUM_VERSION']}
SHELL SHELL
end end
end end


@@ -5,9 +5,10 @@ ARG ALPINE_VERSION=3.21
ARG XX_VERSION=1.6.1 ARG XX_VERSION=1.6.1
ARG GOLANGCI_LINT_VERSION=1.62.0 ARG GOLANGCI_LINT_VERSION=1.62.0
ARG GOPLS_VERSION=v0.26.0 # v0.31 requires go1.24
ARG GOPLS_VERSION=v0.30.0
# disabled: deprecated unusedvariable simplifyrange # disabled: deprecated unusedvariable simplifyrange
ARG GOPLS_ANALYZERS="embeddirective fillreturns infertypeargs nonewvars noresultvalues simplifycompositelit simplifyslice undeclaredname unusedparams useany" ARG GOPLS_ANALYZERS="embeddirective fillreturns hostport infertypeargs modernize nonewvars noresultvalues simplifycompositelit simplifyslice unusedparams yield"
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx


@@ -6,6 +6,7 @@ import (
"fmt" "fmt"
"os" "os"
"path/filepath" "path/filepath"
"strconv"
"sync" "sync"
"github.com/docker/buildx/util/confutil" "github.com/docker/buildx/util/confutil"
@@ -14,6 +15,7 @@ import (
) )
const ( const (
version = 2
refsDir = "refs" refsDir = "refs"
groupDir = "__group__" groupDir = "__group__"
) )
@@ -31,12 +33,8 @@ type State struct {
} }
type StateGroup struct { type StateGroup struct {
// Definition is the raw representation of the group (bake definition)
Definition []byte
// Targets are the targets invoked // Targets are the targets invoked
Targets []string `json:",omitempty"` Targets []string `json:",omitempty"`
// Inputs are the user inputs (bake overrides)
Inputs []string `json:",omitempty"`
// Refs are used to track all the refs that belong to the same group // Refs are used to track all the refs that belong to the same group
Refs []string Refs []string
} }
@@ -52,9 +50,7 @@ func New(cfg *confutil.Config) (*LocalState, error) {
if err := cfg.MkdirAll(refsDir, 0700); err != nil { if err := cfg.MkdirAll(refsDir, 0700); err != nil {
return nil, err return nil, err
} }
return &LocalState{ return &LocalState{cfg: cfg}, nil
cfg: cfg,
}, nil
} }
func (ls *LocalState) ReadRef(builderName, nodeName, id string) (*State, error) { func (ls *LocalState) ReadRef(builderName, nodeName, id string) (*State, error) {
@@ -87,8 +83,12 @@ func (ls *LocalState) SaveRef(builderName, nodeName, id string, st State) error
return ls.cfg.AtomicWriteFile(filepath.Join(refDir, id), dt, 0644) return ls.cfg.AtomicWriteFile(filepath.Join(refDir, id), dt, 0644)
} }
func (ls *LocalState) GroupDir() string {
return filepath.Join(ls.cfg.Dir(), refsDir, groupDir)
}
func (ls *LocalState) ReadGroup(id string) (*StateGroup, error) { func (ls *LocalState) ReadGroup(id string) (*StateGroup, error) {
dt, err := os.ReadFile(filepath.Join(ls.cfg.Dir(), refsDir, groupDir, id)) dt, err := os.ReadFile(filepath.Join(ls.GroupDir(), id))
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -208,7 +208,7 @@ func (ls *LocalState) removeGroup(id string) error {
if id == "" { if id == "" {
return errors.Errorf("group ref empty") return errors.Errorf("group ref empty")
} }
f := filepath.Join(ls.cfg.Dir(), refsDir, groupDir, id) f := filepath.Join(ls.GroupDir(), id)
if _, err := os.Lstat(f); err != nil { if _, err := os.Lstat(f); err != nil {
if !os.IsNotExist(err) { if !os.IsNotExist(err) {
return err return err
@@ -230,3 +230,16 @@ func (ls *LocalState) validate(builderName, nodeName, id string) error {
} }
return nil return nil
} }
func (ls *LocalState) readVersion() int {
if vdt, err := os.ReadFile(filepath.Join(ls.cfg.Dir(), refsDir, "version")); err == nil {
if v, err := strconv.Atoi(string(vdt)); err == nil {
return v
}
}
return 1
}
func (ls *LocalState) writeVersion(version int) error {
return ls.cfg.AtomicWriteFile(filepath.Join(refsDir, "version"), []byte(strconv.Itoa(version)), 0600)
}


@@ -68,9 +68,7 @@ var (
testStateGroupID = "kvqs0sgly2rmitz84r25u9qd0" testStateGroupID = "kvqs0sgly2rmitz84r25u9qd0"
testStateGroup = StateGroup{ testStateGroup = StateGroup{
Definition: []byte(`{"group":{"default":{"targets":["pre-checkin"]},"pre-checkin":{"targets":["vendor-update","format","build"]}},"target":{"build":{"context":".","dockerfile":"dev.Dockerfile","target":"build-update","platforms":["linux/amd64"],"output":["."]},"format":{"context":".","dockerfile":"dev.Dockerfile","target":"format-update","platforms":["linux/amd64"],"output":["."]},"vendor-update":{"context":".","dockerfile":"dev.Dockerfile","target":"vendor-update","platforms":["linux/amd64"],"output":["."]}}}`),
Targets: []string{"pre-checkin"}, Targets: []string{"pre-checkin"},
Inputs: []string{"*.platform=linux/amd64"},
Refs: []string{"builder/builder0/hx2qf1w11qvz1x3k471c5i8xw", "builder/builder0/968zj0g03jmlx0s8qslnvh6rl", "builder/builder0/naf44f9i1710lf7y12lv5hb1z"}, Refs: []string{"builder/builder0/hx2qf1w11qvz1x3k471c5i8xw", "builder/builder0/968zj0g03jmlx0s8qslnvh6rl", "builder/builder0/naf44f9i1710lf7y12lv5hb1z"},
} }

localstate/migrate.go

@@ -0,0 +1,56 @@
package localstate
import (
"encoding/json"
"os"
"path/filepath"
"github.com/pkg/errors"
)
func (ls *LocalState) MigrateIfNeeded() error {
currentVersion := ls.readVersion()
if currentVersion == version {
return nil
}
migrations := map[int]func(*LocalState) error{
2: (*LocalState).migration2,
}
for v := currentVersion + 1; v <= version; v++ {
migration, found := migrations[v]
if !found {
return errors.Errorf("localstate migration v%d not found", v)
}
if err := migration(ls); err != nil {
return errors.Wrapf(err, "localstate migration v%d failed", v)
}
}
return ls.writeVersion(version)
}
func (ls *LocalState) migration2() error {
return filepath.Walk(ls.GroupDir(), func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
dt, err := os.ReadFile(path)
if err != nil {
return err
}
var stg StateGroup
if err := json.Unmarshal(dt, &stg); err != nil {
return err
}
mdt, err := json.Marshal(stg)
if err != nil {
return err
}
if err := os.WriteFile(path, mdt, 0600); err != nil {
return err
}
return nil
})
}


@@ -4,6 +4,7 @@ import (
"context" "context"
"fmt" "fmt"
"io" "io"
"slices"
"github.com/docker/buildx/monitor/types" "github.com/docker/buildx/monitor/types"
"github.com/pkg/errors" "github.com/pkg/errors"
@@ -50,14 +51,7 @@ func (cm *AttachCmd) Exec(ctx context.Context, args []string) error {
if err != nil { if err != nil {
return errors.Errorf("failed to get the list of sessions: %v", err) return errors.Errorf("failed to get the list of sessions: %v", err)
} }
found := false if !slices.Contains(refs, ref) {
for _, s := range refs {
if s == ref {
found = true
break
}
}
if !found {
return errors.Errorf("unknown ID: %q", ref) return errors.Errorf("unknown ID: %q", ref)
} }
cm.m.Detach() // Finish existing attach cm.m.Detach() // Finish existing attach


@@ -2,6 +2,7 @@ package store
import ( import (
"fmt" "fmt"
"slices"
"time" "time"
"github.com/containerd/platforms" "github.com/containerd/platforms"
@@ -44,7 +45,7 @@ func (ng *NodeGroup) Leave(name string) error {
if len(ng.Nodes) == 1 { if len(ng.Nodes) == 1 {
return errors.Errorf("can not leave last node, do you want to rm instance instead?") return errors.Errorf("can not leave last node, do you want to rm instance instead?")
} }
ng.Nodes = append(ng.Nodes[:i], ng.Nodes[i+1:]...) ng.Nodes = slices.Delete(ng.Nodes, i, i+1)
return nil return nil
} }


@@ -39,7 +39,7 @@ func ValidateName(s string) (string, error) {
func GenerateName(txn *Txn) (string, error) { func GenerateName(txn *Txn) (string, error) {
var name string var name string
for i := 0; i < 6; i++ { for i := range 6 {
name = namesgenerator.GetRandomName(i) name = namesgenerator.GetRandomName(i)
if _, err := txn.NodeGroupByName(name); err != nil { if _, err := txn.NodeGroupByName(name); err != nil {
if !os.IsNotExist(errors.Cause(err)) { if !os.IsNotExist(errors.Cause(err)) {


@@ -38,6 +38,7 @@ func bakeCmd(sb integration.Sandbox, opts ...cmdOpt) (string, error) {
var bakeTests = []func(t *testing.T, sb integration.Sandbox){ var bakeTests = []func(t *testing.T, sb integration.Sandbox){
testBakePrint, testBakePrint,
testBakePrintSensitive, testBakePrintSensitive,
testBakePrintOverrideEmpty,
testBakeLocal, testBakeLocal,
testBakeLocalMulti, testBakeLocalMulti,
testBakeRemote, testBakeRemote,
@@ -286,6 +287,47 @@ RUN echo "Hello ${HELLO}"
} }
} }
func testBakePrintOverrideEmpty(t *testing.T, sb integration.Sandbox) {
dockerfile := []byte(`
FROM scratch
COPY foo /foo
`)
bakefile := []byte(`
target "default" {
cache-to = ["type=gha,mode=min,scope=integration-tests"]
}
`)
dir := tmpdir(
t,
fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
fstest.CreateFile("Dockerfile", dockerfile, 0600),
fstest.CreateFile("foo", []byte("foo"), 0600),
)
cmd := buildxCmd(sb, withDir(dir), withArgs("bake", "--print", "--set", "*.cache-to="))
stdout := bytes.Buffer{}
stderr := bytes.Buffer{}
cmd.Stdout = &stdout
cmd.Stderr = &stderr
require.NoError(t, cmd.Run(), stdout.String(), stderr.String())
require.JSONEq(t, `{
"group": {
"default": {
"targets": [
"default"
]
}
},
"target": {
"default": {
"context": ".",
"dockerfile": "Dockerfile"
}
}
}`, stdout.String())
}
func testBakeLocal(t *testing.T, sb integration.Sandbox) { func testBakeLocal(t *testing.T, sb integration.Sandbox) {
dockerfile := []byte(` dockerfile := []byte(`
FROM scratch FROM scratch
@@ -871,6 +913,7 @@ target "default" {
}) })
} }
} }
func testBakeSetNonExistingOutsideNoParallel(t *testing.T, sb integration.Sandbox) { func testBakeSetNonExistingOutsideNoParallel(t *testing.T, sb integration.Sandbox) {
for _, ent := range []bool{true, false} { for _, ent := range []bool{true, false} {
t.Run(fmt.Sprintf("ent=%v", ent), func(t *testing.T) { t.Run(fmt.Sprintf("ent=%v", ent), func(t *testing.T) {
@@ -973,11 +1016,11 @@ FROM scratch
COPY foo /foo COPY foo /foo
`) `)
destDir := t.TempDir() destDir := t.TempDir()
bakefile := []byte(fmt.Sprintf(` bakefile := fmt.Appendf(nil, `
target "default" { target "default" {
output = ["type=local,dest=%s/not/exists"] output = ["type=local,dest=%s/not/exists"]
} }
`, destDir)) `, destDir)
dir := tmpdir( dir := tmpdir(
t, t,
fstest.CreateFile("docker-bake.hcl", bakefile, 0600), fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
@@ -1007,11 +1050,11 @@ FROM scratch
COPY foo /foo COPY foo /foo
`) `)
destDir := t.TempDir() destDir := t.TempDir()
bakefile := []byte(fmt.Sprintf(` bakefile := fmt.Appendf(nil, `
target "default" { target "default" {
output = ["type=local,dest=%s"] output = ["type=local,dest=%s"]
} }
`, destDir)) `, destDir)
dir := tmpdir( dir := tmpdir(
t, t,
fstest.CreateFile("docker-bake.hcl", bakefile, 0600), fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
@@ -1108,11 +1151,11 @@ COPY Dockerfile /foo
keyDir := t.TempDir() keyDir := t.TempDir()
err := writeTempPrivateKey(filepath.Join(keyDir, "id_rsa")) err := writeTempPrivateKey(filepath.Join(keyDir, "id_rsa"))
require.NoError(t, err) require.NoError(t, err)
bakefile := []byte(fmt.Sprintf(` bakefile := fmt.Appendf(nil, `
target "default" { target "default" {
ssh = ["key=%s"] ssh = ["key=%s"]
} }
`, filepath.Join(keyDir, "id_rsa"))) `, filepath.Join(keyDir, "id_rsa"))
dir := tmpdir( dir := tmpdir(
t, t,
fstest.CreateFile("docker-bake.hcl", bakefile, 0600), fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
@@ -1272,7 +1315,7 @@ target "default" {
type mdT struct { type mdT struct {
Default struct { Default struct {
BuildRef string `json:"buildx.build.ref"` BuildRef string `json:"buildx.build.ref"`
BuildProvenance map[string]interface{} `json:"buildx.build.provenance"` BuildProvenance map[string]any `json:"buildx.build.provenance"`
} `json:"default"` } `json:"default"`
} }
var md mdT var md mdT


@@ -805,7 +805,7 @@ func buildMetadataProvenance(t *testing.T, sb integration.Sandbox, metadataMode
type mdT struct { type mdT struct {
BuildRef string `json:"buildx.build.ref"` BuildRef string `json:"buildx.build.ref"`
BuildProvenance map[string]interface{} `json:"buildx.build.provenance"` BuildProvenance map[string]any `json:"buildx.build.provenance"`
} }
var md mdT var md mdT
err = json.Unmarshal(dt, &md) err = json.Unmarshal(dt, &md)


@@ -50,7 +50,7 @@ func withDir(dir string) cmdOpt {
func buildxCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd { func buildxCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
cmd := exec.Command("buildx") cmd := exec.Command("buildx")
cmd.Env = append([]string{}, os.Environ()...) cmd.Env = os.Environ()
for _, opt := range opts { for _, opt := range opts {
opt(cmd) opt(cmd)
} }
@@ -77,7 +77,7 @@ func buildxCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
func dockerCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd { func dockerCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
cmd := exec.Command("docker") cmd := exec.Command("docker")
cmd.Env = append([]string{}, os.Environ()...) cmd.Env = os.Environ()
for _, opt := range opts { for _, opt := range opts {
opt(cmd) opt(cmd)
} }
@@ -214,7 +214,7 @@ func skipNoCompatBuildKit(t *testing.T, sb integration.Sandbox, constraint strin
} }
} }
func ptrstr(s interface{}) *string { func ptrstr(s any) *string {
var n *string var n *string
if reflect.ValueOf(s).Kind() == reflect.String { if reflect.ValueOf(s).Kind() == reflect.String {
ss := s.(string) ss := s.(string)


@@ -45,7 +45,7 @@ func testRmMulti(t *testing.T, sb integration.Sandbox) {
} }
var builderNames []string var builderNames []string
for i := 0; i < 3; i++ { for range 3 {
out, err := createCmd(sb, withArgs("--driver", "docker-container")) out, err := createCmd(sb, withArgs("--driver", "docker-container"))
require.NoError(t, err, out) require.NoError(t, err, out)
builderName := strings.TrimSpace(out) builderName := strings.TrimSpace(out)


@@ -2,6 +2,7 @@ package workers
import ( import (
"os" "os"
"slices"
"strings" "strings"
"github.com/moby/buildkit/util/testutil/integration" "github.com/moby/buildkit/util/testutil/integration"
@@ -49,23 +50,14 @@ func (s *backend) ExtraEnv() []string {
func (s backend) Supports(feature string) bool { func (s backend) Supports(feature string) bool {
if enabledFeatures := os.Getenv("BUILDKIT_TEST_ENABLE_FEATURES"); enabledFeatures != "" { if enabledFeatures := os.Getenv("BUILDKIT_TEST_ENABLE_FEATURES"); enabledFeatures != "" {
for _, enabledFeature := range strings.Split(enabledFeatures, ",") { if slices.Contains(strings.Split(enabledFeatures, ","), feature) {
if feature == enabledFeature {
return true return true
} }
} }
}
if disabledFeatures := os.Getenv("BUILDKIT_TEST_DISABLE_FEATURES"); disabledFeatures != "" { if disabledFeatures := os.Getenv("BUILDKIT_TEST_DISABLE_FEATURES"); disabledFeatures != "" {
for _, disabledFeature := range strings.Split(disabledFeatures, ",") { if slices.Contains(strings.Split(disabledFeatures, ","), feature) {
if feature == disabledFeature {
return false return false
} }
} }
} return !slices.Contains(s.unsupportedFeatures, feature)
for _, unsupportedFeature := range s.unsupportedFeatures {
if feature == unsupportedFeature {
return false
}
}
return true
} }
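The three manual membership loops in `Supports` collapse into `slices.Contains` calls with unchanged behavior: an explicit enable wins, then an explicit disable, then the worker's own unsupported list. A sketch of the same pattern (function name and feature strings are ours):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// supports mirrors the refactored precedence: enabled > disabled > unsupported.
func supports(feature, enabled, disabled string, unsupported []string) bool {
	if enabled != "" && slices.Contains(strings.Split(enabled, ","), feature) {
		return true
	}
	if disabled != "" && slices.Contains(strings.Split(disabled, ","), feature) {
		return false
	}
	return !slices.Contains(unsupported, feature)
}

func main() {
	fmt.Println(supports("cache", "cache,frontend", "", nil)) // true
	fmt.Println(supports("cache", "", "cache", nil))          // false
	fmt.Println(supports("cache", "", "", []string{"cache"})) // false
}
```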


@@ -90,7 +90,7 @@ func (a *Attest) ToPB() *controllerapi.Attest {
} }
func (a *Attest) MarshalJSON() ([]byte, error) { func (a *Attest) MarshalJSON() ([]byte, error) {
m := make(map[string]interface{}, len(a.Attrs)+2) m := make(map[string]any, len(a.Attrs)+2)
for k, v := range a.Attrs { for k, v := range a.Attrs {
m[k] = v m[k] = v
} }
@@ -102,7 +102,7 @@ func (a *Attest) MarshalJSON() ([]byte, error) {
} }
func (a *Attest) UnmarshalJSON(data []byte) error { func (a *Attest) UnmarshalJSON(data []byte) error {
var m map[string]interface{} var m map[string]any
if err := json.Unmarshal(data, &m); err != nil { if err := json.Unmarshal(data, &m); err != nil {
return err return err
} }
@@ -148,9 +148,8 @@ func (a *Attest) UnmarshalText(text []byte) error {
if !ok { if !ok {
return errors.Errorf("invalid value %s", field) return errors.Errorf("invalid value %s", field)
} }
key = strings.TrimSpace(strings.ToLower(key))
switch key { switch strings.TrimSpace(strings.ToLower(key)) {
case "type": case "type":
a.Type = value a.Type = value
case "disabled": case "disabled":


@@ -13,16 +13,21 @@ func TestAttests(t *testing.T) {
 		attests := Attests{
 			{Type: "provenance", Attrs: map[string]string{"mode": "max"}},
 			{Type: "sbom", Disabled: true},
+			{Type: "sbom", Attrs: map[string]string{
+				"generator": "scanner",
+				"ENV1":      `"foo,bar"`,
+				"Env2":      "hello",
+			}},
 		}
 
-		expected := `[{"type":"provenance","mode":"max"},{"type":"sbom","disabled":true}]`
+		expected := `[{"type":"provenance","mode":"max"},{"type":"sbom","disabled":true},{"ENV1":"\"foo,bar\"","Env2":"hello","generator":"scanner","type":"sbom"}]`
 		actual, err := json.Marshal(attests)
 		require.NoError(t, err)
 		require.JSONEq(t, expected, string(actual))
 	})
 
 	t.Run("UnmarshalJSON", func(t *testing.T) {
-		in := `[{"type":"provenance","mode":"max"},{"type":"sbom","disabled":true}]`
+		in := `[{"type":"provenance","mode":"max"},{"type":"sbom","disabled":true},{"ENV1":"\"foo,bar\"","Env2":"hello","generator":"scanner","type":"sbom"}]`
 		var actual Attests
 		err := json.Unmarshal([]byte(in), &actual)
@@ -31,6 +36,11 @@ func TestAttests(t *testing.T) {
 		expected := Attests{
 			{Type: "provenance", Attrs: map[string]string{"mode": "max"}},
 			{Type: "sbom", Disabled: true, Attrs: map[string]string{}},
+			{Type: "sbom", Disabled: false, Attrs: map[string]string{
+				"generator": "scanner",
+				"ENV1":      `"foo,bar"`,
+				"Env2":      "hello",
+			}},
 		}
 		require.Equal(t, expected, actual)
 	})
@@ -41,7 +51,14 @@ func TestAttests(t *testing.T) {
 			"type": cty.StringVal("provenance"),
 			"mode": cty.StringVal("max"),
 		}),
+		cty.ObjectVal(map[string]cty.Value{
+			"type":      cty.StringVal("sbom"),
+			"generator": cty.StringVal("scan"),
+			"ENV1":      cty.StringVal(`foo,bar`),
+			"Env2":      cty.StringVal(`hello`),
+		}),
 		cty.StringVal("type=sbom,disabled=true"),
+		cty.StringVal(`type=sbom,generator=scan,"FOO=bar,baz",Hello=World`),
 	})
 	var actual Attests
@@ -50,7 +67,17 @@ func TestAttests(t *testing.T) {
 	expected := Attests{
 		{Type: "provenance", Attrs: map[string]string{"mode": "max"}},
+		{Type: "sbom", Attrs: map[string]string{
+			"generator": "scan",
+			"ENV1":      "foo,bar",
+			"Env2":      "hello",
+		}},
 		{Type: "sbom", Disabled: true, Attrs: map[string]string{}},
+		{Type: "sbom", Attrs: map[string]string{
+			"generator": "scan",
+			"FOO":       "bar,baz",
+			"Hello":     "World",
+		}},
 	}
 	require.Equal(t, expected, actual)
 })
@@ -59,6 +86,11 @@ func TestAttests(t *testing.T) {
 	attests := Attests{
 		{Type: "provenance", Attrs: map[string]string{"mode": "max"}},
 		{Type: "sbom", Disabled: true},
+		{Type: "sbom", Attrs: map[string]string{
+			"generator": "scan",
+			"ENV1":      `"foo,bar"`,
+			"Env2":      "hello",
+		}},
 	}
 	actual := attests.ToCtyValue()
@@ -71,6 +103,12 @@ func TestAttests(t *testing.T) {
 		"type":     cty.StringVal("sbom"),
 		"disabled": cty.StringVal("true"),
 	}),
+	cty.MapVal(map[string]cty.Value{
+		"type":      cty.StringVal("sbom"),
+		"generator": cty.StringVal("scan"),
+		"ENV1":      cty.StringVal(`"foo,bar"`),
+		"Env2":      cty.StringVal("hello"),
+	}),
 })
 result := actual.Equals(expected)


@@ -150,7 +150,7 @@ func (e *CacheOptionsEntry) UnmarshalText(text []byte) error {
 	return e.validate(text)
 }
 
-func (e *CacheOptionsEntry) validate(gv interface{}) error {
+func (e *CacheOptionsEntry) validate(gv any) error {
 	if e.Type == "" {
 		var text []byte
 		switch gv := gv.(type) {
@@ -175,6 +175,10 @@ func ParseCacheEntry(in []string) (CacheOptions, error) {
 	opts := make(CacheOptions, 0, len(in))
 	for _, in := range in {
+		if in == "" {
+			continue
+		}
 		if !strings.Contains(in, "=") {
 			// This is ref only format. Each field in the CSV is its own entry.
 			fields, err := csvvalue.Fields(in, nil)
@@ -207,6 +211,7 @@ func addGithubToken(ci *controllerapi.CacheOptionsEntry) {
 	}
 	version, ok := ci.Attrs["version"]
 	if !ok {
+		// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L19
 		if v, ok := os.LookupEnv("ACTIONS_CACHE_SERVICE_V2"); ok {
 			if b, err := strconv.ParseBool(v); err == nil && b {
 				version = "2"
@@ -218,15 +223,18 @@ func addGithubToken(ci *controllerapi.CacheOptionsEntry) {
 			ci.Attrs["token"] = v
 		}
 	}
-	if _, ok := ci.Attrs["url"]; !ok {
-		if version == "2" {
-			if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
-				ci.Attrs["url_v2"] = v
-			}
-		} else {
-			if v, ok := os.LookupEnv("ACTIONS_CACHE_URL"); ok {
-				ci.Attrs["url"] = v
-			}
-		}
-	}
+	if _, ok := ci.Attrs["url_v2"]; !ok && version == "2" {
+		// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L34-L35
+		if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
+			ci.Attrs["url_v2"] = v
+		}
+	}
+	if _, ok := ci.Attrs["url"]; !ok {
+		// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L28-L33
+		if v, ok := os.LookupEnv("ACTIONS_CACHE_URL"); ok {
+			ci.Attrs["url"] = v
+		} else if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
+			ci.Attrs["url"] = v
+		}
+	}
 }
@@ -266,5 +274,5 @@ func isActive(pb *controllerapi.CacheOptionsEntry) bool {
 	if pb.Type != "gha" {
 		return true
 	}
-	return pb.Attrs["token"] != "" && pb.Attrs["url"] != ""
+	return pb.Attrs["token"] != "" && (pb.Attrs["url"] != "" || pb.Attrs["url_v2"] != "")
 }


@@ -1,19 +1,20 @@
 package buildflags
 
-import "github.com/moby/buildkit/util/entitlements"
+import (
+	"github.com/moby/buildkit/util/entitlements"
+)
 
-func ParseEntitlements(in []string) ([]entitlements.Entitlement, error) {
-	out := make([]entitlements.Entitlement, 0, len(in))
+func ParseEntitlements(in []string) ([]string, error) {
+	out := make([]string, 0, len(in))
 	for _, v := range in {
 		if v == "" {
 			continue
 		}
-		e, err := entitlements.Parse(v)
-		if err != nil {
+		if _, _, err := entitlements.Parse(v); err != nil {
 			return nil, err
 		}
-		out = append(out, e)
+		out = append(out, v)
 	}
 	return out, nil
 }


@@ -1,6 +1,7 @@
 package buildflags
 
 import (
+	"encoding/csv"
 	"encoding/json"
 	"maps"
 	"regexp"
@@ -259,9 +260,18 @@ func (w *csvBuilder) Write(key, value string) {
 	if w.sb.Len() > 0 {
 		w.sb.WriteByte(',')
 	}
-	w.sb.WriteString(key)
-	w.sb.WriteByte('=')
-	w.sb.WriteString(value)
+	pair := key + "=" + value
+	if strings.ContainsRune(pair, ',') || strings.ContainsRune(pair, '"') {
+		var attr strings.Builder
+		writer := csv.NewWriter(&attr)
+		writer.Write([]string{pair})
+		writer.Flush()
+		// Strips the extra newline added by the csv writer
+		pair = strings.TrimSpace(attr.String())
+	}
+	w.sb.WriteString(pair)
 }
 
 func (w *csvBuilder) WriteAttributes(attrs map[string]string) {
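The quoting step this hunk adds can be seen in isolation. `quotePair` below is an illustrative helper (not part of the buildx API) that applies the same rule: a `key=value` pair is run through `encoding/csv` only when it contains a comma or a double quote, so plain pairs stay byte-identical.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// quotePair sketches the conditional CSV quoting added to csvBuilder.Write.
func quotePair(key, value string) string {
	pair := key + "=" + value
	if strings.ContainsRune(pair, ',') || strings.ContainsRune(pair, '"') {
		var attr strings.Builder
		w := csv.NewWriter(&attr)
		w.Write([]string{pair})
		w.Flush()
		// csv.Writer appends a record terminator; trim it off.
		pair = strings.TrimSpace(attr.String())
	}
	return pair
}

func main() {
	fmt.Println(quotePair("mode", "max"))    // mode=max
	fmt.Println(quotePair("FOO", "bar,baz")) // "FOO=bar,baz"
}
```

This is what makes round-tripping attributes like `"FOO=bar,baz"` in the attest tests above possible: the CSV reader on the parse side strips the quotes the writer adds here.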


@@ -27,7 +27,7 @@ func (s Secrets) Normalize() Secrets {
 	if len(s) == 0 {
 		return nil
 	}
-	return removeDupes(s)
+	return removeSecretDupes(s)
 }
 
 func (s Secrets) ToPB() []*controllerapi.Secret {
@@ -155,3 +155,17 @@ func parseSecret(value string) (*controllerapi.Secret, error) {
 	}
 	return s.ToPB(), nil
 }
+
+func removeSecretDupes(s []*Secret) []*Secret {
+	var res []*Secret
+	m := map[string]int{}
+	for _, sec := range s {
+		if i, ok := m[sec.ID]; ok {
+			res[i] = sec
+		} else {
+			m[sec.ID] = len(res)
+			res = append(res, sec)
+		}
+	}
+	return res
+}
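The dedupe strategy here is "first position, last value": the slice keeps the order in which IDs first appear, but a later entry with the same ID replaces the earlier one in place. A minimal sketch with a simplified value type (the real helper operates on `*Secret`):

```go
package main

import "fmt"

type secret struct{ ID, Env string }

// lastWins mirrors removeSecretDupes: stable order of first appearance,
// later duplicates overwrite the earlier slot.
func lastWins(s []secret) []secret {
	var res []secret
	m := map[string]int{}
	for _, sec := range s {
		if i, ok := m[sec.ID]; ok {
			res[i] = sec
		} else {
			m[sec.ID] = len(res)
			res = append(res, sec)
		}
	}
	return res
}

func main() {
	got := lastWins([]secret{
		{"mysecret", "FOO"},
		{"mysecret", "BAR"},
		{"mysecret2", "BAZ"},
	})
	fmt.Println(got) // [{mysecret BAR} {mysecret2 BAZ}]
}
```

This matches the `RemoveDupes` test added below: `{ID: "mysecret", Env: "BAR"}` survives, not `FOO`.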


@@ -81,4 +81,17 @@ func TestSecrets(t *testing.T) {
 		result := actual.Equals(expected)
 		require.True(t, result.True())
 	})
+
+	t.Run("RemoveDupes", func(t *testing.T) {
+		secrets := Secrets{
+			{ID: "mysecret", Env: "FOO"},
+			{ID: "mysecret", Env: "BAR"},
+			{ID: "mysecret2", Env: "BAZ"},
+		}.Normalize()
+		expected := `[{"id":"mysecret","env":"BAR"},{"id":"mysecret2","env":"BAZ"}]`
+		actual, err := json.Marshal(secrets)
+		require.NoError(t, err)
+		require.JSONEq(t, expected, string(actual))
+	})
 }


@@ -28,7 +28,7 @@ func (s SSHKeys) Normalize() SSHKeys {
 	if len(s) == 0 {
 		return nil
 	}
-	return removeDupes(s)
+	return removeSSHDupes(s)
 }
 
 func (s SSHKeys) ToPB() []*controllerapi.SSH {
@@ -131,3 +131,17 @@ func IsGitSSH(repo string) bool {
 	}
 	return url.Scheme == gitutil.SSHProtocol
 }
+
+func removeSSHDupes(s []*SSH) []*SSH {
+	var res []*SSH
+	m := map[string]int{}
+	for _, ssh := range s {
+		if i, ok := m[ssh.ID]; ok {
+			res[i] = ssh
+		} else {
+			m[ssh.ID] = len(res)
+			res = append(res, ssh)
+		}
+	}
+	return res
+}


@@ -82,4 +82,17 @@ func TestSSHKeys(t *testing.T) {
 		result := actual.Equals(expected)
 		require.True(t, result.True())
 	})
+
+	t.Run("RemoveDupes", func(t *testing.T) {
+		sshkeys := SSHKeys{
+			{ID: "default"},
+			{ID: "key", Paths: []string{"path/to/foo"}},
+			{ID: "key", Paths: []string{"path/to/bar"}},
+		}.Normalize()
+		expected := `[{"id":"default"},{"id":"key","paths":["path/to/bar"]}]`
+		actual, err := json.Marshal(sshkeys)
+		require.NoError(t, err)
+		require.JSONEq(t, expected, string(actual))
+	})
 }


@@ -33,7 +33,7 @@ func removeDupes[E comparable[E]](s []E) []E {
 	return s
 }
 
-func getAndDelete(m map[string]cty.Value, attr string, gv interface{}) error {
+func getAndDelete(m map[string]cty.Value, attr string, gv any) error {
 	if v, ok := m[attr]; ok && v.IsKnown() {
 		delete(m, attr)
 		return gocty.FromCtyValue(v, gv)


@@ -8,7 +8,7 @@ import (
 	"sync"
 
 	"github.com/docker/cli/cli/command"
-	"github.com/docker/docker/pkg/ioutils"
+	"github.com/docker/docker/pkg/atomicwriter"
 	"github.com/moby/buildkit/cmd/buildkitd/config"
 	"github.com/pelletier/go-toml"
 	"github.com/pkg/errors"
@@ -106,7 +106,7 @@ func (c *Config) MkdirAll(dir string, perm os.FileMode) error {
 // AtomicWriteFile writes data to a file within the config dir atomically
 func (c *Config) AtomicWriteFile(filename string, data []byte, perm os.FileMode) error {
 	f := filepath.Join(c.dir, filename)
-	if err := ioutils.AtomicWriteFile(f, data, perm); err != nil {
+	if err := atomicwriter.WriteFile(f, data, perm); err != nil {
 		return err
 	}
 	if c.chowner == nil {


@@ -0,0 +1,21 @@
+package desktop
+
+import (
+	"os"
+	"path/filepath"
+
+	"github.com/pkg/errors"
+)
+
+const (
+	socketName = "docker-desktop-build.sock"
+	socketPath = "Library/Containers/com.docker.docker/Data"
+)
+
+func BuildServerAddr() (string, error) {
+	dir, err := os.UserHomeDir()
+	if err != nil {
+		return "", errors.Wrap(err, "failed to get user home directory")
+	}
+	return "unix://" + filepath.Join(dir, socketPath, socketName), nil
+}


@@ -0,0 +1,29 @@
+package desktop
+
+import (
+	"os"
+	"path/filepath"
+
+	"github.com/pkg/errors"
+)
+
+const (
+	socketName    = "docker-desktop-build.sock"
+	socketPath    = ".docker/desktop"
+	wslSocketPath = "/mnt/wsl/docker-desktop/shared-sockets/host-services"
+)
+
+func BuildServerAddr() (string, error) {
+	if os.Getenv("WSL_DISTRO_NAME") != "" {
+		socket := filepath.Join(wslSocketPath, socketName)
+		if _, err := os.Stat(socket); os.IsNotExist(err) {
+			return "", errors.New("Docker Desktop Build backend is not yet supported on WSL. Please run this command on Windows host instead.") //nolint:revive
+		}
+		return "unix://" + socket, nil
+	}
+	dir, err := os.UserHomeDir()
+	if err != nil {
+		return "", errors.Wrap(err, "failed to get user home directory")
+	}
+	return "unix://" + filepath.Join(dir, socketPath, socketName), nil
+}


@@ -0,0 +1,13 @@
+//go:build !windows && !darwin && !linux
+
+package desktop
+
+import (
+	"runtime"
+
+	"github.com/pkg/errors"
+)
+
+func BuildServerAddr() (string, error) {
+	return "", errors.Errorf("Docker Desktop unsupported on %s", runtime.GOOS)
+}


@@ -0,0 +1,5 @@
+package desktop
+
+func BuildServerAddr() (string, error) {
+	return "npipe:////./pipe/dockerDesktopBuildServer", nil
+}


@@ -52,7 +52,7 @@ func (c *Client) LoadImage(ctx context.Context, name string, status progress.Wri
 		w.mu.Unlock()
 	}
 
-	resp, err := dapi.ImageLoad(ctx, pr, false)
+	resp, err := dapi.ImageLoad(ctx, pr)
 	defer close(done)
 	if err != nil {
 		handleErr(err)


@@ -156,7 +156,7 @@ func (r *Resolver) Combine(ctx context.Context, srcs []*Source, ann map[exptypes
 		case exptypes.AnnotationIndex:
 			indexAnnotation[k.Key] = v
 		case exptypes.AnnotationManifestDescriptor:
-			for i := 0; i < len(newDescs); i++ {
+			for i := range newDescs {
 				if newDescs[i].Annotations == nil {
 					newDescs[i].Annotations = map[string]string{}
 				}
@@ -194,8 +194,11 @@ func (r *Resolver) Combine(ctx context.Context, srcs []*Source, ann map[exptypes
 func (r *Resolver) Push(ctx context.Context, ref reference.Named, desc ocispec.Descriptor, dt []byte) error {
 	ctx = remotes.WithMediaTypeKeyPrefix(ctx, "application/vnd.in-toto+json", "intoto")
 
-	ref = reference.TagNameOnly(ref)
-	p, err := r.resolver().Pusher(ctx, ref.String())
+	fullRef, err := reference.WithDigest(reference.TagNameOnly(ref), desc.Digest)
+	if err != nil {
+		return errors.Wrapf(err, "failed to combine ref %s with digest %s", ref, desc.Digest)
+	}
+	p, err := r.resolver().Pusher(ctx, fullRef.String())
 	if err != nil {
 		return err
 	}
@@ -217,8 +220,8 @@ func (r *Resolver) Push(ctx context.Context, ref reference.Named, desc ocispec.D
 func (r *Resolver) Copy(ctx context.Context, src *Source, dest reference.Named) error {
 	ctx = remotes.WithMediaTypeKeyPrefix(ctx, "application/vnd.in-toto+json", "intoto")
 
-	dest = reference.TagNameOnly(dest)
-	p, err := r.resolver().Pusher(ctx, dest.String())
+	// push by digest
+	p, err := r.resolver().Pusher(ctx, dest.Name())
 	if err != nil {
 		return err
 	}


@@ -278,8 +278,8 @@ func (l *loader) scanConfig(ctx context.Context, fetcher remotes.Fetcher, desc o
 }
 
 type sbomStub struct {
-	SPDX            interface{}   `json:",omitempty"`
-	AdditionalSPDXs []interface{} `json:",omitempty"`
+	SPDX            any   `json:",omitempty"`
+	AdditionalSPDXs []any `json:",omitempty"`
 }
 
 func (l *loader) scanSBOM(ctx context.Context, fetcher remotes.Fetcher, r *result, refs []digest.Digest, as *asset) error {
@@ -309,7 +309,7 @@ func (l *loader) scanSBOM(ctx context.Context, fetcher remotes.Fetcher, r *resul
 		}
 		var spdx struct {
-			Predicate interface{} `json:"predicate"`
+			Predicate any `json:"predicate"`
 		}
 		if err := json.Unmarshal(dt, &spdx); err != nil {
 			return nil, err
@@ -330,7 +330,7 @@ func (l *loader) scanSBOM(ctx context.Context, fetcher remotes.Fetcher, r *resul
 }
 
 type provenanceStub struct {
-	SLSA interface{} `json:",omitempty"`
+	SLSA any `json:",omitempty"`
 }
 
 func (l *loader) scanProvenance(ctx context.Context, fetcher remotes.Fetcher, r *result, refs []digest.Digest, as *asset) error {
@@ -360,7 +360,7 @@ func (l *loader) scanProvenance(ctx context.Context, fetcher remotes.Fetcher, r
 		}
 		var slsa struct {
-			Predicate interface{} `json:"predicate"`
+			Predicate any `json:"predicate"`
 		}
 		if err := json.Unmarshal(dt, &slsa); err != nil {
 			return nil, err


@@ -89,7 +89,7 @@ func (p *Printer) Print(raw bool, out io.Writer) error {
 	}
 
 	tpl, err := template.New("").Funcs(template.FuncMap{
-		"json": func(v interface{}) string {
+		"json": func(v any) string {
 			b, _ := json.MarshalIndent(v, "", "  ")
 			return string(b)
 		},
@@ -101,7 +101,7 @@ func (p *Printer) Print(raw bool, out io.Writer) error {
 	imageconfigs := res.Configs()
 	format := tpl.Root.String()
 
-	var mfst interface{}
+	var mfst any
 	switch p.manifest.MediaType {
 	case images.MediaTypeDockerSchema2Manifest, ocispecs.MediaTypeImageManifest:
 		mfst = p.manifest
@@ -206,7 +206,7 @@ func (p *Printer) printManifestList(out io.Writer) error {
 
 type tplInput struct {
 	Name     string          `json:"name,omitempty"`
-	Manifest interface{}     `json:"manifest,omitempty"`
+	Manifest any             `json:"manifest,omitempty"`
 	Image    *ocispecs.Image `json:"image,omitempty"`
 
 	result *result
@@ -236,7 +236,7 @@ func (inp tplInput) Provenance() (provenanceStub, error) {
 
 type tplInputs struct {
 	Name     string                     `json:"name,omitempty"`
-	Manifest interface{}                `json:"manifest,omitempty"`
+	Manifest any                        `json:"manifest,omitempty"`
 	Image    map[string]*ocispecs.Image `json:"image,omitempty"`
 
 	result *result


@@ -126,7 +126,7 @@ func TestMuxIO(t *testing.T) {
 		if tt.outputsNum != len(tt.wants) {
 			t.Fatalf("wants != outputsNum")
 		}
-		for i := 0; i < tt.outputsNum; i++ {
+		for i := range tt.outputsNum {
 			outBuf, out := newTestOut(i)
 			outBufs = append(outBufs, outBuf)
 			outs = append(outs, MuxOut{out, nil, nil})
@@ -304,7 +304,7 @@ func writeMasked(w io.Writer, s string) io.Writer {
 				return
 			}
 			var masked string
-			for i := 0; i < n; i++ {
+			for range n {
 				masked += s
 			}
 			if _, err := w.Write([]byte(masked)); err != nil {


@@ -85,7 +85,7 @@ type Log struct {
 type KeyValue struct {
 	Key   string    `json:"key"`
 	Type  ValueType `json:"type,omitempty"`
-	Value interface{} `json:"value"`
+	Value any       `json:"value"`
 }
 
 // DependencyLink shows dependencies between services


@@ -149,7 +149,7 @@ type keyValue struct {
 // value is a custom type used to unmarshal otel Value correctly.
 type value struct {
 	Type  string
-	Value interface{}
+	Value any
 }
 
 // UnmarshalJSON implements json.Unmarshaler for Span which allows correctly
@@ -318,7 +318,7 @@ func (kv *keyValue) asAttributeKeyValue() (attribute.KeyValue, error) {
 		switch sli := kv.Value.Value.(type) {
 		case []string:
 			strSli = sli
-		case []interface{}:
+		case []any:
 			for i := range sli {
 				var v string
 				// best case we have a string, otherwise, cast it using


@@ -131,7 +131,7 @@ func TestAsAttributeKeyValue(t *testing.T) {
 			name: "stringslice (interface of string)",
 			args: args{
 				Type:  attribute.STRINGSLICE.String(),
-				value: []interface{}{"value1", "value2"},
+				value: []any{"value1", "value2"},
 			},
 			want: attribute.StringSlice("key", []string{"value1", "value2"}),
 		},
@@ -139,7 +139,7 @@ func TestAsAttributeKeyValue(t *testing.T) {
 			name: "stringslice (interface mixed)",
 			args: args{
 				Type:  attribute.STRINGSLICE.String(),
-				value: []interface{}{"value1", 2},
+				value: []any{"value1", 2},
 			},
 			want: attribute.StringSlice("key", []string{"value1", "2"}),
 		},


@@ -27,7 +27,7 @@ type Printer struct {
 	err          error
 	warnings     []client.VertexWarning
 	logMu        sync.Mutex
-	logSourceMap map[digest.Digest]interface{}
+	logSourceMap map[digest.Digest]any
 	metrics      *metricWriter
 
 	// TODO: remove once we can use result context to pass build ref
@@ -74,7 +74,7 @@ func (p *Printer) Warnings() []client.VertexWarning {
 	return dedupWarnings(p.warnings)
 }
 
-func (p *Printer) ValidateLogSource(dgst digest.Digest, v interface{}) bool {
+func (p *Printer) ValidateLogSource(dgst digest.Digest, v any) bool {
 	p.logMu.Lock()
 	defer p.logMu.Unlock()
 	src, ok := p.logSourceMap[dgst]
@@ -89,7 +89,7 @@ func (p *Printer) ValidateLogSource(dgst digest.Digest, v interface{}) bool {
 	return false
 }
 
-func (p *Printer) ClearLogSource(v interface{}) {
+func (p *Printer) ClearLogSource(v any) {
 	p.logMu.Lock()
 	defer p.logMu.Unlock()
 	for d := range p.logSourceMap {
@@ -122,9 +122,10 @@ func NewPrinter(ctx context.Context, out console.File, mode progressui.DisplayMo
 		for {
 			pw.status = make(chan *client.SolveStatus)
 			pw.done = make(chan struct{})
+			pw.closeOnce = sync.Once{}
 
 			pw.logMu.Lock()
-			pw.logSourceMap = map[digest.Digest]interface{}{}
+			pw.logSourceMap = map[digest.Digest]any{}
 			pw.logMu.Unlock()
 			resumeLogs := logutil.Pause(logrus.StandardLogger())


@@ -11,8 +11,8 @@ import (
 type Writer interface {
 	Write(*client.SolveStatus)
 	WriteBuildRef(string, string)
-	ValidateLogSource(digest.Digest, interface{}) bool
-	ClearLogSource(interface{})
+	ValidateLogSource(digest.Digest, any) bool
+	ClearLogSource(any)
 }
 
 func Write(w Writer, name string, f func() error) error {

Some files were not shown because too many files have changed in this diff.