Compare commits

..

38 Commits

Author SHA1 Message Date
Justin Chadwell
9872040b66 Merge pull request #1958 from thaJeztah/0.11_backport_buildkit_0.12 2023-07-18 15:52:35 +01:00
Sebastiaan van Stijn
d8c6c3fc30 vendor: github.com/moby/buildkit v0.12.1-0.20230717122532-faa0cc7da353
full diff:

- https://github.com/moby/buildkit/compare/20230620112432...v0.12.0
- https://github.com/moby/buildkit/compare/v0.12.0...faa0cc7da3536923d85b74b2bb2d13c12a6ecc99

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 130bbda00e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:42:24 +02:00
Sebastiaan van Stijn
69f929077b vendor: github.com/tonistiigi/fsutil v0.0.0-20230629203738-36ef4d8c0dbb
full diff: 9e7a6df485...36ef4d8c0d

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ff2c8da803)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:42:24 +02:00
Sebastiaan van Stijn
87ce701fe0 vendor: github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb
full diff: 4e3ac2762d...02993c407b

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e094296f37)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:42:24 +02:00
Justin Chadwell
6faf7e5688 tests: set a dedicated buildx config dir for each worker
This should help reduce any unexpected config conflict between workers.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 6f394a0691)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:41:54 +02:00
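A minimal sketch of how a dedicated per-worker config dir could be wired up in a Go test helper. BUILDX_CONFIG is the environment variable buildx reads for its state/config directory; the helper name and the way it is invoked are illustrative, not the buildx test harness itself:

```go
package tests

import (
	"os/exec"
	"testing"
)

// runBuildx runs the buildx CLI with a config directory private to this test,
// so parallel workers cannot clobber each other's builder state.
func runBuildx(t *testing.T, args ...string) {
	t.Helper()
	cmd := exec.Command("buildx", args...)
	// t.TempDir gives each test its own throwaway directory.
	cmd.Env = append(cmd.Environ(), "BUILDX_CONFIG="+t.TempDir())
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("buildx %v: %v\n%s", args, err, out)
	}
}
```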
Justin Chadwell
d21e9fa8c6 ci: run docker-container tests in parallel
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit efd7279118)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:41:45 +02:00
Justin Chadwell
5657006c1f tests: share single docker between docker-container backends
This means that we can run our docker-container tests in parallel again,
which can help speed up our test runs by a *significant* amount.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 601056f3a7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:41:34 +02:00
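A hedged sketch of the sharing pattern this commit describes: a package-level sync.Once starts one daemon lazily, and all parallel docker-container subtests reuse it. The socket address and the startup stub are placeholders, not the actual buildx test sandbox:

```go
package tests

import (
	"sync"
	"testing"
)

var (
	sharedDockerOnce sync.Once
	sharedDockerAddr string // hypothetical: address of the single shared daemon
)

// sharedDocker lazily starts (here: stubs) one docker daemon shared by every
// docker-container worker in this test binary.
func sharedDocker(t *testing.T) string {
	t.Helper()
	sharedDockerOnce.Do(func() {
		// In a real harness this would boot dockerd once; stubbed for the sketch.
		sharedDockerAddr = "unix:///tmp/shared-dockerd.sock"
	})
	return sharedDockerAddr
}

func TestWorkerA(t *testing.T) { t.Parallel(); _ = sharedDocker(t) }
func TestWorkerB(t *testing.T) { t.Parallel(); _ = sharedDocker(t) }
```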
Justin Chadwell
0424ae14c0 vendor: update buildkit to master@2d91ddcceedc
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 0a7f96cbfb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-18 16:35:53 +02:00
Justin Chadwell
66fd2bbdee Merge pull request #1957 from crazy-max/v0.11_backport_fix-kube-config 2023-07-18 15:16:29 +01:00
CrazyMax
3305f18ce5 k8s: fix missing kubeconfig check from endpoint
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 4384947be1)
2023-07-18 15:57:11 +02:00
CrazyMax
a8790788d1 Merge pull request #1955 from crazy-max/v0.11_backport_result-handle-internal
[0.11 backport] build: mark result handle build as internal
2023-07-17 20:51:47 +02:00
CrazyMax
0f6513a29a build: mark result handle build as internal
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 418ea82d3a)
2023-07-17 17:13:10 +02:00
Justin Chadwell
44f5946a66 Merge pull request #1951 from thaJeztah/0.11_backport_remove_imageutil_dead_code 2023-07-17 14:26:58 +01:00
CrazyMax
ea610d8f14 Merge pull request #1953 from thaJeztah/0.11_backport_update-go
[0.11 backport] update go to 1.20.6
2023-07-17 14:05:11 +02:00
Sebastiaan van Stijn
d78c75947d util/imagetools: remove unused Resolver.ImageConfig
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b9e25e82cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-17 13:39:43 +02:00
CrazyMax
7dddd3a7d3 hack(generated-files): bump golang image to bookworm
#7 [internal] load metadata for docker.io/library/golang:1.20.6-buster
#7 ERROR: docker.io/library/golang:1.20.6-buster: not found

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 1123bfed10)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-17 13:39:13 +02:00
CrazyMax
54de900931 update go to 1.20.6
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 7f2293308b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-17 13:39:12 +02:00
Justin Chadwell
50e414f82a hack: force go version to 1.20.5
A temporary workaround for "http: invalid Host header" introduced in
go 1.20.6.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit c4bec05466)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-17 13:39:12 +02:00
Justin Chadwell
a24b6dd4f5 Merge pull request #1952 from thaJeztah/0.11_backport_bump_docker 2023-07-17 12:38:16 +01:00
CrazyMax
66600be6ab vendor: github.com/docker/docker@24.0 36e9e79
client: define a "dummy" hostname to use for local connections
fixes "http: invalid Host header" errors when compiling with go1.20.6
or go1.19.11

full diff: https://github.com/docker/docker/compare/v24.0.2...36e9e796c6fc

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 8a3a646c61)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-07-17 12:48:31 +02:00
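For context, a minimal Go sketch of the workaround idea (not the docker/docker client code): when dialing a local unix socket, the URL host is only a placeholder, so giving it a syntactically valid dummy name satisfies the stricter Host-header validation added in go1.20.6 / go1.19.11. The "api.moby.localhost" name and the socket path are assumptions for the example:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	tr := &http.Transport{
		// Always dial the local daemon socket, regardless of the URL host.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host here is a dummy; the transport above ignores it, but net/http
	// still requires it to be a valid host name.
	resp, err := client.Get("http://api.moby.localhost/_ping")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```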
Justin Chadwell
b4df08551f Merge pull request #1930 from jedevc/revert-bc597e6b 2023-07-05 17:37:36 +01:00
Justin Chadwell
f581942d7d Merge pull request #1929 from jedevc/vendor-vt100-update 2023-07-05 17:37:20 +01:00
Justin Chadwell
5159571dfc Revert "bake: fix incorrect dockerfile resolution against cwd:// context"
This reverts commit bc597e6b5e.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-07-05 17:25:09 +01:00
Justin Chadwell
86a5c77c2b vendor: update tonistiigi/vt100 to master@f9a4f7ef6531
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-07-05 16:47:43 +01:00
Justin Chadwell
1602b491f9 Merge pull request #1926 from jedevc/v0.11-cherry-picks 2023-07-05 13:54:12 +01:00
CrazyMax
94baaf3c90 build: fix host-gateway handling
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 8cbb7a9319)
2023-07-03 21:58:40 +02:00
CrazyMax
c5e279f295 docs: update generated content
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 87b9f9ecfb)
2023-07-03 21:58:40 +02:00
CrazyMax
a0f91eb87e vendor: update cli-docs-tool to 0.6.0
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit cbc473359a)
2023-07-03 21:58:40 +02:00
CrazyMax
cb1812ec6a test: build details output
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 20d2501edc)
2023-07-03 21:58:39 +02:00
CrazyMax
47e4c2576b build: missing newline when printing build details on error
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit d45601fdc6)
2023-07-03 21:58:39 +02:00
CrazyMax
3702e17ed5 dockerfile: update docker to 24.0.2
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 7147463418)
2023-07-03 21:58:39 +02:00
Jhan S. Álvarez
8b85dbea72 controller: include CgroupParent in build.Options
Signed-off-by: Jhan S. Álvarez <alvarezpcuser@gmail.com>
(cherry picked from commit e65f6b8c8b)
2023-07-03 11:55:40 +01:00
CrazyMax
afcb118e10 bake: ignore profiles in compose definitions
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 120f3a8918)
2023-07-03 11:55:40 +01:00
David Karlsson
cb4fea66e0 chore: make docs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit 1e576dd7c6)
2023-07-03 11:53:42 +01:00
CrazyMax
74fa66b496 docs: set experimental annotation
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 7a5472153b)
2023-07-03 11:53:42 +01:00
Justin Chadwell
ff87dd183a Merge pull request #1885 from crazy-max/v0.11.1_backport 2023-06-21 11:23:10 +01:00
CrazyMax
9f844df9f7 builder: skip name validation for docker context
Although a builder from the store cannot be created unless
it has a valid name, this is not the case for a Docker context.

We should skip name validation when checking a node from the
store and fall back to finding one from Docker context instead.

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit b1c5449428)
2023-06-15 14:10:24 +02:00
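An illustrative Go sketch of the lookup order described here; the function, map types, and validation regexp are hypothetical, not the buildx implementation. Only store lookups require a valid builder name, while docker contexts are matched as-is:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// namePattern stands in for whatever rule the store enforces on builder names.
var namePattern = regexp.MustCompile(`^[a-zA-Z][a-zA-Z0-9_.-]*$`)

func findBuilder(name string, store, contexts map[string]string) (string, error) {
	// Builders from the store always have valid names, so skip the store
	// lookup entirely if the name would never have been accepted.
	if namePattern.MatchString(name) {
		if b, ok := store[name]; ok {
			return b, nil
		}
	}
	// A docker context may have a name the store would reject; no validation here.
	if c, ok := contexts[name]; ok {
		return c, nil
	}
	return "", errors.New("no builder or context named " + name)
}

func main() {
	contexts := map[string]string{"tcp://1.2.3.4:2376": "remote-ctx"}
	b, err := findBuilder("tcp://1.2.3.4:2376", map[string]string{}, contexts)
	fmt.Println(b, err)
}
```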
Justin Chadwell
bc597e6b5e bake: fix incorrect dockerfile resolution against cwd:// context
We need to strip the cwd:// prefix before attempting to resolve the
dockerfile. Otherwise, we'll get the cwd:// prefix in the dockerfile
name, which isn't stripped out later.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 431732f5d1)
2023-06-15 14:10:23 +02:00
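A hedged, standalone sketch of the resolution rule the message describes (not the bake code itself): strip the cwd:// prefix before resolving, so it never leaks into the final dockerfile path:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// resolveDockerfile strips a cwd:// prefix before resolving the dockerfile
// against the build context, so the prefix cannot end up in the result.
func resolveDockerfile(contextPath, dockerfile string) string {
	if strings.HasPrefix(dockerfile, "cwd://") {
		// Interpret the remainder relative to the current working directory.
		return path.Clean(strings.TrimPrefix(dockerfile, "cwd://"))
	}
	if !path.IsAbs(dockerfile) {
		return path.Join(contextPath, dockerfile)
	}
	return dockerfile
}

func main() {
	fmt.Println(resolveDockerfile("foo", "cwd://Dockerfile.app")) // Dockerfile.app
	fmt.Println(resolveDockerfile("foo", "test"))                 // foo/test
}
```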
2185 changed files with 63278 additions and 129330 deletions

View File

@@ -5,11 +5,6 @@ updates:
directory: "/"
schedule:
interval: "daily"
ignore:
# ignore this dependency
# it seems to be a bug with dependabot, as pinning to a commit sha should not
# trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
- dependency-name: "docker/docs"
labels:
- "dependencies"
- "bot"

View File

@@ -24,43 +24,41 @@ env:
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.21.6"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
jobs:
prepare-test-integration:
prepare-test:
runs-on: ubuntu-22.04
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
targets: integration-test-base
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
test-integration:
test:
runs-on: ubuntu-22.04
needs:
- prepare-test-integration
- prepare-test
env:
TESTFLAGS: "-v --parallel=6 --timeout=30m"
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
strategy:
@@ -68,34 +66,30 @@ jobs:
matrix:
worker:
- docker
- docker\+containerd # same as docker, but with containerd snapshotter
- docker-container
- remote
pkg:
- ./tests
include:
- pkg: ./...
skip-integration-tests: 1
steps:
-
name: Prepare
run: |
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build test image
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
targets: integration-test
set: |
@@ -104,106 +98,33 @@ jobs:
-
name: Test
run: |
export TEST_REPORT_SUFFIX=-${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.skip-integration-tests }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')
./hack/test
env:
TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
TEST_DOCKERD: "${{ startsWith(matrix.worker, 'docker') && '1' || '0' }}"
TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker\\+containerd') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
TEST_DOCKERD: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker-container') && '1' || '0' }}"
TESTFLAGS: "${{ (matrix.worker == 'docker') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
TESTPKGS: "${{ matrix.pkg }}"
SKIP_INTEGRATION_TESTS: "${{ matrix.skip-integration-tests }}"
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v3
with:
directory: ./bin/testreports
flags: integration
token: ${{ secrets.CODECOV_TOKEN }}
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
uses: crazy-max/.github/.github/actions/gotest-annotations@1a64ea6d01db9a48aa61954cb20e265782c167d9
with:
directory: ./bin/testreports
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
name: test-reports
path: ./bin/testreports
test-unit:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os:
- ubuntu-22.04
- macos-12
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "${{ env.GO_VERSION }}"
-
name: Prepare
run: |
testreportsName=${{ github.job }}--${{ matrix.os }}
testreportsBaseDir=./bin/testreports
testreportsDir=$testreportsBaseDir/$testreportsName
echo "TESTREPORTS_NAME=$testreportsName" >> $GITHUB_ENV
echo "TESTREPORTS_BASEDIR=$testreportsBaseDir" >> $GITHUB_ENV
echo "TESTREPORTS_DIR=$testreportsDir" >> $GITHUB_ENV
mkdir -p $testreportsDir
shell: bash
-
name: Install gotestsum
run: |
go install gotest.tools/gotestsum@${{ env.GOTESTSUM_VERSION }}
-
name: Test
env:
TMPDIR: ${{ runner.temp }}
run: |
gotestsum \
--jsonfile="${{ env.TESTREPORTS_DIR }}/go-test-report.json" \
--junitfile="${{ env.TESTREPORTS_DIR }}/junit-report.xml" \
--packages="./..." \
-- \
"-mod=vendor" \
"-coverprofile" "${{ env.TESTREPORTS_DIR }}/coverage.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
shell: bash
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }}
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ${{ env.TESTREPORTS_DIR }}
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ${{ env.TESTREPORTS_BASEDIR }}
prepare-binaries:
runs-on: ubuntu-22.04
outputs:
@@ -211,7 +132,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Create matrix
id: platforms
@@ -238,13 +159,13 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -259,28 +180,27 @@ jobs:
CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
-
name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: buildx-${{ env.PLATFORM_PAIR }}
name: buildx
path: ${{ env.DESTDIR }}/*
if-no-files-found: error
bin-image:
runs-on: ubuntu-22.04
needs:
- test-integration
- test-unit
- test
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -288,7 +208,7 @@ jobs:
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@v4
with:
images: |
${{ env.REPO_SLUG }}
@@ -300,13 +220,13 @@ jobs:
-
name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push image
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
files: |
./docker-bake.hcl
@@ -321,20 +241,18 @@ jobs:
release:
runs-on: ubuntu-22.04
needs:
- test-integration
- test-unit
- test
- binaries
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Download binaries
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: buildx
path: ${{ env.DESTDIR }}
pattern: buildx-*
merge-multiple: true
-
name: Create checksums
run: ./hack/hash-files
@@ -362,13 +280,13 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=moby/buildkit:master
@@ -376,6 +294,6 @@ jobs:
-
# Just run a bake target to check everything runs fine
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
targets: binaries

View File

@@ -1,42 +0,0 @@
name: codeql
on:
push:
branches:
- 'master'
- 'v[0-9]*'
pull_request:
permissions:
actions: read
contents: read
security-events: write
env:
GO_VERSION: 1.21.6
jobs:
codeql:
runs-on: ubuntu-latest
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
-
name: Autobuild
uses: github/codeql-action/autobuild@v3
-
name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"

View File

@@ -12,7 +12,7 @@ jobs:
steps:
-
name: Checkout docs repo
uses: actions/checkout@v4
uses: actions/checkout@v3
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
repository: docker/docs
@@ -23,10 +23,10 @@ jobs:
rm -rf ./_data/buildx/*
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
-
name: Build docs
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
targets: update-docs
@@ -44,7 +44,7 @@ jobs:
git add -A .
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@b1ddad2c994a25fbc81a28b3ec0e368bb2021c50
uses: peter-evans/create-pull-request@284f54f989303d2699d373481a0cfa13ad5a6666
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io

View File

@@ -26,15 +26,15 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Build reference YAML docs
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
targets: update-docs
set: |
@@ -45,18 +45,18 @@ jobs:
DOCS_FORMATS: yaml
-
name: Upload reference YAML docs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: docs-yaml
path: /tmp/buildx-docs/out/reference
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
uses: docker/docs/.github/workflows/validate-upstream.yml@main
needs:
- docs-yaml
with:
module-name: docker/buildx
repo: https://github.com/${{ github.repository }}
data-files-id: docs-yaml
data-files-folder: buildx
create-placeholder-stubs: true
data-files-placeholder-folder: engine/reference/commandline

View File

@@ -25,15 +25,15 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v3
with:
targets: binaries
set: |
@@ -46,7 +46,7 @@ jobs:
mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
-
name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: binary
path: ${{ env.DESTDIR }}/build
@@ -96,14 +96,14 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
-
name: Install buildx
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: binary
path: /home/runner/.docker/cli-plugins
@@ -132,7 +132,7 @@ jobs:
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: actions/github-script@v7
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');

View File

@@ -19,8 +19,6 @@ on:
jobs:
validate:
runs-on: ubuntu-22.04
env:
GOLANGCI_LINT_MULTIPLATFORM: 1
strategy:
fail-fast: false
matrix:
@@ -32,10 +30,10 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-

View File

@@ -1,5 +1,5 @@
run:
timeout: 30m
timeout: 10m
skip-files:
- ".*\\.pb\\.go$"
@@ -26,13 +26,12 @@ linters:
linters-settings:
depguard:
rules:
main:
deny:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
list-type: blacklist
include-go-root: true
packages:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- io/ioutil
forbidigo:
forbid:
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
@@ -48,22 +47,3 @@ issues:
- linters:
- revive
text: "stutters"
- linters:
- revive
text: "empty-block"
- linters:
- revive
text: "superfluous-else"
- linters:
- revive
text: "unused-parameter"
- linters:
- revive
text: "redefines-builtin-id"
- linters:
- revive
text: "if-return"
# show all
max-issues-per-linter: 0
max-same-issues: 0

View File

@@ -1,12 +1,12 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.6
ARG XX_VERSION=1.4.0
ARG GO_VERSION=1.20.6
ARG XX_VERSION=1.2.1
ARG DOCKER_VERSION=25.0.2
ARG DOCKER_VERSION=24.0.2
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.12.5
ARG BUILDKIT_VERSION=v0.11.6
# xx is a helper for cross-compilation
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx

View File

@@ -41,10 +41,12 @@ Key features:
- [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
- [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
- [`buildx inspect`](docs/reference/buildx_inspect.md)
- [`buildx install`](docs/reference/buildx_install.md)
- [`buildx ls`](docs/reference/buildx_ls.md)
- [`buildx prune`](docs/reference/buildx_prune.md)
- [`buildx rm`](docs/reference/buildx_rm.md)
- [`buildx stop`](docs/reference/buildx_stop.md)
- [`buildx uninstall`](docs/reference/buildx_uninstall.md)
- [`buildx use`](docs/reference/buildx_use.md)
- [`buildx version`](docs/reference/buildx_version.md)
- [Contributing](#contributing)
@@ -69,9 +71,8 @@ for Windows and macOS.
## Linux packages
Docker Engine package repositories contain Docker Buildx packages when installed according to the
[Docker Engine install documentation](https://docs.docker.com/engine/install/). Install the
`docker-buildx-plugin` package to install the Buildx plugin.
Docker Linux packages also include Docker Buildx when installed using the
[DEB or RPM packages](https://docs.docker.com/engine/install/).
## Manual download
@@ -147,7 +148,7 @@ $ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docke
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx
# Local
# Local
$ git clone https://github.com/docker/buildx.git && cd buildx
$ make install
```
@@ -239,7 +240,7 @@ When you invoke a build, you can set the `--platform` flag to specify the target
platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
`darwin/amd64`).
When the current builder instance is backed by the `docker-container` or
When the current builder instance is backed by the `docker-container` or
`kubernetes` driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)

View File

@@ -11,19 +11,16 @@ import (
"sort"
"strconv"
"strings"
"time"
composecli "github.com/compose-spec/compose-go/v2/cli"
composecli "github.com/compose-spec/compose-go/cli"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
hcl "github.com/hashicorp/hcl/v2"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/pkg/errors"
@@ -58,7 +55,7 @@ func defaultFilenames() []string {
return names
}
func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]File, error) {
func ReadLocalFiles(names []string, stdin io.Reader) ([]File, error) {
isDefault := false
if len(names) == 0 {
isDefault = true
@@ -66,26 +63,20 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
}
out := make([]File, 0, len(names))
setStatus := func(st *client.VertexStatus) {
if l != nil {
l.SetStatus(st)
}
}
for _, n := range names {
var dt []byte
var err error
if n == "-" {
dt, err = readWithProgress(stdin, setStatus)
dt, err = io.ReadAll(stdin)
if err != nil {
return nil, err
}
} else {
dt, err = readFileWithProgress(n, isDefault, setStatus)
if dt == nil && err == nil {
continue
}
dt, err = os.ReadFile(n)
if err != nil {
if isDefault && errors.Is(err, os.ErrNotExist) {
continue
}
return nil, err
}
}
@@ -94,88 +85,6 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
return out, nil
}
func readFileWithProgress(fname string, isDefault bool, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
st := &client.VertexStatus{
ID: "reading " + fname,
}
defer func() {
now := time.Now()
st.Completed = &now
if dt != nil || err != nil {
setStatus(st)
}
}()
now := time.Now()
st.Started = &now
f, err := os.Open(fname)
if err != nil {
if isDefault && errors.Is(err, os.ErrNotExist) {
return nil, nil
}
return nil, err
}
defer f.Close()
setStatus(st)
info, err := f.Stat()
if err != nil {
return nil, err
}
st.Total = info.Size()
setStatus(st)
buf := make([]byte, 1024)
for {
n, err := f.Read(buf)
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
dt = append(dt, buf[:n]...)
st.Current += int64(n)
setStatus(st)
}
return dt, nil
}
func readWithProgress(r io.Reader, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
st := &client.VertexStatus{
ID: "reading from stdin",
}
defer func() {
now := time.Now()
st.Completed = &now
setStatus(st)
}()
now := time.Now()
st.Started = &now
setStatus(st)
buf := make([]byte, 1024)
for {
n, err := r.Read(buf)
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
dt = append(dt, buf[:n]...)
st.Current += int64(n)
setStatus(st)
}
return dt, nil
}
func ListTargets(files []File) ([]string, error) {
c, err := ParseFiles(files, nil)
if err != nil {
@@ -339,7 +248,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
}
if len(hclFiles) > 0 {
renamed, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
renamed, err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
LookupVar: os.LookupEnv,
Vars: defaults,
ValidateLabel: validateTargetName,
@@ -678,10 +587,9 @@ type Target struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
// Inherits is the only field that cannot be overridden with --set
Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
Annotations []string `json:"annotations,omitempty" hcl:"annotations,optional" cty:"annotations"`
Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
Context *string `json:"context,omitempty" hcl:"context,optional" cty:"context"`
Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional" cty:"contexts"`
Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional" cty:"dockerfile"`
@@ -700,8 +608,6 @@ type Target struct {
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
NetworkMode *string `json:"-" hcl:"-" cty:"-"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
ShmSize *string `json:"shm-size,omitempty" hcl:"shm-size,optional"`
Ulimits []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
// linked is a private field to mark a target used as a linked one
@@ -714,7 +620,6 @@ var _ hclparser.WithEvalContexts = &Group{}
var _ hclparser.WithGetName = &Group{}
func (t *Target) normalize() {
t.Annotations = removeDupes(t.Annotations)
t.Attest = removeAttestDupes(t.Attest)
t.Tags = removeDupes(t.Tags)
t.Secrets = removeDupes(t.Secrets)
@@ -724,7 +629,6 @@ func (t *Target) normalize() {
t.CacheTo = removeDupes(t.CacheTo)
t.Outputs = removeDupes(t.Outputs)
t.NoCacheFilter = removeDupes(t.NoCacheFilter)
t.Ulimits = removeDupes(t.Ulimits)
for k, v := range t.Contexts {
if v == "" {
@@ -776,9 +680,6 @@ func (t *Target) Merge(t2 *Target) {
if t2.Target != nil {
t.Target = t2.Target
}
if t2.Annotations != nil { // merge
t.Annotations = append(t.Annotations, t2.Annotations...)
}
if t2.Attest != nil { // merge
t.Attest = append(t.Attest, t2.Attest...)
t.Attest = removeAttestDupes(t.Attest)
@@ -813,12 +714,6 @@ func (t *Target) Merge(t2 *Target) {
if t2.NoCacheFilter != nil { // merge
t.NoCacheFilter = append(t.NoCacheFilter, t2.NoCacheFilter...)
}
if t2.ShmSize != nil { // no merge
t.ShmSize = t2.ShmSize
}
if t2.Ulimits != nil { // merge
t.Ulimits = append(t.Ulimits, t2.Ulimits...)
}
t.Inherits = append(t.Inherits, t2.Inherits...)
}
@@ -871,8 +766,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Platforms = o.ArrValue
case "output":
t.Outputs = o.ArrValue
case "annotations":
t.Annotations = append(t.Annotations, o.ArrValue...)
case "attest":
t.Attest = append(t.Attest, o.ArrValue...)
case "no-cache":
@@ -883,10 +776,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.NoCache = &noCache
case "no-cache-filter":
t.NoCacheFilter = o.ArrValue
case "shm-size":
t.ShmSize = &value
case "ulimits":
t.Ulimits = o.ArrValue
case "pull":
pull, err := strconv.ParseBool(value)
if err != nil {
@@ -963,10 +852,8 @@ func (t *Target) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDe
for _, e := range ectxs {
e2 := ectx.NewChild()
e2.Variables = make(map[string]cty.Value)
if e != ectx {
for k, v := range e.Variables {
e2.Variables[k] = v
}
for k, v := range e.Variables {
e2.Variables[k] = v
}
e2.Variables[k] = v
ectxs2 = append(ectxs2, e2)
@@ -1025,17 +912,12 @@ func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(
}
func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
// make sure local credentials are loaded multiple times for different targets
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
authProvider := authprovider.NewDockerAuthProvider(dockerConfig, nil)
m2 := make(map[string]build.Options, len(m))
for k, v := range m {
bo, err := toBuildOpt(v, inp)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, authProvider)
m2[k] = *bo
}
return m2, nil
@@ -1156,9 +1038,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.Dockerfile != nil {
dockerfilePath = *t.Dockerfile
}
if !strings.HasPrefix(dockerfilePath, "cwd://") {
dockerfilePath = path.Clean(dockerfilePath)
}
bi := build.Inputs{
ContextPath: contextPath,
@@ -1169,44 +1048,12 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if strings.HasPrefix(bi.DockerfilePath, "cwd://") {
// If Dockerfile is local for a remote invocation, we first check if
// it's not outside the working directory and then resolve it to an
// absolute path.
bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
if err := checkPath(bi.DockerfilePath); err != nil {
return nil, err
}
var err error
bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
if err != nil {
return nil, err
}
} else if !build.IsRemoteURL(bi.DockerfilePath) && strings.HasPrefix(bi.ContextPath, "cwd://") && (inp != nil && build.IsRemoteURL(inp.URL)) {
// We don't currently support reading a remote Dockerfile with a local
// context when doing a remote invocation because we automatically
// derive the dockerfile from the context atm:
//
// target "default" {
// context = BAKE_CMD_CONTEXT
// dockerfile = "Dockerfile.app"
// }
//
// > docker buildx bake https://github.com/foo/bar.git
// failed to solve: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount3004544897/Dockerfile.app: no such file or directory
//
// To avoid mistakenly reading a local Dockerfile, we check if the
// Dockerfile exists locally and if so, we error out.
if _, err := os.Stat(filepath.Join(path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://")), bi.DockerfilePath)); err == nil {
return nil, errors.Errorf("reading a dockerfile for a remote build invocation is currently not supported")
}
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
}
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
}
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
@@ -1247,12 +1094,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.NetworkMode != nil {
networkMode = *t.NetworkMode
}
shmSize := new(dockeropts.MemBytes)
if t.ShmSize != nil {
if err := shmSize.Set(*t.ShmSize); err != nil {
return nil, errors.Errorf("invalid value %s for membytes key shm-size", *t.ShmSize)
}
}
bo := &build.Options{
Inputs: bi,
@@ -1264,7 +1105,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
Pull: pull,
NetworkMode: networkMode,
Linked: t.linked,
ShmSize: *shmSize,
}
platforms, err := platformutil.Parse(t.Platforms)
@@ -1273,6 +1113,9 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
bo.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider(dockerConfig))
secrets, err := buildflags.ParseSecretSpecs(t.Secrets)
if err != nil {
return nil, err
@@ -1321,16 +1164,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
return nil, err
}
annotations, err := buildflags.ParseAnnotations(t.Annotations)
if err != nil {
return nil, err
}
for _, e := range bo.Exports {
for k, v := range annotations {
e.Attrs[k.String()] = v
}
}
attests, err := buildflags.ParseAttests(t.Attest)
if err != nil {
return nil, err
@@ -1342,14 +1175,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
return nil, err
}
ulimits := dockeropts.NewUlimitOpt(nil)
for _, field := range t.Ulimits {
if err := ulimits.Set(field); err != nil {
return nil, err
}
}
bo.Ulimits = ulimits
return bo, nil
}

View File

@@ -3,12 +3,10 @@ package bake
import (
"context"
"os"
"path/filepath"
"sort"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -22,8 +20,6 @@ target "webDEP" {
VAR_BOTH = "webDEP"
}
no-cache = true
shm-size = "128m"
ulimits = ["nofile=1024:1024"]
}
target "webapp" {
@@ -47,8 +43,6 @@ target "webapp" {
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, true, *m["webapp"].NoCache)
require.Equal(t, "128m", *m["webapp"].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, m["webapp"].Ulimits)
require.Nil(t, m["webapp"].Pull)
require.Equal(t, 1, len(g))
@@ -133,12 +127,6 @@ target "webapp" {
require.Equal(t, []string{"webapp"}, g["default"].Targets)
})
t.Run("ShmSizeOverride", func(t *testing.T) {
m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.shm-size=256m"}, nil)
require.NoError(t, err)
require.Equal(t, "256m", *m["webapp"].ShmSize)
})
t.Run("PullOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
@@ -385,7 +373,7 @@ services:
require.Equal(t, []string{"web_app"}, g["default"].Targets)
}
func TestHCLContextCwdPrefix(t *testing.T) {
func TestHCLCwdPrefix(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
@@ -398,49 +386,18 @@ func TestHCLContextCwdPrefix(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.Equal(t, 1, len(m))
_, ok := m["app"]
require.True(t, ok)
_, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, "test", *m["app"].Dockerfile)
require.Equal(t, "foo", *m["app"].Context)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
assert.Equal(t, "test", *m["app"].Dockerfile)
assert.Equal(t, "foo", *m["app"].Context)
assert.Equal(t, "foo/test", bo["app"].Inputs.DockerfilePath)
assert.Equal(t, "foo", bo["app"].Inputs.ContextPath)
}
func TestHCLDockerfileCwdPrefix(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
context = "."
dockerfile = "cwd://Dockerfile.app"
}`),
}
ctx := context.TODO()
cwd, err := os.Getwd()
require.NoError(t, err)
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
assert.Equal(t, "cwd://Dockerfile.app", *m["app"].Dockerfile)
assert.Equal(t, ".", *m["app"].Context)
assert.Equal(t, filepath.Join(cwd, "Dockerfile.app"), bo["app"].Inputs.DockerfilePath)
assert.Equal(t, ".", bo["app"].Inputs.ContextPath)
}
func TestOverrideMerge(t *testing.T) {
@@ -1441,7 +1398,7 @@ func TestReadLocalFilesDefault(t *testing.T) {
for _, tf := range tt.filenames {
require.NoError(t, os.WriteFile(tf, []byte(tf), 0644))
}
files, err := ReadLocalFiles(nil, nil, nil)
files, err := ReadLocalFiles(nil, nil)
require.NoError(t, err)
if len(files) == 0 {
require.Equal(t, len(tt.expected), len(files))
@@ -1493,31 +1450,3 @@ func TestAttestDuplicates(t *testing.T) {
"provenance": ptrstr("type=provenance,mode=max"),
}, opts["default"].Attests)
}
func TestAnnotations(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,name=foo"]
annotations = ["manifest[linux/amd64]:foo=bar"]
}`),
}
ctx := context.TODO()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
require.Equal(t, "type=image,name=foo", m["app"].Outputs[0])
require.Equal(t, "manifest[linux/amd64]:foo=bar", m["app"].Annotations[0])
require.Len(t, bo["app"].Exports, 1)
require.Equal(t, "bar", bo["app"].Exports[0].Attrs["annotation-manifest[linux/amd64].foo"])
}

View File

@@ -1,17 +1,13 @@
package bake
import (
"context"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/compose-spec/compose-go/v2/dotenv"
"github.com/compose-spec/compose-go/v2/loader"
composetypes "github.com/compose-spec/compose-go/v2/types"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/compose-spec/compose-go/dotenv"
"github.com/compose-spec/compose-go/loader"
compose "github.com/compose-spec/compose-go/types"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
)
@@ -21,9 +17,9 @@ func ParseComposeFiles(fs []File) (*Config, error) {
if err != nil {
return nil, err
}
var cfgs []composetypes.ConfigFile
var cfgs []compose.ConfigFile
for _, f := range fs {
cfgs = append(cfgs, composetypes.ConfigFile{
cfgs = append(cfgs, compose.ConfigFile{
Filename: f.Name,
Content: f.Data,
})
@@ -31,11 +27,11 @@ func ParseComposeFiles(fs []File) (*Config, error) {
return ParseCompose(cfgs, envs)
}
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
cfg, err := loader.Load(compose.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
@@ -55,7 +51,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g := &Group{Name: "default"}
for _, s := range cfg.Services {
s := s
if s.Build == nil {
continue
}
@@ -89,24 +84,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
}
}
var shmSize *string
if s.Build.ShmSize > 0 {
shmSizeBytes := dockeropts.MemBytes(s.Build.ShmSize)
shmSizeStr := shmSizeBytes.String()
shmSize = &shmSizeStr
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
ulimit, err := units.ParseUlimit(fmt.Sprintf("%s=%d:%d", n, u.Soft, u.Hard))
if err != nil {
return nil, err
}
ulimits = append(ulimits, ulimit.String())
}
}
var secrets []string
for _, bs := range s.Build.Secrets {
secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
@@ -143,8 +120,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
CacheTo: s.Build.CacheTo,
NetworkMode: &s.Build.Network,
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err
@@ -182,8 +157,8 @@ func validateComposeFile(dt []byte, fn string) (bool, error) {
}
func validateCompose(dt []byte, envs map[string]string) error {
_, err := loader.Load(composetypes.ConfigDetails{
ConfigFiles: []composetypes.ConfigFile{
_, err := loader.Load(compose.ConfigDetails{
ConfigFiles: []compose.ConfigFile{
{
Content: dt,
},
@@ -246,7 +221,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return curenv, nil
}
func flatten(in composetypes.MappingWithEquals) map[string]*string {
func flatten(in compose.MappingWithEquals) map[string]*string {
if len(in) == 0 {
return nil
}
@@ -350,8 +325,8 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
// composeToBuildkitSecret converts secret from compose format to buildkit's
// csv format.
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
if psecret.External {
func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
if psecret.External.External {
return "", errors.Errorf("unsupported external secret %s", psecret.Name)
}

View File

@@ -6,7 +6,7 @@ import (
"sort"
"testing"
composetypes "github.com/compose-spec/compose-go/v2/types"
compose "github.com/compose-spec/compose-go/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -22,7 +22,7 @@ services:
build:
context: ./dir
additional_contexts:
foo: ./bar
foo: /bar
dockerfile: Dockerfile-alternate
network:
none
@@ -49,7 +49,7 @@ secrets:
file: /root/.aws/credentials
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
@@ -62,12 +62,12 @@ secrets:
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, "db", *c.Targets[0].Context)
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, "dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "bar"}, c.Targets[1].Contexts)
require.Equal(t, "./dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "/bar"}, c.Targets[1].Contexts)
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
@@ -80,7 +80,7 @@ secrets:
}, c.Targets[1].Secrets)
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, "dir", *c.Targets[2].Context)
require.Equal(t, "./dir", *c.Targets[2].Context)
require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
}
@@ -92,7 +92,7 @@ services:
webapp:
build: ./db
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, 1, len(c.Targets))
@@ -111,7 +111,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
@@ -136,7 +136,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -167,7 +167,7 @@ services:
t.Setenv("BAR", "foo")
t.Setenv("ZZZ_BAR", "zzz_foo")
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
@@ -181,7 +181,7 @@ services:
entrypoint: echo 1
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.Error(t, err)
}
@@ -206,7 +206,7 @@ networks:
gateway: 10.5.0.254
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -223,7 +223,7 @@ services:
- bar
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
}
@@ -260,7 +260,7 @@ networks:
name: test-net
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -303,11 +303,6 @@ services:
args:
CT_ECR: foo
CT_TAG: bar
shm_size: 128m
ulimits:
nofile:
soft: 1024
hard: 1024
x-bake:
secret:
- id=mysecret,src=/local/secret
@@ -318,7 +313,7 @@ services:
no-cache: true
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -337,8 +332,6 @@ services:
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
require.Equal(t, newBool(true), c.Targets[1].NoCache)
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
}
func TestComposeExtDedup(t *testing.T) {
@@ -364,7 +357,7 @@ services:
- type=local,dest=path/to/cache
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
@@ -397,7 +390,7 @@ services:
- ` + envf.Name() + `
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
}
@@ -443,7 +436,7 @@ services:
published: "3306"
protocol: tcp
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -489,7 +482,7 @@ func TestServiceName(t *testing.T) {
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
services:
` + tt.svc + `:
build:
@@ -560,7 +553,7 @@ services:
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
require.Error(t, err)
} else {
@@ -658,90 +651,11 @@ services:
bar: "baz"
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
}
func TestDependsOn(t *testing.T) {
var dt = []byte(`
services:
foo:
build:
context: .
ports:
- 3306:3306
depends_on:
- bar
bar:
build:
context: .
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestInclude(t *testing.T) {
tmpdir := t.TempDir()
err := os.WriteFile(filepath.Join(tmpdir, "compose-foo.yml"), []byte(`
services:
foo:
build:
context: .
target: buildfoo
ports:
- 3306:3306
`), 0644)
require.NoError(t, err)
var dt = []byte(`
include:
- compose-foo.yml
services:
bar:
build:
context: .
target: buildbar
`)
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{
Name: "composetypes.yml",
Data: dt,
}})
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "bar", c.Targets[0].Name)
require.Equal(t, "buildbar", *c.Targets[0].Target)
require.Equal(t, "foo", c.Targets[1].Name)
require.Equal(t, "buildfoo", *c.Targets[1].Target)
}
func TestDevelop(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
develop:
watch:
- path: ./webapp/html
action: sync
target: /var/www
ignore:
- node_modules/
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
// chdir changes the current working directory to the named directory,
// and then restore the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {

View File

@@ -634,29 +634,6 @@ func TestHCLMultiFileAttrs(t *testing.T) {
require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
}
func TestHCLMultiFileGlobalAttrs(t *testing.T) {
dt := []byte(`
FOO = "abc"
target "app" {
args = {
v1 = "pre-${FOO}"
}
}
`)
dt2 := []byte(`
FOO = "def"
`)
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
}
func TestHCLDuplicateTarget(t *testing.T) {
dt := []byte(`
target "app" {
@@ -1113,27 +1090,6 @@ func TestHCLMatrixBadTypes(t *testing.T) {
require.Error(t, err)
}
func TestHCLMatrixWithGlobalTarget(t *testing.T) {
dt := []byte(`
target "x" {
tags = ["a", "b"]
}
target "default" {
tags = target.x.tags
matrix = {
dummy = [""]
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "x", c.Targets[0].Name)
require.Equal(t, "default", c.Targets[1].Name)
require.Equal(t, []string{"a", "b"}, c.Targets[1].Tags)
}
func TestJSONAttributes(t *testing.T) {
dt := []byte(`{"FOO": "abc", "variable": {"BAR": {"default": "def"}}, "target": { "app": { "args": {"v1": "pre-${FOO}-${BAR}"}} } }`)

View File

@@ -613,7 +613,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
attrs, diags := b.JustAttributes()
if diags.HasErrors() {
if d := removeAttributesDiags(diags, reserved, p.vars, attrs); len(d) > 0 {
if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
return nil, d
}
}
@@ -631,14 +631,13 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
for _, a := range content.Attributes {
a := a
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "global attributes currently not supported",
Subject: a.Range.Ptr(),
Context: a.Range.Ptr(),
Subject: &a.Range,
Context: &a.Range,
},
}
}
@@ -661,14 +660,13 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
var subject *hcl.Range
var context *hcl.Range
if p.funcs[k].Params != nil {
subject = p.funcs[k].Params.Range.Ptr()
subject = &p.funcs[k].Params.Range
context = subject
} else {
for _, block := range blocks.Blocks {
block := block
if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
subject = block.LabelRanges[0].Ptr()
context = block.DefRange.Ptr()
subject = &block.LabelRanges[0]
context = &block.DefRange
break
}
}
@@ -734,7 +732,6 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
diags = hcl.Diagnostics{}
for _, b := range content.Blocks {
b := b
v := reflect.ValueOf(val)
err := p.resolveBlock(b, nil)
@@ -745,7 +742,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
continue
}
} else {
return nil, wrapErrorDiagnostic("Invalid block", err, b.LabelRanges[0].Ptr(), b.DefRange.Ptr())
return nil, wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
}
}
@@ -857,7 +854,7 @@ func getNameIndex(v reflect.Value) (int, bool) {
return 0, false
}
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable, attrs hcl.Attributes) hcl.Diagnostics {
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
var fdiags hcl.Diagnostics
for _, d := range diags {
if fout := func(d *hcl.Diagnostic) bool {
@@ -879,12 +876,6 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
return true
}
}
for a := range attrs {
// Do the same for attributes
if strings.HasPrefix(d.Detail, fmt.Sprintf(`Argument "%s" was already set at `, a)) {
return true
}
}
return false
}(d); !fout {
fdiags = append(fdiags, d)

View File

@@ -1,230 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
package hclparser
import (
"fmt"
"github.com/hashicorp/hcl/v2"
)
// MergeFiles combines the given files to produce a single body that contains
// configuration from all of the given files.
//
// The ordering of the given files decides the order in which contained
// elements will be returned. If any top-level attributes are defined with
// the same name across multiple files, a diagnostic will be produced from
// the Content and PartialContent methods describing this error in a
// user-friendly way.
func MergeFiles(files []*hcl.File) hcl.Body {
var bodies []hcl.Body
for _, file := range files {
bodies = append(bodies, file.Body)
}
return MergeBodies(bodies)
}
// MergeBodies is like MergeFiles except it deals directly with bodies, rather
// than with entire files.
func MergeBodies(bodies []hcl.Body) hcl.Body {
if len(bodies) == 0 {
// Swap out for our singleton empty body, to reduce the number of
// empty slices we have hanging around.
return emptyBody
}
// If any of the given bodies are already merged bodies, we'll unpack
// to flatten to a single mergedBodies, since that's conceptually simpler.
// This also, as a side-effect, eliminates any empty bodies, since
// empties are merged bodies with no inner bodies.
var newLen int
var flatten bool
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
newLen += len(children)
flatten = true
} else {
newLen++
}
}
if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside
return mergedBodies(bodies)
}
if newLen == 0 {
// Don't allocate a new empty when we already have one
return emptyBody
}
n := make([]hcl.Body, 0, newLen)
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
n = append(n, children...)
} else {
n = append(n, body)
}
}
return mergedBodies(n)
}
var emptyBody = mergedBodies([]hcl.Body{})
// EmptyBody returns a body with no content. This body can be used as a
// placeholder when a body is required but no body content is available.
func EmptyBody() hcl.Body {
return emptyBody
}
type mergedBodies []hcl.Body
// Content returns the content produced by applying the given schema to all
// of the merged bodies and merging the result.
//
// Although required attributes _are_ supported, they should be used sparingly
// with merged bodies since in this case there is no contextual information
// with which to return good diagnostics. Applications working with merged
// bodies may wish to mark all attributes as optional and then check for
// required attributes afterwards, to produce better diagnostics.
func (mb mergedBodies) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
// the returned body will always be empty in this case, because mergedContent
// will only ever call Content on the child bodies.
content, _, diags := mb.mergedContent(schema, false)
return content, diags
}
func (mb mergedBodies) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
return mb.mergedContent(schema, true)
}
func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
var diags hcl.Diagnostics
for _, body := range mb {
thisAttrs, thisDiags := body.JustAttributes()
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisAttrs != nil {
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
}
}
}
return attrs, diags
}
func (mb mergedBodies) MissingItemRange() hcl.Range {
if len(mb) == 0 {
// Nothing useful to return here, so we'll return some garbage.
return hcl.Range{
Filename: "<empty>",
}
}
// arbitrarily use the first body's missing item range
return mb[0].MissingItemRange()
}
func (mb mergedBodies) mergedContent(schema *hcl.BodySchema, partial bool) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
// We need to produce a new schema with none of the attributes marked as
// required, since _any one_ of our bodies can contribute an attribute value.
// We'll separately check that all required attributes are present at
// the end.
mergedSchema := &hcl.BodySchema{
Blocks: schema.Blocks,
}
for _, attrS := range schema.Attributes {
mergedAttrS := attrS
mergedAttrS.Required = false
mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
}
var mergedLeftovers []hcl.Body
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
}
var diags hcl.Diagnostics
for _, body := range mb {
var thisContent *hcl.BodyContent
var thisLeftovers hcl.Body
var thisDiags hcl.Diagnostics
if partial {
thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
} else {
thisContent, thisDiags = body.Content(mergedSchema)
}
if thisLeftovers != nil {
mergedLeftovers = append(mergedLeftovers, thisLeftovers)
}
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisContent.Attributes != nil {
for name, attr := range thisContent.Attributes {
if existing := content.Attributes[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisContent.Attributes[name].NameRange.Ptr(),
})
}
content.Attributes[name] = attr
}
}
if len(thisContent.Blocks) != 0 {
content.Blocks = append(content.Blocks, thisContent.Blocks...)
}
}
// Finally, we check for required attributes.
for _, attrS := range schema.Attributes {
if !attrS.Required {
continue
}
if content.Attributes[attrS.Name] == nil {
// We don't have any context here to produce a good diagnostic,
// which is why we warn in the Content docstring to minimize the
// use of required attributes on merged bodies.
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required argument",
Detail: fmt.Sprintf(
"The argument %q is required, but was not set.",
attrS.Name,
),
})
}
}
leftoverBody := MergeBodies(mergedLeftovers)
return content, leftoverBody, diags
}
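For orientation, a minimal usage sketch of the forked helpers above (the hclparse-based loader and the function name are illustrative, not part of the fork):

package hclparser

import (
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclparse"
)

// parseAndMerge parses several definition files and merges their bodies so
// that later schema lookups see one combined configuration; duplicate
// top-level arguments are reported later, when Content or JustAttributes is
// called on the merged body.
func parseAndMerge(filenames []string) (hcl.Body, hcl.Diagnostics) {
	parser := hclparse.NewParser()
	var (
		files []*hcl.File
		diags hcl.Diagnostics
	)
	for _, name := range filenames {
		f, d := parser.ParseHCLFile(name)
		diags = append(diags, d...)
		if f != nil {
			files = append(files, f)
		}
	}
	return MergeFiles(files), diags
}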


@@ -59,7 +59,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{Session: session}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err


@@ -4,8 +4,10 @@ import (
"bufio"
"bytes"
"context"
"crypto/rand"
_ "crypto/sha256" // ensure digests can be computed
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
@@ -21,18 +23,17 @@ import (
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/resolver"
"github.com/docker/buildx/util/waitmap"
"github.com/docker/cli/opts"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/docker/docker/pkg/jsonmessage"
@@ -65,14 +66,12 @@ var (
)
const (
//nolint:gosec // G101: false-positive
printFallbackImage = "docker/dockerfile:1.5.2-labs@sha256:f2e91734a84c0922ff47aa4098ab775f1dfa932430d2888dd5cad5251fafdac4"
)
type Options struct {
Inputs Inputs
Ref string
Allow []entitlements.Entitlement
Attests map[string]*string
BuildArgs map[string]string
@@ -92,11 +91,12 @@ type Options struct {
Target string
Ulimits *opts.UlimitOpt
Session []session.Attachable
Linked bool // Linked marks this target as exclusively linked (not requested by the user).
Session []session.Attachable
// Linked marks this target as exclusively linked (not requested by the user).
Linked bool
PrintFunc *PrintFunc
SourcePolicy *spb.Policy
GroupRef string
}
type PrintFunc struct {
@@ -118,11 +118,6 @@ type NamedContext struct {
State *llb.State
}
type reqForNode struct {
*resolvedNode
so *client.SolveOpt
}
func filterAvailableNodes(nodes []builder.Node) ([]builder.Node, error) {
out := make([]builder.Node, 0, len(nodes))
err := errors.Errorf("no drivers found")
@@ -140,6 +135,218 @@ func filterAvailableNodes(nodes []builder.Node) ([]builder.Node, error) {
return nil, err
}
type driverPair struct {
driverIndex int
platforms []specs.Platform
so *client.SolveOpt
bopts gateway.BuildOpts
}
func driverIndexes(m map[string][]driverPair) []int {
out := make([]int, 0, len(m))
visited := map[int]struct{}{}
for _, dp := range m {
for _, d := range dp {
if _, ok := visited[d.driverIndex]; ok {
continue
}
visited[d.driverIndex] = struct{}{}
out = append(out, d.driverIndex)
}
}
return out
}
func allIndexes(l int) []int {
out := make([]int, 0, l)
for i := 0; i < l; i++ {
out = append(out, i)
}
return out
}
func ensureBooted(ctx context.Context, nodes []builder.Node, idxs []int, pw progress.Writer) ([]*client.Client, error) {
clients := make([]*client.Client, len(nodes))
baseCtx := ctx
eg, ctx := errgroup.WithContext(ctx)
for _, i := range idxs {
func(i int) {
eg.Go(func() error {
c, err := driver.Boot(ctx, baseCtx, nodes[i].Driver, pw)
if err != nil {
return err
}
clients[i] = c
return nil
})
}(i)
}
if err := eg.Wait(); err != nil {
return nil, err
}
return clients, nil
}
func splitToDriverPairs(availablePlatforms map[string]int, opt map[string]Options) map[string][]driverPair {
m := map[string][]driverPair{}
for k, opt := range opt {
mm := map[int][]specs.Platform{}
for _, p := range opt.Platforms {
k := platforms.Format(p)
idx := availablePlatforms[k] // default 0
pp := mm[idx]
pp = append(pp, p)
mm[idx] = pp
}
// if no platform is specified, use first driver
if len(mm) == 0 {
mm[0] = nil
}
dps := make([]driverPair, 0, 2)
for idx, pp := range mm {
dps = append(dps, driverPair{driverIndex: idx, platforms: pp})
}
m[k] = dps
}
return m
}
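To make the split concrete, a worked example (platform names and indexes are hypothetical):

// With availablePlatforms = {"linux/amd64": 0, "linux/arm64": 1}, a target
// requesting [linux/amd64, linux/arm64, linux/riscv64] is split into
//   [{driverIndex: 0, platforms: [linux/amd64, linux/riscv64]},
//    {driverIndex: 1, platforms: [linux/arm64]}]
// The unknown linux/riscv64 lands on index 0 because a missing map key yields
// the zero value, matching the "default 0" note above.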
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]driverPair, []*client.Client, error) {
dps, clients, err := resolveDriversBase(ctx, nodes, opt, pw)
if err != nil {
return nil, nil, err
}
bopts := make([]gateway.BuildOpts, len(clients))
span, ctx := tracing.StartSpan(ctx, "load buildkit capabilities", trace.WithSpanKind(trace.SpanKindInternal))
eg, ctx := errgroup.WithContext(ctx)
for i, c := range clients {
if c == nil {
continue
}
func(i int, c *client.Client) {
eg.Go(func() error {
clients[i].Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
bopts[i] = c.BuildOpts()
return nil, nil
}, nil)
return nil
})
}(i, c)
}
err = eg.Wait()
tracing.FinishWithError(span, err)
if err != nil {
return nil, nil, err
}
for key := range dps {
for i, dp := range dps[key] {
dps[key][i].bopts = bopts[dp.driverIndex]
}
}
return dps, clients, nil
}
func resolveDriversBase(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]driverPair, []*client.Client, error) {
availablePlatforms := map[string]int{}
for i, node := range nodes {
for _, p := range node.Platforms {
availablePlatforms[platforms.Format(p)] = i
}
}
undetectedPlatform := false
allPlatforms := map[string]int{}
for _, opt := range opt {
for _, p := range opt.Platforms {
k := platforms.Format(p)
allPlatforms[k] = -1
if _, ok := availablePlatforms[k]; !ok {
undetectedPlatform = true
}
}
}
// fast path
if len(nodes) == 1 || len(allPlatforms) == 0 {
m := map[string][]driverPair{}
for k, opt := range opt {
m[k] = []driverPair{{driverIndex: 0, platforms: opt.Platforms}}
}
clients, err := ensureBooted(ctx, nodes, driverIndexes(m), pw)
if err != nil {
return nil, nil, err
}
return m, clients, nil
}
// map based on existing platforms
if !undetectedPlatform {
m := splitToDriverPairs(availablePlatforms, opt)
clients, err := ensureBooted(ctx, nodes, driverIndexes(m), pw)
if err != nil {
return nil, nil, err
}
return m, clients, nil
}
// otherwise boot all drivers and detect the remaining platforms from their workers
clients, err := ensureBooted(ctx, nodes, allIndexes(len(nodes)), pw)
if err != nil {
return nil, nil, err
}
eg, ctx := errgroup.WithContext(ctx)
workers := make([][]*client.WorkerInfo, len(clients))
for i, c := range clients {
if c == nil {
continue
}
func(i int) {
eg.Go(func() error {
ww, err := clients[i].ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
workers[i] = ww
return nil
})
}(i)
}
if err := eg.Wait(); err != nil {
return nil, nil, err
}
for i, ww := range workers {
for _, w := range ww {
for _, p := range w.Platforms {
p = platforms.Normalize(p)
ps := platforms.Format(p)
if _, ok := availablePlatforms[ps]; !ok {
availablePlatforms[ps] = i
}
}
}
}
return splitToDriverPairs(availablePlatforms, opt), clients, nil
}
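A minimal call-site sketch for the resolver above (the target name, platform and error handling are illustrative; imports are assumed to match this file):

func buildOnResolvedDrivers(ctx context.Context, nodes []builder.Node, pw progress.Writer) error {
	opts := map[string]Options{
		"app": {Platforms: []specs.Platform{platforms.MustParse("linux/amd64")}},
	}
	dps, clients, err := resolveDrivers(ctx, nodes, opts, pw)
	if err != nil {
		return err
	}
	for target, pairs := range dps {
		for _, dp := range pairs {
			// each pair carries the platforms assigned to one booted driver,
			// plus the BuildOpts preloaded from that driver's buildkitd
			_, _, _ = target, clients[dp.driverIndex], dp.bopts
		}
	}
	return nil
}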
func toRepoOnly(in string) (string, error) {
m := map[string]struct{}{}
p := strings.Split(in, ",")
@@ -184,7 +391,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
return nil, nil, notSupported(nodeDriver, driver.CacheExport)
}
}
@@ -217,7 +424,6 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{},
@@ -227,10 +433,6 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
SourcePolicy: opt.SourcePolicy,
}
if so.Ref == "" {
so.Ref = identity.NewID()
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
@@ -252,21 +454,17 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
supportsAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations"))
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
if !supportsAttestations {
return nil, nil, errors.Errorf("attestations are not supported by the current buildkitd")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
if _, ok := opt.Attests["provenance"]; !ok && supportsAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
@@ -331,7 +529,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
return nil, nil, notSupported(nodeDriver, driver.OCIExporter)
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
@@ -354,12 +552,10 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
opt.Exports[i].Output = wrapWriteCloser(w)
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
return nil, nil, notSupported(nodeDriver, driver.DockerExporter)
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
@@ -393,14 +589,11 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
so.SharedKey = sharedKey + ":" + confutil.TryNodeIdentifier(configDir)
so.SharedKey = sharedKey + ":" + tryNodeIdentifier(configDir)
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
so.FrontendAttrs["image-resolve-mode"] = "pull"
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
@@ -431,7 +624,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
return nil, nil, notSupported(nodeDriver, driver.MultiPlatform)
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
@@ -470,6 +663,12 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
so.FrontendAttrs["ulimit"] = ulimits
}
// remember local state, such as the context directory path, that is not sent to buildkit
so.Ref = identity.NewID()
if err := saveLocalState(so, opt, node, configDir); err != nil {
return nil, nil, err
}
return &so, releaseF, nil
}
@@ -514,7 +713,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
}
drivers, err := resolveDrivers(ctx, nodes, opt, w)
m, clients, err := resolveDrivers(ctx, nodes, opt, w)
if err != nil {
return nil, err
}
@@ -528,46 +727,31 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
}()
reqForNodes := make(map[string][]*reqForNode)
eg, ctx := errgroup.WithContext(ctx)
for k, opt := range opt {
multiDriver := len(drivers[k]) > 1
multiDriver := len(m[k]) > 1
hasMobyDriver := false
gitattrs, addVCSLocalDir, err := getGitAttributes(ctx, opt.Inputs.ContextPath, opt.Inputs.DockerfilePath)
gitattrs, err := getGitAttributes(ctx, opt.Inputs.ContextPath, opt.Inputs.DockerfilePath)
if err != nil {
logrus.WithError(err).Warn("current commit information was not captured by the build")
logrus.Warn(err)
}
var reqn []*reqForNode
for _, np := range drivers[k] {
if np.Node().Driver.IsMobyDriver() {
for i, np := range m[k] {
node := nodes[np.driverIndex]
if node.Driver.IsMobyDriver() {
hasMobyDriver = true
}
opt.Platforms = np.platforms
gatewayOpts, err := np.BuildOpts(ctx)
so, release, err := toSolveOpt(ctx, node, multiDriver, opt, np.bopts, configDir, w, docker)
if err != nil {
return nil, err
}
so, release, err := toSolveOpt(ctx, np.Node(), multiDriver, opt, gatewayOpts, configDir, w, docker)
if err != nil {
return nil, err
}
if err := saveLocalState(so, k, opt, np.Node(), configDir); err != nil {
return nil, err
}
for k, v := range gitattrs {
so.FrontendAttrs[k] = v
}
if addVCSLocalDir != nil {
addVCSLocalDir(so)
}
defers = append(defers, release)
reqn = append(reqn, &reqForNode{
resolvedNode: np,
so: so,
})
m[k][i].so = so
}
reqForNodes[k] = reqn
for _, at := range opt.Session {
if s, ok := at.(interface {
SetLogger(progresswriter.Logger)
@@ -580,8 +764,8 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
// validate for multi-node push
if hasMobyDriver && multiDriver {
for _, np := range reqForNodes[k] {
for _, e := range np.so.Exports {
for _, dp := range m[k] {
for _, e := range dp.so.Exports {
if e.Type == "moby" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, errors.Errorf("multi-node push can't currently be performed with the docker driver, please switch to a different driver")
@@ -594,13 +778,12 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
// validate that all links between targets use same drivers
for name := range opt {
dps := reqForNodes[name]
for i, dp := range dps {
so := reqForNodes[name][i].so
for k, v := range so.FrontendAttrs {
dps := m[name]
for _, dp := range dps {
for k, v := range dp.so.FrontendAttrs {
if strings.HasPrefix(k, "context:") && strings.HasPrefix(v, "target:") {
k2 := strings.TrimPrefix(v, "target:")
dps2, ok := drivers[k2]
dps2, ok := m[k2]
if !ok {
return nil, errors.Errorf("failed to find target %s for context %s", k2, strings.TrimPrefix(k, "context:")) // should be validated before already
}
@@ -624,13 +807,12 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
results := waitmap.New()
multiTarget := len(opt) > 1
childTargets := calculateChildTargets(reqForNodes, opt)
for k, opt := range opt {
err := func(k string) error {
opt := opt
dps := drivers[k]
multiDriver := len(drivers[k]) > 1
dps := m[k]
multiDriver := len(m[k]) > 1
var span trace.Span
ctx := ctx
@@ -646,9 +828,8 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
var insecurePush bool
for i, dp := range dps {
i, dp := i, dp
node := dp.Node()
so := reqForNodes[k][i].so
i, dp, so := i, dp, *dp.so
node := nodes[dp.driverIndex]
if multiDriver {
for i, e := range so.Exports {
switch e.Type {
@@ -679,14 +860,11 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
pw := progress.WithPrefix(w, k, multiTarget)
c, err := dp.Client(ctx)
if err != nil {
return err
}
c := clients[dp.driverIndex]
eg2.Go(func() error {
pw = progress.ResetTime(pw)
if err := waitContextDeps(ctx, dp.driverIndex, results, so); err != nil {
if err := waitContextDeps(ctx, dp.driverIndex, results, &so); err != nil {
return err
}
@@ -759,43 +937,19 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
printRes = res.Metadata
}
rKey := resultKey(dp.driverIndex, k)
results.Set(rKey, res)
if children, ok := childTargets[rKey]; ok && len(children) > 0 {
// wait for the child targets to register their LLB before evaluating
_, err := results.Get(ctx, children...)
if err != nil {
return nil, err
}
// we need to wait until the child targets have completed before we can release
eg, ctx := errgroup.WithContext(ctx)
eg.Go(func() error {
return res.EachRef(func(ref gateway.Reference) error {
return ref.Evaluate(ctx)
})
})
eg.Go(func() error {
_, err := results.Get(ctx, children...)
return err
})
if err := eg.Wait(); err != nil {
return nil, err
}
}
results.Set(resultKey(dp.driverIndex, k), res)
return res, nil
}
buildRef := fmt.Sprintf("%s/%s/%s", node.Builder, node.Name, so.Ref)
var rr *client.SolveResponse
if resultHandleFunc != nil {
var resultHandle *ResultHandle
resultHandle, rr, err = NewResultHandle(ctx, cc, *so, "buildx", buildFunc, ch)
resultHandle, rr, err = NewResultHandle(ctx, cc, so, "buildx", buildFunc, ch)
resultHandleFunc(dp.driverIndex, resultHandle)
} else {
rr, err = c.Build(ctx, *so, "buildx", buildFunc, ch)
rr, err = c.Build(ctx, so, "buildx", buildFunc, ch)
}
if desktop.BuildBackendEnabled() && node.Driver.HistoryAPISupported(ctx) {
buildRef := fmt.Sprintf("%s/%s/%s", node.Builder, node.Name, so.Ref)
if err != nil {
return &desktop.ErrorWithBuildRef{
Ref: buildRef,
@@ -815,9 +969,8 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
for k, v := range printRes {
rr.ExporterResponse[k] = string(v)
}
rr.ExporterResponse["buildx.build.ref"] = buildRef
node := dp.Node().Driver
node := nodes[dp.driverIndex].Driver
if node.IsMobyDriver() {
for _, e := range so.Exports {
if e.Type == "moby" && e.Attrs["push"] != "" {
@@ -909,7 +1062,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
if len(descs) > 0 {
var imageopt imagetools.Opt
for _, dp := range dps {
imageopt = dp.Node().ImageOpt
imageopt = nodes[dp.driverIndex].ImageOpt
break
}
names := strings.Split(pushNames, ",")
@@ -945,7 +1098,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
}
dt, desc, err := itpull.Combine(ctx, srcs, nil)
dt, desc, err := itpull.Combine(ctx, srcs)
if err != nil {
return err
}
@@ -1158,7 +1311,7 @@ func LoadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw prog
target.LocalDirs["context"] = inp.ContextPath
}
}
case osutil.IsLocalDir(inp.ContextPath):
case isLocalDir(inp.ContextPath):
target.LocalDirs["context"] = inp.ContextPath
switch inp.DockerfilePath {
case "-":
@@ -1172,10 +1325,6 @@ func LoadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw prog
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
default:
@@ -1322,24 +1471,6 @@ func resultKey(index int, name string) string {
return fmt.Sprintf("%d-%s", index, name)
}
// calculateChildTargets returns, as a reverse index, all the targets that depend on the current target
func calculateChildTargets(reqs map[string][]*reqForNode, opt map[string]Options) map[string][]string {
out := make(map[string][]string)
for name := range opt {
dps := reqs[name]
for i, dp := range dps {
so := reqs[name][i].so
for k, v := range so.FrontendAttrs {
if strings.HasPrefix(k, "context:") && strings.HasPrefix(v, "target:") {
target := resultKey(dp.driverIndex, strings.TrimPrefix(v, "target:"))
out[target] = append(out[target], resultKey(dp.driverIndex, name))
}
}
}
}
return out
}
func waitContextDeps(ctx context.Context, index int, results *waitmap.Map, so *client.SolveOpt) error {
m := map[string]string{}
for k, v := range so.FrontendAttrs {
@@ -1426,10 +1557,8 @@ func waitContextDeps(ctx context.Context, index int, results *waitmap.Map, so *c
return nil
}
func notSupported(f driver.Feature, d driver.Driver, docs string) error {
return errors.Errorf(`%s is not supported for the %s driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at %s`, f, d.Factory().Name(), docs)
func notSupported(d driver.Driver, f driver.Feature) error {
return errors.Errorf("%s feature is currently not supported for %s driver. Please switch to a different driver (eg. \"docker buildx create --use\")", f, d.Factory().Name())
}
func noDefaultLoad() bool {
@@ -1475,6 +1604,37 @@ func handleLowercaseDockerfile(dir, p string) string {
return p
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}
var nodeIdentifierMu sync.Mutex
func tryNodeIdentifier(configDir string) (out string) {
nodeIdentifierMu.Lock()
defer nodeIdentifierMu.Unlock()
sessionFile := filepath.Join(configDir, ".buildNodeID")
if _, err := os.Lstat(sessionFile); err != nil {
if os.IsNotExist(err) { // create a new file with stored randomness
b := make([]byte, 8)
if _, err := rand.Read(b); err != nil {
return out
}
if err := os.WriteFile(sessionFile, []byte(hex.EncodeToString(b)), 0600); err != nil {
return out
}
}
}
dt, err := os.ReadFile(sessionFile)
if err == nil {
return string(dt)
}
return
}
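A concrete illustration of how this feeds the solve request above (all values are hypothetical):

// With configDir ~/.docker/buildx containing .buildNodeID = "9f86d081884c7d65"
// (8 random bytes, hex-encoded), a build whose context resolves to
// /home/user/app gets
//   so.SharedKey = "app" + ":" + "9f86d081884c7d65"
// The stored identifier keeps the shared key stable across invocations, so
// BuildKit can correlate repeated builds of the same local context from the
// same client.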
func noPrintFunc(opt map[string]Options) bool {
for _, v := range opt {
if v.PrintFunc != nil {
@@ -1484,6 +1644,43 @@ func noPrintFunc(opt map[string]Options) bool {
return true
}
func saveLocalState(so client.SolveOpt, opt Options, node builder.Node, configDir string) error {
var err error
if so.Ref == "" {
return nil
}
lp := opt.Inputs.ContextPath
dp := opt.Inputs.DockerfilePath
if lp != "" || dp != "" {
if lp != "" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if dp != "" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
ls, err := localstate.New(configDir)
if err != nil {
return err
}
if err := ls.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
LocalPath: lp,
DockerfilePath: dp,
}); err != nil {
return err
}
}
return nil
}
// ReadSourcePolicy reads a source policy from a file.
// The file path is taken from EXPERIMENTAL_BUILDKIT_SOURCE_POLICY env var.
// If the env var is not set, this returns `nil, nil`.


@@ -1,62 +0,0 @@
package build
import (
"context"
stderrors "errors"
"net"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
if platform != nil {
pls = []v1.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}
resolved, err := resolveDrivers(ctx, nodes, opts, pw)
if err != nil {
return nil, err
}
var dialError error
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
p := *platform
var found bool
for _, pp := range rn.platforms {
if platforms.Only(p).Match(pp) {
found = true
break
}
}
if !found {
continue
}
}
conn, err := nodes[rn.driverIndex].Driver.Dial(ctx)
if err == nil {
return conn, nil
}
dialError = stderrors.Join(err)
}
}
return nil, errors.Wrap(dialError, "no nodes available")
}
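A minimal usage sketch for Dial (the platform choice and error handling are illustrative; imports are assumed to match this file):

func dialArm64(ctx context.Context, nodes []builder.Node, pw progress.Writer) (net.Conn, error) {
	p := platforms.MustParse("linux/arm64")
	// picks the first available node whose platforms match linux/arm64 and
	// opens a raw connection to its BuildKit daemon
	conn, err := Dial(ctx, nodes, pw, &p)
	if err != nil {
		return nil, err
	}
	return conn, nil
}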


@@ -1,312 +0,0 @@
package build
import (
"context"
"fmt"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/tracing"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
)
type resolvedNode struct {
resolver *nodeResolver
driverIndex int
platforms []specs.Platform
}
func (dp resolvedNode) Node() builder.Node {
return dp.resolver.nodes[dp.driverIndex]
}
func (dp resolvedNode) Client(ctx context.Context) (*client.Client, error) {
clients, err := dp.resolver.boot(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return nil, err
}
return clients[0], nil
}
func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error) {
opts, err := dp.resolver.opts(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
return opts[0], nil
}
type matchMaker func(specs.Platform) platforms.MatchComparer
type nodeResolver struct {
nodes []builder.Node
clients flightcontrol.Group[*client.Client]
opt flightcontrol.Group[gateway.BuildOpts]
}
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
driverRes := newDriverResolver(nodes)
drivers, err := driverRes.Resolve(ctx, opt, pw)
if err != nil {
return nil, err
}
return drivers, err
}
func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{
nodes: nodes,
}
return r
}
func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
if len(r.nodes) == 0 {
return nil, nil
}
nodes := map[string][]*resolvedNode{}
for k, opt := range opt {
node, perfect, err := r.resolve(ctx, opt.Platforms, pw, platforms.OnlyStrict, nil)
if err != nil {
return nil, err
}
if !perfect {
break
}
nodes[k] = node
}
if len(nodes) != len(opt) {
// if we didn't get a perfect match, we need to boot all drivers
allIndexes := make([]int, len(r.nodes))
for i := range allIndexes {
allIndexes[i] = i
}
clients, err := r.boot(ctx, allIndexes, pw)
if err != nil {
return nil, err
}
eg, egCtx := errgroup.WithContext(ctx)
workers := make([][]specs.Platform, len(clients))
for i, c := range clients {
i, c := i, c
if c == nil {
continue
}
eg.Go(func() error {
ww, err := c.ListWorkers(egCtx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
ps := make(map[string]specs.Platform, len(ww))
for _, w := range ww {
for _, p := range w.Platforms {
pk := platforms.Format(platforms.Normalize(p))
ps[pk] = p
}
}
for _, p := range ps {
workers[i] = append(workers[i], p)
}
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
// then we can attempt to match against all the available platforms
// (this time we don't care about imperfect matches)
nodes = map[string][]*resolvedNode{}
for k, opt := range opt {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
return workers[idx]
})
if err != nil {
return nil, err
}
nodes[k] = node
}
}
idxs := make([]int, 0, len(r.nodes))
for _, nodes := range nodes {
for _, node := range nodes {
idxs = append(idxs, node.driverIndex)
}
}
// preload capabilities
span, ctx := tracing.StartSpan(ctx, "load buildkit capabilities", trace.WithSpanKind(trace.SpanKindInternal))
_, err := r.opts(ctx, idxs, pw)
tracing.FinishWithError(span, err)
if err != nil {
return nil, err
}
return nodes, nil
}
func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
if len(r.nodes) == 0 {
return nil, true, nil
}
perfect := true
nodeIdxs := make([]int, 0)
for _, p := range ps {
idx := r.get(p, matcher, additional)
if idx == -1 {
idx = 0
perfect = false
}
nodeIdxs = append(nodeIdxs, idx)
}
var nodes []*resolvedNode
if len(nodeIdxs) == 0 {
nodes = append(nodes, &resolvedNode{
resolver: r,
driverIndex: 0,
})
} else {
for i, idx := range nodeIdxs {
node := &resolvedNode{
resolver: r,
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
}
nodes = recombineNodes(nodes)
if _, err := r.boot(ctx, nodeIdxs, pw); err != nil {
return nil, false, err
}
return nodes, perfect, nil
}
func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
best := -1
bestPlatform := specs.Platform{}
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {
platforms = append([]specs.Platform{}, platforms...)
platforms = append(platforms, additionalPlatforms(i, node)...)
}
for _, p2 := range platforms {
m := matcher(p2)
if !m.Match(p) {
continue
}
if best == -1 {
best = i
bestPlatform = p2
continue
}
if matcher(p2).Less(p, bestPlatform) {
best = i
bestPlatform = p2
}
}
}
return best
}
func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer) ([]*client.Client, error) {
clients := make([]*client.Client, len(idxs))
baseCtx := ctx
eg, ctx := errgroup.WithContext(ctx)
for i, idx := range idxs {
i, idx := i, idx
eg.Go(func() error {
c, err := r.clients.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil {
return nil, nil
}
return driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
})
if err != nil {
return err
}
clients[i] = c
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return clients, nil
}
func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer) ([]gateway.BuildOpts, error) {
clients, err := r.boot(ctx, idxs, pw)
if err != nil {
return nil, err
}
bopts := make([]gateway.BuildOpts, len(clients))
eg, ctx := errgroup.WithContext(ctx)
for i, idxs := range idxs {
i, idx := i, idxs
c := clients[i]
if c == nil {
continue
}
eg.Go(func() error {
opt, err := r.opt.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
opt := gateway.BuildOpts{}
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts()
return nil, nil
}, nil)
return opt, err
})
if err != nil {
return err
}
bopts[i] = opt
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return bopts, nil
}
// recombineNodes recombines resolved nodes that are on the same driver
// back together into a single node.
func recombineNodes(nodes []*resolvedNode) []*resolvedNode {
result := make([]*resolvedNode, 0, len(nodes))
lookup := map[int]int{}
for _, node := range nodes {
if idx, ok := lookup[node.driverIndex]; ok {
result[idx].platforms = append(result[idx].platforms, node.platforms...)
} else {
lookup[node.driverIndex] = len(result)
result = append(result, node)
}
}
return result
}
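A short worked example of the recombination step (platforms are hypothetical):

// Resolving [linux/amd64, linux/arm/v7] when driver 1 is the best match for
// both first yields two resolvedNodes with driverIndex 1; recombineNodes folds
// them into a single
//   resolvedNode{driverIndex: 1, platforms: [linux/amd64, linux/arm/v7]}
// so each selected driver handles one solve per target rather than one per
// platform.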


@@ -1,315 +0,0 @@
package build
import (
"context"
"sort"
"testing"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
)
func TestFindDriverSanity(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
r := makeTestResolver(nil)
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Nil(t, res)
}
func TestFindDriverWeirdName(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/foobar")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestFindDriverUnknown(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeSinglePlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
// find second platform
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
// find an unknown platform, should match the first driver
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrict(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
})
// arm64 should match itself
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v7
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrictARM(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
"ccc": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
}
func TestSelectNodeNonStrictLower(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
// v8 can't be built on v7 (so we should select the default)...
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
// ...but v6 can be built on v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferStart(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
"ccc": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferExact(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/arm/v8")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Empty(t, res[0].platforms)
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
if n.Builder == "aaa" {
return []specs.Platform{platforms.MustParse("linux/arm/v7")}
}
return nil
})
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSplitNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/arm64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
})
// the "best" choice would be the node with both platforms, but we're using
// a naive algorithm that doesn't try to unify the platforms
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
var ns []builder.Node
for name, platforms := range nodes {
ns = append(ns, builder.Node{
Builder: name,
Platforms: platforms,
})
}
sort.Slice(ns, func(i, j int) bool {
return ns[i].Builder < ns[j].Builder
})
return newDriverResolver(ns)
}


@@ -9,18 +9,16 @@ import (
"strings"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(*client.SolveOpt), error) {
res := make(map[string]string)
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
res = make(map[string]string)
if contextPath == "" {
return nil, nil, nil
return
}
setGitLabels := false
@@ -39,7 +37,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
if !setGitLabels && !setGitInfo {
return nil, nil, nil
return
}
// figure out in which directory the git command needs to run in
@@ -47,32 +45,27 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
if filepath.IsAbs(contextPath) {
wd = contextPath
} else {
wd, _ = filepath.Abs(filepath.Join(osutil.GetWd(), contextPath))
cwd, _ := os.Getwd()
wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
}
wd = osutil.SanitizePath(wd)
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return res, nil, errors.Wrap(err, "git was not found in the system")
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
}
return nil, nil, nil
return
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return nil, nil, nil
}
root, err := gitc.RootDir()
if err != nil {
return res, nil, errors.Wrap(err, "failed to get git root dir")
return res, nil
}
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return res, nil, errors.Wrap(err, "failed to get git commit")
return res, errors.Wrapf(err, "buildx: failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -100,38 +93,23 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
}
if setGitLabels && root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
dockerfilePath = filepath.Join(osutil.GetWd(), dockerfilePath)
}
if r, err := filepath.Rel(root, dockerfilePath); err == nil && !strings.HasPrefix(r, "..") {
res["label:"+DockerfileLabel] = r
if setGitLabels {
if root, err := gitc.RootDir(); err != nil {
return res, errors.Wrapf(err, "buildx: failed to get git root dir")
} else if root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
cwd, _ := os.Getwd()
dockerfilePath = filepath.Join(cwd, dockerfilePath)
}
dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
if !strings.HasPrefix(dockerfilePath, "..") {
res["label:"+DockerfileLabel] = dockerfilePath
}
}
}
return res, func(so *client.SolveOpt) {
if !setGitInfo || root == "" {
return
}
for k, dir := range so.LocalDirs {
dir, err = filepath.EvalSymlinks(dir)
if err != nil {
continue
}
dir, err = filepath.Abs(dir)
if err != nil {
continue
}
if lp, err := osutil.GetLongPathName(dir); err == nil {
dir = lp
}
dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+k] = r
}
}
}, nil
return
}
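For orientation, a hypothetical result of the attribute collection above (all values are invented):

// With BUILDX_GIT_LABELS and BUILDX_GIT_INFO enabled in a clean checkout:
//   res["label:com.docker.image.source.entrypoint"] = "app/Dockerfile" // Dockerfile path relative to the git root
//   res["vcs:revision"] = "1a2b3c4d..." // gains a "-dirty" suffix when BUILDX_GIT_CHECK_DIRTY is set and the tree has local changes
// In the variant that also returns an addVCSLocalDir callback, each entry of
// so.LocalDirs is additionally rewritten into a "vcs:localdir:<name>"
// frontend attribute relative to the git root.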


@@ -9,7 +9,6 @@ import (
"testing"
"github.com/docker/buildx/util/gitutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -31,7 +30,7 @@ func setupTest(tb testing.TB) {
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
}
@@ -39,14 +38,14 @@ func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
assert.Empty(t, gitattrs)
}
@@ -115,7 +114,7 @@ func TestGetGitAttributes(t *testing.T) {
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
for _, e := range tt.expected {
assert.Contains(t, gitattrs, e)
@@ -140,7 +139,7 @@ func TestGetGitAttributesDirty(t *testing.T) {
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
assert.Equal(t, 5, len(gitattrs))
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
@@ -155,59 +154,3 @@ func TestGetGitAttributesDirty(t *testing.T) {
assert.Contains(t, gitattrs, "vcs:revision")
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}
func TestLocalDirs(t *testing.T) {
setupTest(t)
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": ".",
},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
addVCSLocalDir(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
}
func TestLocalDirsSub(t *testing.T) {
gitutil.Mktmp(t)
c, err := gitutil.New()
require.NoError(t, err)
gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
assert.NoError(t, os.MkdirAll("app", 0755))
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": "app",
},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
addVCSLocalDir(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
}


@@ -1,43 +0,0 @@
package build
import (
"path/filepath"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/moby/buildkit/client"
)
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
var err error
if so.Ref == "" {
return nil
}
lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath
if lp != "" || dp != "" {
if lp != "" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if dp != "" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
l, err := localstate.New(configDir)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}
return nil
}


@@ -117,7 +117,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
gwClient: c,
gwCtx: ctx,
}
respErr = err // return original error to preserve stacktrace
respErr = se
close(done)
// Block until the caller closes the ResultHandle.
@@ -388,7 +388,7 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
} else if img != nil {
args = append(args, img.Config.Entrypoint...)
}
if !cfg.NoCmd {
if cfg.Cmd != nil {
args = append(args, cfg.Cmd...)
} else if img != nil {
args = append(args, img.Config.Cmd...)
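A concrete illustration of the argument assembly (image config values are hypothetical):

// With img.Config.Entrypoint = ["/bin/sh", "-c"] and an invoke config that
// supplies Cmd = ["echo", "hi"], the started process receives
//   args = ["/bin/sh", "-c", "echo", "hi"]
// Without an explicit Cmd the image's own Config.Cmd is appended; the NoCmd
// flag in the other variant suppresses the command part entirely.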


@@ -21,7 +21,7 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
var out string
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
if err != nil {
return nil, err


@@ -5,6 +5,7 @@ import (
"bytes"
"context"
"net"
"os"
"strings"
"github.com/docker/buildx/driver"
@@ -33,6 +34,11 @@ func IsRemoteURL(c string) bool {
return false
}
func isLocalDir(c string) bool {
st, err := os.Stat(c)
return err == nil && st.IsDir()
}
func isArchive(header []byte) bool {
for _, m := range [][]byte{
{0x42, 0x5A, 0x68}, // bzip2
@@ -59,10 +65,7 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
}
hosts := make([]string, 0, len(inp))
for _, h := range inp {
host, ip, ok := strings.Cut(h, "=")
if !ok {
host, ip, ok = strings.Cut(h, ":")
}
host, ip, ok := strings.Cut(h, ":")
if !ok || host == "" || ip == "" {
return "", errors.Errorf("invalid host %s", h)
}
@@ -74,16 +77,8 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
}
ip = hgip.String()
} else {
// If the address is enclosed in square brackets, extract it (for IPv6, but
// permit it for IPv4 as well; we don't know the address family here, but it's
// unambiguous).
if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
ip = ip[1 : len(ip)-1]
}
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
} else if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
hosts = append(hosts, host+"="+ip)
}
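
A standalone sketch of the add-host parsing exercised by the test table that follows: accept host=ip or host:ip, strip optional square brackets around the address, and validate it with net.ParseIP. This mirrors the expected behaviour but is not the buildx function itself:

package main

import (
    "fmt"
    "net"
    "strings"
)

func parseExtraHost(h string) (string, error) {
    host, ip, ok := strings.Cut(h, "=")
    if !ok {
        host, ip, ok = strings.Cut(h, ":")
    }
    if !ok || host == "" || ip == "" {
        return "", fmt.Errorf("invalid host %s", h)
    }
    // Allow the address to be wrapped in square brackets (IPv6 style).
    if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
        ip = ip[1 : len(ip)-1]
    }
    if net.ParseIP(ip) == nil {
        return "", fmt.Errorf("invalid host %s", h)
    }
    return host + "=" + ip, nil
}

func main() {
    for _, in := range []string{"myhost:192.168.0.1", "anipv6host=[2003:ab34:e::1]", "ipv6local:::1"} {
        out, err := parseExtraHost(in)
        fmt.Println(out, err)
    }
}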


@@ -1,148 +0,0 @@
package build
import (
"context"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestToBuildkitExtraHosts(t *testing.T) {
tests := []struct {
doc string
input []string
expectedOut string // Expect output==input if not set.
expectedErr string // Expect success if not set.
}{
{
doc: "IPv4, colon sep",
input: []string{`myhost:192.168.0.1`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "IPv4, eq sep",
input: []string{`myhost=192.168.0.1`},
},
{
doc: "Weird but permitted, IPv4 with brackets",
input: []string{`myhost=[192.168.0.1]`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "Host and domain",
input: []string{`host.and.domain.invalid:10.0.2.1`},
expectedOut: `host.and.domain.invalid=10.0.2.1`,
},
{
doc: "IPv6, colon sep",
input: []string{`anipv6host:2003:ab34:e::1`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, colon sep, brackets",
input: []string{`anipv6host:[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, eq sep, brackets",
input: []string{`anipv6host=[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6 localhost, colon sep",
input: []string{`ipv6local:::1`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, eq sep",
input: []string{`ipv6local=::1`},
},
{
doc: "IPv6 localhost, eq sep, brackets",
input: []string{`ipv6local=[::1]`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, non-canonical, colon sep",
input: []string{`ipv6local:0:0:0:0:0:0:0:1`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "Bad address, colon sep",
input: []string{`myhost:192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "Bad address, eq sep",
input: []string{`myhost=192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "No sep",
input: []string{`thathost-nosemicolon10.0.0.1`},
expectedErr: `bad format for add-host: "thathost-nosemicolon10.0.0.1"`,
},
{
doc: "Bad IPv6",
input: []string{`anipv6host:::::1`},
expectedErr: `invalid IP address in add-host: "::::1"`,
},
{
doc: "Bad IPv6, trailing colons",
input: []string{`ipv6local:::0::`},
expectedErr: `invalid IP address in add-host: "::0::"`,
},
{
doc: "Bad IPv6, missing close bracket",
input: []string{`ipv6addr=[::1`},
expectedErr: `invalid IP address in add-host: "[::1"`,
},
{
doc: "Bad IPv6, missing open bracket",
input: []string{`ipv6addr=::1]`},
expectedErr: `invalid IP address in add-host: "::1]"`,
},
{
doc: "Missing address, colon sep",
input: []string{`myhost.invalid:`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "Missing address, eq sep",
input: []string{`myhost.invalid=`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "No input",
input: []string{``},
expectedErr: `bad format for add-host: ""`,
},
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}
t.Run(tc.doc, func(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.Nil(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)
}
})
}
}


@@ -2,31 +2,18 @@ package builder
import (
"context"
"encoding/csv"
"encoding/json"
"net/url"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"golang.org/x/sync/errgroup"
)
@@ -170,14 +157,13 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
return false, nil
}
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.AutoMode)
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
if err != nil {
return false, err
}
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
errCh := make(chan error, len(toBoot))
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
@@ -185,7 +171,6 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
if err != nil {
b.nodes[idx].Err = err
errCh <- err
}
return nil
})
@@ -193,15 +178,11 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
}
err = eg.Wait()
close(errCh)
err1 := printer.Wait()
if err == nil {
err = err1
}
if err == nil && len(errCh) == len(toBoot) {
return false, <-errCh
}
return true, err
}
@@ -226,7 +207,7 @@ type driverFactory struct {
}
// Factory returns the driver factory.
func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_ driver.Factory, err error) {
func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
b.driverFactory.once.Do(func() {
if b.Driver != "" {
b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
@@ -249,7 +230,7 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
if _, err = dockerapi.Ping(ctx); err != nil {
return
}
b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false, dialMeta)
b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
if err != nil {
return
}
@@ -259,28 +240,6 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
return b.driverFactory.Factory, err
}
func (b *Builder) MarshalJSON() ([]byte, error) {
var berr string
if b.err != nil {
berr = strings.TrimSpace(b.err.Error())
}
return json.Marshal(struct {
Name string
Driver string
LastActivity time.Time `json:",omitempty"`
Dynamic bool
Nodes []Node
Err string `json:",omitempty"`
}{
Name: b.Name,
Driver: b.Driver,
LastActivity: b.LastActivity,
Dynamic: b.Dynamic,
Nodes: b.nodes,
Err: berr,
})
}
// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
storeng, err := txn.List()
@@ -331,347 +290,3 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return builders, nil
}
type CreateOpts struct {
Name string
Driver string
NodeName string
Platforms []string
BuildkitdFlags string
BuildkitdConfigFile string
DriverOpts []string
Use bool
Endpoint string
Append bool
}
func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts CreateOpts) (*Builder, error) {
var err error
if opts.Name == "default" {
return nil, errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
} else if opts.Append && opts.Name == "" {
return nil, errors.Errorf("append requires a builder name")
}
name := opts.Name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return nil, err
}
}
if !opts.Append {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, c := range contexts {
if c.Name == name {
return nil, errors.Errorf("instance name %q already exists as context builder", name)
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if opts.Append && opts.Name != "" {
return nil, errors.Errorf("failed to find instance %q for append", opts.Name)
}
} else {
return nil, err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := opts.Driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if opts.Endpoint == "" && buildkitHost != "" {
driverName = "remote"
} else {
f, err := driver.GetDefaultFactory(ctx, opts.Endpoint, dockerCli.Client(), true, nil)
if err != nil {
return nil, err
}
if f == nil {
return nil, errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if opts.NodeName == "" && !opts.Append {
return nil, errors.Errorf("existing instance for %q but no append mode, specify the node name to make changes for existing instances", name)
}
if driverName != ng.Driver {
return nil, errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return nil, err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
driverOpts, err := csvToMap(opts.DriverOpts)
if err != nil {
return nil, err
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
if err != nil {
return nil, err
}
var ep string
var setEp bool
switch {
case driverName == "kubernetes":
if opts.Endpoint != "" {
return nil, errors.Errorf("kubernetes driver does not support endpoint args %q", opts.Endpoint)
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := opts.NodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return nil, err
}
}
// naming the endpoint to make append work
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if opts.Endpoint != "" {
ep = opts.Endpoint
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return nil, errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return nil, err
}
setEp = true
case opts.Endpoint != "":
ep, err = validateEndpoint(dockerCli, opts.Endpoint)
if err != nil {
return nil, err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return nil, errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with context set to <context-name>")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
setEp = false
}
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
buildkitdConfigFile = f
}
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
if err := txn.Save(ng); err != nil {
return nil, err
}
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
return nil, err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
return nil, errors.Errorf("could not rollback to previous state: %s", err2)
}
return nil, err
}
}
if opts.Use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return nil, err
}
}
return b, nil
}
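
For the kubernetes case in Create above, the endpoint is a synthetic URL that encodes the builder name, deployment (node) name and kubeconfig path so that later appends resolve to the same deployment. A minimal standard-library sketch with example values:

package main

import (
    "fmt"
    "net/url"
)

// kubernetesEndpoint builds the synthetic endpoint used to identify a
// kubernetes builder node; all arguments here are example values.
func kubernetesEndpoint(builderName, nodeName, kubeconfig string) string {
    return (&url.URL{
        Scheme: "kubernetes",
        Path:   "/" + builderName,
        RawQuery: (&url.Values{
            "deployment": {nodeName},
            "kubeconfig": {kubeconfig},
        }).Encode(),
    }).String()
}

func main() {
    fmt.Println(kubernetesEndpoint("mybuilder", "mybuilder0", "/home/user/.kube/config"))
    // kubernetes:///mybuilder?deployment=mybuilder0&kubeconfig=%2Fhome%2Fuser%2F.kube%2Fconfig
}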
type LeaveOpts struct {
Name string
NodeName string
}
func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts LeaveOpts) error {
if opts.Name == "" {
return errors.Errorf("leave requires instance name")
}
if opts.NodeName == "" {
return errors.Errorf("leave requires node name")
}
ng, err := txn.NodeGroupByName(opts.Name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
return errors.Errorf("failed to find instance %q for leave", opts.Name)
}
return err
}
if err := ng.Leave(opts.NodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, opts.NodeName); err != nil {
return err
}
return txn.Save(ng)
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
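
A self-contained sketch of the csvToMap parsing above: each driver-opt string is read as one CSV record so quoted values can contain commas, and every field is then split on the first "=". The helper name is illustrative:

package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func parseDriverOpts(in []string) (map[string]string, error) {
    m := map[string]string{}
    for _, s := range in {
        fields, err := csv.NewReader(strings.NewReader(s)).Read()
        if err != nil {
            return nil, err
        }
        for _, f := range fields {
            k, v, ok := strings.Cut(f, "=")
            if !ok {
                return nil, fmt.Errorf("invalid value %q, expecting k=v", f)
            }
            m[k] = v
        }
    }
    return m, nil
}

func main() {
    m, err := parseDriverOpts([]string{`"tolerations=key=foo,value=bar",replicas=1`, "namespace=default"})
    fmt.Println(m, err) // map[namespace:default replicas:1 tolerations:key=foo,value=bar] <nil>
}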
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
// parseBuildkitdFlags parses buildkit flags
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
return nil, errors.Wrap(err, "failed to parse buildkit flags")
}
}
var allowInsecureEntitlements []string
flags := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
flags.Usage = func() {}
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
var hasNetworkHostEntitlement bool
for _, e := range allowInsecureEntitlements {
if e == "network.host" {
hasNetworkHostEntitlement = true
break
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
// set network.host entitlement if user does not provide any as
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
}
return res, nil
}
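
A sketch of the entitlement handling in parseBuildkitdFlags above, assuming github.com/google/shlex and github.com/spf13/pflag as used by the vendored code: split the raw flag string, collect --allow-insecure-entitlement values with a partial flag parse, and default network.host for container-based drivers. This is a condensed restatement, not an exported buildx API:

package main

import (
    "fmt"

    "github.com/google/shlex"
    "github.com/spf13/pflag"
)

func buildkitdFlags(raw, driver string, driverOpts map[string]string) ([]string, error) {
    res, err := shlex.Split(raw)
    if err != nil {
        return nil, err
    }
    var entitlements []string
    fs := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
    fs.Usage = func() {}
    fs.StringArrayVar(&entitlements, "allow-insecure-entitlement", nil, "")
    _ = fs.Parse(res) // best effort: other buildkitd flags are not declared here
    hasNetworkHost := false
    for _, e := range entitlements {
        if e == "network.host" {
            hasNetworkHost = true
        }
    }
    switch {
    case driverOpts["network"] == "host" && !hasNetworkHost && driver == "docker-container":
        // network=host requires the network.host entitlement on the daemon
        res = append(res, "--allow-insecure-entitlement=network.host")
    case len(entitlements) == 0 && (driver == "docker-container" || driver == "kubernetes"):
        // container-based drivers isolate networking, so default the entitlement
        res = append(res, "--allow-insecure-entitlement=network.host")
    }
    return res, nil
}

func main() {
    flags, _ := buildkitdFlags("", "docker-container", nil)
    fmt.Println(flags) // [--allow-insecure-entitlement=network.host]
}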


@@ -1,139 +0,0 @@
package builder
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}
func TestParseBuildkitdFlags(t *testing.T) {
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
expected []string
wantErr bool
}{
{
"docker-container no flags",
"",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"kubernetes no flags",
"",
"kubernetes",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"remote no flags",
"",
"remote",
nil,
nil,
false,
},
{
"docker-container with insecure flag",
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with insecure and host flag",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with network host opt",
"",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with host flag and network host opt",
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with insecure, host flag and network host opt",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
nil,
true,
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.expected, flags)
})
}
}


@@ -2,11 +2,7 @@ package builder
import (
"context"
"encoding/json"
"sort"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
"github.com/docker/buildx/store"
@@ -28,16 +24,13 @@ type Node struct {
Builder string
Driver *driver.DriverHandle
DriverInfo *driver.Info
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
ImageOpt imagetools.Opt
ProxyConfig map[string]string
Version string
Err error
// worker settings
IDs []string
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
}
// Nodes returns nodes for this builder.
@@ -45,35 +38,9 @@ func (b *Builder) Nodes() []Node {
return b.nodes
}
type LoadNodesOption func(*loadNodesOptions)
type loadNodesOptions struct {
data bool
dialMeta map[string][]string
}
func WithData() LoadNodesOption {
return func(o *loadNodesOptions) {
o.data = true
}
}
func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
return func(o *loadNodesOptions) {
o.dialMeta = dialMeta
}
}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
lno := loadNodesOptions{
data: false,
}
for _, opt := range opts {
opt(&lno)
}
func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
eg, _ := errgroup.WithContext(ctx)
b.nodes = make([]Node, len(b.NodeGroup.Nodes))
@@ -83,7 +50,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
}()
factory, err := b.Factory(ctx, lno.dialMeta)
factory, err := b.Factory(ctx)
if err != nil {
return nil, err
}
@@ -142,7 +109,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
if err != nil {
node.Err = err
return nil
@@ -150,7 +117,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
node.Driver = d
node.ImageOpt = imageopt
if lno.data {
if withData {
if err := node.loadData(ctx); err != nil {
node.Err = err
}
@@ -165,7 +132,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
// TODO: This should be done in the routine loading driver data
if lno.data {
if withData {
kubernetesDriverCount := 0
for _, d := range b.nodes {
if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
@@ -202,51 +169,6 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
return b.nodes, nil
}
func (n *Node) MarshalJSON() ([]byte, error) {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
var nerr string
if n.Err != nil {
status = "error"
nerr = strings.TrimSpace(n.Err.Error())
}
var pp []string
for _, p := range n.Platforms {
pp = append(pp, platforms.Format(p))
}
return json.Marshal(struct {
Name string
Endpoint string
BuildkitdFlags []string `json:"Flags,omitempty"`
DriverOpts map[string]string `json:",omitempty"`
Files map[string][]byte `json:",omitempty"`
Status string `json:",omitempty"`
ProxyConfig map[string]string `json:",omitempty"`
Version string `json:",omitempty"`
Err string `json:",omitempty"`
IDs []string `json:",omitempty"`
Platforms []string `json:",omitempty"`
GCPolicy []client.PruneInfo `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
}{
Name: n.Name,
Endpoint: n.Endpoint,
BuildkitdFlags: n.BuildkitdFlags,
DriverOpts: n.DriverOpts,
Files: n.Files,
Status: status,
ProxyConfig: n.ProxyConfig,
Version: n.Version,
Err: nerr,
IDs: n.IDs,
Platforms: pp,
GCPolicy: n.GCPolicy,
Labels: n.Labels,
})
}
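
The removed MarshalJSON above follows a common pattern: marshal through an anonymous struct so derived fields (status, flattened error text) and omitempty tags control the JSON output without changing the Go type. A minimal sketch with simplified fields:

package main

import (
    "encoding/json"
    "fmt"
)

type node struct {
    Name string
    err  error
}

// MarshalJSON derives Status and Err at encode time instead of storing them.
func (n node) MarshalJSON() ([]byte, error) {
    var errStr string
    status := "running"
    if n.err != nil {
        status = "error"
        errStr = n.err.Error()
    }
    return json.Marshal(struct {
        Name   string
        Status string `json:",omitempty"`
        Err    string `json:",omitempty"`
    }{Name: n.Name, Status: status, Err: errStr})
}

func main() {
    out, _ := json.Marshal(node{Name: "builder0"})
    fmt.Println(string(out)) // {"Name":"builder0","Status":"running"}
}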
func (n *Node) loadData(ctx context.Context) error {
if n.Driver == nil {
return nil
@@ -266,14 +188,12 @@ func (n *Node) loadData(ctx context.Context) error {
return errors.Wrap(err, "listing workers")
}
for idx, w := range workers {
n.IDs = append(n.IDs, w.ID)
n.Platforms = append(n.Platforms, w.Platforms...)
if idx == 0 {
n.GCPolicy = w.GCPolicy
n.Labels = w.Labels
}
}
sort.Strings(n.IDs)
n.Platforms = platformutil.Dedupe(n.Platforms)
inf, err := driverClient.Info(ctx)
if err != nil {


@@ -4,16 +4,13 @@ import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"strings"
"github.com/containerd/console"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
@@ -22,8 +19,7 @@ import (
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -41,7 +37,9 @@ type bakeOptions struct {
exportLoad bool
}
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
ctx := appcontext.Context()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
if err != nil {
return err
@@ -97,6 +95,8 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
defer cancel()
var nodes []builder.Node
var files []bake.File
var inp *bake.Input
var progressConsoleDesc, progressTextDesc string
// instance only needed for reading remote bake files or building
@@ -111,7 +111,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err = b.LoadNodes(ctx)
nodes, err = b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -124,8 +124,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
term = true
}
progressMode := progressui.DisplayMode(cFlags.progress)
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, cFlags.progress,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
)
if err != nil {
@@ -138,21 +137,21 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if err == nil {
err = err1
}
if err == nil && progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
if err == nil && cFlags.progress != progress.PrinterModeQuiet {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
}
}()
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
if url != "" {
files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
} else {
files, err = bake.ReadLocalFiles(in.files, dockerCli.In())
}
if err != nil {
return err
}
if len(files) == 0 {
return errors.New("couldn't find a bake definition")
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
@@ -182,16 +181,14 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return err
}
def := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]*bake.Target `json:"target"`
}{
Group: grps,
Target: tgts,
}
if in.printOnly {
dt, err := json.MarshalIndent(def, "", " ")
dt, err := json.MarshalIndent(struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]*bake.Target `json:"target"`
}{
grps,
tgts,
}, "", " ")
if err != nil {
return err
}
@@ -204,28 +201,6 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return nil
}
// local state group
groupRef := identity.NewID()
var refs []string
for k, b := range bo {
b.Ref = identity.NewID()
b.GroupRef = groupRef
refs = append(refs, b.Ref)
bo[k] = b
}
dt, err := json.Marshal(def)
if err != nil {
return err
}
if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
Definition: dt,
Targets: targets,
Inputs: overrides,
Refs: refs,
}); err != nil {
return err
}
resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
if err != nil {
return wrapBuildError(err, true)
@@ -263,7 +238,7 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in the runBake function.
return runBake(cmd.Context(), dockerCli, args, options, cFlags)
return runBake(dockerCli, args, options, cFlags)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
@@ -282,54 +257,3 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
l, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
return l.SaveGroup(ref, lsg)
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
tname := strings.TrimPrefix(v, "cwd://")
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
rnames = append(rnames, v)
anames = append(anames, v)
}
}
if url != "" {
var rfiles []bake.File
rfiles, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, rnames, pw)
if err != nil {
return nil, nil, err
}
files = append(files, rfiles...)
}
if len(lnames) > 0 || url == "" {
var lfiles []bake.File
progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
lfiles, err = bake.ReadLocalFiles(anames, stdin, sub)
}
return nil
})
if err != nil {
return nil, nil, err
}
files = append(files, lfiles...)
}
return
}
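
A standalone sketch of the cwd:// routing in readBakeFiles above: names prefixed with cwd:// are always read locally, everything else is treated as remote when a remote bake URL is in use. The helper name is illustrative:

package main

import (
    "fmt"
    "strings"
)

func splitBakeFileNames(names []string) (local, remote []string) {
    for _, v := range names {
        if name, ok := strings.CutPrefix(v, "cwd://"); ok {
            local = append(local, name)
        } else {
            remote = append(remote, v)
        }
    }
    return local, remote
}

func main() {
    l, r := splitBakeFileNames([]string{"cwd://docker-bake.hcl", "docker-bake.override.hcl"})
    fmt.Println(l, r) // [docker-bake.hcl] [docker-bake.override.hcl]
}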


@@ -3,10 +3,8 @@ package commands
import (
"bytes"
"context"
"crypto/sha256"
"encoding/base64"
"encoding/csv"
"encoding/hex"
"encoding/json"
"fmt"
"io"
@@ -15,13 +13,10 @@ import (
"path/filepath"
"strconv"
"strings"
"sync"
"time"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/commands/debug"
"github.com/docker/buildx/controller"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
@@ -31,12 +26,8 @@ import (
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/metricutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli-docs-tool/annotation"
@@ -51,21 +42,18 @@ import (
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/morikuni/aec"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"google.golang.org/grpc/codes"
)
type buildOptions struct {
allow []string
annotations []string
buildArgs []string
cacheFrom []string
cacheTo []string
@@ -88,6 +76,9 @@ type buildOptions struct {
target string
ulimits *dockeropts.UlimitOpt
invoke *invokeConfig
noBuild bool
attests []string
sbom string
provenance string
@@ -103,32 +94,18 @@ type buildOptions struct {
exportLoad bool
control.ControlOptions
invokeConfig *invokeConfig
}
func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error) {
var err error
buildArgs, err := listToMap(o.buildArgs, true)
if err != nil {
return nil, err
}
labels, err := listToMap(o.labels, false)
if err != nil {
return nil, err
}
opts := controllerapi.BuildOptions{
Allow: o.allow,
Annotations: o.annotations,
BuildArgs: buildArgs,
BuildArgs: listToMap(o.buildArgs, true),
CgroupParent: o.cgroupParent,
ContextPath: o.contextPath,
DockerfileName: o.dockerfileName,
ExtraHosts: o.extraHosts,
Labels: labels,
Labels: listToMap(o.labels, false),
NetworkMode: o.networkMode,
NoCacheFilter: o.noCacheFilter,
Platforms: o.platforms,
@@ -208,70 +185,24 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
return &opts, nil
}
func (o *buildOptions) toDisplayMode() (progressui.DisplayMode, error) {
progress := progressui.DisplayMode(o.progress)
func (o *buildOptions) toProgress() (string, error) {
switch o.progress {
case progress.PrinterModeAuto, progress.PrinterModeTty, progress.PrinterModePlain, progress.PrinterModeQuiet:
default:
return "", errors.Errorf("progress=%s is not a valid progress option", o.progress)
}
if o.quiet {
if progress != progressui.AutoMode && progress != progressui.QuietMode {
if o.progress != progress.PrinterModeAuto && o.progress != progress.PrinterModeQuiet {
return "", errors.Errorf("progress=%s and quiet cannot be used together", o.progress)
}
return progressui.QuietMode, nil
return progress.PrinterModeQuiet, nil
}
return progress, nil
return o.progress, nil
}
func buildMetricAttributes(dockerCli command.Cli, b *builder.Builder, options *buildOptions) attribute.Set {
return attribute.NewSet(
attribute.String("command.name", "build"),
attribute.Stringer("command.options.hash", &buildOptionsHash{
buildOptions: options,
configDir: confutil.ConfigDir(dockerCli),
}),
attribute.String("driver.name", options.builder),
attribute.String("driver.type", b.Driver),
)
}
// buildOptionsHash computes a hash for the buildOptions when the String method is invoked.
// This is done so we can delay the computation of the hash until needed by OTEL using
// the fmt.Stringer interface.
type buildOptionsHash struct {
*buildOptions
configDir string
result string
resultOnce sync.Once
}
func (o *buildOptionsHash) String() string {
o.resultOnce.Do(func() {
target := o.target
contextPath := o.contextPath
dockerfile := o.dockerfileName
if dockerfile == "" {
dockerfile = "Dockerfile"
}
if contextPath != "-" && osutil.IsLocalDir(contextPath) {
contextPath = osutil.ToAbs(contextPath)
}
salt := confutil.TryNodeIdentifier(o.configDir)
h := sha256.New()
for _, s := range []string{target, contextPath, dockerfile, salt} {
_, _ = io.WriteString(h, s)
h.Write([]byte{0})
}
o.result = hex.EncodeToString(h.Sum(nil))
})
return o.result
}
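
A self-contained sketch of the buildOptionsHash computation above: hash a fixed set of fields with a NUL separator between them so adjacent fields cannot collide, and report the hex digest. The salt value here is a placeholder:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
)

func optionsHash(target, contextPath, dockerfile, salt string) string {
    h := sha256.New()
    for _, s := range []string{target, contextPath, dockerfile, salt} {
        _, _ = io.WriteString(h, s)
        h.Write([]byte{0}) // field separator
    }
    return hex.EncodeToString(h.Sum(nil))
}

func main() {
    fmt.Println(optionsHash("", "/src/app", "Dockerfile", "node-id-placeholder"))
}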
func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions) (err error) {
mp, err := metricutil.NewMeterProvider(ctx, dockerCli)
if err != nil {
return err
}
defer mp.Report(context.Background())
func runBuild(dockerCli command.Cli, options buildOptions) (err error) {
ctx := appcontext.Context()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
if err != nil {
return err
@@ -303,7 +234,7 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
if err != nil {
return err
}
_, err = b.LoadNodes(ctx)
_, err = b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -312,21 +243,19 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
term = true
}
attributes := buildMetricAttributes(dockerCli, b, &options)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
progressMode, err := options.toDisplayMode()
progressMode, err := options.toProgress()
if err != nil {
return err
}
var printer *progress.Printer
printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
printer, err = progress.NewPrinter(ctx2, os.Stderr, os.Stderr, progressMode,
progress.WithDesc(
fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver),
fmt.Sprintf("%s:%s", b.Driver, b.Name),
),
progress.WithMetrics(mp, attributes),
progress.WithOnClose(func() {
printWarnings(os.Stderr, printer.Warnings(), progressMode)
}),
@@ -335,7 +264,6 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
return err
}
done := timeBuildCommand(mp, attributes)
var resp *client.SolveResponse
var retErr error
if isExperimental() {
@@ -347,19 +275,14 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
if err := printer.Wait(); retErr == nil {
retErr = err
}
done(retErr)
if retErr != nil {
return retErr
}
switch progressMode {
case progressui.RawJSONMode:
// no additional display
case progressui.QuietMode:
fmt.Println(getImageID(resp.ExporterResponse))
default:
if progressMode != progress.PrinterModeQuiet {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
} else {
fmt.Println(getImageID(resp.ExporterResponse))
}
if options.imageIDFile != "" {
if err := os.WriteFile(options.imageIDFile, []byte(getImageID(resp.ExporterResponse)), 0644); err != nil {
@@ -397,10 +320,11 @@ func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllera
}
func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
if options.invokeConfig != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
if options.invoke != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
// stdin must be usable for monitor
return nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
}
c, err := controller.NewController(ctx, options.ControlOptions, dockerCli, printer)
if err != nil {
return nil, err
@@ -423,54 +347,56 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
var resp *client.SolveResponse
f := ioset.NewSingleForwarder()
f.SetReader(dockerCli.In())
pr, pw := io.Pipe()
f.SetWriter(pw, func() io.WriteCloser {
pw.Close() // propagate EOF
logrus.Debug("propagating stdin close")
return nil
})
if !options.noBuild {
pr, pw := io.Pipe()
f.SetWriter(pw, func() io.WriteCloser {
pw.Close() // propagate EOF
logrus.Debug("propagating stdin close")
return nil
})
ref, resp, err = c.Build(ctx, *opts, pr, printer)
if err != nil {
var be *controllererrors.BuildError
if errors.As(err, &be) {
ref = be.Ref
retErr = err
// We can proceed to monitor
} else {
return nil, errors.Wrapf(err, "failed to build")
ref, resp, err = c.Build(ctx, *opts, pr, printer)
if err != nil {
var be *controllererrors.BuildError
if errors.As(err, &be) {
ref = be.Ref
retErr = err
// We can proceed to monitor
} else {
return nil, errors.Wrapf(err, "failed to build")
}
}
if err := pw.Close(); err != nil {
logrus.Debug("failed to close stdin pipe writer")
}
if err := pr.Close(); err != nil {
logrus.Debug("failed to close stdin pipe reader")
}
}
if err := pw.Close(); err != nil {
logrus.Debug("failed to close stdin pipe writer")
}
if err := pr.Close(); err != nil {
logrus.Debug("failed to close stdin pipe reader")
}
if options.invokeConfig != nil && options.invokeConfig.needsDebug(retErr) {
// Print errors before launching monitor
if err := printError(retErr, printer); err != nil {
logrus.Warnf("failed to print error information: %v", err)
}
// post-build operations
if options.invoke != nil && options.invoke.needsMonitor(retErr) {
pr2, pw2 := io.Pipe()
f.SetWriter(pw2, func() io.WriteCloser {
pw2.Close() // propagate EOF
return nil
})
monitorBuildResult, err := options.invokeConfig.runDebug(ctx, ref, opts, c, pr2, os.Stdout, os.Stderr, printer)
con := console.Current()
if err := con.SetRaw(); err != nil {
if err := c.Disconnect(ctx, ref); err != nil {
logrus.Warnf("disconnect error: %v", err)
}
return nil, errors.Errorf("failed to configure terminal: %v", err)
}
err = monitor.RunMonitor(ctx, ref, opts, options.invoke.InvokeConfig, c, pr2, os.Stdout, os.Stderr, printer)
con.Reset()
if err := pw2.Close(); err != nil {
logrus.Debug("failed to close monitor stdin pipe reader")
}
if err != nil {
logrus.Warnf("failed to run monitor: %v", err)
}
if monitorBuildResult != nil {
// Update return values with the last build result from monitor
resp, retErr = monitorBuildResult.Resp, monitorBuildResult.Err
}
} else {
if err := c.Disconnect(ctx, ref); err != nil {
logrus.Warnf("disconnect error: %v", err)
@@ -480,37 +406,10 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
return resp, retErr
}
func printError(err error, printer *progress.Printer) error {
if err == nil {
return nil
}
if err := printer.Pause(); err != nil {
return err
}
defer printer.Unpause()
for _, s := range errdefs.Sources(err) {
s.Print(os.Stderr)
}
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return nil
}
func newDebuggableBuild(dockerCli command.Cli, rootOpts *rootOptions) debug.DebuggableCmd {
return &debuggableBuild{dockerCli: dockerCli, rootOpts: rootOpts}
}
type debuggableBuild struct {
dockerCli command.Cli
rootOpts *rootOptions
}
func (b *debuggableBuild) NewDebugger(cfg *debug.DebugConfig) *cobra.Command {
return buildCmd(b.dockerCli, b.rootOpts, cfg)
}
func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.DebugConfig) *cobra.Command {
func buildCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options := buildOptions{}
cFlags := &commonFlags{}
options := &buildOptions{}
var invokeFlag string
cmd := &cobra.Command{
Use: "build [OPTIONS] PATH | URL | -",
@@ -532,15 +431,15 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
options.progress = cFlags.progress
cmd.Flags().VisitAll(checkWarnedFlags)
if debugConfig != nil && (debugConfig.InvokeFlag != "" || debugConfig.OnFlag != "") {
iConfig := new(invokeConfig)
if err := iConfig.parseInvokeConfig(debugConfig.InvokeFlag, debugConfig.OnFlag); err != nil {
if invokeFlag != "" {
invoke, err := parseInvokeConfig(invokeFlag)
if err != nil {
return err
}
options.invokeConfig = iConfig
options.invoke = &invoke
options.noBuild = invokeFlag == "debug-shell"
}
return runBuild(cmd.Context(), dockerCli, *options)
return runBuild(dockerCli, options)
},
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return nil, cobra.ShellCompDirectiveFilterDirs
@@ -555,25 +454,23 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags := cmd.Flags()
flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#add-host"})
flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#add-host"})
flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
flags.StringArrayVar(&options.buildArgs, "build-arg", []string{}, "Set build-time variables")
flags.StringArrayVar(&options.cacheFrom, "cache-from", []string{}, `External cache sources (e.g., "user/app:cache", "type=local,src=path/to/dir")`)
flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, `Cache export destinations (e.g., "user/app:cache", "type=local,dest=path/to/dir")`)
flags.StringVar(&options.cgroupParent, "cgroup-parent", "", `Set the parent cgroup for the "RUN" instructions during build`)
flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#cgroup-parent"})
flags.StringVar(&options.cgroupParent, "cgroup-parent", "", "Optional parent cgroup for the container")
flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#cgroup-parent"})
flags.StringArrayVar(&options.contexts, "build-context", []string{}, "Additional build contexts (e.g., name=path)")
flags.StringVarP(&options.dockerfileName, "file", "f", "", `Name of the Dockerfile (default: "PATH/Dockerfile")`)
flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#file"})
flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#file"})
flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
@@ -591,7 +488,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
if isExperimental() {
flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets)")
cobrautil.MarkFlagsExperimental(flags, "print")
flags.SetAnnotation("print", "experimentalCLI", nil)
}
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`)
@@ -600,15 +497,15 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringArrayVar(&options.secrets, "secret", []string{}, `Secret to expose to the build (format: "id=mysecret[,src=/local/secret]")`)
flags.Var(&options.shmSize, "shm-size", `Shared memory size for build containers`)
flags.Var(&options.shmSize, "shm-size", `Size of "/dev/shm"`)
flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`)
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`)
flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#tag"})
flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#tag"})
flags.StringVar(&options.target, "target", "", "Set the target build stage to build")
flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#target"})
flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#target"})
options.ulimits = dockeropts.NewUlimitOpt(nil)
flags.Var(options.ulimits, "ulimit", "Ulimit options")
@@ -618,11 +515,14 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--attest=type=provenance"`)
if isExperimental() {
// TODO: move this to debug command if needed
flags.StringVar(&invokeFlag, "invoke", "", "Invoke a command after the build")
flags.SetAnnotation("invoke", "experimentalCLI", nil)
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
flags.SetAnnotation("root", "experimentalCLI", nil)
flags.BoolVar(&options.Detach, "detach", false, "Detach buildx server (supported only on linux)")
flags.SetAnnotation("detach", "experimentalCLI", nil)
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
cobrautil.MarkFlagsExperimental(flags, "root", "detach", "server-config")
flags.SetAnnotation("server-config", "experimentalCLI", nil)
}
// hidden flags
@@ -645,7 +545,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.BoolVar(&ignoreBool, "squash", false, "Squash newly built layers into a single new layer")
flags.MarkHidden("squash")
flags.SetAnnotation("squash", "flag-warn", []string{"experimental flag squash is removed with BuildKit. You should squash inside build using a multi-stage Dockerfile for efficiency."})
cobrautil.MarkFlagsExperimental(flags, "squash")
flags.SetAnnotation("squash", "experimentalCLI", nil)
flags.StringVarP(&ignore, "memory", "m", "", "Memory limit")
flags.MarkHidden("memory")
@@ -779,24 +679,109 @@ func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
return txn.UpdateLastActivity(ng)
}
func listToMap(values []string, defaultEnv bool) (map[string]string, error) {
result := make(map[string]string, len(values))
for _, value := range values {
k, v, hasValue := strings.Cut(value, "=")
if k == "" {
return nil, errors.Errorf("invalid key-value pair %q: empty key", value)
type invokeConfig struct {
controllerapi.InvokeConfig
invokeFlag string
}
func (cfg *invokeConfig) needsMonitor(retErr error) bool {
switch cfg.invokeFlag {
case "debug-shell":
return true
case "on-error":
return retErr != nil
default:
return cfg.invokeFlag != ""
}
}
func parseInvokeConfig(invoke string) (cfg invokeConfig, err error) {
cfg.invokeFlag = invoke
cfg.Tty = true
switch invoke {
case "default", "debug-shell":
return cfg, nil
case "on-error":
// NOTE: we overwrite the command to run because the original one should fail on the failed step.
// TODO: make this configurable via flags or restorable from LLB.
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113295900
cfg.Cmd = []string{"/bin/sh"}
return cfg, nil
}
csvReader := csv.NewReader(strings.NewReader(invoke))
csvReader.LazyQuotes = true
fields, err := csvReader.Read()
if err != nil {
return cfg, err
}
if len(fields) == 1 && !strings.Contains(fields[0], "=") {
cfg.Cmd = []string{fields[0]}
return cfg, nil
}
cfg.NoUser = true
cfg.NoCwd = true
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) != 2 {
return cfg, errors.Errorf("invalid value %s", field)
}
if hasValue {
result[k] = v
} else if defaultEnv {
if envVal, ok := os.LookupEnv(k); ok {
result[k] = envVal
key := strings.ToLower(parts[0])
value := parts[1]
switch key {
case "args":
cfg.Cmd = append(cfg.Cmd, maybeJSONArray(value)...)
case "entrypoint":
cfg.Entrypoint = append(cfg.Entrypoint, maybeJSONArray(value)...)
if cfg.Cmd == nil {
cfg.Cmd = []string{}
}
} else {
result[k] = ""
case "env":
cfg.Env = append(cfg.Env, maybeJSONArray(value)...)
case "user":
cfg.User = value
cfg.NoUser = false
case "cwd":
cfg.Cwd = value
cfg.NoCwd = false
case "tty":
cfg.Tty, err = strconv.ParseBool(value)
if err != nil {
return cfg, errors.Errorf("failed to parse tty: %v", err)
}
default:
return cfg, errors.Errorf("unknown key %q", key)
}
}
return result, nil
return cfg, nil
}
func maybeJSONArray(v string) []string {
var list []string
if err := json.Unmarshal([]byte(v), &list); err == nil {
return list
}
return []string{v}
}
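
Usage sketch for maybeJSONArray above: values for keys such as args, entrypoint and env may be either a JSON array or a plain string, so JSON decoding is tried first with a single-element fallback:

package main

import (
    "encoding/json"
    "fmt"
)

func maybeJSONArray(v string) []string {
    var list []string
    if err := json.Unmarshal([]byte(v), &list); err == nil {
        return list
    }
    return []string{v}
}

func main() {
    fmt.Println(maybeJSONArray(`["echo","hello world"]`)) // [echo hello world]
    fmt.Println(maybeJSONArray("echo"))                   // [echo]
}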
func listToMap(values []string, defaultEnv bool) map[string]string {
result := make(map[string]string, len(values))
for _, value := range values {
kv := strings.SplitN(value, "=", 2)
if len(kv) == 1 {
if defaultEnv {
v, ok := os.LookupEnv(kv[0])
if ok {
result[kv[0]] = v
}
} else {
result[kv[0]] = ""
}
} else {
result[kv[0]] = kv[1]
}
}
return result
}
func dockerUlimitToControllerUlimit(u *dockeropts.UlimitOpt) *controllerapi.UlimitOpt {
@@ -814,8 +799,8 @@ func dockerUlimitToControllerUlimit(u *dockeropts.UlimitOpt) *controllerapi.Ulim
return &controllerapi.UlimitOpt{Values: values}
}
func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui.DisplayMode) {
if len(warnings) == 0 || mode == progressui.QuietMode || mode == progressui.RawJSONMode {
func printWarnings(w io.Writer, warnings []client.VertexWarning, mode string) {
if len(warnings) == 0 || mode == progress.PrinterModeQuiet {
return
}
fmt.Fprintf(w, "\n ")
@@ -887,143 +872,3 @@ func printValue(printer printFunc, version string, format string, res map[string
}
return printer([]byte(res["result.json"]), os.Stdout)
}
type invokeConfig struct {
controllerapi.InvokeConfig
onFlag string
invokeFlag string
}
func (cfg *invokeConfig) needsDebug(retErr error) bool {
switch cfg.onFlag {
case "always":
return true
case "error":
return retErr != nil
default:
return cfg.invokeFlag != ""
}
}
func (cfg *invokeConfig) runDebug(ctx context.Context, ref string, options *controllerapi.BuildOptions, c control.BuildxController, stdin io.ReadCloser, stdout io.WriteCloser, stderr console.File, progress *progress.Printer) (*monitor.MonitorBuildResult, error) {
con := console.Current()
if err := con.SetRaw(); err != nil {
// TODO: run disconnect in build command (on error case)
if err := c.Disconnect(ctx, ref); err != nil {
logrus.Warnf("disconnect error: %v", err)
}
return nil, errors.Errorf("failed to configure terminal: %v", err)
}
defer con.Reset()
return monitor.RunMonitor(ctx, ref, options, cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
}
func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
cfg.onFlag = on
cfg.invokeFlag = invoke
cfg.Tty = true
cfg.NoCmd = true
switch invoke {
case "default", "":
return nil
case "on-error":
// NOTE: we overwrite the command to run because the original one should fail on the failed step.
// TODO: make this configurable via flags or restorable from LLB.
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113295900
cfg.Cmd = []string{"/bin/sh"}
cfg.NoCmd = false
return nil
}
csvReader := csv.NewReader(strings.NewReader(invoke))
csvReader.LazyQuotes = true
fields, err := csvReader.Read()
if err != nil {
return err
}
if len(fields) == 1 && !strings.Contains(fields[0], "=") {
cfg.Cmd = []string{fields[0]}
cfg.NoCmd = false
return nil
}
cfg.NoUser = true
cfg.NoCwd = true
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) != 2 {
return errors.Errorf("invalid value %s", field)
}
key := strings.ToLower(parts[0])
value := parts[1]
switch key {
case "args":
cfg.Cmd = append(cfg.Cmd, maybeJSONArray(value)...)
cfg.NoCmd = false
case "entrypoint":
cfg.Entrypoint = append(cfg.Entrypoint, maybeJSONArray(value)...)
if cfg.Cmd == nil {
cfg.Cmd = []string{}
cfg.NoCmd = false
}
case "env":
cfg.Env = append(cfg.Env, maybeJSONArray(value)...)
case "user":
cfg.User = value
cfg.NoUser = false
case "cwd":
cfg.Cwd = value
cfg.NoCwd = false
case "tty":
cfg.Tty, err = strconv.ParseBool(value)
if err != nil {
return errors.Errorf("failed to parse tty: %v", err)
}
default:
return errors.Errorf("unknown key %q", key)
}
}
return nil
}
func maybeJSONArray(v string) []string {
var list []string
if err := json.Unmarshal([]byte(v), &list); err == nil {
return list
}
return []string{v}
}
// timeBuildCommand will start a timer for timing the build command. It records the time when the returned
// function is invoked into a metric.
func timeBuildCommand(mp metric.MeterProvider, attrs attribute.Set) func(err error) {
meter := metricutil.Meter(mp)
counter, _ := meter.Float64Counter("command.time",
metric.WithDescription("Measures the duration of the build command."),
metric.WithUnit("ms"),
)
start := time.Now()
return func(err error) {
dur := float64(time.Since(start)) / float64(time.Millisecond)
extraAttrs := attribute.NewSet()
if err != nil {
extraAttrs = attribute.NewSet(
attribute.String("error.type", otelErrorType(err)),
)
}
counter.Add(context.Background(), dur,
metric.WithAttributeSet(attrs),
metric.WithAttributeSet(extraAttrs),
)
}
}
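
A minimal sketch of the timeBuildCommand pattern above: start a timer when the command begins and return a closure that records the elapsed time (and an error type) once the command finishes. The OTel meter is replaced here by a plain callback:

package main

import (
    "fmt"
    "time"
)

func timeCommand(record func(ms float64, errType string)) func(error) {
    start := time.Now()
    return func(err error) {
        errType := ""
        if err != nil {
            errType = "generic"
        }
        record(float64(time.Since(start))/float64(time.Millisecond), errType)
    }
}

func main() {
    done := timeCommand(func(ms float64, errType string) {
        fmt.Printf("command.time=%.2fms error.type=%q\n", ms, errType)
    })
    time.Sleep(10 * time.Millisecond)
    done(nil)
}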
// otelErrorType returns an attribute for the error type based on the error category.
// If nil, this function returns an invalid attribute.
func otelErrorType(err error) string {
name := "generic"
if errors.Is(err, context.Canceled) {
name = "canceled"
}
return name
}


@@ -3,72 +3,302 @@ package commands
import (
"bytes"
"context"
"encoding/csv"
"fmt"
"net/url"
"os"
"strings"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type createOptions struct {
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
driverOpts []string
buildkitdFlags string
buildkitdConfigFile string
bootstrap bool
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
flags string
configFile string
driverOpts []string
bootstrap bool
// upgrade bool // perform upgrade of the driver
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
ctx := appcontext.Context()
if in.name == "default" {
return errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
}
if in.actionLeave {
if in.name == "" {
return errors.Errorf("leave requires instance name")
}
if in.nodeName == "" {
return errors.Errorf("leave requires node name but --node not set")
}
}
if in.actionAppend {
if in.name == "" {
logrus.Warnf("append used without name, creating a new instance instead")
}
}
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
// Ensure the file lock gets released no matter what happens.
defer release()
if in.actionLeave {
return builder.Leave(ctx, txn, dockerCli, builder.LeaveOpts{
Name: in.name,
NodeName: in.nodeName,
})
name := in.name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return err
}
}
if !in.actionLeave && !in.actionAppend {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
for _, c := range contexts {
if c.Name == name {
logrus.Warnf("instance name %q already exists as context builder", name)
break
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if in.actionAppend && in.name != "" {
logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
}
if in.actionLeave {
return errors.Errorf("failed to find instance %q for leave", in.name)
}
} else {
return err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := in.driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if len(args) == 0 && buildkitHost != "" {
driverName = "remote"
} else {
var arg string
if len(args) > 0 {
arg = args[0]
}
f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
if err != nil {
return err
}
if f == nil {
return errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if in.nodeName == "" && !in.actionAppend {
return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
}
if driverName != ng.Driver {
return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
var flags []string
if in.flags != "" {
flags, err = shlex.Split(in.flags)
if err != nil {
return errors.Wrap(err, "failed to parse buildkit flags")
}
}
var ep string
if len(args) > 0 {
ep = args[0]
var setEp bool
if in.actionLeave {
if err := ng.Leave(in.nodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, in.nodeName); err != nil {
return err
}
} else {
switch {
case driverName == "kubernetes":
if len(args) > 0 {
logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := in.nodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return err
}
}
// name the endpoint so that --append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if len(args) > 0 {
ep = args[0]
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return err
}
setEp = true
case len(args) > 0:
ep, err = validateEndpoint(dockerCli, args[0])
if err != nil {
return err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
setEp = false
}
m, err := csvToMap(in.driverOpts)
if err != nil {
return err
}
if in.configFile == "" {
// if buildkit config is not provided, check if the default one is
// available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
logrus.Warnf("Using default BuildKit config in %s", f)
in.configFile = f
}
}
if err := ng.Update(in.nodeName, ep, in.platform, setEp, in.actionAppend, flags, in.configFile, m); err != nil {
return err
}
}
b, err := builder.Create(ctx, txn, dockerCli, builder.CreateOpts{
Name: in.name,
Driver: in.driver,
NodeName: in.nodeName,
Platforms: in.platform,
DriverOpts: in.driverOpts,
BuildkitdFlags: in.buildkitdFlags,
BuildkitdConfigFile: in.buildkitdConfigFile,
Use: in.use,
Endpoint: ep,
Append: in.actionAppend,
})
if err := txn.Save(ng); err != nil {
return err
}
b, err := builder.New(dockerCli,
builder.WithName(ng.Name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
// The store is no longer used from this point.
// Release it so we aren't holding the file lock during the boot.
release()
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
return err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
logrus.Warnf("Could not rollback to previous state: %s", err2)
}
return err
}
}
if in.use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return err
}
}
if in.bootstrap {
if _, err = b.Boot(ctx); err != nil {
@@ -76,7 +306,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
fmt.Printf("%s\n", b.Name)
fmt.Printf("%s\n", ng.Name)
return nil
}
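For reference, the kubernetes branch above encodes the node into a synthetic driver endpoint so that a later --append can match it. A standalone sketch of that URL construction, not part of the diff (builder and deployment names are illustrative):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	name, nodeName := "mybuilder", "mybuilder0"
	ep := (&url.URL{
		Scheme: "kubernetes",
		Path:   "/" + name,
		RawQuery: (&url.Values{
			"deployment": {nodeName},
			"kubeconfig": {"/home/user/.kube/config"},
		}).Encode(),
	}).String()
	// Prints something like:
	// kubernetes:///mybuilder?deployment=mybuilder0&kubeconfig=%2Fhome%2Fuser%2F.kube%2Fconfig
	fmt.Println(ep)
}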
@@ -96,7 +326,7 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
Short: "Create a new builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
@@ -106,16 +336,12 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
flags.StringVar(&options.name, "name", "", "Builder instance name")
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
flags.StringVar(&options.buildkitdFlags, "buildkitd-flags", "", "BuildKit daemon flags")
// we allow for both "--config" and "--buildkitd-config", although the latter is the recommended way to avoid ambiguity.
flags.StringVar(&options.buildkitdConfigFile, "buildkitd-config", "", "BuildKit daemon config file")
flags.StringVar(&options.buildkitdConfigFile, "config", "", "BuildKit daemon config file")
flags.MarkHidden("config")
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")
flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
flags.BoolVar(&options.use, "use", false, "Set the current builder instance")
@@ -125,3 +351,49 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
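A small usage sketch tying the two helpers above together, assumed to live in the same package; the endpoint and driver-opt values are illustrative, and it assumes tcp:// is among the schemes the remote driver util accepts:

func exampleCreateInputs() error {
	// Validate a remote buildkitd endpoint before storing it on the node group.
	ep, err := validateBuildkitEndpoint("tcp://10.0.0.5:1234")
	if err != nil {
		return err
	}
	// Parse --driver-opt values; quoting a field keeps embedded commas intact.
	opts, err := csvToMap([]string{`"tolerations=key=gpu,value=true",replicas=2`, "namespace=buildkit"})
	if err != nil {
		return err
	}
	fmt.Printf("endpoint=%s opts=%v\n", ep, opts)
	return nil
}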

commands/create_test.go Normal file

@@ -0,0 +1,26 @@
package commands
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}

commands/debug-shell.go Normal file

@@ -0,0 +1,79 @@
package commands
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
func debugShellCmd(dockerCli command.Cli) *cobra.Command {
var options control.ControlOptions
var progressMode string
cmd := &cobra.Command{
Use: "debug-shell",
Short: "Start a monitor",
Annotations: map[string]string{
"experimentalCLI": "",
},
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progressMode)
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, options, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
flags := cmd.Flags()
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
flags.SetAnnotation("root", "experimentalCLI", nil)
flags.BoolVar(&options.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server (supported only on linux)")
flags.SetAnnotation("detach", "experimentalCLI", nil)
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
flags.SetAnnotation("server-config", "experimentalCLI", nil)
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
return cmd
}
func addDebugShellCommand(cmd *cobra.Command, dockerCli command.Cli) {
cmd.AddCommand(
debugShellCmd(dockerCli),
)
}
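The terminal handling above is the usual containerd/console pattern: switch to raw mode for the interactive monitor, then restore the terminal afterwards. A standalone sketch of just that pattern, not part of the diff (the single-byte echo is illustrative):

package main

import (
	"fmt"
	"os"

	"github.com/containerd/console"
)

func main() {
	con := console.Current() // panics if no standard stream is a terminal
	if err := con.SetRaw(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to configure terminal:", err)
		os.Exit(1)
	}
	defer con.Reset() // restore cooked mode on exit, as the command above does

	// Illustrative only: read one byte in raw mode and echo it back.
	buf := make([]byte, 1)
	if _, err := os.Stdin.Read(buf); err == nil {
		fmt.Printf("got %q\r\n", buf[0])
	}
}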


@@ -1,92 +0,0 @@
package debug
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// DebugConfig is a user-specified configuration for the debugger.
type DebugConfig struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed on the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// DebuggableCmd is a command that supports the debugger and recognizes the user-specified DebugConfig.
type DebuggableCmd interface {
// NewDebugger returns a new *cobra.Command that supports the debugger and recognizes DebugConfig.
NewDebugger(*DebugConfig) *cobra.Command
}
func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
var controlOptions control.ControlOptions
var progressMode string
var options DebugConfig
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))
}
return cmd
}


@@ -1,132 +0,0 @@
package commands
import (
"io"
"net"
"os"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type stdioOptions struct {
builder string
platform string
progress string
}
func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
ctx := appcontext.Context()
contextPathHash, _ := os.Getwd()
b, err := builder.New(dockerCli,
builder.WithName(opts.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
printer, err := progress.NewPrinter(ctx, os.Stderr, progressui.DisplayMode(opts.progress), progress.WithPhase("dial-stdio"), progress.WithDesc("builder: "+b.Name, "builder:"+b.Name))
if err != nil {
return err
}
var p *v1.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {
return errors.Wrapf(err, "invalid platform %q", opts.platform)
}
p = &pp
}
defer printer.Wait()
return progress.Wrap("Proxying to builder", printer.Write, func(sub progress.SubLogger) error {
var conn net.Conn
err := sub.Wrap("Dialing builder", func() error {
conn, err = build.Dial(ctx, nodes, printer, p)
if err != nil {
return err
}
return nil
})
if err != nil {
return err
}
defer conn.Close()
go func() {
<-ctx.Done()
closeWrite(conn)
}()
var eg errgroup.Group
eg.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
closeWrite(conn)
return err
})
eg.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
closeRead(conn)
return err
})
return eg.Wait()
})
}
func closeRead(conn net.Conn) error {
if c, ok := conn.(interface{ CloseRead() error }); ok {
return c.CloseRead()
}
return conn.Close()
}
func closeWrite(conn net.Conn) error {
if c, ok := conn.(interface{ CloseWrite() error }); ok {
return c.CloseWrite()
}
return conn.Close()
}
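closeRead and closeWrite above probe for an optional half-close interface so connections like *net.TCPConn can shut down one direction while the other keeps draining; anything else falls back to a full Close. A standalone sketch of the same pattern, not part of the diff:

package main

import (
	"fmt"
	"net"
)

// halfCloseWrite mirrors closeWrite above: prefer a write-side shutdown when
// the connection supports it, otherwise close the whole connection.
func halfCloseWrite(conn net.Conn) error {
	if c, ok := conn.(interface{ CloseWrite() error }); ok {
		return c.CloseWrite()
	}
	return conn.Close()
}

func main() {
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	defer ln.Close()
	go func() {
		c, err := ln.Accept()
		if err == nil {
			c.Close()
		}
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	// *net.TCPConn implements CloseWrite, so only the write side is shut down.
	fmt.Println("half-close error:", halfCloseWrite(conn))
}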
func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
opts := stdioOptions{}
cmd := &cobra.Command{
Use: "dial-stdio",
Short: "Proxy current stdio streams to builder instance",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
opts.builder = rootOpts.builder
return runDialStdio(dockerCli, opts)
},
}
flags := cmd.Flags()
cmd.Flags()
flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
flags.StringVar(&opts.progress, "progress", "quiet", "Set type of progress output (auto, plain, tty).")
return cmd
}


@@ -1,7 +1,6 @@
package commands
import (
"context"
"fmt"
"io"
"os"
@@ -16,6 +15,7 @@ import (
"github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
@@ -26,7 +26,9 @@ type duOptions struct {
verbose bool
}
func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
ctx := appcontext.Context()
pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil {
return err
@@ -37,7 +39,7 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) er
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -112,7 +114,7 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runDiskUsage(cmd.Context(), dockerCli, options)
return runDiskUsage(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}


@@ -7,13 +7,13 @@ import (
"os"
"strings"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/docker/distribution/reference"
"github.com/moby/buildkit/util/appcontext"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -25,13 +25,12 @@ type createOptions struct {
builder string
files []string
tags []string
annotations []string
dryrun bool
actionAppend bool
progress string
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
if len(args) == 0 && len(in.files) == 0 {
return errors.Errorf("no sources specified")
}
@@ -112,6 +111,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
ctx := appcontext.Context()
b, err := builder.New(dockerCli, builder.WithName(in.builder))
if err != nil {
return err
@@ -153,7 +154,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
dt, desc, err := r.Combine(ctx, srcs, in.annotations)
dt, desc, err := r.Combine(ctx, srcs)
if err != nil {
return err
}
@@ -168,7 +169,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
if err != nil {
return err
}
@@ -271,7 +272,7 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
@@ -282,7 +283,6 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
return cmd
}


@@ -1,14 +1,13 @@
package commands
import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -19,7 +18,9 @@ type inspectOptions struct {
raw bool
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions, name string) error {
func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
ctx := appcontext.Context()
if in.format != "" && in.raw {
return errors.Errorf("format and raw cannot be used together")
}
@@ -50,7 +51,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
Args: cli.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options, args[0])
return runInspect(dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
}


@@ -17,6 +17,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -25,7 +26,9 @@ type inspectOptions struct {
builder string
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) error {
func runInspect(dockerCli command.Cli, in inspectOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -37,7 +40,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err := b.LoadNodes(timeoutCtx, true)
if in.bootstrap {
var ok bool
ok, err = b.Boot(ctx)
@@ -45,7 +48,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
return err
}
if ok {
nodes, err = b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err = b.LoadNodes(timeoutCtx, true)
}
}
@@ -84,16 +87,13 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
if len(n.BuildkitdFlags) > 0 {
fmt.Fprintf(w, "BuildKit daemon flags:\t%s\n", strings.Join(n.BuildkitdFlags, " "))
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
}
if nodes[i].Version != "" {
fmt.Fprintf(w, "BuildKit version:\t%s\n", nodes[i].Version)
}
platforms := platformutil.FormatInGroups(n.Node.Platforms, n.Platforms)
if len(platforms) > 0 {
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platforms, ", "))
fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
if debug.IsEnabled() {
fmt.Fprintf(w, "Features:\n")
features := nodes[i].Driver.Features(ctx)
@@ -147,7 +147,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runInspect(cmd.Context(), dockerCli, options)
return runInspect(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}


@@ -2,43 +2,30 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"sort"
"io"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
lsNameNodeHeader = "NAME/NODE"
lsDriverEndpointHeader = "DRIVER/ENDPOINT"
lsStatusHeader = "STATUS"
lsLastActivityHeader = "LAST ACTIVITY"
lsBuildkitHeader = "BUILDKIT"
lsPlatformsHeader = "PLATFORMS"
lsIndent = ` \_ `
lsDefaultTableFormat = "table {{.Name}}\t{{.DriverEndpoint}}\t{{.Status}}\t{{.Buildkit}}\t{{.Platforms}}"
)
type lsOptions struct {
format string
}
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
func runLs(dockerCli command.Cli, in lsOptions) error {
ctx := appcontext.Context()
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
@@ -62,7 +49,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
for _, b := range builders {
func(b *builder.Builder) {
eg.Go(func() error {
_, _ = b.LoadNodes(timeoutCtx, builder.WithData())
_, _ = b.LoadNodes(timeoutCtx, true)
return nil
})
}(b)
@@ -72,9 +59,22 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
return err
} else if hasErrors {
w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
printErr := false
for _, b := range builders {
if current.Name == b.Name {
b.Name += " *"
}
if ok := printBuilder(w, b); !ok {
printErr = true
}
}
w.Flush()
if printErr {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
for _, b := range builders {
if b.Err() != nil {
@@ -92,6 +92,31 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return nil
}
func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
ok = true
var err string
if b.Err() != nil {
ok = false
err = "error"
}
fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
if b.Err() == nil {
for _, n := range b.Nodes() {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
if n.Err != nil {
ok = false
fmt.Fprintf(w, " %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
} else {
fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
}
}
}
return
}
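The listing produced by printBuilder above is plain text/tabwriter output with each node indented under its builder row. A standalone sketch of the same layout with fixed sample data, not part of the diff (builder name, endpoint, and version are illustrative):

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
	// Builder row: name and driver only; the trailing columns stay empty.
	fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", "mybuilder *", "docker-container", "")
	// Node row: indented name, endpoint, status, buildkit version, platforms.
	fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n",
		"mybuilder0", "unix:///var/run/docker.sock", "running", "v0.11.6", "linux/amd64, linux/arm64")
	w.Flush()
}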
func lsCmd(dockerCli command.Cli) *cobra.Command {
var options lsOptions
@@ -100,175 +125,13 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
Short: "List builder instances",
Args: cli.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(cmd.Context(), dockerCli, options)
return runLs(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if format == formatter.TableFormatKey {
format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(format),
}
sort.SliceStable(builders, func(i, j int) bool {
ierr := builders[i].Err() != nil
jerr := builders[j].Err() != nil
if ierr && !jerr {
return false
} else if !ierr && jerr {
return true
}
return i < j
})
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
format: ctx.Format,
}); err != nil {
return err
}
if b.Err() != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
continue
}
for _, n := range b.Nodes() {
if n.Err != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
}
if err := format(&lsContext{
format: ctx.Format,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
node: n,
}); err != nil {
return err
}
}
}
return nil
}
lsCtx := lsContext{}
lsCtx.Header = formatter.SubHeaderContext{
"Name": lsNameNodeHeader,
"DriverEndpoint": lsDriverEndpointHeader,
"LastActivity": lsLastActivityHeader,
"Status": lsStatusHeader,
"Buildkit": lsBuildkitHeader,
"Platforms": lsPlatformsHeader,
}
return hasErrors, ctx.Write(&lsCtx, render)
}
type lsBuilder struct {
*builder.Builder
Current bool
}
type lsContext struct {
formatter.HeaderContext
Builder *lsBuilder
format formatter.Format
node builder.Node
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
}
func (c *lsContext) Name() string {
if c.node.Name == "" {
name := c.Builder.Name
if c.Builder.Current && c.format.IsTable() {
name += "*"
}
return name
}
if c.format.IsTable() {
return lsIndent + c.node.Name
}
return c.node.Name
}
func (c *lsContext) DriverEndpoint() string {
if c.node.Name == "" {
return c.Builder.Driver
}
if c.format.IsTable() {
return lsIndent + c.node.Endpoint
}
return c.node.Endpoint
}
func (c *lsContext) LastActivity() string {
if c.node.Name != "" || c.Builder.LastActivity.IsZero() {
return ""
}
return c.Builder.LastActivity.UTC().Format(time.RFC3339)
}
func (c *lsContext) Status() string {
if c.node.Name == "" {
if c.Builder.Err() != nil {
return "error"
}
return ""
}
if c.node.Err != nil {
return "error"
}
if c.node.DriverInfo != nil {
return c.node.DriverInfo.Status.String()
}
return ""
}
func (c *lsContext) Buildkit() string {
if c.node.Name == "" {
return ""
}
return c.node.Version
}
func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
}
func (c *lsContext) Error() string {
if c.node.Name != "" && c.node.Err != nil {
return c.node.Err.Error()
} else if err := c.Builder.Err(); err != nil {
return err.Error()
}
return ""
}
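The sort in lsPrint above moves builders that failed to load to the end of the table while keeping the relative order of the rest. The same idea on a simplified type, as a standalone sketch (names are illustrative):

package main

import (
	"fmt"
	"sort"
)

type item struct {
	name string
	err  error
}

func main() {
	items := []item{
		{name: "alpha", err: fmt.Errorf("unreachable")},
		{name: "beta"},
		{name: "gamma"},
	}
	// Stable sort: error-free items first, erroring ones last, relative order preserved.
	sort.SliceStable(items, func(i, j int) bool {
		ierr, jerr := items[i].err != nil, items[j].err != nil
		return !ierr && jerr
	})
	for _, it := range items {
		fmt.Println(it.name, it.err)
	}
	// Output order: beta, gamma, alpha
}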


@@ -1,7 +1,6 @@
package commands
import (
"context"
"fmt"
"os"
"strings"
@@ -16,6 +15,7 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@@ -35,7 +35,9 @@ const (
allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
)
func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) error {
func runPrune(dockerCli command.Cli, opts pruneOptions) error {
ctx := appcontext.Context()
pruneFilters := opts.filter.Value()
pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
@@ -58,7 +60,7 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -136,7 +138,7 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runPrune(cmd.Context(), dockerCli, options)
return runPrune(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}


@@ -9,14 +9,16 @@ import (
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builders []string
builder string
keepState bool
keepDaemon bool
allInactive bool
@@ -27,7 +29,9 @@ const (
rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
)
func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
func runRm(dockerCli command.Cli, in rmOptions) error {
ctx := appcontext.Context()
if in.allInactive && !in.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), rmInactiveWarning) {
return nil
}
@@ -42,52 +46,33 @@ func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
return rmAllInactive(ctx, txn, dockerCli, in)
}
eg, _ := errgroup.WithContext(ctx)
for _, name := range in.builders {
func(name string) {
eg.Go(func() (err error) {
defer func() {
if err == nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", name)
} else {
_, _ = fmt.Fprintf(dockerCli.Err(), "failed to remove %s: %v\n", name, err)
}
}()
b, err := builder.New(dockerCli,
builder.WithName(name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
return nil
})
}(name)
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
if err := eg.Wait(); err != nil {
return errors.New("failed to remove one or more builders")
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
return nil
}
@@ -95,24 +80,25 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [NAME] [NAME...]",
Short: "Remove one or more builder instances",
Use: "rm [NAME]",
Short: "Remove a builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builders = []string{rootOpts.builder}
options.builder = rootOpts.builder
if len(args) > 0 {
if options.allInactive {
return errors.New("cannot specify builder name when --all-inactive is set")
}
options.builders = args
options.builder = args[0]
}
return runRm(cmd.Context(), dockerCli, options)
return runRm(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the BuildKit daemon running")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the buildkitd daemon running")
flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
@@ -153,7 +139,7 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
for _, b := range builders {
func(b *builder.Builder) {
eg.Go(func() error {
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
return errors.Wrapf(err, "cannot load %s", b.Name)
}


@@ -3,7 +3,6 @@ package commands
import (
"os"
debugcmd "github.com/docker/buildx/commands/debug"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
@@ -12,8 +11,6 @@ import (
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/moby/buildkit/util/appcontext"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -30,15 +27,12 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
}
return plugin.PersistentPreRunE(cmd, args)
},
}
if !isPlugin {
if isPlugin {
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
return plugin.PersistentPreRunE(cmd, args)
}
} else {
// match plugin behavior for standalone mode
// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
cmd.SilenceUsage = true
@@ -46,11 +40,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
cmd.TraverseChildren = true
cmd.DisableFlagsInUseLine = true
cli.DisableFlagsInUseLine(cmd)
// DEBUG=1 should perform the same as --debug at the docker root level
if debug.IsEnabled() {
debug.Enable()
}
}
logrus.SetFormatter(&logutil.Formatter{})
@@ -63,9 +52,16 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
"using default config store",
))
if !isExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
}
// filter out useless commandConn.CloseWrite warning message that can occur
// when listing builder instances with "buildx ls" for those that are
// unreachable: "commandConn.CloseWrite: commandconn: failed to wait: signal: killed"
// https://github.com/docker/cli/blob/3fb4fb83dfb5db0c0753a8316f21aea54dab32c5/cli/connhelper/commandconn/commandconn.go#L203-L214
logrus.AddHook(logutil.NewFilter([]logrus.Level{
logrus.WarnLevel,
},
"commandConn.CloseWrite:",
"commandConn.CloseRead:",
))
addCommands(cmd, dockerCli)
return cmd
@@ -80,10 +76,9 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
rootFlags(opts, cmd.PersistentFlags())
cmd.AddCommand(
buildCmd(dockerCli, opts, nil),
buildCmd(dockerCli, opts),
bakeCmd(dockerCli, opts),
createCmd(dockerCli),
dialStdioCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
lsCmd(dockerCli),
useCmd(dockerCli, opts),
@@ -97,10 +92,8 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if isExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))
remote.AddControllerCommands(cmd, dockerCli)
addDebugShellCommand(cmd, dockerCli)
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck


@@ -7,6 +7,7 @@ import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -14,7 +15,9 @@ type stopOptions struct {
builder string
}
func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
func runStop(dockerCli command.Cli, in stopOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -22,7 +25,7 @@ func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -42,7 +45,7 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runStop(cmd.Context(), dockerCli, options)
return runStop(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}


@@ -35,7 +35,10 @@ func runUse(dockerCli command.Cli, in useOptions) error {
if err != nil {
return err
}
return txn.SetCurrent(ep, "", false, false)
if err := txn.SetCurrent(ep, "", false, false); err != nil {
return err
}
return nil
}
list, err := dockerCli.ContextStore().List()
if err != nil {
@@ -55,7 +58,11 @@ func runUse(dockerCli command.Cli, in useOptions) error {
if err != nil {
return err
}
return txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault)
if err := txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault); err != nil {
return err
}
return nil
}
func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {


@@ -53,7 +53,6 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
InStream: inStream,
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
@@ -66,7 +65,6 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
}
platforms, err := platformutil.Parse(in.Platforms)
@@ -76,7 +74,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
opts.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig))
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
@@ -132,17 +130,6 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
}
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, err
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
}
}
opts.Exports = outputs
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
@@ -182,7 +169,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return nil, nil, err
}


@@ -299,9 +299,6 @@ type BuildOptions struct {
ExportPush bool `protobuf:"varint,26,opt,name=ExportPush,proto3" json:"ExportPush,omitempty"`
ExportLoad bool `protobuf:"varint,27,opt,name=ExportLoad,proto3" json:"ExportLoad,omitempty"`
SourcePolicy *pb.Policy `protobuf:"bytes,28,opt,name=SourcePolicy,proto3" json:"SourcePolicy,omitempty"`
Ref string `protobuf:"bytes,29,opt,name=Ref,proto3" json:"Ref,omitempty"`
GroupRef string `protobuf:"bytes,30,opt,name=GroupRef,proto3" json:"GroupRef,omitempty"`
Annotations []string `protobuf:"bytes,31,rep,name=Annotations,proto3" json:"Annotations,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
@@ -527,27 +524,6 @@ func (m *BuildOptions) GetSourcePolicy() *pb.Policy {
return nil
}
func (m *BuildOptions) GetRef() string {
if m != nil {
return m.Ref
}
return ""
}
func (m *BuildOptions) GetGroupRef() string {
if m != nil {
return m.GroupRef
}
return ""
}
func (m *BuildOptions) GetAnnotations() []string {
if m != nil {
return m.Annotations
}
return nil
}
type ExportEntry struct {
Type string `protobuf:"bytes,1,opt,name=Type,proto3" json:"Type,omitempty"`
Attrs map[string]string `protobuf:"bytes,2,rep,name=Attrs,proto3" json:"Attrs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
@@ -1551,7 +1527,6 @@ func (m *InitMessage) GetInvokeConfig() *InvokeConfig {
type InvokeConfig struct {
Entrypoint []string `protobuf:"bytes,1,rep,name=Entrypoint,proto3" json:"Entrypoint,omitempty"`
Cmd []string `protobuf:"bytes,2,rep,name=Cmd,proto3" json:"Cmd,omitempty"`
NoCmd bool `protobuf:"varint,11,opt,name=NoCmd,proto3" json:"NoCmd,omitempty"`
Env []string `protobuf:"bytes,3,rep,name=Env,proto3" json:"Env,omitempty"`
User string `protobuf:"bytes,4,opt,name=User,proto3" json:"User,omitempty"`
NoUser bool `protobuf:"varint,5,opt,name=NoUser,proto3" json:"NoUser,omitempty"`
@@ -1603,13 +1578,6 @@ func (m *InvokeConfig) GetCmd() []string {
return nil
}
func (m *InvokeConfig) GetNoCmd() bool {
if m != nil {
return m.NoCmd
}
return false
}
func (m *InvokeConfig) GetEnv() []string {
if m != nil {
return m.Env
@@ -2078,128 +2046,125 @@ func init() {
func init() { proto.RegisterFile("controller.proto", fileDescriptor_ed7f10298fa1d90f) }
var fileDescriptor_ed7f10298fa1d90f = []byte{
// 1922 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x58, 0x5f, 0x73, 0x1b, 0x49,
0x11, 0x67, 0x25, 0x59, 0x7f, 0x5a, 0x96, 0xcf, 0x19, 0x9c, 0x30, 0xd9, 0xe4, 0x12, 0x67, 0x93,
0x1c, 0x2a, 0x42, 0xc9, 0x77, 0x3e, 0x82, 0x2f, 0x97, 0xbb, 0x2a, 0x6c, 0xd9, 0xc2, 0xbe, 0x4a,
0x6c, 0xd7, 0xca, 0xc9, 0x15, 0x50, 0xc5, 0xd5, 0x5a, 0x1a, 0xcb, 0x5b, 0x5a, 0xed, 0x88, 0x9d,
0x91, 0x6d, 0xf1, 0xc4, 0x03, 0xbc, 0x51, 0x14, 0x5f, 0x83, 0xe2, 0x23, 0xf0, 0xc4, 0x37, 0xe2,
0x23, 0x50, 0xd3, 0x33, 0xbb, 0x5a, 0x59, 0x5a, 0xd9, 0x86, 0x27, 0x4d, 0xf7, 0xfe, 0xba, 0x7b,
0xba, 0xa7, 0xa7, 0xbb, 0x47, 0xb0, 0xda, 0xe1, 0xa1, 0x8c, 0x78, 0x10, 0xb0, 0xa8, 0x31, 0x8c,
0xb8, 0xe4, 0x64, 0xed, 0x74, 0xe4, 0x07, 0xdd, 0xab, 0x46, 0xea, 0xc3, 0xc5, 0x17, 0xf6, 0xdb,
0x9e, 0x2f, 0xcf, 0x47, 0xa7, 0x8d, 0x0e, 0x1f, 0x6c, 0x0c, 0xf8, 0xe9, 0x78, 0x03, 0x51, 0x7d,
0x5f, 0x6e, 0x78, 0x43, 0x7f, 0x43, 0xb0, 0xe8, 0xc2, 0xef, 0x30, 0xb1, 0x61, 0x84, 0xe2, 0x5f,
0xad, 0xd2, 0x7e, 0x9d, 0x29, 0x2c, 0xf8, 0x28, 0xea, 0xb0, 0x21, 0x0f, 0xfc, 0xce, 0x78, 0x63,
0x78, 0xba, 0xa1, 0x57, 0x5a, 0xcc, 0xa9, 0xc3, 0xda, 0x3b, 0x5f, 0xc8, 0xe3, 0x88, 0x77, 0x98,
0x10, 0x4c, 0xb8, 0xec, 0x0f, 0x23, 0x26, 0x24, 0x59, 0x85, 0xbc, 0xcb, 0xce, 0xa8, 0xb5, 0x6e,
0xd5, 0x2b, 0xae, 0x5a, 0x3a, 0xc7, 0x70, 0xff, 0x1a, 0x52, 0x0c, 0x79, 0x28, 0x18, 0xd9, 0x82,
0xa5, 0x83, 0xf0, 0x8c, 0x0b, 0x6a, 0xad, 0xe7, 0xeb, 0xd5, 0xcd, 0x67, 0x8d, 0x79, 0xce, 0x35,
0x8c, 0x9c, 0x42, 0xba, 0x1a, 0xef, 0x08, 0xa8, 0xa6, 0xb8, 0xe4, 0x31, 0x54, 0x62, 0x72, 0xd7,
0x18, 0x9e, 0x30, 0x48, 0x0b, 0x96, 0x0f, 0xc2, 0x0b, 0xde, 0x67, 0x4d, 0x1e, 0x9e, 0xf9, 0x3d,
0x9a, 0x5b, 0xb7, 0xea, 0xd5, 0x4d, 0x67, 0xbe, 0xb1, 0x34, 0xd2, 0x9d, 0x92, 0x73, 0xbe, 0x03,
0xba, 0xeb, 0x8b, 0x0e, 0x0f, 0x43, 0xd6, 0x89, 0x9d, 0xc9, 0x74, 0x7a, 0x7a, 0x4f, 0xb9, 0x6b,
0x7b, 0x72, 0x1e, 0xc1, 0xc3, 0x39, 0xba, 0x74, 0x58, 0x9c, 0xdf, 0xc3, 0xf2, 0x8e, 0xda, 0x5b,
0xb6, 0xf2, 0x6f, 0xa0, 0x74, 0x34, 0x94, 0x3e, 0x0f, 0xc5, 0x62, 0x6f, 0x50, 0x8d, 0x41, 0xba,
0xb1, 0x88, 0xf3, 0xf7, 0x65, 0x63, 0xc0, 0x30, 0xc8, 0x3a, 0x54, 0x9b, 0x3c, 0x94, 0xec, 0x4a,
0x1e, 0x7b, 0xf2, 0xdc, 0x18, 0x4a, 0xb3, 0xc8, 0x67, 0xb0, 0xb2, 0xcb, 0x3b, 0x7d, 0x16, 0x9d,
0xf9, 0x01, 0x3b, 0xf4, 0x06, 0xcc, 0xb8, 0x74, 0x8d, 0x4b, 0xbe, 0x55, 0x5e, 0xfb, 0xa1, 0x6c,
0x8d, 0xc2, 0x0e, 0xcd, 0xe3, 0xd6, 0x9e, 0x66, 0x9d, 0xaa, 0x81, 0xb9, 0x13, 0x09, 0xf2, 0x3b,
0xa8, 0x29, 0x35, 0x5d, 0x63, 0x5a, 0xd0, 0x02, 0x26, 0xc6, 0xeb, 0x9b, 0xbd, 0x6b, 0x4c, 0xc9,
0xed, 0x85, 0x32, 0x1a, 0xbb, 0xd3, 0xba, 0xc8, 0x1a, 0x2c, 0x6d, 0x07, 0x01, 0xbf, 0xa4, 0x4b,
0xeb, 0xf9, 0x7a, 0xc5, 0xd5, 0x04, 0xf9, 0x25, 0x94, 0xb6, 0xa5, 0x64, 0x42, 0x0a, 0x5a, 0x44,
0x63, 0x8f, 0xe7, 0x1b, 0xd3, 0x20, 0x37, 0x06, 0x93, 0x23, 0xa8, 0xa0, 0xfd, 0xed, 0xa8, 0x27,
0x68, 0x09, 0x25, 0xbf, 0xb8, 0xc5, 0x36, 0x13, 0x19, 0xbd, 0xc5, 0x89, 0x0e, 0xb2, 0x07, 0x95,
0xa6, 0xd7, 0x39, 0x67, 0xad, 0x88, 0x0f, 0x68, 0x19, 0x15, 0xfe, 0x74, 0xbe, 0x42, 0x84, 0x19,
0x85, 0x46, 0x4d, 0x22, 0x49, 0xb6, 0xa1, 0x84, 0xc4, 0x09, 0xa7, 0x95, 0xbb, 0x29, 0x89, 0xe5,
0x88, 0x03, 0xcb, 0xcd, 0x5e, 0xc4, 0x47, 0xc3, 0x63, 0x2f, 0x62, 0xa1, 0xa4, 0x80, 0x47, 0x3d,
0xc5, 0x23, 0x6f, 0xa1, 0xb4, 0x77, 0x35, 0xe4, 0x91, 0x14, 0xb4, 0xba, 0xe8, 0xf2, 0x6a, 0x90,
0x31, 0x60, 0x24, 0xc8, 0x13, 0x80, 0xbd, 0x2b, 0x19, 0x79, 0xfb, 0x5c, 0x85, 0x7d, 0x19, 0x8f,
0x23, 0xc5, 0x21, 0x2d, 0x28, 0xbe, 0xf3, 0x4e, 0x59, 0x20, 0x68, 0x0d, 0x75, 0x37, 0x6e, 0x11,
0x58, 0x2d, 0xa0, 0x0d, 0x19, 0x69, 0x95, 0xd7, 0x87, 0x4c, 0x5e, 0xf2, 0xa8, 0xff, 0x9e, 0x77,
0x19, 0x5d, 0xd1, 0x79, 0x9d, 0x62, 0x91, 0x17, 0x50, 0x3b, 0xe4, 0x3a, 0x78, 0x7e, 0x20, 0x59,
0x44, 0x3f, 0xc1, 0xcd, 0x4c, 0x33, 0xf1, 0x2e, 0x07, 0x9e, 0x3c, 0xe3, 0xd1, 0x40, 0xd0, 0x55,
0x44, 0x4c, 0x18, 0x2a, 0x83, 0xda, 0xac, 0x13, 0x31, 0x29, 0xe8, 0xbd, 0x45, 0x19, 0xa4, 0x41,
0x6e, 0x0c, 0x26, 0x14, 0x4a, 0xed, 0xf3, 0x41, 0xdb, 0xff, 0x23, 0xa3, 0x64, 0xdd, 0xaa, 0xe7,
0xdd, 0x98, 0x24, 0xaf, 0x20, 0xdf, 0x6e, 0xef, 0xd3, 0x1f, 0xa3, 0xb6, 0x87, 0x19, 0xda, 0xda,
0xfb, 0xae, 0x42, 0x11, 0x02, 0x85, 0x13, 0xaf, 0x27, 0xe8, 0x1a, 0xee, 0x0b, 0xd7, 0xe4, 0x01,
0x14, 0x4f, 0xbc, 0xa8, 0xc7, 0x24, 0xbd, 0x8f, 0x3e, 0x1b, 0x8a, 0xbc, 0x81, 0xd2, 0x87, 0xc0,
0x1f, 0xf8, 0x52, 0xd0, 0x07, 0x8b, 0x2e, 0xa7, 0x06, 0x1d, 0x0d, 0xa5, 0x1b, 0xe3, 0xd5, 0x6e,
0x31, 0xde, 0x2c, 0xa2, 0x3f, 0x41, 0x9d, 0x31, 0xa9, 0xbe, 0x98, 0x70, 0x51, 0xba, 0x6e, 0xd5,
0xcb, 0x6e, 0x4c, 0xaa, 0xad, 0x1d, 0x8f, 0x82, 0x80, 0x3e, 0x44, 0x36, 0xae, 0xf5, 0xd9, 0xab,
0x34, 0x38, 0x1e, 0x89, 0x73, 0x6a, 0xe3, 0x97, 0x14, 0x67, 0xf2, 0xfd, 0x1d, 0xf7, 0xba, 0xf4,
0x51, 0xfa, 0xbb, 0xe2, 0x90, 0x03, 0x58, 0x6e, 0x63, 0x5b, 0x3a, 0xc6, 0x66, 0x44, 0x1f, 0xa3,
0x1f, 0x2f, 0x1b, 0xaa, 0x73, 0x35, 0xe2, 0xce, 0xa5, 0x7c, 0x48, 0x37, 0xaf, 0x86, 0x06, 0xbb,
0x53, 0xa2, 0x71, 0x5d, 0xfd, 0x74, 0x52, 0x57, 0x6d, 0x28, 0xff, 0x5a, 0x25, 0xb9, 0x62, 0x3f,
0x41, 0x76, 0x42, 0xab, 0x64, 0xda, 0x0e, 0x43, 0x2e, 0x3d, 0x5d, 0x77, 0x9f, 0x62, 0xb8, 0xd3,
0x2c, 0xfb, 0x57, 0x40, 0x66, 0xab, 0x90, 0xb2, 0xd2, 0x67, 0xe3, 0xb8, 0x7a, 0xf7, 0xd9, 0x58,
0x15, 0xa2, 0x0b, 0x2f, 0x18, 0xc5, 0x35, 0x54, 0x13, 0x5f, 0xe7, 0xbe, 0xb2, 0xec, 0x6f, 0x60,
0x65, 0xba, 0x40, 0xdc, 0x49, 0xfa, 0x0d, 0x54, 0x53, 0xb7, 0xe0, 0x2e, 0xa2, 0xce, 0xbf, 0x2d,
0xa8, 0xa6, 0xae, 0x2a, 0x26, 0xd5, 0x78, 0xc8, 0x8c, 0x30, 0xae, 0xc9, 0x0e, 0x2c, 0x6d, 0x4b,
0x19, 0xa9, 0x96, 0xa3, 0xf2, 0xf2, 0xe7, 0x37, 0x5e, 0xf8, 0x06, 0xc2, 0xf5, 0x95, 0xd4, 0xa2,
0x2a, 0x88, 0xbb, 0x4c, 0x48, 0x3f, 0xc4, 0x90, 0x61, 0x87, 0xa8, 0xb8, 0x69, 0x96, 0xfd, 0x15,
0xc0, 0x44, 0xec, 0x4e, 0x3e, 0xfc, 0xd3, 0x82, 0x7b, 0x33, 0x55, 0x6d, 0xae, 0x27, 0xfb, 0xd3,
0x9e, 0x6c, 0xde, 0xb2, 0x42, 0xce, 0xfa, 0xf3, 0x7f, 0xec, 0xf6, 0x10, 0x8a, 0xba, 0x95, 0xcc,
0xdd, 0xa1, 0x0d, 0xe5, 0x5d, 0x5f, 0x78, 0xa7, 0x01, 0xeb, 0xa2, 0x68, 0xd9, 0x4d, 0x68, 0xec,
0x63, 0xb8, 0x7b, 0x1d, 0x3d, 0x4d, 0x38, 0xba, 0x66, 0x90, 0x15, 0xc8, 0x25, 0x33, 0x50, 0xee,
0x60, 0x57, 0x81, 0x55, 0x03, 0xd7, 0xae, 0x56, 0x5c, 0x4d, 0x38, 0x2d, 0x28, 0xea, 0x2a, 0x34,
0x83, 0xb7, 0xa1, 0xdc, 0xf2, 0x03, 0x86, 0x73, 0x80, 0xde, 0x73, 0x42, 0x2b, 0xf7, 0xf6, 0xc2,
0x0b, 0x63, 0x56, 0x2d, 0x9d, 0xad, 0x54, 0xbb, 0x57, 0x7e, 0xe0, 0x64, 0x60, 0xfc, 0xc0, 0x79,
0xe0, 0x01, 0x14, 0x5b, 0x3c, 0x1a, 0x78, 0xd2, 0x28, 0x33, 0x94, 0xe3, 0xc0, 0xca, 0x41, 0x28,
0x86, 0xac, 0x23, 0xb3, 0xc7, 0xc6, 0x23, 0xf8, 0x24, 0xc1, 0x98, 0x81, 0x31, 0x35, 0xf7, 0x58,
0x77, 0x9f, 0x7b, 0xfe, 0x61, 0x41, 0x25, 0xa9, 0x6c, 0xa4, 0x09, 0x45, 0x3c, 0x8d, 0x78, 0xfa,
0x7c, 0x75, 0x43, 0x29, 0x6c, 0x7c, 0x44, 0xb4, 0xe9, 0x30, 0x5a, 0xd4, 0xfe, 0x1e, 0xaa, 0x29,
0xf6, 0x9c, 0x04, 0xd8, 0x4c, 0x27, 0x40, 0x66, 0x6b, 0xd0, 0x46, 0xd2, 0xe9, 0xb1, 0x0b, 0x45,
0xcd, 0x9c, 0x1b, 0x56, 0x02, 0x85, 0x7d, 0x2f, 0xd2, 0xa9, 0x91, 0x77, 0x71, 0xad, 0x78, 0x6d,
0x7e, 0x26, 0xf1, 0x78, 0xf2, 0x2e, 0xae, 0x9d, 0x7f, 0x59, 0x50, 0x33, 0xa3, 0xa4, 0x89, 0x20,
0x83, 0x55, 0x7d, 0x43, 0x59, 0x14, 0xf3, 0x8c, 0xff, 0x6f, 0x16, 0x84, 0x32, 0x86, 0x36, 0xae,
0xcb, 0xea, 0x68, 0xcc, 0xa8, 0xb4, 0x9b, 0x70, 0x7f, 0x2e, 0xf4, 0x4e, 0x57, 0xe4, 0x25, 0xdc,
0x9b, 0x0c, 0xc9, 0xd9, 0x79, 0xb2, 0x06, 0x24, 0x0d, 0x33, 0x43, 0xf4, 0x53, 0xa8, 0xaa, 0x47,
0x47, 0xb6, 0x98, 0x03, 0xcb, 0x1a, 0x60, 0x22, 0x43, 0xa0, 0xd0, 0x67, 0x63, 0x9d, 0x0d, 0x15,
0x17, 0xd7, 0xce, 0xdf, 0x2c, 0xf5, 0x76, 0x18, 0x8e, 0xe4, 0x7b, 0x26, 0x84, 0xd7, 0x53, 0x09,
0x58, 0x38, 0x08, 0x7d, 0x69, 0xb2, 0xef, 0xb3, 0xac, 0x37, 0xc4, 0x70, 0x24, 0x15, 0xcc, 0x48,
0xed, 0xff, 0xc8, 0x45, 0x29, 0xb2, 0x05, 0x85, 0x5d, 0x4f, 0x7a, 0x26, 0x17, 0x32, 0x26, 0x26,
0x85, 0x48, 0x09, 0x2a, 0x72, 0xa7, 0xa4, 0x1e, 0x4a, 0xc3, 0x91, 0x74, 0x5e, 0xc0, 0xea, 0x75,
0xed, 0x73, 0x5c, 0xfb, 0x12, 0xaa, 0x29, 0x2d, 0x78, 0x6f, 0x8f, 0x5a, 0x08, 0x28, 0xbb, 0x6a,
0xa9, 0x7c, 0x4d, 0x36, 0xb2, 0xac, 0x6d, 0x38, 0x9f, 0x40, 0x0d, 0x55, 0x27, 0x11, 0xfc, 0x53,
0x0e, 0x4a, 0xb1, 0x8a, 0xad, 0x29, 0xbf, 0x9f, 0x65, 0xf9, 0x3d, 0xeb, 0xf2, 0x6b, 0x28, 0xa8,
0xfa, 0x61, 0x5c, 0xce, 0x18, 0x37, 0x5a, 0xdd, 0x94, 0x98, 0x82, 0x93, 0x6f, 0xa1, 0xe8, 0x32,
0xa1, 0x46, 0x23, 0xfd, 0x88, 0x78, 0x3e, 0x5f, 0x50, 0x63, 0x26, 0xc2, 0x46, 0x48, 0x89, 0xb7,
0xfd, 0x5e, 0xe8, 0x05, 0xb4, 0xb0, 0x48, 0x5c, 0x63, 0x52, 0xe2, 0x9a, 0x31, 0x09, 0xf7, 0x5f,
0x2c, 0xa8, 0x2e, 0x0c, 0xf5, 0xe2, 0x67, 0xde, 0xcc, 0xd3, 0x33, 0xff, 0x3f, 0x3e, 0x3d, 0xff,
0x9c, 0x9b, 0x56, 0x84, 0x53, 0x92, 0xba, 0x4f, 0x43, 0xee, 0x87, 0xd2, 0xa4, 0x6c, 0x8a, 0xa3,
0x36, 0xda, 0x1c, 0x74, 0x4d, 0xd1, 0x57, 0x4b, 0x75, 0xcd, 0x0e, 0xb9, 0xe2, 0x55, 0x31, 0x0d,
0x34, 0x31, 0x29, 0xe9, 0x79, 0x53, 0xd2, 0x55, 0x6a, 0x7c, 0x10, 0x2c, 0xc2, 0xc0, 0x55, 0x5c,
0x5c, 0xab, 0x2a, 0x7e, 0xc8, 0x91, 0xbb, 0x84, 0xc2, 0x86, 0x42, 0x2b, 0x97, 0x5d, 0x5a, 0xd4,
0xe1, 0x68, 0x5e, 0xc6, 0x56, 0x2e, 0xbb, 0xb4, 0x94, 0x58, 0xb9, 0x44, 0x2b, 0x27, 0x72, 0x4c,
0xcb, 0x3a, 0x01, 0x4f, 0xe4, 0x58, 0xb5, 0x19, 0x97, 0x07, 0xc1, 0xa9, 0xd7, 0xe9, 0xd3, 0x8a,
0xee, 0x6f, 0x31, 0xad, 0xe6, 0x49, 0x15, 0x73, 0xdf, 0x0b, 0xf0, 0xe5, 0x51, 0x76, 0x63, 0xd2,
0xd9, 0x86, 0x4a, 0x92, 0x2a, 0xaa, 0x73, 0xb5, 0xba, 0x78, 0x14, 0x35, 0x37, 0xd7, 0xea, 0xc6,
0x59, 0x9e, 0x9b, 0xcd, 0xf2, 0x7c, 0x2a, 0xcb, 0xb7, 0xa0, 0x36, 0x95, 0x34, 0x0a, 0xe4, 0xf2,
0x4b, 0x61, 0x14, 0xe1, 0x5a, 0xf1, 0x9a, 0x3c, 0xd0, 0x6f, 0xeb, 0x9a, 0x8b, 0x6b, 0xe7, 0x39,
0xd4, 0xa6, 0xd2, 0x65, 0x5e, 0x5d, 0x76, 0x9e, 0x41, 0xad, 0x2d, 0x3d, 0x39, 0x5a, 0xf0, 0x67,
0xc8, 0x7f, 0x2c, 0x58, 0x89, 0x31, 0xa6, 0xf2, 0xfc, 0x02, 0xca, 0x17, 0x2c, 0x92, 0xec, 0x2a,
0xe9, 0x45, 0x74, 0x76, 0x9c, 0xfd, 0x88, 0x08, 0x37, 0x41, 0x92, 0xaf, 0xa1, 0x2c, 0x50, 0x0f,
0x8b, 0xe7, 0x98, 0x27, 0x59, 0x52, 0xc6, 0x5e, 0x82, 0x27, 0x1b, 0x50, 0x08, 0x78, 0x4f, 0xe0,
0xb9, 0x57, 0x37, 0x1f, 0x65, 0xc9, 0xbd, 0xe3, 0x3d, 0x17, 0x81, 0xe4, 0x2d, 0x94, 0x2f, 0xbd,
0x28, 0xf4, 0xc3, 0x5e, 0xfc, 0x26, 0x7f, 0x9a, 0x25, 0xf4, 0xbd, 0xc6, 0xb9, 0x89, 0x80, 0x53,
0x53, 0x97, 0xe8, 0x8c, 0x9b, 0x98, 0x38, 0xbf, 0x51, 0xb9, 0xac, 0x48, 0xe3, 0xfe, 0x01, 0xd4,
0xf4, 0x7d, 0xf8, 0xc8, 0x22, 0xa1, 0xa6, 0x42, 0x6b, 0xd1, 0x9d, 0xdd, 0x49, 0x43, 0xdd, 0x69,
0x49, 0xe7, 0x07, 0xd3, 0xee, 0x62, 0x86, 0xca, 0xa5, 0xa1, 0xd7, 0xe9, 0x7b, 0xbd, 0xf8, 0x9c,
0x62, 0x52, 0x7d, 0xb9, 0x30, 0xf6, 0xf4, 0xb5, 0x8d, 0x49, 0x95, 0x9b, 0x11, 0xbb, 0xf0, 0xc5,
0x64, 0x40, 0x4d, 0xe8, 0xcd, 0xbf, 0x96, 0x00, 0x9a, 0xc9, 0x7e, 0xc8, 0x31, 0x2c, 0xa1, 0x3d,
0xe2, 0x2c, 0x6c, 0x9e, 0xe8, 0xb7, 0xfd, 0xfc, 0x16, 0x0d, 0x96, 0x7c, 0x54, 0xc9, 0x8f, 0x43,
0x0f, 0x79, 0x91, 0x55, 0x26, 0xd2, 0x73, 0x93, 0xfd, 0xf2, 0x06, 0x94, 0xd1, 0xfb, 0x01, 0x8a,
0x3a, 0x0b, 0x48, 0x56, 0x2d, 0x4c, 0xe7, 0xad, 0xfd, 0x62, 0x31, 0x48, 0x2b, 0xfd, 0xdc, 0x22,
0xae, 0xa9, 0x94, 0xc4, 0x59, 0xd0, 0x0a, 0xcd, 0x8d, 0xc9, 0x0a, 0xc0, 0x54, 0xd7, 0xa9, 0x5b,
0xe4, 0x3b, 0x28, 0xea, 0x5a, 0x47, 0x3e, 0x9d, 0x2f, 0x10, 0xeb, 0x5b, 0xfc, 0xb9, 0x6e, 0x7d,
0x6e, 0x91, 0xf7, 0x50, 0x50, 0x4d, 0x9e, 0x64, 0x74, 0xac, 0xd4, 0x84, 0x60, 0x3b, 0x8b, 0x20,
0x26, 0x8a, 0x3f, 0x00, 0x4c, 0x46, 0x0d, 0x92, 0xf1, 0xcf, 0xca, 0xcc, 0xcc, 0x62, 0xd7, 0x6f,
0x06, 0x1a, 0x03, 0xef, 0x55, 0x9f, 0x3d, 0xe3, 0x24, 0xb3, 0xc3, 0x26, 0xd7, 0xc8, 0x76, 0x16,
0x41, 0x8c, 0xba, 0x73, 0xa8, 0x4d, 0xfd, 0xf3, 0x4a, 0x7e, 0x96, 0xed, 0xe4, 0xf5, 0x3f, 0x72,
0xed, 0x57, 0xb7, 0xc2, 0x1a, 0x4b, 0x32, 0x3d, 0xab, 0x99, 0xcf, 0xa4, 0x71, 0x93, 0xdf, 0xd3,
0xff, 0xa2, 0xda, 0x1b, 0xb7, 0xc6, 0x6b, 0xab, 0x3b, 0x85, 0xdf, 0xe6, 0x86, 0xa7, 0xa7, 0x45,
0xfc, 0x43, 0xfa, 0xcb, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xe3, 0x77, 0x0e, 0x2f, 0x2e, 0x17,
0x00, 0x00,
// 1881 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x58, 0x5f, 0x6f, 0xdb, 0xc8,
0x11, 0x2f, 0x25, 0x59, 0x7f, 0x46, 0x96, 0xe3, 0x6c, 0x9d, 0x74, 0xc3, 0xa4, 0x17, 0x87, 0x49,
0xae, 0x42, 0x53, 0x48, 0x77, 0xbe, 0xa6, 0xbe, 0x5c, 0xee, 0x80, 0xda, 0xb2, 0x05, 0xfb, 0x90,
0xd8, 0xc6, 0xca, 0xc9, 0xa1, 0x2d, 0xd0, 0x80, 0x92, 0xd6, 0x32, 0x21, 0x8a, 0xab, 0x72, 0x57,
0xb6, 0xd5, 0xa7, 0xbe, 0xf4, 0xad, 0xe8, 0xf7, 0x28, 0xfa, 0x11, 0xfa, 0xd2, 0x7e, 0xa1, 0xa2,
0x1f, 0xa1, 0xd8, 0x3f, 0xa4, 0x48, 0x4b, 0x94, 0xed, 0xf6, 0x49, 0x3b, 0xc3, 0xdf, 0x6f, 0x76,
0x67, 0x38, 0x3b, 0x33, 0x14, 0xac, 0xf7, 0x58, 0x20, 0x42, 0xe6, 0xfb, 0x34, 0x6c, 0x8c, 0x43,
0x26, 0x18, 0xda, 0xe8, 0x4e, 0x3c, 0xbf, 0x7f, 0xd5, 0x48, 0x3c, 0xb8, 0xf8, 0xd2, 0x7e, 0x3b,
0xf0, 0xc4, 0xf9, 0xa4, 0xdb, 0xe8, 0xb1, 0x51, 0x73, 0xc4, 0xba, 0xd3, 0xa6, 0x42, 0x0d, 0x3d,
0xd1, 0x74, 0xc7, 0x5e, 0x93, 0xd3, 0xf0, 0xc2, 0xeb, 0x51, 0xde, 0x34, 0xa4, 0xe8, 0x57, 0x9b,
0xb4, 0x5f, 0x67, 0x92, 0x39, 0x9b, 0x84, 0x3d, 0x3a, 0x66, 0xbe, 0xd7, 0x9b, 0x36, 0xc7, 0xdd,
0xa6, 0x5e, 0x69, 0x9a, 0x53, 0x87, 0x8d, 0x77, 0x1e, 0x17, 0x27, 0x21, 0xeb, 0x51, 0xce, 0x29,
0x27, 0xf4, 0x0f, 0x13, 0xca, 0x05, 0x5a, 0x87, 0x3c, 0xa1, 0x67, 0xd8, 0xda, 0xb4, 0xea, 0x15,
0x22, 0x97, 0xce, 0x09, 0x3c, 0xb8, 0x86, 0xe4, 0x63, 0x16, 0x70, 0x8a, 0xb6, 0x61, 0xe5, 0x30,
0x38, 0x63, 0x1c, 0x5b, 0x9b, 0xf9, 0x7a, 0x75, 0xeb, 0x59, 0x63, 0x91, 0x73, 0x0d, 0xc3, 0x93,
0x48, 0xa2, 0xf1, 0x0e, 0x87, 0x6a, 0x42, 0x8b, 0x9e, 0x40, 0x25, 0x12, 0xf7, 0xcc, 0xc6, 0x33,
0x05, 0x6a, 0xc3, 0xea, 0x61, 0x70, 0xc1, 0x86, 0xb4, 0xc5, 0x82, 0x33, 0x6f, 0x80, 0x73, 0x9b,
0x56, 0xbd, 0xba, 0xe5, 0x2c, 0xde, 0x2c, 0x89, 0x24, 0x29, 0x9e, 0xf3, 0x3d, 0xe0, 0x3d, 0x8f,
0xf7, 0x58, 0x10, 0xd0, 0x5e, 0xe4, 0x4c, 0xa6, 0xd3, 0xe9, 0x33, 0xe5, 0xae, 0x9d, 0xc9, 0x79,
0x0c, 0x8f, 0x16, 0xd8, 0xd2, 0x61, 0x71, 0x7e, 0x0f, 0xab, 0xbb, 0xf2, 0x6c, 0xd9, 0xc6, 0xbf,
0x85, 0xd2, 0xf1, 0x58, 0x78, 0x2c, 0xe0, 0xcb, 0xbd, 0x51, 0x66, 0x0c, 0x92, 0x44, 0x14, 0xe7,
0x9f, 0x55, 0xb3, 0x81, 0x51, 0xa0, 0x4d, 0xa8, 0xb6, 0x58, 0x20, 0xe8, 0x95, 0x38, 0x71, 0xc5,
0xb9, 0xd9, 0x28, 0xa9, 0x42, 0x9f, 0xc3, 0xda, 0x1e, 0xeb, 0x0d, 0x69, 0x78, 0xe6, 0xf9, 0xf4,
0xc8, 0x1d, 0x51, 0xe3, 0xd2, 0x35, 0x2d, 0xfa, 0x4e, 0x7a, 0xed, 0x05, 0xa2, 0x3d, 0x09, 0x7a,
0x38, 0xaf, 0x8e, 0xf6, 0x34, 0xeb, 0xad, 0x1a, 0x18, 0x99, 0x31, 0xd0, 0xef, 0xa0, 0x26, 0xcd,
0xf4, 0xcd, 0xd6, 0x1c, 0x17, 0x54, 0x62, 0xbc, 0xbe, 0xd9, 0xbb, 0x46, 0x8a, 0xb7, 0x1f, 0x88,
0x70, 0x4a, 0xd2, 0xb6, 0xd0, 0x06, 0xac, 0xec, 0xf8, 0x3e, 0xbb, 0xc4, 0x2b, 0x9b, 0xf9, 0x7a,
0x85, 0x68, 0x01, 0xfd, 0x0a, 0x4a, 0x3b, 0x42, 0x50, 0x2e, 0x38, 0x2e, 0xaa, 0xcd, 0x9e, 0x2c,
0xde, 0x4c, 0x83, 0x48, 0x04, 0x46, 0xc7, 0x50, 0x51, 0xfb, 0xef, 0x84, 0x03, 0x8e, 0x4b, 0x8a,
0xf9, 0xe5, 0x2d, 0x8e, 0x19, 0x73, 0xf4, 0x11, 0x67, 0x36, 0xd0, 0x3e, 0x54, 0x5a, 0x6e, 0xef,
0x9c, 0xb6, 0x43, 0x36, 0xc2, 0x65, 0x65, 0xf0, 0x67, 0x8b, 0x0d, 0x2a, 0x98, 0x31, 0x68, 0xcc,
0xc4, 0x4c, 0xb4, 0x03, 0x25, 0x25, 0x9c, 0x32, 0x5c, 0xb9, 0x9b, 0x91, 0x88, 0x87, 0x1c, 0x58,
0x6d, 0x0d, 0x42, 0x36, 0x19, 0x9f, 0xb8, 0x21, 0x0d, 0x04, 0x06, 0xf5, 0xaa, 0x53, 0x3a, 0xf4,
0x16, 0x4a, 0xfb, 0x57, 0x63, 0x16, 0x0a, 0x8e, 0xab, 0xcb, 0x2e, 0xaf, 0x06, 0x99, 0x0d, 0x0c,
0x03, 0x7d, 0x06, 0xb0, 0x7f, 0x25, 0x42, 0xf7, 0x80, 0xc9, 0xb0, 0xaf, 0xaa, 0xd7, 0x91, 0xd0,
0xa0, 0x36, 0x14, 0xdf, 0xb9, 0x5d, 0xea, 0x73, 0x5c, 0x53, 0xb6, 0x1b, 0xb7, 0x08, 0xac, 0x26,
0xe8, 0x8d, 0x0c, 0x5b, 0xe6, 0xf5, 0x11, 0x15, 0x97, 0x2c, 0x1c, 0xbe, 0x67, 0x7d, 0x8a, 0xd7,
0x74, 0x5e, 0x27, 0x54, 0xe8, 0x05, 0xd4, 0x8e, 0x98, 0x0e, 0x9e, 0xe7, 0x0b, 0x1a, 0xe2, 0x7b,
0xea, 0x30, 0x69, 0xa5, 0xba, 0xcb, 0xbe, 0x2b, 0xce, 0x58, 0x38, 0xe2, 0x78, 0x5d, 0x21, 0x66,
0x0a, 0x99, 0x41, 0x1d, 0xda, 0x0b, 0xa9, 0xe0, 0xf8, 0xfe, 0xb2, 0x0c, 0xd2, 0x20, 0x12, 0x81,
0x11, 0x86, 0x52, 0xe7, 0x7c, 0xd4, 0xf1, 0xfe, 0x48, 0x31, 0xda, 0xb4, 0xea, 0x79, 0x12, 0x89,
0xe8, 0x15, 0xe4, 0x3b, 0x9d, 0x03, 0xfc, 0x63, 0x65, 0xed, 0x51, 0x86, 0xb5, 0xce, 0x01, 0x91,
0x28, 0x84, 0xa0, 0x70, 0xea, 0x0e, 0x38, 0xde, 0x50, 0xe7, 0x52, 0x6b, 0xf4, 0x10, 0x8a, 0xa7,
0x6e, 0x38, 0xa0, 0x02, 0x3f, 0x50, 0x3e, 0x1b, 0x09, 0xbd, 0x81, 0xd2, 0x07, 0xdf, 0x1b, 0x79,
0x82, 0xe3, 0x87, 0xcb, 0x2e, 0xa7, 0x06, 0x1d, 0x8f, 0x05, 0x89, 0xf0, 0xf2, 0xb4, 0x2a, 0xde,
0x34, 0xc4, 0x3f, 0x51, 0x36, 0x23, 0x51, 0x3e, 0x31, 0xe1, 0xc2, 0x78, 0xd3, 0xaa, 0x97, 0x49,
0x24, 0xca, 0xa3, 0x9d, 0x4c, 0x7c, 0x1f, 0x3f, 0x52, 0x6a, 0xb5, 0xd6, 0xef, 0x5e, 0xa6, 0xc1,
0xc9, 0x84, 0x9f, 0x63, 0x5b, 0x3d, 0x49, 0x68, 0x66, 0xcf, 0xdf, 0x31, 0xb7, 0x8f, 0x1f, 0x27,
0x9f, 0x4b, 0x0d, 0x3a, 0x84, 0xd5, 0x8e, 0x6a, 0x4b, 0x27, 0xaa, 0x19, 0xe1, 0x27, 0xca, 0x8f,
0x97, 0x0d, 0xd9, 0xb9, 0x1a, 0x51, 0xe7, 0x92, 0x3e, 0x24, 0x9b, 0x57, 0x43, 0x83, 0x49, 0x8a,
0x6a, 0xff, 0x1a, 0xd0, 0x7c, 0xd5, 0x90, 0xd5, 0x76, 0x48, 0xa7, 0x51, 0xb5, 0x1d, 0xd2, 0xa9,
0x2c, 0x1c, 0x17, 0xae, 0x3f, 0x89, 0x6a, 0x9e, 0x16, 0xbe, 0xc9, 0x7d, 0x6d, 0xd9, 0xdf, 0xc2,
0x5a, 0xfa, 0x42, 0xdf, 0x89, 0xfd, 0x06, 0xaa, 0x89, 0xac, 0xbd, 0x0b, 0xd5, 0xf9, 0x97, 0x05,
0xd5, 0xc4, 0xd5, 0x52, 0x49, 0x30, 0x1d, 0x53, 0x43, 0x56, 0x6b, 0xb4, 0x0b, 0x2b, 0x3b, 0x42,
0x84, 0xb2, 0x45, 0xc8, 0x3c, 0xfa, 0xc5, 0x8d, 0x17, 0xb4, 0xa1, 0xe0, 0xfa, 0x0a, 0x69, 0xaa,
0xbc, 0x41, 0x7b, 0x94, 0x0b, 0x2f, 0x70, 0xe5, 0x2d, 0x53, 0x15, 0xbd, 0x42, 0x92, 0x2a, 0xfb,
0x6b, 0x80, 0x19, 0xed, 0x4e, 0x3e, 0xfc, 0xdd, 0x82, 0xfb, 0x73, 0x55, 0x68, 0xa1, 0x27, 0x07,
0x69, 0x4f, 0xb6, 0x6e, 0x59, 0xd1, 0xe6, 0xfd, 0xf9, 0x3f, 0x4e, 0x7b, 0x04, 0x45, 0x5d, 0xfa,
0x17, 0x9e, 0xd0, 0x86, 0xf2, 0x9e, 0xc7, 0xdd, 0xae, 0x4f, 0xfb, 0x8a, 0x5a, 0x26, 0xb1, 0xac,
0xfa, 0x8e, 0x3a, 0xbd, 0x8e, 0x9e, 0x16, 0x1c, 0x7d, 0xc7, 0xd1, 0x1a, 0xe4, 0xe2, 0x99, 0x25,
0x77, 0xb8, 0x27, 0xc1, 0xb2, 0xe1, 0x6a, 0x57, 0x2b, 0x44, 0x0b, 0x4e, 0x1b, 0x8a, 0xba, 0x6a,
0xcc, 0xe1, 0x6d, 0x28, 0xb7, 0x3d, 0x9f, 0xaa, 0xbe, 0xad, 0xcf, 0x1c, 0xcb, 0xd2, 0xbd, 0xfd,
0xe0, 0xc2, 0x6c, 0x2b, 0x97, 0xce, 0x76, 0xa2, 0x3d, 0x4b, 0x3f, 0x54, 0x27, 0x37, 0x7e, 0xa8,
0xfe, 0xfd, 0x10, 0x8a, 0x6d, 0x16, 0x8e, 0x5c, 0x61, 0x8c, 0x19, 0xc9, 0x71, 0x60, 0xed, 0x30,
0xe0, 0x63, 0xda, 0x13, 0xd9, 0x63, 0xde, 0x31, 0xdc, 0x8b, 0x31, 0x66, 0xc0, 0x4b, 0xcc, 0x29,
0xd6, 0xdd, 0xe7, 0x94, 0xbf, 0x59, 0x50, 0x89, 0x2b, 0x11, 0x6a, 0x41, 0x51, 0xbd, 0x8d, 0x68,
0x5a, 0x7c, 0x75, 0x43, 0xe9, 0x6a, 0x7c, 0x54, 0x68, 0xd3, 0x11, 0x34, 0xd5, 0xfe, 0x01, 0xaa,
0x09, 0xf5, 0x82, 0x04, 0xd8, 0x4a, 0x26, 0x40, 0x66, 0x29, 0xd7, 0x9b, 0x24, 0xd3, 0x63, 0x0f,
0x8a, 0x5a, 0xb9, 0x30, 0xac, 0x08, 0x0a, 0x07, 0x6e, 0xa8, 0x53, 0x23, 0x4f, 0xd4, 0x5a, 0xea,
0x3a, 0xec, 0x4c, 0xa8, 0xd7, 0x93, 0x27, 0x6a, 0xed, 0xfc, 0xc3, 0x82, 0x9a, 0x19, 0xfd, 0x4c,
0x04, 0x29, 0xac, 0xeb, 0x1b, 0x4a, 0xc3, 0x48, 0x67, 0xfc, 0x7f, 0xb3, 0x24, 0x94, 0x11, 0xb4,
0x71, 0x9d, 0xab, 0xa3, 0x31, 0x67, 0xd2, 0x6e, 0xc1, 0x83, 0x85, 0xd0, 0x3b, 0x5d, 0x91, 0x97,
0x70, 0x7f, 0x36, 0xd4, 0x66, 0xe7, 0xc9, 0x06, 0xa0, 0x24, 0xcc, 0x0c, 0xbd, 0x4f, 0xa1, 0x2a,
0x3f, 0x12, 0xb2, 0x69, 0x0e, 0xac, 0x6a, 0x80, 0x89, 0x0c, 0x82, 0xc2, 0x90, 0x4e, 0x75, 0x36,
0x54, 0x88, 0x5a, 0x3b, 0x7f, 0xb5, 0xe4, 0xac, 0x3f, 0x9e, 0x88, 0xf7, 0x94, 0x73, 0x77, 0x20,
0x13, 0xb0, 0x70, 0x18, 0x78, 0xc2, 0x64, 0xdf, 0xe7, 0x59, 0x33, 0xff, 0x78, 0x22, 0x24, 0xcc,
0xb0, 0x0e, 0x7e, 0x44, 0x14, 0x0b, 0x6d, 0x43, 0x61, 0xcf, 0x15, 0xae, 0xc9, 0x85, 0x8c, 0x09,
0x47, 0x22, 0x12, 0x44, 0x29, 0xee, 0x96, 0xe4, 0x87, 0xcd, 0x78, 0x22, 0x9c, 0x17, 0xb0, 0x7e,
0xdd, 0xfa, 0x02, 0xd7, 0xbe, 0x82, 0x6a, 0xc2, 0x8a, 0xba, 0xb7, 0xc7, 0x6d, 0x05, 0x28, 0x13,
0xb9, 0x94, 0xbe, 0xc6, 0x07, 0x59, 0xd5, 0x7b, 0x38, 0xf7, 0xa0, 0xa6, 0x4c, 0xc7, 0x11, 0xfc,
0x53, 0x0e, 0x4a, 0x91, 0x89, 0xed, 0x94, 0xdf, 0xcf, 0xb2, 0xfc, 0x9e, 0x77, 0xf9, 0x35, 0x14,
0x64, 0xfd, 0x30, 0x2e, 0x67, 0x8c, 0x07, 0xed, 0x7e, 0x82, 0x26, 0xe1, 0xe8, 0x3b, 0x28, 0x12,
0xca, 0xe5, 0x28, 0xa3, 0x87, 0xfe, 0xe7, 0x8b, 0x89, 0x1a, 0x33, 0x23, 0x1b, 0x92, 0xa4, 0x77,
0xbc, 0x41, 0xe0, 0xfa, 0xb8, 0xb0, 0x8c, 0xae, 0x31, 0x09, 0xba, 0x56, 0xcc, 0xc2, 0xfd, 0x67,
0x0b, 0xaa, 0x4b, 0x43, 0xbd, 0xfc, 0xb3, 0x6c, 0xee, 0x53, 0x31, 0xff, 0x3f, 0x7e, 0x2a, 0xfe,
0xdb, 0x4a, 0x1b, 0x52, 0x53, 0x8d, 0xbc, 0x4f, 0x63, 0xe6, 0x05, 0xc2, 0xa4, 0x6c, 0x42, 0x23,
0x0f, 0xda, 0x1a, 0xf5, 0x4d, 0xd1, 0x97, 0xcb, 0x59, 0xf1, 0xce, 0x9b, 0xe2, 0x2d, 0x93, 0xe0,
0x03, 0xa7, 0xa1, 0x0a, 0x51, 0x85, 0xa8, 0xb5, 0xac, 0xd7, 0x47, 0x4c, 0x69, 0x57, 0x54, 0xb6,
0x18, 0x49, 0xd9, 0xbb, 0xec, 0xe3, 0xa2, 0x76, 0xbc, 0x75, 0xa9, 0xba, 0xd0, 0x11, 0x93, 0xba,
0x92, 0x02, 0x6a, 0x41, 0xe2, 0x4e, 0xc5, 0x14, 0x97, 0x75, 0xaa, 0x9d, 0x8a, 0xa9, 0x6c, 0x28,
0x84, 0xf9, 0x7e, 0xd7, 0xed, 0x0d, 0x71, 0x45, 0x77, 0xb2, 0x48, 0x96, 0x93, 0x9e, 0x8c, 0xae,
0xe7, 0xfa, 0xea, 0x9b, 0xa0, 0x4c, 0x22, 0xd1, 0xd9, 0x81, 0x4a, 0x9c, 0x14, 0xb2, 0x47, 0xb5,
0xfb, 0x2a, 0xe8, 0x35, 0x92, 0x6b, 0xf7, 0xa3, 0x7c, 0xce, 0xcd, 0xe7, 0x73, 0x3e, 0x91, 0xcf,
0xdb, 0x50, 0x4b, 0xa5, 0x87, 0x04, 0x11, 0x76, 0xc9, 0x8d, 0x21, 0xb5, 0x96, 0xba, 0x16, 0xf3,
0xf5, 0x57, 0x6f, 0x8d, 0xa8, 0xb5, 0xf3, 0x1c, 0x6a, 0xa9, 0xc4, 0x58, 0x54, 0x81, 0x9d, 0x67,
0x50, 0xeb, 0x08, 0x57, 0x4c, 0x96, 0xfc, 0x4d, 0xf1, 0x1f, 0x0b, 0xd6, 0x22, 0x8c, 0xa9, 0x31,
0xbf, 0x84, 0xf2, 0x05, 0x0d, 0x05, 0xbd, 0x8a, 0xbb, 0x0e, 0x9e, 0x1f, 0x34, 0x3f, 0x2a, 0x04,
0x89, 0x91, 0xe8, 0x1b, 0x28, 0x73, 0x65, 0x87, 0x46, 0x13, 0xcb, 0x67, 0x59, 0x2c, 0xb3, 0x5f,
0x8c, 0x47, 0x4d, 0x28, 0xf8, 0x6c, 0xc0, 0xd5, 0x7b, 0xaf, 0x6e, 0x3d, 0xce, 0xe2, 0xbd, 0x63,
0x03, 0xa2, 0x80, 0xe8, 0x2d, 0x94, 0x2f, 0xdd, 0x30, 0xf0, 0x82, 0x41, 0xf4, 0xb5, 0xfc, 0x34,
0x8b, 0xf4, 0x83, 0xc6, 0x91, 0x98, 0xe0, 0xd4, 0xe4, 0x75, 0x39, 0x63, 0x26, 0x26, 0xce, 0x6f,
0x64, 0xd6, 0x4a, 0xd1, 0xb8, 0x7f, 0x08, 0x35, 0x9d, 0xf9, 0x1f, 0x69, 0xc8, 0xe5, 0xfc, 0x67,
0x2d, 0xbb, 0x9d, 0xbb, 0x49, 0x28, 0x49, 0x33, 0x9d, 0x4f, 0xa6, 0xb1, 0x45, 0x0a, 0x99, 0x4b,
0x63, 0xb7, 0x37, 0x74, 0x07, 0xd1, 0x7b, 0x8a, 0x44, 0xf9, 0xe4, 0xc2, 0xec, 0xa7, 0x2f, 0x68,
0x24, 0xca, 0xdc, 0x0c, 0xe9, 0x85, 0xc7, 0x67, 0xa3, 0x68, 0x2c, 0x6f, 0xfd, 0xa5, 0x04, 0xd0,
0x8a, 0xcf, 0x83, 0x4e, 0x60, 0x45, 0xed, 0x87, 0x9c, 0xa5, 0x6d, 0x52, 0xf9, 0x6d, 0x3f, 0xbf,
0x45, 0x2b, 0x45, 0x1f, 0x65, 0xf2, 0xab, 0xf1, 0x06, 0xbd, 0xc8, 0x2a, 0x08, 0xc9, 0x09, 0xc9,
0x7e, 0x79, 0x03, 0xca, 0xd8, 0xfd, 0x00, 0x45, 0x9d, 0x05, 0x28, 0xab, 0xea, 0x25, 0xf3, 0xd6,
0x7e, 0xb1, 0x1c, 0xa4, 0x8d, 0x7e, 0x61, 0x21, 0x62, 0x6a, 0x22, 0x72, 0x96, 0x34, 0x3d, 0x73,
0x63, 0xb2, 0x02, 0x90, 0xea, 0x2f, 0x75, 0x0b, 0x7d, 0x0f, 0x45, 0x5d, 0xd5, 0xd0, 0x4f, 0x17,
0x13, 0x22, 0x7b, 0xcb, 0x1f, 0xd7, 0xad, 0x2f, 0x2c, 0xf4, 0x1e, 0x0a, 0xb2, 0x9d, 0xa3, 0x8c,
0xde, 0x94, 0x98, 0x05, 0x6c, 0x67, 0x19, 0xc4, 0x44, 0xf1, 0x13, 0xc0, 0x6c, 0xa8, 0x40, 0x19,
0xff, 0x79, 0xcc, 0x4d, 0x27, 0x76, 0xfd, 0x66, 0xa0, 0xd9, 0xe0, 0xbd, 0xec, 0xa8, 0x67, 0x0c,
0x65, 0xf6, 0xd2, 0xf8, 0x1a, 0xd9, 0xce, 0x32, 0x88, 0x31, 0x77, 0x0e, 0xb5, 0xd4, 0x7f, 0xa2,
0xe8, 0xe7, 0xd9, 0x4e, 0x5e, 0xff, 0x8b, 0xd5, 0x7e, 0x75, 0x2b, 0xac, 0xd9, 0x49, 0x24, 0xa7,
0x32, 0xf3, 0x18, 0x35, 0x6e, 0xf2, 0x3b, 0xfd, 0xff, 0xa6, 0xdd, 0xbc, 0x35, 0x5e, 0xef, 0xba,
0x5b, 0xf8, 0x6d, 0x6e, 0xdc, 0xed, 0x16, 0xd5, 0x5f, 0xc5, 0x5f, 0xfd, 0x37, 0x00, 0x00, 0xff,
0xff, 0xc1, 0x4b, 0x2d, 0x65, 0xc8, 0x16, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.

View File

@@ -77,9 +77,6 @@ message BuildOptions {
bool ExportPush = 26;
bool ExportLoad = 27;
moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
string Ref = 29;
string GroupRef = 30;
repeated string Annotations = 31;
}
message ExportEntry {
@@ -195,7 +192,6 @@ message InitMessage {
message InvokeConfig {
repeated string Entrypoint = 1;
repeated string Cmd = 2;
bool NoCmd = 11; // Do not set cmd but use the image's default
repeated string Env = 3;
string User = 4;
bool NoUser = 5; // Do not set user but use the image's default

View File

@@ -236,7 +236,6 @@ func TestResolvePaths(t *testing.T) {
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
got, err := ResolveOptionPaths(&tt.options)
require.NoError(t, err)

View File

@@ -137,7 +137,7 @@ func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *p
go func() {
var err error
if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
logrus.Debugf("process error: %v", err)
logrus.Errorf("failed to exec process: %v", err)
}
logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
m.processes.Delete(pid)

View File

@@ -15,7 +15,7 @@ import (
"syscall"
"time"
"github.com/containerd/log"
"github.com/containerd/containerd/log"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"

View File

@@ -57,7 +57,9 @@ func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest
return nil, errors.Errorf("unknown ref %q", req.Ref)
}
res = new(pb.ListProcessesResponse)
res.Infos = append(res.Infos, s.processes.ListProcesses()...)
for _, p := range s.processes.ListProcesses() {
res.Infos = append(res.Infos, p)
}
return res, nil
}

View File

@@ -7,9 +7,6 @@ variable "DOCS_FORMATS" {
variable "DESTDIR" {
default = "./bin"
}
variable "GOLANGCI_LINT_MULTIPLATFORM" {
default = null
}
# Special target: https://github.com/docker/metadata-action#bake-definition
target "meta-helper" {
@@ -35,17 +32,6 @@ target "lint" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
output = ["type=cacheonly"]
platforms = GOLANGCI_LINT_MULTIPLATFORM != null ? [
"darwin/amd64",
"darwin/arm64",
"linux/amd64",
"linux/arm64",
"linux/s390x",
"linux/ppc64le",
"linux/riscv64",
"windows/amd64",
"windows/arm64"
] : []
}
target "validate-vendor" {

View File

@@ -12,118 +12,18 @@ You can define your Bake file in the following file formats:
By default, Bake uses the following lookup order to find the configuration file:
1. `compose.yaml`
2. `compose.yml`
3. `docker-compose.yml`
4. `docker-compose.yaml`
5. `docker-bake.json`
6. `docker-bake.override.json`
7. `docker-bake.hcl`
8. `docker-bake.override.hcl`
1. `docker-bake.override.hcl`
2. `docker-bake.hcl`
3. `docker-bake.override.json`
4. `docker-bake.json`
5. `docker-compose.yaml`
6. `docker-compose.yml`
Bake searches for the file in the current working directory.
You can specify the file location explicitly using the `--file` flag:
```console
$ docker buildx bake --file ../docker/bake.hcl --print
```
If you don't specify a file explicitly, Bake searches for the file in the
current working directory. If more than one Bake file is found, all files are
merged into a single definition. Files are merged according to the lookup
order. That means that if your project contains both a `compose.yaml` file and
a `docker-bake.hcl` file, Bake loads the `compose.yaml` file first, and then
the `docker-bake.hcl` file.
If merged files contain duplicate attribute definitions, those definitions are
either merged or overridden by the last occurrence, depending on the attribute.
The following attributes are overridden by the last occurrence:
- `target.cache-to`
- `target.dockerfile-inline`
- `target.dockerfile`
- `target.outputs`
- `target.platforms`
- `target.pull`
- `target.tags`
- `target.target`
For example, if `compose.yaml` and `docker-bake.hcl` both define the `tags`
attribute, the value from `docker-bake.hcl` is used.
```console
$ cat compose.yaml
services:
webapp:
build:
context: .
tags:
- bar
$ cat docker-bake.hcl
target "webapp" {
tags = ["foo"]
}
$ docker buildx bake --print webapp
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"foo"
]
}
}
}
```
All other attributes are merged. For example, if `compose.yaml` and
`docker-bake.hcl` both define unique entries for the `labels` attribute, all
entries are included. Duplicate entries for the same label are overridden.
```console
$ cat compose.yaml
services:
webapp:
build:
context: .
labels:
com.example.foo: "foo"
com.example.name: "Alice"
$ cat docker-bake.hcl
target "webapp" {
labels = {
"com.example.bar" = "bar"
"com.example.name" = "Bob"
}
}
$ docker buildx bake --print webapp
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"labels": {
"com.example.foo": "foo",
"com.example.bar": "bar",
"com.example.name": "Bob"
}
}
}
}
$ docker buildx bake --file=../docker/bake.hcl --print
```
## Syntax
@@ -213,9 +113,8 @@ target "webapp" {
The following table shows the complete list of attributes that you can assign to a target:
| Name | Type | Description |
|-------------------------------------------------|---------|----------------------------------------------------------------------|
| ----------------------------------------------- | ------- | -------------------------------------------------------------------- |
| [`args`](#targetargs) | Map | Build arguments |
| [`annotations`](#targetannotations) | List | Exporter annotations |
| [`attest`](#targetattest) | List | Build attestations |
| [`cache-from`](#targetcache-from) | List | External cache sources |
| [`cache-to`](#targetcache-to) | List | External cache destinations |
@@ -233,11 +132,9 @@ The following table shows the complete list of attributes that you can assign to
| [`platforms`](#targetplatforms) | List | Target platforms |
| [`pull`](#targetpull) | Boolean | Always pull images |
| [`secret`](#targetsecret) | List | Secrets to expose to the build |
| [`shm-size`](#targetshm-size) | List | Size of `/dev/shm` |
| [`ssh`](#targetssh) | List | SSH agent sockets or keys to expose to the build |
| [`tags`](#targettags) | List | Image names and tags |
| [`target`](#targettarget) | String | Target build stage |
| [`ulimits`](#targetulimits) | List | Ulimit options |
### `target.args`
@@ -274,41 +171,6 @@ target "db" {
}
```
### `target.annotations`
The `annotations` attribute lets you add annotations to images built with Bake.
The attribute takes a list of annotations, in the format `KEY=VALUE`.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["org.opencontainers.image.authors=dvdksn"]
}
```
is the same as
```hcl
target "default" {
output = ["type=image,name=foo,annotation.org.opencontainers.image.authors=dvdksn"]
}
```
By default, the annotation is added to image manifests. You can configure the
level of the annotations by adding a prefix to the annotation that contains a
comma-separated list of all the levels you want to annotate. The following
example adds annotations to both the image index and manifests.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["index,manifest:org.opencontainers.image.authors=dvdksn"]
}
```
Read about the supported levels in
[Specifying annotation levels](https://docs.docker.com/build/building/annotations/#specifying-annotation-levels).
### `target.attest`
The `attest` attribute lets you apply [build attestations][attestations] to the target.
@@ -834,29 +696,6 @@ RUN --mount=type=secret,id=KUBECONFIG \
KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
```
### `target.shm-size`
Sets the size of the shared memory allocated for build containers when using
`RUN` instructions.
The format is `<number><unit>`. `number` must be greater than `0`. Unit is
optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g`
(gigabytes). If you omit the unit, the system uses bytes.
This is the same as the `--shm-size` flag for `docker build`.
```hcl
target "default" {
shm-size = "128m"
}
```
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
### `target.ssh`
Defines SSH agent sockets or keys to expose to the build.
@@ -903,32 +742,6 @@ target "default" {
}
```
### `target.ulimits`
Ulimits override the default ulimits of the build's containers when using `RUN`
instructions. They are specified with a soft and hard limit as such:
`<type>=<soft limit>[:<hard limit>]`, for example:
```hcl
target "app" {
ulimits = [
"nofile=1024:1024"
]
}
```
> **Note**
>
> If you do not provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they are inherited from
> the default `ulimits` set on the daemon.
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
## Group
Groups allow you to invoke multiple builds (targets) at once.
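As an illustration, a minimal sketch of a group definition; the target names `app` and `db` and their Dockerfile paths are hypothetical:

```console
$ cat docker-bake.hcl
group "default" {
  targets = ["app", "db"]   # illustrative target names
}
target "app" {
  dockerfile = "Dockerfile.app"
}
target "db" {
  dockerfile = "Dockerfile.db"
}
$ docker buildx bake    # builds both "app" and "db" in parallel
```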
@@ -1120,20 +933,20 @@ target "webapp-dev" {
[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/reference/cli/docker/image/build/#build-arg
[build-context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[build-arg]: https://docs.docker.com/engine/reference/commandline/build/#build-arg
[build-context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[cache-backends]: https://docs.docker.com/build/cache/backends/
[cache-from]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-from
[cache-to]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-to
[context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[file]: https://docs.docker.com/reference/cli/docker/image/build/#file
[cache-from]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from
[cache-to]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to
[context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[file]: https://docs.docker.com/engine/reference/commandline/build/#file
[go-cty]: https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib
[hcl-funcs]: https://docs.docker.com/build/bake/hcl-funcs/
[output]: https://docs.docker.com/reference/cli/docker/buildx/build/#output
[platform]: https://docs.docker.com/reference/cli/docker/buildx/build/#platform
[run_mount_secret]: https://docs.docker.com/reference/dockerfile/#run---mounttypesecret
[secret]: https://docs.docker.com/reference/cli/docker/buildx/build/#secret
[ssh]: https://docs.docker.com/reference/cli/docker/buildx/build/#ssh
[tag]: https://docs.docker.com/reference/cli/docker/image/build/#tag
[target]: https://docs.docker.com/reference/cli/docker/image/build/#target
[output]: https://docs.docker.com/engine/reference/commandline/buildx_build/#output
[platform]: https://docs.docker.com/engine/reference/commandline/buildx_build/#platform
[run_mount_secret]: https://docs.docker.com/engine/reference/builder/#run---mounttypesecret
[secret]: https://docs.docker.com/engine/reference/commandline/buildx_build/#secret
[ssh]: https://docs.docker.com/engine/reference/commandline/buildx_build/#ssh
[tag]: https://docs.docker.com/engine/reference/commandline/build/#tag
[target]: https://docs.docker.com/engine/reference/commandline/build/#target
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc

View File

@@ -3,7 +3,6 @@ package main
import (
"log"
"os"
"strings"
"github.com/docker/buildx/commands"
clidocstool "github.com/docker/cli-docs-tool"
@@ -27,28 +26,6 @@ type options struct {
formats []string
}
// fixUpExperimentalCLI trims the " (EXPERIMENTAL)" suffix from the CLI output,
// as docs.docker.com already displays "experimental (CLI)",
//
// https://github.com/docker/buildx/pull/2188#issuecomment-1889487022
func fixUpExperimentalCLI(cmd *cobra.Command) {
const (
annotationExperimentalCLI = "experimentalCLI"
suffixExperimental = " (EXPERIMENTAL)"
)
if _, ok := cmd.Annotations[annotationExperimentalCLI]; ok {
cmd.Short = strings.TrimSuffix(cmd.Short, suffixExperimental)
}
cmd.Flags().VisitAll(func(f *pflag.Flag) {
if _, ok := f.Annotations[annotationExperimentalCLI]; ok {
f.Usage = strings.TrimSuffix(f.Usage, suffixExperimental)
}
})
for _, c := range cmd.Commands() {
fixUpExperimentalCLI(c)
}
}
func gen(opts *options) error {
log.SetFlags(0)
@@ -80,8 +57,6 @@ func gen(opts *options) error {
return err
}
case "yaml":
// fix up is needed only for yaml (used for generating docs.docker.com contents)
fixUpExperimentalCLI(cmd)
if err = c.GenYamlTree(cmd); err != nil {
return err
}

View File

@@ -19,13 +19,11 @@ your environment.
$ export BUILDX_EXPERIMENTAL=1
```
To start a debug session for a build, you can use the `buildx debug` command with the `--invoke` flag to specify a command to launch in the resulting image.
The `buildx debug` command provides a `buildx debug build` subcommand, which offers the same features as the normal `buildx build` command but lets you launch a debugger session after the build.
Arguments available after `buildx debug build` are the same as for the normal `buildx build`.
To start a debug session for a build, you can use the `--invoke` flag with the
build command to specify a command to launch in the resulting image.
```console
$ docker buildx debug --invoke /bin/sh build .
$ docker buildx build --invoke /bin/sh .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
@@ -58,16 +56,16 @@ Supported keys are `args` (can be JSON array format), `entrypoint` (can be JSON
Example:
```
$ docker buildx debug --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' build .
$ docker buildx build --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' .
```
#### `on` flag
#### `on-error`
If you want to start a debug session when a build fails, you can use
`--on=error` to start a debug session when the build fails.
`--invoke=on-error` to start a debug session when the build fails.
```console
$ docker buildx debug --on=error build .
$ docker buildx build --invoke on-error .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
@@ -87,13 +85,13 @@ Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Pr
This allows you to explore the state of the image when the build failed.
#### Launch the debug session directly with `buildx debug` subcommand
#### `debug-shell`
If you want to drop into a debug session without first starting the build, you
can use the `buildx debug` command to start a debug session.
can use `--invoke=debug-shell` to start a debug session.
```
$ docker buildx debug
$ docker buildx build --invoke debug-shell .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
(buildx)
@@ -118,12 +116,12 @@ Available commands are:
disconnect disconnect a client from a buildx server. Specific session ID can be specified an arg
exec execute a process in the interactive container
exit exits monitor
help shows this message. Optionally pass a command name as an argument to print the detailed usage.
help shows this message
kill kill buildx server
list list buildx sessions
ps list processes invoked by "exec". Use "attach" to attach IO to that process
reload reloads the context and build it
rollback re-runs the interactive container with the step's rootfs contents
rollback re-runs the interactive container with initial rootfs contents
```
## Build controllers
@@ -137,15 +135,15 @@ To detach the build process from the CLI, you can use the `--detach=true` flag w
the build command.
```console
$ docker buildx debug --invoke /bin/sh build --detach=true .
$ docker buildx build --detach=true --invoke /bin/sh .
```
If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug` command to
build, then you can attach to it using the `buildx debug-shell` subcommand to
immediately enter the monitor mode.
```console
$ docker buildx debug
$ docker buildx debug-shell
[+] Building 0.0s (1/1) FINISHED
=> [internal] connecting to remote controller
(buildx) list

View File

@@ -1,6 +1,6 @@
# buildx
```text
```
docker buildx [OPTIONS] COMMAND
```
@@ -9,22 +9,21 @@ Extended build capabilities with BuildKit
### Subcommands
| Name | Description |
|:-------------------------------------|:------------------------------------------------|
| [`bake`](buildx_bake.md) | Build from a file |
| [`build`](buildx_build.md) | Start a build |
| [`create`](buildx_create.md) | Create a new builder instance |
| [`debug`](buildx_debug.md) | Start debugger (EXPERIMENTAL) |
| [`dial-stdio`](buildx_dial-stdio.md) | Proxy current stdio streams to builder instance |
| [`du`](buildx_du.md) | Disk usage |
| [`imagetools`](buildx_imagetools.md) | Commands to work on images in registry |
| [`inspect`](buildx_inspect.md) | Inspect current builder instance |
| [`ls`](buildx_ls.md) | List builder instances |
| [`prune`](buildx_prune.md) | Remove build cache |
| [`rm`](buildx_rm.md) | Remove one or more builder instances |
| [`stop`](buildx_stop.md) | Stop builder instance |
| [`use`](buildx_use.md) | Set the current builder instance |
| [`version`](buildx_version.md) | Show buildx version information |
| Name | Description |
|:---------------------------------------|:---------------------------------------|
| [`bake`](buildx_bake.md) | Build from a file |
| [`build`](buildx_build.md) | Start a build |
| [`create`](buildx_create.md) | Create a new builder instance |
| [`debug-shell`](buildx_debug-shell.md) | Start a monitor |
| [`du`](buildx_du.md) | Disk usage |
| [`imagetools`](buildx_imagetools.md) | Commands to work on images in registry |
| [`inspect`](buildx_inspect.md) | Inspect current builder instance |
| [`ls`](buildx_ls.md) | List builder instances |
| [`prune`](buildx_prune.md) | Remove build cache |
| [`rm`](buildx_rm.md) | Remove a builder instance |
| [`stop`](buildx_stop.md) | Stop builder instance |
| [`use`](buildx_use.md) | Set the current builder instance |
| [`version`](buildx_version.md) | Show buildx version information |
### Options

View File

@@ -1,6 +1,6 @@
# buildx bake
```text
```
docker buildx bake [OPTIONS] [TARGET...]
```
@@ -33,7 +33,7 @@ Build from a file
## Description
Bake is a high-level build command. Each specified target runs in parallel
Bake is a high-level build command. Each specified target will run in parallel
as part of the build.
Read [High-level build options with Bake](https://docs.docker.com/build/bake/)
@@ -54,8 +54,8 @@ Same as [`buildx --builder`](buildx.md#builder).
### <a name="file"></a> Specify a build definition file (-f, --file)
Use the `-f` / `--file` option to specify the build definition file to use.
The file can be an HCL, JSON or Compose file. If multiple files are specified,
all are read and the build configurations are combined.
The file can be an HCL, JSON or Compose file. If multiple files are specified
they are all read and configurations are combined.
You can pass the names of the targets to build, to build only specific target(s).
The following example builds the `db` and `webapp-release` targets that are
@@ -90,9 +90,9 @@ $ docker buildx bake -f docker-bake.dev.hcl db webapp-release
See the [Bake file reference](https://docs.docker.com/build/bake/reference/)
for more details.
### <a name="no-cache"></a> Don't use cache when building the image (--no-cache)
### <a name="no-cache"></a> Do not use cache when building the image (--no-cache)
Same as `build --no-cache`. Don't use cache when building the image.
Same as `build --no-cache`. Do not use cache when building the image.
### <a name="print"></a> Print the options without building (--print)
@@ -154,7 +154,7 @@ $ docker buildx bake --set *.platform=linux/arm64 # overrides platform for a
$ docker buildx bake --set foo*.no-cache # bypass caching only for targets starting with 'foo'
```
You can override the following fields:
Complete list of overridable fields:
* `args`
* `cache-from`

View File

@@ -1,6 +1,6 @@
# buildx build
```text
```
docker buildx build [OPTIONS] PATH | URL | -
```
@@ -13,44 +13,44 @@ Start a build
### Options
| Name | Type | Default | Description |
|:---------------------------------------------------------------------------------------------------------------------------------------------------|:--------------|:----------|:----------------------------------------------------------------------------------------------------|
| [`--add-host`](https://docs.docker.com/reference/cli/docker/image/build/#add-host) | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
| [`--allow`](#allow) | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
| [`--annotation`](#annotation) | `stringArray` | | Add annotation to the image |
| [`--attest`](#attest) | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
| [`--build-arg`](#build-arg) | `stringArray` | | Set build-time variables |
| [`--build-context`](#build-context) | `stringArray` | | Additional build contexts (e.g., name=path) |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`--cache-from`](#cache-from) | `stringArray` | | External cache sources (e.g., `user/app:cache`, `type=local,src=path/to/dir`) |
| [`--cache-to`](#cache-to) | `stringArray` | | Cache export destinations (e.g., `user/app:cache`, `type=local,dest=path/to/dir`) |
| [`--cgroup-parent`](https://docs.docker.com/reference/cli/docker/image/build/#cgroup-parent) | `string` | | Set the parent cgroup for the `RUN` instructions during build |
| `--detach` | | | Detach buildx server (supported only on linux) (EXPERIMENTAL) |
| [`-f`](https://docs.docker.com/reference/cli/docker/image/build/#file), [`--file`](https://docs.docker.com/reference/cli/docker/image/build/#file) | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to the file |
| `--label` | `stringArray` | | Set metadata for an image |
| [`--load`](#load) | | | Shorthand for `--output=type=docker` |
| [`--metadata-file`](#metadata-file) | `string` | | Write build result metadata to the file |
| `--network` | `string` | `default` | Set the networking mode for the `RUN` instructions during build |
| `--no-cache` | | | Do not use cache when building the image |
| [`--no-cache-filter`](#no-cache-filter) | `stringArray` | | Do not cache specified stages |
| [`-o`](#output), [`--output`](#output) | `stringArray` | | Output destination (format: `type=local,dest=path`) |
| [`--platform`](#platform) | `stringArray` | | Set target platform for build |
| `--print` | `string` | | Print result of information request (e.g., outline, targets) (EXPERIMENTAL) |
| [`--progress`](#progress) | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| [`--provenance`](#provenance) | `string` | | Shorthand for `--attest=type=provenance` |
| `--pull` | | | Always attempt to pull all referenced images |
| [`--push`](#push) | | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | | | Suppress the build output and print image ID on success |
| `--root` | `string` | | Specify root directory of server to connect (EXPERIMENTAL) |
| [`--sbom`](#sbom) | `string` | | Shorthand for `--attest=type=sbom` |
| [`--secret`](#secret) | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) (EXPERIMENTAL) |
| [`--shm-size`](#shm-size) | `bytes` | `0` | Shared memory size for build containers |
| [`--ssh`](#ssh) | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| [`-t`](https://docs.docker.com/reference/cli/docker/image/build/#tag), [`--tag`](https://docs.docker.com/reference/cli/docker/image/build/#tag) | `stringArray` | | Name and optionally a tag (format: `name:tag`) |
| [`--target`](https://docs.docker.com/reference/cli/docker/image/build/#target) | `string` | | Set the target build stage to build |
| [`--ulimit`](#ulimit) | `ulimit` | | Ulimit options |
| Name | Type | Default | Description |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------|:----------|:----------------------------------------------------------------------------------------------------|
| [`--add-host`](https://docs.docker.com/engine/reference/commandline/build/#add-host) | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
| [`--allow`](#allow) | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
| [`--attest`](#attest) | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
| [`--build-arg`](#build-arg) | `stringArray` | | Set build-time variables |
| [`--build-context`](#build-context) | `stringArray` | | Additional build contexts (e.g., name=path) |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`--cache-from`](#cache-from) | `stringArray` | | External cache sources (e.g., `user/app:cache`, `type=local,src=path/to/dir`) |
| [`--cache-to`](#cache-to) | `stringArray` | | Cache export destinations (e.g., `user/app:cache`, `type=local,dest=path/to/dir`) |
| [`--cgroup-parent`](https://docs.docker.com/engine/reference/commandline/build/#cgroup-parent) | `string` | | Optional parent cgroup for the container |
| `--detach` | | | Detach buildx server (supported only on linux) |
| [`-f`](https://docs.docker.com/engine/reference/commandline/build/#file), [`--file`](https://docs.docker.com/engine/reference/commandline/build/#file) | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to the file |
| `--invoke` | `string` | | Invoke a command after the build |
| `--label` | `stringArray` | | Set metadata for an image |
| [`--load`](#load) | | | Shorthand for `--output=type=docker` |
| [`--metadata-file`](#metadata-file) | `string` | | Write build result metadata to the file |
| `--network` | `string` | `default` | Set the networking mode for the `RUN` instructions during build |
| `--no-cache` | | | Do not use cache when building the image |
| `--no-cache-filter` | `stringArray` | | Do not cache specified stages |
| [`-o`](#output), [`--output`](#output) | `stringArray` | | Output destination (format: `type=local,dest=path`) |
| [`--platform`](#platform) | `stringArray` | | Set target platform for build |
| `--print` | `string` | | Print result of information request (e.g., outline, targets) |
| [`--progress`](#progress) | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| [`--provenance`](#provenance) | `string` | | Shorthand for `--attest=type=provenance` |
| `--pull` | | | Always attempt to pull all referenced images |
| [`--push`](#push) | | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | | | Suppress the build output and print image ID on success |
| `--root` | `string` | | Specify root directory of server to connect |
| [`--sbom`](#sbom) | `string` | | Shorthand for `--attest=type=sbom` |
| [`--secret`](#secret) | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) |
| [`--shm-size`](#shm-size) | `bytes` | `0` | Size of `/dev/shm` |
| [`--ssh`](#ssh) | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| [`-t`](https://docs.docker.com/engine/reference/commandline/build/#tag), [`--tag`](https://docs.docker.com/engine/reference/commandline/build/#tag) | `stringArray` | | Name and optionally a tag (format: `name:tag`) |
| [`--target`](https://docs.docker.com/engine/reference/commandline/build/#target) | `string` | | Set the target build stage to build |
| [`--ulimit`](#ulimit) | `ulimit` | | Ulimit options |
<!---MARKER_GEN_END-->
@@ -64,60 +64,14 @@ The `buildx build` command starts a build using BuildKit. This command is simila
to the UI of `docker build` command and takes the same flags and arguments.
For documentation on most of these flags, refer to the [`docker build`
documentation](https://docs.docker.com/reference/cli/docker/image/build/).
This page describes a subset of the new flags.
documentation](https://docs.docker.com/engine/reference/commandline/build/). In
here we'll document a subset of the new flags.
## Examples
### <a name="annotation"></a> Create annotations (--annotation)
```text
--annotation="key=value"
--annotation="[type:]key=value"
```
Add OCI annotations to the image index, manifest, or descriptor.
The following example adds the `foo=bar` annotation to the image manifests:
```console
$ docker buildx build -t TAG --annotation "foo=bar" --push .
```
You can optionally add a type prefix to specify the level of the annotation. By
default, the image manifest is annotated. The following example adds the
`foo=bar` annotation to the image index instead of the manifests:
```console
$ docker buildx build -t TAG --annotation "index:foo=bar" --push .
```
You can specify multiple types, separated by a comma (`,`), to add the annotation
to multiple image components. The following example adds the `foo=bar`
annotation to the image index, descriptors, and manifests:
```console
$ docker buildx build -t TAG --annotation "index,manifest,manifest-descriptor:foo=bar" --push .
```
You can also specify a platform qualifier in square brackets (`[os/arch]`) in
the type prefix, to apply the annotation to a subset of manifests with the
matching platform. The following example adds the `foo=bar` annotation only to
the manifest with the `linux/amd64` platform:
```console
$ docker buildx build -t TAG --annotation "manifest[linux/amd64]:foo=bar" --push .
```
Wildcards are not supported in the platform qualifier; you can't specify a type
prefix like `manifest[linux/*]` to add annotations only to manifests that have
`linux` as the OS platform.
For more information about annotations, see
[Annotations](https://docs.docker.com/build/building/annotations/).
### <a name="attest"></a> Create attestations (--attest)
```text
```
--attest=type=sbom,...
--attest=type=provenance,...
```
@@ -144,7 +98,7 @@ BuildKit currently supports:
### <a name="allow"></a> Allow extra privileged entitlement (--allow)
```text
```
--allow=ENTITLEMENT
```
@@ -152,10 +106,12 @@ Allow extra privileged entitlement. List of entitlements:
- `network.host` - Allows executions with host networking.
- `security.insecure` - Allows executions without sandbox. See
[related Dockerfile extensions](https://docs.docker.com/reference/dockerfile/#run---securitysandbox).
[related Dockerfile extensions](https://docs.docker.com/engine/reference/builder/#run---securitysandbox).
For entitlements to be enabled, the BuildKit daemon also needs to allow them
with `--allow-insecure-entitlement` (see [`create --buildkitd-flags`](buildx_create.md#buildkitd-flags)).
For entitlements to be enabled, the `buildkitd` daemon also needs to allow them
with `--allow-insecure-entitlement` (see [`create --buildkitd-flags`](buildx_create.md#buildkitd-flags))
**Examples**
```console
$ docker buildx create --use --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure'
@@ -164,23 +120,26 @@ $ docker buildx build --allow security.insecure .
### <a name="build-arg"></a> Set build-time variables (--build-arg)
Same as [`docker build` command](https://docs.docker.com/reference/cli/docker/image/build/#build-arg).
Same as [`docker build` command](https://docs.docker.com/engine/reference/commandline/build/#build-arg).
There are also useful built-in build arguments, such as:
There are also useful built-in build args like:
* `BUILDKIT_CONTEXT_KEEP_GIT_DIR=<bool>`: trigger git context to keep the `.git` directory
* `BUILDKIT_INLINE_CACHE=<bool>`: inline cache metadata to image config or not
* `BUILDKIT_MULTI_PLATFORM=<bool>`: opt into deterministic output regardless of multi-platform output or not
* `BUILDKIT_CONTEXT_KEEP_GIT_DIR=<bool>` trigger git context to keep the `.git` directory
* `BUILDKIT_INLINE_BUILDINFO_ATTRS=<bool>` inline build info attributes in image config or not
* `BUILDKIT_INLINE_CACHE=<bool>` inline cache metadata to image config or not
* `BUILDKIT_MULTI_PLATFORM=<bool>` opt into deterministic output regardless of multi-platform output or not
```console
$ docker buildx build --build-arg BUILDKIT_MULTI_PLATFORM=1 .
```
Learn more about the built-in build arguments in the [Dockerfile reference docs](https://docs.docker.com/reference/dockerfile/#buildkit-built-in-build-args).
> **Note**
>
> More built-in build args can be found in [Dockerfile reference docs](https://docs.docker.com/engine/reference/builder/#buildkit-built-in-build-args).
### <a name="build-context"></a> Additional build contexts (--build-context)
```text
```
--build-context=name=VALUE
```
@@ -208,7 +167,7 @@ FROM alpine
COPY --from=project myfile /
```
#### <a name="source-oci-layout"></a> Use an OCI layout directory as build context
#### <a name="source-oci-layout"></a> Source image from OCI layout directory
Source an image from a local [OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md),
either by tag, or by digest:
@@ -236,7 +195,7 @@ Same as [`buildx --builder`](buildx.md#builder).
### <a name="cache-from"></a> Use an external cache source for a build (--cache-from)
```text
```
--cache-from=[NAME|type=TYPE[,KEY=VALUE]]
```
@@ -272,7 +231,7 @@ More info about cache exporters and available attributes: https://github.com/mob
### <a name="cache-to"></a> Export build cache to an external cache destination (--cache-to)
```text
```
--cache-to=[NAME|type=TYPE[,KEY=VALUE]]
```
@@ -289,8 +248,9 @@ Export build cache to an external cache destination. Supported types are
- [`s3` type](https://github.com/moby/buildkit#s3-cache-experimental) exports
cache to a S3 bucket.
The `docker` driver only supports cache exports using the `inline` and `local`
cache backends.
`docker` driver currently only supports exporting inline cache metadata to image
configuration. Alternatively, `--build-arg BUILDKIT_INLINE_CACHE=1` can be used
to trigger inline cache exporter.
Attribute key:
@@ -324,10 +284,28 @@ directory of the specified file must already exist and be writable.
$ docker buildx build --load --metadata-file metadata.json .
$ cat metadata.json
```
```json
{
"buildx.build.ref": "mybuilder/mybuilder0/0fjb6ubs52xx3vygf6fgdl611",
"containerimage.buildinfo": {
"frontend": "dockerfile.v0",
"attrs": {
"context": "https://github.com/crazy-max/buildkit-buildsources-test.git#master",
"filename": "Dockerfile",
"source": "docker/dockerfile:master"
},
"sources": [
{
"type": "docker-image",
"ref": "docker.io/docker/buildx-bin:0.6.1@sha256:a652ced4a4141977c7daaed0a074dcd9844a78d7d2615465b12f433ae6dd29f0",
"pin": "sha256:a652ced4a4141977c7daaed0a074dcd9844a78d7d2615465b12f433ae6dd29f0"
},
{
"type": "docker-image",
"ref": "docker.io/library/alpine:3.13",
"pin": "sha256:026f721af4cf2843e07bba648e158fb35ecc876d822130633cc49f707f0fc88c"
}
]
},
"containerimage.config.digest": "sha256:2937f66a9722f7f4a2df583de2f8cb97fc9196059a410e7f00072fc918930e66",
"containerimage.descriptor": {
"annotations": {
@@ -342,71 +320,16 @@ $ cat metadata.json
}
```
### <a name="no-cache-filter"></a> Ignore build cache for specific stages (--no-cache-filter)
The `--no-cache-filter` flag lets you specify one or more stages of a multi-stage
Dockerfile for which the build cache should be ignored. To specify multiple stages,
use a comma-separated syntax:
```console
$ docker buildx build --no-cache-filter stage1,stage2,stage3 .
```
For example, the following Dockerfile contains four stages:
- `base`
- `install`
- `test`
- `release`
```dockerfile
# syntax=docker/dockerfile:1
FROM oven/bun:1 as base
WORKDIR /app
FROM base AS install
WORKDIR /temp/dev
RUN --mount=type=bind,source=package.json,target=package.json \
--mount=type=bind,source=bun.lockb,target=bun.lockb \
bun install --frozen-lockfile
FROM base AS test
COPY --from=install /temp/dev/node_modules node_modules
COPY . .
RUN bun test
FROM base AS release
ENV NODE_ENV=production
COPY --from=install /temp/dev/node_modules node_modules
COPY . .
ENTRYPOINT ["bun", "run", "index.js"]
```
To ignore the cache for the `install` stage:
```console
$ docker buildx build --no-cache-filter install .
```
To ignore the cache for the `install` and `release` stages:
```console
$ docker buildx build --no-cache-filter install,release .
```
The arguments for the `--no-cache-filter` flag must be names of stages.
### <a name="output"></a> Set the export action for the build result (-o, --output)
```text
```
-o, --output=[PATH,-,type=TYPE[,KEY=VALUE]
```
Sets the export action for the build result. In `docker build` all builds finish
by creating a container image and exporting it to `docker images`. `buildx` makes
this step configurable, allowing results to be exported directly to the client,
OCI image tarballs, registry etc.
oci image tarballs, registry etc.
Buildx with `docker` driver currently only supports local, tarball exporter and
image exporter. `docker-container` driver supports all the exporters.
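For illustration, two common invocations; the destination paths are arbitrary:

```console
$ docker buildx build -o type=local,dest=./build-output .   # write the result filesystem to a local directory
$ docker buildx build -o type=tar,dest=result.tar .         # write the result filesystem as a tarball
```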
@@ -461,15 +384,15 @@ The `docker` export type writes the single-platform result image as a [Docker im
specification](https://github.com/docker/docker/blob/v20.10.2/image/spec/v1.2.md)
tarball on the client. Tarballs created by this exporter are also OCI compatible.
The default image store in Docker Engine doesn't support loading multi-platform
images. You can enable the containerd image store, or push multi-platform images
directly to a registry; see [`registry`](#registry).
Currently, multi-platform images cannot be exported with the `docker` export type.
The most common use case for multi-platform images is to directly push to a registry
(see [`registry`](#registry)).
Attribute keys:
- `dest` - destination path where tarball will be written. If not specified,
the tar will be loaded automatically to the local image store.
- `context` - name for the Docker context where to import the result
- `dest` - destination path where tarball will be written. If not specified the
tar will be loaded automatically to the current docker instance.
- `context` - name for the docker context where to import the result
#### `image`
@@ -480,7 +403,7 @@ can be automatically pushed to a registry by specifying attributes.
Attribute keys:
- `name` - name (references) for the new image.
- `push` - Boolean to automatically push the image.
- `push` - boolean to automatically push the image.
#### `registry`
@@ -488,7 +411,7 @@ The `registry` exporter is a shortcut for `type=image,push=true`.
### <a name="platform"></a> Set the target platforms for the build (--platform)
```text
```
--platform=value[,value]
```
@@ -517,12 +440,12 @@ and `arm` architectures. You can see what runtime platforms your current builder
instance supports by running `docker buildx inspect --bootstrap`.
Inside a `Dockerfile`, you can access the current platform value through
`TARGETPLATFORM` build argument. Refer to the [`docker build`
documentation](https://docs.docker.com/reference/dockerfile/#automatic-platform-args-in-the-global-scope)
`TARGETPLATFORM` build argument. Please refer to the [`docker build`
documentation](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)
for the full description of automatic platform argument variants .
You can find the formatting definition for the platform specifier in the
[containerd source code](https://github.com/containerd/containerd/blob/v1.4.3/platforms/platforms.go#L63).
The formatting for the platform specifier is defined in the [containerd source
code](https://github.com/containerd/containerd/blob/v1.4.3/platforms/platforms.go#L63).
```console
$ docker buildx build --platform=linux/arm64 .
@@ -532,11 +455,11 @@ $ docker buildx build --platform=darwin .
### <a name="progress"></a> Set type of progress output (--progress)
```text
```
--progress=VALUE
```
Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container
Set type of progress output (auto, plain, tty). Use plain to show container
output (default "auto").
> **Note**
@@ -570,18 +493,15 @@ provenance attestations for the build result. For example,
`--provenance=mode=max` can be used as an abbreviation for
`--attest=type=provenance,mode=max`.
Additionally, `--provenance` can be used with Boolean values to enable or disable
provenance attestations. For example, `--provenance=false` disables all provenance attestations,
while `--provenance=true` enables all provenance attestations.
Additionally, `--provenance` can be used with boolean values to broadly enable
or disable provenance attestations. For example, `--provenance=false` can be
used to disable all provenance attestations, while `--provenance=true` can be
used to enable all provenance attestations.
By default, a minimal provenance attestation will be created for the build
result. Note that the default image store in Docker Engine doesn't support
attestations. Provenance attestations only persist for images pushed directly
to a registry if you use the default image store. Alternatively, you can switch
to using the containerd image store.
result, which will only be attached for images pushed to registries.
For more information about provenance attestations, see
[here](https://docs.docker.com/build/attestations/slsa-provenance/).
For more information, see [here](https://docs.docker.com/build/attestations/slsa-provenance/).
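For illustration, based on the shorthands described above; the image name is a placeholder:

```console
$ docker buildx build --provenance=false .                            # disable provenance attestations
$ docker buildx build --provenance=mode=max --push -t myorg/myapp .   # max provenance, image name is a placeholder
```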
### <a name="push"></a> Push the build result to a registry (--push)
@@ -595,24 +515,20 @@ attestations for the build result. For example,
`--sbom=generator=<user>/<generator-image>` can be used as an abbreviation for
`--attest=type=sbom,generator=<user>/<generator-image>`.
Additionally, `--sbom` can be used with Boolean values to enable or disable
SBOM attestations. For example, `--sbom=false` disables all SBOM attestations.
Note that the default image store in Docker Engine doesn't support
attestations. Attestations only persist for images pushed directly
to a registry if you use the default image store. Alternatively, you can switch
to using the containerd image store.
Additionally, `--sbom` can be used with boolean values to broadly enable or
disable SBOM attestations. For example, `--sbom=false` can be used to disable
all SBOM attestations.
For more information, see [here](https://docs.docker.com/build/attestations/sbom/).
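For illustration, using the Boolean shorthand described above; the image name is a placeholder:

```console
$ docker buildx build --sbom=true --push -t myorg/myapp .   # image name is a placeholder
```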
### <a name="secret"></a> Secret to expose to the build (--secret)
```text
```
--secret=[type=TYPE[,KEY=VALUE]
```
Exposes secret to the build. The secret can be used by the build using
[`RUN --mount=type=secret` mount](https://docs.docker.com/reference/dockerfile/#run---mounttypesecret).
[`RUN --mount=type=secret` mount](https://docs.docker.com/engine/reference/builder/#run---mounttypesecret).
If `type` is unset it will be detected. Supported types are:
@@ -620,7 +536,7 @@ If `type` is unset it will be detected. Supported types are:
Attribute keys:
- `id` - ID of the secret. Defaults to base name of the `src` path.
- `id` - ID of the secret. Defaults to basename of the `src` path.
- `src`, `source` - Secret filename. `id` used if unset.
```dockerfile
@@ -654,24 +570,15 @@ RUN --mount=type=bind,target=. \
$ SECRET_TOKEN=token docker buildx build --secret id=SECRET_TOKEN .
```
### <a name="shm-size"></a> Shared memory size for build containers (--shm-size)
Sets the size of the shared memory allocated for build containers when using
`RUN` instructions.
### <a name="shm-size"></a> Size of /dev/shm (--shm-size)
The format is `<number><unit>`. `number` must be greater than `0`. Unit is
optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g`
(gigabytes). If you omit the unit, the system uses bytes.
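For example, the following hypothetical invocation allocates 512 megabytes of shared memory for the build containers:

```console
$ docker buildx build --shm-size=512m .
```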
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
### <a name="ssh"></a> SSH agent socket or keys to expose to the build (--ssh)
```text
```
--ssh=default|<id>[=<socket>|<key>[,<key>]]
```
@@ -679,7 +586,7 @@ This can be useful when some commands in your Dockerfile need specific SSH
authentication (e.g., cloning a private repository).
`--ssh` exposes SSH agent socket or keys to the build and can be used with the
[`RUN --mount=type=ssh` mount](https://docs.docker.com/reference/dockerfile/#run---mounttypessh).
[`RUN --mount=type=ssh` mount](https://docs.docker.com/engine/reference/builder/#run---mounttypessh).
Example to access Gitlab using an SSH agent socket:
@@ -702,8 +609,7 @@ $ docker buildx build --ssh default=$SSH_AUTH_SOCK .
### <a name="ulimit"></a> Set ulimits (--ulimit)
`--ulimit` overrides the default ulimits of build's containers when using `RUN`
instructions and are specified with a soft and hard limit as such:
`--ulimit` is specified with a soft and hard limit as such:
`<type>=<soft limit>[:<hard limit>]`, for example:
```console
@@ -712,12 +618,6 @@ $ docker buildx build --ulimit nofile=1024:1024 .
> **Note**
>
> If you don't provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they're inherited from
> If you do not provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they are inherited from
> the default `ulimits` set on the daemon.
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.

View File

@@ -1,6 +1,6 @@
# buildx create
```text
```
docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]
```
@@ -9,19 +9,19 @@ Create a new builder instance
### Options
| Name | Type | Default | Description |
|:------------------------------------------|:--------------|:--------|:----------------------------------------------------------------------|
| [`--append`](#append) | | | Append a node to builder instead of changing it |
| `--bootstrap` | | | Boot builder after creation |
| [`--buildkitd-config`](#buildkitd-config) | `string` | | BuildKit daemon config file |
| [`--buildkitd-flags`](#buildkitd-flags) | `string` | | BuildKit daemon flags |
| [`--driver`](#driver) | `string` | | Driver to use (available: `docker-container`, `kubernetes`, `remote`) |
| [`--driver-opt`](#driver-opt) | `stringArray` | | Options for the driver |
| [`--leave`](#leave) | | | Remove a node from builder instead of changing it |
| [`--name`](#name) | `string` | | Builder instance name |
| [`--node`](#node) | `string` | | Create/modify node with given name |
| [`--platform`](#platform) | `stringArray` | | Fixed platforms for current node |
| [`--use`](#use) | | | Set the current builder instance |
| Name | Type | Default | Description |
|:----------------------------------------|:--------------|:--------|:----------------------------------------------------------------------|
| [`--append`](#append) | | | Append a node to builder instead of changing it |
| `--bootstrap` | | | Boot builder after creation |
| [`--buildkitd-flags`](#buildkitd-flags) | `string` | | Flags for buildkitd daemon |
| [`--config`](#config) | `string` | | BuildKit config file |
| [`--driver`](#driver) | `string` | | Driver to use (available: `docker-container`, `kubernetes`, `remote`) |
| [`--driver-opt`](#driver-opt) | `stringArray` | | Options for the driver |
| [`--leave`](#leave) | | | Remove a node from builder instead of changing it |
| [`--name`](#name) | `string` | | Builder instance name |
| [`--node`](#node) | `string` | | Create/modify node with given name |
| [`--platform`](#platform) | `stringArray` | | Fixed platforms for current node |
| [`--use`](#use) | | | Set the current builder instance |
<!---MARKER_GEN_END-->
@@ -29,9 +29,9 @@ Create a new builder instance
## Description
Create makes a new builder instance pointing to a Docker context or endpoint,
Create makes a new builder instance pointing to a docker context or endpoint,
where context is the name of a context from `docker context ls` and endpoint is
the address for Docker socket (eg. `DOCKER_HOST` value).
the address for docker socket (eg. `DOCKER_HOST` value).
By default, the current Docker configuration is used for determining the
context/endpoint value.
@@ -55,18 +55,31 @@ $ docker buildx create --name eager_beaver --append mycontext2
eager_beaver
```
### <a name="buildkitd-config"></a> Specify a configuration file for the BuildKit daemon (--buildkitd-config)
### <a name="buildkitd-flags"></a> Specify options for the buildkitd daemon (--buildkitd-flags)
```text
--buildkitd-config FILE
```
--buildkitd-flags FLAGS
```
Specifies the configuration file for the BuildKit daemon to use. The
configuration can be overridden by [`--buildkitd-flags`](#buildkitd-flags).
See an [example BuildKit daemon configuration file](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md).
Adds flags when starting the buildkitd daemon. They take precedence over the
configuration file specified by [`--config`](#config). See `buildkitd --help`
for the available flags.
If you don't specify a configuration file, Buildx looks for one by default in:
```
--buildkitd-flags '--debug --debugaddr 0.0.0.0:6666'
```
### <a name="config"></a> Specify a configuration file for the buildkitd daemon (--config)
```
--config FILE
```
Specifies the configuration file for the buildkitd daemon to use. The configuration
can be overridden by [`--buildkitd-flags`](#buildkitd-flags).
See an [example buildkitd configuration file](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md).
If the configuration file is not specified, will look for one by default in:
* `$BUILDX_CONFIG/buildkitd.default.toml`
* `$DOCKER_CONFIG/buildx/buildkitd.default.toml`
* `~/.docker/buildx/buildkitd.default.toml`
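As a sketch (the builder name and file path are placeholders; the flag is shown here as `--config`, which newer releases rename to `--buildkitd-config` as reflected in this diff), a container builder can be created with a custom daemon configuration like this:

```console
$ docker buildx create --driver docker-container --config ./buildkitd.toml --name mybuilder
```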
@@ -76,62 +89,25 @@ certificates for registries in the `buildkitd.toml` configuration, the files
will be copied into the container under `/etc/buildkit/certs` and configuration
will be updated to reflect that.
### <a name="buildkitd-flags"></a> Specify options for the BuildKit daemon (--buildkitd-flags)
```text
--buildkitd-flags FLAGS
```
Adds flags when starting the BuildKit daemon. They take precedence over the
configuration file specified by [`--buildkitd-config`](#buildkitd-config). See
`buildkitd --help` for the available flags.
```text
--buildkitd-flags '--debug --debugaddr 0.0.0.0:6666'
```
#### BuildKit daemon network mode
You can specify the network mode for the BuildKit daemon with either the
configuration file specified by [`--buildkitd-config`](#buildkitd-config) using the
`worker.oci.networkMode` option or `--oci-worker-net` flag here. The default
value is `auto` and can be one of `bridge`, `cni`, `host`:
```text
--buildkitd-flags '--oci-worker-net bridge'
```
> **Note**
>
> Network mode "bridge" is supported since BuildKit v0.13 and will become the
> default in v0.14.
### <a name="driver"></a> Set the builder driver to use (--driver)
```text
```
--driver DRIVER
```
Sets the builder driver to be used. A driver is a configuration of a BuildKit
backend. Buildx supports the following drivers:
* `docker` (default)
* `docker-container`
* `kubernetes`
* `remote`
For more information about build drivers, see [here](https://docs.docker.com/build/drivers/).
Sets the builder driver to be used. There are two available drivers, each have
their own specificities.
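For example (the builder name is a placeholder), the following creates a `docker-container` builder and switches to it:

```console
$ docker buildx create --name mybuilder --driver docker-container --use
```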
#### `docker` driver
Uses the builder that is built into the Docker daemon. With this driver,
Uses the builder that is built into the docker daemon. With this driver,
the [`--load`](buildx_build.md#load) flag is implied by default on
`buildx build`. However, building multi-platform images or exporting cache is
not currently supported.
#### `docker-container` driver
Uses a BuildKit container that will be spawned via Docker. With this driver,
Uses a BuildKit container that will be spawned via docker. With this driver,
both building multi-platform images and exporting cache are supported.
Unlike `docker` driver, built images will not automatically appear in
@@ -140,7 +116,7 @@ to achieve that.
#### `kubernetes` driver
Uses Kubernetes pods. With this driver, you can spin up pods with defined
Uses a kubernetes pods. With this driver, you can spin up pods with defined
BuildKit container image to build your images.
Unlike `docker` driver, built images will not automatically appear in
@@ -149,8 +125,8 @@ to achieve that.
#### `remote` driver
Uses a remote instance of BuildKit daemon over an arbitrary connection. With
this driver, you manually create and manage instances of buildkit yourself, and
Uses a remote instance of buildkitd over an arbitrary connection. With this
driver, you manually create and manage instances of buildkit yourself, and
configure buildx to point at it.
Unlike `docker` driver, built images will not automatically appear in
@@ -159,18 +135,48 @@ to achieve that.
### <a name="driver-opt"></a> Set additional driver-specific options (--driver-opt)
```text
```
--driver-opt OPTIONS
```
Passes additional driver-specific options.
For information about available driver options, refer to the detailed
documentation for the specific driver:
* [`docker` driver](https://docs.docker.com/build/drivers/docker/)
* [`docker-container` driver](https://docs.docker.com/build/drivers/docker-container/)
* [`kubernetes` driver](https://docs.docker.com/build/drivers/kubernetes/)
* [`remote` driver](https://docs.docker.com/build/drivers/remote/)
Note: When using quoted values, for example for the `nodeselector` or
`tolerations` options, ensure that quotes are escaped correctly for your shell.
#### `docker` driver
No driver options.
#### `docker-container` driver
- `image=IMAGE` - Sets the container image to be used for running buildkit.
- `network=NETMODE` - Sets the network mode for running the buildkit container.
- `cgroup-parent=CGROUP` - Sets the cgroup parent of the buildkit container if docker is using the "cgroupfs" driver. Defaults to `/docker/buildx`.
#### `kubernetes` driver
- `image=IMAGE` - Sets the container image to be used for running buildkit.
- `namespace=NS` - Sets the Kubernetes namespace. Defaults to the current namespace.
- `replicas=N` - Sets the number of `Pod` replicas. Defaults to 1.
- `requests.cpu` - Sets the request CPU value specified in units of Kubernetes CPU. Example `requests.cpu=100m`, `requests.cpu=2`
- `requests.memory` - Sets the request memory value specified in bytes or with a valid suffix. Example `requests.memory=500Mi`, `requests.memory=4G`
- `limits.cpu` - Sets the limit CPU value specified in units of Kubernetes CPU. Example `limits.cpu=100m`, `limits.cpu=2`
- `limits.memory` - Sets the limit memory value specified in bytes or with a valid suffix. Example `limits.memory=500Mi`, `limits.memory=4G`
- `serviceaccount` - Sets the created pod's service account. Example `serviceaccount=example-sa`
- `"nodeselector=label1=value1,label2=value2"` - Sets the kv of `Pod` nodeSelector. No Defaults. Example `nodeselector=kubernetes.io/arch=arm64`
- `"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"` - Sets the `Pod` tolerations. Accepts the same values as the kube manifest tolera>tions. Key-value pairs are separated by `,`, tolerations are separated by `;`. No Defaults. Example `tolerations=operator=exists`
- `rootless=(true|false)` - Run the container as a non-root user without `securityContext.privileged`. Needs Kubernetes 1.19 or later. [Using Ubuntu host kernel is recommended](https://github.com/moby/buildkit/blob/master/docs/rootless.md). Defaults to false.
- `loadbalance=(sticky|random)` - Load-balancing strategy. If set to "sticky", the pod is chosen using the hash of the context path. Defaults to "sticky"
- `qemu.install=(true|false)` - Install QEMU emulation for multi platforms support.
- `qemu.image=IMAGE` - Sets the QEMU emulation image. Defaults to `tonistiigi/binfmt:latest`
#### `remote` driver
- `key=KEY` - Sets the TLS client key.
- `cert=CERT` - Sets the TLS client certificate to present to buildkitd.
- `cacert=CACERT` - Sets the TLS certificate authority used for validation.
- `servername=SERVER` - Sets the TLS server name to be used in requests (defaults to the endpoint hostname).
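As an illustrative example (the namespace and replica count are arbitrary), driver options can be passed by repeating the flag:

```console
$ docker buildx create --driver kubernetes \
  --driver-opt namespace=buildkit \
  --driver-opt replicas=2
```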
### <a name="leave"></a> Remove a node from a builder (--leave)
@@ -184,7 +190,7 @@ $ docker buildx create --name mybuilder --node mybuilder0 --leave
### <a name="name"></a> Specify the name of the builder (--name)
```text
```
--name NAME
```
@@ -193,17 +199,17 @@ If none is specified, one will be automatically generated.
### <a name="node"></a> Specify the name of the node (--node)
```text
```
--node NODE
```
The `--node` flag specifies the name of the node to be created or modified. If
you don't specify a name, the node name defaults to the name of the builder it
belongs to, with an index number suffix.
none is specified, it is the name of the builder it belongs to, with an index
number suffix.
### <a name="platform"></a> Set the platforms supported by the node (--platform)
```text
```
--platform PLATFORMS
```
@@ -215,7 +221,7 @@ building for the same platform.
```console
$ docker buildx create --platform linux/amd64
$ docker buildx create --platform linux/arm64,linux/arm/v7
$ docker buildx create --platform linux/arm64,linux/arm/v8
```
### <a name="use"></a> Automatically switch to the newly created builder (--use)

View File

@@ -0,0 +1,18 @@
# docker buildx debug-shell
<!---MARKER_GEN_START-->
Start a monitor
### Options
| Name | Type | Default | Description |
|:------------------|:---------|:--------|:-----------------------------------------------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `--detach` | | | Detach buildx server (supported only on linux) |
| `--progress` | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| `--root` | `string` | | Specify root directory of server to connect |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) |
<!---MARKER_GEN_END-->

View File

@@ -1,27 +0,0 @@
# docker buildx debug
<!---MARKER_GEN_START-->
Start debugger (EXPERIMENTAL)
### Subcommands
| Name | Description |
|:---------------------------------|:--------------|
| [`build`](buildx_debug_build.md) | Start a build |
### Options
| Name | Type | Default | Description |
|:------------------|:---------|:--------|:---------------------------------------------------------------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `--detach` | `bool` | `true` | Detach buildx server for the monitor (supported only on linux) (EXPERIMENTAL) |
| `--invoke` | `string` | | Launch a monitor with executing specified command (EXPERIMENTAL) |
| `--on` | `string` | `error` | When to launch the monitor ([always, error]) (EXPERIMENTAL) |
| `--progress` | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`) for the monitor. Use plain to show container output |
| `--root` | `string` | | Specify root directory of server to connect for the monitor (EXPERIMENTAL) |
| `--server-config` | `string` | | Specify buildx server config file for the monitor (used only when launching new server) (EXPERIMENTAL) |
<!---MARKER_GEN_END-->

View File

@@ -1,53 +0,0 @@
# docker buildx debug build
<!---MARKER_GEN_START-->
Start a build
### Aliases
`docker buildx debug build`, `docker buildx debug b`
### Options
| Name | Type | Default | Description |
|:---------------------------------------------------------------------------------------------------------------------------------------------------|:--------------|:----------|:----------------------------------------------------------------------------------------------------|
| [`--add-host`](https://docs.docker.com/reference/cli/docker/image/build/#add-host) | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
| `--allow` | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
| `--annotation` | `stringArray` | | Add annotation to the image |
| `--attest` | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
| `--build-arg` | `stringArray` | | Set build-time variables |
| `--build-context` | `stringArray` | | Additional build contexts (e.g., name=path) |
| `--builder` | `string` | | Override the configured builder instance |
| `--cache-from` | `stringArray` | | External cache sources (e.g., `user/app:cache`, `type=local,src=path/to/dir`) |
| `--cache-to` | `stringArray` | | Cache export destinations (e.g., `user/app:cache`, `type=local,dest=path/to/dir`) |
| [`--cgroup-parent`](https://docs.docker.com/reference/cli/docker/image/build/#cgroup-parent) | `string` | | Set the parent cgroup for the `RUN` instructions during build |
| `--detach` | | | Detach buildx server (supported only on linux) (EXPERIMENTAL) |
| [`-f`](https://docs.docker.com/reference/cli/docker/image/build/#file), [`--file`](https://docs.docker.com/reference/cli/docker/image/build/#file) | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to the file |
| `--label` | `stringArray` | | Set metadata for an image |
| `--load` | | | Shorthand for `--output=type=docker` |
| `--metadata-file` | `string` | | Write build result metadata to the file |
| `--network` | `string` | `default` | Set the networking mode for the `RUN` instructions during build |
| `--no-cache` | | | Do not use cache when building the image |
| `--no-cache-filter` | `stringArray` | | Do not cache specified stages |
| `-o`, `--output` | `stringArray` | | Output destination (format: `type=local,dest=path`) |
| `--platform` | `stringArray` | | Set target platform for build |
| `--print` | `string` | | Print result of information request (e.g., outline, targets) (EXPERIMENTAL) |
| `--progress` | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| `--provenance` | `string` | | Shorthand for `--attest=type=provenance` |
| `--pull` | | | Always attempt to pull all referenced images |
| `--push` | | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | | | Suppress the build output and print image ID on success |
| `--root` | `string` | | Specify root directory of server to connect (EXPERIMENTAL) |
| `--sbom` | `string` | | Shorthand for `--attest=type=sbom` |
| `--secret` | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) (EXPERIMENTAL) |
| `--shm-size` | `bytes` | `0` | Shared memory size for build containers |
| `--ssh` | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| [`-t`](https://docs.docker.com/reference/cli/docker/image/build/#tag), [`--tag`](https://docs.docker.com/reference/cli/docker/image/build/#tag) | `stringArray` | | Name and optionally a tag (format: `name:tag`) |
| [`--target`](https://docs.docker.com/reference/cli/docker/image/build/#target) | `string` | | Set the target build stage to build |
| `--ulimit` | `ulimit` | | Ulimit options |
<!---MARKER_GEN_END-->

View File

@@ -1,47 +0,0 @@
# docker buildx dial-stdio
<!---MARKER_GEN_START-->
Proxy current stdio streams to builder instance
### Options
| Name | Type | Default | Description |
|:-------------|:---------|:--------|:-------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `--platform` | `string` | | Target platform: this is used for node selection |
| `--progress` | `string` | `quiet` | Set type of progress output (auto, plain, tty). |
<!---MARKER_GEN_END-->
## Description
dial-stdio uses the stdin and stdout streams of the command to proxy to the configured builder instance.
It is not intended to be used by humans, but rather by other tools that want to interact with the builder instance via BuildKit API.
## Examples
Example Go program that uses the dial-stdio command to wire up a buildkit client.
This is for example use only and may not be suitable for production use.
```go
client.New(ctx, "", client.WithContextDialer(func(context.Context, string) (net.Conn, error) {
c1, c2 := net.Pipe()
cmd := exec.Command("docker", "buildx", "dial-stdio")
cmd.Stdin = c1
cmd.Stdout = c1
if err := cmd.Start(); err != nil {
c1.Close()
c2.Close()
return nil, err
}
go func() {
cmd.Wait()
c2.Close()
}()
return c2, nil
}))
```

View File

@@ -1,6 +1,6 @@
# buildx du
```text
```
docker buildx du
```
@@ -13,106 +13,13 @@ Disk usage
|:------------------------|:---------|:--------|:-----------------------------------------|
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| `--filter` | `filter` | | Provide filter values |
| [`--verbose`](#verbose) | | | Provide a more verbose output |
| `--verbose` | | | Provide a more verbose output |
<!---MARKER_GEN_END-->
## Examples
### Show disk usage
The `docker buildx du` command shows the disk usage for the currently selected
builder.
```console
$ docker buildx du
ID RECLAIMABLE SIZE LAST ACCESSED
12wgll9os87pazzft8lt0yztp* true 1.704GB 13 days ago
iupsv3it5ubh92aweb7c1wojc* true 1.297GB 36 minutes ago
ek4ve8h4obyv5kld6vicmtqyn true 811.7MB 13 days ago
isovrfnmkelzhtdx942w9vjcb* true 811.7MB 13 days ago
0jty7mjrndi1yo7xkv1baralh true 810.5MB 13 days ago
jyzkefmsysqiaakgwmjgxjpcz* true 810.5MB 13 days ago
z8w1y95jn93gvj92jtaj6uhwk true 318MB 2 weeks ago
rz2zgfcwlfxsxd7d41w2sz2tt true 8.224kB* 43 hours ago
n5bkzpewmk2eiu6hn9tzx18jd true 8.224kB* 43 hours ago
ao94g6vtbzdl6k5zgdmrmnwpt true 8.224kB* 43 hours ago
2pyjep7njm0wh39vcingxb97i true 8.224kB* 43 hours ago
Shared: 115.5MB
Private: 10.25GB
Reclaimable: 10.36GB
Total: 10.36GB
```
If `RECLAIMABLE` is false, the `docker buildx du prune` command won't delete
the record, even if you use `--all`. That's because the record is actively in
use by some component of the builder.
The asterisks (\*) in the default output indicate the following:
- An asterisk next to an ID (`zu7m6evdpebh5h8kfkpw9dlf2*`) indicates that the record
is mutable. The size of the record may change, or another build can take ownership of
it and change or commit to it. If you run the `du` command again, this item may
not be there anymore, or the size might be different.
- An asterisk next to a size (`8.288kB*`) indicates that the record is shared.
Storage of the record is shared with some other resource, typically an image.
If you prune such a record then you will lose build cache but only metadata
will be deleted as the image still needs the actual storage layers.
### <a name="verbose"></a> Use verbose output (--verbose)
The verbose output of the `docker buildx du` command is useful for inspecting
the disk usage records in more detail. The verbose output shows the mutable and
shared states more clearly, as well as additional information about the
corresponding layer.
```console
$ docker buildx du --verbose
...
Last used: 2 days ago
Type: regular
ID: 05d0elirb4mmvpmnzbrp3ssrg
Parent: e8sfdn4mygrg7msi9ak1dy6op
Created at: 2023-11-20 09:53:30.881558721 +0000 UTC
Mutable: false
Reclaimable: true
Shared: false
Size: 0B
Description: [gobase 3/3] WORKDIR /src
Usage count: 3
Last used: 24 hours ago
Type: regular
Reclaimable: 4.453GB
Total: 4.453GB
```
### <a name="builder"></a> Override the configured builder instance (--builder)
Use the `--builder` flag to inspect the disk usage of a particular builder.
```console
$ docker buildx du --builder youthful_shtern
ID RECLAIMABLE SIZE LAST ACCESSED
g41agepgdczekxg2mtw0dujsv* true 1.312GB 47 hours ago
e6ycrsa0bn9akigqgzu0sc6kr true 318MB 47 hours ago
our9zg4ndly65ze1ccczdksiz true 204.9MB 45 hours ago
b7xv3xpxnwupc81tc9ya3mgq6* true 120.6MB 47 hours ago
zihgye15ss6vum3wmck9egdoy* true 79.81MB 2 days ago
aaydharssv1ug98yhuwclkfrh* true 79.81MB 2 days ago
ta1r4vmnjug5dhub76as4kkol* true 74.51MB 47 hours ago
murma9f83j9h8miifbq68udjf* true 74.51MB 47 hours ago
47f961866a49g5y8myz80ixw1* true 74.51MB 47 hours ago
tzh99xtzlaf6txllh3cobag8t true 74.49MB 47 hours ago
ld6laoeuo1kwapysu6afwqybl* true 59.89MB 47 hours ago
yitxizi5kaplpyomqpos2cryp* true 59.83MB 47 hours ago
iy8aa4b7qjn0qmy9wiga9cj8w true 33.65MB 47 hours ago
mci7okeijyp8aqqk16j80dy09 true 19.86MB 47 hours ago
lqvj091he652slxdla4wom3pz true 14.08MB 47 hours ago
fkt31oiv793nd26h42llsjcw7* true 11.87MB 2 days ago
uj802yxtvkcjysnjb4kgwvn2v true 11.68MB 45 hours ago
Reclaimable: 2.627GB
Total: 2.627GB
```
Same as [`buildx --builder`](buildx.md#builder).

View File

@@ -1,6 +1,6 @@
# buildx imagetools
```text
```
docker buildx imagetools [OPTIONS] COMMAND
```
@@ -26,9 +26,8 @@ Commands to work on images in registry
## Description
The `imagetools` commands contains subcommands for working with manifest lists
in container registries. These commands are useful for inspecting manifests
to check multi-platform configuration and attestations.
Imagetools contains commands for working with manifest lists in the registry.
These commands are useful for inspecting multi-platform build results.
## Examples

View File

@@ -1,6 +1,6 @@
# buildx imagetools create
```text
```
docker buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]
```
@@ -11,7 +11,6 @@ Create a new image based on source images
| Name | Type | Default | Description |
|:---------------------------------|:--------------|:--------|:-----------------------------------------------------------------------------------------|
| [`--annotation`](#annotation) | `stringArray` | | Add annotation to the image |
| [`--append`](#append) | | | Append to existing manifest |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`--dry-run`](#dry-run) | | | Show final image instead of pushing |
@@ -31,34 +30,6 @@ specified, create performs a carbon copy.
## Examples
### <a name="annotation"></a> Add annotations to an image (--annotation)
The `--annotation` flag lets you add annotations to the image index, manifest,
and descriptors when creating a new image.
The following command creates a `foo/bar:latest` image with the
`org.opencontainers.image.authors` annotation on the image index.
```console
$ docker buildx imagetools create \
--annotation "index:org.opencontainers.image.authors=dvdksn" \
--tag foo/bar:latest \
foo/bar:alpha foo/bar:beta foo/bar:gamma
```
> **Note**
>
> The `imagetools create` command supports adding annotations to the image
> index and descriptor, using the following type prefixes:
>
> - `index:`
> - `manifest-descriptor:`
>
> It doesn't support annotating manifests or OCI layouts.
For more information about annotations, see
[Annotations](https://docs.docker.com/build/building/annotations/).
### <a name="append"></a> Append new sources to an existing manifest list (--append)
Use the `--append` flag to append the new sources to an existing manifest list
@@ -74,7 +45,7 @@ Use the `--dry-run` flag to not push the image, just show it.
### <a name="file"></a> Read source descriptor from a file (-f, --file)
```text
```
-f FILE or --file FILE
```
@@ -95,7 +66,7 @@ The supported fields for the descriptor are defined in [OCI spec](https://github
### <a name="tag"></a> Set reference for new image (-t, --tag)
```text
```
-t IMAGE or --tag IMAGE
```

View File

@@ -1,6 +1,6 @@
# buildx imagetools inspect
```text
```
docker buildx imagetools inspect [OPTIONS] NAME
```
@@ -123,93 +123,23 @@ Manifests:
#### JSON output
A `json` template function is also available if you want to render fields in
JSON format:
A `json` go template func is also available if you want to render fields as
JSON bytes:
```console
$ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .Manifest}}"
$ docker buildx imagetools inspect crazymax/loop --format "{{json .Manifest}}"
```
```json
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.index.v1+json",
"digest": "sha256:7007b387ccd52bd42a050f2e8020e56e64622c9269bf7bbe257b326fe99daf19",
"size": 855,
"manifests": [
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:fbd10fe50b4b174bb9ea273e2eb9827fa8bf5c88edd8635a93dc83e0d1aecb55",
"size": 673,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:a9de632c16998489fd63fbca42a03431df00639cfb2ecb8982bf9984b83c5b2b",
"size": 839,
"annotations": {
"vnd.docker.reference.digest": "sha256:fbd10fe50b4b174bb9ea273e2eb9827fa8bf5c88edd8635a93dc83e0d1aecb55",
"vnd.docker.reference.type": "attestation-manifest"
},
"platform": {
"architecture": "unknown",
"os": "unknown"
}
}
]
}
```
```console
$ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .Image}}"
```
```json
{
"created": "2022-12-01T11:46:47.713777178Z",
"architecture": "amd64",
"os": "linux",
"config": {
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh"
]
},
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:ded7a220bb058e28ee3254fbba04ca90b679070424424761a53a043b93b612bf",
"sha256:d85d09ab4b4e921666ccc2db8532e857bf3476b7588e52c9c17741d7af14204f"
]
},
"history": [
{
"created": "2022-11-22T22:19:28.870801855Z",
"created_by": "/bin/sh -c #(nop) ADD file:587cae71969871d3c6456d844a8795df9b64b12c710c275295a1182b46f630e7 in / "
},
{
"created": "2022-11-22T22:19:29.008562326Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
"empty_layer": true
},
{
"created": "2022-12-01T11:46:47.713777178Z",
"created_by": "RUN /bin/sh -c apk add curl # buildkit",
"comment": "buildkit.dockerfile.v0"
}
]
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:a9ca35b798e0b198f9be7f3b8b53982e9a6cf96814cb10d78083f40ad8c127f1",
"size": 949
}
```
```console
$ docker buildx imagetools inspect moby/buildkit:master --format "{{json .Manifest}}"
```
```json
{
"schemaVersion": 2,
@@ -354,13 +284,11 @@ $ docker buildx imagetools inspect moby/buildkit:master --format "{{json .Manife
}
```
The following command provides [SLSA](https://github.com/moby/buildkit/blob/master/docs/attestations/slsa-provenance.md)
JSON output:
Following command provides [SLSA](https://github.com/moby/buildkit/blob/master/docs/attestations/slsa-provenance.md) JSON output:
```console
$ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .Provenance}}"
```
```json
{
"SLSA": {
@@ -415,13 +343,11 @@ $ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .Pr
}
```
The following command provides [SBOM](https://github.com/moby/buildkit/blob/master/docs/attestations/sbom.md)
JSON output:
Following command provides [SBOM](https://github.com/moby/buildkit/blob/master/docs/attestations/sbom.md) JSON output:
```console
$ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .SBOM}}"
```
```json
{
"SPDX": {
@@ -446,7 +372,6 @@ $ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .SB
```console
$ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .}}"
```
```json
{
"name": "crazymax/buildkit:attest",
@@ -515,6 +440,75 @@ $ docker buildx imagetools inspect crazymax/buildkit:attest --format "{{json .}}
"comment": "buildkit.dockerfile.v0"
}
]
},
"Provenance": {
"SLSA": {
"builder": {
"id": ""
},
"buildType": "https://mobyproject.org/buildkit@v1",
"materials": [
{
"uri": "pkg:docker/docker/buildkit-syft-scanner@stable-1",
"digest": {
"sha256": "b45f1d207e16c3a3a5a10b254ad8ad358d01f7ea090d382b95c6b2ee2b3ef765"
}
},
{
"uri": "pkg:docker/alpine@latest?platform=linux%2Famd64",
"digest": {
"sha256": "8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4"
}
}
],
"invocation": {
"configSource": {},
"parameters": {
"frontend": "dockerfile.v0",
"locals": [
{
"name": "context"
},
{
"name": "dockerfile"
}
]
},
"environment": {
"platform": "linux/amd64"
}
},
"metadata": {
"buildInvocationID": "02tdha2xkbxvin87mz9drhag4",
"buildStartedOn": "2022-12-01T11:50:07.264704131Z",
"buildFinishedOn": "2022-12-01T11:50:08.243788739Z",
"reproducible": false,
"completeness": {
"parameters": true,
"environment": true,
"materials": false
},
"https://mobyproject.org/buildkit@v1#metadata": {}
}
}
},
"SBOM": {
"SPDX": {
"SPDXID": "SPDXRef-DOCUMENT",
"creationInfo": {
"created": "2022-12-01T11:46:48.063400162Z",
"creators": [
"Tool: syft-v0.60.3",
"Tool: buildkit-1ace2bb",
"Organization: Anchore, Inc"
],
"licenseListVersion": "3.18"
},
"dataLicense": "CC0-1.0",
"documentNamespace": "https://anchore.com/syft/dir/run/src/core-0a4ccc6d-1a72-4c3a-a40e-3df1a2ffca94",
"files": [...],
"spdxVersion": "SPDX-2.2"
}
}
}
```
@@ -528,7 +522,6 @@ go template function:
```console
$ docker buildx imagetools inspect --format '{{json (index .Image "linux/s390x")}}' moby/buildkit:master
```
```json
{
"created": "2022-11-30T17:42:26.414957336Z",
@@ -595,14 +588,15 @@ $ docker buildx imagetools inspect --format '{{json (index .Image "linux/s390x")
}
```
### <a name="raw"></a> Show original JSON manifest (--raw)
### <a name="raw"></a> Show original, unformatted JSON manifest (--raw)
Use the `--raw` option to print the raw JSON manifest.
Use the `--raw` option to print the unformatted JSON manifest bytes.
> `jq` is used here to get a better rendering of the output result.
```console
$ docker buildx imagetools inspect --raw crazymax/loop
$ docker buildx imagetools inspect --raw crazymax/loop | jq
```
```json
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
@@ -635,7 +629,6 @@ $ docker buildx imagetools inspect --raw crazymax/loop
```console
$ docker buildx imagetools inspect --raw moby/buildkit:master | jq
```
```json
{
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",

View File

@@ -1,6 +1,6 @@
# buildx inspect
```text
```
docker buildx inspect [NAME]
```
@@ -27,7 +27,7 @@ Shows information about the current or specified builder.
Use the `--bootstrap` option to ensure that the builder is running before
inspecting it. If the driver is `docker-container`, then `--bootstrap` starts
the BuildKit container and waits until it's operational. Bootstrapping is
the buildkit container and waits until it is operational. Bootstrapping is
automatically done during build, and therefore not necessary. The same BuildKit
container is used during the lifetime of the associated builder node (as
displayed in `buildx ls`).
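For example (the builder name is a placeholder):

```console
$ docker buildx inspect --bootstrap mybuilder
```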
@@ -45,9 +45,7 @@ The following example shows information about a builder instance named
> **Note**
>
> The asterisk (`*`) next to node build platform(s) indicate they have been
> manually set during `buildx create`. Otherwise the platforms were
> automatically detected.
> Asterisk `*` next to node build platform(s) indicate they had been set manually during `buildx create`. Otherwise, it had been autodetected.
```console
$ docker buildx inspect elated_tesla

View File

@@ -1,84 +1,29 @@
# buildx ls
```text
```
docker buildx ls
```
<!---MARKER_GEN_START-->
List builder instances
### Options
| Name | Type | Default | Description |
|:----------------------|:---------|:--------|:------------------|
| [`--format`](#format) | `string` | `table` | Format the output |
<!---MARKER_GEN_END-->
## Description
Lists all builder instances and the nodes for each instance.
Lists all builder instances and the nodes for each instance
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
elated_tesla* docker-container
\_ elated_tesla0 \_ unix:///var/run/docker.sock running v0.10.3 linux/amd64
\_ elated_tesla1 \_ ssh://ubuntu@1.2.3.4 running v0.10.3 linux/arm64*, linux/arm/v7, linux/arm/v6
default docker
\_ default \_ default running v0.8.2 linux/amd64
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
elated_tesla * docker-container
elated_tesla0 unix:///var/run/docker.sock running v0.10.3 linux/amd64
elated_tesla1 ssh://ubuntu@1.2.3.4 running v0.10.3 linux/arm64*, linux/arm/v7, linux/arm/v6
default docker
default default running v0.8.2 linux/amd64
```
Each builder has one or more nodes associated with it. The current builder's
name is marked with a `*` in `NAME/NODE` and explicit node to build against for
the target platform marked with a `*` in the `PLATFORMS` column.
## Examples
### <a name="format"></a> Format the output (--format)
The formatting option (`--format`) pretty-prints the builder instances output
using a Go template.
Valid placeholders for the Go template are listed below:
| Placeholder | Description |
|-------------------|---------------------------------------------|
| `.Name` | Builder or node name |
| `.DriverEndpoint` | Driver (for builder) or Endpoint (for node) |
| `.LastActivity` | Builder last activity |
| `.Status` | Builder or node status |
| `.Buildkit` | BuildKit version of the node |
| `.Platforms` | Available node's platforms |
| `.Error` | Error |
| `.Builder` | Builder object |
When using the `--format` option, the `ls` command will either output the data
exactly as the template declares or, when using the `table` directive, include
column headers as well.
The following example uses a template without headers and outputs the
`Name` and `DriverEndpoint` entries separated by a colon (`:`):
```console
$ docker buildx ls --format "{{.Name}}: {{.DriverEndpoint}}"
elated_tesla: docker-container
elated_tesla0: unix:///var/run/docker.sock
elated_tesla1: ssh://ubuntu@1.2.3.4
default: docker
default: default
```
The `Builder` placeholder can be used to access the builder object and its
fields. For example, the following template outputs the builder's and
nodes' names with their respective endpoints:
```console
$ docker buildx ls --format "{{.Builder.Name}}: {{range .Builder.Nodes}}\n {{.Name}}: {{.Endpoint}}{{end}}"
elated_tesla:
elated_tesla0: unix:///var/run/docker.sock
elated_tesla1: ssh://ubuntu@1.2.3.4
default: docker
default: default
```

View File

@@ -1,6 +1,6 @@
# buildx prune
```text
```
docker buildx prune
```

View File

@@ -1,11 +1,11 @@
# buildx rm
```text
docker buildx rm [OPTIONS] [NAME] [NAME...]
```
docker buildx rm [NAME]
```
<!---MARKER_GEN_START-->
Remove one or more builder instances
Remove a builder instance
### Options
@@ -14,7 +14,7 @@ Remove one or more builder instances
| [`--all-inactive`](#all-inactive) | | | Remove all inactive builders |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`-f`](#force), [`--force`](#force) | | | Do not prompt for confirmation |
| [`--keep-daemon`](#keep-daemon) | | | Keep the BuildKit daemon running |
| [`--keep-daemon`](#keep-daemon) | | | Keep the buildkitd daemon running |
| [`--keep-state`](#keep-state) | | | Keep BuildKit state |
@@ -48,15 +48,12 @@ Do not prompt for confirmation before removing inactive builders.
$ docker buildx rm --all-inactive --force
```
### <a name="keep-daemon"></a> Keep the BuildKit daemon running (--keep-daemon)
### <a name="keep-daemon"></a> Keep the buildkitd daemon running (--keep-daemon)
Keep the BuildKit daemon running after the buildx context is removed. This is
useful when you manage BuildKit daemons and buildx contexts independently.
Only supported by the
[`docker-container`](https://docs.docker.com/build/drivers/docker-container/)
and [`kubernetes`](https://docs.docker.com/build/drivers/kubernetes/) drivers.
Keep the buildkitd daemon running after the buildx context is removed. This is useful when you manage buildkitd daemons and buildx contexts independently.
Currently, only supported by the [`docker-container` and `kubernetes` drivers](buildx_create.md#driver).
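For example (the builder name is a placeholder), the following removes the builder but leaves its daemon running:

```console
$ docker buildx rm --keep-daemon mybuilder
```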
### <a name="keep-state"></a> Keep BuildKit state (--keep-state)
Keep BuildKit state, so it can be reused by a new builder with the same name.
Currently, only supported by the [`docker-container` driver](https://docs.docker.com/build/drivers/docker-container/).
Currently, only supported by the [`docker-container` driver](buildx_create.md#driver).

View File

@@ -18,7 +18,7 @@ Stop builder instance
## Description
Stops the specified or current builder. This does not prevent buildx build to
Stops the specified or current builder. This will not prevent buildx build to
restart the builder. The implementation of stop depends on the driver.
## Examples

View File

@@ -1,6 +1,6 @@
# buildx version
```text
```
docker buildx version
```
@@ -16,5 +16,5 @@ View version information
```console
$ docker buildx version
github.com/docker/buildx v0.11.2 9872040b6626fb7d87ef7296fd5b832e8cc2ad17
github.com/docker/buildx v0.5.1-docker 11057da37336192bfc57d81e02359ba7ba848e4a
```

View File

@@ -17,14 +17,11 @@ import (
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/opts"
dockertypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/system"
dockerclient "github.com/docker/docker/client"
"github.com/docker/docker/errdefs"
dockerarchive "github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/stdcopy"
@@ -34,28 +31,16 @@ import (
)
const (
volumeStateSuffix = "_state"
buildkitdConfigFile = "buildkitd.toml"
volumeStateSuffix = "_state"
)
type Driver struct {
driver.InitConfig
factory driver.Factory
// if you add fields, remember to update docs:
// https://github.com/docker/docs/blob/main/content/build/drivers/docker-container.md
netMode string
image string
memory opts.MemBytes
memorySwap opts.MemSwapBytes
cpuQuota int64
cpuPeriod int64
cpuShares int64
cpusetCpus string
cpusetMems string
cgroupParent string
restartPolicy container.RestartPolicy
env []string
factory driver.Factory
netMode string
image string
cgroupParent string
env []string
}
func (d *Driver) IsMobyDriver() bool {
@@ -79,7 +64,10 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
if err := d.start(ctx, sub); err != nil {
return err
}
return d.wait(ctx, sub)
if err := d.wait(ctx, sub); err != nil {
return err
}
return nil
})
})
}
@@ -117,13 +105,14 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
Image: imageName,
Env: d.env,
}
cfg.Cmd = getBuildkitFlags(d.InitConfig)
if d.InitConfig.BuildkitFlags != nil {
cfg.Cmd = d.InitConfig.BuildkitFlags
}
useInit := true // let it cleanup exited processes created by BuildKit's container API
return l.Wrap("creating container "+d.Name, func() error {
if err := l.Wrap("creating container "+d.Name, func() error {
hc := &container.HostConfig{
Privileged: true,
RestartPolicy: d.restartPolicy,
Privileged: true,
Mounts: []mount.Mount{
{
Type: mount.TypeVolume,
@@ -136,27 +125,6 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
if d.netMode != "" {
hc.NetworkMode = container.NetworkMode(d.netMode)
}
if d.memory != 0 {
hc.Resources.Memory = int64(d.memory)
}
if d.memorySwap != 0 {
hc.Resources.MemorySwap = int64(d.memorySwap)
}
if d.cpuQuota != 0 {
hc.Resources.CPUQuota = d.cpuQuota
}
if d.cpuPeriod != 0 {
hc.Resources.CPUPeriod = d.cpuPeriod
}
if d.cpuShares != 0 {
hc.Resources.CPUShares = d.cpuShares
}
if d.cpusetCpus != "" {
hc.Resources.CpusetCpus = d.cpusetCpus
}
if d.cpusetMems != "" {
hc.Resources.CpusetMems = d.cpusetMems
}
if info, err := d.DockerAPI.Info(ctx); err == nil {
if info.CgroupDriver == "cgroupfs" {
// Place all buildkit containers inside this cgroup by default so limits can be attached
@@ -167,7 +135,7 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
}
}
secOpts, err := system.DecodeSecurityOptions(info.SecurityOptions)
secOpts, err := dockertypes.DecodeSecurityOptions(info.SecurityOptions)
if err != nil {
return err
}
@@ -180,19 +148,23 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
}
_, err := d.DockerAPI.ContainerCreate(ctx, cfg, hc, &network.NetworkingConfig{}, nil, d.Name)
if err != nil && !errdefs.IsConflict(err) {
if err != nil {
return err
}
if err == nil {
if err := d.copyToContainer(ctx, d.InitConfig.Files); err != nil {
return err
}
if err := d.start(ctx, l); err != nil {
return err
}
if err := d.copyToContainer(ctx, d.InitConfig.Files); err != nil {
return err
}
return d.wait(ctx, l)
})
if err := d.start(ctx, l); err != nil {
return err
}
if err := d.wait(ctx, l); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return nil
}
func (d *Driver) wait(ctx context.Context, l progress.SubLogger) error {
@@ -226,7 +198,7 @@ func (d *Driver) wait(ctx context.Context, l progress.SubLogger) error {
}
func (d *Driver) copyLogs(ctx context.Context, l progress.SubLogger) error {
rc, err := d.DockerAPI.ContainerLogs(ctx, d.Name, container.LogsOptions{
rc, err := d.DockerAPI.ContainerLogs(ctx, d.Name, dockertypes.ContainerLogsOptions{
ShowStdout: true, ShowStderr: true,
})
if err != nil {
@@ -255,9 +227,7 @@ func (d *Driver) copyToContainer(ctx context.Context, files map[string][]byte) e
return err
}
defer srcArchive.Close()
baseDir := path.Dir(confutil.DefaultBuildKitConfigDir)
return d.DockerAPI.CopyToContainer(ctx, d.Name, baseDir, srcArchive, dockertypes.CopyToContainerOptions{})
return d.DockerAPI.CopyToContainer(ctx, d.Name, "/", srcArchive, dockertypes.CopyToContainerOptions{})
}
func (d *Driver) exec(ctx context.Context, cmd []string) (string, net.Conn, error) {
@@ -304,7 +274,7 @@ func (d *Driver) run(ctx context.Context, cmd []string, stdout, stderr io.Writer
}
func (d *Driver) start(ctx context.Context, l progress.SubLogger) error {
return d.DockerAPI.ContainerStart(ctx, d.Name, container.StartOptions{})
return d.DockerAPI.ContainerStart(ctx, d.Name, dockertypes.ContainerStartOptions{})
}
func (d *Driver) Info(ctx context.Context) (*driver.Info, error) {
@@ -362,18 +332,18 @@ func (d *Driver) Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error {
return err
}
if info.Status != driver.Inactive {
ctr, err := d.DockerAPI.ContainerInspect(ctx, d.Name)
container, err := d.DockerAPI.ContainerInspect(ctx, d.Name)
if err != nil {
return err
}
if rmDaemon {
if err := d.DockerAPI.ContainerRemove(ctx, d.Name, container.RemoveOptions{
if err := d.DockerAPI.ContainerRemove(ctx, d.Name, dockertypes.ContainerRemoveOptions{
RemoveVolumes: true,
Force: force,
}); err != nil {
return err
}
for _, v := range ctr.Mounts {
for _, v := range container.Mounts {
if v.Name != d.Name+volumeStateSuffix {
continue
}
@@ -386,22 +356,15 @@ func (d *Driver) Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error {
return nil
}
func (d *Driver) Dial(ctx context.Context) (net.Conn, error) {
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
_, conn, err := d.exec(ctx, []string{"buildctl", "dial-stdio"})
if err != nil {
return nil, err
}
conn = demuxConn(conn)
return conn, nil
}
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
conn, err := d.Dial(ctx)
if err != nil {
return nil, err
}
exp, _, err := detect.Exporter()
exp, err := detect.Exporter()
if err != nil {
return nil, err
}
@@ -433,10 +396,6 @@ func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
}
}
func (d *Driver) HostGatewayIP(ctx context.Context) (net.IP, error) {
return nil, errors.New("host-gateway is not supported by the docker-container driver")
}
func demuxConn(c net.Conn) net.Conn {
pr, pw := io.Pipe()
// TODO: rewrite parser with Reader() to avoid goroutine switch
@@ -480,34 +439,15 @@ func writeConfigFiles(m map[string][]byte) (_ string, err error) {
os.RemoveAll(tmpDir)
}
}()
configDir := filepath.Base(confutil.DefaultBuildKitConfigDir)
for f, dt := range m {
p := filepath.Join(tmpDir, configDir, f)
if err := os.MkdirAll(filepath.Dir(p), 0755); err != nil {
f = path.Join(confutil.DefaultBuildKitConfigDir, f)
p := filepath.Join(tmpDir, f)
if err := os.MkdirAll(filepath.Dir(p), 0700); err != nil {
return "", err
}
if err := os.WriteFile(p, dt, 0644); err != nil {
if err := os.WriteFile(p, dt, 0600); err != nil {
return "", err
}
}
return tmpDir, nil
}
func getBuildkitFlags(initConfig driver.InitConfig) []string {
flags := initConfig.BuildkitdFlags
if _, ok := initConfig.Files[buildkitdConfigFile]; ok {
// There's no way for us to determine the appropriate default configuration
// path and the default path can vary depending on if the image is normal
// or rootless.
//
// In order to ensure that --config works, copy to a specific path and
// specify the location.
//
// This should be appended before the user-specified arguments
// so that this option could be overwritten by the user.
newFlags := make([]string, 0, len(flags)+2)
newFlags = append(newFlags, "--config", path.Join("/etc/buildkit", buildkitdConfigFile))
flags = append(newFlags, flags...)
}
return flags
}

View File

@@ -3,18 +3,15 @@ package docker
import (
"context"
"fmt"
"strconv"
"strings"
"github.com/docker/buildx/driver"
dockeropts "github.com/docker/cli/opts"
dockerclient "github.com/docker/docker/client"
"github.com/pkg/errors"
)
const prioritySupported = 30
const priorityUnsupported = 70
const defaultRestartPolicy = "unless-stopped"
func init() {
driver.Register(&factory{})
@@ -31,7 +28,7 @@ func (*factory) Usage() string {
return "docker-container"
}
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient, dialMeta map[string][]string) int {
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient) int {
if api == nil {
return priorityUnsupported
}
@@ -42,58 +39,18 @@ func (f *factory) New(ctx context.Context, cfg driver.InitConfig) (driver.Driver
if cfg.DockerAPI == nil {
return nil, errors.Errorf("%s driver requires docker API access", f.Name())
}
rp, err := dockeropts.ParseRestartPolicy(defaultRestartPolicy)
if err != nil {
return nil, err
}
d := &Driver{
factory: f,
InitConfig: cfg,
restartPolicy: rp,
}
d := &Driver{factory: f, InitConfig: cfg}
for k, v := range cfg.DriverOpts {
switch {
case k == "network":
d.netMode = v
if v == "host" {
d.InitConfig.BuildkitFlags = append(d.InitConfig.BuildkitFlags, "--allow-insecure-entitlement=network.host")
}
case k == "image":
d.image = v
case k == "memory":
if err := d.memory.Set(v); err != nil {
return nil, err
}
case k == "memory-swap":
if err := d.memorySwap.Set(v); err != nil {
return nil, err
}
case k == "cpu-period":
vv, err := strconv.ParseInt(v, 10, 0)
if err != nil {
return nil, err
}
d.cpuPeriod = vv
case k == "cpu-quota":
vv, err := strconv.ParseInt(v, 10, 0)
if err != nil {
return nil, err
}
d.cpuQuota = vv
case k == "cpu-shares":
vv, err := strconv.ParseInt(v, 10, 0)
if err != nil {
return nil, err
}
d.cpuShares = vv
case k == "cpuset-cpus":
d.cpusetCpus = v
case k == "cpuset-mems":
d.cpusetMems = v
case k == "cgroup-parent":
d.cgroupParent = v
case k == "restart-policy":
d.restartPolicy, err = dockeropts.ParseRestartPolicy(v)
if err != nil {
return nil, err
}
case strings.HasPrefix(k, "env."):
envName := strings.TrimPrefix(k, "env.")
if envName == "" {

View File

@@ -4,23 +4,16 @@ import (
"context"
"net"
"strings"
"sync"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/tracing/detect"
"github.com/pkg/errors"
)
type Driver struct {
factory driver.Factory
driver.InitConfig
// if you add fields, remember to update docs:
// https://github.com/docker/docs/blob/main/content/build/drivers/docker.md
features features
hostGateway hostGateway
}
func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
@@ -57,90 +50,32 @@ func (d *Driver) Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error {
return nil
}
func (d *Driver) Dial(ctx context.Context) (net.Conn, error) {
return d.DockerAPI.DialHijack(ctx, "/grpc", "h2c", d.DialMeta)
}
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
opts := []client.ClientOpt{
client.WithContextDialer(func(context.Context, string) (net.Conn, error) {
return d.Dial(ctx)
}), client.WithSessionDialer(func(ctx context.Context, proto string, meta map[string][]string) (net.Conn, error) {
return d.DockerAPI.DialHijack(ctx, "/session", proto, meta)
}),
}
exp, _, err := detect.Exporter()
if err != nil {
return nil, err
}
if td, ok := exp.(client.TracerDelegate); ok {
opts = append(opts, client.WithTracerDelegate(td))
}
return client.New(ctx, "", opts...)
}
type features struct {
once sync.Once
list map[driver.Feature]bool
return client.New(ctx, "", client.WithContextDialer(func(context.Context, string) (net.Conn, error) {
return d.DockerAPI.DialHijack(ctx, "/grpc", "h2c", nil)
}), client.WithSessionDialer(func(ctx context.Context, proto string, meta map[string][]string) (net.Conn, error) {
return d.DockerAPI.DialHijack(ctx, "/session", proto, meta)
}))
}
func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
d.features.once.Do(func() {
var useContainerdSnapshotter bool
if c, err := d.Client(ctx); err == nil {
workers, _ := c.ListWorkers(ctx)
for _, w := range workers {
if _, ok := w.Labels["org.mobyproject.buildkit.worker.snapshotter"]; ok {
useContainerdSnapshotter = true
}
}
c.Close()
}
d.features.list = map[driver.Feature]bool{
driver.OCIExporter: useContainerdSnapshotter,
driver.DockerExporter: useContainerdSnapshotter,
driver.CacheExport: useContainerdSnapshotter,
driver.MultiPlatform: useContainerdSnapshotter,
}
})
return d.features.list
}
type hostGateway struct {
once sync.Once
ip net.IP
err error
}
func (d *Driver) HostGatewayIP(ctx context.Context) (net.IP, error) {
d.hostGateway.once.Do(func() {
c, err := d.Client(ctx)
if err != nil {
d.hostGateway.err = err
return
}
defer c.Close()
workers, err := c.ListWorkers(ctx)
if err != nil {
d.hostGateway.err = errors.Wrap(err, "listing workers")
return
}
var useContainerdSnapshotter bool
c, err := d.Client(ctx)
if err == nil {
workers, _ := c.ListWorkers(ctx)
for _, w := range workers {
// should match github.com/docker/docker/builder/builder-next/worker/label.HostGatewayIP const
if v, ok := w.Labels["org.mobyproject.buildkit.worker.moby.host-gateway-ip"]; ok && v != "" {
ip := net.ParseIP(v)
if ip == nil {
d.hostGateway.err = errors.Errorf("failed to parse host-gateway IP: %s", v)
return
}
d.hostGateway.ip = ip
return
if _, ok := w.Labels["org.mobyproject.buildkit.worker.snapshotter"]; ok {
useContainerdSnapshotter = true
}
}
d.hostGateway.err = errors.New("host-gateway IP not found")
})
return d.hostGateway.ip, d.hostGateway.err
c.Close()
}
return map[driver.Feature]bool{
driver.OCIExporter: useContainerdSnapshotter,
driver.DockerExporter: useContainerdSnapshotter,
driver.CacheExport: useContainerdSnapshotter,
driver.MultiPlatform: useContainerdSnapshotter,
}
}
func (d *Driver) Factory() driver.Factory {

View File

@@ -26,12 +26,12 @@ func (*factory) Usage() string {
return "docker"
}
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient, dialMeta map[string][]string) int {
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient) int {
if api == nil {
return priorityUnsupported
}
c, err := api.DialHijack(ctx, "/grpc", "h2c", dialMeta)
c, err := api.DialHijack(ctx, "/grpc", "h2c", nil)
if err != nil {
return priorityUnsupported
}

View File

@@ -3,7 +3,6 @@ package driver
import (
"context"
"io"
"net"
"github.com/docker/buildx/store"
"github.com/docker/buildx/util/progress"
@@ -59,10 +58,8 @@ type Driver interface {
Version(context.Context) (string, error)
Stop(ctx context.Context, force bool) error
Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error
Dial(ctx context.Context) (net.Conn, error)
Client(ctx context.Context) (*client.Client, error)
Features(ctx context.Context) map[Feature]bool
HostGatewayIP(ctx context.Context) (net.IP, error)
IsMobyDriver() bool
Config() InitConfig
}

View File

@@ -6,4 +6,4 @@ const OCIExporter Feature = "OCI exporter"
const DockerExporter Feature = "Docker exporter"
const CacheExport Feature = "Cache export"
const MultiPlatform Feature = "Multi-platform build"
const MultiPlatform Feature = "Multiple platforms"

View File

@@ -23,7 +23,6 @@ type EndpointMeta struct {
AuthProvider *clientcmdapi.AuthProviderConfig `json:",omitempty"`
Exec *clientcmdapi.ExecConfig `json:",omitempty"`
UsernamePassword *UsernamePassword `json:"usernamePassword,omitempty"`
Token string `json:"token,omitempty"`
}
// UsernamePassword contains username/password auth info
@@ -78,9 +77,6 @@ func (c *Endpoint) KubernetesConfig() clientcmd.ClientConfig {
authInfo.Username = c.UsernamePassword.Username
authInfo.Password = c.UsernamePassword.Password
}
if c.Token != "" {
authInfo.Token = c.Token
}
authInfo.AuthProvider = c.AuthProvider
authInfo.Exec = c.Exec
cfg.Clusters["cluster"] = cluster

View File

@@ -68,7 +68,6 @@ func FromKubeConfig(kubeconfig, kubeContext, namespaceOverride string) (Endpoint
AuthProvider: clientcfg.AuthProvider,
Exec: clientcfg.ExecProvider,
UsernamePassword: usernamePassword,
Token: clientcfg.BearerToken,
},
TLSData: tlsData,
}, nil

View File
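To show what the Token field touched in the hunks above feeds into, here is a small standalone sketch that fills a clientcmdapi.AuthInfo the same way Endpoint.KubernetesConfig does; the stripped-down endpoint struct and the placeholder token are stand-ins for the example only.

package main

import (
    "fmt"

    clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// endpoint is a stripped-down stand-in for the store's Endpoint type, keeping
// only the auth-related fields touched in the hunk above.
type endpoint struct {
    UsernamePassword *struct{ Username, Password string }
    Token            string
}

// authInfo mirrors the assignments in Endpoint.KubernetesConfig: basic auth is
// copied when present, and a bearer token is copied when non-empty.
func authInfo(c endpoint) *clientcmdapi.AuthInfo {
    ai := clientcmdapi.NewAuthInfo()
    if c.UsernamePassword != nil {
        ai.Username = c.UsernamePassword.Username
        ai.Password = c.UsernamePassword.Password
    }
    if c.Token != "" {
        ai.Token = c.Token
    }
    return ai
}

func main() {
    ai := authInfo(endpoint{Token: "example-bearer-token"})
    fmt.Println("has token:", ai.Token != "")
}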

@@ -38,10 +38,7 @@ const (
type Driver struct {
driver.InitConfig
factory driver.Factory
// if you add fields, remember to update docs:
// https://github.com/docker/docs/blob/main/content/build/drivers/kubernetes.md
factory driver.Factory
minReplicas int
deployment *appsv1.Deployment
configMaps []*corev1.ConfigMap
@@ -90,7 +87,10 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
return sub.Wrap(
fmt.Sprintf("waiting for %d pods to be ready", d.minReplicas),
func() error {
return d.wait(ctx)
if err := d.wait(ctx); err != nil {
return err
}
return nil
})
})
}
@@ -189,7 +189,7 @@ func (d *Driver) Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error {
return nil
}
func (d *Driver) Dial(ctx context.Context) (net.Conn, error) {
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
restClient := d.clientset.CoreV1().RESTClient()
restClientConfig, err := d.KubeClientConfig.ClientConfig()
if err != nil {
@@ -208,18 +208,15 @@ func (d *Driver) Dial(ctx context.Context) (net.Conn, error) {
if err != nil {
return nil, err
}
return conn, nil
}
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
exp, _, err := detect.Exporter()
exp, err := detect.Exporter()
if err != nil {
return nil, err
}
var opts []client.ClientOpt
opts = append(opts, client.WithContextDialer(func(context.Context, string) (net.Conn, error) {
return d.Dial(ctx)
return conn, nil
}))
if td, ok := exp.(client.TracerDelegate); ok {
opts = append(opts, client.WithTracerDelegate(td))
@@ -231,7 +228,7 @@ func (d *Driver) Factory() driver.Factory {
return d.factory
}
func (d *Driver) Features(_ context.Context) map[driver.Feature]bool {
func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
return map[driver.Feature]bool{
driver.OCIExporter: true,
driver.DockerExporter: d.DockerAPI != nil,
@@ -239,7 +236,3 @@ func (d *Driver) Features(_ context.Context) map[driver.Feature]bool {
driver.MultiPlatform: true, // Untested (needs multiple Driver instances)
}
}
func (d *Driver) HostGatewayIP(_ context.Context) (net.IP, error) {
return nil, errors.New("host-gateway is not supported by the kubernetes driver")
}

View File

@@ -34,7 +34,7 @@ func (*factory) Usage() string {
return DriverName
}
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient, dialMeta map[string][]string) int {
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient) int {
if api == nil {
return priorityUnsupported
}
@@ -105,7 +105,7 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
Name: deploymentName,
Image: bkimage.DefaultImage,
Replicas: 1,
BuildkitFlags: cfg.BuildkitdFlags,
BuildkitFlags: cfg.BuildkitFlags,
Rootless: false,
Platforms: cfg.Platforms,
ConfigFiles: cfg.Files,
@@ -148,20 +148,15 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
case "serviceaccount":
deploymentOpt.ServiceAccountName = v
case "nodeselector":
deploymentOpt.NodeSelector, err = splitMultiValues(v, ",", "=")
if err != nil {
return nil, "", "", errors.Wrap(err, "cannot parse node selector")
}
case "annotations":
deploymentOpt.CustomAnnotations, err = splitMultiValues(v, ",", "=")
if err != nil {
return nil, "", "", errors.Wrap(err, "cannot parse annotations")
}
case "labels":
deploymentOpt.CustomLabels, err = splitMultiValues(v, ",", "=")
if err != nil {
return nil, "", "", errors.Wrap(err, "cannot parse labels")
kvs := strings.Split(strings.Trim(v, `"`), ",")
s := map[string]string{}
for i := range kvs {
kv := strings.Split(kvs[i], "=")
if len(kv) == 2 {
s[kv[0]] = kv[1]
}
}
deploymentOpt.NodeSelector = s
case "tolerations":
ts := strings.Split(v, ";")
deploymentOpt.Tolerations = []corev1.Toleration{}
@@ -222,19 +217,6 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
return deploymentOpt, loadbalance, namespace, nil
}
func splitMultiValues(in string, itemsep string, kvsep string) (map[string]string, error) {
kvs := strings.Split(strings.Trim(in, `"`), itemsep)
s := map[string]string{}
for i := range kvs {
kv := strings.Split(kvs[i], kvsep)
if len(kv) != 2 {
return nil, errors.Errorf("invalid key-value pair: %s", kvs[i])
}
s[kv[0]] = kv[1]
}
return s, nil
}
func (f *factory) AllowsInstances() bool {
return true
}

View File
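To make the behaviour of the splitMultiValues helper shown above concrete, the snippet below carries a standalone copy of its logic and runs it against the nodeselector value used in the tests that follow; only the package wrapper and main function are added for the example.

package main

import (
    "fmt"
    "strings"

    "github.com/pkg/errors"
)

// splitMultiValues mirrors the helper shown in the diff above: it splits
// "k1=v1,k2=v2" style driver-opt values into a map and rejects malformed pairs.
func splitMultiValues(in string, itemsep string, kvsep string) (map[string]string, error) {
    kvs := strings.Split(strings.Trim(in, `"`), itemsep)
    s := map[string]string{}
    for i := range kvs {
        kv := strings.Split(kvs[i], kvsep)
        if len(kv) != 2 {
            return nil, errors.Errorf("invalid key-value pair: %s", kvs[i])
        }
        s[kv[0]] = kv[1]
    }
    return s, nil
}

func main() {
    sel, err := splitMultiValues("selector1=value1,selector2=value2", ",", "=")
    fmt.Println(sel, err) // map[selector1:value1 selector2:value2] <nil>

    _, err = splitMultiValues("key,value", ",", "=")
    fmt.Println(err) // invalid key-value pair: key
}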

@@ -47,13 +47,13 @@ func TestFactory_processDriverOpts(t *testing.T) {
"rootless": "true",
"nodeselector": "selector1=value1,selector2=value2",
"tolerations": "key=tolerationKey1,value=tolerationValue1,operator=Equal,effect=NoSchedule,tolerationSeconds=60;key=tolerationKey2,operator=Exists",
"annotations": "example.com/expires-after=annotation1,example.com/other=annotation2",
"labels": "example.com/owner=label1,example.com/other=label2",
"loadbalance": "random",
"qemu.install": "true",
"qemu.image": "qemu:latest",
}
r, loadbalance, ns, err := f.processDriverOpts(cfg.Name, "test", cfg)
ns := "test"
r, loadbalance, ns, err := f.processDriverOpts(cfg.Name, ns, cfg)
nodeSelectors := map[string]string{
"selector1": "value1",
@@ -75,16 +75,6 @@ func TestFactory_processDriverOpts(t *testing.T) {
},
}
customAnnotations := map[string]string{
"example.com/expires-after": "annotation1",
"example.com/other": "annotation2",
}
customLabels := map[string]string{
"example.com/owner": "label1",
"example.com/other": "label2",
}
require.NoError(t, err)
require.Equal(t, "test-ns", ns)
@@ -96,8 +86,6 @@ func TestFactory_processDriverOpts(t *testing.T) {
require.Equal(t, "64Mi", r.LimitsMemory)
require.True(t, r.Rootless)
require.Equal(t, nodeSelectors, r.NodeSelector)
require.Equal(t, customAnnotations, r.CustomAnnotations)
require.Equal(t, customLabels, r.CustomLabels)
require.Equal(t, tolerations, r.Tolerations)
require.Equal(t, LoadbalanceRandom, loadbalance)
require.True(t, r.Qemu.Install)
@@ -122,8 +110,6 @@ func TestFactory_processDriverOpts(t *testing.T) {
require.Equal(t, "", r.LimitsMemory)
require.False(t, r.Rootless)
require.Empty(t, r.NodeSelector)
require.Empty(t, r.CustomAnnotations)
require.Empty(t, r.CustomLabels)
require.Empty(t, r.Tolerations)
require.Equal(t, LoadbalanceSticky, loadbalance)
require.False(t, r.Qemu.Install)
@@ -151,8 +137,6 @@ func TestFactory_processDriverOpts(t *testing.T) {
require.Equal(t, "", r.LimitsMemory)
require.True(t, r.Rootless)
require.Empty(t, r.NodeSelector)
require.Empty(t, r.CustomAnnotations)
require.Empty(t, r.CustomLabels)
require.Empty(t, r.Tolerations)
require.Equal(t, LoadbalanceSticky, loadbalance)
require.False(t, r.Qemu.Install)
@@ -165,7 +149,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"replicas": "invalid",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -175,7 +161,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"rootless": "invalid",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -185,7 +173,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"tolerations": "key=foo,value=bar,invalid=foo2",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -195,27 +185,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"tolerations": "key=foo,value=bar,tolerationSeconds=invalid",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
t.Run(
"InvalidCustomAnnotation", func(t *testing.T) {
cfg.DriverOpts = map[string]string{
"annotations": "key,value",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
t.Run(
"InvalidCustomLabel", func(t *testing.T) {
cfg.DriverOpts = map[string]string{
"labels": "key=value=foo",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -225,7 +197,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"loadbalance": "invalid",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -235,7 +209,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"qemu.install": "invalid",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)
@@ -245,7 +221,9 @@ func TestFactory_processDriverOpts(t *testing.T) {
cfg.DriverOpts = map[string]string{
"invalid": "foo",
}
_, _, _, err := f.processDriverOpts(cfg.Name, "test", cfg)
require.Error(t, err)
},
)

View File

@@ -7,7 +7,6 @@ import (
"github.com/docker/buildx/util/platformutil"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
@@ -32,32 +31,24 @@ type DeploymentOpt struct {
// files mounted at /etc/buildkitd
ConfigFiles map[string][]byte
Rootless bool
NodeSelector map[string]string
CustomAnnotations map[string]string
CustomLabels map[string]string
Tolerations []corev1.Toleration
RequestsCPU string
RequestsMemory string
LimitsCPU string
LimitsMemory string
Platforms []v1.Platform
Rootless bool
NodeSelector map[string]string
Tolerations []corev1.Toleration
RequestsCPU string
RequestsMemory string
LimitsCPU string
LimitsMemory string
Platforms []v1.Platform
}
const (
containerName = "buildkitd"
AnnotationPlatform = "buildx.docker.com/platform"
LabelApp = "app"
)
var (
ErrReservedAnnotationPlatform = errors.Errorf("the annotation \"%s\" is reserved and cannot be customized", AnnotationPlatform)
ErrReservedLabelApp = errors.Errorf("the label \"%s\" is reserved and cannot be customized", LabelApp)
)
func NewDeployment(opt *DeploymentOpt) (d *appsv1.Deployment, c []*corev1.ConfigMap, err error) {
labels := map[string]string{
LabelApp: opt.Name,
"app": opt.Name,
}
annotations := map[string]string{}
replicas := int32(opt.Replicas)
@@ -68,20 +59,6 @@ func NewDeployment(opt *DeploymentOpt) (d *appsv1.Deployment, c []*corev1.Config
annotations[AnnotationPlatform] = strings.Join(platformutil.Format(opt.Platforms), ",")
}
for k, v := range opt.CustomAnnotations {
if k == AnnotationPlatform {
return nil, nil, ErrReservedAnnotationPlatform
}
annotations[k] = v
}
for k, v := range opt.CustomLabels {
if k == LabelApp {
return nil, nil, ErrReservedLabelApp
}
labels[k] = v
}
d = &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: appsv1.SchemeGroupVersion.String(),

View File
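The reserved-key validation removed in the deployment hunk above is easy to exercise in isolation. The sketch below restates that check with local names (mergeCustom is not a function in the manifest package) so the intended failure mode is visible.

package main

import (
    "fmt"

    "github.com/pkg/errors"
)

const (
    // Mirrors the constants shown in the hunk above.
    AnnotationPlatform = "buildx.docker.com/platform"
    LabelApp           = "app"
)

// mergeCustom mirrors the loops shown above: user-supplied annotations and
// labels are merged into the deployment metadata, but the platform annotation
// and the app label are reserved.
func mergeCustom(annotations, labels, customAnnotations, customLabels map[string]string) error {
    for k, v := range customAnnotations {
        if k == AnnotationPlatform {
            return errors.Errorf("the annotation %q is reserved and cannot be customized", AnnotationPlatform)
        }
        annotations[k] = v
    }
    for k, v := range customLabels {
        if k == LabelApp {
            return errors.Errorf("the label %q is reserved and cannot be customized", LabelApp)
        }
        labels[k] = v
    }
    return nil
}

func main() {
    annotations := map[string]string{}
    labels := map[string]string{LabelApp: "buildkit"}

    err := mergeCustom(annotations, labels,
        map[string]string{"example.com/expires-after": "annotation1"},
        map[string]string{LabelApp: "not-allowed"})
    fmt.Println(err) // the label "app" is reserved and cannot be customized
}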

@@ -2,6 +2,7 @@ package driver
import (
"context"
"net"
"os"
"sort"
"strings"
@@ -17,7 +18,7 @@ import (
type Factory interface {
Name() string
Usage() string
Priority(ctx context.Context, endpoint string, api dockerclient.APIClient, dialMeta map[string][]string) int
Priority(ctx context.Context, endpoint string, api dockerclient.APIClient) int
New(ctx context.Context, cfg InitConfig) (Driver, error)
AllowsInstances() bool
}
@@ -52,13 +53,13 @@ type InitConfig struct {
EndpointAddr string
DockerAPI dockerclient.APIClient
KubeClientConfig KubeClientConfig
BuildkitdFlags []string
BuildkitFlags []string
Files map[string][]byte
DriverOpts map[string]string
Auth Auth
Platforms []specs.Platform
ContextPathHash string // can be used for determining pods in the driver instance
DialMeta map[string][]string
// ContextPathHash can be used for determining pods in the driver instance
ContextPathHash string
}
var drivers map[string]Factory
@@ -70,7 +71,7 @@ func Register(f Factory) {
drivers[f.Name()] = f
}
func GetDefaultFactory(ctx context.Context, ep string, c dockerclient.APIClient, instanceRequired bool, dialMeta map[string][]string) (Factory, error) {
func GetDefaultFactory(ctx context.Context, ep string, c dockerclient.APIClient, instanceRequired bool) (Factory, error) {
if len(drivers) == 0 {
return nil, errors.Errorf("no drivers available")
}
@@ -83,7 +84,7 @@ func GetDefaultFactory(ctx context.Context, ep string, c dockerclient.APIClient,
if instanceRequired && !f.AllowsInstances() {
continue
}
dd = append(dd, p{f: f, priority: f.Priority(ctx, ep, c, dialMeta)})
dd = append(dd, p{f: f, priority: f.Priority(ctx, ep, c)})
}
sort.Slice(dd, func(i, j int) bool {
return dd[i].priority < dd[j].priority
@@ -103,23 +104,22 @@ func GetFactory(name string, instanceRequired bool) (Factory, error) {
return nil, errors.Errorf("failed to find driver %q", name)
}
func GetDriver(ctx context.Context, name string, f Factory, endpointAddr string, api dockerclient.APIClient, auth Auth, kcc KubeClientConfig, buildkitdFlags []string, files map[string][]byte, do map[string]string, platforms []specs.Platform, contextPathHash string, dialMeta map[string][]string) (*DriverHandle, error) {
func GetDriver(ctx context.Context, name string, f Factory, endpointAddr string, api dockerclient.APIClient, auth Auth, kcc KubeClientConfig, flags []string, files map[string][]byte, do map[string]string, platforms []specs.Platform, contextPathHash string) (*DriverHandle, error) {
ic := InitConfig{
EndpointAddr: endpointAddr,
DockerAPI: api,
KubeClientConfig: kcc,
Name: name,
BuildkitdFlags: buildkitdFlags,
BuildkitFlags: flags,
DriverOpts: do,
Auth: auth,
Platforms: platforms,
ContextPathHash: contextPathHash,
DialMeta: dialMeta,
Files: files,
}
if f == nil {
var err error
f, err = GetDefaultFactory(ctx, endpointAddr, api, false, dialMeta)
f, err = GetDefaultFactory(ctx, endpointAddr, api, false)
if err != nil {
return nil, err
}
@@ -150,8 +150,13 @@ type DriverHandle struct {
client *client.Client
err error
once sync.Once
featuresOnce sync.Once
features map[Feature]bool
historyAPISupportedOnce sync.Once
historyAPISupported bool
hostGatewayIPOnce sync.Once
hostGatewayIP net.IP
hostGatewayIPErr error
}
func (d *DriverHandle) Client(ctx context.Context) (*client.Client, error) {
@@ -161,6 +166,13 @@ func (d *DriverHandle) Client(ctx context.Context) (*client.Client, error) {
return d.client, d.err
}
func (d *DriverHandle) Features(ctx context.Context) map[Feature]bool {
d.featuresOnce.Do(func() {
d.features = d.Driver.Features(ctx)
})
return d.features
}
func (d *DriverHandle) HistoryAPISupported(ctx context.Context) bool {
d.historyAPISupportedOnce.Do(func() {
if c, err := d.Client(ctx); err == nil {
@@ -169,3 +181,36 @@ func (d *DriverHandle) HistoryAPISupported(ctx context.Context) bool {
})
return d.historyAPISupported
}
func (d *DriverHandle) HostGatewayIP(ctx context.Context) (net.IP, error) {
d.hostGatewayIPOnce.Do(func() {
if !d.Driver.IsMobyDriver() {
d.hostGatewayIPErr = errors.New("host-gateway is only supported with the docker driver")
return
}
c, err := d.Client(ctx)
if err != nil {
d.hostGatewayIPErr = err
return
}
workers, err := c.ListWorkers(ctx)
if err != nil {
d.hostGatewayIPErr = errors.Wrap(err, "listing workers")
return
}
for _, w := range workers {
// should match github.com/docker/docker/builder/builder-next/worker/label.HostGatewayIP const
if v, ok := w.Labels["org.mobyproject.buildkit.worker.moby.host-gateway-ip"]; ok && v != "" {
ip := net.ParseIP(v)
if ip == nil {
d.hostGatewayIPErr = errors.Errorf("failed to parse host-gateway IP: %s", v)
return
}
d.hostGatewayIP = ip
return
}
}
d.hostGatewayIPErr = errors.New("host-gateway IP not found")
})
return d.hostGatewayIP, d.hostGatewayIPErr
}

View File
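As a rough illustration of where DriverHandle.HostGatewayIP fits, the sketch below substitutes the special "host-gateway" value in --add-host style entries with the IP advertised by the moby worker label shown above; the resolveExtraHosts helper and the entry format are assumptions for the example, not buildx's actual implementation.

package main

import (
    "fmt"
    "net"
    "strings"

    "github.com/pkg/errors"
)

// resolveExtraHosts is a hypothetical helper: for "host:ip" entries it leaves
// explicit IPs alone and replaces the literal "host-gateway" value with the
// gateway IP that DriverHandle.HostGatewayIP would return for the docker driver.
func resolveExtraHosts(entries []string, gatewayIP net.IP) ([]string, error) {
    out := make([]string, 0, len(entries))
    for _, e := range entries {
        host, ip, ok := strings.Cut(e, ":")
        if !ok {
            return nil, errors.Errorf("invalid host entry: %s", e)
        }
        if ip == "host-gateway" {
            ip = gatewayIP.String()
        }
        out = append(out, host+":"+ip)
    }
    return out, nil
}

func main() {
    // Pretend the worker label carried this value
    // (org.mobyproject.buildkit.worker.moby.host-gateway-ip).
    gw := net.ParseIP("172.17.0.1")

    hosts, err := resolveExtraHosts([]string{
        "registry.local:10.0.0.5",
        "host.docker.internal:host-gateway",
    }, gw)
    fmt.Println(hosts, err)
}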

@@ -2,26 +2,16 @@ package remote
import (
"context"
"crypto/tls"
"crypto/x509"
"net"
"os"
"strings"
"time"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/tracing/detect"
"github.com/pkg/errors"
)
type Driver struct {
factory driver.Factory
driver.InitConfig
// if you add fields, remember to update docs:
// https://github.com/docker/docs/blob/main/content/build/drivers/remote.md
*tlsOpts
}
@@ -33,15 +23,25 @@ type tlsOpts struct {
}
func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
c, err := d.Client(ctx)
if err != nil {
return err
}
return progress.Wrap("[internal] waiting for connection", l, func(_ progress.SubLogger) error {
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
return c.Wait(ctx)
})
for i := 0; ; i++ {
info, err := d.Info(ctx)
if err != nil {
return err
}
if info.Status != driver.Inactive {
return nil
}
select {
case <-ctx.Done():
return ctx.Err()
default:
if i > 10 {
i = 10
}
time.Sleep(time.Duration(i) * time.Second)
}
}
}
func (d *Driver) Info(ctx context.Context) (*driver.Info, error) {
@@ -77,70 +77,14 @@ func (d *Driver) Rm(ctx context.Context, force, rmVolume, rmDaemon bool) error {
func (d *Driver) Client(ctx context.Context) (*client.Client, error) {
opts := []client.ClientOpt{}
exp, _, err := detect.Exporter()
if err != nil {
return nil, err
}
if td, ok := exp.(client.TracerDelegate); ok {
opts = append(opts, client.WithTracerDelegate(td))
}
opts = append(opts, client.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
return d.Dial(ctx)
}))
return client.New(ctx, "", opts...)
}
func (d *Driver) Dial(ctx context.Context) (net.Conn, error) {
network, addr, ok := strings.Cut(d.InitConfig.EndpointAddr, "://")
if !ok {
return nil, errors.Errorf("invalid endpoint address: %s", d.InitConfig.EndpointAddr)
}
dialer := &net.Dialer{}
conn, err := dialer.DialContext(ctx, network, addr)
if err != nil {
return nil, errors.WithStack(err)
}
if d.tlsOpts != nil {
cfg, err := loadTLS(d.tlsOpts)
if err != nil {
return nil, errors.Wrap(err, "error loading tls config")
}
conn = tls.Client(conn, cfg)
}
return conn, nil
}
func loadTLS(opts *tlsOpts) (*tls.Config, error) {
cfg := &tls.Config{
ServerName: opts.serverName,
RootCAs: x509.NewCertPool(),
}
if opts.caCert != "" {
ca, err := os.ReadFile(opts.caCert)
if err != nil {
return nil, errors.Wrap(err, "could not read ca certificate")
}
if ok := cfg.RootCAs.AppendCertsFromPEM(ca); !ok {
return nil, errors.New("failed to append ca certs")
}
}
if opts.cert != "" || opts.key != "" {
cert, err := tls.LoadX509KeyPair(opts.cert, opts.key)
if err != nil {
return nil, errors.Wrap(err, "could not read certificate/key")
}
cfg.Certificates = append(cfg.Certificates, cert)
}
return cfg, nil
}
opts = append(opts, []client.ClientOpt{
client.WithServerConfig(d.tlsOpts.serverName, d.tlsOpts.caCert),
client.WithCredentials(d.tlsOpts.cert, d.tlsOpts.key),
}...)
return client.New(ctx, d.InitConfig.EndpointAddr, opts...)
func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
@@ -152,10 +96,6 @@ func (d *Driver) Features(ctx context.Context) map[driver.Feature]bool {
}
}
func (d *Driver) HostGatewayIP(ctx context.Context) (net.IP, error) {
return nil, errors.New("host-gateway is not supported by the remote driver")
}
func (d *Driver) Factory() driver.Factory {
return d.factory
}

View File
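The loadTLS helper removed in the remote driver hunk above builds a client TLS config from a CA bundle and an optional keypair before wrapping the dialed connection. The sketch below reproduces that flow with standard-library calls only; the endpoint address and file paths are placeholders.

package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net"
    "os"
)

// dialTLS mirrors what Driver.Dial plus loadTLS in the hunk above do: dial the
// remote buildkitd endpoint and wrap the connection with a TLS client config
// built from a CA bundle and an optional client keypair.
func dialTLS(addr, serverName, caFile, certFile, keyFile string) (net.Conn, error) {
    cfg := &tls.Config{
        ServerName: serverName,
        RootCAs:    x509.NewCertPool(),
    }
    ca, err := os.ReadFile(caFile)
    if err != nil {
        return nil, err
    }
    if !cfg.RootCAs.AppendCertsFromPEM(ca) {
        return nil, fmt.Errorf("failed to append ca certs")
    }
    if certFile != "" && keyFile != "" {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return nil, err
        }
        cfg.Certificates = append(cfg.Certificates, cert)
    }
    conn, err := net.Dial("tcp", addr)
    if err != nil {
        return nil, err
    }
    return tls.Client(conn, cfg), nil
}

func main() {
    conn, err := dialTLS("buildkitd.example.com:1234", "buildkitd.example.com",
        "/etc/buildkit/ca.pem", "/etc/buildkit/cert.pem", "/etc/buildkit/key.pem")
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()
    fmt.Println("connected to", conn.RemoteAddr())
}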

@@ -35,7 +35,7 @@ func (*factory) Usage() string {
return "remote"
}
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient, dialMeta map[string][]string) int {
func (*factory) Priority(ctx context.Context, endpoint string, api dockerclient.APIClient) int {
if util.IsValidEndpoint(endpoint) != nil {
return priorityUnsupported
}
@@ -46,7 +46,7 @@ func (f *factory) New(ctx context.Context, cfg driver.InitConfig) (driver.Driver
if len(cfg.Files) > 0 {
return nil, errors.Errorf("setting config file is not supported for remote driver")
}
if len(cfg.BuildkitdFlags) > 0 {
if len(cfg.BuildkitFlags) > 0 {
return nil, errors.Errorf("setting buildkit flags is not supported for remote driver")
}

View File

@@ -12,7 +12,6 @@ var schemes = map[string]struct{}{
"ssh": {},
"docker-container": {},
"kube-pod": {},
"npipe": {},
}
func IsValidEndpoint(ep string) error {

go.mod
View File

@@ -1,170 +1,173 @@
module github.com/docker/buildx
go 1.21
go 1.20
require (
github.com/Masterminds/semver/v3 v3.2.1
github.com/aws/aws-sdk-go-v2/config v1.26.6
github.com/compose-spec/compose-go/v2 v2.0.0-rc.3
github.com/containerd/console v1.0.4
github.com/containerd/containerd v1.7.13
github.com/containerd/continuity v0.4.3
github.com/containerd/log v0.1.0
github.com/aws/aws-sdk-go-v2/config v1.18.16
github.com/compose-spec/compose-go v1.14.0
github.com/containerd/console v1.0.3
github.com/containerd/containerd v1.7.2
github.com/containerd/continuity v0.4.1
github.com/containerd/typeurl/v2 v2.1.1
github.com/creack/pty v1.1.18
github.com/distribution/reference v0.5.0
github.com/docker/cli v25.0.3+incompatible
github.com/docker/cli-docs-tool v0.7.0
github.com/docker/docker v25.0.3+incompatible
github.com/docker/cli v24.0.2+incompatible
github.com/docker/cli-docs-tool v0.6.0
github.com/docker/distribution v2.8.2+incompatible
github.com/docker/docker v24.0.5-0.20230714235725-36e9e796c6fc+incompatible // 24.0
github.com/docker/go-units v0.5.0
github.com/gofrs/flock v0.8.1
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.3
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/uuid v1.5.0
github.com/hashicorp/go-cty-funcs v0.0.0-20230405223818-a090f58aa992
github.com/hashicorp/hcl/v2 v2.19.1
github.com/moby/buildkit v0.13.0-rc1.0.20240222164755-8e3fe35738c2 // master (v0.13.0-dev)
github.com/moby/sys/mountinfo v0.7.1
github.com/google/uuid v1.3.0
github.com/hashicorp/go-cty-funcs v0.0.0-20200930094925-2721b1e36840
github.com/hashicorp/hcl/v2 v2.8.2
github.com/moby/buildkit v0.12.1-0.20230717122532-faa0cc7da353 // v0.12.1-dev
github.com/moby/sys/mountinfo v0.6.2
github.com/moby/sys/signal v0.7.0
github.com/morikuni/aec v1.0.0
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc5
github.com/opencontainers/image-spec v1.1.0-rc3
github.com/pelletier/go-toml v1.9.5
github.com/pkg/errors v0.9.1
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.0
github.com/sirupsen/logrus v1.9.0
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
github.com/zclconf/go-cty v1.14.1
go.opentelemetry.io/otel v1.19.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.42.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.42.0
go.opentelemetry.io/otel/metric v1.19.0
go.opentelemetry.io/otel/sdk v1.19.0
go.opentelemetry.io/otel/sdk/metric v1.19.0
go.opentelemetry.io/otel/trace v1.19.0
golang.org/x/mod v0.13.0
golang.org/x/sync v0.4.0
golang.org/x/sys v0.16.0
golang.org/x/term v0.15.0
google.golang.org/grpc v1.59.0
github.com/zclconf/go-cty v1.10.0
go.opentelemetry.io/otel v1.14.0
go.opentelemetry.io/otel/trace v1.14.0
golang.org/x/sync v0.2.0
golang.org/x/term v0.6.0
google.golang.org/grpc v1.53.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.26.7
k8s.io/apimachinery v0.26.7
k8s.io/apiserver v0.26.7
k8s.io/client-go v0.26.7
k8s.io/api v0.26.2
k8s.io/apimachinery v0.26.2
k8s.io/apiserver v0.26.2
k8s.io/client-go v0.26.2
)
require (
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/Microsoft/hcsshim v0.11.4 // indirect
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/agl/ed25519 v0.0.0-20170116200512-5312a6153412 // indirect
github.com/apparentlymart/go-cidr v1.0.1 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.24.1 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.16.16 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.18.7 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 // indirect
github.com/aws/smithy-go v1.19.0 // indirect
github.com/apparentlymart/go-textseg/v12 v12.0.0 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.17.6 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.13.16 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.30 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.31 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.24 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.12.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.18.6 // indirect
github.com/aws/smithy-go v1.13.5 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/bugsnag/bugsnag-go v1.4.1 // indirect
github.com/bugsnag/panicwrap v1.2.0 // indirect
github.com/cenkalti/backoff v2.1.1+incompatible // indirect
github.com/cenkalti/backoff/v4 v4.2.0 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cloudflare/cfssl v0.0.0-20181213083726-b94e044bb51e // indirect
github.com/containerd/ttrpc v1.2.2 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.0 // indirect
github.com/distribution/distribution/v3 v3.0.0-20230214150026-36d8c594d7aa // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/libtrust v0.0.0-20150526203908-9cbd2a1374f4 // indirect
github.com/elazarl/goproxy v0.0.0-20191011121108-aa519ddbe484 // indirect
github.com/emicklei/go-restful/v3 v3.10.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/fvbommel/sortorder v1.0.1 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/go-sql-driver/mysql v1.6.0 // indirect
github.com/gogo/googleapis v1.4.1 // indirect
github.com/google/certificate-transparency-go v1.1.4 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3 // indirect
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed // indirect
github.com/imdario/mergo v0.3.15 // indirect
github.com/in-toto/in-toto-golang v0.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jinzhu/gorm v1.9.2 // indirect
github.com/jinzhu/inflection v0.0.0-20180308033659-04140366298a // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.4 // indirect
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/kr/pretty v0.2.1 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/patternmatcher v0.5.0 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/user v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/opencontainers/runc v1.1.7 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.17.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/shibumi/go-pathspec v1.3.0 // indirect
github.com/theupdateframework/notary v0.7.0 // indirect
github.com/tonistiigi/fsutil v0.0.0-20230825212630-f09800878302 // indirect
github.com/spf13/viper v1.14.0 // indirect
github.com/theupdateframework/notary v0.6.1 // indirect
github.com/tonistiigi/fsutil v0.0.0-20230629203738-36ef4d8c0dbb // indirect
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea // indirect
github.com/tonistiigi/vt100 v0.0.0-20230623042737-f9a4f7ef6531 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.45.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.45.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.42.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
golang.org/x/crypto v0.17.0 // indirect
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/oauth2 v0.11.0 // indirect
golang.org/x/text v0.14.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.40.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.40.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.40.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.14.0 // indirect
go.opentelemetry.io/otel/metric v0.37.0 // indirect
go.opentelemetry.io/otel/sdk v1.14.0 // indirect
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
golang.org/x/crypto v0.2.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/oauth2 v0.5.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.14.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20231016165738-49dd2c1f3d0b // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20231016165738-49dd2c1f3d0b // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b // indirect
google.golang.org/protobuf v1.31.0 // indirect
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/dancannon/gorethink.v3 v3.0.5 // indirect
gopkg.in/fatih/pool.v2 v2.0.0 // indirect
gopkg.in/gorethink/gorethink.v3 v3.0.5 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/klog/v2 v2.90.1 // indirect

go.sum

File diff suppressed because it is too large.

View File

@@ -9,9 +9,8 @@ set -e
: "${CGO_ENABLED=0}"
: "${GO_PKG=github.com/docker/buildx}"
: "${GO_EXTRA_FLAGS=}"
: "${GO_LDFLAGS=-X ${GO_PKG}/version.Version=${VERSION} -X ${GO_PKG}/version.Revision=${REVISION} -X ${GO_PKG}/version.Package=${PACKAGE}}"
: "${GO_EXTRA_LDFLAGS=}"
set -x
CGO_ENABLED=$CGO_ENABLED go build -mod vendor -trimpath ${GO_EXTRA_FLAGS} -ldflags "${GO_LDFLAGS} ${GO_EXTRA_LDFLAGS}" -o "${DESTDIR}/docker-buildx" ./cmd/buildx
CGO_ENABLED=$CGO_ENABLED go build -mod vendor -trimpath -ldflags "${GO_LDFLAGS} ${GO_EXTRA_LDFLAGS}" -o "${DESTDIR}/docker-buildx" ./cmd/buildx

View File

@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.6
ARG GO_VERSION=1.20.6
ARG FORMATS=md,yaml
FROM golang:${GO_VERSION}-alpine AS docsgen

View File

@@ -5,7 +5,7 @@
# Copyright The Buildx Authors.
# Licensed under the Apache License, Version 2.0
ARG GO_VERSION="1.21.6"
ARG GO_VERSION="1.20.6"
ARG PROTOC_VERSION="3.11.4"
# protoc is dynamically linked to glibc so can't use alpine base

View File

@@ -1,19 +1,10 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.6
ARG XX_VERSION=1.3.0
ARG GOLANGCI_LINT_VERSION=1.54.2
ARG GO_VERSION=1.20.6
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine
FROM golang:${GO_VERSION}-alpine
RUN apk add --no-cache git gcc musl-dev
ENV GOFLAGS="-buildvcs=false"
ARG GOLANGCI_LINT_VERSION
RUN wget -O- -nv https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v${GOLANGCI_LINT_VERSION}
COPY --link --from=xx / /
RUN wget -O- -nv https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.51.1
WORKDIR /go/src/github.com/docker/buildx
ARG TARGETPLATFORM
RUN --mount=target=/go/src/github.com/docker/buildx \
--mount=target=/root/.cache,type=cache,id=lint-cache-$TARGETPLATFORM \
xx-go --wrap && \
golangci-lint run
RUN --mount=target=/go/src/github.com/docker/buildx --mount=target=/root/.cache,type=cache \
golangci-lint run

Some files were not shown because too many files have changed in this diff.