Compare commits


14 Commits

Author SHA1 Message Date
Tõnis Tiigi
788433953a Merge pull request #2333 from tonistiigi/v0.13.1-picks
[v0.13] cherry-picks for v0.13.1
2024-03-12 10:04:14 -07:00
CrazyMax
7e2460428d bake: fix output handling for push
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 47cf4a5dbe)
2024-03-12 09:35:38 -07:00
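This fix concerns how the `--push` shorthand in `docker buildx bake` interacts with `output` attributes resolved for each target. A minimal sketch of the affected usage (file contents and target name are illustrative):

```console
$ cat docker-bake.hcl
target "app" {
  tags = ["docker.io/example/app:latest"]
}
$ docker buildx bake --push app
```

With the fix, the `--push` shorthand is expected to resolve to an image output with `push=true` on the target, consistent with `docker buildx build --push`.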
CrazyMax
3490181812 tests: create remote with container helper
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit b1490ed5ce)
2024-03-12 09:35:19 -07:00
Tonis Tiigi
19dbf2f7c4 remote: fix connhelpers with custom dialer
With the new dial-stdio command, the dialer is split
from the `Client` function so that it can be accessed directly.

This breaks the custom connhelpers functionality,
as support for connhelpers is a feature of the default
dialer. If a client defines a custom dialer, then only
that dialer is used, without extra modifications. This means
that the remote driver's dialer needs to detect the
connhelpers on its own.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 8f576e5790)
2024-03-12 09:35:08 -07:00
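The remote driver is affected when its endpoint uses a connection helper scheme such as `ssh://` or `docker-container://`. An illustrative console sketch (builder name and endpoint are hypothetical):

```console
$ docker buildx create --name remote-ssh --driver remote ssh://user@build-host
$ docker buildx --builder remote-ssh build .
```

Without the fix, the remote driver's custom dialer bypasses the connhelper detection that the default dialer performs, so endpoints like the one above could not be reached.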
CrazyMax
37b7ad1465 Merge pull request #2320 from dvdksn/backport-doc-securitysandbox-link
[v0.13 backport] docs: fix link to new target in dockerfile reference
2024-03-07 10:36:12 +01:00
David Karlsson
2758919cf6 docs: fix link to new target in dockerfile reference
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit 1cc5e39cb8)
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-03-07 10:15:04 +01:00
CrazyMax
911e346501 Merge pull request #2311 from crazy-max/0.13_backport_fix-docs-release
[v0.13 backport] ci(docs-release): fix vendoring step
2024-03-06 09:19:53 +01:00
CrazyMax
46365ee32f ci(docs-release): manual trigger support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit c1dfa74b98)
2024-03-06 09:00:17 +01:00
CrazyMax
6430c9586a ci(docs-release): fix vendoring step
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 647491dd99)
2024-03-06 09:00:16 +01:00
Tõnis Tiigi
0de5f1ce3b Merge pull request #2309 from tonistiigi/v0.13.0-picks
[v0.13] cherry-picks for v0.13.0
2024-03-05 10:02:26 -08:00
Tonis Tiigi
0565a47ad4 vendor: update to buildkit v0.13.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 849456c198)
2024-03-05 09:11:17 -08:00
CrazyMax
ab350f48d2 test: multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 9a2536dd0d)
2024-03-05 09:10:54 -08:00
CrazyMax
1861c07eab build: handle push/load shorthands for multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a03263acf8)
2024-03-05 09:10:40 -08:00
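Both commits relate to the multi-exporter support introduced in v0.13, where the `--push` and `--load` shorthands have to be reconciled with explicitly requested exporters. An illustrative invocation (image name and destination path are hypothetical):

```console
$ docker buildx build \
    --output type=image,name=docker.io/example/app \
    --output type=local,dest=./out \
    --push .
```

Here `--push` is expected to set `push=true` on the image exporter while leaving the local exporter untouched.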
Tõnis Tiigi
84913655a8 Merge pull request #2300 from vvoland/vendor-moby-v26-0.13
[0.13] vendor: github.com/docker/docker v26.0.0-rc1
2024-03-04 10:04:39 -08:00
4011 changed files with 184458 additions and 371924 deletions


@@ -188,89 +188,6 @@ To generate new vendored files with go modules run:
 $ make vendor
 ```
 
-### Generate profiling data
-
-You can configure Buildx to generate [`pprof`](https://github.com/google/pprof)
-memory and CPU profiles to analyze and optimize your builds. These profiles are
-useful for identifying performance bottlenecks, detecting memory
-inefficiencies, and ensuring the program (Buildx) runs efficiently.
-
-The following environment variables control whether Buildx generates profiling
-data for builds:
-
-```console
-$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
-$ export BUILDX_MEM_PROFILE=buildx_mem.prof
-```
-
-When set, Buildx emits profiling samples for the builds to the location
-specified by the environment variable.
-
-To analyze and visualize profiling samples, you need `pprof` from the Go
-toolchain, and (optionally) GraphViz for visualization in a graphical format.
-
-To inspect profiling data with `pprof`:
-
-1. Build a local binary of Buildx from source.
-
-   ```console
-   $ docker buildx bake
-   ```
-
-   The binary gets exported to `./bin/build/buildx`.
-
-2. Run a build with the environment variables set to generate profiling data.
-
-   ```console
-   $ export BUILDX_CPU_PROFILE=buildx_cpu.prof
-   $ export BUILDX_MEM_PROFILE=buildx_mem.prof
-   $ ./bin/build/buildx bake
-   ```
-
-   This creates `buildx_cpu.prof` and `buildx_mem.prof` for the build.
-
-3. Start `pprof` and specify the filename of the profile that you want to
-   analyze.
-
-   ```console
-   $ go tool pprof buildx_cpu.prof
-   ```
-
-   This opens the `pprof` interactive console. From here, you can inspect the
-   profiling sample using various commands. For example, use the `top 10`
-   command to view the top 10 most time-consuming entries.
-
-   ```plaintext
-   (pprof) top 10
-   Showing nodes accounting for 3.04s, 91.02% of 3.34s total
-   Dropped 123 nodes (cum <= 0.02s)
-   Showing top 10 nodes out of 159
-         flat  flat%   sum%        cum   cum%
-        1.14s 34.13% 34.13%      1.14s 34.13%  syscall.syscall
-        0.91s 27.25% 61.38%      0.91s 27.25%  runtime.kevent
-        0.35s 10.48% 71.86%      0.35s 10.48%  runtime.pthread_cond_wait
-        0.22s  6.59% 78.44%      0.22s  6.59%  runtime.pthread_cond_signal
-        0.15s  4.49% 82.93%      0.15s  4.49%  runtime.usleep
-        0.10s  2.99% 85.93%      0.10s  2.99%  runtime.memclrNoHeapPointers
-        0.10s  2.99% 88.92%      0.10s  2.99%  runtime.memmove
-        0.03s   0.9% 89.82%      0.03s   0.9%  runtime.madvise
-        0.02s   0.6% 90.42%      0.02s   0.6%  runtime.(*mspan).typePointersOfUnchecked
-        0.02s   0.6% 91.02%      0.02s   0.6%  runtime.pcvalue
-   ```
-
-To view the call graph in a GUI, run `go tool pprof -http=:8081 <sample>`.
-
-> [!NOTE]
-> Requires [GraphViz](https://www.graphviz.org/) to be installed.
-
-```console
-$ go tool pprof -http=:8081 buildx_cpu.prof
-Serving web UI on http://127.0.0.1:8081
-http://127.0.0.1:8081
-```
-
-For more information about using `pprof` and how to interpret the call graph,
-refer to the [`pprof` README](https://github.com/google/pprof/blob/main/doc/README.md).
-
 ### Conventions
 
@@ -426,4 +343,4 @@ The rules:
 
 If you are having trouble getting into the mood of idiomatic Go, we recommend
 reading through [Effective Go](https://golang.org/doc/effective_go.html). The
 [Go Blog](https://blog.golang.org) is also a great resource.

.github/SECURITY.md

@@ -1,44 +1,12 @@
-# Security Policy
+# Reporting security issues
 
-The maintainers of Docker Buildx take security seriously. If you discover
-a security issue, please bring it to their attention right away!
+The project maintainers take security seriously. If you discover a security
+issue, please bring it to their attention right away!
 
-## Reporting a Vulnerability
+**Please _DO NOT_ file a public issue**, instead send your report privately to
+[security@docker.com](mailto:security@docker.com).
 
-Please **DO NOT** file a public issue, instead send your report privately
-to [security@docker.com](mailto:security@docker.com).
-
-Reporter(s) can expect a response within 72 hours, acknowledging the issue was
-received.
-
-## Review Process
-
-After receiving the report, an initial triage and technical analysis is
-performed to confirm the report and determine its scope. We may request
-additional information in this stage of the process.
-
-Once a reviewer has confirmed the relevance of the report, a draft security
-advisory will be created on GitHub. The draft advisory will be used to discuss
-the issue with maintainers, the reporter(s), and where applicable, other
-affected parties under embargo.
-
-If the vulnerability is accepted, a timeline for developing a patch, public
-disclosure, and patch release will be determined. If there is an embargo period
-on public disclosure before the patch release, the reporter(s) are expected to
-participate in the discussion of the timeline and abide by agreed upon dates
-for public disclosure.
-
-## Accreditation
-
-Security reports are greatly appreciated and we will publicly thank you,
-although we will keep your name confidential if you request it. We also like to
-send gifts - if you're into swag, make sure to let us know. We do not currently
-offer a paid security bounty program at this time.
-
-## Supported Versions
-
-Once a new feature release is cut, support for the previous feature release is
-discontinued. An exception may be made for urgent security releases that occur
-shortly after a new feature release. Buildx does not offer LTS (Long-Term Support)
-releases. Refer to the [Support Policy](https://github.com/docker/buildx/blob/master/PROJECT.md#support-policy)
-for further details.
+Security reports are greatly appreciated, and we will publicly thank you for it.
+We also like to send gifts&mdash;if you're into schwag, make sure to let
+us know. We currently do not offer a paid security bounty program, but are not
+ruling it out in the future.


@@ -11,5 +11,5 @@ updates:
       # trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
       - dependency-name: "docker/docs"
     labels:
-      - "area/dependencies"
+      - "dependencies"
       - "bot"

.github/labeler.yml

@@ -1,109 +0,0 @@
-# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
-area/project:
-  - all:
-    - changed-files:
-      - any-glob-to-any-file:
-        - .github/**
-        - LICENSE
-        - AUTHORS
-        - MAINTAINERS
-        - PROJECT.md
-        - README.md
-        - .gitignore
-        - codecov.yml
-      - all-globs-to-all-files: '!.github/workflows/*'
-
-# Add 'area/github-actions' label to changes in the .github/workflows folder
-area/ci:
-  - changed-files:
-    - any-glob-to-any-file: '.github/workflows/**'
-
-# Add 'area/bake' label to changes in the bake
-area/bake:
-  - changed-files:
-    - any-glob-to-any-file: 'bake/**'
-
-# Add 'area/bake/compose' label to changes in the bake+compose
-area/bake/compose:
-  - changed-files:
-    - any-glob-to-any-file:
-      - bake/compose.go
-      - bake/compose_test.go
-
-# Add 'area/build' label to changes in build files
-area/build:
-  - changed-files:
-    - any-glob-to-any-file: 'build/**'
-
-# Add 'area/builder' label to changes in builder files
-area/builder:
-  - changed-files:
-    - any-glob-to-any-file: 'builder/**'
-
-# Add 'area/cli' label to changes in the CLI
-area/cli:
-  - changed-files:
-    - any-glob-to-any-file:
-      - cmd/**
-      - commands/**
-
-# Add 'area/controller' label to changes in the controller
-area/controller:
-  - changed-files:
-    - any-glob-to-any-file: 'controller/**'
-
-# Add 'area/docs' label to markdown files in the docs folder
-area/docs:
-  - changed-files:
-    - any-glob-to-any-file: 'docs/**/*.md'
-
-# Add 'area/dependencies' label to changes in go dependency files
-area/dependencies:
-  - changed-files:
-    - any-glob-to-any-file:
-      - go.mod
-      - go.sum
-      - vendor/**
-
-# Add 'area/driver' label to changes in the driver folder
-area/driver:
-  - changed-files:
-    - any-glob-to-any-file: 'driver/**'
-
-# Add 'area/driver/docker' label to changes in the docker driver
-area/driver/docker:
-  - changed-files:
-    - any-glob-to-any-file: 'driver/docker/**'
-
-# Add 'area/driver/docker-container' label to changes in the docker-container driver
-area/driver/docker-container:
-  - changed-files:
-    - any-glob-to-any-file: 'driver/docker-container/**'
-
-# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
-area/driver/kubernetes:
-  - changed-files:
-    - any-glob-to-any-file: 'driver/kubernetes/**'
-
-# Add 'area/driver/remote' label to changes in the remote driver
-area/driver/remote:
-  - changed-files:
-    - any-glob-to-any-file: 'driver/remote/**'
-
-# Add 'area/hack' label to changes in the hack folder
-area/hack:
-  - changed-files:
-    - any-glob-to-any-file: 'hack/**'
-
-# Add 'area/history' label to changes in history command
-area/history:
-  - changed-files:
-    - any-glob-to-any-file: 'commands/history/**'
-
-# Add 'area/tests' label to changes in test files
-area/tests:
-  - changed-files:
-    - any-glob-to-any-file:
-      - tests/**
-      - '**/*_test.go'


@@ -1,14 +1,5 @@
 name: build
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -28,97 +19,65 @@ on:
       - 'docs/**'
 
 env:
-  SETUP_BUILDX_VERSION: "edge"
-  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
-  SCOUT_VERSION: "1.11.0"
+  BUILDX_VERSION: "latest"
+  BUILDKIT_IMAGE: "moby/buildkit:latest"
   REPO_SLUG: "docker/buildx-bin"
   DESTDIR: "./bin"
   TEST_CACHE_SCOPE: "test"
   TESTFLAGS: "-v --parallel=6 --timeout=30m"
   GOTESTSUM_FORMAT: "standard-verbose"
-  GO_VERSION: "1.23"
+  GO_VERSION: "1.21"
   GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
 
 jobs:
+  prepare-test-integration:
+    runs-on: ubuntu-22.04
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
+      -
+        name: Build
+        uses: docker/bake-action@v4
+        with:
+          targets: integration-test-base
+          set: |
+            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
+            *.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
+
   test-integration:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
+    needs:
+      - prepare-test-integration
     env:
       TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
       TEST_IMAGE_BUILD: "0"
       TEST_IMAGE_ID: "buildx-tests"
-      TEST_COVERAGE: "1"
     strategy:
       fail-fast: false
       matrix:
-        buildkit:
-          - master
-          - latest
-          - buildx-stable-1
-          - v0.20.1
-          - v0.19.0
-          - v0.18.2
         worker:
+          - docker
+          - docker\+containerd # same as docker, but with containerd snapshotter
          - docker-container
          - remote
        pkg:
          - ./tests
-        mode:
-          - ""
-          - experimental
-        include:
-          - worker: docker
-            pkg: ./tests
-          - worker: docker+containerd # same as docker, but with containerd snapshotter
-            pkg: ./tests
-          - worker: docker
-            pkg: ./tests
-            mode: experimental
-          - worker: docker+containerd # same as docker, but with containerd snapshotter
-            pkg: ./tests
-            mode: experimental
-          - worker: "docker@27.5"
-            pkg: ./tests
-          - worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
-            pkg: ./tests
-          - worker: "docker@27.5"
-            pkg: ./tests
-            mode: experimental
-          - worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
-            pkg: ./tests
-            mode: experimental
-          - worker: "docker@26.1"
-            pkg: ./tests
-          - worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
-            pkg: ./tests
-          - worker: "docker@26.1"
-            pkg: ./tests
-            mode: experimental
-          - worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
-            pkg: ./tests
-            mode: experimental
     steps:
      -
        name: Prepare
        run: |
-          echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
-          if [ -n "${{ matrix.buildkit }}" ]; then
-            echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
-          fi
-          testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
-          case "${{ matrix.worker }}" in
-            docker | docker+containerd | docker@* | docker+containerd@*)
-              echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
-              ;;
-            *)
-              echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
-              ;;
-          esac
-          if [[ "${{ matrix.worker }}" == "docker"* ]]; then
-            echo "TEST_DOCKERD=1" >> $GITHUB_ENV
-          fi
-          if [ "${{ matrix.mode }}" = "experimental" ]; then
-            echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
-          fi
+          echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
      -
        name: Checkout
        uses: actions/checkout@v4
@@ -131,16 +90,16 @@ jobs:
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Build test image
-        uses: docker/bake-action@v6
+        uses: docker/bake-action@v4
        with:
-          source: .
          targets: integration-test
          set: |
+            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
            *.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
      -
        name: Test
@@ -148,16 +107,17 @@
          ./hack/test
        env:
          TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
+          TEST_DOCKERD: "${{ startsWith(matrix.worker, 'docker') && '1' || '0' }}"
+          TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker\\+containerd') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
          TESTPKGS: "${{ matrix.pkg }}"
      -
        name: Send to Codecov
        if: always()
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@v4
        with:
          directory: ./bin/testreports
          flags: integration
          token: ${{ secrets.CODECOV_TOKEN }}
-          disable_file_fixes: true
      -
        name: Generate annotations
        if: always()
@@ -178,17 +138,12 @@
      fail-fast: false
      matrix:
        os:
-          - ubuntu-24.04
-          - macos-14
+          - ubuntu-22.04
+          - macos-12
          - windows-2022
    env:
      SKIP_INTEGRATION_TESTS: 1
    steps:
-      -
-        name: Setup Git config
-        run: |
-          git config --global core.autocrlf false
-          git config --global core.eol lf
      -
        name: Checkout
        uses: actions/checkout@v4
@@ -229,13 +184,12 @@
      -
        name: Send to Codecov
        if: always()
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@v4
        with:
          directory: ${{ env.TESTREPORTS_DIR }}
          env_vars: RUNNER_OS
          flags: unit
          token: ${{ secrets.CODECOV_TOKEN }}
-          disable_file_fixes: true
      -
        name: Generate annotations
        if: always()
@@ -250,101 +204,8 @@
          name: test-reports-${{ env.TESTREPORTS_NAME }}
          path: ${{ env.TESTREPORTS_BASEDIR }}
 
-  test-bsd-unit:
-    runs-on: ubuntu-22.04
-    continue-on-error: true
-    strategy:
-      fail-fast: false
-      matrix:
-        os:
-          - freebsd
-          - netbsd
-          - openbsd
-    steps:
-      -
-        name: Prepare
-        run: |
-          echo "VAGRANT_FILE=hack/Vagrantfile.${{ matrix.os }}" >> $GITHUB_ENV
-          # Sets semver Go version to be able to download tarball during vagrant setup
-          goVersion=$(curl --silent "https://go.dev/dl/?mode=json&include=all" | jq -r '.[].files[].version' | uniq | sed -e 's/go//' | sort -V | grep $GO_VERSION | tail -1)
-          echo "GO_VERSION=$goVersion" >> $GITHUB_ENV
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Cache Vagrant boxes
-        uses: actions/cache@v4
-        with:
-          path: ~/.vagrant.d/boxes
-          key: ${{ runner.os }}-vagrant-${{ matrix.os }}-${{ hashFiles(env.VAGRANT_FILE) }}
-          restore-keys: |
-            ${{ runner.os }}-vagrant-${{ matrix.os }}-
-      -
-        name: Install vagrant
-        run: |
-          set -x
-          wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
-          echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
-          sudo apt-get update
-          sudo apt-get install -y libvirt-dev libvirt-daemon libvirt-daemon-system vagrant vagrant-libvirt ruby-libvirt
-          sudo systemctl enable --now libvirtd
-          sudo chmod a+rw /var/run/libvirt/libvirt-sock
-          vagrant plugin install vagrant-libvirt
-          vagrant --version
-      -
-        name: Set up vagrant
-        run: |
-          ln -sf ${{ env.VAGRANT_FILE }} Vagrantfile
-          vagrant up --no-tty
-      -
-        name: Test
-        run: |
-          vagrant ssh -- "cd /vagrant; SKIP_INTEGRATION_TESTS=1 go test -mod=vendor -coverprofile=coverage.txt -covermode=atomic ${{ env.TESTFLAGS }} ./..."
-          vagrant ssh -c "sudo cat /vagrant/coverage.txt" > coverage.txt
-      -
-        name: Upload coverage
-        if: always()
-        uses: codecov/codecov-action@v5
-        with:
-          files: ./coverage.txt
-          env_vars: RUNNER_OS
-          flags: unit,${{ matrix.os }}
-          token: ${{ secrets.CODECOV_TOKEN }}
-        env:
-          RUNNER_OS: ${{ matrix.os }}
-
-  govulncheck:
-    runs-on: ubuntu-24.04
-    permissions:
-      # same as global permission
-      contents: read
-      # required to write sarif report
-      security-events: write
-    steps:
-      -
-        name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
-      -
-        name: Run
-        uses: docker/bake-action@v6
-        with:
-          targets: govulncheck
-        env:
-          GOVULNCHECK_FORMAT: sarif
-      -
-        name: Upload SARIF report
-        if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
-        uses: github/codeql-action/upload-sarif@v3
-        with:
-          sarif_file: ${{ env.DESTDIR }}/govulncheck.out
-
  prepare-binaries:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
    outputs:
      matrix: ${{ steps.platforms.outputs.matrix }}
    steps:
@@ -362,7 +223,7 @@
          echo ${{ steps.platforms.outputs.matrix }}
 
  binaries:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
    needs:
      - prepare-binaries
    strategy:
@@ -385,8 +246,8 @@
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Build
@@ -405,21 +266,15 @@
          if-no-files-found: error
 
  bin-image:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
    needs:
      - test-integration
      - test-unit
    if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
    steps:
      -
-        name: Free disk space
-        uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
-        with:
-          android: true
-          dotnet: true
-          haskell: true
-          large-packages: true
-          swap-storage: true
+        name: Checkout
+        uses: actions/checkout@v4
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v3
@@ -427,8 +282,8 @@
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Docker meta
@@ -451,11 +306,11 @@
          password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
      -
        name: Build and push image
-        uses: docker/bake-action@v6
+        uses: docker/bake-action@v4
        with:
          files: |
            ./docker-bake.hcl
-            cwd://${{ steps.meta.outputs.bake-file }}
+            ${{ steps.meta.outputs.bake-file }}
          targets: image-cross
          push: ${{ github.event_name != 'pull_request' }}
          sbom: true
@@ -463,42 +318,8 @@
            *.cache-from=type=gha,scope=bin-image
            *.cache-to=type=gha,scope=bin-image,mode=max
 
-  scout:
-    runs-on: ubuntu-24.04
-    if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
-    permissions:
-      # same as global permission
-      contents: read
-      # required to write sarif report
-      security-events: write
-    needs:
-      - bin-image
-    steps:
-      -
-        name: Login to DockerHub
-        uses: docker/login-action@v3
-        with:
-          username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
-          password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
-      -
-        name: Scout
-        id: scout
-        uses: crazy-max/.github/.github/actions/docker-scout@ccae1c98f1237b5c19e4ef77ace44fa68b3bc7e4
-        with:
-          version: ${{ env.SCOUT_VERSION }}
-          format: sarif
-          image: registry://${{ env.REPO_SLUG }}:master
-      -
-        name: Upload SARIF report
-        uses: github/codeql-action/upload-sarif@v3
-        with:
-          sarif_file: ${{ steps.scout.outputs.result-file }}
-
  release:
-    runs-on: ubuntu-24.04
-    permissions:
-      # required to create GitHub release
-      contents: write
+    runs-on: ubuntu-22.04
    needs:
      - test-integration
      - test-unit
@@ -528,9 +349,33 @@
      -
        name: GitHub Release
        if: startsWith(github.ref, 'refs/tags/v')
-        uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v2.2.1
+        uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          draft: true
          files: ${{ env.DESTDIR }}/*
+
+  buildkit-edge:
+    runs-on: ubuntu-22.04
+    continue-on-error: true
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=moby/buildkit:master
+          buildkitd-flags: --debug
+      -
+        # Just run a bake target to check everything runs fine
+        name: Build
+        uses: docker/bake-action@v4
+        with:
+          targets: binaries


@@ -1,14 +1,5 @@
 name: codeql
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 on:
   push:
     branches:
@@ -16,16 +7,17 @@ on:
       - 'v[0-9]*'
   pull_request:
 
+permissions:
+  actions: read
+  contents: read
+  security-events: write
+
 env:
-  GO_VERSION: "1.23"
+  GO_VERSION: "1.21"
 
 jobs:
   codeql:
-    runs-on: ubuntu-24.04
-    permissions:
-      contents: read
-      actions: read
-      security-events: write
+    runs-on: ubuntu-latest
     steps:
       -
         name: Checkout


@@ -1,14 +1,5 @@
 name: docs-release
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 on:
   workflow_dispatch:
     inputs:
@@ -19,17 +10,10 @@ on:
     types:
       - released
 
-env:
-  SETUP_BUILDX_VERSION: "edge"
-  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
-
 jobs:
   open-pr:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
     if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
-    permissions:
-      contents: write
-      pull-requests: write
     steps:
       -
         name: Checkout docs repo
@@ -42,6 +26,7 @@ jobs:
         name: Prepare
         run: |
           rm -rf ./data/buildx/*
+          rm -rf ./_vendor/github.com/docker/buildx
           if [ -n "${{ github.event.inputs.tag }}" ]; then
             echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
           else
@@ -50,17 +35,12 @@ jobs:
       -
         name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
-        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
      -
        name: Generate yaml
-        uses: docker/bake-action@v6
+        uses: docker/bake-action@v4
        with:
          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
          targets: update-docs
-          provenance: false
          set: |
            *.output=/tmp/buildx-docs
        env:
@@ -71,13 +51,14 @@ jobs:
          cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
      -
        name: Update vendor
-        run: |
-          make vendor
-        env:
-          VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
+        uses: docker/bake-action@v4
+        with:
+          targets: vendor
+          set: |
+            vendor.args.MODULE=github.com/docker/buildx@${{ env.RELEASE_NAME }}
      -
        name: Create PR on docs repo
-        uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
+        uses: peter-evans/create-pull-request@a4f52f8033a6168103c2538976c07b467e8163bc
        with:
          token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
          push-to-fork: docker-tools-robot/docker.github.io


@@ -3,15 +3,6 @@
 # https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
 name: docs-upstream
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -29,27 +20,23 @@ on:
       - '.github/workflows/docs-upstream.yml'
      - 'docs/**'
 
-env:
-  SETUP_BUILDX_VERSION: "edge"
-  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
-
 jobs:
  docs-yaml:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
+          version: latest
      -
        name: Build reference YAML docs
-        uses: docker/bake-action@v6
+        uses: docker/bake-action@v4
        with:
          targets: update-docs
-          provenance: false
          set: |
            *.output=/tmp/buildx-docs
            *.cache-from=type=gha,scope=docs-yaml
@@ -65,7 +52,7 @@ jobs:
          retention-days: 1
 
  validate:
-    uses: docker/docs/.github/workflows/validate-upstream.yml@main
+    uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
    needs:
      - docs-yaml
    with:


@@ -1,14 +1,5 @@
 name: e2e
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -26,25 +17,23 @@ on:
       - 'docs/**'
 
 env:
-  SETUP_BUILDX_VERSION: "edge"
-  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
   DESTDIR: "./bin"
-  K3S_VERSION: "v1.32.2+k3s1"
+  K3S_VERSION: "v1.21.2-k3s1"
 
 jobs:
   build:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-22.04
     steps:
+      - name: Checkout
+        uses: actions/checkout@v4
       -
         name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
         with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
+          version: latest
      -
        name: Build
-        uses: docker/bake-action@v6
+        uses: docker/bake-action@v4
        with:
          targets: binaries
          set: |
@@ -65,7 +54,7 @@ jobs:
          retention-days: 7
 
  driver:
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-20.04
    needs:
      - build
    strategy:
@@ -93,10 +82,6 @@ jobs:
            driver-opt: qemu.install=true
          - driver: remote
            endpoint: tcp://localhost:1234
-          - driver: docker-container
-            metadata-provenance: max
-          - driver: docker-container
-            metadata-warnings: true
        exclude:
          - driver: docker
            multi-node: mnode-true
@@ -144,18 +129,70 @@ jobs:
          else
            echo "MULTI_NODE=0" >> $GITHUB_ENV
          fi
-          if [ -n "${{ matrix.metadata-provenance }}" ]; then
-            echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
-          fi
-          if [ -n "${{ matrix.metadata-warnings }}" ]; then
-            echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
-          fi
      -
        name: Install k3s
        if: matrix.driver == 'kubernetes'
-        uses: crazy-max/.github/.github/actions/install-k3s@7730d1434364d4b9aded32735b078a7ace5ea79a
+        uses: actions/github-script@v7
        with:
-          version: ${{ env.K3S_VERSION }}
+          script: |
+            const fs = require('fs');
+            let wait = function(milliseconds) {
+              return new Promise((resolve, reject) => {
+                if (typeof(milliseconds) !== 'number') {
+                  throw new Error('milliseconds not a number');
+                }
+                setTimeout(() => resolve("done!"), milliseconds)
+              });
+            }
+            try {
+              const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
+              core.info(`storing kubeconfig in ${kubeconfig}`);
+              await exec.exec('docker', ["run", "-d",
+                "--privileged",
+                "--name=buildkit-k3s",
+                "-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
+                "-e", "K3S_KUBECONFIG_MODE=666",
+                "-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
+                "-p", "6443:6443",
+                "-p", "80:80",
+                "-p", "443:443",
+                "-p", "8080:8080",
+                "rancher/k3s:${{ env.K3S_VERSION }}", "server"
+              ]);
+              await wait(10000);
+              core.exportVariable('KUBECONFIG', kubeconfig);
+              let nodeName;
+              for (let count = 1; count <= 5; count++) {
+                try {
+                  const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
+                  nodeName = nodeNameOutput.stdout
+                } catch (error) {
+                  core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
+                } finally {
+                  if (nodeName) {
+                    break;
+                  }
+                  await wait(5000);
+                }
+              }
+              if (!nodeName) {
+                throw new Error(`Unable to resolve node name after 5 attempts.`);
+              }
+              await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
+            } catch (error) {
+              core.setFailed(error.message);
+            }
+      -
+        name: Print KUBECONFIG
+        if: matrix.driver == 'kubernetes'
+        run: |
+          yq ${{ env.KUBECONFIG }}
      -
        name: Launch remote buildkitd
        if: matrix.driver == 'remote'
@@ -177,78 +214,3 @@ jobs:
          DRIVER_OPT: ${{ matrix.driver-opt }}
          ENDPOINT: ${{ matrix.endpoint }}
          PLATFORMS: ${{ matrix.platforms }}
-
-  bake:
-    runs-on: ubuntu-24.04
-    needs:
-      - build
-    env:
-      DOCKER_BUILD_CHECKS_ANNOTATIONS: false
-      DOCKER_BUILD_SUMMARY: false
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          -
-            # https://github.com/docker/bake-action/blob/v5.11.0/.github/workflows/ci.yml#L227-L237
-            source: "https://github.com/docker/bake-action.git#v5.11.0:test/go"
-            overrides: |
-              *.output=/tmp/bake-build
-          -
-            # https://github.com/tonistiigi/xx/blob/2fc85604e7280bfb3f626569bd4c5413c43eb4af/.github/workflows/ld.yml#L90-L98
-            source: "https://github.com/tonistiigi/xx.git#2fc85604e7280bfb3f626569bd4c5413c43eb4af"
-            targets: |
-              ld64-static-tgz
-            overrides: |
-              ld64-static-tgz.output=type=local,dest=./dist
-              ld64-static-tgz.platform=linux/amd64
-              ld64-static-tgz.cache-from=type=gha,scope=xx-ld64-static-tgz
-              ld64-static-tgz.cache-to=type=gha,scope=xx-ld64-static-tgz
-          -
-            # https://github.com/moby/buildkit-bench/blob/54c194011c4fc99a94aa75d4b3d4f3ffd4c4ce27/docker-bake.hcl#L154-L160
-            source: "https://github.com/moby/buildkit-bench.git#54c194011c4fc99a94aa75d4b3d4f3ffd4c4ce27"
-            targets: |
-              tests-buildkit
-            envs: |
-              BUILDKIT_REFS=v0.18.2
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Expose GitHub Runtime
-        uses: crazy-max/ghaction-github-runtime@v3
-      -
-        name: Environment variables
-        if: matrix.envs != ''
-        run: |
-          for l in "${{ matrix.envs }}"; do
-            echo "${l?}" >> $GITHUB_ENV
-          done
-      -
-        name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-      -
-        name: Install buildx
-        uses: actions/download-artifact@v4
-        with:
-          name: binary
-          path: /home/runner/.docker/cli-plugins
-      -
-        name: Fix perms and check
-        run: |
-          chmod +x /home/runner/.docker/cli-plugins/docker-buildx
-          docker buildx version
-      -
-        name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-        with:
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
-      -
-        name: Build
-        uses: docker/bake-action@v6
-        with:
-          source: ${{ matrix.source }}
-          targets: ${{ matrix.targets }}
-          set: ${{ matrix.overrides }}


@@ -1,32 +0,0 @@
-name: labeler
-
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-on:
-  pull_request_target:
-
-jobs:
-  labeler:
-    runs-on: ubuntu-latest
-    permissions:
-      # same as global permission
-      contents: read
-      # required for writing labels
-      pull-requests: write
-    steps:
-      -
-        name: Run
-        uses: actions/labeler@v5
-        with:
-          sync-labels: true


@@ -1,14 +1,5 @@
 name: validate
 
-# Default to 'contents: read', which grants actions to read commits.
-#
-# If any permission is set, any permission not included in the list is
-# implicitly set to "none".
-#
-# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
-permissions:
-  contents: read
-
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -25,86 +16,29 @@ on:
     paths-ignore:
       - '.github/releases.json'
 
-env:
-  SETUP_BUILDX_VERSION: "edge"
-  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
-
 jobs:
-  prepare:
-    runs-on: ubuntu-24.04
-    outputs:
-      includes: ${{ steps.matrix.outputs.includes }}
+  validate:
+    runs-on: ubuntu-22.04
+    env:
+      GOLANGCI_LINT_MULTIPLATFORM: 1
+    strategy:
+      fail-fast: false
+      matrix:
+        target:
+          - lint
+          - validate-vendor
+          - validate-docs
+          - validate-generated-files
     steps:
       -
         name: Checkout
         uses: actions/checkout@v4
-      -
-        name: Matrix
-        id: matrix
-        uses: actions/github-script@v7
-        with:
-          script: |
-            let def = {};
-            await core.group(`Parsing definition`, async () => {
-              const printEnv = Object.assign({}, process.env, {
-                GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
-              });
-              const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
-                ignoreReturnCode: true,
-                env: printEnv
-              });
-              if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
-                throw new Error(resPrint.stderr);
-              }
-              def = JSON.parse(resPrint.stdout.trim());
-            });
-            await core.group(`Generating matrix`, async () => {
-              const includes = [];
-              for (const targetName of Object.keys(def.target)) {
-                const target = def.target[targetName];
-                if (target.platforms && target.platforms.length > 0) {
-                  target.platforms.forEach(platform => {
-                    includes.push({
-                      target: targetName,
-                      platform: platform
-                    });
-                  });
-                } else {
-                  includes.push({
-                    target: targetName
-                  });
-                }
-              }
-              core.info(JSON.stringify(includes, null, 2));
-              core.setOutput('includes', JSON.stringify(includes));
-            });
-
-  validate:
-    runs-on: ubuntu-24.04
-    needs:
-      - prepare
-    strategy:
-      fail-fast: false
-      matrix:
-        include: ${{ fromJson(needs.prepare.outputs.includes) }}
-    steps:
-      -
-        name: Prepare
-        run: |
-          if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
-            echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
-          fi
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
-          version: ${{ env.SETUP_BUILDX_VERSION }}
-          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
+          version: latest
      -
-        name: Validate
-        uses: docker/bake-action@v6
-        with:
-          targets: ${{ matrix.target }}
-          set: |
-            *.platform=${{ matrix.platform }}
+        name: Run
+        run: |
+          make ${{ matrix.target }}


@@ -1,102 +1,49 @@
 run:
   timeout: 30m
+  skip-files:
+    - ".*\\.pb\\.go$"
   modules-download-mode: vendor
-  # default uses Go version from the go.mod file, fallback on the env var
-  # `GOVERSION`, fallback on 1.17: https://golangci-lint.run/usage/configuration/#run-configuration
-  go: "1.23"
+
+  build-tags:
 
 linters:
   enable:
-    - bodyclose
-    - depguard
-    - forbidigo
-    - gocritic
     - gofmt
-    - goimports
-    - gosec
-    - gosimple
     - govet
+    - depguard
+    - goimports
     - ineffassign
-    - makezero
     - misspell
-    - noctx
-    - nolintlint
+    - unused
     - revive
     - staticcheck
-    - testifylint
     - typecheck
-    - unused
-    - whitespace
+    - nolintlint
+    - gosec
+    - forbidigo
   disable-all: true
 
 linters-settings:
-  gocritic:
-    disabled-checks:
-      - "ifElseChain"
-      - "assignOp"
-      - "appendAssign"
-      - "singleCaseSwitch"
-      - "exitAfterDefer" # FIXME
-  importas:
-    alias:
-      # Enforce alias to prevent it accidentally being used instead of
-      # buildkit errdefs package (or vice-versa).
-      - pkg: "github.com/containerd/errdefs"
-        alias: "cerrdefs"
-      # Use a consistent alias to prevent confusion with "github.com/moby/buildkit/client"
-      - pkg: "github.com/docker/docker/client"
-        alias: "dockerclient"
-      - pkg: "github.com/opencontainers/image-spec/specs-go/v1"
-        alias: "ocispecs"
-      - pkg: "github.com/opencontainers/go-digest"
-        alias: "digest"
-  govet:
-    enable:
-      - nilness
-      - unusedwrite
-    # enable-all: true
-    # disable:
-    #   - fieldalignment
-    #   - shadow
   depguard:
     rules:
       main:
         deny:
-          - pkg: "github.com/containerd/containerd/errdefs"
-            desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
-          - pkg: "github.com/containerd/containerd/log"
-            desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
-          - pkg: "github.com/containerd/containerd/platforms"
-            desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
+          # The io/ioutil package has been deprecated.
+          # https://go.dev/doc/go1.16#ioutil
           - pkg: "io/ioutil"
             desc: The io/ioutil package has been deprecated.
   forbidigo:
     forbid:
-      - '^context\.WithCancel(# use context\.WithCancelCause instead)?$'
-      - '^context\.WithDeadline(# use context\.WithDeadline instead)?$'
-      - '^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$'
-      - '^ctx\.Err(# use context\.Cause instead)?$'
       - '^fmt\.Errorf(# use errors\.Errorf instead)?$'
-      - '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
   gosec:
     excludes:
       - G204 # Audit use of command execution
       - G402 # TLS MinVersion too low
-      - G115 # integer overflow conversion (TODO: verify these)
     config:
       G306: "0644"
-  testifylint:
-    disable:
-      # disable rules that reduce the test condition
-      - "empty"
-      - "bool-compare"
-      - "len"
-      - "negative-positive"
 
 issues:
-  exclude-files:
-    - ".*\\.pb\\.go$"
  exclude-rules:
    - linters:
        - revive
@@ -117,6 +64,6 @@ issues:
        - revive
      text: "if-return"
 
  # show all
  max-issues-per-linter: 0
  max-same-issues: 0


@@ -1,25 +1,11 @@
 # This file lists all individuals having contributed content to the repository.
 # For how it is generated, see hack/dockerfiles/authors.Dockerfile.
 
-Batuhan Apaydın <batuhan.apaydin@trendyol.com>
-Batuhan Apaydın <batuhan.apaydin@trendyol.com> <developerguy2@gmail.com>
 CrazyMax <github@crazymax.dev>
 CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
 CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
-David Karlsson <david.karlsson@docker.com>
-David Karlsson <david.karlsson@docker.com> <35727626+dvdksn@users.noreply.github.com>
-jaihwan104 <jaihwan104@woowahan.com>
-jaihwan104 <jaihwan104@woowahan.com> <42341126+jaihwan104@users.noreply.github.com>
-Kenyon Ralph <kenyon@kenyonralph.com>
-Kenyon Ralph <kenyon@kenyonralph.com> <quic_kralph@quicinc.com>
 Sebastiaan van Stijn <github@gone.nl>
 Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
-Shaun Thompson <shaun.thompson@docker.com>
-Shaun Thompson <shaun.thompson@docker.com> <shaun.b.thompson@gmail.com>
-Silvin Lubecki <silvin.lubecki@docker.com>
-Silvin Lubecki <silvin.lubecki@docker.com> <31478878+silvin-lubecki@users.noreply.github.com>
-Talon Bowler <talon.bowler@docker.com>
-Talon Bowler <talon.bowler@docker.com> <nolat301@gmail.com>
 Tibor Vass <tibor@docker.com>
 Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
 Tõnis Tiigi <tonistiigi@gmail.com>

AUTHORS

@@ -1,112 +1,45 @@
 # This file lists all individuals having contributed content to the repository.
 # For how it is generated, see hack/dockerfiles/authors.Dockerfile.
 
-accetto <34798830+accetto@users.noreply.github.com>
 Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
-Aleksa Sarai <cyphar@cyphar.com>
 Alex Couture-Beil <alex@earthly.dev>
 Andrew Haines <andrew.haines@zencargo.com>
-Andy Caldwell <andrew.caldwell@metaswitch.com>
 Andy MacKinlay <admackin@users.noreply.github.com>
 Anthony Poschen <zanven42@gmail.com>
-Arnold Sobanski <arnold@l4g.dev>
 Artur Klauser <Artur.Klauser@computer.org>
-Avi Deitcher <avi@deitcher.net>
-Batuhan Apaydın <batuhan.apaydin@trendyol.com>
-Ben Peachey <potherca@gmail.com>
-Bertrand Paquet <bertrand.paquet@gmail.com>
+Batuhan Apaydın <developerguy2@gmail.com>
 Bin Du <bindu@microsoft.com>
 Brandon Philips <brandon@ifup.org>
 Brian Goff <cpuguy83@gmail.com>
-Bryce Lampe <bryce@pulumi.com>
-Cameron Adams <pnzreba@gmail.com>
-Christian Dupuis <cd@atomist.com>
-Cory Snider <csnider@mirantis.com>
 CrazyMax <github@crazymax.dev>
-David Gageot <david.gageot@docker.com>
-David Karlsson <david.karlsson@docker.com>
-David Scott <dave@recoil.org>
 dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
 Devin Bayer <dev@doubly.so>
 Djordje Lukic <djordje.lukic@docker.com>
-Dmitry Makovey <dmakovey@gitlab.com>
 Dmytro Makovey <dmytro.makovey@docker.com>
 Donghui Wang <977675308@qq.com>
-Doug Borg <dougborg@apple.com>
-Edgar Lee <edgarl@netflix.com>
-Eli Treuherz <et@arenko.group>
-Eliott Wiener <eliottwiener@gmail.com>
-Elran Shefer <elran.shefer@velocity.tech>
 faust <faustin@fala.red>
 Felipe Santos <felipecassiors@gmail.com>
-Felix de Souza <fdesouza@palantir.com>
 Fernando Miguel <github@FernandoMiguel.net>
 gfrancesco <gfrancesco@users.noreply.github.com>
 gracenoah <gracenoahgh@gmail.com>
-Guillaume Lours <705411+glours@users.noreply.github.com>
-guoguangwu <guoguangwu@magic-shield.com>
 Hollow Man <hollowman@hollowman.ml>
-Ian King'ori <kingorim.ian@gmail.com>
-idnandre <andre@idntimes.com>
 Ilya Dmitrichenko <errordeveloper@gmail.com>
-Isaac Gaskin <isaac.gaskin@circle.com>
 Jack Laxson <jackjrabbit@gmail.com>
-jaihwan104 <jaihwan104@woowahan.com>
 Jean-Yves Gastaud <jygastaud@gmail.com>
-Jhan S. Álvarez <51450231+yastanotheruser@users.noreply.github.com>
-Jonathan A. Sternberg <jonathan.sternberg@docker.com>
-Jonathan Piché <jpiche@coveo.com>
-Justin Chadwell <me@jedevc.com>
-Kenyon Ralph <kenyon@kenyonralph.com>
 khs1994 <khs1994@khs1994.com>
-Kijima Daigo <norimaking777@gmail.com>
-Kohei Tokunaga <ktokunaga.mail@gmail.com>
 Kotaro Adachi <k33asby@gmail.com>
-Kushagra Mansingh <12158241+kushmansingh@users.noreply.github.com>
 l00397676 <lujingxiao@huawei.com>
-Laura Brehm <laurabrehm@hey.com>
-Laurent Goderre <laurent.goderre@docker.com>
-Mark Hildreth <113933455+markhildreth-gravity@users.noreply.github.com>
-Mayeul Blanzat <mayeul.blanzat@datadoghq.com>
 Michal Augustyn <michal.augustyn@mail.com>
-Milas Bowman <milas.bowman@docker.com>
-Mitsuru Kariya <mitsuru.kariya@nttdata.com>
-Moleus <fafufuburr@gmail.com>
-Nick Santos <nick.santos@docker.com>
-Nick Sieger <nick@nicksieger.com>
-Nicolas De Loof <nicolas.deloof@gmail.com>
-Niklas Gehlen <niklas@namespacelabs.com>
 Patrick Van Stee <patrick@vanstee.me>
-Paweł Gronowski <pawel.gronowski@docker.com>
-Phong Tran <tran.pho@northeastern.edu>
-Qasim Sarfraz <qasimsarfraz@microsoft.com>
-Rob Murray <rob.murray@docker.com>
-robertlestak <robert.lestak@umusic.com>
 Saul Shanabrook <s.shanabrook@gmail.com>
-Sean P. Kane <spkane00@gmail.com>
 Sebastiaan van Stijn <github@gone.nl>
-Shaun Thompson <shaun.thompson@docker.com>
 SHIMA Tatsuya <ts1s1andn@gmail.com>
 Silvin Lubecki <silvin.lubecki@docker.com>
-Simon A. Eugster <simon.eu@gmail.com>
 Solomon Hykes <sh.github.6811@hykes.org>
-Sumner Warren <sumner.warren@gmail.com>
 Sune Keller <absukl@almbrand.dk>
-Talon Bowler <talon.bowler@docker.com>
-Tianon Gravi <admwiggin@gmail.com>
 Tibor Vass <tibor@docker.com>
-Tim Smith <tismith@rvohealth.com>
-Timofey Kirillov <timofey.kirillov@flant.com>
-Tyler Smith <tylerlwsmith@gmail.com>
 Tõnis Tiigi <tonistiigi@gmail.com>
 Ulysses Souza <ulyssessouza@gmail.com>
-Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
 Wang Jinglei <morlay.null@gmail.com>
-Wei <daviseago@gmail.com>
-Wojciech M <wmiedzybrodzki@outlook.com>
 Xiang Dai <764524258@qq.com>
-Zachary Povey <zachary.povey@autotrader.co.uk>
 zelahi <elahi.zuhayr@gmail.com>
-Zero <tobewhatwewant@gmail.com>
-zhyon404 <zhyong4@gmail.com>
-Zsolt <zsolt.szeberenyi@figured.com>


@@ -1,30 +1,17 @@
 # syntax=docker/dockerfile:1

-ARG GO_VERSION=1.23
-ARG ALPINE_VERSION=3.21
-ARG XX_VERSION=1.6.1
-
-# for testing
-ARG DOCKER_VERSION=28.0.0
-ARG DOCKER_VERSION_ALT_27=27.5.1
-ARG DOCKER_VERSION_ALT_26=26.1.3
-ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
-ARG GOTESTSUM_VERSION=v1.12.0
-ARG REGISTRY_VERSION=2.8.3
-ARG BUILDKIT_VERSION=v0.20.1
-ARG UNDOCK_VERSION=0.9.0
-
-# xx is a helper for cross-compilation
+ARG GO_VERSION=1.21
+ARG XX_VERSION=1.4.0
+ARG DOCKER_VERSION=25.0.2
+ARG GOTESTSUM_VERSION=v1.9.0
+ARG REGISTRY_VERSION=2.8.0
+ARG BUILDKIT_VERSION=v0.12.5
+
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
-FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS golatest
-FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
-FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
-FROM moby/moby-bin:$DOCKER_VERSION_ALT_27 AS docker-engine-alt27
-FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt26
-FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_27 AS docker-cli-alt27
-FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt26
-FROM registry:$REGISTRY_VERSION AS registry
-FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
-FROM crazymax/undock:$UNDOCK_VERSION AS undock
+
+FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest

 FROM golatest AS gobase
 COPY --from=xx / /
@@ -33,38 +20,32 @@ ENV GOFLAGS=-mod=vendor
 ENV CGO_ENABLED=0
 WORKDIR /src

+FROM registry:$REGISTRY_VERSION AS registry
+
+FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
+
+FROM gobase AS docker
+ARG TARGETPLATFORM
+ARG DOCKER_VERSION
+WORKDIR /opt/docker
+RUN DOCKER_ARCH=$(case ${TARGETPLATFORM:-linux/amd64} in \
+      "linux/amd64") echo "x86_64" ;; \
+      "linux/arm/v6") echo "armel" ;; \
+      "linux/arm/v7") echo "armhf" ;; \
+      "linux/arm64") echo "aarch64" ;; \
+      "linux/ppc64le") echo "ppc64le" ;; \
+      "linux/s390x") echo "s390x" ;; \
+      *) echo "" ;; esac) \
+    && echo "DOCKER_ARCH=$DOCKER_ARCH" \
+    && wget -qO- "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz" | tar xvz --strip 1
+RUN ./dockerd --version && ./containerd --version && ./ctr --version && ./runc --version
+
 FROM gobase AS gotestsum
 ARG GOTESTSUM_VERSION
-ENV GOFLAGS=""
-RUN --mount=target=/root/.cache,type=cache <<EOT
-  set -ex
-  go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
-  go install "github.com/wadey/gocovmerge@latest"
-  mkdir /out
-  /go/bin/gotestsum --version
-  mv /go/bin/gotestsum /out
-  mv /go/bin/gocovmerge /out
-EOT
-COPY --chmod=755 <<"EOF" /out/gotestsumandcover
-#!/bin/sh
-set -x
-if [ -z "$GO_TEST_COVERPROFILE" ]; then
-  exec gotestsum "$@"
-fi
-coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
-mkdir -p "$coverdir/helpers"
-gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
-ecode=$?
-go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
-gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
-mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
-rm "$coverdir/helpers-report.txt"
-for f in "$coverdir/helpers"/*; do
-  rm "$f"
-done
-rmdir "$coverdir/helpers"
-exit $ecode
-EOF
+ENV GOFLAGS=
+RUN --mount=target=/root/.cache,type=cache \
+    GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
+    /out/gotestsum --version

 FROM gobase AS buildx-version
 RUN --mount=type=bind,target=. <<EOT
@@ -76,7 +57,6 @@ EOT
 FROM gobase AS buildx-build
 ARG TARGETPLATFORM
-ARG GO_EXTRA_FLAGS
 RUN --mount=type=bind,target=. \
     --mount=type=cache,target=/root/.cache \
     --mount=type=cache,target=/go/pkg/mod \
@@ -84,7 +64,6 @@ RUN --mount=type=bind,target=. \
   set -e
   xx-go --wrap
   DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
-  file /usr/bin/docker-buildx
  xx-verify --static /usr/bin/docker-buildx
 EOT
@@ -103,10 +82,7 @@ FROM scratch AS binaries-unix
 COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx

 FROM binaries-unix AS binaries-darwin
-FROM binaries-unix AS binaries-freebsd
 FROM binaries-unix AS binaries-linux
-FROM binaries-unix AS binaries-netbsd
-FROM binaries-unix AS binaries-openbsd

 FROM scratch AS binaries-windows
 COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
@@ -127,25 +103,18 @@ RUN apk add --no-cache \
     shadow-uidmap \
     xfsprogs \
     xz
-COPY --link --from=gotestsum /out /usr/bin/
+COPY --link --from=gotestsum /out/gotestsum /usr/bin/
 COPY --link --from=registry /bin/registry /usr/bin/
-COPY --link --from=docker-engine / /usr/bin/
-COPY --link --from=docker-cli / /usr/bin/
-COPY --link --from=docker-engine-alt27 / /opt/docker-alt-27/
-COPY --link --from=docker-engine-alt26 / /opt/docker-alt-26/
-COPY --link --from=docker-cli-alt27 / /opt/docker-alt-27/
-COPY --link --from=docker-cli-alt26 / /opt/docker-alt-26/
+COPY --link --from=docker /opt/docker/* /usr/bin/
 COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
 COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
-COPY --link --from=undock /usr/local/bin/undock /usr/bin/
 COPY --link --from=binaries /buildx /usr/bin/
-ENV TEST_DOCKER_EXTRA="docker@27.5=/opt/docker-alt-27,docker@26.1=/opt/docker-alt-26"

 FROM integration-test-base AS integration-test
 COPY . .

 # Release
-FROM --platform=$BUILDPLATFORM alpine:${ALPINE_VERSION} AS releaser
+FROM --platform=$BUILDPLATFORM alpine AS releaser
 WORKDIR /work
 ARG TARGETPLATFORM
 RUN --mount=from=binaries \
@@ -160,7 +129,7 @@ COPY --from=releaser /out/ /
 # Shell
 FROM docker:$DOCKER_VERSION AS dockerd-release
-FROM alpine:${ALPINE_VERSION} AS shell
+FROM alpine AS shell
 RUN apk add --no-cache iptables tmux git vim less openssh
 RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
 COPY ./hack/demo-env/entrypoint.sh /usr/local/bin


@@ -153,7 +153,6 @@ made through a pull request.
 		"akihirosuda",
 		"crazy-max",
 		"jedevc",
-		"jsternberg",
 		"tiborvass",
 		"tonistiigi",
 	]
@@ -195,11 +194,6 @@ made through a pull request.
 	Email = "me@jedevc.com"
 	GitHub = "jedevc"

-	[people.jsternberg]
-	Name = "Jonathan Sternberg"
-	Email = "jonathan.sternberg@docker.com"
-	GitHub = "jsternberg"
-
 	[people.thajeztah]
 	Name = "Sebastiaan van Stijn"
 	Email = "github@gone.nl"


@@ -8,8 +8,6 @@ endif

 export BUILDX_CMD ?= docker buildx

-BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
-
 .PHONY: all
 all: binaries
@@ -21,9 +19,13 @@ build:
 shell:
 	./hack/shell

-.PHONY: $(BAKE_TARGETS)
-$(BAKE_TARGETS):
-	$(BUILDX_CMD) bake $@
+.PHONY: binaries
+binaries:
+	$(BUILDX_CMD) bake binaries
+
+.PHONY: binaries-cross
+binaries-cross:
+	$(BUILDX_CMD) bake binaries-cross

 .PHONY: install
 install: binaries
@@ -37,6 +39,10 @@ release:
 .PHONY: validate-all
 validate-all: lint test validate-vendor validate-docs validate-generated-files

+.PHONY: lint
+lint:
+	$(BUILDX_CMD) bake lint
+
 .PHONY: test
 test:
 	./hack/test
@@ -49,6 +55,22 @@ test-unit:
 test-integration:
 	TESTPKGS=./tests ./hack/test

+.PHONY: validate-vendor
+validate-vendor:
+	$(BUILDX_CMD) bake validate-vendor
+
+.PHONY: validate-docs
+validate-docs:
+	$(BUILDX_CMD) bake validate-docs
+
+.PHONY: validate-authors
+validate-authors:
+	$(BUILDX_CMD) bake validate-authors
+
+.PHONY: validate-generated-files
+validate-generated-files:
+	$(BUILDX_CMD) bake validate-generated-files
+
 .PHONY: test-driver
 test-driver:
 	./hack/test-driver


@@ -1,453 +0,0 @@
# Project processing guide <!-- omit from toc -->
- [Project scope](#project-scope)
- [Labels](#labels)
  - [Global](#global)
  - [`area/`](#area)
  - [`exp/`](#exp)
  - [`impact/`](#impact)
  - [`kind/`](#kind)
  - [`needs/`](#needs)
  - [`priority/`](#priority)
  - [`status/`](#status)
- [Types of releases](#types-of-releases)
  - [Feature releases](#feature-releases)
    - [Release Candidates](#release-candidates)
    - [Support Policy](#support-policy)
    - [Contributing to Releases](#contributing-to-releases)
  - [Patch releases](#patch-releases)
- [Milestones](#milestones)
- [Triage process](#triage-process)
  - [Verify essential information](#verify-essential-information)
  - [Classify the issue](#classify-the-issue)
- [Prioritization guidelines for `kind/bug`](#prioritization-guidelines-for-kindbug)
- [Issue lifecycle](#issue-lifecycle)
  - [Examples](#examples)
    - [Submitting a bug](#submitting-a-bug)
- [Pull request review process](#pull-request-review-process)
- [Handling stalled issues and pull requests](#handling-stalled-issues-and-pull-requests)
- [Moving to a discussion](#moving-to-a-discussion)
- [Workflow automation](#workflow-automation)
  - [Exempting an issue/PR from stale bot processing](#exempting-an-issuepr-from-stale-bot-processing)
- [Updating dependencies](#updating-dependencies)
---
## Project scope
**Docker Buildx** is a Docker CLI plugin designed to extend build capabilities using BuildKit. It provides advanced features for building container images, supporting multiple builder instances, multi-node builds, and high-level build constructs. Buildx enhances the Docker build process, making it more efficient and flexible, and is compatible with both Docker and Kubernetes environments. Key features include:
- **Familiar user experience:** Buildx offers a user experience similar to the legacy `docker build`, ensuring a smooth transition from legacy commands.
- **Full BuildKit capabilities:** Leverage the full feature set of [`moby/buildkit`](https://github.com/moby/buildkit) when using the container driver.
- **Multiple builder instances:** Supports the use of multiple builder instances, allowing concurrent builds and effective management and monitoring of these builders.
- **Multi-node builds:** Use multiple nodes to build cross-platform images.
- **Compose integration:** Build complex, multi-service files as defined in compose.
- **High-level build constructs via `bake`:** Introduces high-level build constructs for more complex build workflows.
- **In-container driver support:** Support in-container drivers for both Docker and Kubernetes environments to provide isolation and security.
## Labels
Below are common groups, labels, and their intended usage to support issues, pull requests, and discussion processing.
### Global
General attributes that can apply to nearly any issue or pull request.
| Label | Applies to | Description |
| ------------------- | ----------- | ------------------------------------------------------------------------- |
| `bot` | Issues, PRs | Created by a bot |
| `good first issue`  | Issues      | Suitable for first-time contributors                                      |
| `help wanted` | Issues, PRs | Assistance requested |
| `lgtm` | PRs | “Looks good to me” approval |
| `stale` | Issues, PRs | The issue/PR has not had activity for a while |
| `rotten` | Issues, PRs | The issue/PR has not had activity since being marked stale and was closed |
| `frozen` | Issues, PRs | The issue/PR should be skipped by the stale-bot |
| `dco/no` | PRs | The PR is missing a developer certificate of origin sign-off |
### `area/`
Area or component of the project affected. Please note that the table below may not be inclusive of all current options.
| Label | Applies to | Description |
| ------------------------------ | ---------- | -------------------------- |
| `area/bake` | Any | `bake` |
| `area/bake/compose` | Any | `bake/compose` |
| `area/build` | Any | `build` |
| `area/builder` | Any | `builder` |
| `area/buildkit` | Any | Relates to `moby/buildkit` |
| `area/cache` | Any | `cache` |
| `area/checks` | Any | `checks` |
| `area/ci` | Any | Project CI |
| `area/cli` | Any | `cli` |
| `area/controller` | Any | `controller` |
| `area/debug` | Any | `debug` |
| `area/dependencies` | Any | Project dependencies |
| `area/dockerfile` | Any | `dockerfile` |
| `area/docs` | Any | `docs` |
| `area/driver` | Any | `driver` |
| `area/driver/docker` | Any | `driver/docker` |
| `area/driver/docker-container` | Any | `driver/docker-container` |
| `area/driver/kubernetes` | Any | `driver/kubernetes` |
| `area/driver/remote` | Any | `driver/remote` |
| `area/feature-parity` | Any | `feature-parity` |
| `area/github-actions` | Any | `github-actions` |
| `area/hack` | Any | Project hack/support |
| `area/imagetools` | Any | `imagetools` |
| `area/metrics` | Any | `metrics` |
| `area/moby` | Any | Relates to `moby/moby` |
| `area/project` | Any | Project support |
| `area/qemu` | Any | `qemu` |
| `area/tests` | Any | Project testing |
| `area/windows` | Any | `windows` |
### `exp/`
Estimated experience level to complete the item
| Label | Applies to | Description |
| ------------------ | ---------- | ------------------------------------------------------------------------------- |
| `exp/beginner` | Issue | Suitable for contributors new to the project or technology stack |
| `exp/intermediate` | Issue | Requires some familiarity with the project and technology |
| `exp/expert` | Issue | Requires deep understanding and advanced skills with the project and technology |
### `impact/`
Potential impact areas of the issue or pull request.
| Label | Applies to | Description |
| -------------------- | ---------- | -------------------------------------------------- |
| `impact/breaking` | PR | Change is API-breaking |
| `impact/changelog` | PR | When complete, the item should be in the changelog |
| `impact/deprecation` | PR | Change is a deprecation of a feature |
### `kind/`
The type of issue, pull request or discussion
| Label | Applies to | Description |
| ------------------ | ----------------- | ------------------------------------------------------- |
| `kind/bug` | Issue, PR | Confirmed bug |
| `kind/chore` | Issue, PR | Project support tasks |
| `kind/docs` | Issue, PR | Additions or modifications to the documentation |
| `kind/duplicate` | Any | Duplicate of another item |
| `kind/enhancement` | Any | Enhancement of an existing feature |
| `kind/feature` | Any | A brand new feature |
| `kind/maybe-bug` | Issue, PR | Unconfirmed bug, turns into kind/bug when confirmed |
| `kind/proposal` | Issue, Discussion | A proposed major change |
| `kind/refactor` | Issue, PR | Refactor of existing code |
| `kind/support` | Any | A question, discussion, or other user support item |
| `kind/tests` | Issue, PR | Additions or modifications to the project testing suite |
### `needs/`
Actions or missing requirements needed by the issue or pull request.
| Label | Applies to | Description |
| --------------------------- | ---------- | ----------------------------------------------------- |
| `needs/assignee` | Issue, PR | Needs an assignee |
| `needs/code-review` | PR | Needs review of code |
| `needs/design-review` | Issue, PR | Needs review of design |
| `needs/docs-review` | Issue, PR | Needs review by the documentation team |
| `needs/docs-update` | Issue, PR | Needs an update to the docs |
| `needs/follow-on-work` | Issue, PR | Needs follow-on work/PR |
| `needs/issue` | PR | Needs an issue |
| `needs/maintainer-decision` | Issue, PR | Needs maintainer discussion/decision before advancing |
| `needs/milestone` | Issue, PR | Needs milestone assignment |
| `needs/more-info` | Any | Needs more information from the author |
| `needs/more-investigation` | Issue, PR | Needs further investigation |
| `needs/priority` | Issue, PR | Needs priority assignment |
| `needs/pull-request` | Issue | Needs a pull request |
| `needs/rebase` | PR | Needs rebase to target branch |
| `needs/reproduction` | Issue, PR | Needs reproduction steps |
### `priority/`
Level of urgency of a `kind/bug` issue or pull request.
| Label | Applies to | Description |
| ------------- | ---------- | ----------------------------------------------------------------------- |
| `priority/P0` | Issue, PR | Urgent: Security, critical bugs, blocking issues. |
| `priority/P1` | Issue, PR | Important: This is a top priority and a must-have for the next release. |
| `priority/P2` | Issue, PR | Normal: Default priority |
### `status/`
Current lifecycle state of the issue or pull request.
| Label | Applies to | Description |
| --------------------- | ---------- | ---------------------------------------------------------------------- |
| `status/accepted` | Issue, PR | The issue has been reviewed and accepted for implementation |
| `status/active` | PR | The PR is actively being worked on by a maintainer or community member |
| `status/blocked` | Issue, PR | The issue/PR is blocked from advancing to another status |
| `status/do-not-merge` | PR | Should not be merged pending further review or changes |
| `status/transfer` | Any | Transferred to another project |
| `status/triage` | Any | The item needs to be sorted by maintainers |
| `status/wontfix` | Issue, PR | The issue/PR will not be fixed or addressed as described |
## Types of releases
This project has feature releases, patch releases, and security releases.
### Feature releases
Feature releases are made from the development branch; a release branch is then cut for future patch releases, which may also happen during the code freeze period.
#### Release Candidates
Users can expect 2-3 release candidate (RC) test releases prior to a feature release. The first RC is typically released about one to two weeks before the final release.
#### Support Policy
Once a new feature release is cut, support for the previous feature release is discontinued. An exception may be made for urgent security releases that occur shortly after a new feature release. Buildx does not offer LTS (Long-Term Support) releases.
#### Contributing to Releases
Anyone can request that an issue or PR be included in the next feature or patch release milestone, provided it meets the necessary requirements.
### Patch releases
Patch releases should only include the most critical patches. Stability is vital, so everyone should always use the latest patch release.
If a fix is needed but does not qualify for a patch release because of its code size or other criteria that make it too unpredictable, we will prioritize cutting a new feature release sooner rather than making an exception for backporting.
The following PRs are included in patch releases:
- `priority/P0` fixes
- `priority/P1` fixes, assuming maintainers don't object because of the patch size
- `priority/P2` fixes, only if (both required)
  - proposed by a maintainer
  - the patch is trivial and self-contained
- Documentation-only patches
- Vendored dependency updates, only if:
  - fixing a (qualifying) bug or security issue in Buildx
  - the patch is small; otherwise, a forked version of the dependency with only the required patches
New features do not qualify for a patch release.
## Milestones
Milestones are used to help identify what releases a contribution will be in.
- The `v0.next` milestone collects unblocked items planned for the next 2-3 feature releases but not yet assigned to a specific version milestone.
- The `v0.backlog` milestone gathers all triaged items considered for the long-term (beyond the next 3 feature releases) or currently unfit for a future release due to certain conditions. These items may be blocked and need to be unblocked before progressing.
## Triage process
Triage provides an important way to contribute to an open-source project; it applies to pull requests as well when they are submitted without an accompanying issue. Triage helps ensure work items are resolved quickly by:
- Ensuring the issue's intent and purpose are described precisely. This is necessary because it can be difficult for an issue author to convey how an end user experiences a problem and what actions they took to arrive at the problem.
- Giving a contributor the information they need before they commit to resolving an issue.
- Lowering the issue count by preventing duplicate issues.
- Streamlining the development process by preventing duplicate discussions.
If you don't have time to code, consider helping with triage. The community will thank you for saving them time by spending some of yours. The same basic process should be applied upon receipt of a new issue.
1. Verify essential information
2. Classify the issue
3. Prioritize the issue
### Verify essential information
Before advancing the triage process, ensure the issue contains all necessary information to be properly understood and assessed. The required information may vary by issue type, but typically includes the system environment, version numbers, reproduction steps, expected outcomes, and actual results.
- **Exercising Judgment**: Use your best judgment to assess the issue description's completeness.
- **Communicating Needs**: If the information provided is insufficient, kindly request additional details from the author. Explain that this information is crucial for clarity and resolution of the issue, and apply the `needs/more-info` label to indicate that a response from the author is required.
### Classify the issue
An issue will typically have multiple labels. These are used to help communicate key information about context, requirements, and status. At a minimum, a properly classified issue should have:
- (Required) One or more [`area/*`](#area) labels
- (Required) One [`kind/*`](#kind) label to indicate the type of issue
- (Required if `kind/bug`) A [`priority/*`](#priority) label
When assigning a decision the following labels should be present:
- (Required) One [`status/*`](#status) label to indicate lifecycle status
Additional labels can provide more clarity:
- Zero or more [`needs/*`](#needs) labels to indicate missing items
- Zero or more [`impact/*`](#impact) labels
- One [`exp/*`](#exp) label
## Prioritization guidelines for `kind/bug`
When an issue or pull request of `kind/bug` is correctly categorized and attached to a milestone, the labels indicate the urgency with which it should be completed.
**priority/P0**
Fixing this item is the highest priority. A patch release will follow as soon as a patch is available and verified. This level is used exclusively for bugs.
Examples:
- Regression in a critical code path
- Panic in a critical code path
- Corruption in a critical code path or in the rest of the system
- A leaked zero-day critical security vulnerability
**priority/P1**
Items with this label should be fixed with high priority and almost always included in a patch release. Unless waiting for another issue, patch releases should happen within a week. This level is not used for features or enhancements.
Examples:
- Any regression, panic
- Measurable performance regression
- A major bug in a new feature in the latest release
- Incompatibility with upgraded external dependency
**priority/P2**
This is the default priority and is implied in the absence of a `priority/` label. Bugs with this priority should be included in the next feature release but may land in a patch release if they are ready and unlikely to impact other functionality adversely. Non-bug issues with this priority should also be included in the next feature release if they are available and ready.
Examples:
- Confirmed bugs
- Bugs in non-default configurations
- Most enhancements
## Issue lifecycle
```mermaid
flowchart LR
create([New issue]) --> triage
subgraph triage[Triage Loop]
review[Review]
end
subgraph decision[Decision]
accept[Accept]
close[Close]
end
triage -- if accepted --> accept[Assign status, milestone]
triage -- if rejected --> close[Assign status, close issue]
```
### Examples
#### Submitting a bug
To help illustrate the issue life cycle, let's walk through submitting an issue as a potential bug in CI that enters a feedback loop and is eventually accepted as P2 priority and placed on the backlog.
```mermaid
flowchart LR
new([New issue])
subgraph triage[Triage]
direction LR
create["Action: Submit issue via Bug form\nLabels: kind/maybe-bug, status/triage"]
style create text-align:left
subgraph review[Review]
direction TB
classify["Action: Maintainer reviews issue, requests more info\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style classify text-align:left
update["Action: Author updates issue\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style update text-align:left
classify --> update
update --> classify
end
create --> review
end
subgraph decision[Decision]
accept["Action: Maintainer reviews updates, accepts, assigns milestone\nLabels: kind/bug, priority/P2, status/accepted, area/*, impact/*"]
style accept text-align: left
end
new --> triage
triage --> decision
```
## Pull request review process
A thorough and timely review process for pull requests (PRs) is crucial for maintaining the integrity and quality of the project while fostering a collaborative environment.
- **Labeling**: Most labels should be inherited from a linked issue. If no issue is linked, an extended review process may be required.
- **Continuous Integration**: With few exceptions, it is crucial that all Continuous Integration (CI) workflows pass successfully.
- **Draft Status**: Incomplete or long-running PRs should be placed in "Draft" status. They may revert to "Draft" status upon initial review if significant rework is required.
```mermaid
flowchart LR
triage([Triage])
draft[Draft PR]
review[PR Review]
closed{{Close PR}}
merge{{Merge PR}}
subgraph feedback1[Feedback Loop]
draft
end
subgraph feedback2[Feedback Loop]
review
end
triage --> draft
draft --> review
review --> closed
review --> draft
review --> merge
```
## Handling stalled issues and pull requests
Unfortunately, some issues or pull requests can remain inactive for extended periods. To mitigate this, automation is employed to prompt both the author and maintainers, ensuring that all contributions receive appropriate attention.
**For Authors:**
- **Closure of Inactive Items**: If your issue or PR becomes irrelevant or is no longer needed, please close it to help keep the project clean.
- **Prompt Responses**: If additional information is requested, please respond promptly to facilitate progress.
**For Maintainers:**
- **Timely Responses**: Endeavor to address issues and PRs within a reasonable timeframe to keep the community actively engaged.
- **Engagement with Stale Issues**: If an issue becomes stale due to maintainer inaction, re-engage with the author to reassess and revitalize the discussion.
**Stale and Rotten Policy:**
- An issue or PR will be labeled as **`stale`** after 14 calendar days of inactivity. If it remains inactive for another 30 days, it will be labeled as **`rotten`** and closed.
- Authors whose issues or PRs have been closed are welcome to re-open them or create new ones and link to the original.
**Skipping Stale Processing:**
- To prevent an issue or PR from being marked as stale, label it as **`frozen`**.
**Exceptions to Stale Processing:**
- Issues or PRs marked as **`frozen`**.
- Issues or PRs assigned to a milestone.
## Moving to a discussion
Sometimes, an issue or pull request may not be the appropriate medium for what is essentially a discussion. In such cases, the issue or PR will either be converted to a discussion or a new discussion will be created. The original item will then be labeled appropriately (**`kind/discussion`** or **`kind/question`**) and closed.
If you believe this conversion was made in error, please express your concerns in the new discussion thread. If necessary, a reversal to the original issue or PR format can be facilitated.
## Workflow automation
To help expedite common operations, avoid errors, and reduce toil, some workflow automation is used by the project. This can include:
- Stale issue or pull request processing
- Auto-labeling actions
- Auto-response actions
- Label carry over from issue to pull request
### Exempting an issue/PR from stale bot processing
The stale item handling is configured in the [repository](link-to-config-file). To exempt an issue or PR from stale processing you can:
- Add the item to a milestone
- Add the `frozen` label to the item
## Updating dependencies
- **Runtime Dependencies**: Use the latest stable release available when the first Release Candidate (RC) of a new feature release is cut. For patch releases, update to the latest corresponding patch release of the dependency.
- **Other Dependencies**: Always permitted to update to the latest patch release in the development branch. Updates to a new feature release require justification, unless the dependency is outdated. Prefer tagged versions of dependencies unless a specific untagged commit is needed. Go modules should specify the lowest compatible version; there is no requirement to update all dependencies to their latest versions before cutting a new Buildx feature release.
- **Patch Releases**: Vendored dependency updates are considered for patch releases, except in the rare cases specified previously.
- **Security Considerations**: A security scanner report indicating a non-exploitable issue via Buildx does not justify backports.


@@ -56,7 +56,8 @@ For more information on how to use Buildx, see
 Using `buildx` with Docker requires Docker engine 19.03 or newer.

-> [!WARNING]
+> **Warning**
+>
 > Using an incompatible version of Docker may result in unexpected behavior,
 > and will likely cause issues, especially when using Buildx builders with more
 > recent versions of BuildKit.
@@ -74,7 +75,8 @@ Docker Engine package repositories contain Docker Buildx packages when installed
 ## Manual download

-> [!IMPORTANT]
+> **Important**
+>
 > This section is for unattended installation of the buildx component. These
 > instructions are mostly suitable for testing purposes. We do not recommend
 > installing buildx using manual download in production environments as they
@@ -105,7 +107,8 @@ On Windows:
 * `C:\ProgramData\Docker\cli-plugins`
 * `C:\Program Files\Docker\cli-plugins`

-> [!NOTE]
+> **Note**
+>
 > On Unix environments, it may also be necessary to make it executable with `chmod +x`:
 > ```shell
 > $ chmod +x ~/.docker/cli-plugins/docker-buildx
@@ -184,12 +187,12 @@ through various "drivers". Each driver defines how and where a build should
 run, and have different feature sets.

 We currently support the following drivers:
-- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
+- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `remote` driver ([guide](docs/manuals/drivers/remote.md))

-For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
+For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).

 ## Working with builder instances

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -5,14 +5,11 @@ import (
 	"fmt"
 	"os"
 	"path/filepath"
-	"slices"
 	"strings"

-	"github.com/compose-spec/compose-go/v2/consts"
 	"github.com/compose-spec/compose-go/v2/dotenv"
 	"github.com/compose-spec/compose-go/v2/loader"
 	composetypes "github.com/compose-spec/compose-go/v2/types"
-	"github.com/docker/buildx/util/buildflags"
 	dockeropts "github.com/docker/cli/opts"
 	"github.com/docker/go-units"
 	"github.com/pkg/errors"
@@ -42,11 +39,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 		ConfigFiles: cfgs,
 		Environment: envs,
 	}, func(options *loader.Options) {
-		projectName := "bake"
-		if v, ok := envs[consts.ComposeProjectName]; ok && v != "" {
-			projectName = v
-		}
-		options.SetProjectName(projectName, false)
+		options.SetProjectName("bake", false)
 		options.SkipNormalization = true
 		options.Profiles = []string{"*"}
 	})
@@ -103,12 +96,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				shmSize = &shmSizeStr
 			}

-			var networkModeP *string
-			if s.Build.Network != "" {
-				networkMode := s.Build.Network
-				networkModeP = &networkMode
-			}
-
 			var ulimits []string
 			if s.Build.Ulimits != nil {
 				for n, u := range s.Build.Ulimits {
@@ -120,16 +107,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				}
 			}

-			var ssh []*buildflags.SSH
-			for _, bkey := range s.Build.SSH {
-				sshkey := composeToBuildkitSSH(bkey)
-				ssh = append(ssh, sshkey)
-			}
-			slices.SortFunc(ssh, func(a, b *buildflags.SSH) int {
-				return a.Less(b)
-			})
-
-			var secrets []*buildflags.Secret
+			var secrets []string
 			for _, bs := range s.Build.Secrets {
 				secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
 				if err != nil {
@@ -145,16 +123,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				labels[k] = &v
 			}

-			cacheFrom, err := buildflags.ParseCacheEntry(s.Build.CacheFrom)
-			if err != nil {
-				return nil, err
-			}
-
-			cacheTo, err := buildflags.ParseCacheEntry(s.Build.CacheTo)
-			if err != nil {
-				return nil, err
-			}
-
 			g.Targets = append(g.Targets, targetName)
 			t := &Target{
 				Name: targetName,
@@ -171,10 +139,9 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 					val, ok := cfg.Environment[val]
 					return val, ok
 				})),
-				CacheFrom:   cacheFrom,
-				CacheTo:     cacheTo,
-				NetworkMode: networkModeP,
-				SSH:         ssh,
+				CacheFrom:   s.Build.CacheFrom,
+				CacheTo:     s.Build.CacheTo,
+				NetworkMode: &s.Build.Network,
 				Secrets:     secrets,
 				ShmSize:     shmSize,
 				Ulimits:     ulimits,
@@ -192,6 +159,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 			c.Targets = append(c.Targets, t)
 		}
+
 		c.Groups = append(c.Groups, g)
 	}

 	return &c, nil
@@ -307,15 +275,13 @@ type xbake struct {
 	NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
 	Contexts      stringMap   `yaml:"contexts,omitempty"`
 	// don't forget to update documentation if you add a new field:
-	// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
+	// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
 }

-type (
-	stringMap   map[string]string
-	stringArray []string
-)
+type stringMap map[string]string
+type stringArray []string

-func (sa *stringArray) UnmarshalYAML(unmarshal func(any) error) error {
+func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	var multi []string
 	err := unmarshal(&multi)
 	if err != nil {
@@ -332,7 +298,7 @@ func (sa *stringArray) UnmarshalYAML(unmarshal func(any) error) error {
 // composeExtTarget converts Compose build extension x-bake to bake Target
 // https://github.com/compose-spec/compose-spec/blob/master/spec.md#extension
-func (t *Target) composeExtTarget(exts map[string]any) error {
+func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 	var xb xbake

 	ext, ok := exts["x-bake"]
@@ -349,45 +315,22 @@ func (t *Target) composeExtTarget(exts map[string]any) error {
 		t.Tags = dedupSlice(append(t.Tags, xb.Tags...))
 	}
 	if len(xb.CacheFrom) > 0 {
-		cacheFrom, err := buildflags.ParseCacheEntry(xb.CacheFrom)
-		if err != nil {
-			return err
-		}
-		t.CacheFrom = t.CacheFrom.Merge(cacheFrom)
+		t.CacheFrom = dedupSlice(append(t.CacheFrom, xb.CacheFrom...))
 	}
 	if len(xb.CacheTo) > 0 {
-		cacheTo, err := buildflags.ParseCacheEntry(xb.CacheTo)
-		if err != nil {
-			return err
-		}
-		t.CacheTo = t.CacheTo.Merge(cacheTo)
+		t.CacheTo = dedupSlice(append(t.CacheTo, xb.CacheTo...))
 	}
 	if len(xb.Secrets) > 0 {
-		secrets, err := parseArrValue[buildflags.Secret](xb.Secrets)
-		if err != nil {
-			return err
-		}
-		t.Secrets = t.Secrets.Merge(secrets)
+		t.Secrets = dedupSlice(append(t.Secrets, xb.Secrets...))
 	}
 	if len(xb.SSH) > 0 {
-		ssh, err := parseArrValue[buildflags.SSH](xb.SSH)
-		if err != nil {
-			return err
-		}
-		t.SSH = t.SSH.Merge(ssh)
-		slices.SortFunc(t.SSH, func(a, b *buildflags.SSH) int {
-			return a.Less(b)
-		})
+		t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
 	}
 	if len(xb.Platforms) > 0 {
 		t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
 	}
 	if len(xb.Outputs) > 0 {
-		outputs, err := parseArrValue[buildflags.ExportEntry](xb.Outputs)
-		if err != nil {
-			return err
-		}
-		t.Outputs = t.Outputs.Merge(outputs)
+		t.Outputs = dedupSlice(append(t.Outputs, xb.Outputs...))
 	}
 	if xb.Pull != nil {
 		t.Pull = xb.Pull
@@ -407,30 +350,21 @@ func (t *Target) composeExtTarget(exts map[string]any) error {
 // composeToBuildkitSecret converts secret from compose format to buildkit's
 // csv format.
-func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (*buildflags.Secret, error) {
+func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
 	if psecret.External {
-		return nil, errors.Errorf("unsupported external secret %s", psecret.Name)
+		return "", errors.Errorf("unsupported external secret %s", psecret.Name)
 	}

-	secret := &buildflags.Secret{}
+	var bkattrs []string
 	if inp.Source != "" {
-		secret.ID = inp.Source
+		bkattrs = append(bkattrs, "id="+inp.Source)
 	}
 	if psecret.File != "" {
-		secret.FilePath = psecret.File
+		bkattrs = append(bkattrs, "src="+psecret.File)
 	}
 	if psecret.Environment != "" {
-		secret.Env = psecret.Environment
+		bkattrs = append(bkattrs, "env="+psecret.Environment)
 	}
-	return secret, nil
-}
-
-// composeToBuildkitSSH converts secret from compose format to buildkit's
-// csv format.
-func composeToBuildkitSSH(sshKey composetypes.SSHKey) *buildflags.SSH {
-	bkssh := &buildflags.SSH{ID: sshKey.ID}
-	if sshKey.Path != "" {
-		bkssh.Paths = []string{sshKey.Path}
-	}
-	return bkssh
+	return strings.Join(bkattrs, ","), nil
 }
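To make the secret-handling change above concrete: the v0.13 side converts each Compose secret into BuildKit's csv attribute string rather than a typed `buildflags.Secret`. Below is a minimal, runnable sketch of that conversion, using hypothetical stand-in structs (`serviceSecret`, `secretConfig`) in place of the real `composetypes` definitions:

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-ins for composetypes.ServiceSecretConfig and composetypes.SecretConfig;
// only the fields the diff actually touches are modeled here.
type serviceSecret struct{ Source string }
type secretConfig struct{ File, Environment string }

// toBuildkitCSV mirrors the v0.13-side logic: collect id/src/env attributes
// and join them into one comma-separated value.
func toBuildkitCSV(inp serviceSecret, psecret secretConfig) string {
	var bkattrs []string
	if inp.Source != "" {
		bkattrs = append(bkattrs, "id="+inp.Source)
	}
	if psecret.File != "" {
		bkattrs = append(bkattrs, "src="+psecret.File)
	}
	if psecret.Environment != "" {
		bkattrs = append(bkattrs, "env="+psecret.Environment)
	}
	return strings.Join(bkattrs, ",")
}

func main() {
	// Prints "id=token,env=ENV_TOKEN", matching the expectation asserted in
	// the test diff that follows.
	fmt.Println(toBuildkitCSV(serviceSecret{Source: "token"}, secretConfig{Environment: "ENV_TOKEN"}))
}
```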


@@ -12,7 +12,7 @@ import (
 )

 func TestParseCompose(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   db:
     build: ./db
@@ -32,9 +32,6 @@ services:
         - type=local,src=path/to/cache
       cache_to:
         - type=local,dest=path/to/cache
-      ssh:
-        - key=/path/to/key
-        - default
       secrets:
         - token
         - aws
@@ -74,14 +71,13 @@ secrets:
 	require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
-	require.Equal(t, []string{"type=local,src=path/to/cache"}, stringify(c.Targets[1].CacheFrom))
-	require.Equal(t, []string{"type=local,dest=path/to/cache"}, stringify(c.Targets[1].CacheTo))
+	require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
+	require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
 	require.Equal(t, "none", *c.Targets[1].NetworkMode)
-	require.Equal(t, []string{"default", "key=/path/to/key"}, stringify(c.Targets[1].SSH))
 	require.Equal(t, []string{
-		"id=aws,src=/root/.aws/credentials",
 		"id=token,env=ENV_TOKEN",
-	}, stringify(c.Targets[1].Secrets))
+		"id=aws,src=/root/.aws/credentials",
+	}, c.Targets[1].Secrets)

 	require.Equal(t, "webapp2", c.Targets[2].Name)
 	require.Equal(t, "dir", *c.Targets[2].Context)
@@ -89,7 +85,7 @@
 }

 func TestNoBuildOutOfTreeService(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   external:
     image: "verycooldb:1337"
@@ -103,7 +99,7 @@ services:
 }

 func TestParseComposeTarget(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   db:
     build:
@@ -129,7 +125,7 @@ services:
 }

 func TestComposeBuildWithoutContext(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   db:
     build:
@@ -153,7 +149,7 @@ services:
 }

 func TestBuildArgEnvCompose(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 version: "3.8"
 services:
   example:
@@ -179,7 +175,7 @@ services:
 }

 func TestInconsistentComposeFile(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   webapp:
     entrypoint: echo 1
@@ -190,7 +186,7 @@ services:
 }

 func TestAdvancedNetwork(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   db:
     networks:
@@ -215,7 +211,7 @@ networks:
 }

 func TestTags(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   example:
     image: example
@@ -233,7 +229,7 @@ services:
 }

 func TestDependsOnList(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 version: "3.8"

 services:
@@ -269,7 +265,7 @@ networks:
 }

 func TestComposeExt(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   addon:
     image: ct-addon:bar
@@ -282,8 +278,6 @@ services:
         - user/app:cache
       tags:
         - ct-addon:baz
-      ssh:
-        key: /path/to/key
       args:
         CT_ECR: foo
         CT_TAG: bar
@@ -293,9 +287,6 @@ services:
       tags:
         - ct-addon:foo
         - ct-addon:alp
-      ssh:
-        - default
-        - other=path/to/otherkey
       platforms:
         - linux/amd64
         - linux/arm64
@@ -336,23 +327,22 @@ services:
 	require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
 	require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
-	require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
-	require.Equal(t, []string{"type=local,dest=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheTo))
-	require.Equal(t, []string{"default", "key=/path/to/key", "other=path/to/otherkey"}, stringify(c.Targets[0].SSH))
+	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
+	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
 	require.Equal(t, newBool(true), c.Targets[0].Pull)
 	require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
 	require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
-	require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, stringify(c.Targets[1].Secrets))
-	require.Equal(t, []string{"default"}, stringify(c.Targets[1].SSH))
+	require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, c.Targets[1].Secrets)
+	require.Equal(t, []string{"default"}, c.Targets[1].SSH)
 	require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
-	require.Equal(t, []string{"type=docker"}, stringify(c.Targets[1].Outputs))
+	require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
 	require.Equal(t, newBool(true), c.Targets[1].NoCache)
 	require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
 	require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
 }

 func TestComposeExtDedup(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   webapp:
     image: app:bar
@@ -363,8 +353,6 @@ services:
         - user/app:cache
       tags:
         - ct-addon:foo
-      ssh:
-        - default
     x-bake:
       tags:
         - ct-addon:foo
@@ -374,18 +362,14 @@ services:
         - type=local,src=path/to/cache
       cache-to:
         - type=local,dest=path/to/cache
-      ssh:
-        - default
-        - key=path/to/key
 `)
 	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Targets))
 	require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
-	require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
-	require.Equal(t, []string{"type=local,dest=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheTo))
-	require.Equal(t, []string{"default", "key=path/to/key"}, stringify(c.Targets[0].SSH))
+	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
+	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
 }

 func TestEnv(t *testing.T) {
@@ -396,7 +380,7 @@ func TestEnv(t *testing.T) {
 	_, err = envf.WriteString("FOO=bsdf -csdf\n")
 	require.NoError(t, err)

-	dt := []byte(`
+	var dt = []byte(`
 services:
   scratch:
     build:
@@ -424,7 +408,7 @@ func TestDotEnv(t *testing.T) {
 	err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte("FOO=bar"), 0644)
 	require.NoError(t, err)

-	dt := []byte(`
+	var dt = []byte(`
 services:
   scratch:
     build:
@@ -443,7 +427,7 @@ services:
 }

 func TestPorts(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   foo:
     build:
@@ -664,7 +648,7 @@ target "default" {
 }

 func TestComposeNullArgs(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   scratch:
     build:
@@ -680,7 +664,7 @@ services:
 }

 func TestDependsOn(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   foo:
     build:
@@ -711,7 +695,7 @@ services:
 `), 0644)
 	require.NoError(t, err)

-	dt := []byte(`
+	var dt = []byte(`
 include:
   - compose-foo.yml

@@ -740,7 +724,7 @@ services:
 }

 func TestDevelop(t *testing.T) {
-	dt := []byte(`
+	var dt = []byte(`
 services:
   scratch:
     build:
@@ -758,46 +742,6 @@ services:
 	require.NoError(t, err)
 }

-func TestCgroup(t *testing.T) {
-	dt := []byte(`
-services:
-  scratch:
-    build:
-      context: ./webapp
-      cgroup: private
-`)
-	_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
-	require.NoError(t, err)
-}
-
-func TestProjectName(t *testing.T) {
-	dt := []byte(`
-services:
-  scratch:
-    build:
-      context: ./webapp
-      args:
-        PROJECT_NAME: ${COMPOSE_PROJECT_NAME}
-`)
-
-	t.Run("default", func(t *testing.T) {
-		c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
-		require.NoError(t, err)
-		require.Len(t, c.Targets, 1)
-		require.Len(t, c.Targets[0].Args, 1)
-		require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("bake")}, c.Targets[0].Args)
-	})
-
-	t.Run("env", func(t *testing.T) {
-		c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, map[string]string{"COMPOSE_PROJECT_NAME": "foo"})
-		require.NoError(t, err)
-		require.Len(t, c.Targets, 1)
-		require.Len(t, c.Targets[0].Args, 1)
-		require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("foo")}, c.Targets[0].Args)
-	})
-}
-
 // chdir changes the current working directory to the named directory,
 // and then restore the original working directory at the end of the test.
 func chdir(t *testing.T, dir string) {


@@ -1,659 +0,0 @@
package bake
import (
"bufio"
"cmp"
"context"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/tonistiigi/go-csvvalue"
)
type EntitlementKey string
const (
EntitlementKeyNetworkHost EntitlementKey = "network.host"
EntitlementKeySecurityInsecure EntitlementKey = "security.insecure"
EntitlementKeyDevice EntitlementKey = "device"
EntitlementKeyFSRead EntitlementKey = "fs.read"
EntitlementKeyFSWrite EntitlementKey = "fs.write"
EntitlementKeyFS EntitlementKey = "fs"
EntitlementKeyImagePush EntitlementKey = "image.push"
EntitlementKeyImageLoad EntitlementKey = "image.load"
EntitlementKeyImage EntitlementKey = "image"
EntitlementKeySSH EntitlementKey = "ssh"
)
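// EntitlementConf holds the set of privileges a user has granted to builds,
// keyed by the entitlement categories defined above.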
type EntitlementConf struct {
NetworkHost bool
SecurityInsecure bool
Devices *EntitlementsDevicesConf
FSRead []string
FSWrite []string
ImagePush []string
ImageLoad []string
SSH bool
}
type EntitlementsDevicesConf struct {
All bool
Devices map[string]struct{}
}
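// ParseEntitlements parses entitlement keys as passed to --allow (for example
// "network.host", "security.insecure", "ssh", "device=/dev/fuse", or
// "fs.read=/path") into an EntitlementConf.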
func ParseEntitlements(in []string) (EntitlementConf, error) {
var conf EntitlementConf
for _, e := range in {
switch e {
case string(EntitlementKeyNetworkHost):
conf.NetworkHost = true
case string(EntitlementKeySecurityInsecure):
conf.SecurityInsecure = true
case string(EntitlementKeySSH):
conf.SSH = true
default:
k, v, _ := strings.Cut(e, "=")
switch k {
case string(EntitlementKeyDevice):
if v == "" {
conf.Devices = &EntitlementsDevicesConf{All: true}
continue
}
fields, err := csvvalue.Fields(v, nil)
if err != nil {
return EntitlementConf{}, errors.Wrapf(err, "failed to parse device entitlement %q", v)
}
if conf.Devices == nil {
conf.Devices = &EntitlementsDevicesConf{}
}
if conf.Devices.Devices == nil {
conf.Devices.Devices = make(map[string]struct{}, 0)
}
conf.Devices.Devices[fields[0]] = struct{}{}
case string(EntitlementKeyFSRead):
conf.FSRead = append(conf.FSRead, v)
case string(EntitlementKeyFSWrite):
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyFS):
conf.FSRead = append(conf.FSRead, v)
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyImagePush):
conf.ImagePush = append(conf.ImagePush, v)
case string(EntitlementKeyImageLoad):
conf.ImageLoad = append(conf.ImageLoad, v)
case string(EntitlementKeyImage):
conf.ImagePush = append(conf.ImagePush, v)
conf.ImageLoad = append(conf.ImageLoad, v)
default:
return conf, errors.Errorf("unknown entitlement key %q", k)
}
}
}
return conf, nil
}
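// For example, ParseEntitlements([]string{"network.host", "fs.read=/src", "device=/dev/kvm"})
// returns a conf with NetworkHost set, "/src" in FSRead, and "/dev/kvm"
// recorded under Devices.Devices.
// Validate checks the entitlements requested by every build in m against the
// allowed configuration c and returns the entitlements still missing a grant.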
func (c EntitlementConf) Validate(m map[string]build.Options) (EntitlementConf, error) {
var expected EntitlementConf
for _, v := range m {
if err := c.check(v, &expected); err != nil {
return EntitlementConf{}, err
}
}
return expected, nil
}
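// check records in expected each entitlement requested by bo that c does not
// already allow, including read/write filesystem paths derived from the
// build's inputs, exports, caches, secrets, and SSH mounts.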
func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) error {
for _, e := range bo.Allow {
k, rest, _ := strings.Cut(e, "=")
switch k {
case entitlements.EntitlementDevice.String():
if rest == "" {
if c.Devices == nil || !c.Devices.All {
expected.Devices = &EntitlementsDevicesConf{All: true}
}
continue
}
fields, err := csvvalue.Fields(rest, nil)
if err != nil {
return errors.Wrapf(err, "failed to parse device entitlement %q", rest)
}
if expected.Devices == nil {
expected.Devices = &EntitlementsDevicesConf{}
}
if expected.Devices.Devices == nil {
expected.Devices.Devices = make(map[string]struct{}, 0)
}
expected.Devices.Devices[fields[0]] = struct{}{}
}
switch e {
case entitlements.EntitlementNetworkHost.String():
if !c.NetworkHost {
expected.NetworkHost = true
}
case entitlements.EntitlementSecurityInsecure.String():
if !c.SecurityInsecure {
expected.SecurityInsecure = true
}
}
}
rwPaths := map[string]struct{}{}
roPaths := map[string]struct{}{}
for _, p := range collectLocalPaths(bo.Inputs) {
roPaths[p] = struct{}{}
}
for _, p := range bo.ExportsLocalPathsTemporary {
rwPaths[p] = struct{}{}
}
for _, ce := range bo.CacheTo {
if ce.Type == "local" {
if dest, ok := ce.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
}
for _, ci := range bo.CacheFrom {
if ci.Type == "local" {
if src, ok := ci.Attrs["src"]; ok {
roPaths[src] = struct{}{}
}
}
}
for _, secret := range bo.SecretSpecs {
if secret.FilePath != "" {
roPaths[secret.FilePath] = struct{}{}
}
}
for _, ssh := range bo.SSHSpecs {
for _, p := range ssh.Paths {
roPaths[p] = struct{}{}
}
if len(ssh.Paths) == 0 {
if !c.SSH {
expected.SSH = true
}
}
}
var err error
expected.FSRead, err = findMissingPaths(c.FSRead, roPaths)
if err != nil {
return err
}
expected.FSWrite, err = findMissingPaths(c.FSWrite, rwPaths)
if err != nil {
return err
}
return nil
}
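// Prompt lists the privileges a build is requesting, prints the --allow flags
// that would grant them and, when stdin is a terminal, asks the user for
// confirmation; it returns nil when nothing needs confirmation or the user
// accepts, and an error otherwise.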
func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Writer) error {
var term bool
if _, err := console.ConsoleFromFile(os.Stdin); err == nil {
term = true
}
var msgs []string
var flags []string
// these warnings are currently disabled to give users time to update
var msgsFS []string
var flagsFS []string
if c.NetworkHost {
msgs = append(msgs, " - Running build containers that can access host network")
flags = append(flags, string(EntitlementKeyNetworkHost))
}
if c.SecurityInsecure {
msgs = append(msgs, " - Running privileged containers that can make system changes")
flags = append(flags, string(EntitlementKeySecurityInsecure))
}
if c.Devices != nil {
if c.Devices.All {
msgs = append(msgs, " - Access to CDI devices")
flags = append(flags, string(EntitlementKeyDevice))
} else {
for d := range c.Devices.Devices {
msgs = append(msgs, fmt.Sprintf(" - Access to device %s", d))
flags = append(flags, string(EntitlementKeyDevice)+"="+d)
}
}
}
if c.SSH {
msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
flagsFS = append(flagsFS, string(EntitlementKeySSH))
}
roPaths, rwPaths, commonPaths := groupSamePaths(c.FSRead, c.FSWrite)
wd, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "failed to get current working directory")
}
wd, err = filepath.EvalSymlinks(wd)
if err != nil {
return errors.Wrap(err, "failed to evaluate working directory")
}
roPaths = toRelativePaths(roPaths, wd)
rwPaths = toRelativePaths(rwPaths, wd)
commonPaths = toRelativePaths(commonPaths, wd)
if len(commonPaths) > 0 {
for _, p := range commonPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read and write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFS)+"="+p)
}
}
if len(roPaths) > 0 {
for _, p := range roPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSRead)+"="+p)
}
}
if len(rwPaths) > 0 {
for _, p := range rwPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSWrite)+"="+p)
}
}
if len(msgs) == 0 && len(msgsFS) == 0 {
return nil
}
fmt.Fprintf(out, "Your build is requesting privileges for following possibly insecure capabilities:\n\n")
for _, m := range slices.Concat(msgs, msgsFS) {
fmt.Fprintf(out, "%s\n", m)
}
for i, f := range flags {
flags[i] = "--allow=" + f
}
for i, f := range flagsFS {
flagsFS[i] = "--allow=" + f
}
if term {
fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
} else {
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
}
args := slices.Clone(os.Args)
if v, ok := os.LookupEnv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND"); ok && v != "" {
args[0] = v
}
idx := slices.Index(args, "bake")
if idx != -1 {
fmt.Fprintf(out, "\nYour full command with requested privileges:\n\n")
fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(slices.Concat(flags, flagsFS), " "), strings.Join(args[idx+1:], " "))
}
fsEntitlementsEnabled := true
if isRemote {
if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BAKE_ALLOW_REMOTE_FS_ACCESS value %q", v)
}
fsEntitlementsEnabled = !vv
}
}
v, fsEntitlementsSet := os.LookupEnv("BUILDX_BAKE_ENTITLEMENTS_FS")
if fsEntitlementsSet {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BUILDX_BAKE_ENTITLEMENTS_FS value %q", v)
}
fsEntitlementsEnabled = vv
}
if !fsEntitlementsEnabled && len(msgs) == 0 {
return nil
}
if fsEntitlementsEnabled && !fsEntitlementsSet && len(msgsFS) != 0 {
fmt.Fprintf(out, "To disable filesystem entitlements checks, you can set BUILDX_BAKE_ENTITLEMENTS_FS=0 .\n\n")
}
if term {
fmt.Fprintf(out, "Do you want to grant requested privileges and continue? [y/N] ")
reader := bufio.NewReader(os.Stdin)
answerCh := make(chan string, 1)
go func() {
answer, _, _ := reader.ReadLine()
answerCh <- string(answer)
close(answerCh)
}()
select {
case <-ctx.Done():
case answer := <-answerCh:
if strings.ToLower(answer) == "y" {
return nil
}
}
}
return errors.Errorf("additional privileges requested")
}
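// Illustrative call site (assumed, not shown in this file): validate first,
// then prompt only for the entitlements that are still missing:
//
//	missing, err := conf.Validate(targets) // targets: map[string]build.Options
//	if err != nil {
//		return err
//	}
//	if err := missing.Prompt(ctx, false, os.Stderr); err != nil {
//		return err // the user declined, or stdin was not a terminal
//	}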
func isParentOrEqualPath(p, parent string) bool {
if p == parent || parent == "/" {
return true
}
if strings.HasPrefix(p, filepath.Clean(parent+string(filepath.Separator))) {
return true
}
return false
}
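// For example, with Unix-style separators:
//
//	isParentOrEqualPath("/a/b", "/a")    // true: /a is a parent
//	isParentOrEqualPath("/a/b", "/a/b")  // true: equal paths
//	isParentOrEqualPath("/a/bc", "/a/b") // false: "bc" is a sibling, not a child
//	isParentOrEqualPath("/a/b", "/")     // true: root contains everything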
func findMissingPaths(set []string, paths map[string]struct{}) ([]string, error) {
set, allowAny, err := evaluatePaths(set)
if err != nil {
return nil, err
} else if allowAny {
return nil, nil
}
paths, err = evaluateToExistingPaths(paths)
if err != nil {
return nil, err
}
paths, err = dedupPaths(paths)
if err != nil {
return nil, err
}
out := make([]string, 0, len(paths))
loop0:
for p := range paths {
for _, c := range set {
if isParentOrEqualPath(p, c) {
continue loop0
}
}
out = append(out, p)
}
if len(out) == 0 {
return nil, nil
}
slices.Sort(out)
return out, nil
}
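// Illustrative sketch (paths are hypothetical; the requested paths are
// resolved against the real filesystem first, so results depend on what
// actually exists on disk):
//
//	missing, err := findMissingPaths(
//		[]string{"/work"}, // allowed set; []string{"*"} would return nil, nil
//		map[string]struct{}{"/work/src": {}, "/tmp/out": {}},
//	)
//	// err == nil; missing == []string{"/tmp/out"}, since /work covers /work/src.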
func dedupPaths(in map[string]struct{}) (map[string]struct{}, error) {
arr := make([]string, 0, len(in))
for p := range in {
arr = append(arr, filepath.Clean(p))
}
slices.SortFunc(arr, func(a, b string) int {
return cmp.Compare(len(a), len(b))
})
m := make(map[string]struct{}, len(arr))
loop0:
for _, p := range arr {
for parent := range m {
if strings.HasPrefix(p, parent+string(filepath.Separator)) {
continue loop0
}
}
m[p] = struct{}{}
}
return m, nil
}
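// For example, child paths collapse into the shortest covering parent:
//
//	m, _ := dedupPaths(map[string]struct{}{
//		"/a/b/c":   {},
//		"/a/b/c/d": {},
//		"/a/x":     {},
//	})
//	// m == map[string]struct{}{"/a/b/c": {}, "/a/x": {}}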
func toRelativePaths(in []string, wd string) []string {
out := make([]string, 0, len(in))
for _, p := range in {
rel, err := filepath.Rel(wd, p)
if err == nil {
// allow up to one level of ".." in the path
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)+"..") {
out = append(out, rel)
continue
}
}
out = append(out, p)
}
return out
}
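// For example, assuming wd == "/home/user/proj" (hypothetical):
//
//	toRelativePaths([]string{"/home/user/proj/src", "/home/user/other", "/etc"}, wd)
//	// -> ["src", "../other", "/etc"]; "/etc" would relativize to "../../etc",
//	// which exceeds the single ".." budget, so it stays absolute.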
func groupSamePaths(in1, in2 []string) ([]string, []string, []string) {
if in1 == nil || in2 == nil {
return in1, in2, nil
}
slices.Sort(in1)
slices.Sort(in2)
common := []string{}
i, j := 0, 0
for i < len(in1) && j < len(in2) {
switch {
case in1[i] == in2[j]:
common = append(common, in1[i])
i++
j++
case in1[i] < in2[j]:
i++
default:
j++
}
}
in1 = removeCommonPaths(in1, common)
in2 = removeCommonPaths(in2, common)
return in1, in2, common
}
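// For example (note that both inputs are sorted in place):
//
//	ro, rw, common := groupSamePaths(
//		[]string{"/p/a", "/p/b", "/p/c"},
//		[]string{"/p/b", "/p/c", "/p/d"},
//	)
//	// ro == ["/p/a"], rw == ["/p/d"], common == ["/p/b", "/p/c"]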
func removeCommonPaths(in, common []string) []string {
filtered := make([]string, 0, len(in))
commonIndex := 0
for _, path := range in {
if commonIndex < len(common) && path == common[commonIndex] {
commonIndex++
continue
}
filtered = append(filtered, path)
}
return filtered
}
func evaluatePaths(in []string) ([]string, bool, error) {
out := make([]string, 0, len(in))
allowAny := false
for _, p := range in {
if p == "*" {
allowAny = true
continue
}
v, err := filepath.Abs(p)
if err != nil {
logrus.Warnf("failed to evaluate entitlement path %q: %v", p, err)
continue
}
v, rest, err := evaluateToExistingPath(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
if rest != "" {
v = filepath.Join(v, rest)
}
out = append(out, v)
}
return out, allowAny, nil
}
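// Illustrative sketch: "*" short-circuits path checking entirely; anything
// else is made absolute and symlink-resolved, so concrete results depend on
// the local filesystem:
//
//	paths, allowAny, err := evaluatePaths([]string{"./src", "*"})
//	// allowAny == true; paths holds the resolved absolute form of ./src.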
func evaluateToExistingPaths(in map[string]struct{}) (map[string]struct{}, error) {
m := make(map[string]struct{}, len(in))
for p := range in {
v, _, err := evaluateToExistingPath(p)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
m[v] = struct{}{}
}
return m, nil
}
func evaluateToExistingPath(in string) (string, string, error) {
in, err := filepath.Abs(in)
if err != nil {
return "", "", err
}
volLen := volumeNameLen(in)
pathSeparator := string(os.PathSeparator)
if volLen < len(in) && os.IsPathSeparator(in[volLen]) {
volLen++
}
vol := in[:volLen]
dest := vol
linksWalked := 0
var end int
for start := volLen; start < len(in); start = end {
for start < len(in) && os.IsPathSeparator(in[start]) {
start++
}
end = start
for end < len(in) && !os.IsPathSeparator(in[end]) {
end++
}
if end == start {
break
} else if in[start:end] == "." {
continue
} else if in[start:end] == ".." {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen || dest[r+1:] == ".." {
if len(dest) > volLen {
dest += pathSeparator
}
dest += ".."
} else {
dest = dest[:r]
}
continue
}
if len(dest) > volumeNameLen(dest) && !os.IsPathSeparator(dest[len(dest)-1]) {
dest += pathSeparator
}
dest += in[start:end]
fi, err := os.Lstat(dest)
if err != nil {
// If the component doesn't exist, return the last valid path
if os.IsNotExist(err) {
for r := len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
return dest[:r], in[start:], nil
}
}
return vol, in[start:], nil
}
return "", "", err
}
if fi.Mode()&fs.ModeSymlink == 0 {
if !fi.Mode().IsDir() && end < len(in) {
return "", "", syscall.ENOTDIR
}
continue
}
linksWalked++
if linksWalked > 255 {
return "", "", errors.New("too many symlinks")
}
link, err := os.Readlink(dest)
if err != nil {
return "", "", err
}
in = link + in[end:]
v := volumeNameLen(link)
if v > 0 {
if v < len(link) && os.IsPathSeparator(link[v]) {
v++
}
vol = link[:v]
dest = vol
end = len(vol)
} else if len(link) > 0 && os.IsPathSeparator(link[0]) {
dest = link[:1]
end = 1
vol = link[:1]
volLen = 1
} else {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen {
dest = vol
} else {
dest = dest[:r]
}
end = 0
}
}
return filepath.Clean(dest), "", nil
}
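// For example, assuming /data exists but /data/missing does not:
//
//	dest, rest, err := evaluateToExistingPath("/data/missing/file.txt")
//	// err == nil; dest == "/data" (symlinks resolved) and
//	// rest == "missing/file.txt", i.e. the deepest existing prefix plus
//	// the untraversed remainder.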
func volumeNameLen(s string) int {
return len(filepath.VolumeName(s))
}

View File

@@ -1,486 +0,0 @@
package bake
import (
"fmt"
"os"
"path/filepath"
"slices"
"testing"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/util/entitlements"
"github.com/stretchr/testify/require"
)
func TestEvaluateToExistingPath(t *testing.T) {
tempDir, err := osutil.GetLongPathName(t.TempDir())
require.NoError(t, err)
// Setup temporary directory structure for testing
existingFile := filepath.Join(tempDir, "existing_file")
require.NoError(t, os.WriteFile(existingFile, []byte("test"), 0644))
existingDir := filepath.Join(tempDir, "existing_dir")
require.NoError(t, os.Mkdir(existingDir, 0755))
symlinkToFile := filepath.Join(tempDir, "symlink_to_file")
require.NoError(t, os.Symlink(existingFile, symlinkToFile))
symlinkToDir := filepath.Join(tempDir, "symlink_to_dir")
require.NoError(t, os.Symlink(existingDir, symlinkToDir))
nonexistentPath := filepath.Join(tempDir, "nonexistent", "path", "file.txt")
tests := []struct {
name string
input string
expected string
expectErr bool
}{
{
name: "Existing file",
input: existingFile,
expected: existingFile,
expectErr: false,
},
{
name: "Existing directory",
input: existingDir,
expected: existingDir,
expectErr: false,
},
{
name: "Symlink to file",
input: symlinkToFile,
expected: existingFile,
expectErr: false,
},
{
name: "Symlink to directory",
input: symlinkToDir,
expected: existingDir,
expectErr: false,
},
{
name: "Non-existent path",
input: nonexistentPath,
expected: tempDir,
expectErr: false,
},
{
name: "Non-existent intermediate path",
input: filepath.Join(tempDir, "nonexistent", "file.txt"),
expected: tempDir,
expectErr: false,
},
{
name: "Root path",
input: "/",
expected: func() string {
root, _ := filepath.Abs("/")
return root
}(),
expectErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, _, err := evaluateToExistingPath(tt.input)
if tt.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, tt.expected, result)
}
})
}
}
func TestDedupePaths(t *testing.T) {
wd := osutil.GetWd()
tcases := []struct {
in map[string]struct{}
out map[string]struct{}
}{
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
out: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
},
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/c/d": {},
"/a/b/c/d/e": {},
"/a/b/../b/c": {},
},
out: map[string]struct{}{
"/a/b/c": {},
},
},
{
in: map[string]struct{}{
filepath.Join(wd, "a/b/c"): {},
filepath.Join(wd, "../aa"): {},
filepath.Join(wd, "a/b"): {},
filepath.Join(wd, "a/b/d"): {},
filepath.Join(wd, "../aa/b"): {},
filepath.Join(wd, "../../bb"): {},
},
out: map[string]struct{}{
"a/b": {},
"../aa": {},
filepath.Join(wd, "../../bb"): {},
},
},
}
for i, tc := range tcases {
t.Run(fmt.Sprintf("case%d", i), func(t *testing.T) {
out, err := dedupPaths(tc.in)
require.NoError(t, err)
// convert to relative paths as that is shown to user
arr := make([]string, 0, len(out))
for k := range out {
arr = append(arr, k)
}
arr = toRelativePaths(arr, wd)
m := make(map[string]struct{})
for _, v := range arr {
m[filepath.ToSlash(v)] = struct{}{}
}
o := make(map[string]struct{}, len(tc.out))
for k := range tc.out {
o[filepath.ToSlash(k)] = struct{}{}
}
require.Equal(t, o, m)
})
}
}
func TestValidateEntitlements(t *testing.T) {
dir1 := t.TempDir()
dir2 := t.TempDir()
// the paths returned by entitlements validation will have symlinks resolved
expDir1, err := filepath.EvalSymlinks(dir1)
require.NoError(t, err)
expDir2, err := filepath.EvalSymlinks(dir2)
require.NoError(t, err)
escapeLink := filepath.Join(dir1, "escape_link")
require.NoError(t, os.Symlink("../../aa", escapeLink))
wd, err := os.Getwd()
require.NoError(t, err)
expWd, err := filepath.EvalSymlinks(wd)
require.NoError(t, err)
tcases := []struct {
name string
conf EntitlementConf
opt build.Options
expected EntitlementConf
}{
{
name: "No entitlements",
opt: build.Options{
Inputs: build.Inputs{
ContextState: &llb.State{},
},
},
},
{
name: "NetworkHostMissing",
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
NetworkHost: true,
FSRead: []string{expWd},
},
},
{
name: "NetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
FSRead: []string{expWd},
},
},
{
name: "SecurityAndNetworkHostMissing",
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{
NetworkHost: true,
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SecurityMissingAndNetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SSHMissing",
opt: build.Options{
SSHSpecs: []*pb.SSH{
{
ID: "test",
},
},
},
expected: EntitlementConf{
SSH: true,
FSRead: []string{expWd},
},
},
{
name: "ExportLocal",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
dir1,
filepath.Join(dir1, "subdir"),
dir2,
},
},
expected: EntitlementConf{
FSWrite: func() []string {
exp := []string{expDir1, expDir2}
slices.Sort(exp)
return exp
}(),
FSRead: []string{expWd},
},
},
{
name: "SecretFromSubFile",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: filepath.Join(dir1, "subfile"),
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
},
{
name: "SecretFromEscapeLink",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
expected: EntitlementConf{
FSRead: []string{filepath.Join(expDir1, "../..")},
},
},
{
name: "SecretFromEscapeLinkAllowRoot",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"/"},
},
expected: EntitlementConf{
FSRead: func() []string {
// on windows root (/) is only allowed if it is the same volume as wd
if filepath.VolumeName(wd) == filepath.VolumeName(escapeLink) {
return nil
}
// if not, then escapeLink is not allowed
exp, _, err := evaluateToExistingPath(escapeLink)
require.NoError(t, err)
exp, err = filepath.EvalSymlinks(exp)
require.NoError(t, err)
return []string{exp}
}(),
},
},
{
name: "SecretFromEscapeLinkAllowAny",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"*"},
},
expected: EntitlementConf{},
},
{
name: "NonExistingAllowedPathSubpath",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
dir1,
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{filepath.Join(dir1, "not/exists")},
},
expected: EntitlementConf{
FSWrite: []string{expDir1}, // dir1 is still needed as only subpath was allowed
},
},
{
name: "NonExistingAllowedPathMatches",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
filepath.Join(dir1, "not/exists"),
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{filepath.Join(dir1, "not/exists")},
},
expected: EntitlementConf{
FSWrite: []string{expDir1}, // dir1 is still needed as build also needs to write not/exists directory
},
},
{
name: "NonExistingBuildPath",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
filepath.Join(dir1, "not/exists"),
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{dir1},
},
},
}
for _, tc := range tcases {
t.Run(tc.name, func(t *testing.T) {
expected, err := tc.conf.Validate(map[string]build.Options{"test": tc.opt})
require.NoError(t, err)
require.Equal(t, tc.expected, expected)
})
}
}
func TestGroupSamePaths(t *testing.T) {
tests := []struct {
name string
in1 []string
in2 []string
expected1 []string
expected2 []string
expectedC []string
}{
{
name: "All common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/a", "/path/b", "/path/c"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
{
name: "No common paths",
in1: []string{"/path/a", "/path/b"},
in2: []string{"/path/c", "/path/d"},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{"/path/c", "/path/d"},
expectedC: []string{},
},
{
name: "Some common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/b", "/path/c", "/path/d"},
expected1: []string{"/path/a"},
expected2: []string{"/path/d"},
expectedC: []string{"/path/b", "/path/c"},
},
{
name: "Empty inputs",
in1: []string{},
in2: []string{},
expected1: []string{},
expected2: []string{},
expectedC: []string{},
},
{
name: "One empty input",
in1: []string{"/path/a", "/path/b"},
in2: []string{},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{},
expectedC: []string{},
},
{
name: "Unsorted inputs with common paths",
in1: []string{"/path/c", "/path/a", "/path/b"},
in2: []string{"/path/b", "/path/c", "/path/a"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
out1, out2, common := groupSamePaths(tt.in1, tt.in2)
require.Equal(t, tt.expected1, out1, "in1 should match expected1")
require.Equal(t, tt.expected2, out2, "in2 should match expected2")
require.Equal(t, tt.expectedC, common, "common should match expectedC")
})
}
}
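// The tests in this file only touch t.TempDir() and the working directory,
// so they should run in isolation with something like (command illustrative):
//
//	go test ./bake -run 'TestValidateEntitlements|TestDedupePaths|TestGroupSamePaths'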

View File

@@ -56,7 +56,7 @@ func formatHCLError(err error, files []File) error {
 			break
 		}
 	}
-	src := &errdefs.Source{
+	src := errdefs.Source{
 		Info: &pb.SourceInfo{
 			Filename: d.Subject.Filename,
 			Data:     dt,
@@ -72,7 +72,7 @@ func formatHCLError(err error, files []File) error {
 func toErrRange(in *hcl.Range) *pb.Range {
 	return &pb.Range{
-		Start: &pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
-		End:   &pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
+		Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
+		End:   pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
 	}
 }

View File

@@ -2,10 +2,8 @@ package bake

 import (
 	"reflect"
-	"regexp"
 	"testing"

-	hcl "github.com/hashicorp/hcl/v2"
 	"github.com/stretchr/testify/require"
 )
@@ -19,7 +17,6 @@ func TestHCLBasic(t *testing.T) {
 		target "db" {
 			context = "./db"
 			tags = ["docker.io/tonistiigi/db"]
-			output = ["type=image"]
 		}

 		target "webapp" {
@@ -28,9 +25,6 @@ func TestHCLBasic(t *testing.T) {
 			args = {
 				buildno = "123"
 			}
-			output = [
-				{ type = "image" }
-			]
 		}

 		target "cross" {
@@ -55,18 +49,18 @@ func TestHCLBasic(t *testing.T) {
 	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 4, len(c.Targets))
-	require.Equal(t, "db", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "db")
 	require.Equal(t, "./db", *c.Targets[0].Context)

-	require.Equal(t, "webapp", c.Targets[1].Name)
+	require.Equal(t, c.Targets[1].Name, "webapp")
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])

-	require.Equal(t, "cross", c.Targets[2].Name)
+	require.Equal(t, c.Targets[2].Name, "cross")
 	require.Equal(t, 2, len(c.Targets[2].Platforms))
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)

-	require.Equal(t, "webapp-plus", c.Targets[3].Name)
+	require.Equal(t, c.Targets[3].Name, "webapp-plus")
 	require.Equal(t, 1, len(c.Targets[3].Args))
 	require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
 }
@@ -115,18 +109,18 @@ func TestHCLBasicInJSON(t *testing.T) {
 	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 4, len(c.Targets))
-	require.Equal(t, "db", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "db")
 	require.Equal(t, "./db", *c.Targets[0].Context)

-	require.Equal(t, "webapp", c.Targets[1].Name)
+	require.Equal(t, c.Targets[1].Name, "webapp")
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])

-	require.Equal(t, "cross", c.Targets[2].Name)
+	require.Equal(t, c.Targets[2].Name, "cross")
 	require.Equal(t, 2, len(c.Targets[2].Platforms))
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)

-	require.Equal(t, "webapp-plus", c.Targets[3].Name)
+	require.Equal(t, c.Targets[3].Name, "webapp-plus")
 	require.Equal(t, 1, len(c.Targets[3].Args))
 	require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
 }
@@ -152,7 +146,7 @@ func TestHCLWithFunctions(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
 }
@@ -182,7 +176,7 @@ func TestHCLWithUserDefinedFunctions(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
 }
@@ -211,7 +205,7 @@ func TestHCLWithVariables(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])

 	t.Setenv("BUILD_NUMBER", "456")
@@ -224,7 +218,7 @@ func TestHCLWithVariables(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
 }
@@ -247,7 +241,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)

 	t.Setenv("REPO", "docker/buildx")
@@ -256,7 +250,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "webapp", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "webapp")
 	require.Equal(t, []string{"docker/buildx:v1"}, c.Targets[0].Tags)
 }
@@ -279,26 +273,26 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
 	}
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])

 	t.Setenv("FOO", "def")

-	c, _, err = ParseFiles([]File{
+	c, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
 }
@@ -328,26 +322,26 @@ func TestHCLVarsWithVars(t *testing.T) {
 	}
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])

 	t.Setenv("BASE", "new")

-	c, _, err = ParseFiles([]File{
+	c, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
 }
@@ -372,7 +366,7 @@ func TestHCLTypedVariables(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])
@@ -383,7 +377,7 @@ func TestHCLTypedVariables(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])
@@ -481,7 +475,7 @@ func TestHCLAttrs(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])

 	// env does not apply if no variable
@@ -490,7 +484,7 @@ func TestHCLAttrs(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
 	// attr-multifile
 }
@@ -598,172 +592,11 @@ func TestHCLAttrsCustomType(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
 	require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
 }

-func TestHCLAttrsCapsuleType(t *testing.T) {
-	dt := []byte(`
-		target "app" {
-			attest = [
-				{ type = "provenance", mode = "max" },
-				"type=sbom,disabled=true,generator=foo,\"ENV1=bar,baz\",ENV2=hello",
-			]
-
-			cache-from = [
-				{ type = "registry", ref = "user/app:cache" },
-				"type=local,src=path/to/cache",
-			]
-
-			cache-to = [
-				{ type = "local", dest = "path/to/cache" },
-			]
-
-			output = [
-				{ type = "oci", dest = "../out.tar" },
-				"type=local,dest=../out",
-			]
-
-			secret = [
-				{ id = "mysecret", src = "/local/secret" },
-				{ id = "mysecret2", env = "TOKEN" },
-			]
-
-			ssh = [
-				{ id = "default" },
-				{ id = "key", paths = ["path/to/key"] },
-			]
-		}
-	`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, []string{"type=provenance,mode=max", "type=sbom,disabled=true,\"ENV1=bar,baz\",ENV2=hello,generator=foo"}, stringify(c.Targets[0].Attest))
-	require.Equal(t, []string{"type=local,dest=../out", "type=oci,dest=../out.tar"}, stringify(c.Targets[0].Outputs))
-	require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
-	require.Equal(t, []string{"type=local,dest=path/to/cache"}, stringify(c.Targets[0].CacheTo))
-	require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,env=TOKEN"}, stringify(c.Targets[0].Secrets))
-	require.Equal(t, []string{"default", "key=path/to/key"}, stringify(c.Targets[0].SSH))
-}
-
-func TestHCLAttrsCapsuleType_ObjectVars(t *testing.T) {
-	dt := []byte(`
-		variable "foo" {
-			default = "bar"
-		}
-
-		target "app" {
-			cache-from = [
-				{ type = "registry", ref = "user/app:cache" },
-				"type=local,src=path/to/cache",
-			]
-
-			cache-to = [ target.app.cache-from[0] ]
-
-			output = [
-				{ type = "oci", dest = "../out.tar" },
-				"type=local,dest=../out",
-			]
-
-			secret = [
-				{ id = "mysecret", src = "/local/secret" },
-			]
-
-			ssh = [
-				{ id = "default" },
-				{ id = "key", paths = ["path/to/${target.app.output[0].type}"] },
-			]
-		}
-
-		target "web" {
-			cache-from = target.app.cache-from
-
-			output = [ "type=oci,dest=../${foo}.tar" ]
-
-			secret = [
-				{ id = target.app.output[0].type, src = "/${target.app.cache-from[1].type}/secret" },
-			]
-		}
-	`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-
-	findTarget := func(t *testing.T, name string) *Target {
-		t.Helper()
-		for _, tgt := range c.Targets {
-			if tgt.Name == name {
-				return tgt
-			}
-		}
-		t.Fatalf("could not find target %q", name)
-		return nil
-	}
-
-	app := findTarget(t, "app")
-	require.Equal(t, []string{"type=local,dest=../out", "type=oci,dest=../out.tar"}, stringify(app.Outputs))
-	require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(app.CacheFrom))
-	require.Equal(t, []string{"user/app:cache"}, stringify(app.CacheTo))
-	require.Equal(t, []string{"id=mysecret,src=/local/secret"}, stringify(app.Secrets))
-	require.Equal(t, []string{"default", "key=path/to/oci"}, stringify(app.SSH))
-
-	web := findTarget(t, "web")
-	require.Equal(t, []string{"type=oci,dest=../bar.tar"}, stringify(web.Outputs))
-	require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(web.CacheFrom))
-	require.Equal(t, []string{"id=oci,src=/local/secret"}, stringify(web.Secrets))
-}
-
-func TestHCLAttrsCapsuleType_MissingVars(t *testing.T) {
-	dt := []byte(`
-		target "app" {
-			attest = [
-				"type=sbom,disabled=${SBOM}",
-			]
-
-			cache-from = [
-				{ type = "registry", ref = "user/app:${FOO1}" },
-				"type=local,src=path/to/cache:${FOO2}",
-			]
-
-			cache-to = [
-				{ type = "local", dest = "path/to/${BAR}" },
-			]
-
-			output = [
-				{ type = "oci", dest = "../${OUTPUT}.tar" },
-			]
-
-			secret = [
-				{ id = "mysecret", src = "/local/${SECRET}" },
-			]
-
-			ssh = [
-				{ id = "key", paths = ["path/to/${SSH_KEY}"] },
-			]
-		}
-	`)
-
-	var diags hcl.Diagnostics
-	_, err := ParseFile(dt, "docker-bake.hcl")
-	require.ErrorAs(t, err, &diags)
-
-	re := regexp.MustCompile(`There is no variable named "([\w\d_]+)"`)
-	var actual []string
-	for _, diag := range diags {
-		if m := re.FindStringSubmatch(diag.Error()); m != nil {
-			actual = append(actual, m[1])
-		}
-	}
-	require.ElementsMatch(t,
-		[]string{"SBOM", "FOO1", "FOO2", "BAR", "OUTPUT", "SECRET", "SSH_KEY"},
-		actual)
-}
-
 func TestHCLMultiFileAttrs(t *testing.T) {
 	dt := []byte(`
 	variable "FOO" {
@@ -779,25 +612,25 @@ func TestHCLMultiFileAttrs(t *testing.T) {
 	FOO="def"
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])

 	t.Setenv("FOO", "ghi")

-	c, _, err = ParseFiles([]File{
+	c, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
 }
@@ -814,13 +647,13 @@ func TestHCLMultiFileGlobalAttrs(t *testing.T) {
 	FOO = "def"
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
 }
@@ -997,7 +830,7 @@ func TestHCLRenameMultiFile(t *testing.T) {
 	}
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 		{Data: dt3, Name: "c3.hcl"},
@@ -1006,12 +839,12 @@ func TestHCLRenameMultiFile(t *testing.T) {
 	require.Equal(t, 2, len(c.Targets))

-	require.Equal(t, "bar", c.Targets[0].Name)
-	require.Equal(t, "x", *c.Targets[0].Dockerfile)
-	require.Equal(t, "z", *c.Targets[0].Target)
+	require.Equal(t, c.Targets[0].Name, "bar")
+	require.Equal(t, *c.Targets[0].Dockerfile, "x")
+	require.Equal(t, *c.Targets[0].Target, "z")

-	require.Equal(t, "foo", c.Targets[1].Name)
-	require.Equal(t, "y", *c.Targets[1].Context)
+	require.Equal(t, c.Targets[1].Name, "foo")
+	require.Equal(t, *c.Targets[1].Context, "y")
 }

 func TestHCLMatrixBasic(t *testing.T) {
@@ -1029,10 +862,10 @@ func TestHCLMatrixBasic(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "x", c.Targets[0].Name)
-	require.Equal(t, "y", c.Targets[1].Name)
-	require.Equal(t, "x.Dockerfile", *c.Targets[0].Dockerfile)
-	require.Equal(t, "y.Dockerfile", *c.Targets[1].Dockerfile)
+	require.Equal(t, c.Targets[0].Name, "x")
+	require.Equal(t, c.Targets[1].Name, "y")
+	require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
+	require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
 	require.Equal(t, 1, len(c.Groups))
 	require.Equal(t, "default", c.Groups[0].Name)
@@ -1115,9 +948,9 @@ func TestHCLMatrixMaps(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "aa", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "aa")
 	require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
-	require.Equal(t, "cc", c.Targets[1].Name)
+	require.Equal(t, c.Targets[1].Name, "cc")
 	require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
 }
@@ -1217,7 +1050,7 @@ func TestHCLMatrixArgsOverride(t *testing.T) {
 	}
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "docker-bake.hcl"},
 	}, map[string]string{"ABC": "11,22,33"})
 	require.NoError(t, err)
@@ -1308,7 +1141,7 @@ func TestJSONAttributes(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
 }
@@ -1333,7 +1166,7 @@ func TestJSONFunctions(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
 }
@@ -1351,7 +1184,7 @@ func TestJSONInvalidFunctions(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
 }
@@ -1379,7 +1212,7 @@ func TestHCLFunctionInAttr(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
 }
@@ -1403,14 +1236,14 @@ services:
       v2: "bar"
 `)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.yml"},
 	}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
 	require.Equal(t, "dir", *c.Targets[0].Context)
@@ -1425,7 +1258,7 @@ func TestHCLBuiltinVars(t *testing.T) {
 	}
 	`)

-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 	}, map[string]string{
 		"BAKE_CMD_CONTEXT": "foo",
@@ -1433,13 +1266,13 @@ func TestHCLBuiltinVars(t *testing.T) {
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "app")
 	require.Equal(t, "foo", *c.Targets[0].Context)
 	require.Equal(t, "test", *c.Targets[0].Dockerfile)
 }

 func TestCombineHCLAndJSONTargets(t *testing.T) {
-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{
 			Name: "docker-bake.hcl",
 			Data: []byte(`
@@ -1499,23 +1332,23 @@ target "b" {
 	require.Equal(t, 4, len(c.Targets))

-	require.Equal(t, "metadata-a", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "metadata-a")
 	require.Equal(t, []string{"app/a:1.0.0", "app/a:latest"}, c.Targets[0].Tags)

-	require.Equal(t, "metadata-b", c.Targets[1].Name)
+	require.Equal(t, c.Targets[1].Name, "metadata-b")
 	require.Equal(t, []string{"app/b:1.0.0", "app/b:latest"}, c.Targets[1].Tags)

-	require.Equal(t, "a", c.Targets[2].Name)
+	require.Equal(t, c.Targets[2].Name, "a")
 	require.Equal(t, ".", *c.Targets[2].Context)
 	require.Equal(t, "a", *c.Targets[2].Target)

-	require.Equal(t, "b", c.Targets[3].Name)
+	require.Equal(t, c.Targets[3].Name, "b")
 	require.Equal(t, ".", *c.Targets[3].Context)
 	require.Equal(t, "b", *c.Targets[3].Target)
 }

 func TestCombineHCLAndJSONVars(t *testing.T) {
-	c, _, err := ParseFiles([]File{
+	c, err := ParseFiles([]File{
 		{
 			Name: "docker-bake.hcl",
 			Data: []byte(`
@@ -1556,10 +1389,10 @@ target "two" {
 	require.Equal(t, 2, len(c.Targets))

-	require.Equal(t, "one", c.Targets[0].Name)
+	require.Equal(t, c.Targets[0].Name, "one")
 	require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)

-	require.Equal(t, "two", c.Targets[1].Name)
+	require.Equal(t, c.Targets[1].Name, "two")
 	require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
 }
@@ -1612,40 +1445,7 @@ func TestVarUnsupportedType(t *testing.T) {
 	require.Error(t, err)
 }

-func TestHCLIndexOfFunc(t *testing.T) {
-	dt := []byte(`
-		variable "APP_VERSIONS" {
-			default = [
-				"1.42.4",
-				"1.42.3"
-			]
-		}
-		target "default" {
-			args = {
-				APP_VERSION = app_version
-			}
-			matrix = {
-				app_version = APP_VERSIONS
-			}
-			name="app-${replace(app_version, ".", "-")}"
-			tags = [
-				"app:${app_version}",
-				indexof(APP_VERSIONS, app_version) == 0 ? "app:latest" : "",
-			]
-		}
-	`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "app-1-42-4", c.Targets[0].Name)
-	require.Equal(t, "app:latest", c.Targets[0].Tags[1])
-	require.Equal(t, "app-1-42-3", c.Targets[1].Name)
-	require.Empty(t, c.Targets[1].Tags[1])
-}
-
-func ptrstr(s any) *string {
+func ptrstr(s interface{}) *string {
 	var n *string
 	if reflect.ValueOf(s).Kind() == reflect.String {
 		ss := s.(string)

View File

@@ -1,355 +0,0 @@
Copyright (c) 2014 HashiCorp, Inc.
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients'
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.


@@ -1,348 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
// DecodeOptions allows customizing sections of the decoding process.
type DecodeOptions struct {
ImpliedType func(gv any) (cty.Type, error)
Convert func(in cty.Value, want cty.Type) (cty.Value, error)
}
func (o DecodeOptions) DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
rv := reflect.ValueOf(val)
if rv.Kind() != reflect.Ptr {
panic(fmt.Sprintf("target value must be a pointer, not %s", rv.Type().String()))
}
return o.decodeBodyToValue(body, ctx, rv.Elem())
}
// DecodeBody extracts the configuration within the given body into the given
// value. This value must be a non-nil pointer to either a struct or
// a map, where in the former case the configuration will be decoded using
// struct tags and in the latter case only attributes are allowed and their
// values are decoded into the map.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeBody(body, ctx, val)
}
func (o DecodeOptions) decodeBodyToValue(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
et := val.Type()
switch et.Kind() {
case reflect.Struct:
return o.decodeBodyToStruct(body, ctx, val)
case reflect.Map:
return o.decodeBodyToMap(body, ctx, val)
default:
panic(fmt.Sprintf("target value must be pointer to struct or map, not %s", et.String()))
}
}
func (o DecodeOptions) decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
schema, partial := ImpliedBodySchema(val.Interface())
var content *hcl.BodyContent
var leftovers hcl.Body
var diags hcl.Diagnostics
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
if content == nil {
return diags
}
tags := getFieldTags(val.Type())
if tags.Body != nil {
fieldIdx := *tags.Body
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(body))
default:
diags = append(diags, o.decodeBodyToValue(body, ctx, fieldV)...)
}
}
if tags.Remain != nil {
fieldIdx := *tags.Remain
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(leftovers))
case attrsType.AssignableTo(field.Type):
attrs, attrsDiags := leftovers.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
fieldV.Set(reflect.ValueOf(attrs))
default:
diags = append(diags, o.decodeBodyToValue(leftovers, ctx, fieldV)...)
}
}
for name, fieldIdx := range tags.Attributes {
attr := content.Attributes[name]
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
if attr == nil {
if !exprType.AssignableTo(field.Type) {
continue
}
// As a special case, if the target is of type hcl.Expression then
// we'll assign an actual expression that evaluates to a cty null,
// so the caller can deal with it within the cty realm rather
// than within the Go realm.
synthExpr := hcl.StaticExpr(cty.NullVal(cty.DynamicPseudoType), body.MissingItemRange())
fieldV.Set(reflect.ValueOf(synthExpr))
continue
}
switch {
case attrType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr))
case exprType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr.Expr))
default:
diags = append(diags, o.DecodeExpression(
attr.Expr, ctx, fieldV.Addr().Interface(),
)...)
}
}
blocksByType := content.Blocks.ByType()
for typeName, fieldIdx := range tags.Blocks {
blocks := blocksByType[typeName]
field := val.Type().Field(fieldIdx)
ty := field.Type
isSlice := false
isPtr := false
if ty.Kind() == reflect.Slice {
isSlice = true
ty = ty.Elem()
}
if ty.Kind() == reflect.Ptr {
isPtr = true
ty = ty.Elem()
}
if len(blocks) > 1 && !isSlice {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Duplicate %s block", typeName),
Detail: fmt.Sprintf(
"Only one %s block is allowed. Another was defined at %s.",
typeName, blocks[0].DefRange.String(),
),
Subject: &blocks[1].DefRange,
})
continue
}
if len(blocks) == 0 {
if isSlice || isPtr {
if val.Field(fieldIdx).IsNil() {
val.Field(fieldIdx).Set(reflect.Zero(field.Type))
}
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s block", typeName),
Detail: fmt.Sprintf("A %s block is required.", typeName),
Subject: body.MissingItemRange().Ptr(),
})
}
continue
}
switch {
case isSlice:
elemType := ty
if isPtr {
elemType = reflect.PointerTo(ty)
}
sli := val.Field(fieldIdx)
if sli.IsNil() {
sli = reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks))
}
for i, block := range blocks {
if isPtr {
if i >= sli.Len() {
sli = reflect.Append(sli, reflect.New(ty))
}
v := sli.Index(i)
if v.IsNil() {
v = reflect.New(ty)
}
diags = append(diags, o.decodeBlockToValue(block, ctx, v.Elem())...)
sli.Index(i).Set(v)
} else {
if i >= sli.Len() {
sli = reflect.Append(sli, reflect.Indirect(reflect.New(ty)))
}
diags = append(diags, o.decodeBlockToValue(block, ctx, sli.Index(i))...)
}
}
if sli.Len() > len(blocks) {
sli.SetLen(len(blocks))
}
val.Field(fieldIdx).Set(sli)
default:
block := blocks[0]
if isPtr {
v := val.Field(fieldIdx)
if v.IsNil() {
v = reflect.New(ty)
}
diags = append(diags, o.decodeBlockToValue(block, ctx, v.Elem())...)
val.Field(fieldIdx).Set(v)
} else {
diags = append(diags, o.decodeBlockToValue(block, ctx, val.Field(fieldIdx))...)
}
}
}
return diags
}
func (o DecodeOptions) decodeBodyToMap(body hcl.Body, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
attrs, diags := body.JustAttributes()
if attrs == nil {
return diags
}
mv := reflect.MakeMap(v.Type())
for k, attr := range attrs {
switch {
case attrType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr))
case exprType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr.Expr))
default:
ev := reflect.New(v.Type().Elem())
diags = append(diags, o.DecodeExpression(attr.Expr, ctx, ev.Interface())...)
mv.SetMapIndex(reflect.ValueOf(k), ev.Elem())
}
}
v.Set(mv)
return diags
}
func (o DecodeOptions) decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
diags := o.decodeBodyToValue(block.Body, ctx, v)
if len(block.Labels) > 0 {
blockTags := getFieldTags(v.Type())
for li, lv := range block.Labels {
lfieldIdx := blockTags.Labels[li].FieldIndex
v.Field(lfieldIdx).Set(reflect.ValueOf(lv))
}
}
return diags
}
func (o DecodeOptions) DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
srcVal, diags := expr.Value(ctx)
convTy, err := o.ImpliedType(val)
if err != nil {
panic(fmt.Sprintf("unsuitable DecodeExpression target: %s", err))
}
srcVal, err = o.Convert(srcVal, convTy)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
return diags
}
err = gocty.FromCtyValue(srcVal, val)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
}
return diags
}
// DecodeExpression extracts the value of the given expression into the given
// value. This value must be something that gocty is able to decode into,
// since the final decoding is delegated to that package.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeExpression(expr, ctx, val)
}
func (o DecodeOptions) withDefaults() DecodeOptions {
if o.ImpliedType == nil {
o.ImpliedType = gocty.ImpliedType
}
if o.Convert == nil {
o.Convert = convert.Convert
}
return o
}
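For orientation, this is roughly how the decoder above is driven. A minimal sketch, assuming the vendored import path used elsewhere in this diff; the Config type and the HCL source are hypothetical:

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/buildx/bake/hclparser/gohcl" // vendored fork shown above
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// Config is a hypothetical target type using the package's struct tags.
type Config struct {
	Name string   `hcl:"name"`
	Tags []string `hcl:"tags,optional"`
}

func main() {
	src := []byte("name = \"app\"\ntags = [\"a\", \"b\"]\n")
	f, diags := hclsyntax.ParseConfig(src, "config.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	var cfg Config
	// DecodeBody == DecodeOptions{}.DecodeBody, which falls back to the
	// gocty defaults via withDefaults above.
	if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
		log.Fatal(diags)
	}
	fmt.Println(cfg.Name, cfg.Tags) // app [a b]
}
```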


@@ -1,806 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"encoding/json"
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
hclJSON "github.com/hashicorp/hcl/v2/json"
"github.com/zclconf/go-cty/cty"
)
func TestDecodeBody(t *testing.T) {
deepEquals := func(other any) func(v any) bool {
return func(v any) bool {
return reflect.DeepEqual(v, other)
}
}
type withNameExpression struct {
Name hcl.Expression `hcl:"name"`
}
type withTwoAttributes struct {
A string `hcl:"a,optional"`
B string `hcl:"b,optional"`
}
type withNestedBlock struct {
Plain string `hcl:"plain,optional"`
Nested *withTwoAttributes `hcl:"nested,block"`
}
type withListofNestedBlocks struct {
Nested []*withTwoAttributes `hcl:"nested,block"`
}
type withListofNestedBlocksNoPointers struct {
Nested []withTwoAttributes `hcl:"nested,block"`
}
tests := []struct {
Body map[string]any
Target func() any
Check func(v any) bool
DiagCount int
}{
{
map[string]any{},
makeInstantiateType(struct{}{}),
deepEquals(struct{}{}),
0,
},
{
map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{}),
1, // name is required
},
{
map[string]any{},
makeInstantiateType(struct {
Name *string `hcl:"name"`
}{}),
deepEquals(struct {
Name *string `hcl:"name"`
}{}),
0,
}, // name nil
{
map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name,optional"`
}{}),
deepEquals(struct {
Name string `hcl:"name,optional"`
}{}),
0,
}, // name optional
{
map[string]any{},
makeInstantiateType(withNameExpression{}),
func(v any) bool {
if v == nil {
return false
}
wne, valid := v.(withNameExpression)
if !valid {
return false
}
if wne.Name == nil {
return false
}
nameVal, _ := wne.Name.Value(nil)
return nameVal.IsNull()
},
0,
},
{
map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(withNameExpression{}),
func(v any) bool {
if v == nil {
return false
}
wne, valid := v.(withNameExpression)
if !valid {
return false
}
if wne.Name == nil {
return false
}
nameVal, _ := wne.Name.Value(nil)
return nameVal.Equals(cty.StringVal("Ermintrude")).True()
},
0,
},
{
map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{"Ermintrude"}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 23,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{"Ermintrude"}),
1, // Extraneous "age" property
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
})
return got.Name == "Ermintrude" && len(got.Attrs) == 1 && got.Attrs["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
})
attrs, _ := got.Remain.JustAttributes()
return got.Name == "Ermintrude" && len(attrs) == 1 && attrs["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"living": true,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Remain map[string]cty.Value `hcl:",remain"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
Remain map[string]cty.Value `hcl:",remain"`
}{
Name: "Ermintrude",
Remain: map[string]cty.Value{
"living": cty.True,
},
}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Body hcl.Body `hcl:",body"`
Remain hcl.Body `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Body hcl.Body `hcl:",body"`
Remain hcl.Body `hcl:",remain"`
})
attrs, _ := got.Body.JustAttributes()
return got.Name == "Ermintrude" && len(attrs) == 2 &&
attrs["name"] != nil && attrs["age"] != nil
},
0,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle == nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 0
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 1
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 2
},
0,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
//nolint:misspell
// Generating two diagnostics is good enough for this one.
// (one for the missing noodle block and the other for
// the JSON serialization detecting the missing level of
// heirarchy for the label.)
return true
},
2,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}).Noodle
return noodle.Name == "foo_foo"
},
0,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// One diagnostic is enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
Noodles []struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodles := gotI.(struct {
Noodles []struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}).Noodles
return len(noodles) == 2 && (noodles[0].Name == "foo_foo" || noodles[0].Name == "bar_baz") && (noodles[1].Name == "foo_foo" || noodles[1].Name == "bar_baz") && noodles[0].Name != noodles[1].Name
},
0,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{
"type": "rice",
},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
Type string `hcl:"type"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
Type string `hcl:"type"`
} `hcl:"noodle,block"`
}).Noodle
return noodle.Name == "foo_foo" && noodle.Type == "rice"
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 34,
},
makeInstantiateType(map[string]string(nil)),
deepEquals(map[string]string{
"name": "Ermintrude",
"age": "34",
}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 89,
},
makeInstantiateType(map[string]*hcl.Attribute(nil)),
func(gotI any) bool {
got := gotI.(map[string]*hcl.Attribute)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 13,
},
makeInstantiateType(map[string]hcl.Expression(nil)),
func(gotI any) bool {
got := gotI.(map[string]hcl.Expression)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"living": true,
},
makeInstantiateType(map[string]cty.Value(nil)),
deepEquals(map[string]cty.Value{
"name": cty.StringVal("Ermintrude"),
"living": cty.True,
}),
0,
},
{
// Retain "nested" block while decoding
map[string]any{
"plain": "foo",
},
func() any {
return &withNestedBlock{
Plain: "bar",
Nested: &withTwoAttributes{
A: "bar",
},
}
},
func(gotI any) bool {
foo := gotI.(withNestedBlock)
return foo.Plain == "foo" && foo.Nested != nil && foo.Nested.A == "bar"
},
0,
},
{
// Retain values in "nested" block while decoding
map[string]any{
"nested": map[string]any{
"a": "foo",
},
},
func() any {
return &withNestedBlock{
Nested: &withTwoAttributes{
B: "bar",
},
}
},
func(gotI any) bool {
foo := gotI.(withNestedBlock)
return foo.Nested.A == "foo" && foo.Nested.B == "bar"
},
0,
},
{
// Retain values in "nested" block list while decoding
map[string]any{
"nested": []map[string]any{
{
"a": "foo",
},
},
},
func() any {
return &withListofNestedBlocks{
Nested: []*withTwoAttributes{
{
B: "bar",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocks)
return n.Nested[0].A == "foo" && n.Nested[0].B == "bar"
},
0,
},
{
// Remove additional elements from the list while decoding nested blocks
map[string]any{
"nested": []map[string]any{
{
"a": "foo",
},
},
},
func() any {
return &withListofNestedBlocks{
Nested: []*withTwoAttributes{
{
B: "bar",
},
{
B: "bar",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocks)
return len(n.Nested) == 1
},
0,
},
{
// Make sure decoding value slices works the same as pointer slices.
map[string]any{
"nested": []map[string]any{
{
"b": "bar",
},
{
"b": "baz",
},
},
},
func() any {
return &withListofNestedBlocksNoPointers{
Nested: []withTwoAttributes{
{
B: "foo",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocksNoPointers)
return n.Nested[0].B == "bar" && len(n.Nested) == 2
},
0,
},
}
for i, test := range tests {
// For convenience here we're going to use the JSON parser
// to process the given body.
buf, err := json.Marshal(test.Body)
if err != nil {
t.Fatalf("error JSON-encoding body for test %d: %s", i, err)
}
t.Run(string(buf), func(t *testing.T) {
file, diags := hclJSON.Parse(buf, "test.json")
if len(diags) != 0 {
t.Fatalf("diagnostics while parsing: %s", diags.Error())
}
targetVal := reflect.ValueOf(test.Target())
diags = DecodeBody(file.Body, nil, targetVal.Interface())
if len(diags) != test.DiagCount {
t.Errorf("wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag.Error())
}
}
got := targetVal.Elem().Interface()
if !test.Check(got) {
t.Errorf("wrong result\ngot: %s", spew.Sdump(got))
}
})
}
}
func TestDecodeExpression(t *testing.T) {
tests := []struct {
Value cty.Value
Target any
Want any
DiagCount int
}{
{
cty.StringVal("hello"),
"",
"hello",
0,
},
{
cty.StringVal("hello"),
cty.NilVal,
cty.StringVal("hello"),
0,
},
{
cty.NumberIntVal(2),
"",
"2",
0,
},
{
cty.StringVal("true"),
false,
true,
0,
},
{
cty.NullVal(cty.String),
"",
"",
1, // null value is not allowed
},
{
cty.UnknownVal(cty.String),
"",
"",
1, // value must be known
},
{
cty.ListVal([]cty.Value{cty.True}),
false,
false,
1, // bool required
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
expr := &fixedExpression{test.Value}
targetVal := reflect.New(reflect.TypeOf(test.Target))
diags := DecodeExpression(expr, nil, targetVal.Interface())
if len(diags) != test.DiagCount {
t.Errorf("wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag.Error())
}
}
got := targetVal.Elem().Interface()
if !reflect.DeepEqual(got, test.Want) {
t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want)
}
})
}
}
type fixedExpression struct {
val cty.Value
}
func (e *fixedExpression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return e.val, nil
}
func (e *fixedExpression) Range() (r hcl.Range) {
return
}
func (e *fixedExpression) StartRange() (r hcl.Range) {
return
}
func (e *fixedExpression) Variables() []hcl.Traversal {
return nil
}
func makeInstantiateType(target any) func() any {
return func() any {
return reflect.New(reflect.TypeOf(target)).Interface()
}
}


@@ -1,65 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Package gohcl allows decoding HCL configurations into Go data structures.
//
// It provides a convenient and concise way of describing the schema for
// configuration and then accessing the resulting data via native Go
// types.
//
// A struct field tag scheme is used, similar to other decoding and
// unmarshalling libraries. The tags are formatted as in the following example:
//
// ThingType string `hcl:"thing_type,attr"`
//
// Within each tag there are two comma-separated tokens. The first is the
// name of the corresponding construct in configuration, while the second
// is a keyword giving the kind of construct expected. The following
// kind keywords are supported:
//
// attr (the default) indicates that the value is to be populated from an attribute
// block indicates that the value is to populated from a block
// label indicates that the value is to populated from a block label
// optional is the same as attr, but the field is optional
// remain indicates that the value is to be populated from the remaining body after populating other fields
//
// "attr" fields may either be of type *hcl.Expression, in which case the raw
// expression is assigned, or of any type accepted by gocty, in which case
// gocty will be used to assign the value to a native Go type.
//
// "block" fields may be a struct that recursively uses the same tags, or a
// slice of such structs, in which case multiple blocks of the corresponding
// type are decoded into the slice.
//
// "body" can be placed on a single field of type hcl.Body to capture
// the full hcl.Body that was decoded for a block. This does not allow leftover
// values like "remain", so a decoding error will still be returned if leftover
// fields are given. If you want to capture the decoding body PLUS leftover
// fields, you must specify a "remain" field as well to prevent errors. The
// body field and the remain field will both contain the leftover fields.
//
// "label" fields are considered only in a struct used as the type of a field
// marked as "block", and are used sequentially to capture the labels of
// the blocks being decoded. In this case, the name token is used only as
// an identifier for the label in diagnostic messages.
//
// "optional" fields behave like "attr" fields, but they are optional
// and will not give parsing errors if they are missing.
//
// "remain" can be placed on a single field that may be either of type
// hcl.Body or hcl.Attributes, in which case any remaining body content is
// placed into this field for delayed processing. If no "remain" field is
// present then any attributes or blocks not matched by another valid tag
// will cause an error diagnostic.
//
// Only a subset of this tagging/typing vocabulary is supported for the
// "Encode" family of functions. See the EncodeIntoBody docs for full details
// on the constraints there.
//
// Broadly-speaking this package deals with two types of error. The first is
// errors in the configuration itself, which are returned as diagnostics
// written with the configuration author as the target audience. The second
// is bugs in the calling program, such as invalid struct tags, which are
// surfaced via panics since there can be no useful runtime handling of such
// errors and they should certainly not be returned to the user as diagnostics.
package gohcl
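To make the tag vocabulary above concrete, here is a hypothetical schema combining the main kinds (a sketch, not code from this package):

```go
package config

import "github.com/hashicorp/hcl/v2"

// Service decodes blocks such as: service "web" { executable = "./web" }
type Service struct {
	Name string `hcl:"name,label"` // first block label
	Exe  string `hcl:"executable"` // required attribute
}

type Root struct {
	Region   string         `hcl:"region,optional"` // optional attribute
	Services []Service      `hcl:"service,block"`   // zero or more blocks
	Rest     hcl.Attributes `hcl:",remain"`         // leftover body content
}
```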


@@ -1,192 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"sort"
"github.com/hashicorp/hcl/v2/hclwrite"
"github.com/zclconf/go-cty/cty/gocty"
)
// EncodeIntoBody replaces the contents of the given hclwrite Body with
// attributes and blocks derived from the given value, which must be a
// struct value or a pointer to a struct value with the struct tags defined
// in this package.
//
// This function can work only with fully-decoded data. It will ignore any
// fields tagged as "remain", any fields that decode attributes into either
// hcl.Attribute or hcl.Expression values, and any fields that decode blocks
// into hcl.Attributes values. This function does not have enough information
// to complete the decoding of these types.
//
// Any fields tagged as "label" are ignored by this function. Use EncodeAsBlock
// to produce a whole hclwrite.Block including block labels.
//
// As long as a suitable value is given to encode and the destination body
// is non-nil, this function will always complete. It will panic in case of
// any errors in the calling program, such as passing an inappropriate type
// or a nil body.
//
// The layout of the resulting HCL source is derived from the ordering of
// the struct fields, with blank lines around nested blocks of different types.
// Fields representing attributes should usually precede those representing
// blocks so that the attributes can group together in the result. For more
// control, use the hclwrite API directly.
func EncodeIntoBody(val any, dst *hclwrite.Body) {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
populateBody(rv, ty, tags, dst)
}
// EncodeAsBlock creates a new hclwrite.Block populated with the data from
// the given value, which must be a struct or pointer to struct with the
// struct tags defined in this package.
//
// If the given struct type has fields tagged with "label" tags then they
// will be used in order to annotate the created block with labels.
//
// This function has the same constraints as EncodeIntoBody and will panic
// if they are violated.
func EncodeAsBlock(val any, blockType string) *hclwrite.Block {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
labels := make([]string, len(tags.Labels))
for i, lf := range tags.Labels {
lv := rv.Field(lf.FieldIndex)
// We just stringify whatever we find. It should always be a string
// but if not then we'll still do something reasonable.
labels[i] = fmt.Sprintf("%s", lv.Interface())
}
block := hclwrite.NewBlock(blockType, labels)
populateBody(rv, ty, tags, block.Body())
return block
}
func populateBody(rv reflect.Value, ty reflect.Type, tags *fieldTags, dst *hclwrite.Body) {
nameIdxs := make(map[string]int, len(tags.Attributes)+len(tags.Blocks))
namesOrder := make([]string, 0, len(tags.Attributes)+len(tags.Blocks))
for n, i := range tags.Attributes {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
for n, i := range tags.Blocks {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
sort.SliceStable(namesOrder, func(i, j int) bool {
ni, nj := namesOrder[i], namesOrder[j]
return nameIdxs[ni] < nameIdxs[nj]
})
dst.Clear()
prevWasBlock := false
for _, name := range namesOrder {
fieldIdx := nameIdxs[name]
field := ty.Field(fieldIdx)
fieldTy := field.Type
fieldVal := rv.Field(fieldIdx)
if fieldTy.Kind() == reflect.Ptr {
fieldTy = fieldTy.Elem()
fieldVal = fieldVal.Elem()
}
if _, isAttr := tags.Attributes[name]; isAttr {
if exprType.AssignableTo(fieldTy) || attrType.AssignableTo(fieldTy) {
continue // ignore undecoded fields
}
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if fieldTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
if prevWasBlock {
dst.AppendNewline()
prevWasBlock = false
}
valTy, err := gocty.ImpliedType(fieldVal.Interface())
if err != nil {
panic(fmt.Sprintf("cannot encode %T as HCL expression: %s", fieldVal.Interface(), err))
}
val, err := gocty.ToCtyValue(fieldVal.Interface(), valTy)
if err != nil {
// This should never happen, since we should always be able
// to decode into the implied type.
panic(fmt.Sprintf("failed to encode %T as %#v: %s", fieldVal.Interface(), valTy, err))
}
dst.SetAttributeValue(name, val)
} else { // must be a block, then
elemTy := fieldTy
isSeq := false
if elemTy.Kind() == reflect.Slice || elemTy.Kind() == reflect.Array {
isSeq = true
elemTy = elemTy.Elem()
}
if bodyType.AssignableTo(elemTy) || attrsType.AssignableTo(elemTy) {
continue // ignore undecoded fields
}
prevWasBlock = false
if isSeq {
l := fieldVal.Len()
for i := range l {
elemVal := fieldVal.Index(i)
if !elemVal.IsValid() {
continue // ignore (elem value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && elemVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(elemVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
} else {
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(fieldVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
}
}
}


@@ -1,67 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl_test
import (
"fmt"
"github.com/hashicorp/hcl/v2/gohcl"
"github.com/hashicorp/hcl/v2/hclwrite"
)
func ExampleEncodeIntoBody() {
type Service struct {
Name string `hcl:"name,label"`
Exe []string `hcl:"executable"`
}
type Constraints struct {
OS string `hcl:"os"`
Arch string `hcl:"arch"`
}
type App struct {
Name string `hcl:"name"`
Desc string `hcl:"description"`
Constraints *Constraints `hcl:"constraints,block"`
Services []Service `hcl:"service,block"`
}
app := App{
Name: "awesome-app",
Desc: "Such an awesome application",
Constraints: &Constraints{
OS: "linux",
Arch: "amd64",
},
Services: []Service{
{
Name: "web",
Exe: []string{"./web", "--listen=:8080"},
},
{
Name: "worker",
Exe: []string{"./worker"},
},
},
}
f := hclwrite.NewEmptyFile()
gohcl.EncodeIntoBody(&app, f.Body())
fmt.Printf("%s", f.Bytes())
// Output:
// name = "awesome-app"
// description = "Such an awesome application"
//
// constraints {
// os = "linux"
// arch = "amd64"
// }
//
// service "web" {
// executable = ["./web", "--listen=:8080"]
// }
// service "worker" {
// executable = ["./worker"]
// }
}


@@ -1,185 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"sort"
"strings"
"github.com/hashicorp/hcl/v2"
)
// ImpliedBodySchema produces a hcl.BodySchema derived from the type of the
// given value, which must be a struct value or a pointer to one. If an
// inappropriate value is passed, this function will panic.
//
// The second return argument indicates whether the given struct includes
// a "remain" field, and thus the returned schema is non-exhaustive.
//
// This uses the tags on the fields of the struct to discover how each
// field's value should be expressed within configuration. If an invalid
// mapping is attempted, this function will panic.
func ImpliedBodySchema(val any) (schema *hcl.BodySchema, partial bool) {
ty := reflect.TypeOf(val)
if ty.Kind() == reflect.Ptr {
ty = ty.Elem()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("given value must be struct, not %T", val))
}
var attrSchemas []hcl.AttributeSchema
var blockSchemas []hcl.BlockHeaderSchema
tags := getFieldTags(ty)
attrNames := make([]string, 0, len(tags.Attributes))
for n := range tags.Attributes {
attrNames = append(attrNames, n)
}
sort.Strings(attrNames)
for _, n := range attrNames {
idx := tags.Attributes[n]
optional := tags.Optional[n]
field := ty.Field(idx)
var required bool
switch {
case field.Type.AssignableTo(exprType):
//nolint:misspell
// If we're decoding to hcl.Expression then absense can be
// indicated via a null value, so we don't specify that
// the field is required during decoding.
required = false
case field.Type.Kind() != reflect.Ptr && !optional:
required = true
default:
required = false
}
attrSchemas = append(attrSchemas, hcl.AttributeSchema{
Name: n,
Required: required,
})
}
blockNames := make([]string, 0, len(tags.Blocks))
for n := range tags.Blocks {
blockNames = append(blockNames, n)
}
sort.Strings(blockNames)
for _, n := range blockNames {
idx := tags.Blocks[n]
field := ty.Field(idx)
fty := field.Type
if fty.Kind() == reflect.Slice {
fty = fty.Elem()
}
if fty.Kind() == reflect.Ptr {
fty = fty.Elem()
}
if fty.Kind() != reflect.Struct {
panic(fmt.Sprintf(
"hcl 'block' tag kind cannot be applied to %s field %s: struct required", field.Type.String(), field.Name,
))
}
ftags := getFieldTags(fty)
var labelNames []string
if len(ftags.Labels) > 0 {
labelNames = make([]string, len(ftags.Labels))
for i, l := range ftags.Labels {
labelNames[i] = l.Name
}
}
blockSchemas = append(blockSchemas, hcl.BlockHeaderSchema{
Type: n,
LabelNames: labelNames,
})
}
partial = tags.Remain != nil
schema = &hcl.BodySchema{
Attributes: attrSchemas,
Blocks: blockSchemas,
}
return schema, partial
}
type fieldTags struct {
Attributes map[string]int
Blocks map[string]int
Labels []labelField
Remain *int
Body *int
Optional map[string]bool
}
type labelField struct {
FieldIndex int
Name string
}
func getFieldTags(ty reflect.Type) *fieldTags {
ret := &fieldTags{
Attributes: map[string]int{},
Blocks: map[string]int{},
Optional: map[string]bool{},
}
ct := ty.NumField()
for i := range ct {
field := ty.Field(i)
tag := field.Tag.Get("hcl")
if tag == "" {
continue
}
comma := strings.Index(tag, ",")
var name, kind string
if comma != -1 {
name = tag[:comma]
kind = tag[comma+1:]
} else {
name = tag
kind = "attr"
}
switch kind {
case "attr":
ret.Attributes[name] = i
case "block":
ret.Blocks[name] = i
case "label":
ret.Labels = append(ret.Labels, labelField{
FieldIndex: i,
Name: name,
})
case "remain":
if ret.Remain != nil {
panic("only one 'remain' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Remain = &idx
case "body":
if ret.Body != nil {
panic("only one 'body' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Body = &idx
case "optional":
ret.Attributes[name] = i
ret.Optional[name] = true
default:
panic(fmt.Sprintf("invalid hcl field tag kind %q on %s %q", kind, field.Type.String(), field.Name))
}
}
return ret
}
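A short sketch of what ImpliedBodySchema derives from such tags; the root and target types here are hypothetical, and the import path assumes the vendored fork shown in this diff (the test file that follows exercises the same behavior in depth):

```go
package main

import (
	"fmt"

	"github.com/docker/buildx/bake/hclparser/gohcl"
)

type target struct {
	Name    string `hcl:"name,label"`
	Context string `hcl:"context,optional"`
}

type root struct {
	Targets []*target `hcl:"target,block"`
}

func main() {
	schema, partial := gohcl.ImpliedBodySchema(&root{})
	fmt.Println(partial)                     // false: no ",remain" field
	fmt.Println(schema.Blocks[0].Type)       // target
	fmt.Println(schema.Blocks[0].LabelNames) // [name]
}
```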


@@ -1,233 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
)
func TestImpliedBodySchema(t *testing.T) {
tests := []struct {
val any
wantSchema *hcl.BodySchema
wantPartial bool
}{
{
struct{}{},
&hcl.BodySchema{},
false,
},
{
struct {
Ignored bool
}{},
&hcl.BodySchema{},
false,
},
{
struct {
Attr1 bool `hcl:"attr1"`
Attr2 bool `hcl:"attr2"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "attr1",
Required: true,
},
{
Name: "attr2",
Required: true,
},
},
},
false,
},
{
struct {
Attr *bool `hcl:"attr,attr"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "attr",
Required: false,
},
},
},
false,
},
{
struct {
Thing struct{} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
},
},
},
false,
},
{
struct {
Thing struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing []struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing *struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing struct {
Name string `hcl:"name,label"`
Something string `hcl:"something"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"name"},
},
},
},
false,
},
{
struct {
Doodad string `hcl:"doodad"`
Thing struct {
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "doodad",
Required: true,
},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"name"},
},
},
},
false,
},
{
struct {
Doodad string `hcl:"doodad"`
Config string `hcl:",remain"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "doodad",
Required: true,
},
},
},
true,
},
{
struct {
Expr hcl.Expression `hcl:"expr"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "expr",
Required: false,
},
},
},
false,
},
{
struct {
Meh string `hcl:"meh,optional"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "meh",
Required: false,
},
},
},
false,
},
}
for _, test := range tests {
t.Run(fmt.Sprintf("%#v", test.val), func(t *testing.T) {
schema, partial := ImpliedBodySchema(test.val)
if !reflect.DeepEqual(schema, test.wantSchema) {
t.Errorf(
"wrong schema\ngot: %s\nwant: %s",
spew.Sdump(schema), spew.Sdump(test.wantSchema),
)
}
if partial != test.wantPartial {
t.Errorf(
"wrong partial flag\ngot: %#v\nwant: %#v",
partial, test.wantPartial,
)
}
})
}
}


@@ -1,19 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"reflect"
"github.com/hashicorp/hcl/v2"
)
var victimExpr hcl.Expression
var victimBody hcl.Body
var exprType = reflect.TypeOf(&victimExpr).Elem()
var bodyType = reflect.TypeOf(&victimBody).Elem()
var blockType = reflect.TypeOf((*hcl.Block)(nil)) //nolint:unused
var attrType = reflect.TypeOf((*hcl.Attribute)(nil))
var attrsType = reflect.TypeOf(hcl.Attributes(nil))


@@ -7,15 +7,15 @@ import (
 	"math"
 	"math/big"
 	"reflect"
-	"slices"
 	"strconv"
 	"strings"

-	"github.com/docker/buildx/bake/hclparser/gohcl"
 	"github.com/docker/buildx/util/userfunc"
 	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/gohcl"
 	"github.com/pkg/errors"
 	"github.com/zclconf/go-cty/cty"
+	"github.com/zclconf/go-cty/cty/gocty"
 )
@@ -25,17 +25,9 @@
 }

 type variable struct {
 	Name    string         `json:"-" hcl:"name,label"`
 	Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
-	Description string                `json:"description,omitempty" hcl:"description,optional"`
-	Validations []*variableValidation `json:"validation,omitempty" hcl:"validation,block"`
-	Body        hcl.Body              `json:"-" hcl:",body"`
-	Remain      hcl.Body              `json:"-" hcl:",remain"`
-}
-
-type variableValidation struct {
-	Condition    hcl.Expression `json:"condition" hcl:"condition"`
-	ErrorMessage hcl.Expression `json:"error_message" hcl:"error_message"`
+	Body    hcl.Body       `json:"-" hcl:",body"`
 }

 type functionDef struct {
@@ -81,12 +73,7 @@ type WithGetName interface {
 	GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
 }

-// errUndefined is returned when a variable or function is not defined.
-type errUndefined struct{}
-
-func (errUndefined) Error() string {
-	return "undefined"
-}
+var errUndefined = errors.New("undefined")

 func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
 	fns, hcldiags := funcCalls(exp)
@@ -96,7 +83,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 	for _, fn := range fns {
 		if err := p.resolveFunction(ectx, fn); err != nil {
-			if allowMissing && errors.Is(err, errUndefined{}) {
+			if allowMissing && errors.Is(err, errUndefined) {
 				continue
 			}
 			return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -150,7 +137,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 		}
 		for _, block := range blocks {
 			if err := p.resolveBlock(block, target); err != nil {
-				if allowMissing && errors.Is(err, errUndefined{}) {
+				if allowMissing && errors.Is(err, errUndefined) {
 					continue
 				}
 				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -158,7 +145,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 			}
 		} else {
 			if err := p.resolveValue(ectx, v.RootName()); err != nil {
-				if allowMissing && errors.Is(err, errUndefined{}) {
+				if allowMissing && errors.Is(err, errUndefined) {
 					continue
 				}
 				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -180,7 +167,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
 	}
 	f, ok := p.funcs[name]
 	if !ok {
-		return errors.Wrapf(errUndefined{}, "function %q does not exist", name)
+		return errors.Wrapf(errUndefined, "function %q does not exist", name)
 	}
 	if _, ok := p.progressF[key(ectx, name)]; ok {
 		return errors.Errorf("function cycle not allowed for %s", name)
@@ -270,7 +257,7 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
 	if _, builtin := p.opt.Vars[name]; !ok && !builtin {
 		vr, ok := p.vars[name]
 		if !ok {
-			return errors.Wrapf(errUndefined{}, "variable %q does not exist", name)
+			return errors.Wrapf(errUndefined, "variable %q does not exist", name)
 		}
 		def = vr.Default
 		ectx = p.ectx
@@ -454,7 +441,7 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
 	}

 	// decode!
-	diag = decodeBody(body(), ectx, output.Interface())
+	diag = gohcl.DecodeBody(body(), ectx, output.Interface())
 	if diag.HasErrors() {
 		return diag
 	}
@@ -476,11 +463,11 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
 	}

 	// store the result into the evaluation context (so it can be referenced)
-	outputType, err := ImpliedType(output.Interface())
+	outputType, err := gocty.ImpliedType(output.Interface())
 	if err != nil {
 		return err
 	}
-	outputValue, err := ToCtyValue(output.Interface(), outputType)
+	outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
 	if err != nil {
 		return err
 	}
@@ -492,12 +479,7 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
 			m = map[string]cty.Value{}
 		}
 		m[name] = outputValue
-
-		// The logical contents of this structure is similar to a map,
-		// but it's possible for some attributes to be different in a way that's
-		// illegal for a map so we use an object here instead which is structurally
-		// equivalent but allows disparate types for different keys.
-		p.ectx.Variables[block.Type] = cty.ObjectVal(m)
+		p.ectx.Variables[block.Type] = cty.MapVal(m)
 	}

 	return nil
@@ -552,45 +534,7 @@ func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
 	return names, nil
 }

-func (p *parser) validateVariables(vars map[string]*variable, ectx *hcl.EvalContext) hcl.Diagnostics {
-	var diags hcl.Diagnostics
-	for _, v := range vars {
-		for _, validation := range v.Validations {
-			condition, condDiags := validation.Condition.Value(ectx)
-			if condDiags.HasErrors() {
-				diags = append(diags, condDiags...)
-				continue
-			}
-			if !condition.True() {
-				message, msgDiags := validation.ErrorMessage.Value(ectx)
-				if msgDiags.HasErrors() {
-					diags = append(diags, msgDiags...)
-					continue
-				}
-				diags = append(diags, &hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Validation failed",
-					Detail:   message.AsString(),
-					Subject:  validation.Condition.Range().Ptr(),
-				})
-			}
-		}
-	}
-	return diags
-}
-
-type Variable struct {
-	Name        string  `json:"name"`
-	Description string  `json:"description,omitempty"`
-	Value       *string `json:"value,omitempty"`
-}
-
-type ParseMeta struct {
-	Renamed      map[string]map[string][]string
-	AllVariables []*Variable
-}
-
-func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
+func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
 	reserved := map[string]struct{}{}
 	schema, _ := gohcl.ImpliedBodySchema(val)
@@ -699,7 +643,6 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
 		}
 	}

-	vars := make([]*Variable, 0, len(p.vars))
 	for k := range p.vars {
 		if err := p.resolveValue(p.ectx, k); err != nil {
 			if diags, ok := err.(hcl.Diagnostics); ok {
@@ -708,24 +651,6 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
 			r := p.vars[k].Body.MissingItemRange()
 			return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
 		}
-		v := &Variable{
-			Name:        p.vars[k].Name,
-			Description: p.vars[k].Description,
-		}
-		if vv := p.ectx.Variables[k]; !vv.IsNull() {
-			var s string
-			switch vv.Type() {
-			case cty.String:
-				s = vv.AsString()
-			case cty.Bool:
-				s = strconv.FormatBool(vv.True())
-			}
-			v.Value = &s
-		}
-		vars = append(vars, v)
-	}
-	if diags := p.validateVariables(p.vars, p.ectx); diags.HasErrors() {
-		return nil, diags
 	}

 	for k := range p.funcs {
@@ -764,7 +689,7 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
 	types := map[string]field{}
 	renamed := map[string]map[string][]string{}
 	vt := reflect.ValueOf(val).Elem().Type()
-	for i := range vt.NumField() {
+	for i := 0; i < vt.NumField(); i++ {
 		tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
 		p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
@@ -832,7 +757,7 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
 			oldValue, exists := t.values[lblName]
 			if !exists && lblExists {
 				if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
-					for i := range v.Elem().Field(t.idx).Len() {
+					for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
 						if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
 							exists = true
 							oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
@@ -870,10 +795,7 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
 		}
 	}

-	return &ParseMeta{
-		Renamed:      renamed,
-		AllVariables: vars,
-	}, nil
+	return renamed, nil
 }

 // wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
@@ -899,7 +821,7 @@ func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context
 func setName(v reflect.Value, name string) {
 	numFields := v.Elem().Type().NumField()
-	for i := range numFields {
+	for i := 0; i < numFields; i++ {
 		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
 		for _, t := range parts[1:] {
 			if t == "label" {
@@ -911,10 +833,12 @@ func setName(v reflect.Value, name string) {
 func getName(v reflect.Value) (string, bool) {
 	numFields := v.Elem().Type().NumField()
-	for i := range numFields {
+	for i := 0; i < numFields; i++ {
 		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
-		if slices.Contains(parts[1:], "label") {
-			return v.Elem().Field(i).String(), true
+		for _, t := range parts[1:] {
+			if t == "label" {
+				return v.Elem().Field(i).String(), true
+			}
 		}
 	}
 	return "", false
@@ -922,10 +846,12 @@ func getName(v reflect.Value) (string, bool) {
 func getNameIndex(v reflect.Value) (int, bool) {
 	numFields := v.Elem().Type().NumField()
-	for i := range numFields {
+	for i := 0; i < numFields; i++ {
 		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
-		if slices.Contains(parts[1:], "label") {
-			return i, true
+		for _, t := range parts[1:] {
+			if t == "label" {
+				return i, true
+			}
 		}
 	}
 	return 0, false
@@ -984,8 +910,3 @@ func key(ks ...any) uint64 {
 	}
 	return hash.Sum64()
 }
-
-func decodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
-	dec := gohcl.DecodeOptions{ImpliedType: ImpliedType}
-	return dec.DecodeBody(body, ctx, val)
-}
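Both sides of the errUndefined change above depend on errors.Is walking the wrapped chain: the struct form on the removed side matches because zero-size struct values compare equal, while the var form on the added side matches the one sentinel value. A standalone sketch using standard-library wrapping (the diff itself wraps with github.com/pkg/errors):

```go
package main

import (
	"errors"
	"fmt"
)

// Struct-typed sentinel, as on the removed side of the diff.
type errUndefinedT struct{}

func (errUndefinedT) Error() string { return "undefined" }

// Value sentinel, as on the added side of the diff.
var errUndefined = errors.New("undefined")

func main() {
	w1 := fmt.Errorf("variable %q does not exist: %w", "FOO", errUndefinedT{})
	fmt.Println(errors.Is(w1, errUndefinedT{})) // true: struct values compare equal

	w2 := fmt.Errorf("function %q does not exist: %w", "bar", errUndefined)
	fmt.Println(errors.Is(w2, errUndefined)) // true: same sentinel value
}
```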


@@ -111,19 +111,21 @@ func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
diags = append(diags, thisDiags...) diags = append(diags, thisDiags...)
} }
-		for name, attr := range thisAttrs {
-			if existing := attrs[name]; existing != nil {
-				diags = diags.Append(&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Duplicate argument",
-					Detail: fmt.Sprintf(
-						"Argument %q was already set at %s",
-						name, existing.NameRange.String(),
-					),
-					Subject: thisAttrs[name].NameRange.Ptr(),
-				})
-			}
-			attrs[name] = attr
-		}
+		if thisAttrs != nil {
+			for name, attr := range thisAttrs {
+				if existing := attrs[name]; existing != nil {
+					diags = diags.Append(&hcl.Diagnostic{
+						Severity: hcl.DiagError,
+						Summary:  "Duplicate argument",
+						Detail: fmt.Sprintf(
+							"Argument %q was already set at %s",
+							name, existing.NameRange.String(),
+						),
+						Subject: thisAttrs[name].NameRange.Ptr(),
+					})
+				}
+				attrs[name] = attr
+			}
+		}
} }
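The two variants differ only by the nil guard, which is redundant in Go: ranging over a nil map executes zero iterations. A self-contained illustration (names are illustrative only):

	var attrs map[string]int // nil, never initialized
	for name, v := range attrs {
		_, _ = name, v // never reached: a nil map ranges zero times, no panic
	}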
@@ -1,9 +1,6 @@
package hclparser package hclparser
import ( import (
"errors"
"path"
"strings"
"time" "time"
"github.com/hashicorp/go-cty-funcs/cidr" "github.com/hashicorp/go-cty-funcs/cidr"
@@ -17,245 +14,122 @@ import (
"github.com/zclconf/go-cty/cty/function/stdlib" "github.com/zclconf/go-cty/cty/function/stdlib"
) )
type funcDef struct { var stdlibFunctions = map[string]function.Function{
name string "absolute": stdlib.AbsoluteFunc,
fn function.Function "add": stdlib.AddFunc,
factory func() function.Function "and": stdlib.AndFunc,
} "base64decode": encoding.Base64DecodeFunc,
"base64encode": encoding.Base64EncodeFunc,
var stdlibFunctions = []funcDef{ "bcrypt": crypto.BcryptFunc,
{name: "absolute", fn: stdlib.AbsoluteFunc}, "byteslen": stdlib.BytesLenFunc,
{name: "add", fn: stdlib.AddFunc}, "bytesslice": stdlib.BytesSliceFunc,
{name: "and", fn: stdlib.AndFunc}, "can": tryfunc.CanFunc,
{name: "base64decode", fn: encoding.Base64DecodeFunc}, "ceil": stdlib.CeilFunc,
{name: "base64encode", fn: encoding.Base64EncodeFunc}, "chomp": stdlib.ChompFunc,
{name: "basename", factory: basenameFunc}, "chunklist": stdlib.ChunklistFunc,
{name: "bcrypt", fn: crypto.BcryptFunc}, "cidrhost": cidr.HostFunc,
{name: "byteslen", fn: stdlib.BytesLenFunc}, "cidrnetmask": cidr.NetmaskFunc,
{name: "bytesslice", fn: stdlib.BytesSliceFunc}, "cidrsubnet": cidr.SubnetFunc,
{name: "can", fn: tryfunc.CanFunc}, "cidrsubnets": cidr.SubnetsFunc,
{name: "ceil", fn: stdlib.CeilFunc}, "coalesce": stdlib.CoalesceFunc,
{name: "chomp", fn: stdlib.ChompFunc}, "coalescelist": stdlib.CoalesceListFunc,
{name: "chunklist", fn: stdlib.ChunklistFunc}, "compact": stdlib.CompactFunc,
{name: "cidrhost", fn: cidr.HostFunc}, "concat": stdlib.ConcatFunc,
{name: "cidrnetmask", fn: cidr.NetmaskFunc}, "contains": stdlib.ContainsFunc,
{name: "cidrsubnet", fn: cidr.SubnetFunc}, "convert": typeexpr.ConvertFunc,
{name: "cidrsubnets", fn: cidr.SubnetsFunc}, "csvdecode": stdlib.CSVDecodeFunc,
{name: "coalesce", fn: stdlib.CoalesceFunc}, "distinct": stdlib.DistinctFunc,
{name: "coalescelist", fn: stdlib.CoalesceListFunc}, "divide": stdlib.DivideFunc,
{name: "compact", fn: stdlib.CompactFunc}, "element": stdlib.ElementFunc,
{name: "concat", fn: stdlib.ConcatFunc}, "equal": stdlib.EqualFunc,
{name: "contains", fn: stdlib.ContainsFunc}, "flatten": stdlib.FlattenFunc,
{name: "convert", fn: typeexpr.ConvertFunc}, "floor": stdlib.FloorFunc,
{name: "csvdecode", fn: stdlib.CSVDecodeFunc}, "format": stdlib.FormatFunc,
{name: "dirname", factory: dirnameFunc}, "formatdate": stdlib.FormatDateFunc,
{name: "distinct", fn: stdlib.DistinctFunc}, "formatlist": stdlib.FormatListFunc,
{name: "divide", fn: stdlib.DivideFunc}, "greaterthan": stdlib.GreaterThanFunc,
{name: "element", fn: stdlib.ElementFunc}, "greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
{name: "equal", fn: stdlib.EqualFunc}, "hasindex": stdlib.HasIndexFunc,
{name: "flatten", fn: stdlib.FlattenFunc}, "indent": stdlib.IndentFunc,
{name: "floor", fn: stdlib.FloorFunc}, "index": stdlib.IndexFunc,
{name: "format", fn: stdlib.FormatFunc}, "int": stdlib.IntFunc,
{name: "formatdate", fn: stdlib.FormatDateFunc}, "join": stdlib.JoinFunc,
{name: "formatlist", fn: stdlib.FormatListFunc}, "jsondecode": stdlib.JSONDecodeFunc,
{name: "greaterthan", fn: stdlib.GreaterThanFunc}, "jsonencode": stdlib.JSONEncodeFunc,
{name: "greaterthanorequalto", fn: stdlib.GreaterThanOrEqualToFunc}, "keys": stdlib.KeysFunc,
{name: "hasindex", fn: stdlib.HasIndexFunc}, "length": stdlib.LengthFunc,
{name: "indent", fn: stdlib.IndentFunc}, "lessthan": stdlib.LessThanFunc,
{name: "index", fn: stdlib.IndexFunc}, "lessthanorequalto": stdlib.LessThanOrEqualToFunc,
{name: "indexof", factory: indexOfFunc}, "log": stdlib.LogFunc,
{name: "int", fn: stdlib.IntFunc}, "lookup": stdlib.LookupFunc,
{name: "join", fn: stdlib.JoinFunc}, "lower": stdlib.LowerFunc,
{name: "jsondecode", fn: stdlib.JSONDecodeFunc}, "max": stdlib.MaxFunc,
{name: "jsonencode", fn: stdlib.JSONEncodeFunc}, "md5": crypto.Md5Func,
{name: "keys", fn: stdlib.KeysFunc}, "merge": stdlib.MergeFunc,
{name: "length", fn: stdlib.LengthFunc}, "min": stdlib.MinFunc,
{name: "lessthan", fn: stdlib.LessThanFunc}, "modulo": stdlib.ModuloFunc,
{name: "lessthanorequalto", fn: stdlib.LessThanOrEqualToFunc}, "multiply": stdlib.MultiplyFunc,
{name: "log", fn: stdlib.LogFunc}, "negate": stdlib.NegateFunc,
{name: "lookup", fn: stdlib.LookupFunc}, "not": stdlib.NotFunc,
{name: "lower", fn: stdlib.LowerFunc}, "notequal": stdlib.NotEqualFunc,
{name: "max", fn: stdlib.MaxFunc}, "or": stdlib.OrFunc,
{name: "md5", fn: crypto.Md5Func}, "parseint": stdlib.ParseIntFunc,
{name: "merge", fn: stdlib.MergeFunc}, "pow": stdlib.PowFunc,
{name: "min", fn: stdlib.MinFunc}, "range": stdlib.RangeFunc,
{name: "modulo", fn: stdlib.ModuloFunc}, "regex_replace": stdlib.RegexReplaceFunc,
{name: "multiply", fn: stdlib.MultiplyFunc}, "regex": stdlib.RegexFunc,
{name: "negate", fn: stdlib.NegateFunc}, "regexall": stdlib.RegexAllFunc,
{name: "not", fn: stdlib.NotFunc}, "replace": stdlib.ReplaceFunc,
{name: "notequal", fn: stdlib.NotEqualFunc}, "reverse": stdlib.ReverseFunc,
{name: "or", fn: stdlib.OrFunc}, "reverselist": stdlib.ReverseListFunc,
{name: "parseint", fn: stdlib.ParseIntFunc}, "rsadecrypt": crypto.RsaDecryptFunc,
{name: "pow", fn: stdlib.PowFunc}, "sethaselement": stdlib.SetHasElementFunc,
{name: "range", fn: stdlib.RangeFunc}, "setintersection": stdlib.SetIntersectionFunc,
{name: "regex_replace", fn: stdlib.RegexReplaceFunc}, "setproduct": stdlib.SetProductFunc,
{name: "regex", fn: stdlib.RegexFunc}, "setsubtract": stdlib.SetSubtractFunc,
{name: "regexall", fn: stdlib.RegexAllFunc}, "setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
{name: "replace", fn: stdlib.ReplaceFunc}, "setunion": stdlib.SetUnionFunc,
{name: "reverse", fn: stdlib.ReverseFunc}, "sha1": crypto.Sha1Func,
{name: "reverselist", fn: stdlib.ReverseListFunc}, "sha256": crypto.Sha256Func,
{name: "rsadecrypt", fn: crypto.RsaDecryptFunc}, "sha512": crypto.Sha512Func,
{name: "sanitize", factory: sanitizeFunc}, "signum": stdlib.SignumFunc,
{name: "sethaselement", fn: stdlib.SetHasElementFunc}, "slice": stdlib.SliceFunc,
{name: "setintersection", fn: stdlib.SetIntersectionFunc}, "sort": stdlib.SortFunc,
{name: "setproduct", fn: stdlib.SetProductFunc}, "split": stdlib.SplitFunc,
{name: "setsubtract", fn: stdlib.SetSubtractFunc}, "strlen": stdlib.StrlenFunc,
{name: "setsymmetricdifference", fn: stdlib.SetSymmetricDifferenceFunc}, "substr": stdlib.SubstrFunc,
{name: "setunion", fn: stdlib.SetUnionFunc}, "subtract": stdlib.SubtractFunc,
{name: "sha1", fn: crypto.Sha1Func}, "timeadd": stdlib.TimeAddFunc,
{name: "sha256", fn: crypto.Sha256Func}, "timestamp": timestampFunc,
{name: "sha512", fn: crypto.Sha512Func}, "title": stdlib.TitleFunc,
{name: "signum", fn: stdlib.SignumFunc}, "trim": stdlib.TrimFunc,
{name: "slice", fn: stdlib.SliceFunc}, "trimprefix": stdlib.TrimPrefixFunc,
{name: "sort", fn: stdlib.SortFunc}, "trimspace": stdlib.TrimSpaceFunc,
{name: "split", fn: stdlib.SplitFunc}, "trimsuffix": stdlib.TrimSuffixFunc,
{name: "strlen", fn: stdlib.StrlenFunc}, "try": tryfunc.TryFunc,
{name: "substr", fn: stdlib.SubstrFunc}, "upper": stdlib.UpperFunc,
{name: "subtract", fn: stdlib.SubtractFunc}, "urlencode": encoding.URLEncodeFunc,
{name: "timeadd", fn: stdlib.TimeAddFunc}, "uuidv4": uuid.V4Func,
{name: "timestamp", factory: timestampFunc}, "uuidv5": uuid.V5Func,
{name: "title", fn: stdlib.TitleFunc}, "values": stdlib.ValuesFunc,
{name: "trim", fn: stdlib.TrimFunc}, "zipmap": stdlib.ZipmapFunc,
{name: "trimprefix", fn: stdlib.TrimPrefixFunc},
{name: "trimspace", fn: stdlib.TrimSpaceFunc},
{name: "trimsuffix", fn: stdlib.TrimSuffixFunc},
{name: "try", fn: tryfunc.TryFunc},
{name: "upper", fn: stdlib.UpperFunc},
{name: "urlencode", fn: encoding.URLEncodeFunc},
{name: "uuidv4", fn: uuid.V4Func},
{name: "uuidv5", fn: uuid.V5Func},
{name: "values", fn: stdlib.ValuesFunc},
{name: "zipmap", fn: stdlib.ZipmapFunc},
}
// indexOfFunc constructs a function that finds the element index for a given
// value in a list.
func indexOfFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "list",
Type: cty.DynamicPseudoType,
},
{
Name: "value",
Type: cty.DynamicPseudoType,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
return cty.NilVal, errors.New("argument must be a list or tuple")
}
if !args[0].IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if args[0].LengthInt() == 0 { // Easy path
return cty.NilVal, errors.New("cannot search an empty list")
}
for it := args[0].ElementIterator(); it.Next(); {
i, v := it.Element()
eq, err := stdlib.Equal(v, args[1])
if err != nil {
return cty.NilVal, err
}
if !eq.IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if eq.True() {
return i, nil
}
}
return cty.NilVal, errors.New("item not found")
},
})
}
// basenameFunc constructs a function that returns the last element of a path.
func basenameFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Base(in)), nil
},
})
}
// dirnameFunc constructs a function that returns the directory of a path.
func dirnameFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Dir(in)), nil
},
})
}
// sanitizeFunc constructs a function that replaces all non-alphanumeric characters with an underscore, // sanitizeFunc constructs a function that replaces all non-alphanumeric characters with an underscore,
// leaving only characters that are valid for a Bake target name.
func sanitizeFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "name",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
// only [a-zA-Z0-9_-]+ is allowed
var b strings.Builder
for _, r := range in {
if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '_' || r == '-' {
b.WriteRune(r)
} else {
b.WriteRune('_')
}
}
return cty.StringVal(b.String()), nil
},
})
} }
 // timestampFunc constructs a function that returns a string representation of the current date and time.
 //
 // This function was imported from terraform's datetime utilities.
-func timestampFunc() function.Function {
-	return function.New(&function.Spec{
-		Params: []function.Parameter{},
-		Type:   function.StaticReturnType(cty.String),
-		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
-			return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
-		},
-	})
-}
+var timestampFunc = function.New(&function.Spec{
+	Params: []function.Parameter{},
+	Type:   function.StaticReturnType(cty.String),
+	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+		return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
+	},
+})

 func Stdlib() map[string]function.Function {
 	funcs := make(map[string]function.Function, len(stdlibFunctions))
-	for _, v := range stdlibFunctions {
-		if v.factory != nil {
-			funcs[v.name] = v.factory()
-		} else {
-			funcs[v.name] = v.fn
-		}
+	for k, v := range stdlibFunctions {
+		funcs[k] = v
 	}
 	return funcs
} }
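For context, a minimal sketch of how a table like the one returned by Stdlib() is typically consumed, via hcl.EvalContext. The wiring below is assumed for illustration (including the hclparser import path) and is not taken from this diff:

	package main

	import (
		"fmt"

		"github.com/docker/buildx/bake/hclparser"
		"github.com/hashicorp/hcl/v2"
		"github.com/hashicorp/hcl/v2/hclsyntax"
		"github.com/zclconf/go-cty/cty"
	)

	func main() {
		expr, diags := hclsyntax.ParseExpression([]byte(`upper(trimspace("  bake "))`), "inline.hcl", hcl.InitialPos)
		if diags.HasErrors() {
			panic(diags)
		}
		ctx := &hcl.EvalContext{
			Variables: map[string]cty.Value{},
			Functions: hclparser.Stdlib(), // the function table defined above
		}
		v, diags := expr.Value(ctx)
		if diags.HasErrors() {
			panic(diags)
		}
		fmt.Println(v.AsString()) // BAKE
	}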
@@ -1,199 +0,0 @@
package hclparser
import (
"testing"
"github.com/stretchr/testify/require"
"github.com/zclconf/go-cty/cty"
)
func TestIndexOf(t *testing.T) {
type testCase struct {
input cty.Value
key cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"index 0": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("one"),
want: cty.NumberIntVal(0),
},
"index 3": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("four"),
want: cty.NumberIntVal(3),
},
"index -1": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("3"),
wantErr: true,
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := indexOfFunc().Call([]cty.Value{test.input, test.key})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestBasename(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("bar"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("bar"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("bar."),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("bar.."),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("bar..."),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := basenameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestDirname(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("/foo"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("/foo"),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("/foo"),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("/foo"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := dirnameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestSanitize(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal(""),
},
"simple": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo_bar"),
},
"simple no slash": {
input: cty.StringVal("foobar"),
want: cty.StringVal("foobar"),
},
"dot": {
input: cty.StringVal("foo/bar."),
want: cty.StringVal("foo_bar_"),
},
"dotdot": {
input: cty.StringVal("foo/bar.."),
want: cty.StringVal("foo_bar__"),
},
"dotdotdot": {
input: cty.StringVal("foo/bar..."),
want: cty.StringVal("foo_bar___"),
},
"utf8": {
input: cty.StringVal("foo/🍕bar"),
want: cty.StringVal("foo__bar"),
},
"symbols": {
input: cty.StringVal("foo/bar!@(ba+z)"),
want: cty.StringVal("foo_bar___ba_z_"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := sanitizeFunc().Call([]cty.Value{test.input})
require.NoError(t, err)
require.Equal(t, test.want, got)
})
}
}
@@ -1,160 +0,0 @@
// MIT License
//
// Copyright (c) 2017-2018 Martin Atkins
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
package hclparser
import (
"reflect"
"github.com/zclconf/go-cty/cty"
)
// ImpliedType takes an arbitrary Go value (as an interface{}) and attempts
// to find a suitable cty.Type instance that could be used for a conversion
// with ToCtyValue.
//
// This allows -- for simple situations at least -- types to be defined just
// once in Go and the cty types derived from the Go types, but in the process
// it makes some assumptions that may be undesirable so applications are
// encouraged to build their cty types directly if exacting control is
// required.
//
// Not all Go types can be represented as cty types, so an error may be
// returned which is usually considered to be a bug in the calling program.
// In particular, ImpliedType will never use capsule types in its returned
// type, because it cannot know the capsule types supported by the calling
// program.
func ImpliedType(gv any) (cty.Type, error) {
rt := reflect.TypeOf(gv)
var path cty.Path
return impliedType(rt, path)
}
func impliedType(rt reflect.Type, path cty.Path) (cty.Type, error) {
if ety, err := impliedTypeExt(rt, path); err == nil {
return ety, nil
}
switch rt.Kind() {
case reflect.Ptr:
return impliedType(rt.Elem(), path)
// Primitive types
case reflect.Bool:
return cty.Bool, nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return cty.Number, nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return cty.Number, nil
case reflect.Float32, reflect.Float64:
return cty.Number, nil
case reflect.String:
return cty.String, nil
// Collection types
case reflect.Slice:
path := append(path, cty.IndexStep{Key: cty.UnknownVal(cty.Number)})
ety, err := impliedType(rt.Elem(), path)
if err != nil {
return cty.NilType, err
}
return cty.List(ety), nil
case reflect.Map:
if !stringType.AssignableTo(rt.Key()) {
return cty.NilType, path.NewErrorf("no cty.Type for %s (must have string keys)", rt)
}
path := append(path, cty.IndexStep{Key: cty.UnknownVal(cty.String)})
ety, err := impliedType(rt.Elem(), path)
if err != nil {
return cty.NilType, err
}
return cty.Map(ety), nil
// Structural types
case reflect.Struct:
return impliedStructType(rt, path)
default:
return cty.NilType, path.NewErrorf("no cty.Type for %s", rt)
}
}
func impliedStructType(rt reflect.Type, path cty.Path) (cty.Type, error) {
if valueType.AssignableTo(rt) {
// Special case: cty.Value represents cty.DynamicPseudoType, for
// type conformance checking.
return cty.DynamicPseudoType, nil
}
fieldIdxs := structTagIndices(rt)
if len(fieldIdxs) == 0 {
return cty.NilType, path.NewErrorf("no cty.Type for %s (no cty field tags)", rt)
}
atys := make(map[string]cty.Type, len(fieldIdxs))
{
// Temporary extension of path for attributes
path := append(path, nil)
for k, fi := range fieldIdxs {
path[len(path)-1] = cty.GetAttrStep{Name: k}
ft := rt.Field(fi).Type
aty, err := impliedType(ft, path)
if err != nil {
return cty.NilType, err
}
atys[k] = aty
}
}
return cty.Object(atys), nil
}
var (
valueType = reflect.TypeOf(cty.Value{})
stringType = reflect.TypeOf("")
)
// structTagIndices interrogates the fields of the given type (which must
// be a struct type, or we'll panic) and returns a map from the cty
// attribute names declared via struct tags to the indices of the
// fields holding those tags.
//
// This function will panic if two fields within the struct are tagged with
// the same cty attribute name.
func structTagIndices(st reflect.Type) map[string]int {
ct := st.NumField()
ret := make(map[string]int, ct)
for i := range ct {
field := st.Field(i)
attrName := field.Tag.Get("cty")
if attrName != "" {
ret[attrName] = i
}
}
return ret
}
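A small sketch of what ImpliedType enables, assuming it is called from the same hclparser package; the example struct and its cty tags are illustrative, not taken from buildx:

	// Illustrative struct; the cty tags drive the derived object type.
	type example struct {
		Context string   `cty:"context"`
		Tags    []string `cty:"tags"`
	}

	func exampleImpliedType() (cty.Type, error) {
		ty, err := ImpliedType(example{})
		if err != nil {
			return cty.NilType, err
		}
		// ty is cty.Object({"context": cty.String, "tags": cty.List(cty.String)})
		return ty, nil
	}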
@@ -1,166 +0,0 @@
package hclparser
import (
"reflect"
"sync"
"github.com/containerd/errdefs"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
type ToCtyValueConverter interface {
// ToCtyValue will convert this capsule value into a native
// cty.Value. This should not return a capsule type.
ToCtyValue() cty.Value
}
type FromCtyValueConverter interface {
// FromCtyValue will initialize this value using a cty.Value.
FromCtyValue(in cty.Value, path cty.Path) error
}
type extensionType int
const (
unwrapCapsuleValueExtension extensionType = iota
)
func impliedTypeExt(rt reflect.Type, _ cty.Path) (cty.Type, error) {
if rt.Kind() != reflect.Pointer {
rt = reflect.PointerTo(rt)
}
if isCapsuleType(rt) {
return capsuleValueCapsuleType(rt), nil
}
return cty.NilType, errdefs.ErrNotImplemented
}
func isCapsuleType(rt reflect.Type) bool {
fromCtyValueType := reflect.TypeFor[FromCtyValueConverter]()
toCtyValueType := reflect.TypeFor[ToCtyValueConverter]()
return rt.Implements(fromCtyValueType) && rt.Implements(toCtyValueType)
}
var capsuleValueTypes sync.Map
func capsuleValueCapsuleType(rt reflect.Type) cty.Type {
if rt.Kind() != reflect.Pointer {
panic("capsule value must be a pointer")
}
elem := rt.Elem()
if val, loaded := capsuleValueTypes.Load(elem); loaded {
return val.(cty.Type)
}
toCtyValueType := reflect.TypeFor[ToCtyValueConverter]()
// First time used. Initialize new capsule ops.
ops := &cty.CapsuleOps{
ConversionTo: func(_ cty.Type) func(cty.Value, cty.Path) (any, error) {
return func(in cty.Value, p cty.Path) (any, error) {
rv := reflect.New(elem).Interface()
if err := rv.(FromCtyValueConverter).FromCtyValue(in, p); err != nil {
return nil, err
}
return rv, nil
}
},
ConversionFrom: func(want cty.Type) func(any, cty.Path) (cty.Value, error) {
return func(in any, _ cty.Path) (cty.Value, error) {
rv := reflect.ValueOf(in).Convert(toCtyValueType)
v := rv.Interface().(ToCtyValueConverter).ToCtyValue()
return convert.Convert(v, want)
}
},
ExtensionData: func(key any) any {
switch key {
case unwrapCapsuleValueExtension:
zero := reflect.Zero(elem).Interface()
if conv, ok := zero.(ToCtyValueConverter); ok {
return conv.ToCtyValue().Type()
}
zero = reflect.Zero(rt).Interface()
if conv, ok := zero.(ToCtyValueConverter); ok {
return conv.ToCtyValue().Type()
}
}
return nil
},
}
// Attempt to store the new type. Use whichever was loaded first in the case
// of a race condition.
ety := cty.CapsuleWithOps(elem.Name(), elem, ops)
val, _ := capsuleValueTypes.LoadOrStore(elem, ety)
return val.(cty.Type)
}
// UnwrapCtyValue will unwrap capsule type values into their native cty value
// equivalents if possible.
func UnwrapCtyValue(in cty.Value) cty.Value {
want := toCtyValueType(in.Type())
if in.Type().Equals(want) {
return in
} else if out, err := convert.Convert(in, want); err == nil {
return out
}
return cty.NullVal(want)
}
func toCtyValueType(in cty.Type) cty.Type {
if et := in.MapElementType(); et != nil {
return cty.Map(toCtyValueType(*et))
}
if et := in.SetElementType(); et != nil {
return cty.Set(toCtyValueType(*et))
}
if et := in.ListElementType(); et != nil {
return cty.List(toCtyValueType(*et))
}
if in.IsObjectType() {
var optional []string
inAttrTypes := in.AttributeTypes()
outAttrTypes := make(map[string]cty.Type, len(inAttrTypes))
for name, typ := range inAttrTypes {
outAttrTypes[name] = toCtyValueType(typ)
if in.AttributeOptional(name) {
optional = append(optional, name)
}
}
return cty.ObjectWithOptionalAttrs(outAttrTypes, optional)
}
if in.IsTupleType() {
inTypes := in.TupleElementTypes()
outTypes := make([]cty.Type, len(inTypes))
for i, typ := range inTypes {
outTypes[i] = toCtyValueType(typ)
}
return cty.Tuple(outTypes)
}
if in.IsCapsuleType() {
if out := in.CapsuleExtensionData(unwrapCapsuleValueExtension); out != nil {
return out.(cty.Type)
}
return cty.DynamicPseudoType
}
return in
}
func ToCtyValue(val any, ty cty.Type) (cty.Value, error) {
out, err := gocty.ToCtyValue(val, ty)
if err != nil {
return out, err
}
return UnwrapCtyValue(out), nil
}
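To make the two converter interfaces concrete, a hypothetical capsule value (not in this diff) that round-trips through a cty string; imports of time and cty are assumed, and the error handling assumes the incoming value is a string:

	type duration time.Duration

	// FromCtyValue parses a string cty value such as "1h30m".
	// AsString panics if in is not a string value.
	func (d *duration) FromCtyValue(in cty.Value, p cty.Path) error {
		v, err := time.ParseDuration(in.AsString())
		if err != nil {
			return p.NewError(err)
		}
		*d = duration(v)
		return nil
	}

	// ToCtyValue renders the duration back to its native string form, so
	// UnwrapCtyValue can expose cty.String instead of the capsule type.
	func (d duration) ToCtyValue() cty.Value {
		return cty.StringVal(time.Duration(d).String())
	}

Because *duration implements both interfaces, impliedTypeExt above would map it to a capsule type rather than returning errdefs.ErrNotImplemented.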
@@ -4,14 +4,11 @@ import (
"archive/tar" "archive/tar"
"bytes" "bytes"
"context" "context"
"os"
"strings"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb" controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/driver" "github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client" "github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb" "github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui" "github.com/moby/buildkit/frontend/dockerui"
@@ -20,42 +17,19 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
) )
const maxBakeDefinitionSize = 2 * 1024 * 1024 // 2 MB
type Input struct { type Input struct {
State *llb.State State *llb.State
URL string URL string
} }
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) { func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var sessions []session.Attachable var session []session.Attachable
var filename string var filename string
st, ok := dockerui.DetectGitContext(url, false) st, ok := dockerui.DetectGitContext(url, false)
if ok { if ok {
-		if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
-			ID:    "default",
-			Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
-		}}); err == nil {
-			sessions = append(sessions, ssh)
-		}
-		var gitAuthSecrets []*controllerapi.Secret
-		if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
-			gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
-				ID:  llb.GitAuthTokenKey,
-				Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
-			})
-		}
-		if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
-			gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
-				ID:  llb.GitAuthHeaderKey,
-				Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
-			})
-		}
-		if len(gitAuthSecrets) > 0 {
-			if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
-				sessions = append(sessions, secrets)
-			}
-		}
+		ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
+		if err == nil {
+			session = append(session, ssh)
+		}
} else { } else {
st, filename, ok = dockerui.DetectHTTPContext(url) st, filename, ok = dockerui.DetectHTTPContext(url)
@@ -85,7 +59,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
ch, done := progress.NewChannel(pw) ch, done := progress.NewChannel(pw)
defer func() { <-done }() defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) { _, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx) def, err := st.Marshal(ctx)
if err != nil { if err != nil {
return nil, err return nil, err
@@ -109,6 +83,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
} }
return nil, err return nil, err
}, ch) }, ch)
if err != nil { if err != nil {
return nil, nil, err return nil, nil, err
} }
@@ -180,9 +155,9 @@ func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Refere
name := inp.URL name := inp.URL
inp.URL = "" inp.URL = ""
if int64(len(dt)) > stat.Size { if len(dt) > stat.Size() {
if stat.Size > maxBakeDefinitionSize { if stat.Size() > 1024*512 {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size (%s)", units.HumanSize(maxBakeDefinitionSize)) return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
} }
dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{ dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{
File diff suppressed because it is too large.
@@ -4,9 +4,8 @@ import (
"context" "context"
stderrors "errors" stderrors "errors"
"net" "net"
"slices"
"github.com/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1" v1 "github.com/opencontainers/image-spec/specs-go/v1"
@@ -38,7 +37,15 @@ func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platfor
for _, ls := range resolved { for _, ls := range resolved {
for _, rn := range ls { for _, rn := range ls {
if platform != nil { if platform != nil {
-				if !slices.ContainsFunc(rn.platforms, platforms.Only(*platform).Match) {
+				p := *platform
+				var found bool
+				for _, pp := range rn.platforms {
+					if platforms.Only(p).Match(pp) {
+						found = true
+						break
+					}
+				}
+				if !found {
continue continue
} }
} }
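The one-line variant relies on containerd's platform matcher composing directly with slices.ContainsFunc. A standalone sketch of that semantics, with assumed inputs (the imports match the ones visible in this hunk):

	want := platforms.MustParse("linux/amd64")
	node := []v1.Platform{
		platforms.MustParse("linux/arm64"),
		platforms.MustParse("linux/amd64"),
	}
	// platforms.Only(want).Match reports whether a node platform satisfies want,
	// so ContainsFunc answers "can any node platform run this build?" (true here).
	ok := slices.ContainsFunc(node, platforms.Only(want).Match)
	_ = ok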
@@ -3,10 +3,8 @@ package build
import ( import (
"context" "context"
"fmt" "fmt"
"slices"
"sync"
"github.com/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
"github.com/docker/buildx/driver" "github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
@@ -48,22 +46,10 @@ func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error)
type matchMaker func(specs.Platform) platforms.MatchComparer type matchMaker func(specs.Platform) platforms.MatchComparer
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
cache map[int]T
cacheMu sync.Mutex
}
func newCachedGroup[T any]() cachedGroup[T] {
return cachedGroup[T]{
cache: map[int]T{},
}
}
type nodeResolver struct { type nodeResolver struct {
nodes []builder.Node nodes []builder.Node
clients cachedGroup[*client.Client] clients flightcontrol.Group[*client.Client]
buildOpts cachedGroup[gateway.BuildOpts] opt flightcontrol.Group[gateway.BuildOpts]
} }
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) { func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
@@ -77,9 +63,7 @@ func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Op
func newDriverResolver(nodes []builder.Node) *nodeResolver { func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{ r := &nodeResolver{
nodes: nodes, nodes: nodes,
clients: newCachedGroup[*client.Client](),
buildOpts: newCachedGroup[gateway.BuildOpts](),
} }
return r return r
} }
@@ -195,7 +179,6 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
resolver: r, resolver: r,
driverIndex: 0, driverIndex: 0,
}) })
nodeIdxs = append(nodeIdxs, 0)
} else { } else {
for i, idx := range nodeIdxs { for i, idx := range nodeIdxs {
node := &resolvedNode{ node := &resolvedNode{
@@ -222,7 +205,7 @@ func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatf
for i, node := range r.nodes { for i, node := range r.nodes {
platforms := node.Platforms platforms := node.Platforms
if additionalPlatforms != nil { if additionalPlatforms != nil {
platforms = slices.Clone(platforms) platforms = append([]specs.Platform{}, platforms...)
platforms = append(platforms, additionalPlatforms(i, node)...) platforms = append(platforms, additionalPlatforms(i, node)...)
} }
for _, p2 := range platforms { for _, p2 := range platforms {
@@ -254,24 +237,11 @@ func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer)
for i, idx := range idxs { for i, idx := range idxs {
i, idx := i, idx i, idx := i, idx
eg.Go(func() error { eg.Go(func() error {
c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) { c, err := r.clients.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil { if r.nodes[idx].Driver == nil {
return nil, nil return nil, nil
} }
-				r.clients.cacheMu.Lock()
-				c, ok := r.clients.cache[idx]
-				r.clients.cacheMu.Unlock()
-				if ok {
-					return c, nil
-				}
-				c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
-				if err != nil {
-					return nil, err
-				}
-				r.clients.cacheMu.Lock()
-				r.clients.cache[idx] = c
-				r.clients.cacheMu.Unlock()
-				return c, nil
+				return driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
}) })
if err != nil { if err != nil {
return err return err
@@ -302,25 +272,14 @@ func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer)
continue continue
} }
eg.Go(func() error { eg.Go(func() error {
opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) { opt, err := r.opt.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
-				r.buildOpts.cacheMu.Lock()
-				opt, ok := r.buildOpts.cache[idx]
-				r.buildOpts.cacheMu.Unlock()
-				if ok {
-					return opt, nil
-				}
+				opt := gateway.BuildOpts{}
_, err := c.Build(ctx, client.SolveOpt{ _, err := c.Build(ctx, client.SolveOpt{
Internal: true, Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) { }, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts() opt = c.BuildOpts()
return nil, nil return nil, nil
}, nil) }, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
r.buildOpts.cacheMu.Lock()
r.buildOpts.cache[idx] = opt
r.buildOpts.cacheMu.Unlock()
return opt, err return opt, err
}) })
if err != nil { if err != nil {
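The lock/lookup/store dance repeated in boot and opts above could plausibly be folded into one helper on cachedGroup. A hypothetical sketch (this method does not exist in the diff), assuming buildkit's generic flightcontrol.Group as used here: flightcontrol deduplicates concurrent callers, while the mutex-guarded map memoizes the result across later calls:

	func (c *cachedGroup[T]) do(ctx context.Context, idx int, fn func(context.Context) (T, error)) (T, error) {
		return c.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (T, error) {
			c.cacheMu.Lock()
			v, ok := c.cache[idx]
			c.cacheMu.Unlock()
			if ok {
				return v, nil // memoized across calls, not just deduplicated
			}
			v, err := fn(ctx)
			if err != nil {
				var zero T
				return zero, err
			}
			c.cacheMu.Lock()
			c.cache[idx] = v
			c.cacheMu.Unlock()
			return v, nil
		})
	}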
@@ -5,7 +5,7 @@ import (
"sort" "sort"
"testing" "testing"
"github.com/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1" specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
@@ -17,19 +17,10 @@ import (
const DockerfileLabel = "com.docker.image.source.entrypoint" const DockerfileLabel = "com.docker.image.source.entrypoint"
-type gitAttrsAppendFunc func(so *client.SolveOpt)
-
-func gitAppendNoneFunc(_ *client.SolveOpt) {}
-
-func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (f gitAttrsAppendFunc, err error) {
-	defer func() {
-		if f == nil {
-			f = gitAppendNoneFunc
-		}
-	}()
-
+func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(*client.SolveOpt), error) {
+	res := make(map[string]string)
 	if contextPath == "" {
-		return nil, nil
+		return nil, nil, nil
 	}
setGitLabels := false setGitLabels := false
@@ -48,7 +39,7 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
} }
if !setGitLabels && !setGitInfo { if !setGitLabels && !setGitInfo {
return nil, nil return nil, nil, nil
} }
// figure out in which directory the git command needs to run in // figure out in which directory the git command needs to run in
@@ -63,27 +54,25 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd)) gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil { if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() { if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return nil, errors.Wrap(err, "git was not found in the system") return res, nil, errors.Wrap(err, "git was not found in the system")
} }
return nil, nil return nil, nil, nil
} }
if !gitc.IsInsideWorkTree() { if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() { if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree") return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
} }
return nil, nil return nil, nil, nil
} }
root, err := gitc.RootDir() root, err := gitc.RootDir()
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get git root dir") return res, nil, errors.Wrap(err, "failed to get git root dir")
} }
res := make(map[string]string)
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) { if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return nil, errors.Wrap(err, "failed to get git commit") return res, nil, errors.Wrap(err, "failed to get git commit")
} else if sha != "" { } else if sha != "" {
checkDirty := false checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok { if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -123,24 +112,12 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
} }
} }
return func(so *client.SolveOpt) { return res, func(so *client.SolveOpt) {
if so.FrontendAttrs == nil {
so.FrontendAttrs = make(map[string]string)
}
for k, v := range res {
so.FrontendAttrs[k] = v
}
if !setGitInfo || root == "" { if !setGitInfo || root == "" {
return return
} }
-		for key, mount := range so.LocalMounts {
-			fs, ok := mount.(*fs)
-			if !ok {
-				continue
-			}
-			dir, err := filepath.EvalSymlinks(fs.dir) // keep same behavior as fsutil.NewFS
+		for k, dir := range so.LocalDirs {
+			dir, err = filepath.EvalSymlinks(dir)
 			if err != nil {
 				continue
 			}
@@ -153,7 +130,7 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
} }
dir = osutil.SanitizePath(dir) dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") { if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+key] = r so.FrontendAttrs["vcs:localdir:"+k] = r
} }
} }
}, nil }, nil
@@ -23,7 +23,7 @@ func setupTest(tb testing.TB) {
gitutil.GitInit(c, tb) gitutil.GitInit(c, tb)
df := []byte("FROM alpine:latest\n") df := []byte("FROM alpine:latest\n")
require.NoError(tb, os.WriteFile("Dockerfile", df, 0644)) assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
gitutil.GitAdd(c, tb, "Dockerfile") gitutil.GitAdd(c, tb, "Dockerfile")
gitutil.GitCommit(c, tb, "initial commit") gitutil.GitCommit(c, tb, "initial commit")
@@ -31,26 +31,24 @@ func setupTest(tb testing.TB) {
} }
func TestGetGitAttributesNotGitRepo(t *testing.T) { func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile") _, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
require.NoError(t, err) assert.NoError(t, err)
} }
func TestGetGitAttributesBadGitRepo(t *testing.T) { func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir() tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755)) require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile") _, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err) assert.Error(t, err)
} }
func TestGetGitAttributesNoContext(t *testing.T) { func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t) setupTest(t)
-	addGitAttrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
-	require.NoError(t, err)
-
-	var so client.SolveOpt
-	addGitAttrs(&so)
-	assert.Empty(t, so.FrontendAttrs)
+	gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
+	assert.NoError(t, err)
+	assert.Empty(t, gitattrs)
} }
func TestGetGitAttributes(t *testing.T) { func TestGetGitAttributes(t *testing.T) {
@@ -117,17 +115,15 @@ func TestGetGitAttributes(t *testing.T) {
if tt.envGitInfo != "" { if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo) t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
} }
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile") gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err) require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
for _, e := range tt.expected { for _, e := range tt.expected {
assert.Contains(t, so.FrontendAttrs, e) assert.Contains(t, gitattrs, e)
assert.NotEmpty(t, so.FrontendAttrs[e]) assert.NotEmpty(t, gitattrs[e])
if e == "label:"+DockerfileLabel { if e == "label:"+DockerfileLabel {
assert.Equal(t, "Dockerfile", so.FrontendAttrs[e]) assert.Equal(t, "Dockerfile", gitattrs[e])
} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" { } else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs[e]) assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
} }
} }
}) })
@@ -144,25 +140,20 @@ func TestGetGitAttributesDirty(t *testing.T) {
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644)) require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true") t.Setenv("BUILDX_GIT_LABELS", "true")
-	addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
-	require.NoError(t, err)
-
-	var so client.SolveOpt
-	addGitAttrs(&so)
-
-	assert.Equal(t, 5, len(so.FrontendAttrs))
-
-	assert.Contains(t, so.FrontendAttrs, "label:"+DockerfileLabel)
-	assert.Equal(t, "Dockerfile", so.FrontendAttrs["label:"+DockerfileLabel])
-	assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationSource)
-	assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+specs.AnnotationSource])
-	assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationRevision)
-	assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+specs.AnnotationRevision], "-dirty"))
-	assert.Contains(t, so.FrontendAttrs, "vcs:source")
-	assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["vcs:source"])
-	assert.Contains(t, so.FrontendAttrs, "vcs:revision")
-	assert.True(t, strings.HasSuffix(so.FrontendAttrs["vcs:revision"], "-dirty"))
+	gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
+	assert.Equal(t, 5, len(gitattrs))
+	assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
+	assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
+	assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
+	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
+	assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
+	assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))
+	assert.Contains(t, gitattrs, "vcs:source")
+	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
+	assert.Contains(t, gitattrs, "vcs:revision")
+	assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
} }
func TestLocalDirs(t *testing.T) { func TestLocalDirs(t *testing.T) {
@@ -170,19 +161,19 @@ func TestLocalDirs(t *testing.T) {
so := &client.SolveOpt{ so := &client.SolveOpt{
FrontendAttrs: map[string]string{}, FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": ".",
},
} }
-	addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
+	_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
 	require.NoError(t, err)
+	require.NotNil(t, addVCSLocalDir)
 
-	require.NoError(t, setLocalMount("context", ".", so))
-	require.NoError(t, setLocalMount("dockerfile", ".", so))
-	addGitAttrs(so)
+	addVCSLocalDir(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context") require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"]) assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile") require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"]) assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
} }
@@ -195,8 +186,8 @@ func TestLocalDirsSub(t *testing.T) {
gitutil.GitInit(c, t) gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n") df := []byte("FROM alpine:latest\n")
require.NoError(t, os.MkdirAll("app", 0755)) assert.NoError(t, os.MkdirAll("app", 0755))
require.NoError(t, os.WriteFile("app/Dockerfile", df, 0644)) assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile") gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit") gitutil.GitCommit(c, t, "initial commit")
@@ -204,18 +195,19 @@ func TestLocalDirsSub(t *testing.T) {
so := &client.SolveOpt{ so := &client.SolveOpt{
FrontendAttrs: map[string]string{}, FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": "app",
},
} }
-	require.NoError(t, setLocalMount("context", ".", so))
-	require.NoError(t, setLocalMount("dockerfile", "app", so))
-
-	addGitAttrs, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
+	_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
 	require.NoError(t, err)
+	require.NotNil(t, addVCSLocalDir)
 
-	addGitAttrs(so)
+	addVCSLocalDir(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context") require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"]) assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile") require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"]) assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
} }
@@ -16,7 +16,7 @@ import (
type Container struct { type Container struct {
cancelOnce sync.Once cancelOnce sync.Once
containerCancel func(error) containerCancel func()
isUnavailable atomic.Bool isUnavailable atomic.Bool
initStarted atomic.Bool initStarted atomic.Bool
container gateway.Container container gateway.Container
@@ -31,18 +31,18 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
errCh := make(chan error) errCh := make(chan error)
go func() { go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) { err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancelCause(ctx) ctx, cancel := context.WithCancel(ctx)
go func() { go func() {
<-mainCtx.Done() <-mainCtx.Done()
cancel(errors.WithStack(context.Canceled)) cancel()
}() }()
containerCfg, err := resultCtx.getContainerConfig(cfg) containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
containerCtx, containerCancel := context.WithCancelCause(ctx) containerCtx, containerCancel := context.WithCancel(ctx)
defer containerCancel(errors.WithStack(context.Canceled)) defer containerCancel()
bkContainer, err := c.NewContainer(containerCtx, containerCfg) bkContainer, err := c.NewContainer(containerCtx, containerCfg)
if err != nil { if err != nil {
return nil, err return nil, err
@@ -83,7 +83,7 @@ func (c *Container) Cancel() {
c.markUnavailable() c.markUnavailable()
c.cancelOnce.Do(func() { c.cancelOnce.Do(func() {
if c.containerCancel != nil { if c.containerCancel != nil {
c.containerCancel(errors.WithStack(context.Canceled)) c.containerCancel()
} }
close(c.releaseCh) close(c.releaseCh)
}) })
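The cancellation change above swaps context.WithCancel for context.WithCancelCause (available since Go 1.20), which records why a context was cancelled so the reason can be inspected later. A minimal illustration, using pkg/errors for the stack as this file does:

	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.WithStack(context.Canceled)) // record the reason, with a stack
	<-ctx.Done()
	err := context.Cause(ctx) // the wrapped context.Canceled, richer than ctx.Err()
	_ = err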
@@ -5,40 +5,39 @@ import (
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate" "github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/confutil"
"github.com/moby/buildkit/client" "github.com/moby/buildkit/client"
) )
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, cfg *confutil.Config) error { func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
var err error var err error
if so.Ref == "" || opts.CallFunc != nil { if so.Ref == "" {
return nil return nil
} }
lp := opts.Inputs.ContextPath lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath dp := opts.Inputs.DockerfilePath
if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" { if lp != "" || dp != "" {
dp, err = filepath.Abs(dp) if lp != "" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if dp != "" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
l, err := localstate.New(configDir)
if err != nil { if err != nil {
return err return err
} }
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
} }
if lp != "" && !IsRemoteURL(lp) && lp != "-" { return nil
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(cfg)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
} }
@@ -1,657 +0,0 @@
package build
import (
"bytes"
"context"
"io"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/plugins/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/client/ociindex"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/entitlements"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/tonistiigi/fsutil"
)
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *Options, bopts gateway.BuildOpts, cfg *confutil.Config, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
for _, f := range defers {
f()
}
}
defer func() {
if err != nil {
releaseF()
}
}()
// inline cache from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
if v, _ := strconv.ParseBool(v); v {
opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
Type: "inline",
Attrs: map[string]string{},
})
}
}
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
}
}
cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
for _, e := range opt.CacheTo {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheTo = append(cacheTo, e)
}
cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
for _, e := range opt.CacheFrom {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheFrom = append(cacheFrom, e)
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalMounts: map[string]fsutil.FS{},
CacheExports: cacheTo,
CacheImports: cacheFrom,
AllowedEntitlements: opt.Allow,
SourcePolicy: opt.SourcePolicy,
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
if v, _ := strconv.ParseBool(v); v {
so.FrontendAttrs["multi-platform"] = "true"
}
}
if multiDriver {
// force creation of manifest list
so.FrontendAttrs["multi-platform"] = "true"
}
attests := make(map[string]string)
for k, v := range opt.Attests {
if v != nil {
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
noProv, err = strconv.ParseBool(v)
if err != nil {
return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
}
}
if !noProv {
so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
}
}
switch len(opt.Exports) {
case 1:
// valid
case 0:
if !noDefaultLoad() && opt.CallFunc == nil {
if nodeDriver.IsMobyDriver() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
}
}
default:
if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
}
}
// fill in image exporter names from tags
if len(opt.Tags) > 0 {
tags := make([]string, len(opt.Tags))
for i, tag := range opt.Tags {
ref, err := reference.Parse(tag)
if err != nil {
return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
}
tags[i] = ref.String()
}
for i, e := range opt.Exports {
switch e.Type {
case "image", "oci", "docker":
opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
}
}
} else {
for _, e := range opt.Exports {
if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("tag is needed when pushing to registry")
}
}
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
if features[dockerutil.OCIImporter] && e.Output == nil {
// rely on oci importer if available (which supports
// multi-platform images), otherwise fall back to docker
opt.Exports[i].Type = "oci"
} else if len(opt.Platforms) > 1 || len(attests) > 0 {
if e.Output != nil {
return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
}
return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
}
if e.Output == nil {
if nodeDriver.IsMobyDriver() {
e.Type = "image"
} else {
w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
if err != nil {
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
opt.Exports[i].Type = "moby"
if e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
}
}
}
}
if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
// inline buildinfo attrs from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
opt.Exports[i].Attrs["buildinfo-attrs"] = v
}
}
}
so.Exports = opt.Exports
so.Session = slices.Clone(opt.Session)
releaseLoad, err := loadInputs(ctx, nodeDriver, &opt.Inputs, pw, &so)
if err != nil {
return nil, nil, err
}
defers = append(defers, releaseLoad)
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + cfg.TryNodeIdentifier()
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
}
if len(opt.NoCacheFilter) > 0 {
so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
}
if opt.NoCache {
so.FrontendAttrs["no-cache"] = ""
}
for k, v := range opt.BuildArgs {
so.FrontendAttrs["build-arg:"+k] = v
}
for k, v := range opt.Labels {
so.FrontendAttrs["label:"+k] = v
}
for k, v := range node.ProxyConfig {
if _, ok := opt.BuildArgs[k]; !ok {
so.FrontendAttrs["build-arg:"+k] = v
}
}
// set platforms
if len(opt.Platforms) != 0 {
pp := make([]string, len(opt.Platforms))
for i, p := range opt.Platforms {
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
// setup networkmode
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost.String())
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
}
// setup extrahosts
extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
if err != nil {
return nil, nil, err
}
if len(extraHosts) > 0 {
so.FrontendAttrs["add-hosts"] = extraHosts
}
// setup shm size
if opt.ShmSize.Value() > 0 {
so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
}
// setup ulimits
ulimits, err := toBuildkitUlimits(opt.Ulimits)
if err != nil {
return nil, nil, err
} else if len(ulimits) > 0 {
so.FrontendAttrs["ulimit"] = ulimits
}
// mark call request as internal
if opt.CallFunc != nil {
so.Internal = true
}
return &so, releaseF, nil
}
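For orientation, the SolveOpt assembled above reaches BuildKit as a bag of string-keyed frontend attributes. A purely illustrative sketch of the shape this function produces for a two-platform build with default provenance (all names and values are hypothetical, not taken from a real build):

	// Illustrative only: mirrors the attribute keys set by the function above.
	so := client.SolveOpt{
		Frontend: "dockerfile.v0",
		FrontendAttrs: map[string]string{
			"filename":          "Dockerfile",
			"platform":          "linux/amd64,linux/arm64",
			"multi-platform":    "true", // forced when building across drivers
			"attest:provenance": "mode=min,inline-only=true",
			"build-arg:FOO":     "bar", // hypothetical build arg
		},
	}
	_ = so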
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
// TODO: handle stdin, symlinks, remote contexts, check files exist
var (
err error
dockerfileReader io.ReadCloser
dockerfileDir string
dockerfileName = inp.DockerfilePath
dockerfileSrcName = inp.DockerfilePath
toRemove []string
)
switch {
case inp.ContextState != nil:
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs["context"] = *inp.ContextState
target.FrontendInputs["dockerfile"] = *inp.ContextState
case inp.ContextPath == "-":
if inp.DockerfilePath == "-" {
return nil, errors.Errorf("invalid argument: can't use stdin for both build context and dockerfile")
}
rc := inp.InStream.NewReadCloser()
magic, err := inp.InStream.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if !(err == io.EOF && len(magic) == 0) {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
target.FrontendAttrs["context"] = up.Add(rc)
target.Session = append(target.Session, up)
} else {
if inp.DockerfilePath != "" {
return nil, errors.Errorf("ambiguous Dockerfile source: both stdin and flag correspond to Dockerfiles")
}
// stdin is dockerfile
dockerfileReader = rc
inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
toRemove = append(toRemove, inp.ContextPath)
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
}
}
case osutil.IsLocalDir(inp.ContextPath):
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
sharedKey := inp.ContextPath
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
target.SharedKey = sharedKey
switch inp.DockerfilePath {
case "-":
dockerfileReader = inp.InStream.NewReadCloser()
case "":
dockerfileDir = inp.ContextPath
default:
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
}
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream.NewReadCloser()
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
default:
return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
}
if inp.DockerfileInline != "" {
dockerfileReader = io.NopCloser(strings.NewReader(inp.DockerfileInline))
dockerfileSrcName = "inline"
} else if inp.DockerfilePath == "-" {
dockerfileSrcName = "stdin"
} else if inp.DockerfilePath == "" {
dockerfileSrcName = filepath.Join(inp.ContextPath, "Dockerfile")
}
if dockerfileReader != nil {
dockerfileDir, err = createTempDockerfile(dockerfileReader, inp.InStream)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
if isHTTPURL(inp.DockerfilePath) {
dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
delete(target.FrontendInputs, "dockerfile")
}
if dockerfileName == "" {
dockerfileName = "Dockerfile"
}
if dockerfileDir != "" {
if err := setLocalMount("dockerfile", dockerfileDir, target); err != nil {
return nil, err
}
dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
}
target.FrontendAttrs["filename"] = dockerfileName
for k, v := range inp.NamedContexts {
target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
if v.State != nil {
target.FrontendAttrs["context:"+k] = "input:" + k
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs[k] = *v.State
continue
}
if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
target.FrontendAttrs["context:"+k] = v.Path
continue
}
// handle OCI layout
if strings.HasPrefix(v.Path, "oci-layout://") {
localPath := strings.TrimPrefix(v.Path, "oci-layout://")
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
tag = "latest"
}
if !hasDigest {
dig, err = resolveDigest(localPath, tag)
if err != nil {
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
}
}
store, err := local.NewStore(localPath)
if err != nil {
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
}
storeName := identity.NewID()
if target.OCIStores == nil {
target.OCIStores = map[string]content.Store{}
}
target.OCIStores[storeName] = store
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
continue
}
st, err := os.Stat(v.Path)
if err != nil {
return nil, errors.Wrapf(err, "failed to get build context %v", k)
}
if !st.IsDir() {
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
}
localName := k
if k == "context" || k == "dockerfile" {
localName = "_" + k // underscore to avoid collisions
}
if err := setLocalMount(localName, v.Path, target); err != nil {
return nil, err
}
target.FrontendAttrs["context:"+k] = "local:" + localName
}
release := func() {
for _, dir := range toRemove {
_ = os.RemoveAll(dir)
}
}
inp.DockerfileMappingSrc = dockerfileSrcName
inp.DockerfileMappingDst = dockerfileName
return release, nil
}
func resolveDigest(localPath, tag string) (dig string, _ error) {
idx := ociindex.NewStoreIndex(localPath)
// lookup by name
desc, err := idx.Get(tag)
if err != nil {
return "", err
}
if desc == nil {
// lookup single
desc, err = idx.GetSingle()
if err != nil {
return "", err
}
}
if desc == nil {
return "", errors.New("failed to resolve digest")
}
dig = string(desc.Digest)
_, err = digest.Parse(dig)
if err != nil {
return "", errors.Wrapf(err, "invalid digest %s", dig)
}
return dig, nil
}
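To make the oci-layout reference handling above concrete, here is a small self-contained sketch of how a name like oci-layout://dir:tag@digest decomposes with strings.Cut, mirroring the branch in loadInputs (the path and digest are made up):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		ref := "oci-layout:///tmp/layout:v1@sha256:abc123" // hypothetical reference
		p := strings.TrimPrefix(ref, "oci-layout://")
		p, dig, hasDigest := strings.Cut(p, "@") // split off the digest first
		p, tag, hasTag := strings.Cut(p, ":")    // then the tag
		if !hasTag {
			tag = "latest" // same default as loadInputs
		}
		if !hasDigest {
			dig = "(resolved from the store index)" // resolveDigest's job
		}
		fmt.Println(p, tag, dig) // /tmp/layout v1 sha256:abc123
	}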
func setLocalMount(name, dir string, so *client.SolveOpt) error {
lm, err := fsutil.NewFS(dir)
if err != nil {
return err
}
if so.LocalMounts == nil {
so.LocalMounts = map[string]fsutil.FS{}
}
so.LocalMounts[name] = &fs{FS: lm, dir: dir}
return nil
}
func createTempDockerfile(r io.Reader, multiReader *SyncMultiReader) (string, error) {
dir, err := os.MkdirTemp("", "dockerfile")
if err != nil {
return "", err
}
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
if err != nil {
return "", err
}
defer f.Close()
if multiReader != nil {
dt, err := io.ReadAll(r)
if err != nil {
return "", err
}
multiReader.Reset(dt)
r = bytes.NewReader(dt)
}
if _, err := io.Copy(f, r); err != nil {
return "", err
}
return dir, err
}
// handle https://github.com/moby/moby/pull/10858
func handleLowercaseDockerfile(dir, p string) string {
if filepath.Base(p) != "Dockerfile" {
return p
}
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
if err != nil {
return p
}
names, err := f.Readdirnames(-1)
if err != nil {
return p
}
foundLowerCase := false
for _, n := range names {
if n == "Dockerfile" {
return p
}
if n == "dockerfile" {
foundLowerCase = true
}
}
if foundLowerCase {
return filepath.Join(filepath.Dir(p), "dockerfile")
}
return p
}
type fs struct {
fsutil.FS
dir string
}
var _ fsutil.FS = &fs{}
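The trailing var _ line is a compile-time interface assertion: the build breaks immediately if *fs drifts away from fsutil.FS, instead of failing at some distant use site. The idiom generalizes to any interface; a tiny standalone sketch:

	package main

	import "io"

	// demo type, defined only to illustrate the assertion idiom
	type nopCloser struct{ io.Reader }

	func (nopCloser) Close() error { return nil }

	// fails to compile if nopCloser ever stops satisfying io.ReadCloser
	var _ io.ReadCloser = nopCloser{}

	func main() {}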


@@ -1,157 +0,0 @@
package build
import (
"context"
"encoding/base64"
"encoding/json"
"io"
"strings"
"sync"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
digest "github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type provenancePredicate struct {
Builder *provenanceBuilder `json:"builder,omitempty"`
provenancetypes.ProvenancePredicate
}
type provenanceBuilder struct {
ID string `json:"id,omitempty"`
}
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
}
pw = progress.ResetTime(pw)
return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
res, err := fetchProvenance(ctx, c, ref, mode)
if err != nil {
return err
}
for k, v := range res {
sr.ExporterResponse[k] = v
}
return nil
})
}
func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
Ref: ref,
EarlyExit: true,
})
if err != nil {
return nil, err
}
var mu sync.Mutex
eg, ctx := errgroup.WithContext(ctx)
store := proxy.NewContentStore(c.ContentClient())
for {
ev, err := cl.Recv()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return nil, err
}
if ev.Record == nil {
continue
}
if ev.Record.Result != nil {
desc := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance"] = prv
mu.Unlock()
return nil
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
platform := platform
desc := lookupProvenance(res)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance/"+platform] = prv
mu.Unlock()
return nil
})
}
}
}
return out, eg.Wait()
}
func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: digest.Digest(a.Digest),
Size: a.Size,
MediaType: a.MediaType,
Annotations: a.Annotations,
}
}
}
return nil
}
func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
var prv provenancePredicate
if err := json.Unmarshal(dt, &prv); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if prv.Builder != nil && prv.Builder.ID == "" {
// reset builder if id is empty
prv.Builder = nil
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
prv.BuildConfig = nil
prv.Metadata = nil
}
dtprv, err := json.Marshal(prv)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}
return base64.StdEncoding.EncodeToString(dtprv), nil
}
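The provenance therefore lands in the exporter response (and any metadata file written from it) as base64-encoded JSON under the buildx.build.provenance key seen in fetchProvenance. A hedged sketch of reading it back with only the standard library — the file name and exact metadata layout here are assumptions:

	package main

	import (
		"encoding/base64"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		dt, err := os.ReadFile("metadata.json") // hypothetical --metadata-file output
		if err != nil {
			panic(err)
		}
		var md map[string]any
		if err := json.Unmarshal(dt, &md); err != nil {
			panic(err)
		}
		b64, ok := md["buildx.build.provenance"].(string)
		if !ok {
			panic("provenance missing or not a string")
		}
		prv, err := base64.StdEncoding.DecodeString(b64)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(prv)) // the SLSA provenance predicate as JSON
	}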


@@ -1,164 +0,0 @@
package build
import (
"bufio"
"bytes"
"io"
"sync"
)
type SyncMultiReader struct {
source *bufio.Reader
buffer []byte
static []byte
mu sync.Mutex
cond *sync.Cond
readers []*syncReader
err error
offset int
}
type syncReader struct {
mr *SyncMultiReader
offset int
closed bool
}
func NewSyncMultiReader(source io.Reader) *SyncMultiReader {
mr := &SyncMultiReader{
source: bufio.NewReader(source),
buffer: make([]byte, 0, 32*1024),
}
mr.cond = sync.NewCond(&mr.mu)
return mr
}
func (mr *SyncMultiReader) Peek(n int) ([]byte, error) {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return mr.static[:min(n, len(mr.static))], nil
}
return mr.source.Peek(n)
}
func (mr *SyncMultiReader) Reset(dt []byte) {
mr.mu.Lock()
defer mr.mu.Unlock()
mr.static = dt
}
func (mr *SyncMultiReader) NewReadCloser() io.ReadCloser {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return io.NopCloser(bytes.NewReader(mr.static))
}
reader := &syncReader{
mr: mr,
}
mr.readers = append(mr.readers, reader)
return reader
}
func (sr *syncReader) Read(p []byte) (int, error) {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
return sr.read(p)
}
func (sr *syncReader) read(p []byte) (int, error) {
end := sr.mr.offset + len(sr.mr.buffer)
loop0:
for {
if sr.closed {
return 0, io.EOF
}
end := sr.mr.offset + len(sr.mr.buffer)
if sr.mr.err != nil && sr.offset == end {
return 0, sr.mr.err
}
start := sr.offset - sr.mr.offset
dt := sr.mr.buffer[start:]
if len(dt) > 0 {
n := copy(p, dt)
sr.offset += n
sr.mr.cond.Broadcast()
return n, nil
}
// check for readers that have not caught up
hasOpen := false
for _, r := range sr.mr.readers {
if !r.closed {
hasOpen = true
} else {
continue
}
if r.offset < end {
sr.mr.cond.Wait()
continue loop0
}
}
if !hasOpen {
return 0, io.EOF
}
break
}
last := sr.mr.offset + len(sr.mr.buffer)
// another reader has already updated the buffer
if last > end || sr.mr.err != nil {
return sr.read(p)
}
sr.mr.offset += len(sr.mr.buffer)
sr.mr.buffer = sr.mr.buffer[:cap(sr.mr.buffer)]
n, err := sr.mr.source.Read(sr.mr.buffer)
if n >= 0 {
sr.mr.buffer = sr.mr.buffer[:n]
} else {
sr.mr.buffer = sr.mr.buffer[:0]
}
sr.mr.cond.Broadcast()
if err != nil {
sr.mr.err = err
return 0, err
}
nn := copy(p, sr.mr.buffer)
sr.offset += nn
return nn, nil
}
func (sr *syncReader) Close() error {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
if sr.closed {
return nil
}
sr.closed = true
sr.mr.cond.Broadcast()
return nil
}
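SyncMultiReader exists so a single non-seekable stream (typically stdin) can feed several build nodes at once: every ReadCloser from NewReadCloser observes the same byte sequence, and the shared buffer only advances once all open readers have caught up. A minimal usage sketch, assuming it sits in the same package as the listing above:

	// imports assumed: fmt, io, strings, sync
	func ExampleSyncMultiReader() {
		mr := NewSyncMultiReader(strings.NewReader("hello from stdin"))

		var wg sync.WaitGroup
		for i := 0; i < 2; i++ {
			r := mr.NewReadCloser()
			wg.Add(1)
			go func(id int, r io.ReadCloser) {
				defer wg.Done()
				defer r.Close()
				dt, _ := io.ReadAll(r) // both readers see identical bytes
				fmt.Println(id, string(dt))
			}(i, r)
		}
		wg.Wait()
	}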


@@ -1,76 +0,0 @@
package build
import (
"bytes"
"crypto/rand"
"io"
mathrand "math/rand"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func generateRandomData(size int) []byte {
data := make([]byte, size)
rand.Read(data)
return data
}
func TestSyncMultiReaderParallel(t *testing.T) {
data := generateRandomData(1024 * 1024)
source := bytes.NewReader(data)
mr := NewSyncMultiReader(source)
var wg sync.WaitGroup
numReaders := 10
bufferSize := 4096 * 4
readers := make([]io.ReadCloser, numReaders)
for i := range numReaders {
readers[i] = mr.NewReadCloser()
}
for i := range numReaders {
wg.Add(1)
go func(readerId int) {
defer wg.Done()
reader := readers[readerId]
defer reader.Close()
totalRead := 0
buf := make([]byte, bufferSize)
for totalRead < len(data) {
// Simulate random read sizes
readSize := mathrand.Intn(bufferSize) //nolint:gosec
n, err := reader.Read(buf[:readSize])
if n > 0 {
assert.Equal(t, data[totalRead:totalRead+n], buf[:n], "Reader %d mismatch", readerId)
totalRead += n
}
if err == io.EOF {
assert.Equal(t, len(data), totalRead, "Reader %d EOF mismatch", readerId)
return
}
assert.NoError(t, err, "Reader %d error", readerId)
if mathrand.Intn(1000) == 0 { //nolint:gosec
t.Logf("Reader %d closing", readerId)
// Simulate random close
return
}
// Simulate random timing between reads
time.Sleep(time.Millisecond * time.Duration(mathrand.Intn(5))) //nolint:gosec
}
assert.Equal(t, len(data), totalRead, "Reader %d total read mismatch", readerId)
}(i)
}
wg.Wait()
}


@@ -82,7 +82,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
 	var respHandle *ResultHandle

 	go func() {
-		defer func() { cancel(errors.WithStack(context.Canceled)) }() // ensure no dangling processes
+		defer cancel(context.Canceled) // ensure no dangling processes

 		var res *gateway.Result
 		var err error
@@ -181,7 +181,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
 			case <-respHandle.done:
 			case <-ctx.Done():
 			}
-			return nil, context.Cause(ctx)
+			return nil, ctx.Err()
 		}, nil)
 		if respHandle != nil {
 			return
@@ -292,17 +292,17 @@ func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
 	return err
 }

-func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
+func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
 	if r.res != nil && r.solveErr == nil {
 		logrus.Debugf("creating container from successful build")
-		ccfg, err := containerConfigFromResult(r.res, cfg)
+		ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
 		if err != nil {
 			return containerCfg, err
 		}
 		containerCfg = *ccfg
 	} else {
 		logrus.Debugf("creating container from failed build %+v", cfg)
-		ccfg, err := containerConfigFromError(r.solveErr, cfg)
+		ccfg, err := containerConfigFromError(r.solveErr, *cfg)
 		if err != nil {
 			return containerCfg, errors.Wrapf(err, "no result nor error is available")
 		}
@@ -315,19 +315,19 @@ func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin i
 	processCfg := newStartRequest(stdin, stdout, stderr)
 	if r.res != nil && r.solveErr == nil {
 		logrus.Debugf("creating container from successful build")
-		if err := populateProcessConfigFromResult(&processCfg, r.res, cfg); err != nil {
+		if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
 			return processCfg, err
 		}
 	} else {
 		logrus.Debugf("creating container from failed build %+v", cfg)
-		if err := populateProcessConfigFromError(&processCfg, r.solveErr, cfg); err != nil {
+		if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
 			return processCfg, err
 		}
 	}
 	return processCfg, nil
 }

-func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
 	if cfg.Initial {
 		return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
 	}
@@ -352,7 +352,7 @@ func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeCon
 	}, nil
 }

-func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *controllerapi.InvokeConfig) error {
+func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
 	imgData := res.Metadata[exptypes.ExporterImageConfigKey]
 	var img *specs.Image
 	if len(imgData) > 0 {
@@ -403,7 +403,7 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
 	return nil
 }

-func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
 	exec, err := execOpFromError(solveErr)
 	if err != nil {
 		return nil, err
@@ -431,7 +431,7 @@ func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.I
 	}, nil
 }

-func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) error {
+func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
 	exec, err := execOpFromError(solveErr)
 	if err != nil {
 		return err


@@ -7,15 +7,12 @@ import (
 	"github.com/docker/buildx/driver"
 	"github.com/docker/buildx/util/progress"
-	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
 	gwclient "github.com/moby/buildkit/frontend/gateway/client"
 	"github.com/pkg/errors"
 )

-const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB
-
 func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
 	c, err := driver.Boot(ctx, ctx, d, pw)
 	if err != nil {
@@ -46,8 +43,8 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
 			if err != nil {
 				return nil, err
 			}
-			if stat.Size > maxDockerfileSize {
-				return nil, errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", url, units.HumanSize(maxDockerfileSize))
+			if stat.Size() > 512*1024 {
+				return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
 			}
 			dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
@@ -66,6 +63,7 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
 		out = dir
 		return nil, nil
 	}, ch)
+
 	if err != nil {
 		return "", err
 	}


@@ -5,15 +5,13 @@ import (
 	"bytes"
 	"context"
 	"net"
-	"os"
-	"strconv"
 	"strings"

 	"github.com/docker/buildx/driver"
 	"github.com/docker/cli/opts"
+	"github.com/docker/docker/builder/remotecontext/urlutil"
 	"github.com/moby/buildkit/util/gitutil"
 	"github.com/pkg/errors"
-	"github.com/sirupsen/logrus"
 )

 const (
@@ -25,15 +23,8 @@ const (
 	mobyHostGatewayName = "host-gateway"
 )

-// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
-// has a http:// or https:// scheme. No validation is performed to verify if the
-// URL is well-formed.
-func isHTTPURL(str string) bool {
-	return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
-}
-
 func IsRemoteURL(c string) bool {
-	if isHTTPURL(c) {
+	if urlutil.IsURL(c) {
 		return true
 	}
 	if _, err := gitutil.ParseGitRef(c); err == nil {
@@ -110,21 +101,3 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
 	}
 	return strings.Join(ulimits, ","), nil
 }
-
-func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
-	return errors.Errorf(`%s is not supported for the %s driver.
-Switch to a different driver, or turn on the containerd image store, and try again.
-Learn more at %s`, f, d.Factory().Name(), docs)
-}
-
-func noDefaultLoad() bool {
-	v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
-	if !ok {
-		return false
-	}
-	b, err := strconv.ParseBool(v)
-	if err != nil {
-		logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
-	}
-	return b
-}


@@ -138,7 +138,7 @@ func TestToBuildkitExtraHosts(t *testing.T) {
 			actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
 			if tc.expectedErr == "" {
 				require.Equal(t, tc.expectedOut, actualOut)
-				require.NoError(t, actualErr)
+				require.Nil(t, actualErr)
 			} else {
 				require.Zero(t, actualOut)
 				require.Error(t, actualErr, tc.expectedErr)


@@ -2,10 +2,10 @@ package builder

 import (
 	"context"
+	"encoding/csv"
 	"encoding/json"
 	"net/url"
 	"os"
-	"slices"
 	"sort"
 	"strings"
 	"sync"
@@ -27,7 +27,6 @@ import (
 	"github.com/moby/buildkit/util/progress/progressui"
 	"github.com/pkg/errors"
 	"github.com/spf13/pflag"
-	"github.com/tonistiigi/go-csvvalue"
 	"golang.org/x/sync/errgroup"
 )
@@ -200,7 +199,7 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
 		err = err1
 	}

-	if err == nil && len(errCh) > 0 {
+	if err == nil && len(errCh) == len(toBoot) {
 		return false, <-errCh
 	}
 	return true, err
@@ -289,15 +288,7 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
 		return nil, err
 	}

-	contexts, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return nil, err
-	}
-	sort.Slice(contexts, func(i, j int) bool {
-		return contexts[i].Name < contexts[j].Name
-	})
-
-	builders := make([]*Builder, len(storeng), len(storeng)+len(contexts))
+	builders := make([]*Builder, len(storeng))
 	seen := make(map[string]struct{})
 	for i, ng := range storeng {
 		b, err := New(dockerCli,
@@ -312,6 +303,14 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
 		seen[b.NodeGroup.Name] = struct{}{}
 	}

+	contexts, err := dockerCli.ContextStore().List()
+	if err != nil {
+		return nil, err
+	}
+	sort.Slice(contexts, func(i, j int) bool {
+		return contexts[i].Name < contexts[j].Name
+	})
+
 	for _, c := range contexts {
 		// if a context has the same name as an instance from the store, do not
 		// add it to the builders list. An instance from the store takes
@@ -436,16 +435,7 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
 		return nil, err
 	}

-	buildkitdConfigFile := opts.BuildkitdConfigFile
-	if buildkitdConfigFile == "" {
-		// if buildkit daemon config is not provided, check if the default one
-		// is available and use it
-		if f, ok := confutil.NewConfig(dockerCli).BuildKitConfigFile(); ok {
-			buildkitdConfigFile = f
-		}
-	}
-
-	buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts, buildkitdConfigFile)
+	buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
 	if err != nil {
 		return nil, err
 	}
@@ -506,6 +496,15 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
 		setEp = false
 	}

+	buildkitdConfigFile := opts.BuildkitdConfigFile
+	if buildkitdConfigFile == "" {
+		// if buildkit daemon config is not provided, check if the default one
+		// is available and use it
+		if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
+			buildkitdConfigFile = f
+		}
+	}
+
 	if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
 		return nil, err
 	}
@@ -523,9 +522,8 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
 		return nil, err
 	}

-	cancelCtx, cancel := context.WithCancelCause(ctx)
-	timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
-	defer func() { cancel(errors.WithStack(context.Canceled)) }()
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	defer cancel()

 	nodes, err := b.LoadNodes(timeoutCtx, WithData())
 	if err != nil {
@@ -586,7 +584,7 @@ func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Leav
 		return err
 	}

-	ls, err := localstate.New(confutil.NewConfig(dockerCli))
+	ls, err := localstate.New(confutil.ConfigDir(dockerCli))
 	if err != nil {
 		return err
 	}
@@ -603,7 +601,8 @@ func csvToMap(in []string) (map[string]string, error) {
 	}
 	m := make(map[string]string, len(in))
 	for _, s := range in {
-		fields, err := csvvalue.Fields(s, nil)
+		csvReader := csv.NewReader(strings.NewReader(s))
+		fields, err := csvReader.Read()
 		if err != nil {
 			return nil, err
 		}
@@ -643,7 +642,7 @@ func validateBuildkitEndpoint(ep string) (string, error) {
 }

 // parseBuildkitdFlags parses buildkit flags
-func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string, buildkitdConfigFile string) (res []string, err error) {
+func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
 	if inp != "" {
 		res, err = shlex.Split(inp)
 		if err != nil {
@@ -657,26 +656,18 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
 		flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
 		_ = flags.Parse(res)

-		hasNetworkHostEntitlement := slices.Contains(allowInsecureEntitlements, "network.host")
-
-		var hasNetworkHostEntitlementInConf bool
-		if buildkitdConfigFile != "" {
-			btoml, err := confutil.LoadConfigTree(buildkitdConfigFile)
-			if err != nil {
-				return nil, err
-			} else if btoml != nil {
-				if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
-					if slices.Contains(ies.([]string), "network.host") {
-						hasNetworkHostEntitlementInConf = true
-					}
-				}
+		var hasNetworkHostEntitlement bool
+		for _, e := range allowInsecureEntitlements {
+			if e == "network.host" {
+				hasNetworkHostEntitlement = true
+				break
 			}
 		}

 		if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
 			// always set network.host entitlement if user has set network=host
 			res = append(res, "--allow-insecure-entitlement=network.host")
-		} else if len(allowInsecureEntitlements) == 0 && !hasNetworkHostEntitlementInConf && (driver == "kubernetes" || driver == "docker-container") {
+		} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
 			// set network.host entitlement if user does not provide any as
 			// network is isolated for container drivers.
 			res = append(res, "--allow-insecure-entitlement=network.host")
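For reference, the csvToMap helper touched above is what turns driver-opt strings into a key/value map. A small sketch of the backported encoding/csv behavior — the input strings are illustrative, chosen so the expected values match TestCsvToMap in the next file, and strings.Cut stands in for the real helper's splitting:

	package main

	import (
		"encoding/csv"
		"fmt"
		"strings"
	)

	func main() {
		in := []string{`"tolerations=key=foo,value=bar;key=foo2,value=bar2",replicas=1`, "namespace=default"}
		m := make(map[string]string, len(in))
		for _, s := range in {
			fields, err := csv.NewReader(strings.NewReader(s)).Read()
			if err != nil {
				panic(err)
			}
			for _, v := range fields {
				if k, val, ok := strings.Cut(v, "="); ok { // split on the first "=" only
					m[k] = val
				}
			}
		}
		fmt.Println(m) // map[namespace:default replicas:1 tolerations:key=foo,value=bar;key=foo2,value=bar2]
	}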


@@ -1,8 +1,6 @@
 package builder

 import (
-	"os"
-	"path"
 	"testing"

 	"github.com/stretchr/testify/assert"
@@ -19,55 +17,29 @@ func TestCsvToMap(t *testing.T) {
 	require.NoError(t, err)

 	require.Contains(t, r, "tolerations")
-	require.Equal(t, "key=foo,value=bar;key=foo2,value=bar2", r["tolerations"])
+	require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")

 	require.Contains(t, r, "replicas")
-	require.Equal(t, "1", r["replicas"])
+	require.Equal(t, r["replicas"], "1")

 	require.Contains(t, r, "namespace")
-	require.Equal(t, "default", r["namespace"])
+	require.Equal(t, r["namespace"], "default")
 }

 func TestParseBuildkitdFlags(t *testing.T) {
-	dirConf := t.TempDir()
-
-	buildkitdConfPath := path.Join(dirConf, "buildkitd-conf.toml")
-	require.NoError(t, os.WriteFile(buildkitdConfPath, []byte(`
-# debug enables additional debug logging
-debug = true
-# insecure-entitlements allows insecure entitlements, disabled by default.
-insecure-entitlements = [ "network.host", "security.insecure" ]
-[log]
-  # log formatter: json or text
-  format = "text"
-`), 0644))
-
-	buildkitdConfBrokenPath := path.Join(dirConf, "buildkitd-conf-broken.toml")
-	require.NoError(t, os.WriteFile(buildkitdConfBrokenPath, []byte(`
-[worker.oci]
-  gc = "maybe"
-`), 0644))
-
-	buildkitdConfUnknownFieldPath := path.Join(dirConf, "buildkitd-unknown-field.toml")
-	require.NoError(t, os.WriteFile(buildkitdConfUnknownFieldPath, []byte(`
-foo = "bar"
-`), 0644))
-
 	testCases := []struct {
-		name                string
-		flags               string
-		driver              string
-		driverOpts          map[string]string
-		buildkitdConfigFile string
-		expected            []string
-		wantErr             bool
+		name       string
+		flags      string
+		driver     string
+		driverOpts map[string]string
+		expected   []string
+		wantErr    bool
 	}{
 		{
 			"docker-container no flags",
 			"",
 			"docker-container",
 			nil,
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 			},
@@ -78,7 +50,6 @@ foo = "bar"
 			"",
 			"kubernetes",
 			nil,
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 			},
@@ -89,7 +60,6 @@ foo = "bar"
 			"",
 			"remote",
 			nil,
-			"",
 			nil,
 			false,
 		},
@@ -98,7 +68,6 @@ foo = "bar"
 			"--allow-insecure-entitlement=security.insecure",
 			"docker-container",
 			nil,
-			"",
 			[]string{
 				"--allow-insecure-entitlement=security.insecure",
 			},
@@ -109,7 +78,6 @@ foo = "bar"
 			"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
 			"docker-container",
 			nil,
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 				"--allow-insecure-entitlement=security.insecure",
@@ -121,7 +89,6 @@ foo = "bar"
 			"",
 			"docker-container",
 			map[string]string{"network": "host"},
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 			},
@@ -132,7 +99,6 @@ foo = "bar"
 			"--allow-insecure-entitlement=network.host",
 			"docker-container",
 			map[string]string{"network": "host"},
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 			},
@@ -143,56 +109,25 @@ foo = "bar"
 			"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
 			"docker-container",
 			map[string]string{"network": "host"},
-			"",
 			[]string{
 				"--allow-insecure-entitlement=network.host",
 				"--allow-insecure-entitlement=security.insecure",
 			},
 			false,
 		},
-		{
-			"docker-container with buildkitd conf setting network.host entitlement",
-			"",
-			"docker-container",
-			nil,
-			buildkitdConfPath,
-			nil,
-			false,
-		},
 		{
 			"error parsing flags",
 			"foo'",
 			"docker-container",
 			nil,
-			"",
 			nil,
 			true,
 		},
-		{
-			"error parsing buildkit config",
-			"",
-			"docker-container",
-			nil,
-			buildkitdConfBrokenPath,
-			nil,
-			true,
-		},
-		{
-			"unknown field in buildkit config",
-			"",
-			"docker-container",
-			nil,
-			buildkitdConfUnknownFieldPath,
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-			},
-			false,
-		},
 	}
 	for _, tt := range testCases {
 		tt := tt
 		t.Run(tt.name, func(t *testing.T) {
-			flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
+			flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
 			if tt.wantErr {
 				require.Error(t, err)
 				return


@@ -6,8 +6,9 @@ import (
 	"sort"
 	"strings"

-	"github.com/containerd/platforms"
+	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/driver"
+	ctxkube "github.com/docker/buildx/driver/kubernetes/context"
 	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/buildx/util/dockerutil"
@@ -17,6 +18,7 @@ import (
 	"github.com/moby/buildkit/util/grpcerrors"
 	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 	"golang.org/x/sync/errgroup"
 	"google.golang.org/grpc/codes"
 )
@@ -32,11 +34,10 @@ type Node struct {
 	Err        error

 	// worker settings
-	IDs        []string
-	Platforms  []ocispecs.Platform
-	GCPolicy   []client.PruneInfo
-	Labels     map[string]string
-	CDIDevices []client.CDIDevice
+	IDs       []string
+	Platforms []ocispecs.Platform
+	GCPolicy  []client.PruneInfo
+	Labels    map[string]string
 }

 // Nodes returns nodes for this builder.
@@ -47,9 +48,8 @@ func (b *Builder) Nodes() []Node {
 type LoadNodesOption func(*loadNodesOptions)

 type loadNodesOptions struct {
-	data      bool
-	dialMeta  map[string][]string
-	clientOpt []client.ClientOpt
+	data     bool
+	dialMeta map[string][]string
 }

 func WithData() LoadNodesOption {
@@ -64,12 +64,6 @@ func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
 	}
 }

-func WithClientOpt(clientOpt ...client.ClientOpt) LoadNodesOption {
-	return func(o *loadNodesOptions) {
-		o.clientOpt = clientOpt
-	}
-}
-
 // LoadNodes loads and returns nodes for this builder.
 // TODO: this should be a method on a Node object and lazy load data for each driver.
 func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
@@ -118,19 +112,37 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 				return nil
 			}

-			d, err := driver.GetDriver(ctx, factory, driver.InitConfig{
-				Name:            driver.BuilderName(n.Name),
-				EndpointAddr:    n.Endpoint,
-				DockerAPI:       dockerapi,
-				ContextStore:    b.opts.dockerCli.ContextStore(),
-				BuildkitdFlags:  n.BuildkitdFlags,
-				Files:           n.Files,
-				DriverOpts:      n.DriverOpts,
-				Auth:            imageopt.Auth,
-				Platforms:       n.Platforms,
-				ContextPathHash: b.opts.contextPathHash,
-				DialMeta:        lno.dialMeta,
-			})
+			contextStore := b.opts.dockerCli.ContextStore()
+
+			var kcc driver.KubeClientConfig
+			kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
+			if err != nil {
+				// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
+				// try again with name="default".
+				// FIXME(@AkihiroSuda): n should retain real context name.
+				kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
+				if err != nil {
+					logrus.Error(err)
+				}
+			}
+
+			tryToUseKubeConfigInCluster := false
+			if kcc == nil {
+				tryToUseKubeConfigInCluster = true
+			} else {
+				if _, err := kcc.ClientConfig(); err != nil {
+					tryToUseKubeConfigInCluster = true
+				}
+			}
+			if tryToUseKubeConfigInCluster {
+				kccInCluster := driver.KubeClientConfigInCluster{}
+				if _, err := kccInCluster.ClientConfig(); err == nil {
+					logrus.Debug("using kube config in cluster")
+					kcc = kccInCluster
+				}
+			}
+
+			d, err := driver.GetDriver(ctx, driver.BuilderName(n.Name), factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
 			if err != nil {
 				node.Err = err
 				return nil
@@ -139,7 +151,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 			node.ImageOpt = imageopt

 			if lno.data {
-				if err := node.loadData(ctx, lno.clientOpt...); err != nil {
+				if err := node.loadData(ctx); err != nil {
 					node.Err = err
 				}
 			}
@@ -169,12 +181,12 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 			// dynamic nodes are used in Kubernetes driver.
 			// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
 			if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
-				for i := range di.DriverInfo.DynamicNodes {
+				for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
 					diClone := di
 					if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
 						diClone.Platforms = pl
 					}
-					nodes = append(nodes, diClone)
+					nodes = append(nodes, di)
 				}
 				dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
 			}
@@ -235,7 +247,7 @@ func (n *Node) MarshalJSON() ([]byte, error) {
 	})
 }

-func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) error {
+func (n *Node) loadData(ctx context.Context) error {
 	if n.Driver == nil {
 		return nil
 	}
@@ -245,7 +257,7 @@ func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) erro
 	}
 	n.DriverInfo = info
 	if n.DriverInfo.Status == driver.Running {
-		driverClient, err := n.Driver.Client(ctx, clientOpt...)
+		driverClient, err := n.Driver.Client(ctx)
 		if err != nil {
 			return err
 		}
@@ -260,7 +272,6 @@ func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) erro
 				n.GCPolicy = w.GCPolicy
 				n.Labels = w.Labels
 			}
-			n.CDIDevices = w.CDIDevices
 		}
 	}
 	sort.Strings(n.IDs)
 	n.Platforms = platformutil.Dedupe(n.Platforms)


@@ -1,75 +0,0 @@
package main
import (
"context"
"os"
"runtime"
"runtime/pprof"
"github.com/moby/buildkit/util/bklog"
"github.com/sirupsen/logrus"
)
func setupDebugProfiles(ctx context.Context) (stop func()) {
var stopFuncs []func()
if fn := setupCPUProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
if fn := setupHeapProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
return func() {
for _, fn := range stopFuncs {
fn()
}
}
}
func setupCPUProfile(ctx context.Context) (stop func()) {
if cpuProfile := os.Getenv("BUILDX_CPU_PROFILE"); cpuProfile != "" {
f, err := os.Create(cpuProfile)
if err != nil {
bklog.G(ctx).Warn("could not create cpu profile", logrus.WithError(err))
return nil
}
if err := pprof.StartCPUProfile(f); err != nil {
bklog.G(ctx).Warn("could not start cpu profile", logrus.WithError(err))
_ = f.Close()
return nil
}
return func() {
pprof.StopCPUProfile()
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for cpu profile", logrus.WithError(err))
}
}
}
return nil
}
func setupHeapProfile(ctx context.Context) (stop func()) {
if heapProfile := os.Getenv("BUILDX_MEM_PROFILE"); heapProfile != "" {
// Memory profile is only created on stop.
return func() {
f, err := os.Create(heapProfile)
if err != nil {
bklog.G(ctx).Warn("could not create memory profile", logrus.WithError(err))
return
}
// get up-to-date statistics
runtime.GC()
if err := pprof.WriteHeapProfile(f); err != nil {
bklog.G(ctx).Warn("could not write memory profile", logrus.WithError(err))
}
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for memory profile", logrus.WithError(err))
}
}
}
return nil
}
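The deleted helpers above are gated entirely on the BUILDX_CPU_PROFILE and BUILDX_MEM_PROFILE environment variables; nothing is written unless one of them is set. A hypothetical sketch of how a main package that keeps these helpers would wire them in, with the resulting files readable by go tool pprof:

	// Hypothetical wiring; assumes the profile helpers above and the
	// "context" import are in scope.
	func main() {
		stopProfiles := setupDebugProfiles(context.Background())
		defer stopProfiles() // stops the CPU profile and writes the heap profile

		// ... run the CLI here ...
	}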


@@ -1,13 +1,10 @@
 package main

 import (
-	"context"
 	"fmt"
 	"os"
-	"path/filepath"

 	"github.com/docker/buildx/commands"
-	controllererrors "github.com/docker/buildx/controller/errdefs"
 	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
@@ -18,8 +15,9 @@ import (
 	cliflags "github.com/docker/cli/cli/flags"
 	"github.com/moby/buildkit/solver/errdefs"
 	"github.com/moby/buildkit/util/stack"
-	"github.com/pkg/errors"
-	"go.opentelemetry.io/otel"

+	//nolint:staticcheck // vendored dependencies may still use this
+	"github.com/containerd/containerd/pkg/seed"
 	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
@@ -27,12 +25,12 @@ import (
 	_ "github.com/docker/buildx/driver/docker-container"
 	_ "github.com/docker/buildx/driver/kubernetes"
 	_ "github.com/docker/buildx/driver/remote"
-
-	// Use custom grpc codec to utilize vtprotobuf
-	_ "github.com/moby/buildkit/util/grpcutil/encoding/proto"
 )

 func init() {
+	//nolint:staticcheck
+	seed.WithTimeAndRand()
+
 	stack.SetVersionInfo(version.Version, version.Revision)
 }
@@ -40,28 +38,10 @@ func runStandalone(cmd *command.DockerCli) error {
 	if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
 		return err
 	}
-	defer flushMetrics(cmd)
-
-	executable := os.Args[0]
-	rootCmd := commands.NewRootCmd(filepath.Base(executable), false, cmd)
+	rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
 	return rootCmd.Execute()
 }

-// flushMetrics will manually flush metrics from the configured
-// meter provider. This is needed when running in standalone mode
-// because the meter provider is initialized by the cli library,
-// but the mechanism for forcing it to report is not presently
-// exposed and not invoked when run in standalone mode.
-// There are plans to fix that in the next release, but this is
-// needed temporarily until the API for this is more thorough.
-func flushMetrics(cmd *command.DockerCli) {
-	if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
-		if err := mp.ForceFlush(context.Background()); err != nil {
-			otel.Handle(err)
-		}
-	}
-}
-
 func runPlugin(cmd *command.DockerCli) error {
 	rootCmd := commands.NewRootCmd("buildx", true, cmd)
 	return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
@@ -71,16 +51,6 @@ func runPlugin(cmd *command.DockerCli) error {
 	})
 }

-func run(cmd *command.DockerCli) error {
-	stopProfiles := setupDebugProfiles(context.TODO())
-	defer stopProfiles()
-
-	if plugin.RunningStandalone() {
-		return runStandalone(cmd)
-	}
-	return runPlugin(cmd)
-}
-
 func main() {
 	cmd, err := command.NewDockerCli()
 	if err != nil {
@@ -88,11 +58,15 @@ func main() {
 		os.Exit(1)
 	}

-	if err = run(cmd); err == nil {
+	if plugin.RunningStandalone() {
+		err = runStandalone(cmd)
+	} else {
+		err = runPlugin(cmd)
+	}
+	if err == nil {
 		return
 	}

+	// Check the error from the run function above.
 	if sterr, ok := err.(cli.StatusError); ok {
 		if sterr.Status != "" {
 			fmt.Fprintln(cmd.Err(), sterr.Status)
@@ -113,15 +87,8 @@ func main() {
 	} else {
 		fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
 	}
-
-	var ebr *desktop.ErrorWithBuildRef
-	if errors.As(err, &ebr) {
+	if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
 		ebr.Print(cmd.Err())
-	} else {
-		var be *controllererrors.BuildError
-		if errors.As(err, &be) {
-			be.PrintBuildDetails(cmd.Err())
-		}
 	}

 	os.Exit(1)


@@ -4,6 +4,7 @@ import (
 	"github.com/moby/buildkit/util/tracing/detect"
 	"go.opentelemetry.io/otel"

+	_ "github.com/moby/buildkit/util/tracing/detect/delegated"
 	_ "github.com/moby/buildkit/util/tracing/env"
 )


@@ -1,4 +1 @@
 comment: false
-ignore:
-  - "**/*.pb.go"


@@ -1,35 +1,24 @@
package commands package commands
import ( import (
"bytes"
"cmp"
"context" "context"
"crypto/sha256"
"encoding/hex"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"os" "os"
"slices"
"sort"
"strings" "strings"
"sync"
"text/tabwriter"
"github.com/containerd/console" "github.com/containerd/console"
"github.com/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/docker/buildx/bake" "github.com/docker/buildx/bake"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build" "github.com/docker/buildx/build"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/localstate" "github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags" "github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion" "github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil" "github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop" "github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil" "github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing" "github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command" "github.com/docker/cli/cli/command"
@@ -37,40 +26,23 @@ import (
"github.com/moby/buildkit/util/progress/progressui" "github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/tonistiigi/go-csvvalue"
"go.opentelemetry.io/otel/attribute"
) )
type bakeOptions struct { type bakeOptions struct {
files []string files []string
overrides []string overrides []string
printOnly bool
sbom string sbom string
provenance string provenance string
allow []string
builder string builder string
metadataFile string metadataFile string
exportPush bool exportPush bool
exportLoad bool exportLoad bool
callFunc string
print bool
list string
// TODO: remove deprecated flags
listTargets bool
listVars bool
} }
 func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
-	mp := dockerCli.MeterProvider()
-
-	ctx, end, err := tracing.TraceCurrentCommand(ctx, append([]string{"bake"}, targets...),
-		attribute.String("builder", in.builder),
-		attribute.StringSlice("targets", targets),
-		attribute.StringSlice("files", in.files),
-	)
+	ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
 	if err != nil {
 		return err
 	}
@@ -78,14 +50,24 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		end(err)
 	}()
 
-	url, cmdContext, targets := bakeArgs(targets)
-	if len(targets) == 0 {
-		targets = []string{"default"}
+	var url string
+	cmdContext := "cwd://"
+	if len(targets) > 0 {
+		if build.IsRemoteURL(targets[0]) {
+			url = targets[0]
+			targets = targets[1:]
+			if len(targets) > 0 {
+				if build.IsRemoteURL(targets[0]) {
+					cmdContext = targets[0]
+					targets = targets[1:]
+				}
+			}
+		}
 	}
-
-	callFunc, err := buildflags.ParseCallFunc(in.callFunc)
-	if err != nil {
-		return err
+	if len(targets) == 0 {
+		targets = []string{"default"}
 	}
 
 	overrides := in.overrides
@@ -93,10 +75,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		overrides = append(overrides, "*.push=true")
 	}
 	if in.exportLoad {
-		overrides = append(overrides, "*.load=true")
-	}
-	if callFunc != nil {
-		overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
+		overrides = append(overrides, "*.output=type=docker")
 	}
 	if cFlags.noCache != nil {
 		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
@@ -112,31 +91,14 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 	}
 	contextPathHash, _ := os.Getwd()
 
-	ent, err := bake.ParseEntitlements(in.allow)
-	if err != nil {
-		return err
-	}
-	wd, err := os.Getwd()
-	if err != nil {
-		return errors.Wrapf(err, "failed to get current working directory")
-	}
-	// filesystem access under the current working directory is allowed by default
-	ent.FSRead = append(ent.FSRead, wd)
-	ent.FSWrite = append(ent.FSWrite, wd)
-
-	ctx2, cancel := context.WithCancelCause(context.TODO())
-	defer cancel(errors.WithStack(context.Canceled))
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
 
 	var nodes []builder.Node
 	var progressConsoleDesc, progressTextDesc string
 
-	if in.print && in.list != "" {
-		return errors.New("--print and --list are mutually exclusive")
-	}
-
 	// instance only needed for reading remote bake files or building
-	var driverType string
-	if url != "" || !(in.print || in.list != "") {
+	if url != "" || !in.printOnly {
 		b, err := builder.New(dockerCli,
 			builder.WithName(in.builder),
 			builder.WithContextPathHash(contextPathHash),
@@ -153,33 +115,32 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		}
 		progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
 		progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
-		driverType = b.Driver
 	}
 
 	var term bool
 	if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
 		term = true
 	}
 
-	attributes := bakeMetricAttributes(dockerCli, driverType, url, cmdContext, targets, &in)
-
 	progressMode := progressui.DisplayMode(cFlags.progress)
-	var printer *progress.Printer
-
-	makePrinter := func() error {
-		var err error
-		printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
-			progress.WithDesc(progressTextDesc, progressConsoleDesc),
-			progress.WithMetrics(mp, attributes),
-			progress.WithOnClose(func() {
-				printWarnings(os.Stderr, printer.Warnings(), progressMode)
-			}),
-		)
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
+		progress.WithDesc(progressTextDesc, progressConsoleDesc),
+	)
+	if err != nil {
 		return err
 	}
 
-	if err := makePrinter(); err != nil {
-		return err
-	}
+	defer func() {
+		if printer != nil {
+			err1 := printer.Wait()
+			if err == nil {
+				err = err1
+			}
+			if err == nil && progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
+				desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
+			}
+		}
+	}()
 
 	files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
 	if err != nil {
@@ -190,34 +151,12 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		return errors.New("couldn't find a bake definition")
 	}
 
-	defaults := map[string]string{
+	tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
 		// don't forget to update documentation if you add a new
 		// built-in variable: docs/bake-reference.md#built-in-variables
 		"BAKE_CMD_CONTEXT":    cmdContext,
-		"BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
-	}
-
-	if in.list != "" {
-		cfg, pm, err := bake.ParseFiles(files, defaults)
-		if err != nil {
-			return err
-		}
-		if err = printer.Wait(); err != nil {
-			return err
-		}
-		list, err := parseList(in.list)
-		if err != nil {
-			return err
-		}
-		switch list.Type {
-		case "targets":
-			return printTargetList(dockerCli.Out(), list.Format, cfg)
-		case "variables":
-			return printVars(dockerCli.Out(), list.Format, pm.AllVariables)
-		}
-	}
-
-	tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults, &ent)
+		"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
+	})
 	if err != nil {
 		return err
 	}
@@ -249,186 +188,58 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		Target: tgts,
 	}
 
-	if in.print {
-		if err = printer.Wait(); err != nil {
-			return err
-		}
-		dtdef, err := json.MarshalIndent(def, "", "  ")
+	if in.printOnly {
+		dt, err := json.MarshalIndent(def, "", "  ")
 		if err != nil {
 			return err
 		}
-		_, err = fmt.Fprintln(dockerCli.Out(), string(dtdef))
-		return err
-	}
-
-	for _, opt := range bo {
-		if opt.CallFunc != nil {
-			cf, err := buildflags.ParseCallFunc(opt.CallFunc.Name)
-			if err != nil {
-				return err
-			}
-			opt.CallFunc.Name = cf.Name
+		err = printer.Wait()
+		printer = nil
+		if err != nil {
+			return err
 		}
+		fmt.Fprintln(dockerCli.Out(), string(dt))
+		return nil
 	}
-	exp, err := ent.Validate(bo)
+	// local state group
+	groupRef := identity.NewID()
+	var refs []string
+	for k, b := range bo {
+		b.Ref = identity.NewID()
+		b.GroupRef = groupRef
+		refs = append(refs, b.Ref)
+		bo[k] = b
+	}
+	dt, err := json.Marshal(def)
 	if err != nil {
 		return err
 	}
-	if progressMode != progressui.RawJSONMode {
-		if err := exp.Prompt(ctx, url != "", &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
-			return err
-		}
-	}
-	if printer.IsDone() {
-		// init new printer as old one was stopped to show the prompt
-		if err := makePrinter(); err != nil {
-			return err
-		}
-	}
-
-	if err := saveLocalStateGroup(dockerCli, in, targets, bo); err != nil {
+	if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
+		Definition: dt,
+		Targets:    targets,
+		Inputs:     overrides,
+		Refs:       refs,
+	}); err != nil {
 		return err
 	}
 
-	done := timeBuildCommand(mp, attributes)
-	resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), printer)
-	if err := printer.Wait(); retErr == nil {
-		retErr = err
-	}
-	if retErr != nil {
-		err = wrapBuildError(retErr, true)
-	}
-	done(err)
+	resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
 	if err != nil {
-		return err
+		return wrapBuildError(err, true)
 	}
 
-	if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
-		desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
-	}
-
 	if len(in.metadataFile) > 0 {
-		dt := make(map[string]any)
+		dt := make(map[string]interface{})
 		for t, r := range resp {
 			dt[t] = decodeExporterResponse(r.ExporterResponse)
 		}
-		if callFunc == nil {
-			if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
-				dt["buildx.build.warnings"] = warnings
-			}
-		}
 		if err := writeMetadataFile(in.metadataFile, dt); err != nil {
 			return err
 		}
 	}
 
-	var callFormatJSON bool
+	return err
jsonResults := map[string]map[string]any{}
if callFunc != nil {
callFormatJSON = callFunc.Format == "json"
}
var sep bool
var exitCode int
names := make([]string, 0, len(bo))
for name := range bo {
names = append(names, name)
}
slices.Sort(names)
for _, name := range names {
req := bo[name]
if req.CallFunc == nil {
continue
}
pf := &pb.CallFunc{
Name: req.CallFunc.Name,
Format: req.CallFunc.Format,
IgnoreStatus: req.CallFunc.IgnoreStatus,
}
if callFunc != nil {
pf.Format = callFunc.Format
pf.IgnoreStatus = callFunc.IgnoreStatus
}
var res map[string]string
if sp, ok := resp[name]; ok {
res = sp.ExporterResponse
}
if callFormatJSON {
jsonResults[name] = map[string]any{}
buf := &bytes.Buffer{}
if code, err := printResult(buf, pf, res, name, &req.Inputs); err != nil {
jsonResults[name]["error"] = err.Error()
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
m := map[string]*json.RawMessage{}
if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
for k, v := range m {
jsonResults[name][k] = v
}
} else {
jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
}
} else {
if sep {
fmt.Fprintln(dockerCli.Out())
} else {
sep = true
}
fmt.Fprintf(dockerCli.Out(), "%s\n", name)
if descr := tgts[name].Description; descr != "" {
fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
}
fmt.Fprintln(dockerCli.Out())
if code, err := printResult(dockerCli.Out(), pf, res, name, &req.Inputs); err != nil {
fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
}
}
if callFormatJSON {
out := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]map[string]any `json:"target"`
}{
Group: grps,
Target: map[string]map[string]any{},
}
for name, def := range tgts {
out.Target[name] = map[string]any{
"build": def,
}
if res, ok := jsonResults[name]; ok {
printName := bo[name].CallFunc.Name
if printName == "lint" {
printName = "check"
}
out.Target[name][printName] = res
}
}
dt, err := json.MarshalIndent(out, "", " ")
if err != nil {
return err
}
fmt.Fprintln(dockerCli.Out(), string(dt))
}
if exitCode != 0 {
os.Exit(exitCode)
}
return nil
} }
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command { func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -447,13 +258,6 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			if !cmd.Flags().Lookup("pull").Changed {
 				cFlags.pull = nil
 			}
-			if options.list == "" {
-				if options.listTargets {
-					options.list = "targets"
-				} else if options.listVars {
-					options.list = "variables"
-				}
-			}
 			options.builder = rootOpts.builder
 			options.metadataFile = cFlags.metadataFile
 			// Other common flags (noCache, pull and progress) are processed in runBake function.
@@ -466,79 +270,23 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Build definition file")
 	flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
+	flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
 	flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
 	flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
 	flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
 	flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
-	flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
-	flags.StringArrayVar(&options.allow, "allow", nil, "Allow build to access specified resources")
-
-	flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
-	flags.Lookup("check").NoOptDefVal = "true"
-
-	flags.BoolVar(&options.print, "print", false, "Print the options without building")
-	flags.StringVar(&options.list, "list", "", "List targets or variables")
-
-	// TODO: remove deprecated flags
-	flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
-	flags.MarkHidden("list-targets")
-	flags.MarkDeprecated("list-targets", "list-targets is deprecated, use list=targets instead")
-
-	flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
-	flags.MarkHidden("list-variables")
-	flags.MarkDeprecated("list-variables", "list-variables is deprecated, use list=variables instead")
 
 	commonBuildFlags(&cFlags, flags)
 
 	return cmd
 }
 
-func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options) error {
-	l, err := localstate.New(confutil.NewConfig(dockerCli))
+func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
+	l, err := localstate.New(confutil.ConfigDir(dockerCli))
 	if err != nil {
 		return err
 	}
+	return l.SaveGroup(ref, lsg)
defer l.MigrateIfNeeded()
prm := confutil.MetadataProvenance()
if len(in.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
}
groupRef := identity.NewID()
refs := make([]string, 0, len(bo))
for k, b := range bo {
if b.CallFunc != nil {
continue
}
b.Ref = identity.NewID()
b.GroupRef = groupRef
b.ProvenanceResponseMode = prm
refs = append(refs, b.Ref)
bo[k] = b
}
if len(refs) == 0 {
return nil
}
return l.SaveGroup(groupRef, localstate.StateGroup{
Refs: refs,
Targets: targets,
})
}
// bakeArgs will retrieve the remote url, command context, and targets
// from the command line arguments.
func bakeArgs(args []string) (url, cmdContext string, targets []string) {
cmdContext, targets = "cwd://", args
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
url, targets = targets[0], targets[1:]
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
cmdContext, targets = targets[0], targets[1:]
return url, cmdContext, targets
} }
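bakeArgs peels at most two leading remote URLs off the argument list: the first becomes the source of the bake definition, the second the build context, and whatever remains is target names. A standalone sketch of the same splitting logic, with a simplified stand-in for build.IsRemoteURL:

package main

import (
	"fmt"
	"strings"
)

// isRemoteURL is a simplified stand-in for build.IsRemoteURL.
func isRemoteURL(s string) bool {
	return strings.HasPrefix(s, "https://") || strings.HasPrefix(s, "git@")
}

// splitBakeArgs mirrors the bakeArgs logic above: an optional leading remote
// URL for the bake definition, then an optional remote build context.
func splitBakeArgs(args []string) (url, cmdContext string, targets []string) {
	cmdContext, targets = "cwd://", args
	if len(targets) == 0 || !isRemoteURL(targets[0]) {
		return url, cmdContext, targets
	}
	url, targets = targets[0], targets[1:]
	if len(targets) == 0 || !isRemoteURL(targets[0]) {
		return url, cmdContext, targets
	}
	cmdContext, targets = targets[0], targets[1:]
	return url, cmdContext, targets
}

func main() {
	url, cmdContext, targets := splitBakeArgs([]string{"https://github.com/acme/app.git", "release"})
	fmt.Println(url, cmdContext, targets) // https://github.com/acme/app.git cwd:// [release]
}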
 func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
@@ -583,235 +331,3 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
 	return
 }
type listEntry struct {
Type string
Format string
}
func parseList(input string) (listEntry, error) {
res := listEntry{}
fields, err := csvvalue.Fields(input, nil)
if err != nil {
return res, err
}
if len(fields) == 1 && fields[0] == input && !strings.HasPrefix(input, "type=") {
res.Type = input
}
if res.Type == "" {
for _, field := range fields {
key, value, ok := strings.Cut(field, "=")
if !ok {
return res, errors.Errorf("invalid value %s", field)
}
key = strings.TrimSpace(strings.ToLower(key))
switch key {
case "type":
res.Type = value
case "format":
res.Format = value
default:
return res, errors.Errorf("unexpected key '%s' in '%s'", key, field)
}
}
}
if res.Format == "" {
res.Format = "table"
}
switch res.Type {
case "targets", "variables":
default:
return res, errors.Errorf("invalid list type %q", res.Type)
}
switch res.Format {
case "table", "json":
default:
return res, errors.Errorf("invalid list format %q", res.Format)
}
return res, nil
}
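parseList accepts either a bare type ("targets") or CSV-style key=value pairs ("type=variables,format=json"), defaulting the format to "table". A test-style sketch of the expected behavior, assuming it sits in the same package as parseList:

func ExampleParseList() {
	for _, in := range []string{
		"targets",                  // bare type
		"type=variables",           // key=value form
		"type=targets,format=json", // with explicit format
	} {
		le, err := parseList(in)
		fmt.Println(le.Type, le.Format, err)
	}
	// Output:
	// targets table <nil>
	// variables table <nil>
	// targets json <nil>
}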
func printVars(w io.Writer, format string, vars []*hclparser.Variable) error {
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
return cmp.Compare(a.Name, b.Name)
})
if format == "json" {
enc := json.NewEncoder(w)
enc.SetIndent("", " ")
return enc.Encode(vars)
}
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
if v.Value != nil {
value = *v.Value
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
}
return nil
}
func printTargetList(w io.Writer, format string, cfg *bake.Config) error {
type targetOrGroup struct {
name string
target *bake.Target
group *bake.Group
}
list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
for _, tgt := range cfg.Targets {
list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
}
for _, grp := range cfg.Groups {
list = append(list, targetOrGroup{name: grp.Name, group: grp})
}
slices.SortFunc(list, func(a, b targetOrGroup) int {
return cmp.Compare(a.name, b.name)
})
var tw *tabwriter.Writer
if format == "table" {
tw = tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("TARGET\tDESCRIPTION\n"))
}
type targetList struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Group bool `json:"group,omitempty"`
}
var targetsList []targetList
for _, tgt := range list {
if strings.HasPrefix(tgt.name, "_") {
// convention for a private target
continue
}
var descr string
if tgt.target != nil {
descr = tgt.target.Description
targetsList = append(targetsList, targetList{Name: tgt.name, Description: descr})
} else if tgt.group != nil {
descr = tgt.group.Description
if len(tgt.group.Targets) > 0 {
slices.Sort(tgt.group.Targets)
names := strings.Join(tgt.group.Targets, ", ")
if descr != "" {
descr += " (" + names + ")"
} else {
descr = names
}
}
targetsList = append(targetsList, targetList{Name: tgt.name, Description: descr, Group: true})
}
if format == "table" {
fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
}
}
if format == "json" {
enc := json.NewEncoder(w)
enc.SetIndent("", " ")
return enc.Encode(targetsList)
}
return nil
}
func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext string, targets []string, options *bakeOptions) attribute.Set {
return attribute.NewSet(
commandNameAttribute.String("bake"),
attribute.Stringer(string(commandOptionsHash), &bakeOptionsHash{
bakeOptions: options,
cfg: confutil.NewConfig(dockerCli),
url: url,
cmdContext: cmdContext,
targets: targets,
}),
driverNameAttribute.String(options.builder),
driverTypeAttribute.String(driverType),
)
}
type bakeOptionsHash struct {
*bakeOptions
cfg *confutil.Config
url string
cmdContext string
targets []string
result string
resultOnce sync.Once
}
func (o *bakeOptionsHash) String() string {
o.resultOnce.Do(func() {
url := o.url
cmdContext := o.cmdContext
if cmdContext == "cwd://" {
// Resolve the directory if the cmdContext is the current working directory.
cmdContext = osutil.GetWd()
}
// Sort the inputs for files and targets since the ordering
// doesn't matter, but avoid modifying the original slice.
files := immutableSort(o.files)
targets := immutableSort(o.targets)
joinedFiles := strings.Join(files, ",")
joinedTargets := strings.Join(targets, ",")
salt := o.cfg.TryNodeIdentifier()
h := sha256.New()
for _, s := range []string{url, cmdContext, joinedFiles, joinedTargets, salt} {
_, _ = io.WriteString(h, s)
h.Write([]byte{0})
}
o.result = hex.EncodeToString(h.Sum(nil))
})
return o.result
}
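The hash above writes a NUL byte after every field, which keeps distinct field lists from colliding ("ab"+"c" vs. "a"+"bc"), and mixes in a per-node salt so the reported hash is not comparable across machines. A standalone sketch of the same telemetry-safe hashing (the field values below are made up):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

// hashFields hashes an ordered list of strings with a NUL separator after
// each field, so ("ab","c") and ("a","bc") produce different digests.
func hashFields(salt string, fields ...string) string {
	h := sha256.New()
	for _, s := range append(fields, salt) {
		_, _ = io.WriteString(h, s)
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(hashFields("node-salt", "https://example.com/repo.git", "cwd://", "bake.hcl", "default"))
}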
// immutableSort will sort the entries in s without modifying the original slice.
func immutableSort(s []string) []string {
if !sort.StringsAreSorted(s) {
cpy := make([]string, len(s))
copy(cpy, s)
sort.Strings(cpy)
return cpy
}
return s
}
type syncWriter struct {
w io.Writer
once sync.Once
wait func() error
}
func (w *syncWriter) Write(p []byte) (n int, err error) {
w.once.Do(func() {
if w.wait != nil {
err = w.wait()
}
})
if err != nil {
return 0, err
}
return w.w.Write(p)
}
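syncWriter defers its wait callback until the first write, so pending progress output is flushed exactly once, and only if the prompt actually prints something. A small usage sketch, assuming it sits next to the type above (the wait function here simulates printer.Wait):

func exampleSyncWriter() {
	w := &syncWriter{
		w: os.Stderr,
		wait: func() error {
			// In bake this is printer.Wait(): flush progress output once
			// before the prompt text appears.
			fmt.Fprintln(os.Stderr, "(progress flushed)")
			return nil
		},
	}
	fmt.Fprintln(w, "Do you want to grant the requested privileges? [y/N]")
	fmt.Fprintln(w, "a second write does not trigger wait again")
}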

View File

@@ -5,13 +5,14 @@ import (
 	"context"
 	"crypto/sha256"
 	"encoding/base64"
+	"encoding/csv"
 	"encoding/hex"
 	"encoding/json"
 	"fmt"
 	"io"
+	"log"
 	"os"
 	"path/filepath"
-	"slices"
 	"strconv"
 	"strings"
 	"sync"
@@ -38,19 +39,18 @@ import (
 	"github.com/docker/buildx/util/osutil"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/buildx/util/tracing"
-	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	dockeropts "github.com/docker/cli/opts"
 	"github.com/docker/docker/api/types/versions"
-	"github.com/docker/docker/pkg/atomicwriter"
+	"github.com/docker/docker/pkg/ioutils"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/exporter/containerimage/exptypes"
 	"github.com/moby/buildkit/frontend/subrequests"
-	"github.com/moby/buildkit/frontend/subrequests/lint"
 	"github.com/moby/buildkit/frontend/subrequests/outline"
 	"github.com/moby/buildkit/frontend/subrequests/targets"
 	"github.com/moby/buildkit/solver/errdefs"
-	solverpb "github.com/moby/buildkit/solver/pb"
 	"github.com/moby/buildkit/util/grpcerrors"
 	"github.com/moby/buildkit/util/progress/progressui"
 	"github.com/morikuni/aec"
@@ -58,11 +58,9 @@ import (
 	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
 	"github.com/spf13/pflag"
-	"github.com/tonistiigi/go-csvvalue"
 	"go.opentelemetry.io/otel/attribute"
 	"go.opentelemetry.io/otel/metric"
 	"google.golang.org/grpc/codes"
-	"google.golang.org/protobuf/proto"
 )
 type buildOptions struct {
@@ -82,7 +80,7 @@ type buildOptions struct {
 	noCacheFilter []string
 	outputs       []string
 	platforms     []string
-	callFunc      string
+	printFunc     string
 	secrets       []string
 	shmSize       dockeropts.MemBytes
 	ssh           []string
@@ -157,7 +155,7 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
 		return nil, err
 	}
 
-	inAttests := slices.Clone(o.attests)
+	inAttests := append([]string{}, o.attests...)
 	if o.provenance != "" {
 		inAttests = append(inAttests, buildflags.CanonicalizeAttest("provenance", o.provenance))
 	}
@@ -184,17 +182,14 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
 		}
 	}
 
-	cacheFrom, err := buildflags.ParseCacheEntry(o.cacheFrom)
+	opts.CacheFrom, err = buildflags.ParseCacheEntry(o.cacheFrom)
 	if err != nil {
 		return nil, err
 	}
-	opts.CacheFrom = cacheFrom.ToPB()
 
-	cacheTo, err := buildflags.ParseCacheEntry(o.cacheTo)
+	opts.CacheTo, err = buildflags.ParseCacheEntry(o.cacheTo)
 	if err != nil {
 		return nil, err
 	}
-	opts.CacheTo = cacheTo.ToPB()
 
 	opts.Secrets, err = buildflags.ParseSecretSpecs(o.secrets)
 	if err != nil {
@@ -205,17 +200,11 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
 		return nil, err
 	}
 
-	opts.CallFunc, err = buildflags.ParseCallFunc(o.callFunc)
+	opts.PrintFunc, err = buildflags.ParsePrintFunc(o.printFunc)
 	if err != nil {
 		return nil, err
 	}
 
-	prm := confutil.MetadataProvenance()
-	if opts.CallFunc != nil || len(o.metadataFile) == 0 {
-		prm = confutil.MetadataProvenanceModeDisabled
-	}
-	opts.ProvenanceResponseMode = string(prm)
-
 	return &opts, nil
 }
@@ -230,22 +219,15 @@ func (o *buildOptions) toDisplayMode() (progressui.DisplayMode, error) {
 	return progress, nil
 }
 
-const (
-	commandNameAttribute = attribute.Key("command.name")
-	commandOptionsHash   = attribute.Key("command.options.hash")
-	driverNameAttribute  = attribute.Key("driver.name")
-	driverTypeAttribute  = attribute.Key("driver.type")
-)
-
-func buildMetricAttributes(dockerCli command.Cli, driverType string, options *buildOptions) attribute.Set {
+func buildMetricAttributes(dockerCli command.Cli, b *builder.Builder, options *buildOptions) attribute.Set {
 	return attribute.NewSet(
-		commandNameAttribute.String("build"),
-		attribute.Stringer(string(commandOptionsHash), &buildOptionsHash{
+		attribute.String("command.name", "build"),
+		attribute.Stringer("command.options.hash", &buildOptionsHash{
 			buildOptions: options,
-			cfg:          confutil.NewConfig(dockerCli),
+			configDir:    confutil.ConfigDir(dockerCli),
 		}),
-		driverNameAttribute.String(options.builder),
-		driverTypeAttribute.String(driverType),
+		attribute.String("driver.name", options.builder),
+		attribute.String("driver.type", b.Driver),
 	)
 }
@@ -254,7 +236,7 @@ func buildMetricAttributes(dockerCli command.Cli, driverType string, options *bu
 // the fmt.Stringer interface.
 type buildOptionsHash struct {
 	*buildOptions
-	cfg        *confutil.Config
+	configDir  string
 	result     string
 	resultOnce sync.Once
 }
@@ -271,7 +253,7 @@ func (o *buildOptionsHash) String() string {
 	if contextPath != "-" && osutil.IsLocalDir(contextPath) {
 		contextPath = osutil.ToAbs(contextPath)
 	}
-	salt := o.cfg.TryNodeIdentifier()
+	salt := confutil.TryNodeIdentifier(o.configDir)
 
 	h := sha256.New()
 	for _, s := range []string{target, contextPath, dockerfile, salt} {
@@ -284,13 +266,13 @@
 	}
 func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions) (err error) {
-	mp := dockerCli.MeterProvider()
+	mp, err := metricutil.NewMeterProvider(ctx, dockerCli)
+	if err != nil {
+		return err
+	}
+	defer mp.Report(context.Background())
 
-	ctx, end, err := tracing.TraceCurrentCommand(ctx, []string{"build", options.contextPath},
-		attribute.String("builder", options.builder),
-		attribute.String("context", options.contextPath),
-		attribute.String("dockerfile", options.dockerfileName),
-	)
+	ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
 	if err != nil {
 		return err
 	}
@@ -325,16 +307,15 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
 	if err != nil {
 		return err
 	}
-	driverType := b.Driver
 
 	var term bool
 	if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
 		term = true
 	}
 
-	attributes := buildMetricAttributes(dockerCli, driverType, &options)
+	attributes := buildMetricAttributes(dockerCli, b, &options)
 
-	ctx2, cancel := context.WithCancelCause(context.TODO())
-	defer func() { cancel(errors.WithStack(context.Canceled)) }()
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
 
 	progressMode, err := options.toDisplayMode()
 	if err != nil {
 		return err
@@ -356,12 +337,11 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
 	done := timeBuildCommand(mp, attributes)
 	var resp *client.SolveResponse
-	var inputs *build.Inputs
 	var retErr error
-	if confutil.IsExperimental() {
-		resp, inputs, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
+	if isExperimental() {
+		resp, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
 	} else {
-		resp, inputs, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
+		resp, retErr = runBasicBuild(ctx, dockerCli, opts, options, printer)
 	}
 
 	if err := printer.Wait(); retErr == nil {
@@ -387,21 +367,13 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
 		}
 	}
 	if options.metadataFile != "" {
-		dt := decodeExporterResponse(resp.ExporterResponse)
-		if opts.CallFunc == nil {
-			if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
-				dt["buildx.build.warnings"] = warnings
-			}
-		}
-		if err := writeMetadataFile(options.metadataFile, dt); err != nil {
+		if err := writeMetadataFile(options.metadataFile, decodeExporterResponse(resp.ExporterResponse)); err != nil {
 			return err
 		}
 	}
-	if opts.CallFunc != nil {
-		if exitcode, err := printResult(dockerCli.Out(), opts.CallFunc, resp.ExporterResponse, options.target, inputs); err != nil {
+	if opts.PrintFunc != nil {
+		if err := printResult(opts.PrintFunc, resp.ExporterResponse); err != nil {
 			return err
-		} else if exitcode != 0 {
-			os.Exit(exitcode)
 		}
 	}
 	return nil
@@ -416,22 +388,22 @@ func getImageID(resp map[string]string) string {
 	return dgst
 }
 
-func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
-	resp, res, dfmap, err := cbuild.RunBuild(ctx, dockerCli, opts, dockerCli.In(), printer, false)
+func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
+	resp, res, err := cbuild.RunBuild(ctx, dockerCli, *opts, dockerCli.In(), printer, false)
 	if res != nil {
 		res.Done()
 	}
-	return resp, dfmap, err
+	return resp, err
 }
 
-func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
+func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
 	if options.invokeConfig != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
 		// stdin must be usable for monitor
-		return nil, nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
+		return nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
 	}
 
 	c, err := controller.NewController(ctx, options.ControlOptions, dockerCli, printer)
 	if err != nil {
-		return nil, nil, err
+		return nil, err
 	}
 	defer func() {
 		if err := c.Close(); err != nil {
@@ -443,49 +415,38 @@
 	// so we need to resolve paths to absolute ones in the client.
 	opts, err = controllerapi.ResolveOptionPaths(opts)
 	if err != nil {
-		return nil, nil, err
+		return nil, err
 	}
 
 	var ref string
 	var retErr error
 	var resp *client.SolveResponse
-	var inputs *build.Inputs
+
+	f := ioset.NewSingleForwarder()
+	f.SetReader(dockerCli.In())
+	pr, pw := io.Pipe()
+	f.SetWriter(pw, func() io.WriteCloser {
+		pw.Close() // propagate EOF
+		logrus.Debug("propagating stdin close")
+		return nil
+	})
 
-	var f *ioset.SingleForwarder
-	var pr io.ReadCloser
-	var pw io.WriteCloser
-	if options.invokeConfig == nil {
-		pr = dockerCli.In()
-	} else {
-		f = ioset.NewSingleForwarder()
-		f.SetReader(dockerCli.In())
-		pr, pw = io.Pipe()
-		f.SetWriter(pw, func() io.WriteCloser {
-			pw.Close() // propagate EOF
-			logrus.Debug("propagating stdin close")
-			return nil
-		})
-	}
-
-	ref, resp, inputs, err = c.Build(ctx, opts, pr, printer)
+	ref, resp, err = c.Build(ctx, *opts, pr, printer)
 	if err != nil {
 		var be *controllererrors.BuildError
 		if errors.As(err, &be) {
-			ref = be.SessionID
+			ref = be.Ref
 			retErr = err
 			// We can proceed to monitor
 		} else {
-			return nil, nil, errors.Wrapf(err, "failed to build")
+			return nil, errors.Wrapf(err, "failed to build")
 		}
 	}
 
-	if options.invokeConfig != nil {
-		if err := pw.Close(); err != nil {
-			logrus.Debug("failed to close stdin pipe writer")
-		}
-		if err := pr.Close(); err != nil {
-			logrus.Debug("failed to close stdin pipe reader")
-		}
+	if err := pw.Close(); err != nil {
+		logrus.Debug("failed to close stdin pipe writer")
+	}
+	if err := pr.Close(); err != nil {
+		logrus.Debug("failed to close stdin pipe reader")
 	}
 
 	if options.invokeConfig != nil && options.invokeConfig.needsDebug(retErr) {
@@ -516,7 +477,7 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
 		}
 	}
 
-	return resp, inputs, retErr
+	return resp, retErr
 }
 func printError(err error, printer *progress.Printer) error {
@@ -553,12 +514,9 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
 	cmd := &cobra.Command{
 		Use:     "build [OPTIONS] PATH | URL | -",
+		Aliases: []string{"b"},
 		Short:   "Start a build",
 		Args:    cli.ExactArgs(1),
-		Aliases: []string{"b"},
-		Annotations: map[string]string{
-			"aliases": "docker build, docker builder build, docker image build, docker buildx b",
-		},
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.contextPath = args[0]
 			options.builder = rootOpts.builder
@@ -597,8 +555,9 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
 	flags := cmd.Flags()
 
 	flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
-	flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#add-host"})
 
-	flags.StringArrayVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
+	flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
 
 	flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
@@ -609,12 +568,14 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
 	flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, `Cache export destinations (e.g., "user/app:cache", "type=local,dest=path/to/dir")`)
 
 	flags.StringVar(&options.cgroupParent, "cgroup-parent", "", `Set the parent cgroup for the "RUN" instructions during build`)
-	flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#cgroup-parent"})
 
 	flags.StringArrayVar(&options.contexts, "build-context", []string{}, "Additional build contexts (e.g., name=path)")
 
 	flags.StringVarP(&options.dockerfileName, "file", "f", "", `Name of the Dockerfile (default: "PATH/Dockerfile")`)
-	flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#file"})
 
-	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to a file")
+	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
 
 	flags.StringArrayVar(&options.labels, "label", []string{}, "Set metadata for an image")
flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build") flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build")
if isExperimental() {
flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets)")
cobrautil.MarkFlagsExperimental(flags, "print")
}
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`) flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`)
flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success") flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success")
@@ -639,8 +605,10 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`) flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`)
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`) flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`)
flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#tag"})
flags.StringVar(&options.target, "target", "", "Set the target build stage to build") flags.StringVar(&options.target, "target", "", "Set the target build stage to build")
flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#target"})
options.ulimits = dockeropts.NewUlimitOpt(nil) options.ulimits = dockeropts.NewUlimitOpt(nil)
flags.Var(options.ulimits, "ulimit", "Ulimit options") flags.Var(options.ulimits, "ulimit", "Ulimit options")
@@ -649,7 +617,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
 	flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--attest=type=sbom"`)
 	flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--attest=type=provenance"`)
 
-	if confutil.IsExperimental() {
+	if isExperimental() {
 		// TODO: move this to debug command if needed
 		flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
 		flags.BoolVar(&options.Detach, "detach", false, "Detach buildx server (supported only on linux)")
@@ -657,20 +625,12 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
 		cobrautil.MarkFlagsExperimental(flags, "root", "detach", "server-config")
 	}
 
-	flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
-	flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
-	flags.Lookup("check").NoOptDefVal = "true"
-
 	// hidden flags
 	var ignore string
 	var ignoreSlice []string
 	var ignoreBool bool
 	var ignoreInt int64
 
-	flags.StringVar(&options.callFunc, "print", "", "Print result of information request (e.g., outline, targets)")
-	cobrautil.MarkFlagsExperimental(flags, "print")
-	flags.MarkHidden("print")
-
 	flags.BoolVar(&ignoreBool, "compress", false, "Compress the build context using gzip")
 	flags.MarkHidden("compress")
@@ -728,9 +688,9 @@ type commonFlags struct {
 func commonBuildFlags(options *commonFlags, flags *pflag.FlagSet) {
 	options.noCache = flags.Bool("no-cache", false, "Do not use cache when building the image")
-	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "quiet", "plain", "tty", "rawjson"). Use plain to show container output`)
+	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
 	options.pull = flags.Bool("pull", false, "Always attempt to pull all referenced images")
-	flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to a file")
+	flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to the file")
 }
 func checkWarnedFlags(f *pflag.Flag) {
@@ -745,37 +705,26 @@ func checkWarnedFlags(f *pflag.Flag) {
 	}
 }
 
-func writeMetadataFile(filename string, dt any) error {
+func writeMetadataFile(filename string, dt interface{}) error {
 	b, err := json.MarshalIndent(dt, "", "  ")
 	if err != nil {
 		return err
 	}
-	return atomicwriter.WriteFile(filename, b, 0644)
+	return ioutils.AtomicWriteFile(filename, b, 0644)
 }
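Both helper packages here write the metadata file atomically (write to a temporary file, then rename it into place), so a concurrent reader never observes a half-written JSON document. Without the Docker helper dependency, the same idea is a few lines of stdlib (a sketch, error handling trimmed to the essentials):

package main

import (
	"os"
	"path/filepath"
)

// atomicWriteFile writes data to a temporary file in the target directory
// and renames it into place; rename is atomic on POSIX filesystems.
func atomicWriteFile(filename string, data []byte, perm os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Chmod(perm); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filename)
}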
-func decodeExporterResponse(exporterResponse map[string]string) map[string]any {
-	decFunc := func(k, v string) ([]byte, error) {
-		if k == "result.json" {
-			// result.json is part of metadata response for subrequests which
-			// is already a JSON object: https://github.com/moby/buildkit/blob/f6eb72f2f5db07ddab89ac5e2bd3939a6444f4be/frontend/dockerui/requests.go#L100-L102
-			return []byte(v), nil
-		}
-		return base64.StdEncoding.DecodeString(v)
-	}
-	out := make(map[string]any)
+func decodeExporterResponse(exporterResponse map[string]string) map[string]interface{} {
+	out := make(map[string]interface{})
 	for k, v := range exporterResponse {
-		dt, err := decFunc(k, v)
+		dt, err := base64.StdEncoding.DecodeString(v)
 		if err != nil {
 			out[k] = v
 			continue
 		}
-		var raw map[string]any
+		var raw map[string]interface{}
 		if err = json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
-			var rawList []map[string]any
-			if err = json.Unmarshal(dt, &rawList); err != nil || len(rawList) == 0 {
-				out[k] = v
-				continue
-			}
+			out[k] = v
+			continue
 		}
 		out[k] = json.RawMessage(dt)
 	}
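decodeExporterResponse tries to base64-decode each exporter value and inlines it as raw JSON when the result parses as a non-empty object; everything else is passed through untouched. A standalone illustration of that round-trip (the response keys and values are invented):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	enc := base64.StdEncoding.EncodeToString([]byte(`{"containerimage.digest":"sha256:abc"}`))
	resp := map[string]string{
		"image.name": "docker.io/library/app:latest", // not valid base64: kept verbatim
		"metadata":   enc,                            // decoded and inlined as JSON
	}

	out := make(map[string]interface{})
	for k, v := range resp {
		dt, err := base64.StdEncoding.DecodeString(v)
		if err != nil {
			out[k] = v
			continue
		}
		var raw map[string]interface{}
		if err := json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
			out[k] = v
			continue
		}
		out[k] = json.RawMessage(dt)
	}

	b, _ := json.MarshalIndent(out, "", "  ")
	fmt.Println(string(b))
}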
@@ -813,6 +762,14 @@ func (w *wrapped) Unwrap() error {
 	return w.err
 }
 
+func isExperimental() bool {
+	if v, ok := os.LookupEnv("BUILDX_EXPERIMENTAL"); ok {
+		vv, _ := strconv.ParseBool(v)
+		return vv
+	}
+	return false
+}
 func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
 	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
@@ -869,7 +826,7 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
 		fmt.Fprintf(sb, "%d warnings found", len(warnings))
 	}
 	if logrus.GetLevel() < logrus.DebugLevel {
-		fmt.Fprintf(sb, " (use docker --debug to expand)")
+		fmt.Fprintf(sb, " (use --debug to expand)")
 	}
 	fmt.Fprintf(sb, ":\n")
 	fmt.Fprint(w, aec.Apply(sb.String(), aec.YellowF))
@@ -893,107 +850,42 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
 		src.Print(w)
 	}
 	fmt.Fprintf(w, "\n")
 }
 
-func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string, target string, inp *build.Inputs) (int, error) {
+func printResult(f *controllerapi.PrintFunc, res map[string]string) error {
 	switch f.Name {
 	case "outline":
-		return 0, printValue(w, outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
+		return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
 	case "targets":
-		return 0, printValue(w, targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
+		return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
 	case "subrequests.describe":
-		return 0, printValue(w, subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
+		return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
case "lint":
lintResults := lint.LintResults{}
if result, ok := res["result.json"]; ok {
if err := json.Unmarshal([]byte(result), &lintResults); err != nil {
return 0, err
}
}
warningCount := len(lintResults.Warnings)
if f.Format != "json" && warningCount > 0 {
var warningCountMsg string
if warningCount == 1 {
warningCountMsg = "1 warning has been found!"
} else if warningCount > 1 {
warningCountMsg = fmt.Sprintf("%d warnings have been found!", warningCount)
}
fmt.Fprintf(w, "Check complete, %s\n", warningCountMsg)
}
sourceInfoMap := func(sourceInfo *solverpb.SourceInfo) *solverpb.SourceInfo {
if sourceInfo == nil || inp == nil {
return sourceInfo
}
if target == "" {
target = "default"
}
if inp.DockerfileMappingSrc != "" {
newSourceInfo := proto.Clone(sourceInfo).(*solverpb.SourceInfo)
newSourceInfo.Filename = inp.DockerfileMappingSrc
return newSourceInfo
}
return sourceInfo
}
printLintWarnings := func(dt []byte, w io.Writer) error {
return lintResults.PrintTo(w, sourceInfoMap)
}
err := printValue(w, printLintWarnings, lint.SubrequestLintDefinition.Version, f.Format, res)
if err != nil {
return 0, err
}
if lintResults.Error != nil {
// Print the error message and the source
// Normally, we would use `errdefs.WithSource` to attach the source to the
// error and let the error be printed by the handling that's already in place,
// but here we want to print the error in a way that's consistent with how
// the lint warnings are printed via the `lint.PrintLintViolations` function,
// which differs from the default error printing.
if f.Format != "json" && len(lintResults.Warnings) > 0 {
fmt.Fprintln(w)
}
lintBuf := bytes.NewBuffer(nil)
lintResults.PrintErrorTo(lintBuf, sourceInfoMap)
return 0, errors.New(lintBuf.String())
} else if len(lintResults.Warnings) == 0 && f.Format != "json" {
fmt.Fprintln(w, "Check complete, no warnings found.")
}
 	default:
-		if dt, ok := res["result.json"]; ok && f.Format == "json" {
-			fmt.Fprintln(w, dt)
-		} else if dt, ok := res["result.txt"]; ok {
-			fmt.Fprint(w, dt)
+		if dt, ok := res["result.txt"]; ok {
+			fmt.Print(dt)
 		} else {
-			fmt.Fprintf(w, "%s %+v\n", f, res)
+			log.Printf("%s %+v", f, res)
 		}
 	}
-	if v, ok := res["result.statuscode"]; !f.IgnoreStatus && ok {
-		if n, err := strconv.Atoi(v); err == nil && n != 0 {
-			return n, nil
-		}
-	}
-	return 0, nil
+	return nil
 }
 
-type callFunc func([]byte, io.Writer) error
+type printFunc func([]byte, io.Writer) error
 
-func printValue(w io.Writer, printer callFunc, version string, format string, res map[string]string) error {
+func printValue(printer printFunc, version string, format string, res map[string]string) error {
 	if format == "json" {
-		fmt.Fprintln(w, res["result.json"])
+		fmt.Fprintln(os.Stdout, res["result.json"])
 		return nil
 	}
 
 	if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
 		// structure is too new and we don't know how to print it
-		fmt.Fprint(w, res["result.txt"])
+		fmt.Fprint(os.Stdout, res["result.txt"])
 		return nil
 	}
 
-	return printer([]byte(res["result.json"]), w)
+	return printer([]byte(res["result.json"]), os.Stdout)
 }
 type invokeConfig struct {
@@ -1023,7 +915,7 @@ func (cfg *invokeConfig) runDebug(ctx context.Context, ref string, options *cont
 		return nil, errors.Errorf("failed to configure terminal: %v", err)
 	}
 	defer con.Reset()
-	return monitor.RunMonitor(ctx, ref, options, &cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
+	return monitor.RunMonitor(ctx, ref, options, cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
 }
 
 func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
@@ -1043,9 +935,9 @@ func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
 		return nil
 	}
 
-	csvParser := csvvalue.NewParser()
-	csvParser.LazyQuotes = true
-	fields, err := csvParser.Fields(invoke, nil)
+	csvReader := csv.NewReader(strings.NewReader(invoke))
+	csvReader.LazyQuotes = true
+	fields, err := csvReader.Read()
 	if err != nil {
 		return err
 	}
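Both sides parse the --invoke value as a single CSV record with LazyQuotes enabled, so stray quotes inside a field don't abort parsing; the newer side swaps encoding/csv for github.com/tonistiigi/go-csvvalue, which parses one record without allocating a csv.Reader per call. A stdlib-only sketch of the same parse (the invoke string is invented):

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	invoke := "sh,tty=true,entrypoint=/bin/sh"

	r := csv.NewReader(strings.NewReader(invoke))
	r.LazyQuotes = true
	fields, err := r.Read() // one record: ["sh" "tty=true" "entrypoint=/bin/sh"]
	if err != nil {
		panic(err)
	}
	for _, f := range fields {
		fmt.Println(f)
	}
}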
@@ -1101,20 +993,6 @@ func maybeJSONArray(v string) []string {
 	return []string{v}
 }
func callAlias(target *string, value string) cobrautil.BoolFuncValue {
return func(s string) error {
v, err := strconv.ParseBool(s)
if err != nil {
return err
}
if v {
*target = value
}
return nil
}
}
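callAlias turns a boolean-looking flag into an assignment to another flag's value, which is how --check can act as shorthand for --call=check. A sketch of the wiring, assuming it lives next to callAlias above (flags is a *pflag.FlagSet from github.com/spf13/pflag):

func addCallFlags(flags *pflag.FlagSet, callFunc *string) {
	// --call selects the evaluation method directly.
	flags.StringVar(callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
	// --check is sugar: when it parses to true, callAlias writes "check" into callFunc.
	flags.VarPF(callAlias(callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
	// NoOptDefVal lets a bare "--check" (no value) behave like "--check=true".
	flags.Lookup("check").NoOptDefVal = "true"
}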
// timeBuildCommand will start a timer for timing the build command. It records the time when the returned
// function is invoked into a metric.
func timeBuildCommand(mp metric.MeterProvider, attrs attribute.Set) func(err error) {
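timeBuildCommand returns a closure that records the elapsed build time into a metric when invoked. A hedged sketch of what such a helper can look like with the OpenTelemetry metric API (the instrument name and unit below are assumptions, not necessarily what buildx records):

package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// timeCommand starts a timer and returns a func that records the duration.
func timeCommand(mp metric.MeterProvider, attrs attribute.Set) func(err error) {
	meter := mp.Meter("example/commands") // hypothetical scope name
	hist, _ := meter.Float64Histogram("command.time",
		metric.WithDescription("duration of the command"),
		metric.WithUnit("ms"))
	start := time.Now()
	return func(err error) {
		_ = err // a real implementation might attach an error attribute here
		hist.Record(context.Background(), float64(time.Since(start).Milliseconds()),
			metric.WithAttributeSet(attrs))
	}
}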

View File

@@ -64,7 +64,7 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
 				return errors.Errorf("failed to configure terminal: %v", err)
 			}
-			_, err = monitor.RunMonitor(ctx, "", nil, &controllerapi.InvokeConfig{
+			_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
 				Tty: true,
 			}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
 			con.Reset()
@@ -80,7 +80,7 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
 	flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
 	flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
 	flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
-	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)
+	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
 
 	cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")

View File

@@ -5,7 +5,7 @@ import (
 	"net"
 	"os"
 
-	"github.com/containerd/platforms"
+	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/build"
 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/progress"
@@ -125,7 +125,8 @@ func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	}
 
 	flags := cmd.Flags()
+	cmd.Flags()
 	flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
-	flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
+	flags.StringVar(&opts.progress, "progress", "quiet", "Set type of progress output (auto, plain, tty).")
 	return cmd
 }
} }

View File

@@ -124,7 +124,7 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	return cmd
 }
 
-func printKV(w io.Writer, k string, v any) {
+func printKV(w io.Writer, k string, v interface{}) {
 	fmt.Fprintf(w, "%s:\t%v\n", k, v)
 }
} }

View File

@@ -1,135 +0,0 @@
package history
import (
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"os"
"strings"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type importOptions struct {
file []string
}
func runImport(ctx context.Context, dockerCli command.Cli, opts importOptions) error {
sock, err := desktop.BuildServerAddr()
if err != nil {
return err
}
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.DialContext = func(ctx context.Context, _, _ string) (net.Conn, error) {
network, addr, ok := strings.Cut(sock, "://")
if !ok {
return nil, errors.Errorf("invalid endpoint address: %s", sock)
}
return remoteutil.DialContext(ctx, network, addr)
}
client := &http.Client{
Transport: tr,
}
var urls []string
if len(opts.file) == 0 {
u, err := importFrom(ctx, client, os.Stdin)
if err != nil {
return err
}
urls = append(urls, u...)
} else {
for _, fn := range opts.file {
var f *os.File
var rdr io.Reader = os.Stdin
if fn != "-" {
f, err = os.Open(fn)
if err != nil {
return errors.Wrapf(err, "failed to open file %s", fn)
}
rdr = f
}
u, err := importFrom(ctx, client, rdr)
if err != nil {
return err
}
urls = append(urls, u...)
if f != nil {
f.Close()
}
}
}
if len(urls) == 0 {
return errors.New("no build records found in the bundle")
}
for i, url := range urls {
fmt.Fprintln(dockerCli.Err(), url)
if i == 0 {
err = browser.OpenURL(url)
}
}
return err
}
func importFrom(ctx context.Context, c *http.Client, rdr io.Reader) ([]string, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPost, "http://docker-desktop/upload", rdr)
if err != nil {
return nil, errors.Wrap(err, "failed to create request")
}
resp, err := c.Do(req)
if err != nil {
return nil, errors.Wrap(err, "failed to send request, check if Docker Desktop is running")
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, errors.Errorf("failed to import build: %s", string(body))
}
var refs []string
dec := json.NewDecoder(resp.Body)
if err := dec.Decode(&refs); err != nil {
return nil, errors.Wrap(err, "failed to decode response")
}
var urls []string
for _, ref := range refs {
urls = append(urls, desktop.BuildURL(fmt.Sprintf(".imported/_/%s", ref)))
}
return urls, err
}
func importCmd(dockerCli command.Cli, _ RootOptions) *cobra.Command {
var options importOptions
cmd := &cobra.Command{
Use: "import [OPTIONS] < bundle.dockerbuild",
Short: "Import a build into Docker Desktop",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return runImport(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.file, "file", "f", nil, "Import from a file path")
return cmd
}

View File

@@ -1,893 +0,0 @@
package history
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"text/tabwriter"
"text/template"
"time"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/containerd/containerd/v2/core/images"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/docker/cli/cli/debug"
slsa "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/common"
slsa02 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v0.2"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/solver/errdefs"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/tonistiigi/go-csvvalue"
spb "google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
proto "google.golang.org/protobuf/proto"
)
type statusT string
const (
statusComplete statusT = "completed"
statusRunning statusT = "running"
statusError statusT = "failed"
statusCanceled statusT = "canceled"
)
type inspectOptions struct {
builder string
ref string
format string
}
type inspectOutput struct {
Name string `json:",omitempty"`
Ref string
Context string `json:",omitempty"`
Dockerfile string `json:",omitempty"`
VCSRepository string `json:",omitempty"`
VCSRevision string `json:",omitempty"`
Target string `json:",omitempty"`
Platform []string `json:",omitempty"`
KeepGitDir bool `json:",omitempty"`
NamedContexts []keyValueOutput `json:",omitempty"`
StartedAt *time.Time `json:",omitempty"`
CompletedAt *time.Time `json:",omitempty"`
Duration time.Duration `json:",omitempty"`
Status statusT `json:",omitempty"`
Error *errorOutput `json:",omitempty"`
NumCompletedSteps int32
NumTotalSteps int32
NumCachedSteps int32
BuildArgs []keyValueOutput `json:",omitempty"`
Labels []keyValueOutput `json:",omitempty"`
Config configOutput `json:",omitempty"`
Materials []materialOutput `json:",omitempty"`
Attachments []attachmentOutput `json:",omitempty"`
Errors []string `json:",omitempty"`
}
type configOutput struct {
Network string `json:",omitempty"`
ExtraHosts []string `json:",omitempty"`
Hostname string `json:",omitempty"`
CgroupParent string `json:",omitempty"`
ImageResolveMode string `json:",omitempty"`
MultiPlatform bool `json:",omitempty"`
NoCache bool `json:",omitempty"`
NoCacheFilter []string `json:",omitempty"`
ShmSize string `json:",omitempty"`
Ulimit string `json:",omitempty"`
CacheMountNS string `json:",omitempty"`
DockerfileCheckConfig string `json:",omitempty"`
SourceDateEpoch string `json:",omitempty"`
SandboxHostname string `json:",omitempty"`
RestRaw []keyValueOutput `json:",omitempty"`
}
type materialOutput struct {
URI string `json:",omitempty"`
Digests []string `json:",omitempty"`
}
type attachmentOutput struct {
Digest string `json:",omitempty"`
Platform string `json:",omitempty"`
Type string `json:",omitempty"`
}
type errorOutput struct {
Code int `json:",omitempty"`
Message string `json:",omitempty"`
Name string `json:",omitempty"`
Logs []string `json:",omitempty"`
Sources []byte `json:",omitempty"`
Stack []byte `json:",omitempty"`
}
type keyValueOutput struct {
Name string `json:",omitempty"`
Value string `json:",omitempty"`
}
func readAttr[T any](attrs map[string]string, k string, dest *T, f func(v string) (T, bool)) {
if sv, ok := attrs[k]; ok {
if f != nil {
v, ok := f(sv)
if ok {
*dest = v
}
}
if d, ok := any(dest).(*string); ok {
*d = sv
}
}
delete(attrs, k)
}
func runInspect(ctx context.Context, dockerCli command.Cli, opts inspectOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
store := proxy.NewContentStore(c.ContentClient())
var defaultPlatform string
workers, err := c.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "failed to list workers")
}
workers0:
for _, w := range workers {
for _, p := range w.Platforms {
defaultPlatform = platforms.FormatAll(platforms.Normalize(p))
break workers0
}
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
st, _ := ls.ReadRef(rec.node.Builder, rec.node.Name, rec.Ref)
attrs := rec.FrontendAttrs
delete(attrs, "frontend.caps")
var out inspectOutput
var context string
var dockerfile string
if st != nil {
context = st.LocalPath
dockerfile = st.DockerfilePath
wd, _ := os.Getwd()
if dockerfile != "" && dockerfile != "-" {
if rel, err := filepath.Rel(context, dockerfile); err == nil {
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
dockerfile = rel
}
}
}
if context != "" {
if rel, err := filepath.Rel(wd, context); err == nil {
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
context = rel
}
}
}
}
if v, ok := attrs["context"]; ok && context == "" {
delete(attrs, "context")
context = v
}
if dockerfile == "" {
if v, ok := attrs["filename"]; ok {
dockerfile = v
if dfdir, ok := attrs["vcs:localdir:dockerfile"]; ok {
dockerfile = filepath.Join(dfdir, dockerfile)
}
}
}
delete(attrs, "filename")
out.Name = buildName(rec.FrontendAttrs, st)
out.Ref = rec.Ref
out.Context = context
out.Dockerfile = dockerfile
if _, ok := attrs["context"]; !ok {
if src, ok := attrs["vcs:source"]; ok {
out.VCSRepository = src
}
if rev, ok := attrs["vcs:revision"]; ok {
out.VCSRevision = rev
}
}
readAttr(attrs, "target", &out.Target, nil)
readAttr(attrs, "platform", &out.Platform, func(v string) ([]string, bool) {
return tryParseValue(v, &out.Errors, func(v string) ([]string, error) {
var pp []string
for _, v := range strings.Split(v, ",") {
p, err := platforms.Parse(v)
if err != nil {
return nil, err
}
pp = append(pp, platforms.FormatAll(platforms.Normalize(p)))
}
if len(pp) == 0 {
pp = append(pp, defaultPlatform)
}
return pp, nil
})
})
readAttr(attrs, "build-arg:BUILDKIT_CONTEXT_KEEP_GIT_DIR", &out.KeepGitDir, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
out.NamedContexts = readKeyValues(attrs, "context:")
if rec.CreatedAt != nil {
tm := rec.CreatedAt.AsTime().Local()
out.StartedAt = &tm
}
out.Status = statusRunning
if rec.CompletedAt != nil {
tm := rec.CompletedAt.AsTime().Local()
out.CompletedAt = &tm
out.Status = statusComplete
}
if rec.Error != nil || rec.ExternalError != nil {
out.Error = &errorOutput{}
if rec.Error != nil {
if codes.Code(rec.Error.Code) == codes.Canceled {
out.Status = statusCanceled
} else {
out.Status = statusError
}
out.Error.Code = int(codes.Code(rec.Error.Code))
out.Error.Message = rec.Error.Message
}
if rec.ExternalError != nil {
dt, err := content.ReadBlob(ctx, store, ociDesc(rec.ExternalError))
if err != nil {
return errors.Wrapf(err, "failed to read external error %s", rec.ExternalError.Digest)
}
var st spb.Status
if err := proto.Unmarshal(dt, &st); err != nil {
return errors.Wrapf(err, "failed to unmarshal external error %s", rec.ExternalError.Digest)
}
retErr := grpcerrors.FromGRPC(status.ErrorProto(&st))
var errsources bytes.Buffer
for _, s := range errdefs.Sources(retErr) {
s.Print(&errsources)
errsources.WriteString("\n")
}
out.Error.Sources = errsources.Bytes()
var ve *errdefs.VertexError
if errors.As(retErr, &ve) {
dgst, err := digest.Parse(ve.Vertex.Digest)
if err != nil {
return errors.Wrapf(err, "failed to parse vertex digest %s", ve.Vertex.Digest)
}
name, logs, err := loadVertexLogs(ctx, c, rec.Ref, dgst, 16)
if err != nil {
return errors.Wrapf(err, "failed to load vertex logs %s", dgst)
}
out.Error.Name = name
out.Error.Logs = logs
}
out.Error.Stack = fmt.Appendf(nil, "%+v", stack.Formatter(retErr))
}
}
if out.StartedAt != nil {
if out.CompletedAt != nil {
out.Duration = out.CompletedAt.Sub(*out.StartedAt)
} else {
out.Duration = rec.currentTimestamp.Sub(*out.StartedAt)
}
}
out.NumCompletedSteps = rec.NumCompletedSteps
out.NumTotalSteps = rec.NumTotalSteps
out.NumCachedSteps = rec.NumCachedSteps
out.BuildArgs = readKeyValues(attrs, "build-arg:")
out.Labels = readKeyValues(attrs, "label:")
readAttr(attrs, "force-network-mode", &out.Config.Network, nil)
readAttr(attrs, "hostname", &out.Config.Hostname, nil)
readAttr(attrs, "cgroup-parent", &out.Config.CgroupParent, nil)
readAttr(attrs, "image-resolve-mode", &out.Config.ImageResolveMode, nil)
readAttr(attrs, "build-arg:BUILDKIT_MULTI_PLATFORM", &out.Config.MultiPlatform, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
readAttr(attrs, "multi-platform", &out.Config.MultiPlatform, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
readAttr(attrs, "no-cache", &out.Config.NoCache, func(v string) (bool, bool) {
if v == "" {
return true, true
}
return false, false
})
readAttr(attrs, "no-cache", &out.Config.NoCacheFilter, func(v string) ([]string, bool) {
if v == "" {
return nil, false
}
return strings.Split(v, ","), true
})
readAttr(attrs, "add-hosts", &out.Config.ExtraHosts, func(v string) ([]string, bool) {
return tryParseValue(v, &out.Errors, func(v string) ([]string, error) {
fields, err := csvvalue.Fields(v, nil)
if err != nil {
return nil, err
}
return fields, nil
})
})
readAttr(attrs, "shm-size", &out.Config.ShmSize, nil)
readAttr(attrs, "ulimit", &out.Config.Ulimit, nil)
readAttr(attrs, "build-arg:BUILDKIT_CACHE_MOUNT_NS", &out.Config.CacheMountNS, nil)
readAttr(attrs, "build-arg:BUILDKIT_DOCKERFILE_CHECK", &out.Config.DockerfileCheckConfig, nil)
readAttr(attrs, "build-arg:SOURCE_DATE_EPOCH", &out.Config.SourceDateEpoch, nil)
readAttr(attrs, "build-arg:SANDBOX_HOSTNAME", &out.Config.SandboxHostname, nil)
var unusedAttrs []keyValueOutput
for k := range attrs {
if strings.HasPrefix(k, "vcs:") || strings.HasPrefix(k, "build-arg:") || strings.HasPrefix(k, "label:") || strings.HasPrefix(k, "context:") || strings.HasPrefix(k, "attest:") {
continue
}
unusedAttrs = append(unusedAttrs, keyValueOutput{
Name: k,
Value: attrs[k],
})
}
slices.SortFunc(unusedAttrs, func(a, b keyValueOutput) int {
return cmp.Compare(a.Name, b.Name)
})
out.Config.RestRaw = unusedAttrs
attachments, err := allAttachments(ctx, store, *rec)
if err != nil {
return err
}
provIndex := slices.IndexFunc(attachments, func(a attachment) bool {
return descrType(a.descr) == slsa02.PredicateSLSAProvenance
})
if provIndex != -1 {
prov := attachments[provIndex]
dt, err := content.ReadBlob(ctx, store, prov.descr)
if err != nil {
return errors.Errorf("failed to read provenance %s: %v", prov.descr.Digest, err)
}
var pred provenancetypes.ProvenancePredicate
if err := json.Unmarshal(dt, &pred); err != nil {
return errors.Errorf("failed to unmarshal provenance %s: %v", prov.descr.Digest, err)
}
for _, m := range pred.Materials {
out.Materials = append(out.Materials, materialOutput{
URI: m.URI,
Digests: digestSetToDigests(m.Digest),
})
}
}
if len(attachments) > 0 {
for _, a := range attachments {
p := ""
if a.platform != nil {
p = platforms.FormatAll(*a.platform)
}
out.Attachments = append(out.Attachments, attachmentOutput{
Digest: a.descr.Digest.String(),
Platform: p,
Type: descrType(a.descr),
})
}
}
if opts.format == formatter.JSONFormatKey {
enc := json.NewEncoder(dockerCli.Out())
enc.SetIndent("", " ")
return enc.Encode(out)
} else if opts.format != formatter.PrettyFormatKey {
tmpl, err := template.New("inspect").Parse(opts.format)
if err != nil {
return errors.Wrapf(err, "failed to parse format template")
}
var buf bytes.Buffer
if err := tmpl.Execute(&buf, out); err != nil {
return errors.Wrapf(err, "failed to execute format template")
}
fmt.Fprintln(dockerCli.Out(), buf.String())
return nil
}
tw := tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
if out.Name != "" {
fmt.Fprintf(tw, "Name:\t%s\n", out.Name)
}
if opts.ref == "" && out.Ref != "" {
fmt.Fprintf(tw, "Ref:\t%s\n", out.Ref)
}
if out.Context != "" {
fmt.Fprintf(tw, "Context:\t%s\n", out.Context)
}
if out.Dockerfile != "" {
fmt.Fprintf(tw, "Dockerfile:\t%s\n", out.Dockerfile)
}
if out.VCSRepository != "" {
fmt.Fprintf(tw, "VCS Repository:\t%s\n", out.VCSRepository)
}
if out.VCSRevision != "" {
fmt.Fprintf(tw, "VCS Revision:\t%s\n", out.VCSRevision)
}
if out.Target != "" {
fmt.Fprintf(tw, "Target:\t%s\n", out.Target)
}
if len(out.Platform) > 0 {
fmt.Fprintf(tw, "Platforms:\t%s\n", strings.Join(out.Platform, ", "))
}
if out.KeepGitDir {
fmt.Fprintf(tw, "Keep Git Dir:\t%s\n", strconv.FormatBool(out.KeepGitDir))
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
printTable(dockerCli.Out(), out.NamedContexts, "Named Context")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "Started:\t%s\n", out.StartedAt.Format("2006-01-02 15:04:05"))
var statusStr string
if out.Status == statusRunning {
statusStr = " (running)"
}
fmt.Fprintf(tw, "Duration:\t%s%s\n", formatDuration(out.Duration), statusStr)
if out.Status == statusError {
fmt.Fprintf(tw, "Error:\t%s %s\n", codes.Code(rec.Error.Code).String(), rec.Error.Message)
} else if out.Status == statusCanceled {
fmt.Fprintf(tw, "Status:\tCanceled\n")
}
fmt.Fprintf(tw, "Build Steps:\t%d/%d (%.0f%% cached)\n", out.NumCompletedSteps, out.NumTotalSteps, float64(out.NumCachedSteps)/float64(out.NumTotalSteps)*100)
tw.Flush()
fmt.Fprintln(dockerCli.Out())
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
if out.Config.Network != "" {
fmt.Fprintf(tw, "Network:\t%s\n", out.Config.Network)
}
if out.Config.Hostname != "" {
fmt.Fprintf(tw, "Hostname:\t%s\n", out.Config.Hostname)
}
if len(out.Config.ExtraHosts) > 0 {
fmt.Fprintf(tw, "Extra Hosts:\t%s\n", strings.Join(out.Config.ExtraHosts, ", "))
}
if out.Config.CgroupParent != "" {
fmt.Fprintf(tw, "Cgroup Parent:\t%s\n", out.Config.CgroupParent)
}
if out.Config.ImageResolveMode != "" {
fmt.Fprintf(tw, "Image Resolve Mode:\t%s\n", out.Config.ImageResolveMode)
}
if out.Config.MultiPlatform {
fmt.Fprintf(tw, "Multi-Platform:\t%s\n", strconv.FormatBool(out.Config.MultiPlatform))
}
if out.Config.NoCache {
fmt.Fprintf(tw, "No Cache:\t%s\n", strconv.FormatBool(out.Config.NoCache))
}
if len(out.Config.NoCacheFilter) > 0 {
fmt.Fprintf(tw, "No Cache Filter:\t%s\n", strings.Join(out.Config.NoCacheFilter, ", "))
}
if out.Config.ShmSize != "" {
fmt.Fprintf(tw, "Shm Size:\t%s\n", out.Config.ShmSize)
}
if out.Config.Ulimit != "" {
fmt.Fprintf(tw, "Resource Limits:\t%s\n", out.Config.Ulimit)
}
if out.Config.CacheMountNS != "" {
fmt.Fprintf(tw, "Cache Mount Namespace:\t%s\n", out.Config.CacheMountNS)
}
if out.Config.DockerfileCheckConfig != "" {
fmt.Fprintf(tw, "Dockerfile Check Config:\t%s\n", out.Config.DockerfileCheckConfig)
}
if out.Config.SourceDateEpoch != "" {
fmt.Fprintf(tw, "Source Date Epoch:\t%s\n", out.Config.SourceDateEpoch)
}
if out.Config.SandboxHostname != "" {
fmt.Fprintf(tw, "Sandbox Hostname:\t%s\n", out.Config.SandboxHostname)
}
for _, kv := range out.Config.RestRaw {
fmt.Fprintf(tw, "%s:\t%s\n", kv.Name, kv.Value)
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
printTable(dockerCli.Out(), out.BuildArgs, "Build Arg")
printTable(dockerCli.Out(), out.Labels, "Label")
if len(out.Materials) > 0 {
fmt.Fprintln(dockerCli.Out(), "Materials:")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "URI\tDIGEST\n")
for _, m := range out.Materials {
fmt.Fprintf(tw, "%s\t%s\n", m.URI, strings.Join(m.Digests, ", "))
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
}
if len(out.Attachments) > 0 {
fmt.Fprintf(tw, "Attachments:\n")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "DIGEST\tPLATFORM\tTYPE\n")
for _, a := range out.Attachments {
fmt.Fprintf(tw, "%s\t%s\t%s\n", a.Digest, a.Platform, a.Type)
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
}
if out.Error != nil {
if out.Error.Sources != nil {
fmt.Fprint(dockerCli.Out(), string(out.Error.Sources))
}
if len(out.Error.Logs) > 0 {
fmt.Fprintln(dockerCli.Out(), "Logs:")
fmt.Fprintf(dockerCli.Out(), "> => %s:\n", out.Error.Name)
for _, l := range out.Error.Logs {
fmt.Fprintln(dockerCli.Out(), "> "+l)
}
fmt.Fprintln(dockerCli.Out())
}
if len(out.Error.Stack) > 0 {
if debug.IsEnabled() {
fmt.Fprintf(dockerCli.Out(), "\n%s\n", out.Error.Stack)
} else {
fmt.Fprintf(dockerCli.Out(), "Enable --debug to see stack traces for error\n")
}
}
}
fmt.Fprintf(dockerCli.Out(), "Print build logs: docker buildx history logs %s\n", rec.Ref)
fmt.Fprintf(dockerCli.Out(), "View build in Docker Desktop: %s\n", desktop.BuildURL(fmt.Sprintf("%s/%s/%s", rec.node.Builder, rec.node.Name, rec.Ref)))
return nil
}
func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options inspectOptions
cmd := &cobra.Command{
Use: "inspect [OPTIONS] [REF]",
Short: "Inspect a build",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
cmd.AddCommand(
attachmentCmd(dockerCli, rootOpts),
)
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.PrettyFormatKey, "Format the output")
return cmd
}
func loadVertexLogs(ctx context.Context, c *client.Client, ref string, dgst digest.Digest, limit int) (string, []string, error) {
st, err := c.ControlClient().Status(ctx, &controlapi.StatusRequest{
Ref: ref,
})
if err != nil {
return "", nil, err
}
var name string
var logs []string
lastState := map[int]int{}
loop0:
for {
select {
case <-ctx.Done():
st.CloseSend()
return "", nil, context.Cause(ctx)
default:
ev, err := st.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break loop0
}
return "", nil, err
}
ss := client.NewSolveStatus(ev)
for _, v := range ss.Vertexes {
if v.Digest == dgst {
name = v.Name
break
}
}
for _, l := range ss.Logs {
if l.Vertex == dgst {
parts := bytes.Split(l.Data, []byte("\n"))
for i, p := range parts {
var wrote bool
if i == 0 {
idx, ok := lastState[l.Stream]
if ok && idx != -1 {
logs[idx] = logs[idx] + string(p)
wrote = true
}
}
if !wrote {
if len(p) > 0 {
logs = append(logs, string(p))
}
lastState[l.Stream] = len(logs) - 1
}
if i == len(parts)-1 && len(p) == 0 {
lastState[l.Stream] = -1
}
}
}
}
}
}
if limit > 0 && len(logs) > limit {
logs = logs[len(logs)-limit:]
}
return name, logs, nil
}
type attachment struct {
platform *ocispecs.Platform
descr ocispecs.Descriptor
}
func allAttachments(ctx context.Context, store content.Store, rec historyRecord) ([]attachment, error) {
var attachments []attachment
if rec.Result != nil {
for _, a := range rec.Result.Attestations {
attachments = append(attachments, attachment{
descr: ociDesc(a),
})
}
for _, r := range rec.Result.Results {
attachments = append(attachments, walkAttachments(ctx, store, ociDesc(r), nil)...)
}
}
for key, ri := range rec.Results {
p, err := platforms.Parse(key)
if err != nil {
return nil, err
}
for _, a := range ri.Attestations {
attachments = append(attachments, attachment{
platform: &p,
descr: ociDesc(a),
})
}
for _, r := range ri.Results {
attachments = append(attachments, walkAttachments(ctx, store, ociDesc(r), &p)...)
}
}
slices.SortFunc(attachments, func(a, b attachment) int {
pCmp := 0
if a.platform == nil && b.platform != nil {
return -1
} else if a.platform != nil && b.platform == nil {
return 1
} else if a.platform != nil && b.platform != nil {
pCmp = cmp.Compare(platforms.FormatAll(*a.platform), platforms.FormatAll(*b.platform))
}
return cmp.Or(
pCmp,
cmp.Compare(descrType(a.descr), descrType(b.descr)),
)
})
return attachments, nil
}
func walkAttachments(ctx context.Context, store content.Store, desc ocispecs.Descriptor, platform *ocispecs.Platform) []attachment {
_, err := store.Info(ctx, desc.Digest)
if err != nil {
return nil
}
var out []attachment
if desc.Annotations["vnd.docker.reference.type"] != "attestation-manifest" {
out = append(out, attachment{platform: platform, descr: desc})
}
if desc.MediaType != ocispecs.MediaTypeImageIndex && desc.MediaType != images.MediaTypeDockerSchema2ManifestList {
return out
}
dt, err := content.ReadBlob(ctx, store, desc)
if err != nil {
return out
}
var idx ocispecs.Index
if err := json.Unmarshal(dt, &idx); err != nil {
return out
}
for _, d := range idx.Manifests {
p := platform
if d.Platform != nil {
p = d.Platform
}
out = append(out, walkAttachments(ctx, store, d, p)...)
}
return out
}
func ociDesc(in *controlapi.Descriptor) ocispecs.Descriptor {
return ocispecs.Descriptor{
MediaType: in.MediaType,
Digest: digest.Digest(in.Digest),
Size: in.Size,
Annotations: in.Annotations,
}
}
func descrType(desc ocispecs.Descriptor) string {
if typ, ok := desc.Annotations["in-toto.io/predicate-type"]; ok {
return typ
}
return desc.MediaType
}
func tryParseValue[T any](s string, errs *[]string, f func(string) (T, error)) (T, bool) {
v, err := f(s)
if err != nil {
errStr := fmt.Sprintf("failed to parse %s: (%v)", s, err)
*errs = append(*errs, errStr)
}
return v, true
}
func printTable(w io.Writer, kvs []keyValueOutput, title string) {
if len(kvs) == 0 {
return
}
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "%s\tVALUE\n", strings.ToUpper(title))
for _, k := range kvs {
fmt.Fprintf(tw, "%s\t%s\n", k.Name, k.Value)
}
tw.Flush()
fmt.Fprintln(w)
}
func readKeyValues(attrs map[string]string, prefix string) []keyValueOutput {
var out []keyValueOutput
for k, v := range attrs {
if strings.HasPrefix(k, prefix) {
out = append(out, keyValueOutput{
Name: strings.TrimPrefix(k, prefix),
Value: v,
})
}
}
if len(out) == 0 {
return nil
}
slices.SortFunc(out, func(a, b keyValueOutput) int {
return cmp.Compare(a.Name, b.Name)
})
return out
}
func digestSetToDigests(ds slsa.DigestSet) []string {
var out []string
for k, v := range ds {
out = append(out, fmt.Sprintf("%s:%s", k, v))
}
return out
}

View File

@@ -1,145 +0,0 @@
package history
import (
"context"
"io"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
intoto "github.com/in-toto/in-toto-golang/in_toto"
slsa02 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v0.2"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type attachmentOptions struct {
builder string
typ string
platform string
ref string
digest digest.Digest
}
func runAttachment(ctx context.Context, dockerCli command.Cli, opts attachmentOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
store := proxy.NewContentStore(c.ContentClient())
if opts.digest != "" {
ra, err := store.ReaderAt(ctx, ocispecs.Descriptor{Digest: opts.digest})
if err != nil {
return err
}
_, err = io.Copy(dockerCli.Out(), io.NewSectionReader(ra, 0, ra.Size()))
return err
}
attachments, err := allAttachments(ctx, store, *rec)
if err != nil {
return err
}
typ := opts.typ
switch typ {
case "index":
typ = ocispecs.MediaTypeImageIndex
case "manifest":
typ = ocispecs.MediaTypeImageManifest
case "image":
typ = ocispecs.MediaTypeImageConfig
case "provenance":
typ = slsa02.PredicateSLSAProvenance
case "sbom":
typ = intoto.PredicateSPDX
}
for _, a := range attachments {
if opts.platform != "" && (a.platform == nil || platforms.FormatAll(*a.platform) != opts.platform) {
continue
}
if typ != "" && descrType(a.descr) != typ {
continue
}
ra, err := store.ReaderAt(ctx, a.descr)
if err != nil {
return err
}
_, err = io.Copy(dockerCli.Out(), io.NewSectionReader(ra, 0, ra.Size()))
return err
}
return errors.Errorf("no matching attachment found for ref %q", opts.ref)
}
func attachmentCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options attachmentOptions
cmd := &cobra.Command{
Use: "attachment [OPTIONS] REF [DIGEST]",
Short: "Inspect a build attachment",
Args: cobra.RangeArgs(1, 2),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
if len(args) > 1 {
dgst, err := digest.Parse(args[1])
if err != nil {
return errors.Wrapf(err, "invalid digest %q", args[1])
}
options.digest = dgst
}
if options.digest == "" && options.platform == "" && options.typ == "" {
return errors.New("at least one of --type, --platform or DIGEST must be specified")
}
options.builder = *rootOpts.Builder
return runAttachment(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.typ, "type", "", "Type of attachment")
flags.StringVar(&options.platform, "platform", "", "Platform of attachment")
return cmd
}

View File

@@ -1,117 +0,0 @@
package history
import (
"context"
"io"
"os"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type logsOptions struct {
builder string
ref string
progress string
}
func runLogs(ctx context.Context, dockerCli command.Cli, opts logsOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
cl, err := c.ControlClient().Status(ctx, &controlapi.StatusRequest{
Ref: rec.Ref,
})
if err != nil {
return err
}
var mode progressui.DisplayMode = progressui.DisplayMode(opts.progress)
if mode == progressui.AutoMode {
mode = progressui.PlainMode
}
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, mode)
if err != nil {
return err
}
loop0:
for {
select {
case <-ctx.Done():
cl.CloseSend()
return context.Cause(ctx)
default:
ev, err := cl.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break loop0
}
return err
}
printer.Write(client.NewSolveStatus(ev))
}
}
return printer.Wait()
}
func logsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options logsOptions
cmd := &cobra.Command{
Use: "logs [OPTIONS] [REF]",
Short: "Print the logs of a build",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runLogs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.progress, "progress", "plain", "Set type of progress output (plain, rawjson, tty)")
return cmd
}

View File

@@ -1,234 +0,0 @@
package history
import (
"context"
"encoding/json"
"fmt"
"os"
"slices"
"time"
"github.com/containerd/console"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/docker/go-units"
"github.com/spf13/cobra"
)
const (
lsHeaderBuildID = "BUILD ID"
lsHeaderName = "NAME"
lsHeaderStatus = "STATUS"
lsHeaderCreated = "CREATED AT"
lsHeaderDuration = "DURATION"
lsHeaderLink = ""
lsDefaultTableFormat = "table {{.Ref}}\t{{.Name}}\t{{.Status}}\t{{.CreatedAt}}\t{{.Duration}}\t{{.Link}}"
headerKeyTimestamp = "buildkit-current-timestamp"
)
type lsOptions struct {
builder string
format string
noTrunc bool
}
func runLs(ctx context.Context, dockerCli command.Cli, opts lsOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
out, err := queryRecords(ctx, "", nodes, nil)
if err != nil {
return err
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
for i, rec := range out {
st, _ := ls.ReadRef(rec.node.Builder, rec.node.Name, rec.Ref)
rec.name = buildName(rec.FrontendAttrs, st)
out[i] = rec
}
return lsPrint(dockerCli, out, opts)
}
func lsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options lsOptions
cmd := &cobra.Command{
Use: "ls",
Short: "List build records",
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runLs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
return cmd
}
func lsPrint(dockerCli command.Cli, records []historyRecord, in lsOptions) error {
if in.format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(in.format),
Trunc: !in.noTrunc,
}
slices.SortFunc(records, func(a, b historyRecord) int {
if a.CompletedAt == nil && b.CompletedAt != nil {
return -1
}
if a.CompletedAt != nil && b.CompletedAt == nil {
return 1
}
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
})
var term bool
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
term = true
}
render := func(format func(subContext formatter.SubContext) error) error {
for _, r := range records {
if err := format(&lsContext{
format: formatter.Format(in.format),
isTerm: term,
trunc: !in.noTrunc,
record: &r,
}); err != nil {
return err
}
}
return nil
}
lsCtx := lsContext{
isTerm: term,
trunc: !in.noTrunc,
}
lsCtx.Header = formatter.SubHeaderContext{
"Ref": lsHeaderBuildID,
"Name": lsHeaderName,
"Status": lsHeaderStatus,
"CreatedAt": lsHeaderCreated,
"Duration": lsHeaderDuration,
"Link": lsHeaderLink,
}
return ctx.Write(&lsCtx, render)
}
type lsContext struct {
formatter.HeaderContext
isTerm bool
trunc bool
format formatter.Format
record *historyRecord
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
m := map[string]any{
"ref": c.FullRef(),
"name": c.Name(),
"status": c.Status(),
"created_at": c.record.CreatedAt.AsTime().Format(time.RFC3339Nano),
"total_steps": c.record.NumTotalSteps,
"completed_steps": c.record.NumCompletedSteps,
"cached_steps": c.record.NumCachedSteps,
}
if c.record.CompletedAt != nil {
m["completed_at"] = c.record.CompletedAt.AsTime().Format(time.RFC3339Nano)
}
return json.Marshal(m)
}
func (c *lsContext) Ref() string {
return c.record.Ref
}
func (c *lsContext) FullRef() string {
return fmt.Sprintf("%s/%s/%s", c.record.node.Builder, c.record.node.Name, c.record.Ref)
}
func (c *lsContext) Name() string {
name := c.record.name
if c.trunc && c.format.IsTable() {
return trimBeginning(name, 36)
}
return name
}
func (c *lsContext) Status() string {
if c.record.CompletedAt != nil {
if c.record.Error != nil {
return "Error"
}
return "Completed"
}
return "Running"
}
func (c *lsContext) CreatedAt() string {
return units.HumanDuration(time.Since(c.record.CreatedAt.AsTime())) + " ago"
}
func (c *lsContext) Duration() string {
lastTime := c.record.currentTimestamp
if c.record.CompletedAt != nil {
tm := c.record.CompletedAt.AsTime()
lastTime = &tm
}
if lastTime == nil {
return ""
}
v := formatDuration(lastTime.Sub(c.record.CreatedAt.AsTime()))
if c.record.CompletedAt == nil {
v += "+"
}
return v
}
func (c *lsContext) Link() string {
url := desktop.BuildURL(c.FullRef())
if c.format.IsTable() {
if c.isTerm {
return desktop.ANSIHyperlink(url, "Open")
}
return ""
}
return url
}

View File

@@ -1,73 +0,0 @@
package history
import (
"context"
"fmt"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type openOptions struct {
builder string
ref string
}
func runOpen(ctx context.Context, dockerCli command.Cli, opts openOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
url := desktop.BuildURL(fmt.Sprintf("%s/%s/%s", rec.node.Builder, rec.node.Name, rec.Ref))
return browser.OpenURL(url)
}
func openCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options openOptions
cmd := &cobra.Command{
Use: "open [OPTIONS] [REF]",
Short: "Open a build in Docker Desktop",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runOpen(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
return cmd
}

View File

@@ -1,151 +0,0 @@
package history
import (
"context"
"io"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/hashicorp/go-multierror"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builder string
refs []string
all bool
}
func runRm(ctx context.Context, dockerCli command.Cli, opts rmOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
errs := make([][]error, len(opts.refs))
for i := range errs {
errs[i] = make([]error, len(nodes))
}
eg, ctx := errgroup.WithContext(ctx)
for i, node := range nodes {
node := node
eg.Go(func() error {
if node.Driver == nil {
return nil
}
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
refs := opts.refs
if opts.all {
serv, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
EarlyExit: true,
})
if err != nil {
return err
}
defer serv.CloseSend()
for {
resp, err := serv.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
return err
}
if resp.Type == controlapi.BuildHistoryEventType_COMPLETE {
refs = append(refs, resp.Record.Ref)
}
}
}
for j, ref := range refs {
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: ref,
Delete: true,
})
if opts.all {
if err != nil {
return err
}
} else {
errs[j][i] = err
}
}
return nil
})
}
if err := eg.Wait(); err != nil {
return err
}
var out []error
loop0:
for _, nodeErrs := range errs {
var nodeErr error
for _, err1 := range nodeErrs {
if err1 == nil {
continue loop0
}
if nodeErr == nil {
nodeErr = err1
} else {
nodeErr = multierror.Append(nodeErr, err1)
}
}
out = append(out, nodeErr)
}
if len(out) == 0 {
return nil
}
if len(out) == 1 {
return out[0]
}
return multierror.Append(out[0], out[1:]...)
}
func rmCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [REF...]",
Short: "Remove build records",
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 && !options.all {
return errors.New("rm requires at least one argument")
}
if len(args) > 0 && options.all {
return errors.New("rm requires either --all or at least one argument")
}
options.refs = args
options.builder = *rootOpts.Builder
return runRm(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.BoolVar(&options.all, "all", false, "Remove all build records")
return cmd
}

View File

@@ -1,32 +0,0 @@
package history
import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
type RootOptions struct {
Builder *string
}
func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "history",
Short: "Commands to work on build records",
ValidArgsFunction: completion.Disable,
RunE: rootcmd.RunE,
}
cmd.AddCommand(
lsCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
logsCmd(dockerCli, opts),
inspectCmd(dockerCli, opts),
openCmd(dockerCli, opts),
traceCmd(dockerCli, opts),
importCmd(dockerCli, opts),
)
return cmd
}

View File

@@ -1,228 +0,0 @@
package history
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"os"
"time"
"github.com/containerd/console"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/otelutil"
"github.com/docker/buildx/util/otelutil/jaeger"
"github.com/docker/cli/cli/command"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
jaegerui "github.com/tonistiigi/jaeger-ui-rest"
)
type traceOptions struct {
builder string
ref string
addr string
compare string
}
func loadTrace(ctx context.Context, ref string, nodes []builder.Node) (string, []byte, error) {
recs, err := queryRecords(ctx, ref, nodes, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return "", nil, err
}
if len(recs) == 0 {
if ref == "" {
return "", nil, errors.New("no records found")
}
return "", nil, errors.Errorf("no record found for ref %q", ref)
}
rec := &recs[0]
if rec.CompletedAt == nil {
return "", nil, errors.Errorf("build %q is not completed, only completed builds can be traced", rec.Ref)
}
if rec.Trace == nil {
// build is complete but no trace yet. try to finalize the trace
time.Sleep(1 * time.Second) // give some extra time for last parts of trace to be written
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return "", nil, err
}
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: rec.Ref,
Finalize: true,
})
if err != nil {
return "", nil, err
}
recs, err := queryRecords(ctx, rec.Ref, []builder.Node{*rec.node}, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return "", nil, err
}
if len(recs) == 0 {
return "", nil, errors.Errorf("build record %q was deleted", rec.Ref)
}
rec = &recs[0]
if rec.Trace == nil {
return "", nil, errors.Errorf("build record %q is missing a trace", rec.Ref)
}
}
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return "", nil, err
}
store := proxy.NewContentStore(c.ContentClient())
ra, err := store.ReaderAt(ctx, ocispecs.Descriptor{
Digest: digest.Digest(rec.Trace.Digest),
MediaType: rec.Trace.MediaType,
Size: rec.Trace.Size,
})
if err != nil {
return "", nil, err
}
spans, err := otelutil.ParseSpanStubs(io.NewSectionReader(ra, 0, ra.Size()))
if err != nil {
return "", nil, err
}
wrapper := struct {
Data []jaeger.Trace `json:"data"`
}{
Data: spans.JaegerData().Data,
}
if len(wrapper.Data) == 0 {
return "", nil, errors.New("no trace data")
}
buf := &bytes.Buffer{}
enc := json.NewEncoder(buf)
enc.SetIndent("", " ")
if err := enc.Encode(wrapper); err != nil {
return "", nil, err
}
return string(wrapper.Data[0].TraceID), buf.Bytes(), nil
}
func runTrace(ctx context.Context, dockerCli command.Cli, opts traceOptions) error {
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
traceID, data, err := loadTrace(ctx, opts.ref, nodes)
if err != nil {
return err
}
srv := jaegerui.NewServer(jaegerui.Config{})
if err := srv.AddTrace(traceID, bytes.NewReader(data)); err != nil {
return err
}
url := "/trace/" + traceID
if opts.compare != "" {
traceIDcomp, data, err := loadTrace(ctx, opts.compare, nodes)
if err != nil {
return errors.Wrapf(err, "failed to load trace for %s", opts.compare)
}
if err := srv.AddTrace(traceIDcomp, bytes.NewReader(data)); err != nil {
return err
}
url = "/trace/" + traceIDcomp + "..." + traceID
}
var term bool
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
term = true
}
if !term && opts.compare == "" {
fmt.Fprintln(dockerCli.Out(), string(data))
return nil
}
ln, err := net.Listen("tcp", opts.addr)
if err != nil {
return err
}
go func() {
time.Sleep(100 * time.Millisecond)
browser.OpenURL(url)
}()
url = "http://" + ln.Addr().String() + url
fmt.Fprintf(dockerCli.Err(), "Trace available at %s\n", url)
go func() {
<-ctx.Done()
ln.Close()
}()
err = srv.Serve(ln)
if err != nil {
select {
case <-ctx.Done():
return nil
default:
}
}
return err
}
func traceCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options traceOptions
cmd := &cobra.Command{
Use: "trace [OPTIONS] [REF]",
Short: "Show the OpenTelemetry trace of a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runTrace(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.addr, "addr", "127.0.0.1:0", "Address to bind the UI server")
flags.StringVar(&options.compare, "compare", "", "Compare with another build reference")
return cmd
}

View File

@@ -1,221 +0,0 @@
package history
import (
"context"
"fmt"
"io"
"path/filepath"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
func buildName(fattrs map[string]string, ls *localstate.State) string {
var res string
var target, contextPath, dockerfilePath, vcsSource string
if v, ok := fattrs["target"]; ok {
target = v
}
if v, ok := fattrs["context"]; ok {
contextPath = filepath.ToSlash(v)
} else if v, ok := fattrs["vcs:localdir:context"]; ok && v != "." {
contextPath = filepath.ToSlash(v)
}
if v, ok := fattrs["vcs:source"]; ok {
vcsSource = v
}
if v, ok := fattrs["filename"]; ok && v != "Dockerfile" {
dockerfilePath = filepath.ToSlash(v)
}
if v, ok := fattrs["vcs:localdir:dockerfile"]; ok && v != "." {
dockerfilePath = filepath.ToSlash(filepath.Join(v, dockerfilePath))
}
var localPath string
if ls != nil && !build.IsRemoteURL(ls.LocalPath) {
if ls.LocalPath != "" && ls.LocalPath != "-" {
localPath = filepath.ToSlash(ls.LocalPath)
}
if ls.DockerfilePath != "" && ls.DockerfilePath != "-" && ls.DockerfilePath != "Dockerfile" {
dockerfilePath = filepath.ToSlash(ls.DockerfilePath)
}
}
// remove default dockerfile name
const defaultFilename = "/Dockerfile"
hasDefaultFileName := strings.HasSuffix(dockerfilePath, defaultFilename) || dockerfilePath == ""
dockerfilePath = strings.TrimSuffix(dockerfilePath, defaultFilename)
// dockerfile is a subpath of context
if strings.HasPrefix(dockerfilePath, localPath) && len(dockerfilePath) > len(localPath) {
res = dockerfilePath[strings.LastIndex(localPath, "/")+1:]
} else {
// Otherwise, use basename
bpath := localPath
if len(dockerfilePath) > 0 {
bpath = dockerfilePath
}
if len(bpath) > 0 {
lidx := strings.LastIndex(bpath, "/")
res = bpath[lidx+1:]
if !hasDefaultFileName {
if lidx != -1 {
res = filepath.ToSlash(filepath.Join(filepath.Base(bpath[:lidx]), res))
} else {
res = filepath.ToSlash(filepath.Join(filepath.Base(bpath), res))
}
}
}
}
if len(contextPath) > 0 {
res = contextPath
}
if len(target) > 0 {
if len(res) > 0 {
res = res + " (" + target + ")"
} else {
res = target
}
}
if res == "" && vcsSource != "" {
return vcsSource
}
return res
}
func trimBeginning(s string, n int) string {
if len(s) <= n {
return s
}
return ".." + s[len(s)-n+2:]
}
type historyRecord struct {
*controlapi.BuildHistoryRecord
currentTimestamp *time.Time
node *builder.Node
name string
}
type queryOptions struct {
CompletedOnly bool
}
func queryRecords(ctx context.Context, ref string, nodes []builder.Node, opts *queryOptions) ([]historyRecord, error) {
var mu sync.Mutex
var out []historyRecord
var offset *int
if strings.HasPrefix(ref, "^") {
off, err := strconv.Atoi(ref[1:])
if err != nil {
return nil, errors.Wrapf(err, "invalid offset %q", ref)
}
offset = &off
ref = ""
}
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
node := node
eg.Go(func() error {
if node.Driver == nil {
return nil
}
var records []historyRecord
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
serv, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
EarlyExit: true,
Ref: ref,
})
if err != nil {
return err
}
md, err := serv.Header()
if err != nil {
return err
}
var ts *time.Time
if v, ok := md[headerKeyTimestamp]; ok {
t, err := time.Parse(time.RFC3339Nano, v[0])
if err != nil {
return err
}
ts = &t
}
defer serv.CloseSend()
for {
he, err := serv.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
return err
}
if he.Type == controlapi.BuildHistoryEventType_DELETED || he.Record == nil {
continue
}
if opts != nil && opts.CompletedOnly && he.Type != controlapi.BuildHistoryEventType_COMPLETE {
continue
}
records = append(records, historyRecord{
BuildHistoryRecord: he.Record,
currentTimestamp: ts,
node: &node,
})
}
mu.Lock()
out = append(out, records...)
mu.Unlock()
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
slices.SortFunc(out, func(a, b historyRecord) int {
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
})
if offset != nil {
var filtered []historyRecord
for _, r := range out {
if *offset > 0 {
*offset--
continue
}
filtered = append(filtered, r)
break
}
if *offset > 0 {
return nil, errors.Errorf("no completed build found with offset %d", *offset)
}
out = filtered
}
return out, nil
}
func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())
}
return fmt.Sprintf("%dm %2ds", int(d.Minutes()), int(d.Seconds())%60)
}

View File

@@ -9,7 +9,6 @@ import (
 	"github.com/distribution/reference"
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/buildflags"
 	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/buildx/util/progress"
@@ -30,7 +29,6 @@ type createOptions struct {
 	dryrun       bool
 	actionAppend bool
 	progress     string
-	preferIndex  bool
 }

 func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
@@ -42,7 +40,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 		return errors.Errorf("can't push with no tags specified, please set --tag or --dry-run")
 	}

-	fileArgs := make([]string, len(in.files), len(in.files)+len(args))
+	fileArgs := make([]string, len(in.files))
 	for i, f := range in.files {
 		dt, err := os.ReadFile(f)
 		if err != nil {
@@ -155,12 +153,7 @@
 		}
 	}

-	annotations, err := buildflags.ParseAnnotations(in.annotations)
-	if err != nil {
-		return errors.Wrapf(err, "failed to parse annotations")
-	}
-
-	dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
+	dt, desc, err := r.Combine(ctx, srcs, in.annotations)
 	if err != nil {
 		return err
 	}
@@ -173,8 +166,8 @@
 	// new resolver cause need new auth
 	r = imagetools.New(imageopt)

-	ctx2, cancel := context.WithCancelCause(context.TODO())
-	defer func() { cancel(errors.WithStack(context.Canceled)) }()
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
 	printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
 	if err != nil {
 		return err
@@ -194,7 +187,7 @@
 			}
 			s := s
 			eg2.Go(func() error {
-				sub.Log(1, fmt.Appendf(nil, "copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String()))
+				sub.Log(1, []byte(fmt.Sprintf("copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String())))
 				return r.Copy(ctx, s, t)
 			})
 		}
@@ -202,7 +195,7 @@
 		if err := eg2.Wait(); err != nil {
 			return err
 		}
-		sub.Log(1, fmt.Appendf(nil, "pushing %s to %s\n", desc.Digest.String(), t.String()))
+		sub.Log(1, []byte(fmt.Sprintf("pushing %s to %s\n", desc.Digest.String(), t.String())))
 		return r.Push(ctx, t, desc, dt)
 	})
 })
@@ -288,9 +281,8 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
 	flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
 	flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
-	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
+	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
 	flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
-	flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")

 	return cmd
 }

View File

@@ -10,12 +10,11 @@ type RootOptions struct {
 	Builder *string
 }

-func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
+func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:               "imagetools",
 		Short:             "Commands to work on images in registry",
 		ValidArgsFunction: completion.Disable,
-		RunE:              rootcmd.RunE,
 	}

 	cmd.AddCommand(

View File

@@ -17,7 +17,6 @@ import (
"github.com/docker/cli/cli/command" "github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug" "github.com/docker/cli/cli/debug"
"github.com/docker/go-units" "github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@@ -35,9 +34,8 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
return err return err
} }
timeoutCtx, cancel := context.WithCancelCause(ctx) timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent defer cancel()
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData()) nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if in.bootstrap { if in.bootstrap {
@@ -115,25 +113,6 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
fmt.Fprintf(w, "\t%s:\t%s\n", k, v) fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
} }
} }
if len(nodes[i].CDIDevices) > 0 {
fmt.Fprintf(w, "Devices:\n")
for _, dev := range nodes[i].CDIDevices {
fmt.Fprintf(w, "\tName:\t%s\n", dev.Name)
if dev.OnDemand {
fmt.Fprintf(w, "\tOn-Demand:\t%v\n", dev.OnDemand)
} else {
fmt.Fprintf(w, "\tAutomatically allowed:\t%v\n", dev.AutoAllow)
}
if len(dev.Annotations) > 0 {
fmt.Fprintf(w, "\tAnnotations:\n")
for k, v := range dev.Annotations {
fmt.Fprintf(w, "\t\t%s:\t%s\n", k, v)
}
}
}
}
for ri, rule := range nodes[i].GCPolicy { for ri, rule := range nodes[i].GCPolicy {
fmt.Fprintf(w, "GC Policy rule#%d:\n", ri) fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
fmt.Fprintf(w, "\tAll:\t%v\n", rule.All) fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
@@ -143,20 +122,8 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
if rule.KeepDuration > 0 { if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String()) fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
} }
if rule.ReservedSpace > 0 { if rule.KeepBytes > 0 {
fmt.Fprintf(w, "\tReserved Space:\t%s\n", units.BytesSize(float64(rule.ReservedSpace))) fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
}
if rule.MaxUsedSpace > 0 {
fmt.Fprintf(w, "\tMax Used Space:\t%s\n", units.BytesSize(float64(rule.MaxUsedSpace)))
}
if rule.MinFreeSpace > 0 {
fmt.Fprintf(w, "\tMin Free Space:\t%s\n", units.BytesSize(float64(rule.MinFreeSpace)))
}
}
for f, dt := range nodes[i].Files {
fmt.Fprintf(w, "File#%s:\n", f)
for _, line := range strings.Split(string(dt), "\n") {
fmt.Fprintf(w, "\t> %s\n", line)
} }
} }
} }
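
The GC policy output above renders every size field through go-units. A tiny sketch of the helper it relies on:

    package main

    import (
        "fmt"

        "github.com/docker/go-units"
    )

    func main() {
        // BytesSize prints a byte count in human-readable binary units,
        // as the Reserved/Max/Min space fields are shown above.
        fmt.Println(units.BytesSize(512 * 1024 * 1024))       // 512MiB
        fmt.Println(units.BytesSize(10 * 1024 * 1024 * 1024)) // 10GiB
    }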


@@ -15,7 +15,7 @@ import (
type installOptions struct { type installOptions struct {
} }
func runInstall(_ command.Cli, _ installOptions) error { func runInstall(dockerCli command.Cli, in installOptions) error {
dir := config.Dir() dir := config.Dir()
if err := os.MkdirAll(dir, 0755); err != nil { if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "could not create docker config") return errors.Wrap(err, "could not create docker config")


@@ -8,7 +8,6 @@ import (
"strings" "strings"
"time" "time"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder" "github.com/docker/buildx/builder"
"github.com/docker/buildx/store" "github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil" "github.com/docker/buildx/store/storeutil"
@@ -18,7 +17,6 @@ import (
"github.com/docker/cli/cli" "github.com/docker/cli/cli"
"github.com/docker/cli/cli/command" "github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter" "github.com/docker/cli/cli/command/formatter"
"github.com/pkg/errors"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -37,8 +35,7 @@ const (
) )
type lsOptions struct { type lsOptions struct {
format string format string
noTrunc bool
} }
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error { func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
@@ -58,9 +55,8 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err return err
} }
timeoutCtx, cancel := context.WithCancelCause(ctx) timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent defer cancel()
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx) eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders { for _, b := range builders {
@@ -76,7 +72,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err return err
} }
if hasErrors, err := lsPrint(dockerCli, current, builders, in); err != nil { if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
return err return err
} else if hasErrors { } else if hasErrors {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n") _, _ = fmt.Fprintf(dockerCli.Err(), "\n")
@@ -111,7 +107,6 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
flags := cmd.Flags() flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output") flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
// hide builder persistent flag for this command // hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder") cobrautil.HideInheritedFlags(cmd, "builder")
@@ -119,15 +114,14 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
return cmd return cmd
} }
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, in lsOptions) (hasErrors bool, _ error) { func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if in.format == formatter.TableFormatKey { if format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat format = lsDefaultTableFormat
} }
ctx := formatter.Context{ ctx := formatter.Context{
Output: dockerCli.Out(), Output: dockerCli.Out(),
Format: formatter.Format(in.format), Format: formatter.Format(format),
Trunc: !in.noTrunc,
} }
sort.SliceStable(builders, func(i, j int) bool { sort.SliceStable(builders, func(i, j int) bool {
@@ -144,12 +138,11 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
render := func(format func(subContext formatter.SubContext) error) error { render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders { for _, b := range builders {
if err := format(&lsContext{ if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{ Builder: &lsBuilder{
Builder: b, Builder: b,
Current: b.Name == current.Name, Current: b.Name == current.Name,
}, },
format: ctx.Format,
}); err != nil { }); err != nil {
return err return err
} }
@@ -159,9 +152,6 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
} }
continue continue
} }
if ctx.Format.IsJSON() {
continue
}
for _, n := range b.Nodes() { for _, n := range b.Nodes() {
if n.Err != nil { if n.Err != nil {
if ctx.Format.IsTable() { if ctx.Format.IsTable() {
@@ -170,7 +160,6 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
} }
if err := format(&lsContext{ if err := format(&lsContext{
format: ctx.Format, format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{ Builder: &lsBuilder{
Builder: b, Builder: b,
Current: b.Name == current.Name, Current: b.Name == current.Name,
@@ -207,7 +196,6 @@ type lsContext struct {
Builder *lsBuilder Builder *lsBuilder
format formatter.Format format formatter.Format
trunc bool
node builder.Node node builder.Node
} }
@@ -273,11 +261,7 @@ func (c *lsContext) Platforms() string {
if c.node.Name == "" { if c.node.Name == "" {
return "" return ""
} }
pfs := platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms) return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
if c.trunc && c.format.IsTable() {
return truncPlatforms(pfs, 4).String()
}
return strings.Join(pfs, ", ")
} }
func (c *lsContext) Error() string { func (c *lsContext) Error() string {
@@ -288,133 +272,3 @@ func (c *lsContext) Error() string {
} }
return "" return ""
} }
var truncMajorPlatforms = []string{
"linux/amd64",
"linux/arm64",
"linux/arm",
"linux/ppc64le",
"linux/s390x",
"linux/riscv64",
"linux/mips64",
}
type truncatedPlatforms struct {
res map[string][]string
input []string
max int
}
func (tp truncatedPlatforms) List() map[string][]string {
return tp.res
}
func (tp truncatedPlatforms) String() string {
var out []string
var count int
var keys []string
for k := range tp.res {
keys = append(keys, k)
}
sort.Strings(keys)
seen := make(map[string]struct{})
for _, mpf := range truncMajorPlatforms {
if tpf, ok := tp.res[mpf]; ok {
seen[mpf] = struct{}{}
if len(tpf) == 1 {
out = append(out, tpf[0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tpf {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tpf)))
count += len(tpf)
}
}
}
for _, mpf := range keys {
if len(out) >= tp.max {
break
}
if _, ok := seen[mpf]; ok {
continue
}
if len(tp.res[mpf]) == 1 {
out = append(out, tp.res[mpf][0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tp.res[mpf] {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tp.res[mpf])))
count += len(tp.res[mpf])
}
}
left := len(tp.input) - count
if left > 0 {
out = append(out, fmt.Sprintf("(%d more)", left))
}
return strings.Join(out, ", ")
}
func truncPlatforms(pfs []string, max int) truncatedPlatforms {
res := make(map[string][]string)
for _, mpf := range truncMajorPlatforms {
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
if pp.OS+"/"+pp.Architecture == mpf {
res[mpf] = append(res[mpf], pf)
}
}
}
left := make(map[string][]string)
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
ppf := strings.TrimSuffix(pp.OS+"/"+pp.Architecture, "*")
if _, ok := res[ppf]; !ok {
left[ppf] = append(left[ppf], pf)
}
}
for k, v := range left {
res[k] = v
}
return truncatedPlatforms{
res: res,
input: pfs,
max: max,
}
}


@@ -1,174 +0,0 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestTruncPlatforms(t *testing.T) {
tests := []struct {
name string
platforms []string
max int
expectedList map[string][]string
expectedOut string
}{
{
name: "arm64 preferred and emulated",
platforms: []string{"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64*",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "riscv64 preferred only",
platforms: []string{"linux/riscv64*"},
max: 4,
expectedList: map[string][]string{
"linux/riscv64": {
"linux/riscv64*",
},
},
expectedOut: "linux/riscv64*",
},
{
name: "amd64 no preferred and emulated",
platforms: []string{"linux/amd64", "linux/amd64/v2", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
}},
expectedOut: "linux/amd64 (+3), linux/arm64, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "amd64 no preferred",
platforms: []string{"linux/amd64", "linux/386"},
max: 4,
expectedList: map[string][]string{
"linux/386": {
"linux/386",
},
"linux/amd64": {
"linux/amd64",
},
},
expectedOut: "linux/amd64, linux/386",
},
{
name: "arm64 no preferred",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
},
expectedOut: "linux/arm64, linux/arm (+2)",
},
{
name: "all preferred",
platforms: []string{"darwin/arm64*", "linux/arm64*", "linux/arm/v5*", "linux/arm/v6*", "linux/arm/v7*", "windows/arm64*"},
max: 4,
expectedList: map[string][]string{
"darwin/arm64": {
"darwin/arm64*",
},
"linux/arm": {
"linux/arm/v5*",
"linux/arm/v6*",
"linux/arm/v7*",
},
"linux/arm64": {
"linux/arm64*",
},
"windows/arm64": {
"windows/arm64*",
},
},
expectedOut: "linux/arm64*, linux/arm* (+3), darwin/arm64*, windows/arm64*",
},
{
name: "no major preferred",
platforms: []string{"linux/amd64/v2*", "linux/arm/v6*", "linux/mips64le*", "linux/amd64", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64", "linux/arm/v7"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64/v2*",
"linux/amd64",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v6*",
"linux/arm/v7",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64* (+3), linux/arm64, linux/arm* (+2), linux/ppc64le, (5 more)",
},
{
name: "no major with multiple variants",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6", "linux/mips64le/softfloat", "linux/mips64le/hardfloat"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/mips64le": {
"linux/mips64le/softfloat",
"linux/mips64le/hardfloat",
},
},
expectedOut: "linux/arm64, linux/arm (+2), linux/mips64le (+2)",
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
tpfs := truncPlatforms(tt.platforms, tt.max)
assert.Equal(t, tt.expectedList, tpfs.List())
assert.Equal(t, tt.expectedOut, tpfs.String())
})
}
}


@@ -16,23 +16,18 @@ import (
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/go-units" "github.com/docker/go-units"
"github.com/moby/buildkit/client" "github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
pb "github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
type pruneOptions struct { type pruneOptions struct {
builder string builder string
all bool all bool
filter opts.FilterOpt filter opts.FilterOpt
reservedSpace opts.MemBytes keepStorage opts.MemBytes
maxUsedSpace opts.MemBytes force bool
minFreeSpace opts.MemBytes verbose bool
force bool
verbose bool
} }
const ( const (
@@ -110,19 +105,8 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
if err != nil { if err != nil {
return err return err
} }
// check if the client supports newer prune options
if opts.maxUsedSpace.Value() != 0 || opts.minFreeSpace.Value() != 0 {
caps, err := loadLLBCaps(ctx, c)
if err != nil {
return errors.Wrap(err, "failed to load buildkit capabilities for prune")
}
if caps.Supports(pb.CapGCFreeSpaceFilter) != nil {
return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
}
}
popts := []client.PruneOption{ popts := []client.PruneOption{
client.WithKeepOpt(pi.KeepDuration, opts.reservedSpace.Value(), opts.maxUsedSpace.Value(), opts.minFreeSpace.Value()), client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
client.WithFilter(pi.Filter), client.WithFilter(pi.Filter),
} }
if opts.all { if opts.all {
@@ -147,17 +131,6 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
return nil return nil
} }
func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
var caps apicaps.CapSet
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
caps = c.BuildOpts().LLBCaps
return nil, nil
}, nil)
return caps, err
}
func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command { func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options := pruneOptions{filter: opts.NewFilterOpt()} options := pruneOptions{filter: opts.NewFilterOpt()}
@@ -175,15 +148,10 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags := cmd.Flags() flags := cmd.Flags()
flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images") flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`) flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache") flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output") flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation") flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to max-storage")
return cmd return cmd
} }
@@ -227,8 +195,6 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
case 1: case 1:
if filterKey == "id" { if filterKey == "id" {
filters = append(filters, filterKey+"~="+values[0]) filters = append(filters, filterKey+"~="+values[0])
} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
filters = append(filters, filterKey+"="+values[0])
} else { } else {
filters = append(filters, filterKey+"=="+values[0]) filters = append(filters, filterKey+"=="+values[0])
} }
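
The prune hunk above gates the newer disk-space flags behind a BuildKit capability check, using an internal no-op build just to read the daemon's LLB capability set. A sketch of that probe in isolation (error handling trimmed; the daemon address is an assumption, adjust to your setup):

    package main

    import (
        "context"
        "fmt"

        "github.com/moby/buildkit/client"
        gateway "github.com/moby/buildkit/frontend/gateway/client"
        pb "github.com/moby/buildkit/solver/pb"
        "github.com/moby/buildkit/util/apicaps"
    )

    func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
        var caps apicaps.CapSet
        // An Internal solve runs no real build; the callback only captures
        // the capability set advertised by the gateway.
        _, err := c.Build(ctx, client.SolveOpt{Internal: true}, "buildx",
            func(ctx context.Context, gc gateway.Client) (*gateway.Result, error) {
                caps = gc.BuildOpts().LLBCaps
                return nil, nil
            }, nil)
        return caps, err
    }

    func main() {
        ctx := context.Background()
        c, err := client.New(ctx, "unix:///run/buildkit/buildkitd.sock") // assumed address
        if err != nil {
            panic(err)
        }
        caps, err := loadLLBCaps(ctx, c)
        if err != nil {
            panic(err)
        }
        fmt.Println("free-space filter supported:", caps.Supports(pb.CapGCFreeSpaceFilter) == nil)
    }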


@@ -150,9 +150,8 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
return err return err
} }
timeoutCtx, cancel := context.WithCancelCause(ctx) timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent defer cancel()
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx) eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders { for _, b := range builders {


@@ -1,15 +1,12 @@
package commands package commands
import ( import (
"fmt"
"os" "os"
debugcmd "github.com/docker/buildx/commands/debug" debugcmd "github.com/docker/buildx/commands/debug"
historycmd "github.com/docker/buildx/commands/history"
imagetoolscmd "github.com/docker/buildx/commands/imagetools" imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote" "github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion" "github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil" "github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation" "github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli" "github.com/docker/cli/cli"
@@ -23,7 +20,6 @@ import (
) )
func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command { func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command {
var opt rootOptions
cmd := &cobra.Command{ cmd := &cobra.Command{
Short: "Docker Buildx", Short: "Docker Buildx",
Long: `Extended build capabilities with BuildKit`, Long: `Extended build capabilities with BuildKit`,
@@ -35,25 +31,12 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
HiddenDefaultCmd: true, HiddenDefaultCmd: true,
}, },
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if opt.debug {
debug.Enable()
}
cmd.SetContext(appcontext.Context()) cmd.SetContext(appcontext.Context())
if !isPlugin { if !isPlugin {
return nil return nil
} }
return plugin.PersistentPreRunE(cmd, args) return plugin.PersistentPreRunE(cmd, args)
}, },
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return cmd.Help()
}
_ = cmd.Help()
return cli.StatusError{
StatusCode: 1,
Status: fmt.Sprintf("ERROR: unknown command: %q", args[0]),
}
},
} }
if !isPlugin { if !isPlugin {
// match plugin behavior for standalone mode // match plugin behavior for standalone mode
@@ -63,6 +46,11 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
cmd.TraverseChildren = true cmd.TraverseChildren = true
cmd.DisableFlagsInUseLine = true cmd.DisableFlagsInUseLine = true
cli.DisableFlagsInUseLine(cmd) cli.DisableFlagsInUseLine(cmd)
// DEBUG=1 should perform the same as --debug at the docker root level
if debug.IsEnabled() {
debug.Enable()
}
} }
logrus.SetFormatter(&logutil.Formatter{}) logrus.SetFormatter(&logutil.Formatter{})
@@ -75,20 +63,20 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
"using default config store", "using default config store",
)) ))
if !confutil.IsExperimental() { if !isExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n") cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
} }
addCommands(cmd, &opt, dockerCli) addCommands(cmd, dockerCli)
return cmd return cmd
} }
type rootOptions struct { type rootOptions struct {
builder string builder string
debug bool
} }
func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) { func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
opts := &rootOptions{}
rootFlags(opts, cmd.PersistentFlags()) rootFlags(opts, cmd.PersistentFlags())
cmd.AddCommand( cmd.AddCommand(
@@ -106,10 +94,9 @@ func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
versionCmd(dockerCli), versionCmd(dockerCli),
pruneCmd(dockerCli, opts), pruneCmd(dockerCli, opts),
duCmd(dockerCli, opts), duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(cmd, dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}), imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
historycmd.RootCmd(cmd, dockerCli, historycmd.RootOptions{Builder: &opts.builder}),
) )
if confutil.IsExperimental() { if isExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli, cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts), newDebuggableBuild(dockerCli, opts),
)) ))
@@ -124,5 +111,4 @@ func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
func rootFlags(options *rootOptions, flags *pflag.FlagSet) { func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance") flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
flags.BoolVarP(&options.debug, "debug", "D", debug.IsEnabled(), "Enable debug logging")
} }
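
The root command above gains a -D/--debug persistent flag whose default is seeded from debug.IsEnabled(), which in docker/cli reflects the DEBUG environment variable, so DEBUG=1 and --debug behave the same. A minimal cobra sketch of that wiring (flag name and shorthand mirror the hunk; the rest is illustrative):

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    func main() {
        var debugFlag bool
        cmd := &cobra.Command{
            Use: "demo",
            RunE: func(cmd *cobra.Command, args []string) error {
                if debugFlag {
                    fmt.Fprintln(os.Stderr, "debug logging enabled")
                }
                return nil
            },
        }
        // Seed the default from the environment so DEBUG=1 implies --debug.
        cmd.PersistentFlags().BoolVarP(&debugFlag, "debug", "D",
            os.Getenv("DEBUG") != "", "Enable debug logging")
        if err := cmd.Execute(); err != nil {
            os.Exit(1)
        }
    }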


@@ -15,7 +15,7 @@ import (
type uninstallOptions struct { type uninstallOptions struct {
} }
func runUninstall(_ command.Cli, _ uninstallOptions) error { func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
dir := config.Dir() dir := config.Dir()
cfg, err := config.Load(dir) cfg, err := config.Load(dir)
if err != nil { if err != nil {


@@ -46,6 +46,7 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder) return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder)
} }
} }
} }
return errors.Wrapf(err, "failed to find instance %q", in.builder) return errors.Wrapf(err, "failed to find instance %q", in.builder)
} }


@@ -1,22 +1,17 @@
package commands package commands
import ( import (
"bufio"
"context" "context"
"fmt"
"io" "io"
"os"
"runtime"
"strings"
"github.com/docker/cli/cli/streams" "github.com/docker/cli/cli/command"
) )
func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) { func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
done := make(chan struct{}) done := make(chan struct{})
var ok bool var ok bool
go func() { go func() {
ok = promptForConfirmation(ins, out, msg) ok = command.PromptForConfirmation(ins, out, msg)
close(done) close(done)
}() }()
select { select {
@@ -26,32 +21,3 @@ func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool
return ok, nil return ok, nil
} }
} }
// promptForConfirmation requests and checks confirmation from user.
// This will display the provided message followed by ' [y/N] '. If
// the user inputs 'y' or 'Y' it returns true, otherwise false. If no
// message is provided "Are you sure you want to proceed? [y/N] "
// will be used instead.
//
// Copied from github.com/docker/cli since the upstream version changed
// recently with an incompatible change.
//
// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
// for discussion on the issue.
func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
if message == "" {
message = "Are you sure you want to proceed?"
}
message += " [y/N] "
_, _ = fmt.Fprint(outs, message)
// On Windows, force the use of the regular OS stdin stream.
if runtime.GOOS == "windows" {
ins = streams.NewIn(os.Stdin)
}
reader := bufio.NewReader(ins)
answer, _, _ := reader.ReadLine()
return strings.ToLower(string(answer)) == "y"
}
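
The prompt helper above (and the confirmation reader it vendors from docker/cli) follows a common Go pattern: run the blocking read in a goroutine and race it against ctx.Done() so the prompt stays interruptible. A self-contained sketch of the same pattern, assuming a simple newline-terminated reader and treating cancellation as an error:

    package main

    import (
        "bufio"
        "context"
        "fmt"
        "io"
        "os"
        "strings"
        "time"
    )

    func prompt(ctx context.Context, in io.Reader, out io.Writer, msg string) (bool, error) {
        done := make(chan struct{})
        var ok bool
        go func() {
            fmt.Fprint(out, msg+" [y/N] ")
            line, _ := bufio.NewReader(in).ReadString('\n')
            ok = strings.EqualFold(strings.TrimSpace(line), "y")
            close(done)
        }()
        select {
        case <-ctx.Done(): // canceled or timed out: treat as "no"
            return false, ctx.Err()
        case <-done:
            return ok, nil
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        ok, err := prompt(ctx, os.Stdin, os.Stderr, "Remove all build cache?")
        fmt.Println(ok, err)
    }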


@@ -11,7 +11,7 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
func runVersion(_ command.Cli) error { func runVersion(dockerCli command.Cli) error {
fmt.Println(version.Package, version.Version, version.Revision) fmt.Println(version.Package, version.Version, version.Revision)
return nil return nil
} }


@@ -3,6 +3,7 @@ package build
import ( import (
"context" "context"
"io" "io"
"os"
"path/filepath" "path/filepath"
"strings" "strings"
"sync" "sync"
@@ -18,8 +19,9 @@ import (
"github.com/docker/buildx/util/platformutil" "github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command" "github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts" dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/container" "github.com/docker/go-units"
"github.com/moby/buildkit/client" "github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider" "github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors" "github.com/moby/buildkit/util/grpcerrors"
@@ -34,9 +36,9 @@ const defaultTargetName = "default"
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle, // NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can // this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error. // inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) { func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 { if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together") return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
} }
contexts := map[string]build.NamedContext{} contexts := map[string]build.NamedContext{}
@@ -48,40 +50,37 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
Inputs: build.Inputs{ Inputs: build.Inputs{
ContextPath: in.ContextPath, ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName, DockerfilePath: in.DockerfileName,
InStream: build.NewSyncMultiReader(inStream), InStream: inStream,
NamedContexts: contexts, NamedContexts: contexts,
}, },
Ref: in.Ref, Ref: in.Ref,
BuildArgs: in.BuildArgs, BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent, CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts, ExtraHosts: in.ExtraHosts,
Labels: in.Labels, Labels: in.Labels,
NetworkMode: in.NetworkMode, NetworkMode: in.NetworkMode,
NoCache: in.NoCache, NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter, NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull, Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize), ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags, Tags: in.Tags,
Target: in.Target, Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits), Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef, GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
} }
platforms, err := platformutil.Parse(in.Platforms) platforms, err := platformutil.Parse(in.Platforms)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
opts.Platforms = platforms opts.Platforms = platforms
dockerConfig := dockerCli.ConfigFile() dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(authprovider.DockerAuthProviderConfig{ opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
ConfigFile: dockerConfig,
}))
secrets, err := controllerapi.CreateSecrets(in.Secrets) secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
opts.Session = append(opts.Session, secrets) opts.Session = append(opts.Session, secrets)
@@ -91,13 +90,13 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
} }
ssh, err := controllerapi.CreateSSH(sshSpecs) ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
opts.Session = append(opts.Session, ssh) opts.Session = append(opts.Session, ssh)
outputs, _, err := controllerapi.CreateExports(in.Exports) outputs, err := controllerapi.CreateExports(in.Exports)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
if in.ExportPush { if in.ExportPush {
var pushUsed bool var pushUsed bool
@@ -136,9 +135,8 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
annotations, err := buildflags.ParseAnnotations(in.Annotations) annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil { if err != nil {
return nil, nil, nil, errors.Wrap(err, "parse annotations") return nil, nil, err
} }
for _, o := range outputs { for _, o := range outputs {
for k, v := range annotations { for k, v := range annotations {
o.Attrs[k.String()] = v o.Attrs[k.String()] = v
@@ -156,15 +154,14 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
allow, err := buildflags.ParseEntitlements(in.Allow) allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
opts.Allow = allow opts.Allow = allow
if in.CallFunc != nil { if in.PrintFunc != nil {
opts.CallFunc = &build.CallFunc{ opts.PrintFunc = &build.PrintFunc{
Name: in.CallFunc.Name, Name: in.PrintFunc.Name,
Format: in.CallFunc.Format, Format: in.PrintFunc.Format,
IgnoreStatus: in.CallFunc.IgnoreStatus,
} }
} }
@@ -180,28 +177,23 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
builder.WithContextPathHash(contextPathHash), builder.WithContextPathHash(contextPathHash),
) )
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil { if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, nil, errors.Wrapf(err, "failed to update builder last activity time") return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
} }
nodes, err := b.LoadNodes(ctx) nodes, err := b.LoadNodes(ctx)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, err
} }
var inputs *build.Inputs resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
buildOptions := map[string]build.Options{defaultTargetName: opts}
resp, res, err := buildTargets(ctx, dockerCli, nodes, buildOptions, progress, generateResult)
err = wrapBuildError(err, false) err = wrapBuildError(err, false)
if err != nil { if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error. // NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, nil, err return nil, res, err
} }
if i, ok := buildOptions[defaultTargetName]; ok { return resp, res, nil
inputs = &i.Inputs
}
return resp, res, inputs, nil
} }
// buildTargets runs the specified build and returns the result. // buildTargets runs the specified build and returns the result.
@@ -209,14 +201,14 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.Buil
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle, // NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can // this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error. // inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) { func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle var res *build.ResultHandle
var resp map[string]*client.SolveResponse var resp map[string]*client.SolveResponse
var err error var err error
if generateResult { if generateResult {
var mu sync.Mutex var mu sync.Mutex
var idx int var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) { resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock() mu.Lock()
defer mu.Unlock() defer mu.Unlock()
if res == nil || driverIndex < idx { if res == nil || driverIndex < idx {
@@ -224,7 +216,7 @@ func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.No
} }
}) })
} else { } else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress) resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
} }
if err != nil { if err != nil {
return nil, res, err return nil, res, err
@@ -276,9 +268,9 @@ func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.Ul
if u == nil { if u == nil {
return nil return nil
} }
values := make(map[string]*container.Ulimit) values := make(map[string]*units.Ulimit)
for k, v := range u.Values { for k, v := range u.Values {
values[k] = &container.Ulimit{ values[k] = &units.Ulimit{
Name: v.Name, Name: v.Name,
Hard: v.Hard, Hard: v.Hard,
Soft: v.Soft, Soft: v.Soft,
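
The last hunk above only re-homes the Ulimit type from go-units to the container API package; the struct shape is unchanged. A small sketch of the converted value (assuming the docker/docker module is vendored as above):

    package main

    import (
        "fmt"

        "github.com/docker/docker/api/types/container"
    )

    func main() {
        // Same fields as the old units.Ulimit; only the import path moved.
        u := &container.Ulimit{Name: "nofile", Soft: 1024, Hard: 4096}
        fmt.Printf("%s=%d:%d\n", u.Name, u.Soft, u.Hard)
    }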


@@ -4,19 +4,18 @@ import (
"context" "context"
"io" "io"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb" controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress" "github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client" "github.com/moby/buildkit/client"
) )
type BuildxController interface { type BuildxController interface {
Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
// Invoke starts an IO session into the specified process. // Invoke starts an IO session into the specified process.
// If pid doesn't match any running processes, it starts a new process with the specified config. // If pid doesn't match any running processes, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container. // If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach). // NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error Kill(ctx context.Context) error
Close() error Close() error
List(ctx context.Context) (refs []string, _ error) List(ctx context.Context) (refs []string, _ error)


@@ -1,10 +1,7 @@
package errdefs package errdefs
import ( import (
"io"
"github.com/containerd/typeurl/v2" "github.com/containerd/typeurl/v2"
"github.com/docker/buildx/util/desktop"
"github.com/moby/buildkit/util/grpcerrors" "github.com/moby/buildkit/util/grpcerrors"
) )
@@ -13,7 +10,7 @@ func init() {
} }
type BuildError struct { type BuildError struct {
*Build Build
error error
} }
@@ -22,27 +19,16 @@ func (e *BuildError) Unwrap() error {
} }
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto { func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return e.Build return &e.Build
} }
func (e *BuildError) PrintBuildDetails(w io.Writer) error { func WrapBuild(err error, ref string) error {
if e.Ref == "" {
return nil
}
ebr := &desktop.ErrorWithBuildRef{
Ref: e.Ref,
Err: e.error,
}
return ebr.Print(w)
}
func WrapBuild(err error, sessionID string, ref string) error {
if err == nil { if err == nil {
return nil return nil
} }
return &BuildError{Build: &Build{SessionID: sessionID, Ref: ref}, error: err} return &BuildError{Build: Build{Ref: ref}, error: err}
} }
func (b *Build) WrapError(err error) error { func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: b} return &BuildError{error: err, Build: *b}
} }
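
The errdefs change above embeds the Build proto by pointer and adds a SessionID, so a failed build can be traced back to its session. A self-contained sketch of how a caller unwraps that typed error (local stand-in types; the real ones live in github.com/docker/buildx/controller/errdefs):

    package main

    import (
        "errors"
        "fmt"
    )

    // Minimal local stand-ins, just to show the errors.As unwrapping pattern.
    type Build struct{ SessionID, Ref string }

    type BuildError struct {
        *Build
        error
    }

    func (e *BuildError) Unwrap() error { return e.error }

    func main() {
        err := fmt.Errorf("solve: %w", &BuildError{
            Build: &Build{SessionID: "sess-1", Ref: "ref-1"},
            error: errors.New("process exited"),
        })
        var be *BuildError
        if errors.As(err, &be) {
            fmt.Println("failed build ref:", be.Ref) // field promoted from *Build
        }
    }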


@@ -1,157 +1,77 @@
// Code generated by protoc-gen-go. DO NOT EDIT. // Code generated by protoc-gen-gogo. DO NOT EDIT.
// versions: // source: errdefs.proto
// protoc-gen-go v1.34.1
// protoc v3.11.4
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs package errdefs
import ( import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect" fmt "fmt"
protoimpl "google.golang.org/protobuf/runtime/protoimpl" proto "github.com/gogo/protobuf/proto"
reflect "reflect" _ "github.com/moby/buildkit/solver/pb"
sync "sync" math "math"
) )
const ( // Reference imports to suppress errors if they are not otherwise used.
// Verify that this generated code is sufficiently up-to-date. var _ = proto.Marshal
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) var _ = fmt.Errorf
// Verify that runtime/protoimpl is sufficiently up-to-date. var _ = math.Inf
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
) // This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Build struct { type Build struct {
state protoimpl.MessageState Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
sizeCache protoimpl.SizeCache XXX_NoUnkeyedLiteral struct{} `json:"-"`
unknownFields protoimpl.UnknownFields XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
SessionID string `protobuf:"bytes,1,opt,name=SessionID,proto3" json:"SessionID,omitempty"`
Ref string `protobuf:"bytes,2,opt,name=Ref,proto3" json:"Ref,omitempty"`
} }
func (x *Build) Reset() { func (m *Build) Reset() { *m = Build{} }
*x = Build{} func (m *Build) String() string { return proto.CompactTextString(m) }
if protoimpl.UnsafeEnabled { func (*Build) ProtoMessage() {}
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Build) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Build) ProtoMessage() {}
func (x *Build) ProtoReflect() protoreflect.Message {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Build.ProtoReflect.Descriptor instead.
func (*Build) Descriptor() ([]byte, []int) { func (*Build) Descriptor() ([]byte, []int) {
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP(), []int{0} return fileDescriptor_689dc58a5060aff5, []int{0}
}
func (m *Build) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Build.Unmarshal(m, b)
}
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Build.Marshal(b, m, deterministic)
}
func (m *Build) XXX_Merge(src proto.Message) {
xxx_messageInfo_Build.Merge(m, src)
}
func (m *Build) XXX_Size() int {
return xxx_messageInfo_Build.Size(m)
}
func (m *Build) XXX_DiscardUnknown() {
xxx_messageInfo_Build.DiscardUnknown(m)
} }
func (x *Build) GetSessionID() string { var xxx_messageInfo_Build proto.InternalMessageInfo
if x != nil {
return x.SessionID func (m *Build) GetRef() string {
if m != nil {
return m.Ref
} }
return "" return ""
} }
func (x *Build) GetRef() string { func init() {
if x != nil { proto.RegisterType((*Build)(nil), "errdefs.Build")
return x.Ref
}
return ""
} }
var File_github_com_docker_buildx_controller_errdefs_errdefs_proto protoreflect.FileDescriptor func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = []byte{ var fileDescriptor_689dc58a5060aff5 = []byte{
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, // 111 bytes of a gzipped FileDescriptorProto
0x6b, 0x65, 0x72, 0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x2f, 0x65, 0x72, 0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
0x72, 0x64, 0x65, 0x66, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x15, 0x64, 0x6f, 0x63, 0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
0x6b, 0x65, 0x72, 0x2e, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2e, 0x65, 0x72, 0x72, 0x64, 0x65, 0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
0x66, 0x73, 0x22, 0x37, 0x0a, 0x05, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x10, 0x0a, 0x03, 0x52, 0x65, 0x66, 0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x52, 0x65, 0x66, 0x42, 0x2d, 0x5a, 0x2b, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, 0x6b, 0x65, 0x72,
0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c,
0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce sync.Once
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc
)
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP() []byte {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce.Do(func() {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData)
})
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = []interface{}{
(*Build)(nil), // 0: docker.buildx.errdefs.Build
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() }
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() {
if File_github_com_docker_buildx_controller_errdefs_errdefs_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Build); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes,
DependencyIndexes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs,
MessageInfos: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes,
}.Build()
File_github_com_docker_buildx_controller_errdefs_errdefs_proto = out.File
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = nil
} }


@@ -1,10 +1,9 @@
syntax = "proto3"; syntax = "proto3";
package docker.buildx.errdefs; package errdefs;
option go_package = "github.com/docker/buildx/controller/errdefs"; import "github.com/moby/buildkit/solver/pb/ops.proto";
message Build { message Build {
string SessionID = 1; string Ref = 1;
string Ref = 2; }
}


@@ -1,241 +0,0 @@
// Code generated by protoc-gen-go-vtproto. DO NOT EDIT.
// protoc-gen-go-vtproto version: v0.6.1-0.20240319094008-0393e58bdf10
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
fmt "fmt"
protohelpers "github.com/planetscale/vtprotobuf/protohelpers"
proto "google.golang.org/protobuf/proto"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
io "io"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
func (m *Build) CloneVT() *Build {
if m == nil {
return (*Build)(nil)
}
r := new(Build)
r.SessionID = m.SessionID
r.Ref = m.Ref
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
}
return r
}
func (m *Build) CloneMessageVT() proto.Message {
return m.CloneVT()
}
func (this *Build) EqualVT(that *Build) bool {
if this == that {
return true
} else if this == nil || that == nil {
return false
}
if this.SessionID != that.SessionID {
return false
}
if this.Ref != that.Ref {
return false
}
return string(this.unknownFields) == string(that.unknownFields)
}
func (this *Build) EqualMessageVT(thatMsg proto.Message) bool {
that, ok := thatMsg.(*Build)
if !ok {
return false
}
return this.EqualVT(that)
}
func (m *Build) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
size := m.SizeVT()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBufferVT(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Build) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
func (m *Build) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
i := len(dAtA)
_ = i
var l int
_ = l
if m.unknownFields != nil {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
if len(m.Ref) > 0 {
i -= len(m.Ref)
copy(dAtA[i:], m.Ref)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ref)))
i--
dAtA[i] = 0x12
}
if len(m.SessionID) > 0 {
i -= len(m.SessionID)
copy(dAtA[i:], m.SessionID)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SessionID)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *Build) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.SessionID)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
l = len(m.Ref)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
func (m *Build) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Build: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Build: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SessionID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Ref", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Ref = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return protohelpers.ErrInvalidLength
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
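
The generated UnmarshalVT above repeats the same hand-rolled loop for every tag and length: protobuf's base-128 varint decoding. A standalone sketch of just that loop:

    package main

    import "fmt"

    // decodeVarint reads one protobuf varint: seven payload bits per byte,
    // least-significant group first, high bit set on all but the last byte.
    func decodeVarint(b []byte) (v uint64, n int) {
        for shift := uint(0); shift < 64; shift += 7 {
            if n >= len(b) {
                return 0, 0 // truncated input
            }
            c := b[n]
            n++
            v |= uint64(c&0x7F) << shift
            if c < 0x80 {
                return v, n
            }
        }
        return 0, 0 // overflow: more than 64 bits
    }

    func main() {
        v, n := decodeVarint([]byte{0xAC, 0x02}) // 300 on the wire
        fmt.Println(v, n)                        // 300 2
    }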

Some files were not shown because too many files have changed in this diff.