Compare commits


5 Commits

Author SHA1 Message Date
Tõnis Tiigi
266c0eac61 Merge pull request #752 from tonistiigi/v0.6-mount-path-fix
[v0.6] container-driver: fix volume destination for cache
2021-08-28 10:07:48 -07:00
Tonis Tiigi
0b320faf34 github: fix running ci in version branches
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 9833420a03)
2021-08-27 21:20:17 -07:00
Sebastiaan van Stijn
944fd444f6 container-driver: fix volume destination for cache
The container-driver creates a Linux container (as there currently isn't a
Windows version of buildkitd). However, the defaults are platform specific.

Buildx was using the defaults from the BuildKit `util/appdefaults` package,
which meant that Buildx running on a Windows client would create a Linux
container that used the Windows location, causing it to fail:

    invalid mount config for type "volume": invalid mount path: 'C:/ProgramData/buildkitd/.buildstate' mount path must be absolute

This patch hard-codes the destination to the default Linux path.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 93867d02f0)
2021-08-27 21:03:27 -07:00
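The daemon error quoted in this commit message is easy to reproduce against a Linux Docker engine by requesting a volume mount at a Windows-style path. A minimal illustration (the volume name and image here are placeholders, not what buildx itself creates):

```console
$ docker run --rm \
    --mount type=volume,source=demo_state,target=C:/ProgramData/buildkitd/.buildstate \
    moby/buildkit:latest
docker: Error response from daemon: invalid mount config for type "volume": invalid mount path: 'C:/ProgramData/buildkitd/.buildstate' mount path must be absolute.
```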
CrazyMax
de7dfb925d Merge pull request #743 from tonistiigi/v0.6-client-ctx
[v0.6] use long-running context for client initialization
2021-08-20 18:34:15 +02:00
Tonis Tiigi
43e51fd089 use long-running context for client initialization
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 422ba60b04)
2021-08-20 09:30:23 -07:00
6346 changed files with 280102 additions and 938687 deletions


@@ -1 +1,3 @@
bin/
cross-out/
release-out/


@@ -116,60 +116,6 @@ commit automatically with `git commit -s`.
### Run the unit and integration tests
Running tests:
```bash
make test
```
This runs all unit and integration tests in a containerized environment.
Locally, every package can be tested separately with standard Go tools, but
integration tests are skipped if the local user doesn't have enough permissions
or the worker binaries are not installed.
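For instance, a single package can be run directly with the Go toolchain (a minimal sketch; the package path is only an example):
```console
$ go test -v ./bake/...
```
The make targets below wrap the same tests in the containerized environment: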
```bash
# run unit tests only
make test-unit
# run integration tests only
make test-integration
# test a specific package
TESTPKGS=./bake make test
# run all integration tests with a specific worker
TESTFLAGS="--run=//worker=remote -v" make test-integration
# run a specific integration test
TESTFLAGS="--run /TestBuild/worker=remote/ -v" make test-integration
# run a selection of integration tests using a regexp
TESTFLAGS="--run /TestBuild.*/worker=remote/ -v" make test-integration
```
> **Note**
>
> Set `TEST_KEEP_CACHE=1` for the test framework to keep external dependent
> images in a docker volume if you are repeatedly calling `make test`. This
> helps to avoid rate limiting on the remote registry side.
> **Note**
>
> Set `TEST_DOCKERD=1` for the test framework to enable the docker workers,
> specifically the `docker` and `docker-container` drivers.
>
> The docker tests cannot be run in parallel, so they require passing `--parallel=1`
> in `TESTFLAGS`.
> **Note**
>
> If you are working behind a proxy, you can set some or all of
> `HTTP_PROXY=http://ip:port`, `HTTPS_PROXY=http://ip:port`, and `NO_PROXY=http://ip:port`
> so that the test framework passes them through as proxy build args, as in the sketch below.
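The sketch below shows one way to do this before invoking the tests (the address and port are placeholders):

```console
$ export HTTP_PROXY=http://127.0.0.1:3128
$ export HTTPS_PROXY=http://127.0.0.1:3128
$ make test-integration
```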
### Run the helper commands
To enter a demo container environment and experiment, you may run:
```
@@ -188,89 +134,6 @@ To generate new vendored files with go modules run:
$ make vendor
```
### Generate profiling data
You can configure Buildx to generate [`pprof`](https://github.com/google/pprof)
memory and CPU profiles to analyze and optimize your builds. These profiles are
useful for identifying performance bottlenecks, detecting memory
inefficiencies, and ensuring that Buildx runs efficiently.
The following environment variables control whether Buildx generates profiling
data for builds:
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
```
When set, Buildx emits profiling samples for the builds to the locations
specified by these environment variables.
To analyze and visualize profiling samples, you need `pprof` from the Go
toolchain, and (optionally) GraphViz for visualization in a graphical format.
To inspect profiling data with `pprof`:
1. Build a local binary of Buildx from source.
```console
$ docker buildx bake
```
The binary gets exported to `./bin/build/buildx`.
2. Run a build with the environment variables set to generate profiling data.
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
$ ./bin/build/buildx bake
```
This creates `buildx_cpu.prof` and `buildx_mem.prof` for the build.
3. Start `pprof` and specify the filename of the profile that you want to
analyze.
```console
$ go tool pprof buildx_cpu.prof
```
This opens the `pprof` interactive console. From here, you can inspect the
profiling sample using various commands. For example, use the `top 10` command
to view the top 10 most time-consuming entries.
```plaintext
(pprof) top 10
Showing nodes accounting for 3.04s, 91.02% of 3.34s total
Dropped 123 nodes (cum <= 0.02s)
Showing top 10 nodes out of 159
flat flat% sum% cum cum%
1.14s 34.13% 34.13% 1.14s 34.13% syscall.syscall
0.91s 27.25% 61.38% 0.91s 27.25% runtime.kevent
0.35s 10.48% 71.86% 0.35s 10.48% runtime.pthread_cond_wait
0.22s 6.59% 78.44% 0.22s 6.59% runtime.pthread_cond_signal
0.15s 4.49% 82.93% 0.15s 4.49% runtime.usleep
0.10s 2.99% 85.93% 0.10s 2.99% runtime.memclrNoHeapPointers
0.10s 2.99% 88.92% 0.10s 2.99% runtime.memmove
0.03s 0.9% 89.82% 0.03s 0.9% runtime.madvise
0.02s 0.6% 90.42% 0.02s 0.6% runtime.(*mspan).typePointersOfUnchecked
0.02s 0.6% 91.02% 0.02s 0.6% runtime.pcvalue
```
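Other interactive commands can drill further into a specific hot spot. For example, `peek` prints the callers and callees of functions matching a regular expression (command shown without its output):
```plaintext
(pprof) peek syscall.syscall
```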
To view the call graph in a GUI, run `go tool pprof -http=:8081 <sample>`.
> [!NOTE]
> Requires [GraphViz](https://www.graphviz.org/) to be installed.
```console
$ go tool pprof -http=:8081 buildx_cpu.prof
Serving web UI on http://127.0.0.1:8081
http://127.0.0.1:8081
```
For more information about using `pprof` and how to interpret the call graph,
refer to the [`pprof` README](https://github.com/google/pprof/blob/main/doc/README.md).
### Conventions
@@ -426,4 +289,4 @@ The rules:
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](https://golang.org/doc/effective_go.html). The
[Go Blog](https://blog.golang.org) is also a great resource.


@@ -1,124 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).
  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true
  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem
  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true
  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true
  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text
  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true
  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproduce your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        RUN echo hello
        ```
        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```
        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true
  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.


@@ -1,12 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use GitHub Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.


@@ -1,15 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true

.github/SECURITY.md vendored (44 lines changed)

@@ -1,44 +0,0 @@
# Security Policy
The maintainers of Docker Buildx take security seriously. If you discover
a security issue, please bring it to their attention right away!
## Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis are
performed to confirm the report and determine its scope. We may request
additional information at this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not
currently offer a paid security bounty program.
## Supported Versions
Once a new feature release is cut, support for the previous feature release is
discontinued. An exception may be made for urgent security releases that occur
shortly after a new feature release. Buildx does not offer LTS (Long-Term Support)
releases. Refer to the [Support Policy](https://github.com/docker/buildx/blob/master/PROJECT.md#support-policy)
for further details.


@@ -1,15 +0,0 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    open-pull-requests-limit: 10
    directory: "/"
    schedule:
      interval: "daily"
    ignore:
      # ignore this dependency
      # this seems to be a dependabot bug, as pinning to a commit sha should not
      # trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
      - dependency-name: "docker/docs"
    labels:
      - "area/dependencies"
      - "bot"

.github/labeler.yml vendored (104 lines changed)

@@ -1,104 +0,0 @@
# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
area/project:
  - all:
      - changed-files:
          - any-glob-to-any-file:
              - .github/**
              - LICENSE
              - AUTHORS
              - MAINTAINERS
              - PROJECT.md
              - README.md
              - .gitignore
              - codecov.yml
          - all-globs-to-all-files: '!.github/workflows/*'
# Add 'area/ci' label to changes in the .github/workflows folder
area/ci:
  - changed-files:
      - any-glob-to-any-file: '.github/workflows/**'
# Add 'area/bake' label to changes in the bake folder
area/bake:
  - changed-files:
      - any-glob-to-any-file: 'bake/**'
# Add 'area/bake/compose' label to changes in bake compose files
area/bake/compose:
  - changed-files:
      - any-glob-to-any-file:
          - bake/compose.go
          - bake/compose_test.go
# Add 'area/build' label to changes in build files
area/build:
  - changed-files:
      - any-glob-to-any-file: 'build/**'
# Add 'area/builder' label to changes in builder files
area/builder:
  - changed-files:
      - any-glob-to-any-file: 'builder/**'
# Add 'area/cli' label to changes in the CLI
area/cli:
  - changed-files:
      - any-glob-to-any-file:
          - cmd/**
          - commands/**
# Add 'area/controller' label to changes in the controller
area/controller:
  - changed-files:
      - any-glob-to-any-file: 'controller/**'
# Add 'area/docs' label to markdown files in the docs folder
area/docs:
  - changed-files:
      - any-glob-to-any-file: 'docs/**/*.md'
# Add 'area/dependencies' label to changes in go dependency files
area/dependencies:
  - changed-files:
      - any-glob-to-any-file:
          - go.mod
          - go.sum
          - vendor/**
# Add 'area/driver' label to changes in the driver folder
area/driver:
  - changed-files:
      - any-glob-to-any-file: 'driver/**'
# Add 'area/driver/docker' label to changes in the docker driver
area/driver/docker:
  - changed-files:
      - any-glob-to-any-file: 'driver/docker/**'
# Add 'area/driver/docker-container' label to changes in the docker-container driver
area/driver/docker-container:
  - changed-files:
      - any-glob-to-any-file: 'driver/docker-container/**'
# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
area/driver/kubernetes:
  - changed-files:
      - any-glob-to-any-file: 'driver/kubernetes/**'
# Add 'area/driver/remote' label to changes in the remote driver
area/driver/remote:
  - changed-files:
      - any-glob-to-any-file: 'driver/remote/**'
# Add 'area/hack' label to changes in the hack folder
area/hack:
  - changed-files:
      - any-glob-to-any-file: 'hack/**'
# Add 'area/tests' label to changes in test files
area/tests:
  - changed-files:
      - any-glob-to-any-file:
          - tests/**
          - '**/*_test.go'

.github/releases.json vendored (735 lines changed)

@@ -1,735 +0,0 @@
{
"latest": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.2": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.1": {
"id": 90346950,
"tag_name": "v0.10.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
]
},
"v0.10.0": {
"id": 88388110,
"tag_name": "v0.10.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
]
},
"v0.10.0-rc3": {
"id": 88191592,
"tag_name": "v0.10.0-rc3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
]
},
"v0.10.0-rc2": {
"id": 86248476,
"tag_name": "v0.10.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
]
},
"v0.10.0-rc1": {
"id": 85963900,
"tag_name": "v0.10.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
]
},
"v0.9.1": {
"id": 74760068,
"tag_name": "v0.9.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
]
},
"v0.9.0": {
"id": 74546589,
"tag_name": "v0.9.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
]
},
"v0.9.0-rc2": {
"id": 74052235,
"tag_name": "v0.9.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
]
},
"v0.9.0-rc1": {
"id": 73389692,
"tag_name": "v0.9.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
]
},
"v0.8.2": {
"id": 63479740,
"tag_name": "v0.8.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
]
},
"v0.8.1": {
"id": 62289050,
"tag_name": "v0.8.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
]
},
"v0.8.0": {
"id": 61423774,
"tag_name": "v0.8.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
]
},
"v0.8.0-rc1": {
"id": 60513568,
"tag_name": "v0.8.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
]
},
"v0.7.1": {
"id": 54098347,
"tag_name": "v0.7.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
]
},
"v0.7.0": {
"id": 53109422,
"tag_name": "v0.7.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
]
},
"v0.7.0-rc1": {
"id": 52726324,
"tag_name": "v0.7.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
]
},
"v0.6.3": {
"id": 48691641,
"tag_name": "v0.6.3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
]
},
"v0.6.2": {
"id": 48207405,
"tag_name": "v0.6.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
]
},
"v0.6.1": {
"id": 47064772,
"tag_name": "v0.6.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
]
},
"v0.6.0": {
"id": 46343260,
"tag_name": "v0.6.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
]
},
"v0.6.0-rc1": {
"id": 46230351,
"tag_name": "v0.6.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
]
},
"v0.5.1": {
"id": 35276550,
"tag_name": "v0.5.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
]
},
"v0.5.0": {
"id": 35268960,
"tag_name": "v0.5.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
]
},
"v0.5.0-rc1": {
"id": 35015334,
"tag_name": "v0.5.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
]
},
"v0.4.2": {
"id": 30007794,
"tag_name": "v0.4.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
]
},
"v0.4.1": {
"id": 26067509,
"tag_name": "v0.4.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
]
},
"v0.4.0": {
"id": 26028174,
"tag_name": "v0.4.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
]
},
"v0.3.1": {
"id": 20316235,
"tag_name": "v0.3.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
]
},
"v0.3.0": {
"id": 19029664,
"tag_name": "v0.3.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
]
},
"v0.2.2": {
"id": 17671545,
"tag_name": "v0.2.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
]
},
"v0.2.1": {
"id": 17582885,
"tag_name": "v0.2.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
]
},
"v0.2.0": {
"id": 16965310,
"tag_name": "v0.2.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
]
}
}
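Since the manifest above is plain JSON keyed by tag name, it can be queried with standard tooling. A minimal sketch, assuming the file is published as `.github/releases.json` on the default branch (the raw URL and tag are illustrative):

```bash
# Print the download URLs recorded for one release (top-level key = tag name)
curl -fsSL https://raw.githubusercontent.com/docker/buildx/master/.github/releases.json \
  | jq -r '."v0.6.2".assets[]'
```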


@@ -1,18 +1,5 @@
name: build
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
@@ -22,335 +9,175 @@ on:
tags:
- 'v*'
pull_request:
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
branches:
- 'master'
- 'v[0-9]*'
env:
BUILDX_VERSION: "latest"
BUILDKIT_IMAGE: "moby/buildkit:latest"
SCOUT_VERSION: "1.11.0"
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.23"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
REPO_SLUG_ORIGIN: "moby/buildkit:master"
CACHEKEY_BINARIES: "binaries"
PLATFORMS: "linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64,linux/s390x,linux/ppc64le,linux/riscv64"
jobs:
test-integration:
runs-on: ubuntu-24.04
env:
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
TEST_COVERAGE: "1"
strategy:
fail-fast: false
matrix:
buildkit:
- master
- latest
- buildx-stable-1
- v0.17.2
- v0.16.0
- v0.15.2
worker:
- docker-container
- remote
pkg:
- ./tests
mode:
- ""
- experimental
include:
- worker: docker
pkg: ./tests
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: docker
pkg: ./tests
mode: experimental
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
- worker: "docker@26.1"
pkg: ./tests
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: "docker@26.1"
pkg: ./tests
mode: experimental
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
base:
runs-on: ubuntu-latest
steps:
-
name: Prepare
run: |
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
if [ -n "${{ matrix.buildkit }}" ]; then
echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
fi
testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
case "${{ matrix.worker }}" in
docker | docker+containerd | docker@* | docker+containerd@*)
echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
;;
*)
echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
;;
esac
if [[ "${{ matrix.worker }}" == "docker"* ]]; then
echo "TEST_DOCKERD=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.mode }}" = "experimental" ]; then
echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
-
name: Cache ${{ env.CACHEKEY_BINARIES }}
uses: actions/cache@v2
with:
fetch-depth: 0
path: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
key: ${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v1
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v1
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
driver-opts: image=${{ env.REPO_SLUG_ORIGIN }}
-
name: Build test image
uses: docker/bake-action@v5
name: Build ${{ env.CACHEKEY_BINARIES }}
run: |
./hack/build_ci_first_pass binaries
env:
CACHEDIR_FROM: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
CACHEDIR_TO: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}-new
-
# FIXME: Temp fix for https://github.com/moby/buildkit/issues/1850
name: Move cache
run: |
rm -rf /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
mv /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}-new /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
test:
runs-on: ubuntu-latest
needs: [base]
steps:
-
name: Checkout
uses: actions/checkout@v2
-
name: Cache ${{ env.CACHEKEY_BINARIES }}
uses: actions/cache@v2
with:
targets: integration-test
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
path: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
key: ${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-
-
name: Set up QEMU
uses: docker/setup-qemu-action@v1
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
driver-opts: image=${{ env.REPO_SLUG_ORIGIN }}
-
name: Test
run: |
./hack/test
make test
env:
TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
TESTPKGS: "${{ matrix.pkg }}"
TEST_COVERAGE: 1
TESTFLAGS: -v --parallel=6 --timeout=20m
CACHEDIR_FROM: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@v2
with:
directory: ./bin/testreports
flags: integration
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ./bin/testreports
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ./bin/testreports
file: ./coverage/coverage.txt
test-unit:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os:
- ubuntu-24.04
- macos-14
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
cross:
runs-on: ubuntu-latest
needs: [base]
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
-
name: Set up Go
uses: actions/setup-go@v5
name: Cache ${{ env.CACHEKEY_BINARIES }}
uses: actions/cache@v2
with:
go-version: "${{ env.GO_VERSION }}"
path: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
key: ${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-
-
name: Prepare
run: |
testreportsName=${{ github.job }}--${{ matrix.os }}
testreportsBaseDir=./bin/testreports
testreportsDir=$testreportsBaseDir/$testreportsName
echo "TESTREPORTS_NAME=$testreportsName" >> $GITHUB_ENV
echo "TESTREPORTS_BASEDIR=$testreportsBaseDir" >> $GITHUB_ENV
echo "TESTREPORTS_DIR=$testreportsDir" >> $GITHUB_ENV
mkdir -p $testreportsDir
shell: bash
-
name: Install gotestsum
run: |
go install gotest.tools/gotestsum@${{ env.GOTESTSUM_VERSION }}
-
name: Test
env:
TMPDIR: ${{ runner.temp }}
run: |
gotestsum \
--jsonfile="${{ env.TESTREPORTS_DIR }}/go-test-report.json" \
--junitfile="${{ env.TESTREPORTS_DIR }}/junit-report.xml" \
--packages="./..." \
-- \
"-mod=vendor" \
"-coverprofile" "${{ env.TESTREPORTS_DIR }}/coverage.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
shell: bash
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v5
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ${{ env.TESTREPORTS_DIR }}
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ${{ env.TESTREPORTS_BASEDIR }}
govulncheck:
runs-on: ubuntu-24.04
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
steps:
-
name: Checkout
uses: actions/checkout@v4
name: Set up QEMU
uses: docker/setup-qemu-action@v1
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v1
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
driver-opts: image=${{ env.REPO_SLUG_ORIGIN }}
-
name: Run
uses: docker/bake-action@v5
with:
targets: govulncheck
name: Cross
run: |
make cross
env:
GOVULNCHECK_FORMAT: sarif
-
name: Upload SARIF report
if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ env.DESTDIR }}/govulncheck.out
prepare-binaries:
runs-on: ubuntu-24.04
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
run: |
echo "matrix=$(docker buildx bake binaries-cross --print | jq -cr '.target."binaries-cross".platforms')" >>${GITHUB_OUTPUT}
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
TARGETPLATFORM: ${{ env.PLATFORMS }},darwin/amd64,darwin/arm64,windows/amd64,windows/arm64
CACHEDIR_FROM: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
binaries:
runs-on: ubuntu-24.04
needs:
- prepare-binaries
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare-binaries.outputs.matrix) }}
runs-on: ubuntu-latest
needs: [test, cross]
env:
RELEASE_OUT: ./release-out
steps:
-
name: Checkout
uses: actions/checkout@v2
-
name: Prepare
id: prep
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
TAG=pr
if [[ $GITHUB_REF == refs/tags/v* ]]; then
TAG=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
TAG=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
fi
echo ::set-output name=tag::${TAG}
-
name: Checkout
uses: actions/checkout@v4
name: Cache ${{ env.CACHEKEY_BINARIES }}
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
key: ${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ env.CACHEKEY_BINARIES }}-
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v1
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v1
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
driver-opts: image=${{ env.REPO_SLUG_ORIGIN }}
-
name: Build
name: Build ${{ steps.prep.outputs.tag }}
run: |
make release
./hack/release ${{ env.RELEASE_OUT }}
env:
PLATFORMS: ${{ matrix.platform }}
CACHE_FROM: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }}
CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
PLATFORMS: ${{ env.PLATFORMS }},darwin/amd64,darwin/arm64,windows/amd64,windows/arm64
CACHEDIR_FROM: /tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
-
name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: buildx-${{ env.PLATFORM_PAIR }}
path: ${{ env.DESTDIR }}/*
name: buildx
path: ${{ env.RELEASE_OUT }}/*
if-no-files-found: error
bin-image:
runs-on: ubuntu-24.04
needs:
- test-integration
- test-unit
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@v3
with:
images: |
${{ env.REPO_SLUG }}
@@ -358,99 +185,31 @@ jobs:
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
bake-target: meta-helper
-
name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
uses: docker/login-action@v1
with:
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push image
uses: docker/bake-action@v5
uses: docker/build-push-action@v2
with:
files: |
./docker-bake.hcl
${{ steps.meta.outputs.bake-file }}
targets: image-cross
context: .
target: binaries
push: ${{ github.event_name != 'pull_request' }}
sbom: true
set: |
*.cache-from=type=gha,scope=bin-image
*.cache-to=type=gha,scope=bin-image,mode=max
scout:
runs-on: ubuntu-24.04
if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
needs:
- bin-image
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
-
name: Scout
id: scout
uses: crazy-max/.github/.github/actions/docker-scout@ccae1c98f1237b5c19e4ef77ace44fa68b3bc7e4
with:
version: ${{ env.SCOUT_VERSION }}
format: sarif
image: registry://${{ env.REPO_SLUG }}:master
-
name: Upload SARIF report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scout.outputs.result-file }}
release:
runs-on: ubuntu-24.04
permissions:
# required to create GitHub release
contents: write
needs:
- test-integration
- test-unit
- binaries
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Download binaries
uses: actions/download-artifact@v4
with:
path: ${{ env.DESTDIR }}
pattern: buildx-*
merge-multiple: true
-
name: Create checksums
run: ./hack/hash-files
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
cache-from: type=local,src=/tmp/.buildx-cache/${{ env.CACHEKEY_BINARIES }}
platforms: ${{ env.PLATFORMS }},darwin/amd64,darwin/arm64,windows/amd64,windows/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@01570a1f39cb168c169c802c3bceb9e93fb10974 # v2.1.0
uses: softprops/action-gh-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
draft: true
files: ${{ env.DESTDIR }}/*
files: ${{ env.RELEASE_OUT }}/*
name: ${{ steps.prep.outputs.tag }}


@@ -1,50 +0,0 @@
name: codeql
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
- 'master'
- 'v[0-9]*'
pull_request:
env:
GO_VERSION: "1.23"
jobs:
codeql:
runs-on: ubuntu-24.04
permissions:
contents: read
actions: read
security-events: write
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
-
name: Autobuild
uses: github/codeql-action/autobuild@v3
-
name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"


@@ -1,83 +0,0 @@
name: docs-release
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_dispatch:
inputs:
tag:
description: 'Git tag'
required: true
release:
types:
- released
jobs:
open-pr:
runs-on: ubuntu-24.04
if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
permissions:
contents: write
pull-requests: write
steps:
-
name: Checkout docs repo
uses: actions/checkout@v4
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
repository: docker/docs
ref: main
-
name: Prepare
run: |
rm -rf ./data/buildx/*
if [ -n "${{ github.event.inputs.tag }}" ]; then
echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
else
echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
fi
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Generate yaml
uses: docker/bake-action@v5
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
env:
DOCS_FORMATS: yaml
-
name: Copy yaml
run: |
cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
-
name: Update vendor
run: |
make vendor
env:
VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7.0.5
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io
commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
signoff: true
branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
delete-branch: true
title: Update buildx reference to ${{ env.RELEASE_NAME }}
body: |
Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
draft: false


@@ -1,72 +0,0 @@
# this workflow runs the remote validate bake target from docker/docker.github.io
# to check if yaml reference docs and markdown files used in this repo are still valid
# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
name: docs-upstream
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
push:
branches:
- 'master'
- 'v[0-9]*'
paths:
- '.github/workflows/docs-upstream.yml'
- 'docs/**'
pull_request:
paths:
- '.github/workflows/docs-upstream.yml'
- 'docs/**'
jobs:
docs-yaml:
runs-on: ubuntu-24.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
-
name: Build reference YAML docs
uses: docker/bake-action@v5
with:
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
*.cache-from=type=gha,scope=docs-yaml
*.cache-to=type=gha,scope=docs-yaml,mode=max
env:
DOCS_FORMATS: yaml
-
name: Upload reference YAML docs
uses: actions/upload-artifact@v4
with:
name: docs-yaml
path: /tmp/buildx-docs/out/reference
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
needs:
- docs-yaml
with:
module-name: docker/buildx
data-files-id: docs-yaml
data-files-folder: buildx
create-placeholder-stubs: true


@@ -1,177 +0,0 @@
name: e2e
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- 'v[0-9]*'
pull_request:
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
env:
DESTDIR: "./bin"
K3S_VERSION: "v1.21.2-k3s1"
jobs:
build:
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
-
name: Build
uses: docker/bake-action@v5
with:
targets: binaries
set: |
*.cache-from=type=gha,scope=release
*.cache-from=type=gha,scope=binaries
*.cache-to=type=gha,scope=binaries
-
name: Rename binary
run: |
mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: binary
path: ${{ env.DESTDIR }}/build
if-no-files-found: error
retention-days: 7
driver:
runs-on: ubuntu-20.04
needs:
- build
strategy:
fail-fast: false
matrix:
driver:
- docker
- docker-container
- kubernetes
- remote
buildkit:
- moby/buildkit:buildx-stable-1
- moby/buildkit:master
buildkit-cfg:
- bkcfg-false
- bkcfg-true
multi-node:
- mnode-false
- mnode-true
platforms:
- linux/amd64
- linux/amd64,linux/arm64
include:
- driver: kubernetes
driver-opt: qemu.install=true
- driver: remote
endpoint: tcp://localhost:1234
- driver: docker-container
metadata-provenance: max
- driver: docker-container
metadata-warnings: true
exclude:
- driver: docker
multi-node: mnode-true
- driver: docker
buildkit-cfg: bkcfg-true
- driver: docker-container
multi-node: mnode-true
- driver: remote
multi-node: mnode-true
- driver: remote
buildkit-cfg: bkcfg-true
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
-
name: Install buildx
uses: actions/download-artifact@v4
with:
name: binary
path: /home/runner/.docker/cli-plugins
-
name: Fix perms and check
run: |
chmod +x /home/runner/.docker/cli-plugins/docker-buildx
docker buildx version
-
name: Init env vars
run: |
# BuildKit cfg
if [ "${{ matrix.buildkit-cfg }}" = "bkcfg-true" ]; then
cat > "/tmp/buildkitd.toml" <<EOL
[worker.oci]
max-parallelism = 2
EOL
echo "BUILDKIT_CFG=/tmp/buildkitd.toml" >> $GITHUB_ENV
fi
# Multi node
if [ "${{ matrix.multi-node }}" = "mnode-true" ]; then
echo "MULTI_NODE=1" >> $GITHUB_ENV
else
echo "MULTI_NODE=0" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-provenance }}" ]; then
echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-warnings }}" ]; then
echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
fi
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
version: ${{ env.K3S_VERSION }}
-
name: Launch remote buildkitd
if: matrix.driver == 'remote'
run: |
docker run -d \
--privileged \
--name=remote-buildkit \
-p 1234:1234 \
${{ matrix.buildkit }} \
--addr unix:///run/buildkit/buildkitd.sock \
--addr tcp://0.0.0.0:1234
-
name: Test
run: |
make test-driver
env:
BUILDKIT_IMAGE: ${{ matrix.buildkit }}
DRIVER: ${{ matrix.driver }}
DRIVER_OPT: ${{ matrix.driver-opt }}
ENDPOINT: ${{ matrix.endpoint }}
PLATFORMS: ${{ matrix.platforms }}

.github/workflows/godev.yml

@@ -0,0 +1,25 @@
# Workflow used to make a request to proxy.golang.org to refresh cache on https://pkg.go.dev/github.com/docker/buildx
# when a release of buildx is produced
name: godev
on:
push:
tags:
- 'v*'
jobs:
update:
runs-on: ubuntu-latest
steps:
-
name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.13
-
name: Call pkg.go.dev
run: |
go get github.com/${GITHUB_REPOSITORY}@${GITHUB_REF#refs/tags/}
env:
GO111MODULE: on
GOPROXY: https://proxy.golang.org


@@ -1,32 +0,0 @@
name: labeler
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request_target:
jobs:
labeler:
runs-on: ubuntu-latest
permissions:
# same as global permission
contents: read
# required for writing labels
pull-requests: write
steps:
-
name: Run
uses: actions/labeler@v5
with:
sync-labels: true


@@ -1,18 +1,5 @@
name: validate
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
@@ -22,86 +9,33 @@ on:
tags:
- 'v*'
pull_request:
paths-ignore:
- '.github/releases.json'
branches:
- 'master'
- 'v[0-9]*'
env:
REPO_SLUG_ORIGIN: "moby/buildkit:master"
jobs:
prepare:
runs-on: ubuntu-24.04
outputs:
includes: ${{ steps.matrix.outputs.includes }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Matrix
id: matrix
uses: actions/github-script@v7
with:
script: |
let def = {};
await core.group(`Parsing definition`, async () => {
const printEnv = Object.assign({}, process.env, {
GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
});
const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
ignoreReturnCode: true,
env: printEnv
});
if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
throw new Error(resPrint.stderr);
}
def = JSON.parse(resPrint.stdout.trim());
});
await core.group(`Generating matrix`, async () => {
const includes = [];
for (const targetName of Object.keys(def.target)) {
const target = def.target[targetName];
if (target.platforms && target.platforms.length > 0) {
target.platforms.forEach(platform => {
includes.push({
target: targetName,
platform: platform
});
});
} else {
includes.push({
target: targetName
});
}
}
core.info(JSON.stringify(includes, null, 2));
core.setOutput('includes', JSON.stringify(includes));
});
validate:
runs-on: ubuntu-24.04
needs:
- prepare
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.prepare.outputs.includes) }}
target:
- lint
- validate-vendor
- validate-docs
steps:
-
name: Prepare
run: |
if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v1
with:
version: latest
driver-opts: image=${{ env.REPO_SLUG_ORIGIN }}
-
name: Validate
uses: docker/bake-action@v5
with:
targets: ${{ matrix.target }}
set: |
*.platform=${{ matrix.platform }}
name: Run
run: |
make ${{ matrix.target }}

.gitignore

@@ -1 +1,4 @@
/bin
bin
coverage
cross-out
release-out


@@ -1,119 +1,30 @@
run:
timeout: 30m
timeout: 10m
skip-files:
- ".*\\.pb\\.go$"
modules-download-mode: vendor
# default uses Go version from the go.mod file, fallback on the env var
# `GOVERSION`, fallback on 1.17: https://golangci-lint.run/usage/configuration/#run-configuration
go: "1.23"
build-tags:
linters:
enable:
- bodyclose
- depguard
- forbidigo
- gocritic
- gofmt
- goimports
- gosec
- gosimple
- govet
- deadcode
- goimports
- ineffassign
- makezero
- misspell
- noctx
- nolintlint
- revive
- staticcheck
- testifylint
- typecheck
- unused
- whitespace
- varcheck
- golint
- staticcheck
- typecheck
- structcheck
disable-all: true
linters-settings:
gocritic:
disabled-checks:
- "ifElseChain"
- "assignOp"
- "appendAssign"
- "singleCaseSwitch"
- "exitAfterDefer" # FIXME
importas:
alias:
# Enforce alias to prevent it accidentally being used instead of
# buildkit errdefs package (or vice-versa).
- pkg: "github.com/containerd/errdefs"
alias: "cerrdefs"
- pkg: "github.com/opencontainers/image-spec/specs-go/v1"
alias: "ocispecs"
- pkg: "github.com/opencontainers/go-digest"
alias: "digest"
govet:
enable:
- nilness
- unusedwrite
# enable-all: true
# disable:
# - fieldalignment
# - shadow
depguard:
rules:
main:
deny:
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^context\.WithCancel(# use context\.WithCancelCause instead)?$'
- '^context\.WithDeadline(# use context\.WithDeadlineCause instead)?$'
- '^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$'
- '^ctx\.Err(# use context\.Cause instead)?$'
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
- '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
- G115 # integer overflow conversion (TODO: verify these)
config:
G306: "0644"
testifylint:
disable:
# disable rules that reduce the test condition
- "empty"
- "bool-compare"
- "len"
- "negative-positive"
issues:
exclude-files:
- ".*\\.pb\\.go$"
exclude-rules:
- linters:
- revive
- golint
text: "stutters"
- linters:
- revive
text: "empty-block"
- linters:
- revive
text: "superfluous-else"
- linters:
- revive
text: "unused-parameter"
- linters:
- revive
text: "redefines-builtin-id"
- linters:
- revive
text: "if-return"
# show all
max-issues-per-linter: 0
max-same-issues: 0
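Either revision of this config is picked up automatically from the repository root; a sketch of running it locally (the `lint` bake target appears in the Makefile further down, and the direct invocation assumes `golangci-lint` is installed):

```bash
# Via the project's bake target
docker buildx bake lint
# Or directly, using the same .golangci.yml
golangci-lint run ./...
```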


@@ -1,27 +1,6 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
# For how it is generated, see `hack/generate-authors`.
Batuhan Apaydın <batuhan.apaydin@trendyol.com>
Batuhan Apaydın <batuhan.apaydin@trendyol.com> <developerguy2@gmail.com>
CrazyMax <github@crazymax.dev>
CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
David Karlsson <david.karlsson@docker.com>
David Karlsson <david.karlsson@docker.com> <35727626+dvdksn@users.noreply.github.com>
jaihwan104 <jaihwan104@woowahan.com>
jaihwan104 <jaihwan104@woowahan.com> <42341126+jaihwan104@users.noreply.github.com>
Kenyon Ralph <kenyon@kenyonralph.com>
Kenyon Ralph <kenyon@kenyonralph.com> <quic_kralph@quicinc.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Shaun Thompson <shaun.thompson@docker.com>
Shaun Thompson <shaun.thompson@docker.com> <shaun.b.thompson@gmail.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Silvin Lubecki <silvin.lubecki@docker.com> <31478878+silvin-lubecki@users.noreply.github.com>
Talon Bowler <talon.bowler@docker.com>
Talon Bowler <talon.bowler@docker.com> <nolat301@gmail.com>
Tibor Vass <tibor@docker.com>
Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Ulysses Souza <ulyssessouza@gmail.com>
Wang Jinglei <morlay.null@gmail.com>

AUTHORS

@@ -1,112 +1,7 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
# For how it is generated, see `scripts/generate-authors.sh`.
accetto <34798830+accetto@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Aleksa Sarai <cyphar@cyphar.com>
Alex Couture-Beil <alex@earthly.dev>
Andrew Haines <andrew.haines@zencargo.com>
Andy Caldwell <andrew.caldwell@metaswitch.com>
Andy MacKinlay <admackin@users.noreply.github.com>
Anthony Poschen <zanven42@gmail.com>
Arnold Sobanski <arnold@l4g.dev>
Artur Klauser <Artur.Klauser@computer.org>
Avi Deitcher <avi@deitcher.net>
Batuhan Apaydın <batuhan.apaydin@trendyol.com>
Ben Peachey <potherca@gmail.com>
Bertrand Paquet <bertrand.paquet@gmail.com>
Bin Du <bindu@microsoft.com>
Brandon Philips <brandon@ifup.org>
Brian Goff <cpuguy83@gmail.com>
Bryce Lampe <bryce@pulumi.com>
Cameron Adams <pnzreba@gmail.com>
Christian Dupuis <cd@atomist.com>
Cory Snider <csnider@mirantis.com>
CrazyMax <github@crazymax.dev>
David Gageot <david.gageot@docker.com>
David Karlsson <david.karlsson@docker.com>
David Scott <dave@recoil.org>
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Devin Bayer <dev@doubly.so>
Djordje Lukic <djordje.lukic@docker.com>
Dmitry Makovey <dmakovey@gitlab.com>
Dmytro Makovey <dmytro.makovey@docker.com>
Donghui Wang <977675308@qq.com>
Doug Borg <dougborg@apple.com>
Edgar Lee <edgarl@netflix.com>
Eli Treuherz <et@arenko.group>
Eliott Wiener <eliottwiener@gmail.com>
Elran Shefer <elran.shefer@velocity.tech>
faust <faustin@fala.red>
Felipe Santos <felipecassiors@gmail.com>
Felix de Souza <fdesouza@palantir.com>
Fernando Miguel <github@FernandoMiguel.net>
gfrancesco <gfrancesco@users.noreply.github.com>
gracenoah <gracenoahgh@gmail.com>
Guillaume Lours <705411+glours@users.noreply.github.com>
guoguangwu <guoguangwu@magic-shield.com>
Hollow Man <hollowman@hollowman.ml>
Ian King'ori <kingorim.ian@gmail.com>
idnandre <andre@idntimes.com>
Ilya Dmitrichenko <errordeveloper@gmail.com>
Isaac Gaskin <isaac.gaskin@circle.com>
Jack Laxson <jackjrabbit@gmail.com>
jaihwan104 <jaihwan104@woowahan.com>
Jean-Yves Gastaud <jygastaud@gmail.com>
Jhan S. Álvarez <51450231+yastanotheruser@users.noreply.github.com>
Jonathan A. Sternberg <jonathan.sternberg@docker.com>
Jonathan Piché <jpiche@coveo.com>
Justin Chadwell <me@jedevc.com>
Kenyon Ralph <kenyon@kenyonralph.com>
khs1994 <khs1994@khs1994.com>
Kijima Daigo <norimaking777@gmail.com>
Kohei Tokunaga <ktokunaga.mail@gmail.com>
Kotaro Adachi <k33asby@gmail.com>
Kushagra Mansingh <12158241+kushmansingh@users.noreply.github.com>
l00397676 <lujingxiao@huawei.com>
Laura Brehm <laurabrehm@hey.com>
Laurent Goderre <laurent.goderre@docker.com>
Mark Hildreth <113933455+markhildreth-gravity@users.noreply.github.com>
Mayeul Blanzat <mayeul.blanzat@datadoghq.com>
Michal Augustyn <michal.augustyn@mail.com>
Milas Bowman <milas.bowman@docker.com>
Mitsuru Kariya <mitsuru.kariya@nttdata.com>
Moleus <fafufuburr@gmail.com>
Nick Santos <nick.santos@docker.com>
Nick Sieger <nick@nicksieger.com>
Nicolas De Loof <nicolas.deloof@gmail.com>
Niklas Gehlen <niklas@namespacelabs.com>
Patrick Van Stee <patrick@vanstee.me>
Paweł Gronowski <pawel.gronowski@docker.com>
Phong Tran <tran.pho@northeastern.edu>
Qasim Sarfraz <qasimsarfraz@microsoft.com>
Rob Murray <rob.murray@docker.com>
robertlestak <robert.lestak@umusic.com>
Saul Shanabrook <s.shanabrook@gmail.com>
Sean P. Kane <spkane00@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Shaun Thompson <shaun.thompson@docker.com>
SHIMA Tatsuya <ts1s1andn@gmail.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Simon A. Eugster <simon.eu@gmail.com>
Solomon Hykes <sh.github.6811@hykes.org>
Sumner Warren <sumner.warren@gmail.com>
Sune Keller <absukl@almbrand.dk>
Talon Bowler <talon.bowler@docker.com>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <tibor@docker.com>
Tim Smith <tismith@rvohealth.com>
Timofey Kirillov <timofey.kirillov@flant.com>
Tyler Smith <tylerlwsmith@gmail.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Ulysses Souza <ulyssessouza@gmail.com>
Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
Wang Jinglei <morlay.null@gmail.com>
Wei <daviseago@gmail.com>
Wojciech M <wmiedzybrodzki@outlook.com>
Xiang Dai <764524258@qq.com>
Zachary Povey <zachary.povey@autotrader.co.uk>
zelahi <elahi.zuhayr@gmail.com>
Zero <tobewhatwewant@gmail.com>
zhyon404 <zhyong4@gmail.com>
Zsolt <zsolt.szeberenyi@figured.com>


@@ -1,168 +1,90 @@
# syntax=docker/dockerfile:1
# syntax=docker/dockerfile:1.2
ARG GO_VERSION=1.23
ARG XX_VERSION=1.5.0
ARG DOCKERD_VERSION=19.03
ARG CLI_VERSION=19.03
# for testing
ARG DOCKER_VERSION=27.4.0-rc.2
ARG DOCKER_VERSION_ALT_26=26.1.3
ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
ARG GOTESTSUM_VERSION=v1.12.0
ARG REGISTRY_VERSION=2.8.3
ARG BUILDKIT_VERSION=v0.17.2
ARG UNDOCK_VERSION=0.8.0
FROM docker:$DOCKERD_VERSION AS dockerd-release
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt
FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM crazymax/undock:$UNDOCK_VERSION AS undock
# xx is a helper for cross-compilation
FROM --platform=$BUILDPLATFORM tonistiigi/xx@sha256:21a61be4744f6531cb5f33b0e6f40ede41fa3a1b8c82d5946178f80cc84bfc04 AS xx
FROM golatest AS gobase
FROM --platform=$BUILDPLATFORM golang:1.16-alpine AS golatest
FROM golatest AS go-linux
FROM golatest AS go-darwin
FROM golatest AS go-windows-amd64
FROM golatest AS go-windows-386
FROM golatest AS go-windows-arm
FROM --platform=$BUILDPLATFORM golang:1.17beta1-alpine AS go-windows-arm64
FROM go-windows-${TARGETARCH} AS go-windows
FROM go-${TARGETOS} AS gobase
COPY --from=xx / /
RUN apk add --no-cache file git
ENV GOFLAGS=-mod=vendor
ENV CGO_ENABLED=0
WORKDIR /src
FROM gobase AS gotestsum
ARG GOTESTSUM_VERSION
ENV GOFLAGS=""
RUN --mount=target=/root/.cache,type=cache <<EOT
set -ex
go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
go install "github.com/wadey/gocovmerge@latest"
mkdir /out
/go/bin/gotestsum --version
mv /go/bin/gotestsum /out
mv /go/bin/gocovmerge /out
EOT
COPY --chmod=755 <<"EOF" /out/gotestsumandcover
#!/bin/sh
set -x
if [ -z "$GO_TEST_COVERPROFILE" ]; then
exec gotestsum "$@"
fi
coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
mkdir -p "$coverdir/helpers"
gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
ecode=$?
go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
rm "$coverdir/helpers-report.txt"
for f in "$coverdir/helpers"/*; do
rm "$f"
done
rmdir "$coverdir/helpers"
exit $ecode
EOF
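# Usage sketch for the wrapper above (paths hypothetical): without
# GO_TEST_COVERPROFILE set it is a plain gotestsum pass-through; with it set,
# helper coverage under $coverdir/helpers is merged into the main profile:
#   GO_TEST_COVERPROFILE=/testreports/coverage.txt gotestsumandcover \
#     --packages=./... -- -covermode=atomic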
FROM gobase AS buildx-version
RUN --mount=type=bind,target=. <<EOT
set -e
mkdir /buildx-version
echo -n "$(./hack/git-meta version)" | tee /buildx-version/version
echo -n "$(./hack/git-meta revision)" | tee /buildx-version/revision
EOT
RUN --mount=target=. \
PKG=github.com/docker/buildx VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
echo "-X ${PKG}/version.Version=${VERSION} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
echo -n "${VERSION}" | tee /tmp/.version;
FROM gobase AS buildx-build
ENV CGO_ENABLED=0
ARG TARGETPLATFORM
ARG GO_EXTRA_FLAGS
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
set -e
xx-go --wrap
DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
file /usr/bin/docker-buildx
xx-verify --static /usr/bin/docker-buildx
EOT
RUN --mount=target=. --mount=target=/root/.cache,type=cache \
--mount=target=/go/pkg/mod,type=cache \
--mount=source=/tmp/.ldflags,target=/tmp/.ldflags,from=buildx-version \
set -x; xx-go build -ldflags "$(cat /tmp/.ldflags)" -o /usr/bin/buildx ./cmd/buildx && \
xx-verify --static /usr/bin/buildx
FROM gobase AS test
ENV SKIP_INTEGRATION_TESTS=1
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
go test -v -coverprofile=/tmp/coverage.txt -covermode=atomic ./... && \
go tool cover -func=/tmp/coverage.txt
FROM scratch AS test-coverage
COPY --from=test /tmp/coverage.txt /coverage.txt
FROM scratch AS binaries-unix
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
FROM binaries-unix AS binaries-darwin
FROM binaries-unix AS binaries-freebsd
FROM binaries-unix AS binaries-linux
FROM binaries-unix AS binaries-openbsd
FROM scratch AS binaries-windows
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
FROM binaries-$TARGETOS AS binaries
# enable scanning for this stage
ARG BUILDKIT_SBOM_SCAN_STAGE=true
FROM gobase AS integration-test-base
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apk add --no-cache \
btrfs-progs \
e2fsprogs \
e2fsprogs-extra \
ip6tables \
iptables \
openssl \
shadow-uidmap \
xfsprogs \
xz
COPY --link --from=gotestsum /out /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=docker-engine / /usr/bin/
COPY --link --from=docker-cli / /usr/bin/
COPY --link --from=docker-engine-alt / /opt/docker-alt-26/
COPY --link --from=docker-cli-alt / /opt/docker-alt-26/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=undock /usr/local/bin/undock /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
ENV TEST_DOCKER_EXTRA="docker@26.1=/opt/docker-alt-26"
FROM integration-test-base AS integration-test
FROM buildx-build AS integration-tests
COPY . .
# Release
# FROM golang:1.12-alpine AS docker-cli-build
# RUN apk add -U git bash coreutils gcc musl-dev
# ENV CGO_ENABLED=0
# ARG REPO=github.com/tiborvass/cli
# ARG BRANCH=cli-plugin-aliases
# ARG CLI_VERSION
# WORKDIR /go/src/github.com/docker/cli
# RUN git clone git://$REPO . && git checkout $BRANCH
# RUN ./scripts/build/binary
FROM scratch AS binaries-unix
COPY --from=buildx-build /usr/bin/buildx /
FROM binaries-unix AS binaries-darwin
FROM binaries-unix AS binaries-linux
FROM scratch AS binaries-windows
COPY --from=buildx-build /usr/bin/buildx /buildx.exe
FROM binaries-$TARGETOS AS binaries
FROM --platform=$BUILDPLATFORM alpine AS releaser
WORKDIR /work
ARG TARGETPLATFORM
RUN --mount=from=binaries \
--mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
set -e
mkdir -p /out
cp buildx* "/out/buildx-$(cat /buildx-version/version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
EOT
--mount=source=/tmp/.version,target=/tmp/.version,from=buildx-version \
mkdir -p /out && cp buildx* "/out/buildx-$(cat /tmp/.version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
FROM scratch AS release
COPY --from=releaser /out/ /
# Shell
FROM docker:$DOCKER_VERSION AS dockerd-release
FROM alpine AS shell
FROM alpine AS demo-env
RUN apk add --no-cache iptables tmux git vim less openssh
RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
COPY ./hack/demo-env/entrypoint.sh /usr/local/bin
COPY ./hack/demo-env/tmux.conf /root/.tmux.conf
COPY --from=dockerd-release /usr/local/bin /usr/local/bin
#COPY --from=docker-cli-build /go/src/github.com/docker/cli/build/docker /usr/local/bin
WORKDIR /work
COPY ./hack/demo-env/examples .
COPY --from=binaries / /usr/local/bin/
VOLUME /var/lib/docker
ENTRYPOINT ["entrypoint.sh"]
FROM binaries
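A sketch of using the stages above locally (the output directory and image tag are arbitrary):

```bash
# Extract the client binary for the current platform from the binaries stage
docker buildx build --target binaries --output type=local,dest=./bin .

# Build the CI integration-test image defined by the newer Dockerfile
docker buildx build --target integration-test -t buildx-tests .
```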


@@ -152,8 +152,6 @@ made through a pull request.
people = [
"akihirosuda",
"crazy-max",
"jedevc",
"jsternberg",
"tiborvass",
"tonistiigi",
]
@@ -190,16 +188,6 @@ made through a pull request.
Email = "contact@crazymax.dev"
GitHub = "crazy-max"
[people.jedevc]
Name = "Justin Chadwell"
Email = "me@jedevc.com"
GitHub = "jedevc"
[people.jsternberg]
Name = "Jonathan Sternberg"
Email = "jonathan.sternberg@docker.com"
GitHub = "jsternberg"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"


@@ -1,74 +1,40 @@
ifneq (, $(BUILDX_BIN))
export BUILDX_CMD = $(BUILDX_BIN)
else ifneq (, $(shell docker buildx version))
export BUILDX_CMD = docker buildx
else ifneq (, $(shell which buildx))
export BUILDX_CMD = $(shell which buildx)
endif
export BUILDX_CMD ?= docker buildx
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
.PHONY: all
all: binaries
.PHONY: build
build:
./hack/build
.PHONY: shell
shell:
./hack/shell
.PHONY: $(BAKE_TARGETS)
$(BAKE_TARGETS):
$(BUILDX_CMD) bake $@
binaries:
./hack/binaries
binaries-cross:
EXPORT_LOCAL=cross-out ./hack/cross
cross:
./hack/cross
.PHONY: install
install: binaries
mkdir -p ~/.docker/cli-plugins
install bin/build/buildx ~/.docker/cli-plugins/docker-buildx
install bin/buildx ~/.docker/cli-plugins/docker-buildx
.PHONY: release
release:
./hack/release
lint:
./hack/lint
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
.PHONY: test
test:
./hack/test
.PHONY: test-unit
test-unit:
TESTPKGS=./... SKIP_INTEGRATION_TESTS=1 ./hack/test
validate-vendor:
./hack/validate-vendor
validate-docs:
./hack/validate-docs
validate-all: lint test validate-vendor validate-docs
.PHONY: test
test-integration:
TESTPKGS=./tests ./hack/test
.PHONY: test-driver
test-driver:
./hack/test-driver
.PHONY: vendor
vendor:
./hack/update-vendor
.PHONY: docs
docs:
./hack/update-docs
.PHONY: authors
authors:
$(BUILDX_CMD) bake update-authors
generate-authors:
./hack/generate-authors
.PHONY: mod-outdated
mod-outdated:
$(BUILDX_CMD) bake mod-outdated
.PHONY: generated-files
generated-files:
$(BUILDX_CMD) bake update-generated-files
.PHONY: vendor lint shell binaries install binaries-cross validate-all generate-authors validate-docs docs
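A typical local loop with the targets above, as a sketch (`BUILDX_BIN` is only consulted by the newer revision of the Makefile):

```bash
# Build the plugin and install it as a Docker CLI plugin
make binaries
make install
docker buildx version

# Point the bake-based targets at a specific buildx binary (newer Makefile)
BUILDX_BIN=./bin/build/buildx make lint
```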


@@ -1,453 +0,0 @@
# Project processing guide <!-- omit from toc -->
- [Project scope](#project-scope)
- [Labels](#labels)
- [Global](#global)
- [`area/`](#area)
- [`exp/`](#exp)
- [`impact/`](#impact)
- [`kind/`](#kind)
- [`needs/`](#needs)
- [`priority/`](#priority)
- [`status/`](#status)
- [Types of releases](#types-of-releases)
- [Feature releases](#feature-releases)
- [Release Candidates](#release-candidates)
- [Support Policy](#support-policy)
- [Contributing to Releases](#contributing-to-releases)
- [Patch releases](#patch-releases)
- [Milestones](#milestones)
- [Triage process](#triage-process)
- [Verify essential information](#verify-essential-information)
- [Classify the issue](#classify-the-issue)
- [Prioritization guidelines for `kind/bug`](#prioritization-guidelines-for-kindbug)
- [Issue lifecycle](#issue-lifecyle)
- [Examples](#examples)
- [Submitting a bug](#submitting-a-bug)
- [Pull request review process](#pull-request-review-process)
- [Handling stalled issues and pull requests](#handling-stalled-issues-and-pull-requests)
- [Moving to a discussion](#moving-to-a-discussion)
- [Workflow automation](#workflow-automation)
- [Exempting an issue/PR from stale bot processing](#exempting-an-issuepr-from-stale-bot-processing)
- [Updating dependencies](#updating-dependencies)
---
## Project scope
**Docker Buildx** is a Docker CLI plugin designed to extend build capabilities using BuildKit. It provides advanced features for building container images, supporting multiple builder instances, multi-node builds, and high-level build constructs. Buildx enhances the Docker build process, making it more efficient and flexible, and is compatible with both Docker and Kubernetes environments. Key features include:
- **Familiar user experience:** Buildx offers a user experience similar to the legacy `docker build` command, ensuring a smooth transition
- **Full BuildKit capabilities:** Leverage the full feature set of [`moby/buildkit`](https://github.com/moby/buildkit) when using the container driver
- **Multiple builder instances:** Supports the use of multiple builder instances, allowing concurrent builds and effective management and monitoring of these builders.
- **Multi-node builds:** Use multiple nodes to build cross-platform images
- **Compose integration:** Build complex, multi-services files as defined in compose
- **High-level build constructs via `bake`:** Introduces high-level build constructs for more complex build workflows
- **In-container driver support:** Support in-container drivers for both Docker and Kubernetes environments to support isolation/security.
## Labels
Below are common groups, labels, and their intended usage to support issues, pull requests, and discussion processing.
### Global
General attributes that can apply to nearly any issue or pull request.
| Label | Applies to | Description |
| ------------------- | ----------- | ------------------------------------------------------------------------- |
| `bot` | Issues, PRs | Created by a bot |
| `good first issue` | Issues | Suitable for first-time contributors |
| `help wanted` | Issues, PRs | Assistance requested |
| `lgtm` | PRs | “Looks good to me” approval |
| `stale` | Issues, PRs | The issue/PR has not had activity for a while |
| `rotten` | Issues, PRs | The issue/PR has not had activity since being marked stale and was closed |
| `frozen` | Issues, PRs | The issue/PR should be skipped by the stale-bot |
| `dco/no` | PRs | The PR is missing a developer certificate of origin sign-off |
### `area/`
Area or component of the project affected. Please note that the table below may not be inclusive of all current options.
| Label | Applies to | Description |
| ------------------------------ | ---------- | -------------------------- |
| `area/bake` | Any | `bake` |
| `area/bake/compose` | Any | `bake/compose` |
| `area/build` | Any | `build` |
| `area/builder` | Any | `builder` |
| `area/buildkit` | Any | Relates to `moby/buildkit` |
| `area/cache` | Any | `cache` |
| `area/checks` | Any | `checks` |
| `area/ci` | Any | Project CI |
| `area/cli` | Any | `cli` |
| `area/controller` | Any | `controller` |
| `area/debug` | Any | `debug` |
| `area/dependencies` | Any | Project dependencies |
| `area/dockerfile` | Any | `dockerfile` |
| `area/docs` | Any | `docs` |
| `area/driver` | Any | `driver` |
| `area/driver/docker` | Any | `driver/docker` |
| `area/driver/docker-container` | Any | `driver/docker-container` |
| `area/driver/kubernetes` | Any | `driver/kubernetes` |
| `area/driver/remote` | Any | `driver/remote` |
| `area/feature-parity` | Any | `feature-parity` |
| `area/github-actions` | Any | `github-actions` |
| `area/hack` | Any | Project hack/support |
| `area/imagetools` | Any | `imagetools` |
| `area/metrics` | Any | `metrics` |
| `area/moby` | Any | Relates to `moby/moby` |
| `area/project` | Any | Project support |
| `area/qemu` | Any | `qemu` |
| `area/tests` | Any | Project testing |
| `area/windows` | Any | `windows` |
### `exp/`
Estimated experience level to complete the item
| Label | Applies to | Description |
| ------------------ | ---------- | ------------------------------------------------------------------------------- |
| `exp/beginner` | Issue | Suitable for contributors new to the project or technology stack |
| `exp/intermediate` | Issue | Requires some familiarity with the project and technology |
| `exp/expert` | Issue | Requires deep understanding and advanced skills with the project and technology |
### `impact/`
Potential impact areas of the issue or pull request.
| Label | Applies to | Description |
| -------------------- | ---------- | -------------------------------------------------- |
| `impact/breaking` | PR | Change is API-breaking |
| `impact/changelog` | PR | When complete, the item should be in the changelog |
| `impact/deprecation` | PR | Change is a deprecation of a feature |
### `kind/`
The type of issue, pull request or discussion
| Label | Applies to | Description |
| ------------------ | ----------------- | ------------------------------------------------------- |
| `kind/bug` | Issue, PR | Confirmed bug |
| `kind/chore` | Issue, PR | Project support tasks |
| `kind/docs` | Issue, PR | Additions or modifications to the documentation |
| `kind/duplicate` | Any | Duplicate of another item |
| `kind/enhancement` | Any | Enhancement of an existing feature |
| `kind/feature` | Any | A brand new feature |
| `kind/maybe-bug` | Issue, PR | Unconfirmed bug, turns into kind/bug when confirmed |
| `kind/proposal` | Issue, Discussion | A proposed major change |
| `kind/refactor` | Issue, PR | Refactor of existing code |
| `kind/support` | Any | A question, discussion, or other user support item |
| `kind/tests` | Issue, PR | Additions or modifications to the project testing suite |
### `needs/`
Actions or missing requirements needed by the issue or pull request.
| Label | Applies to | Description |
| --------------------------- | ---------- | ----------------------------------------------------- |
| `needs/assignee` | Issue, PR | Needs an assignee |
| `needs/code-review` | PR | Needs review of code |
| `needs/design-review` | Issue, PR | Needs review of design |
| `needs/docs-review` | Issue, PR | Needs review by the documentation team |
| `needs/docs-update` | Issue, PR | Needs an update to the docs |
| `needs/follow-on-work` | Issue, PR | Needs follow-on work/PR |
| `needs/issue` | PR | Needs an issue |
| `needs/maintainer-decision` | Issue, PR | Needs maintainer discussion/decision before advancing |
| `needs/milestone` | Issue, PR | Needs milestone assignment |
| `needs/more-info` | Any | Needs more information from the author |
| `needs/more-investigation` | Issue, PR | Needs further investigation |
| `needs/priority` | Issue, PR | Needs priority assignment |
| `needs/pull-request` | Issue | Needs a pull request |
| `needs/rebase` | PR | Needs rebase to target branch |
| `needs/reproduction` | Issue, PR | Needs reproduction steps |
### `priority/`
Level of urgency of a `kind/bug` issue or pull request.
| Label | Applies to | Description |
| ------------- | ---------- | ----------------------------------------------------------------------- |
| `priority/P0` | Issue, PR | Urgent: Security, critical bugs, blocking issues. |
| `priority/P1` | Issue, PR | Important: This is a top priority and a must-have for the next release. |
| `priority/P2` | Issue, PR | Normal: Default priority |
### `status/`
Current lifecycle state of the issue or pull request.
| Label | Applies to | Description |
| --------------------- | ---------- | ---------------------------------------------------------------------- |
| `status/accepted` | Issue, PR | The issue has been reviewed and accepted for implementation |
| `status/active` | PR | The PR is actively being worked on by a maintainer or community member |
| `status/blocked` | Issue, PR | The issue/PR is blocked from advancing to another status |
| `status/do-not-merge` | PR | Should not be merged pending further review or changes |
| `status/transfer` | Any | Transferred to another project |
| `status/triage` | Any | The item needs to be sorted by maintainers |
| `status/wontfix` | Issue, PR | The issue/PR will not be fixed or addressed as described |
## Types of releases
This project has feature releases, patch releases, and security releases.
### Feature releases
Feature releases are made from the development branch, after which a release branch is cut for future patch releases; this may also occur during the code freeze period.
#### Release Candidates
Users can expect 2-3 release candidate (RC) test releases prior to a feature release. The first RC is typically released about one to two weeks before the final release.
#### Support Policy
Once a new feature release is cut, support for the previous feature release is discontinued. An exception may be made for urgent security releases that occur shortly after a new feature release. Buildx does not offer LTS (Long-Term Support) releases.
#### Contributing to Releases
Anyone can request that an issue or PR be included in the next feature or patch release milestone, provided it meets the necessary requirements.
### Patch releases
Patch releases should only include the most critical patches. Stability is vital, so everyone should always use the latest patch release.
If a fix is needed but does not qualify for a patch release because of its code size or other criteria that make it too unpredictable, we will prioritize cutting a new feature release sooner rather than making an exception for backporting.
The following PRs are included in patch releases:
- `priority/P0` fixes
- `priority/P1` fixes, assuming maintainers don't object because of the patch size
- `priority/P2` fixes, only if (both required):
  - proposed by a maintainer
  - the patch is trivial and self-contained
- Documentation-only patches
- Vendored dependency updates, only if:
  - fixing a (qualifying) bug or security issue in Buildx
  - the patch is small; otherwise, a forked version of the dependency with only the required patches
New features do not qualify for a patch release.
## Milestones
Milestones are used to help identify what releases a contribution will be in.
- The `v0.next` milestone collects unblocked items planned for the next 2-3 feature releases but not yet assigned to a specific version milestone.
- The `v0.backlog` milestone gathers all triaged items considered for the long-term (beyond the next 3 feature releases) or currently unfit for a future release due to certain conditions. These items may be blocked and need to be unblocked before progressing.
## Triage process
Triage provides an important way to contribute to an open-source project; this process also applies to pull requests submitted without a linked issue. Triage helps ensure work items are resolved quickly by:
- Ensuring the issue's intent and purpose are described precisely. This is necessary because it can be difficult for an issue to explain how an end user experiences a problem and what actions they took to arrive at the problem.
- Giving a contributor the information they need before they commit to resolving an issue.
- Lowering the issue count by preventing duplicate issues.
- Streamlining the development process by preventing duplicate discussions.
If you don't have time to code, consider helping with triage. The community will thank you for saving them time by spending some of yours. The same basic process should be applied upon receipt of a new issue.
1. Verify essential information
2. Classify the issue
3. Prioritize the issue
### Verify essential information
Before advancing the triage process, ensure the issue contains all necessary information to be properly understood and assessed. The required information may vary by issue type, but typically includes the system environment, version numbers, reproduction steps, expected outcomes, and actual results.
- **Exercising Judgment**: Use your best judgment to assess the issue description's completeness.
- **Communicating Needs**: If the information provided is insufficient, kindly request additional details from the author. Explain that this information is crucial for clarity and resolution of the issue, and apply the `needs/more-info` label to indicate that a response from the author is required.
### Classify the issue
An issue will typically have multiple labels. These are used to help communicate key information about context, requirements, and status. At a minimum, a properly classified issue should have:
- (Required) One or more [`area/*`](#area) labels
- (Required) One [`kind/*`](#kind) label to indicate the type of issue
- (Required if `kind/bug`) A [`priority/*`](#priority) label
When assigning a decision the following labels should be present:
- (Required) One [`status/*`](#status) label to indicate lifecycle status
Additional labels can provide more clarity:
- Zero or more [`needs/*`](#needs) labels to indicate missing items
- Zero or more [`impact/*`](#impact) labels
- One [`exp/*`](#exp) label
## Prioritization guidelines for `kind/bug`
When an issue or pull request of `kind/bug` is correctly categorized and attached to a milestone, the labels indicate the urgency with which it should be completed.
**priority/P0**
Fixing this item is the highest priority. A patch release will follow as soon as a patch is available and verified. This level is used exclusively for bugs.
Examples:
- Regression in a critical code path
- Panic in a critical code path
- Corruption in a critical code path or the rest of the system
- A leaked zero-day critical security vulnerability
**priority/P1**
Items with this label should be fixed with high priority and are almost always included in a patch release. Unless blocked by another issue, the patch release should happen within a week. This level is not used for features or enhancements.
Examples:
- Any regression, panic
- Measurable performance regression
- A major bug in a new feature in the latest release
- Incompatibility with upgraded external dependency
**priority/P2**
This is the default priority and is implied in the absence of a `priority/` label. Bugs with this priority should be included in the next feature release but may land in a patch release if they are ready and unlikely to impact other functionality adversely. Non-bug issues with this priority should also be included in the next feature release if they are available and ready.
Examples:
- Confirmed bugs
- Bugs in non-default configurations
- Most enhancements
## Issue lifecycle
```mermaid
flowchart LR
create([New issue]) --> triage
subgraph triage[Triage Loop]
review[Review]
end
subgraph decision[Decision]
accept[Accept]
close[Close]
end
triage -- if accepted --> accept[Assign status, milestone]
triage -- if rejected --> close[Assign status, close issue]
```
### Examples
#### Submitting a bug
To help illustrate the issue life cycle, let's walk through submitting an issue as a potential bug in CI that enters a feedback loop and is eventually accepted as P2 priority and placed on the backlog.
```mermaid
flowchart LR
new([New issue])
subgraph triage[Triage]
direction LR
create["Action: Submit issue via Bug form\nLabels: kind/maybe-bug, status/triage"]
style create text-align:left
subgraph review[Review]
direction TB
classify["Action: Maintainer reviews issue, requests more info\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style classify text-align:left
update["Action: Author updates issue\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style update text-align:left
classify --> update
update --> classify
end
create --> review
end
subgraph decision[Decision]
accept["Action: Maintainer reviews updates, accepts, assigns milestone\nLabels: kind/bug, priority/P2, status/accepted, area/*, impact/*"]
style accept text-align: left
end
new --> triage
triage --> decision
```
## Pull request review process
A thorough and timely review process for pull requests (PRs) is crucial for maintaining the integrity and quality of the project while fostering a collaborative environment.
- **Labeling**: Most labels should be inherited from a linked issue. If no issue is linked, an extended review process may be required.
- **Continuous Integration**: With few exceptions, it is crucial that all Continuous Integration (CI) workflows pass successfully.
- **Draft Status**: Incomplete or long-running PRs should be placed in "Draft" status. They may revert to "Draft" status upon initial review if significant rework is required.
```mermaid
flowchart LR
triage([Triage])
draft[Draft PR]
review[PR Review]
closed{{Close PR}}
merge{{Merge PR}}
subgraph feedback1[Feedback Loop]
draft
end
subgraph feedback2[Feedback Loop]
review
end
triage --> draft
draft --> review
review --> closed
review --> draft
review --> merge
```
## Handling stalled issues and pull requests
Unfortunately, some issues or pull requests can remain inactive for extended periods. To mitigate this, automation is employed to prompt both the author and maintainers, ensuring that all contributions receive appropriate attention.
**For Authors:**
- **Closure of Inactive Items**: If your issue or PR becomes irrelevant or is no longer needed, please close it to help keep the project clean.
- **Prompt Responses**: If additional information is requested, please respond promptly to facilitate progress.
**For Maintainers:**
- **Timely Responses**: Endeavor to address issues and PRs within a reasonable timeframe to keep the community actively engaged.
- **Engagement with Stale Issues**: If an issue becomes stale due to maintainer inaction, re-engage with the author to reassess and revitalize the discussion.
**Stale and Rotten Policy:**
- An issue or PR will be labeled as **`stale`** after 14 calendar days of inactivity. If it remains inactive for another 30 days, it will be labeled as **`rotten`** and closed.
- Authors whose issues or PRs have been closed are welcome to re-open them or create new ones and link to the original.
**Skipping Stale Processing:**
- To prevent an issue or PR from being marked as stale, label it as **`frozen`**.
**Exceptions to Stale Processing:**
- Issues or PRs marked as **`frozen`**.
- Issues or PRs assigned to a milestone.
## Moving to a discussion
Sometimes, an issue or pull request may not be the appropriate medium for what is essentially a discussion. In such cases, the issue or PR will either be converted to a discussion or a new discussion will be created. The original item will then be labeled appropriately (**`kind/discussion`** or **`kind/question`**) and closed.
If you believe this conversion was made in error, please express your concerns in the new discussion thread. If necessary, a reversal to the original issue or PR format can be facilitated.
## Workflow automation
To help expedite common operations, avoid errors, and reduce toil, some workflow automation is used by the project. This can include:
- Stale issue or pull request processing
- Auto-labeling actions
- Auto-response actions
- Label carry over from issue to pull request
### Exempting an issue/PR from stale bot processing
The stale item handling is configured in the [repository](link-to-config-file). To exempt an issue or PR from stale processing you can:
- Add the item to a milestone
- Add the `frozen` label to the item
## Updating dependencies
- **Runtime Dependencies**: Use the latest stable release available when the first Release Candidate (RC) of a new feature release is cut. For patch releases, update to the latest corresponding patch release of the dependency.
- **Other Dependencies**: Updating to the latest patch release in the development branch is always permitted. Updates to a new feature release require justification, unless the dependency is outdated. Prefer tagged versions of dependencies unless a specific untagged commit is needed. Go modules should specify the lowest compatible version; there is no requirement to update all dependencies to their latest versions before cutting a new Buildx feature release. A typical update flow is sketched after this list.
- **Patch Releases**: Vendored dependency updates are considered for patch releases, except in the rare cases specified previously.
- **Security Considerations**: A security scanner report indicating a non-exploitable issue via Buildx does not justify backports.
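
As a sketch, updating a vendored dependency typically means bumping the Go module to a tagged release and re-vendoring (the module path below is hypothetical):

```console
# Pin the dependency to a tagged release, then refresh the vendor tree
$ go get github.com/example/dep@v1.2.3
$ make vendor
```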

README.md
View File

@@ -1,13 +1,11 @@
# buildx
[![GitHub release](https://img.shields.io/github/release/docker/buildx.svg?style=flat-square)](https://github.com/docker/buildx/releases/latest)
[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?style=flat-square&logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/buildx)
[![Build Status](https://img.shields.io/github/actions/workflow/status/docker/buildx/build.yml?branch=master&label=build&logo=github&style=flat-square)](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/buildx?style=flat-square)](https://goreportcard.com/report/github.com/docker/buildx)
[![codecov](https://img.shields.io/codecov/c/github/docker/buildx?logo=codecov&style=flat-square)](https://codecov.io/gh/docker/buildx)
[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/buildx)
[![Build Status](https://github.com/docker/buildx/workflows/build/badge.svg)](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/buildx)](https://goreportcard.com/report/github.com/docker/buildx)
[![codecov](https://codecov.io/gh/docker/buildx/branch/master/graph/badge.svg)](https://codecov.io/gh/docker/buildx)
`buildx` is a Docker CLI plugin for extended build capabilities with
[BuildKit](https://github.com/moby/buildkit).
`buildx` is a Docker CLI plugin for extended build capabilities with [BuildKit](https://github.com/moby/buildkit).
Key features:
@@ -22,130 +20,74 @@ Key features:
# Table of Contents
- [Installing](#installing)
- [Windows and macOS](#windows-and-macos)
- [Linux packages](#linux-packages)
- [Manual download](#manual-download)
- [Dockerfile](#dockerfile)
- [Set buildx as the default builder](#set-buildx-as-the-default-builder)
- [Docker](#docker)
- [Binary release](#binary-release)
- [From `Dockerfile`](#from-dockerfile)
- [Building](#building)
- [with Docker 18.09+](#with-docker-1809)
- [with buildx or Docker 19.03](#with-buildx-or-docker-1903)
- [Getting started](#getting-started)
- [Building with buildx](#building-with-buildx)
- [Working with builder instances](#working-with-builder-instances)
- [Building multi-platform images](#building-multi-platform-images)
- [Reference](docs/reference/buildx.md)
- [`buildx bake`](docs/reference/buildx_bake.md)
- [`buildx build`](docs/reference/buildx_build.md)
- [`buildx create`](docs/reference/buildx_create.md)
- [`buildx du`](docs/reference/buildx_du.md)
- [`buildx imagetools`](docs/reference/buildx_imagetools.md)
- [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
- [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
- [`buildx inspect`](docs/reference/buildx_inspect.md)
- [High-level build options](#high-level-build-options)
- [Documentation](docs/reference)
- [`buildx build [OPTIONS] PATH | URL | -`](docs/reference/buildx_build.md)
- [`buildx create [OPTIONS] [CONTEXT|ENDPOINT]`](docs/reference/buildx_create.md)
- [`buildx use NAME`](docs/reference/buildx_use.md)
- [`buildx inspect [NAME]`](docs/reference/buildx_inspect.md)
- [`buildx ls`](docs/reference/buildx_ls.md)
- [`buildx prune`](docs/reference/buildx_prune.md)
- [`buildx rm`](docs/reference/buildx_rm.md)
- [`buildx stop`](docs/reference/buildx_stop.md)
- [`buildx use`](docs/reference/buildx_use.md)
- [`buildx version`](docs/reference/buildx_version.md)
- [`buildx stop [NAME]`](docs/reference/buildx_stop.md)
- [`buildx rm [NAME]`](docs/reference/buildx_rm.md)
- [`buildx bake [OPTIONS] [TARGET...]`](docs/reference/buildx_bake.md)
- [`buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]`](docs/reference/buildx_imagetools_create.md)
- [`buildx imagetools inspect NAME`](docs/reference/buildx_imagetools_inspect.md)
- [Setting buildx as default builder in Docker 19.03+](#setting-buildx-as-default-builder-in-docker-1903)
- [Contributing](#contributing)
For more information on how to use Buildx, see
[Docker Build docs](https://docs.docker.com/build/).
# Installing
Using `buildx` with Docker requires Docker engine 19.03 or newer.
Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer. A limited set of functionality works with older versions of Docker when invoking the binary directly.
> [!WARNING]
> Using an incompatible version of Docker may result in unexpected behavior,
> and will likely cause issues, especially when using Buildx builders with more
> recent versions of BuildKit.
### Docker
## Windows and macOS
`buildx` comes bundled with Docker Desktop and in latest Docker CE packages, but may not be included in all Linux distros (in which case follow the binary release instructions).
Docker Buildx is included in [Docker Desktop](https://docs.docker.com/desktop/)
for Windows and macOS.
### Binary release
## Linux packages
Download the latest binary release from https://github.com/docker/buildx/releases/latest and copy it to `~/.docker/cli-plugins` folder with name `docker-buildx`.
Docker Engine package repositories contain Docker Buildx packages when installed according to the
[Docker Engine install documentation](https://docs.docker.com/engine/install/). Install the
`docker-buildx-plugin` package to install the Buildx plugin.
Change the permission to execute:
```sh
chmod a+x ~/.docker/cli-plugins/docker-buildx
```
## Manual download
### From `Dockerfile`
> [!IMPORTANT]
> This section is for unattended installation of the buildx component. These
> instructions are mostly suitable for testing purposes. We do not recommend
> installing buildx using manual download in production environments as they
> will not be updated automatically with security updates.
>
> On Windows and macOS, we recommend that you install [Docker Desktop](https://docs.docker.com/desktop/)
> instead. For Linux, we recommend that you follow the [instructions specific for your distribution](#linux-packages).
Here is how to use buildx inside a Dockerfile through the [`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:
You can also download the latest binary from the [GitHub releases page](https://github.com/docker/buildx/releases/latest).
Rename the relevant binary and copy it to the destination matching your OS:
| OS | Binary name | Destination folder |
| -------- | -------------------- | -----------------------------------------|
| Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
Or copy it into one of these folders to install it system-wide:
On Unix environments:
* `/usr/local/lib/docker/cli-plugins` OR `/usr/local/libexec/docker/cli-plugins`
* `/usr/lib/docker/cli-plugins` OR `/usr/libexec/docker/cli-plugins`
On Windows:
* `C:\ProgramData\Docker\cli-plugins`
* `C:\Program Files\Docker\cli-plugins`
> [!NOTE]
> On Unix environments, it may also be necessary to make it executable with `chmod +x`:
> ```shell
> $ chmod +x ~/.docker/cli-plugins/docker-buildx
> ```
## Dockerfile
Here is how to install and use Buildx inside a Dockerfile through the
[`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:
```dockerfile
# syntax=docker/dockerfile:1
```Dockerfile
FROM docker
COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version
```
# Set buildx as the default builder
Running the command [`docker buildx install`](docs/reference/buildx_install.md)
sets up the `docker builder` command as an alias to `docker buildx build`. This
results in the ability to have `docker build` use the current buildx builder.
To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_uninstall.md).
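
A minimal sketch of enabling and later removing the alias:

```console
$ docker buildx install
$ docker build .        # now runs through buildx
$ docker buildx uninstall
```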
# Building
```console
# Buildx 0.6+
$ docker buildx bake "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv ./bin/build/buildx ~/.docker/cli-plugins/docker-buildx
# Docker 19.03+
$ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
### with buildx or Docker 19.03+
```
$ export DOCKER_BUILDKIT=1
$ docker build --platform=local -o . git://github.com/docker/buildx
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx
```
# Local
$ git clone https://github.com/docker/buildx.git && cd buildx
### with Docker 18.09+
```
$ git clone git://github.com/docker/buildx && cd buildx
$ make install
```
@@ -153,146 +95,65 @@ $ make install
## Building with buildx
Buildx is a Docker CLI plugin that extends the `docker build` command with the
full support of the features provided by [Moby BuildKit](https://github.com/moby/buildkit)
builder toolkit. It provides the same user experience as `docker build` with
many new features like creating scoped builder instances and building against
multiple nodes concurrently.
Buildx is a Docker CLI plugin that extends the `docker build` command with the full support of the features provided by [Moby BuildKit](https://github.com/moby/buildkit) builder toolkit. It provides the same user experience as `docker build` with many new features like creating scoped builder instances and building against multiple nodes concurrently.
After installation, buildx can be accessed through the `docker buildx` command
with Docker 19.03. `docker buildx build` is the command for starting a new
build. With Docker versions older than 19.03, the buildx binary can be called
directly to access the `docker buildx` subcommands.
After installation, buildx can be accessed through the `docker buildx` command with Docker 19.03. `docker buildx build` is the command for starting a new build. With Docker versions older than 19.03 buildx binary can be called directly to access the `docker buildx` subcommands.
```console
```
$ docker buildx build .
[+] Building 8.4s (23/32)
=> ...
```
Buildx will always build using the BuildKit engine and does not require
`DOCKER_BUILDKIT=1` environment variable for starting builds.
The `docker buildx build` command supports features available for `docker build`,
including features such as outputs configuration, inline build caching, and
specifying target platform. In addition, Buildx also supports new features that
are not yet available for regular `docker build` like building manifest lists,
distributed caching, and exporting build results to OCI image tarballs.
Buildx will always build using the BuildKit engine and does not require `DOCKER_BUILDKIT=1` environment variable for starting builds.
Buildx is flexible and can be run in different configurations that are exposed
through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
Buildx build command supports the features available for `docker build` including the new features in Docker 19.03 such as outputs configuration, inline build caching or specifying target platform. In addition, buildx supports new features not yet available for regular `docker build` like building manifest lists, distributed caching, exporting build results to OCI image tarballs etc.
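
As an illustration, a single invocation can combine a target platform, a tag, and a registry push (the image name is hypothetical):

```console
$ docker buildx build --platform linux/arm64 -t registry.example.com/app:latest --push .
```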
We currently support the following drivers:
- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
Buildx is supposed to be flexible and can be run in different configurations that are exposed through a driver concept. Currently, we support a "docker" driver that uses the BuildKit library bundled into the Docker daemon binary, and a "docker-container" driver that automatically launches BuildKit inside a Docker container. We plan to add more drivers in the future, for example, one that would allow running buildx inside an (unprivileged) container.
The user experience of using buildx is very similar across drivers, but there are some features that are not currently supported by the "docker" driver, because the BuildKit library bundled into docker daemon currently uses a different storage component. In contrast, all images built with "docker" driver are automatically added to the "docker images" view by default, whereas when using other drivers the method for outputting an image needs to be selected with `--output`.
For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
## Working with builder instances
By default, buildx will initially use the `docker` driver if it is supported,
providing a very similar user experience to the native `docker build`. Note that
you must use a local shared daemon to build your applications.
By default, buildx will initially use the "docker" driver if it is supported, providing a very similar user experience to the native `docker build`. But using a local shared daemon is only one way to build your applications.
Buildx allows you to create new instances of isolated builders. This can be
used for getting a scoped environment for your CI builds that does not change
the state of the shared daemon or for isolating the builds for different
projects. You can create a new instance for a set of remote nodes, forming a
build farm, and quickly switch between them.
Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.
You can create new instances using the [`docker buildx create`](docs/reference/buildx_create.md)
command. This creates a new builder instance with a single node based on your
current configuration.
New instances can be created with `docker buildx create` command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the `DOCKER_HOST` or remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the `inspect`, `stop` and `rm` commands and list all available builders with `ls`. After creating a new builder you can also append new nodes to it.
To use a remote node you can specify the `DOCKER_HOST` or the remote context name
while creating the new builder. After creating a new instance, you can manage its
lifecycle using the [`docker buildx inspect`](docs/reference/buildx_inspect.md),
[`docker buildx stop`](docs/reference/buildx_stop.md), and
[`docker buildx rm`](docs/reference/buildx_rm.md) commands. To list all
available builders, use [`buildx ls`](docs/reference/buildx_ls.md). After
creating a new builder you can also append new nodes to it.
To switch between different builders, use `docker buildx use <name>`. After running this command the build commands would automatically keep using this builder.
To switch between different builders, use [`docker buildx use <name>`](docs/reference/buildx_use.md).
After running this command, the build commands will automatically use this
builder.
Docker also features a [`docker context`](https://docs.docker.com/engine/reference/commandline/context/)
command that can be used for giving names for remote Docker API endpoints.
Buildx integrates with `docker context` so that all of your contexts
automatically get a default builder instance. While creating a new builder
instance or when adding a node to it you can also set the context name as the
target.
Docker 19.03 also features a new `docker context` command that can be used for giving names for remote Docker API endpoints. Buildx integrates with `docker context` so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it you can also set the context name as the target.
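
A typical builder lifecycle, as a sketch (the builder name is arbitrary):

```console
$ docker buildx create --name mybuilder --driver docker-container
$ docker buildx use mybuilder
$ docker buildx inspect --bootstrap
$ docker buildx ls
$ docker buildx rm mybuilder
```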
## Building multi-platform images
BuildKit is designed to work well for building for multiple platforms and not
only for the architecture and operating system that the user invoking the build
happens to run.
BuildKit is designed to work well for building for multiple platforms and not only for the architecture and operating system that the user invoking the build happens to run.
When you invoke a build, you can set the `--platform` flag to specify the target
platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
`darwin/amd64`).
When invoking a build, the `--platform` flag can be used to specify the target platform for the build output (e.g. linux/amd64, linux/arm64, darwin/amd64). When the current builder instance is backed by the "docker-container" driver, multiple platforms can be specified together. In this case, a manifest list will be built, containing images for all of the specified architectures. When this image is used in `docker run` or `docker service`, Docker will pick the correct image based on the node's platform.
When the current builder instance is backed by the `docker-container` or
`kubernetes` driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
or [`docker service`](https://docs.docker.com/engine/reference/commandline/service/),
Docker picks the correct image based on the node's platform.
Multi-platform images can be built by mainly three different strategies that are all supported by buildx and Dockerfiles. You can use the QEMU emulation support in the kernel, build on multiple native nodes using the same builder instance or use a stage in Dockerfile to cross-compile to different architectures.
You can build multi-platform images using three different strategies that are
supported by Buildx and Dockerfiles:
QEMU is the easiest way to get started if your node already supports it (e.g. if you are using Docker Desktop). It requires no changes to your Dockerfile and BuildKit will automatically detect the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture it will automatically load it through a binary registered in the binfmt_misc handler. For QEMU binaries registered with binfmt_misc on the host OS to work transparently inside containers they must be registered with the fix_binary flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check for proper registration by checking if `F` is among the flags in `/proc/sys/fs/binfmt_misc/qemu-*`. While Docker Desktop comes preconfigured with binfmt_misc support for additional platforms, for other installations it likely needs to be installed using [`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt) image.
1. Using the QEMU emulation support in the kernel
2. Building on multiple native nodes using the same builder instance
3. Using a stage in Dockerfile to cross-compile to different architectures
QEMU is the easiest way to get started if your node already supports it (for
example, if you are using Docker Desktop). It requires no changes to your
Dockerfile and BuildKit automatically detects the secondary architectures that
are available. When BuildKit needs to run a binary for a different architecture,
it automatically loads it through a binary registered in the `binfmt_misc`
handler.
For QEMU binaries registered with `binfmt_misc` on the host OS to work
transparently inside containers they must be registered with the `fix_binary`
flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check
for proper registration by checking if `F` is among the flags in
`/proc/sys/fs/binfmt_misc/qemu-*`. While Docker Desktop comes preconfigured
with `binfmt_misc` support for additional platforms, for other installations
it likely needs to be installed using [`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt)
image.
```console
```
$ docker run --privileged --rm tonistiigi/binfmt --install all
```
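
To verify the registration flags for a particular architecture handler, for example:

```console
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64   # look for F in the flags line
```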
Using multiple native nodes provides better support for more complicated cases
that are not handled by QEMU and generally have better performance. You can
add additional nodes to the builder instance using the `--append` flag.
Using multiple native nodes provides better support for more complicated cases not handled by QEMU and generally have better performance. Additional nodes can be added to the builder instance with `--append` flag.
Assuming contexts `node-amd64` and `node-arm64` exist in `docker context ls`:
```console
```
# assuming contexts node-amd64 and node-arm64 exist in "docker context ls"
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
```
Finally, depending on your project, the language that you use may have good
support for cross-compilation. In that case, multi-stage builds in Dockerfiles
can be effectively used to build binaries for the platform specified with
`--platform` using the native architecture of the build node. A list of build
arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically
inside your Dockerfile and can be leveraged by the processes running as part
of your build.
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. List of build arguments like `BUILDPLATFORM` and `TARGETPLATFORM` are available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.
```dockerfile
# syntax=docker/dockerfile:1
```
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
@@ -301,12 +162,25 @@ FROM alpine
COPY --from=build /log /log
```
You can also use [`tonistiigi/xx`](https://github.com/tonistiigi/xx) Dockerfile
cross-compilation helpers for more advanced use-cases.
## High-level build options
See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.
Buildx also aims to provide support for higher-level build concepts that go beyond invoking a single build command. We want to support building all the images in your application together and let users define project-specific reusable build flows that can then be easily invoked by anyone.
BuildKit has great support for efficiently handling multiple concurrent build requests and deduplicating work. While build commands can be combined with general-purpose command runners (e.g. `make`), these tools generally invoke builds in sequence and therefore can't leverage the full potential of BuildKit parallelization or combine BuildKit's output for the user. For this use case we have added a command called `docker buildx bake`.
Currently, the bake command supports building images from compose files, similar to `compose build` but allowing all the services to be built concurrently as part of a single request.
There is also support for custom build rules from HCL/JSON files allowing better code reuse and different target groups. The design of bake is in very early stages and we are looking for feedback from users.
[`buildx bake` Reference Docs](docs/reference/buildx_bake.md)
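
For instance (file names hypothetical), bake can drive a compose file or HCL targets:

```console
$ docker buildx bake -f docker-compose.yml
$ docker buildx bake -f docker-bake.hcl webapp db
```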
# Setting buildx as default builder in Docker 19.03+
Running `docker buildx install` sets up `docker builder` command as an alias to `docker buildx`. This results in the ability to have `docker build` use the current buildx builder.
To remove this alias, you can run `docker buildx uninstall`.
# Contributing

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,59 +1,48 @@
package bake
import (
"context"
"fmt"
"os"
"path/filepath"
"sort"
"reflect"
"strings"
"github.com/compose-spec/compose-go/v2/consts"
"github.com/compose-spec/compose-go/v2/dotenv"
"github.com/compose-spec/compose-go/v2/loader"
composetypes "github.com/compose-spec/compose-go/v2/types"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
"github.com/compose-spec/compose-go/loader"
compose "github.com/compose-spec/compose-go/types"
)
func ParseComposeFiles(fs []File) (*Config, error) {
envs, err := composeEnv()
if err != nil {
return nil, err
}
var cfgs []composetypes.ConfigFile
for _, f := range fs {
cfgs = append(cfgs, composetypes.ConfigFile{
Filename: f.Name,
Content: f.Data,
})
}
return ParseCompose(cfgs, envs)
func parseCompose(dt []byte) (*compose.Project, error) {
return loader.Load(compose.ConfigDetails{
ConfigFiles: []compose.ConfigFile{
{
Content: dt,
},
},
Environment: envMap(os.Environ()),
}, func(options *loader.Options) {
options.SkipNormalization = true
})
}
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
projectName := "bake"
if v, ok := envs[consts.ComposeProjectName]; ok && v != "" {
projectName = v
func envMap(env []string) map[string]string {
result := make(map[string]string, len(env))
for _, s := range env {
kv := strings.SplitN(s, "=", 2)
if len(kv) != 2 {
continue
}
options.SetProjectName(projectName, false)
options.SkipNormalization = true
options.Profiles = []string{"*"}
})
result[kv[0]] = kv[1]
}
return result
}
func ParseCompose(dt []byte) (*Config, error) {
cfg, err := parseCompose(dt)
if err != nil {
return nil, err
}
var c Config
var zeroBuildConfig compose.BuildConfig
if len(cfg.Services) > 0 {
c.Groups = []*Group{}
c.Targets = []*Target{}
@@ -61,14 +50,13 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g := &Group{Name: "default"}
for _, s := range cfg.Services {
s := s
if s.Build == nil {
continue
}
targetName := sanitizeTargetName(s.Name)
if err = validateTargetName(targetName); err != nil {
return nil, errors.Wrapf(err, "invalid service name %q", targetName)
if s.Build == nil || reflect.DeepEqual(s.Build, zeroBuildConfig) {
// if not make sure they're setting an image or it's invalid d-c.yml
if s.Image == "" {
return nil, fmt.Errorf("compose file invalid: service %s has neither an image nor a build context specified. At least one must be provided", s.Name)
}
continue
}
var contextPathP *string
@@ -81,324 +69,45 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
dockerfilePath := s.Build.Dockerfile
dockerfilePathP = &dockerfilePath
}
var dockerfileInlineP *string
if s.Build.DockerfileInline != "" {
dockerfileInline := s.Build.DockerfileInline
dockerfileInlineP = &dockerfileInline
}
var additionalContexts map[string]string
if s.Build.AdditionalContexts != nil {
additionalContexts = map[string]string{}
for k, v := range s.Build.AdditionalContexts {
additionalContexts[k] = v
}
}
var shmSize *string
if s.Build.ShmSize > 0 {
shmSizeBytes := dockeropts.MemBytes(s.Build.ShmSize)
shmSizeStr := shmSizeBytes.String()
shmSize = &shmSizeStr
}
var networkModeP *string
if s.Build.Network != "" {
networkMode := s.Build.Network
networkModeP = &networkMode
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
ulimit, err := units.ParseUlimit(fmt.Sprintf("%s=%d:%d", n, u.Soft, u.Hard))
if err != nil {
return nil, err
}
ulimits = append(ulimits, ulimit.String())
}
}
var ssh []string
for _, bkey := range s.Build.SSH {
sshkey := composeToBuildkitSSH(bkey)
ssh = append(ssh, sshkey)
}
sort.Strings(ssh)
var secrets []string
for _, bs := range s.Build.Secrets {
secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
if err != nil {
return nil, err
}
secrets = append(secrets, secret)
}
// compose does not support nil values for labels
labels := map[string]*string{}
for k, v := range s.Build.Labels {
v := v
labels[k] = &v
}
g.Targets = append(g.Targets, targetName)
g.Targets = append(g.Targets, s.Name)
t := &Target{
Name: targetName,
Context: contextPathP,
Contexts: additionalContexts,
Dockerfile: dockerfilePathP,
DockerfileInline: dockerfileInlineP,
Tags: s.Build.Tags,
Labels: labels,
Name: s.Name,
Context: contextPathP,
Dockerfile: dockerfilePathP,
Labels: s.Build.Labels,
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
if val, ok := s.Environment[val]; ok && val != nil {
return *val, true
}
val, ok := cfg.Environment[val]
return val, ok
})),
CacheFrom: s.Build.CacheFrom,
CacheTo: s.Build.CacheTo,
NetworkMode: networkModeP,
SSH: ssh,
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err
CacheFrom: s.Build.CacheFrom,
// TODO: add platforms
}
if s.Build.Target != "" {
target := s.Build.Target
t.Target = &target
}
if len(t.Tags) == 0 && s.Image != "" {
if s.Image != "" {
t.Tags = []string{s.Image}
}
c.Targets = append(c.Targets, t)
}
c.Groups = append(c.Groups, g)
}
return &c, nil
}
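// validateComposeFile reports whether fn should be treated as a compose file
// (by extension, falling back to a parse attempt) along with any validation error.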
func validateComposeFile(dt []byte, fn string) (bool, error) {
envs, err := composeEnv()
if err != nil {
return true, err
}
fnl := strings.ToLower(fn)
if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
return true, validateCompose(dt, envs)
}
if strings.HasSuffix(fnl, ".json") || strings.HasSuffix(fnl, ".hcl") {
return false, nil
}
err = validateCompose(dt, envs)
return err == nil, err
}
func validateCompose(dt []byte, envs map[string]string) error {
_, err := loader.Load(composetypes.ConfigDetails{
ConfigFiles: []composetypes.ConfigFile{
{
Content: dt,
},
},
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
// consistency is checked later in ParseCompose to ensure multiple
// compose files can be merged together
options.SkipConsistencyCheck = true
})
return err
}
func composeEnv() (map[string]string, error) {
envs := sliceToMap(os.Environ())
if wd, err := os.Getwd(); err == nil {
envs, err = loadDotEnv(envs, wd)
if err != nil {
return nil, err
}
}
return envs, nil
}
func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string, error) {
if curenv == nil {
curenv = make(map[string]string)
}
ef, err := filepath.Abs(filepath.Join(workingDir, ".env"))
if err != nil {
return nil, err
}
if _, err = os.Stat(ef); os.IsNotExist(err) {
return curenv, nil
} else if err != nil {
return nil, err
}
dt, err := os.ReadFile(ef)
if err != nil {
return nil, err
}
envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
if err != nil {
return nil, err
}
for k, v := range envs {
if _, set := curenv[k]; set {
continue
}
curenv[k] = v
}
return curenv, nil
}
func flatten(in composetypes.MappingWithEquals) map[string]*string {
func flatten(in compose.MappingWithEquals) compose.Mapping {
if len(in) == 0 {
return nil
}
out := map[string]*string{}
out := compose.Mapping{}
for k, v := range in {
if v == nil {
continue
}
out[k] = v
out[k] = *v
}
return out
}
// xbake Compose build extension provides fields not (yet) available in
// Compose build specification: https://github.com/compose-spec/compose-spec/blob/master/build.md
type xbake struct {
Tags stringArray `yaml:"tags,omitempty"`
CacheFrom stringArray `yaml:"cache-from,omitempty"`
CacheTo stringArray `yaml:"cache-to,omitempty"`
Secrets stringArray `yaml:"secret,omitempty"`
SSH stringArray `yaml:"ssh,omitempty"`
Platforms stringArray `yaml:"platforms,omitempty"`
Outputs stringArray `yaml:"output,omitempty"`
Pull *bool `yaml:"pull,omitempty"`
NoCache *bool `yaml:"no-cache,omitempty"`
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
Contexts stringMap `yaml:"contexts,omitempty"`
// don't forget to update documentation if you add a new field:
// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
}
type stringMap map[string]string
type stringArray []string
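// UnmarshalYAML accepts an x-bake field given either as a YAML sequence or as
// a single whitespace-separated string.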
func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
var multi []string
err := unmarshal(&multi)
if err != nil {
var single string
if err := unmarshal(&single); err != nil {
return err
}
*sa = strings.Fields(single)
} else {
*sa = multi
}
return nil
}
// composeExtTarget converts Compose build extension x-bake to bake Target
// https://github.com/compose-spec/compose-spec/blob/master/spec.md#extension
func (t *Target) composeExtTarget(exts map[string]interface{}) error {
var xb xbake
ext, ok := exts["x-bake"]
if !ok || ext == nil {
return nil
}
yb, _ := yaml.Marshal(ext)
if err := yaml.Unmarshal(yb, &xb); err != nil {
return err
}
if len(xb.Tags) > 0 {
t.Tags = dedupSlice(append(t.Tags, xb.Tags...))
}
if len(xb.CacheFrom) > 0 {
t.CacheFrom = dedupSlice(append(t.CacheFrom, xb.CacheFrom...))
}
if len(xb.CacheTo) > 0 {
t.CacheTo = dedupSlice(append(t.CacheTo, xb.CacheTo...))
}
if len(xb.Secrets) > 0 {
t.Secrets = dedupSlice(append(t.Secrets, xb.Secrets...))
}
if len(xb.SSH) > 0 {
t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
sort.Strings(t.SSH)
}
if len(xb.Platforms) > 0 {
t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
}
if len(xb.Outputs) > 0 {
t.Outputs = dedupSlice(append(t.Outputs, xb.Outputs...))
}
if xb.Pull != nil {
t.Pull = xb.Pull
}
if xb.NoCache != nil {
t.NoCache = xb.NoCache
}
if len(xb.NoCacheFilter) > 0 {
t.NoCacheFilter = dedupSlice(append(t.NoCacheFilter, xb.NoCacheFilter...))
}
if len(xb.Contexts) > 0 {
t.Contexts = dedupMap(t.Contexts, xb.Contexts)
}
return nil
}
// composeToBuildkitSecret converts a secret from compose format to buildkit's
// csv format.
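// e.g. a "token" secret backed by environment variable ENV_TOKEN becomes
// "id=token,env=ENV_TOKEN".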
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
if psecret.External {
return "", errors.Errorf("unsupported external secret %s", psecret.Name)
}
var bkattrs []string
if inp.Source != "" {
bkattrs = append(bkattrs, "id="+inp.Source)
}
if psecret.File != "" {
bkattrs = append(bkattrs, "src="+psecret.File)
}
if psecret.Environment != "" {
bkattrs = append(bkattrs, "env="+psecret.Environment)
}
return strings.Join(bkattrs, ","), nil
}
// composeToBuildkitSSH converts an SSH key from compose format to buildkit's
// csv format.
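// e.g. an SSH key with ID "key" and path "path/to/key" becomes "key=path/to/key".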
func composeToBuildkitSSH(sshKey composetypes.SSHKey) string {
var bkattrs []string
bkattrs = append(bkattrs, sshKey.ID)
if sshKey.Path != "" {
bkattrs = append(bkattrs, sshKey.Path)
}
return strings.Join(bkattrs, "=")
}

View File

@@ -2,12 +2,9 @@ package bake
import (
"os"
"path/filepath"
"sort"
"testing"
composetypes "github.com/compose-spec/compose-go/v2/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -21,71 +18,31 @@ services:
webapp:
build:
context: ./dir
additional_contexts:
foo: ./bar
dockerfile: Dockerfile-alternate
network:
none
args:
buildno: 123
cache_from:
- type=local,src=path/to/cache
cache_to:
- type=local,dest=path/to/cache
ssh:
- key=path/to/key
- default
secrets:
- token
- aws
webapp2:
profiles:
- test
build:
context: ./dir
dockerfile_inline: |
FROM alpine
secrets:
token:
environment: ENV_TOKEN
aws:
file: /root/.aws/credentials
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.Equal(t, c.Groups[0].Name, "default")
sort.Strings(c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp", "webapp2"}, c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 3, len(c.Targets))
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, "db", *c.Targets[0].Context)
require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, "dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "bar"}, c.Targets[1].Contexts)
require.Equal(t, "./dir", *c.Targets[1].Context)
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[1].SSH)
require.Equal(t, []string{
"id=token,env=ENV_TOKEN",
"id=aws,src=/root/.aws/credentials",
}, c.Targets[1].Secrets)
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, "dir", *c.Targets[2].Context)
require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
require.Equal(t, "123", c.Targets[1].Args["buildno"])
}
func TestNoBuildOutOfTreeService(t *testing.T) {
@@ -96,10 +53,9 @@ services:
webapp:
build: ./db
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, 1, len(c.Targets))
}
func TestParseComposeTarget(t *testing.T) {
@@ -115,7 +71,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
@@ -140,15 +96,15 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, c.Targets[0].Name, "db")
require.Equal(t, "db", *c.Targets[0].Target)
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, "webapp", *c.Targets[1].Target)
}
@@ -167,26 +123,35 @@ services:
BRB: FOO
`)
t.Setenv("FOO", "bar")
t.Setenv("BAR", "foo")
t.Setenv("ZZZ_BAR", "zzz_foo")
os.Setenv("FOO", "bar")
defer os.Unsetenv("FOO")
os.Setenv("BAR", "foo")
defer os.Unsetenv("BAR")
os.Setenv("ZZZ_BAR", "zzz_foo")
defer os.Unsetenv("ZZZ_BAR")
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
require.Equal(t, ptrstr("FOO"), c.Targets[0].Args["BRB"])
require.Equal(t, c.Targets[0].Args["FOO"], "bar")
require.Equal(t, c.Targets[0].Args["BAR"], "zzz_foo")
require.Equal(t, c.Targets[0].Args["BRB"], "FOO")
}
func TestInconsistentComposeFile(t *testing.T) {
func TestBogusCompose(t *testing.T) {
var dt = []byte(`
services:
db:
labels:
- "foo"
webapp:
entrypoint: echo 1
build:
context: .
target: webapp
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose(dt)
require.Error(t, err)
require.Contains(t, err.Error(), "has neither an image nor a build context specified: invalid compose project")
}
func TestAdvancedNetwork(t *testing.T) {
@@ -210,28 +175,10 @@ networks:
gateway: 10.5.0.254
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose(dt)
require.NoError(t, err)
}
func TestTags(t *testing.T) {
var dt = []byte(`
services:
example:
image: example
build:
context: .
dockerfile: Dockerfile
tags:
- foo
- bar
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
}
func TestDependsOnList(t *testing.T) {
var dt = []byte(`
version: "3.8"
@@ -264,554 +211,6 @@ networks:
name: test-net
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose(dt)
require.NoError(t, err)
}
func TestComposeExt(t *testing.T) {
var dt = []byte(`
services:
addon:
image: ct-addon:bar
build:
context: .
dockerfile: ./Dockerfile
cache_from:
- user/app:cache
cache_to:
- user/app:cache
tags:
- ct-addon:baz
ssh:
key: path/to/key
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
contexts:
alpine: docker-image://alpine:3.13
tags:
- ct-addon:foo
- ct-addon:alp
ssh:
- default
- other=path/to/otherkey
platforms:
- linux/amd64
- linux/arm64
cache-from:
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
pull: true
aws:
image: ct-fake-aws:bar
build:
dockerfile: ./aws.Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
shm_size: 128m
ulimits:
nofile:
soft: 1024
hard: 1024
x-bake:
secret:
- id=mysecret,src=/local/secret
- id=mysecret2,src=/local/secret2
ssh: default
platforms: linux/arm64
output: type=docker
no-cache: true
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key", "other=path/to/otherkey"}, c.Targets[0].SSH)
require.Equal(t, newBool(true), c.Targets[0].Pull)
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, c.Targets[1].Secrets)
require.Equal(t, []string{"default"}, c.Targets[1].SSH)
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
require.Equal(t, newBool(true), c.Targets[1].NoCache)
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
}
func TestComposeExtDedup(t *testing.T) {
var dt = []byte(`
services:
webapp:
image: app:bar
build:
cache_from:
- user/app:cache
cache_to:
- user/app:cache
tags:
- ct-addon:foo
ssh:
- default
x-bake:
tags:
- ct-addon:foo
- ct-addon:baz
cache-from:
- user/app:cache
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
ssh:
- default
- key=path/to/key
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
}
func TestEnv(t *testing.T) {
envf, err := os.CreateTemp("", "env")
require.NoError(t, err)
defer os.Remove(envf.Name())
_, err = envf.WriteString("FOO=bsdf -csdf\n")
require.NoError(t, err)
var dt = []byte(`
services:
scratch:
build:
context: .
args:
CT_ECR: foo
FOO:
NODE_ENV:
environment:
- NODE_ENV=test
- AWS_ACCESS_KEY_ID=dummy
- AWS_SECRET_ACCESS_KEY=dummy
env_file:
- ` + envf.Name() + `
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
}
func TestDotEnv(t *testing.T) {
tmpdir := t.TempDir()
err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte("FOO=bar"), 0644)
require.NoError(t, err)
var dt = []byte(`
services:
scratch:
build:
context: .
args:
FOO:
`)
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{
Name: "docker-compose.yml",
Data: dt,
}})
require.NoError(t, err)
require.Equal(t, map[string]*string{"FOO": ptrstr("bar")}, c.Targets[0].Args)
}
func TestPorts(t *testing.T) {
var dt = []byte(`
services:
foo:
build:
context: .
ports:
- 3306:3306
bar:
build:
context: .
ports:
- mode: ingress
target: 3306
published: "3306"
protocol: tcp
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func newBool(val bool) *bool {
b := val
return &b
}
func TestServiceName(t *testing.T) {
cases := []struct {
svc string
wantErr bool
}{
{
svc: "a",
wantErr: false,
},
{
svc: "abc",
wantErr: false,
},
{
svc: "a.b",
wantErr: false,
},
{
svc: "_a",
wantErr: false,
},
{
svc: "a_b",
wantErr: false,
},
{
svc: "AbC",
wantErr: false,
},
{
svc: "AbC-0123",
wantErr: false,
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
services:
` + tt.svc + `:
build:
context: .
`)}}, nil)
if tt.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
}
})
}
}
func TestValidateComposeSecret(t *testing.T) {
cases := []struct {
name string
dt []byte
wantErr bool
}{
{
name: "secret set by file",
dt: []byte(`
secrets:
foo:
file: .secret
`),
wantErr: false,
},
{
name: "secret set by environment",
dt: []byte(`
secrets:
foo:
environment: TOKEN
`),
wantErr: false,
},
{
name: "external secret",
dt: []byte(`
secrets:
foo:
external: true
`),
wantErr: false,
},
{
name: "unset secret",
dt: []byte(`
secrets:
foo: {}
`),
wantErr: true,
},
{
name: "undefined secret",
dt: []byte(`
services:
foo:
build:
secrets:
- token
`),
wantErr: true,
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
}
})
}
}
func TestValidateComposeFile(t *testing.T) {
cases := []struct {
name string
fn string
dt []byte
isCompose bool
wantErr bool
}{
{
name: "empty service",
fn: "docker-compose.yml",
dt: []byte(`
services:
foo:
`),
isCompose: true,
wantErr: true,
},
{
name: "build",
fn: "docker-compose.yml",
dt: []byte(`
services:
foo:
build: .
`),
isCompose: true,
wantErr: false,
},
{
name: "image",
fn: "docker-compose.yml",
dt: []byte(`
services:
simple:
image: nginx
`),
isCompose: true,
wantErr: false,
},
{
name: "unknown ext",
fn: "docker-compose.foo",
dt: []byte(`
services:
simple:
image: nginx
`),
isCompose: true,
wantErr: false,
},
{
name: "hcl",
fn: "docker-bake.hcl",
dt: []byte(`
target "default" {
dockerfile = "test"
}
`),
isCompose: false,
wantErr: false,
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
isCompose, err := validateComposeFile(tt.dt, tt.fn)
assert.Equal(t, tt.isCompose, isCompose)
if tt.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
}
})
}
}
func TestComposeNullArgs(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: .
args:
FOO: null
bar: "baz"
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
}
func TestDependsOn(t *testing.T) {
var dt = []byte(`
services:
foo:
build:
context: .
ports:
- 3306:3306
depends_on:
- bar
bar:
build:
context: .
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestInclude(t *testing.T) {
tmpdir := t.TempDir()
err := os.WriteFile(filepath.Join(tmpdir, "compose-foo.yml"), []byte(`
services:
foo:
build:
context: .
target: buildfoo
ports:
- 3306:3306
`), 0644)
require.NoError(t, err)
var dt = []byte(`
include:
- compose-foo.yml
services:
bar:
build:
context: .
target: buildbar
`)
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{
Name: "composetypes.yml",
Data: dt,
}})
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "bar", c.Targets[0].Name)
require.Equal(t, "buildbar", *c.Targets[0].Target)
require.Equal(t, "foo", c.Targets[1].Name)
require.Equal(t, "buildfoo", *c.Targets[1].Target)
}
func TestDevelop(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
develop:
watch:
- path: ./webapp/html
action: sync
target: /var/www
ignore:
- node_modules/
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestCgroup(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
cgroup: private
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestProjectName(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
args:
PROJECT_NAME: ${COMPOSE_PROJECT_NAME}
`)
t.Run("default", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("bake")}, c.Targets[0].Args)
})
t.Run("env", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, map[string]string{"COMPOSE_PROJECT_NAME": "foo"})
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("foo")}, c.Targets[0].Args)
})
}
// chdir changes the current working directory to the named directory,
// and then restores the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {
olddir, err := os.Getwd()
if err != nil {
t.Fatalf("chdir: %v", err)
}
if err := os.Chdir(dir); err != nil {
t.Fatalf("chdir %s: %v", dir, err)
}
t.Cleanup(func() {
if err := os.Chdir(olddir); err != nil {
t.Errorf("chdir to original working directory %s: %v", olddir, err)
os.Exit(1)
}
})
}

View File

@@ -1,601 +0,0 @@
package bake
import (
"bufio"
"cmp"
"context"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
)
type EntitlementKey string
const (
EntitlementKeyNetworkHost EntitlementKey = "network.host"
EntitlementKeySecurityInsecure EntitlementKey = "security.insecure"
EntitlementKeyFSRead EntitlementKey = "fs.read"
EntitlementKeyFSWrite EntitlementKey = "fs.write"
EntitlementKeyFS EntitlementKey = "fs"
EntitlementKeyImagePush EntitlementKey = "image.push"
EntitlementKeyImageLoad EntitlementKey = "image.load"
EntitlementKeyImage EntitlementKey = "image"
EntitlementKeySSH EntitlementKey = "ssh"
)
type EntitlementConf struct {
NetworkHost bool
SecurityInsecure bool
FSRead []string
FSWrite []string
ImagePush []string
ImageLoad []string
SSH bool
}
func ParseEntitlements(in []string) (EntitlementConf, error) {
var conf EntitlementConf
for _, e := range in {
switch e {
case string(EntitlementKeyNetworkHost):
conf.NetworkHost = true
case string(EntitlementKeySecurityInsecure):
conf.SecurityInsecure = true
case string(EntitlementKeySSH):
conf.SSH = true
default:
k, v, _ := strings.Cut(e, "=")
switch k {
case string(EntitlementKeyFSRead):
conf.FSRead = append(conf.FSRead, v)
case string(EntitlementKeyFSWrite):
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyFS):
conf.FSRead = append(conf.FSRead, v)
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyImagePush):
conf.ImagePush = append(conf.ImagePush, v)
case string(EntitlementKeyImageLoad):
conf.ImageLoad = append(conf.ImageLoad, v)
case string(EntitlementKeyImage):
conf.ImagePush = append(conf.ImagePush, v)
conf.ImageLoad = append(conf.ImageLoad, v)
default:
return conf, errors.Errorf("unknown entitlement key %q", k)
}
}
}
return conf, nil
}
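// A minimal sketch (hypothetical helper; same package and an fmt import
// assumed) of how --allow values map onto EntitlementConf fields; note that
// "fs" and "image" fan out to both of their more specific counterparts.
func exampleParseEntitlements() {
	conf, _ := ParseEntitlements([]string{
		"network.host",
		"fs.read=/tmp/ctx",
		"image=docker.io/library/alpine",
	})
	fmt.Println(conf.NetworkHost) // true
	fmt.Println(conf.FSRead)      // [/tmp/ctx]
	fmt.Println(conf.ImagePush)   // [docker.io/library/alpine]
	fmt.Println(conf.ImageLoad)   // [docker.io/library/alpine]
}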
func (c EntitlementConf) Validate(m map[string]build.Options) (EntitlementConf, error) {
var expected EntitlementConf
for _, v := range m {
if err := c.check(v, &expected); err != nil {
return EntitlementConf{}, err
}
}
return expected, nil
}
func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) error {
for _, e := range bo.Allow {
switch e {
case entitlements.EntitlementNetworkHost:
if !c.NetworkHost {
expected.NetworkHost = true
}
case entitlements.EntitlementSecurityInsecure:
if !c.SecurityInsecure {
expected.SecurityInsecure = true
}
}
}
rwPaths := map[string]struct{}{}
roPaths := map[string]struct{}{}
for _, p := range collectLocalPaths(bo.Inputs) {
roPaths[p] = struct{}{}
}
for _, out := range bo.Exports {
if out.Type == "local" {
if dest, ok := out.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
if out.Type == "tar" {
if dest, ok := out.Attrs["dest"]; ok && dest != "-" {
rwPaths[dest] = struct{}{}
}
}
}
for _, ce := range bo.CacheTo {
if ce.Type == "local" {
if dest, ok := ce.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
}
for _, ci := range bo.CacheFrom {
if ci.Type == "local" {
if src, ok := ci.Attrs["src"]; ok {
roPaths[src] = struct{}{}
}
}
}
for _, secret := range bo.SecretSpecs {
if secret.FilePath != "" {
roPaths[secret.FilePath] = struct{}{}
}
}
for _, ssh := range bo.SSHSpecs {
for _, p := range ssh.Paths {
roPaths[p] = struct{}{}
}
if len(ssh.Paths) == 0 {
expected.SSH = true
}
}
var err error
expected.FSRead, err = findMissingPaths(c.FSRead, roPaths)
if err != nil {
return err
}
expected.FSWrite, err = findMissingPaths(c.FSWrite, rwPaths)
if err != nil {
return err
}
return nil
}
func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Writer) error {
var term bool
if _, err := console.ConsoleFromFile(os.Stdin); err == nil {
term = true
}
var msgs []string
var flags []string
// these warnings are currently disabled to give users time to update
var msgsFS []string
var flagsFS []string
if c.NetworkHost {
msgs = append(msgs, " - Running build containers that can access host network")
flags = append(flags, string(EntitlementKeyNetworkHost))
}
if c.SecurityInsecure {
msgs = append(msgs, " - Running privileged containers that can make system changes")
flags = append(flags, string(EntitlementKeySecurityInsecure))
}
if c.SSH {
msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
flagsFS = append(flagsFS, string(EntitlementKeySSH))
}
roPaths, rwPaths, commonPaths := groupSamePaths(c.FSRead, c.FSWrite)
wd, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "failed to get current working directory")
}
wd, err = filepath.EvalSymlinks(wd)
if err != nil {
return errors.Wrap(err, "failed to evaluate working directory")
}
roPaths = toRelativePaths(roPaths, wd)
rwPaths = toRelativePaths(rwPaths, wd)
commonPaths = toRelativePaths(commonPaths, wd)
if len(commonPaths) > 0 {
for _, p := range commonPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read and write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFS)+"="+p)
}
}
if len(roPaths) > 0 {
for _, p := range roPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSRead)+"="+p)
}
}
if len(rwPaths) > 0 {
for _, p := range rwPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSWrite)+"="+p)
}
}
if len(msgs) == 0 && len(msgsFS) == 0 {
return nil
}
fmt.Fprintf(out, "Your build is requesting privileges for following possibly insecure capabilities:\n\n")
for _, m := range slices.Concat(msgs, msgsFS) {
fmt.Fprintf(out, "%s\n", m)
}
for i, f := range flags {
flags[i] = "--allow=" + f
}
for i, f := range flagsFS {
flagsFS[i] = "--allow=" + f
}
if term {
fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
} else {
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
}
args := append([]string(nil), os.Args...)
if v, ok := os.LookupEnv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND"); ok && v != "" {
args[0] = v
}
idx := slices.Index(args, "bake")
if idx != -1 {
fmt.Fprintf(out, "\nYour full command with requested privileges:\n\n")
fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(slices.Concat(flags, flagsFS), " "), strings.Join(args[idx+1:], " "))
}
fsEntitlementsEnabled := false
if isRemote {
if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BAKE_ALLOW_REMOTE_FS_ACCESS value %q", v)
}
fsEntitlementsEnabled = !vv
} else {
fsEntitlementsEnabled = true
}
}
v, fsEntitlementsSet := os.LookupEnv("BUILDX_BAKE_ENTITLEMENTS_FS")
if fsEntitlementsSet {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BUILDX_BAKE_ENTITLEMENTS_FS value %q", v)
}
fsEntitlementsEnabled = vv
}
if !fsEntitlementsEnabled && len(msgs) == 0 {
if !fsEntitlementsSet {
fmt.Fprintf(out, "This warning will become an error in a future release. To enable filesystem entitlements checks at the moment, set BUILDX_BAKE_ENTITLEMENTS_FS=1 .\n\n")
}
return nil
}
if term {
fmt.Fprintf(out, "Do you want to grant requested privileges and continue? [y/N] ")
reader := bufio.NewReader(os.Stdin)
answerCh := make(chan string, 1)
go func() {
answer, _, _ := reader.ReadLine()
answerCh <- string(answer)
close(answerCh)
}()
select {
case <-ctx.Done():
case answer := <-answerCh:
if strings.ToLower(string(answer)) == "y" {
return nil
}
}
}
return errors.Errorf("additional privileges requested")
}
func isParentOrEqualPath(p, parent string) bool {
if p == parent || parent == "/" {
return true
}
if strings.HasPrefix(p, filepath.Clean(parent+string(filepath.Separator))) {
return true
}
return false
}
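// A few illustrative calls (hypothetical helper; same package and an fmt
// import assumed). Paths are expected to be cleaned and absolute, as
// elsewhere in this file.
func exampleIsParentOrEqualPath() {
	fmt.Println(isParentOrEqualPath("/a/b/c", "/a/b")) // true: /a/b is a parent
	fmt.Println(isParentOrEqualPath("/a/b", "/a/b"))   // true: equal paths
	fmt.Println(isParentOrEqualPath("/a/bc", "/a/b"))  // false: string prefix, not a path parent
	fmt.Println(isParentOrEqualPath("/a/b", "/"))      // true: root contains everything
}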
func findMissingPaths(set []string, paths map[string]struct{}) ([]string, error) {
set, allowAny, err := evaluatePaths(set)
if err != nil {
return nil, err
} else if allowAny {
return nil, nil
}
paths, err = evaluateToExistingPaths(paths)
if err != nil {
return nil, err
}
paths, err = dedupPaths(paths)
if err != nil {
return nil, err
}
out := make([]string, 0, len(paths))
loop0:
for p := range paths {
for _, c := range set {
if isParentOrEqualPath(p, c) {
continue loop0
}
}
out = append(out, p)
}
if len(out) == 0 {
return nil, nil
}
slices.Sort(out)
return out, nil
}
func dedupPaths(in map[string]struct{}) (map[string]struct{}, error) {
arr := make([]string, 0, len(in))
for p := range in {
arr = append(arr, filepath.Clean(p))
}
slices.SortFunc(arr, func(a, b string) int {
return cmp.Compare(len(a), len(b))
})
m := make(map[string]struct{}, len(arr))
loop0:
for _, p := range arr {
for parent := range m {
if strings.HasPrefix(p, parent+string(filepath.Separator)) {
continue loop0
}
}
m[p] = struct{}{}
}
return m, nil
}
func toRelativePaths(in []string, wd string) []string {
out := make([]string, 0, len(in))
for _, p := range in {
rel, err := filepath.Rel(wd, p)
if err == nil {
// allow up to one level of ".." in the path
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)+"..") {
out = append(out, rel)
continue
}
}
out = append(out, p)
}
return out
}
func groupSamePaths(in1, in2 []string) ([]string, []string, []string) {
if in1 == nil || in2 == nil {
return in1, in2, nil
}
slices.Sort(in1)
slices.Sort(in2)
common := []string{}
i, j := 0, 0
for i < len(in1) && j < len(in2) {
switch {
case in1[i] == in2[j]:
common = append(common, in1[i])
i++
j++
case in1[i] < in2[j]:
i++
default:
j++
}
}
in1 = removeCommonPaths(in1, common)
in2 = removeCommonPaths(in2, common)
return in1, in2, common
}
func removeCommonPaths(in, common []string) []string {
filtered := make([]string, 0, len(in))
commonIndex := 0
for _, path := range in {
if commonIndex < len(common) && path == common[commonIndex] {
commonIndex++
continue
}
filtered = append(filtered, path)
}
return filtered
}
func evaluatePaths(in []string) ([]string, bool, error) {
out := make([]string, 0, len(in))
allowAny := false
for _, p := range in {
if p == "*" {
allowAny = true
continue
}
v, err := filepath.Abs(p)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = filepath.EvalSymlinks(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
out = append(out, v)
}
return out, allowAny, nil
}
func evaluateToExistingPaths(in map[string]struct{}) (map[string]struct{}, error) {
m := make(map[string]struct{}, len(in))
for p := range in {
v, err := evaluateToExistingPath(p)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
m[v] = struct{}{}
}
return m, nil
}
func evaluateToExistingPath(in string) (string, error) {
in, err := filepath.Abs(in)
if err != nil {
return "", err
}
volLen := volumeNameLen(in)
pathSeparator := string(os.PathSeparator)
if volLen < len(in) && os.IsPathSeparator(in[volLen]) {
volLen++
}
vol := in[:volLen]
dest := vol
linksWalked := 0
var end int
for start := volLen; start < len(in); start = end {
for start < len(in) && os.IsPathSeparator(in[start]) {
start++
}
end = start
for end < len(in) && !os.IsPathSeparator(in[end]) {
end++
}
if end == start {
break
} else if in[start:end] == "." {
continue
} else if in[start:end] == ".." {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen || dest[r+1:] == ".." {
if len(dest) > volLen {
dest += pathSeparator
}
dest += ".."
} else {
dest = dest[:r]
}
continue
}
if len(dest) > volumeNameLen(dest) && !os.IsPathSeparator(dest[len(dest)-1]) {
dest += pathSeparator
}
dest += in[start:end]
fi, err := os.Lstat(dest)
if err != nil {
// If the component doesn't exist, return the last valid path
if os.IsNotExist(err) {
for r := len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
return dest[:r], nil
}
}
return vol, nil
}
return "", err
}
if fi.Mode()&fs.ModeSymlink == 0 {
if !fi.Mode().IsDir() && end < len(in) {
return "", syscall.ENOTDIR
}
continue
}
linksWalked++
if linksWalked > 255 {
return "", errors.New("too many symlinks")
}
link, err := os.Readlink(dest)
if err != nil {
return "", err
}
in = link + in[end:]
v := volumeNameLen(link)
if v > 0 {
if v < len(link) && os.IsPathSeparator(link[v]) {
v++
}
vol = link[:v]
dest = vol
end = len(vol)
} else if len(link) > 0 && os.IsPathSeparator(link[0]) {
dest = link[:1]
end = 1
vol = link[:1]
volLen = 1
} else {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen {
dest = vol
} else {
dest = dest[:r]
}
end = 0
}
}
return filepath.Clean(dest), nil
}
func volumeNameLen(s string) int {
return len(filepath.VolumeName(s))
}

View File

@@ -1,460 +0,0 @@
package bake
import (
"fmt"
"os"
"path/filepath"
"slices"
"testing"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/util/entitlements"
"github.com/stretchr/testify/require"
)
func TestEvaluateToExistingPath(t *testing.T) {
tempDir, err := osutil.GetLongPathName(t.TempDir())
require.NoError(t, err)
// Set up a temporary directory structure for testing
existingFile := filepath.Join(tempDir, "existing_file")
require.NoError(t, os.WriteFile(existingFile, []byte("test"), 0644))
existingDir := filepath.Join(tempDir, "existing_dir")
require.NoError(t, os.Mkdir(existingDir, 0755))
symlinkToFile := filepath.Join(tempDir, "symlink_to_file")
require.NoError(t, os.Symlink(existingFile, symlinkToFile))
symlinkToDir := filepath.Join(tempDir, "symlink_to_dir")
require.NoError(t, os.Symlink(existingDir, symlinkToDir))
nonexistentPath := filepath.Join(tempDir, "nonexistent", "path", "file.txt")
tests := []struct {
name string
input string
expected string
expectErr bool
}{
{
name: "Existing file",
input: existingFile,
expected: existingFile,
expectErr: false,
},
{
name: "Existing directory",
input: existingDir,
expected: existingDir,
expectErr: false,
},
{
name: "Symlink to file",
input: symlinkToFile,
expected: existingFile,
expectErr: false,
},
{
name: "Symlink to directory",
input: symlinkToDir,
expected: existingDir,
expectErr: false,
},
{
name: "Non-existent path",
input: nonexistentPath,
expected: tempDir,
expectErr: false,
},
{
name: "Non-existent intermediate path",
input: filepath.Join(tempDir, "nonexistent", "file.txt"),
expected: tempDir,
expectErr: false,
},
{
name: "Root path",
input: "/",
expected: func() string {
root, _ := filepath.Abs("/")
return root
}(),
expectErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := evaluateToExistingPath(tt.input)
if tt.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, tt.expected, result)
}
})
}
}
func TestDedupePaths(t *testing.T) {
wd := osutil.GetWd()
tcases := []struct {
in map[string]struct{}
out map[string]struct{}
}{
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
out: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
},
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/c/d": {},
"/a/b/c/d/e": {},
"/a/b/../b/c": {},
},
out: map[string]struct{}{
"/a/b/c": {},
},
},
{
in: map[string]struct{}{
filepath.Join(wd, "a/b/c"): {},
filepath.Join(wd, "../aa"): {},
filepath.Join(wd, "a/b"): {},
filepath.Join(wd, "a/b/d"): {},
filepath.Join(wd, "../aa/b"): {},
filepath.Join(wd, "../../bb"): {},
},
out: map[string]struct{}{
"a/b": {},
"../aa": {},
filepath.Join(wd, "../../bb"): {},
},
},
}
for i, tc := range tcases {
t.Run(fmt.Sprintf("case%d", i), func(t *testing.T) {
out, err := dedupPaths(tc.in)
if err != nil {
require.NoError(t, err)
}
// convert to relative paths, as that is what is shown to the user
arr := make([]string, 0, len(out))
for k := range out {
arr = append(arr, k)
}
require.NoError(t, err)
arr = toRelativePaths(arr, wd)
m := make(map[string]struct{})
for _, v := range arr {
m[filepath.ToSlash(v)] = struct{}{}
}
o := make(map[string]struct{}, len(tc.out))
for k := range tc.out {
o[filepath.ToSlash(k)] = struct{}{}
}
require.Equal(t, o, m)
})
}
}
func TestValidateEntitlements(t *testing.T) {
dir1 := t.TempDir()
dir2 := t.TempDir()
// the paths returned by entitlements validation will have symlinks resolved
expDir1, err := filepath.EvalSymlinks(dir1)
require.NoError(t, err)
expDir2, err := filepath.EvalSymlinks(dir2)
require.NoError(t, err)
escapeLink := filepath.Join(dir1, "escape_link")
require.NoError(t, os.Symlink("../../aa", escapeLink))
wd, err := os.Getwd()
require.NoError(t, err)
expWd, err := filepath.EvalSymlinks(wd)
require.NoError(t, err)
tcases := []struct {
name string
conf EntitlementConf
opt build.Options
expected EntitlementConf
}{
{
name: "No entitlements",
opt: build.Options{
Inputs: build.Inputs{
ContextState: &llb.State{},
},
},
},
{
name: "NetworkHostMissing",
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
},
},
expected: EntitlementConf{
NetworkHost: true,
FSRead: []string{expWd},
},
},
{
name: "NetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
},
},
expected: EntitlementConf{
FSRead: []string{expWd},
},
},
{
name: "SecurityAndNetworkHostMissing",
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
entitlements.EntitlementSecurityInsecure,
},
},
expected: EntitlementConf{
NetworkHost: true,
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SecurityMissingAndNetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
entitlements.EntitlementSecurityInsecure,
},
},
expected: EntitlementConf{
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SSHMissing",
opt: build.Options{
SSHSpecs: []*pb.SSH{
{
ID: "test",
},
},
},
expected: EntitlementConf{
SSH: true,
FSRead: []string{expWd},
},
},
{
name: "ExportLocal",
opt: build.Options{
Exports: []client.ExportEntry{
{
Type: "local",
Attrs: map[string]string{
"dest": dir1,
},
},
{
Type: "local",
Attrs: map[string]string{
"dest": filepath.Join(dir1, "subdir"),
},
},
{
Type: "local",
Attrs: map[string]string{
"dest": dir2,
},
},
},
},
expected: EntitlementConf{
FSWrite: func() []string {
exp := []string{expDir1, expDir2}
slices.Sort(exp)
return exp
}(),
FSRead: []string{expWd},
},
},
{
name: "SecretFromSubFile",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: filepath.Join(dir1, "subfile"),
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
},
{
name: "SecretFromEscapeLink",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
expected: EntitlementConf{
FSRead: []string{filepath.Join(expDir1, "../..")},
},
},
{
name: "SecretFromEscapeLinkAllowRoot",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"/"},
},
expected: EntitlementConf{
FSRead: func() []string {
// on Windows, root (/) is only allowed if it is on the same volume as wd
if filepath.VolumeName(wd) == filepath.VolumeName(escapeLink) {
return nil
}
// if not, then escapeLink is not allowed
exp, err := evaluateToExistingPath(escapeLink)
require.NoError(t, err)
exp, err = filepath.EvalSymlinks(exp)
require.NoError(t, err)
return []string{exp}
}(),
},
},
{
name: "SecretFromEscapeLinkAllowAny",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"*"},
},
expected: EntitlementConf{},
},
}
for _, tc := range tcases {
t.Run(tc.name, func(t *testing.T) {
expected, err := tc.conf.Validate(map[string]build.Options{"test": tc.opt})
require.NoError(t, err)
require.Equal(t, tc.expected, expected)
})
}
}
func TestGroupSamePaths(t *testing.T) {
tests := []struct {
name string
in1 []string
in2 []string
expected1 []string
expected2 []string
expectedC []string
}{
{
name: "All common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/a", "/path/b", "/path/c"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
{
name: "No common paths",
in1: []string{"/path/a", "/path/b"},
in2: []string{"/path/c", "/path/d"},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{"/path/c", "/path/d"},
expectedC: []string{},
},
{
name: "Some common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/b", "/path/c", "/path/d"},
expected1: []string{"/path/a"},
expected2: []string{"/path/d"},
expectedC: []string{"/path/b", "/path/c"},
},
{
name: "Empty inputs",
in1: []string{},
in2: []string{},
expected1: []string{},
expected2: []string{},
expectedC: []string{},
},
{
name: "One empty input",
in1: []string{"/path/a", "/path/b"},
in2: []string{},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{},
expectedC: []string{},
},
{
name: "Unsorted inputs with common paths",
in1: []string{"/path/c", "/path/a", "/path/b"},
in2: []string{"/path/b", "/path/c", "/path/a"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
out1, out2, common := groupSamePaths(tt.in1, tt.in2)
require.Equal(t, tt.expected1, out1, "in1 should match expected1")
require.Equal(t, tt.expected2, out2, "in2 should match expected2")
require.Equal(t, tt.expectedC, common, "common should match expectedC")
})
}
}

View File

@@ -3,7 +3,7 @@ package bake
import (
"strings"
"github.com/hashicorp/hcl/v2"
hcl "github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclparse"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
@@ -56,7 +56,7 @@ func formatHCLError(err error, files []File) error {
break
}
}
src := &errdefs.Source{
src := errdefs.Source{
Info: &pb.SourceInfo{
Filename: d.Subject.Filename,
Data: dt,
@@ -72,7 +72,7 @@ func formatHCLError(err error, files []File) error {
func toErrRange(in *hcl.Range) *pb.Range {
return &pb.Range{
Start: &pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
End: &pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
End: pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
}
}

File diff suppressed because it is too large

View File

@@ -1,103 +0,0 @@
package hclparser
import (
"github.com/hashicorp/hcl/v2"
)
type filterBody struct {
body hcl.Body
schema *hcl.BodySchema
exclude bool
}
func FilterIncludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
return &filterBody{
body: body,
schema: schema,
}
}
func FilterExcludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
return &filterBody{
body: body,
schema: schema,
exclude: true,
}
}
func (b *filterBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
if b.exclude {
schema = subtractSchemas(schema, b.schema)
} else {
schema = intersectSchemas(schema, b.schema)
}
content, _, diag := b.body.PartialContent(schema)
return content, diag
}
func (b *filterBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
if b.exclude {
schema = subtractSchemas(schema, b.schema)
} else {
schema = intersectSchemas(schema, b.schema)
}
return b.body.PartialContent(schema)
}
func (b *filterBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
return b.body.JustAttributes()
}
func (b *filterBody) MissingItemRange() hcl.Range {
return b.body.MissingItemRange()
}
func intersectSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
result := &hcl.BodySchema{}
for _, blockA := range a.Blocks {
for _, blockB := range b.Blocks {
if blockA.Type == blockB.Type {
result.Blocks = append(result.Blocks, blockA)
break
}
}
}
for _, attrA := range a.Attributes {
for _, attrB := range b.Attributes {
if attrA.Name == attrB.Name {
result.Attributes = append(result.Attributes, attrA)
break
}
}
}
return result
}
func subtractSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
result := &hcl.BodySchema{}
for _, blockA := range a.Blocks {
found := false
for _, blockB := range b.Blocks {
if blockA.Type == blockB.Type {
found = true
break
}
}
if !found {
result.Blocks = append(result.Blocks, blockA)
}
}
for _, attrA := range a.Attributes {
found := false
for _, attrB := range b.Attributes {
if attrA.Name == attrB.Name {
found = true
break
}
}
if !found {
result.Attributes = append(result.Attributes, attrA)
}
}
return result
}
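// A minimal sketch (hypothetical helper; same hclparser package plus fmt and
// github.com/hashicorp/hcl/v2/hclparse imports assumed) of how the
// include/exclude filters partition one body between two consumers.
func exampleFilterBody() {
	full := &hcl.BodySchema{Attributes: []hcl.AttributeSchema{{Name: "a"}, {Name: "b"}}}
	mine := &hcl.BodySchema{Attributes: []hcl.AttributeSchema{{Name: "a"}}}
	f, _ := hclparse.NewParser().ParseHCL([]byte("a = 1\nb = 2\n"), "in.hcl")
	inc, _ := FilterIncludeBody(f.Body, mine).Content(full) // sees only "a"
	exc, _ := FilterExcludeBody(f.Body, mine).Content(full) // sees only "b"
	fmt.Println(len(inc.Attributes), len(exc.Attributes))   // 1 1
}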

View File

@@ -14,7 +14,15 @@ func funcCalls(exp hcl.Expression) ([]string, hcl.Diagnostics) {
if !ok {
fns, err := jsonFuncCallsRecursive(exp)
if err != nil {
return nil, wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: err.Error(),
Subject: exp.Range().Ptr(),
Context: exp.Range().Ptr(),
},
}
}
return fns, nil
}
@@ -75,11 +83,11 @@ func appendJSONFuncCalls(exp hcl.Expression, m map[string]struct{}) error {
// hcl/v2/json/ast#stringVal
val := src.FieldByName("Value")
if !val.IsValid() || val.IsZero() {
if val.IsZero() {
return nil
}
rng := src.FieldByName("SrcRange")
if rng.IsZero() {
if val.IsZero() {
return nil
}
var stringVal struct {

File diff suppressed because it is too large

View File

@@ -1,228 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
package hclparser
import (
"fmt"
"github.com/hashicorp/hcl/v2"
)
// MergeFiles combines the given files to produce a single body that contains
// configuration from all of the given files.
//
// The ordering of the given files decides the order in which contained
// elements will be returned. If any top-level attributes are defined with
// the same name across multiple files, a diagnostic will be produced from
// the Content and PartialContent methods describing this error in a
// user-friendly way.
func MergeFiles(files []*hcl.File) hcl.Body {
var bodies []hcl.Body
for _, file := range files {
bodies = append(bodies, file.Body)
}
return MergeBodies(bodies)
}
// MergeBodies is like MergeFiles except it deals directly with bodies, rather
// than with entire files.
func MergeBodies(bodies []hcl.Body) hcl.Body {
if len(bodies) == 0 {
// Swap out for our singleton empty body, to reduce the number of
// empty slices we have hanging around.
return emptyBody
}
// If any of the given bodies are already merged bodies, we'll unpack
// to flatten to a single mergedBodies, since that's conceptually simpler.
// This also, as a side-effect, eliminates any empty bodies, since
// empties are merged bodies with no inner bodies.
var newLen int
var flatten bool
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
newLen += len(children)
flatten = true
} else {
newLen++
}
}
if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside
return mergedBodies(bodies)
}
if newLen == 0 {
// Don't allocate a new empty when we already have one
return emptyBody
}
n := make([]hcl.Body, 0, newLen)
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
n = append(n, children...)
} else {
n = append(n, body)
}
}
return mergedBodies(n)
}
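// A minimal sketch (hypothetical helper; same package plus fmt and
// github.com/hashicorp/hcl/v2/hclparse imports assumed): merging two files
// and reading a duplicate top-level attribute surfaces the
// "Duplicate argument" diagnostic produced below.
func exampleMergeFiles() {
	p := hclparse.NewParser()
	f1, _ := p.ParseHCL([]byte("a = 1\n"), "one.hcl")
	f2, _ := p.ParseHCL([]byte("a = 2\nb = 3\n"), "two.hcl")
	merged := MergeFiles([]*hcl.File{f1, f2})
	_, diags := merged.JustAttributes()
	fmt.Println(diags.HasErrors()) // true: "a" is set in both files
}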
var emptyBody = mergedBodies([]hcl.Body{})
// EmptyBody returns a body with no content. This body can be used as a
// placeholder when a body is required but no body content is available.
func EmptyBody() hcl.Body {
return emptyBody
}
type mergedBodies []hcl.Body
// Content returns the content produced by applying the given schema to all
// of the merged bodies and merging the result.
//
// Although required attributes _are_ supported, they should be used sparingly
// with merged bodies since in this case there is no contextual information
// with which to return good diagnostics. Applications working with merged
// bodies may wish to mark all attributes as optional and then check for
// required attributes afterwards, to produce better diagnostics.
func (mb mergedBodies) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
// the returned body will always be empty in this case, because mergedContent
// will only ever call Content on the child bodies.
content, _, diags := mb.mergedContent(schema, false)
return content, diags
}
func (mb mergedBodies) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
return mb.mergedContent(schema, true)
}
func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
var diags hcl.Diagnostics
for _, body := range mb {
thisAttrs, thisDiags := body.JustAttributes()
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
}
}
return attrs, diags
}
func (mb mergedBodies) MissingItemRange() hcl.Range {
if len(mb) == 0 {
// Nothing useful to return here, so we'll return some garbage.
return hcl.Range{
Filename: "<empty>",
}
}
// arbitrarily use the first body's missing item range
return mb[0].MissingItemRange()
}
func (mb mergedBodies) mergedContent(schema *hcl.BodySchema, partial bool) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
// We need to produce a new schema with none of the attributes marked as
// required, since _any one_ of our bodies can contribute an attribute value.
// We'll separately check that all required attributes are present at
// the end.
mergedSchema := &hcl.BodySchema{
Blocks: schema.Blocks,
}
for _, attrS := range schema.Attributes {
mergedAttrS := attrS
mergedAttrS.Required = false
mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
}
var mergedLeftovers []hcl.Body
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
}
var diags hcl.Diagnostics
for _, body := range mb {
var thisContent *hcl.BodyContent
var thisLeftovers hcl.Body
var thisDiags hcl.Diagnostics
if partial {
thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
} else {
thisContent, thisDiags = body.Content(mergedSchema)
}
if thisLeftovers != nil {
mergedLeftovers = append(mergedLeftovers, thisLeftovers)
}
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisContent.Attributes != nil {
for name, attr := range thisContent.Attributes {
if existing := content.Attributes[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisContent.Attributes[name].NameRange.Ptr(),
})
}
content.Attributes[name] = attr
}
}
if len(thisContent.Blocks) != 0 {
content.Blocks = append(content.Blocks, thisContent.Blocks...)
}
}
// Finally, we check for required attributes.
for _, attrS := range schema.Attributes {
if !attrS.Required {
continue
}
if content.Attributes[attrS.Name] == nil {
// We don't have any context here to produce a good diagnostic,
// which is why we warn in the Content docstring to minimize the
// use of required attributes on merged bodies.
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required argument",
Detail: fmt.Sprintf(
"The argument %q is required, but was not set.",
attrS.Name,
),
})
}
}
leftoverBody := MergeBodies(mergedLeftovers)
return content, leftoverBody, diags
}

View File

@@ -1,261 +1,111 @@
package hclparser
import (
"errors"
"path"
"strings"
"time"
"github.com/hashicorp/go-cty-funcs/cidr"
"github.com/hashicorp/go-cty-funcs/crypto"
"github.com/hashicorp/go-cty-funcs/encoding"
"github.com/hashicorp/go-cty-funcs/uuid"
"github.com/hashicorp/hcl/v2/ext/tryfunc"
"github.com/hashicorp/hcl/v2/ext/typeexpr"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/function/stdlib"
)
type funcDef struct {
name string
fn function.Function
factory func() function.Function
}
var stdlibFunctions = []funcDef{
{name: "absolute", fn: stdlib.AbsoluteFunc},
{name: "add", fn: stdlib.AddFunc},
{name: "and", fn: stdlib.AndFunc},
{name: "base64decode", fn: encoding.Base64DecodeFunc},
{name: "base64encode", fn: encoding.Base64EncodeFunc},
{name: "basename", factory: basenameFunc},
{name: "bcrypt", fn: crypto.BcryptFunc},
{name: "byteslen", fn: stdlib.BytesLenFunc},
{name: "bytesslice", fn: stdlib.BytesSliceFunc},
{name: "can", fn: tryfunc.CanFunc},
{name: "ceil", fn: stdlib.CeilFunc},
{name: "chomp", fn: stdlib.ChompFunc},
{name: "chunklist", fn: stdlib.ChunklistFunc},
{name: "cidrhost", fn: cidr.HostFunc},
{name: "cidrnetmask", fn: cidr.NetmaskFunc},
{name: "cidrsubnet", fn: cidr.SubnetFunc},
{name: "cidrsubnets", fn: cidr.SubnetsFunc},
{name: "coalesce", fn: stdlib.CoalesceFunc},
{name: "coalescelist", fn: stdlib.CoalesceListFunc},
{name: "compact", fn: stdlib.CompactFunc},
{name: "concat", fn: stdlib.ConcatFunc},
{name: "contains", fn: stdlib.ContainsFunc},
{name: "convert", fn: typeexpr.ConvertFunc},
{name: "csvdecode", fn: stdlib.CSVDecodeFunc},
{name: "dirname", factory: dirnameFunc},
{name: "distinct", fn: stdlib.DistinctFunc},
{name: "divide", fn: stdlib.DivideFunc},
{name: "element", fn: stdlib.ElementFunc},
{name: "equal", fn: stdlib.EqualFunc},
{name: "flatten", fn: stdlib.FlattenFunc},
{name: "floor", fn: stdlib.FloorFunc},
{name: "format", fn: stdlib.FormatFunc},
{name: "formatdate", fn: stdlib.FormatDateFunc},
{name: "formatlist", fn: stdlib.FormatListFunc},
{name: "greaterthan", fn: stdlib.GreaterThanFunc},
{name: "greaterthanorequalto", fn: stdlib.GreaterThanOrEqualToFunc},
{name: "hasindex", fn: stdlib.HasIndexFunc},
{name: "indent", fn: stdlib.IndentFunc},
{name: "index", fn: stdlib.IndexFunc},
{name: "indexof", factory: indexOfFunc},
{name: "int", fn: stdlib.IntFunc},
{name: "join", fn: stdlib.JoinFunc},
{name: "jsondecode", fn: stdlib.JSONDecodeFunc},
{name: "jsonencode", fn: stdlib.JSONEncodeFunc},
{name: "keys", fn: stdlib.KeysFunc},
{name: "length", fn: stdlib.LengthFunc},
{name: "lessthan", fn: stdlib.LessThanFunc},
{name: "lessthanorequalto", fn: stdlib.LessThanOrEqualToFunc},
{name: "log", fn: stdlib.LogFunc},
{name: "lookup", fn: stdlib.LookupFunc},
{name: "lower", fn: stdlib.LowerFunc},
{name: "max", fn: stdlib.MaxFunc},
{name: "md5", fn: crypto.Md5Func},
{name: "merge", fn: stdlib.MergeFunc},
{name: "min", fn: stdlib.MinFunc},
{name: "modulo", fn: stdlib.ModuloFunc},
{name: "multiply", fn: stdlib.MultiplyFunc},
{name: "negate", fn: stdlib.NegateFunc},
{name: "not", fn: stdlib.NotFunc},
{name: "notequal", fn: stdlib.NotEqualFunc},
{name: "or", fn: stdlib.OrFunc},
{name: "parseint", fn: stdlib.ParseIntFunc},
{name: "pow", fn: stdlib.PowFunc},
{name: "range", fn: stdlib.RangeFunc},
{name: "regex_replace", fn: stdlib.RegexReplaceFunc},
{name: "regex", fn: stdlib.RegexFunc},
{name: "regexall", fn: stdlib.RegexAllFunc},
{name: "replace", fn: stdlib.ReplaceFunc},
{name: "reverse", fn: stdlib.ReverseFunc},
{name: "reverselist", fn: stdlib.ReverseListFunc},
{name: "rsadecrypt", fn: crypto.RsaDecryptFunc},
{name: "sanitize", factory: sanitizeFunc},
{name: "sethaselement", fn: stdlib.SetHasElementFunc},
{name: "setintersection", fn: stdlib.SetIntersectionFunc},
{name: "setproduct", fn: stdlib.SetProductFunc},
{name: "setsubtract", fn: stdlib.SetSubtractFunc},
{name: "setsymmetricdifference", fn: stdlib.SetSymmetricDifferenceFunc},
{name: "setunion", fn: stdlib.SetUnionFunc},
{name: "sha1", fn: crypto.Sha1Func},
{name: "sha256", fn: crypto.Sha256Func},
{name: "sha512", fn: crypto.Sha512Func},
{name: "signum", fn: stdlib.SignumFunc},
{name: "slice", fn: stdlib.SliceFunc},
{name: "sort", fn: stdlib.SortFunc},
{name: "split", fn: stdlib.SplitFunc},
{name: "strlen", fn: stdlib.StrlenFunc},
{name: "substr", fn: stdlib.SubstrFunc},
{name: "subtract", fn: stdlib.SubtractFunc},
{name: "timeadd", fn: stdlib.TimeAddFunc},
{name: "timestamp", factory: timestampFunc},
{name: "title", fn: stdlib.TitleFunc},
{name: "trim", fn: stdlib.TrimFunc},
{name: "trimprefix", fn: stdlib.TrimPrefixFunc},
{name: "trimspace", fn: stdlib.TrimSpaceFunc},
{name: "trimsuffix", fn: stdlib.TrimSuffixFunc},
{name: "try", fn: tryfunc.TryFunc},
{name: "upper", fn: stdlib.UpperFunc},
{name: "urlencode", fn: encoding.URLEncodeFunc},
{name: "uuidv4", fn: uuid.V4Func},
{name: "uuidv5", fn: uuid.V5Func},
{name: "values", fn: stdlib.ValuesFunc},
{name: "zipmap", fn: stdlib.ZipmapFunc},
}
// indexOfFunc constructs a function that finds the element index for a given
// value in a list.
func indexOfFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "list",
Type: cty.DynamicPseudoType,
},
{
Name: "value",
Type: cty.DynamicPseudoType,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
return cty.NilVal, errors.New("argument must be a list or tuple")
}
if !args[0].IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if args[0].LengthInt() == 0 { // Easy path
return cty.NilVal, errors.New("cannot search an empty list")
}
for it := args[0].ElementIterator(); it.Next(); {
i, v := it.Element()
eq, err := stdlib.Equal(v, args[1])
if err != nil {
return cty.NilVal, err
}
if !eq.IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if eq.True() {
return i, nil
}
}
return cty.NilVal, errors.New("item not found")
},
})
}
// basenameFunc constructs a function that returns the last element of a path.
func basenameFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Base(in)), nil
},
})
}
// dirnameFunc constructs a function that returns the directory of a path.
func dirnameFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Dir(in)), nil
},
})
}
// sanitizeFunc constructs a function that replaces all non-alphanumeric characters with an underscore,
// leaving only characters that are valid for a Bake target name.
func sanitizeFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "name",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
// only [a-zA-Z0-9_-]+ is allowed
var b strings.Builder
for _, r := range in {
if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '_' || r == '-' {
b.WriteRune(r)
} else {
b.WriteRune('_')
}
}
return cty.StringVal(b.String()), nil
},
})
}
// timestampFunc constructs a function that returns a string representation of the current date and time.
//
// This function was imported from terraform's datetime utilities.
func timestampFunc() function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
},
})
}
func Stdlib() map[string]function.Function {
funcs := make(map[string]function.Function, len(stdlibFunctions))
for _, v := range stdlibFunctions {
if v.factory != nil {
funcs[v.name] = v.factory()
} else {
funcs[v.name] = v.fn
}
}
return funcs
var stdlibFunctions = map[string]function.Function{
"absolute": stdlib.AbsoluteFunc,
"add": stdlib.AddFunc,
"and": stdlib.AndFunc,
"base64decode": encoding.Base64DecodeFunc,
"base64encode": encoding.Base64EncodeFunc,
"bcrypt": crypto.BcryptFunc,
"byteslen": stdlib.BytesLenFunc,
"bytesslice": stdlib.BytesSliceFunc,
"can": tryfunc.CanFunc,
"ceil": stdlib.CeilFunc,
"chomp": stdlib.ChompFunc,
"chunklist": stdlib.ChunklistFunc,
"cidrhost": cidr.HostFunc,
"cidrnetmask": cidr.NetmaskFunc,
"cidrsubnet": cidr.SubnetFunc,
"cidrsubnets": cidr.SubnetsFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"coalesce": stdlib.CoalesceFunc,
"coalescelist": stdlib.CoalesceListFunc,
"compact": stdlib.CompactFunc,
"concat": stdlib.ConcatFunc,
"contains": stdlib.ContainsFunc,
"convert": typeexpr.ConvertFunc,
"distinct": stdlib.DistinctFunc,
"divide": stdlib.DivideFunc,
"element": stdlib.ElementFunc,
"equal": stdlib.EqualFunc,
"flatten": stdlib.FlattenFunc,
"floor": stdlib.FloorFunc,
"formatdate": stdlib.FormatDateFunc,
"format": stdlib.FormatFunc,
"formatlist": stdlib.FormatListFunc,
"greaterthan": stdlib.GreaterThanFunc,
"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
"hasindex": stdlib.HasIndexFunc,
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"int": stdlib.IntFunc,
"jsondecode": stdlib.JSONDecodeFunc,
"jsonencode": stdlib.JSONEncodeFunc,
"keys": stdlib.KeysFunc,
"join": stdlib.JoinFunc,
"length": stdlib.LengthFunc,
"lessthan": stdlib.LessThanFunc,
"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
"log": stdlib.LogFunc,
"lookup": stdlib.LookupFunc,
"lower": stdlib.LowerFunc,
"max": stdlib.MaxFunc,
"md5": crypto.Md5Func,
"merge": stdlib.MergeFunc,
"min": stdlib.MinFunc,
"modulo": stdlib.ModuloFunc,
"multiply": stdlib.MultiplyFunc,
"negate": stdlib.NegateFunc,
"notequal": stdlib.NotEqualFunc,
"not": stdlib.NotFunc,
"or": stdlib.OrFunc,
"parseint": stdlib.ParseIntFunc,
"pow": stdlib.PowFunc,
"range": stdlib.RangeFunc,
"regexall": stdlib.RegexAllFunc,
"regex": stdlib.RegexFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"reverse": stdlib.ReverseFunc,
"reverselist": stdlib.ReverseListFunc,
"rsadecrypt": crypto.RsaDecryptFunc,
"sethaselement": stdlib.SetHasElementFunc,
"setintersection": stdlib.SetIntersectionFunc,
"setproduct": stdlib.SetProductFunc,
"setsubtract": stdlib.SetSubtractFunc,
"setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
"setunion": stdlib.SetUnionFunc,
"sha1": crypto.Sha1Func,
"sha256": crypto.Sha256Func,
"sha512": crypto.Sha512Func,
"signum": stdlib.SignumFunc,
"slice": stdlib.SliceFunc,
"sort": stdlib.SortFunc,
"split": stdlib.SplitFunc,
"strlen": stdlib.StrlenFunc,
"substr": stdlib.SubstrFunc,
"subtract": stdlib.SubtractFunc,
"timeadd": stdlib.TimeAddFunc,
"title": stdlib.TitleFunc,
"trim": stdlib.TrimFunc,
"trimprefix": stdlib.TrimPrefixFunc,
"trimspace": stdlib.TrimSpaceFunc,
"trimsuffix": stdlib.TrimSuffixFunc,
"try": tryfunc.TryFunc,
"upper": stdlib.UpperFunc,
"urlencode": encoding.URLEncodeFunc,
"uuidv4": uuid.V4Func,
"uuidv5": uuid.V5Func,
"values": stdlib.ValuesFunc,
"zipmap": stdlib.ZipmapFunc,
}
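// A minimal sketch (hypothetical helper using the Stdlib() accessor from the
// newer version of this file above; same package plus fmt, hcl, and
// hclsyntax imports assumed) of plugging the function table into an
// hcl.EvalContext.
func exampleStdlib() {
	expr, _ := hclsyntax.ParseExpression([]byte(`upper(basename("/foo/bar"))`), "in.hcl", hcl.InitialPos)
	v, _ := expr.Value(&hcl.EvalContext{Functions: Stdlib()})
	fmt.Println(v.AsString()) // BAR
}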

View File

@@ -1,199 +0,0 @@
package hclparser
import (
"testing"
"github.com/stretchr/testify/require"
"github.com/zclconf/go-cty/cty"
)
func TestIndexOf(t *testing.T) {
type testCase struct {
input cty.Value
key cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"index 0": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("one"),
want: cty.NumberIntVal(0),
},
"index 3": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("four"),
want: cty.NumberIntVal(3),
},
"index -1": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("3"),
wantErr: true,
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := indexOfFunc().Call([]cty.Value{test.input, test.key})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestBasename(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("bar"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("bar"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("bar."),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("bar.."),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("bar..."),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := basenameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestDirname(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("/foo"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("/foo"),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("/foo"),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("/foo"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := dirnameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestSanitize(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal(""),
},
"simple": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo_bar"),
},
"simple no slash": {
input: cty.StringVal("foobar"),
want: cty.StringVal("foobar"),
},
"dot": {
input: cty.StringVal("foo/bar."),
want: cty.StringVal("foo_bar_"),
},
"dotdot": {
input: cty.StringVal("foo/bar.."),
want: cty.StringVal("foo_bar__"),
},
"dotdotdot": {
input: cty.StringVal("foo/bar..."),
want: cty.StringVal("foo_bar___"),
},
"utf8": {
input: cty.StringVal("foo/🍕bar"),
want: cty.StringVal("foo__bar"),
},
"symbols": {
input: cty.StringVal("foo/bar!@(ba+z)"),
want: cty.StringVal("foo_bar___ba_z_"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := sanitizeFunc().Call([]cty.Value{test.input})
require.NoError(t, err)
require.Equal(t, test.want, got)
})
}
}
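Taken together, the cases above pin down the behavior: ASCII alphanumerics pass through unchanged and every other rune becomes a single underscore. A minimal per-rune implementation consistent with these tests, as a sketch only (the real `sanitizeFunc` body is not shown in this diff, and keeping `_` is an assumption):

```go
func sanitize(s string) string {
	rs := []rune(s)
	for i, r := range rs {
		if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') ||
			(r >= '0' && r <= '9') || r == '_' { // '_' assumed safe
			continue
		}
		rs[i] = '_' // any other rune, multi-byte included, maps to one '_'
	}
	return string(rs)
}
```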

View File

@@ -4,61 +4,26 @@ import (
"archive/tar"
"bytes"
"context"
"os"
"strings"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/build"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/session"
"github.com/pkg/errors"
)
const maxBakeDefinitionSize = 2 * 1024 * 1024 // 2 MB
type Input struct {
State *llb.State
URL string
}
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var sessions []session.Attachable
var filename string
st, ok := dockerui.DetectGitContext(url, false)
if ok {
if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
ID: "default",
Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
}}); err == nil {
sessions = append(sessions, ssh)
}
var gitAuthSecrets []*controllerapi.Secret
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthTokenKey,
Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
})
}
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthHeaderKey,
Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
})
}
if len(gitAuthSecrets) > 0 {
if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
sessions = append(sessions, secrets)
}
}
} else {
st, filename, ok = dockerui.DetectHTTPContext(url)
func ReadRemoteFiles(ctx context.Context, dis []build.DriverInfo, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
st, filename, ok := detectHTTPContext(url)
if !ok {
st, ok = detectGitContext(url)
if !ok {
return nil, nil, errors.Errorf("not url context")
}
@@ -67,25 +32,25 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
inp := &Input{State: st, URL: url}
var files []File
var node *builder.Node
for i, n := range nodes {
if n.Err == nil {
node = &nodes[i]
var di *build.DriverInfo
for _, d := range dis {
if d.Err == nil {
di = &d
continue
}
}
if node == nil {
if di == nil {
return nil, nil, nil
}
c, err := driver.Boot(ctx, ctx, node.Driver, pw)
c, err := driver.Boot(ctx, ctx, di.Driver, pw)
if err != nil {
return nil, nil, err
}
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
@@ -109,6 +74,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
}
return nil, err
}, ch)
if err != nil {
return nil, nil, err
}
@@ -116,6 +82,51 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
return files, inp, nil
}
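// IsRemoteURL reports whether url names an HTTP(S) or git build context.
// Illustrative examples: "https://example.com/ctx.tar" and
// "git@github.com:docker/buildx.git" are remote, "./app" is not.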
func IsRemoteURL(url string) bool {
if _, _, ok := detectHTTPContext(url); ok {
return true
}
if _, ok := detectGitContext(url); ok {
return true
}
return false
}
func detectHTTPContext(url string) (*llb.State, string, bool) {
if httpPrefix.MatchString(url) {
httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
return &httpContext, "context", true
}
return nil, "", false
}
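// detectGitContext recognizes refs such as "git://...", "github.com/...",
// "git@..." and HTTP(S) URLs ending in .git; a fragment like "#v0.6" is
// split off at the first "#" and passed to llb.Git as the ref to fetch
// (example values illustrative).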
func detectGitContext(ref string) (*llb.State, bool) {
found := false
if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
found = true
}
for _, prefix := range []string{"git://", "github.com/", "git@"} {
if strings.HasPrefix(ref, prefix) {
found = true
break
}
}
if !found {
return nil, false
}
parts := strings.SplitN(ref, "#", 2)
branch := ""
if len(parts) > 1 {
branch = parts[1]
}
gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
st := llb.Git(parts[0], branch, gitOpts...)
return &st, true
}
func isArchive(header []byte) bool {
for _, m := range [][]byte{
{0x42, 0x5A, 0x68}, // bzip2
@@ -180,9 +191,9 @@ func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Refere
name := inp.URL
inp.URL = ""
if int64(len(dt)) > stat.Size {
if stat.Size > maxBakeDefinitionSize {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size (%s)", units.HumanSize(maxBakeDefinitionSize))
if len(dt) > stat.Size() {
if stat.Size() > 1024*512 {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
}
dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{

File diff suppressed because it is too large

View File

@@ -1,62 +0,0 @@
package build
import (
"context"
stderrors "errors"
"net"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
if platform != nil {
pls = []v1.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}
resolved, err := resolveDrivers(ctx, nodes, opts, pw)
if err != nil {
return nil, err
}
var dialError error
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
p := *platform
var found bool
for _, pp := range rn.platforms {
if platforms.Only(p).Match(pp) {
found = true
break
}
}
if !found {
continue
}
}
conn, err := nodes[rn.driverIndex].Driver.Dial(ctx)
if err == nil {
return conn, nil
}
dialError = stderrors.Join(dialError, err)
}
}
return nil, errors.Wrap(dialError, "no nodes available")
}
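A hypothetical call site, assuming a resolved `[]builder.Node` slice and a progress writer are already in hand (all names here are placeholders, not from this diff):

```go
p := platforms.MustParse("linux/amd64")
conn, err := build.Dial(ctx, nodes, pw, &p)
if err != nil {
	return err
}
defer conn.Close()
```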

View File

@@ -1,352 +0,0 @@
package build
import (
"context"
"fmt"
"sync"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/tracing"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
)
type resolvedNode struct {
resolver *nodeResolver
driverIndex int
platforms []specs.Platform
}
func (dp resolvedNode) Node() builder.Node {
return dp.resolver.nodes[dp.driverIndex]
}
func (dp resolvedNode) Client(ctx context.Context) (*client.Client, error) {
clients, err := dp.resolver.boot(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return nil, err
}
return clients[0], nil
}
func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error) {
opts, err := dp.resolver.opts(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
return opts[0], nil
}
type matchMaker func(specs.Platform) platforms.MatchComparer
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
cache map[int]T
cacheMu sync.Mutex
}
func newCachedGroup[T any]() cachedGroup[T] {
return cachedGroup[T]{
cache: map[int]T{},
}
}
type nodeResolver struct {
nodes []builder.Node
clients cachedGroup[*client.Client]
buildOpts cachedGroup[gateway.BuildOpts]
}
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
driverRes := newDriverResolver(nodes)
drivers, err := driverRes.Resolve(ctx, opt, pw)
if err != nil {
return nil, err
}
return drivers, err
}
func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{
nodes: nodes,
clients: newCachedGroup[*client.Client](),
buildOpts: newCachedGroup[gateway.BuildOpts](),
}
return r
}
func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
if len(r.nodes) == 0 {
return nil, nil
}
nodes := map[string][]*resolvedNode{}
for k, opt := range opt {
node, perfect, err := r.resolve(ctx, opt.Platforms, pw, platforms.OnlyStrict, nil)
if err != nil {
return nil, err
}
if !perfect {
break
}
nodes[k] = node
}
if len(nodes) != len(opt) {
// if we didn't get a perfect match, we need to boot all drivers
allIndexes := make([]int, len(r.nodes))
for i := range allIndexes {
allIndexes[i] = i
}
clients, err := r.boot(ctx, allIndexes, pw)
if err != nil {
return nil, err
}
eg, egCtx := errgroup.WithContext(ctx)
workers := make([][]specs.Platform, len(clients))
for i, c := range clients {
i, c := i, c
if c == nil {
continue
}
eg.Go(func() error {
ww, err := c.ListWorkers(egCtx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
ps := make(map[string]specs.Platform, len(ww))
for _, w := range ww {
for _, p := range w.Platforms {
pk := platforms.Format(platforms.Normalize(p))
ps[pk] = p
}
}
for _, p := range ps {
workers[i] = append(workers[i], p)
}
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
// then we can attempt to match against all the available platforms
// (this time we don't care about imperfect matches)
nodes = map[string][]*resolvedNode{}
for k, opt := range opt {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
return workers[idx]
})
if err != nil {
return nil, err
}
nodes[k] = node
}
}
idxs := make([]int, 0, len(r.nodes))
for _, nodes := range nodes {
for _, node := range nodes {
idxs = append(idxs, node.driverIndex)
}
}
// preload capabilities
span, ctx := tracing.StartSpan(ctx, "load buildkit capabilities", trace.WithSpanKind(trace.SpanKindInternal))
_, err := r.opts(ctx, idxs, pw)
tracing.FinishWithError(span, err)
if err != nil {
return nil, err
}
return nodes, nil
}
func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
if len(r.nodes) == 0 {
return nil, true, nil
}
perfect := true
nodeIdxs := make([]int, 0)
for _, p := range ps {
idx := r.get(p, matcher, additional)
if idx == -1 {
idx = 0
perfect = false
}
nodeIdxs = append(nodeIdxs, idx)
}
var nodes []*resolvedNode
if len(nodeIdxs) == 0 {
nodes = append(nodes, &resolvedNode{
resolver: r,
driverIndex: 0,
})
nodeIdxs = append(nodeIdxs, 0)
} else {
for i, idx := range nodeIdxs {
node := &resolvedNode{
resolver: r,
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
}
nodes = recombineNodes(nodes)
if _, err := r.boot(ctx, nodeIdxs, pw); err != nil {
return nil, false, err
}
return nodes, perfect, nil
}
func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
best := -1
bestPlatform := specs.Platform{}
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {
platforms = append([]specs.Platform{}, platforms...)
platforms = append(platforms, additionalPlatforms(i, node)...)
}
for _, p2 := range platforms {
m := matcher(p2)
if !m.Match(p) {
continue
}
if best == -1 {
best = i
bestPlatform = p2
continue
}
if matcher(p2).Less(p, bestPlatform) {
best = i
bestPlatform = p2
}
}
}
return best
}
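// boot dials the drivers at the given indexes in parallel; the flightcontrol
// group deduplicates concurrent boots of the same index, and the cache map
// serves repeat lookups without re-booting.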
func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer) ([]*client.Client, error) {
clients := make([]*client.Client, len(idxs))
baseCtx := ctx
eg, ctx := errgroup.WithContext(ctx)
for i, idx := range idxs {
i, idx := i, idx
eg.Go(func() error {
c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil {
return nil, nil
}
r.clients.cacheMu.Lock()
c, ok := r.clients.cache[idx]
r.clients.cacheMu.Unlock()
if ok {
return c, nil
}
c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
if err != nil {
return nil, err
}
r.clients.cacheMu.Lock()
r.clients.cache[idx] = c
r.clients.cacheMu.Unlock()
return c, nil
})
if err != nil {
return err
}
clients[i] = c
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return clients, nil
}
func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer) ([]gateway.BuildOpts, error) {
clients, err := r.boot(ctx, idxs, pw)
if err != nil {
return nil, err
}
bopts := make([]gateway.BuildOpts, len(clients))
eg, ctx := errgroup.WithContext(ctx)
for i, idx := range idxs {
i, idx := i, idx
c := clients[i]
if c == nil {
continue
}
eg.Go(func() error {
opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
r.buildOpts.cacheMu.Lock()
opt, ok := r.buildOpts.cache[idx]
r.buildOpts.cacheMu.Unlock()
if ok {
return opt, nil
}
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts()
return nil, nil
}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
r.buildOpts.cacheMu.Lock()
r.buildOpts.cache[idx] = opt
r.buildOpts.cacheMu.Unlock()
return opt, err
})
if err != nil {
return err
}
bopts[i] = opt
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return bopts, nil
}
// recombineNodes recombines resolved nodes that are on the same driver
// back together into a single node.
func recombineNodes(nodes []*resolvedNode) []*resolvedNode {
result := make([]*resolvedNode, 0, len(nodes))
lookup := map[int]int{}
for _, node := range nodes {
if idx, ok := lookup[node.driverIndex]; ok {
result[idx].platforms = append(result[idx].platforms, node.platforms...)
} else {
lookup[node.driverIndex] = len(result)
result = append(result, node)
}
}
return result
}
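To make the recombination concrete, here is a hedged example of what the function does with three resolved entries spread across two drivers (platform values illustrative):

```go
in := []*resolvedNode{
	{driverIndex: 0, platforms: []specs.Platform{platforms.MustParse("linux/amd64")}},
	{driverIndex: 1, platforms: []specs.Platform{platforms.MustParse("linux/riscv64")}},
	{driverIndex: 0, platforms: []specs.Platform{platforms.MustParse("linux/arm64")}},
}
out := recombineNodes(in)
// len(out) == 2: out[0] holds driver 0 with [linux/amd64 linux/arm64],
// out[1] holds driver 1 with [linux/riscv64]; first-appearance order is kept.
```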

View File

@@ -1,315 +0,0 @@
package build
import (
"context"
"sort"
"testing"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
)
func TestFindDriverSanity(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
r := makeTestResolver(nil)
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Nil(t, res)
}
func TestFindDriverWeirdName(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/foobar")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestFindDriverUnknown(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeSinglePlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
// find second platform
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
// find an unknown platform, should match the first driver
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrict(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
})
// arm64 should match itself
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v7
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrictARM(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
"ccc": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
}
func TestSelectNodeNonStrictLower(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
// v8 can't be built on v7 (so we should select the default)...
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
// ...but v6 can be built on v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferStart(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
"ccc": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferExact(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/arm/v8")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Empty(t, res[0].platforms)
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
if n.Builder == "aaa" {
return []specs.Platform{platforms.MustParse("linux/arm/v7")}
}
return nil
})
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSplitNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/arm64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
})
// the "best" choice would be the node with both platforms, but we're using
// a naive algorithm that doesn't try to unify the platforms
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
var ns []builder.Node
for name, platforms := range nodes {
ns = append(ns, builder.Node{
Builder: name,
Platforms: platforms,
})
}
sort.Slice(ns, func(i, j int) bool {
return ns[i].Builder < ns[j].Builder
})
return newDriverResolver(ns)
}

View File

@@ -1,160 +0,0 @@
package build
import (
"context"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
type gitAttrsAppendFunc func(so *client.SolveOpt)
func gitAppendNoneFunc(_ *client.SolveOpt) {}
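// getGitAttributes derives vcs:* frontend attributes and/or OCI source labels
// from the git repository containing contextPath, honoring the
// BUILDX_GIT_LABELS and BUILDX_GIT_INFO environment variables.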
func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (f gitAttrsAppendFunc, err error) {
defer func() {
if f == nil {
f = gitAppendNoneFunc
}
}()
if contextPath == "" {
return nil, nil
}
setGitLabels := false
if v, ok := os.LookupEnv("BUILDX_GIT_LABELS"); ok {
if v == "full" { // backward compatibility with old "full" mode
setGitLabels = true
} else if v, err := strconv.ParseBool(v); err == nil {
setGitLabels = v
}
}
setGitInfo := true
if v, ok := os.LookupEnv("BUILDX_GIT_INFO"); ok {
if v, err := strconv.ParseBool(v); err == nil {
setGitInfo = v
}
}
if !setGitLabels && !setGitInfo {
return nil, nil
}
// figure out in which directory the git command needs to run in
var wd string
if filepath.IsAbs(contextPath) {
wd = contextPath
} else {
wd, _ = filepath.Abs(filepath.Join(osutil.GetWd(), contextPath))
}
wd = osutil.SanitizePath(wd)
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return nil, errors.Wrap(err, "git was not found in the system")
}
return nil, nil
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return nil, nil
}
root, err := gitc.RootDir()
if err != nil {
return nil, errors.Wrap(err, "failed to get git root dir")
}
res := make(map[string]string)
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return nil, errors.Wrap(err, "failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
if v, err := strconv.ParseBool(v); err == nil {
checkDirty = v
}
}
if checkDirty && gitc.IsDirty() {
sha += "-dirty"
}
if setGitLabels {
res["label:"+specs.AnnotationRevision] = sha
}
if setGitInfo {
res["vcs:revision"] = sha
}
}
if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
if setGitLabels {
res["label:"+specs.AnnotationSource] = rurl
}
if setGitInfo {
res["vcs:source"] = rurl
}
}
if setGitLabels && root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
dockerfilePath = filepath.Join(osutil.GetWd(), dockerfilePath)
}
if r, err := filepath.Rel(root, dockerfilePath); err == nil && !strings.HasPrefix(r, "..") {
res["label:"+DockerfileLabel] = r
}
}
return func(so *client.SolveOpt) {
if so.FrontendAttrs == nil {
so.FrontendAttrs = make(map[string]string)
}
for k, v := range res {
so.FrontendAttrs[k] = v
}
if !setGitInfo || root == "" {
return
}
for key, mount := range so.LocalMounts {
fs, ok := mount.(*fs)
if !ok {
continue
}
dir, err := filepath.EvalSymlinks(fs.dir) // keep same behavior as fsutil.NewFS
if err != nil {
continue
}
dir, err = filepath.Abs(dir)
if err != nil {
continue
}
if lp, err := osutil.GetLongPathName(dir); err == nil {
dir = lp
}
dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+key] = r
}
}
}, nil
}

View File

@@ -1,221 +0,0 @@
package build
import (
"context"
"os"
"path"
"path/filepath"
"strings"
"testing"
"github.com/docker/buildx/util/gitutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setupTest(tb testing.TB) {
gitutil.Mktmp(tb)
c, err := gitutil.New()
require.NoError(tb, err)
gitutil.GitInit(c, tb)
df := []byte("FROM alpine:latest\n")
require.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
gitutil.GitAdd(c, tb, "Dockerfile")
gitutil.GitCommit(c, tb, "initial commit")
gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
require.NoError(t, err)
}
func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
addGitAttrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
assert.Empty(t, so.FrontendAttrs)
}
func TestGetGitAttributes(t *testing.T) {
cases := []struct {
name string
envGitLabels string
envGitInfo string
expected []string
}{
{
name: "default",
envGitLabels: "",
envGitInfo: "",
expected: []string{
"vcs:revision",
"vcs:source",
},
},
{
name: "none",
envGitLabels: "false",
envGitInfo: "false",
expected: []string{},
},
{
name: "gitinfo",
envGitLabels: "false",
envGitInfo: "true",
expected: []string{
"vcs:revision",
"vcs:source",
},
},
{
name: "gitlabels",
envGitLabels: "true",
envGitInfo: "false",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
},
},
{
name: "both",
envGitLabels: "true",
envGitInfo: "",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"vcs:revision",
"vcs:source",
},
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
setupTest(t)
if tt.envGitLabels != "" {
t.Setenv("BUILDX_GIT_LABELS", tt.envGitLabels)
}
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
for _, e := range tt.expected {
assert.Contains(t, so.FrontendAttrs, e)
assert.NotEmpty(t, so.FrontendAttrs[e])
if e == "label:"+DockerfileLabel {
assert.Equal(t, "Dockerfile", so.FrontendAttrs[e])
} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs[e])
}
}
})
}
}
func TestGetGitAttributesDirty(t *testing.T) {
setupTest(t)
t.Setenv("BUILDX_GIT_CHECK_DIRTY", "true")
// make a change to test dirty flag
df := []byte("FROM alpine:edge\n")
require.NoError(t, os.Mkdir("dir", 0755))
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
assert.Equal(t, 5, len(so.FrontendAttrs))
assert.Contains(t, so.FrontendAttrs, "label:"+DockerfileLabel)
assert.Equal(t, "Dockerfile", so.FrontendAttrs["label:"+DockerfileLabel])
assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+specs.AnnotationSource])
assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationRevision)
assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+specs.AnnotationRevision], "-dirty"))
assert.Contains(t, so.FrontendAttrs, "vcs:source")
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["vcs:source"])
assert.Contains(t, so.FrontendAttrs, "vcs:revision")
assert.True(t, strings.HasSuffix(so.FrontendAttrs["vcs:revision"], "-dirty"))
}
func TestLocalDirs(t *testing.T) {
setupTest(t)
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
require.NoError(t, setLocalMount("context", ".", so))
require.NoError(t, setLocalMount("dockerfile", ".", so))
addGitAttrs(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
}
func TestLocalDirsSub(t *testing.T) {
gitutil.Mktmp(t)
c, err := gitutil.New()
require.NoError(t, err)
gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
require.NoError(t, os.MkdirAll("app", 0755))
require.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
require.NoError(t, setLocalMount("context", ".", so))
require.NoError(t, setLocalMount("dockerfile", "app", so))
addGitAttrs, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
require.NoError(t, err)
addGitAttrs(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
}

View File

@@ -1,138 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"io"
"sync"
"sync/atomic"
"syscall"
controllerapi "github.com/docker/buildx/controller/pb"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type Container struct {
cancelOnce sync.Once
containerCancel func(error)
isUnavailable atomic.Bool
initStarted atomic.Bool
container gateway.Container
releaseCh chan struct{}
resultCtx *ResultHandle
}
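// NewContainer runs a detached gateway build whose callback blocks on
// releaseCh, keeping the build result (and thus the container's rootfs)
// mounted until Cancel is called.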
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
mainCtx := ctx
ctrCh := make(chan *Container)
errCh := make(chan error)
go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancelCause(ctx)
go func() {
<-mainCtx.Done()
cancel(errors.WithStack(context.Canceled))
}()
containerCfg, err := resultCtx.getContainerConfig(cfg)
if err != nil {
return nil, err
}
containerCtx, containerCancel := context.WithCancelCause(ctx)
defer containerCancel(errors.WithStack(context.Canceled))
bkContainer, err := c.NewContainer(containerCtx, containerCfg)
if err != nil {
return nil, err
}
releaseCh := make(chan struct{})
container := &Container{
containerCancel: containerCancel,
container: bkContainer,
releaseCh: releaseCh,
resultCtx: resultCtx,
}
doneCh := make(chan struct{})
defer close(doneCh)
resultCtx.registerCleanup(func() {
container.Cancel()
<-doneCh
})
ctrCh <- container
<-container.releaseCh
return nil, bkContainer.Release(ctx)
})
if err != nil {
errCh <- err
}
}()
select {
case ctr := <-ctrCh:
return ctr, nil
case err := <-errCh:
return nil, err
case <-mainCtx.Done():
return nil, mainCtx.Err()
}
}
func (c *Container) Cancel() {
c.markUnavailable()
c.cancelOnce.Do(func() {
if c.containerCancel != nil {
c.containerCancel(errors.WithStack(context.Canceled))
}
close(c.releaseCh)
})
}
func (c *Container) IsUnavailable() bool {
return c.isUnavailable.Load()
}
func (c *Container) markUnavailable() {
c.isUnavailable.Store(true)
}
func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
defer func() {
// container can't be used after init exits
c.markUnavailable()
}()
}
err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
if err != nil {
// Container becomes unavailable if one of the processes fails in it.
c.markUnavailable()
}
return err
}
func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
if err != nil {
return err
}
proc, err := ctr.Start(ctx, processCfg)
if err != nil {
return errors.Errorf("failed to start container: %v", err)
}
doneCh := make(chan struct{})
defer close(doneCh)
go func() {
select {
case <-ctx.Done():
if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
logrus.Warnf("failed to kill process: %v", err)
}
case <-doneCh:
}
}()
return proc.Wait()
}
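A hypothetical interactive session built on these pieces (`resultHandle` and `invokeCfg` are placeholders, not part of this diff):

```go
ctr, err := build.NewContainer(ctx, resultHandle, invokeCfg)
if err != nil {
	return err
}
defer ctr.Cancel()
// the first Exec runs as the init process; the container becomes
// unavailable once it exits or once any process in it fails
if err := ctr.Exec(ctx, invokeCfg, os.Stdin, os.Stdout, os.Stderr); err != nil {
	return err
}
```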

View File

@@ -1,44 +0,0 @@
package build
import (
"path/filepath"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/confutil"
"github.com/moby/buildkit/client"
)
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, cfg *confutil.Config) error {
var err error
if so.Ref == "" || opts.CallFunc != nil {
return nil
}
lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath
if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
if lp != "" && !IsRemoteURL(lp) && lp != "-" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(cfg)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}

View File

@@ -1,657 +0,0 @@
package build
import (
"bytes"
"context"
"io"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/client/ociindex"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/entitlements"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/tonistiigi/fsutil"
)
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *Options, bopts gateway.BuildOpts, cfg *confutil.Config, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
for _, f := range defers {
f()
}
}
defer func() {
if err != nil {
releaseF()
}
}()
// inline cache from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
if v, _ := strconv.ParseBool(v); v {
opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
Type: "inline",
Attrs: map[string]string{},
})
}
}
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
}
}
cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
for _, e := range opt.CacheTo {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheTo = append(cacheTo, e)
}
cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
for _, e := range opt.CacheFrom {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheFrom = append(cacheFrom, e)
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalMounts: map[string]fsutil.FS{},
CacheExports: cacheTo,
CacheImports: cacheFrom,
AllowedEntitlements: opt.Allow,
SourcePolicy: opt.SourcePolicy,
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
if v, _ := strconv.ParseBool(v); v {
so.FrontendAttrs["multi-platform"] = "true"
}
}
if multiDriver {
// force creation of manifest list
so.FrontendAttrs["multi-platform"] = "true"
}
attests := make(map[string]string)
for k, v := range opt.Attests {
if v != nil {
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
noProv, err = strconv.ParseBool(v)
if err != nil {
return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
}
}
if !noProv {
so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
}
}
switch len(opt.Exports) {
case 1:
// valid
case 0:
if !noDefaultLoad() && opt.CallFunc == nil {
if nodeDriver.IsMobyDriver() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
}
}
default:
if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
}
}
// fill in image exporter names from tags
if len(opt.Tags) > 0 {
tags := make([]string, len(opt.Tags))
for i, tag := range opt.Tags {
ref, err := reference.Parse(tag)
if err != nil {
return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
}
tags[i] = ref.String()
}
for i, e := range opt.Exports {
switch e.Type {
case "image", "oci", "docker":
opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
}
}
} else {
for _, e := range opt.Exports {
if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("tag is needed when pushing to registry")
}
}
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
if features[dockerutil.OCIImporter] && e.Output == nil {
// rely on oci importer if available (which supports
// multi-platform images), otherwise fall back to docker
opt.Exports[i].Type = "oci"
} else if len(opt.Platforms) > 1 || len(attests) > 0 {
if e.Output != nil {
return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
}
return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
}
if e.Output == nil {
if nodeDriver.IsMobyDriver() {
e.Type = "image"
} else {
w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
if err != nil {
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
opt.Exports[i].Type = "moby"
if e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
}
}
}
}
if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
// inline buildinfo attrs from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
opt.Exports[i].Attrs["buildinfo-attrs"] = v
}
}
}
so.Exports = opt.Exports
so.Session = slices.Clone(opt.Session)
releaseLoad, err := loadInputs(ctx, nodeDriver, &opt.Inputs, pw, &so)
if err != nil {
return nil, nil, err
}
defers = append(defers, releaseLoad)
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + cfg.TryNodeIdentifier()
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
}
if len(opt.NoCacheFilter) > 0 {
so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
}
if opt.NoCache {
so.FrontendAttrs["no-cache"] = ""
}
for k, v := range opt.BuildArgs {
so.FrontendAttrs["build-arg:"+k] = v
}
for k, v := range opt.Labels {
so.FrontendAttrs["label:"+k] = v
}
for k, v := range node.ProxyConfig {
if _, ok := opt.BuildArgs[k]; !ok {
so.FrontendAttrs["build-arg:"+k] = v
}
}
// set platforms
if len(opt.Platforms) != 0 {
pp := make([]string, len(opt.Platforms))
for i, p := range opt.Platforms {
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
// setup networkmode
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
}
// setup extrahosts
extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
if err != nil {
return nil, nil, err
}
if len(extraHosts) > 0 {
so.FrontendAttrs["add-hosts"] = extraHosts
}
// setup shm size
if opt.ShmSize.Value() > 0 {
so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
}
// setup ulimits
ulimits, err := toBuildkitUlimits(opt.Ulimits)
if err != nil {
return nil, nil, err
} else if len(ulimits) > 0 {
so.FrontendAttrs["ulimit"] = ulimits
}
// mark call request as internal
if opt.CallFunc != nil {
so.Internal = true
}
return &so, releaseF, nil
}
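// loadInputs wires the build context and Dockerfile sources (local dirs,
// stdin, remote URLs, named contexts, OCI layouts) into the SolveOpt and
// returns a cleanup func for any temporary directories it created.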
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
// TODO: handle stdin, symlinks, remote contexts, check files exist
var (
err error
dockerfileReader io.ReadCloser
dockerfileDir string
dockerfileName = inp.DockerfilePath
dockerfileSrcName = inp.DockerfilePath
toRemove []string
)
switch {
case inp.ContextState != nil:
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs["context"] = *inp.ContextState
target.FrontendInputs["dockerfile"] = *inp.ContextState
case inp.ContextPath == "-":
if inp.DockerfilePath == "-" {
return nil, errors.Errorf("invalid argument: can't use stdin for both build context and dockerfile")
}
rc := inp.InStream.NewReadCloser()
magic, err := inp.InStream.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if !(err == io.EOF && len(magic) == 0) {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
target.FrontendAttrs["context"] = up.Add(rc)
target.Session = append(target.Session, up)
} else {
if inp.DockerfilePath != "" {
return nil, errors.Errorf("ambiguous Dockerfile source: both stdin and flag correspond to Dockerfiles")
}
// stdin is dockerfile
dockerfileReader = rc
inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
toRemove = append(toRemove, inp.ContextPath)
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
}
}
case osutil.IsLocalDir(inp.ContextPath):
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
sharedKey := inp.ContextPath
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
target.SharedKey = sharedKey
switch inp.DockerfilePath {
case "-":
dockerfileReader = inp.InStream.NewReadCloser()
case "":
dockerfileDir = inp.ContextPath
default:
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
}
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream.NewReadCloser()
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
default:
return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
}
if inp.DockerfileInline != "" {
dockerfileReader = io.NopCloser(strings.NewReader(inp.DockerfileInline))
dockerfileSrcName = "inline"
} else if inp.DockerfilePath == "-" {
dockerfileSrcName = "stdin"
} else if inp.DockerfilePath == "" {
dockerfileSrcName = filepath.Join(inp.ContextPath, "Dockerfile")
}
if dockerfileReader != nil {
dockerfileDir, err = createTempDockerfile(dockerfileReader, inp.InStream)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
if isHTTPURL(inp.DockerfilePath) {
dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
delete(target.FrontendInputs, "dockerfile")
}
if dockerfileName == "" {
dockerfileName = "Dockerfile"
}
if dockerfileDir != "" {
if err := setLocalMount("dockerfile", dockerfileDir, target); err != nil {
return nil, err
}
dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
}
target.FrontendAttrs["filename"] = dockerfileName
for k, v := range inp.NamedContexts {
target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
if v.State != nil {
target.FrontendAttrs["context:"+k] = "input:" + k
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs[k] = *v.State
continue
}
if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
target.FrontendAttrs["context:"+k] = v.Path
continue
}
// handle OCI layout
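// references have the form oci-layout://<store-path>[:<tag>][@<digest>];
// the tag defaults to "latest" and a missing digest is resolved from the
// layout's index via resolveDigest below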
if strings.HasPrefix(v.Path, "oci-layout://") {
localPath := strings.TrimPrefix(v.Path, "oci-layout://")
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
tag = "latest"
}
if !hasDigest {
dig, err = resolveDigest(localPath, tag)
if err != nil {
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
}
}
store, err := local.NewStore(localPath)
if err != nil {
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
}
storeName := identity.NewID()
if target.OCIStores == nil {
target.OCIStores = map[string]content.Store{}
}
target.OCIStores[storeName] = store
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
continue
}
st, err := os.Stat(v.Path)
if err != nil {
return nil, errors.Wrapf(err, "failed to get build context %v", k)
}
if !st.IsDir() {
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
}
localName := k
if k == "context" || k == "dockerfile" {
localName = "_" + k // underscore to avoid collisions
}
if err := setLocalMount(localName, v.Path, target); err != nil {
return nil, err
}
target.FrontendAttrs["context:"+k] = "local:" + localName
}
release := func() {
for _, dir := range toRemove {
_ = os.RemoveAll(dir)
}
}
inp.DockerfileMappingSrc = dockerfileSrcName
inp.DockerfileMappingDst = dockerfileName
return release, nil
}
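// resolveDigest resolves the digest for a tag in a local OCI layout index,
// falling back to the index's single entry when the tag cannot be found.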
func resolveDigest(localPath, tag string) (dig string, _ error) {
idx := ociindex.NewStoreIndex(localPath)
// lookup by name
desc, err := idx.Get(tag)
if err != nil {
return "", err
}
if desc == nil {
// lookup single
desc, err = idx.GetSingle()
if err != nil {
return "", err
}
}
if desc == nil {
return "", errors.New("failed to resolve digest")
}
dig = string(desc.Digest)
_, err = digest.Parse(dig)
if err != nil {
return "", errors.Wrapf(err, "invalid digest %s", dig)
}
return dig, nil
}
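// setLocalMount registers dir as a named local filesystem mount on the solve
// options so it can be synced from the client.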
func setLocalMount(name, dir string, so *client.SolveOpt) error {
lm, err := fsutil.NewFS(dir)
if err != nil {
return err
}
if so.LocalMounts == nil {
so.LocalMounts = map[string]fsutil.FS{}
}
so.LocalMounts[name] = &fs{FS: lm, dir: dir}
return nil
}
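// createTempDockerfile writes the reader's contents to a Dockerfile inside a
// fresh temporary directory. If a SyncMultiReader is given, the bytes are
// stored back into it so that later readers replay the same data.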
func createTempDockerfile(r io.Reader, multiReader *SyncMultiReader) (string, error) {
dir, err := os.MkdirTemp("", "dockerfile")
if err != nil {
return "", err
}
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
if err != nil {
return "", err
}
defer f.Close()
if multiReader != nil {
dt, err := io.ReadAll(r)
if err != nil {
return "", err
}
multiReader.Reset(dt)
r = bytes.NewReader(dt)
}
if _, err := io.Copy(f, r); err != nil {
return "", err
}
return dir, err
}
// handle https://github.com/moby/moby/pull/10858
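// if the directory contains a lowercase "dockerfile" but no "Dockerfile",
// fall back to the lowercase name for backwards compatibility.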
func handleLowercaseDockerfile(dir, p string) string {
if filepath.Base(p) != "Dockerfile" {
return p
}
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
if err != nil {
return p
}
names, err := f.Readdirnames(-1)
if err != nil {
return p
}
foundLowerCase := false
for _, n := range names {
if n == "Dockerfile" {
return p
}
if n == "dockerfile" {
foundLowerCase = true
}
}
if foundLowerCase {
return filepath.Join(filepath.Dir(p), "dockerfile")
}
return p
}
type fs struct {
fsutil.FS
dir string
}
var _ fsutil.FS = &fs{}


@@ -1,157 +0,0 @@
package build
import (
"context"
"encoding/base64"
"encoding/json"
"io"
"strings"
"sync"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
digest "github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
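// provenancePredicate wraps BuildKit's provenance predicate with an optional
// builder ID.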
type provenancePredicate struct {
Builder *provenanceBuilder `json:"builder,omitempty"`
provenancetypes.ProvenancePredicate
}
type provenanceBuilder struct {
ID string `json:"id,omitempty"`
}
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
}
pw = progress.ResetTime(pw)
return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
res, err := fetchProvenance(ctx, c, ref, mode)
if err != nil {
return err
}
for k, v := range res {
sr.ExporterResponse[k] = v
}
return nil
})
}
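// fetchProvenance streams the build history records for ref and returns the
// base64-encoded SLSA provenance payloads, keyed per platform when the build
// produced multiple results.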
func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
Ref: ref,
EarlyExit: true,
})
if err != nil {
return nil, err
}
var mu sync.Mutex
eg, ctx := errgroup.WithContext(ctx)
store := proxy.NewContentStore(c.ContentClient())
for {
ev, err := cl.Recv()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return nil, err
}
if ev.Record == nil {
continue
}
if ev.Record.Result != nil {
desc := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance"] = prv
mu.Unlock()
return nil
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
platform := platform
desc := lookupProvenance(res)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance/"+platform] = prv
mu.Unlock()
return nil
})
}
}
}
return out, eg.Wait()
}
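// lookupProvenance returns the descriptor of the in-toto attestation carrying
// a SLSA provenance predicate, or nil if the result has none.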
func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: digest.Digest(a.Digest),
Size: a.Size,
MediaType: a.MediaType,
Annotations: a.Annotations,
}
}
}
return nil
}
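// encodeProvenance unmarshals a provenance blob, strips the build config and
// metadata in minimal mode, and re-encodes the result as base64 JSON.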
func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
var prv provenancePredicate
if err := json.Unmarshal(dt, &prv); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if prv.Builder != nil && prv.Builder.ID == "" {
// reset builder if id is empty
prv.Builder = nil
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
prv.BuildConfig = nil
prv.Metadata = nil
}
dtprv, err := json.Marshal(prv)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}
return base64.StdEncoding.EncodeToString(dtprv), nil
}


@@ -1,164 +0,0 @@
package build
import (
"bufio"
"bytes"
"io"
"sync"
)
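// SyncMultiReader multiplexes one source reader to multiple consumers in
// lockstep: bytes are buffered until every open reader has consumed them, so
// each reader observes an identical stream.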
type SyncMultiReader struct {
source *bufio.Reader
buffer []byte
static []byte
mu sync.Mutex
cond *sync.Cond
readers []*syncReader
err error
offset int
}
type syncReader struct {
mr *SyncMultiReader
offset int
closed bool
}
func NewSyncMultiReader(source io.Reader) *SyncMultiReader {
mr := &SyncMultiReader{
source: bufio.NewReader(source),
buffer: make([]byte, 0, 32*1024),
}
mr.cond = sync.NewCond(&mr.mu)
return mr
}
func (mr *SyncMultiReader) Peek(n int) ([]byte, error) {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return mr.static[:min(n, len(mr.static))], nil
}
return mr.source.Peek(n)
}
func (mr *SyncMultiReader) Reset(dt []byte) {
mr.mu.Lock()
defer mr.mu.Unlock()
mr.static = dt
}
func (mr *SyncMultiReader) NewReadCloser() io.ReadCloser {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return io.NopCloser(bytes.NewReader(mr.static))
}
reader := &syncReader{
mr: mr,
}
mr.readers = append(mr.readers, reader)
return reader
}
func (sr *syncReader) Read(p []byte) (int, error) {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
return sr.read(p)
}
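// read hands out bytes from the shared buffer that this reader has not yet
// consumed; once drained, it waits for slower readers to catch up before
// refilling the buffer from the source.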
func (sr *syncReader) read(p []byte) (int, error) {
end := sr.mr.offset + len(sr.mr.buffer)
loop0:
for {
if sr.closed {
return 0, io.EOF
}
end := sr.mr.offset + len(sr.mr.buffer)
if sr.mr.err != nil && sr.offset == end {
return 0, sr.mr.err
}
start := sr.offset - sr.mr.offset
dt := sr.mr.buffer[start:]
if len(dt) > 0 {
n := copy(p, dt)
sr.offset += n
sr.mr.cond.Broadcast()
return n, nil
}
// check for readers that have not caught up
hasOpen := false
for _, r := range sr.mr.readers {
if !r.closed {
hasOpen = true
} else {
continue
}
if r.offset < end {
sr.mr.cond.Wait()
continue loop0
}
}
if !hasOpen {
return 0, io.EOF
}
break
}
last := sr.mr.offset + len(sr.mr.buffer)
// another reader has already updated the buffer
if last > end || sr.mr.err != nil {
return sr.read(p)
}
sr.mr.offset += len(sr.mr.buffer)
sr.mr.buffer = sr.mr.buffer[:cap(sr.mr.buffer)]
n, err := sr.mr.source.Read(sr.mr.buffer)
if n >= 0 {
sr.mr.buffer = sr.mr.buffer[:n]
} else {
sr.mr.buffer = sr.mr.buffer[:0]
}
sr.mr.cond.Broadcast()
if err != nil {
sr.mr.err = err
return 0, err
}
nn := copy(p, sr.mr.buffer)
sr.offset += nn
return nn, nil
}
func (sr *syncReader) Close() error {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
if sr.closed {
return nil
}
sr.closed = true
sr.mr.cond.Broadcast()
return nil
}
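// Usage sketch: fan a single input stream out to several consumers that each
// see identical bytes (dst1/dst2 are illustrative writers):
//
//	mr := NewSyncMultiReader(os.Stdin)
//	r1, r2 := mr.NewReadCloser(), mr.NewReadCloser()
//	go func() { defer r1.Close(); _, _ = io.Copy(dst1, r1) }()
//	go func() { defer r2.Close(); _, _ = io.Copy(dst2, r2) }()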


@@ -1,76 +0,0 @@
package build
import (
"bytes"
"crypto/rand"
"io"
mathrand "math/rand"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func generateRandomData(size int) []byte {
data := make([]byte, size)
rand.Read(data)
return data
}
func TestSyncMultiReaderParallel(t *testing.T) {
data := generateRandomData(1024 * 1024)
source := bytes.NewReader(data)
mr := NewSyncMultiReader(source)
var wg sync.WaitGroup
numReaders := 10
bufferSize := 4096 * 4
readers := make([]io.ReadCloser, numReaders)
for i := 0; i < numReaders; i++ {
readers[i] = mr.NewReadCloser()
}
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(readerId int) {
defer wg.Done()
reader := readers[readerId]
defer reader.Close()
totalRead := 0
buf := make([]byte, bufferSize)
for totalRead < len(data) {
// Simulate random read sizes
readSize := mathrand.Intn(bufferSize) //nolint:gosec
n, err := reader.Read(buf[:readSize])
if n > 0 {
assert.Equal(t, data[totalRead:totalRead+n], buf[:n], "Reader %d mismatch", readerId)
totalRead += n
}
if err == io.EOF {
assert.Equal(t, len(data), totalRead, "Reader %d EOF mismatch", readerId)
return
}
assert.NoError(t, err, "Reader %d error", readerId)
if mathrand.Intn(1000) == 0 { //nolint:gosec
t.Logf("Reader %d closing", readerId)
// Simulate random close
return
}
// Simulate random timing between reads
time.Sleep(time.Millisecond * time.Duration(mathrand.Intn(5))) //nolint:gosec
}
assert.Equal(t, len(data), totalRead, "Reader %d total read mismatch", readerId)
}(i)
}
wg.Wait()
}


@@ -1,495 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"encoding/json"
"io"
"sync"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/solver/result"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// NewResultHandle makes a call to client.Build, additionally returning an
// opaque ResultHandle alongside the standard response and error.
//
// This ResultHandle can be used to execute additional build steps in the same
// context in which the build occurred, which allows easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
// Create a new context to wrap the original, and cancel it when the
// caller-provided context is cancelled.
//
// We derive the context from the background context so that we can forbid
// cancellation of the build request after <-done is closed (which we do
// before returning the ResultHandle).
baseCtx := ctx
ctx, cancel := context.WithCancelCause(context.Background())
done := make(chan struct{})
go func() {
select {
case <-baseCtx.Done():
cancel(baseCtx.Err())
case <-done:
// Once done is closed, we've recorded a ResultHandle, so we
// shouldn't allow cancelling the underlying build request anymore.
}
}()
// Create a new channel to forward status messages to the original.
//
// We do this so that we can discard status messages after the main portion
// of the build is complete. This is necessary for the solve error case,
// where the original gateway is kept open until the ResultHandle is
// closed - we don't want progress messages from operations in that
// ResultHandle to display after this function exits.
//
// Additionally, callers should wait for the progress channel to be closed.
// If we keep the session open and never close the progress channel, the
// caller will likely hang.
baseCh := ch
ch = make(chan *client.SolveStatus)
go func() {
for {
s, ok := <-ch
if !ok {
return
}
select {
case <-baseCh:
// base channel is closed, discard status messages
default:
baseCh <- s
}
}
}()
defer close(baseCh)
var resp *client.SolveResponse
var respErr error
var respHandle *ResultHandle
go func() {
defer func() { cancel(errors.WithStack(context.Canceled)) }() // ensure no dangling processes
var res *gateway.Result
var err error
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var err error
res, err = buildFunc(ctx, c)
if res != nil && err == nil {
// Force evaluation of the build result (otherwise, we likely
// won't get a solve error)
def, err2 := getDefinition(ctx, res)
if err2 != nil {
return nil, err2
}
res, err = evalDefinition(ctx, c, def)
}
if err != nil {
// Scenario 1: we failed to evaluate a node somewhere in the
// build graph.
//
// In this case, we construct a ResultHandle from this
// original Build session, and return it alongside the original
// build error. We then need to keep the gateway session open
// until the caller explicitly closes the ResultHandle.
var se *errdefs.SolveError
if errors.As(err, &se) {
respHandle = &ResultHandle{
done: make(chan struct{}),
solveErr: se,
gwClient: c,
gwCtx: ctx,
}
respErr = err // return original error to preserve stacktrace
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
}
}
return res, err
}, ch)
if respHandle != nil {
return
}
if err != nil {
// Something unexpected failed during the build, we didn't succeed,
// but we also didn't make it far enough to create a ResultHandle.
respErr = err
close(done)
return
}
// Scenario 2: we successfully built the image with no errors.
//
// In this case, the original gateway session has now been closed
// since the Build has been completed. So, we need to create a new
// gateway session to populate the ResultHandle. To do this, we
// need to re-evaluate the target result, in this new session. This
// should be instantaneous since the result should be cached.
def, err := getDefinition(ctx, res)
if err != nil {
respErr = err
close(done)
return
}
// NOTE: ideally this second connection should be lazily opened
opt := opt
opt.Ref = ""
opt.Exports = nil
opt.CacheExports = nil
opt.Internal = true
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
res, err := evalDefinition(ctx, c, def)
if err != nil {
// This should probably not happen, since we've previously
// successfully evaluated the same result with no issues.
return nil, errors.Wrap(err, "inconsistent solve result")
}
respHandle = &ResultHandle{
done: make(chan struct{}),
res: res,
gwClient: c,
gwCtx: ctx,
}
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
return nil, context.Cause(ctx)
}, nil)
if respHandle != nil {
return
}
close(done)
}()
// Block until the other thread signals that it's completed the build.
select {
case <-done:
case <-baseCtx.Done():
if respErr == nil {
respErr = baseCtx.Err()
}
}
return respHandle, resp, respErr
}
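// Call pattern sketch: when a non-nil handle is returned the caller must
// release it, or the underlying gateway session stays open (names are
// illustrative):
//
//	rh, resp, err := NewResultHandle(ctx, cc, opt, "buildx", buildFunc, ch)
//	if rh != nil {
//		defer rh.Done()
//	}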
// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
st, err := ref.ToState()
if err != nil {
return nil, err
}
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
}
return def.ToPB(), nil
})
}
// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
// force evaluation of all targets in parallel
results := make(map[*pb.Definition]*gateway.Result)
resultsMu := sync.Mutex{}
eg, egCtx := errgroup.WithContext(ctx)
defs.EachRef(func(def *pb.Definition) error {
eg.Go(func() error {
res, err := c.Solve(egCtx, gateway.SolveRequest{
Evaluate: true,
Definition: def,
})
if err != nil {
return err
}
resultsMu.Lock()
results[def] = res
resultsMu.Unlock()
return nil
})
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
if res, ok := results[def]; ok {
return res.Ref, nil
}
return nil, nil
})
return res, nil
}
// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
res *gateway.Result
solveErr *errdefs.SolveError
done chan struct{}
doneOnce sync.Once
gwClient gateway.Client
gwCtx context.Context
cleanups []func()
cleanupsMu sync.Mutex
}
func (r *ResultHandle) Done() {
r.doneOnce.Do(func() {
r.cleanupsMu.Lock()
cleanups := r.cleanups
r.cleanups = nil
r.cleanupsMu.Unlock()
for _, f := range cleanups {
f()
}
close(r.done)
<-r.gwCtx.Done()
})
}
func (r *ResultHandle) registerCleanup(f func()) {
r.cleanupsMu.Lock()
r.cleanups = append(r.cleanups, f)
r.cleanupsMu.Unlock()
}
func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
_, err = buildFunc(r.gwCtx, r.gwClient)
return err
}
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(r.res, cfg)
if err != nil {
return containerCfg, err
}
containerCfg = *ccfg
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
ccfg, err := containerConfigFromError(r.solveErr, cfg)
if err != nil {
return containerCfg, errors.Wrapf(err, "no result nor error is available")
}
containerCfg = *ccfg
}
return containerCfg, nil
}
func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
processCfg := newStartRequest(stdin, stdout, stderr)
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
if err := populateProcessConfigFromResult(&processCfg, r.res, cfg); err != nil {
return processCfg, err
}
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
if err := populateProcessConfigFromError(&processCfg, r.solveErr, cfg); err != nil {
return processCfg, err
}
}
return processCfg, nil
}
func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
ps, err := exptypes.ParsePlatforms(res.Metadata)
if err != nil {
return nil, err
}
ref, ok := res.FindRef(ps.Platforms[0].ID)
if !ok {
return nil, errors.Errorf("no reference found")
}
return &gateway.NewContainerRequest{
Mounts: []gateway.Mount{
{
Dest: "/",
MountType: pb.MountType_BIND,
Ref: ref,
},
},
}, nil
}
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *controllerapi.InvokeConfig) error {
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
var img *specs.Image
if len(imgData) > 0 {
img = &specs.Image{}
if err := json.Unmarshal(imgData, img); err != nil {
return err
}
}
user := ""
if !cfg.NoUser {
user = cfg.User
} else if img != nil {
user = img.Config.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else if img != nil {
cwd = img.Config.WorkingDir
}
env := []string{}
if img != nil {
env = append(env, img.Config.Env...)
}
env = append(env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
} else if img != nil {
args = append(args, img.Config.Entrypoint...)
}
if !cfg.NoCmd {
args = append(args, cfg.Cmd...)
} else if img != nil {
args = append(args, img.Config.Cmd...)
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
}
var mounts []gateway.Mount
for i, mnt := range exec.Mounts {
rid := solveErr.Solve.MountIDs[i]
if cfg.Initial {
rid = solveErr.Solve.InputIDs[i]
}
mounts = append(mounts, gateway.Mount{
Selector: mnt.Selector,
Dest: mnt.Dest,
ResultID: rid,
Readonly: mnt.Readonly,
MountType: mnt.MountType,
CacheOpt: mnt.CacheOpt,
SecretOpt: mnt.SecretOpt,
SSHOpt: mnt.SSHOpt,
})
}
return &gateway.NewContainerRequest{
Mounts: mounts,
NetMode: exec.Network,
}, nil
}
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err
}
meta := exec.Meta
user := ""
if !cfg.NoUser {
user = cfg.User
} else {
user = meta.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else {
cwd = meta.Cwd
}
env := append(meta.Env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
}
if cfg.Cmd != nil {
args = append(args, cfg.Cmd...)
}
if len(args) == 0 {
args = meta.Args
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
if solveErr == nil {
return nil, errors.Errorf("no error is available")
}
switch op := solveErr.Solve.Op.GetOp().(type) {
case *pb.Op_Exec:
return op.Exec, nil
default:
return nil, errors.Errorf("invoke: unsupported error type")
}
// TODO: support other ops
}
func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
return gateway.StartRequest{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
}
}


@@ -2,21 +2,18 @@ package build
import (
"context"
"os"
"io/ioutil"
"path/filepath"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
)
const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB
func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
c, err := driver.Boot(ctx, ctx, d, pw)
if err != nil {
return "", err
@@ -24,7 +21,7 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
var out string
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
if err != nil {
return nil, err
@@ -46,8 +43,8 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
if err != nil {
return nil, err
}
if stat.Size > maxDockerfileSize {
return nil, errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", url, units.HumanSize(maxDockerfileSize))
if stat.Size() > 512*1024 {
return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
}
dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
@@ -56,16 +53,17 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
if err != nil {
return nil, err
}
dir, err := os.MkdirTemp("", "buildx")
dir, err := ioutil.TempDir("", "buildx")
if err != nil {
return nil, err
}
if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), dt, 0600); err != nil {
if err := ioutil.WriteFile(filepath.Join(dir, "Dockerfile"), dt, 0600); err != nil {
return nil, err
}
out = dir
return nil, nil
}, ch)
if err != nil {
return "", err
}


@@ -3,43 +3,19 @@ package build
import (
"archive/tar"
"bytes"
"context"
"net"
"os"
"strconv"
"strings"
"github.com/docker/buildx/driver"
"github.com/docker/cli/opts"
"github.com/moby/buildkit/util/gitutil"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
// archiveHeaderSize is the number of bytes in an archive header
archiveHeaderSize = 512
// mobyHostGatewayName defines a special string which users can append to
// --add-host to add an extra entry in /etc/hosts that maps
// host.docker.internal to the host IP
mobyHostGatewayName = "host-gateway"
)
// archiveHeaderSize is the number of bytes in an archive header
const archiveHeaderSize = 512
// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has a http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func IsRemoteURL(c string) bool {
if isHTTPURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
return true
}
return false
func isLocalDir(c string) bool {
st, err := os.Stat(c)
return err == nil && st.IsDir()
}
func isArchive(header []byte) bool {
@@ -62,69 +38,18 @@ func isArchive(header []byte) bool {
}
// toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.DriverHandle) (string, error) {
func toBuildkitExtraHosts(inp []string) (string, error) {
if len(inp) == 0 {
return "", nil
}
hosts := make([]string, 0, len(inp))
for _, h := range inp {
host, ip, ok := strings.Cut(h, "=")
if !ok {
host, ip, ok = strings.Cut(h, ":")
}
if !ok || host == "" || ip == "" {
parts := strings.Split(h, ":")
if len(parts) != 2 || parts[0] == "" || net.ParseIP(parts[1]) == nil {
return "", errors.Errorf("invalid host %s", h)
}
// If the IP Address is a "host-gateway", replace this value with the
// IP address provided by the worker's label.
if ip == mobyHostGatewayName {
hgip, err := nodeDriver.HostGatewayIP(ctx)
if err != nil {
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
}
ip = hgip.String()
} else {
// If the address is enclosed in square brackets, extract it (for IPv6, but
// permit it for IPv4 as well; we don't know the address family here, but it's
// unambiguous).
if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
ip = ip[1 : len(ip)-1]
}
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
}
hosts = append(hosts, host+"="+ip)
hosts = append(hosts, parts[0]+"="+parts[1])
}
return strings.Join(hosts, ","), nil
}
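// for example, []string{"myhost:192.168.0.1", "gw=host-gateway"} yields
// "myhost=192.168.0.1,gw=<gateway IP reported by the driver>".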
// toBuildkitUlimits converts ulimits from docker type=soft:hard format to buildkit's csv format
func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
if inp == nil || len(inp.GetList()) == 0 {
return "", nil
}
ulimits := make([]string, 0, len(inp.GetList()))
for _, ulimit := range inp.GetList() {
ulimits = append(ulimits, ulimit.String())
}
return strings.Join(ulimits, ","), nil
}
func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
return errors.Errorf(`%s is not supported for the %s driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at %s`, f, d.Factory().Name(), docs)
}
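// noDefaultLoad reports whether the BUILDX_NO_DEFAULT_LOAD environment
// variable is set to a truthy value.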
func noDefaultLoad() bool {
v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
if !ok {
return false
}
b, err := strconv.ParseBool(v)
if err != nil {
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
}
return b
}


@@ -1,148 +0,0 @@
package build
import (
"context"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestToBuildkitExtraHosts(t *testing.T) {
tests := []struct {
doc string
input []string
expectedOut string // Expect output==input if not set.
expectedErr string // Expect success if not set.
}{
{
doc: "IPv4, colon sep",
input: []string{`myhost:192.168.0.1`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "IPv4, eq sep",
input: []string{`myhost=192.168.0.1`},
},
{
doc: "Weird but permitted, IPv4 with brackets",
input: []string{`myhost=[192.168.0.1]`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "Host and domain",
input: []string{`host.and.domain.invalid:10.0.2.1`},
expectedOut: `host.and.domain.invalid=10.0.2.1`,
},
{
doc: "IPv6, colon sep",
input: []string{`anipv6host:2003:ab34:e::1`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, colon sep, brackets",
input: []string{`anipv6host:[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, eq sep, brackets",
input: []string{`anipv6host=[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6 localhost, colon sep",
input: []string{`ipv6local:::1`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, eq sep",
input: []string{`ipv6local=::1`},
},
{
doc: "IPv6 localhost, eq sep, brackets",
input: []string{`ipv6local=[::1]`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, non-canonical, colon sep",
input: []string{`ipv6local:0:0:0:0:0:0:0:1`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "Bad address, colon sep",
input: []string{`myhost:192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "Bad address, eq sep",
input: []string{`myhost=192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "No sep",
input: []string{`thathost-nosemicolon10.0.0.1`},
expectedErr: `bad format for add-host: "thathost-nosemicolon10.0.0.1"`,
},
{
doc: "Bad IPv6",
input: []string{`anipv6host:::::1`},
expectedErr: `invalid IP address in add-host: "::::1"`,
},
{
doc: "Bad IPv6, trailing colons",
input: []string{`ipv6local:::0::`},
expectedErr: `invalid IP address in add-host: "::0::"`,
},
{
doc: "Bad IPv6, missing close bracket",
input: []string{`ipv6addr=[::1`},
expectedErr: `invalid IP address in add-host: "[::1"`,
},
{
doc: "Bad IPv6, missing open bracket",
input: []string{`ipv6addr=::1]`},
expectedErr: `invalid IP address in add-host: "::1]"`,
},
{
doc: "Missing address, colon sep",
input: []string{`myhost.invalid:`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "Missing address, eq sep",
input: []string{`myhost.invalid=`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "No input",
input: []string{``},
expectedErr: `bad format for add-host: ""`,
},
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}
t.Run(tc.doc, func(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.NoError(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)
}
})
}
}


@@ -1,694 +0,0 @@
package builder
import (
"context"
"encoding/json"
"net/url"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"github.com/tonistiigi/go-csvvalue"
"golang.org/x/sync/errgroup"
)
// Builder represents an active builder object
type Builder struct {
*store.NodeGroup
driverFactory driverFactory
nodes []Node
opts builderOpts
err error
}
type builderOpts struct {
dockerCli command.Cli
name string
txn *store.Txn
contextPathHash string
validate bool
}
// Option provides a variadic option for configuring the builder.
type Option func(b *Builder)
// WithName sets builder name.
func WithName(name string) Option {
return func(b *Builder) {
b.opts.name = name
}
}
// WithStore sets a store instance used at init.
func WithStore(txn *store.Txn) Option {
return func(b *Builder) {
b.opts.txn = txn
}
}
// WithContextPathHash is used for determining pods in k8s driver instance.
func WithContextPathHash(contextPathHash string) Option {
return func(b *Builder) {
b.opts.contextPathHash = contextPathHash
}
}
// WithSkippedValidation skips builder context validation.
func WithSkippedValidation() Option {
return func(b *Builder) {
b.opts.validate = false
}
}
// New initializes a new builder client
func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
b := &Builder{
opts: builderOpts{
dockerCli: dockerCli,
validate: true,
},
}
for _, opt := range opts {
opt(b)
}
if b.opts.txn == nil {
// if store instance is nil we create a short-lived one using the
// default store and ensure we release it on completion
var release func()
b.opts.txn, release, err = storeutil.GetStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
}
if b.opts.name != "" {
if b.NodeGroup, err = storeutil.GetNodeGroup(b.opts.txn, dockerCli, b.opts.name); err != nil {
return nil, err
}
} else {
if b.NodeGroup, err = storeutil.GetCurrentInstance(b.opts.txn, dockerCli); err != nil {
return nil, err
}
}
if b.opts.validate {
if err = b.Validate(); err != nil {
return nil, err
}
}
return b, nil
}
// Validate validates builder context
func (b *Builder) Validate() error {
if b.NodeGroup != nil && b.NodeGroup.DockerContext {
list, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return err
}
currentContext := b.opts.dockerCli.CurrentContext()
for _, l := range list {
if l.Name == b.Name && l.Name != currentContext {
return errors.Errorf("use `docker --context=%s buildx` to switch to context %q", l.Name, l.Name)
}
}
}
return nil
}
// ContextName returns builder context name if available.
func (b *Builder) ContextName() string {
ctxbuilders, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return ""
}
for _, cb := range ctxbuilders {
if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
return cb.Name
}
}
return ""
}
// ImageOpt returns registry auth configuration
func (b *Builder) ImageOpt() (imagetools.Opt, error) {
return storeutil.GetImageConfig(b.opts.dockerCli, b.NodeGroup)
}
// Boot bootstraps a builder
func (b *Builder) Boot(ctx context.Context) (bool, error) {
toBoot := make([]int, 0, len(b.nodes))
for idx, d := range b.nodes {
if d.Err != nil || d.Driver == nil || d.DriverInfo == nil {
continue
}
if d.DriverInfo.Status != driver.Running {
toBoot = append(toBoot, idx)
}
}
if len(toBoot) == 0 {
return false, nil
}
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.AutoMode)
if err != nil {
return false, err
}
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
errCh := make(chan error, len(toBoot))
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
pw := progress.WithPrefix(printer, b.NodeGroup.Nodes[idx].Name, len(toBoot) > 1)
_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
if err != nil {
b.nodes[idx].Err = err
errCh <- err
}
return nil
})
}(idx)
}
err = eg.Wait()
close(errCh)
err1 := printer.Wait()
if err == nil {
err = err1
}
if err == nil && len(errCh) == len(toBoot) {
return false, <-errCh
}
return true, err
}
// Inactive checks if all nodes are inactive for this builder.
func (b *Builder) Inactive() bool {
for _, d := range b.nodes {
if d.DriverInfo != nil && d.DriverInfo.Status == driver.Running {
return false
}
}
return true
}
// Err returns error if any.
func (b *Builder) Err() error {
return b.err
}
type driverFactory struct {
driver.Factory
once sync.Once
}
// Factory returns the driver factory.
func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_ driver.Factory, err error) {
b.driverFactory.once.Do(func() {
if b.Driver != "" {
b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
if err != nil {
return
}
} else {
// empty driver means nodegroup was implicitly created as a default
// driver for a docker context and allows falling back to a
// docker-container driver for older daemons that don't support
// buildkit (< 18.06).
ep := b.NodeGroup.Nodes[0].Endpoint
var dockerapi *dockerutil.ClientAPI
dockerapi, err = dockerutil.NewClientAPI(b.opts.dockerCli, b.NodeGroup.Nodes[0].Endpoint)
if err != nil {
return
}
// checking that the endpoint is healthy is needed to determine the driver type.
// if this fails, driver selection cannot continue.
if _, err = dockerapi.Ping(ctx); err != nil {
return
}
b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false, dialMeta)
if err != nil {
return
}
b.Driver = b.driverFactory.Factory.Name()
}
})
return b.driverFactory.Factory, err
}
func (b *Builder) MarshalJSON() ([]byte, error) {
var berr string
if b.err != nil {
berr = strings.TrimSpace(b.err.Error())
}
return json.Marshal(struct {
Name string
Driver string
LastActivity time.Time `json:",omitempty"`
Dynamic bool
Nodes []Node
Err string `json:",omitempty"`
}{
Name: b.Name,
Driver: b.Driver,
LastActivity: b.LastActivity,
Dynamic: b.Dynamic,
Nodes: b.nodes,
Err: berr,
})
}
// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
storeng, err := txn.List()
if err != nil {
return nil, err
}
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
builders := make([]*Builder, len(storeng), len(storeng)+len(contexts))
seen := make(map[string]struct{})
for i, ng := range storeng {
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
builders[i] = b
seen[b.NodeGroup.Name] = struct{}{}
}
for _, c := range contexts {
// if a context has the same name as an instance from the store, do not
// add it to the builders list. An instance from the store takes
// precedence over context builders.
if _, ok := seen[c.Name]; ok {
continue
}
b, err := New(dockerCli,
WithName(c.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
builders = append(builders, b)
}
return builders, nil
}
type CreateOpts struct {
Name string
Driver string
NodeName string
Platforms []string
BuildkitdFlags string
BuildkitdConfigFile string
DriverOpts []string
Use bool
Endpoint string
Append bool
}
func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts CreateOpts) (*Builder, error) {
var err error
if opts.Name == "default" {
return nil, errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
} else if opts.Append && opts.Name == "" {
return nil, errors.Errorf("append requires a builder name")
}
name := opts.Name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return nil, err
}
}
if !opts.Append {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, c := range contexts {
if c.Name == name {
return nil, errors.Errorf("instance name %q already exists as context builder", name)
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if opts.Append && opts.Name != "" {
return nil, errors.Errorf("failed to find instance %q for append", opts.Name)
}
} else {
return nil, err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := opts.Driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if opts.Endpoint == "" && buildkitHost != "" {
driverName = "remote"
} else {
f, err := driver.GetDefaultFactory(ctx, opts.Endpoint, dockerCli.Client(), true, nil)
if err != nil {
return nil, err
}
if f == nil {
return nil, errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if opts.NodeName == "" && !opts.Append {
return nil, errors.Errorf("existing instance for %q but no append mode, specify the node name to make changes for existing instances", name)
}
if driverName != ng.Driver {
return nil, errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return nil, err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
driverOpts, err := csvToMap(opts.DriverOpts)
if err != nil {
return nil, err
}
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.NewConfig(dockerCli).BuildKitConfigFile(); ok {
buildkitdConfigFile = f
}
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts, buildkitdConfigFile)
if err != nil {
return nil, err
}
var ep string
var setEp bool
switch {
case driverName == "kubernetes":
if opts.Endpoint != "" {
return nil, errors.Errorf("kubernetes driver does not support endpoint args %q", opts.Endpoint)
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := opts.NodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return nil, err
}
}
// name the endpoint so that append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if opts.Endpoint != "" {
ep = opts.Endpoint
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return nil, errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return nil, err
}
setEp = true
case opts.Endpoint != "":
ep, err = validateEndpoint(dockerCli, opts.Endpoint)
if err != nil {
return nil, err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return nil, errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with context set to <context-name>")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
setEp = false
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
if err := txn.Save(ng); err != nil {
return nil, err
}
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
cancelCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
return nil, err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
return nil, errors.Errorf("could not rollback to previous state: %s", err2)
}
return nil, err
}
}
if opts.Use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return nil, err
}
}
return b, nil
}
type LeaveOpts struct {
Name string
NodeName string
}
func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts LeaveOpts) error {
if opts.Name == "" {
return errors.Errorf("leave requires instance name")
}
if opts.NodeName == "" {
return errors.Errorf("leave requires node name")
}
ng, err := txn.NodeGroupByName(opts.Name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
return errors.Errorf("failed to find instance %q for leave", opts.Name)
}
return err
}
if err := ng.Leave(opts.NodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, opts.NodeName); err != nil {
return err
}
return txn.Save(ng)
}
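// csvToMap parses CSV-formatted key=value strings into a map, e.g.
// "namespace=default,replicas=1" becomes {"namespace": "default", "replicas": "1"}.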
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
fields, err := csvvalue.Fields(s, nil)
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
// parseBuildkitdFlags parses extra buildkitd flags and injects the
// network.host entitlement for container-based drivers when needed.
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string, buildkitdConfigFile string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
return nil, errors.Wrap(err, "failed to parse buildkit flags")
}
}
var allowInsecureEntitlements []string
flags := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
flags.Usage = func() {}
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
var hasNetworkHostEntitlement bool
for _, e := range allowInsecureEntitlements {
if e == "network.host" {
hasNetworkHostEntitlement = true
break
}
}
var hasNetworkHostEntitlementInConf bool
if buildkitdConfigFile != "" {
btoml, err := confutil.LoadConfigTree(buildkitdConfigFile)
if err != nil {
return nil, err
} else if btoml != nil {
if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
for _, e := range ies.([]string) {
if e == "network.host" {
hasNetworkHostEntitlementInConf = true
break
}
}
}
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && !hasNetworkHostEntitlementInConf && (driver == "kubernetes" || driver == "docker-container") {
// set network.host entitlement if user does not provide any as
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
}
return res, nil
}


@@ -1,173 +0,0 @@
package builder
import (
"os"
"path"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, "key=foo,value=bar;key=foo2,value=bar2", r["tolerations"])
require.Contains(t, r, "replicas")
require.Equal(t, "1", r["replicas"])
require.Contains(t, r, "namespace")
require.Equal(t, "default", r["namespace"])
}
func TestParseBuildkitdFlags(t *testing.T) {
buildkitdConf := `
# debug enables additional debug logging
debug = true
# insecure-entitlements allows insecure entitlements, disabled by default.
insecure-entitlements = [ "network.host", "security.insecure" ]
[log]
# log formatter: json or text
format = "text"
`
dirConf := t.TempDir()
buildkitdConfPath := path.Join(dirConf, "buildkitd-conf.toml")
require.NoError(t, os.WriteFile(buildkitdConfPath, []byte(buildkitdConf), 0644))
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
buildkitdConfigFile string
expected []string
wantErr bool
}{
{
"docker-container no flags",
"",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"kubernetes no flags",
"",
"kubernetes",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"remote no flags",
"",
"remote",
nil,
"",
nil,
false,
},
{
"docker-container with insecure flag",
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with insecure and host flag",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with network host opt",
"",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with host flag and network host opt",
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with insecure, host flag and network host opt",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with buildkitd conf setting network.host entitlement",
"",
"docker-container",
nil,
buildkitdConfPath,
nil,
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
"",
nil,
true,
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.expected, flags)
})
}
}


@@ -1,278 +0,0 @@
package builder
import (
"context"
"encoding/json"
"sort"
"strings"
"github.com/containerd/platforms"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/platformutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc/codes"
)
type Node struct {
store.Node
Builder string
Driver *driver.DriverHandle
DriverInfo *driver.Info
ImageOpt imagetools.Opt
ProxyConfig map[string]string
Version string
Err error
// worker settings
IDs []string
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
}
// Nodes returns nodes for this builder.
func (b *Builder) Nodes() []Node {
return b.nodes
}
type LoadNodesOption func(*loadNodesOptions)
type loadNodesOptions struct {
data bool
dialMeta map[string][]string
clientOpt []client.ClientOpt
}
func WithData() LoadNodesOption {
return func(o *loadNodesOptions) {
o.data = true
}
}
func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
return func(o *loadNodesOptions) {
o.dialMeta = dialMeta
}
}
func WithClientOpt(clientOpt ...client.ClientOpt) LoadNodesOption {
return func(o *loadNodesOptions) {
o.clientOpt = clientOpt
}
}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
lno := loadNodesOptions{
data: false,
}
for _, opt := range opts {
opt(&lno)
}
eg, _ := errgroup.WithContext(ctx)
b.nodes = make([]Node, len(b.NodeGroup.Nodes))
defer func() {
if b.err == nil && err != nil {
b.err = err
}
}()
factory, err := b.Factory(ctx, lno.dialMeta)
if err != nil {
return nil, err
}
imageopt, err := b.ImageOpt()
if err != nil {
return nil, err
}
for i, n := range b.NodeGroup.Nodes {
func(i int, n store.Node) {
eg.Go(func() error {
node := Node{
Node: n,
ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
Platforms: n.Platforms,
Builder: b.Name,
}
defer func() {
b.nodes[i] = node
}()
dockerapi, err := dockerutil.NewClientAPI(b.opts.dockerCli, n.Endpoint)
if err != nil {
node.Err = err
return nil
}
d, err := driver.GetDriver(ctx, factory, driver.InitConfig{
Name: driver.BuilderName(n.Name),
EndpointAddr: n.Endpoint,
DockerAPI: dockerapi,
ContextStore: b.opts.dockerCli.ContextStore(),
BuildkitdFlags: n.BuildkitdFlags,
Files: n.Files,
DriverOpts: n.DriverOpts,
Auth: imageopt.Auth,
Platforms: n.Platforms,
ContextPathHash: b.opts.contextPathHash,
DialMeta: lno.dialMeta,
})
if err != nil {
node.Err = err
return nil
}
node.Driver = d
node.ImageOpt = imageopt
if lno.data {
if err := node.loadData(ctx, lno.clientOpt...); err != nil {
node.Err = err
}
}
return nil
})
}(i, n)
}
if err := eg.Wait(); err != nil {
return nil, err
}
// TODO: This should be done in the routine loading driver data
if lno.data {
kubernetesDriverCount := 0
for _, d := range b.nodes {
if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
kubernetesDriverCount++
}
}
isAllKubernetesDrivers := len(b.nodes) == kubernetesDriverCount
if isAllKubernetesDrivers {
var nodes []Node
var dynamicNodes []store.Node
for _, di := range b.nodes {
// dynamic nodes are used by the Kubernetes driver:
// Kubernetes pods are dynamically mapped to BuildKit nodes.
if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
diClone := di
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl
}
nodes = append(nodes, diClone)
}
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
}
}
// replace rather than append (drop the static nodes from the store)
b.NodeGroup.Nodes = dynamicNodes
b.nodes = nodes
b.NodeGroup.Dynamic = true
}
}
return b.nodes, nil
}
func (n *Node) MarshalJSON() ([]byte, error) {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
var nerr string
if n.Err != nil {
status = "error"
nerr = strings.TrimSpace(n.Err.Error())
}
var pp []string
for _, p := range n.Platforms {
pp = append(pp, platforms.Format(p))
}
return json.Marshal(struct {
Name string
Endpoint string
BuildkitdFlags []string `json:"Flags,omitempty"`
DriverOpts map[string]string `json:",omitempty"`
Files map[string][]byte `json:",omitempty"`
Status string `json:",omitempty"`
ProxyConfig map[string]string `json:",omitempty"`
Version string `json:",omitempty"`
Err string `json:",omitempty"`
IDs []string `json:",omitempty"`
Platforms []string `json:",omitempty"`
GCPolicy []client.PruneInfo `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
}{
Name: n.Name,
Endpoint: n.Endpoint,
BuildkitdFlags: n.BuildkitdFlags,
DriverOpts: n.DriverOpts,
Files: n.Files,
Status: status,
ProxyConfig: n.ProxyConfig,
Version: n.Version,
Err: nerr,
IDs: n.IDs,
Platforms: pp,
GCPolicy: n.GCPolicy,
Labels: n.Labels,
})
}
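// Illustrative shape of the marshaled output for a healthy node (values are
// made up; omitempty drops unset fields, and BuildkitdFlags is emitted under
// the "Flags" key):
//
//	{
//		"Name": "mybuilder0",
//		"Endpoint": "unix:///var/run/docker.sock",
//		"Status": "running",
//		"Version": "v0.13.1",
//		"Platforms": ["linux/amd64", "linux/arm64"]
//	}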
func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) error {
if n.Driver == nil {
return nil
}
info, err := n.Driver.Info(ctx)
if err != nil {
return err
}
n.DriverInfo = info
if n.DriverInfo.Status == driver.Running {
driverClient, err := n.Driver.Client(ctx, clientOpt...)
if err != nil {
return err
}
workers, err := driverClient.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
for idx, w := range workers {
n.IDs = append(n.IDs, w.ID)
n.Platforms = append(n.Platforms, w.Platforms...)
if idx == 0 {
n.GCPolicy = w.GCPolicy
n.Labels = w.Labels
}
}
sort.Strings(n.IDs)
n.Platforms = platformutil.Dedupe(n.Platforms)
inf, err := driverClient.Info(ctx)
if err != nil {
if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
n.Version, err = n.Driver.Version(ctx)
if err != nil {
return errors.Wrap(err, "getting version")
}
}
} else {
n.Version = inf.BuildkitVersion.Version
}
}
return nil
}


@@ -1,75 +0,0 @@
package main
import (
"context"
"os"
"runtime"
"runtime/pprof"
"github.com/moby/buildkit/util/bklog"
"github.com/sirupsen/logrus"
)
func setupDebugProfiles(ctx context.Context) (stop func()) {
var stopFuncs []func()
if fn := setupCPUProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
if fn := setupHeapProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
return func() {
for _, fn := range stopFuncs {
fn()
}
}
}
func setupCPUProfile(ctx context.Context) (stop func()) {
if cpuProfile := os.Getenv("BUILDX_CPU_PROFILE"); cpuProfile != "" {
f, err := os.Create(cpuProfile)
if err != nil {
bklog.G(ctx).Warn("could not create cpu profile", logrus.WithError(err))
return nil
}
if err := pprof.StartCPUProfile(f); err != nil {
bklog.G(ctx).Warn("could not start cpu profile", logrus.WithError(err))
_ = f.Close()
return nil
}
return func() {
pprof.StopCPUProfile()
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for cpu profile", logrus.WithError(err))
}
}
}
return nil
}
func setupHeapProfile(ctx context.Context) (stop func()) {
if heapProfile := os.Getenv("BUILDX_MEM_PROFILE"); heapProfile != "" {
// Memory profile is only created on stop.
return func() {
f, err := os.Create(heapProfile)
if err != nil {
bklog.G(ctx).Warn("could not create memory profile", logrus.WithError(err))
return
}
// get up-to-date statistics
runtime.GC()
if err := pprof.WriteHeapProfile(f); err != nil {
bklog.G(ctx).Warn("could not write memory profile", logrus.WithError(err))
}
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for memory profile", logrus.WithError(err))
}
}
}
return nil
}
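// Usage sketch (paths illustrative): both profiles are driven by the
// environment variables read above. The CPU profile runs for the whole
// invocation, while the heap profile is written only when the returned stop
// function runs.
//
//	os.Setenv("BUILDX_CPU_PROFILE", "/tmp/buildx-cpu.pprof")
//	os.Setenv("BUILDX_MEM_PROFILE", "/tmp/buildx-mem.pprof")
//	stop := setupDebugProfiles(context.Background())
//	defer stop() // stops the CPU profile and writes the heap profile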


@@ -1,13 +1,11 @@
package main
import (
"context"
"fmt"
"os"
"github.com/containerd/containerd/pkg/seed"
"github.com/docker/buildx/commands"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/manager"
@@ -17,116 +15,91 @@ import (
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/stack"
"github.com/pkg/errors"
"github.com/moby/buildkit/util/tracing/detect"
"go.opentelemetry.io/otel"
//nolint:staticcheck // vendored dependencies may still use this
"github.com/containerd/containerd/pkg/seed"
_ "github.com/moby/buildkit/util/tracing/detect/delegated"
_ "github.com/moby/buildkit/util/tracing/env"
// FIXME: "k8s.io/client-go/plugin/pkg/client/auth/azure" is excluded because of compilation error
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
_ "github.com/docker/buildx/driver/docker"
_ "github.com/docker/buildx/driver/docker-container"
_ "github.com/docker/buildx/driver/kubernetes"
_ "github.com/docker/buildx/driver/remote"
// Use custom grpc codec to utilize vtprotobuf
_ "github.com/moby/buildkit/util/grpcutil/encoding/proto"
)
var experimental string
func init() {
//nolint:staticcheck
seed.WithTimeAndRand()
stack.SetVersionInfo(version.Version, version.Revision)
}
func runStandalone(cmd *command.DockerCli) error {
if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
return err
}
defer flushMetrics(cmd)
rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
return rootCmd.Execute()
}
// flushMetrics will manually flush metrics from the configured
// meter provider. This is needed when running in standalone mode
// because the meter provider is initialized by the cli library,
// but the mechanism for forcing it to report is not presently
// exposed and not invoked when run in standalone mode.
// There are plans to fix that in the next release, but this is
// needed temporarily until the API for this is more thorough.
func flushMetrics(cmd *command.DockerCli) {
if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
if err := mp.ForceFlush(context.Background()); err != nil {
otel.Handle(err)
}
}
}
func runPlugin(cmd *command.DockerCli) error {
rootCmd := commands.NewRootCmd("buildx", true, cmd)
return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
SchemaVersion: "0.1.0",
Vendor: "Docker Inc.",
Version: version.Version,
})
}
func run(cmd *command.DockerCli) error {
stopProfiles := setupDebugProfiles(context.TODO())
defer stopProfiles()
if plugin.RunningStandalone() {
return runStandalone(cmd)
}
return runPlugin(cmd)
detect.ServiceName = "buildx"
// do not log tracing errors to stdio
otel.SetErrorHandler(skipErrors{})
}
func main() {
cmd, err := command.NewDockerCli()
if os.Getenv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND") == "" {
if len(os.Args) < 2 || os.Args[1] != manager.MetadataSubcommandName {
dockerCli, err := command.NewDockerCli()
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
opts := cliflags.NewClientOptions()
dockerCli.Initialize(opts)
rootCmd := commands.NewRootCmd(os.Args[0], false, dockerCli)
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
os.Exit(0)
}
}
dockerCli, err := command.NewDockerCli()
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if err = run(cmd); err == nil {
return
p := commands.NewRootCmd("buildx", true, dockerCli)
meta := manager.Metadata{
SchemaVersion: "0.1.0",
Vendor: "Docker Inc.",
Version: version.Version,
Experimental: experimental != "",
}
// Check the error from the run function above.
if sterr, ok := err.(cli.StatusError); ok {
if sterr.Status != "" {
fmt.Fprintln(cmd.Err(), sterr.Status)
if err := plugin.RunPlugin(dockerCli, p, meta); err != nil {
if sterr, ok := err.(cli.StatusError); ok {
if sterr.Status != "" {
fmt.Fprintln(dockerCli.Err(), sterr.Status)
}
// StatusError should only be used for errors, and all errors should
// have a non-zero exit status, so never exit with 0
if sterr.StatusCode == 0 {
os.Exit(1)
}
os.Exit(sterr.StatusCode)
}
// StatusError should only be used for errors, and all errors should
// have a non-zero exit status, so never exit with 0
if sterr.StatusCode == 0 {
os.Exit(1)
for _, s := range errdefs.Sources(err) {
s.Print(dockerCli.Err())
}
os.Exit(sterr.StatusCode)
}
for _, s := range errdefs.Sources(err) {
s.Print(cmd.Err())
}
if debug.IsEnabled() {
fmt.Fprintf(cmd.Err(), "ERROR: %+v", stack.Formatter(err))
} else {
fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
}
var ebr *desktop.ErrorWithBuildRef
if errors.As(err, &ebr) {
ebr.Print(cmd.Err())
} else {
var be *controllererrors.BuildError
if errors.As(err, &be) {
be.PrintBuildDetails(cmd.Err())
if debug.IsEnabled() {
fmt.Fprintf(dockerCli.Err(), "error: %+v", stack.Formatter(err))
} else {
fmt.Fprintf(dockerCli.Err(), "error: %v\n", err)
}
}
os.Exit(1)
os.Exit(1)
}
}
type skipErrors struct{}
func (skipErrors) Handle(err error) {}


@@ -1,18 +0,0 @@
package main
import (
"github.com/moby/buildkit/util/tracing/detect"
"go.opentelemetry.io/otel"
_ "github.com/moby/buildkit/util/tracing/env"
)
func init() {
detect.ServiceName = "buildx"
// do not log tracing errors to stdio
otel.SetErrorHandler(skipErrors{})
}
type skipErrors struct{}
func (skipErrors) Handle(err error) {}


@@ -1,4 +1 @@
comment: false
ignore:
- "**/*.pb.go"


@@ -1,65 +1,31 @@
package commands
import (
"bytes"
"cmp"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"os"
"slices"
"sort"
"strings"
"sync"
"text/tabwriter"
"github.com/containerd/console"
"github.com/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/docker/docker/pkg/ioutils"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"go.opentelemetry.io/otel/attribute"
)
type bakeOptions struct {
files []string
overrides []string
printOnly bool
listTargets bool
listVars bool
sbom string
provenance string
allow []string
builder string
metadataFile string
exportPush bool
exportLoad bool
callFunc string
files []string
printOnly bool
overrides []string
commonOptions
}
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
mp := dockerCli.MeterProvider()
func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
ctx := appcontext.Context()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
if err != nil {
@@ -69,351 +35,125 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
end(err)
}()
url, cmdContext, targets := bakeArgs(targets)
var url string
cmdContext := "cwd://"
if len(targets) > 0 {
if bake.IsRemoteURL(targets[0]) {
url = targets[0]
targets = targets[1:]
if len(targets) > 0 {
if bake.IsRemoteURL(targets[0]) {
cmdContext = targets[0]
targets = targets[1:]
}
}
}
}
if len(targets) == 0 {
targets = []string{"default"}
}
callFunc, err := buildflags.ParseCallFunc(in.callFunc)
if err != nil {
return err
}
overrides := in.overrides
if in.exportPush {
overrides = append(overrides, "*.push=true")
if in.exportLoad {
return errors.Errorf("push and load may not be set together at the moment")
}
overrides = append(overrides, "*.output=type=registry")
} else if in.exportLoad {
overrides = append(overrides, "*.output=type=docker")
}
if in.exportLoad {
overrides = append(overrides, "*.load=true")
if in.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
}
if callFunc != nil {
overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
}
if cFlags.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
}
if cFlags.pull != nil {
overrides = append(overrides, fmt.Sprintf("*.pull=%t", *cFlags.pull))
}
if in.sbom != "" {
overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
}
if in.provenance != "" {
overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("provenance", in.provenance)))
if in.pull != nil {
overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
}
contextPathHash, _ := os.Getwd()
ent, err := bake.ParseEntitlements(in.allow)
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return errors.Wrapf(err, "failed to get current working directory")
}
// filesystem access under the current working directory is allowed by default
ent.FSRead = append(ent.FSRead, wd)
ent.FSWrite = append(ent.FSWrite, wd)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
printer := progress.NewPrinter(ctx2, os.Stderr, in.progress)
ctx2, cancel := context.WithCancelCause(context.TODO())
defer cancel(errors.WithStack(context.Canceled))
var nodes []builder.Node
var progressConsoleDesc, progressTextDesc string
// instance only needed for reading remote bake files or building
var driverType string
if url != "" || !(in.printOnly || in.listTargets || in.listVars) {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err = b.LoadNodes(ctx)
if err != nil {
return err
}
progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
driverType = b.Driver
}
var term bool
if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
term = true
}
attributes := bakeMetricAttributes(dockerCli, driverType, url, cmdContext, targets, &in)
progressMode := progressui.DisplayMode(cFlags.progress)
var printer *progress.Printer
makePrinter := func() error {
var err error
printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
progress.WithMetrics(mp, attributes),
progress.WithOnClose(func() {
printWarnings(os.Stderr, printer.Warnings(), progressMode)
}),
)
return err
}
if err := makePrinter(); err != nil {
return err
}
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
if err != nil {
return err
}
if len(files) == 0 {
return errors.New("couldn't find a bake definition")
}
defaults := map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
"BAKE_CMD_CONTEXT": cmdContext,
"BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
}
if in.listTargets || in.listVars {
cfg, pm, err := bake.ParseFiles(files, defaults)
if err != nil {
return err
}
if err = printer.Wait(); err != nil {
return err
}
if in.listTargets {
return printTargetList(dockerCli.Out(), cfg)
} else if in.listVars {
return printVars(dockerCli.Out(), pm.AllVariables)
}
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults)
if err != nil {
return err
}
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
// TODO: extract env var parsing to a method easily usable by library consumers
for _, t := range tgts {
if _, ok := t.Args["SOURCE_DATE_EPOCH"]; ok {
continue
defer func() {
if printer != nil {
err1 := printer.Wait()
if err == nil {
err = err1
}
if t.Args == nil {
t.Args = map[string]*string{}
}
t.Args["SOURCE_DATE_EPOCH"] = &v
}
}()
dis, err := getInstanceOrDefault(ctx, dockerCli, in.builder, contextPathHash)
if err != nil {
return err
}
var files []bake.File
var inp *bake.Input
if url != "" {
files, inp, err = bake.ReadRemoteFiles(ctx, dis, url, in.files, printer)
} else {
files, err = bake.ReadLocalFiles(in.files)
}
if err != nil {
return err
}
m, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
"BAKE_CMD_CONTEXT": cmdContext,
})
if err != nil {
return err
}
// this function can update the target context string from the input, so call it before the printOnly check
bo, err := bake.TargetsToBuildOpt(tgts, inp)
bo, err := bake.TargetsToBuildOpt(m, inp)
if err != nil {
return err
}
def := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]*bake.Target `json:"target"`
}{
Group: grps,
Target: tgts,
}
if in.printOnly {
if err = printer.Wait(); err != nil {
return err
}
dtdef, err := json.MarshalIndent(def, "", " ")
dt, err := json.MarshalIndent(map[string]map[string]*bake.Target{"target": m}, "", " ")
if err != nil {
return err
}
_, err = fmt.Fprintln(dockerCli.Out(), string(dtdef))
return err
}
for _, opt := range bo {
if opt.CallFunc != nil {
cf, err := buildflags.ParseCallFunc(opt.CallFunc.Name)
if err != nil {
return err
}
opt.CallFunc.Name = cf.Name
}
}
exp, err := ent.Validate(bo)
if err != nil {
return err
}
if err := exp.Prompt(ctx, url != "", &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
return err
}
if printer.IsDone() {
// init a new printer, as the old one was stopped to show the prompt
if err := makePrinter(); err != nil {
return err
}
}
if err := saveLocalStateGroup(dockerCli, in, targets, bo, overrides, def); err != nil {
return err
}
done := timeBuildCommand(mp, attributes)
resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), printer)
if err := printer.Wait(); retErr == nil {
retErr = err
}
if retErr != nil {
err = wrapBuildError(retErr, true)
}
done(err)
if err != nil {
return err
}
if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
if len(in.metadataFile) > 0 {
dt := make(map[string]interface{})
for t, r := range resp {
dt[t] = decodeExporterResponse(r.ExporterResponse)
}
if callFunc == nil {
if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
dt["buildx.build.warnings"] = warnings
}
}
if err := writeMetadataFile(in.metadataFile, dt); err != nil {
return err
}
}
var callFormatJSON bool
jsonResults := map[string]map[string]any{}
if callFunc != nil {
callFormatJSON = callFunc.Format == "json"
}
var sep bool
var exitCode int
names := make([]string, 0, len(bo))
for name := range bo {
names = append(names, name)
}
slices.Sort(names)
for _, name := range names {
req := bo[name]
if req.CallFunc == nil {
continue
}
pf := &pb.CallFunc{
Name: req.CallFunc.Name,
Format: req.CallFunc.Format,
IgnoreStatus: req.CallFunc.IgnoreStatus,
}
if callFunc != nil {
pf.Format = callFunc.Format
pf.IgnoreStatus = callFunc.IgnoreStatus
}
var res map[string]string
if sp, ok := resp[name]; ok {
res = sp.ExporterResponse
}
if callFormatJSON {
jsonResults[name] = map[string]any{}
buf := &bytes.Buffer{}
if code, err := printResult(buf, pf, res, name, &req.Inputs); err != nil {
jsonResults[name]["error"] = err.Error()
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
m := map[string]*json.RawMessage{}
if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
for k, v := range m {
jsonResults[name][k] = v
}
} else {
jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
}
} else {
if sep {
fmt.Fprintln(dockerCli.Out())
} else {
sep = true
}
fmt.Fprintf(dockerCli.Out(), "%s\n", name)
if descr := tgts[name].Description; descr != "" {
fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
}
fmt.Fprintln(dockerCli.Out())
if code, err := printResult(dockerCli.Out(), pf, res, name, &req.Inputs); err != nil {
fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
}
}
if callFormatJSON {
out := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]map[string]any `json:"target"`
}{
Group: grps,
Target: map[string]map[string]any{},
}
for name, def := range tgts {
out.Target[name] = map[string]any{
"build": def,
}
if res, ok := jsonResults[name]; ok {
printName := bo[name].CallFunc.Name
if printName == "lint" {
printName = "check"
}
out.Target[name][printName] = res
}
}
dt, err := json.MarshalIndent(out, "", " ")
err = printer.Wait()
printer = nil
if err != nil {
return err
}
fmt.Fprintln(dockerCli.Out(), string(dt))
return nil
}
if exitCode != 0 {
os.Exit(exitCode)
resp, err := build.Build(ctx, dis, bo, dockerAPI(dockerCli), dockerCli.ConfigFile(), printer)
if err != nil {
return err
}
return nil
if len(in.metadataFile) > 0 && resp != nil {
mdata := map[string]map[string]string{}
for k, r := range resp {
mdata[k] = r.ExporterResponse
}
mdatab, err := json.MarshalIndent(mdata, "", " ")
if err != nil {
return err
}
if err := ioutils.AtomicWriteFile(in.metadataFile, mdatab, 0644); err != nil {
return err
}
}
return err
}
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options bakeOptions
var cFlags commonFlags
cmd := &cobra.Command{
Use: "bake [OPTIONS] [TARGET...]",
@@ -422,291 +162,25 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
// reset to nil to avoid overriding when the flag is unset
if !cmd.Flags().Lookup("no-cache").Changed {
cFlags.noCache = nil
options.noCache = nil
}
if !cmd.Flags().Lookup("pull").Changed {
cFlags.pull = nil
options.pull = nil
}
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in the runBake function.
return runBake(cmd.Context(), dockerCli, args, options, cFlags)
options.commonOptions.builder = rootOpts.builder
return runBake(dockerCli, args, options)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Build definition file")
flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.StringArrayVar(&options.allow, "allow", nil, "Allow build to access specified resources")
flags.StringArrayVar(&options.overrides, "set", nil, "Override target value (eg: targetpattern.key=value)")
flags.BoolVar(&options.exportPush, "push", false, "Shorthand for --set=*.output=type=registry")
flags.BoolVar(&options.exportLoad, "load", false, "Shorthand for --set=*.output=type=docker")
flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
cobrautil.MarkFlagsExperimental(flags, "list-targets")
flags.MarkHidden("list-targets")
flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
cobrautil.MarkFlagsExperimental(flags, "list-variables")
flags.MarkHidden("list-variables")
commonBuildFlags(&cFlags, flags)
commonBuildFlags(&options.commonOptions, flags)
return cmd
}
func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options, overrides []string, def any) error {
prm := confutil.MetadataProvenance()
if len(in.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
}
groupRef := identity.NewID()
refs := make([]string, 0, len(bo))
for k, b := range bo {
if b.CallFunc != nil {
continue
}
b.Ref = identity.NewID()
b.GroupRef = groupRef
b.ProvenanceResponseMode = prm
refs = append(refs, b.Ref)
bo[k] = b
}
if len(refs) == 0 {
return nil
}
l, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
dtdef, err := json.MarshalIndent(def, "", " ")
if err != nil {
return err
}
return l.SaveGroup(groupRef, localstate.StateGroup{
Definition: dtdef,
Targets: targets,
Inputs: overrides,
Refs: refs,
})
}
// bakeArgs will retrieve the remote url, command context, and targets
// from the command line arguments.
func bakeArgs(args []string) (url, cmdContext string, targets []string) {
cmdContext, targets = "cwd://", args
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
url, targets = targets[0], targets[1:]
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
cmdContext, targets = targets[0], targets[1:]
return url, cmdContext, targets
}
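// Behavior sketch (arguments illustrative): a leading remote URL becomes the
// bake definition source; if the next argument is also a remote URL it
// becomes the command context, otherwise the context stays "cwd://".
//
//	url, cmdContext, targets := bakeArgs([]string{
//		"https://github.com/example/repo.git", "app", "db",
//	})
//	// url == "https://github.com/example/repo.git"
//	// cmdContext == "cwd://"
//	// targets == []string{"app", "db"}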
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
tname := strings.TrimPrefix(v, "cwd://")
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
rnames = append(rnames, v)
anames = append(anames, v)
}
}
if url != "" {
var rfiles []bake.File
rfiles, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, rnames, pw)
if err != nil {
return nil, nil, err
}
files = append(files, rfiles...)
}
if len(lnames) > 0 || url == "" {
var lfiles []bake.File
progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
lfiles, err = bake.ReadLocalFiles(anames, stdin, sub)
}
return nil
})
if err != nil {
return nil, nil, err
}
files = append(files, lfiles...)
}
return
}
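// Resolution sketch (file names illustrative): with a remote URL set,
// "cwd://"-prefixed files are read from the local working directory while
// unprefixed names are resolved inside the remote context; without a URL,
// everything is read locally.
//
//	files, inp, err := readBakeFiles(ctx, nodes,
//		"https://github.com/example/repo.git",
//		[]string{"cwd://local.hcl", "docker-bake.hcl"},
//		dockerCli.In(), printer)
//	// local.hcl is read from disk, docker-bake.hcl from the remote repo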
func printVars(w io.Writer, vars []*hclparser.Variable) error {
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
return cmp.Compare(a.Name, b.Name)
})
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
if v.Value != nil {
value = *v.Value
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
}
return nil
}
func printTargetList(w io.Writer, cfg *bake.Config) error {
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("TARGET\tDESCRIPTION\n"))
type targetOrGroup struct {
name string
target *bake.Target
group *bake.Group
}
list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
for _, tgt := range cfg.Targets {
list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
}
for _, grp := range cfg.Groups {
list = append(list, targetOrGroup{name: grp.Name, group: grp})
}
slices.SortFunc(list, func(a, b targetOrGroup) int {
return cmp.Compare(a.name, b.name)
})
for _, tgt := range list {
if strings.HasPrefix(tgt.name, "_") {
// convention for a private target
continue
}
var descr string
if tgt.target != nil {
descr = tgt.target.Description
} else if tgt.group != nil {
descr = tgt.group.Description
if len(tgt.group.Targets) > 0 {
slices.Sort(tgt.group.Targets)
names := strings.Join(tgt.group.Targets, ", ")
if descr != "" {
descr += " (" + names + ")"
} else {
descr = names
}
}
}
fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
}
return nil
}
func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext string, targets []string, options *bakeOptions) attribute.Set {
return attribute.NewSet(
commandNameAttribute.String("bake"),
attribute.Stringer(string(commandOptionsHash), &bakeOptionsHash{
bakeOptions: options,
cfg: confutil.NewConfig(dockerCli),
url: url,
cmdContext: cmdContext,
targets: targets,
}),
driverNameAttribute.String(options.builder),
driverTypeAttribute.String(driverType),
)
}
type bakeOptionsHash struct {
*bakeOptions
cfg *confutil.Config
url string
cmdContext string
targets []string
result string
resultOnce sync.Once
}
func (o *bakeOptionsHash) String() string {
o.resultOnce.Do(func() {
url := o.url
cmdContext := o.cmdContext
if cmdContext == "cwd://" {
// Resolve the directory if the cmdContext is the current working directory.
cmdContext = osutil.GetWd()
}
// Sort the inputs for files and targets since the ordering
// doesn't matter, but avoid modifying the original slice.
files := immutableSort(o.files)
targets := immutableSort(o.targets)
joinedFiles := strings.Join(files, ",")
joinedTargets := strings.Join(targets, ",")
salt := o.cfg.TryNodeIdentifier()
h := sha256.New()
for _, s := range []string{url, cmdContext, joinedFiles, joinedTargets, salt} {
_, _ = io.WriteString(h, s)
h.Write([]byte{0})
}
o.result = hex.EncodeToString(h.Sum(nil))
})
return o.result
}
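// Design note: each field is followed by a NUL byte in the hash input so
// that field boundaries stay unambiguous ("a","bc" and "ab","c" hash
// differently), and TryNodeIdentifier salts the result so hashes are not
// directly comparable across installations.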
// immutableSort will sort the entries in s without modifying the original slice.
func immutableSort(s []string) []string {
if !sort.StringsAreSorted(s) {
cpy := make([]string, len(s))
copy(cpy, s)
sort.Strings(cpy)
return cpy
}
return s
}
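// Example (illustrative): an already-sorted slice is returned as-is without
// copying; otherwise a sorted copy is returned and the input is left
// untouched.
//
//	s := []string{"b", "a"}
//	sorted := immutableSort(s) // sorted == []string{"a", "b"}
//	// s is still []string{"b", "a"}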
type syncWriter struct {
w io.Writer
once sync.Once
wait func() error
}
func (w *syncWriter) Write(p []byte) (n int, err error) {
w.once.Do(func() {
if w.wait != nil {
err = w.wait()
}
})
if err != nil {
return 0, err
}
return w.w.Write(p)
}
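// Design note: the first Write blocks on wait (the progress printer's Wait
// in the entitlement prompt above), so the prompt is only written after
// progress output has drained and cannot interleave with it:
//
//	w := &syncWriter{w: dockerCli.Err(), wait: printer.Wait}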

File diff suppressed because it is too large


@@ -1,94 +1,194 @@
package commands
import (
"bytes"
"context"
"encoding/csv"
"fmt"
"net/url"
"os"
"strings"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type createOptions struct {
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
driverOpts []string
buildkitdFlags string
buildkitdConfigFile string
bootstrap bool
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
flags string
configFile string
driverOpts []string
// upgrade bool // perform upgrade of the driver
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
ctx := appcontext.Context()
if in.name == "default" {
return errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
}
// Ensure the file lock gets released no matter what happens.
defer release()
if in.actionLeave {
return builder.Leave(ctx, txn, dockerCli, builder.LeaveOpts{
Name: in.name,
NodeName: in.nodeName,
})
if in.name == "" {
return errors.Errorf("leave requires instance name")
}
if in.nodeName == "" {
return errors.Errorf("leave requires node name but --node not set")
}
}
var ep string
if len(args) > 0 {
ep = args[0]
if in.actionAppend {
if in.name == "" {
logrus.Warnf("append used without name, creating a new instance instead")
}
}
b, err := builder.Create(ctx, txn, dockerCli, builder.CreateOpts{
Name: in.name,
Driver: in.driver,
NodeName: in.nodeName,
Platforms: in.platform,
DriverOpts: in.driverOpts,
BuildkitdFlags: in.buildkitdFlags,
BuildkitdConfigFile: in.buildkitdConfigFile,
Use: in.use,
Endpoint: ep,
Append: in.actionAppend,
})
driverName := in.driver
if driverName == "" {
f, err := driver.GetDefaultFactory(ctx, dockerCli.Client(), true)
if err != nil {
return err
}
if f == nil {
return errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
if driver.GetFactory(driverName, true) == nil {
return errors.Errorf("failed to find driver %q", in.driver)
}
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
defer release()
// The store is no longer used from this point.
// Release it so we aren't holding the file lock during the boot.
release()
if in.bootstrap {
if _, err = b.Boot(ctx); err != nil {
name := in.name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return err
}
}
fmt.Printf("%s\n", b.Name)
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if in.actionAppend && in.name != "" {
logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
}
if in.actionLeave {
return errors.Errorf("failed to find instance %q for leave", name)
}
} else {
return err
}
}
if ng != nil {
if in.nodeName == "" && !in.actionAppend {
return errors.Errorf("existing instance for %s but no append mode, specify --node to make changes for existing instances", name)
}
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
}
}
if ng.Driver == "" || in.driver != "" {
ng.Driver = driverName
}
var flags []string
if in.flags != "" {
flags, err = shlex.Split(in.flags)
if err != nil {
return errors.Wrap(err, "failed to parse buildkit flags")
}
}
var ep string
if in.actionLeave {
if err := ng.Leave(in.nodeName); err != nil {
return err
}
} else {
if len(args) > 0 {
ep, err = validateEndpoint(dockerCli, args[0])
if err != nil {
return err
}
} else {
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
}
ep, err = getCurrentEndpoint(dockerCli)
if err != nil {
return err
}
}
if in.driver == "kubernetes" {
// name the endpoint so that --append works
ep = (&url.URL{
Scheme: in.driver,
Path: "/" + in.name,
RawQuery: (&url.Values{
"deployment": {in.nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
}
m, err := csvToMap(in.driverOpts)
if err != nil {
return err
}
if err := ng.Update(in.nodeName, ep, in.platform, len(args) > 0, in.actionAppend, flags, in.configFile, m); err != nil {
return err
}
}
if err := txn.Save(ng); err != nil {
return err
}
if in.use && ep != "" {
current, err := getCurrentEndpoint(dockerCli)
if err != nil {
return err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return err
}
}
fmt.Printf("%s\n", ng.Name)
return nil
}
func createCmd(dockerCli command.Cli) *cobra.Command {
var options createOptions
var drivers bytes.Buffer
for _, d := range driver.GetFactories(true) {
if len(drivers.String()) > 0 {
drivers.WriteString(", ")
}
drivers.WriteString(fmt.Sprintf(`"%s"`, d.Name()))
var drivers []string
for s := range driver.GetFactories() {
drivers = append(drivers, s)
}
cmd := &cobra.Command{
@@ -96,32 +196,44 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
Short: "Create a new builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.name, "name", "", "Builder instance name")
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %v)", drivers))
flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
flags.StringVar(&options.buildkitdFlags, "buildkitd-flags", "", "BuildKit daemon flags")
// we allow for both "--config" and "--buildkitd-config", although the latter is the recommended way to avoid ambiguity.
flags.StringVar(&options.buildkitdConfigFile, "buildkitd-config", "", "BuildKit daemon config file")
flags.StringVar(&options.buildkitdConfigFile, "config", "", "BuildKit daemon config file")
flags.MarkHidden("config")
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")
flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
flags.BoolVar(&options.use, "use", false, "Set the current builder instance")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
_ = flags
return cmd
}
func csvToMap(in []string) (map[string]string, error) {
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
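// Example (values illustrative): each --driver-opt argument is parsed as one
// CSV record of k=v fields and merged into a single map.
//
//	m, err := csvToMap([]string{"network=host,image=moby/buildkit:master"})
//	// m == map[string]string{"network": "host", "image": "moby/buildkit:master"}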


@@ -1,92 +0,0 @@
package debug
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// DebugConfig is a user-specified configuration for the debugger.
type DebugConfig struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed in it.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// DebuggableCmd is a command that supports the debugger and recognizes the user-specified DebugConfig.
type DebuggableCmd interface {
// NewDebugger returns a new *cobra.Command with debugger support, honoring the given DebugConfig.
NewDebugger(*DebugConfig) *cobra.Command
}
func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
var controlOptions control.ControlOptions
var progressMode string
var options DebugConfig
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, &controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))
}
return cmd
}


@@ -1,131 +0,0 @@
package commands
import (
"io"
"net"
"os"
"github.com/containerd/platforms"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type stdioOptions struct {
builder string
platform string
progress string
}
func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
ctx := appcontext.Context()
contextPathHash, _ := os.Getwd()
b, err := builder.New(dockerCli,
builder.WithName(opts.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
printer, err := progress.NewPrinter(ctx, os.Stderr, progressui.DisplayMode(opts.progress), progress.WithPhase("dial-stdio"), progress.WithDesc("builder: "+b.Name, "builder:"+b.Name))
if err != nil {
return err
}
var p *v1.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {
return errors.Wrapf(err, "invalid platform %q", opts.platform)
}
p = &pp
}
defer printer.Wait()
return progress.Wrap("Proxying to builder", printer.Write, func(sub progress.SubLogger) error {
var conn net.Conn
err := sub.Wrap("Dialing builder", func() error {
conn, err = build.Dial(ctx, nodes, printer, p)
if err != nil {
return err
}
return nil
})
if err != nil {
return err
}
defer conn.Close()
go func() {
<-ctx.Done()
closeWrite(conn)
}()
var eg errgroup.Group
eg.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
closeWrite(conn)
return err
})
eg.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
closeRead(conn)
return err
})
return eg.Wait()
})
}
func closeRead(conn net.Conn) error {
if c, ok := conn.(interface{ CloseRead() error }); ok {
return c.CloseRead()
}
return conn.Close()
}
func closeWrite(conn net.Conn) error {
if c, ok := conn.(interface{ CloseWrite() error }); ok {
return c.CloseWrite()
}
return conn.Close()
}
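// Design note: both helpers prefer a half-close when the connection supports
// it (for example, *net.TCPConn and *net.UnixConn implement CloseRead and
// CloseWrite), letting each copy direction above finish independently;
// connections without half-close support fall back to a full Close.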
func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
opts := stdioOptions{}
cmd := &cobra.Command{
Use: "dial-stdio",
Short: "Proxy current stdio streams to builder instance",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
opts.builder = rootOpts.builder
return runDialStdio(dockerCli, opts)
},
}
flags := cmd.Flags()
flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
return cmd
}


@@ -1,22 +1,19 @@
package commands
import (
"context"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/build"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"github.com/tonistiigi/units"
"golang.org/x/sync/errgroup"
)
@@ -26,35 +23,33 @@ type duOptions struct {
verbose bool
}
func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
ctx := appcontext.Context()
pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil {
return err
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
for _, di := range dis {
if di.Err != nil {
return err
}
}
out := make([][]*client.UsageInfo, len(nodes))
out := make([][]*client.UsageInfo, len(dis))
eg, ctx := errgroup.WithContext(ctx)
for i, node := range nodes {
func(i int, node builder.Node) {
for i, di := range dis {
func(i int, di build.DriverInfo) {
eg.Go(func() error {
if node.Driver != nil {
c, err := node.Driver.Client(ctx)
if di.Driver != nil {
c, err := di.Driver.Client(ctx)
if err != nil {
return err
}
@@ -67,7 +62,7 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) er
}
return nil
})
}(i, node)
}(i, di)
}
if err := eg.Wait(); err != nil {
@@ -112,9 +107,8 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runDiskUsage(cmd.Context(), dockerCli, options)
return runDiskUsage(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
@@ -131,20 +125,20 @@ func printKV(w io.Writer, k string, v interface{}) {
func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
for _, di := range du {
printKV(tw, "ID", di.ID)
if len(di.Parents) != 0 {
printKV(tw, "Parent", strings.Join(di.Parents, ","))
if di.Parent != "" {
printKV(tw, "Parent", di.Parent)
}
printKV(tw, "Created at", di.CreatedAt)
printKV(tw, "Mutable", di.Mutable)
printKV(tw, "Reclaimable", !di.InUse)
printKV(tw, "Shared", di.Shared)
printKV(tw, "Size", units.HumanSize(float64(di.Size)))
printKV(tw, "Size", fmt.Sprintf("%.2f", units.Bytes(di.Size)))
if di.Description != "" {
printKV(tw, "Description", di.Description)
}
printKV(tw, "Usage count", di.UsageCount)
if di.LastUsedAt != nil {
printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
printKV(tw, "Last used", di.LastUsedAt)
}
if di.RecordType != "" {
printKV(tw, "Type", di.RecordType)
@@ -165,15 +159,11 @@ func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
if di.Mutable {
id += "*"
}
size := units.HumanSize(float64(di.Size))
size := fmt.Sprintf("%.2f", units.Bytes(di.Size))
if di.Shared {
size += "*"
}
lastAccessed := ""
if di.LastUsedAt != nil {
lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
}
fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
fmt.Fprintf(tw, "%-71s\t%-11v\t%s\t\n", id, !di.InUse, size)
}
func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
@@ -196,11 +186,11 @@ func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
}
if shared > 0 {
fmt.Fprintf(tw, "Shared:\t%s\n", units.HumanSize(float64(shared)))
fmt.Fprintf(tw, "Private:\t%s\n", units.HumanSize(float64(total-shared)))
fmt.Fprintf(tw, "Shared:\t%.2f\n", units.Bytes(shared))
fmt.Fprintf(tw, "Private:\t%.2f\n", units.Bytes(total-shared))
}
fmt.Fprintf(tw, "Reclaimable:\t%s\n", units.HumanSize(float64(reclaimable)))
fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
fmt.Fprintf(tw, "Reclaimable:\t%.2f\n", units.Bytes(reclaimable))
fmt.Fprintf(tw, "Total:\t%.2f\n", units.Bytes(total))
tw.Flush()
}


@@ -1,20 +1,15 @@
package commands
import (
"context"
"encoding/json"
"fmt"
"os"
"io/ioutil"
"strings"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/docker/distribution/reference"
"github.com/moby/buildkit/util/appcontext"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -23,17 +18,13 @@ import (
)
type createOptions struct {
builder string
files []string
tags []string
annotations []string
dryrun bool
actionAppend bool
progress string
preferIndex bool
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
if len(args) == 0 && len(in.files) == 0 {
return errors.Errorf("no sources specified")
}
@@ -42,9 +33,9 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
return errors.Errorf("can't push with no tags specified, please set --tag or --dry-run")
}
fileArgs := make([]string, len(in.files), len(in.files)+len(args))
fileArgs := make([]string, len(in.files))
for i, f := range in.files {
dt, err := os.ReadFile(f)
dt, err := ioutil.ReadFile(f)
if err != nil {
return err
}
@@ -84,46 +75,35 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
if len(repos) == 0 {
return errors.Errorf("no repositories specified, please set a reference in tag or source")
}
if len(repos) > 1 {
return errors.Errorf("multiple repositories currently not supported, found %v", repos)
}
var defaultRepo *string
if len(repos) == 1 {
for repo := range repos {
defaultRepo = &repo
}
var repo string
for r := range repos {
repo = r
}
for i, s := range srcs {
if s.Ref == nil {
if defaultRepo == nil {
return errors.Errorf("multiple repositories specified, cannot infer repository for %q", args[i])
}
n, err := reference.ParseNormalizedNamed(*defaultRepo)
if s.Ref == nil && s.Desc.MediaType == "" && s.Desc.Digest != "" {
n, err := reference.ParseNormalizedNamed(repo)
if err != nil {
return err
}
if s.Desc.MediaType == "" && s.Desc.Digest != "" {
r, err := reference.WithDigest(n, s.Desc.Digest)
if err != nil {
return err
}
srcs[i].Ref = r
sourceRefs = true
} else {
srcs[i].Ref = reference.TagNameOnly(n)
r, err := reference.WithDigest(n, s.Desc.Digest)
if err != nil {
return err
}
srcs[i].Ref = r
sourceRefs = true
}
}
b, err := builder.New(dockerCli, builder.WithName(in.builder))
if err != nil {
return err
}
imageopt, err := b.ImageOpt()
if err != nil {
return err
}
ctx := appcontext.Context()
r := imagetools.New(imageopt)
r := imagetools.New(imagetools.Opt{
Auth: dockerCli.ConfigFile(),
})
if sourceRefs {
eg, ctx2 := errgroup.WithContext(ctx)
@@ -137,6 +117,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
if err != nil {
return err
}
srcs[i].Ref = nil
if srcs[i].Desc.Digest == "" {
srcs[i].Desc = desc
} else {
@@ -155,12 +136,12 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
annotations, err := buildflags.ParseAnnotations(in.annotations)
if err != nil {
return errors.Wrapf(err, "failed to parse annotations")
descs := make([]ocispec.Descriptor, len(srcs))
for i := range descs {
descs[i] = srcs[i].Desc
}
dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
dt, desc, err := r.Combine(ctx, repo, descs)
if err != nil {
return err
}
@@ -171,54 +152,27 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
// create a new resolver because new auth is needed
r = imagetools.New(imageopt)
ctx2, cancel := context.WithCancelCause(context.TODO())
defer func() { cancel(errors.WithStack(context.Canceled)) }()
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
if err != nil {
return err
}
eg, _ := errgroup.WithContext(ctx)
pw := progress.WithPrefix(printer, "internal", true)
r = imagetools.New(imagetools.Opt{
Auth: dockerCli.ConfigFile(),
})
for _, t := range tags {
t := t
eg.Go(func() error {
return progress.Wrap(fmt.Sprintf("pushing %s", t.String()), pw.Write, func(sub progress.SubLogger) error {
eg2, _ := errgroup.WithContext(ctx)
for _, s := range srcs {
if reference.Domain(s.Ref) == reference.Domain(t) && reference.Path(s.Ref) == reference.Path(t) {
continue
}
s := s
eg2.Go(func() error {
sub.Log(1, []byte(fmt.Sprintf("copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String())))
return r.Copy(ctx, s, t)
})
}
if err := eg2.Wait(); err != nil {
return err
}
sub.Log(1, []byte(fmt.Sprintf("pushing %s to %s\n", desc.Digest.String(), t.String())))
return r.Push(ctx, t, desc, dt)
})
})
if err := r.Push(ctx, t, desc, dt); err != nil {
return err
}
fmt.Println(t.String())
}
err = eg.Wait()
err1 := printer.Wait()
if err == nil {
err = err1
}
return err
return nil
}
func parseSources(in []string) ([]*imagetools.Source, error) {
out := make([]*imagetools.Source, len(in))
type src struct {
Desc ocispec.Descriptor
Ref reference.Named
}
func parseSources(in []string) ([]*src, error) {
out := make([]*src, len(in))
for i, in := range in {
s, err := parseSource(in)
if err != nil {
@@ -241,11 +195,11 @@ func parseRefs(in []string) ([]reference.Named, error) {
return refs, nil
}
func parseSource(in string) (*imagetools.Source, error) {
func parseSource(in string) (*src, error) {
// source can be a digest, reference or a descriptor JSON
dgst, err := digest.Parse(in)
if err == nil {
return &imagetools.Source{
return &src{
Desc: ocispec.Descriptor{
Digest: dgst,
},
@@ -256,41 +210,39 @@ func parseSource(in string) (*imagetools.Source, error) {
ref, err := reference.ParseNormalizedNamed(in)
if err == nil {
return &imagetools.Source{
return &src{
Ref: ref,
}, nil
} else if !strings.HasPrefix(in, "{") {
return nil, err
}
var s imagetools.Source
var s src
if err := json.Unmarshal([]byte(in), &s.Desc); err != nil {
return nil, errors.WithStack(err)
}
return &s, nil
}
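// Accepted input forms (values illustrative):
//
//	parseSource("sha256:e3b0c442...")              // bare digest -> Desc.Digest only
//	parseSource("alpine:3.18")                     // image reference -> Ref
//	parseSource(`{"digest":"sha256:e3b0c442..."}`) // descriptor JSON -> Desc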
func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
func createCmd(dockerCli command.Cli) *cobra.Command {
var options createOptions
cmd := &cobra.Command{
Use: "create [OPTIONS] [SOURCE] [SOURCE...]",
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Read source descriptor from file")
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")
_ = flags
return cmd
}


@@ -1,66 +1,68 @@
package commands
import (
"context"
"fmt"
"os"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/containerd/containerd/images"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/moby/buildkit/util/appcontext"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
)
type inspectOptions struct {
builder string
format string
raw bool
raw bool
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions, name string) error {
if in.format != "" && in.raw {
return errors.Errorf("format and raw cannot be used together")
}
func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli, builder.WithName(in.builder))
if err != nil {
return err
}
imageopt, err := b.ImageOpt()
r := imagetools.New(imagetools.Opt{
Auth: dockerCli.ConfigFile(),
})
dt, desc, err := r.Get(ctx, name)
if err != nil {
return err
}
p, err := imagetools.NewPrinter(ctx, imageopt, name, in.format)
if err != nil {
return err
if in.raw {
fmt.Printf("%s", dt) // avoid newline to keep digest
return nil
}
return p.Print(in.raw, dockerCli.Out())
switch desc.MediaType {
// case images.MediaTypeDockerSchema2Manifest, specs.MediaTypeImageManifest:
// TODO: handle distribution manifest and schema1
case images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
return imagetools.PrintManifestList(dt, desc, name, os.Stdout)
default:
fmt.Printf("%s\n", dt)
}
return nil
}
func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
func inspectCmd(dockerCli command.Cli) *cobra.Command {
var options inspectOptions
cmd := &cobra.Command{
Use: "inspect [OPTIONS] NAME",
Short: "Show details of an image in the registry",
Short: "Show details of image in the registry",
Args: cli.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options, args[0])
return runInspect(dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", "", "Format the output using the given Go template")
flags.SetAnnotation("format", annotation.DefaultValue, []string{`"{{.Manifest}}"`})
flags.BoolVar(&options.raw, "raw", false, "Show original JSON manifest")
flags.BoolVar(&options.raw, "raw", false, "Show original, unformatted JSON manifest")
_ = flags
return cmd
}
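The older inspect path above dispatches on the descriptor's media type to decide how to render the result: manifest lists get a per-platform listing, everything else is printed as raw JSON. A sketch of that dispatch using the same media-type constants (`printByMediaType` is a hypothetical helper and the sample index is fabricated):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/platforms"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// printByMediaType renders a manifest list per platform and falls back
// to raw JSON for single manifests, as the pre-rewrite inspect did.
func printByMediaType(dt []byte, desc ocispec.Descriptor) error {
	switch desc.MediaType {
	case images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
		var idx ocispec.Index
		if err := json.Unmarshal(dt, &idx); err != nil {
			return err
		}
		for _, m := range idx.Manifests {
			if m.Platform != nil {
				fmt.Printf("%s\t%s\n", m.Digest, platforms.Format(*m.Platform))
			} else {
				fmt.Printf("%s\n", m.Digest)
			}
		}
	default:
		fmt.Printf("%s\n", dt)
	}
	return nil
}

func main() {
	idx := ocispec.Index{Manifests: []ocispec.Descriptor{{
		Digest:   "sha256:0000000000000000000000000000000000000000000000000000000000000000",
		Platform: &ocispec.Platform{OS: "linux", Architecture: "amd64"},
	}}}
	dt, _ := json.Marshal(idx)
	_ = printByMediaType(dt, ocispec.Descriptor{MediaType: ocispec.MediaTypeImageIndex})
}
```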


@@ -1,26 +1,19 @@
package commands
import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
type RootOptions struct {
Builder *string
}
func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
func RootCmd(dockerCli command.Cli) *cobra.Command {
cmd := &cobra.Command{
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
RunE: rootcmd.RunE,
Use: "imagetools",
Short: "Commands to work on images in registry",
}
cmd.AddCommand(
createCmd(dockerCli, opts),
inspectCmd(dockerCli, opts),
inspectCmd(dockerCli),
createCmd(dockerCli),
)
return cmd


@@ -4,21 +4,21 @@ import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/build"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/moby/buildkit/util/appcontext"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type inspectOptions struct {
@@ -26,120 +26,103 @@ type inspectOptions struct {
builder string
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) error {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
)
type dinfo struct {
di *build.DriverInfo
info *driver.Info
platforms []specs.Platform
err error
}
type nginfo struct {
ng *store.NodeGroup
drivers []dinfo
err error
}
func runInspect(dockerCli command.Cli, in inspectOptions) error {
ctx := appcontext.Context()
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
defer release()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
var ng *store.NodeGroup
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if in.bootstrap {
var ok bool
ok, err = b.Boot(ctx)
if in.builder != "" {
ng, err = getNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
} else {
ng, err = getCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
}
if ng == nil {
ng = &store.NodeGroup{
Name: "default",
Nodes: []store.Node{{
Name: "default",
Endpoint: "default",
}},
}
}
ngi := &nginfo{ng: ng}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
err = loadNodeGroupData(timeoutCtx, dockerCli, ngi)
var bootNgi *nginfo
if in.bootstrap {
var ok bool
ok, err = boot(ctx, ngi, dockerCli)
if err != nil {
return err
}
bootNgi = ngi
if ok {
nodes, err = b.LoadNodes(timeoutCtx, builder.WithData())
ngi = &nginfo{ng: ng}
err = loadNodeGroupData(ctx, dockerCli, ngi)
}
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "Name:\t%s\n", b.Name)
fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
if !b.NodeGroup.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
}
fmt.Fprintf(w, "Name:\t%s\n", ngi.ng.Name)
fmt.Fprintf(w, "Driver:\t%s\n", ngi.ng.Driver)
if err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if b.Err() != nil {
fmt.Fprintf(w, "Error:\t%s\n", b.Err().Error())
} else if ngi.err != nil {
fmt.Fprintf(w, "Error:\t%s\n", ngi.err.Error())
}
if err == nil {
fmt.Fprintln(w, "")
fmt.Fprintln(w, "Nodes:")
for i, n := range nodes {
for i, n := range ngi.ng.Nodes {
if i != 0 {
fmt.Fprintln(w, "")
}
fmt.Fprintf(w, "Name:\t%s\n", n.Name)
fmt.Fprintf(w, "Endpoint:\t%s\n", n.Endpoint)
var driverOpts []string
for k, v := range n.DriverOpts {
driverOpts = append(driverOpts, fmt.Sprintf("%s=%q", k, v))
}
if len(driverOpts) > 0 {
fmt.Fprintf(w, "Driver Options:\t%s\n", strings.Join(driverOpts, " "))
}
if err := n.Err; err != nil {
if err := ngi.drivers[i].di.Err; err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if err := ngi.drivers[i].err; err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if bootNgi != nil && len(bootNgi.drivers) > i && bootNgi.drivers[i].err != nil {
fmt.Fprintf(w, "Error:\t%s\n", bootNgi.drivers[i].err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
if len(n.BuildkitdFlags) > 0 {
fmt.Fprintf(w, "BuildKit daemon flags:\t%s\n", strings.Join(n.BuildkitdFlags, " "))
}
if nodes[i].Version != "" {
fmt.Fprintf(w, "BuildKit version:\t%s\n", nodes[i].Version)
}
platforms := platformutil.FormatInGroups(n.Node.Platforms, n.Platforms)
if len(platforms) > 0 {
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platforms, ", "))
}
if debug.IsEnabled() {
fmt.Fprintf(w, "Features:\n")
features := nodes[i].Driver.Features(ctx)
featKeys := make([]string, 0, len(features))
for k := range features {
featKeys = append(featKeys, string(k))
}
sort.Strings(featKeys)
for _, k := range featKeys {
fmt.Fprintf(w, "\t%s:\t%t\n", k, features[driver.Feature(k)])
}
}
if len(nodes[i].Labels) > 0 {
fmt.Fprintf(w, "Labels:\n")
for _, k := range sortedKeys(nodes[i].Labels) {
v := nodes[i].Labels[k]
fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
}
}
for ri, rule := range nodes[i].GCPolicy {
fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
if len(rule.Filter) > 0 {
fmt.Fprintf(w, "\tFilters:\t%s\n", strings.Join(rule.Filter, " "))
}
if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
}
if rule.ReservedSpace > 0 {
fmt.Fprintf(w, "\tReserved Space:\t%s\n", units.BytesSize(float64(rule.ReservedSpace)))
}
if rule.MaxUsedSpace > 0 {
fmt.Fprintf(w, "\tMax Used Space:\t%s\n", units.BytesSize(float64(rule.MaxUsedSpace)))
}
if rule.MinFreeSpace > 0 {
fmt.Fprintf(w, "\tMin Free Space:\t%s\n", units.BytesSize(float64(rule.MinFreeSpace)))
}
}
for f, dt := range nodes[i].Files {
fmt.Fprintf(w, "File#%s:\n", f)
for _, line := range strings.Split(string(dt), "\n") {
fmt.Fprintf(w, "\t> %s\n", line)
}
fmt.Fprintf(w, "Status:\t%s\n", ngi.drivers[i].info.Status)
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Platforms, ngi.drivers[i].platforms), ", "))
}
}
}
@@ -161,24 +144,55 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runInspect(cmd.Context(), dockerCli, options)
return runInspect(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Ensure builder has booted before inspecting")
_ = flags
return cmd
}
func sortedKeys(m map[string]string) []string {
s := make([]string, len(m))
i := 0
for k := range m {
s[i] = k
i++
func boot(ctx context.Context, ngi *nginfo, dockerCli command.Cli) (bool, error) {
toBoot := make([]int, 0, len(ngi.drivers))
for i, d := range ngi.drivers {
if d.err != nil || d.di.Err != nil || d.di.Driver == nil || d.info == nil {
continue
}
if d.info.Status != driver.Running {
toBoot = append(toBoot, i)
}
}
sort.Strings(s)
return s
if len(toBoot) == 0 {
return false, nil
}
printer := progress.NewPrinter(context.TODO(), os.Stderr, "auto")
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
pw := progress.WithPrefix(printer, ngi.ng.Nodes[idx].Name, len(toBoot) > 1)
_, err := driver.Boot(ctx, baseCtx, ngi.drivers[idx].di.Driver, pw)
if err != nil {
ngi.drivers[idx].err = err
}
return nil
})
}(idx)
}
err := eg.Wait()
err1 := printer.Wait()
if err == nil {
err = err1
}
return true, err
}
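Both sides of the `--bootstrap` hunk use the same errgroup fan-out shape: boot every non-running node concurrently, record failures per node, and return nil from each goroutine so one bad node cannot cancel the others. A stripped-down sketch of that shape (`Node` and the `boot` callback are hypothetical stand-ins for the driver types in the diff):

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

type Node struct {
	Running bool
	Err     error
}

// bootAll boots every non-running node concurrently, recording each
// failure on the node instead of failing the group.
func bootAll(ctx context.Context, nodes []*Node, boot func(context.Context, *Node) error) error {
	eg, ctx := errgroup.WithContext(ctx)
	for _, n := range nodes {
		if n.Running {
			continue
		}
		n := n // capture loop variable (pre-Go 1.22 idiom, as in the diff)
		eg.Go(func() error {
			if err := boot(ctx, n); err != nil {
				n.Err = err // record, don't abort the group
			}
			return nil
		})
	}
	return eg.Wait()
}

func main() {
	nodes := []*Node{{Running: false}, {Running: true}}
	_ = bootAll(context.Background(), nodes, func(ctx context.Context, n *Node) error {
		return nil // pretend boot succeeded
	})
}
```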


@@ -3,8 +3,6 @@ package commands
import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -15,7 +13,7 @@ import (
type installOptions struct {
}
func runInstall(_ command.Cli, _ installOptions) error {
func runInstall(dockerCli command.Cli, in installOptions) error {
dir := config.Dir()
if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "could not create docker config")
@@ -47,12 +45,8 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runInstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}


@@ -2,71 +2,73 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"sort"
"io"
"os"
"strings"
"text/tabwriter"
"time"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/pkg/errors"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
lsNameNodeHeader = "NAME/NODE"
lsDriverEndpointHeader = "DRIVER/ENDPOINT"
lsStatusHeader = "STATUS"
lsLastActivityHeader = "LAST ACTIVITY"
lsBuildkitHeader = "BUILDKIT"
lsPlatformsHeader = "PLATFORMS"
lsIndent = ` \_ `
lsDefaultTableFormat = "table {{.Name}}\t{{.DriverEndpoint}}\t{{.Status}}\t{{.Buildkit}}\t{{.Platforms}}"
)
type lsOptions struct {
format string
noTrunc bool
}
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
txn, release, err := storeutil.GetStore(dockerCli)
func runLs(dockerCli command.Cli, in lsOptions) error {
ctx := appcontext.Context()
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
defer release()
current, err := storeutil.GetCurrentInstance(txn, dockerCli)
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
ll, err := txn.List()
if err != nil {
return err
}
builders, err := builder.GetBuilders(dockerCli, txn)
builders := make([]*nginfo, len(ll))
for i, ng := range ll {
builders[i] = &nginfo{ng: ng}
}
list, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
ctxbuilders := make([]*nginfo, len(list))
for i, l := range list {
ctxbuilders[i] = &nginfo{ng: &store.NodeGroup{
Name: l.Name,
Nodes: []store.Node{{
Name: l.Name,
Endpoint: l.Name,
}},
}}
}
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
builders = append(builders, ctxbuilders...)
eg, _ := errgroup.WithContext(ctx)
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {
func(b *builder.Builder) {
func(b *nginfo) {
eg.Go(func() error {
_, _ = b.LoadNodes(timeoutCtx, builder.WithData())
err = loadNodeGroupData(ctx, dockerCli, b)
if b.err == nil && err != nil {
b.err = err
}
return nil
})
}(b)
@@ -76,26 +78,63 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
if hasErrors, err := lsPrint(dockerCli, current, builders, in); err != nil {
currentName := "default"
current, err := getCurrentInstance(txn, dockerCli)
if err != nil {
return err
} else if hasErrors {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
for _, b := range builders {
if b.Err() != nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.Name, strings.TrimSpace(b.Err().Error()))
} else {
for _, d := range b.Nodes() {
if d.Err != nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.Name, d.Name, strings.TrimSpace(d.Err.Error()))
}
}
}
}
if current != nil {
currentName = current.Name
if current.Name == "default" {
currentName = current.Nodes[0].Endpoint
}
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tPLATFORMS\n")
currentSet := false
for _, b := range builders {
if !currentSet && b.ng.Name == currentName {
b.ng.Name += " *"
currentSet = true
}
printngi(w, b)
}
w.Flush()
return nil
}
func printngi(w io.Writer, ngi *nginfo) {
var err string
if ngi.err != nil {
err = ngi.err.Error()
}
fmt.Fprintf(w, "%s\t%s\t%s\t\n", ngi.ng.Name, ngi.ng.Driver, err)
if ngi.err == nil {
for idx, n := range ngi.ng.Nodes {
d := ngi.drivers[idx]
var err string
if d.err != nil {
err = d.err.Error()
} else if d.di.Err != nil {
err = d.di.Err.Error()
}
var status string
if d.info != nil {
status = d.info.Status.String()
}
if err != "" {
fmt.Fprintf(w, " %s\t%s\t%s\n", n.Name, n.Endpoint, err)
} else {
fmt.Fprintf(w, " %s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, strings.Join(platformutil.FormatInGroups(n.Platforms, d.platforms), ", "))
}
}
}
}
func lsCmd(dockerCli command.Cli) *cobra.Command {
var options lsOptions
@@ -104,314 +143,9 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
Short: "List builder instances",
Args: cli.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(cmd.Context(), dockerCli, options)
return runLs(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, in lsOptions) (hasErrors bool, _ error) {
if in.format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(in.format),
Trunc: !in.noTrunc,
}
sort.SliceStable(builders, func(i, j int) bool {
ierr := builders[i].Err() != nil
jerr := builders[j].Err() != nil
if ierr && !jerr {
return false
} else if !ierr && jerr {
return true
}
return i < j
})
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
}); err != nil {
return err
}
if b.Err() != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
continue
}
for _, n := range b.Nodes() {
if n.Err != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
}
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
node: n,
}); err != nil {
return err
}
}
}
return nil
}
lsCtx := lsContext{}
lsCtx.Header = formatter.SubHeaderContext{
"Name": lsNameNodeHeader,
"DriverEndpoint": lsDriverEndpointHeader,
"LastActivity": lsLastActivityHeader,
"Status": lsStatusHeader,
"Buildkit": lsBuildkitHeader,
"Platforms": lsPlatformsHeader,
}
return hasErrors, ctx.Write(&lsCtx, render)
}
type lsBuilder struct {
*builder.Builder
Current bool
}
type lsContext struct {
formatter.HeaderContext
Builder *lsBuilder
format formatter.Format
trunc bool
node builder.Node
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
}
func (c *lsContext) Name() string {
if c.node.Name == "" {
name := c.Builder.Name
if c.Builder.Current && c.format.IsTable() {
name += "*"
}
return name
}
if c.format.IsTable() {
return lsIndent + c.node.Name
}
return c.node.Name
}
func (c *lsContext) DriverEndpoint() string {
if c.node.Name == "" {
return c.Builder.Driver
}
if c.format.IsTable() {
return lsIndent + c.node.Endpoint
}
return c.node.Endpoint
}
func (c *lsContext) LastActivity() string {
if c.node.Name != "" || c.Builder.LastActivity.IsZero() {
return ""
}
return c.Builder.LastActivity.UTC().Format(time.RFC3339)
}
func (c *lsContext) Status() string {
if c.node.Name == "" {
if c.Builder.Err() != nil {
return "error"
}
return ""
}
if c.node.Err != nil {
return "error"
}
if c.node.DriverInfo != nil {
return c.node.DriverInfo.Status.String()
}
return ""
}
func (c *lsContext) Buildkit() string {
if c.node.Name == "" {
return ""
}
return c.node.Version
}
func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
pfs := platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms)
if c.trunc && c.format.IsTable() {
return truncPlatforms(pfs, 4).String()
}
return strings.Join(pfs, ", ")
}
func (c *lsContext) Error() string {
if c.node.Name != "" && c.node.Err != nil {
return c.node.Err.Error()
} else if err := c.Builder.Err(); err != nil {
return err.Error()
}
return ""
}
var truncMajorPlatforms = []string{
"linux/amd64",
"linux/arm64",
"linux/arm",
"linux/ppc64le",
"linux/s390x",
"linux/riscv64",
"linux/mips64",
}
type truncatedPlatforms struct {
res map[string][]string
input []string
max int
}
func (tp truncatedPlatforms) List() map[string][]string {
return tp.res
}
func (tp truncatedPlatforms) String() string {
var out []string
var count int
var keys []string
for k := range tp.res {
keys = append(keys, k)
}
sort.Strings(keys)
seen := make(map[string]struct{})
for _, mpf := range truncMajorPlatforms {
if tpf, ok := tp.res[mpf]; ok {
seen[mpf] = struct{}{}
if len(tpf) == 1 {
out = append(out, tpf[0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tpf {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tpf)))
count += len(tpf)
}
}
}
for _, mpf := range keys {
if len(out) >= tp.max {
break
}
if _, ok := seen[mpf]; ok {
continue
}
if len(tp.res[mpf]) == 1 {
out = append(out, tp.res[mpf][0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tp.res[mpf] {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tp.res[mpf])))
count += len(tp.res[mpf])
}
}
left := len(tp.input) - count
if left > 0 {
out = append(out, fmt.Sprintf("(%d more)", left))
}
return strings.Join(out, ", ")
}
func truncPlatforms(pfs []string, max int) truncatedPlatforms {
res := make(map[string][]string)
for _, mpf := range truncMajorPlatforms {
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
if pp.OS+"/"+pp.Architecture == mpf {
res[mpf] = append(res[mpf], pf)
}
}
}
left := make(map[string][]string)
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
ppf := strings.TrimSuffix(pp.OS+"/"+pp.Architecture, "*")
if _, ok := res[ppf]; !ok {
left[ppf] = append(left[ppf], pf)
}
}
for k, v := range left {
res[k] = v
}
return truncatedPlatforms{
res: res,
input: pfs,
max: max,
}
}
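Both versions of `ls` render through `text/tabwriter`: tab-separated cells, flushed once at the end so columns line up. A minimal sketch with fabricated values (the older format indents node rows with a single space, the newer one uses the ` \_ ` prefix from `lsIndent`):

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// minwidth 0, tabwidth 0, padding 1, pad with spaces
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tPLATFORMS\n")
	fmt.Fprintf(w, "mybuilder *\tdocker-container\t\t\n")
	fmt.Fprintf(w, " mybuilder0\tunix:///var/run/docker.sock\trunning\tlinux/amd64, linux/arm64\n")
	w.Flush() // columns are aligned only after the final flush
}
```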


@@ -1,174 +0,0 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestTruncPlatforms(t *testing.T) {
tests := []struct {
name string
platforms []string
max int
expectedList map[string][]string
expectedOut string
}{
{
name: "arm64 preferred and emulated",
platforms: []string{"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64*",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "riscv64 preferred only",
platforms: []string{"linux/riscv64*"},
max: 4,
expectedList: map[string][]string{
"linux/riscv64": {
"linux/riscv64*",
},
},
expectedOut: "linux/riscv64*",
},
{
name: "amd64 no preferred and emulated",
platforms: []string{"linux/amd64", "linux/amd64/v2", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
}},
expectedOut: "linux/amd64 (+3), linux/arm64, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "amd64 no preferred",
platforms: []string{"linux/amd64", "linux/386"},
max: 4,
expectedList: map[string][]string{
"linux/386": {
"linux/386",
},
"linux/amd64": {
"linux/amd64",
},
},
expectedOut: "linux/amd64, linux/386",
},
{
name: "arm64 no preferred",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
},
expectedOut: "linux/arm64, linux/arm (+2)",
},
{
name: "all preferred",
platforms: []string{"darwin/arm64*", "linux/arm64*", "linux/arm/v5*", "linux/arm/v6*", "linux/arm/v7*", "windows/arm64*"},
max: 4,
expectedList: map[string][]string{
"darwin/arm64": {
"darwin/arm64*",
},
"linux/arm": {
"linux/arm/v5*",
"linux/arm/v6*",
"linux/arm/v7*",
},
"linux/arm64": {
"linux/arm64*",
},
"windows/arm64": {
"windows/arm64*",
},
},
expectedOut: "linux/arm64*, linux/arm* (+3), darwin/arm64*, windows/arm64*",
},
{
name: "no major preferred",
platforms: []string{"linux/amd64/v2*", "linux/arm/v6*", "linux/mips64le*", "linux/amd64", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64", "linux/arm/v7"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64/v2*",
"linux/amd64",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v6*",
"linux/arm/v7",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64* (+3), linux/arm64, linux/arm* (+2), linux/ppc64le, (5 more)",
},
{
name: "no major with multiple variants",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6", "linux/mips64le/softfloat", "linux/mips64le/hardfloat"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/mips64le": {
"linux/mips64le/softfloat",
"linux/mips64le/hardfloat",
},
},
expectedOut: "linux/arm64, linux/arm (+2), linux/mips64le (+2)",
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
tpfs := truncPlatforms(tt.platforms, tt.max)
assert.Equal(t, tt.expectedList, tpfs.List())
assert.Equal(t, tt.expectedOut, tpfs.String())
})
}
}


@@ -1,38 +1,32 @@
package commands
import (
"context"
"fmt"
"os"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/build"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
pb "github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/tonistiigi/units"
"golang.org/x/sync/errgroup"
)
type pruneOptions struct {
builder string
all bool
filter opts.FilterOpt
reservedSpace opts.MemBytes
maxUsedSpace opts.MemBytes
minFreeSpace opts.MemBytes
force bool
verbose bool
builder string
all bool
filter opts.FilterOpt
keepStorage opts.MemBytes
force bool
verbose bool
}
const (
@@ -40,7 +34,9 @@ const (
allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
)
func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) error {
func runPrune(dockerCli command.Cli, opts pruneOptions) error {
ctx := appcontext.Context()
pruneFilters := opts.filter.Value()
pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
@@ -54,26 +50,18 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
warning = allCacheWarning
}
if !opts.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), warning); err != nil {
if !opts.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), warning) {
return nil
}
dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Err != nil {
return err
} else if !ok {
return nil
}
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
}
}
@@ -102,27 +90,16 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
}()
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
func(node builder.Node) {
for _, di := range dis {
func(di build.DriverInfo) {
eg.Go(func() error {
if node.Driver != nil {
c, err := node.Driver.Client(ctx)
if di.Driver != nil {
c, err := di.Driver.Client(ctx)
if err != nil {
return err
}
// check if the client supports newer prune options
if opts.maxUsedSpace.Value() != 0 || opts.minFreeSpace.Value() != 0 {
caps, err := loadLLBCaps(ctx, c)
if err != nil {
return errors.Wrap(err, "failed to load buildkit capabilities for prune")
}
if caps.Supports(pb.CapGCFreeSpaceFilter) != nil {
return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
}
}
popts := []client.PruneOption{
client.WithKeepOpt(pi.KeepDuration, opts.reservedSpace.Value(), opts.maxUsedSpace.Value(), opts.minFreeSpace.Value()),
client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
client.WithFilter(pi.Filter),
}
if opts.all {
@@ -132,7 +109,7 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
}
return nil
})
}(node)
}(di)
}
if err := eg.Wait(); err != nil {
@@ -142,22 +119,11 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
<-printed
tw = tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
fmt.Fprintf(tw, "Total:\t%.2f\n", units.Bytes(total))
tw.Flush()
return nil
}
func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
var caps apicaps.CapSet
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
caps = c.BuildOpts().LLBCaps
return nil, nil
}, nil)
return caps, err
}
func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options := pruneOptions{filter: opts.NewFilterOpt()}
@@ -167,23 +133,17 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runPrune(cmd.Context(), dockerCli, options)
return runPrune(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache")
flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
flags.BoolVarP(&options.all, "all", "a", false, "Remove all unused images, not just dangling ones")
flags.Var(&options.filter, "filter", "Provide filter values (e.g. 'until=24h')")
flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to max-storage")
return cmd
}
@@ -195,9 +155,9 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
if len(untilValues) > 0 && len(unusedForValues) > 0 {
return nil, errors.Errorf("conflicting filters %q and %q", "until", "unused-for")
}
untilKey := "until"
filterKey := "until"
if len(unusedForValues) > 0 {
untilKey = "unused-for"
filterKey = "unused-for"
}
untilValues = append(untilValues, unusedForValues...)
@@ -208,29 +168,23 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
var err error
until, err = time.ParseDuration(untilValues[0])
if err != nil {
return nil, errors.Wrapf(err, "%q filter expects a duration (e.g., '24h')", untilKey)
return nil, errors.Wrapf(err, "%q filter expects a duration (e.g., '24h')", filterKey)
}
default:
return nil, errors.Errorf("filters expect only one value")
}
filters := make([]string, 0, f.Len())
for _, filterKey := range f.Keys() {
if filterKey == untilKey {
continue
}
values := f.Get(filterKey)
bkFilter := make([]string, 0, f.Len())
for _, field := range f.Keys() {
values := f.Get(field)
switch len(values) {
case 0:
filters = append(filters, filterKey)
bkFilter = append(bkFilter, field)
case 1:
if filterKey == "id" {
filters = append(filters, filterKey+"~="+values[0])
} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
filters = append(filters, filterKey+"="+values[0])
if field == "id" {
bkFilter = append(bkFilter, field+"~="+values[0])
} else {
filters = append(filters, filterKey+"=="+values[0])
bkFilter = append(bkFilter, field+"=="+values[0])
}
default:
return nil, errors.Errorf("filters expect only one value")
@@ -238,6 +192,6 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
}
return &client.PruneInfo{
KeepDuration: until,
Filter: []string{strings.Join(filters, ",")},
Filter: []string{strings.Join(bkFilter, ",")},
}, nil
}
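On both sides, `toBuildkitPruneInfo` peels `until`/`unused-for` off into `KeepDuration` and joins the remaining keys into one comma-separated BuildKit filter string, matching `id` by regexp (`~=`) and other keys exactly (`==`); the newer side additionally honors trailing `!`/`~` operators. A runnable sketch of that conversion with illustrative filter values:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	f := filters.NewArgs(
		filters.Arg("until", "24h"),
		filters.Arg("id", "abc123"),
		filters.Arg("type", "exec.cachemount"),
	)

	// "until" is pulled out as a KeepDuration...
	keep, _ := time.ParseDuration(f.Get("until")[0])

	// ...and the remaining keys become BuildKit filter terms, with "id"
	// matched by regexp (~=) and everything else exactly (==).
	// Note: f.Keys() iterates a map, so term order is not guaranteed.
	var terms []string
	for _, k := range f.Keys() {
		if k == "until" {
			continue
		}
		v := f.Get(k)[0]
		if k == "id" {
			terms = append(terms, k+"~="+v)
		} else {
			terms = append(terms, k+"=="+v)
		}
	}
	fmt.Println(keep, strings.Join(terms, ","))
	// e.g. 24h0m0s id~=abc123,type==exec.cachemount
}
```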


@@ -2,96 +2,52 @@ package commands
import (
"context"
"fmt"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builders []string
keepState bool
keepDaemon bool
allInactive bool
force bool
builder string
keepState bool
}
const (
rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
)
func runRm(dockerCli command.Cli, in rmOptions) error {
ctx := appcontext.Context()
func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
if in.allInactive && !in.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), rmInactiveWarning); err != nil {
return err
} else if !ok {
return nil
}
}
txn, release, err := storeutil.GetStore(dockerCli)
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
defer release()
if in.allInactive {
return rmAllInactive(ctx, txn, dockerCli, in)
if in.builder != "" {
ng, err := getNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
err1 := rm(ctx, dockerCli, ng, in.keepState)
if err := txn.Remove(ng.Name); err != nil {
return err
}
return err1
}
eg, _ := errgroup.WithContext(ctx)
for _, name := range in.builders {
func(name string) {
eg.Go(func() (err error) {
defer func() {
if err == nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", name)
} else {
_, _ = fmt.Fprintf(dockerCli.Err(), "failed to remove %s: %v\n", name, err)
}
}()
b, err := builder.New(dockerCli,
builder.WithName(name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
return nil
})
}(name)
ng, err := getCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
if ng != nil {
err1 := rm(ctx, dockerCli, ng, in.keepState)
if err := txn.Remove(ng.Name); err != nil {
return err
}
return err1
}
if err := eg.Wait(); err != nil {
return errors.New("failed to remove one or more builders")
}
return nil
}
@@ -99,84 +55,41 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [NAME] [NAME...]",
Short: "Remove one or more builder instances",
Use: "rm [NAME]",
Short: "Remove a builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builders = []string{rootOpts.builder}
options.builder = rootOpts.builder
if len(args) > 0 {
if options.allInactive {
return errors.New("cannot specify builder name when --all-inactive is set")
}
options.builders = args
options.builder = args[0]
}
return runRm(cmd.Context(), dockerCli, options)
return runRm(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the BuildKit daemon running")
flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
return cmd
}
func rm(ctx context.Context, nodes []builder.Node, in rmOptions) (err error) {
for _, node := range nodes {
if node.Driver == nil {
continue
}
// Do not stop the buildkitd daemon when --keep-daemon is provided
if !in.keepDaemon {
if err := node.Driver.Stop(ctx, true); err != nil {
func rm(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, keepState bool) error {
dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
if err := di.Driver.Rm(ctx, true, !keepState); err != nil {
return err
}
}
if err := node.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
return err
}
if node.Err != nil {
err = node.Err
if di.Err != nil {
err = di.Err
}
}
return err
}
func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, in rmOptions) error {
builders, err := builder.GetBuilders(dockerCli, txn)
if err != nil {
return err
}
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {
func(b *builder.Builder) {
eg.Go(func() error {
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if err != nil {
return errors.Wrapf(err, "cannot load %s", b.Name)
}
if b.Dynamic {
return nil
}
if b.Inactive() {
rmerr := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
return rmerr
}
return nil
})
}(b)
}
return eg.Wait()
}


@@ -1,100 +1,42 @@
package commands
import (
"fmt"
"os"
debugcmd "github.com/docker/buildx/commands/debug"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/moby/buildkit/util/appcontext"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command {
var opt rootOptions
cmd := &cobra.Command{
Short: "Docker Buildx",
Long: `Extended build capabilities with BuildKit`,
Short: "Build with BuildKit",
Use: name,
Annotations: map[string]string{
annotation.CodeDelimiter: `"`,
},
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if opt.debug {
debug.Enable()
}
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
}
}
if isPlugin {
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
return plugin.PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return cmd.Help()
}
_ = cmd.Help()
return cli.StatusError{
StatusCode: 1,
Status: fmt.Sprintf("ERROR: unknown command: %q", args[0]),
}
},
}
if !isPlugin {
// match plugin behavior for standalone mode
// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
cmd.SilenceUsage = true
cmd.SilenceErrors = true
cmd.TraverseChildren = true
cmd.DisableFlagsInUseLine = true
cli.DisableFlagsInUseLine(cmd)
}
}
logrus.SetFormatter(&logutil.Formatter{})
logrus.AddHook(logutil.NewFilter([]logrus.Level{
logrus.DebugLevel,
},
"serving grpc connection",
"stopping session",
"using default config store",
))
if !confutil.IsExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
}
addCommands(cmd, &opt, dockerCli)
addCommands(cmd, dockerCli)
return cmd
}
type rootOptions struct {
builder string
debug bool
}
func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
opts := &rootOptions{}
rootFlags(opts, cmd.PersistentFlags())
cmd.AddCommand(
buildCmd(dockerCli, opts, nil),
buildCmd(dockerCli, opts),
bakeCmd(dockerCli, opts),
createCmd(dockerCli),
dialStdioCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
lsCmd(dockerCli),
useCmd(dockerCli, opts),
@@ -105,22 +47,10 @@ func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
versionCmd(dockerCli),
pruneCmd(dockerCli, opts),
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(cmd, dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if confutil.IsExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))
remote.AddControllerCommands(cmd, dockerCli)
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck
"builder",
completion.BuilderNames(dockerCli),
imagetoolscmd.RootCmd(dockerCli),
)
}
func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
flags.BoolVarP(&options.debug, "debug", "D", debug.IsEnabled(), "Enable debug logging")
}
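The persistent `--builder` flag defined in `rootFlags` above is plain cobra wiring, with its default taken from `$BUILDX_BUILDER`. A self-contained sketch of the same wiring:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var builder string
	cmd := &cobra.Command{
		Use: "buildx",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("builder:", builder)
			return nil
		},
	}
	// persistent: inherited by every subcommand, default from the env
	cmd.PersistentFlags().StringVar(&builder, "builder",
		os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```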


@@ -3,10 +3,10 @@ package commands
import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -14,20 +14,35 @@ type stopOptions struct {
builder string
}
func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
)
func runStop(dockerCli command.Cli, in stopOptions) error {
ctx := appcontext.Context()
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
defer release()
if in.builder != "" {
ng, err := getNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
if err := stop(ctx, dockerCli, ng); err != nil {
return err
}
return nil
}
return stop(ctx, nodes)
ng, err := getCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
if ng != nil {
return stop(ctx, dockerCli, ng)
}
return stopCurrent(ctx, dockerCli)
}
func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -42,23 +57,50 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runStop(cmd.Context(), dockerCli, options)
return runStop(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
// flags.StringArrayVarP(&options.outputs, "output", "o", []string{}, "Output destination (format: type=local,dest=path)")
_ = flags
return cmd
}
func stop(ctx context.Context, nodes []builder.Node) (err error) {
for _, node := range nodes {
if node.Driver != nil {
if err := node.Driver.Stop(ctx, true); err != nil {
func stop(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup) error {
dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
}
if node.Err != nil {
err = node.Err
if di.Err != nil {
err = di.Err
}
}
return err
}
func stopCurrent(ctx context.Context, dockerCli command.Cli) error {
dis, err := getDefaultDrivers(ctx, dockerCli, false, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
}
if di.Err != nil {
err = di.Err
}
}
return err


@@ -3,8 +3,6 @@ package commands
import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -15,7 +13,7 @@ import (
type uninstallOptions struct {
}
func runUninstall(_ command.Cli, _ uninstallOptions) error {
func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
dir := config.Dir()
cfg, err := config.Load(dir)
if err != nil {
@@ -53,12 +51,8 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runUninstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}


@@ -3,9 +3,6 @@ package commands
import (
"os"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
@@ -19,7 +16,7 @@ type useOptions struct {
}
func runUse(dockerCli command.Cli, in useOptions) error {
txn, release, err := storeutil.GetStore(dockerCli)
txn, release, err := getStore(dockerCli)
if err != nil {
return err
}
@@ -31,11 +28,14 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Errorf("run `docker context use default` to switch to default context")
}
if in.builder == "default" || in.builder == dockerCli.CurrentContext() {
ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
ep, err := getCurrentEndpoint(dockerCli)
if err != nil {
return err
}
return txn.SetCurrent(ep, "", false, false)
if err := txn.SetCurrent(ep, "", false, false); err != nil {
return err
}
return nil
}
list, err := dockerCli.ContextStore().List()
if err != nil {
@@ -46,15 +46,20 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder)
}
}
}
return errors.Wrapf(err, "failed to find instance %q", in.builder)
}
ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
ep, err := getCurrentEndpoint(dockerCli)
if err != nil {
return err
}
return txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault)
if err := txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault); err != nil {
return err
}
return nil
}
func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -71,12 +76,14 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runUse(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.isGlobal, "global", false, "Builder persists context changes")
flags.BoolVar(&options.isDefault, "default", false, "Set builder as default for current context")
_ = flags
return cmd
}


@@ -1,57 +1,477 @@
package commands
import (
"bufio"
"context"
"fmt"
"io"
"net/url"
"os"
"runtime"
"path/filepath"
"strings"
"github.com/docker/cli/cli/streams"
"github.com/docker/buildx/build"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/store"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/context/docker"
"github.com/docker/cli/cli/context/kubernetes"
ctxstore "github.com/docker/cli/cli/context/store"
dopts "github.com/docker/cli/opts"
dockerclient "github.com/docker/docker/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"k8s.io/client-go/tools/clientcmd"
)
func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
done := make(chan struct{})
var ok bool
go func() {
ok = promptForConfirmation(ins, out, msg)
close(done)
}()
select {
case <-ctx.Done():
return false, context.Cause(ctx)
case <-done:
return ok, nil
// getStore returns current builder instance store
func getStore(dockerCli command.Cli) (*store.Txn, func(), error) {
s, err := store.New(getConfigStorePath(dockerCli))
if err != nil {
return nil, nil, err
}
return s.Txn()
}
// promptForConfirmation requests and checks confirmation from user.
// This will display the provided message followed by ' [y/N] '. If
// the user input 'y' or 'Y' it returns true, otherwise false. If no
// message is provided "Are you sure you want to proceed? [y/N] "
// will be used instead.
//
// Copied from github.com/docker/cli since the upstream version changed
// recently with an incompatible change.
//
// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
// for discussion on the issue.
func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
if message == "" {
message = "Are you sure you want to proceed?"
}
message += " [y/N] "
_, _ = fmt.Fprint(outs, message)
// On Windows, force the use of the regular OS stdin stream.
if runtime.GOOS == "windows" {
ins = streams.NewIn(os.Stdin)
// getConfigStorePath will look for correct configuration store path;
// if `$BUILDX_CONFIG` is set - use it, otherwise use parent directory
// of Docker config file (i.e. `${DOCKER_CONFIG}/buildx`)
func getConfigStorePath(dockerCli command.Cli) string {
if buildxConfig := os.Getenv("BUILDX_CONFIG"); buildxConfig != "" {
logrus.Debugf("using config store %q based in \"$BUILDX_CONFIG\" environment variable", buildxConfig)
return buildxConfig
}
reader := bufio.NewReader(ins)
answer, _, _ := reader.ReadLine()
return strings.ToLower(string(answer)) == "y"
buildxConfig := filepath.Join(filepath.Dir(dockerCli.ConfigFile().Filename), "buildx")
logrus.Debugf("using default config store %q", buildxConfig)
return buildxConfig
}
// getCurrentEndpoint returns the current default endpoint value
func getCurrentEndpoint(dockerCli command.Cli) (string, error) {
name := dockerCli.CurrentContext()
if name != "default" {
return name, nil
}
de, err := getDockerEndpoint(dockerCli, name)
if err != nil {
return "", errors.Errorf("docker endpoint for %q not found", name)
}
return de, nil
}
// getDockerEndpoint returns docker endpoint string for given context
func getDockerEndpoint(dockerCli command.Cli, name string) (string, error) {
list, err := dockerCli.ContextStore().List()
if err != nil {
return "", err
}
for _, l := range list {
if l.Name == name {
ep, ok := l.Endpoints["docker"]
if !ok {
return "", errors.Errorf("context %q does not have a Docker endpoint", name)
}
typed, ok := ep.(docker.EndpointMeta)
if !ok {
return "", errors.Errorf("endpoint %q is not of type EndpointMeta, %T", ep, ep)
}
return typed.Host, nil
}
}
return "", nil
}
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
de, err := getDockerEndpoint(dockerCli, ep)
if err == nil && de != "" {
if ep == "default" {
return de, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// getCurrentInstance finds the current builder instance
func getCurrentInstance(txn *store.Txn, dockerCli command.Cli) (*store.NodeGroup, error) {
ep, err := getCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
ng, err := txn.Current(ep)
if err != nil {
return nil, err
}
if ng == nil {
ng, _ = getNodeGroup(txn, dockerCli, dockerCli.CurrentContext())
}
return ng, nil
}
// getNodeGroup returns nodegroup based on the name
func getNodeGroup(txn *store.Txn, dockerCli command.Cli, name string) (*store.NodeGroup, error) {
ng, err := txn.NodeGroupByName(name)
if err != nil {
if !os.IsNotExist(errors.Cause(err)) {
return nil, err
}
}
if ng != nil {
return ng, nil
}
if name == "default" {
name = dockerCli.CurrentContext()
}
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, l := range list {
if l.Name == name {
return &store.NodeGroup{
Name: "default",
Nodes: []store.Node{
{
Name: "default",
Endpoint: name,
},
},
}, nil
}
}
return nil, errors.Errorf("no builder %q found", name)
}
// driversForNodeGroup returns drivers for a nodegroup instance
func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, contextPathHash string) ([]build.DriverInfo, error) {
eg, _ := errgroup.WithContext(ctx)
dis := make([]build.DriverInfo, len(ng.Nodes))
var f driver.Factory
if ng.Driver != "" {
f = driver.GetFactory(ng.Driver, true)
if f == nil {
return nil, errors.Errorf("failed to find driver %q", f)
}
} else {
dockerapi, err := clientForEndpoint(dockerCli, ng.Nodes[0].Endpoint)
if err != nil {
return nil, err
}
f, err = driver.GetDefaultFactory(ctx, dockerapi, false)
if err != nil {
return nil, err
}
ng.Driver = f.Name()
}
for i, n := range ng.Nodes {
func(i int, n store.Node) {
eg.Go(func() error {
di := build.DriverInfo{
Name: n.Name,
Platform: n.Platforms,
}
defer func() {
dis[i] = di
}()
dockerapi, err := clientForEndpoint(dockerCli, n.Endpoint)
if err != nil {
di.Err = err
return nil
}
// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
dockerapi.NegotiateAPIVersion(ctx)
contextStore := dockerCli.ContextStore()
var kcc driver.KubeClientConfig
kcc, err = configFromContext(n.Endpoint, contextStore)
if err != nil {
// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
// try again with name="default".
// FIXME: n should retain real context name.
kcc, err = configFromContext("default", contextStore)
if err != nil {
logrus.Error(err)
}
}
tryToUseKubeConfigInCluster := false
if kcc == nil {
tryToUseKubeConfigInCluster = true
} else {
if _, err := kcc.ClientConfig(); err != nil {
tryToUseKubeConfigInCluster = true
}
}
if tryToUseKubeConfigInCluster {
kccInCluster := driver.KubeClientConfigInCluster{}
if _, err := kccInCluster.ClientConfig(); err == nil {
logrus.Debug("using kube config in cluster")
kcc = kccInCluster
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, dockerapi, dockerCli.ConfigFile(), kcc, n.Flags, n.ConfigFile, n.DriverOpts, n.Platforms, contextPathHash)
if err != nil {
di.Err = err
return nil
}
di.Driver = d
return nil
})
}(i, n)
}
if err := eg.Wait(); err != nil {
return nil, err
}
return dis, nil
}
func configFromContext(endpointName string, s ctxstore.Reader) (clientcmd.ClientConfig, error) {
if strings.HasPrefix(endpointName, "kubernetes://") {
u, _ := url.Parse(endpointName)
if kubeconfig := u.Query().Get("kubeconfig"); kubeconfig != "" {
clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig},
&clientcmd.ConfigOverrides{},
)
return clientConfig, nil
}
}
return kubernetes.ConfigFromContext(endpointName, s)
}
// clientForEndpoint returns a docker client for an endpoint
func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, l := range list {
if l.Name == name {
dep, ok := l.Endpoints["docker"]
if !ok {
return nil, errors.Errorf("context %q does not have a Docker endpoint", name)
}
epm, ok := dep.(docker.EndpointMeta)
if !ok {
return nil, errors.Errorf("endpoint %q is not of type EndpointMeta, %T", dep, dep)
}
ep, err := docker.WithTLSData(dockerCli.ContextStore(), name, epm)
if err != nil {
return nil, err
}
clientOpts, err := ep.ClientOpts()
if err != nil {
return nil, err
}
return dockerclient.NewClientWithOpts(clientOpts...)
}
}
ep := docker.Endpoint{
EndpointMeta: docker.EndpointMeta{
Host: name,
},
}
clientOpts, err := ep.ClientOpts()
if err != nil {
return nil, err
}
return dockerclient.NewClientWithOpts(clientOpts...)
}
func getInstanceOrDefault(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
var defaultOnly bool
if instance == "default" && instance != dockerCli.CurrentContext() {
return nil, errors.Errorf("use `docker --context=default buildx` to switch to default context")
}
if instance == "default" || instance == dockerCli.CurrentContext() {
instance = ""
defaultOnly = true
}
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, l := range list {
if l.Name == instance {
return nil, errors.Errorf("use `docker --context=%s buildx` to switch to context %s", instance, instance)
}
}
if instance != "" {
return getInstanceByName(ctx, dockerCli, instance, contextPathHash)
}
return getDefaultDrivers(ctx, dockerCli, defaultOnly, contextPathHash)
}
func getInstanceByName(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
txn, release, err := getStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
ng, err := txn.NodeGroupByName(instance)
if err != nil {
return nil, err
}
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
}
// getDefaultDrivers returns drivers based on current cli config
func getDefaultDrivers(ctx context.Context, dockerCli command.Cli, defaultOnly bool, contextPathHash string) ([]build.DriverInfo, error) {
txn, release, err := getStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
if !defaultOnly {
ng, err := getCurrentInstance(txn, dockerCli)
if err != nil {
return nil, err
}
if ng != nil {
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, dockerCli.Client(), dockerCli.ConfigFile(), nil, nil, "", nil, nil, contextPathHash)
if err != nil {
return nil, err
}
return []build.DriverInfo{
{
Name: "default",
Driver: d,
},
}, nil
}
func loadInfoData(ctx context.Context, d *dinfo) error {
if d.di.Driver == nil {
return nil
}
info, err := d.di.Driver.Info(ctx)
if err != nil {
return err
}
d.info = info
if info.Status == driver.Running {
c, err := d.di.Driver.Client(ctx)
if err != nil {
return err
}
workers, err := c.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
for _, w := range workers {
for _, p := range w.Platforms {
d.platforms = append(d.platforms, p)
}
}
d.platforms = platformutil.Dedupe(d.platforms)
}
return nil
}
func loadNodeGroupData(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
eg, _ := errgroup.WithContext(ctx)
dis, err := driversForNodeGroup(ctx, dockerCli, ngi.ng, "")
if err != nil {
return err
}
ngi.drivers = make([]dinfo, len(dis))
for i, di := range dis {
d := di
ngi.drivers[i].di = &d
func(d *dinfo) {
eg.Go(func() error {
if err := loadInfoData(ctx, d); err != nil {
d.err = err
}
return nil
})
}(&ngi.drivers[i])
}
if err := eg.Wait(); err != nil {
return err
}
kubernetesDriverCount := 0
for _, di := range ngi.drivers {
if di.info != nil && len(di.info.DynamicNodes) > 0 {
kubernetesDriverCount++
}
}
isAllKubernetesDrivers := len(ngi.drivers) == kubernetesDriverCount
if isAllKubernetesDrivers {
var drivers []dinfo
var dynamicNodes []store.Node
for _, di := range ngi.drivers {
// dynamic nodes are used in Kubernetes driver.
// Kubernetes pods are dynamically mapped to BuildKit Nodes.
if di.info != nil && len(di.info.DynamicNodes) > 0 {
for i := 0; i < len(di.info.DynamicNodes); i++ {
// all []dinfo share *build.DriverInfo and *driver.Info
diClone := di
if pl := di.info.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.platforms = pl
}
drivers = append(drivers, diClone)
}
dynamicNodes = append(dynamicNodes, di.info.DynamicNodes...)
}
}
// not append (remove the static nodes in the store)
ngi.ng.Nodes = dynamicNodes
ngi.drivers = drivers
ngi.ng.Dynamic = true
}
return nil
}
func dockerAPI(dockerCli command.Cli) *api {
return &api{dockerCli: dockerCli}
}
type api struct {
dockerCli command.Cli
}
func (a *api) DockerAPI(name string) (dockerclient.APIClient, error) {
if name == "" {
name = a.dockerCli.CurrentContext()
}
return clientForEndpoint(a.dockerCli, name)
}


@@ -3,15 +3,13 @@ package commands
import (
"fmt"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
func runVersion(dockerCli command.Cli) error {
fmt.Println(version.Package, version.Version, version.Revision)
return nil
}
@@ -24,11 +22,6 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runVersion(dockerCli)
},
ValidArgsFunction: completion.Disable,
}
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}


@@ -1,286 +0,0 @@
package build
import (
"context"
"io"
"path/filepath"
"strings"
"sync"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/container"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
)
const defaultTargetName = "default"
// RunBuild runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, nil, err"). The caller can
// inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
for name, path := range in.NamedContexts {
contexts[name] = build.NamedContext{Path: path}
}
opts := build.Options{
Inputs: build.Inputs{
ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName,
InStream: build.NewSyncMultiReader(inStream),
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
}
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, nil, err
}
opts.Platforms = platforms
dockerConfig := dockerCli.ConfigFile()
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, nil, err
}
opts.Session = append(opts.Session, secrets)
sshSpecs := in.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, err := controllerapi.CreateExports(in.Exports)
if err != nil {
return nil, nil, nil, err
}
if in.ExportPush {
var pushUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterImage {
outputs[i].Attrs["push"] = "true"
pushUsed = true
}
}
if !pushUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterImage,
Attrs: map[string]string{
"push": "true",
},
})
}
}
if in.ExportLoad {
var loadUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterDocker {
if _, ok := outputs[i].Attrs["dest"]; !ok {
loadUsed = true
break
}
}
}
if !loadUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterDocker,
Attrs: map[string]string{},
})
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
}
}
opts.Exports = outputs
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
opts.Attests = controllerapi.CreateAttestations(in.Attests)
opts.SourcePolicy = in.SourcePolicy
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, nil, err
}
opts.Allow = allow
if in.CallFunc != nil {
opts.CallFunc = &build.CallFunc{
Name: in.CallFunc.Name,
Format: in.CallFunc.Format,
IgnoreStatus: in.CallFunc.IgnoreStatus,
}
}
// key string used for kubernetes "sticky" mode
contextPathHash, err := filepath.Abs(in.ContextPath)
if err != nil {
contextPathHash = in.ContextPath
}
// TODO: this should not be loaded this side of the controller api
b, err := builder.New(dockerCli,
builder.WithName(in.Builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return nil, nil, nil, err
}
var inputs *build.Inputs
buildOptions := map[string]build.Options{defaultTargetName: opts}
resp, res, err := buildTargets(ctx, dockerCli, nodes, buildOptions, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, nil, err
}
if i, ok := buildOptions[defaultTargetName]; ok {
inputs = &i.Inputs
}
return resp, res, inputs, nil
}
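// Usage sketch (editor's example; option values are invented): a single build
// routed through RunBuild. On failure the returned ResultHandle, when present,
// stays open so the caller can debug the failed step before releasing it.
func exampleRunBuild(ctx context.Context, dockerCli command.Cli, pw progress.Writer) error {
	opts := &controllerapi.BuildOptions{
		ContextPath:    ".",
		DockerfileName: "Dockerfile",
		Tags:           []string{"example.com/app:dev"},
	}
	resp, res, _, err := RunBuild(ctx, dockerCli, opts, strings.NewReader(""), pw, false)
	if err != nil {
		if res != nil {
			res.Done() // release the debuggable result once inspected
		}
		return err
	}
	_ = resp.ExporterResponse // exporter metadata, e.g. the image digest
	return nil
}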
// buildTargets runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle
var resp map[string]*client.SolveResponse
var err error
if generateResult {
var mu sync.Mutex
var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock()
defer mu.Unlock()
if res == nil || driverIndex < idx {
idx, res = driverIndex, gotRes
}
})
} else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress)
}
if err != nil {
return nil, res, err
}
return resp[defaultTargetName], res, err
}
func wrapBuildError(err error, bake bool) error {
if err == nil {
return nil
}
st, ok := grpcerrors.AsGRPCStatus(err)
if ok {
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
msg := "current frontend does not support --build-context."
if bake {
msg = "current frontend does not support defining additional contexts for targets."
}
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
return &wrapped{err, msg}
}
}
return err
}
type wrapped struct {
err error
msg string
}
func (w *wrapped) Error() string {
return w.msg
}
func (w *wrapped) Unwrap() error {
return w.err
}
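// Sketch (editor's example): the wrapped error surfaces the friendlier message
// through Error(), while errors.Unwrap still reaches the original gRPC status.
func exampleWrappedError(buildErr error) (string, error) {
	err := wrapBuildError(buildErr, false)
	if err == nil {
		return "", nil
	}
	return err.Error(), errors.Unwrap(err) // message hint, original cause
}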
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
defer release()
return txn.UpdateLastActivity(ng)
}
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
if u == nil {
return nil
}
values := make(map[string]*container.Ulimit)
for k, v := range u.Values {
values[k] = &container.Ulimit{
Name: v.Name,
Hard: v.Hard,
Soft: v.Soft,
}
}
return dockeropts.NewUlimitOpt(&values)
}


@@ -1,33 +0,0 @@
package control
import (
"context"
"io"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
)
type BuildxController interface {
Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error)
// Invoke starts an IO session into the specified process.
// If pid doesn't match any running process, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error
Close() error
List(ctx context.Context) (refs []string, _ error)
Disconnect(ctx context.Context, ref string) error
ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
DisconnectProcess(ctx context.Context, ref, pid string) error
Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
}
type ControlOptions struct {
ServerConfig string
Root string
Detach bool
}
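// Usage sketch (editor's example; all values are illustrative): one
// build-then-debug cycle against any BuildxController implementation.
func exampleBuildAndInvoke(ctx context.Context, c BuildxController, opts *controllerapi.BuildOptions, stdin io.ReadCloser, stdout, stderr io.WriteCloser, pw progress.Writer) error {
	ref, _, _, err := c.Build(ctx, opts, stdin, pw)
	if err != nil {
		return err
	}
	// An unknown pid starts a fresh process inside the build container.
	return c.Invoke(ctx, ref, "proc-1", &controllerapi.InvokeConfig{Tty: true}, stdin, stdout, stderr)
}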


@@ -1,36 +0,0 @@
package controller
import (
"context"
"fmt"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/controller/local"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
)
func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
var name string
if opts.Detach {
name = "remote"
} else {
name = "local"
}
var c control.BuildxController
err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
if opts.Detach {
c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
} else {
c = local.NewLocalBuildxController(ctx, dockerCli, l)
}
return err
})
if err != nil {
return nil, errors.Wrap(err, "failed to start buildx controller")
}
return c, nil
}
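// Usage sketch (editor's example): a non-detached controller runs builds
// in-process; Detach: true starts or connects to the long-running server.
func exampleNewController(ctx context.Context, dockerCli command.Cli, pw progress.Writer) error {
	c, err := NewController(ctx, control.ControlOptions{Detach: false}, dockerCli, pw)
	if err != nil {
		return err
	}
	defer c.Close()
	return nil
}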


@@ -1,48 +0,0 @@
package errdefs
import (
"io"
"github.com/containerd/typeurl/v2"
"github.com/docker/buildx/util/desktop"
"github.com/moby/buildkit/util/grpcerrors"
)
func init() {
typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
}
type BuildError struct {
*Build
error
}
func (e *BuildError) Unwrap() error {
return e.error
}
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return e.Build
}
func (e *BuildError) PrintBuildDetails(w io.Writer) error {
if e.Ref == "" {
return nil
}
ebr := &desktop.ErrorWithBuildRef{
Ref: e.Ref,
Err: e.error,
}
return ebr.Print(w)
}
func WrapBuild(err error, sessionID string, ref string) error {
if err == nil {
return nil
}
return &BuildError{Build: &Build{SessionID: sessionID, Ref: ref}, error: err}
}
func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: b}
}
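// Usage sketch (editor's example; assumes the stdlib "errors" package and a
// hypothetical handleFailedBuild helper at the call site): callers can recover
// the build metadata from a wrapped error.
//
//	var be *BuildError
//	if errors.As(err, &be) {
//	    // SessionID and Ref are promoted from the embedded *Build.
//	    handleFailedBuild(be.SessionID, be.Ref)
//	}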


@@ -1,157 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc v3.11.4
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Build struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
SessionID string `protobuf:"bytes,1,opt,name=SessionID,proto3" json:"SessionID,omitempty"`
Ref string `protobuf:"bytes,2,opt,name=Ref,proto3" json:"Ref,omitempty"`
}
func (x *Build) Reset() {
*x = Build{}
if protoimpl.UnsafeEnabled {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Build) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Build) ProtoMessage() {}
func (x *Build) ProtoReflect() protoreflect.Message {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Build.ProtoReflect.Descriptor instead.
func (*Build) Descriptor() ([]byte, []int) {
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP(), []int{0}
}
func (x *Build) GetSessionID() string {
if x != nil {
return x.SessionID
}
return ""
}
func (x *Build) GetRef() string {
if x != nil {
return x.Ref
}
return ""
}
var File_github_com_docker_buildx_controller_errdefs_errdefs_proto protoreflect.FileDescriptor
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = []byte{
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72,
0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x2f, 0x65, 0x72,
0x72, 0x64, 0x65, 0x66, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x15, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2e, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2e, 0x65, 0x72, 0x72, 0x64, 0x65,
0x66, 0x73, 0x22, 0x37, 0x0a, 0x05, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x53,
0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09,
0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x10, 0x0a, 0x03, 0x52, 0x65, 0x66,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x52, 0x65, 0x66, 0x42, 0x2d, 0x5a, 0x2b, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, 0x6b, 0x65, 0x72,
0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c,
0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce sync.Once
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc
)
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP() []byte {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce.Do(func() {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData)
})
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = []interface{}{
(*Build)(nil), // 0: docker.buildx.errdefs.Build
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() }
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() {
if File_github_com_docker_buildx_controller_errdefs_errdefs_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Build); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes,
DependencyIndexes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs,
MessageInfos: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes,
}.Build()
File_github_com_docker_buildx_controller_errdefs_errdefs_proto = out.File
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = nil
}


@@ -1,10 +0,0 @@
syntax = "proto3";
package docker.buildx.errdefs;
option go_package = "github.com/docker/buildx/controller/errdefs";
message Build {
string SessionID = 1;
string Ref = 2;
}


@@ -1,241 +0,0 @@
// Code generated by protoc-gen-go-vtproto. DO NOT EDIT.
// protoc-gen-go-vtproto version: v0.6.1-0.20240319094008-0393e58bdf10
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
fmt "fmt"
protohelpers "github.com/planetscale/vtprotobuf/protohelpers"
proto "google.golang.org/protobuf/proto"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
io "io"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
func (m *Build) CloneVT() *Build {
if m == nil {
return (*Build)(nil)
}
r := new(Build)
r.SessionID = m.SessionID
r.Ref = m.Ref
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
}
return r
}
func (m *Build) CloneMessageVT() proto.Message {
return m.CloneVT()
}
func (this *Build) EqualVT(that *Build) bool {
if this == that {
return true
} else if this == nil || that == nil {
return false
}
if this.SessionID != that.SessionID {
return false
}
if this.Ref != that.Ref {
return false
}
return string(this.unknownFields) == string(that.unknownFields)
}
func (this *Build) EqualMessageVT(thatMsg proto.Message) bool {
that, ok := thatMsg.(*Build)
if !ok {
return false
}
return this.EqualVT(that)
}
func (m *Build) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
size := m.SizeVT()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBufferVT(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Build) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
func (m *Build) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
i := len(dAtA)
_ = i
var l int
_ = l
if m.unknownFields != nil {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
if len(m.Ref) > 0 {
i -= len(m.Ref)
copy(dAtA[i:], m.Ref)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ref)))
i--
dAtA[i] = 0x12
}
if len(m.SessionID) > 0 {
i -= len(m.SessionID)
copy(dAtA[i:], m.SessionID)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SessionID)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *Build) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.SessionID)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
l = len(m.Ref)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
func (m *Build) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Build: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Build: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SessionID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Ref", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Ref = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return protohelpers.ErrInvalidLength
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
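// Sketch (editor's example, illustrative values): a MarshalVT/UnmarshalVT
// round-trip preserves the message, which EqualVT can confirm.
func exampleBuildRoundTrip() (bool, error) {
	in := &Build{SessionID: "sess-1", Ref: "ref-1"}
	data, err := in.MarshalVT()
	if err != nil {
		return false, err
	}
	out := new(Build)
	if err := out.UnmarshalVT(data); err != nil {
		return false, err
	}
	return out.EqualVT(in), nil
}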


@@ -1,152 +0,0 @@
package local
import (
"context"
"io"
"sync/atomic"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
return &localController{
dockerCli: dockerCli,
sessionID: "local",
processes: processes.NewManager(),
}
}
type buildConfig struct {
// TODO: these two structs should be merged
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
resultCtx *build.ResultHandle
buildOptions *controllerapi.BuildOptions
}
type localController struct {
dockerCli command.Cli
sessionID string
buildConfig buildConfig
processes *processes.Manager
buildOnGoing atomic.Bool
}
func (b *localController) Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
if !b.buildOnGoing.CompareAndSwap(false, true) {
return "", nil, nil, errors.New("build ongoing")
}
defer b.buildOnGoing.Store(false)
resp, res, dockerfileMappings, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
// NOTE: RunBuild can return *build.ResultHandle even on error.
if res != nil {
b.buildConfig = buildConfig{
resultCtx: res,
buildOptions: options,
}
if buildErr != nil {
var ref string
var ebr *desktop.ErrorWithBuildRef
if errors.As(buildErr, &ebr) {
ref = ebr.Ref
}
buildErr = controllererrors.WrapBuild(buildErr, b.sessionID, ref)
}
}
if buildErr != nil {
return "", nil, nil, buildErr
}
return b.sessionID, resp, dockerfileMappings, nil
}
func (b *localController) ListProcesses(ctx context.Context, sessionID string) (infos []*controllerapi.ProcessInfo, retErr error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.ListProcesses(), nil
}
func (b *localController) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.DeleteProcess(pid)
}
func (b *localController) cancelRunningProcesses() {
b.processes.CancelRunningProcesses()
}
func (b *localController) Invoke(ctx context.Context, sessionID string, pid string, cfg *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
proc, ok := b.processes.Get(pid)
if !ok {
// Start a new process.
if b.buildConfig.resultCtx == nil {
return errors.New("no build result is registered")
}
var err error
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
ioCancelledCh := make(chan struct{})
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func(error) { close(ioCancelledCh) })
select {
case <-ioCancelledCh:
return errors.Errorf("io cancelled")
case err := <-proc.Done():
return err
case <-ctx.Done():
return context.Cause(ctx)
}
}
func (b *localController) Kill(context.Context) error {
b.Close()
return nil
}
func (b *localController) Close() error {
b.cancelRunningProcesses()
if b.buildConfig.resultCtx != nil {
b.buildConfig.resultCtx.Done()
}
// TODO: cancel ongoing builds?
return nil
}
func (b *localController) List(ctx context.Context) (res []string, _ error) {
return []string{b.sessionID}, nil
}
func (b *localController) Disconnect(ctx context.Context, key string) error {
b.Close()
return nil
}
func (b *localController) Inspect(ctx context.Context, sessionID string) (*controllerapi.InspectResponse, error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}


@@ -1,20 +0,0 @@
package pb
func CreateAttestations(attests []*Attest) map[string]*string {
result := map[string]*string{}
for _, attest := range attests {
// ignore duplicates
if _, ok := result[attest.Type]; ok {
continue
}
if attest.Disabled {
result[attest.Type] = nil
continue
}
attrs := attest.Attrs
result[attest.Type] = &attrs
}
return result
}
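// Sketch (editor's example, hypothetical inputs): a disabled attestation maps
// to a nil entry, an enabled one to its attribute string.
func exampleCreateAttestations() map[string]*string {
	return CreateAttestations([]*Attest{
		{Type: "provenance", Attrs: "mode=max"}, // result["provenance"] -> &"mode=max"
		{Type: "sbom", Disabled: true},          // result["sbom"] -> nil
	})
}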


@@ -1,21 +0,0 @@
package pb
import "github.com/moby/buildkit/client"
func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
for k, v := range entry.Attrs {
out.Attrs[k] = v
}
outs = append(outs, out)
}
return outs
}
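// Sketch (editor's example, hypothetical values): converting controller cache
// entries into BuildKit client cache options.
func exampleCreateCaches() []client.CacheOptionsEntry {
	return CreateCaches([]*CacheOptionsEntry{{
		Type:  "registry",
		Attrs: map[string]string{"ref": "example.com/app:cache"},
	}})
}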

File diff suppressed because it is too large


@@ -1,250 +0,0 @@
syntax = "proto3";
package buildx.controller.v1;
import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";
option go_package = "github.com/docker/buildx/controller/pb";
service Controller {
rpc Build(BuildRequest) returns (BuildResponse);
rpc Inspect(InspectRequest) returns (InspectResponse);
rpc Status(StatusRequest) returns (stream StatusResponse);
rpc Input(stream InputMessage) returns (InputResponse);
rpc Invoke(stream Message) returns (stream Message);
rpc List(ListRequest) returns (ListResponse);
rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
rpc Info(InfoRequest) returns (InfoResponse);
rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}
message ListProcessesRequest {
string SessionID = 1;
}
message ListProcessesResponse {
repeated ProcessInfo Infos = 1;
}
message ProcessInfo {
string ProcessID = 1;
InvokeConfig InvokeConfig = 2;
}
message DisconnectProcessRequest {
string SessionID = 1;
string ProcessID = 2;
}
message DisconnectProcessResponse {
}
message BuildRequest {
string SessionID = 1;
BuildOptions Options = 2;
}
message BuildOptions {
string ContextPath = 1;
string DockerfileName = 2;
CallFunc CallFunc = 3;
map<string, string> NamedContexts = 4;
repeated string Allow = 5;
repeated Attest Attests = 6;
map<string, string> BuildArgs = 7;
repeated CacheOptionsEntry CacheFrom = 8;
repeated CacheOptionsEntry CacheTo = 9;
string CgroupParent = 10;
repeated ExportEntry Exports = 11;
repeated string ExtraHosts = 12;
map<string, string> Labels = 13;
string NetworkMode = 14;
repeated string NoCacheFilter = 15;
repeated string Platforms = 16;
repeated Secret Secrets = 17;
int64 ShmSize = 18;
repeated SSH SSH = 19;
repeated string Tags = 20;
string Target = 21;
UlimitOpt Ulimits = 22;
string Builder = 23;
bool NoCache = 24;
bool Pull = 25;
bool ExportPush = 26;
bool ExportLoad = 27;
moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
string Ref = 29;
string GroupRef = 30;
repeated string Annotations = 31;
string ProvenanceResponseMode = 32;
}
message ExportEntry {
string Type = 1;
map<string, string> Attrs = 2;
string Destination = 3;
}
message CacheOptionsEntry {
string Type = 1;
map<string, string> Attrs = 2;
}
message Attest {
string Type = 1;
bool Disabled = 2;
string Attrs = 3;
}
message SSH {
string ID = 1;
repeated string Paths = 2;
}
message Secret {
string ID = 1;
string FilePath = 2;
string Env = 3;
}
message CallFunc {
string Name = 1;
string Format = 2;
bool IgnoreStatus = 3;
}
message InspectRequest {
string SessionID = 1;
}
message InspectResponse {
BuildOptions Options = 1;
}
message UlimitOpt {
map<string, Ulimit> values = 1;
}
message Ulimit {
string Name = 1;
int64 Hard = 2;
int64 Soft = 3;
}
message BuildResponse {
map<string, string> ExporterResponse = 1;
}
message DisconnectRequest {
string SessionID = 1;
}
message DisconnectResponse {}
message ListRequest {
string SessionID = 1;
}
message ListResponse {
repeated string keys = 1;
}
message InputMessage {
oneof Input {
InputInitMessage Init = 1;
DataMessage Data = 2;
}
}
message InputInitMessage {
string SessionID = 1;
}
message DataMessage {
bool EOF = 1; // true if eof was reached
bytes Data = 2; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message InputResponse {}
message Message {
oneof Input {
InitMessage Init = 1;
// FdMessage used from client to server for input (stdin) and
// from server to client for output (stdout, stderr)
FdMessage File = 2;
// ResizeMessage used from client to server for terminal resize events
ResizeMessage Resize = 3;
// SignalMessage is used from client to server to send signal events
SignalMessage Signal = 4;
}
}
message InitMessage {
string SessionID = 1;
// If ProcessID already exists in the server, it tries to connect to it
// instead of starting a new one. In this case, InvokeConfig will be ignored.
string ProcessID = 2;
InvokeConfig InvokeConfig = 3;
}
message InvokeConfig {
repeated string Entrypoint = 1;
repeated string Cmd = 2;
bool NoCmd = 11; // Do not set cmd but use the image's default
repeated string Env = 3;
string User = 4;
bool NoUser = 5; // Do not set user but use the image's default
string Cwd = 6;
bool NoCwd = 7; // Do not set cwd but use the image's default
bool Tty = 8;
bool Rollback = 9; // Kill all processes in the container and recreate it.
bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}
message FdMessage {
uint32 Fd = 1; // what fd the data was from
bool EOF = 2; // true if eof was reached
bytes Data = 3; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message ResizeMessage {
uint32 Rows = 1;
uint32 Cols = 2;
}
message SignalMessage {
// we only send the name (e.g. HUP, INT) because the integer values
// are platform dependent.
string Name = 1;
}
message StatusRequest {
string SessionID = 1;
}
message StatusResponse {
repeated moby.buildkit.v1.Vertex vertexes = 1;
repeated moby.buildkit.v1.VertexStatus statuses = 2;
repeated moby.buildkit.v1.VertexLog logs = 3;
repeated moby.buildkit.v1.VertexWarning warnings = 4;
}
message InfoRequest {}
message InfoResponse {
BuildxVersion buildxVersion = 1;
}
message BuildxVersion {
string package = 1;
string version = 2;
string revision = 3;
}
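
The chunking note on DataMessage above is the wire contract for the Input stream: every message must stay under gRPC's default 4 MB receive limit. A minimal Go sketch of a conforming sender, assuming the standard protoc-gen-go oneof wrapper name (InputMessage_Data) generated from this file:

// Hypothetical client-side helper: streams r to the Input RPC in chunks that
// stay safely below the default gRPC message ceiling referenced above.
func sendInput(r io.Reader, stream grpc.ClientStreamingClient[InputMessage, InputResponse]) error {
	buf := make([]byte, 3<<20) // 3 MiB leaves headroom under MaxRecvMsgSize
	for {
		n, err := r.Read(buf)
		if n > 0 {
			msg := &InputMessage{Input: &InputMessage_Data{Data: &DataMessage{Data: buf[:n]}}}
			if serr := stream.Send(msg); serr != nil {
				return serr
			}
		}
		if err == io.EOF { // tell the server the input is complete
			return stream.Send(&InputMessage{Input: &InputMessage_Data{Data: &DataMessage{EOF: true}}})
		}
		if err != nil {
			return err
		}
	}
}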


@@ -1,452 +0,0 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.11.4
// source: github.com/docker/buildx/controller/pb/controller.proto
package pb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Controller_Build_FullMethodName = "/buildx.controller.v1.Controller/Build"
Controller_Inspect_FullMethodName = "/buildx.controller.v1.Controller/Inspect"
Controller_Status_FullMethodName = "/buildx.controller.v1.Controller/Status"
Controller_Input_FullMethodName = "/buildx.controller.v1.Controller/Input"
Controller_Invoke_FullMethodName = "/buildx.controller.v1.Controller/Invoke"
Controller_List_FullMethodName = "/buildx.controller.v1.Controller/List"
Controller_Disconnect_FullMethodName = "/buildx.controller.v1.Controller/Disconnect"
Controller_Info_FullMethodName = "/buildx.controller.v1.Controller/Info"
Controller_ListProcesses_FullMethodName = "/buildx.controller.v1.Controller/ListProcesses"
Controller_DisconnectProcess_FullMethodName = "/buildx.controller.v1.Controller/DisconnectProcess"
)
// ControllerClient is the client API for Controller service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type ControllerClient interface {
Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error)
Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error)
Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error)
Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error)
Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error)
List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error)
Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error)
DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error)
}
type controllerClient struct {
cc grpc.ClientConnInterface
}
func NewControllerClient(cc grpc.ClientConnInterface) ControllerClient {
return &controllerClient{cc}
}
func (c *controllerClient) Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BuildResponse)
err := c.cc.Invoke(ctx, Controller_Build_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InspectResponse)
err := c.cc.Invoke(ctx, Controller_Inspect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[0], Controller_Status_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[StatusRequest, StatusResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusClient = grpc.ServerStreamingClient[StatusResponse]
func (c *controllerClient) Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[1], Controller_Input_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[InputMessage, InputResponse]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputClient = grpc.ClientStreamingClient[InputMessage, InputResponse]
func (c *controllerClient) Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[2], Controller_Invoke_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[Message, Message]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeClient = grpc.BidiStreamingClient[Message, Message]
func (c *controllerClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListResponse)
err := c.cc.Invoke(ctx, Controller_List_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectResponse)
err := c.cc.Invoke(ctx, Controller_Disconnect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InfoResponse)
err := c.cc.Invoke(ctx, Controller_Info_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListProcessesResponse)
err := c.cc.Invoke(ctx, Controller_ListProcesses_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectProcessResponse)
err := c.cc.Invoke(ctx, Controller_DisconnectProcess_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ControllerServer is the server API for Controller service.
// All implementations should embed UnimplementedControllerServer
// for forward compatibility.
type ControllerServer interface {
Build(context.Context, *BuildRequest) (*BuildResponse, error)
Inspect(context.Context, *InspectRequest) (*InspectResponse, error)
Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error
Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error
Invoke(grpc.BidiStreamingServer[Message, Message]) error
List(context.Context, *ListRequest) (*ListResponse, error)
Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error)
Info(context.Context, *InfoRequest) (*InfoResponse, error)
ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error)
DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error)
}
// UnimplementedControllerServer should be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedControllerServer struct{}
func (UnimplementedControllerServer) Build(context.Context, *BuildRequest) (*BuildResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Build not implemented")
}
func (UnimplementedControllerServer) Inspect(context.Context, *InspectRequest) (*InspectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Inspect not implemented")
}
func (UnimplementedControllerServer) Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error {
return status.Errorf(codes.Unimplemented, "method Status not implemented")
}
func (UnimplementedControllerServer) Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error {
return status.Errorf(codes.Unimplemented, "method Input not implemented")
}
func (UnimplementedControllerServer) Invoke(grpc.BidiStreamingServer[Message, Message]) error {
return status.Errorf(codes.Unimplemented, "method Invoke not implemented")
}
func (UnimplementedControllerServer) List(context.Context, *ListRequest) (*ListResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method List not implemented")
}
func (UnimplementedControllerServer) Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Disconnect not implemented")
}
func (UnimplementedControllerServer) Info(context.Context, *InfoRequest) (*InfoResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
}
func (UnimplementedControllerServer) ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListProcesses not implemented")
}
func (UnimplementedControllerServer) DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DisconnectProcess not implemented")
}
func (UnimplementedControllerServer) testEmbeddedByValue() {}
// UnsafeControllerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ControllerServer will
// result in compilation errors.
type UnsafeControllerServer interface {
mustEmbedUnimplementedControllerServer()
}
func RegisterControllerServer(s grpc.ServiceRegistrar, srv ControllerServer) {
// If the following call panics, it indicates UnimplementedControllerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Controller_ServiceDesc, srv)
}
func _Controller_Build_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BuildRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Build(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Build_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Build(ctx, req.(*BuildRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Inspect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InspectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Inspect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Inspect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Inspect(ctx, req.(*InspectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Status_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(StatusRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(ControllerServer).Status(m, &grpc.GenericServerStream[StatusRequest, StatusResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusServer = grpc.ServerStreamingServer[StatusResponse]
func _Controller_Input_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Input(&grpc.GenericServerStream[InputMessage, InputResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputServer = grpc.ClientStreamingServer[InputMessage, InputResponse]
func _Controller_Invoke_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Invoke(&grpc.GenericServerStream[Message, Message]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeServer = grpc.BidiStreamingServer[Message, Message]
func _Controller_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).List(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_List_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).List(ctx, req.(*ListRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Disconnect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Disconnect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Disconnect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Disconnect(ctx, req.(*DisconnectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InfoRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Info(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Info_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Info(ctx, req.(*InfoRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_ListProcesses_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListProcessesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).ListProcesses(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_ListProcesses_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).ListProcesses(ctx, req.(*ListProcessesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_DisconnectProcess_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectProcessRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).DisconnectProcess(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_DisconnectProcess_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).DisconnectProcess(ctx, req.(*DisconnectProcessRequest))
}
return interceptor(ctx, in, info, handler)
}
// Controller_ServiceDesc is the grpc.ServiceDesc for Controller service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Controller_ServiceDesc = grpc.ServiceDesc{
ServiceName: "buildx.controller.v1.Controller",
HandlerType: (*ControllerServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Build",
Handler: _Controller_Build_Handler,
},
{
MethodName: "Inspect",
Handler: _Controller_Inspect_Handler,
},
{
MethodName: "List",
Handler: _Controller_List_Handler,
},
{
MethodName: "Disconnect",
Handler: _Controller_Disconnect_Handler,
},
{
MethodName: "Info",
Handler: _Controller_Info_Handler,
},
{
MethodName: "ListProcesses",
Handler: _Controller_ListProcesses_Handler,
},
{
MethodName: "DisconnectProcess",
Handler: _Controller_DisconnectProcess_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "Status",
Handler: _Controller_Status_Handler,
ServerStreams: true,
},
{
StreamName: "Input",
Handler: _Controller_Input_Handler,
ClientStreams: true,
},
{
StreamName: "Invoke",
Handler: _Controller_Invoke_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "github.com/docker/buildx/controller/pb/controller.proto",
}
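// Usage sketch (editor's example; the session ID is illustrative): draining the
// server-streaming Status RPC through the generated client. The response field
// names follow the standard protoc-gen-go mapping for this service's proto.
func exampleStatusStream(ctx context.Context, conn grpc.ClientConnInterface) error {
	c := NewControllerClient(conn)
	stream, err := c.Status(ctx, &StatusRequest{SessionID: "sess-1"})
	if err != nil {
		return err
	}
	for {
		resp, err := stream.Recv()
		if err != nil {
			return err // io.EOF marks a clean end of the stream
		}
		_ = resp.Vertexes // plus Statuses, Logs and Warnings
	}
}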

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff