Compare commits


6 Commits

Author SHA1 Message Date
CrazyMax
2af40b75b7 Merge pull request #1415 from jedevc/cherry-pick-1383-to-0.9
[0.9] driver: don't create tracer delegate opt if tracer is nil
2022-11-21 10:29:09 +01:00
Justin Chadwell
83f3691c15 driver: don't create tracer delegate opt if tracer is nil
The error handling for the cast to client.TracerDelegate was incorrect:
previously, a client would unconditionally append the opt.

This resulted in a scenario where the ClientOpt itself was not nil but the
tracer delegate inside it was, which isn't an error case explicitly handled
by buildkit.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2022-11-18 13:36:31 +00:00
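
The pattern behind this fix is worth spelling out: in Go, an interface value wrapping a nil pointer is itself non-nil, so unconditionally appending an option can hand buildkit an opt whose tracer delegate is unusable. Below is a minimal sketch of the guarded construction; `TracerDelegate`, `ClientOpt`, and `newClientOpts` are hypothetical stand-ins, not the actual buildx or buildkit source.

```go
// Sketch only: illustrative stand-ins for buildkit's client.TracerDelegate
// and the driver's option plumbing.
package main

import "fmt"

// TracerDelegate mirrors the shape of the tracer interface (hypothetical).
type TracerDelegate interface {
	Trace(msg string)
}

// ClientOpt is a hypothetical client option carrying a tracer delegate.
type ClientOpt struct {
	Tracer TracerDelegate
}

// newClientOpts creates the tracer opt only when the delegate is usable,
// instead of unconditionally appending one.
func newClientOpts(tracer TracerDelegate) []ClientOpt {
	var opts []ClientOpt
	if tracer != nil {
		opts = append(opts, ClientOpt{Tracer: tracer})
	}
	return opts
}

func main() {
	fmt.Println(len(newClientOpts(nil))) // 0: no opt is created for a nil tracer
}
```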
CrazyMax
4e93e87991 Merge pull request #1409 from jedevc/cherry-pick-1406-to-0.9
[0.9] Synchronise access to the map when printing
2022-11-18 13:48:30 +01:00
Felix de Souza
3f1516d3fe Synchronise access to the map when printing.
Signed-off-by: Felix de Souza <fdesouza@palantir.com>
2022-11-15 15:54:43 +00:00
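
The underlying issue here is a classic Go data race: a status map written by worker goroutines while another goroutine iterates over it to print. A minimal sketch of the mutex-guarded pattern follows, with illustrative names rather than the actual buildx progress printer:

```go
// Sketch only: guard both writes and the read-while-printing path with one
// mutex so concurrent map access does not race.
package main

import (
	"fmt"
	"sync"
)

// printer is a hypothetical stand-in for a progress printer whose status map
// is updated concurrently and read when printing.
type printer struct {
	mu       sync.Mutex
	statuses map[string]string
}

func (p *printer) set(key, value string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.statuses[key] = value
}

func (p *printer) print() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for k, v := range p.statuses {
		fmt.Printf("%s: %s\n", k, v)
	}
}

func main() {
	p := &printer{statuses: map[string]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			p.set(fmt.Sprintf("step-%d", i), "done")
		}(i)
	}
	wg.Wait()
	p.print()
}
```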
Tõnis Tiigi
09d1e1ee99 Merge pull request #1348 from AkihiroSuda/gcos-rootless-0.9
[v0.9] kubernetes: rootless: support Google Container-Optimized OS
2022-11-01 22:34:53 -07:00
Akihiro Suda
2e9906ba20 kubernetes: rootless: support Google Container-Optimized OS
Tested with GKE Autopilot 1.24.3-gke.200 (kernel 5.10.123+, containerd 1.6.6).

ref: moby/buildkit PR 3097

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 33e5f47c6c)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2022-10-07 18:57:26 +09:00
3402 changed files with 84218 additions and 359328 deletions

View File

@@ -1 +1,3 @@
bin/
cross-out/
release-out/

View File

@@ -116,60 +116,6 @@ commit automatically with `git commit -s`.
### Run the unit- and integration-tests
Running tests:
```bash
make test
```
This runs all unit and integration tests in a containerized environment.
Locally, every package can be tested separately with standard Go tools, but
integration tests are skipped if the local user doesn't have enough permissions
or the worker binaries are not installed.
```bash
# run unit tests only
make test-unit
# run integration tests only
make test-integration
# test a specific package
TESTPKGS=./bake make test
# run all integration tests with a specific worker
TESTFLAGS="--run=//worker=remote -v" make test-integration
# run a specific integration test
TESTFLAGS="--run /TestBuild/worker=remote/ -v" make test-integration
# run a selection of integration tests using a regexp
TESTFLAGS="--run /TestBuild.*/worker=remote/ -v" make test-integration
```
> **Note**
>
> Set `TEST_KEEP_CACHE=1` for the test framework to keep external dependent
> images in a docker volume if you are repeatedly calling `make test`. This
> helps to avoid rate limiting on the remote registry side.

> **Note**
>
> Set `TEST_DOCKERD=1` for the test framework to enable the docker workers,
> specifically the `docker` and `docker-container` drivers.
>
> The docker tests cannot be run in parallel, so they require passing
> `--parallel=1` in `TESTFLAGS`.

> **Note**
>
> If you are working behind a proxy, you can set some or all of
> `HTTP_PROXY=http://ip:port`, `HTTPS_PROXY=http://ip:port`, and `NO_PROXY=http://ip:port`
> for the test framework to specify the proxy build args.
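
These switches combine; a hypothetical invocation (the worker and test name are illustrative) might look like:

```bash
# keep dependency images cached, enable the docker workers, and run a
# single integration test serially
TEST_KEEP_CACHE=1 TEST_DOCKERD=1 \
  TESTFLAGS="--run /TestBuild/worker=docker/ --parallel=1 -v" \
  make test-integration
```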
### Run the helper commands
To enter a demo container environment and experiment, you may run:
```

View File

@@ -1,124 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).
  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true
  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem
  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true
  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true
  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text
  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true
  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproducing your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        echo hello
        ```

        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```

        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true
  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.

View File

@@ -1,12 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use Github Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.

View File

@@ -1,15 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true

12
.github/SECURITY.md vendored
View File

@@ -1,12 +0,0 @@
# Reporting security issues

The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!

**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).

Security reports are greatly appreciated, and we will publicly thank you for
them. We also like to send gifts; if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.

735
.github/releases.json vendored
View File

@@ -1,735 +0,0 @@
{
"latest": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.2": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.1": {
"id": 90346950,
"tag_name": "v0.10.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
]
},
"v0.10.0": {
"id": 88388110,
"tag_name": "v0.10.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
]
},
"v0.10.0-rc3": {
"id": 88191592,
"tag_name": "v0.10.0-rc3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
]
},
"v0.10.0-rc2": {
"id": 86248476,
"tag_name": "v0.10.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
]
},
"v0.10.0-rc1": {
"id": 85963900,
"tag_name": "v0.10.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
]
},
"v0.9.1": {
"id": 74760068,
"tag_name": "v0.9.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
]
},
"v0.9.0": {
"id": 74546589,
"tag_name": "v0.9.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
]
},
"v0.9.0-rc2": {
"id": 74052235,
"tag_name": "v0.9.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
]
},
"v0.9.0-rc1": {
"id": 73389692,
"tag_name": "v0.9.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
]
},
"v0.8.2": {
"id": 63479740,
"tag_name": "v0.8.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
]
},
"v0.8.1": {
"id": 62289050,
"tag_name": "v0.8.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
]
},
"v0.8.0": {
"id": 61423774,
"tag_name": "v0.8.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
]
},
"v0.8.0-rc1": {
"id": 60513568,
"tag_name": "v0.8.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
]
},
"v0.7.1": {
"id": 54098347,
"tag_name": "v0.7.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
]
},
"v0.7.0": {
"id": 53109422,
"tag_name": "v0.7.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
]
},
"v0.7.0-rc1": {
"id": 52726324,
"tag_name": "v0.7.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
]
},
"v0.6.3": {
"id": 48691641,
"tag_name": "v0.6.3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
]
},
"v0.6.2": {
"id": 48207405,
"tag_name": "v0.6.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
]
},
"v0.6.1": {
"id": 47064772,
"tag_name": "v0.6.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
]
},
"v0.6.0": {
"id": 46343260,
"tag_name": "v0.6.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
]
},
"v0.6.0-rc1": {
"id": 46230351,
"tag_name": "v0.6.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
]
},
"v0.5.1": {
"id": 35276550,
"tag_name": "v0.5.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
]
},
"v0.5.0": {
"id": 35268960,
"tag_name": "v0.5.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
]
},
"v0.5.0-rc1": {
"id": 35015334,
"tag_name": "v0.5.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
]
},
"v0.4.2": {
"id": 30007794,
"tag_name": "v0.4.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
]
},
"v0.4.1": {
"id": 26067509,
"tag_name": "v0.4.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
]
},
"v0.4.0": {
"id": 26028174,
"tag_name": "v0.4.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
]
},
"v0.3.1": {
"id": 20316235,
"tag_name": "v0.3.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
]
},
"v0.3.0": {
"id": 19029664,
"tag_name": "v0.3.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
]
},
"v0.2.2": {
"id": 17671545,
"tag_name": "v0.2.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
]
},
"v0.2.1": {
"id": 17582885,
"tag_name": "v0.2.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
]
},
"v0.2.0": {
"id": 16965310,
"tag_name": "v0.2.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
]
}
}
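Note on the index above: it is a JSON object keyed by tag name, each entry carrying the release id, tag, page URL, and asset download URLs (the file is referenced as .github/releases.json in the workflow paths-ignore rules below). A minimal Go sketch of decoding that shape, for illustration only; the struct is inferred from the entries shown and is not code from this repository:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// release mirrors one entry of the index shown above.
type release struct {
	ID      int      `json:"id"`
	TagName string   `json:"tag_name"`
	HTMLURL string   `json:"html_url"`
	Assets  []string `json:"assets"`
}

func main() {
	dt, err := os.ReadFile(".github/releases.json")
	if err != nil {
		panic(err)
	}
	var idx map[string]release // keyed by tag, e.g. "v0.4.2"
	if err := json.Unmarshal(dt, &idx); err != nil {
		panic(err)
	}
	for tag, r := range idx {
		fmt.Printf("%s: %d assets (%s)\n", tag, len(r.Assets), r.HTMLURL)
	}
}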

.github/workflows/build.yml

@@ -13,120 +13,42 @@ on:
tags:
- 'v*'
pull_request:
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
branches:
- 'master'
- 'v[0-9]*'
env:
BUILDX_VERSION: "latest"
BUILDKIT_IMAGE: "moby/buildkit:latest"
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
RELEASE_OUT: "./release-out"
jobs:
prepare-test:
runs-on: ubuntu-22.04
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v3
with:
targets: integration-test-base
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
test:
runs-on: ubuntu-22.04
needs:
- prepare-test
env:
TESTFLAGS: "-v --parallel=6 --timeout=30m"
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
strategy:
fail-fast: false
matrix:
worker:
- docker
- docker-container
- remote
pkg:
- ./tests
include:
- pkg: ./...
skip-integration-tests: 1
runs-on: ubuntu-latest
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build test image
uses: docker/bake-action@v3
with:
targets: integration-test
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
version: latest
-
name: Test
run: |
export TEST_REPORT_SUFFIX=-${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.skip-integration-tests }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')
./hack/test
env:
TEST_DOCKERD: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker-container') && '1' || '0' }}"
TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker-container') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
TESTPKGS: "${{ matrix.pkg }}"
SKIP_INTEGRATION_TESTS: "${{ matrix.skip-integration-tests }}"
uses: docker/bake-action@v2
with:
targets: test
set: |
*.cache-from=type=gha,scope=test
*.cache-to=type=gha,scope=test
-
name: Send to Codecov
if: always()
name: Upload coverage
uses: codecov/codecov-action@v3
with:
directory: ./bin/testreports
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@1a64ea6d01db9a48aa61954cb20e265782c167d9
with:
directory: ./bin/testreports
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v3
with:
name: test-reports
path: ./bin/testreports
file: ./coverage/coverage.txt
prepare-binaries:
runs-on: ubuntu-22.04
prepare:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
@@ -137,20 +59,20 @@ jobs:
name: Create matrix
id: platforms
run: |
echo "matrix=$(docker buildx bake binaries-cross --print | jq -cr '.target."binaries-cross".platforms')" >>${GITHUB_OUTPUT}
echo ::set-output name=matrix::$(docker buildx bake binaries-cross --print | jq -cr '.target."binaries-cross".platforms')
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
binaries:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
needs:
- prepare-binaries
- prepare
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare-binaries.outputs.matrix) }}
platform: ${{ fromJson(needs.prepare.outputs.matrix) }}
steps:
-
name: Prepare
@@ -167,30 +89,27 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
version: latest
-
name: Build
run: |
make release
env:
PLATFORMS: ${{ matrix.platform }}
CACHE_FROM: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }}
CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
uses: docker/bake-action@v2
with:
targets: release
set: |
*.platform=${{ matrix.platform }}
*.cache-from=type=gha,scope=binaries-${{ env.PLATFORM_PAIR }}
*.cache-to=type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: buildx
path: ${{ env.DESTDIR }}/*
path: ${{ env.RELEASE_OUT }}/*
if-no-files-found: error
bin-image:
runs-on: ubuntu-22.04
needs:
- test
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
runs-on: ubuntu-latest
if: ${{ github.event_name != 'pull_request' }}
steps:
-
name: Checkout
@@ -202,9 +121,7 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
version: latest
-
name: Docker meta
id: meta
@@ -226,22 +143,20 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push image
uses: docker/bake-action@v3
uses: docker/bake-action@v2
with:
files: |
./docker-bake.hcl
${{ steps.meta.outputs.bake-file }}
targets: image-cross
push: ${{ github.event_name != 'pull_request' }}
sbom: true
set: |
*.cache-from=type=gha,scope=bin-image
*.cache-to=type=gha,scope=bin-image,mode=max
release:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
needs:
- test
- binaries
steps:
-
@@ -252,30 +167,30 @@ jobs:
uses: actions/download-artifact@v3
with:
name: buildx
path: ${{ env.DESTDIR }}
path: ${{ env.RELEASE_OUT }}
-
name: Create checksums
run: ./hack/hash-files
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
tree -nh ${{ env.RELEASE_OUT }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
find ${{ env.RELEASE_OUT }} -type f -exec file -e ascii -- {} +
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
uses: softprops/action-gh-release@1e07f4398721186383de40550babbdf2b84acfc5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
draft: true
files: ${{ env.DESTDIR }}/*
files: ${{ env.RELEASE_OUT }}/*
buildkit-edge:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
continue-on-error: true
steps:
-
@@ -288,12 +203,12 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
version: latest
driver-opts: image=moby/buildkit:master
buildkitd-flags: --debug
-
# Just run a bake target to check everything runs fine
name: Build
uses: docker/bake-action@v3
uses: docker/bake-action@v2
with:
targets: binaries

.github/workflows/docs-release.yml

@@ -2,21 +2,19 @@ name: docs-release
on:
release:
types:
- released
types: [ released ]
jobs:
open-pr:
runs-on: ubuntu-22.04
if: ${{ github.event.release.prerelease != true && github.repository == 'docker/buildx' }}
runs-on: ubuntu-latest
steps:
-
name: Checkout docs repo
uses: actions/checkout@v3
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
repository: docker/docs
ref: main
repository: docker/docker.github.io
ref: master
-
name: Prepare
run: |
@@ -26,7 +24,7 @@ jobs:
uses: docker/setup-buildx-action@v2
-
name: Build docs
uses: docker/bake-action@v3
uses: docker/bake-action@v2
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
targets: update-docs
@@ -44,7 +42,7 @@ jobs:
git add -A .
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@284f54f989303d2699d373481a0cfa13ad5a6666
uses: peter-evans/create-pull-request@923ad837f191474af6b1721408744feb989a4c27 # v4.0.4
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io

.github/workflows/docs-upstream.yml

@@ -22,7 +22,7 @@ on:
jobs:
docs-yaml:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
-
name: Checkout
@@ -34,7 +34,7 @@ jobs:
version: latest
-
name: Build reference YAML docs
uses: docker/bake-action@v3
uses: docker/bake-action@v2
with:
targets: update-docs
set: |
@@ -52,11 +52,67 @@ jobs:
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@main
runs-on: ubuntu-latest
needs:
- docs-yaml
with:
repo: https://github.com/${{ github.repository }}
data-files-id: docs-yaml
data-files-folder: buildx
data-files-placeholder-folder: engine/reference/commandline
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
repository: docker/docker.github.io
-
name: Install js-yaml
run: npm install js-yaml
-
# use the actual buildx ref that triggers this workflow, so we make
# sure pages fetched by docs repo are still valid
# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/_config.yml#L164-L173
name: Set correct ref to fetch remote resources
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');
const yaml = require('js-yaml');
const configFile = '_config.yml'
const config = yaml.load(fs.readFileSync(configFile, 'utf8'));
for (const remote of config['fetch-remote']) {
if (remote['repo'] != 'https://github.com/docker/buildx') {
continue;
}
remote['ref'] = "${{ github.ref }}";
}
try {
fs.writeFileSync(configFile, yaml.dump(config), 'utf8')
} catch (err) {
console.error(err.message)
process.exit(1)
}
-
name: Prepare
run: |
# print docs jekyll config updated in previous step
yq _config.yml
# cleanup reference yaml docs and js-yaml module
rm -rf ./_data/buildx/* ./node_modules
-
name: Download built reference YAML docs
uses: actions/download-artifact@v3
with:
name: docs-yaml
path: ./_data/buildx/
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Validate
uses: docker/bake-action@v2
with:
targets: validate
set: |
*.cache-from=type=gha,scope=docs-upstream
*.cache-to=type=gha,scope=docs-upstream,mode=max

.github/workflows/e2e.yml

@@ -11,18 +11,15 @@ on:
- 'master'
- 'v[0-9]*'
pull_request:
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
env:
DESTDIR: "./bin"
K3S_VERSION: "v1.21.2-k3s1"
branches:
- 'master'
- 'v[0-9]*'
jobs:
build:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
env:
BIN_OUT: ./bin
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -33,7 +30,7 @@ jobs:
version: latest
-
name: Build
uses: docker/bake-action@v3
uses: docker/bake-action@v2
with:
targets: binaries
set: |
@@ -43,13 +40,13 @@ jobs:
-
name: Rename binary
run: |
mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
mv ${{ env.BIN_OUT }}/buildx ${{ env.BIN_OUT }}/docker-buildx
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: binary
path: ${{ env.DESTDIR }}/build
path: ${{ env.BIN_OUT }}
if-no-files-found: error
retention-days: 7
@@ -132,67 +129,20 @@ jobs:
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: actions/github-script@v6
uses: debianmaster/actions-k3s@b9cf3f599fd118699a3c8a0d18a2f2bda6cf4ce4
id: k3s
with:
script: |
const fs = require('fs');
let wait = function(milliseconds) {
return new Promise((resolve, reject) => {
if (typeof(milliseconds) !== 'number') {
throw new Error('milliseconds not a number');
}
setTimeout(() => resolve("done!"), milliseconds)
});
}
try {
const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
core.info(`storing kubeconfig in ${kubeconfig}`);
await exec.exec('docker', ["run", "-d",
"--privileged",
"--name=buildkit-k3s",
"-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
"-e", "K3S_KUBECONFIG_MODE=666",
"-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
"-p", "6443:6443",
"-p", "80:80",
"-p", "443:443",
"-p", "8080:8080",
"rancher/k3s:${{ env.K3S_VERSION }}", "server"
]);
await wait(10000);
core.exportVariable('KUBECONFIG', kubeconfig);
let nodeName;
for (let count = 1; count <= 5; count++) {
try {
const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
nodeName = nodeNameOutput.stdout
} catch (error) {
core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
} finally {
if (nodeName) {
break;
}
await wait(5000);
}
}
if (!nodeName) {
throw new Error(`Unable to resolve node name after 5 attempts.`);
}
await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
} catch (error) {
core.setFailed(error.message);
}
version: v1.21.2-k3s1
-
name: Print KUBECONFIG
name: Config k3s
if: matrix.driver == 'kubernetes'
run: |
yq ${{ env.KUBECONFIG }}
(set -x ; cat ${{ steps.k3s.outputs.kubeconfig }})
-
name: Check k3s nodes
if: matrix.driver == 'kubernetes'
run: |
kubectl get nodes
-
name: Launch remote buildkitd
if: matrix.driver == 'remote'

.github/workflows/validate.yml

@@ -13,12 +13,13 @@ on:
tags:
- 'v*'
pull_request:
paths-ignore:
- '.github/releases.json'
branches:
- 'master'
- 'v[0-9]*'
jobs:
validate:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
@@ -26,7 +27,6 @@ jobs:
- lint
- validate-vendor
- validate-docs
- validate-generated-files
steps:
-
name: Checkout

.gitignore

@@ -1 +1,4 @@
/bin
bin
coverage
cross-out
release-out

.golangci.yml

@@ -11,17 +11,17 @@ linters:
enable:
- gofmt
- govet
- deadcode
- depguard
- goimports
- ineffassign
- misspell
- unused
- varcheck
- revive
- staticcheck
- typecheck
- nolintlint
- gosec
- forbidigo
- structcheck
disable-all: true
linters-settings:
@@ -32,15 +32,6 @@ linters-settings:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- io/ioutil
forbidigo:
forbid:
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
config:
G306: "0644"
issues:
exclude-rules:

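For context on the forbidigo setting above: the pattern ^fmt\.Errorf(# use errors\.Errorf instead)?$ bans fmt.Errorf and points to errors.Errorf. A minimal sketch of the enforced pattern, assuming github.com/pkg/errors (which the bake package further down imports); illustrative only:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func load(name string) error {
	// Flagged by the linter:
	//   return fmt.Errorf("failed to load %s", name)
	// Accepted form, which also attaches a stack trace:
	return errors.Errorf("failed to load %s", name)
}

func main() {
	fmt.Println(load("docker-bake.hcl"))
}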
Dockerfile

@@ -1,11 +1,8 @@
# syntax=docker/dockerfile:1
# syntax=docker/dockerfile:1.4
ARG GO_VERSION=1.20
ARG XX_VERSION=1.2.1
ARG GO_VERSION=1.18
ARG XX_VERSION=1.1.2
ARG DOCKERD_VERSION=20.10.14
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.11.6
FROM docker:$DOCKERD_VERSION AS dockerd-release
@@ -21,39 +18,23 @@ ENV GOFLAGS=-mod=vendor
ENV CGO_ENABLED=0
WORKDIR /src
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM gobase AS gotestsum
ARG GOTESTSUM_VERSION
ENV GOFLAGS=
RUN --mount=target=/root/.cache,type=cache \
GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
/out/gotestsum --version
FROM gobase AS buildx-version
RUN --mount=type=bind,target=. <<EOT
set -e
mkdir /buildx-version
echo -n "$(./hack/git-meta version)" | tee /buildx-version/version
echo -n "$(./hack/git-meta revision)" | tee /buildx-version/revision
EOT
RUN --mount=target=. \
PKG=github.com/docker/buildx VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
echo "-X ${PKG}/version.Version=${VERSION} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
echo -n "${VERSION}" | tee /tmp/.version;
FROM gobase AS buildx-build
ARG LDFLAGS="-w -s"
ARG TARGETPLATFORM
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
set -e
xx-go --wrap
DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
xx-verify --static /usr/bin/docker-buildx
EOT
--mount=type=bind,source=/tmp/.ldflags,target=/tmp/.ldflags,from=buildx-version \
set -x; xx-go build -ldflags "$(cat /tmp/.ldflags) ${LDFLAGS}" -o /usr/bin/buildx ./cmd/buildx && \
xx-verify --static /usr/bin/buildx
FROM gobase AS test
ENV SKIP_INTEGRATION_TESTS=1
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
@@ -64,39 +45,23 @@ FROM scratch AS test-coverage
COPY --from=test /tmp/coverage.txt /coverage.txt
FROM scratch AS binaries-unix
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
COPY --link --from=buildx-build /usr/bin/buildx /
FROM binaries-unix AS binaries-darwin
FROM binaries-unix AS binaries-linux
FROM scratch AS binaries-windows
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
COPY --link --from=buildx-build /usr/bin/buildx /buildx.exe
FROM binaries-$TARGETOS AS binaries
# enable scanning for this stage
ARG BUILDKIT_SBOM_SCAN_STAGE=true
FROM gobase AS integration-test-base
RUN apk add --no-cache docker runc containerd
COPY --link --from=gotestsum /out/gotestsum /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
FROM integration-test-base AS integration-test
COPY . .
# Release
FROM --platform=$BUILDPLATFORM alpine AS releaser
WORKDIR /work
ARG TARGETPLATFORM
RUN --mount=from=binaries \
--mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
set -e
mkdir -p /out
cp buildx* "/out/buildx-$(cat /buildx-version/version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
EOT
--mount=type=bind,source=/tmp/.version,target=/tmp/.version,from=buildx-version \
mkdir -p /out && cp buildx* "/out/buildx-$(cat /tmp/.version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
FROM scratch AS release
COPY --from=releaser /out/ /

Makefile

@@ -4,93 +4,59 @@ else ifneq (, $(shell docker buildx version))
export BUILDX_CMD = docker buildx
else ifneq (, $(shell which buildx))
export BUILDX_CMD = $(which buildx)
else
$(error "Buildx is required: https://github.com/docker/buildx#installing")
endif
export BUILDX_CMD ?= docker buildx
export BIN_OUT = ./bin
export RELEASE_OUT = ./release-out
.PHONY: all
all: binaries
.PHONY: build
build:
./hack/build
.PHONY: shell
shell:
./hack/shell
.PHONY: binaries
binaries:
$(BUILDX_CMD) bake binaries
.PHONY: binaries-cross
binaries-cross:
$(BUILDX_CMD) bake binaries-cross
.PHONY: install
install: binaries
mkdir -p ~/.docker/cli-plugins
install bin/build/buildx ~/.docker/cli-plugins/docker-buildx
install bin/buildx ~/.docker/cli-plugins/docker-buildx
.PHONY: release
release:
./hack/release
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
validate-all: lint test validate-vendor validate-docs
.PHONY: lint
lint:
$(BUILDX_CMD) bake lint
.PHONY: test
test:
./hack/test
$(BUILDX_CMD) bake test
.PHONY: test-unit
test-unit:
TESTPKGS=./... SKIP_INTEGRATION_TESTS=1 ./hack/test
.PHONY: test
test-integration:
TESTPKGS=./tests ./hack/test
.PHONY: validate-vendor
validate-vendor:
$(BUILDX_CMD) bake validate-vendor
.PHONY: validate-docs
validate-docs:
$(BUILDX_CMD) bake validate-docs
.PHONY: validate-authors
validate-authors:
$(BUILDX_CMD) bake validate-authors
.PHONY: validate-generated-files
validate-generated-files:
$(BUILDX_CMD) bake validate-generated-files
.PHONY: test-driver
test-driver:
./hack/test-driver
.PHONY: vendor
vendor:
./hack/update-vendor
.PHONY: docs
docs:
./hack/update-docs
.PHONY: authors
authors:
$(BUILDX_CMD) bake update-authors
.PHONY: mod-outdated
mod-outdated:
$(BUILDX_CMD) bake mod-outdated
.PHONY: generated-files
generated-files:
$(BUILDX_CMD) bake update-generated-files
.PHONY: shell binaries binaries-cross install release validate-all lint validate-vendor validate-docs validate-authors vendor docs authors

README.md

@@ -2,7 +2,7 @@
[![GitHub release](https://img.shields.io/github/release/docker/buildx.svg?style=flat-square)](https://github.com/docker/buildx/releases/latest)
[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?style=flat-square&logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/buildx)
[![Build Status](https://img.shields.io/github/actions/workflow/status/docker/buildx/build.yml?branch=master&label=build&logo=github&style=flat-square)](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
[![Build Status](https://img.shields.io/github/workflow/status/docker/buildx/build?label=build&logo=github&style=flat-square)](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/buildx?style=flat-square)](https://goreportcard.com/report/github.com/docker/buildx)
[![codecov](https://img.shields.io/codecov/c/github/docker/buildx?logo=codecov&style=flat-square)](https://codecov.io/gh/docker/buildx)
@@ -32,6 +32,16 @@ Key features:
- [Building with buildx](#building-with-buildx)
- [Working with builder instances](#working-with-builder-instances)
- [Building multi-platform images](#building-multi-platform-images)
- [Guides](docs/guides)
- [High-level build options with Bake](docs/guides/bake/index.md)
- [CI/CD](docs/guides/cicd.md)
- [CNI networking](docs/guides/cni-networking.md)
- [Using a custom network](docs/guides/custom-network.md)
- [Using a custom registry configuration](docs/guides/custom-registry-config.md)
- [OpenTelemetry support](docs/guides/opentelemetry.md)
- [Registry mirror](docs/guides/registry-mirror.md)
- [Drivers](docs/guides/drivers/index.md)
- [Resource limiting](docs/guides/resource-limiting.md)
- [Reference](docs/reference/buildx.md)
- [`buildx bake`](docs/reference/buildx_bake.md)
- [`buildx build`](docs/reference/buildx_build.md)
@@ -51,18 +61,11 @@ Key features:
- [`buildx version`](docs/reference/buildx_version.md)
- [Contributing](#contributing)
For more information on how to use Buildx, see
[Docker Build docs](https://docs.docker.com/build/).
# Installing
Using `buildx` with Docker requires Docker engine 19.03 or newer.
> **Warning**
>
> Using an incompatible version of Docker may result in unexpected behavior,
> and will likely cause issues, especially when using Buildx builders with more
> recent versions of BuildKit.
Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer.
A limited set of functionality works with older versions of Docker when
invoking the binary directly.
## Windows and macOS
@@ -120,8 +123,7 @@ On Windows:
Here is how to install and use Buildx inside a Dockerfile through the
[`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:
```dockerfile
# syntax=docker/dockerfile:1
```Dockerfile
FROM docker
COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version
@@ -141,7 +143,7 @@ To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_unin
# Buildx 0.6+
$ docker buildx bake "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv ./bin/build/buildx ~/.docker/cli-plugins/docker-buildx
$ mv ./bin/buildx ~/.docker/cli-plugins/docker-buildx
# Docker 19.03+
$ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
@@ -188,12 +190,12 @@ through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
We currently support the following drivers:
- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
- The `docker` driver ([guide](docs/guides/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](docs/guides/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](docs/guides/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](docs/guides/drivers/remote.md))
For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
For more information on drivers, see the [drivers guide](docs/guides/drivers/index.md).
## Working with builder instances
@@ -296,7 +298,6 @@ inside your Dockerfile and can be leveraged by the processes running as part
of your build.
```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
@@ -310,7 +311,7 @@ cross-compilation helpers for more advanced use-cases.
## High-level build options
See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.
See [`docs/guides/bake/index.md`](docs/guides/bake/index.md) for more details.
# Contributing

bake/bake.go

@@ -3,6 +3,7 @@ package bake
import (
"context"
"encoding/csv"
"fmt"
"io"
"os"
"path"
@@ -12,23 +13,22 @@ import (
"strconv"
"strings"
composecli "github.com/compose-spec/compose-go/cli"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli/config"
"github.com/docker/docker/builder/remotecontext/urlutil"
hcl "github.com/hashicorp/hcl/v2"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/pkg/errors"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
var (
httpPrefix = regexp.MustCompile(`^https?://`)
gitURLPathWithFragmentSuffix = regexp.MustCompile(`\.git(?:#.+)?$`)
validTargetNameChars = `[a-zA-Z0-9_-]+`
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
)
@@ -44,18 +44,17 @@ type Override struct {
}
func defaultFilenames() []string {
names := []string{}
names = append(names, composecli.DefaultFileNames...)
names = append(names, []string{
return []string{
"docker-compose.yml", // support app
"docker-compose.yaml", // support app
"docker-bake.json",
"docker-bake.override.json",
"docker-bake.hcl",
"docker-bake.override.hcl",
}...)
return names
}
}
func ReadLocalFiles(names []string, stdin io.Reader) ([]File, error) {
func ReadLocalFiles(names []string) ([]File, error) {
isDefault := false
if len(names) == 0 {
isDefault = true
@@ -67,7 +66,7 @@ func ReadLocalFiles(names []string, stdin io.Reader) ([]File, error) {
var dt []byte
var err error
if n == "-" {
dt, err = io.ReadAll(stdin)
dt, err = io.ReadAll(os.Stdin)
if err != nil {
return nil, err
}
@@ -85,22 +84,7 @@ func ReadLocalFiles(names []string, stdin io.Reader) ([]File, error) {
return out, nil
}
func ListTargets(files []File) ([]string, error) {
c, err := ParseFiles(files, nil)
if err != nil {
return nil, err
}
var targets []string
for _, g := range c.Groups {
targets = append(targets, g.Name)
}
for _, t := range c.Targets {
targets = append(targets, t.Name)
}
return dedupSlice(targets), nil
}
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, []*Group, error) {
c, err := ParseFiles(files, defaults)
if err != nil {
return nil, nil, err
@@ -115,39 +99,42 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
return nil, nil, err
}
m := map[string]*Target{}
n := map[string]*Group{}
for _, target := range targets {
ts, gs := c.ResolveGroup(target)
for _, tname := range ts {
t, err := c.ResolveTarget(tname, o)
for _, n := range targets {
for _, n := range c.ResolveGroup(n) {
t, err := c.ResolveTarget(n, o)
if err != nil {
return nil, nil, err
}
if t != nil {
m[tname] = t
}
}
for _, gname := range gs {
for _, group := range c.Groups {
if group.Name == gname {
n[gname] = group
break
}
m[n] = t
}
}
}
for _, target := range targets {
if target == "default" {
continue
var g []*Group
if len(targets) == 0 || (len(targets) == 1 && targets[0] == "default") {
for _, group := range c.Groups {
if group.Name != "default" {
continue
}
g = []*Group{{Targets: group.Targets}}
}
if _, ok := n["default"]; !ok {
n["default"] = &Group{Name: "default"}
} else {
var gt []string
for _, target := range targets {
isGroup := false
for _, group := range c.Groups {
if target == group.Name {
gt = append(gt, group.Targets...)
isGroup = true
break
}
}
if !isGroup {
gt = append(gt, target)
}
}
n["default"].Targets = append(n["default"].Targets, target)
}
if g, ok := n["default"]; ok {
g.Targets = dedupSlice(g.Targets)
g = []*Group{{Targets: dedupSlice(gt)}}
}
for name, t := range m {
@@ -156,7 +143,7 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
}
}
return m, n, nil
return m, g, nil
}
func dedupSlice(s []string) []string {
@@ -231,7 +218,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
}
hclFiles = append(hclFiles, hf)
} else if composeErr != nil {
return nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
return nil, fmt.Errorf("failed to parse %s: parsing yaml: %v, parsing hcl: %w", f.Name, composeErr, err)
} else {
return nil, err
}
@@ -248,28 +235,13 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
}
if len(hclFiles) > 0 {
renamed, err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
if err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
LookupVar: os.LookupEnv,
Vars: defaults,
ValidateLabel: validateTargetName,
}, &c)
if err.HasErrors() {
}, &c); err.HasErrors() {
return nil, err
}
for _, renamed := range renamed {
for oldName, newNames := range renamed {
newNames = dedupSlice(newNames)
if len(newNames) == 1 && oldName == newNames[0] {
continue
}
c.Groups = append(c.Groups, &Group{
Name: oldName,
Targets: newNames,
})
}
}
c = dedupeConfig(c)
}
return &c, nil
@@ -301,8 +273,8 @@ func ParseFile(dt []byte, fn string) (*Config, error) {
}
type Config struct {
Groups []*Group `json:"group" hcl:"group,block" cty:"group"`
Targets []*Target `json:"target" hcl:"target,block" cty:"target"`
Groups []*Group `json:"group" hcl:"group,block"`
Targets []*Target `json:"target" hcl:"target,block"`
}
func mergeConfig(c1, c2 Config) Config {
@@ -448,7 +420,7 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
o := t[kk[1]]
switch keys[1] {
case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest":
case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh":
if len(parts) == 2 {
o.ArrValue = append(o.ArrValue, parts[1])
}
@@ -481,19 +453,13 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
return m, nil
}
func (c Config) ResolveGroup(name string) ([]string, []string) {
targets, groups := c.group(name, map[string]visit{})
return dedupSlice(targets), dedupSlice(groups)
func (c Config) ResolveGroup(name string) []string {
return dedupSlice(c.group(name, map[string][]string{}))
}
type visit struct {
target []string
group []string
}
func (c Config) group(name string, visited map[string]visit) ([]string, []string) {
if v, ok := visited[name]; ok {
return v.target, v.group
func (c Config) group(name string, visited map[string][]string) []string {
if _, ok := visited[name]; ok {
return visited[name]
}
var g *Group
for _, group := range c.Groups {
@@ -503,24 +469,20 @@ func (c Config) group(name string, visited map[string]visit) ([]string, []string
}
}
if g == nil {
return []string{name}, nil
return []string{name}
}
visited[name] = visit{}
visited[name] = []string{}
targets := make([]string, 0, len(g.Targets))
groups := []string{name}
for _, t := range g.Targets {
ttarget, tgroup := c.group(t, visited)
if len(ttarget) > 0 {
targets = append(targets, ttarget...)
tgroup := c.group(t, visited)
if len(tgroup) > 0 {
targets = append(targets, tgroup...)
} else {
targets = append(targets, t)
}
if len(tgroup) > 0 {
groups = append(groups, tgroup...)
}
}
visited[name] = visit{target: targets, group: groups}
return targets, groups
visited[name] = targets
return targets
}
func (c Config) ResolveTarget(name string, overrides map[string]map[string]Override) (*Target, error) {
@@ -578,49 +540,42 @@ func (c Config) target(name string, visited map[string]*Target, overrides map[st
}
type Group struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Targets []string `json:"targets" hcl:"targets" cty:"targets"`
Name string `json:"-" hcl:"name,label"`
Targets []string `json:"targets" hcl:"targets"`
// Target // TODO?
}
type Target struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Name string `json:"-" hcl:"name,label"`
// Inherits is the only field that cannot be overridden with --set
Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional"`
Context *string `json:"context,omitempty" hcl:"context,optional" cty:"context"`
Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional" cty:"contexts"`
Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional" cty:"dockerfile"`
DockerfileInline *string `json:"dockerfile-inline,omitempty" hcl:"dockerfile-inline,optional" cty:"dockerfile-inline"`
Args map[string]*string `json:"args,omitempty" hcl:"args,optional" cty:"args"`
Labels map[string]*string `json:"labels,omitempty" hcl:"labels,optional" cty:"labels"`
Tags []string `json:"tags,omitempty" hcl:"tags,optional" cty:"tags"`
CacheFrom []string `json:"cache-from,omitempty" hcl:"cache-from,optional" cty:"cache-from"`
CacheTo []string `json:"cache-to,omitempty" hcl:"cache-to,optional" cty:"cache-to"`
Target *string `json:"target,omitempty" hcl:"target,optional" cty:"target"`
Secrets []string `json:"secret,omitempty" hcl:"secret,optional" cty:"secret"`
SSH []string `json:"ssh,omitempty" hcl:"ssh,optional" cty:"ssh"`
Platforms []string `json:"platforms,omitempty" hcl:"platforms,optional" cty:"platforms"`
Outputs []string `json:"output,omitempty" hcl:"output,optional" cty:"output"`
Pull *bool `json:"pull,omitempty" hcl:"pull,optional" cty:"pull"`
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
NetworkMode *string `json:"-" hcl:"-" cty:"-"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
Context *string `json:"context,omitempty" hcl:"context,optional"`
Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional"`
Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional"`
DockerfileInline *string `json:"dockerfile-inline,omitempty" hcl:"dockerfile-inline,optional"`
Args map[string]string `json:"args,omitempty" hcl:"args,optional"`
Labels map[string]string `json:"labels,omitempty" hcl:"labels,optional"`
Tags []string `json:"tags,omitempty" hcl:"tags,optional"`
CacheFrom []string `json:"cache-from,omitempty" hcl:"cache-from,optional"`
CacheTo []string `json:"cache-to,omitempty" hcl:"cache-to,optional"`
Target *string `json:"target,omitempty" hcl:"target,optional"`
Secrets []string `json:"secret,omitempty" hcl:"secret,optional"`
SSH []string `json:"ssh,omitempty" hcl:"ssh,optional"`
Platforms []string `json:"platforms,omitempty" hcl:"platforms,optional"`
Outputs []string `json:"output,omitempty" hcl:"output,optional"`
Pull *bool `json:"pull,omitempty" hcl:"pull,optional"`
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional"`
NetworkMode *string `json:"-" hcl:"-"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/guides/bake/file-definition.md.
// linked is a private field to mark a target used as a linked one
linked bool
}
var _ hclparser.WithEvalContexts = &Target{}
var _ hclparser.WithGetName = &Target{}
var _ hclparser.WithEvalContexts = &Group{}
var _ hclparser.WithGetName = &Group{}
func (t *Target) normalize() {
t.Attest = removeAttestDupes(t.Attest)
t.Tags = removeDupes(t.Tags)
t.Secrets = removeDupes(t.Secrets)
t.SSH = removeDupes(t.SSH)
@@ -651,11 +606,8 @@ func (t *Target) Merge(t2 *Target) {
t.DockerfileInline = t2.DockerfileInline
}
for k, v := range t2.Args {
if v == nil {
continue
}
if t.Args == nil {
t.Args = map[string]*string{}
t.Args = map[string]string{}
}
t.Args[k] = v
}
@@ -666,11 +618,8 @@ func (t *Target) Merge(t2 *Target) {
t.Contexts[k] = v
}
for k, v := range t2.Labels {
if v == nil {
continue
}
if t.Labels == nil {
t.Labels = map[string]*string{}
t.Labels = map[string]string{}
}
t.Labels[k] = v
}
@@ -680,10 +629,6 @@ func (t *Target) Merge(t2 *Target) {
if t2.Target != nil {
t.Target = t2.Target
}
if t2.Attest != nil { // merge
t.Attest = append(t.Attest, t2.Attest...)
t.Attest = removeAttestDupes(t.Attest)
}
if t2.Secrets != nil { // merge
t.Secrets = append(t.Secrets, t2.Secrets...)
}
@@ -731,9 +676,9 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
return errors.Errorf("args require name")
}
if t.Args == nil {
t.Args = map[string]*string{}
t.Args = map[string]string{}
}
t.Args[keys[1]] = &value
t.Args[keys[1]] = value
case "contexts":
if len(keys) != 2 {
return errors.Errorf("contexts require name")
@@ -747,9 +692,9 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
return errors.Errorf("labels require name")
}
if t.Labels == nil {
t.Labels = map[string]*string{}
t.Labels = map[string]string{}
}
t.Labels[keys[1]] = &value
t.Labels[keys[1]] = value
case "tags":
t.Tags = o.ArrValue
case "cache-from":
@@ -766,8 +711,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Platforms = o.ArrValue
case "output":
t.Outputs = o.ArrValue
case "attest":
t.Attest = append(t.Attest, o.ArrValue...)
case "no-cache":
noCache, err := strconv.ParseBool(value)
if err != nil {
@@ -803,114 +746,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
return nil
}
func (g *Group) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
content, _, err := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
})
if err != nil {
return nil, err
}
if _, ok := content.Attributes["matrix"]; ok {
return nil, errors.Errorf("matrix is not supported for groups")
}
return []*hcl.EvalContext{ectx}, nil
}
func (t *Target) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
content, _, err := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
})
if err != nil {
return nil, err
}
attr, ok := content.Attributes["matrix"]
if !ok {
return []*hcl.EvalContext{ectx}, nil
}
if diags := loadDeps(attr.Expr); diags.HasErrors() {
return nil, diags
}
value, err := attr.Expr.Value(ectx)
if err != nil {
return nil, err
}
if !value.Type().IsMapType() && !value.Type().IsObjectType() {
return nil, errors.Errorf("matrix must be a map")
}
matrix := value.AsValueMap()
ectxs := []*hcl.EvalContext{ectx}
for k, expr := range matrix {
if !expr.CanIterateElements() {
return nil, errors.Errorf("matrix values must be a list")
}
ectxs2 := []*hcl.EvalContext{}
for _, v := range expr.AsValueSlice() {
for _, e := range ectxs {
e2 := ectx.NewChild()
e2.Variables = make(map[string]cty.Value)
for k, v := range e.Variables {
e2.Variables[k] = v
}
e2.Variables[k] = v
ectxs2 = append(ectxs2, e2)
}
}
ectxs = ectxs2
}
return ectxs, nil
}
func (g *Group) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
})
if diags != nil {
return "", diags
}
if _, ok := content.Attributes["name"]; ok {
return "", errors.Errorf("name is not supported for groups")
}
if _, ok := content.Attributes["matrix"]; ok {
return "", errors.Errorf("matrix is not supported for groups")
}
return block.Labels[0], nil
}
func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
})
if diags != nil {
return "", diags
}
attr, ok := content.Attributes["name"]
if !ok {
return block.Labels[0], nil
}
if _, ok := content.Attributes["matrix"]; !ok {
return "", errors.Errorf("name requires matrix")
}
if diags := loadDeps(attr.Expr); diags.HasErrors() {
return "", diags
}
value, diags := attr.Expr.Value(ectx)
if diags != nil {
return "", diags
}
value, err := convert.Convert(value, cty.String)
if err != nil {
return "", err
}
return value.AsString(), nil
}
func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
m2 := make(map[string]build.Options, len(m))
for k, v := range m {
@@ -935,7 +770,7 @@ func updateContext(t *build.Inputs, inp *Input) {
if strings.HasPrefix(v.Path, "cwd://") || strings.HasPrefix(v.Path, "target:") || strings.HasPrefix(v.Path, "docker-image:") {
continue
}
if build.IsRemoteURL(v.Path) {
if IsRemoteURL(v.Path) {
continue
}
st := llb.Scratch().File(llb.Copy(*inp.State, v.Path, "/"), llb.WithCustomNamef("set context %s to %s", k, v.Path))
@@ -949,15 +784,10 @@ func updateContext(t *build.Inputs, inp *Input) {
if strings.HasPrefix(t.ContextPath, "cwd://") {
return
}
if build.IsRemoteURL(t.ContextPath) {
if IsRemoteURL(t.ContextPath) {
return
}
st := llb.Scratch().File(
llb.Copy(*inp.State, t.ContextPath, "/", &llb.CopyInfo{
CopyDirContentsOnly: true,
}),
llb.WithCustomNamef("set context to %s", t.ContextPath),
)
st := llb.Scratch().File(llb.Copy(*inp.State, t.ContextPath, "/"), llb.WithCustomNamef("set context to %s", t.ContextPath))
t.ContextState = &st
}
@@ -990,7 +820,7 @@ func validateContextsEntitlements(t build.Inputs, inp *Input) error {
}
func checkPath(p string) error {
if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
if IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
return nil
}
p, err := filepath.EvalSymlinks(p)
@@ -1000,10 +830,6 @@ func checkPath(p string) error {
}
return err
}
p, err = filepath.Abs(p)
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return err
@@ -1012,8 +838,7 @@ func checkPath(p string) error {
if err != nil {
return err
}
parts := strings.Split(rel, string(os.PathSeparator))
if parts[0] == ".." {
if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
}
return nil
@@ -1031,7 +856,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.Context != nil {
contextPath = *t.Context
}
if !strings.HasPrefix(contextPath, "cwd://") && !build.IsRemoteURL(contextPath) {
if !strings.HasPrefix(contextPath, "cwd://") && !IsRemoteURL(contextPath) {
contextPath = path.Clean(contextPath)
}
dockerfilePath := "Dockerfile"
@@ -1039,47 +864,8 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
dockerfilePath = *t.Dockerfile
}
bi := build.Inputs{
ContextPath: contextPath,
DockerfilePath: dockerfilePath,
NamedContexts: toNamedContexts(t.Contexts),
}
if t.DockerfileInline != nil {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
}
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
}
}
if err := validateContextsEntitlements(bi, inp); err != nil {
return nil, err
}
t.Context = &bi.ContextPath
args := map[string]string{}
for k, v := range t.Args {
if v == nil {
continue
}
args[k] = *v
}
labels := map[string]string{}
for k, v := range t.Labels {
if v == nil {
continue
}
labels[k] = *v
if !isRemoteResource(contextPath) && !path.IsAbs(dockerfilePath) {
dockerfilePath = path.Join(contextPath, dockerfilePath)
}
noCache := false
@@ -1095,11 +881,35 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
networkMode = *t.NetworkMode
}
bi := build.Inputs{
ContextPath: contextPath,
DockerfilePath: dockerfilePath,
NamedContexts: toNamedContexts(t.Contexts),
}
if t.DockerfileInline != nil {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
}
}
if err := validateContextsEntitlements(bi, inp); err != nil {
return nil, err
}
t.Context = &bi.ContextPath
bo := &build.Options{
Inputs: bi,
Tags: t.Tags,
BuildArgs: args,
Labels: labels,
BuildArgs: t.Args,
Labels: t.Labels,
NoCache: noCache,
NoCacheFilter: t.NoCacheFilter,
Pull: pull,
@@ -1120,24 +930,17 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if err != nil {
return nil, err
}
secretAttachment, err := controllerapi.CreateSecrets(secrets)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, secretAttachment)
bo.Session = append(bo.Session, secrets)
sshSpecs, err := buildflags.ParseSSHSpecs(t.SSH)
sshSpecs := t.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(contextPath) {
sshSpecs = []string{"default"}
}
ssh, err := buildflags.ParseSSHSpecs(sshSpecs)
if err != nil {
return nil, err
}
if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, sshAttachment)
bo.Session = append(bo.Session, ssh)
if t.Target != nil {
bo.Target = *t.Target
@@ -1147,33 +950,19 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if err != nil {
return nil, err
}
bo.CacheFrom = controllerapi.CreateCaches(cacheImports)
bo.CacheFrom = cacheImports
cacheExports, err := buildflags.ParseCacheEntry(t.CacheTo)
if err != nil {
return nil, err
}
bo.CacheTo = controllerapi.CreateCaches(cacheExports)
bo.CacheTo = cacheExports
outputs, err := buildflags.ParseExports(t.Outputs)
if err != nil {
return nil, err
}
bo.Exports, err = controllerapi.CreateExports(outputs)
if err != nil {
return nil, err
}
attests, err := buildflags.ParseAttests(t.Attest)
if err != nil {
return nil, err
}
bo.Attests = controllerapi.CreateAttestations(attests)
bo.SourcePolicy, err = build.ReadSourcePolicy()
outputs, err := buildflags.ParseOutputs(t.Outputs)
if err != nil {
return nil, err
}
bo.Exports = outputs
return bo, nil
}
@@ -1199,24 +988,8 @@ func removeDupes(s []string) []string {
return s[:i]
}
func removeAttestDupes(s []string) []string {
res := []string{}
m := map[string]int{}
for _, v := range s {
att, err := buildflags.ParseAttest(v)
if err != nil {
res = append(res, v)
continue
}
if i, ok := m[att.Type]; ok {
res[i] = v
} else {
m[att.Type] = len(res)
res = append(res, v)
}
}
return res
func isRemoteResource(str string) bool {
return urlutil.IsGitURL(str) || urlutil.IsURL(str)
}
func parseOutputType(str string) string {

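The ResolveGroup changes above come down to a depth-first walk over group references with a visited map that both memoizes results and breaks cycles. A standalone sketch of that idea, simplified from Config.group above; the types and sample data are illustrative:

package main

import "fmt"

type group struct {
	targets []string
}

// resolve expands a name into concrete targets. Marking visited[name]
// before recursing means a cyclic reference resolves to the in-progress
// (empty) slice instead of looping forever, as in Config.group above.
func resolve(name string, groups map[string]group, visited map[string][]string) []string {
	if v, ok := visited[name]; ok {
		return v
	}
	g, ok := groups[name]
	if !ok {
		return []string{name} // plain target, not a group
	}
	visited[name] = []string{}
	var targets []string
	for _, t := range g.targets {
		targets = append(targets, resolve(t, groups, visited)...)
	}
	visited[name] = targets
	return targets
}

func main() {
	groups := map[string]group{
		"default": {targets: []string{"foo", "image"}},
		"foo":     {targets: []string{"image"}},
	}
	// Prints [image image]; the real code dedups with dedupSlice.
	fmt.Println(resolve("default", groups, map[string][]string{}))
}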
bake/bake_test.go

@@ -4,13 +4,14 @@ import (
"context"
"os"
"sort"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestReadTargets(t *testing.T) {
t.Parallel()
fp := File{
Name: "config.hcl",
Data: []byte(`
@@ -34,23 +35,21 @@ target "webapp" {
ctx := context.TODO()
t.Run("NoOverrides", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, true, *m["webapp"].NoCache)
require.Nil(t, m["webapp"].Pull)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
t.Run("InvalidTargetOverrides", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
require.NotNil(t, err)
require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
@@ -58,7 +57,8 @@ target "webapp" {
t.Run("ArgsOverrides", func(t *testing.T) {
t.Run("leaf", func(t *testing.T) {
t.Setenv("VAR_FROMENV"+t.Name(), "fromEnv")
os.Setenv("VAR_FROMENV"+t.Name(), "fromEnv")
defer os.Unsetenv("VAR_FROM_ENV" + t.Name())
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
"webapp.args.VAR_UNSET",
@@ -79,35 +79,33 @@ target "webapp" {
_, isSet = m["webapp"].Args["VAR_EMPTY"]
require.True(t, isSet, m["webapp"].Args["VAR_EMPTY"])
require.Equal(t, ptrstr("bananas"), m["webapp"].Args["VAR_SET"])
require.Equal(t, m["webapp"].Args["VAR_SET"], "bananas")
require.Equal(t, ptrstr("fromEnv"), m["webapp"].Args["VAR_FROMENV"+t.Name()])
require.Equal(t, m["webapp"].Args["VAR_FROMENV"+t.Name()], "fromEnv")
require.Equal(t, ptrstr("webapp"), m["webapp"].Args["VAR_BOTH"])
require.Equal(t, ptrstr("override"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, m["webapp"].Args["VAR_BOTH"], "webapp")
require.Equal(t, m["webapp"].Args["VAR_INHERITED"], "override")
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
// building leaf but overriding parent fields
t.Run("parent", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
"webDEP.args.VAR_INHERITED=override",
"webDEP.args.VAR_BOTH=override",
}, nil)
require.NoError(t, err)
require.Equal(t, ptrstr("override"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, ptrstr("webapp"), m["webapp"].Args["VAR_BOTH"])
require.Equal(t, m["webapp"].Args["VAR_INHERITED"], "override")
require.Equal(t, m["webapp"].Args["VAR_BOTH"], "webapp")
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
})
t.Run("ContextOverride", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
require.NotNil(t, err)
@@ -115,47 +113,44 @@ target "webapp" {
require.NoError(t, err)
require.Equal(t, "foo", *m["webapp"].Context)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
t.Run("NoCacheOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"}, nil)
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].NoCache)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
t.Run("PullOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].Pull)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
})
t.Run("PatternOverride", func(t *testing.T) {
t.Parallel()
// same check for two cases
multiTargetCheck := func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
multiTargetCheck := func(t *testing.T, m map[string]*Target, g []*Group, err error) {
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, "foo", *m["webapp"].Dockerfile)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, "foo", *m["webDEP"].Dockerfile)
require.Equal(t, ptrstr("webDEP"), m["webDEP"].Args["VAR_INHERITED"])
require.Equal(t, "webDEP", m["webDEP"].Args["VAR_INHERITED"])
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"webDEP", "webapp"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"webDEP", "webapp"}, g[0].Targets)
}
cases := []struct {
name string
targets []string
overrides []string
check func(*testing.T, map[string]*Target, map[string]*Group, error)
check func(*testing.T, map[string]*Target, []*Group, error)
}{
{
name: "multi target single pattern",
@@ -173,20 +168,20 @@ target "webapp" {
name: "single target",
targets: []string{"webapp"},
overrides: []string{"web*.dockerfile=foo"},
check: func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
check: func(t *testing.T, m map[string]*Target, g []*Group, err error) {
require.NoError(t, err)
require.Equal(t, 1, len(m))
require.Equal(t, "foo", *m["webapp"].Dockerfile)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, 1, len(g))
require.Equal(t, []string{"webapp"}, g["default"].Targets)
require.Equal(t, []string{"webapp"}, g[0].Targets)
},
},
{
name: "nomatch",
targets: []string{"webapp"},
overrides: []string{"nomatch*.dockerfile=foo"},
check: func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
check: func(t *testing.T, m map[string]*Target, g []*Group, err error) {
// NOTE: I am unsure whether failing to match should always error out
// instead of simply skipping that override.
// Let's enforce the error and we can relax it later if users complain.
@@ -304,12 +299,12 @@ services:
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("1"), m["webapp"].Args["buildno"])
require.Equal(t, ptrstr("12"), m["webapp"].Args["buildno2"])
require.Equal(t, "1", m["webapp"].Args["buildno"])
require.Equal(t, "12", m["webapp"].Args["buildno2"])
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"db", "newservice", "webapp"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"db", "newservice", "webapp"}, g[0].Targets)
}
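
Aside on the pattern repeated through these hunks: one side of the diff returns groups as a name-keyed map (hence `g["default"].Targets`), while the 0.9 side returns a plain slice (hence `g[0].Targets`). A minimal sketch of the two shapes, using an illustrative `Group` type rather than the bake package's actual definition:

```go
package main

import "fmt"

// Group mirrors the shape these tests assert against: a named set of bake
// targets. Illustrative only — not the bake package's own type.
type Group struct {
	Name    string
	Targets []string
}

func main() {
	// Newer shape: groups keyed by name, so assertions read g["default"].Targets.
	byName := map[string]*Group{
		"default": {Name: "default", Targets: []string{"webapp"}},
	}
	fmt.Println(byName["default"].Targets)

	// 0.9 shape: a plain slice, so the same assertions read g[0].Targets.
	asSlice := []*Group{
		{Name: "default", Targets: []string{"webapp"}},
	}
	fmt.Println(asSlice[0].Targets)
}
```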
func TestReadTargetsWithDotCompose(t *testing.T) {
@@ -348,7 +343,7 @@ services:
_, ok := m["web_app"]
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["web_app"].Dockerfile)
require.Equal(t, ptrstr("1"), m["web_app"].Args["buildno"])
require.Equal(t, "1", m["web_app"].Args["buildno"])
m, _, err = ReadTargets(ctx, []File{fp2}, []string{"web_app"}, nil, nil)
require.NoError(t, err)
@@ -356,7 +351,7 @@ services:
_, ok = m["web_app"]
require.True(t, ok)
require.Equal(t, "Dockerfile", *m["web_app"].Dockerfile)
require.Equal(t, ptrstr("12"), m["web_app"].Args["buildno2"])
require.Equal(t, "12", m["web_app"].Args["buildno2"])
m, g, err := ReadTargets(ctx, []File{fp, fp2}, []string{"default"}, nil, nil)
require.NoError(t, err)
@@ -365,12 +360,12 @@ services:
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["web_app"].Dockerfile)
require.Equal(t, ".", *m["web_app"].Context)
require.Equal(t, ptrstr("1"), m["web_app"].Args["buildno"])
require.Equal(t, ptrstr("12"), m["web_app"].Args["buildno2"])
require.Equal(t, "1", m["web_app"].Args["buildno"])
require.Equal(t, "12", m["web_app"].Args["buildno2"])
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"web_app"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"web_app"}, g[0].Targets)
}
func TestHCLCwdPrefix(t *testing.T) {
@@ -397,7 +392,7 @@ func TestHCLCwdPrefix(t *testing.T) {
require.Equal(t, "foo", *m["app"].Context)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, []string{"app"}, g[0].Targets)
}
func TestOverrideMerge(t *testing.T) {
@@ -701,7 +696,7 @@ target "image" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"image"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "test", *m["image"].Dockerfile)
}
@@ -722,9 +717,8 @@ target "image" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "test", *m["image"].Dockerfile)
}
@@ -748,17 +742,15 @@ target "image" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "test", *m["image"].Dockerfile)
m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "test", *m["image"].Dockerfile)
}
@@ -837,7 +829,7 @@ services:
m, g, err := ReadTargets(ctx, []File{fhcl}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, 1, len(m["image"].Outputs))
require.Equal(t, "type=docker", m["image"].Outputs[0])
@@ -845,7 +837,7 @@ services:
m, g, err = ReadTargets(ctx, []File{fhcl}, []string{"image-release"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image-release"}, g["default"].Targets)
require.Equal(t, []string{"image-release"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, 1, len(m["image-release"].Outputs))
require.Equal(t, "type=image,push=true", m["image-release"].Outputs[0])
@@ -853,7 +845,7 @@ services:
m, g, err = ReadTargets(ctx, []File{fhcl}, []string{"image", "image-release"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image", "image-release"}, g["default"].Targets)
require.Equal(t, []string{"image", "image-release"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, ".", *m["image"].Context)
require.Equal(t, 1, len(m["image-release"].Outputs))
@@ -862,22 +854,22 @@ services:
m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, ".", *m["image"].Context)
m, g, err = ReadTargets(ctx, []File{fjson}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"image"}, g["default"].Targets)
require.Equal(t, []string{"image"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, ".", *m["image"].Context)
m, g, err = ReadTargets(ctx, []File{fyml}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"addon", "aws"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"addon", "aws"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
require.Equal(t, "./aws.Dockerfile", *m["aws"].Dockerfile)
@@ -885,8 +877,8 @@ services:
m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"addon", "aws"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"addon", "aws"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"addon", "aws"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
require.Equal(t, "./aws.Dockerfile", *m["aws"].Dockerfile)
@@ -894,8 +886,8 @@ services:
m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"addon", "aws", "image"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
sort.Strings(g["default"].Targets)
require.Equal(t, []string{"addon", "aws", "image"}, g["default"].Targets)
sort.Strings(g[0].Targets)
require.Equal(t, []string{"addon", "aws", "image"}, g[0].Targets)
require.Equal(t, 3, len(m))
require.Equal(t, ".", *m["image"].Context)
require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
@@ -921,17 +913,15 @@ target "image" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"foo"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"foo"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "bar", *m["foo"].Dockerfile)
m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"foo"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"foo"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, "bar", *m["foo"].Dockerfile)
}
@@ -955,18 +945,16 @@ target "image" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo"}, g["default"].Targets)
require.Equal(t, []string{"foo", "image"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"foo", "image"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, "bar", *m["foo"].Dockerfile)
require.Equal(t, "type=docker", m["image"].Outputs[0])
m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "image"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 2, len(g))
require.Equal(t, []string{"foo", "image"}, g["default"].Targets)
require.Equal(t, []string{"foo", "image"}, g["foo"].Targets)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"foo", "image"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, "bar", *m["foo"].Dockerfile)
require.Equal(t, "type=docker", m["image"].Outputs[0])
@@ -1003,22 +991,22 @@ target "d" {
cases := []struct {
name string
overrides []string
want map[string]*string
want map[string]string
}{
{
name: "nested simple",
overrides: nil,
want: map[string]*string{"bar": ptrstr("234"), "baz": ptrstr("890"), "foo": ptrstr("123")},
want: map[string]string{"bar": "234", "baz": "890", "foo": "123"},
},
{
name: "nested with overrides first",
overrides: []string{"a.args.foo=321", "b.args.bar=432"},
want: map[string]*string{"bar": ptrstr("234"), "baz": ptrstr("890"), "foo": ptrstr("321")},
want: map[string]string{"bar": "234", "baz": "890", "foo": "321"},
},
{
name: "nested with overrides last",
overrides: []string{"a.args.foo=321", "c.args.bar=432"},
want: map[string]*string{"bar": ptrstr("432"), "baz": ptrstr("890"), "foo": ptrstr("321")},
want: map[string]string{"bar": "432", "baz": "890", "foo": "321"},
},
}
for _, tt := range cases {
@@ -1027,7 +1015,7 @@ target "d" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"d"}, tt.overrides, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"d"}, g["default"].Targets)
require.Equal(t, []string{"d"}, g[0].Targets)
require.Equal(t, 1, len(m))
require.Equal(t, tt.want, m["d"].Args)
})
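
The overrides exercised in these cases are dotted paths of the form `TARGET.args.NAME=value`. A hypothetical sketch of how such a string splits apart — the real bake override handling also supports name patterns (`web*.dockerfile=foo`, seen earlier) and fields other than `args`:

```go
package main

import (
	"fmt"
	"strings"
)

// parseArgsOverride is a hypothetical helper, not bake's actual parser:
// it splits "TARGET.args.NAME=VALUE" into its three parts.
func parseArgsOverride(s string) (target, arg, value string, ok bool) {
	key, val, found := strings.Cut(s, "=")
	if !found {
		return "", "", "", false
	}
	parts := strings.SplitN(key, ".", 3)
	if len(parts) != 3 || parts[1] != "args" {
		return "", "", "", false
	}
	return parts[0], parts[2], val, true
}

func main() {
	target, arg, value, ok := parseArgsOverride("a.args.foo=321")
	fmt.Println(target, arg, value, ok) // a foo 321 true
}
```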
@@ -1071,26 +1059,26 @@ group "default" {
cases := []struct {
name string
overrides []string
wantch1 map[string]*string
wantch2 map[string]*string
wantch1 map[string]string
wantch2 map[string]string
}{
{
name: "nested simple",
overrides: nil,
wantch1: map[string]*string{"BAR": ptrstr("fuu"), "FOO": ptrstr("bar")},
wantch2: map[string]*string{"BAR": ptrstr("fuu"), "FOO": ptrstr("bar"), "FOO2": ptrstr("bar2")},
wantch1: map[string]string{"BAR": "fuu", "FOO": "bar"},
wantch2: map[string]string{"BAR": "fuu", "FOO": "bar", "FOO2": "bar2"},
},
{
name: "nested with overrides first",
overrides: []string{"grandparent.args.BAR=fii", "child1.args.FOO=baaar"},
wantch1: map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("baaar")},
wantch2: map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("bar"), "FOO2": ptrstr("bar2")},
wantch1: map[string]string{"BAR": "fii", "FOO": "baaar"},
wantch2: map[string]string{"BAR": "fii", "FOO": "bar", "FOO2": "bar2"},
},
{
name: "nested with overrides last",
overrides: []string{"grandparent.args.BAR=fii", "child2.args.FOO=baaar"},
wantch1: map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("bar")},
wantch2: map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("baaar"), "FOO2": ptrstr("bar2")},
wantch1: map[string]string{"BAR": "fii", "FOO": "bar"},
wantch2: map[string]string{"BAR": "fii", "FOO": "baaar", "FOO2": "bar2"},
},
}
for _, tt := range cases {
@@ -1099,7 +1087,7 @@ group "default" {
m, g, err := ReadTargets(ctx, []File{f}, []string{"default"}, tt.overrides, nil)
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"child1", "child2"}, g["default"].Targets)
require.Equal(t, []string{"child1", "child2"}, g[0].Targets)
require.Equal(t, 2, len(m))
require.Equal(t, tt.wantch1, m["child1"].Args)
require.Equal(t, []string{"type=docker"}, m["child1"].Outputs)
@@ -1196,67 +1184,44 @@ target "f" {
}`)}
cases := []struct {
names []string
targets []string
groups []string
count int
name string
targets []string
ntargets int
}{
{
names: []string{"a"},
targets: []string{"a"},
groups: []string{"default", "a", "b", "c"},
count: 1,
name: "a",
targets: []string{"b", "c"},
ntargets: 1,
},
{
names: []string{"b"},
targets: []string{"b"},
groups: []string{"default", "b"},
count: 1,
name: "b",
targets: []string{"d"},
ntargets: 1,
},
{
names: []string{"c"},
targets: []string{"c"},
groups: []string{"default", "b", "c"},
count: 1,
name: "c",
targets: []string{"b"},
ntargets: 1,
},
{
names: []string{"d"},
targets: []string{"d"},
groups: []string{"default"},
count: 1,
name: "d",
targets: []string{"d"},
ntargets: 1,
},
{
names: []string{"e"},
targets: []string{"e"},
groups: []string{"default", "a", "b", "c", "e"},
count: 2,
},
{
names: []string{"a", "e"},
targets: []string{"a", "e"},
groups: []string{"default", "a", "b", "c", "e"},
count: 2,
name: "e",
targets: []string{"a", "f"},
ntargets: 2,
},
}
for _, tt := range cases {
tt := tt
t.Run(strings.Join(tt.names, "+"), func(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{f}, tt.names, nil, nil)
t.Run(tt.name, func(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{f}, []string{tt.name}, nil, nil)
require.NoError(t, err)
var gnames []string
for _, g := range g {
gnames = append(gnames, g.Name)
}
sort.Strings(gnames)
sort.Strings(tt.groups)
require.Equal(t, tt.groups, gnames)
sort.Strings(g["default"].Targets)
sort.Strings(tt.targets)
require.Equal(t, tt.targets, g["default"].Targets)
require.Equal(t, tt.count, len(m))
require.Equal(t, 1, len(g))
require.Equal(t, tt.targets, g[0].Targets)
require.Equal(t, tt.ntargets, len(m))
require.Equal(t, ".", *m["d"].Context)
require.Equal(t, "./testdockerfile", *m["d"].Dockerfile)
})
@@ -1289,164 +1254,8 @@ services:
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
require.Equal(t, "foo", c.Targets[0].Args["v1"])
require.Equal(t, "bar", c.Targets[0].Args["v2"])
require.Equal(t, "dir", *c.Targets[0].Context)
require.Equal(t, "Dockerfile-alternate", *c.Targets[0].Dockerfile)
}
func TestHCLNullVars(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`variable "FOO" {
default = null
}
variable "BAR" {
default = null
}
target "default" {
args = {
foo = FOO
bar = "baz"
}
labels = {
"com.docker.app.bar" = BAR
"com.docker.app.baz" = "foo"
}
}`),
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
_, ok := m["default"]
require.True(t, ok)
_, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
require.Equal(t, map[string]*string{"com.docker.app.baz": ptrstr("foo")}, m["default"].Labels)
}
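
TestHCLNullVars asserts that args and labels resolving to HCL `null` disappear from the final maps. A minimal sketch of that filtering step, assuming the newer `map[string]*string` representation in which a nil pointer models null:

```go
package main

import "fmt"

// dropNulls is illustrative only: entries whose value is nil (HCL null) are
// removed, matching the maps the test expects after TargetsToBuildOpt.
func dropNulls(in map[string]*string) map[string]*string {
	out := map[string]*string{}
	for k, v := range in {
		if v != nil {
			out[k] = v
		}
	}
	return out
}

func main() {
	baz := "baz"
	args := map[string]*string{"foo": nil, "bar": &baz}
	for k, v := range dropNulls(args) {
		fmt.Println(k, *v) // bar baz
	}
}
```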
func TestJSONNullVars(t *testing.T) {
fp := File{
Name: "docker-bake.json",
Data: []byte(
`{
"variable": {
"FOO": {
"default": null
}
},
"target": {
"default": {
"args": {
"foo": "${FOO}",
"bar": "baz"
}
}
}
}`),
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
_, ok := m["default"]
require.True(t, ok)
_, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
}
func TestReadLocalFilesDefault(t *testing.T) {
tests := []struct {
filenames []string
expected []string
}{
{
filenames: []string{"abc.yml", "docker-compose.yml"},
expected: []string{"docker-compose.yml"},
},
{
filenames: []string{"test.foo", "compose.yml", "docker-bake.hcl"},
expected: []string{"compose.yml", "docker-bake.hcl"},
},
{
filenames: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
expected: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
},
{
filenames: []string{"test.txt", "compsoe.yaml"}, // intentional misspell
expected: []string{},
},
}
pwd, err := os.Getwd()
require.NoError(t, err)
for _, tt := range tests {
t.Run(strings.Join(tt.filenames, "-"), func(t *testing.T) {
dir := t.TempDir()
t.Cleanup(func() { _ = os.Chdir(pwd) })
require.NoError(t, os.Chdir(dir))
for _, tf := range tt.filenames {
require.NoError(t, os.WriteFile(tf, []byte(tf), 0644))
}
files, err := ReadLocalFiles(nil, nil)
require.NoError(t, err)
if len(files) == 0 {
require.Equal(t, len(tt.expected), len(files))
} else {
found := false
for _, exp := range tt.expected {
for _, f := range files {
if f.Name == exp {
found = true
break
}
}
require.True(t, found, exp)
}
}
})
}
}
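
For context on TestReadLocalFilesDefault: ReadLocalFiles picks up well-known file names from the working directory. The candidate set in the sketch below is inferred only from the cases in this test; the real default list is longer:

```go
package main

import "fmt"

// knownNames holds only the names this test demonstrates as recognized;
// the actual ReadLocalFiles default list includes more variants.
var knownNames = map[string]bool{
	"compose.yaml":       true,
	"compose.yml":        true,
	"docker-compose.yml": true,
	"docker-bake.hcl":    true,
}

func defaultFiles(listing []string) []string {
	var out []string
	for _, name := range listing {
		if knownNames[name] {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(defaultFiles([]string{"abc.yml", "docker-compose.yml"})) // [docker-compose.yml]
}
```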
func TestAttestDuplicates(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "default" {
attest = ["type=sbom", "type=sbom,generator=custom", "type=sbom,foo=bar", "type=provenance,mode=max"]
}`),
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
require.Equal(t, []string{"type=sbom,foo=bar", "type=provenance,mode=max"}, m["default"].Attest)
require.NoError(t, err)
opts, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{
"sbom": ptrstr("type=sbom,foo=bar"),
"provenance": ptrstr("type=provenance,mode=max"),
}, opts["default"].Attests)
m, _, err = ReadTargets(ctx, []File{fp}, []string{"default"}, []string{"*.attest=type=sbom,disabled=true"}, nil)
require.Equal(t, []string{"type=sbom,disabled=true", "type=provenance,mode=max"}, m["default"].Attest)
require.NoError(t, err)
opts, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{
"sbom": nil,
"provenance": ptrstr("type=provenance,mode=max"),
}, opts["default"].Attests)
}
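
The de-duplication TestAttestDuplicates asserts can be read as: key each entry by its leading `type=` segment, let a later entry win, and keep first-seen order. A sketch that reproduces the expected outputs above — not the bake implementation itself:

```go
package main

import (
	"fmt"
	"strings"
)

// dedupAttest keys entries by the leading "type=..." segment; a later entry
// replaces an earlier one, and first-seen order is preserved.
func dedupAttest(in []string) []string {
	var order []string
	byType := map[string]string{}
	for _, s := range in {
		typ, _, _ := strings.Cut(s, ",")
		if _, seen := byType[typ]; !seen {
			order = append(order, typ)
		}
		byType[typ] = s
	}
	out := make([]string, 0, len(order))
	for _, typ := range order {
		out = append(out, byType[typ])
	}
	return out
}

func main() {
	fmt.Println(dedupAttest([]string{
		"type=sbom", "type=sbom,generator=custom", "type=sbom,foo=bar", "type=provenance,mode=max",
	})) // [type=sbom,foo=bar type=provenance,mode=max]
}
```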



@@ -28,14 +28,10 @@ func ParseComposeFiles(fs []File) (*Config, error) {
}
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.Load(compose.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
})
if err != nil {
@@ -69,19 +65,6 @@ func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, e
dockerfilePath := s.Build.Dockerfile
dockerfilePathP = &dockerfilePath
}
var dockerfileInlineP *string
if s.Build.DockerfileInline != "" {
dockerfileInline := s.Build.DockerfileInline
dockerfileInlineP = &dockerfileInline
}
var additionalContexts map[string]string
if s.Build.AdditionalContexts != nil {
additionalContexts = map[string]string{}
for k, v := range s.Build.AdditionalContexts {
additionalContexts[k] = v
}
}
var secrets []string
for _, bs := range s.Build.Secrets {
@@ -92,22 +75,13 @@ func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, e
secrets = append(secrets, secret)
}
// compose does not support nil values for labels
labels := map[string]*string{}
for k, v := range s.Build.Labels {
v := v
labels[k] = &v
}
g.Targets = append(g.Targets, targetName)
t := &Target{
Name: targetName,
Context: contextPathP,
Contexts: additionalContexts,
Dockerfile: dockerfilePathP,
DockerfileInline: dockerfileInlineP,
Tags: s.Build.Tags,
Labels: labels,
Name: targetName,
Context: contextPathP,
Dockerfile: dockerfilePathP,
Tags: s.Build.Tags,
Labels: s.Build.Labels,
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
if val, ok := s.Environment[val]; ok && val != nil {
return *val, true
@@ -164,7 +138,6 @@ func validateCompose(dt []byte, envs map[string]string) error {
},
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
// consistency is checked later in ParseCompose to ensure multiple
// compose files can be merged together
@@ -205,7 +178,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return nil, err
}
envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
envs, err := dotenv.UnmarshalBytes(dt)
if err != nil {
return nil, err
}
@@ -220,16 +193,16 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return curenv, nil
}
func flatten(in compose.MappingWithEquals) map[string]*string {
func flatten(in compose.MappingWithEquals) compose.Mapping {
if len(in) == 0 {
return nil
}
out := map[string]*string{}
out := compose.Mapping{}
for k, v := range in {
if v == nil {
continue
}
out[k] = v
out[k] = *v
}
return out
}
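
Both sides of the flatten hunk skip nil values — in compose, an arg given as bare `FOO` (no `=value`) arrives as a nil pointer. A self-contained sketch of the 0.9 signature, with local stand-ins for the compose-go types:

```go
package main

import "fmt"

// Local stand-ins for compose-go's types: MappingWithEquals carries a nil
// pointer for "FOO" given without a value; Mapping is a plain string map.
type (
	Mapping           map[string]string
	MappingWithEquals map[string]*string
)

func flatten(in MappingWithEquals) Mapping {
	if len(in) == 0 {
		return nil
	}
	out := Mapping{}
	for k, v := range in {
		if v == nil {
			continue // unset args are dropped rather than kept as empty strings
		}
		out[k] = *v
	}
	return out
}

func main() {
	baz := "baz"
	fmt.Println(flatten(MappingWithEquals{"FOO": nil, "bar": &baz})) // map[bar:baz]
}
```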
@@ -249,7 +222,7 @@ type xbake struct {
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
Contexts stringMap `yaml:"contexts,omitempty"`
// don't forget to update documentation if you add a new field:
// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
// docs/guides/bake/compose-file.md#extension-field-with-x-bake
}
type stringMap map[string]string


@@ -21,8 +21,6 @@ services:
webapp:
build:
context: ./dir
additional_contexts:
foo: /bar
dockerfile: Dockerfile-alternate
network:
none
@@ -35,11 +33,6 @@ services:
secrets:
- token
- aws
webapp2:
build:
context: ./dir
dockerfile_inline: |
FROM alpine
secrets:
token:
environment: ENV_TOKEN
@@ -53,9 +46,9 @@ secrets:
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
sort.Strings(c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp", "webapp2"}, c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 3, len(c.Targets))
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
@@ -65,10 +58,9 @@ secrets:
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, "./dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "/bar"}, c.Targets[1].Contexts)
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, "123", c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
@@ -76,10 +68,6 @@ secrets:
"id=token,env=ENV_TOKEN",
"id=aws,src=/root/.aws/credentials",
}, c.Targets[1].Secrets)
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, "./dir", *c.Targets[2].Context)
require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
}
func TestNoBuildOutOfTreeService(t *testing.T) {
@@ -161,15 +149,18 @@ services:
BRB: FOO
`)
t.Setenv("FOO", "bar")
t.Setenv("BAR", "foo")
t.Setenv("ZZZ_BAR", "zzz_foo")
os.Setenv("FOO", "bar")
defer os.Unsetenv("FOO")
os.Setenv("BAR", "foo")
defer os.Unsetenv("BAR")
os.Setenv("ZZZ_BAR", "zzz_foo")
defer os.Unsetenv("ZZZ_BAR")
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
require.Equal(t, ptrstr("FOO"), c.Targets[0].Args["BRB"])
require.Equal(t, "bar", c.Targets[0].Args["FOO"])
require.Equal(t, "zzz_foo", c.Targets[0].Args["BAR"])
require.Equal(t, "FOO", c.Targets[0].Args["BRB"])
}
func TestInconsistentComposeFile(t *testing.T) {
@@ -317,7 +308,7 @@ services:
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
require.Equal(t, map[string]string{"CT_ECR": "foo", "CT_TAG": "bar"}, c.Targets[0].Args)
require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
@@ -390,7 +381,7 @@ services:
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
require.Equal(t, map[string]string{"CT_ECR": "foo", "FOO": "bsdf -csdf", "NODE_ENV": "test"}, c.Targets[0].Args)
}
func TestDotEnv(t *testing.T) {
@@ -414,7 +405,7 @@ services:
Data: dt,
}})
require.NoError(t, err)
require.Equal(t, map[string]*string{"FOO": ptrstr("bar")}, c.Targets[0].Args)
require.Equal(t, map[string]string{"FOO": "bar"}, c.Targets[0].Args)
}
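
TestDotEnv depends on loadDotEnv (see the compose.go hunk earlier) reading a `.env` file from the working directory. A minimal sketch of the KEY=VALUE parsing involved — the real code delegates to compose-go's dotenv package:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDotEnv is a minimal illustration only: blank lines and #-comments are
// skipped, everything else is split on the first "=".
func parseDotEnv(dt []byte) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(string(dt), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	fmt.Println(parseDotEnv([]byte("FOO=bar\n# comment\n"))) // map[FOO:bar]
}
```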
func TestPorts(t *testing.T) {
@@ -638,22 +629,6 @@ target "default" {
}
}
func TestComposeNullArgs(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: .
args:
FOO: null
bar: "baz"
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
}
// chdir changes the current working directory to the named directory,
// and then restores the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {


@@ -1,7 +1,7 @@
package bake
import (
"reflect"
"os"
"testing"
"github.com/stretchr/testify/require"
@@ -54,7 +54,7 @@ func TestHCLBasic(t *testing.T) {
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, "123", c.Targets[1].Args["buildno"])
require.Equal(t, c.Targets[2].Name, "cross")
require.Equal(t, 2, len(c.Targets[2].Platforms))
@@ -62,7 +62,7 @@ func TestHCLBasic(t *testing.T) {
require.Equal(t, c.Targets[3].Name, "webapp-plus")
require.Equal(t, 1, len(c.Targets[3].Args))
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
}
func TestHCLBasicInJSON(t *testing.T) {
@@ -114,7 +114,7 @@ func TestHCLBasicInJSON(t *testing.T) {
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, "123", c.Targets[1].Args["buildno"])
require.Equal(t, c.Targets[2].Name, "cross")
require.Equal(t, 2, len(c.Targets[2].Platforms))
@@ -122,7 +122,7 @@ func TestHCLBasicInJSON(t *testing.T) {
require.Equal(t, c.Targets[3].Name, "webapp-plus")
require.Equal(t, 1, len(c.Targets[3].Args))
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
}
func TestHCLWithFunctions(t *testing.T) {
@@ -147,7 +147,7 @@ func TestHCLWithFunctions(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
require.Equal(t, "124", c.Targets[0].Args["buildno"])
}
func TestHCLWithUserDefinedFunctions(t *testing.T) {
@@ -177,7 +177,7 @@ func TestHCLWithUserDefinedFunctions(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
require.Equal(t, "124", c.Targets[0].Args["buildno"])
}
func TestHCLWithVariables(t *testing.T) {
@@ -206,9 +206,9 @@ func TestHCLWithVariables(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])
require.Equal(t, "123", c.Targets[0].Args["buildno"])
t.Setenv("BUILD_NUMBER", "456")
os.Setenv("BUILD_NUMBER", "456")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
@@ -219,7 +219,7 @@ func TestHCLWithVariables(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
require.Equal(t, "456", c.Targets[0].Args["buildno"])
}
func TestHCLWithVariablesInFunctions(t *testing.T) {
@@ -244,7 +244,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)
t.Setenv("REPO", "docker/buildx")
os.Setenv("REPO", "docker/buildx")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
@@ -280,10 +280,10 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])
require.Equal(t, "pre-abc", c.Targets[0].Args["v1"])
require.Equal(t, "abc-post", c.Targets[0].Args["v2"])
t.Setenv("FOO", "def")
os.Setenv("FOO", "def")
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
@@ -293,11 +293,12 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
require.Equal(t, "pre-def", c.Targets[0].Args["v1"])
require.Equal(t, "def-post", c.Targets[0].Args["v2"])
}
func TestHCLVarsWithVars(t *testing.T) {
os.Unsetenv("FOO")
dt := []byte(`
variable "FOO" {
default = upper("${BASE}def")
@@ -329,10 +330,10 @@ func TestHCLVarsWithVars(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])
require.Equal(t, "pre--ABCDEF-", c.Targets[0].Args["v1"])
require.Equal(t, "ABCDEF-post", c.Targets[0].Args["v2"])
t.Setenv("BASE", "new")
os.Setenv("BASE", "new")
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
@@ -342,11 +343,12 @@ func TestHCLVarsWithVars(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
require.Equal(t, "pre--NEWDEF-", c.Targets[0].Args["v1"])
require.Equal(t, "NEWDEF-post", c.Targets[0].Args["v2"])
}
func TestHCLTypedVariables(t *testing.T) {
os.Unsetenv("FOO")
dt := []byte(`
variable "FOO" {
default = 3
@@ -367,80 +369,33 @@ func TestHCLTypedVariables(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])
require.Equal(t, "lower", c.Targets[0].Args["v1"])
require.Equal(t, "yes", c.Targets[0].Args["v2"])
t.Setenv("FOO", "5.1")
t.Setenv("IS_FOO", "0")
os.Setenv("FOO", "5.1")
os.Setenv("IS_FOO", "0")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])
require.Equal(t, "higher", c.Targets[0].Args["v1"])
require.Equal(t, "no", c.Targets[0].Args["v2"])
t.Setenv("FOO", "NaN")
os.Setenv("FOO", "NaN")
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
require.Contains(t, err.Error(), "failed to parse FOO as number")
t.Setenv("FOO", "0")
t.Setenv("IS_FOO", "maybe")
os.Setenv("FOO", "0")
os.Setenv("IS_FOO", "maybe")
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
require.Contains(t, err.Error(), "failed to parse IS_FOO as bool")
}
func TestHCLNullVariables(t *testing.T) {
dt := []byte(`
variable "FOO" {
default = null
}
target "default" {
args = {
foo = FOO
}
}`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])
t.Setenv("FOO", "bar")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
}
func TestJSONNullVariables(t *testing.T) {
dt := []byte(`{
"variable": {
"FOO": {
"default": null
}
},
"target": {
"default": {
"args": {
"foo": "${FOO}"
}
}
}
}`)
c, err := ParseFile(dt, "docker-bake.json")
require.NoError(t, err)
require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])
t.Setenv("FOO", "bar")
c, err = ParseFile(dt, "docker-bake.json")
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
}
func TestHCLVariableCycle(t *testing.T) {
dt := []byte(`
variable "FOO" {
@@ -476,107 +431,19 @@ func TestHCLAttrs(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
require.Equal(t, "attr-abcdef", c.Targets[0].Args["v1"])
// env does not apply if no variable
t.Setenv("FOO", "bar")
os.Setenv("FOO", "bar")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
require.Equal(t, "attr-abcdef", c.Targets[0].Args["v1"])
// attr-multifile
}
func TestHCLTargetAttrs(t *testing.T) {
dt := []byte(`
target "foo" {
dockerfile = "xxx"
context = target.bar.context
target = target.foo.dockerfile
}
target "bar" {
dockerfile = target.foo.dockerfile
context = "yyy"
target = target.bar.context
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "foo", c.Targets[0].Name)
require.Equal(t, "bar", c.Targets[1].Name)
require.Equal(t, "xxx", *c.Targets[0].Dockerfile)
require.Equal(t, "yyy", *c.Targets[0].Context)
require.Equal(t, "xxx", *c.Targets[0].Target)
require.Equal(t, "xxx", *c.Targets[1].Dockerfile)
require.Equal(t, "yyy", *c.Targets[1].Context)
require.Equal(t, "yyy", *c.Targets[1].Target)
}
func TestHCLTargetGlobal(t *testing.T) {
dt := []byte(`
target "foo" {
dockerfile = "x"
}
x = target.foo.dockerfile
y = x
target "bar" {
dockerfile = y
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "foo", c.Targets[0].Name)
require.Equal(t, "bar", c.Targets[1].Name)
require.Equal(t, "x", *c.Targets[0].Dockerfile)
require.Equal(t, "x", *c.Targets[1].Dockerfile)
}
func TestHCLTargetAttrName(t *testing.T) {
dt := []byte(`
target "foo" {
dockerfile = target.foo.name
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "foo", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
}
func TestHCLTargetAttrEmptyChain(t *testing.T) {
dt := []byte(`
target "foo" {
# dockerfile = Dockerfile
context = target.foo.dockerfile
target = target.foo.context
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "foo", c.Targets[0].Name)
require.Nil(t, c.Targets[0].Dockerfile)
require.Nil(t, c.Targets[0].Context)
require.Nil(t, c.Targets[0].Target)
}
func TestHCLAttrsCustomType(t *testing.T) {
dt := []byte(`
platforms=["linux/arm64", "linux/amd64"]
@@ -594,10 +461,11 @@ func TestHCLAttrsCustomType(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
require.Equal(t, "linux/arm64", c.Targets[0].Args["v1"])
}
func TestHCLMultiFileAttrs(t *testing.T) {
os.Unsetenv("FOO")
dt := []byte(`
variable "FOO" {
default = "abc"
@@ -619,9 +487,9 @@ func TestHCLMultiFileAttrs(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
require.Equal(t, "pre-def", c.Targets[0].Args["v1"])
t.Setenv("FOO", "ghi")
os.Setenv("FOO", "ghi")
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
@@ -631,463 +499,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
}
func TestHCLDuplicateTarget(t *testing.T) {
dt := []byte(`
target "app" {
dockerfile = "x"
}
target "app" {
dockerfile = "y"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, "y", *c.Targets[0].Dockerfile)
}
func TestHCLRenameTarget(t *testing.T) {
dt := []byte(`
target "abc" {
name = "xyz"
dockerfile = "foo"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "requires matrix")
}
func TestHCLRenameGroup(t *testing.T) {
dt := []byte(`
group "foo" {
name = "bar"
targets = ["x", "y"]
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "not supported")
dt = []byte(`
group "foo" {
matrix = {
name = ["x", "y"]
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "not supported")
}
func TestHCLRenameTargetAttrs(t *testing.T) {
dt := []byte(`
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
target "def" {
dockerfile = target.xyz.dockerfile
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "xyz", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "def", c.Targets[1].Name)
require.Equal(t, "foo", *c.Targets[1].Dockerfile)
dt = []byte(`
target "def" {
dockerfile = target.xyz.dockerfile
}
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
`)
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "def", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "xyz", c.Targets[1].Name)
require.Equal(t, "foo", *c.Targets[1].Dockerfile)
dt = []byte(`
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
target "def" {
dockerfile = target.abc.dockerfile
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "abc")
dt = []byte(`
target "def" {
dockerfile = target.abc.dockerfile
}
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "abc")
}
func TestHCLRenameSplit(t *testing.T) {
dt := []byte(`
target "x" {
name = "y"
matrix = {}
dockerfile = "foo"
}
target "x" {
name = "z"
matrix = {}
dockerfile = "bar"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "y", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "z", c.Targets[1].Name)
require.Equal(t, "bar", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "x", c.Groups[0].Name)
require.Equal(t, []string{"y", "z"}, c.Groups[0].Targets)
}
func TestHCLRenameMultiFile(t *testing.T) {
dt := []byte(`
target "foo" {
name = "bar"
matrix = {}
dockerfile = "x"
}
`)
dt2 := []byte(`
target "foo" {
context = "y"
}
`)
dt3 := []byte(`
target "bar" {
target = "z"
}
`)
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
{Data: dt3, Name: "c3.hcl"},
}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "bar")
require.Equal(t, *c.Targets[0].Dockerfile, "x")
require.Equal(t, *c.Targets[0].Target, "z")
require.Equal(t, c.Targets[1].Name, "foo")
require.Equal(t, *c.Targets[1].Context, "y")
}
func TestHCLMatrixBasic(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["x", "y"]
}
name = foo
dockerfile = "${foo}.Dockerfile"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "x")
require.Equal(t, c.Targets[1].Name, "y")
require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.Equal(t, []string{"x", "y"}, c.Groups[0].Targets)
}
func TestHCLMatrixMultipleKeys(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["a"]
bar = ["b", "c"]
baz = ["d", "e", "f"]
}
name = "${foo}-${bar}-${baz}"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 6, len(c.Targets))
names := make([]string, len(c.Targets))
for i, t := range c.Targets {
names[i] = t.Name
}
require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, names)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, c.Groups[0].Targets)
}
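
A sketch of the cartesian expansion the matrix tests assert: every combination of matrix values yields one target, named through the `name` template. This is an illustrative product over string axes, matching the 1×2×3 = 6 combinations above; the real HCL parser evaluates this through its own block machinery:

```go
package main

import (
	"fmt"
	"sort"
)

// expand computes the cartesian product of matrix axes. Keys are sorted so
// the expansion order is deterministic; the value order within each axis is
// preserved.
func expand(matrix map[string][]string) []map[string]string {
	keys := make([]string, 0, len(matrix))
	for k := range matrix {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	combos := []map[string]string{{}}
	for _, k := range keys {
		var next []map[string]string
		for _, c := range combos {
			for _, v := range matrix[k] {
				m := map[string]string{}
				for ck, cv := range c {
					m[ck] = cv
				}
				m[k] = v
				next = append(next, m)
			}
		}
		combos = next
	}
	return combos
}

func main() {
	got := expand(map[string][]string{"foo": {"a"}, "bar": {"b", "c"}, "baz": {"d", "e", "f"}})
	fmt.Println(len(got)) // 6
}
```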
func TestHCLMatrixLists(t *testing.T) {
dt := []byte(`
target "foo" {
matrix = {
aa = [["aa", "bb"], ["cc", "dd"]]
}
name = aa[0]
args = {
target = "val${aa[1]}"
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "aa", c.Targets[0].Name)
require.Equal(t, ptrstr("valbb"), c.Targets[0].Args["target"])
require.Equal(t, "cc", c.Targets[1].Name)
require.Equal(t, ptrstr("valdd"), c.Targets[1].Args["target"])
}
func TestHCLMatrixMaps(t *testing.T) {
dt := []byte(`
target "foo" {
matrix = {
aa = [
{
foo = "aa"
bar = "bb"
},
{
foo = "cc"
bar = "dd"
}
]
}
name = aa.foo
args = {
target = "val${aa.bar}"
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "aa")
require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
require.Equal(t, c.Targets[1].Name, "cc")
require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
}
func TestHCLMatrixMultipleTargets(t *testing.T) {
dt := []byte(`
target "x" {
matrix = {
foo = ["a", "b"]
}
name = foo
}
target "y" {
matrix = {
bar = ["c", "d"]
}
name = bar
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 4, len(c.Targets))
names := make([]string, len(c.Targets))
for i, t := range c.Targets {
names[i] = t.Name
}
require.ElementsMatch(t, []string{"a", "b", "c", "d"}, names)
require.Equal(t, 2, len(c.Groups))
names = make([]string, len(c.Groups))
for i, c := range c.Groups {
names[i] = c.Name
}
require.ElementsMatch(t, []string{"x", "y"}, names)
for _, g := range c.Groups {
switch g.Name {
case "x":
require.Equal(t, []string{"a", "b"}, g.Targets)
case "y":
require.Equal(t, []string{"c", "d"}, g.Targets)
}
}
}
func TestHCLMatrixDuplicateNames(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["a", "b"]
}
name = "c"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
}
func TestHCLMatrixArgs(t *testing.T) {
dt := []byte(`
a = 1
variable "b" {
default = 2
}
target "default" {
matrix = {
foo = [a, b]
}
name = foo
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "1", c.Targets[0].Name)
require.Equal(t, "2", c.Targets[1].Name)
}
func TestHCLMatrixArgsOverride(t *testing.T) {
dt := []byte(`
variable "ABC" {
default = "def"
}
target "bar" {
matrix = {
aa = split(",", ABC)
}
name = "bar-${aa}"
args = {
foo = aa
}
}
`)
c, err := ParseFiles([]File{
{Data: dt, Name: "docker-bake.hcl"},
}, map[string]string{"ABC": "11,22,33"})
require.NoError(t, err)
require.Equal(t, 3, len(c.Targets))
require.Equal(t, "bar-11", c.Targets[0].Name)
require.Equal(t, "bar-22", c.Targets[1].Name)
require.Equal(t, "bar-33", c.Targets[2].Name)
require.Equal(t, ptrstr("11"), c.Targets[0].Args["foo"])
require.Equal(t, ptrstr("22"), c.Targets[1].Args["foo"])
require.Equal(t, ptrstr("33"), c.Targets[2].Args["foo"])
}
func TestHCLMatrixBadTypes(t *testing.T) {
dt := []byte(`
target "default" {
matrix = "test"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = ["test"]
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
["a"] = ["b"]
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
1 = 2
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
a = "b"
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
require.Equal(t, "pre-ghi", c.Targets[0].Args["v1"])
}
func TestJSONAttributes(t *testing.T) {
@@ -1098,7 +510,7 @@ func TestJSONAttributes(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
require.Equal(t, "pre-abc-def", c.Targets[0].Args["v1"])
}
func TestJSONFunctions(t *testing.T) {
@@ -1123,25 +535,7 @@ func TestJSONFunctions(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
}
func TestJSONInvalidFunctions(t *testing.T) {
dt := []byte(`{
"target": {
"app": {
"args": {
"v1": "myfunc(\"foo\")"
}
}
}}`)
c, err := ParseFile(dt, "docker-bake.json")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
require.Equal(t, "pre-<FOO-abc>", c.Targets[0].Args["v1"])
}
func TestHCLFunctionInAttr(t *testing.T) {
@@ -1169,7 +563,7 @@ func TestHCLFunctionInAttr(t *testing.T) {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
require.Equal(t, "FOO <> [baz]", c.Targets[0].Args["v1"])
}
func TestHCLCombineCompose(t *testing.T) {
@@ -1200,8 +594,8 @@ services:
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
require.Equal(t, "foo", c.Targets[0].Args["v1"])
require.Equal(t, "bar", c.Targets[0].Args["v2"])
require.Equal(t, "dir", *c.Targets[0].Context)
require.Equal(t, "Dockerfile-alternate", *c.Targets[0].Dockerfile)
}
@@ -1346,10 +740,10 @@ target "two" {
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "one")
require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)
require.Equal(t, map[string]string{"a": "pre-ghi-jkl"}, c.Targets[0].Args)
require.Equal(t, c.Targets[1].Name, "two")
require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
require.Equal(t, map[string]string{"b": "pre-jkl"}, c.Targets[1].Args)
}
func TestEmptyVariableJSON(t *testing.T) {
@@ -1388,24 +782,3 @@ func TestFunctionNoResult(t *testing.T) {
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
}
func TestVarUnsupportedType(t *testing.T) {
dt := []byte(`
variable "FOO" {
default = []
}
target "default" {}`)
t.Setenv("FOO", "bar")
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
}
func ptrstr(s interface{}) *string {
var n *string
if reflect.ValueOf(s).Kind() == reflect.String {
ss := s.(string)
n = &ss
}
return n
}
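
The ptrstr helper above underpins most assertions on the newer side of this diff: it yields a *string for string inputs and nil for anything else, which is how the tests model HCL null. A runnable demonstration using the helper exactly as defined:

```go
package main

import (
	"fmt"
	"reflect"
)

// ptrstr as defined in the test file above: a *string for string inputs,
// nil for any other input (including nil itself).
func ptrstr(s interface{}) *string {
	var n *string
	if reflect.ValueOf(s).Kind() == reflect.String {
		ss := s.(string)
		n = &ss
	}
	return n
}

func main() {
	fmt.Println(*ptrstr("bar"))     // bar
	fmt.Println(ptrstr(nil) == nil) // true
}
```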


@@ -1,103 +0,0 @@
package hclparser
import (
"github.com/hashicorp/hcl/v2"
)
type filterBody struct {
body hcl.Body
schema *hcl.BodySchema
exclude bool
}
func FilterIncludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
return &filterBody{
body: body,
schema: schema,
}
}
func FilterExcludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
return &filterBody{
body: body,
schema: schema,
exclude: true,
}
}
func (b *filterBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
if b.exclude {
schema = subtractSchemas(schema, b.schema)
} else {
schema = intersectSchemas(schema, b.schema)
}
content, _, diag := b.body.PartialContent(schema)
return content, diag
}
func (b *filterBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
if b.exclude {
schema = subtractSchemas(schema, b.schema)
} else {
schema = intersectSchemas(schema, b.schema)
}
return b.body.PartialContent(schema)
}
func (b *filterBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
return b.body.JustAttributes()
}
func (b *filterBody) MissingItemRange() hcl.Range {
return b.body.MissingItemRange()
}
func intersectSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
result := &hcl.BodySchema{}
for _, blockA := range a.Blocks {
for _, blockB := range b.Blocks {
if blockA.Type == blockB.Type {
result.Blocks = append(result.Blocks, blockA)
break
}
}
}
for _, attrA := range a.Attributes {
for _, attrB := range b.Attributes {
if attrA.Name == attrB.Name {
result.Attributes = append(result.Attributes, attrA)
break
}
}
}
return result
}
func subtractSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
result := &hcl.BodySchema{}
for _, blockA := range a.Blocks {
found := false
for _, blockB := range b.Blocks {
if blockA.Type == blockB.Type {
found = true
break
}
}
if !found {
result.Blocks = append(result.Blocks, blockA)
}
}
for _, attrA := range a.Attributes {
found := false
for _, attrB := range b.Attributes {
if attrA.Name == attrB.Name {
found = true
break
}
}
if !found {
result.Attributes = append(result.Attributes, attrA)
}
}
return result
}
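
intersectSchemas keeps only the blocks and attributes present in both schemas; subtractSchemas keeps those of `a` absent from `b`. A quick test-style sketch that would sit in the same hclparser package (it calls the unexported helpers above, so it is not standalone):

```go
package hclparser

import (
	"testing"

	"github.com/hashicorp/hcl/v2"
)

func TestSchemaSetOps(t *testing.T) {
	a := &hcl.BodySchema{Attributes: []hcl.AttributeSchema{{Name: "context"}, {Name: "dockerfile"}}}
	b := &hcl.BodySchema{Attributes: []hcl.AttributeSchema{{Name: "dockerfile"}}}

	// Intersection keeps only the attribute present in both schemas.
	if got := intersectSchemas(a, b); len(got.Attributes) != 1 || got.Attributes[0].Name != "dockerfile" {
		t.Fatalf("intersect: %+v", got.Attributes)
	}
	// Subtraction keeps only the attribute of a that b lacks.
	if got := subtractSchemas(a, b); len(got.Attributes) != 1 || got.Attributes[0].Name != "context" {
		t.Fatalf("subtract: %+v", got.Attributes)
	}
}
```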


@@ -14,7 +14,15 @@ func funcCalls(exp hcl.Expression) ([]string, hcl.Diagnostics) {
if !ok {
fns, err := jsonFuncCallsRecursive(exp)
if err != nil {
return nil, wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: err.Error(),
Subject: exp.Range().Ptr(),
Context: exp.Range().Ptr(),
},
}
}
return fns, nil
}
@@ -75,11 +83,11 @@ func appendJSONFuncCalls(exp hcl.Expression, m map[string]struct{}) error {
// hcl/v2/json/ast#stringVal
val := src.FieldByName("Value")
if !val.IsValid() || val.IsZero() {
if val.IsZero() {
return nil
}
rng := src.FieldByName("SrcRange")
if rng.IsZero() {
if val.IsZero() {
return nil
}
var stringVal struct {


@@ -1,9 +1,7 @@
package hclparser
import (
"encoding/binary"
"fmt"
"hash/fnv"
"math"
"math/big"
"reflect"
@@ -15,7 +13,6 @@ import (
"github.com/hashicorp/hcl/v2/gohcl"
"github.com/pkg/errors"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/gocty"
)
type Opt struct {
@@ -51,42 +48,30 @@ type parser struct {
attrs map[string]*hcl.Attribute
funcs map[string]*functionDef
blocks map[string]map[string][]*hcl.Block
blockValues map[*hcl.Block][]reflect.Value
blockEvalCtx map[*hcl.Block][]*hcl.EvalContext
blockNames map[*hcl.Block][]string
blockTypes map[string]reflect.Type
ectx *hcl.EvalContext
progressV map[uint64]struct{}
progressF map[uint64]struct{}
progressB map[uint64]map[string]struct{}
doneB map[uint64]map[string]struct{}
progress map[string]struct{}
progressF map[string]struct{}
doneF map[string]struct{}
}
type WithEvalContexts interface {
GetEvalContexts(base *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error)
}
type WithGetName interface {
GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
}
var errUndefined = errors.New("undefined")
func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}) hcl.Diagnostics {
fns, hcldiags := funcCalls(exp)
if hcldiags.HasErrors() {
return hcldiags
}
for _, fn := range fns {
if err := p.resolveFunction(ectx, fn); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
if err := p.resolveFunction(fn); err != nil {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: err.Error(),
Subject: exp.Range().Ptr(),
Context: exp.Range().Ptr(),
},
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
}
}
@@ -94,61 +79,15 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
if _, ok := exclude[v.RootName()]; ok {
continue
}
if _, ok := p.blockTypes[v.RootName()]; ok {
blockType := v.RootName()
split := v.SimpleSplit().Rel
if len(split) == 0 {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: fmt.Sprintf("cannot access %s as a variable", blockType),
Subject: exp.Range().Ptr(),
Context: exp.Range().Ptr(),
},
}
}
blockName, ok := split[0].(hcl.TraverseAttr)
if !ok {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: fmt.Sprintf("cannot traverse %s without attribute", blockType),
Subject: exp.Range().Ptr(),
Context: exp.Range().Ptr(),
},
}
}
blocks := p.blocks[blockType][blockName.Name]
if len(blocks) == 0 {
continue
}
var target *hcl.BodySchema
if len(split) > 1 {
if attr, ok := split[1].(hcl.TraverseAttr); ok {
target = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: attr.Name}},
Blocks: []hcl.BlockHeaderSchema{{Type: attr.Name}},
}
}
}
for _, block := range blocks {
if err := p.resolveBlock(block, target); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
}
}
} else {
if err := p.resolveValue(ectx, v.RootName()); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
if err := p.resolveValue(v.RootName()); err != nil {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: err.Error(),
Subject: v.SourceRange().Ptr(),
Context: v.SourceRange().Ptr(),
},
}
}
}
@@ -156,23 +95,21 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
return nil
}
// resolveFunction forces evaluation of a function, storing the result into the
// parser.
func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
if _, ok := p.ectx.Functions[name]; ok {
return nil
}
if _, ok := ectx.Functions[name]; ok {
func (p *parser) resolveFunction(name string) error {
if _, ok := p.doneF[name]; ok {
return nil
}
f, ok := p.funcs[name]
if !ok {
return errors.Wrapf(errUndefined, "function %q does not exist", name)
if _, ok := p.ectx.Functions[name]; ok {
return nil
}
return errors.Errorf("undefined function %s", name)
}
if _, ok := p.progressF[key(ectx, name)]; ok {
if _, ok := p.progressF[name]; ok {
return errors.Errorf("function cycle not allowed for %s", name)
}
p.progressF[key(ectx, name)] = struct{}{}
p.progressF[name] = struct{}{}
if f.Result == nil {
return errors.Errorf("empty result not allowed for %s", name)
@@ -217,7 +154,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
return diags
}
if diags := p.loadDeps(p.ectx, f.Result.Expr, params, false); diags.HasErrors() {
if diags := p.loadDeps(f.Result.Expr, params); diags.HasErrors() {
return diags
}
@@ -227,24 +164,20 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
if diags.HasErrors() {
return diags
}
p.doneF[name] = struct{}{}
p.ectx.Functions[name] = v
return nil
}
// resolveValue forces evaluation of a named value, storing the result into the
// parser.
func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
func (p *parser) resolveValue(name string) (err error) {
if _, ok := p.ectx.Variables[name]; ok {
return nil
}
if _, ok := ectx.Variables[name]; ok {
return nil
}
if _, ok := p.progressV[key(ectx, name)]; ok {
if _, ok := p.progress[name]; ok {
return errors.Errorf("variable cycle not allowed for %s", name)
}
p.progressV[key(ectx, name)] = struct{}{}
p.progress[name] = struct{}{}
var v *cty.Value
defer func() {
@@ -257,10 +190,9 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
if _, builtin := p.opt.Vars[name]; !ok && !builtin {
vr, ok := p.vars[name]
if !ok {
return errors.Wrapf(errUndefined, "variable %q does not exist", name)
return errors.Errorf("undefined variable %q", name)
}
def = vr.Default
ectx = p.ectx
}
if def == nil {
@@ -273,10 +205,10 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
return
}
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
if diags := p.loadDeps(def.Expr, nil); diags.HasErrors() {
return diags
}
vv, diags := def.Expr.Value(ectx)
vv, diags := def.Expr.Value(p.ectx)
if diags.HasErrors() {
return diags
}
@@ -284,16 +216,19 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
_, isVar := p.vars[name]
if envv, ok := p.opt.LookupVar(name); ok && isVar {
switch {
case vv.Type().Equals(cty.Bool):
if vv.Type().Equals(cty.Bool) {
b, err := strconv.ParseBool(envv)
if err != nil {
return errors.Wrapf(err, "failed to parse %s as bool", name)
}
vv = cty.BoolVal(b)
case vv.Type().Equals(cty.String), vv.Type().Equals(cty.DynamicPseudoType):
vv = cty.StringVal(envv)
case vv.Type().Equals(cty.Number):
vv := cty.BoolVal(b)
v = &vv
return nil
} else if vv.Type().Equals(cty.String) {
vv := cty.StringVal(envv)
v = &vv
return nil
} else if vv.Type().Equals(cty.Number) {
n, err := strconv.ParseFloat(envv, 64)
if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
err = errors.Errorf("invalid number value")
@@ -301,240 +236,19 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
if err != nil {
return errors.Wrapf(err, "failed to parse %s as number", name)
}
vv = cty.NumberVal(big.NewFloat(n))
default:
vv := cty.NumberVal(big.NewFloat(n))
v = &vv
return nil
} else {
// TODO: support lists with csv values
return errors.Errorf("unsupported type %s for variable %s", vv.Type().FriendlyName(), name)
return errors.Errorf("unsupported type %s for variable %s", v.Type(), name)
}
}
v = &vv
return nil
}
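An aside on this hunk: resolveValue coerces an environment override to the type of the variable's default value. Below is a minimal, dependency-free sketch of that dispatch; the helper name and the plain type-string enum are illustrative stand-ins for the cty types, not buildx code:

package main

import (
	"fmt"
	"math"
	"strconv"
)

// parseByType mirrors the switch above: the type of the variable's default
// value decides how the raw environment string is interpreted.
func parseByType(envv, typ string) (any, error) {
	switch typ {
	case "bool":
		b, err := strconv.ParseBool(envv)
		return b, err
	case "string":
		return envv, nil
	case "number":
		n, err := strconv.ParseFloat(envv, 64)
		if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
			err = fmt.Errorf("invalid number value")
		}
		return n, err
	default:
		// lists with csv values are not supported, matching the TODO above
		return nil, fmt.Errorf("unsupported type %s", typ)
	}
}

func main() {
	fmt.Println(parseByType("true", "bool"))  // true <nil>
	fmt.Println(parseByType("1.5", "number")) // 1.5 <nil>
}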
// resolveBlock force evaluates a block, storing the result in the parser. If a
// target schema is provided, only the attributes and blocks present in the
// schema will be evaluated.
func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err error) {
// prepare the variable map for this type
if _, ok := p.ectx.Variables[block.Type]; !ok {
p.ectx.Variables[block.Type] = cty.MapValEmpty(cty.Map(cty.String))
}
// prepare the output destination and evaluation context
t, ok := p.blockTypes[block.Type]
if !ok {
return nil
}
var outputs []reflect.Value
var ectxs []*hcl.EvalContext
if prev, ok := p.blockValues[block]; ok {
outputs = prev
ectxs = p.blockEvalCtx[block]
} else {
if v, ok := reflect.New(t).Interface().(WithEvalContexts); ok {
ectxs, err = v.GetEvalContexts(p.ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
return p.loadDeps(p.ectx, expr, nil, true)
})
if err != nil {
return err
}
for _, ectx := range ectxs {
if ectx != p.ectx && ectx.Parent() != p.ectx {
return errors.Errorf("EvalContext must return a context with the correct parent")
}
}
} else {
ectxs = append([]*hcl.EvalContext{}, p.ectx)
}
for range ectxs {
outputs = append(outputs, reflect.New(t))
}
}
p.blockValues[block] = outputs
p.blockEvalCtx[block] = ectxs
for i, output := range outputs {
target := target
ectx := ectxs[i]
name := block.Labels[0]
if names, ok := p.blockNames[block]; ok {
name = names[i]
}
if _, ok := p.doneB[key(block, ectx)]; !ok {
p.doneB[key(block, ectx)] = map[string]struct{}{}
}
if _, ok := p.progressB[key(block, ectx)]; !ok {
p.progressB[key(block, ectx)] = map[string]struct{}{}
}
if target != nil {
// filter out attributes and blocks that are already evaluated
original := target
target = &hcl.BodySchema{}
for _, a := range original.Attributes {
if _, ok := p.doneB[key(block, ectx)][a.Name]; !ok {
target.Attributes = append(target.Attributes, a)
}
}
for _, b := range original.Blocks {
if _, ok := p.doneB[key(block, ectx)][b.Type]; !ok {
target.Blocks = append(target.Blocks, b)
}
}
if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
return nil
}
}
if target != nil {
// detect reference cycles
for _, a := range target.Attributes {
if _, ok := p.progressB[key(block, ectx)][a.Name]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
}
}
for _, b := range target.Blocks {
if _, ok := p.progressB[key(block, ectx)][b.Type]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
}
}
for _, a := range target.Attributes {
p.progressB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range target.Blocks {
p.progressB[key(block, ectx)][b.Type] = struct{}{}
}
}
// create a filtered body that contains only the target properties
body := func() hcl.Body {
if target != nil {
return FilterIncludeBody(block.Body, target)
}
filter := &hcl.BodySchema{}
for k := range p.doneB[key(block, ectx)] {
filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
}
return FilterExcludeBody(block.Body, filter)
}
// load dependencies from all targeted properties
schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
content, _, diag := body().PartialContent(schema)
if diag.HasErrors() {
return diag
}
for _, a := range content.Attributes {
diag := p.loadDeps(ectx, a.Expr, nil, true)
if diag.HasErrors() {
return diag
}
}
for _, b := range content.Blocks {
err := p.resolveBlock(b, nil)
if err != nil {
return err
}
}
// decode!
diag = gohcl.DecodeBody(body(), ectx, output.Interface())
if diag.HasErrors() {
return diag
}
// mark all targeted properties as done
for _, a := range content.Attributes {
p.doneB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range content.Blocks {
p.doneB[key(block, ectx)][b.Type] = struct{}{}
}
if target != nil {
for _, a := range target.Attributes {
p.doneB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range target.Blocks {
p.doneB[key(block, ectx)][b.Type] = struct{}{}
}
}
// store the result into the evaluation context (so it can be referenced)
outputType, err := gocty.ImpliedType(output.Interface())
if err != nil {
return err
}
outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
if err != nil {
return err
}
var m map[string]cty.Value
if m2, ok := p.ectx.Variables[block.Type]; ok {
m = m2.AsValueMap()
}
if m == nil {
m = map[string]cty.Value{}
}
m[name] = outputValue
p.ectx.Variables[block.Type] = cty.MapVal(m)
}
return nil
}
// resolveBlockNames returns the names of the block, calling resolveBlock to
// evaluate any label fields to correctly resolve the name.
func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
if names, ok := p.blockNames[block]; ok {
return names, nil
}
if err := p.resolveBlock(block, &hcl.BodySchema{}); err != nil {
return nil, err
}
names := make([]string, 0, len(p.blockValues[block]))
for i, val := range p.blockValues[block] {
ectx := p.blockEvalCtx[block][i]
name := block.Labels[0]
if err := p.opt.ValidateLabel(name); err != nil {
return nil, err
}
if v, ok := val.Interface().(WithGetName); ok {
var err error
name, err = v.GetName(ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
return p.loadDeps(ectx, expr, nil, true)
})
if err != nil {
return nil, err
}
if err := p.opt.ValidateLabel(name); err != nil {
return nil, err
}
}
setName(val, name)
names = append(names, name)
}
found := map[string]struct{}{}
for _, name := range names {
if _, ok := found[name]; ok {
return nil, errors.Errorf("duplicate name %q", name)
}
found[name] = struct{}{}
}
p.blockNames[block] = names
return names, nil
}
func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
reserved := map[string]struct{}{}
schema, _ := gohcl.ImpliedBodySchema(val)
@@ -547,7 +261,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
var defs inputs
if err := gohcl.DecodeBody(b, nil, &defs); err != nil {
return nil, err
return err
}
defsSchema, _ := gohcl.ImpliedBodySchema(defs)
@@ -570,20 +284,13 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
attrs: map[string]*hcl.Attribute{},
funcs: map[string]*functionDef{},
blocks: map[string]map[string][]*hcl.Block{},
blockValues: map[*hcl.Block][]reflect.Value{},
blockEvalCtx: map[*hcl.Block][]*hcl.EvalContext{},
blockNames: map[*hcl.Block][]string{},
blockTypes: map[string]reflect.Type{},
progress: map[string]struct{}{},
progressF: map[string]struct{}{},
doneF: map[string]struct{}{},
ectx: &hcl.EvalContext{
Variables: map[string]cty.Value{},
Functions: Stdlib(),
Functions: stdlibFunctions,
},
progressV: map[uint64]struct{}{},
progressF: map[uint64]struct{}{},
progressB: map[uint64]map[string]struct{}{},
doneB: map[uint64]map[string]struct{}{},
}
for _, v := range defs.Variables {
@@ -603,18 +310,18 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
content, b, diags := b.PartialContent(schema)
if diags.HasErrors() {
return nil, diags
return diags
}
blocks, b, diags := b.PartialContent(defsSchema)
if diags.HasErrors() {
return nil, diags
return diags
}
attrs, diags := b.JustAttributes()
if diags.HasErrors() {
if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
return nil, d
return d
}
}
@@ -627,35 +334,48 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
delete(p.attrs, "function")
for k := range p.opt.Vars {
_ = p.resolveValue(p.ectx, k)
_ = p.resolveValue(k)
}
for _, a := range content.Attributes {
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "global attributes currently not supported",
Subject: &a.Range,
Context: &a.Range,
},
for k := range p.attrs {
if err := p.resolveValue(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return diags
}
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: err.Error(),
Subject: &p.attrs[k].Range,
Context: &p.attrs[k].Range,
},
}
}
}
for k := range p.vars {
if err := p.resolveValue(p.ectx, k); err != nil {
if err := p.resolveValue(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
return diags
}
r := p.vars[k].Body.MissingItemRange()
return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid value",
Detail: err.Error(),
Subject: &r,
Context: &r,
},
}
}
}
for k := range p.funcs {
if err := p.resolveFunction(p.ectx, k); err != nil {
if err := p.resolveFunction(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
return diags
}
var subject *hcl.Range
var context *hcl.Range
@@ -671,10 +391,56 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
}
return nil, wrapErrorDiagnostic("Invalid function", err, subject, context)
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function",
Detail: err.Error(),
Subject: subject,
Context: context,
},
}
}
}
for _, a := range content.Attributes {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "global attributes currently not supported",
Subject: &a.Range,
Context: &a.Range,
},
}
}
m := map[string]map[string][]*hcl.Block{}
for _, b := range content.Blocks {
if len(b.Labels) == 0 || len(b.Labels) > 1 {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid block",
Detail: fmt.Sprintf("invalid block label: %v", b.Labels),
Subject: &b.LabelRanges[0],
Context: &b.LabelRanges[0],
},
}
}
bm, ok := m[b.Type]
if !ok {
bm = map[string][]*hcl.Block{}
m[b.Type] = bm
}
lbl := b.Labels[0]
bm[lbl] = append(bm[lbl], b)
}
vt := reflect.ValueOf(val).Elem().Type()
numFields := vt.NumField()
type value struct {
reflect.Value
idx int
@@ -685,173 +451,93 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
values map[string]value
}
types := map[string]field{}
renamed := map[string]map[string][]string{}
vt := reflect.ValueOf(val).Elem().Type()
for i := 0; i < vt.NumField(); i++ {
for i := 0; i < numFields; i++ {
tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
types[tags[0]] = field{
idx: i,
typ: vt.Field(i).Type,
values: make(map[string]value),
}
renamed[tags[0]] = map[string][]string{}
}
tmpBlocks := map[string]map[string][]*hcl.Block{}
for _, b := range content.Blocks {
if len(b.Labels) == 0 || len(b.Labels) > 1 {
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid block",
Detail: fmt.Sprintf("invalid block label: %v", b.Labels),
Subject: &b.LabelRanges[0],
Context: &b.LabelRanges[0],
},
}
}
bm, ok := tmpBlocks[b.Type]
if !ok {
bm = map[string][]*hcl.Block{}
tmpBlocks[b.Type] = bm
}
names, err := p.resolveBlockNames(b)
if err != nil {
return nil, wrapErrorDiagnostic("Invalid name", err, &b.LabelRanges[0], &b.LabelRanges[0])
}
for _, name := range names {
bm[name] = append(bm[name], b)
renamed[b.Type][b.Labels[0]] = append(renamed[b.Type][b.Labels[0]], name)
}
}
p.blocks = tmpBlocks
diags = hcl.Diagnostics{}
for _, b := range content.Blocks {
v := reflect.ValueOf(val)
err := p.resolveBlock(b, nil)
if err != nil {
if diag, ok := err.(hcl.Diagnostics); ok {
if diag.HasErrors() {
diags = append(diags, diag...)
continue
}
} else {
return nil, wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
t, ok := types[b.Type]
if !ok {
continue
}
vv := reflect.New(t.typ.Elem().Elem())
diag := gohcl.DecodeBody(b.Body, p.ectx, vv.Interface())
if diag.HasErrors() {
diags = append(diags, diag...)
continue
}
if err := opt.ValidateLabel(b.Labels[0]); err != nil {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid name",
Detail: err.Error(),
Subject: &b.LabelRanges[0],
},
}
}
vvs := p.blockValues[b]
for _, vv := range vvs {
t := types[b.Type]
lblIndex, lblExists := getNameIndex(vv)
lblName, _ := getName(vv)
oldValue, exists := t.values[lblName]
if !exists && lblExists {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
break
}
lblIndex := setLabel(vv, b.Labels[0])
oldValue, exists := t.values[b.Labels[0]]
if !exists && lblIndex != -1 {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
if b.Labels[0] == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
break
}
}
}
if exists {
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
}
}
if exists {
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
slice := v.Elem().Field(t.idx)
if slice.IsNil() {
slice = reflect.New(t.typ).Elem()
}
t.values[lblName] = value{Value: vv, idx: slice.Len()}
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
}
} else {
slice := v.Elem().Field(t.idx)
if slice.IsNil() {
slice = reflect.New(t.typ).Elem()
}
t.values[b.Labels[0]] = value{Value: vv, idx: slice.Len()}
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
}
}
if diags.HasErrors() {
return nil, diags
return diags
}
for k := range p.attrs {
if err := p.resolveValue(p.ectx, k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
}
return nil, wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
}
}
return renamed, nil
return nil
}
// wrapErrorDiagnostic wraps an error into an hcl.Diagnostics object.
// If the error is already an hcl.Diagnostics object, it is returned as is.
func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context *hcl.Range) hcl.Diagnostics {
switch err := err.(type) {
case *hcl.Diagnostic:
return hcl.Diagnostics{err}
case hcl.Diagnostics:
return err
default:
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: message,
Detail: err.Error(),
Subject: subject,
Context: context,
},
}
}
}
func setName(v reflect.Value, name string) {
func setLabel(v reflect.Value, lbl string) int {
// cache field index?
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
for _, t := range strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",") {
if t == "label" {
v.Elem().Field(i).Set(reflect.ValueOf(name))
v.Elem().Field(i).Set(reflect.ValueOf(lbl))
return i
}
}
}
}
func getName(v reflect.Value) (string, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return v.Elem().Field(i).String(), true
}
}
}
return "", false
}
func getNameIndex(v reflect.Value) (int, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return i, true
}
}
}
return 0, false
return -1
}
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
@@ -883,21 +569,3 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
}
return fdiags
}
// key returns a unique hash for the given values
func key(ks ...any) uint64 {
hash := fnv.New64a()
for _, k := range ks {
v := reflect.ValueOf(k)
switch v.Kind() {
case reflect.String:
hash.Write([]byte(v.String()))
case reflect.Pointer:
ptr := reflect.ValueOf(k).Pointer()
binary.Write(hash, binary.LittleEndian, uint64(ptr))
default:
panic(fmt.Sprintf("unknown key kind %s", v.Kind().String()))
}
}
return hash.Sum64()
}
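A note on the key helper above: mixing the eval context's pointer into the FNV-1a hash is what keeps the same name distinct across contexts. A minimal, self-contained sketch of that property (the signature is simplified and is not the parser's):

package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
	"reflect"
)

func key(name string, ctx any) uint64 {
	h := fnv.New64a()
	h.Write([]byte(name)) // mix in the name
	// mix in the context's pointer identity, as the parser's key() does
	binary.Write(h, binary.LittleEndian, uint64(reflect.ValueOf(ctx).Pointer()))
	return h.Sum64()
}

func main() {
	ctxA, ctxB := new(int), new(int) // stand-ins for two *hcl.EvalContext values
	fmt.Println(key("foo", ctxA) == key("foo", ctxB)) // false: same name, different contexts
}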


@@ -31,21 +31,21 @@ var stdlibFunctions = map[string]function.Function{
"cidrnetmask": cidr.NetmaskFunc,
"cidrsubnet": cidr.SubnetFunc,
"cidrsubnets": cidr.SubnetsFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"coalesce": stdlib.CoalesceFunc,
"coalescelist": stdlib.CoalesceListFunc,
"compact": stdlib.CompactFunc,
"concat": stdlib.ConcatFunc,
"contains": stdlib.ContainsFunc,
"convert": typeexpr.ConvertFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"distinct": stdlib.DistinctFunc,
"divide": stdlib.DivideFunc,
"element": stdlib.ElementFunc,
"equal": stdlib.EqualFunc,
"flatten": stdlib.FlattenFunc,
"floor": stdlib.FloorFunc,
"format": stdlib.FormatFunc,
"formatdate": stdlib.FormatDateFunc,
"format": stdlib.FormatFunc,
"formatlist": stdlib.FormatListFunc,
"greaterthan": stdlib.GreaterThanFunc,
"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
@@ -53,10 +53,10 @@ var stdlibFunctions = map[string]function.Function{
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"int": stdlib.IntFunc,
"join": stdlib.JoinFunc,
"jsondecode": stdlib.JSONDecodeFunc,
"jsonencode": stdlib.JSONEncodeFunc,
"keys": stdlib.KeysFunc,
"join": stdlib.JoinFunc,
"length": stdlib.LengthFunc,
"lessthan": stdlib.LessThanFunc,
"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
@@ -70,16 +70,15 @@ var stdlibFunctions = map[string]function.Function{
"modulo": stdlib.ModuloFunc,
"multiply": stdlib.MultiplyFunc,
"negate": stdlib.NegateFunc,
"not": stdlib.NotFunc,
"notequal": stdlib.NotEqualFunc,
"not": stdlib.NotFunc,
"or": stdlib.OrFunc,
"parseint": stdlib.ParseIntFunc,
"pow": stdlib.PowFunc,
"range": stdlib.RangeFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"regex": stdlib.RegexFunc,
"regexall": stdlib.RegexAllFunc,
"replace": stdlib.ReplaceFunc,
"regex": stdlib.RegexFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"reverse": stdlib.ReverseFunc,
"reverselist": stdlib.ReverseListFunc,
"rsadecrypt": crypto.RsaDecryptFunc,
@@ -125,11 +124,3 @@ var timestampFunc = function.New(&function.Spec{
return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
},
})
func Stdlib() map[string]function.Function {
funcs := make(map[string]function.Function, len(stdlibFunctions))
for k, v := range stdlibFunctions {
funcs[k] = v
}
return funcs
}
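Stdlib returns a fresh copy of the package-level function map on each call, so callers can add or override functions without mutating shared state. A hedged illustration of why the copy matters (generic helper, not from this repo):

package main

import "fmt"

// copyMap stands in for Stdlib's copy loop.
func copyMap[K comparable, V any](src map[K]V) map[K]V {
	dst := make(map[K]V, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	shared := map[string]string{"upper": "stdlib.UpperFunc"}
	local := copyMap(shared)
	local["custom"] = "myCustomFunc"     // safe: the shared map is untouched
	fmt.Println(len(shared), len(local)) // 1 2
}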


@@ -4,16 +4,14 @@ import (
"archive/tar"
"bytes"
"context"
"strings"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/build"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/session"
"github.com/pkg/errors"
)
@@ -22,17 +20,11 @@ type Input struct {
URL string
}
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var session []session.Attachable
func ReadRemoteFiles(ctx context.Context, dis []build.DriverInfo, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var filename string
st, ok := dockerui.DetectGitContext(url, false)
if ok {
ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
if err == nil {
session = append(session, ssh)
}
} else {
st, filename, ok = dockerui.DetectHTTPContext(url)
st, ok := detectGitContext(url)
if !ok {
st, filename, ok = detectHTTPContext(url)
if !ok {
return nil, nil, errors.Errorf("not url context")
}
@@ -41,25 +33,25 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
inp := &Input{State: st, URL: url}
var files []File
var node *builder.Node
for i, n := range nodes {
if n.Err == nil {
node = &nodes[i]
var di *build.DriverInfo
for _, d := range dis {
if d.Err == nil {
di = &d
continue
}
}
if node == nil {
if di == nil {
return nil, nil, nil
}
c, err := driver.Boot(ctx, ctx, node.Driver, pw)
c, err := driver.Boot(ctx, ctx, di.Driver, pw)
if err != nil {
return nil, nil, err
}
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: session}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
@@ -91,6 +83,51 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
return files, inp, nil
}
func IsRemoteURL(url string) bool {
if _, _, ok := detectHTTPContext(url); ok {
return true
}
if _, ok := detectGitContext(url); ok {
return true
}
return false
}
func detectHTTPContext(url string) (*llb.State, string, bool) {
if httpPrefix.MatchString(url) {
httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
return &httpContext, "context", true
}
return nil, "", false
}
func detectGitContext(ref string) (*llb.State, bool) {
found := false
if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
found = true
}
for _, prefix := range []string{"git://", "github.com/", "git@"} {
if strings.HasPrefix(ref, prefix) {
found = true
break
}
}
if !found {
return nil, false
}
parts := strings.SplitN(ref, "#", 2)
branch := ""
if len(parts) > 1 {
branch = parts[1]
}
gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
st := llb.Git(parts[0], branch, gitOpts...)
return &st, true
}
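detectGitContext classifies a ref by prefix and splits an optional "#fragment" into the branch. The same rules in a standalone sketch; the HTTP(S)-regex case is omitted and the helper name is made up:

package main

import (
	"fmt"
	"strings"
)

func looksLikeGit(ref string) (repo, branch string, ok bool) {
	for _, prefix := range []string{"git://", "github.com/", "git@"} {
		if strings.HasPrefix(ref, prefix) {
			parts := strings.SplitN(ref, "#", 2) // "#branch" is optional
			if len(parts) > 1 {
				return parts[0], parts[1], true
			}
			return parts[0], "", true
		}
	}
	return "", "", false
}

func main() {
	fmt.Println(looksLikeGit("github.com/docker/buildx.git#v0.9")) // repo, branch, true
	fmt.Println(looksLikeGit("./local/dir"))                       // not a git context
}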
func isArchive(header []byte) bool {
for _, m := range [][]byte{
{0x42, 0x5A, 0x68}, // bzip2

File diff suppressed because it is too large.


@@ -1,115 +0,0 @@
package build
import (
"context"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"github.com/docker/buildx/util/gitutil"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
res = make(map[string]string)
if contextPath == "" {
return
}
setGitLabels := false
if v, ok := os.LookupEnv("BUILDX_GIT_LABELS"); ok {
if v == "full" { // backward compatibility with old "full" mode
setGitLabels = true
} else if v, err := strconv.ParseBool(v); err == nil {
setGitLabels = v
}
}
setGitInfo := true
if v, ok := os.LookupEnv("BUILDX_GIT_INFO"); ok {
if v, err := strconv.ParseBool(v); err == nil {
setGitInfo = v
}
}
if !setGitLabels && !setGitInfo {
return
}
// figure out in which directory the git command needs to run in
var wd string
if filepath.IsAbs(contextPath) {
wd = contextPath
} else {
cwd, _ := os.Getwd()
wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
}
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
}
return
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return res, nil
}
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return res, errors.Wrapf(err, "buildx: failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
if v, err := strconv.ParseBool(v); err == nil {
checkDirty = v
}
}
if checkDirty && gitc.IsDirty() {
sha += "-dirty"
}
if setGitLabels {
res["label:"+specs.AnnotationRevision] = sha
}
if setGitInfo {
res["vcs:revision"] = sha
}
}
if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
if setGitLabels {
res["label:"+specs.AnnotationSource] = rurl
}
if setGitInfo {
res["vcs:source"] = rurl
}
}
if setGitLabels {
if root, err := gitc.RootDir(); err != nil {
return res, errors.Wrapf(err, "buildx: failed to get git root dir")
} else if root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
cwd, _ := os.Getwd()
dockerfilePath = filepath.Join(cwd, dockerfilePath)
}
dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
if !strings.HasPrefix(dockerfilePath, "..") {
res["label:"+DockerfileLabel] = dockerfilePath
}
}
}
return
}
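getGitAttributes gates its output on BUILDX_GIT_LABELS and BUILDX_GIT_INFO, accepting anything strconv.ParseBool understands plus the legacy "full" spelling. That lookup pattern in isolation (envFlag is an illustrative name, not a buildx helper):

package main

import (
	"fmt"
	"os"
	"strconv"
)

// envFlag returns def when the variable is unset or unparsable, and keeps
// "full" as a backward-compatible alias for true.
func envFlag(name string, def bool) bool {
	v, ok := os.LookupEnv(name)
	if !ok {
		return def
	}
	if v == "full" {
		return true
	}
	if b, err := strconv.ParseBool(v); err == nil {
		return b
	}
	return def
}

func main() {
	os.Setenv("BUILDX_GIT_LABELS", "full")
	fmt.Println(envFlag("BUILDX_GIT_LABELS", false)) // true
	fmt.Println(envFlag("BUILDX_GIT_INFO", true))    // true (unset, default wins)
}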


@@ -1,156 +0,0 @@
package build
import (
"context"
"os"
"path"
"path/filepath"
"strings"
"testing"
"github.com/docker/buildx/util/gitutil"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setupTest(tb testing.TB) {
gitutil.Mktmp(tb)
c, err := gitutil.New()
require.NoError(tb, err)
gitutil.GitInit(c, tb)
df := []byte("FROM alpine:latest\n")
assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
gitutil.GitAdd(c, tb, "Dockerfile")
gitutil.GitCommit(c, tb, "initial commit")
gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
}
func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
assert.Empty(t, gitattrs)
}
func TestGetGitAttributes(t *testing.T) {
cases := []struct {
name string
envGitLabels string
envGitInfo string
expected []string
}{
{
name: "default",
envGitLabels: "",
envGitInfo: "",
expected: []string{
"vcs:revision",
"vcs:source",
},
},
{
name: "none",
envGitLabels: "false",
envGitInfo: "false",
expected: []string{},
},
{
name: "gitinfo",
envGitLabels: "false",
envGitInfo: "true",
expected: []string{
"vcs:revision",
"vcs:source",
},
},
{
name: "gitlabels",
envGitLabels: "true",
envGitInfo: "false",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
},
},
{
name: "both",
envGitLabels: "true",
envGitInfo: "",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"vcs:revision",
"vcs:source",
},
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
setupTest(t)
if tt.envGitLabels != "" {
t.Setenv("BUILDX_GIT_LABELS", tt.envGitLabels)
}
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
for _, e := range tt.expected {
assert.Contains(t, gitattrs, e)
assert.NotEmpty(t, gitattrs[e])
if e == "label:"+DockerfileLabel {
assert.Equal(t, "Dockerfile", gitattrs[e])
} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
}
}
})
}
}
func TestGetGitAttributesDirty(t *testing.T) {
setupTest(t)
t.Setenv("BUILDX_GIT_CHECK_DIRTY", "true")
// make a change to test dirty flag
df := []byte("FROM alpine:edge\n")
require.NoError(t, os.Mkdir("dir", 0755))
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
assert.Equal(t, 5, len(gitattrs))
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))
assert.Contains(t, gitattrs, "vcs:source")
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
assert.Contains(t, gitattrs, "vcs:revision")
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}


@@ -1,138 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"io"
"sync"
"sync/atomic"
"syscall"
controllerapi "github.com/docker/buildx/controller/pb"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type Container struct {
cancelOnce sync.Once
containerCancel func()
isUnavailable atomic.Bool
initStarted atomic.Bool
container gateway.Container
releaseCh chan struct{}
resultCtx *ResultHandle
}
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
mainCtx := ctx
ctrCh := make(chan *Container)
errCh := make(chan error)
go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancel(ctx)
go func() {
<-mainCtx.Done()
cancel()
}()
containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
if err != nil {
return nil, err
}
containerCtx, containerCancel := context.WithCancel(ctx)
defer containerCancel()
bkContainer, err := c.NewContainer(containerCtx, containerCfg)
if err != nil {
return nil, err
}
releaseCh := make(chan struct{})
container := &Container{
containerCancel: containerCancel,
container: bkContainer,
releaseCh: releaseCh,
resultCtx: resultCtx,
}
doneCh := make(chan struct{})
defer close(doneCh)
resultCtx.registerCleanup(func() {
container.Cancel()
<-doneCh
})
ctrCh <- container
<-container.releaseCh
return nil, bkContainer.Release(ctx)
})
if err != nil {
errCh <- err
}
}()
select {
case ctr := <-ctrCh:
return ctr, nil
case err := <-errCh:
return nil, err
case <-mainCtx.Done():
return nil, mainCtx.Err()
}
}
func (c *Container) Cancel() {
c.markUnavailable()
c.cancelOnce.Do(func() {
if c.containerCancel != nil {
c.containerCancel()
}
close(c.releaseCh)
})
}
func (c *Container) IsUnavailable() bool {
return c.isUnavailable.Load()
}
func (c *Container) markUnavailable() {
c.isUnavailable.Store(true)
}
func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
defer func() {
// container can't be used after init exits
c.markUnavailable()
}()
}
err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
if err != nil {
// Container becomes unavailable if one of the processes fails in it.
c.markUnavailable()
}
return err
}
func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
if err != nil {
return err
}
proc, err := ctr.Start(ctx, processCfg)
if err != nil {
return errors.Errorf("failed to start container: %v", err)
}
doneCh := make(chan struct{})
defer close(doneCh)
go func() {
select {
case <-ctx.Done():
if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
logrus.Warnf("failed to kill process: %v", err)
}
case <-doneCh:
}
}()
return proc.Wait()
}
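exec couples context cancellation to a SIGKILL and uses a done channel to retire the watcher once Wait returns. The same pattern in a dependency-free, Unix-only sketch, with os/exec standing in for the gateway process:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func run(ctx context.Context) error {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		return err
	}
	doneCh := make(chan struct{})
	defer close(doneCh) // retires the watcher goroutine once Wait returns
	go func() {
		select {
		case <-ctx.Done():
			// hard-kill the process if the caller's context is cancelled
			_ = cmd.Process.Signal(syscall.SIGKILL)
		case <-doneCh:
		}
	}()
	return cmd.Wait()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	fmt.Println(run(ctx)) // signal: killed
}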


@@ -1,494 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"encoding/json"
"io"
"sync"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/solver/result"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// NewResultHandle makes a call to client.Build, additionally returning an
// opaque ResultHandle alongside the standard response and error.
//
// This ResultHandle can be used to execute additional build steps in the same
// context as the build occurred, which can allow easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
// Create a new context to wrap the original, and cancel it when the
// caller-provided context is cancelled.
//
// We derive the context from the background context so that we can forbid
// cancellation of the build request after <-done is closed (which we do
// before returning the ResultHandle).
baseCtx := ctx
ctx, cancel := context.WithCancelCause(context.Background())
done := make(chan struct{})
go func() {
select {
case <-baseCtx.Done():
cancel(baseCtx.Err())
case <-done:
// Once done is closed, we've recorded a ResultHandle, so we
// shouldn't allow cancelling the underlying build request anymore.
}
}()
// Create a new channel to forward status messages to the original.
//
// We do this so that we can discard status messages after the main portion
// of the build is complete. This is necessary for the solve error case,
// where the original gateway is kept open until the ResultHandle is
// closed - we don't want progress messages from operations in that
// ResultHandle to display after this function exits.
//
// Additionally, callers should wait for the progress channel to be closed.
// If we keep the session open and never close the progress channel, the
// caller will likely hang.
baseCh := ch
ch = make(chan *client.SolveStatus)
go func() {
for {
s, ok := <-ch
if !ok {
return
}
select {
case <-baseCh:
// base channel is closed, discard status messages
default:
baseCh <- s
}
}
}()
defer close(baseCh)
var resp *client.SolveResponse
var respErr error
var respHandle *ResultHandle
go func() {
defer cancel(context.Canceled) // ensure no dangling processes
var res *gateway.Result
var err error
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var err error
res, err = buildFunc(ctx, c)
if res != nil && err == nil {
// Force evaluation of the build result (otherwise, we likely
// won't get a solve error)
def, err2 := getDefinition(ctx, res)
if err2 != nil {
return nil, err2
}
res, err = evalDefinition(ctx, c, def)
}
if err != nil {
// Scenario 1: we failed to evaluate a node somewhere in the
// build graph.
//
// In this case, we construct a ResultHandle from this
// original Build session, and return it alongside the original
// build error. We then need to keep the gateway session open
// until the caller explicitly closes the ResultHandle.
var se *errdefs.SolveError
if errors.As(err, &se) {
respHandle = &ResultHandle{
done: make(chan struct{}),
solveErr: se,
gwClient: c,
gwCtx: ctx,
}
respErr = se
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
}
}
return res, err
}, ch)
if respHandle != nil {
return
}
if err != nil {
// Something unexpected failed during the build, we didn't succeed,
// but we also didn't make it far enough to create a ResultHandle.
respErr = err
close(done)
return
}
// Scenario 2: we successfully built the image with no errors.
//
// In this case, the original gateway session has now been closed
// since the Build has been completed. So, we need to create a new
// gateway session to populate the ResultHandle. To do this, we
// need to re-evaluate the target result, in this new session. This
// should be instantaneous since the result should be cached.
def, err := getDefinition(ctx, res)
if err != nil {
respErr = err
close(done)
return
}
// NOTE: ideally this second connection should be lazily opened
opt := opt
opt.Ref = ""
opt.Exports = nil
opt.CacheExports = nil
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
res, err := evalDefinition(ctx, c, def)
if err != nil {
// This should probably not happen, since we've previously
// successfully evaluated the same result with no issues.
return nil, errors.Wrap(err, "inconsistent solve result")
}
respHandle = &ResultHandle{
done: make(chan struct{}),
res: res,
gwClient: c,
gwCtx: ctx,
}
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
return nil, ctx.Err()
}, nil)
if respHandle != nil {
return
}
close(done)
}()
// Block until the other thread signals that it's completed the build.
select {
case <-done:
case <-baseCtx.Done():
if respErr == nil {
respErr = baseCtx.Err()
}
}
return respHandle, resp, respErr
}
// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
st, err := ref.ToState()
if err != nil {
return nil, err
}
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
}
return def.ToPB(), nil
})
}
// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
// force evaluation of all targets in parallel
results := make(map[*pb.Definition]*gateway.Result)
resultsMu := sync.Mutex{}
eg, egCtx := errgroup.WithContext(ctx)
defs.EachRef(func(def *pb.Definition) error {
eg.Go(func() error {
res, err := c.Solve(egCtx, gateway.SolveRequest{
Evaluate: true,
Definition: def,
})
if err != nil {
return err
}
resultsMu.Lock()
results[def] = res
resultsMu.Unlock()
return nil
})
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
if res, ok := results[def]; ok {
return res.Ref, nil
}
return nil, nil
})
return res, nil
}
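evalDefinition fans out one Solve per definition with an errgroup and guards the shared results map with a mutex. Here is the concurrency skeleton with the BuildKit types stripped out; the work done per item is a placeholder:

package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func evalAll(ctx context.Context, defs []string) (map[string]int, error) {
	results := make(map[string]int)
	var mu sync.Mutex
	eg, egCtx := errgroup.WithContext(ctx)
	for _, def := range defs {
		def := def // capture for pre-Go 1.22 loop semantics
		eg.Go(func() error {
			if err := egCtx.Err(); err != nil { // a real worker passes egCtx to Solve
				return err
			}
			mu.Lock()
			results[def] = len(def) // placeholder for the Solve result
			mu.Unlock()
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}

func main() {
	fmt.Println(evalAll(context.Background(), []string{"a", "bb", "ccc"}))
}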
// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
res *gateway.Result
solveErr *errdefs.SolveError
done chan struct{}
doneOnce sync.Once
gwClient gateway.Client
gwCtx context.Context
cleanups []func()
cleanupsMu sync.Mutex
}
func (r *ResultHandle) Done() {
r.doneOnce.Do(func() {
r.cleanupsMu.Lock()
cleanups := r.cleanups
r.cleanups = nil
r.cleanupsMu.Unlock()
for _, f := range cleanups {
f()
}
close(r.done)
<-r.gwCtx.Done()
})
}
func (r *ResultHandle) registerCleanup(f func()) {
r.cleanupsMu.Lock()
r.cleanups = append(r.cleanups, f)
r.cleanupsMu.Unlock()
}
func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
_, err = buildFunc(r.gwCtx, r.gwClient)
return err
}
func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
if err != nil {
return containerCfg, err
}
containerCfg = *ccfg
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
ccfg, err := containerConfigFromError(r.solveErr, *cfg)
if err != nil {
return containerCfg, errors.Wrapf(err, "neither a result nor an error is available")
}
containerCfg = *ccfg
}
return containerCfg, nil
}
func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
processCfg := newStartRequest(stdin, stdout, stderr)
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
return processCfg, err
}
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
return processCfg, err
}
}
return processCfg, nil
}
func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
ps, err := exptypes.ParsePlatforms(res.Metadata)
if err != nil {
return nil, err
}
ref, ok := res.FindRef(ps.Platforms[0].ID)
if !ok {
return nil, errors.Errorf("no reference found")
}
return &gateway.NewContainerRequest{
Mounts: []gateway.Mount{
{
Dest: "/",
MountType: pb.MountType_BIND,
Ref: ref,
},
},
}, nil
}
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
var img *specs.Image
if len(imgData) > 0 {
img = &specs.Image{}
if err := json.Unmarshal(imgData, img); err != nil {
return err
}
}
user := ""
if !cfg.NoUser {
user = cfg.User
} else if img != nil {
user = img.Config.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else if img != nil {
cwd = img.Config.WorkingDir
}
env := []string{}
if img != nil {
env = append(env, img.Config.Env...)
}
env = append(env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
} else if img != nil {
args = append(args, img.Config.Entrypoint...)
}
if cfg.Cmd != nil {
args = append(args, cfg.Cmd...)
} else if img != nil {
args = append(args, img.Config.Cmd...)
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
}
var mounts []gateway.Mount
for i, mnt := range exec.Mounts {
rid := solveErr.Solve.MountIDs[i]
if cfg.Initial {
rid = solveErr.Solve.InputIDs[i]
}
mounts = append(mounts, gateway.Mount{
Selector: mnt.Selector,
Dest: mnt.Dest,
ResultID: rid,
Readonly: mnt.Readonly,
MountType: mnt.MountType,
CacheOpt: mnt.CacheOpt,
SecretOpt: mnt.SecretOpt,
SSHOpt: mnt.SSHOpt,
})
}
return &gateway.NewContainerRequest{
Mounts: mounts,
NetMode: exec.Network,
}, nil
}
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err
}
meta := exec.Meta
user := ""
if !cfg.NoUser {
user = cfg.User
} else {
user = meta.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else {
cwd = meta.Cwd
}
env := append(meta.Env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
}
if cfg.Cmd != nil {
args = append(args, cfg.Cmd...)
}
if len(args) == 0 {
args = meta.Args
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
if solveErr == nil {
return nil, errors.Errorf("no error is available")
}
switch op := solveErr.Solve.Op.GetOp().(type) {
case *pb.Op_Exec:
return op.Exec, nil
default:
return nil, errors.Errorf("invoke: unsupported error type")
}
// TODO: support other ops
}
func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
return gateway.StartRequest{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
}
}


@@ -13,7 +13,7 @@ import (
"github.com/pkg/errors"
)
func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
c, err := driver.Boot(ctx, ctx, d, pw)
if err != nil {
return "", err


@@ -8,29 +8,11 @@ import (
"strings"
"github.com/docker/cli/opts"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/moby/buildkit/util/gitutil"
"github.com/pkg/errors"
)
const (
// archiveHeaderSize is the number of bytes in an archive header
archiveHeaderSize = 512
// mobyHostGatewayName defines a special string which users can append to
// --add-host to add an extra entry in /etc/hosts that maps
// host.docker.internal to the host IP
mobyHostGatewayName = "host-gateway"
)
func IsRemoteURL(c string) bool {
if urlutil.IsURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
return true
}
return false
}
// archiveHeaderSize is the number of bytes in an archive header
const archiveHeaderSize = 512
func isLocalDir(c string) bool {
st, err := os.Stat(c)
@@ -57,23 +39,18 @@ func isArchive(header []byte) bool {
}
// toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
func toBuildkitExtraHosts(inp []string, mobyDriver bool) (string, error) {
func toBuildkitExtraHosts(inp []string) (string, error) {
if len(inp) == 0 {
return "", nil
}
hosts := make([]string, 0, len(inp))
for _, h := range inp {
host, ip, ok := strings.Cut(h, ":")
if !ok || host == "" || ip == "" {
parts := strings.Split(h, ":")
if len(parts) != 2 || parts[0] == "" || net.ParseIP(parts[1]) == nil {
return "", errors.Errorf("invalid host %s", h)
}
// Skip IP address validation for "host-gateway" string with moby driver
if !mobyDriver || ip != mobyHostGatewayName {
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
}
hosts = append(hosts, host+"="+ip)
hosts = append(hosts, parts[0]+"="+parts[1])
}
return strings.Join(hosts, ","), nil
}
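toBuildkitExtraHosts rewrites Docker's host:ip pairs into BuildKit's host=ip CSV, and the newer variant in this hunk lets the moby driver pass the special host-gateway alias through unvalidated. A sketch of that conversion under those assumptions (fmt.Errorf in place of the vendored errors package):

package main

import (
	"fmt"
	"net"
	"strings"
)

func toExtraHosts(inp []string, mobyDriver bool) (string, error) {
	hosts := make([]string, 0, len(inp))
	for _, h := range inp {
		host, ip, ok := strings.Cut(h, ":")
		if !ok || host == "" || ip == "" {
			return "", fmt.Errorf("invalid host %s", h)
		}
		// "host-gateway" is a moby-only alias, so skip IP validation for it
		if (!mobyDriver || ip != "host-gateway") && net.ParseIP(ip) == nil {
			return "", fmt.Errorf("invalid host %s", h)
		}
		hosts = append(hosts, host+"="+ip)
	}
	return strings.Join(hosts, ","), nil
}

func main() {
	fmt.Println(toExtraHosts([]string{"foo:127.0.0.1", "bar:host-gateway"}, true))
}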


@@ -1,292 +0,0 @@
package builder
import (
"context"
"os"
"sort"
"sync"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
// Builder represents an active builder object
type Builder struct {
*store.NodeGroup
driverFactory driverFactory
nodes []Node
opts builderOpts
err error
}
type builderOpts struct {
dockerCli command.Cli
name string
txn *store.Txn
contextPathHash string
validate bool
}
// Option provides a variadic option for configuring the builder.
type Option func(b *Builder)
// WithName sets builder name.
func WithName(name string) Option {
return func(b *Builder) {
b.opts.name = name
}
}
// WithStore sets a store instance used at init.
func WithStore(txn *store.Txn) Option {
return func(b *Builder) {
b.opts.txn = txn
}
}
// WithContextPathHash is used for determining pods in k8s driver instance.
func WithContextPathHash(contextPathHash string) Option {
return func(b *Builder) {
b.opts.contextPathHash = contextPathHash
}
}
// WithSkippedValidation skips builder context validation.
func WithSkippedValidation() Option {
return func(b *Builder) {
b.opts.validate = false
}
}
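The Option funcs above follow the functional-options pattern: defaults are set first, then each option mutates the builder. A minimal standalone rendition of the pattern (toy types, not the real builder):

package main

import "fmt"

type builder struct {
	name     string
	validate bool
}

type Option func(*builder)

func WithName(name string) Option {
	return func(b *builder) { b.name = name }
}

func WithSkippedValidation() Option {
	return func(b *builder) { b.validate = false }
}

func New(opts ...Option) *builder {
	b := &builder{validate: true} // defaults first, options override
	for _, opt := range opts {
		opt(b)
	}
	return b
}

func main() {
	b := New(WithName("mybuilder"), WithSkippedValidation())
	fmt.Printf("%+v\n", *b) // {name:mybuilder validate:false}
}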
// New initializes a new builder client
func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
b := &Builder{
opts: builderOpts{
dockerCli: dockerCli,
validate: true,
},
}
for _, opt := range opts {
opt(b)
}
if b.opts.txn == nil {
// if store instance is nil we create a short-lived one using the
// default store and ensure we release it on completion
var release func()
b.opts.txn, release, err = storeutil.GetStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
}
if b.opts.name != "" {
if b.NodeGroup, err = storeutil.GetNodeGroup(b.opts.txn, dockerCli, b.opts.name); err != nil {
return nil, err
}
} else {
if b.NodeGroup, err = storeutil.GetCurrentInstance(b.opts.txn, dockerCli); err != nil {
return nil, err
}
}
if b.opts.validate {
if err = b.Validate(); err != nil {
return nil, err
}
}
return b, nil
}
// Validate validates builder context
func (b *Builder) Validate() error {
if b.NodeGroup != nil && b.NodeGroup.DockerContext {
list, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return err
}
currentContext := b.opts.dockerCli.CurrentContext()
for _, l := range list {
if l.Name == b.Name && l.Name != currentContext {
return errors.Errorf("use `docker --context=%s buildx` to switch to context %q", l.Name, l.Name)
}
}
}
return nil
}
// ContextName returns builder context name if available.
func (b *Builder) ContextName() string {
ctxbuilders, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return ""
}
for _, cb := range ctxbuilders {
if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
return cb.Name
}
}
return ""
}
// ImageOpt returns registry auth configuration
func (b *Builder) ImageOpt() (imagetools.Opt, error) {
return storeutil.GetImageConfig(b.opts.dockerCli, b.NodeGroup)
}
// Boot bootstrap a builder
func (b *Builder) Boot(ctx context.Context) (bool, error) {
toBoot := make([]int, 0, len(b.nodes))
for idx, d := range b.nodes {
if d.Err != nil || d.Driver == nil || d.DriverInfo == nil {
continue
}
if d.DriverInfo.Status != driver.Running {
toBoot = append(toBoot, idx)
}
}
if len(toBoot) == 0 {
return false, nil
}
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
if err != nil {
return false, err
}
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
pw := progress.WithPrefix(printer, b.NodeGroup.Nodes[idx].Name, len(toBoot) > 1)
_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
if err != nil {
b.nodes[idx].Err = err
}
return nil
})
}(idx)
}
err = eg.Wait()
err1 := printer.Wait()
if err == nil {
err = err1
}
return true, err
}
// Inactive checks if all nodes are inactive for this builder.
func (b *Builder) Inactive() bool {
for _, d := range b.nodes {
if d.DriverInfo != nil && d.DriverInfo.Status == driver.Running {
return false
}
}
return true
}
// Err returns error if any.
func (b *Builder) Err() error {
return b.err
}
type driverFactory struct {
driver.Factory
once sync.Once
}
// Factory returns the driver factory.
func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
b.driverFactory.once.Do(func() {
if b.Driver != "" {
b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
if err != nil {
return
}
} else {
// empty driver means nodegroup was implicitly created as a default
// driver for a docker context and allows falling back to a
// docker-container driver for an older daemon that doesn't support
// buildkit (< 18.06).
ep := b.NodeGroup.Nodes[0].Endpoint
var dockerapi *dockerutil.ClientAPI
dockerapi, err = dockerutil.NewClientAPI(b.opts.dockerCli, b.NodeGroup.Nodes[0].Endpoint)
if err != nil {
return
}
// checking if the endpoint is healthy is needed to determine the driver type.
// if this fails then we can't continue with driver selection.
if _, err = dockerapi.Ping(ctx); err != nil {
return
}
b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
if err != nil {
return
}
b.Driver = b.driverFactory.Factory.Name()
}
})
return b.driverFactory.Factory, err
}
// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
storeng, err := txn.List()
if err != nil {
return nil, err
}
builders := make([]*Builder, len(storeng))
seen := make(map[string]struct{})
for i, ng := range storeng {
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
builders[i] = b
seen[b.NodeGroup.Name] = struct{}{}
}
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
for _, c := range contexts {
// if a context has the same name as an instance from the store, do not
// add it to the builders list. An instance from the store takes
// precedence over context builders.
if _, ok := seen[c.Name]; ok {
continue
}
b, err := New(dockerCli,
WithName(c.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
builders = append(builders, b)
}
return builders, nil
}


@@ -1,211 +0,0 @@
package builder
import (
"context"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/platformutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc/codes"
)
type Node struct {
store.Node
Builder string
Driver *driver.DriverHandle
DriverInfo *driver.Info
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
ImageOpt imagetools.Opt
ProxyConfig map[string]string
Version string
Err error
}
// Nodes returns nodes for this builder.
func (b *Builder) Nodes() []Node {
return b.nodes
}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
eg, _ := errgroup.WithContext(ctx)
b.nodes = make([]Node, len(b.NodeGroup.Nodes))
defer func() {
if b.err == nil && err != nil {
b.err = err
}
}()
factory, err := b.Factory(ctx)
if err != nil {
return nil, err
}
imageopt, err := b.ImageOpt()
if err != nil {
return nil, err
}
for i, n := range b.NodeGroup.Nodes {
func(i int, n store.Node) {
eg.Go(func() error {
node := Node{
Node: n,
ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
Platforms: n.Platforms,
Builder: b.Name,
}
defer func() {
b.nodes[i] = node
}()
dockerapi, err := dockerutil.NewClientAPI(b.opts.dockerCli, n.Endpoint)
if err != nil {
node.Err = err
return nil
}
contextStore := b.opts.dockerCli.ContextStore()
var kcc driver.KubeClientConfig
kcc, err = ctxkube.ConfigFromContext(n.Endpoint, contextStore)
if err != nil {
// err is returned if n.Endpoint is a non-context name like "unix:///var/run/docker.sock".
// try again with name="default".
// FIXME(@AkihiroSuda): n should retain the real context name.
kcc, err = ctxkube.ConfigFromContext("default", contextStore)
if err != nil {
logrus.Error(err)
}
}
tryToUseKubeConfigInCluster := false
if kcc == nil {
tryToUseKubeConfigInCluster = true
} else {
if _, err := kcc.ClientConfig(); err != nil {
tryToUseKubeConfigInCluster = true
}
}
if tryToUseKubeConfigInCluster {
kccInCluster := driver.KubeClientConfigInCluster{}
if _, err := kccInCluster.ClientConfig(); err == nil {
logrus.Debug("using kube config in cluster")
kcc = kccInCluster
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
if err != nil {
node.Err = err
return nil
}
node.Driver = d
node.ImageOpt = imageopt
if withData {
if err := node.loadData(ctx); err != nil {
node.Err = err
}
}
return nil
})
}(i, n)
}
if err := eg.Wait(); err != nil {
return nil, err
}
// TODO: This should be done in the routine loading driver data
if withData {
kubernetesDriverCount := 0
for _, d := range b.nodes {
if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
kubernetesDriverCount++
}
}
isAllKubernetesDrivers := len(b.nodes) == kubernetesDriverCount
if isAllKubernetesDrivers {
var nodes []Node
var dynamicNodes []store.Node
for _, di := range b.nodes {
// dynamic nodes are used in the Kubernetes driver.
// Kubernetes pods are dynamically mapped to BuildKit nodes.
if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
diClone := di
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl
}
nodes = append(nodes, diClone)
}
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
}
}
// not append (remove the static nodes in the store)
b.NodeGroup.Nodes = dynamicNodes
b.nodes = nodes
b.NodeGroup.Dynamic = true
}
}
return b.nodes, nil
}
func (n *Node) loadData(ctx context.Context) error {
if n.Driver == nil {
return nil
}
info, err := n.Driver.Info(ctx)
if err != nil {
return err
}
n.DriverInfo = info
if n.DriverInfo.Status == driver.Running {
driverClient, err := n.Driver.Client(ctx)
if err != nil {
return err
}
workers, err := driverClient.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
for idx, w := range workers {
n.Platforms = append(n.Platforms, w.Platforms...)
if idx == 0 {
n.GCPolicy = w.GCPolicy
n.Labels = w.Labels
}
}
n.Platforms = platformutil.Dedupe(n.Platforms)
inf, err := driverClient.Info(ctx)
if err != nil {
if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
n.Version, err = n.Driver.Version(ctx)
if err != nil {
return errors.Wrap(err, "getting version")
}
}
} else {
n.Version = inf.BuildkitVersion.Version
}
}
return nil
}
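
The deleted LoadNodes above uses a recurring buildx pattern: one errgroup goroutine per node, each writing only to its own pre-allocated slice slot, with per-node failures recorded in the slot instead of failing the whole group. A self-contained sketch of that pattern (probe and its behavior are invented for illustration):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

type result struct {
	name string
	err  error
}

// probe stands in for the per-node driver/client setup; "broken" simulates
// an unreachable node.
func probe(_ context.Context, name string) error {
	if name == "broken" {
		return fmt.Errorf("node %s unreachable", name)
	}
	return nil
}

// loadAll probes every node concurrently. Each goroutine owns exactly one
// slice slot, so no locking is needed, and a per-node failure is recorded
// in the slot rather than returned, so one bad node doesn't abort the rest.
func loadAll(ctx context.Context, names []string) ([]result, error) {
	eg, ctx := errgroup.WithContext(ctx)
	out := make([]result, len(names))
	for i, name := range names {
		i, name := i, name // capture loop variables (pre-Go 1.22 semantics)
		eg.Go(func() error {
			out[i] = result{name: name, err: probe(ctx, name)}
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	res, _ := loadAll(context.Background(), []string{"node0", "broken"})
	for _, r := range res {
		fmt.Println(r.name, r.err)
	}
}
```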

View File

@@ -4,8 +4,8 @@ import (
"fmt"
"os"
"github.com/containerd/containerd/pkg/seed"
"github.com/docker/buildx/commands"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/manager"
@@ -16,10 +16,10 @@ import (
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/stack"
//nolint:staticcheck // vendored dependencies may still use this
"github.com/containerd/containerd/pkg/seed"
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
_ "github.com/docker/buildx/driver/docker"
_ "github.com/docker/buildx/driver/docker-container"
@@ -28,9 +28,7 @@ import (
)
func init() {
//nolint:staticcheck
seed.WithTimeAndRand()
stack.SetVersionInfo(version.Version, version.Revision)
}
@@ -87,9 +85,6 @@ func main() {
} else {
fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
}
if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
ebr.Print(cmd.Err())
}
os.Exit(1)
}

View File

@@ -6,16 +6,10 @@ import (
"fmt"
"os"
"github.com/containerd/console"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
@@ -25,19 +19,13 @@ import (
)
type bakeOptions struct {
files []string
overrides []string
printOnly bool
sbom string
provenance string
builder string
metadataFile string
exportPush bool
exportLoad bool
files []string
overrides []string
printOnly bool
commonOptions
}
func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
ctx := appcontext.Context()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
@@ -52,11 +40,11 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
cmdContext := "cwd://"
if len(targets) > 0 {
if build.IsRemoteURL(targets[0]) {
if bake.IsRemoteURL(targets[0]) {
url = targets[0]
targets = targets[1:]
if len(targets) > 0 {
if build.IsRemoteURL(targets[0]) {
if bake.IsRemoteURL(targets[0]) {
cmdContext = targets[0]
targets = targets[1:]
}
@@ -77,59 +65,17 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
} else if in.exportLoad {
overrides = append(overrides, "*.output=type=docker")
}
if cFlags.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
if in.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
}
if cFlags.pull != nil {
overrides = append(overrides, fmt.Sprintf("*.pull=%t", *cFlags.pull))
}
if in.sbom != "" {
overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
}
if in.provenance != "" {
overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("provenance", in.provenance)))
if in.pull != nil {
overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
}
contextPathHash, _ := os.Getwd()
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
var nodes []builder.Node
var files []bake.File
var inp *bake.Input
var progressConsoleDesc, progressTextDesc string
// instance only needed for reading remote bake files or building
if url != "" || !in.printOnly {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err = b.LoadNodes(ctx, false)
if err != nil {
return err
}
progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
}
var term bool
if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
term = true
}
printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, cFlags.progress,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
)
if err != nil {
return err
}
printer := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
defer func() {
if printer != nil {
@@ -137,16 +83,21 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
if err == nil {
err = err1
}
if err == nil && cFlags.progress != progress.PrinterModeQuiet {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
}
}()
dis, err := getInstanceOrDefault(ctx, dockerCli, in.builder, contextPathHash)
if err != nil {
return err
}
var files []bake.File
var inp *bake.Input
if url != "" {
files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
files, inp, err = bake.ReadRemoteFiles(ctx, dis, url, in.files, printer)
} else {
files, err = bake.ReadLocalFiles(in.files, dockerCli.In())
files, err = bake.ReadLocalFiles(in.files)
}
if err != nil {
return err
@@ -154,7 +105,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
// built-in variable: docs/guides/bake/file-definition.md#built-in-variables
"BAKE_CMD_CONTEXT": cmdContext,
"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
})
@@ -162,19 +113,6 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
return err
}
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
// TODO: extract env var parsing to a method easily usable by library consumers
for _, t := range tgts {
if _, ok := t.Args["SOURCE_DATE_EPOCH"]; ok {
continue
}
if t.Args == nil {
t.Args = map[string]*string{}
}
t.Args["SOURCE_DATE_EPOCH"] = &v
}
}
// this function can update the target context string from the input, so call it before the printOnly check
bo, err := bake.TargetsToBuildOpt(tgts, inp)
if err != nil {
@@ -182,11 +120,17 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
}
if in.printOnly {
var defg map[string]*bake.Group
if len(grps) == 1 {
defg = map[string]*bake.Group{
"default": grps[0],
}
}
dt, err := json.MarshalIndent(struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]*bake.Target `json:"target"`
}{
grps,
defg,
tgts,
}, "", " ")
if err != nil {
@@ -201,7 +145,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
return nil
}
resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
resp, err := build.Build(ctx, dis, bo, dockerAPI(dockerCli), confutil.ConfigDir(dockerCli), printer)
if err != nil {
return wrapBuildError(err, true)
}
@@ -221,7 +165,6 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options bakeOptions
var cFlags commonFlags
cmd := &cobra.Command{
Use: "bake [OPTIONS] [TARGET...]",
@@ -230,17 +173,14 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
// reset to nil so an unchanged flag does not produce an override
if !cmd.Flags().Lookup("no-cache").Changed {
cFlags.noCache = nil
options.noCache = nil
}
if !cmd.Flags().Lookup("pull").Changed {
cFlags.pull = nil
options.pull = nil
}
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in runBake function.
return runBake(dockerCli, args, options, cFlags)
options.commonOptions.builder = rootOpts.builder
return runBake(dockerCli, args, options)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
flags := cmd.Flags()
@@ -249,11 +189,9 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
commonBuildFlags(&cFlags, flags)
commonBuildFlags(&options.commonOptions, flags)
return cmd
}
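
For context, the shorthand flags handled here (--push, --load, --no-cache, --pull) are all compiled into "*.key=value" override strings before target resolution. A hedged sketch of that compilation step, with an invented bakeFlags struct standing in for the real options:

```go
package main

import "fmt"

type bakeFlags struct {
	exportPush bool
	exportLoad bool
	noCache    *bool // nil means "flag not set", so no override is emitted
}

// buildOverrides turns CLI shorthands into the "*.key=value" override
// strings that bake's target resolver consumes, as in runBake above.
func buildOverrides(in bakeFlags, sets []string) []string {
	overrides := append([]string{}, sets...)
	if in.exportPush {
		overrides = append(overrides, "*.output=type=registry")
	} else if in.exportLoad {
		overrides = append(overrides, "*.output=type=docker")
	}
	if in.noCache != nil {
		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
	}
	return overrides
}

func main() {
	t := true
	fmt.Println(buildOverrides(bakeFlags{exportPush: true, noCache: &t}, []string{"app.tags=demo"}))
	// Output: [app.tags=demo *.output=type=registry *.no-cache=true]
}
```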

File diff suppressed because it is too large

View File

@@ -10,20 +10,13 @@ import (
"strings"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
@@ -172,34 +165,18 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
if err := ng.Leave(in.nodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, in.nodeName); err != nil {
return err
}
} else {
switch {
case driverName == "kubernetes":
if len(args) > 0 {
logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := in.nodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return err
}
}
// naming the endpoint so that --append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
Path: "/" + in.name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"deployment": {in.nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
@@ -227,7 +204,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
ep, err = storeutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
@@ -257,26 +234,17 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
return err
}
b, err := builder.New(dockerCli,
builder.WithName(ng.Name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
ngi := &nginfo{ng: ng}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
if err = loadNodeGroupData(timeoutCtx, dockerCli, ngi); err != nil {
return err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
for _, info := range ngi.drivers {
if err := info.di.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, info.di.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
@@ -291,7 +259,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
if in.use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
current, err := storeutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
@@ -301,7 +269,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
if in.bootstrap {
if _, err = b.Boot(ctx); err != nil {
if _, err = boot(ctx, ngi); err != nil {
return err
}
}
@@ -328,7 +296,6 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
@@ -373,27 +340,3 @@ func csvToMap(in []string) (map[string]string, error) {
}
return m, nil
}
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
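
The kubernetes branch above encodes the deployment name and kubeconfig path into a synthetic endpoint URL so that later --append calls can address the same builder. A runnable sketch of that URL construction (kubeEndpoint is an invented helper name):

```go
package main

import (
	"fmt"
	"net/url"
)

// kubeEndpoint builds a synthetic endpoint of the shape the kubernetes
// driver stores, e.g. "kubernetes:///mybuilder?deployment=...&kubeconfig=...".
func kubeEndpoint(builder, deployment, kubeconfig string) string {
	return (&url.URL{
		Scheme: "kubernetes",
		Path:   "/" + builder,
		RawQuery: (&url.Values{
			"deployment": {deployment},
			"kubeconfig": {kubeconfig},
		}).Encode(),
	}).String()
}

func main() {
	fmt.Println(kubeEndpoint("mybuilder", "mybuilder0", "/home/me/.kube/config"))
}
```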

View File

@@ -1,70 +0,0 @@
package commands
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
func debugShellCmd(dockerCli command.Cli) *cobra.Command {
var options control.ControlOptions
var progressMode string
cmd := &cobra.Command{
Use: "debug-shell",
Short: "Start a monitor",
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progressMode)
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, options, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
flags := cmd.Flags()
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect [experimental]")
flags.BoolVar(&options.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server (supported only on linux) [experimental]")
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server) [experimental]")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
return cmd
}
func addDebugShellCommand(cmd *cobra.Command, dockerCli command.Cli) {
cmd.AddCommand(
debugShellCmd(dockerCli),
)
}
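
The deleted debug-shell command switches the terminal into raw mode for the monitor and resets it afterwards. A minimal sketch of that raw-mode/reset discipline; note that console.Current panics when stdin is not a terminal, so this is illustrative only:

```go
package main

import (
	"fmt"

	"github.com/containerd/console"
)

// withRawTerminal runs fn with the terminal in raw mode and always restores
// the previous state, the same discipline debug-shell follows above.
func withRawTerminal(fn func(console.Console) error) error {
	con := console.Current() // panics if stdin is not a terminal
	if err := con.SetRaw(); err != nil {
		return fmt.Errorf("failed to configure terminal: %w", err)
	}
	defer con.Reset() // restore cooked mode even if fn fails
	return fn(con)
}

func main() {
	_ = withRawTerminal(func(console.Console) error {
		return nil // the interactive monitor would run here
	})
}
```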

View File

@@ -8,8 +8,7 @@ import (
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/build"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
@@ -34,29 +33,25 @@ func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
return err
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
for _, di := range dis {
if di.Err != nil {
return di.Err
}
}
out := make([][]*client.UsageInfo, len(nodes))
out := make([][]*client.UsageInfo, len(dis))
eg, ctx := errgroup.WithContext(ctx)
for i, node := range nodes {
func(i int, node builder.Node) {
for i, di := range dis {
func(i int, di build.DriverInfo) {
eg.Go(func() error {
if node.Driver != nil {
c, err := node.Driver.Client(ctx)
if di.Driver != nil {
c, err := di.Driver.Client(ctx)
if err != nil {
return err
}
@@ -69,7 +64,7 @@ func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
}
return nil
})
}(i, node)
}(i, di)
}
if err := eg.Wait(); err != nil {
@@ -116,7 +111,6 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
return runDiskUsage(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()

View File

@@ -7,8 +7,8 @@ import (
"os"
"strings"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
@@ -90,34 +90,47 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
for i, s := range srcs {
if s.Ref == nil {
if s.Ref == nil && s.Desc.MediaType == "" && s.Desc.Digest != "" {
if defaultRepo == nil {
return errors.Errorf("multiple repositories specified, cannot infer repository for %q", args[i])
}
n, err := reference.ParseNormalizedNamed(*defaultRepo)
if err != nil {
return err
}
if s.Desc.MediaType == "" && s.Desc.Digest != "" {
r, err := reference.WithDigest(n, s.Desc.Digest)
if err != nil {
return err
}
srcs[i].Ref = r
sourceRefs = true
} else {
srcs[i].Ref = reference.TagNameOnly(n)
r, err := reference.WithDigest(n, s.Desc.Digest)
if err != nil {
return err
}
srcs[i].Ref = r
sourceRefs = true
}
}
ctx := appcontext.Context()
b, err := builder.New(dockerCli, builder.WithName(in.builder))
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
imageopt, err := b.ImageOpt()
defer release()
var ng *store.NodeGroup
if in.builder != "" {
ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
} else {
ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
}
imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
if err != nil {
return err
}
@@ -169,10 +182,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
if err != nil {
return err
}
printer := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
eg, _ := errgroup.WithContext(ctx)
pw := progress.WithPrefix(printer, "internal", true)
@@ -274,7 +284,6 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
options.builder = *opts.Builder
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()

View File

@@ -1,8 +1,8 @@
package commands
import (
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
@@ -25,11 +25,27 @@ func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
return errors.Errorf("format and raw cannot be used together")
}
b, err := builder.New(dockerCli, builder.WithName(in.builder))
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
imageopt, err := b.ImageOpt()
defer release()
var ng *store.NodeGroup
if in.builder != "" {
ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
} else {
ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
}
imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
if err != nil {
return err
}
@@ -53,7 +69,6 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
options.builder = *rootOpts.Builder
return runInspect(dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()

View File

@@ -1,7 +1,6 @@
package commands
import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
@@ -12,9 +11,8 @@ type RootOptions struct {
func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
Use: "imagetools",
Short: "Commands to work on images in registry",
}
cmd.AddCommand(

View File

@@ -4,19 +4,15 @@ import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -29,46 +25,71 @@ type inspectOptions struct {
func runInspect(dockerCli command.Cli, in inspectOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
)
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
defer release()
var ng *store.NodeGroup
if in.builder != "" {
ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
} else {
ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
}
if ng == nil {
ng = &store.NodeGroup{
Name: "default",
Nodes: []store.Node{{
Name: "default",
Endpoint: "default",
}},
}
}
ngi := &nginfo{ng: ng}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, true)
err = loadNodeGroupData(timeoutCtx, dockerCli, ngi)
var bootNgi *nginfo
if in.bootstrap {
var ok bool
ok, err = b.Boot(ctx)
ok, err = boot(ctx, ngi)
if err != nil {
return err
}
bootNgi = ngi
if ok {
nodes, err = b.LoadNodes(timeoutCtx, true)
ngi = &nginfo{ng: ng}
err = loadNodeGroupData(ctx, dockerCli, ngi)
}
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "Name:\t%s\n", b.Name)
fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
if !b.NodeGroup.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
}
fmt.Fprintf(w, "Name:\t%s\n", ngi.ng.Name)
fmt.Fprintf(w, "Driver:\t%s\n", ngi.ng.Driver)
if err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if b.Err() != nil {
fmt.Fprintf(w, "Error:\t%s\n", b.Err().Error())
} else if ngi.err != nil {
fmt.Fprintf(w, "Error:\t%s\n", ngi.err.Error())
}
if err == nil {
fmt.Fprintln(w, "")
fmt.Fprintln(w, "Nodes:")
for i, n := range nodes {
for i, n := range ngi.ng.Nodes {
if i != 0 {
fmt.Fprintln(w, "")
}
@@ -83,49 +104,21 @@ func runInspect(dockerCli command.Cli, in inspectOptions) error {
fmt.Fprintf(w, "Driver Options:\t%s\n", strings.Join(driverOpts, " "))
}
if err := n.Err; err != nil {
if err := ngi.drivers[i].di.Err; err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if err := ngi.drivers[i].err; err != nil {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else if bootNgi != nil && len(bootNgi.drivers) > i && bootNgi.drivers[i].err != nil {
fmt.Fprintf(w, "Error:\t%s\n", bootNgi.drivers[i].err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
fmt.Fprintf(w, "Status:\t%s\n", ngi.drivers[i].info.Status)
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
}
if nodes[i].Version != "" {
fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
if debug.IsEnabled() {
fmt.Fprintf(w, "Features:\n")
features := nodes[i].Driver.Features(ctx)
featKeys := make([]string, 0, len(features))
for k := range features {
featKeys = append(featKeys, string(k))
}
sort.Strings(featKeys)
for _, k := range featKeys {
fmt.Fprintf(w, "\t%s:\t%t\n", k, features[driver.Feature(k)])
}
}
if len(nodes[i].Labels) > 0 {
fmt.Fprintf(w, "Labels:\n")
for _, k := range sortedKeys(nodes[i].Labels) {
v := nodes[i].Labels[k]
fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
}
}
for ri, rule := range nodes[i].GCPolicy {
fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
if len(rule.Filter) > 0 {
fmt.Fprintf(w, "\tFilters:\t%s\n", strings.Join(rule.Filter, " "))
}
if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
}
if rule.KeepBytes > 0 {
fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
}
if ngi.drivers[i].version != "" {
fmt.Fprintf(w, "Buildkit:\t%s\n", ngi.drivers[i].version)
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Platforms, ngi.drivers[i].platforms), ", "))
}
}
}
@@ -149,7 +142,6 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runInspect(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
@@ -157,14 +149,3 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
func sortedKeys(m map[string]string) []string {
s := make([]string, len(m))
i := 0
for k := range m {
s[i] = k
i++
}
sort.Strings(s)
return s
}
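
The sortedKeys helper removed above exists because Go randomizes map iteration order; without it the Labels section would print in a different order on every run. A small usage sketch with made-up label data:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns map keys in stable order for deterministic output.
func sortedKeys(m map[string]string) []string {
	s := make([]string, 0, len(m))
	for k := range m {
		s = append(s, k)
	}
	sort.Strings(s)
	return s
}

func main() {
	labels := map[string]string{
		"org.mobyproject.buildkit.worker.snapshotter": "overlayfs",
		"arch": "amd64",
	}
	for _, k := range sortedKeys(labels) {
		fmt.Printf("\t%s:\t%s\n", k, labels[k])
	}
}
```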

View File

@@ -4,7 +4,6 @@ import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -47,8 +46,7 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runInstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command

View File

@@ -4,14 +4,14 @@ import (
"context"
"fmt"
"io"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
@@ -32,24 +32,52 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
}
defer release()
current, err := storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
builders, err := builder.GetBuilders(dockerCli, txn)
if err != nil {
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
eg, _ := errgroup.WithContext(timeoutCtx)
ll, err := txn.List()
if err != nil {
return err
}
builders := make([]*nginfo, len(ll))
for i, ng := range ll {
builders[i] = &nginfo{ng: ng}
}
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
for _, c := range contexts {
ngi := &nginfo{ng: &store.NodeGroup{
Name: c.Name,
Nodes: []store.Node{{
Name: c.Name,
Endpoint: c.Name,
}},
}}
// if a context has the same name as an instance from the store, do not
// add it to the builders list. An instance from the store takes
// precedence over context builders.
if hasNodeGroup(builders, ngi) {
continue
}
builders = append(builders, ngi)
}
eg, _ := errgroup.WithContext(ctx)
for _, b := range builders {
func(b *builder.Builder) {
func(b *nginfo) {
eg.Go(func() error {
_, _ = b.LoadNodes(timeoutCtx, true)
if err := loadNodeGroupData(ctx, dockerCli, b); b.err == nil && err != nil {
b.err = err
}
return nil
})
}(b)
@@ -59,15 +87,29 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
return err
}
currentName := "default"
current, err := storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
if current != nil {
currentName = current.Name
if current.Name == "default" {
currentName = current.Nodes[0].Endpoint
}
}
w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
currentSet := false
printErr := false
for _, b := range builders {
if current.Name == b.Name {
b.Name += " *"
if !currentSet && b.ng.Name == currentName {
b.ng.Name += " *"
currentSet = true
}
if ok := printBuilder(w, b); !ok {
if ok := printngi(w, b); !ok {
printErr = true
}
}
@@ -77,12 +119,19 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
if printErr {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
for _, b := range builders {
if b.Err() != nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.Name, strings.TrimSpace(b.Err().Error()))
if b.err != nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.ng.Name, strings.TrimSpace(b.err.Error()))
} else {
for _, d := range b.Nodes() {
if d.Err != nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.Name, d.Name, strings.TrimSpace(d.Err.Error()))
for idx, n := range b.ng.Nodes {
d := b.drivers[idx]
var nodeErr string
if d.err != nil {
nodeErr = d.err.Error()
} else if d.di.Err != nil {
nodeErr = d.di.Err.Error()
}
if nodeErr != "" {
_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.ng.Name, n.Name, strings.TrimSpace(nodeErr))
}
}
}
@@ -92,25 +141,26 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
return nil
}
func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
func printngi(w io.Writer, ngi *nginfo) (ok bool) {
ok = true
var err string
if b.Err() != nil {
if ngi.err != nil {
ok = false
err = "error"
}
fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
if b.Err() == nil {
for _, n := range b.Nodes() {
fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", ngi.ng.Name, ngi.ng.Driver, err)
if ngi.err == nil {
for idx, n := range ngi.ng.Nodes {
d := ngi.drivers[idx]
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
if d.info != nil {
status = d.info.Status.String()
}
if n.Err != nil {
if d.err != nil || d.di.Err != nil {
ok = false
fmt.Fprintf(w, " %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
} else {
fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, d.version, strings.Join(platformutil.FormatInGroups(n.Platforms, d.platforms), ", "))
}
}
}
@@ -127,7 +177,6 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
// hide builder persistent flag for this command
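
Both versions of ls format the table with text/tabwriter: cells are tab-separated and the columns are aligned once Flush runs. A standalone sketch with illustrative values:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Same layout as `buildx ls`: tab-separated cells, aligned on Flush.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
	fmt.Fprintf(w, "mybuilder *\tdocker-container\t\t\t\n")
	fmt.Fprintf(w, " mybuilder0\tunix:///var/run/docker.sock\trunning\tv0.10.5\tlinux/amd64\n")
	w.Flush()
}
```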

commands/print.go (new file, 48 lines)
View File

@@ -0,0 +1,48 @@
package commands
import (
"fmt"
"io"
"log"
"os"
"github.com/docker/buildx/build"
"github.com/docker/docker/api/types/versions"
"github.com/moby/buildkit/frontend/subrequests"
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
)
func printResult(f *build.PrintFunc, res map[string]string) error {
switch f.Name {
case "outline":
return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
case "targets":
return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
case "subrequests.describe":
return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
default:
if dt, ok := res["result.txt"]; ok {
fmt.Print(dt)
} else {
log.Printf("%s %+v", f, res)
}
}
return nil
}
type printFunc func([]byte, io.Writer) error
func printValue(printer printFunc, version string, format string, res map[string]string) error {
if format == "json" {
fmt.Fprintln(os.Stdout, res["result.json"])
return nil
}
if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
// structure is too new and we don't know how to print it
fmt.Fprint(os.Stdout, res["result.txt"])
return nil
}
return printer([]byte(res["result.json"]), os.Stdout)
}
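
The interesting part of printValue is the version gate: when the frontend reports a subrequest format newer than this binary understands, it falls back to the pre-rendered result.txt. A sketch of just that gate, with an invented result map:

```go
package main

import (
	"fmt"
	"os"

	"github.com/docker/docker/api/types/versions"
)

// printGated mirrors printValue above: fall back to the pre-rendered text
// when the subrequest result is newer than the printer understands.
func printGated(known string, res map[string]string) {
	if res["version"] != "" && versions.LessThan(known, res["version"]) && res["result.txt"] != "" {
		fmt.Fprint(os.Stdout, res["result.txt"]) // structure too new to decode
		return
	}
	fmt.Fprintln(os.Stdout, res["result.json"])
}

func main() {
	printGated("1.0", map[string]string{
		"version":    "1.1",
		"result.txt": "plain fallback\n",
	})
}
```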

View File

@@ -7,8 +7,7 @@ import (
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/build"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
@@ -55,18 +54,14 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
return nil
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
for _, node := range nodes {
if node.Err != nil {
return node.Err
for _, di := range dis {
if di.Err != nil {
return di.Err
}
}
@@ -95,11 +90,11 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
}()
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
func(node builder.Node) {
for _, di := range dis {
func(di build.DriverInfo) {
eg.Go(func() error {
if node.Driver != nil {
c, err := node.Driver.Client(ctx)
if di.Driver != nil {
c, err := di.Driver.Client(ctx)
if err != nil {
return err
}
@@ -114,7 +109,7 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
}
return nil
})
}(node)
}(di)
}
if err := eg.Wait(); err != nil {
@@ -140,7 +135,6 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
return runPrune(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()

View File

@@ -5,10 +5,8 @@ import (
"fmt"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
@@ -46,33 +44,41 @@ func runRm(dockerCli command.Cli, in rmOptions) error {
return rmAllInactive(ctx, txn, dockerCli, in)
}
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
var ng *store.NodeGroup
if in.builder != "" {
ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
} else {
ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
}
if ng == nil {
return nil
}
ctxbuilders, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
for _, cb := range ctxbuilders {
if ng.Driver == "docker" && len(ng.Nodes) == 1 && ng.Nodes[0].Endpoint == cb.Name {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb.Name)
}
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
err1 := rm(ctx, dockerCli, in, ng)
if err := txn.Remove(ng.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", ng.Name)
return nil
}
@@ -93,7 +99,6 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runRm(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
@@ -105,53 +110,61 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
func rm(ctx context.Context, nodes []builder.Node, in rmOptions) (err error) {
for _, node := range nodes {
if node.Driver == nil {
func rm(ctx context.Context, dockerCli command.Cli, in rmOptions, ng *store.NodeGroup) error {
dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver == nil {
continue
}
// Do not stop the buildkitd daemon when --keep-daemon is provided
if !in.keepDaemon {
if err := node.Driver.Stop(ctx, true); err != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
}
if err := node.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
if err := di.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
return err
}
if node.Err != nil {
err = node.Err
if di.Err != nil {
err = di.Err
}
}
return err
}
func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, in rmOptions) error {
builders, err := builder.GetBuilders(dockerCli, txn)
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
ll, err := txn.List()
if err != nil {
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
builders := make([]*nginfo, len(ll))
for i, ng := range ll {
builders[i] = &nginfo{ng: ng}
}
eg, _ := errgroup.WithContext(timeoutCtx)
eg, _ := errgroup.WithContext(ctx)
for _, b := range builders {
func(b *builder.Builder) {
func(b *nginfo) {
eg.Go(func() error {
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
return errors.Wrapf(err, "cannot load %s", b.Name)
if err := loadNodeGroupData(ctx, dockerCli, b); err != nil {
return errors.Wrapf(err, "cannot load %s", b.ng.Name)
}
if b.Dynamic {
if b.ng.Dynamic {
return nil
}
if b.Inactive() {
rmerr := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
if b.inactive() {
rmerr := rm(ctx, dockerCli, in, b.ng)
if err := txn.Remove(b.ng.Name); err != nil {
return err
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.ng.Name)
return rmerr
}
return nil
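
rmAllInactive above bounds the whole sweep with a 20-second timeout so one hung builder cannot stall `rm --all-inactive`. A self-contained sketch of a timeout-bounded errgroup sweep (the work inside the goroutine is a stand-in for load-plus-remove):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	// Bound the whole sweep, as rmAllInactive does with its 20s timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()

	eg, ctx := errgroup.WithContext(ctx)
	for _, name := range []string{"old1", "old2"} {
		name := name
		eg.Go(func() error {
			select {
			case <-time.After(10 * time.Millisecond): // stand-in for load + rm
				fmt.Println(name, "removed")
				return nil
			case <-ctx.Done(): // timeout or sibling failure
				return ctx.Err()
			}
		})
	}
	if err := eg.Wait(); err != nil {
		fmt.Println("sweep aborted:", err)
	}
}
```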

View File

@@ -4,8 +4,6 @@ import (
"os"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
@@ -24,9 +22,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
Annotations: map[string]string{
annotation.CodeDelimiter: `"`,
},
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
}
if isPlugin {
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
@@ -91,15 +86,6 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if isExperimental() {
remote.AddControllerCommands(cmd, dockerCli)
addDebugShellCommand(cmd, dockerCli)
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck
"builder",
completion.BuilderNames(dockerCli),
)
}
func rootFlags(options *rootOptions, flags *pflag.FlagSet) {

View File

@@ -3,8 +3,8 @@ package commands
import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
@@ -18,19 +18,32 @@ type stopOptions struct {
func runStop(dockerCli command.Cli, in stopOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
)
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
defer release()
if in.builder != "" {
ng, err := storeutil.GetNodeGroup(txn, dockerCli, in.builder)
if err != nil {
return err
}
if err := stop(ctx, dockerCli, ng); err != nil {
return err
}
return nil
}
return stop(ctx, nodes)
ng, err := storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return err
}
if ng != nil {
return stop(ctx, dockerCli, ng)
}
return stopCurrent(ctx, dockerCli)
}
func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -47,21 +60,42 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runStop(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
return cmd
}
func stop(ctx context.Context, nodes []builder.Node) (err error) {
for _, node := range nodes {
if node.Driver != nil {
if err := node.Driver.Stop(ctx, true); err != nil {
func stop(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup) error {
dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
}
if node.Err != nil {
err = node.Err
if di.Err != nil {
err = di.Err
}
}
return err
}
func stopCurrent(ctx context.Context, dockerCli command.Cli) error {
dis, err := getDefaultDrivers(ctx, dockerCli, false, "")
if err != nil {
return err
}
for _, di := range dis {
if di.Driver != nil {
if err := di.Driver.Stop(ctx, true); err != nil {
return err
}
}
if di.Err != nil {
err = di.Err
}
}
return err
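
Both stop paths keep iterating after a node error and return only the last one. A sketch of a keep-going variant that aggregates the failures instead (errors.Join needs Go 1.20+; the code above deliberately keeps just the final error):

```go
package main

import (
	"errors"
	"fmt"
)

type node struct {
	name string
	err  error
}

// stopAll stops every node and reports all failures at the end instead of
// aborting on the first one, like stop/stopCurrent above.
func stopAll(nodes []node) error {
	var errs []error
	for _, n := range nodes {
		if n.err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", n.name, n.err))
			continue
		}
		// driver.Stop would run here
	}
	return errors.Join(errs...)
}

func main() {
	fmt.Println(stopAll([]node{{name: "n0"}, {name: "n1", err: errors.New("unreachable")}}))
}
```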

View File

@@ -4,7 +4,6 @@ import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -53,8 +52,7 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runUninstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command

View File

@@ -4,8 +4,6 @@ import (
"os"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
@@ -31,7 +29,7 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Errorf("run `docker context use default` to switch to default context")
}
if in.builder == "default" || in.builder == dockerCli.CurrentContext() {
ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
ep, err := storeutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
@@ -54,7 +52,7 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Wrapf(err, "failed to find instance %q", in.builder)
}
ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
ep, err := storeutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
@@ -79,7 +77,6 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runUse(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()

commands/util.go (new file, 487 lines)
View File

@@ -0,0 +1,487 @@
package commands
import (
"context"
"net/url"
"os"
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/context/docker"
ctxstore "github.com/docker/cli/cli/context/store"
dopts "github.com/docker/cli/opts"
dockerclient "github.com/docker/docker/client"
"github.com/moby/buildkit/util/grpcerrors"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc/codes"
"k8s.io/client-go/tools/clientcmd"
)
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
de, err := storeutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && de != "" {
if ep == "default" {
return de, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
// driversForNodeGroup returns drivers for a nodegroup instance
func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, contextPathHash string) ([]build.DriverInfo, error) {
eg, _ := errgroup.WithContext(ctx)
dis := make([]build.DriverInfo, len(ng.Nodes))
var f driver.Factory
if ng.Driver != "" {
var err error
f, err = driver.GetFactory(ng.Driver, true)
if err != nil {
return nil, err
}
} else {
// empty driver means nodegroup was implicitly created as a default
// driver for a docker context and allows falling back to a
docker-container driver for an older daemon that doesn't support
// buildkit (< 18.06).
ep := ng.Nodes[0].Endpoint
dockerapi, err := clientForEndpoint(dockerCli, ep)
if err != nil {
return nil, err
}
// checking whether the endpoint is healthy is needed to determine the driver type.
// if this fails then we can't continue with driver selection.
if _, err = dockerapi.Ping(ctx); err != nil {
return nil, err
}
f, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
if err != nil {
return nil, err
}
ng.Driver = f.Name()
}
imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
if err != nil {
return nil, err
}
for i, n := range ng.Nodes {
func(i int, n store.Node) {
eg.Go(func() error {
di := build.DriverInfo{
Name: n.Name,
Platform: n.Platforms,
ProxyConfig: storeutil.GetProxyConfig(dockerCli),
}
defer func() {
dis[i] = di
}()
dockerapi, err := clientForEndpoint(dockerCli, n.Endpoint)
if err != nil {
di.Err = err
return nil
}
// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
dockerapi.NegotiateAPIVersion(ctx)
contextStore := dockerCli.ContextStore()
var kcc driver.KubeClientConfig
kcc, err = configFromContext(n.Endpoint, contextStore)
if err != nil {
// err is returned if n.Endpoint is a non-context name like "unix:///var/run/docker.sock".
// try again with name="default".
// FIXME: n should retain the real context name.
kcc, err = configFromContext("default", contextStore)
if err != nil {
logrus.Error(err)
}
}
tryToUseKubeConfigInCluster := false
if kcc == nil {
tryToUseKubeConfigInCluster = true
} else {
if _, err := kcc.ClientConfig(); err != nil {
tryToUseKubeConfigInCluster = true
}
}
if tryToUseKubeConfigInCluster {
kccInCluster := driver.KubeClientConfigInCluster{}
if _, err := kccInCluster.ClientConfig(); err == nil {
logrus.Debug("using kube config in cluster")
kcc = kccInCluster
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, contextPathHash)
if err != nil {
di.Err = err
return nil
}
di.Driver = d
di.ImageOpt = imageopt
return nil
})
}(i, n)
}
if err := eg.Wait(); err != nil {
return nil, err
}
return dis, nil
}
func configFromContext(endpointName string, s ctxstore.Reader) (clientcmd.ClientConfig, error) {
if strings.HasPrefix(endpointName, "kubernetes://") {
u, _ := url.Parse(endpointName)
if kubeconfig := u.Query().Get("kubeconfig"); kubeconfig != "" {
_ = os.Setenv(clientcmd.RecommendedConfigPathEnvVar, kubeconfig)
}
rules := clientcmd.NewDefaultClientConfigLoadingRules()
apiConfig, err := rules.Load()
if err != nil {
return nil, err
}
return clientcmd.NewDefaultClientConfig(*apiConfig, &clientcmd.ConfigOverrides{}), nil
}
return ctxkube.ConfigFromContext(endpointName, s)
}
// clientForEndpoint returns a docker client for an endpoint
func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, l := range list {
if l.Name == name {
dep, ok := l.Endpoints["docker"]
if !ok {
return nil, errors.Errorf("context %q does not have a Docker endpoint", name)
}
epm, ok := dep.(docker.EndpointMeta)
if !ok {
return nil, errors.Errorf("endpoint %q is not of type EndpointMeta, %T", dep, dep)
}
ep, err := docker.WithTLSData(dockerCli.ContextStore(), name, epm)
if err != nil {
return nil, err
}
clientOpts, err := ep.ClientOpts()
if err != nil {
return nil, err
}
return dockerclient.NewClientWithOpts(clientOpts...)
}
}
ep := docker.Endpoint{
EndpointMeta: docker.EndpointMeta{
Host: name,
},
}
clientOpts, err := ep.ClientOpts()
if err != nil {
return nil, err
}
return dockerclient.NewClientWithOpts(clientOpts...)
}
func getInstanceOrDefault(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
var defaultOnly bool
if instance == "default" && instance != dockerCli.CurrentContext() {
return nil, errors.Errorf("use `docker --context=default buildx` to switch to default context")
}
if instance == "default" || instance == dockerCli.CurrentContext() {
instance = ""
defaultOnly = true
}
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, l := range list {
if l.Name == instance {
return nil, errors.Errorf("use `docker --context=%s buildx` to switch to context %s", instance, instance)
}
}
if instance != "" {
return getInstanceByName(ctx, dockerCli, instance, contextPathHash)
}
return getDefaultDrivers(ctx, dockerCli, defaultOnly, contextPathHash)
}
func getInstanceByName(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
ng, err := txn.NodeGroupByName(instance)
if err != nil {
return nil, err
}
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
}
// getDefaultDrivers returns drivers based on current cli config
func getDefaultDrivers(ctx context.Context, dockerCli command.Cli, defaultOnly bool, contextPathHash string) ([]build.DriverInfo, error) {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return nil, err
}
defer release()
if !defaultOnly {
ng, err := storeutil.GetCurrentInstance(txn, dockerCli)
if err != nil {
return nil, err
}
if ng != nil {
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
}
}
imageopt, err := storeutil.GetImageConfig(dockerCli, nil)
if err != nil {
return nil, err
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, "", dockerCli.Client(), imageopt.Auth, nil, nil, nil, nil, nil, contextPathHash)
if err != nil {
return nil, err
}
return []build.DriverInfo{
{
Name: "default",
Driver: d,
ImageOpt: imageopt,
ProxyConfig: storeutil.GetProxyConfig(dockerCli),
},
}, nil
}
func loadInfoData(ctx context.Context, d *dinfo) error {
if d.di.Driver == nil {
return nil
}
info, err := d.di.Driver.Info(ctx)
if err != nil {
return err
}
d.info = info
if info.Status == driver.Running {
c, err := d.di.Driver.Client(ctx)
if err != nil {
return err
}
workers, err := c.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
for _, w := range workers {
d.platforms = append(d.platforms, w.Platforms...)
}
d.platforms = platformutil.Dedupe(d.platforms)
inf, err := c.Info(ctx)
if err != nil {
if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
d.version, err = d.di.Driver.Version(ctx)
if err != nil {
return errors.Wrap(err, "getting version")
}
}
} else {
d.version = inf.BuildkitVersion.Version
}
}
return nil
}
func loadNodeGroupData(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
eg, _ := errgroup.WithContext(ctx)
dis, err := driversForNodeGroup(ctx, dockerCli, ngi.ng, "")
if err != nil {
return err
}
ngi.drivers = make([]dinfo, len(dis))
for i, di := range dis {
d := di
ngi.drivers[i].di = &d
func(d *dinfo) {
eg.Go(func() error {
if err := loadInfoData(ctx, d); err != nil {
d.err = err
}
return nil
})
}(&ngi.drivers[i])
}
if err := eg.Wait(); err != nil {
return err
}
kubernetesDriverCount := 0
for _, di := range ngi.drivers {
if di.info != nil && len(di.info.DynamicNodes) > 0 {
kubernetesDriverCount++
}
}
isAllKubernetesDrivers := len(ngi.drivers) == kubernetesDriverCount
if isAllKubernetesDrivers {
var drivers []dinfo
var dynamicNodes []store.Node
for _, di := range ngi.drivers {
// dynamic nodes are used in the Kubernetes driver.
// Kubernetes pods are dynamically mapped to BuildKit nodes.
if di.info != nil && len(di.info.DynamicNodes) > 0 {
for i := 0; i < len(di.info.DynamicNodes); i++ {
// all []dinfo values share the same *build.DriverInfo and *driver.Info pointers
diClone := di
if pl := di.info.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.platforms = pl
}
drivers = append(drivers, diClone)
}
dynamicNodes = append(dynamicNodes, di.info.DynamicNodes...)
}
}
// not append (remove the static nodes in the store)
ngi.ng.Nodes = dynamicNodes
ngi.drivers = drivers
ngi.ng.Dynamic = true
}
return nil
}
func hasNodeGroup(list []*nginfo, ngi *nginfo) bool {
for _, l := range list {
if ngi.ng.Name == l.ng.Name {
return true
}
}
return false
}
func dockerAPI(dockerCli command.Cli) *api {
return &api{dockerCli: dockerCli}
}
type api struct {
dockerCli command.Cli
}
func (a *api) DockerAPI(name string) (dockerclient.APIClient, error) {
if name == "" {
name = a.dockerCli.CurrentContext()
}
return clientForEndpoint(a.dockerCli, name)
}
type dinfo struct {
di *build.DriverInfo
info *driver.Info
platforms []specs.Platform
version string
err error
}
type nginfo struct {
ng *store.NodeGroup
drivers []dinfo
err error
}
// inactive checks if all nodes are inactive for this builder
func (n *nginfo) inactive() bool {
for idx := range n.ng.Nodes {
d := n.drivers[idx]
if d.info != nil && d.info.Status == driver.Running {
return false
}
}
return true
}
func boot(ctx context.Context, ngi *nginfo) (bool, error) {
toBoot := make([]int, 0, len(ngi.drivers))
for i, d := range ngi.drivers {
if d.err != nil || d.di.Err != nil || d.di.Driver == nil || d.info == nil {
continue
}
if d.info.Status != driver.Running {
toBoot = append(toBoot, i)
}
}
if len(toBoot) == 0 {
return false, nil
}
printer := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, "auto")
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
pw := progress.WithPrefix(printer, ngi.ng.Nodes[idx].Name, len(toBoot) > 1)
_, err := driver.Boot(ctx, baseCtx, ngi.drivers[idx].di.Driver, pw)
if err != nil {
ngi.drivers[idx].err = err
}
return nil
})
}(idx)
}
err := eg.Wait()
err1 := printer.Wait()
if err == nil {
err = err1
}
return true, err
}
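// Illustrative sketch, not part of the original file: how the helpers above
// are typically combined — load driver state first, then boot any nodes that
// are not yet running. The function name ensureBooted is hypothetical.
func ensureBooted(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
	if err := loadNodeGroupData(ctx, dockerCli, ngi); err != nil {
		return err
	}
	if ngi.inactive() {
		// boot reports whether any node had to be booted, plus any boot error.
		if _, err := boot(ctx, ngi); err != nil {
			return err
		}
	}
	return nil
}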


@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
@@ -24,7 +23,6 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runVersion(dockerCli)
},
ValidArgsFunction: completion.Disable,
}
// hide builder persistent flag for this command


@@ -1,266 +0,0 @@
package build
import (
"context"
"io"
"os"
"path/filepath"
"strings"
"sync"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
)
const defaultTargetName = "default"
// RunBuild runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
for name, path := range in.NamedContexts {
contexts[name] = build.NamedContext{Path: path}
}
opts := build.Options{
Inputs: build.Inputs{
ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName,
InStream: inStream,
NamedContexts: contexts,
},
BuildArgs: in.BuildArgs,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
}
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, err
}
opts.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig))
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, secrets)
sshSpecs := in.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, err := controllerapi.CreateExports(in.Exports)
if err != nil {
return nil, nil, err
}
if in.ExportPush {
if in.ExportLoad {
return nil, nil, errors.Errorf("push and load may not be set together at the moment")
}
if len(outputs) == 0 {
outputs = []client.ExportEntry{{
Type: "image",
Attrs: map[string]string{
"push": "true",
},
}}
} else {
switch outputs[0].Type {
case "image":
outputs[0].Attrs["push"] = "true"
default:
return nil, nil, errors.Errorf("push and %q output can't be used together", outputs[0].Type)
}
}
}
if in.ExportLoad {
if len(outputs) == 0 {
outputs = []client.ExportEntry{{
Type: "docker",
Attrs: map[string]string{},
}}
} else {
switch outputs[0].Type {
case "docker":
default:
return nil, nil, errors.Errorf("load and %q output can't be used together", outputs[0].Type)
}
}
}
opts.Exports = outputs
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
opts.Attests = controllerapi.CreateAttestations(in.Attests)
opts.SourcePolicy = in.SourcePolicy
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, err
}
opts.Allow = allow
if in.PrintFunc != nil {
opts.PrintFunc = &build.PrintFunc{
Name: in.PrintFunc.Name,
Format: in.PrintFunc.Format,
}
}
// key string used for kubernetes "sticky" mode
contextPathHash, err := filepath.Abs(in.ContextPath)
if err != nil {
contextPathHash = in.ContextPath
}
// TODO: this should not be loaded this side of the controller api
b, err := builder.New(dockerCli,
builder.WithName(in.Builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return nil, nil, err
}
resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, err
}
return resp, res, nil
}
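// Usage sketch (assumed, not part of the original file): even when RunBuild
// fails, the returned *build.ResultHandle may be non-nil and can be kept for
// debugging before the error is propagated, e.g.
//
//	resp, res, err := RunBuild(ctx, dockerCli, opts, stdin, pw, true)
//	if res != nil {
//		defer res.Done()
//		// inspect res (or invoke a container from it) to debug a failed step
//	}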
// buildTargets runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle
var resp map[string]*client.SolveResponse
var err error
if generateResult {
var mu sync.Mutex
var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock()
defer mu.Unlock()
if res == nil || driverIndex < idx {
idx, res = driverIndex, gotRes
}
})
} else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
}
if err != nil {
return nil, res, err
}
return resp[defaultTargetName], res, err
}
func wrapBuildError(err error, bake bool) error {
if err == nil {
return nil
}
st, ok := grpcerrors.AsGRPCStatus(err)
if ok {
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
msg := "current frontend does not support --build-context."
if bake {
msg = "current frontend does not support defining additional contexts for targets."
}
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
return &wrapped{err, msg}
}
}
return err
}
type wrapped struct {
err error
msg string
}
func (w *wrapped) Error() string {
return w.msg
}
func (w *wrapped) Unwrap() error {
return w.err
}
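// Minimal sketch (assumed, not in the original file): because wrapped
// implements Unwrap, the standard errors helpers still see the underlying
// cause behind the friendlier message when wrapBuildError matched, e.g.
//
//	err := wrapBuildError(buildErr, false)
//	fmt.Println(err)                // the hint message, if err was wrapped
//	fmt.Println(errors.Unwrap(err)) // the original error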
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
defer release()
return txn.UpdateLastActivity(ng)
}
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
if u == nil {
return nil
}
values := make(map[string]*units.Ulimit)
for k, v := range u.Values {
values[k] = &units.Ulimit{
Name: v.Name,
Hard: v.Hard,
Soft: v.Soft,
}
}
return dockeropts.NewUlimitOpt(&values)
}


@@ -1,32 +0,0 @@
package control
import (
"context"
"io"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
)
type BuildxController interface {
Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
// Invoke starts an IO session into the specified process.
	// If pid doesn't match any running process, it starts a new process with the specified config.
	// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error
Close() error
List(ctx context.Context) (refs []string, _ error)
Disconnect(ctx context.Context, ref string) error
ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
DisconnectProcess(ctx context.Context, ref, pid string) error
Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
}
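// Illustrative sketch, not part of the original file: the expected call order
// for a BuildxController. The pid value and InvokeConfig are placeholders;
// only the method sequence comes from the interface above.
func exampleSession(ctx context.Context, c BuildxController, opts controllerapi.BuildOptions, stdin io.ReadCloser, pw progress.Writer, ioIn io.ReadCloser, ioOut, ioErr io.WriteCloser) error {
	ref, _, err := c.Build(ctx, opts, stdin, pw)
	if err != nil {
		return err
	}
	defer c.Disconnect(ctx, ref)
	// A pid that matches no running process starts a new one with this config.
	return c.Invoke(ctx, ref, "example-pid", controllerapi.InvokeConfig{Tty: true}, ioIn, ioOut, ioErr)
}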
type ControlOptions struct {
ServerConfig string
Root string
Detach bool
}


@@ -1,36 +0,0 @@
package controller
import (
"context"
"fmt"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/controller/local"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
)
func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
var name string
if opts.Detach {
name = "remote"
} else {
name = "local"
}
var c control.BuildxController
err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
if opts.Detach {
c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
} else {
c = local.NewLocalBuildxController(ctx, dockerCli, l)
}
return err
})
if err != nil {
return nil, errors.Wrap(err, "failed to start buildx controller")
}
return c, nil
}
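// Usage sketch (assumed, not in the original file): Detach selects the remote
// (detached server) controller, otherwise the in-process local one, e.g.
//
//	c, err := NewController(ctx, control.ControlOptions{Detach: true}, dockerCli, pw)
//	if err != nil {
//		return err
//	}
//	defer c.Close()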


@@ -1,34 +0,0 @@
package errdefs
import (
"github.com/containerd/typeurl/v2"
"github.com/moby/buildkit/util/grpcerrors"
)
func init() {
typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
}
type BuildError struct {
Build
error
}
func (e *BuildError) Unwrap() error {
return e.error
}
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return &e.Build
}
func WrapBuild(err error, ref string) error {
if err == nil {
return nil
}
return &BuildError{Build: Build{Ref: ref}, error: err}
}
func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: *b}
}
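// Usage sketch (assumed, not in the original file): callers can recover the
// ref of a failed build with the standard errors package, e.g.
//
//	var be *errdefs.BuildError
//	if errors.As(err, &be) {
//		ref := be.Ref // usable to inspect or re-enter the failed build
//	}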


@@ -1,77 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: errdefs.proto
package errdefs
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
_ "github.com/moby/buildkit/solver/pb"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Build struct {
Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Build) Reset() { *m = Build{} }
func (m *Build) String() string { return proto.CompactTextString(m) }
func (*Build) ProtoMessage() {}
func (*Build) Descriptor() ([]byte, []int) {
return fileDescriptor_689dc58a5060aff5, []int{0}
}
func (m *Build) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Build.Unmarshal(m, b)
}
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Build.Marshal(b, m, deterministic)
}
func (m *Build) XXX_Merge(src proto.Message) {
xxx_messageInfo_Build.Merge(m, src)
}
func (m *Build) XXX_Size() int {
return xxx_messageInfo_Build.Size(m)
}
func (m *Build) XXX_DiscardUnknown() {
xxx_messageInfo_Build.DiscardUnknown(m)
}
var xxx_messageInfo_Build proto.InternalMessageInfo
func (m *Build) GetRef() string {
if m != nil {
return m.Ref
}
return ""
}
func init() {
proto.RegisterType((*Build)(nil), "errdefs.Build")
}
func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
var fileDescriptor_689dc58a5060aff5 = []byte{
// 111 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
}


@@ -1,9 +0,0 @@
syntax = "proto3";
package errdefs;
import "github.com/moby/buildkit/solver/pb/ops.proto";
message Build {
string Ref = 1;
}


@@ -1,3 +0,0 @@
package errdefs
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto


@@ -1,146 +0,0 @@
package local
import (
"context"
"io"
"sync/atomic"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
return &localController{
dockerCli: dockerCli,
ref: "local",
processes: processes.NewManager(),
}
}
type buildConfig struct {
// TODO: these two structs should be merged
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
resultCtx *build.ResultHandle
buildOptions *controllerapi.BuildOptions
}
type localController struct {
dockerCli command.Cli
ref string
buildConfig buildConfig
processes *processes.Manager
buildOnGoing atomic.Bool
}
func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
if !b.buildOnGoing.CompareAndSwap(false, true) {
return "", nil, errors.New("build ongoing")
}
defer b.buildOnGoing.Store(false)
resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
// NOTE: RunBuild can return *build.ResultHandle even on error.
if res != nil {
b.buildConfig = buildConfig{
resultCtx: res,
buildOptions: &options,
}
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, b.ref)
}
}
if buildErr != nil {
return "", nil, buildErr
}
return b.ref, resp, nil
}
func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
}
return b.processes.ListProcesses(), nil
}
func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
}
return b.processes.DeleteProcess(pid)
}
func (b *localController) cancelRunningProcesses() {
b.processes.CancelRunningProcesses()
}
func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
}
proc, ok := b.processes.Get(pid)
if !ok {
// Start a new process.
if b.buildConfig.resultCtx == nil {
return errors.New("no build result is registered")
}
var err error
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
ioCancelledCh := make(chan struct{})
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })
select {
case <-ioCancelledCh:
return errors.Errorf("io cancelled")
case err := <-proc.Done():
return err
case <-ctx.Done():
return ctx.Err()
}
}
func (b *localController) Kill(context.Context) error {
b.Close()
return nil
}
func (b *localController) Close() error {
b.cancelRunningProcesses()
if b.buildConfig.resultCtx != nil {
b.buildConfig.resultCtx.Done()
}
// TODO: cancel ongoing builds?
return nil
}
func (b *localController) List(ctx context.Context) (res []string, _ error) {
return []string{b.ref}, nil
}
func (b *localController) Disconnect(ctx context.Context, key string) error {
b.Close()
return nil
}
func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
}
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}


@@ -1,20 +0,0 @@
package pb
func CreateAttestations(attests []*Attest) map[string]*string {
result := map[string]*string{}
for _, attest := range attests {
// ignore duplicates
if _, ok := result[attest.Type]; ok {
continue
}
if attest.Disabled {
result[attest.Type] = nil
continue
}
attrs := attest.Attrs
result[attest.Type] = &attrs
}
return result
}
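// Usage sketch (assumed, not in the original file): a nil value in the
// returned map means the attestation was explicitly disabled, e.g.
//
//	m := CreateAttestations([]*Attest{
//		{Type: "sbom", Attrs: "generator=example/scanner"},
//		{Type: "provenance", Disabled: true},
//	})
//	// *m["sbom"] == "generator=example/scanner"; m["provenance"] == nil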


@@ -1,21 +0,0 @@
package pb
import "github.com/moby/buildkit/client"
func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
for k, v := range entry.Attrs {
out.Attrs[k] = v
}
outs = append(outs, out)
}
return outs
}

File diff suppressed because it is too large


@@ -1,244 +0,0 @@
syntax = "proto3";
package buildx.controller.v1;
import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";
option go_package = "pb";
service Controller {
rpc Build(BuildRequest) returns (BuildResponse);
rpc Inspect(InspectRequest) returns (InspectResponse);
rpc Status(StatusRequest) returns (stream StatusResponse);
rpc Input(stream InputMessage) returns (InputResponse);
rpc Invoke(stream Message) returns (stream Message);
rpc List(ListRequest) returns (ListResponse);
rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
rpc Info(InfoRequest) returns (InfoResponse);
rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}
message ListProcessesRequest {
string Ref = 1;
}
message ListProcessesResponse {
repeated ProcessInfo Infos = 1;
}
message ProcessInfo {
string ProcessID = 1;
InvokeConfig InvokeConfig = 2;
}
message DisconnectProcessRequest {
string Ref = 1;
string ProcessID = 2;
}
message DisconnectProcessResponse {
}
message BuildRequest {
string Ref = 1;
BuildOptions Options = 2;
}
message BuildOptions {
string ContextPath = 1;
string DockerfileName = 2;
PrintFunc PrintFunc = 3;
map<string, string> NamedContexts = 4;
repeated string Allow = 5;
repeated Attest Attests = 6;
map<string, string> BuildArgs = 7;
repeated CacheOptionsEntry CacheFrom = 8;
repeated CacheOptionsEntry CacheTo = 9;
string CgroupParent = 10;
repeated ExportEntry Exports = 11;
repeated string ExtraHosts = 12;
map<string, string> Labels = 13;
string NetworkMode = 14;
repeated string NoCacheFilter = 15;
repeated string Platforms = 16;
repeated Secret Secrets = 17;
int64 ShmSize = 18;
repeated SSH SSH = 19;
repeated string Tags = 20;
string Target = 21;
UlimitOpt Ulimits = 22;
string Builder = 23;
bool NoCache = 24;
bool Pull = 25;
bool ExportPush = 26;
bool ExportLoad = 27;
moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
}
message ExportEntry {
string Type = 1;
map<string, string> Attrs = 2;
string Destination = 3;
}
message CacheOptionsEntry {
string Type = 1;
map<string, string> Attrs = 2;
}
message Attest {
string Type = 1;
bool Disabled = 2;
string Attrs = 3;
}
message SSH {
string ID = 1;
repeated string Paths = 2;
}
message Secret {
string ID = 1;
string FilePath = 2;
string Env = 3;
}
message PrintFunc {
string Name = 1;
string Format = 2;
}
message InspectRequest {
string Ref = 1;
}
message InspectResponse {
BuildOptions Options = 1;
}
message UlimitOpt {
map<string, Ulimit> values = 1;
}
message Ulimit {
string Name = 1;
int64 Hard = 2;
int64 Soft = 3;
}
message BuildResponse {
map<string, string> ExporterResponse = 1;
}
message DisconnectRequest {
string Ref = 1;
}
message DisconnectResponse {}
message ListRequest {
string Ref = 1;
}
message ListResponse {
repeated string keys = 1;
}
message InputMessage {
oneof Input {
InputInitMessage Init = 1;
DataMessage Data = 2;
}
}
message InputInitMessage {
string Ref = 1;
}
message DataMessage {
bool EOF = 1; // true if eof was reached
bytes Data = 2; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message InputResponse {}
message Message {
oneof Input {
InitMessage Init = 1;
// FdMessage used from client to server for input (stdin) and
// from server to client for output (stdout, stderr)
FdMessage File = 2;
// ResizeMessage used from client to server for terminal resize events
ResizeMessage Resize = 3;
// SignalMessage is used from client to server to send signal events
SignalMessage Signal = 4;
}
}
message InitMessage {
string Ref = 1;
// If ProcessID already exists in the server, it tries to connect to it
// instead of invoking a new one. In this case, InvokeConfig will be ignored.
string ProcessID = 2;
InvokeConfig InvokeConfig = 3;
}
message InvokeConfig {
repeated string Entrypoint = 1;
repeated string Cmd = 2;
repeated string Env = 3;
string User = 4;
bool NoUser = 5; // Do not set user but use the image's default
string Cwd = 6;
bool NoCwd = 7; // Do not set cwd but use the image's default
bool Tty = 8;
bool Rollback = 9; // Kill all process in the container and recreate it.
bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}
message FdMessage {
uint32 Fd = 1; // what fd the data was from
bool EOF = 2; // true if eof was reached
bytes Data = 3; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message ResizeMessage {
uint32 Rows = 1;
uint32 Cols = 2;
}
message SignalMessage {
// we only send the signal name (e.g. HUP, INT) because the integer
// values are platform dependent.
string Name = 1;
}
message StatusRequest {
string Ref = 1;
}
message StatusResponse {
repeated moby.buildkit.v1.Vertex vertexes = 1;
repeated moby.buildkit.v1.VertexStatus statuses = 2;
repeated moby.buildkit.v1.VertexLog logs = 3;
repeated moby.buildkit.v1.VertexWarning warnings = 4;
}
message InfoRequest {}
message InfoResponse {
BuildxVersion buildxVersion = 1;
}
message BuildxVersion {
string package = 1;
string version = 2;
string revision = 3;
}


@@ -1,100 +0,0 @@
package pb
import (
"io"
"os"
"strconv"
"github.com/containerd/console"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, error) {
var outs []client.ExportEntry
if len(entries) == 0 {
return nil, nil
}
for _, entry := range entries {
if entry.Type == "" {
return nil, errors.Errorf("type is required for output")
}
out := client.ExportEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
for k, v := range entry.Attrs {
out.Attrs[k] = v
}
supportFile := false
supportDir := false
switch out.Type {
case client.ExporterLocal:
supportDir = true
case client.ExporterTar:
supportFile = true
case client.ExporterOCI, client.ExporterDocker:
tar, err := strconv.ParseBool(out.Attrs["tar"])
if err != nil {
tar = true
}
supportFile = tar
supportDir = !tar
case "registry":
out.Type = client.ExporterImage
}
if supportDir {
if entry.Destination == "" {
return nil, errors.Errorf("dest is required for %s exporter", out.Type)
}
if entry.Destination == "-" {
return nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
}
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
}
if err == nil && !fi.IsDir() {
return nil, errors.Errorf("destination directory %s is a file", entry.Destination)
}
out.OutputDir = entry.Destination
}
if supportFile {
if entry.Destination == "" && out.Type != client.ExporterDocker {
entry.Destination = "-"
}
if entry.Destination == "-" {
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = wrapWriteCloser(os.Stdout)
} else if entry.Destination != "" {
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
}
if err == nil && fi.IsDir() {
return nil, errors.Errorf("destination file %s is a directory", entry.Destination)
}
f, err := os.Create(entry.Destination)
if err != nil {
return nil, errors.Errorf("failed to open %s", err)
}
out.Output = wrapWriteCloser(f)
}
}
outs = append(outs, out)
}
return outs, nil
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}
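// Usage sketch (assumed, not in the original file): a "registry" entry is
// rewritten to the image exporter, while file- and directory-based exporters
// resolve their Destination (the paths below are hypothetical).
//
//	outs, err := CreateExports([]*ExportEntry{
//		{Type: "registry", Attrs: map[string]string{"push": "true"}}, // -> client.ExporterImage
//		{Type: "oci", Destination: "out.tar"},                        // file exporter: opens out.tar
//		{Type: "local", Destination: "out/"},                         // dir exporter: sets OutputDir
//	})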


@@ -1,3 +0,0 @@
package pb
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto


@@ -1,175 +0,0 @@
package pb
import (
"path/filepath"
"strings"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/moby/buildkit/util/gitutil"
)
// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them to absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
localContext := false
if options.ContextPath != "" && options.ContextPath != "-" {
if !isRemoteURL(options.ContextPath) {
localContext = true
options.ContextPath, err = filepath.Abs(options.ContextPath)
if err != nil {
return nil, err
}
}
}
if options.DockerfileName != "" && options.DockerfileName != "-" {
if localContext && !urlutil.IsURL(options.DockerfileName) {
options.DockerfileName, err = filepath.Abs(options.DockerfileName)
if err != nil {
return nil, err
}
}
}
var contexts map[string]string
for k, v := range options.NamedContexts {
if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
// url prefix, this is a remote path
} else if strings.HasPrefix(v, "oci-layout://") {
// oci layout prefix, this is a local path
p := strings.TrimPrefix(v, "oci-layout://")
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
v = "oci-layout://" + p
} else {
// no prefix, assume local path
v, err = filepath.Abs(v)
if err != nil {
return nil, err
}
}
if contexts == nil {
contexts = make(map[string]string)
}
contexts[k] = v
}
options.NamedContexts = contexts
var cacheFrom []*CacheOptionsEntry
for _, co := range options.CacheFrom {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "src":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheFrom = append(cacheFrom, co)
default:
cacheFrom = append(cacheFrom, co)
}
}
options.CacheFrom = cacheFrom
var cacheTo []*CacheOptionsEntry
for _, co := range options.CacheTo {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "dest":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheTo = append(cacheTo, co)
default:
cacheTo = append(cacheTo, co)
}
}
options.CacheTo = cacheTo
var exports []*ExportEntry
for _, e := range options.Exports {
if e.Destination != "" && e.Destination != "-" {
e.Destination, err = filepath.Abs(e.Destination)
if err != nil {
return nil, err
}
}
exports = append(exports, e)
}
options.Exports = exports
var secrets []*Secret
for _, s := range options.Secrets {
if s.FilePath != "" {
s.FilePath, err = filepath.Abs(s.FilePath)
if err != nil {
return nil, err
}
}
secrets = append(secrets, s)
}
options.Secrets = secrets
var ssh []*SSH
for _, s := range options.SSH {
var ps []string
for _, pt := range s.Paths {
p := pt
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
ps = append(ps, p)
}
s.Paths = ps
ssh = append(ssh, s)
}
options.SSH = ssh
return options, nil
}
func isRemoteURL(c string) bool {
if urlutil.IsURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
return true
}
return false
}


@@ -1,247 +0,0 @@
package pb
import (
"os"
"path/filepath"
"reflect"
"testing"
"github.com/stretchr/testify/require"
)
func TestResolvePaths(t *testing.T) {
tmpwd, err := os.MkdirTemp("", "testresolvepaths")
require.NoError(t, err)
defer os.Remove(tmpwd)
require.NoError(t, os.Chdir(tmpwd))
tests := []struct {
name string
options BuildOptions
want BuildOptions
}{
{
name: "contextpath",
options: BuildOptions{ContextPath: "test"},
want: BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
},
{
name: "contextpath-cwd",
options: BuildOptions{ContextPath: "."},
want: BuildOptions{ContextPath: tmpwd},
},
{
name: "contextpath-dash",
options: BuildOptions{ContextPath: "-"},
want: BuildOptions{ContextPath: "-"},
},
{
name: "contextpath-ssh",
options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "dockerfilename",
options: BuildOptions{DockerfileName: "test", ContextPath: "."},
want: BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
},
{
name: "dockerfilename-dash",
options: BuildOptions{DockerfileName: "-", ContextPath: "."},
want: BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
},
{
name: "dockerfilename-remote",
options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "contexts",
options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
},
{
name: "cache-from",
options: BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "cache-to",
options: BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "exports",
options: BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: "test1",
},
{
Type: "tar",
Destination: "test3",
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: "test4",
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
want: BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: filepath.Join(tmpwd, "test1"),
},
{
Type: "tar",
Destination: filepath.Join(tmpwd, "test3"),
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: filepath.Join(tmpwd, "test4"),
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
},
{
name: "secrets",
options: BuildOptions{
Secrets: []*Secret{
{
FilePath: "test1",
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: "test3",
},
},
},
want: BuildOptions{
Secrets: []*Secret{
{
FilePath: filepath.Join(tmpwd, "test1"),
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: filepath.Join(tmpwd, "test3"),
},
},
},
},
{
name: "ssh",
options: BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{"test1", "test2"},
},
{
ID: "a",
Paths: []string{"test3"},
},
},
},
want: BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
},
{
ID: "a",
Paths: []string{filepath.Join(tmpwd, "test3")},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ResolveOptionPaths(&tt.options)
require.NoError(t, err)
if !reflect.DeepEqual(tt.want, *got) {
t.Fatalf("expected %#v, got %#v", tt.want, *got)
}
})
}
}


@@ -1,126 +0,0 @@
package pb
import (
"github.com/docker/buildx/util/progress"
control "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/opencontainers/go-digest"
)
type writer struct {
ch chan<- *StatusResponse
}
func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
return &writer{ch: ch}
}
func (w *writer) Write(status *client.SolveStatus) {
w.ch <- ToControlStatus(status)
}
func (w *writer) WriteBuildRef(target string, ref string) {}
func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
return true
}
func (w *writer) ClearLogSource(interface{}) {}
func ToControlStatus(s *client.SolveStatus) *StatusResponse {
resp := StatusResponse{}
for _, v := range s.Vertexes {
resp.Vertexes = append(resp.Vertexes, &control.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range s.Statuses {
resp.Statuses = append(resp.Statuses, &control.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
})
}
for _, v := range s.Logs {
resp.Logs = append(resp.Logs, &control.VertexLog{
Vertex: v.Vertex,
Stream: int64(v.Stream),
Msg: v.Data,
Timestamp: v.Timestamp,
})
}
for _, v := range s.Warnings {
resp.Warnings = append(resp.Warnings, &control.VertexWarning{
Vertex: v.Vertex,
Level: int64(v.Level),
Short: v.Short,
Detail: v.Detail,
Url: v.URL,
Info: v.SourceInfo,
Ranges: v.Range,
})
}
return &resp
}
func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
s := client.SolveStatus{}
for _, v := range resp.Vertexes {
s.Vertexes = append(s.Vertexes, &client.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range resp.Statuses {
s.Statuses = append(s.Statuses, &client.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
})
}
for _, v := range resp.Logs {
s.Logs = append(s.Logs, &client.VertexLog{
Vertex: v.Vertex,
Stream: int(v.Stream),
Data: v.Msg,
Timestamp: v.Timestamp,
})
}
for _, v := range resp.Warnings {
s.Warnings = append(s.Warnings, &client.VertexWarning{
Vertex: v.Vertex,
Level: int(v.Level),
Short: v.Short,
Detail: v.Detail,
URL: v.Url,
SourceInfo: v.Info,
Range: v.Ranges,
})
}
return &s
}


@@ -1,22 +0,0 @@
package pb
import (
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/secrets/secretsprovider"
)
func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
fs := make([]secretsprovider.Source, 0, len(secrets))
for _, secret := range secrets {
fs = append(fs, secretsprovider.Source{
ID: secret.ID,
FilePath: secret.FilePath,
Env: secret.Env,
})
}
store, err := secretsprovider.NewStore(fs)
if err != nil {
return nil, err
}
return secretsprovider.NewSecretProvider(store), nil
}


@@ -1,18 +0,0 @@
package pb
import (
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/sshforward/sshprovider"
)
func CreateSSH(ssh []*SSH) (session.Attachable, error) {
configs := make([]sshprovider.AgentConfig, 0, len(ssh))
for _, ssh := range ssh {
cfg := sshprovider.AgentConfig{
ID: ssh.ID,
Paths: append([]string{}, ssh.Paths...),
}
configs = append(configs, cfg)
}
return sshprovider.NewSSHAgentProvider(configs)
}


@@ -1,149 +0,0 @@
package processes
import (
"context"
"sync"
"sync/atomic"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/ioset"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// Process provides methods to control a process.
type Process struct {
inEnd *ioset.Forwarder
invokeConfig *pb.InvokeConfig
errCh chan error
processCancel func()
serveIOCancel func()
}
// ForwardIO forwards the process's IO to the specified reader/writers.
// Optionally specify ioCancelCallback which will be called when
// the process closes the specified IO. This is useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
p.inEnd.SetIn(in)
if f := p.serveIOCancel; f != nil {
f()
}
p.serveIOCancel = ioCancelCallback
}
// Done returns a channel where error or nil will be sent
// when the process exits.
// TODO: change this to Wait()
func (p *Process) Done() <-chan error {
return p.errCh
}
// Manager manages a set of processes.
type Manager struct {
container atomic.Value
processes sync.Map
}
// NewManager creates and returns a Manager.
func NewManager() *Manager {
return &Manager{}
}
// Get returns the specified process.
func (m *Manager) Get(id string) (*Process, bool) {
v, ok := m.processes.Load(id)
if !ok {
return nil, false
}
return v.(*Process), true
}
// CancelRunningProcesses cancels execution of all running processes.
func (m *Manager) CancelRunningProcesses() {
var funcs []func()
m.processes.Range(func(key, value any) bool {
funcs = append(funcs, value.(*Process).processCancel)
m.processes.Delete(key)
return true
})
for _, f := range funcs {
f()
}
}
// ListProcesses lists all running processes.
func (m *Manager) ListProcesses() (res []*pb.ProcessInfo) {
m.processes.Range(func(key, value any) bool {
res = append(res, &pb.ProcessInfo{
ProcessID: key.(string),
InvokeConfig: value.(*Process).invokeConfig,
})
return true
})
return res
}
// DeleteProcess deletes the specified process.
func (m *Manager) DeleteProcess(id string) error {
p, ok := m.processes.LoadAndDelete(id)
if !ok {
return errors.Errorf("unknown process %q", id)
}
p.(*Process).processCancel()
return nil
}
// StartProcess starts a process in the container.
// When a container isn't available (i.e. first time invoking or the container has exited) or cfg.Rollback is set,
// this method will start a new container and run the process in it. Otherwise, this method starts a new process in the
// existing container.
func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *pb.InvokeConfig) (*Process, error) {
// Get the target result to invoke a container from
var ctr *build.Container
if a := m.container.Load(); a != nil {
ctr = a.(*build.Container)
}
if cfg.Rollback || ctr == nil || ctr.IsUnavailable() {
go m.CancelRunningProcesses()
// (Re)create a new container if this is rollback or first time to invoke a process.
if ctr != nil {
go ctr.Cancel() // Finish the existing container
}
var err error
ctr, err = build.NewContainer(context.TODO(), resultCtx, cfg)
if err != nil {
return nil, errors.Errorf("failed to create container %v", err)
}
m.container.Store(ctr)
}
// [client(ForwardIO)] <-forwarder(switchable)-> [out] <-pipe-> [in] <- [process]
in, out := ioset.Pipe()
f := ioset.NewForwarder()
f.PropagateStdinClose = false
f.SetOut(&out)
// Register process
ctx, cancel := context.WithCancel(context.TODO())
var cancelOnce sync.Once
processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
p := &Process{
inEnd: f,
invokeConfig: cfg,
processCancel: processCancelFunc,
errCh: make(chan error),
}
m.processes.Store(pid, p)
go func() {
var err error
if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
logrus.Errorf("failed to exec process: %v", err)
}
logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
m.processes.Delete(pid)
processCancelFunc()
p.errCh <- err
}()
return p, nil
}
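// Illustrative sketch, not part of the original file: a typical caller starts
// a process from a build result, forwards IO, then waits on Done. The pid and
// InvokeConfig values are placeholders.
func exampleInvoke(ctx context.Context, m *Manager, res *build.ResultHandle, in *ioset.In) error {
	proc, err := m.StartProcess("example-pid", res, &pb.InvokeConfig{Tty: true})
	if err != nil {
		return err
	}
	ioDone := make(chan struct{})
	proc.ForwardIO(in, func() { close(ioDone) })
	select {
	case err := <-proc.Done(): // the process exited
		return err
	case <-ioDone: // the process side closed our IO
		return errors.New("io cancelled")
	case <-ctx.Done():
		return ctx.Err()
	}
}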


@@ -1,240 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"time"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials/insecure"
)
func NewClient(ctx context.Context, addr string) (*Client, error) {
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
}
conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
if err != nil {
return nil, err
}
return &Client{conn: conn}, nil
}
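// Usage sketch (assumed, not in the original file): dial the controller's
// unix socket and verify the server before use; the path is a placeholder.
//
//	c, err := NewClient(ctx, "/path/to/buildx.sock")
//	if err != nil {
//		return err
//	}
//	defer c.Close()
//	pkg, ver, rev, err := c.Version(ctx)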
type Client struct {
conn *grpc.ClientConn
closeOnce sync.Once
}
func (c *Client) Close() (err error) {
c.closeOnce.Do(func() {
err = c.conn.Close()
})
return
}
func (c *Client) Version(ctx context.Context) (string, string, string, error) {
res, err := c.client().Info(ctx, &pb.InfoRequest{})
if err != nil {
return "", "", "", err
}
v := res.BuildxVersion
return v.Package, v.Version, v.Revision, nil
}
func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
res, err := c.client().List(ctx, &pb.ListRequest{})
if err != nil {
return nil, err
}
return res.Keys, nil
}
func (c *Client) Disconnect(ctx context.Context, key string) error {
if key == "" {
return nil
}
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
return err
}
func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
if err != nil {
return nil, err
}
return res.Infos, nil
}
func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
return err
}
func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if ref == "" || pid == "" {
return errors.New("build reference must be specified")
}
stream, err := c.client().Invoke(ctx)
if err != nil {
return err
}
return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
stdin: in,
stdout: stdout,
stderr: stderr,
// TODO: Signal, Resize
})
}
func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
}
func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
ref := identity.NewID()
statusChan := make(chan *client.SolveStatus)
eg, egCtx := errgroup.WithContext(ctx)
var resp *client.SolveResponse
eg.Go(func() error {
defer close(statusChan)
var err error
resp, err = c.build(egCtx, ref, options, in, statusChan)
return err
})
eg.Go(func() error {
for s := range statusChan {
st := s
progress.Write(st)
}
return nil
})
return ref, resp, eg.Wait()
}
func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
eg, egCtx := errgroup.WithContext(ctx)
done := make(chan struct{})
var resp *client.SolveResponse
eg.Go(func() error {
defer close(done)
pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
Ref: ref,
Options: &options,
})
if err != nil {
return err
}
resp = &client.SolveResponse{
ExporterResponse: pbResp.ExporterResponse,
}
return nil
})
eg.Go(func() error {
stream, err := c.client().Status(egCtx, &pb.StatusRequest{
Ref: ref,
})
if err != nil {
return err
}
for {
resp, err := stream.Recv()
if err != nil {
if err == io.EOF {
return nil
}
return errors.Wrap(err, "failed to receive status")
}
statusChan <- pb.FromControlStatus(resp)
}
})
if in != nil {
eg.Go(func() error {
stream, err := c.client().Input(egCtx)
if err != nil {
return err
}
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Init{
Init: &pb.InputInitMessage{
Ref: ref,
},
},
}); err != nil {
return errors.Wrap(err, "failed to init input")
}
inReader, inWriter := io.Pipe()
eg2, _ := errgroup.WithContext(ctx)
eg2.Go(func() error {
<-done
return inWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(inWriter, in)
inWriter.Close()
}()
eg2.Go(func() error {
for {
buf := make([]byte, 32*1024)
n, err := inReader.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
EOF: true,
},
},
})
})
return eg2.Wait()
})
}
return resp, eg.Wait()
}
func (c *Client) client() pb.ControllerClient {
return pb.NewControllerClient(c.conn)
}


@@ -1,333 +0,0 @@
//go:build linux
package remote
import (
"context"
"fmt"
"io"
"net"
"os"
"os/exec"
"os/signal"
"path/filepath"
"strconv"
"syscall"
"time"
"github.com/containerd/containerd/log"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pelletier/go-toml"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"google.golang.org/grpc"
)
const (
serveCommandName = "_INTERNAL_SERVE"
)
var (
defaultLogFilename = fmt.Sprintf("buildx.%s.log", version.Revision)
defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
defaultPIDFilename = fmt.Sprintf("buildx.%s.pid", version.Revision)
)
type serverConfig struct {
// Specify buildx server root
Root string `toml:"root"`
// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
LogLevel string `toml:"log_level"`
// Specify file to output buildx server log
LogFile string `toml:"log_file"`
}
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
rootDir := opts.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
serverRoot := filepath.Join(rootDir, "shared")
// connect to buildx server if it is already running
ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
if err != nil {
if !errors.Is(err, context.DeadlineExceeded) {
return nil, errors.Wrap(err, "cannot connect to the buildx server")
}
} else {
return &buildxController{c, serverRoot}, nil
}
// start buildx server via subcommand
err = logger.Wrap("no buildx server found; launching...", func() error {
launchFlags := []string{}
if opts.ServerConfig != "" {
launchFlags = append(launchFlags, "--config", opts.ServerConfig)
}
logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
if err != nil {
return err
}
wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
if err != nil {
return err
}
go wait()
// wait for buildx server to be ready
ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
if err != nil {
return errors.Wrap(err, "cannot connect to the buildx server")
}
return nil
})
if err != nil {
return nil, err
}
return &buildxController{c, serverRoot}, nil
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
cmd.AddCommand(
serveCmd(dockerCli),
)
}
func serveCmd(dockerCli command.Cli) *cobra.Command {
var serverConfigPath string
cmd := &cobra.Command{
Use: fmt.Sprintf("%s [OPTIONS]", serveCommandName),
Hidden: true,
RunE: func(cmd *cobra.Command, args []string) error {
// Parse config
config, err := getConfig(dockerCli, serverConfigPath)
if err != nil {
return err
}
if config.LogLevel == "" {
logrus.SetLevel(logrus.InfoLevel)
} else {
lvl, err := logrus.ParseLevel(config.LogLevel)
if err != nil {
return errors.Wrap(err, "failed to prepare logger")
}
logrus.SetLevel(lvl)
}
logrus.SetFormatter(&logrus.JSONFormatter{
TimestampFormat: log.RFC3339NanoFixed,
})
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return err
}
pidF := filepath.Join(root, defaultPIDFilename)
if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
return err
}
defer func() {
if err := os.Remove(pidF); err != nil {
logrus.Errorf("failed to clean up info file %q: %v", pidF, err)
}
}()
// prepare server
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
})
defer b.Close()
// serve server
addr := filepath.Join(root, defaultSocketFilename)
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
return err
}
defer func() {
if err := os.Remove(addr); err != nil {
logrus.Errorf("failed to clean up socket %q: %v", addr, err)
}
}()
logrus.Infof("starting server at %q", addr)
l, err := net.Listen("unix", addr)
if err != nil {
return err
}
rpc := grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
)
controllerapi.RegisterControllerServer(rpc, b)
doneCh := make(chan struct{})
errCh := make(chan error, 1)
go func() {
defer close(doneCh)
if err := rpc.Serve(l); err != nil {
errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
}
}()
var s os.Signal
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT)
signal.Notify(sigCh, syscall.SIGTERM)
select {
case err := <-errCh:
logrus.Errorf("got error %s, exiting", err)
return err
case s = <-sigCh:
logrus.Infof("got signal %s, exiting", s)
return nil
case <-doneCh:
logrus.Infof("rpc server done, exiting")
return nil
}
},
}
flags := cmd.Flags()
flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
return cmd
}
func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
config, err := getConfig(dockerCli, configPath)
if err != nil {
return "", err
}
if config.LogFile == "" {
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return "", err
}
return filepath.Join(root, defaultLogFilename), nil
}
return config.LogFile, nil
}
func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
var defaultConfigPath bool
if configPath == "" {
defaultRoot := rootDataDir(dockerCli)
configPath = filepath.Join(defaultRoot, "config.toml")
defaultConfigPath = true
}
var config serverConfig
tree, err := toml.LoadFile(configPath)
if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
return nil, errors.Wrapf(err, "failed to read config %q", configPath)
} else if err == nil {
if err := tree.Unmarshal(&config); err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
}
}
return &config, nil
}
func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
rootDir := config.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
if rootDir == "" {
return "", errors.New("buildx root dir must be determined")
}
if err := os.MkdirAll(rootDir, 0700); err != nil {
return "", err
}
serverRoot := filepath.Join(rootDir, "shared")
if err := os.MkdirAll(serverRoot, 0700); err != nil {
return "", err
}
return serverRoot, nil
}
func rootDataDir(dockerCli command.Cli) string {
return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
}
func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
c, err := NewClient(ctx, addr)
if err != nil {
return nil, err
}
p, v, r, err := c.Version(ctx)
if err != nil {
return nil, err
}
logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
if !(p == version.Package && v == version.Version && r == version.Revision) {
return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
}
return c, nil
}
type buildxController struct {
*Client
serverRoot string
}
func (c *buildxController) Kill(ctx context.Context) error {
pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
if err != nil {
return err
}
pid, err := strconv.ParseInt(string(pidB), 10, 64)
if err != nil {
return err
}
if pid <= 0 {
return errors.New("no PID is recorded for buildx server")
}
p, err := os.FindProcess(int(pid))
if err != nil {
return err
}
if err := p.Signal(syscall.SIGINT); err != nil {
return err
}
// TODO: Should we send SIGKILL if process doesn't finish?
return nil
}
func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
// set absolute path of binary, since we set the working directory to the root
pathname, err := os.Executable()
if err != nil {
return nil, err
}
bCmd := exec.CommandContext(ctx, pathname, args...)
if logFile != "" {
f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return nil, err
}
defer f.Close()
bCmd.Stdout = f
bCmd.Stderr = f
}
bCmd.Stdin = nil
bCmd.Dir = "/"
bCmd.SysProcAttr = &syscall.SysProcAttr{
Setsid: true,
}
if err := bCmd.Start(); err != nil {
return nil, err
}
return bCmd.Wait, nil
}


@@ -1,19 +0,0 @@
//go:build !linux
package remote
import (
"context"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
return nil, errors.New("remote buildx unsupported")
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}


@@ -1,430 +0,0 @@
package remote
import (
"context"
"io"
"syscall"
"time"
"github.com/docker/buildx/controller/pb"
"github.com/moby/sys/signal"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
type msgStream interface {
Send(*pb.Message) error
Recv() (*pb.Message, error)
}
type ioServerConfig struct {
stdin io.WriteCloser
stdout, stderr io.ReadCloser
// signalFn is a callback invoked when a signal message is received from the client.
signalFn func(context.Context, syscall.Signal) error
// resizeFn is a callback invoked when a resize event is received from the client.
resizeFn func(context.Context, winSize) error
}
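// serveIO multiplexes the container's stdio over a single message stream:
// stdout and stderr are forwarded to the client as file-descriptor messages,
// and incoming messages are dispatched to the stdin pipe and the resize and
// signal callbacks.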
func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
stream := &debugStream{srv, "server=" + time.Now().String()}
eg, ctx := errgroup.WithContext(attachCtx)
done := make(chan struct{})
msg, err := receive(ctx, stream)
if err != nil {
return err
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
}
ref := init.Ref
if ref == "" {
return errors.New("no ref is provided")
}
if err := initFn(init); err != nil {
return errors.Wrap(err, "failed to initialize IO server")
}
if stdout != nil {
stdoutReader, stdoutWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdoutWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdoutWriter, stdout)
stdoutWriter.Close()
}()
eg.Go(func() error {
defer stdoutReader.Close()
return copyToStream(1, stream, stdoutReader)
})
}
if stderr != nil {
stderrReader, stderrWriter := io.Pipe()
eg.Go(func() error {
<-done
return stderrWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stderrWriter, stderr)
stderrWriter.Close()
}()
eg.Go(func() error {
defer stderrReader.Close()
return copyToStream(2, stream, stderrReader)
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if file.Fd != 0 {
return errors.Errorf("unexpected fd: %v", file.Fd)
}
if stdin == nil {
continue // no stdin destination is specified so ignore the data
}
if len(file.Data) > 0 {
_, err := stdin.Write(file.Data)
if err != nil {
return err
}
}
if file.EOF {
stdin.Close()
}
} else if resize := msg.GetResize(); resize != nil {
if ioConfig.resizeFn != nil {
ioConfig.resizeFn(ctx, winSize{
cols: resize.Cols,
rows: resize.Rows,
})
}
} else if sig := msg.GetSignal(); sig != nil {
if ioConfig.signalFn != nil {
syscallSignal, ok := signal.SignalMap[sig.Name]
if !ok {
continue
}
ioConfig.signalFn(ctx, syscallSignal)
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
type ioAttachConfig struct {
stdin io.ReadCloser
stdout, stderr io.WriteCloser
signal <-chan syscall.Signal
resize <-chan winSize
}
type winSize struct {
rows uint32
cols uint32
}
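// attachIO is the client-side counterpart of serveIO: it forwards local
// stdin, signal, and resize events over the stream, and writes received
// file-descriptor messages to the configured stdout and stderr until EOF.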
func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
eg, ctx := errgroup.WithContext(ctx)
done := make(chan struct{})
if err := stream.Send(&pb.Message{
Input: &pb.Message_Init{
Init: initMessage,
},
}); err != nil {
return errors.Wrap(err, "failed to init")
}
if cfg.stdin != nil {
stdinReader, stdinWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdinWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdinWriter, cfg.stdin)
stdinWriter.Close()
}()
eg.Go(func() error {
defer stdinReader.Close()
return copyToStream(0, stream, stdinReader)
})
}
if cfg.signal != nil {
eg.Go(func() error {
for {
var sig syscall.Signal
select {
case sig = <-cfg.signal:
case <-done:
return nil
case <-ctx.Done():
return nil
}
name := sigToName[sig]
if name == "" {
continue
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Signal{
Signal: &pb.SignalMessage{
Name: name,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send signal")
}
}
})
}
if cfg.resize != nil {
eg.Go(func() error {
for {
var win winSize
select {
case win = <-cfg.resize:
case <-done:
return nil
case <-ctx.Done():
return nil
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Resize{
Resize: &pb.ResizeMessage{
Rows: win.rows,
Cols: win.cols,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send resize")
}
}
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
eofs := make(map[uint32]struct{})
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if _, ok := eofs[file.Fd]; ok {
continue
}
var out io.WriteCloser
switch file.Fd {
case 1:
out = cfg.stdout
case 2:
out = cfg.stderr
default:
return errors.Errorf("unsupported fd %d", file.Fd)
}
if out == nil {
logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
continue
}
if len(file.Data) > 0 {
if _, err := out.Write(file.Data); err != nil {
return err
}
}
if file.EOF {
eofs[file.Fd] = struct{}{}
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
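// receive runs stream.Recv in a goroutine so that a blocked read can be
// abandoned when ctx is canceled. Note that io.EOF from Recv is swallowed
// here; in that case this function returns only once ctx is done.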
func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
msgCh := make(chan *pb.Message)
errCh := make(chan error)
go func() {
msg, err := stream.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
return
}
errCh <- err
return
}
msgCh <- msg
}()
select {
case msg := <-msgCh:
return msg, nil
case err := <-errCh:
return nil, err
case <-ctx.Done():
return nil, ctx.Err()
}
}
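// copyToStream reads from r in 32KiB chunks, forwarding each chunk as a
// file-descriptor message for fd, and sends a final EOF message once r is
// drained.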
func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
for {
buf := make([]byte, 32*1024)
n, err := r.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if err := snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
EOF: true,
},
},
})
}
var sigToName = map[syscall.Signal]string{}
func init() {
for name, value := range signal.SignalMap {
sigToName[value] = name
}
}
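// debugStream wraps a msgStream and logs every message sent or received at
// debug level, tagged with a prefix identifying the endpoint.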
type debugStream struct {
msgStream
prefix string
}
func (s *debugStream) Send(msg *pb.Message) error {
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
}
return s.msgStream.Send(msg)
}
func (s *debugStream) Recv() (*pb.Message, error) {
msg, err := s.msgStream.Recv()
if err != nil {
return nil, err
}
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
}
return msg, nil
}


@@ -1,441 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"sync/atomic"
"time"
"github.com/docker/buildx/build"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)
func NewServer(buildFunc BuildFunc) *Server {
return &Server{
buildFunc: buildFunc,
}
}
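// Server serves the controller gRPC API, tracking one build session per ref.
// Access to the session map must be guarded by sessionMu.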
type Server struct {
buildFunc BuildFunc
session map[string]*session
sessionMu sync.Mutex
}
type session struct {
buildOnGoing atomic.Bool
statusChan chan *pb.StatusResponse
cancelBuild func()
buildOptions *pb.BuildOptions
inputPipe *io.PipeWriter
result *build.ResultHandle
processes *processes.Manager
}
func (s *session) cancelRunningProcesses() {
s.processes.CancelRunningProcesses()
}
func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
}
res = new(pb.ListProcessesResponse)
for _, p := range s.processes.ListProcesses() {
res.Infos = append(res.Infos, p)
}
return res, nil
}
func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
}
return res, s.processes.DeleteProcess(req.ProcessID)
}
func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
return &pb.InfoResponse{
BuildxVersion: &pb.BuildxVersion{
Package: version.Package,
Version: version.Version,
Revision: version.Revision,
},
}, nil
}
func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
keys := make(map[string]struct{})
m.sessionMu.Lock()
for k := range m.session {
keys[k] = struct{}{}
}
m.sessionMu.Unlock()
var keysL []string
for k := range keys {
keysL = append(keysL, k)
}
return &pb.ListResponse{
Keys: keysL,
}, nil
}
func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
key := req.Ref
if key == "" {
return nil, errors.New("disconnect: empty key")
}
m.sessionMu.Lock()
if s, ok := m.session[key]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
}
s.cancelRunningProcesses()
if s.result != nil {
s.result.Done()
}
}
delete(m.session, key)
m.sessionMu.Unlock()
return &pb.DisconnectResponse{}, nil
}
func (m *Server) Close() error {
m.sessionMu.Lock()
for k := range m.session {
if s, ok := m.session[k]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
}
s.cancelRunningProcesses()
}
}
m.sessionMu.Unlock()
return nil
}
func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("inspect: empty key")
}
var bo *pb.BuildOptions
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
bo = s.buildOptions
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("inspect: unknown key %v", ref)
}
m.sessionMu.Unlock()
return &pb.InspectResponse{Options: bo}, nil
}
func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("build: empty key")
}
// Prepare status channel and session
m.sessionMu.Lock()
if m.session == nil {
m.session = make(map[string]*session)
}
s, ok := m.session[ref]
if ok {
if !s.buildOnGoing.CompareAndSwap(false, true) {
m.sessionMu.Unlock()
return &pb.BuildResponse{}, errors.New("build ongoing")
}
s.cancelRunningProcesses()
s.result = nil
} else {
s = &session{}
s.buildOnGoing.Store(true)
}
s.processes = processes.NewManager()
statusChan := make(chan *pb.StatusResponse)
s.statusChan = statusChan
inR, inW := io.Pipe()
defer inR.Close()
s.inputPipe = inW
m.session[ref] = s
m.sessionMu.Unlock()
defer func() {
close(statusChan)
m.sessionMu.Lock()
s, ok := m.session[ref]
if ok {
s.statusChan = nil
s.buildOnGoing.Store(false)
}
m.sessionMu.Unlock()
}()
pw := pb.NewProgressWriter(statusChan)
// Build the specified request
ctx, cancel := context.WithCancel(ctx)
defer cancel()
resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
if res != nil {
s.result = res
s.cancelBuild = cancel
s.buildOptions = req.Options
m.session[ref] = s
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, ref)
}
}
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("build: unknown key %v", ref)
}
m.sessionMu.Unlock()
if buildErr != nil {
return nil, buildErr
}
if resp == nil {
resp = &client.SolveResponse{}
}
return &pb.BuildResponse{
ExporterResponse: resp.ExporterResponse,
}, nil
}
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
ref := req.Ref
if ref == "" {
return errors.New("status: empty key")
}
// Wait and get status channel prepared by Build()
var statusChan <-chan *pb.StatusResponse
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
statusChan = m.session[ref].statusChan
m.sessionMu.Unlock()
break
}
// forward status
for ss := range statusChan {
if ss == nil {
break
}
if err := stream.Send(ss); err != nil {
return err
}
}
return nil
}
func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
// Get the target ref from init message
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
}
ref := init.Ref
if ref == "" {
return errors.New("input: no ref is provided")
}
// Wait and get input stream pipe prepared by Build()
var inputPipeW *io.PipeWriter
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
inputPipeW = m.session[ref].inputPipe
m.sessionMu.Unlock()
break
}
// Forward input stream
eg, ctx := errgroup.WithContext(context.TODO())
done := make(chan struct{})
msgCh := make(chan *pb.InputMessage)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() (retErr error) {
defer close(done)
defer func() {
if retErr != nil {
inputPipeW.CloseWithError(retErr)
return
}
inputPipeW.Close()
}()
for {
var msg *pb.InputMessage
select {
case msg = <-msgCh:
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "canceled")
}
if msg == nil {
return nil
}
if data := msg.GetData(); data != nil {
if len(data.Data) > 0 {
_, err := inputPipeW.Write(data.Data)
if err != nil {
return err
}
}
if data.EOF {
return nil
}
}
}
})
return eg.Wait()
}
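// Invoke attaches an interactive IO session to the process identified by the
// init message's ref and process ID, starting a new process from the stored
// build result if one isn't already running.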
func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
containerIn, containerOut := ioset.Pipe()
defer func() { containerOut.Close(); containerIn.Close() }()
initDoneCh := make(chan *processes.Process)
initErrCh := make(chan error)
eg, egCtx := errgroup.WithContext(context.TODO())
srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
eg.Go(func() error {
defer srvIOCancel()
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
defer func() {
if retErr != nil {
initErrCh <- retErr
}
}()
ref := initMessage.Ref
cfg := initMessage.InvokeConfig
m.sessionMu.Lock()
s, ok := m.session[ref]
if !ok {
m.sessionMu.Unlock()
return errors.Errorf("invoke: unknown key %v", ref)
}
m.sessionMu.Unlock()
pid := initMessage.ProcessID
if pid == "" {
return errors.Errorf("invoke: specify process ID")
}
proc, ok := s.processes.Get(pid)
if !ok {
// Start a new process.
if cfg == nil {
return errors.New("no container config is provided")
}
var err error
proc, err = s.processes.StartProcess(pid, s.result, cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
proc.ForwardIO(&containerIn, srvIOCancel)
initDoneCh <- proc
return nil
}, &ioServerConfig{
stdin: containerOut.Stdin,
stdout: containerOut.Stdout,
stderr: containerOut.Stderr,
// TODO: signal, resize
})
})
eg.Go(func() (rErr error) {
defer srvIOCancel()
// Wait for init done
var proc *processes.Process
select {
case p := <-initDoneCh:
proc = p
case err := <-initErrCh:
return err
case <-egCtx.Done():
return egCtx.Err()
}
// Wait for IO done
select {
case <-srvIOCtx.Done():
return srvIOCtx.Err()
case err := <-proc.Done():
return err
case <-egCtx.Done():
return egCtx.Err()
}
})
return eg.Wait()
}


@@ -1,14 +1,17 @@
variable "GO_VERSION" {
default = "1.20"
default = "1.18"
}
variable "BIN_OUT" {
default = "./bin"
}
variable "RELEASE_OUT" {
default = "./release-out"
}
variable "DOCS_FORMATS" {
default = "md"
}
variable "DESTDIR" {
default = "./bin"
}
# Special target: https://github.com/docker/metadata-action#bake-definition
// Special target: https://github.com/docker/metadata-action#bake-definition
target "meta-helper" {
tags = ["docker/buildx-bin:local"]
}
@@ -17,6 +20,7 @@ target "_common" {
args = {
GO_VERSION = GO_VERSION
BUILDKIT_CONTEXT_KEEP_GIT_DIR = 1
BUILDX_EXPERIMENTAL = 1
}
}
@@ -45,7 +49,6 @@ target "validate-docs" {
inherits = ["_common"]
args = {
FORMATS = DOCS_FORMATS
BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
}
dockerfile = "./hack/dockerfiles/docs.Dockerfile"
target = "validate"
@@ -59,13 +62,6 @@ target "validate-authors" {
output = ["type=cacheonly"]
}
target "validate-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "validate"
output = ["type=cacheonly"]
}
target "update-vendor" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -77,7 +73,6 @@ target "update-docs" {
inherits = ["_common"]
args = {
FORMATS = DOCS_FORMATS
BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
}
dockerfile = "./hack/dockerfiles/docs.Dockerfile"
target = "update"
@@ -91,13 +86,6 @@ target "update-authors" {
output = ["."]
}
target "update-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "update"
output = ["."]
}
target "mod-outdated" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -109,13 +97,13 @@ target "mod-outdated" {
target "test" {
inherits = ["_common"]
target = "test-coverage"
output = ["${DESTDIR}/coverage"]
output = ["./coverage"]
}
target "binaries" {
inherits = ["_common"]
target = "binaries"
output = ["${DESTDIR}/build"]
output = [BIN_OUT]
platforms = ["local"]
}
@@ -139,7 +127,7 @@ target "binaries-cross" {
target "release" {
inherits = ["binaries-cross"]
target = "release"
output = ["${DESTDIR}/release"]
output = [RELEASE_OUT]
}
target "image" {
@@ -156,29 +144,3 @@ target "image-local" {
inherits = ["image"]
output = ["type=docker"]
}
variable "HTTP_PROXY" {
default = ""
}
variable "HTTPS_PROXY" {
default = ""
}
variable "NO_PROXY" {
default = ""
}
target "integration-test-base" {
inherits = ["_common"]
args = {
HTTP_PROXY = HTTP_PROXY
HTTPS_PROXY = HTTPS_PROXY
NO_PROXY = NO_PROXY
}
target = "integration-test-base"
output = ["type=cacheonly"]
}
target "integration-test" {
inherits = ["integration-test-base"]
target = "integration-test"
}


@@ -1,952 +0,0 @@
# Bake file reference
The Bake file is a file for defining workflows that you run using `docker buildx bake`.
## File format
You can define your Bake file in the following file formats:
- HashiCorp Configuration Language (HCL)
- JSON
- YAML (Compose file)
By default, Bake uses the following lookup order to find the configuration file:
1. `docker-bake.override.hcl`
2. `docker-bake.hcl`
3. `docker-bake.override.json`
4. `docker-bake.json`
5. `docker-compose.yaml`
6. `docker-compose.yml`
Bake searches for the file in the current working directory.
You can specify the file location explicitly using the `--file` flag:
```console
$ docker buildx bake --file=../docker/bake.hcl --print
```
## Syntax
The Bake file supports the following property types:
- `target`: build targets
- `group`: collections of build targets
- `variable`: build arguments and variables
- `function`: custom Bake functions
You define properties as hierarchical blocks in the Bake file.
You can assign one or more attributes to a property.
The following snippet shows a JSON representation of a simple Bake file.
This Bake file defines three properties: a variable, a group, and a target.
```json
{
"variable": {
"TAG": {
"default": "latest"
}
},
"group": {
"default": {
"targets": ["webapp"]
}
},
"target": {
"webapp": {
"dockerfile": "Dockerfile",
"tags": ["docker.io/username/webapp:${TAG}"]
}
}
}
```
In the JSON representation of a Bake file, properties are objects,
and attributes are values assigned to those objects.
The following example shows the same Bake file in the HCL format:
```hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
dockerfile = "Dockerfile"
tags = ["docker.io/username/webapp:${TAG}"]
}
```
HCL is the preferred format for Bake files.
Aside from syntactic differences,
HCL lets you use features that the JSON and YAML formats don't support.
The examples in this document use the HCL format.
## Target
A target reflects a single `docker build` invocation.
Consider the following build command:
```console
$ docker build \
--file=Dockerfile.webapp \
--tag=docker.io/username/webapp:latest \
https://github.com/username/webapp
```
You can express this command in a Bake file as follows:
```hcl
target "webapp" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
context = "https://github.com/username/webapp"
}
```
The following table shows the complete list of attributes that you can assign to a target:
| Name | Type | Description |
| ----------------------------------------------- | ------- | -------------------------------------------------------------------- |
| [`args`](#targetargs) | Map | Build arguments |
| [`attest`](#targetattest) | List | Build attestations |
| [`cache-from`](#targetcache-from) | List | External cache sources |
| [`cache-to`](#targetcache-to) | List | External cache destinations |
| [`context`](#targetcontext) | String | Set of files located in the specified path or URL |
| [`contexts`](#targetcontexts) | Map | Additional build contexts |
| [`dockerfile-inline`](#targetdockerfile-inline) | String | Inline Dockerfile string |
| [`dockerfile`](#targetdockerfile) | String | Dockerfile location |
| [`inherits`](#targetinherits) | List | Inherit attributes from other targets |
| [`labels`](#targetlabels) | Map | Metadata for images |
| [`matrix`](#targetmatrix) | Map | Define a set of variables that forks a target into multiple targets. |
| [`name`](#targetname) | String | Override the target name when using a matrix. |
| [`no-cache-filter`](#targetno-cache-filter) | List | Disable build cache for specific stages |
| [`no-cache`](#targetno-cache) | Boolean | Disable build cache completely |
| [`output`](#targetoutput) | List | Output destinations |
| [`platforms`](#targetplatforms) | List | Target platforms |
| [`pull`](#targetpull) | Boolean | Always pull images |
| [`secret`](#targetsecret) | List | Secrets to expose to the build |
| [`ssh`](#targetssh) | List | SSH agent sockets or keys to expose to the build |
| [`tags`](#targettags) | List | Image names and tags |
| [`target`](#targettarget) | String | Target build stage |
### `target.args`
Use the `args` attribute to define build arguments for the target.
This has the same effect as passing a [`--build-arg`][build-arg] flag to the build command.
```hcl
target "default" {
args = {
VERSION = "0.0.0+unknown"
}
}
```
You can set `args` attributes to use `null` values.
Doing so forces the `target` to use the `ARG` value specified in the Dockerfile.
```hcl
variable "GO_VERSION" {
default = "1.20.3"
}
target "webapp" {
dockerfile = "webapp.Dockerfile"
tags = ["docker.io/username/webapp"]
}
target "db" {
args = {
GO_VERSION = null
}
dockerfile = "db.Dockerfile"
tags = ["docker.io/username/db"]
}
```
### `target.attest`
The `attest` attribute lets you apply [build attestations][attestations] to the target.
This attribute accepts the long-form CSV version of attestation parameters.
```hcl
target "default" {
attest = [
"type=provenance,mode=min",
"type=sbom"
]
}
```
### `target.cache-from`
Build cache sources.
The builder imports cache from the locations you specify.
It uses the [Buildx cache storage backends][cache-backends],
and it works the same way as the [`--cache-from`][cache-from] flag.
This takes a list value, so you can specify multiple cache sources.
```hcl
target "app" {
cache-from = [
"type=s3,region=eu-west-1,bucket=mybucket",
"user/repo:cache",
]
}
```
### `target.cache-to`
Build cache export destinations.
The builder exports its build cache to the locations you specify.
It uses the [Buildx cache storage backends][cache-backends],
and it works the same way as the [`--cache-to` flag][cache-to].
This takes a list value, so you can specify multiple cache export targets.
```hcl
target "app" {
cache-to = [
"type=s3,region=eu-west-1,bucket=mybucket",
"type=inline"
]
}
```
### `target.context`
Specifies the location of the build context to use for this target.
Accepts a URL or a directory path.
This is the same as the [build context][context] positional argument
that you pass to the build command.
```hcl
target "app" {
context = "./src/www"
}
```
This resolves to the current working directory (`"."`) by default.
```console
$ docker buildx bake --print -f - <<< 'target "default" {}'
[+] Building 0.0s (0/0)
{
"target": {
"default": {
"context": ".",
"dockerfile": "Dockerfile"
}
}
}
```
### `target.contexts`
Additional build contexts.
This is the same as the [`--build-context` flag][build-context].
This attribute takes a map, where keys result in named contexts that you can
reference in your builds.
You can specify different types of contexts, such as local directories, Git URLs,
and even other Bake targets. Bake automatically determines the type of
a context based on the pattern of the context value.
| Context type | Example |
| --------------- | ----------------------------------------- |
| Container image | `docker-image://alpine@sha256:0123456789` |
| Git URL | `https://github.com/user/proj.git` |
| HTTP URL | `https://example.com/files` |
| Local directory | `../path/to/src` |
| Bake target | `target:base` |
#### Pin an image version
```hcl
# docker-bake.hcl
target "app" {
contexts = {
alpine = "docker-image://alpine:3.13"
}
}
```
```Dockerfile
# Dockerfile
FROM alpine
RUN echo "Hello world"
```
#### Use a local directory
```hcl
# docker-bake.hcl
target "app" {
contexts = {
src = "../path/to/source"
}
}
```
```Dockerfile
# Dockerfile
FROM scratch AS src
FROM golang
COPY --from=src . .
```
#### Use another target as base
> **Note**
>
> You should prefer to use regular multi-stage builds over this option. Use
> this feature when you have multiple Dockerfiles that can't be easily merged
> into one.
```hcl
# docker-bake.hcl
target "base" {
dockerfile = "baseapp.Dockerfile"
}
target "app" {
contexts = {
baseapp = "target:base"
}
}
```
```Dockerfile
# Dockerfile
FROM baseapp
RUN echo "Hello world"
```
### `target.dockerfile-inline`
Uses the string value as an inline Dockerfile for the build target.
```hcl
target "default" {
dockerfile-inline = "FROM alpine\nENTRYPOINT [\"echo\", \"hello\"]"
}
```
The `dockerfile-inline` takes precedence over the `dockerfile` attribute.
If you specify both, Bake uses the inline version.
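For illustration, a minimal sketch (the file path and commands are arbitrary)
where the `dockerfile` attribute is ignored in favor of the inline definition:

```hcl
target "default" {
  dockerfile        = "./src/www/Dockerfile"
  dockerfile-inline = "FROM alpine\nRUN echo \"built from the inline definition\""
}
```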
### `target.dockerfile`
Name of the Dockerfile to use for the build.
This is the same as the [`--file` flag][file] for the `docker build` command.
```hcl
target "default" {
dockerfile = "./src/www/Dockerfile"
}
```
Resolves to `"Dockerfile"` by default.
```console
$ docker buildx bake --print -f - <<< 'target "default" {}'
[+] Building 0.0s (0/0)
{
"target": {
"default": {
"context": ".",
"dockerfile": "Dockerfile"
}
}
}
```
### `target.inherits`
A target can inherit attributes from other targets.
Use `inherits` to reference from one target to another.
In the following example,
the `app-dev` target specifies an image name and tag.
The `app-release` target uses `inherits` to reuse the tag name.
```hcl
variable "TAG" {
default = "latest"
}
target "app-dev" {
tags = ["docker.io/username/myapp:${TAG}"]
}
target "app-release" {
inherits = ["app-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
```
The `inherits` attribute is a list,
meaning you can reuse attributes from multiple other targets.
In the following example, the `app-release` target reuses attributes
from both the `app-dev` and `_release` targets.
```hcl
target "app-dev" {
args = {
GO_VERSION = "1.20"
BUILDX_EXPERIMENTAL = 1
}
tags = ["docker.io/username/myapp"]
dockerfile = "app.Dockerfile"
labels = {
"org.opencontainers.image.source" = "https://github.com/username/myapp"
}
}
target "_release" {
args = {
BUILDKIT_CONTEXT_KEEP_GIT_DIR = 1
BUILDX_EXPERIMENTAL = 0
}
}
target "app-release" {
inherits = ["app-dev", "_release"]
platforms = ["linux/amd64", "linux/arm64"]
}
```
When inheriting attributes from multiple targets and there's a conflict,
the target that appears last in the `inherits` list takes precedence.
The previous example defines the `BUILDX_EXPERIMENTAL` argument twice for the `app-release` target.
It resolves to `0` because the `_release` target appears last in the inheritance chain:
```console
$ docker buildx bake --print app-release
[+] Building 0.0s (0/0)
{
"group": {
"default": {
"targets": [
"app-release"
]
}
},
"target": {
"app-release": {
"context": ".",
"dockerfile": "app.Dockerfile",
"args": {
"BUILDKIT_CONTEXT_KEEP_GIT_DIR": "1",
"BUILDX_EXPERIMENTAL": "0",
"GO_VERSION": "1.20"
},
"labels": {
"org.opencontainers.image.source": "https://github.com/username/myapp"
},
"tags": [
"docker.io/username/myapp"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
}
}
}
```
### `target.labels`
Assigns image labels to the build.
This is the same as the `--label` flag for `docker build`.
```hcl
target "default" {
labels = {
"org.opencontainers.image.source" = "https://github.com/username/myapp"
"com.docker.image.source.entrypoint" = "Dockerfile"
}
}
```
It's possible to use a `null` value for labels.
If you do, the builder uses the label value specified in the Dockerfile.
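A minimal sketch, assuming the Dockerfile itself declares this label with a
default value:

```hcl
target "default" {
  labels = {
    "com.example.vendor" = null
  }
}
```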
### `target.matrix`
A matrix strategy lets you fork a single target into multiple different
variants, based on parameters that you specify.
This works in a similar way to [Matrix strategies for GitHub Actions].
You can use this to reduce duplication in your bake definition.
The `matrix` attribute is a map of parameter names to lists of values.
Bake builds each possible combination of values as a separate target.
Each generated target **must** have a unique name.
To specify how target names should resolve, use the `name` attribute.
The following example resolves the `app` target to `app-foo` and `app-bar`.
It also uses the matrix value to define the [target build stage](#targettarget).
```hcl
target "app" {
name = "app-${tgt}"
matrix = {
tgt = ["foo", "bar"]
}
target = tgt
}
```
```console
$ docker buildx bake --print app
[+] Building 0.0s (0/0)
{
"group": {
"app": {
"targets": [
"app-foo",
"app-bar"
]
},
"default": {
"targets": [
"app"
]
}
},
"target": {
"app-bar": {
"context": ".",
"dockerfile": "Dockerfile",
"target": "bar"
},
"app-foo": {
"context": ".",
"dockerfile": "Dockerfile",
"target": "foo"
}
}
}
```
#### Multiple axes
You can specify multiple keys in your matrix to fork a target on multiple axes.
When using multiple matrix keys, Bake builds every possible variant.
The following example builds four targets:
- `app-foo-1-0`
- `app-foo-2-0`
- `app-bar-1-0`
- `app-bar-2-0`
```hcl
target "app" {
name = "app-${tgt}-${replace(version, ".", "-")}"
matrix = {
tgt = ["foo", "bar"]
version = ["1.0", "2.0"]
}
target = tgt
args = {
VERSION = version
}
}
```
#### Multiple values per matrix target
If you want to differentiate the matrix on more than just a single value,
you can use maps as matrix values. Bake creates a target for each map,
and you can access the nested values using dot notation.
The following example builds two targets:
- `app-foo-1-0`
- `app-bar-2-0`
```hcl
target "app" {
name = "app-${item.tgt}-${replace(item.version, ".", "-")}"
matrix = {
item = [
{
tgt = "foo"
version = "1.0"
},
{
tgt = "bar"
version = "2.0"
}
]
}
target = item.tgt
args = {
VERSION = item.version
}
}
```
### `target.name`
Specify name resolution for targets that use a matrix strategy.
The following example resolves the `app` target to `app-foo` and `app-bar`.
```hcl
target "app" {
name = "app-${tgt}"
matrix = {
tgt = ["foo", "bar"]
}
target = tgt
}
```
### `target.no-cache-filter`
Don't use build cache for the specified stages.
This is the same as the `--no-cache-filter` flag for `docker build`.
The following example avoids build cache for the `foo` build stage.
```hcl
target "default" {
no-cache-filter = ["foo"]
}
```
### `target.no-cache`
Don't use cache when building the image.
This is the same as the `--no-cache` flag for `docker build`.
```hcl
target "default" {
no-cache = true
}
```
### `target.output`
Configuration for exporting the build output.
This is the same as the [`--output` flag][output].
The following example configures the target to use a cache-only output:
```hcl
target "default" {
output = ["type=cacheonly"]
}
```
### `target.platforms`
Set target platforms for the build target.
This is the same as the [`--platform` flag][platform].
The following example creates a multi-platform build for three architectures.
```hcl
target "default" {
platforms = ["linux/amd64", "linux/arm64", "linux/arm/v7"]
}
```
### `target.pull`
Configures whether the builder should attempt to pull images when building the target.
This is the same as the `--pull` flag for `docker build`.
The following example forces the builder to always pull all images referenced in the build target.
```hcl
target "default" {
pull = "always"
}
```
### `target.secret`
Defines secrets to expose to the build target.
This is the same as the [`--secret` flag][secret].
```hcl
variable "HOME" {
default = null
}
target "default" {
secret = [
"type=env,id=KUBECONFIG",
"type=file,id=aws,src=${HOME}/.aws/credentials"
]
}
```
This lets you [mount the secret][run_mount_secret] in your Dockerfile.
```dockerfile
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
aws cloudfront create-invalidation ...
RUN --mount=type=secret,id=KUBECONFIG \
KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
```
### `target.ssh`
Defines SSH agent sockets or keys to expose to the build.
This is the same as the [`--ssh` flag][ssh].
This can be useful if you need to access private repositories during a build.
```hcl
target "default" {
ssh = ["default"]
}
```
```dockerfile
FROM alpine
RUN --mount=type=ssh \
apk add git openssh-client \
&& install -m 0700 -d ~/.ssh \
&& ssh-keyscan github.com >> ~/.ssh/known_hosts \
&& git clone git@github.com:user/my-private-repo.git
```
### `target.tags`
Image names and tags to use for the build target.
This is the same as the [`--tag` flag][tag].
```hcl
target "default" {
tags = [
"org/repo:latest",
"myregistry.azurecr.io/team/image:v1"
]
}
```
### `target.target`
Set the target build stage to build.
This is the same as the [`--target` flag][target].
```hcl
target "default" {
target = "binaries"
}
```
## Group
Groups allow you to invoke multiple builds (targets) at once.
```hcl
group "default" {
targets = ["db", "webapp-dev"]
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
target "db" {
dockerfile = "Dockerfile.db"
tags = ["docker.io/username/db"]
}
```
Groups take precedence over targets if both exist with the same name.
The following bake file builds the `default` group.
Bake ignores the `default` target.
```hcl
target "default" {
dockerfile-inline = "FROM ubuntu"
}
group "default" {
targets = ["alpine", "debian"]
}
target "alpine" {
dockerfile-inline = "FROM alpine"
}
target "debian" {
dockerfile-inline = "FROM debian"
}
```
## Variable
The HCL file format supports variable block definitions.
You can use variables as build arguments in your Dockerfile,
or interpolate them in attribute values in your Bake file.
```hcl
variable "TAG" {
default = "latest"
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
```
You can assign a default value for a variable in the Bake file,
or assign a `null` value to it. If you assign a `null` value,
Buildx uses the default value from the Dockerfile instead.
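A minimal sketch of the `null` case, assuming the Dockerfile declares
`ARG GO_VERSION` with its own default:

```hcl
variable "GO_VERSION" {
  default = null
}
target "webapp-dev" {
  args = {
    GO_VERSION = GO_VERSION
  }
}
```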
You can override variable defaults set in the Bake file using environment variables.
The following example sets the `TAG` variable to `dev`,
overriding the default `latest` value shown in the previous example.
```console
$ TAG=dev docker buildx bake webapp-dev
```
### Built-in variables
The following variables are built-ins that you can use with Bake without having
to define them.
| Variable | Description |
| --------------------- | ----------------------------------------------------------------------------------- |
| `BAKE_CMD_CONTEXT` | Holds the main context when building using a remote Bake file. |
| `BAKE_LOCAL_PLATFORM` | Returns the current platform's default platform specification (e.g. `linux/amd64`). See the example below. |
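For example, you can use `BAKE_LOCAL_PLATFORM` to pin a target to the platform
of the machine running the build:

```hcl
target "default" {
  platforms = [BAKE_LOCAL_PLATFORM]
}
```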
### Use environment variable as default
You can set a Bake variable to use the value of an environment variable as a default value:
```hcl
variable "HOME" {
default = "$HOME"
}
```
### Interpolate variables into attributes
To interpolate a variable into an attribute string value,
you must use curly brackets.
The following doesn't work:
```hcl
variable "HOME" {
default = "$HOME"
}
target "default" {
ssh = ["default=$HOME/.ssh/id_rsa"]
}
```
Wrap the variable in curly brackets where you want to insert it:
```diff
variable "HOME" {
default = "$HOME"
}
target "default" {
- ssh = ["default=$HOME/.ssh/id_rsa"]
+ ssh = ["default=${HOME}/.ssh/id_rsa"]
}
```
Before you can interpolate a variable into an attribute,
you must first declare it in the Bake file,
as demonstrated in the following example.
```console
$ cat docker-bake.hcl
target "default" {
dockerfile-inline = "FROM ${BASE_IMAGE}"
}
$ docker buildx bake
[+] Building 0.0s (0/0)
docker-bake.hcl:2
--------------------
1 | target "default" {
2 | >>> dockerfile-inline = "FROM ${BASE_IMAGE}"
3 | }
4 |
--------------------
ERROR: docker-bake.hcl:2,31-41: Unknown variable; There is no variable named "BASE_IMAGE"., and 1 other diagnostic(s)
$ cat >> docker-bake.hcl
variable "BASE_IMAGE" {
default = "alpine"
}
$ docker buildx bake
[+] Building 0.6s (5/5) FINISHED
```
## Function
A [set of general-purpose functions][bake_stdlib]
provided by [go-cty][go-cty]
are available for use in HCL files:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${add(123, 1)}"
}
}
```
In addition, [user defined functions][userfunc]
are also supported:
```hcl
# docker-bake.hcl
function "increment" {
params = [number]
result = number + 1
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${increment(123)}"
}
}
```
> **Note**
>
> See [User defined HCL functions][hcl-funcs] page for more details.
<!-- external links -->
[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/engine/reference/commandline/build/#build-arg
[build-context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[cache-backends]: https://docs.docker.com/build/cache/backends/
[cache-from]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from
[cache-to]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to
[context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[file]: https://docs.docker.com/engine/reference/commandline/build/#file
[go-cty]: https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib
[hcl-funcs]: https://docs.docker.com/build/bake/hcl-funcs/
[output]: https://docs.docker.com/engine/reference/commandline/buildx_build/#output
[platform]: https://docs.docker.com/engine/reference/commandline/buildx_build/#platform
[run_mount_secret]: https://docs.docker.com/engine/reference/builder/#run---mounttypesecret
[secret]: https://docs.docker.com/engine/reference/commandline/buildx_build/#secret
[ssh]: https://docs.docker.com/engine/reference/commandline/buildx_build/#ssh
[tag]: https://docs.docker.com/engine/reference/commandline/build/#tag
[target]: https://docs.docker.com/engine/reference/commandline/build/#target
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc

View File

@@ -0,0 +1,74 @@
# Defining additional build contexts and linking targets
In addition to the main `context` key that defines the build context, each target
can also define additional named contexts with a map under the key `contexts`.
These values map to the `--build-context` flag in the [build command](https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context).
Inside the Dockerfile these contexts can be used with the `FROM` instruction or `--from` flag.
The value can be a local source directory, container image (with `docker-image://` prefix),
Git URL, HTTP URL or a name of another target in the Bake file (with `target:` prefix).
## Pinning alpine image
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello world"
```
```hcl
# docker-bake.hcl
target "app" {
contexts = {
alpine = "docker-image://alpine:3.13"
}
}
```
## Using a secondary source directory
```dockerfile
# syntax=docker/dockerfile:1
FROM scratch AS src
FROM golang
COPY --from=src . .
```
```hcl
# docker-bake.hcl
target "app" {
contexts = {
src = "../path/to/source"
}
}
```
## Using a result of one target as a base image in another target
To use the result of one target as a build context of another, specify the
target name with the `target:` prefix.
```dockerfile
# syntax=docker/dockerfile:1
FROM baseapp
RUN echo "Hello world"
```
```hcl
# docker-bake.hcl
target "base" {
dockerfile = "baseapp.Dockerfile"
}
target "app" {
contexts = {
baseapp = "target:base"
}
}
```
Please note that in most cases you should just use a single multi-stage
Dockerfile with multiple targets for similar behavior. This approach is only
recommended when you have multiple Dockerfiles that can't be easily merged
into one.

View File

@@ -0,0 +1,270 @@
# Building from Compose file
## Specification
Bake uses the [compose-spec](https://docs.docker.com/compose/compose-file/) to
parse a compose file and translate each service to a [target](file-definition.md#target).
```yaml
# docker-compose.yml
services:
webapp-dev:
build: &build-dev
dockerfile: Dockerfile.webapp
tags:
- docker.io/username/webapp:latest
cache_from:
- docker.io/username/webapp:cache
cache_to:
- docker.io/username/webapp:cache
webapp-release:
build:
<<: *build-dev
x-bake:
platforms:
- linux/amd64
- linux/arm64
db:
image: docker.io/username/db
build:
dockerfile: Dockerfile.db
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"db",
"webapp-dev",
"webapp-release"
]
}
},
"target": {
"db": {
"context": ".",
"dockerfile": "Dockerfile.db",
"tags": [
"docker.io/username/db"
]
},
"webapp-dev": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
]
},
"webapp-release": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
}
}
}
```
Unlike the [HCL format](file-definition.md#hcl-definition), the compose format
has some limitations:
* Specifying variables or global scope attributes is not yet supported
* The `inherits` service field is not supported, but you can use [YAML anchors](https://docs.docker.com/compose/compose-file/#fragments) to reference other services, as shown in the example above
## `.env` file
You can declare default environment variables in an environment file named
`.env`. This file is loaded from the current working directory where the
command is executed, and is applied to compose definitions passed with `-f`.
```yaml
# docker-compose.yml
services:
webapp:
image: docker.io/username/webapp:${TAG:-v1.0.0}
build:
dockerfile: Dockerfile
```
```
# .env
TAG=v1.1.0
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:v1.1.0"
]
}
}
}
```
> **Note**
>
> System environment variables take precedence over environment variables
> in `.env` file.
## Extension field with `x-bake`
Even if some fields are not (yet) available in the compose specification, you
can use the [special extension](https://docs.docker.com/compose/compose-file/#extension)
field `x-bake` in your compose file to evaluate extra fields:
```yaml
# docker-compose.yml
services:
addon:
image: ct-addon:bar
build:
context: .
dockerfile: ./Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
tags:
- ct-addon:foo
- ct-addon:alp
platforms:
- linux/amd64
- linux/arm64
cache-from:
- user/app:cache
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
pull: true
aws:
image: ct-fake-aws:bar
build:
dockerfile: ./aws.Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
secret:
- id=mysecret,src=./secret
- id=mysecret2,src=./secret2
platforms: linux/arm64
output: type=docker
no-cache: true
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"aws",
"addon"
]
}
},
"target": {
"addon": {
"context": ".",
"dockerfile": "./Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-addon:foo",
"ct-addon:alp"
],
"cache-from": [
"user/app:cache",
"type=local,src=path/to/cache"
],
"cache-to": [
"type=local,dest=path/to/cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
],
"pull": true
},
"aws": {
"context": ".",
"dockerfile": "./aws.Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-fake-aws:bar"
],
"secret": [
"id=mysecret,src=./secret",
"id=mysecret2,src=./secret2"
],
"platforms": [
"linux/arm64"
],
"output": [
"type=docker"
],
"no-cache": true
}
}
}
```
Complete list of valid fields for `x-bake`:
* `cache-from`
* `cache-to`
* `contexts`
* `no-cache`
* `no-cache-filter`
* `output`
* `platforms`
* `pull`
* `secret`
* `ssh`
* `tags`


@@ -0,0 +1,216 @@
# Configuring builds
Bake supports loading the build definition from files, but sometimes you need even
more flexibility to configure this definition.
For this use case, you can define variables inside the bake files that can be
set by the user with environment variables or by [attribute definitions](#global-scope-attributes)
in other bake files. If you wish to change a specific value for a single
invocation, you can use the `--set` flag [from the command line](#from-command-line).
## Global scope attributes
You can define global scope attributes in HCL/JSON and use them for code reuse
and setting values for variables. This means you can do a "data-only" HCL file
with the values you want to set/override and use it in the list of regular
output files.
```hcl
# docker-bake.hcl
variable "FOO" {
default = "abc"
}
target "app" {
args = {
v1 = "pre-${FOO}"
}
}
```
You can use this file directly:
```console
$ docker buildx bake --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-abc"
}
}
}
}
```
Or create an override configuration file:
```hcl
# env.hcl
WHOAMI="myuser"
FOO="def-${WHOAMI}"
```
And invoke bake together with both of the files:
```console
$ docker buildx bake -f docker-bake.hcl -f env.hcl --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-def-myuser"
}
}
}
}
```
## From command line
You can also override target configurations from the command line with the
[`--set` flag](https://docs.docker.com/engine/reference/commandline/buildx_bake/#set):
```hcl
# docker-bake.hcl
target "app" {
args = {
mybuildarg = "foo"
}
}
```
```console
$ docker buildx bake --set app.args.mybuildarg=bar --set app.platform=linux/arm64 app --print
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"mybuildarg": "bar"
},
"platforms": [
"linux/arm64"
]
}
}
}
```
Pattern matching syntax defined in [https://golang.org/pkg/path/#Match](https://golang.org/pkg/path/#Match)
is also supported:
```console
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with "foo"
$ docker buildx bake --set *.platform=linux/arm64 # overrides platform for all targets
$ docker buildx bake --set foo*.no-cache # bypass caching only for targets starting with "foo"
```
Complete list of overridable fields:
* `args`
* `cache-from`
* `cache-to`
* `context`
* `dockerfile`
* `labels`
* `no-cache`
* `output`
* `platform`
* `pull`
* `secrets`
* `ssh`
* `tags`
* `target`
## Using variables in variables across files
When multiple files are specified, one file can use variables defined in
another file.
```hcl
# docker-bake1.hcl
variable "FOO" {
default = upper("${BASE}def")
}
variable "BAR" {
default = "-${FOO}-"
}
target "app" {
args = {
v1 = "pre-${BAR}"
}
}
```
```hcl
# docker-bake2.hcl
variable "BASE" {
default = "abc"
}
target "app" {
args = {
v2 = "${FOO}-post"
}
}
```
```console
$ docker buildx bake -f docker-bake1.hcl -f docker-bake2.hcl --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre--ABCDEF-",
"v2": "ABCDEF-post"
}
}
}
}
```


@@ -0,0 +1,440 @@
# Bake file definition
`buildx bake` supports the HCL, JSON and Compose file formats for defining build
[groups](#group) and [targets](#target), as well as [variables](#variable) and
[functions](#functions). It looks for build definition files in the current
directory in the following order:
* `docker-compose.yml`
* `docker-compose.yaml`
* `docker-bake.json`
* `docker-bake.override.json`
* `docker-bake.hcl`
* `docker-bake.override.hcl`
## Specification
Inside a bake file you can declare group, target and variable blocks to define
project specific reusable build flows.
### Target
A target reflects a single docker build invocation with the same options that
you would specify for `docker build`:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```console
$ docker buildx bake webapp-dev
```
> **Note**
>
> In the case of compose files, each service corresponds to a target.
> If a compose service name contains a dot, it will be replaced with an underscore.
Complete list of valid target fields available for [HCL](#hcl-definition) and
[JSON](#json-definition) definitions:
| Name | Type | Description |
|---------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `inherits` | List | [Inherit build options](#merging-and-inheritance) from other targets |
| `args` | Map | Set build-time variables (same as [`--build-arg` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `cache-from` | List | External cache sources (same as [`--cache-from` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `cache-to` | List | Cache export destinations (same as [`--cache-to` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `context` | String | Set of files located in the specified path or URL |
| `contexts` | Map | Additional build contexts (same as [`--build-context` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `dockerfile` | String | Name of the Dockerfile (same as [`--file` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `dockerfile-inline` | String | Inline Dockerfile content |
| `labels` | Map | Set metadata for an image (same as [`--label` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `no-cache` | Bool | Do not use cache when building the image (same as [`--no-cache` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `no-cache-filter` | List | Do not cache specified stages (same as [`--no-cache-filter` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `output` | List | Output destination (same as [`--output` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `platforms` | List | Set target platforms for build (same as [`--platform` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `pull` | Bool | Always attempt to pull all referenced images (same as [`--pull` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `secret` | List | Secret to expose to the build (same as [`--secret` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `ssh` | List | SSH agent socket or keys to expose to the build (same as [`--ssh` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `tags` | List | Name and optionally a tag in the format `name:tag` (same as [`--tag` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `target` | String | Set the target build stage to build (same as [`--target` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
### Group
A group is a grouping of targets:
```hcl
# docker-bake.hcl
group "build" {
targets = ["db", "webapp-dev"]
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
target "db" {
dockerfile = "Dockerfile.db"
tags = ["docker.io/username/db"]
}
```
```console
$ docker buildx bake build
```
### Variable
Similar to how Terraform provides a way to [define variables](https://www.terraform.io/docs/configuration/variables.html#declaring-an-input-variable),
the HCL file format also supports variable block definitions. These can be used
to define variables with values provided by the current environment, or a
default value when unset:
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
```
```console
$ docker buildx bake webapp-dev # will use the default value "latest"
$ TAG=dev docker buildx bake webapp-dev # will use the TAG environment variable value
```
> **Tip**
>
> See also the [Configuring builds](configuring-build.md) page for advanced usage.
### Functions
A [set of generally useful functions](https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go)
provided by [go-cty](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib)
are available for use in HCL files:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${add(123, 1)}"
}
}
```
In addition, [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc)
are supported:
```hcl
# docker-bake.hcl
function "increment" {
params = [number]
result = number + 1
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${increment(123)}"
}
}
```
> **Note**
>
> See [User defined HCL functions](hcl-funcs.md) page for more details.
## Built-in variables
* `BAKE_CMD_CONTEXT` can be used to access the main `context` for bake command
from a bake file that has been [imported remotely](file-definition.md#remote-definition).
* `BAKE_LOCAL_PLATFORM` returns the default platform specification of the
  machine running bake (e.g. `linux/amd64`), as used in the sketch below.
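As a minimal sketch, `BAKE_LOCAL_PLATFORM` can be used anywhere a platform
string is expected, for example to pin a target to the invoking machine's
platform:

```hcl
# docker-bake.hcl
target "webapp-local" {
  dockerfile = "Dockerfile.webapp"
  platforms  = [BAKE_LOCAL_PLATFORM]
  tags       = ["docker.io/username/webapp:local"]
}
```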
## Merging and inheritance
Multiple files can include the same target and final build options will be
determined by merging them together:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```hcl
# docker-bake2.hcl
target "webapp-dev" {
tags = ["docker.io/username/webapp:dev"]
}
```
```console
$ docker buildx bake -f docker-bake.hcl -f docker-bake2.hcl webapp-dev
```
A group can specify its list of targets with the `targets` option. A target can
inherit build options by setting the `inherits` option to the list of targets or
groups to inherit from:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
target "webapp-release" {
inherits = ["webapp-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
```
## `default` target/group
When you invoke `bake` you specify what targets/groups you want to build. If no
arguments are specified, the group/target named `default` will be built:
```hcl
# docker-bake.hcl
target "default" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```console
$ docker buildx bake
```
## Definitions
### HCL definition
The HCL definition file is recommended, as it is more aligned with the buildx UX
and also allows better code reuse, different target groups, and extended features.
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["db", "webapp-dev"]
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
target "webapp-release" {
inherits = ["webapp-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
target "db" {
dockerfile = "Dockerfile.db"
tags = ["docker.io/username/db"]
}
```
### JSON definition
```json
{
"variable": {
"TAG": {
"default": "latest"
}
},
"group": {
"default": {
"targets": [
"db",
"webapp-dev"
]
}
},
"target": {
"webapp-dev": {
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:${TAG}"
]
},
"webapp-release": {
"inherits": [
"webapp-dev"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
},
"db": {
"dockerfile": "Dockerfile.db",
"tags": [
"docker.io/username/db"
]
}
}
}
```
### Compose file
```yaml
# docker-compose.yml
services:
webapp:
image: docker.io/username/webapp:latest
build:
dockerfile: Dockerfile.webapp
db:
image: docker.io/username/db
build:
dockerfile: Dockerfile.db
```
> **Note**
>
> See [Building from Compose file](compose-file.md) page for more details.
## Remote definition
You can also build from bake files in a remote Git repository or at an HTTPS URL:
```console
$ docker buildx bake "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/docker/cli.git#v20.10.11
#1 0.745 e8f1871b077b64bcb4a13334b7146492773769f7 refs/tags/v20.10.11
#1 2.022 From https://github.com/docker/cli
#1 2.022 * [new tag] v20.10.11 -> v20.10.11
#1 DONE 2.9s
```
```json
{
"group": {
"default": {
"targets": [
"binary"
]
}
},
"target": {
"binary": {
"context": "https://github.com/docker/cli.git#v20.10.11",
"dockerfile": "Dockerfile",
"args": {
"BASE_VARIANT": "alpine",
"GO_STRIP": "",
"VERSION": ""
},
"target": "binary",
"platforms": [
"local"
],
"output": [
"build"
]
}
}
}
```
As you can see, the context is fixed to `https://github.com/docker/cli.git` even if
[no context is actually defined](https://github.com/docker/cli/blob/2776a6d694f988c0c1df61cad4bfac0f54e481c8/docker-bake.hcl#L17-L26)
in the definition.
If you want to access the main context for bake command from a bake file
that has been imported remotely, you can use the [`BAKE_CMD_CONTEXT` built-in var](#built-in-variables).
```console
$ curl -s https://raw.githubusercontent.com/tonistiigi/buildx/remote-test/docker-bake.hcl
```
```hcl
target "default" {
context = BAKE_CMD_CONTEXT
dockerfile-inline = <<EOT
FROM alpine
WORKDIR /src
COPY . .
RUN ls -l && stop
EOT
}
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" --print
```
```json
{
"target": {
"default": {
"context": ".",
"dockerfile": "Dockerfile",
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
}
}
}
```
```console
$ touch foo bar
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test"
```
```text
...
> [4/4] RUN ls -l && stop:
#8 0.101 total 0
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 bar
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 foo
#8 0.102 /bin/sh: stop: not found
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/tonistiigi/buildx.git#remote-test
#1 0.429 577303add004dd7efeb13434d69ea030d35f7888 refs/heads/remote-test
#1 CACHED
```
```json
{
"target": {
"default": {
"context": "https://github.com/docker/cli.git#v20.10.11",
"dockerfile": "Dockerfile",
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
}
}
}
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11"
```
```text
...
> [4/4] RUN ls -l && stop:
#8 0.136 drwxrwxrwx 5 root root 4096 Jul 27 18:31 kubernetes
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 man
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 opts
#8 0.136 -rw-rw-rw- 1 root root 1893 Jul 27 18:31 poule.yml
#8 0.136 drwxrwxrwx 7 root root 4096 Jul 27 18:31 scripts
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 service
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 templates
#8 0.136 drwxrwxrwx 10 root root 4096 Jul 27 18:31 vendor
#8 0.136 -rwxrwxrwx 1 root root 9620 Jul 27 18:31 vendor.conf
#8 0.136 /bin/sh: stop: not found
```
docs/guides/bake/hcl-funcs.md Normal file
@@ -0,0 +1,327 @@
# User defined HCL functions
## Using interpolation to tag an image with the git sha
As shown in the [File definition](file-definition.md#variable) page, `bake`
supports variable blocks which are assigned to matching environment variables
or default values:
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
tags = ["docker.io/username/webapp:${TAG}"]
}
```
Alternatively, in JSON format:
```json
{
"variable": {
"TAG": {
"default": "latest"
}
},
"group": {
"default": {
"targets": ["webapp"]
}
},
"target": {
"webapp": {
"tags": ["docker.io/username/webapp:${TAG}"]
}
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:latest"
]
}
}
}
```
```console
$ TAG=$(git rev-parse --short HEAD) docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:985e9e9"
]
}
}
}
```
## Using the `add` function
You can use [`go-cty` stdlib functions](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib).
Here we are using the `add` function.
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
args = {
buildno = "${add(123, 1)}"
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"buildno": "124"
}
}
}
}
```
## Defining an `increment` function
It also supports [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc).
The following example defines a simple `increment` function.
```hcl
# docker-bake.hcl
function "increment" {
params = [number]
result = number + 1
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
args = {
buildno = "${increment(123)}"
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"buildno": "124"
}
}
}
}
```
## Only adding tags if a variable is not empty using `notequal`
Here we are using the conditional `notequal` function, which exists simply for
symmetry with the `equal` one.
```hcl
# docker-bake.hcl
variable "TAG" {default="" }
group "default" {
targets = [
"webapp",
]
}
target "webapp" {
context="."
dockerfile="Dockerfile"
tags = [
"my-image:latest",
notequal("",TAG) ? "my-image:${TAG}": "",
]
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"my-image:latest"
]
}
}
}
```
## Using variables in functions
Variables can reference other variables, just as target blocks can. Stdlib
functions can also be called from within user-defined functions, but user-defined
functions can't call each other at the moment; a sketch of a stdlib call inside a
function follows the output below.
```hcl
# docker-bake.hcl
variable "REPO" {
default = "user/repo"
}
function "tag" {
params = [tag]
result = ["${REPO}:${tag}"]
}
target "webapp" {
tags = tag("v1")
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"user/repo:v1"
]
}
}
}
```
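As a sketch of the stdlib point above, a user-defined function can call a
stdlib function, here `lower` (assuming it is exposed in
[bake's stdlib](https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go)):

```hcl
# docker-bake.hcl
variable "REPO" {
  default = "User/Repo"
}

# This user-defined function calls the stdlib `lower` function
# to normalize the repository name before tagging.
function "tag" {
  params = [tag]
  result = ["${lower(REPO)}:${tag}"]
}

target "webapp" {
  tags = tag("v1")
}
```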
## Using typed variables
Non-string variables are also accepted. The value passed via the environment
variable is first parsed into the appropriate type.
```hcl
# docker-bake.hcl
variable "FOO" {
default = 3
}
variable "IS_FOO" {
default = true
}
target "app" {
args = {
v1 = FOO > 5 ? "higher" : "lower"
v2 = IS_FOO ? "yes" : "no"
}
}
```
```console
$ docker buildx bake --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "lower",
"v2": "yes"
}
}
}
}
```
docs/guides/bake/index.md Normal file
@@ -0,0 +1,36 @@
# High-level build options with Bake
> This command is experimental.
>
> The design of bake is in early stages, and we are looking for [feedback from users](https://github.com/docker/buildx/issues).
{: .experimental }
Buildx also aims to provide support for high-level build concepts that go beyond
invoking a single build command. We want to support building all the images in
your application together and let users define project-specific reusable
build flows that can then be easily invoked by anyone.
[BuildKit](https://github.com/moby/buildkit) efficiently handles multiple
concurrent build requests and de-duplicates work. The build commands can be
combined with general-purpose command runners (for example, `make`). However,
these tools generally invoke builds in sequence and therefore cannot leverage
the full potential of BuildKit parallelization, or combine BuildKit's output
for the user. For this use case, we have added a command called
[`docker buildx bake`](https://docs.docker.com/engine/reference/commandline/buildx_bake/).
The `bake` command supports building images from HCL, JSON and Compose files.
This is similar to [`docker compose build`](https://docs.docker.com/compose/reference/build/),
but allows all the services to be built concurrently as part of a single
request. If multiple files are specified, they are all read and the
configurations are combined.
We recommend using HCL files, as the experience is more aligned with the buildx
UX and it also allows better code reuse, different target groups and extended
features.
## Next steps
* [File definition](file-definition.md)
* [Configuring builds](configuring-build.md)
* [User defined HCL functions](hcl-funcs.md)
* [Defining additional build contexts and linking targets](build-contexts.md)
* [Building from Compose file](compose-file.md)
@@ -1,3 +1,48 @@
# CI/CD
This page has moved to [Docker Docs website](https://docs.docker.com/build/ci/)
## GitHub Actions
Docker provides a [GitHub Action that will build and push your image](https://github.com/docker/build-push-action/#about)
using Buildx. Here is a simple workflow:
```yaml
name: ci
on:
push:
branches:
- 'main'
jobs:
docker:
runs-on: ubuntu-latest
steps:
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push
uses: docker/build-push-action@v2
with:
push: true
tags: user/app:latest
```
In this example we are also using three other actions:
* The [`setup-buildx`](https://github.com/docker/setup-buildx-action) action creates and boots a builder, using the
`docker-container` [builder driver](../reference/buildx_create.md#driver) by default.
This is **not required but recommended**, as it enables building multi-platform images, exporting cache, and more.
* The [`setup-qemu`](https://github.com/docker/setup-qemu-action) action can be useful if you want
to add emulation support with QEMU to be able to build against more platforms.
* The [`login`](https://github.com/docker/login-action) action takes care of logging
in to a Docker registry.
@@ -1,3 +1,23 @@
# CNI networking
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#cni-networking)
It can be useful to use a bridge network for your builder if, for example, you
encounter network port contention during multiple builds. CNI is not yet
available in the default BuildKit image, but you can create
[a custom BuildKit image with CNI support](https://github.com/moby/buildkit/blob/master/docs/cni-networking.md)
and build it:
```console
$ docker buildx build --tag buildkit-cni:local --load .
```
Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) that
will use this image:
```console
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--driver-opt "image=buildkit-cni:local" \
--buildkitd-flags "--oci-worker-net=cni"
```
@@ -1,3 +1,20 @@
# Color output controls
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/env-vars/#buildkit_colors)
Buildx has support for modifying the colors that are used to output information
to the terminal. You can set the environment variable `BUILDKIT_COLORS` to
something like `run=123,20,245:error=yellow:cancel=blue:warning=white` to set
the colors that you would like to use:
![Progress output custom colors](https://user-images.githubusercontent.com/1951866/180584033-24522385-cafd-4a54-a4a2-18f5ce74eb27.png)
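For example, a minimal sketch of setting the variable before running a build:

```console
$ export BUILDKIT_COLORS="run=123,20,245:error=yellow:cancel=blue:warning=white"
$ docker buildx build .
```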
Setting `NO_COLOR` to anything will disable any colorized output as recommended
by [no-color.org](https://no-color.org/):
![Progress output no color](https://user-images.githubusercontent.com/1951866/180584037-e28f9997-dd4c-49cf-8b26-04864815de19.png)
> **Note**
>
> Parsing errors will be reported but ignored. This will result in default
> color values being used where needed.
See also [the list of pre-defined colors](https://github.com/moby/buildkit/blob/master/util/progress/progressui/colors.go).
@@ -1,3 +1,34 @@
# Using a custom network
This page has moved to [Docker Docs website](https://docs.docker.com/build/drivers/docker-container/#custom-network)
[Create a network](https://docs.docker.com/engine/reference/commandline/network_create/)
named `foonet`:
```console
$ docker network create foonet
```
[Create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
named `mybuilder` that will use this network:
```console
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--driver-opt "network=foonet"
```
Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):
```console
$ docker buildx inspect --bootstrap
```
[Inspect the builder container](https://docs.docker.com/engine/reference/commandline/inspect/)
and see what network is being used:
{% raw %}
```console
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
```
{% endraw %}
@@ -1,3 +1,63 @@
# Using a custom registry configuration
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#setting-registry-certificates)
If you [create a `docker-container` or `kubernetes` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) and
have specified certificates for registries in the [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md),
the files will be copied into the container under `/etc/buildkit/certs` and
configuration will be updated to reflect that.
Take the following `buildkitd.toml` configuration, which will be used for
pushing an image to a registry secured with self-signed certificates:
```toml
# /etc/buildkitd.toml
debug = true
[registry."myregistry.com"]
ca=["/etc/certs/myregistry.pem"]
[[registry."myregistry.com".keypair]]
key="/etc/certs/myregistry_key.pem"
cert="/etc/certs/myregistry_cert.pem"
```
Here we have configured a self-signed certificate for the `myregistry.com` registry.
Now [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration:
```console
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--config /etc/buildkitd.toml
```
Inspecting the builder container, you can see that the buildkitd configuration
has changed:
```console
$ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
```
```toml
debug = true
[registry]
[registry."myregistry.com"]
ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]
[[registry."myregistry.com".keypair]]
cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
```
And the certificates have been copied inside the container:
```console
$ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
myregistry.pem myregistry_cert.pem myregistry_key.pem
```
Now you should be able to push to the registry with this builder:
```console
$ docker buildx build --push --tag myregistry.com/myimage:latest .
```
@@ -1,164 +0,0 @@
# Debug monitor
To assist with creating and debugging complex builds, Buildx provides a
debugger to help you step through the build process and easily inspect the
state of the build environment at any point.
> **Note**
>
> The debug monitor is a new experimental feature in recent versions of Buildx.
> There are rough edges, known bugs, and missing features. Please try it out
> and let us know what you think!
## Starting the debugger
To start the debugger, first, ensure that `BUILDX_EXPERIMENTAL=1` is set in
your environment.
```console
$ export BUILDX_EXPERIMENTAL=1
```
To start a debug session for a build, you can use the `--invoke` flag with the
build command to specify a command to launch in the resulting image.
```console
$ docker buildx build --invoke /bin/sh .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
...
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "dzz7pjb4pk1mj29xqrx0ac3oj". Press Ctrl-a-c to switch to the new container
Switched IO
/ #
```
This launches a `/bin/sh` process in the final stage of the image, and allows
you to explore the contents of the image, without needing to export or load the
image outside of the builder.
For example, you can use `ls` to see the contents of the image:
```console
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr work
```
The optional long form allows you to specify a detailed configuration for the process.
It must be given as CSV-style comma-separated key-value pairs.
Supported keys are `args` (can be in JSON array format), `entrypoint` (can be in JSON array format), `env` (can be in JSON array format), `user`, `cwd` and `tty` (bool).
Example:
```console
$ docker buildx build --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' .
```
#### `on-error`
If you want to start a debug session when a build fails, you can use
`--invoke=on-error`.
```console
$ docker buildx build --invoke on-error .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
...
=> ERROR [shell 10/10] RUN bad-command
------
> [shell 10/10] RUN bad-command:
#0 0.049 /bin/sh: bad-command: not found
------
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Press Ctrl-a-c to switch to the new container
/ #
```
This allows you to explore the state of the image when the build failed.
#### `debug-shell`
If you want to drop into a debug session without first starting the build, you
can use `--invoke=debug-shell`.
```console
$ docker buildx build --invoke debug-shell .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
(buildx)
```
You can then use the commands available in [monitor mode](#monitor-mode) to
start and observe the build.
## Monitor mode
By default, when debugging, you'll be dropped into a shell in the final stage.
When you're in a debug shell, you can use the `Ctrl-a-c` key combination (press
`Ctrl`+`a` together, lift, then press `c`) to toggle between the debug shell
and the monitor mode. In monitor mode, you can run commands that control the
debug environment.
```console
(buildx) help
Available commands are:
attach attach to a buildx server or a process in the container
disconnect disconnect a client from a buildx server. Specific session ID can be specified an arg
exec execute a process in the interactive container
exit exits monitor
help shows this message
kill kill buildx server
list list buildx sessions
ps list processes invoked by "exec". Use "attach" to attach IO to that process
reload reloads the context and build it
rollback re-runs the interactive container with initial rootfs contents
```
## Build controllers
Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller, which runs all
builds in-process, is used for a more stable experience. However, you can also
use the remote controller to detach the build process from the CLI.
To detach the build process from the CLI, you can use the `--detach=true` flag with
the build command.
```console
$ docker buildx build --detach=true --invoke /bin/sh .
```
If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug-shell` subcommand to
immediately enter the monitor mode.
```console
$ docker buildx debug-shell
[+] Building 0.0s (1/1) FINISHED
=> [internal] connecting to remote controller
(buildx) list
ID CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID CURRENT_SESSION COMMAND
3ug8iqaufiwwnukimhqqt06jz false [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr work
/ #
```
@@ -0,0 +1,75 @@
# Docker container driver
The buildx docker-container driver allows creation of a managed and
customizable BuildKit environment inside a dedicated Docker container.
Using the docker-container driver has a couple of advantages over the basic
docker driver. Firstly, we can manually override the version of buildkit to
use, meaning that we can access the latest and greatest features as soon as
they're released, instead of waiting to upgrade to a newer version of Docker.
Additionally, we can access more complex features like multi-architecture
builds and the more advanced cache exporters, which are currently unsupported
in the default docker driver.
We can easily create a new builder that uses the docker-container driver:
```console
$ docker buildx create --name container --driver docker-container
container
```
We should then be able to see it on our list of available builders:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
container docker-container
container0 desktop-linux inactive
default docker
default default running 20.10.17 linux/amd64, linux/386
```
If we trigger a build, the appropriate `moby/buildkit` image will be pulled
from [Docker Hub](https://hub.docker.com/u/moby/buildkit), the image started,
and our build submitted to our containerized build server.
```console
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
```
Note the warning "Build result will only remain in the build cache" - unlike
the `docker` driver, the built image must be explicitly loaded into the local
image store. We can use the `--load` flag for this:
```console
$ docker buildx build --load -t <image> --builder=container .
...
=> exporting to oci image format 7.7s
=> => exporting layers 4.9s
=> => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3 0.0s
=> => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f 0.0s
=> => sending tarball 2.8s
=> importing to docker
```
The image should then be available in the image store:
```console
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<image> latest adf3eec768a1 2 minutes ago 197MB
```
## Further reading
For more information on the docker-container driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
<!--- FIXME: for 0.9, make reference link relative --->
@@ -0,0 +1,50 @@
# Docker driver
The buildx docker driver is the default built-in driver, which uses the BuildKit
server components built directly into the docker engine.
No setup should be required for the docker driver - it should already be
configured for you:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
```
This builder is ready to build out-of-the-box, requiring no extra setup,
so you can get going with a `docker buildx build` as soon as you like.
Depending on your personal setup, you may find multiple builders in your list
that use the docker driver. For example, on a system that runs both a
package-managed version of dockerd, as well as Docker Desktop, you might have the
following:
```console
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
This is because the docker driver builders are automatically pulled from
the available [Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
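For example, a sketch of creating a new context (the context name and SSH
endpoint are placeholders) and seeing it appear as a builder:

```console
$ docker context create my-remote --docker "host=ssh://user@remote-host"
$ docker buildx ls
```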
Unlike the [other drivers](../index.md), builders using the docker driver
cannot be manually created, and can only be automatically created from the
docker context. Additionally, they cannot be configured to a specific BuildKit
version, and cannot take any extra parameters, as these are both preset by the
Docker engine internally.
If you want the extra configuration and flexibility without too much more
overhead, then see the help page for the [docker-container driver](./docker-container.md).
## Further reading
For more information on the docker driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
<!--- FIXME: for 0.9, make reference link relative --->
@@ -0,0 +1,41 @@
# Buildx drivers overview
The buildx client connects out to the BuildKit backend to execute builds -
Buildx drivers allow fine-grained control over management of the backend, and
support several different options for where and how BuildKit should run.
Currently, we support the following drivers:
- The `docker` driver, that uses the BuildKit library bundled into the Docker
daemon.
([guide](./docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver, that launches a dedicated BuildKit container
using Docker, for access to advanced features.
([guide](./docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver, that launches dedicated BuildKit pods in a
remote Kubernetes cluster, for scalable builds.
([guide](./kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver, that allows directly connecting to a manually managed
BuildKit daemon, for more custom setups.
([guide](./remote.md))
<!--- FIXME: for 0.9, make links relative, and add reference link for remote --->
To create a new builder that uses one of the above drivers, you can use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/) command:
```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
The build experience is very similar across drivers; however, some
features are not evenly supported across the board. Notably, the `docker`
driver does not include support for certain output/caching types.
| Feature | `docker` | `docker-container` | `kubernetes` | `remote` |
| :---------------------------- | :-------------: | :----------------: | :----------: | :--------------------: |
| **Automatic `--load`** | ✅ | ❌ | ❌ | ❌ |
| **Cache export** | ❔ (inline only) | ✅ | ✅ | ✅ |
| **Docker/OCI tarball output** | ❌ | ✅ | ✅ | ✅ |
| **Multi-arch images** | ❌ | ✅ | ✅ | ✅ |
| **BuildKit configuration** | ❌ | ✅ | ✅ | ❔ (managed externally) |
@@ -0,0 +1,238 @@
# Kubernetes driver
The buildx kubernetes driver connects your local development or CI
environments to your kubernetes cluster, giving you access to more powerful
and varied compute resources.
This guide assumes you already have a kubernetes cluster - if you don't,
you can easily follow along by installing
[minikube](https://minikube.sigs.k8s.io/docs/).
Before connecting buildx to your cluster, you may want to create a dedicated
namespace using `kubectl` to keep your buildx-managed resources separate. You
can call your namespace anything you want, or use the existing `default`
namespace, but we'll create a `buildkit` namespace for now:
```console
$ kubectl create namespace buildkit
```
Then create a new buildx builder:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit
```
This assumes that the kubernetes cluster you want to connect to is currently
accessible via the kubectl command, with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
if necessary.
You should now be able to see the builder in the list of buildx builders:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
```
The buildx driver creates the necessary resources on your cluster in the
specified namespace (in this case, `buildkit`), while keeping your
driver configuration locally. You can see the running pods with:
```console
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
```
You can use your new builder by including the `--builder` flag when running
buildx commands. For example (replacing `<user>` and `<image>` with your Docker
Hub username and desired image output respectively):
```console
$ docker buildx build . \
--builder=kube \
-t <user>/<image> \
--push
```
## Scaling Buildkit
One of the main advantages of the kubernetes builder is that you can easily
scale your builder up and down to handle increased build load. These controls
are exposed via the following options:
- `replicas=N`
- This scales the number of buildkit pods to the desired size. By default,
only a single pod will be created, but increasing this allows taking
advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`
- These options allow requesting and limiting the resources available to each
buildkit pod, as described in the official kubernetes documentation
[here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
(a sketch follows the replica example below).
For example, to create 4 replica buildkit pods:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
```
Listing the pods, we get:
```console
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 4/4 4 4 8s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-48ld2 1/1 Running 0 8s
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
```
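Similarly, a sketch of setting resource requests and limits for each pod (the
values here are illustrative):

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,requests.cpu=1,requests.memory=2Gi,limits.cpu=4,limits.memory=8Gi
```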
Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
random nodes from the available pool, which should provide better balancing
across all replicas, while `sticky` (the default) attempts to connect the same
build performed multiple times to the same node each time, ensuring better
local cache utilization.
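For example, a sketch of choosing random load balancing for the replicas
created above:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random
```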
For more information on scalability, see the options for [buildx create](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).
## Multi-platform builds
The kubernetes buildx driver has support for creating [multi-platform images](https://docs.docker.com/build/buildx/multiplatform-images/),
making it easy to build for multiple platforms at once.
### QEMU
Like the other containerized driver `docker-container`, the kubernetes driver
also supports using [QEMU](https://www.qemu.org/) (user mode) to build
non-native platforms. If using a default setup like above, no extra setup
should be needed; you can just start building for other
architectures by including the `--platform` flag.
For example, to build a Linux image for `amd64` and `arm64`:
```console
$ docker buildx build . \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <user>/<image> \
--push
```
> **Warning**
> QEMU performs emulation of non-native platforms, which is *much*
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
Note that if you're using a custom buildkit image with the `image=<image>` driver
option, or invoking non-native binaries from within your build, you may need to
explicitly enable QEMU using the `qemu.install` option during driver creation:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
```
### Native
If you have access to cluster nodes of different architectures, you can
configure the kubernetes driver to take advantage of these for native builds.
To do this, we need to use the `--append` feature of `docker buildx create`.
To start, we can create our builder with explicit support for a single
architecture, `amd64`:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/amd64 \
--node=builder-amd64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```
This creates a buildx builder `kube` containing a single builder node `builder-amd64`.
Note that the buildx concept of a node is not the same as the kubernetes
concept of a node - the buildx node in this case could connect multiple
kubernetes nodes of the same architecture together.
With our `kube` driver created, we can now introduce another architecture into
the mix; like before, we can use `arm64`:
```console
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/arm64 \
--node=builder-arm64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```
If you list builders now, you should be able to see both nodes present:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
```
You should now be able to build multi-arch images with `amd64` and `arm64`
combined, by specifying those platforms together in your buildx command:
```console
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
```
You can repeat the `buildx create --append` command for as many different
architectures as you want to support.
## Rootless mode
The kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see [here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).
To enable it in your cluster, you can use the `rootless=true` driver option:
```console
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
```
This will create your pods without `securityContext.privileged`.
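To verify, you can check that the pod spec does not set `privileged` (a
sketch, assuming the `buildkit` namespace from above):

```console
$ kubectl -n buildkit get pods -o jsonpath='{.items[*].spec.containers[*].securityContext}'
```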
## Further reading
For more information on the kubernetes driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
<!--- FIXME: for 0.9, make reference link relative --->
Some files were not shown because too many files have changed in this diff.