Compare commits


73 Commits

Author SHA1 Message Date
Justin Chadwell
86bdced776 Merge pull request #1815 from jedevc/v0.10-vendor-buildkit 2023-05-22 17:28:34 +01:00
Justin Chadwell
edb535f263 vendor: update buildkit to v0.11@348e79dfed17
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-05-19 11:30:23 +01:00
CrazyMax
f16694cc5d Merge pull request #1792 from jedevc/v0.10-bake-reference
[v0.10] docs: move and rewrite bake reference
2023-05-11 14:18:53 +02:00
David Karlsson
e7db0ce587 docs: refactor bake file reference
Signed-off-by: David Karlsson <david.karlsson@docker.com>
2023-05-11 13:04:30 +01:00
Tõnis Tiigi
c513d34049 Merge pull request #1664 from crazy-max/v0.10_backport_stripcreds
[v0.10 backport] build: strip credentials from remote url on collecting Git provenance info
2023-03-06 16:25:59 +00:00
CrazyMax
d455c07331 build: strip credentials from remote url on collecting Git provenance info
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-03-06 17:14:40 +01:00
Tõnis Tiigi
5ac3b4c4b6 Merge pull request #1662 from crazy-max/v0.10.4_picks
[v0.10] cherry-picks for v0.10.4
2023-03-06 14:37:30 +00:00
CrazyMax
b1440b07f2 build: makes git dirty check opt-in
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-03-06 10:56:54 +01:00
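Assuming the opt-in landed as the `BUILDX_GIT_CHECK_DIRTY` environment variable (the name documented for this behavior in buildx), enabling it would look roughly like:

```bash
# Opt in to the git dirty-state check when collecting provenance info.
# BUILDX_GIT_CHECK_DIRTY is assumed here to be the opt-in added by this commit.
BUILDX_GIT_CHECK_DIRTY=true docker buildx build .
```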
David Karlsson
a3286a0ab1 docs: added --platform=local example
Signed-off-by: David Karlsson <david.karlsson@docker.com>
2023-03-06 10:54:42 +01:00
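The added example presumably resembles the following (`local` resolves to the current node's platform; the image name is illustrative):

```bash
# Build for the host's native platform without spelling it out.
docker buildx build --platform=local -t myimage .
```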
Tõnis Tiigi
b79345c63e Merge pull request #1645 from cpuguy83/0.10_env_no_provenance
[0.10] Add env var to disable default attestations
2023-02-22 10:28:01 -08:00
Brian Goff
23eb3c3ccd Add env var to disable default attestations
In certain cases we need to build with `--provenance=false`.
However, not all build environments (especially in the OSS ecosystem) have
the latest buildx, so blanket-setting `--provenance=false` will fail in
those cases.

Having an env var allows people to set the value without having to worry
about whether the buildx version has the `--provenance` flag.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit bc9cb2c66a)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2023-02-22 18:20:34 +00:00
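A sketch of the resulting workflow, assuming the variable landed as `BUILDX_NO_DEFAULT_ATTESTATIONS` (the name buildx documents for this opt-out):

```bash
# Disable default provenance attestations without passing --provenance=false,
# so the same command works on older buildx versions that lack the flag.
export BUILDX_NO_DEFAULT_ATTESTATIONS=1
docker buildx build -t myorg/myimage:dev .
```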
CrazyMax
79e156beb1 Merge pull request #1636 from crazy-max/v0.10_backport_ci-update-ver
[v0.10 backport] ci: update buildx and buildkit to latest
2023-02-16 14:22:20 +01:00
CrazyMax
c960d16da5 ci: update buildx and buildkit to latest
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit f1a5a3ec50)
2023-02-16 14:16:36 +01:00
CrazyMax
b5b9de69d9 Merge pull request #1635 from crazy-max/v0.10_backport_fix-git-ambiguous
[v0.10 backport] build: fix git ambiguous argument
2023-02-16 14:14:11 +01:00
David Gageot
45863c4f16 Remove git warning: buildx/1633
Signed-off-by: David Gageot <david.gageot@docker.com>
(cherry picked from commit d4a4aaf509)
2023-02-16 14:07:28 +01:00
CrazyMax
f2feea8bed Merge pull request #1609 from crazy-max/0.10.3_cherry_picks
[v0.10] cherry-picks for v0.10.3
2023-02-16 13:48:46 +01:00
Justin Chadwell
a73d07ff7a imagetools: process com.docker.reference.* annotations
To give us the option later down the road of producing recommended OCI
names in BuildKit (using com instead of vnd, woops), we need to update
Buildx to be able to process both.

Ideally, if a Buildx/BuildKit release hadn't been made, we could just
switch over; but since we have, we need to support both (at least for a
while; eventually we could consider deprecating and removing the vnd
variant).

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 642f28f439)
2023-02-16 13:21:41 +01:00
Justin Chadwell
0fad89c3b9 bake: avoid nesting error diagnostics
With changes to the lazy evaluation, the evaluation order is no longer
fixed - this means that we can follow long and confusing paths to get to
an error.

Because of the co-recursive nature of the lazy evaluation, we need to
take special care that the original HCL diagnostics are not discarded
and are preserved so that the original source of the error can be
detected. Preserving the full trace is not necessary, and probably not
useful to the user - all of the file that is not lazily loaded will be
eagerly loaded after all struct blocks are loaded - so the error would
be found regardless.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit fbb4f4dec8)
2023-02-09 22:23:02 +01:00
CrazyMax
661af29d46 build: check reachable git commits
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit fd5884189c)
2023-02-08 14:34:23 +01:00
CrazyMax
02cf539a08 gitutil: override the locale to ensure consistent output
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit a8eb2a7fbe)
2023-02-08 14:34:14 +01:00
Justin Chadwell
cc87bd104e bake: avoid early-exit for resolution failures
With changes made to allow lazy evaluation, we were early exiting if an
undefined name was detected, either for a variable or a function.

This had two key implications:

1. The error messages changed, and became significantly less
   informative.

   For example, we went from:

   > Unknown variable; There is no variable named "FO". Did you mean "FOO"?, and 1 other diagnostic(s)

   To

   > Invalid expression; undefined variable "FO"

2. Any issues in our function detection (funcCalls) that cause JSON
   functions to be erroneously detected lead to invalid functions being
   resolved, which in turn causes new name resolution errors.

To avoid the above problems, we can defer the error from an undefined
name until HCL evaluation - which produces the more informative errors,
and does not suffer from incorrectly detecting JSON functions.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit dc8a2b0398)
2023-02-08 14:33:53 +01:00
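A minimal reproduction of the diagnostics discussed above (hypothetical bake file; the variable names are illustrative):

```bash
# A bake file that references the undefined name "FO" (typo for "FOO"):
cat > docker-bake.hcl <<'EOF'
variable "FOO" {
  default = "bar"
}
target "app" {
  args = {
    foo = FO
  }
}
EOF
# With this change, the error is deferred to HCL evaluation, so the richer
# "Did you mean \"FOO\"?" diagnostic is produced instead of the bare
# "undefined variable" error.
docker buildx bake app
```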
Justin Chadwell
582cc04be6 build: add docs for boolean attestation flags
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 07548bc898)
2023-02-08 14:33:35 +01:00
CrazyMax
ae278ce450 builder: fix docker context not validated
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 0e544fe835)
2023-02-08 14:31:43 +01:00
Justin Chadwell
b66988c824 bake: fix loop references
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 48357ee0c6)
2023-02-08 14:29:45 +01:00
Tõnis Tiigi
00ed17df6d Merge pull request #1569 from tonistiigi/v0.10.2-picks
[v0.10] cherry-picks for v0.10.2
2023-01-30 11:57:04 -08:00
CrazyMax
cfb71fab97 build: better message output for git provenance
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 6db696748b)
2023-01-30 11:46:51 -08:00
CrazyMax
f62342768b build: silently fail if git remote not found
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 4789d2219c)
2023-01-30 11:46:42 -08:00
Tonis Tiigi
7776652a4d build: fix multi-node merge to read descriptor from result
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit c33b310b48)
2023-01-30 11:46:12 -08:00
Akihiro Suda
5a4f80f3ce bake: SOURCE_DATE_EPOCH: fix panic: assignment to entry in nil map
Fix issue 1562

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 1f56f51740)
2023-01-30 11:45:50 -08:00
CrazyMax
b5ea79e277 build: fix preferred platform not taken into account
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 49b3c0dba5)
2023-01-30 11:45:15 -08:00
Tõnis Tiigi
481796f84f Merge pull request #1556 from crazy-max/0.10.1_cherry_picks
[v0.10] cherry-picks for v0.10.1
2023-01-26 11:02:55 -08:00
Tonis Tiigi
0090d49e57 vendor: update buildkit to v0.11.2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit f6da7ee135)
2023-01-26 10:34:57 -08:00
CrazyMax
389ac0c3d1 build: set remote origin url
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit c1058c17aa)
2023-01-26 13:36:58 +01:00
Justin Chadwell
2bb8ce2f57 build: create error group per opt
Using the synchronization primitive, we can avoid needing to create a
separate wait group.

This allows us to sidestep the issue where the wait group could be
completed before the build invocation functions had terminated - if one
of the functions terminated with an error, it was possible to hit a race
condition where the result-handling code would begin executing despite
the error.

Refactoring to a separate error group, which more elegantly handles
function returns and errors, ensures that we can't encounter this issue.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 8b7aa1a168)
2023-01-26 13:36:57 +01:00
Justin Chadwell
65cea456fd build: reorder error group funcs
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 1180d919f5)
2023-01-26 13:36:57 +01:00
Justin Chadwell
f7bd5b99da build: use copy for BuildWithResultHandler loop vars
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 347417ee12)
2023-01-26 13:36:57 +01:00
Justin Chadwell
8c14407fa2 imagetools: silence intoto warnings
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 7145e021f9)
2023-01-26 13:36:57 +01:00
CrazyMax
5245a2b3ff rm: do not check for context builders when removing inactive
This check was introduced in e7b5ee7518, but we should not check
context builders when removing inactive ones.

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 6cd0c11ab1)
2023-01-26 13:36:28 +01:00
Tonis Tiigi
44d99d4573 build: mark capabilities request as internal
So it doesn't show up in the History API.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit be55b41427)
2023-01-26 13:35:46 +01:00
David Karlsson
14942a266e docs: fix broken link in buildx_bake CLI reference
Signed-off-by: David Karlsson <david.karlsson@docker.com>
(cherry picked from commit ba8fa6c403)
2023-01-26 13:33:13 +01:00
CrazyMax
123febf107 ci: fix typo in docs-release workflow
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 523a16aa35)
2023-01-26 13:32:58 +01:00
Batuhan Apaydın
3f5f7c5228 fix the directory of the buildx binary
Signed-off-by: Batuhan Apaydın <batuhan.apaydin@trendyol.com>
(cherry picked from commit edb16f8aab)
2023-01-26 13:32:34 +01:00
Justin Chadwell
6d935625a6 Merge pull request #1546 from jedevc/v0.10-inspect-lazy-attestations
[v0.10] Lazily load attestation data in imagetools inspect
2023-01-24 12:41:13 +00:00
Justin Chadwell
e640dc6041 Merge pull request #1545 from jedevc/v0.10-error-on-attestations-docker
[v0.10] build: error when using docker exporter and attestations
2023-01-24 12:41:03 +00:00
Justin Chadwell
08244b12b5 Merge pull request #1544 from jedevc/v0.10-bump-ci
[v0.10] Bump Buildx and BuildKit versions in GitHub actions
2023-01-24 12:40:52 +00:00
Justin Chadwell
78d8b926db inspect: lazily load attestation data
Avoid loading the attestation data immediately; only compute it upon
request. We do this using a deferred function, which allows us to define
the computation in the same place as before but perform it later.

With this patch, we ensure that the attestation data is only pulled from
the remote if it is actually referenced in the format string -
otherwise, we can skip it for improved performance.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 12:10:57 +00:00
Justin Chadwell
19291d900e inspect: move attestation loading to struct methods
This refactor ensures that the attestations are not output in the JSON
output for "{{ json . }}", and additionally allows future refactors to
dynamically load the attestation contents, ensuring faster performance
when attestations are not used in the output.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 12:10:57 +00:00
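Illustrative invocations (field names follow the documented `imagetools inspect` format; the image reference is a placeholder):

```bash
# "{{ json . }}" no longer includes (or fetches) the attestations; they are
# only pulled when the format string references them explicitly.
docker buildx imagetools inspect moby/buildkit:latest \
  --format '{{ json .Provenance }}'
docker buildx imagetools inspect moby/buildkit:latest \
  --format '{{ json .SBOM }}'
```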
Justin Chadwell
ed9b4a7169 build: error when using docker exporter and attestations
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 43a748fd15)
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 12:07:43 +00:00
Justin Chadwell
033d5629c0 build: avoid compatibility error when attestations disabled
We should avoid erroring with attestation-support compatibility errors
when a user has specified --provenance=false.

A user may wish to set --provenance=false in a way that works across
buildkit versions, but currently it will fail on old versions - this
patch fixes that by silently ignoring the provenance flag for this check
if it's set to disabled.

Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 15a80b56b5)
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 12:07:34 +00:00
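A sketch of the invocation this fix makes portable:

```bash
# On BuildKit versions without attestation support, the disabled flag is
# now silently ignored for the compatibility check instead of erroring.
docker buildx build --provenance=false -t myimage .
```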
Justin Chadwell
7cd5add568 ci: update buildkit release version in build pipeline
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit c1ab55a3f2)
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 11:50:58 +00:00
Justin Chadwell
2a000096fa ci: update buildx release version in build pipeline
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit bc1d590ca7)
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-24 11:50:53 +00:00
Tõnis Tiigi
b7781447d7 Merge pull request #1530 from thaJeztah/0.10_backport_update_buildkit
[0.10 backport] vendor: github.com/moby/buildkit v0.11.1
2023-01-24 00:50:03 -08:00
Sebastiaan van Stijn
f6ba0a23f8 vendor: github.com/moby/buildkit v0.11.1
full diff: https://github.com/moby/buildkit/compare/v0.11.0...v0.11.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 01e1c28dd9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-01-18 20:58:27 +01:00
CrazyMax
bf4b95fc3a Merge pull request #1524 from jedevc/v0.10-docs-reference-attest
[0.10] docs: add reference for new attest family of flags
2023-01-17 16:24:18 +01:00
Justin Chadwell
467586dc8d docs: add reference for new attest family of flags
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-17 13:48:19 +00:00
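The flag family being documented looks roughly like this (flag values per the buildx build reference; the image name is illustrative):

```bash
# Attach a max-detail provenance attestation and an SBOM to the build.
docker buildx build \
  --attest type=provenance,mode=max \
  --attest type=sbom \
  -t myorg/myimage:dev .
```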
Tõnis Tiigi
8764628976 Merge pull request #1501 from tonistiigi/v0.10-picks
[v0.10] cherry-picks
2023-01-09 16:10:12 -08:00
Justin Chadwell
583fe71740 docs: update with new inspect output
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 9818055b0e)
2023-01-09 15:53:42 -08:00
Justin Chadwell
9fb3ff1a27 inspect: change additional spdxs to not have duplicates
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 484823c97d)
2023-01-09 15:53:37 -08:00
Justin Chadwell
9d4f38c5fa inspect: provide access to multiple spdx documents
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 3ce17b01dc)
2023-01-09 15:53:34 -08:00
Justin Chadwell
793082f543 inspect: parse sbom and provenance into json structs
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit e68c566c1c)
2023-01-09 15:53:29 -08:00
Justin Chadwell
fe6f697205 inspect: break after first matching attestation
Signed-off-by: Justin Chadwell <me@jedevc.com>
(cherry picked from commit 19d16aa941)
2023-01-09 15:53:13 -08:00
Tonis Tiigi
fd3fb752d3 github: update CI to buildkit v0.11
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 571871b084)
2023-01-09 15:52:51 -08:00
CrazyMax
7fcea64eb4 Merge pull request #1496 from thaJeztah/0.10_backport_docs_updates
[0.10 backport] update anchor-links and cli-docs-tool v0.5.1
2023-01-09 15:52:56 +01:00
Sebastiaan van Stijn
05e0ce4953 go.mod: update cli-docs-tool v0.5.1 and re-generate docs
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c97500b117)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-01-09 13:05:27 +01:00
Sebastiaan van Stijn
f8d9d1e776 docs: update anchor links
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b8285c17e6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-01-09 13:05:27 +01:00
CrazyMax
8a7a221a7f imagetools inspect: handle provenance and sbom
Use stub structs for SLSA/SBOM while waiting for the
go-imageinspect library to be public.

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-01-06 16:33:47 -08:00
CrazyMax
e4db8d2a21 imagetools inspect: missing annotations key
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-01-06 16:33:47 -08:00
Justin Chadwell
7394853ddf vendor: update buildkit to v0.11.0-rc4
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-06 16:33:46 -08:00
Justin Chadwell
a8be6b576b docs: update oci layout with tag resolution
Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-06 16:33:46 -08:00
Justin Chadwell
8b960ededd build: refactor reference parsing for image layouts
We allow any valid image reference format for the oci-layout context,
not just name@digest; images of the form name:tag@digest are now
accepted as well.

The name of the reference is used to find the local directory to look up
the store in, while the tag and digest are attached to a random identity
to generate the dummy reference sent to the oci-layout context.

This separation of the target to replace and the value to replace it
with ensures that any tag or digest set in the client is properly sent
across to the server. The tag is used when a digest was not specified,
and it is resolved in the context of the local directory before being
sent, using the same helpers as we use for the local cache exporter.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-01-06 16:33:46 -08:00
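Roughly, with a named build context the newly accepted forms look like this (directory name and digest are placeholders; syntax per the buildx `oci-layout://` docs):

```bash
# Previously only name@digest was accepted; name:tag@digest now works too.
# "./layout" is a local OCI layout directory; <digest> is a placeholder.
docker buildx build \
  --build-context src=oci-layout://./layout:v1.0@sha256:<digest> .
```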
CrazyMax
4735a71fbd e2e: use native k3s installation script
The debianmaster/actions-k3s action gives some warnings in our e2e
workflow. This commit brings https://github.com/debianmaster/actions-k3s/blob/master/index.js
directly into the workflow through actions/github-script, with
some changes to properly wait for nodes to be up.

Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-01-06 16:33:46 -08:00
Tõnis Tiigi
37fce8cc06 Merge pull request #1489 from AkihiroSuda/cherrypick-1482-v0.10
[0.10] Propagate SOURCE_DATE_EPOCH from the client env
2023-01-05 23:45:21 -08:00
Akihiro Suda
82476ab039 Propagate SOURCE_DATE_EPOCH from the client env
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 0e6f5a155e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2023-01-05 08:48:27 +09:00
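The standard usage pattern this propagation enables (conventional `SOURCE_DATE_EPOCH` idiom, not taken from the commit's diff itself):

```bash
# Pin build timestamps to the last commit time for reproducible builds;
# buildx now forwards SOURCE_DATE_EPOCH from the client environment.
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) docker buildx build .
```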
4694 changed files with 144142 additions and 506891 deletions


@@ -116,60 +116,6 @@ commit automatically with `git commit -s`.
### Run the unit- and integration-tests
Running tests:
```bash
make test
```
This runs all unit and integration tests in a containerized environment.
Locally, every package can be tested separately with standard Go tools, but
integration tests are skipped if the local user doesn't have enough
permissions or the worker binaries are not installed.
```bash
# run unit tests only
make test-unit
# run integration tests only
make test-integration
# test a specific package
TESTPKGS=./bake make test
# run all integration tests with a specific worker
TESTFLAGS="--run=//worker=remote -v" make test-integration
# run a specific integration test
TESTFLAGS="--run /TestBuild/worker=remote/ -v" make test-integration
# run a selection of integration tests using a regexp
TESTFLAGS="--run /TestBuild.*/worker=remote/ -v" make test-integration
```

> **Note**
>
> Set `TEST_KEEP_CACHE=1` for the test framework to keep external dependent
> images in a docker volume if you are repeatedly calling `make test`. This
> helps to avoid rate limiting on the remote registry side.

> **Note**
>
> Set `TEST_DOCKERD=1` for the test framework to enable the docker workers,
> specifically the `docker` and `docker-container` drivers.
>
> The docker tests cannot be run in parallel, so they require passing
> `--parallel=1` in `TESTFLAGS`.

> **Note**
>
> If you are working behind a proxy, you can set some or all of
> `HTTP_PROXY=http://ip:port`, `HTTPS_PROXY=http://ip:port`, and
> `NO_PROXY=http://ip:port` for the test framework to specify the proxy
> build args, as shown in the combined example below.
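Putting those notes together, a combined run might look like this (a sketch composed from the options documented above, not a prescribed invocation):

```bash
# Keep dependency images cached between runs, enable the docker workers,
# and run the docker-worker integration tests serially.
TEST_KEEP_CACHE=1 TEST_DOCKERD=1 \
  TESTFLAGS="--run=//worker=docker -v --parallel=1" \
  make test-integration
```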
### Run the helper commands
To enter a demo container environment and experiment, you may run:


@@ -1,124 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).
  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true
  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem
  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true
  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true
  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text
  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true
  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproduce your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        echo hello
        ```

        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```

        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true
  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.

@@ -1,12 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use GitHub Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.


@@ -1,15 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true

.github/SECURITY.md

@@ -1,12 +0,0 @@
# Reporting security issues

The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!

**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).

Security reports are greatly appreciated, and we will publicly thank you for
it. We also like to send gifts; if you're into schwag, make sure to let us
know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.


@@ -5,11 +5,6 @@ updates:
    directory: "/"
    schedule:
      interval: "daily"
    ignore:
      # ignore this dependency
      # it seems to be a bug with dependabot, as pinning to a commit sha should not
      # trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
      - dependency-name: "docker/docs"
    labels:
      - "dependencies"
      - "bot"

.github/labeler.yml

@@ -1,104 +0,0 @@
# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
area/project:
  - all:
      - changed-files:
          - any-glob-to-any-file:
              - .github/**
              - LICENSE
              - AUTHORS
              - MAINTAINERS
              - PROJECT.md
              - README.md
              - .gitignore
              - codecov.yml
          - all-globs-to-all-files: '!.github/workflows/*'

# Add 'area/github-actions' label to changes in the .github/workflows folder
area/ci:
  - changed-files:
      - any-glob-to-any-file: '.github/workflows/**'

# Add 'area/bake' label to changes in the bake
area/bake:
  - changed-files:
      - any-glob-to-any-file: 'bake/**'

# Add 'area/bake/compose' label to changes in the bake+compose
area/bake/compose:
  - changed-files:
      - any-glob-to-any-file:
          - bake/compose.go
          - bake/compose_test.go

# Add 'area/build' label to changes in build files
area/build:
  - changed-files:
      - any-glob-to-any-file: 'build/**'

# Add 'area/builder' label to changes in builder files
area/builder:
  - changed-files:
      - any-glob-to-any-file: 'builder/**'

# Add 'area/cli' label to changes in the CLI
area/cli:
  - changed-files:
      - any-glob-to-any-file:
          - cmd/**
          - commands/**

# Add 'area/controller' label to changes in the controller
area/controller:
  - changed-files:
      - any-glob-to-any-file: 'controller/**'

# Add 'area/docs' label to markdown files in the docs folder
area/docs:
  - changed-files:
      - any-glob-to-any-file: 'docs/**/*.md'

# Add 'area/dependencies' label to changes in go dependency files
area/dependencies:
  - changed-files:
      - any-glob-to-any-file:
          - go.mod
          - go.sum
          - vendor/**

# Add 'area/driver' label to changes in the driver folder
area/driver:
  - changed-files:
      - any-glob-to-any-file: 'driver/**'

# Add 'area/driver/docker' label to changes in the docker driver
area/driver/docker:
  - changed-files:
      - any-glob-to-any-file: 'driver/docker/**'

# Add 'area/driver/docker-container' label to changes in the docker-container driver
area/driver/docker-container:
  - changed-files:
      - any-glob-to-any-file: 'driver/docker-container/**'

# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
area/driver/kubernetes:
  - changed-files:
      - any-glob-to-any-file: 'driver/kubernetes/**'

# Add 'area/driver/remote' label to changes in the remote driver
area/driver/remote:
  - changed-files:
      - any-glob-to-any-file: 'driver/remote/**'

# Add 'area/hack' label to changes in the hack folder
area/hack:
  - changed-files:
      - any-glob-to-any-file: 'hack/**'

# Add 'area/tests' label to changes in test files
area/tests:
  - changed-files:
      - any-glob-to-any-file:
          - tests/**
          - '**/*_test.go'

.github/releases.json

@@ -1,735 +0,0 @@
{
"latest": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.2": {
"id": 90741208,
"tag_name": "v0.10.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
]
},
"v0.10.1": {
"id": 90346950,
"tag_name": "v0.10.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
]
},
"v0.10.0": {
"id": 88388110,
"tag_name": "v0.10.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
]
},
"v0.10.0-rc3": {
"id": 88191592,
"tag_name": "v0.10.0-rc3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
]
},
"v0.10.0-rc2": {
"id": 86248476,
"tag_name": "v0.10.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
]
},
"v0.10.0-rc1": {
"id": 85963900,
"tag_name": "v0.10.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
]
},
"v0.9.1": {
"id": 74760068,
"tag_name": "v0.9.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
]
},
"v0.9.0": {
"id": 74546589,
"tag_name": "v0.9.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
]
},
"v0.9.0-rc2": {
"id": 74052235,
"tag_name": "v0.9.0-rc2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
]
},
"v0.9.0-rc1": {
"id": 73389692,
"tag_name": "v0.9.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
]
},
"v0.8.2": {
"id": 63479740,
"tag_name": "v0.8.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
]
},
"v0.8.1": {
"id": 62289050,
"tag_name": "v0.8.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
]
},
"v0.8.0": {
"id": 61423774,
"tag_name": "v0.8.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
]
},
"v0.8.0-rc1": {
"id": 60513568,
"tag_name": "v0.8.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
]
},
"v0.7.1": {
"id": 54098347,
"tag_name": "v0.7.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
]
},
"v0.7.0": {
"id": 53109422,
"tag_name": "v0.7.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
]
},
"v0.7.0-rc1": {
"id": 52726324,
"tag_name": "v0.7.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
]
},
"v0.6.3": {
"id": 48691641,
"tag_name": "v0.6.3",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
]
},
"v0.6.2": {
"id": 48207405,
"tag_name": "v0.6.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
]
},
"v0.6.1": {
"id": 47064772,
"tag_name": "v0.6.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
]
},
"v0.6.0": {
"id": 46343260,
"tag_name": "v0.6.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
]
},
"v0.6.0-rc1": {
"id": 46230351,
"tag_name": "v0.6.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
]
},
"v0.5.1": {
"id": 35276550,
"tag_name": "v0.5.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
]
},
"v0.5.0": {
"id": 35268960,
"tag_name": "v0.5.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
]
},
"v0.5.0-rc1": {
"id": 35015334,
"tag_name": "v0.5.0-rc1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
]
},
"v0.4.2": {
"id": 30007794,
"tag_name": "v0.4.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
]
},
"v0.4.1": {
"id": 26067509,
"tag_name": "v0.4.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
]
},
"v0.4.0": {
"id": 26028174,
"tag_name": "v0.4.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
]
},
"v0.3.1": {
"id": 20316235,
"tag_name": "v0.3.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
]
},
"v0.3.0": {
"id": 19029664,
"tag_name": "v0.3.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
]
},
"v0.2.2": {
"id": 17671545,
"tag_name": "v0.2.2",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
]
},
"v0.2.1": {
"id": 17582885,
"tag_name": "v0.2.1",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
]
},
"v0.2.0": {
"id": 16965310,
"tag_name": "v0.2.0",
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
"assets": [
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
]
}
}
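
The manifest above maps each release tag to an id, tag_name, html_url, and a flat list of asset download URLs. A minimal Go sketch for decoding that shape (the Release struct name and file path are illustrative, not part of buildx):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Release mirrors one entry of the manifest above.
type Release struct {
	ID      int64    `json:"id"`
	TagName string   `json:"tag_name"`
	HTMLURL string   `json:"html_url"`
	Assets  []string `json:"assets"`
}

func main() {
	dt, err := os.ReadFile(".github/releases.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The top level is a map keyed by tag name, e.g. "v0.8.2".
	releases := map[string]Release{}
	if err := json.Unmarshal(dt, &releases); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if r, ok := releases["v0.8.2"]; ok {
		fmt.Println(r.HTMLURL, len(r.Assets), "assets")
	}
}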


@@ -13,8 +13,10 @@ on:
tags:
- 'v*'
pull_request:
branches:
- 'master'
- 'v[0-9]*'
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
@@ -23,205 +25,43 @@ env:
BUILDKIT_IMAGE: "moby/buildkit:latest"
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.22"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
jobs:
test-integration:
runs-on: ubuntu-24.04
env:
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
TEST_COVERAGE: "1"
strategy:
fail-fast: false
matrix:
buildkit:
- master
- latest
- buildx-stable-1
- v0.14.1
- v0.13.2
- v0.12.5
worker:
- docker-container
- remote
pkg:
- ./tests
mode:
- ""
- experimental
include:
- worker: docker
pkg: ./tests
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: docker
pkg: ./tests
mode: experimental
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
test:
runs-on: ubuntu-22.04
steps:
-
name: Prepare
run: |
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
if [ -n "${{ matrix.buildkit }}" ]; then
echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
fi
testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
case "${{ matrix.worker }}" in
docker | docker+containerd)
echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
;;
*)
echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
;;
esac
if [[ "${{ matrix.worker }}" == "docker"* ]]; then
echo "TEST_DOCKERD=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.mode }}" = "experimental" ]; then
echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build test image
uses: docker/bake-action@v5
name: Test
uses: docker/bake-action@v2
with:
targets: integration-test
targets: test
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
*.cache-from=type=gha,scope=test
*.cache-to=type=gha,scope=test
-
name: Test
run: |
./hack/test
env:
TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
TESTPKGS: "${{ matrix.pkg }}"
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
name: Upload coverage
uses: codecov/codecov-action@v3
with:
directory: ./bin/testreports
flags: integration
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ./bin/testreports
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ./bin/testreports
directory: ${{ env.DESTDIR }}/coverage
test-unit:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os:
- ubuntu-24.04
- macos-12
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "${{ env.GO_VERSION }}"
-
name: Prepare
run: |
testreportsName=${{ github.job }}--${{ matrix.os }}
testreportsBaseDir=./bin/testreports
testreportsDir=$testreportsBaseDir/$testreportsName
echo "TESTREPORTS_NAME=$testreportsName" >> $GITHUB_ENV
echo "TESTREPORTS_BASEDIR=$testreportsBaseDir" >> $GITHUB_ENV
echo "TESTREPORTS_DIR=$testreportsDir" >> $GITHUB_ENV
mkdir -p $testreportsDir
shell: bash
-
name: Install gotestsum
run: |
go install gotest.tools/gotestsum@${{ env.GOTESTSUM_VERSION }}
-
name: Test
env:
TMPDIR: ${{ runner.temp }}
run: |
gotestsum \
--jsonfile="${{ env.TESTREPORTS_DIR }}/go-test-report.json" \
--junitfile="${{ env.TESTREPORTS_DIR }}/junit-report.xml" \
--packages="./..." \
-- \
"-mod=vendor" \
"-coverprofile" "${{ env.TESTREPORTS_DIR }}/coverage.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
shell: bash
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ${{ env.TESTREPORTS_DIR }}
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ${{ env.TESTREPORTS_BASEDIR }}
prepare-binaries:
runs-on: ubuntu-24.04
prepare:
runs-on: ubuntu-22.04
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Create matrix
id: platforms
@@ -233,13 +73,13 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
binaries:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
needs:
- prepare-binaries
- prepare
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare-binaries.outputs.matrix) }}
platform: ${{ fromJson(needs.prepare.outputs.matrix) }}
steps:
-
name: Prepare
@@ -248,13 +88,13 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -269,28 +109,25 @@ jobs:
CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
-
name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: buildx-${{ env.PLATFORM_PAIR }}
name: buildx
path: ${{ env.DESTDIR }}/*
if-no-files-found: error
bin-image:
runs-on: ubuntu-24.04
needs:
- test-integration
- test-unit
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
runs-on: ubuntu-22.04
if: ${{ github.event_name != 'pull_request' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -298,7 +135,7 @@ jobs:
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@v4
with:
images: |
${{ env.REPO_SLUG }}
@@ -310,41 +147,39 @@ jobs:
-
name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push image
uses: docker/bake-action@v5
uses: docker/bake-action@v2
with:
files: |
./docker-bake.hcl
${{ steps.meta.outputs.bake-file }}
targets: image-cross
push: ${{ github.event_name != 'pull_request' }}
sbom: true
set: |
*.cache-from=type=gha,scope=bin-image
*.cache-to=type=gha,scope=bin-image,mode=max
*.attest=type=sbom
*.attest=type=provenance,mode=max,builder-id=https://github.com/${{ env.GITHUB_REPOSITORY }}/actions/runs/${{ env.GITHUB_RUN_ID }}
release:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
needs:
- test-integration
- test-unit
- binaries
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Download binaries
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: buildx
path: ${{ env.DESTDIR }}
pattern: buildx-*
merge-multiple: true
-
name: Create checksums
run: ./hack/hash-files
@@ -359,9 +194,33 @@ jobs:
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@a74c6b72af54cfa997e81df42d94703d6313a2d0 # v2.0.6
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
draft: true
files: ${{ env.DESTDIR }}/*
buildkit-edge:
runs-on: ubuntu-22.04
continue-on-error: true
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=moby/buildkit:master
buildkitd-flags: --debug
-
# Just run a bake target to check everything runs fine
name: Build
uses: docker/bake-action@v2
with:
targets: binaries
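
In the test-integration Prepare step earlier in this workflow, the worker name is spliced into a `go test -run` regexp, so `docker+containerd` has its `+` escaped via sed 's/\+/\\+/g'. A minimal Go sketch of the same escaping, using regexp.QuoteMeta instead of the narrower sed substitution (worker names copied from the matrix; nothing here is buildx API):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	workers := []string{"docker-container", "remote", "docker", "docker+containerd"}
	for _, w := range workers {
		// QuoteMeta escapes every regexp metacharacter, which covers
		// the '+' that the workflow handles with sed 's/\+/\\+/g'.
		pattern := fmt.Sprintf("--run=//worker=%s$", regexp.QuoteMeta(w))
		fmt.Println(pattern)
		// Sanity-check that the escaped fragment still matches literally.
		re := regexp.MustCompile("worker=" + regexp.QuoteMeta(w) + "$")
		fmt.Println(re.MatchString("worker=" + w))
	}
}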


@@ -1,42 +0,0 @@
name: codeql
on:
push:
branches:
- 'master'
- 'v[0-9]*'
pull_request:
permissions:
actions: read
contents: read
security-events: write
env:
GO_VERSION: "1.22"
jobs:
codeql:
runs-on: ubuntu-24.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
-
name: Autobuild
uses: github/codeql-action/autobuild@v3
-
name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"


@@ -1,23 +1,18 @@
name: docs-release
on:
workflow_dispatch:
inputs:
tag:
description: 'Git tag'
required: true
release:
types:
- released
jobs:
open-pr:
runs-on: ubuntu-24.04
if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
runs-on: ubuntu-22.04
if: "!github.event.release.prerelease"
steps:
-
name: Checkout docs repo
uses: actions/checkout@v4
uses: actions/checkout@v3
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
repository: docker/docs
@@ -25,47 +20,39 @@ jobs:
-
name: Prepare
run: |
rm -rf ./data/buildx/*
if [ -n "${{ github.event.inputs.tag }}" ]; then
echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
else
echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
fi
rm -rf ./_data/buildx/*
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
-
name: Generate yaml
uses: docker/bake-action@v5
name: Build docs
uses: docker/bake-action@v2
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
env:
DOCS_FORMATS: yaml
-
name: Copy yaml
name: Copy files
run: |
cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
-
name: Update vendor
name: Commit changes
run: |
make vendor
env:
VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
git add -A .
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
uses: peter-evans/create-pull-request@2b011faafdcbc9ceb11414d64d0573f37c774b04
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io
commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
signoff: true
branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
branch: dispatch/buildx-ref-${{ github.event.release.name }}
delete-branch: true
title: Update buildx reference to ${{ env.RELEASE_NAME }}
title: Update buildx reference to ${{ github.event.release.name }}
body: |
Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
draft: false


@@ -22,22 +22,21 @@ on:
jobs:
docs-yaml:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Build reference YAML docs
uses: docker/bake-action@v5
uses: docker/bake-action@v2
with:
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
*.cache-from=type=gha,scope=docs-yaml
@@ -46,18 +45,17 @@ jobs:
DOCS_FORMATS: yaml
-
name: Upload reference YAML docs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: docs-yaml
path: /tmp/buildx-docs/out/reference
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
uses: docker/docs/.github/workflows/validate-upstream.yml@main
needs:
- docs-yaml
with:
module-name: docker/buildx
repo: https://github.com/${{ github.repository }}
data-files-id: docs-yaml
data-files-folder: buildx
create-placeholder-stubs: true


@@ -11,8 +11,10 @@ on:
- 'master'
- 'v[0-9]*'
pull_request:
branches:
- 'master'
- 'v[0-9]*'
paths-ignore:
- '.github/releases.json'
- 'README.md'
- 'docs/**'
@@ -22,18 +24,18 @@ env:
jobs:
build:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Build
uses: docker/bake-action@v5
uses: docker/bake-action@v2
with:
targets: binaries
set: |
@@ -46,7 +48,7 @@ jobs:
mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
-
name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: binary
path: ${{ env.DESTDIR }}/build
@@ -82,10 +84,6 @@ jobs:
driver-opt: qemu.install=true
- driver: remote
endpoint: tcp://localhost:1234
- driver: docker-container
metadata-provenance: max
- driver: docker-container
metadata-warnings: true
exclude:
- driver: docker
multi-node: mnode-true
@@ -100,14 +98,14 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
-
name: Install buildx
uses: actions/download-artifact@v4
uses: actions/download-artifact@v3
with:
name: binary
path: /home/runner/.docker/cli-plugins
@@ -133,18 +131,70 @@ jobs:
else
echo "MULTI_NODE=0" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-provenance }}" ]; then
echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-warnings }}" ]; then
echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
fi
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
uses: actions/github-script@v6
with:
version: ${{ env.K3S_VERSION }}
script: |
const fs = require('fs');
let wait = function(milliseconds) {
return new Promise((resolve, reject) => {
if (typeof(milliseconds) !== 'number') {
throw new Error('milliseconds not a number');
}
setTimeout(() => resolve("done!"), milliseconds)
});
}
try {
const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
core.info(`storing kubeconfig in ${kubeconfig}`);
await exec.exec('docker', ["run", "-d",
"--privileged",
"--name=buildkit-k3s",
"-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
"-e", "K3S_KUBECONFIG_MODE=666",
"-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
"-p", "6443:6443",
"-p", "80:80",
"-p", "443:443",
"-p", "8080:8080",
"rancher/k3s:${{ env.K3S_VERSION }}", "server"
]);
await wait(10000);
core.exportVariable('KUBECONFIG', kubeconfig);
let nodeName;
for (let count = 1; count <= 5; count++) {
try {
const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
nodeName = nodeNameOutput.stdout
} catch (error) {
core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
} finally {
if (nodeName) {
break;
}
await wait(5000);
}
}
if (!nodeName) {
throw new Error(`Unable to resolve node name after 5 attempts.`);
}
await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
} catch (error) {
core.setFailed(error.message);
}
-
name: Print KUBECONFIG
if: matrix.driver == 'kubernetes'
run: |
yq ${{ env.KUBECONFIG }}
-
name: Launch remote buildkitd
if: matrix.driver == 'remote'
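
The inline github-script block above (the older approach that the install-k3s action replaces) starts k3s in a container, then polls kubectl until a node name resolves before gating on node readiness. A minimal Go sketch of that same poll-then-wait loop, assuming kubectl is on PATH and KUBECONFIG is already exported; the attempt count and delays are copied from the script:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	var nodeName string
	// Mirror the script: up to 5 attempts, 5s apart.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "get", "nodes", "--no-headers", "-oname").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			nodeName = strings.TrimSpace(string(out))
			break
		}
		fmt.Printf("unable to resolve node name, attempt %d of 5\n", attempt)
		time.Sleep(5 * time.Second)
	}
	if nodeName == "" {
		fmt.Fprintln(os.Stderr, "unable to resolve node name after 5 attempts")
		os.Exit(1)
	}
	// Same readiness gate as the script's final kubectl call.
	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready", nodeName)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}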


@@ -1,19 +0,0 @@
name: labeler
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request_target:
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
-
name: Run
uses: actions/labeler@v5


@@ -13,86 +13,30 @@ on:
tags:
- 'v*'
pull_request:
paths-ignore:
- '.github/releases.json'
branches:
- 'master'
- 'v[0-9]*'
jobs:
prepare:
runs-on: ubuntu-24.04
outputs:
includes: ${{ steps.matrix.outputs.includes }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Matrix
id: matrix
uses: actions/github-script@v7
with:
script: |
let def = {};
await core.group(`Parsing definition`, async () => {
const printEnv = Object.assign({}, process.env, {
GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
});
const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
ignoreReturnCode: true,
env: printEnv
});
if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
throw new Error(resPrint.stderr);
}
def = JSON.parse(resPrint.stdout.trim());
});
await core.group(`Generating matrix`, async () => {
const includes = [];
for (const targetName of Object.keys(def.target)) {
const target = def.target[targetName];
if (target.platforms && target.platforms.length > 0) {
target.platforms.forEach(platform => {
includes.push({
target: targetName,
platform: platform
});
});
} else {
includes.push({
target: targetName
});
}
}
core.info(JSON.stringify(includes, null, 2));
core.setOutput('includes', JSON.stringify(includes));
});
validate:
runs-on: ubuntu-24.04
needs:
- prepare
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.prepare.outputs.includes) }}
target:
- lint
- validate-vendor
- validate-docs
steps:
-
name: Prepare
run: |
if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
version: latest
-
name: Validate
uses: docker/bake-action@v5
with:
targets: ${{ matrix.target }}
set: |
*.platform=${{ matrix.platform }}
name: Run
run: |
make ${{ matrix.target }}
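
The new prepare job above derives the validate matrix from `docker buildx bake validate --print`: targets with platforms fan out into one entry per (target, platform) pair, while platform-less targets become a single entry. A minimal Go sketch of the same transformation (the struct models only the fields the script reads):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type bakeDef struct {
	Target map[string]struct {
		Platforms []string `json:"platforms"`
	} `json:"target"`
}

type include struct {
	Target   string `json:"target"`
	Platform string `json:"platform,omitempty"`
}

func main() {
	out, err := exec.Command("docker", "buildx", "bake", "validate", "--print").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var def bakeDef
	if err := json.Unmarshal(out, &def); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var includes []include
	for name, t := range def.Target {
		if len(t.Platforms) == 0 {
			// Targets without platforms become a single matrix entry.
			includes = append(includes, include{Target: name})
			continue
		}
		// One matrix entry per (target, platform) pair, as in the script.
		for _, p := range t.Platforms {
			includes = append(includes, include{Target: name, Platform: p})
		}
	}
	dt, _ := json.MarshalIndent(includes, "", "  ")
	fmt.Println(string(dt))
}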


@@ -1,5 +1,5 @@
run:
timeout: 30m
timeout: 10m
skip-files:
- ".*\\.pb\\.go$"
@@ -11,72 +11,30 @@ linters:
enable:
- gofmt
- govet
- deadcode
- depguard
- goimports
- ineffassign
- misspell
- unused
- varcheck
- revive
- staticcheck
- typecheck
- nolintlint
- gosec
- forbidigo
disable-all: true
linters-settings:
govet:
enable:
- nilness
- unusedwrite
# enable-all: true
# disable:
# - fieldalignment
# - shadow
depguard:
rules:
main:
deny:
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
- '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
config:
G306: "0644"
list-type: blacklist
include-go-root: true
packages:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- io/ioutil
issues:
exclude-rules:
- linters:
- revive
text: "stutters"
- linters:
- revive
text: "empty-block"
- linters:
- revive
text: "superfluous-else"
- linters:
- revive
text: "unused-parameter"
- linters:
- revive
text: "redefines-builtin-id"
- linters:
- revive
text: "if-return"
# show all
max-issues-per-linter: 0
max-same-issues: 0


@@ -1,22 +1,15 @@
# syntax=docker/dockerfile:1
# syntax=docker/dockerfile-upstream:1.5.0
ARG GO_VERSION=1.22
ARG XX_VERSION=1.4.0
ARG GO_VERSION=1.19
ARG XX_VERSION=1.1.2
ARG DOCKERD_VERSION=20.10.14
# for testing
ARG DOCKER_VERSION=27.0.0-rc.2
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.14.1
ARG UNDOCK_VERSION=0.7.0
FROM docker:$DOCKERD_VERSION AS dockerd-release
# xx is a helper for cross-compilation
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
FROM dockereng/cli-bin:$DOCKER_VERSION AS docker-cli
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM crazymax/undock:$UNDOCK_VERSION AS undock
FROM golatest AS gobase
COPY --from=xx / /
@@ -25,39 +18,6 @@ ENV GOFLAGS=-mod=vendor
ENV CGO_ENABLED=0
WORKDIR /src
FROM gobase AS gotestsum
ARG GOTESTSUM_VERSION
ENV GOFLAGS=""
RUN --mount=target=/root/.cache,type=cache <<EOT
set -ex
go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
go install "github.com/wadey/gocovmerge@latest"
mkdir /out
/go/bin/gotestsum --version
mv /go/bin/gotestsum /out
mv /go/bin/gocovmerge /out
EOT
COPY --chmod=755 <<"EOF" /out/gotestsumandcover
#!/bin/sh
set -x
if [ -z "$GO_TEST_COVERPROFILE" ]; then
exec gotestsum "$@"
fi
coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
mkdir -p "$coverdir/helpers"
gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
ecode=$?
go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
rm "$coverdir/helpers-report.txt"
for f in "$coverdir/helpers"/*; do
rm "$f"
done
rmdir "$coverdir/helpers"
exit $ecode
EOF
FROM gobase AS buildx-version
RUN --mount=type=bind,target=. <<EOT
set -e
@@ -68,7 +28,6 @@ EOT
FROM gobase AS buildx-build
ARG TARGETPLATFORM
ARG GO_EXTRA_FLAGS
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
@@ -80,7 +39,6 @@ RUN --mount=type=bind,target=. \
EOT
FROM gobase AS test
ENV SKIP_INTEGRATION_TESTS=1
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
@@ -103,30 +61,6 @@ FROM binaries-$TARGETOS AS binaries
# enable scanning for this stage
ARG BUILDKIT_SBOM_SCAN_STAGE=true
FROM gobase AS integration-test-base
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apk add --no-cache \
btrfs-progs \
e2fsprogs \
e2fsprogs-extra \
ip6tables \
iptables \
openssl \
shadow-uidmap \
xfsprogs \
xz
COPY --link --from=gotestsum /out /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=docker-engine / /usr/bin/
COPY --link --from=docker-cli / /usr/bin/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=undock /usr/local/bin/undock /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
FROM integration-test-base AS integration-test
COPY . .
# Release
FROM --platform=$BUILDPLATFORM alpine AS releaser
WORKDIR /work
@@ -142,7 +76,7 @@ FROM scratch AS release
COPY --from=releaser /out/ /
# Shell
FROM docker:$DOCKER_VERSION AS dockerd-release
FROM docker:$DOCKERD_VERSION AS dockerd-release
FROM alpine AS shell
RUN apk add --no-cache iptables tmux git vim less openssh
RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx


@@ -153,7 +153,6 @@ made through a pull request.
"akihirosuda",
"crazy-max",
"jedevc",
"jsternberg",
"tiborvass",
"tonistiigi",
]
@@ -195,11 +194,6 @@ made through a pull request.
Email = "me@jedevc.com"
GitHub = "jedevc"
[people.jsternberg]
Name = "Jonathan Sternberg"
Email = "jonathan.sternberg@docker.com"
GitHub = "jsternberg"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"


@@ -8,8 +8,6 @@ endif
export BUILDX_CMD ?= docker buildx
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
.PHONY: all
all: binaries
@@ -21,9 +19,13 @@ build:
shell:
./hack/shell
.PHONY: $(BAKE_TARGETS)
$(BAKE_TARGETS):
$(BUILDX_CMD) bake $@
.PHONY: binaries
binaries:
$(BUILDX_CMD) bake binaries
.PHONY: binaries-cross
binaries-cross:
$(BUILDX_CMD) bake binaries-cross
.PHONY: install
install: binaries
@@ -35,19 +37,27 @@ release:
./hack/release
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
validate-all: lint test validate-vendor validate-docs
.PHONY: lint
lint:
$(BUILDX_CMD) bake lint
.PHONY: test
test:
./hack/test
$(BUILDX_CMD) bake test
.PHONY: test-unit
test-unit:
TESTPKGS=./... SKIP_INTEGRATION_TESTS=1 ./hack/test
.PHONY: validate-vendor
validate-vendor:
$(BUILDX_CMD) bake validate-vendor
.PHONY: test-integration
test-integration:
TESTPKGS=./tests ./hack/test
.PHONY: validate-docs
validate-docs:
$(BUILDX_CMD) bake validate-docs
.PHONY: validate-authors
validate-authors:
$(BUILDX_CMD) bake validate-authors
.PHONY: test-driver
test-driver:
@@ -68,7 +78,3 @@ authors:
.PHONY: mod-outdated
mod-outdated:
$(BUILDX_CMD) bake mod-outdated
.PHONY: generated-files
generated-files:
$(BUILDX_CMD) bake update-generated-files


@@ -32,6 +32,19 @@ Key features:
- [Building with buildx](#building-with-buildx)
- [Working with builder instances](#working-with-builder-instances)
- [Building multi-platform images](#building-multi-platform-images)
- [Manuals](docs/manuals)
- [High-level build options with Bake](docs/manuals/bake/index.md)
- [Drivers](docs/manuals/drivers/index.md)
- [Exporters](docs/manuals/exporters/index.md)
- [Cache backends](docs/manuals/cache/backends/index.md)
- [Guides](docs/guides)
- [CI/CD](docs/guides/cicd.md)
- [CNI networking](docs/guides/cni-networking.md)
- [Using a custom network](docs/guides/custom-network.md)
- [Using a custom registry configuration](docs/guides/custom-registry-config.md)
- [OpenTelemetry support](docs/guides/opentelemetry.md)
- [Registry mirror](docs/guides/registry-mirror.md)
- [Resource limiting](docs/guides/resource-limiting.md)
- [Reference](docs/reference/buildx.md)
- [`buildx bake`](docs/reference/buildx_bake.md)
- [`buildx build`](docs/reference/buildx_build.md)
@@ -41,26 +54,21 @@ Key features:
- [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
- [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
- [`buildx inspect`](docs/reference/buildx_inspect.md)
- [`buildx install`](docs/reference/buildx_install.md)
- [`buildx ls`](docs/reference/buildx_ls.md)
- [`buildx prune`](docs/reference/buildx_prune.md)
- [`buildx rm`](docs/reference/buildx_rm.md)
- [`buildx stop`](docs/reference/buildx_stop.md)
- [`buildx uninstall`](docs/reference/buildx_uninstall.md)
- [`buildx use`](docs/reference/buildx_use.md)
- [`buildx version`](docs/reference/buildx_version.md)
- [Contributing](#contributing)
For more information on how to use Buildx, see
[Docker Build docs](https://docs.docker.com/build/).
# Installing
Using `buildx` with Docker requires Docker engine 19.03 or newer.
> **Warning**
>
> Using an incompatible version of Docker may result in unexpected behavior,
> and will likely cause issues, especially when using Buildx builders with more
> recent versions of BuildKit.
Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer.
A limited set of functionality works with older versions of Docker when
invoking the binary directly.
## Windows and macOS
@@ -69,9 +77,8 @@ for Windows and macOS.
## Linux packages
Docker Engine package repositories contain Docker Buildx packages when installed according to the
[Docker Engine install documentation](https://docs.docker.com/engine/install/). Install the
`docker-buildx-plugin` package to install the Buildx plugin.
Docker Linux packages also include Docker Buildx when installed using the
[DEB or RPM packages](https://docs.docker.com/engine/install/).
## Manual download
@@ -147,7 +154,7 @@ $ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docke
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx
# Local
$ git clone https://github.com/docker/buildx.git && cd buildx
$ make install
```
@@ -187,12 +194,12 @@ through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
We currently support the following drivers:
- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
## Working with builder instances
@@ -239,7 +246,7 @@ When you invoke a build, you can set the `--platform` flag to specify the target
platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
`darwin/amd64`).
When the current builder instance is backed by the `docker-container` or
`kubernetes` driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
@@ -309,7 +316,7 @@ cross-compilation helpers for more advanced use-cases.
## High-level build options
See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.
See [`docs/manuals/bake/index.md`](docs/manuals/bake/index.md) for more details.
# Contributing


@@ -2,6 +2,8 @@ package bake
import (
"context"
"encoding/csv"
"fmt"
"io"
"os"
"path"
@@ -10,28 +12,23 @@ import (
"sort"
"strconv"
"strings"
"time"
composecli "github.com/compose-spec/compose-go/v2/cli"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/builder/remotecontext/urlutil"
hcl "github.com/hashicorp/hcl/v2"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/pkg/errors"
"github.com/tonistiigi/go-csvvalue"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
var (
httpPrefix = regexp.MustCompile(`^https?://`)
gitURLPathWithFragmentSuffix = regexp.MustCompile(`\.git(?:#.+)?$`)
validTargetNameChars = `[a-zA-Z0-9_-]+`
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
)
@@ -47,18 +44,17 @@ type Override struct {
}
func defaultFilenames() []string {
names := []string{}
names = append(names, composecli.DefaultFileNames...)
names = append(names, []string{
return []string{
"docker-compose.yml", // support app
"docker-compose.yaml", // support app
"docker-bake.json",
"docker-bake.override.json",
"docker-bake.hcl",
"docker-bake.override.hcl",
}...)
return names
}
}
func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]File, error) {
func ReadLocalFiles(names []string) ([]File, error) {
isDefault := false
if len(names) == 0 {
isDefault = true
@@ -66,26 +62,20 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
}
out := make([]File, 0, len(names))
setStatus := func(st *client.VertexStatus) {
if l != nil {
l.SetStatus(st)
}
}
for _, n := range names {
var dt []byte
var err error
if n == "-" {
dt, err = readWithProgress(stdin, setStatus)
dt, err = io.ReadAll(os.Stdin)
if err != nil {
return nil, err
}
} else {
dt, err = readFileWithProgress(n, isDefault, setStatus)
if dt == nil && err == nil {
continue
}
dt, err = os.ReadFile(n)
if err != nil {
if isDefault && errors.Is(err, os.ErrNotExist) {
continue
}
return nil, err
}
}
@@ -94,105 +84,8 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
return out, nil
}
func readFileWithProgress(fname string, isDefault bool, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
st := &client.VertexStatus{
ID: "reading " + fname,
}
defer func() {
now := time.Now()
st.Completed = &now
if dt != nil || err != nil {
setStatus(st)
}
}()
now := time.Now()
st.Started = &now
f, err := os.Open(fname)
if err != nil {
if isDefault && errors.Is(err, os.ErrNotExist) {
return nil, nil
}
return nil, err
}
defer f.Close()
setStatus(st)
info, err := f.Stat()
if err != nil {
return nil, err
}
st.Total = info.Size()
setStatus(st)
buf := make([]byte, 1024)
for {
n, err := f.Read(buf)
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
dt = append(dt, buf[:n]...)
st.Current += int64(n)
setStatus(st)
}
return dt, nil
}
func readWithProgress(r io.Reader, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
st := &client.VertexStatus{
ID: "reading from stdin",
}
defer func() {
now := time.Now()
st.Completed = &now
setStatus(st)
}()
now := time.Now()
st.Started = &now
setStatus(st)
buf := make([]byte, 1024)
for {
n, err := r.Read(buf)
if err == io.EOF {
break
}
if err != nil {
return nil, err
}
dt = append(dt, buf[:n]...)
st.Current += int64(n)
setStatus(st)
}
return dt, nil
}
func ListTargets(files []File) ([]string, error) {
c, _, err := ParseFiles(files, nil)
if err != nil {
return nil, err
}
var targets []string
for _, g := range c.Groups {
targets = append(targets, g.Name)
}
for _, t := range c.Targets {
targets = append(targets, t.Name)
}
return dedupSlice(targets), nil
}
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
c, _, err := ParseFiles(files, defaults)
c, err := ParseFiles(files, defaults)
if err != nil {
return nil, nil, err
}
@@ -247,6 +140,19 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
}
}
// Propagate SOURCE_DATE_EPOCH from the client env.
// The logic is purposely duplicated from `build/build.go` to keep this visible in `bake --print`.
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
for _, f := range m {
if f.Args == nil {
f.Args = make(map[string]*string)
}
if _, ok := f.Args["SOURCE_DATE_EPOCH"]; !ok {
f.Args["SOURCE_DATE_EPOCH"] = &v
}
}
}
return m, n, nil
}
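The propagation above is straightforward to observe from a test. A minimal sketch (hypothetical test name, reusing `ReadTargets` and the `ptrstr` helper from this package's tests):

```go
func TestSourceDateEpochSketch(t *testing.T) {
	fp := File{
		Name: "docker-bake.hcl",
		Data: []byte(`target "app" {}`),
	}
	t.Setenv("SOURCE_DATE_EPOCH", "0")
	m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, nil, nil)
	require.NoError(t, err)
	// The client env value surfaces as a default build arg; an explicit
	// SOURCE_DATE_EPOCH arg on the target would take precedence.
	require.Equal(t, ptrstr("0"), m["app"].Args["SOURCE_DATE_EPOCH"])
}
```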
@@ -298,7 +204,7 @@ func sliceToMap(env []string) (res map[string]string) {
return
}
func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclparser.ParseMeta, err error) {
func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error) {
defer func() {
err = formatHCLError(err, files)
}()
@@ -310,7 +216,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclpars
isCompose, composeErr := validateComposeFile(f.Data, f.Name)
if isCompose {
if composeErr != nil {
return nil, nil, composeErr
return nil, composeErr
}
composeFiles = append(composeFiles, f)
}
@@ -318,13 +224,13 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclpars
hf, isHCL, err := ParseHCLFile(f.Data, f.Name)
if isHCL {
if err != nil {
return nil, nil, err
return nil, err
}
hclFiles = append(hclFiles, hf)
} else if composeErr != nil {
return nil, nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
return nil, fmt.Errorf("failed to parse %s: parsing yaml: %v, parsing hcl: %w", f.Name, composeErr, err)
} else {
return nil, nil, err
return nil, err
}
}
}
@@ -332,40 +238,23 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclpars
if len(composeFiles) > 0 {
cfg, cmperr := ParseComposeFiles(composeFiles)
if cmperr != nil {
return nil, nil, errors.Wrap(cmperr, "failed to parse compose file")
return nil, errors.Wrap(cmperr, "failed to parse compose file")
}
c = mergeConfig(c, *cfg)
c = dedupeConfig(c)
}
var pm hclparser.ParseMeta
if len(hclFiles) > 0 {
res, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
if err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
LookupVar: os.LookupEnv,
Vars: defaults,
ValidateLabel: validateTargetName,
}, &c)
if err.HasErrors() {
return nil, nil, err
}, &c); err.HasErrors() {
return nil, err
}
for _, renamed := range res.Renamed {
for oldName, newNames := range renamed {
newNames = dedupSlice(newNames)
if len(newNames) == 1 && oldName == newNames[0] {
continue
}
c.Groups = append(c.Groups, &Group{
Name: oldName,
Targets: newNames,
})
}
}
c = dedupeConfig(c)
pm = *res
}
return &c, &pm, nil
return &c, nil
}
func dedupeConfig(c Config) Config {
@@ -390,8 +279,7 @@ func dedupeConfig(c Config) Config {
}
func ParseFile(dt []byte, fn string) (*Config, error) {
c, _, err := ParseFiles([]File{{Data: dt, Name: fn}}, nil)
return c, err
return ParseFiles([]File{{Data: dt, Name: fn}}, nil)
}
type Config struct {
@@ -672,21 +560,18 @@ func (c Config) target(name string, visited map[string]*Target, overrides map[st
}
type Group struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Description string `json:"description,omitempty" hcl:"description,optional" cty:"description"`
Targets []string `json:"targets" hcl:"targets" cty:"targets"`
Name string `json:"-" hcl:"name,label" cty:"name"`
Targets []string `json:"targets" hcl:"targets" cty:"targets"`
// Target // TODO?
}
type Target struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Description string `json:"description,omitempty" hcl:"description,optional" cty:"description"`
Name string `json:"-" hcl:"name,label" cty:"name"`
// Inherits is the only field that cannot be overridden with --set
Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
Annotations []string `json:"annotations,omitempty" hcl:"annotations,optional" cty:"annotations"`
Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
Context *string `json:"context,omitempty" hcl:"context,optional" cty:"context"`
Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional" cty:"contexts"`
Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional" cty:"dockerfile"`
@@ -705,23 +590,14 @@ type Target struct {
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
NetworkMode *string `json:"-" hcl:"-" cty:"-"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
ShmSize *string `json:"shm-size,omitempty" hcl:"shm-size,optional"`
Ulimits []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
Call *string `json:"call,omitempty" hcl:"call,optional" cty:"call"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides/AddOverrides and docs/bake-reference.md.
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
// linked is a private field to mark a target used as a linked one
linked bool
}
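The comment above notes that `inherits` is the only field `--set` cannot override; fields that were merely *inherited* can still be overridden per target. A minimal sketch, as a hypothetical test:

```go
func TestInheritsOverrideSketch(t *testing.T) {
	fp := File{
		Name: "docker-bake.hcl",
		Data: []byte(`
target "base" {
  args = { VERSION = "1" }
}
target "app" {
  inherits = ["base"]
}`),
	}
	// The override addresses "app" directly even though VERSION came from "base".
	m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"},
		[]string{"app.args.VERSION=2"}, nil)
	require.NoError(t, err)
	require.Equal(t, ptrstr("2"), m["app"].Args["VERSION"])
}
```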
var _ hclparser.WithEvalContexts = &Target{}
var _ hclparser.WithGetName = &Target{}
var _ hclparser.WithEvalContexts = &Group{}
var _ hclparser.WithGetName = &Group{}
func (t *Target) normalize() {
t.Annotations = removeDupes(t.Annotations)
t.Attest = removeAttestDupes(t.Attest)
t.Attest = removeDupes(t.Attest)
t.Tags = removeDupes(t.Tags)
t.Secrets = removeDupes(t.Secrets)
t.SSH = removeDupes(t.SSH)
@@ -730,7 +606,6 @@ func (t *Target) normalize() {
t.CacheTo = removeDupes(t.CacheTo)
t.Outputs = removeDupes(t.Outputs)
t.NoCacheFilter = removeDupes(t.NoCacheFilter)
t.Ulimits = removeDupes(t.Ulimits)
for k, v := range t.Contexts {
if v == "" {
@@ -782,15 +657,8 @@ func (t *Target) Merge(t2 *Target) {
if t2.Target != nil {
t.Target = t2.Target
}
if t2.Call != nil {
t.Call = t2.Call
}
if t2.Annotations != nil { // merge
t.Annotations = append(t.Annotations, t2.Annotations...)
}
if t2.Attest != nil { // merge
t.Attest = append(t.Attest, t2.Attest...)
t.Attest = removeAttestDupes(t.Attest)
}
if t2.Secrets != nil { // merge
t.Secrets = append(t.Secrets, t2.Secrets...)
@@ -822,15 +690,6 @@ func (t *Target) Merge(t2 *Target) {
if t2.NoCacheFilter != nil { // merge
t.NoCacheFilter = append(t.NoCacheFilter, t2.NoCacheFilter...)
}
if t2.ShmSize != nil { // no merge
t.ShmSize = t2.ShmSize
}
if t2.Ulimits != nil { // merge
t.Ulimits = append(t.Ulimits, t2.Ulimits...)
}
if t2.Description != "" {
t.Description = t2.Description
}
t.Inherits = append(t.Inherits, t2.Inherits...)
}
@@ -875,8 +734,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.CacheTo = o.ArrValue
case "target":
t.Target = &value
case "call":
t.Call = &value
case "secrets":
t.Secrets = o.ArrValue
case "ssh":
@@ -885,8 +742,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Platforms = o.ArrValue
case "output":
t.Outputs = o.ArrValue
case "annotations":
t.Annotations = append(t.Annotations, o.ArrValue...)
case "attest":
t.Attest = append(t.Attest, o.ArrValue...)
case "no-cache":
@@ -897,10 +752,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.NoCache = &noCache
case "no-cache-filter":
t.NoCacheFilter = o.ArrValue
case "shm-size":
t.ShmSize = &value
case "ulimits":
t.Ulimits = o.ArrValue
case "pull":
pull, err := strconv.ParseBool(value)
if err != nil {
@@ -908,17 +759,19 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
}
t.Pull = &pull
case "push":
push, err := strconv.ParseBool(value)
_, err := strconv.ParseBool(value)
if err != nil {
return errors.Errorf("invalid value %s for boolean key push", value)
}
t.Outputs = setPushOverride(t.Outputs, push)
case "load":
load, err := strconv.ParseBool(value)
if err != nil {
return errors.Errorf("invalid value %s for boolean key load", value)
if len(t.Outputs) == 0 {
t.Outputs = append(t.Outputs, "type=image,push=true")
} else {
for i, output := range t.Outputs {
if typ := parseOutputType(output); typ == "image" || typ == "registry" {
t.Outputs[i] = t.Outputs[i] + ",push=" + value
}
}
}
t.Outputs = setLoadOverride(t.Outputs, load)
default:
return errors.Errorf("unknown key: %s", keys[0])
}
@@ -926,128 +779,13 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
return nil
}
func (g *Group) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
content, _, err := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
})
if err != nil {
return nil, err
}
if _, ok := content.Attributes["matrix"]; ok {
return nil, errors.Errorf("matrix is not supported for groups")
}
return []*hcl.EvalContext{ectx}, nil
}
func (t *Target) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
content, _, err := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
})
if err != nil {
return nil, err
}
attr, ok := content.Attributes["matrix"]
if !ok {
return []*hcl.EvalContext{ectx}, nil
}
if diags := loadDeps(attr.Expr); diags.HasErrors() {
return nil, diags
}
value, err := attr.Expr.Value(ectx)
if err != nil {
return nil, err
}
if !value.Type().IsMapType() && !value.Type().IsObjectType() {
return nil, errors.Errorf("matrix must be a map")
}
matrix := value.AsValueMap()
ectxs := []*hcl.EvalContext{ectx}
for k, expr := range matrix {
if !expr.CanIterateElements() {
return nil, errors.Errorf("matrix values must be a list")
}
ectxs2 := []*hcl.EvalContext{}
for _, v := range expr.AsValueSlice() {
for _, e := range ectxs {
e2 := ectx.NewChild()
e2.Variables = make(map[string]cty.Value)
if e != ectx {
for k, v := range e.Variables {
e2.Variables[k] = v
}
}
e2.Variables[k] = v
ectxs2 = append(ectxs2, e2)
}
}
ectxs = ectxs2
}
return ectxs, nil
}
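In other words, each `matrix` key multiplies the evaluation contexts, so one block fans out into a target per combination and `name` disambiguates the results. A minimal sketch of the expected fan-out, assuming the rename-to-group handling in `ParseFiles` shown earlier:

```go
func TestMatrixFanOutSketch(t *testing.T) {
	fp := File{
		Name: "docker-bake.hcl",
		Data: []byte(`
target "app" {
  matrix = { tgt = ["a", "b"] }
  name   = "app-${tgt}"
  target = tgt
}`),
	}
	// Requesting the original name resolves via the synthesized "app" group.
	m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, nil, nil)
	require.NoError(t, err)
	require.Len(t, m, 2)
	require.Equal(t, "a", *m["app-a"].Target)
	require.Equal(t, "b", *m["app-b"].Target)
}
```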
func (g *Group) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
})
if diags != nil {
return "", diags
}
if _, ok := content.Attributes["name"]; ok {
return "", errors.Errorf("name is not supported for groups")
}
if _, ok := content.Attributes["matrix"]; ok {
return "", errors.Errorf("matrix is not supported for groups")
}
return block.Labels[0], nil
}
func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
})
if diags != nil {
return "", diags
}
attr, ok := content.Attributes["name"]
if !ok {
return block.Labels[0], nil
}
if _, ok := content.Attributes["matrix"]; !ok {
return "", errors.Errorf("name requires matrix")
}
if diags := loadDeps(attr.Expr); diags.HasErrors() {
return "", diags
}
value, diags := attr.Expr.Value(ectx)
if diags != nil {
return "", diags
}
value, err := convert.Convert(value, cty.String)
if err != nil {
return "", err
}
return value.AsString(), nil
}
func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
// make sure local credentials are not loaded multiple times for different targets
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
authProvider := authprovider.NewDockerAuthProvider(dockerConfig, nil)
m2 := make(map[string]build.Options, len(m))
for k, v := range m {
bo, err := toBuildOpt(v, inp)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, authProvider)
m2[k] = *bo
}
return m2, nil
@@ -1065,7 +803,7 @@ func updateContext(t *build.Inputs, inp *Input) {
if strings.HasPrefix(v.Path, "cwd://") || strings.HasPrefix(v.Path, "target:") || strings.HasPrefix(v.Path, "docker-image:") {
continue
}
if build.IsRemoteURL(v.Path) {
if IsRemoteURL(v.Path) {
continue
}
st := llb.Scratch().File(llb.Copy(*inp.State, v.Path, "/"), llb.WithCustomNamef("set context %s to %s", k, v.Path))
@@ -1079,15 +817,10 @@ func updateContext(t *build.Inputs, inp *Input) {
if strings.HasPrefix(t.ContextPath, "cwd://") {
return
}
if build.IsRemoteURL(t.ContextPath) {
if IsRemoteURL(t.ContextPath) {
return
}
st := llb.Scratch().File(
llb.Copy(*inp.State, t.ContextPath, "/", &llb.CopyInfo{
CopyDirContentsOnly: true,
}),
llb.WithCustomNamef("set context to %s", t.ContextPath),
)
st := llb.Scratch().File(llb.Copy(*inp.State, t.ContextPath, "/"), llb.WithCustomNamef("set context to %s", t.ContextPath))
t.ContextState = &st
}
@@ -1120,7 +853,7 @@ func validateContextsEntitlements(t build.Inputs, inp *Input) error {
}
func checkPath(p string) error {
if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
if IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
return nil
}
p, err := filepath.EvalSymlinks(p)
@@ -1130,10 +863,6 @@ func checkPath(p string) error {
}
return err
}
p, err = filepath.Abs(p)
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return err
@@ -1142,8 +871,7 @@ func checkPath(p string) error {
if err != nil {
return err
}
parts := strings.Split(rel, string(os.PathSeparator))
if parts[0] == ".." {
if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
}
return nil
@@ -1161,75 +889,17 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.Context != nil {
contextPath = *t.Context
}
if !strings.HasPrefix(contextPath, "cwd://") && !build.IsRemoteURL(contextPath) {
if !strings.HasPrefix(contextPath, "cwd://") && !IsRemoteURL(contextPath) {
contextPath = path.Clean(contextPath)
}
dockerfilePath := "Dockerfile"
if t.Dockerfile != nil {
dockerfilePath = *t.Dockerfile
}
if !strings.HasPrefix(dockerfilePath, "cwd://") {
dockerfilePath = path.Clean(dockerfilePath)
}
bi := build.Inputs{
ContextPath: contextPath,
DockerfilePath: dockerfilePath,
NamedContexts: toNamedContexts(t.Contexts),
if !isRemoteResource(contextPath) && !path.IsAbs(dockerfilePath) {
dockerfilePath = path.Join(contextPath, dockerfilePath)
}
if t.DockerfileInline != nil {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if strings.HasPrefix(bi.DockerfilePath, "cwd://") {
// If Dockerfile is local for a remote invocation, we first check if
// it's not outside the working directory and then resolve it to an
// absolute path.
bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
if err := checkPath(bi.DockerfilePath); err != nil {
return nil, err
}
var err error
bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
if err != nil {
return nil, err
}
} else if !build.IsRemoteURL(bi.DockerfilePath) && strings.HasPrefix(bi.ContextPath, "cwd://") && (inp != nil && build.IsRemoteURL(inp.URL)) {
// We don't currently support reading a remote Dockerfile with a local
// context when doing a remote invocation because we automatically
// derive the dockerfile from the context at the moment:
//
// target "default" {
// context = BAKE_CMD_CONTEXT
// dockerfile = "Dockerfile.app"
// }
//
// > docker buildx bake https://github.com/foo/bar.git
// failed to solve: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount3004544897/Dockerfile.app: no such file or directory
//
// To avoid mistakenly reading a local Dockerfile, we check if the
// Dockerfile exists locally and if so, we error out.
if _, err := os.Stat(filepath.Join(path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://")), bi.DockerfilePath)); err == nil {
return nil, errors.Errorf("reading a dockerfile for a remote build invocation is currently not supported")
}
}
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
}
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
}
}
if err := validateContextsEntitlements(bi, inp); err != nil {
return nil, err
}
t.Context = &bi.ContextPath
args := map[string]string{}
for k, v := range t.Args {
@@ -1259,13 +929,31 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.NetworkMode != nil {
networkMode = *t.NetworkMode
}
shmSize := new(dockeropts.MemBytes)
if t.ShmSize != nil {
if err := shmSize.Set(*t.ShmSize); err != nil {
return nil, errors.Errorf("invalid value %s for membytes key shm-size", *t.ShmSize)
bi := build.Inputs{
ContextPath: contextPath,
DockerfilePath: dockerfilePath,
NamedContexts: toNamedContexts(t.Contexts),
}
if t.DockerfileInline != nil {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
}
}
if err := validateContextsEntitlements(bi, inp); err != nil {
return nil, err
}
t.Context = &bi.ContextPath
bo := &build.Options{
Inputs: bi,
Tags: t.Tags,
@@ -1276,7 +964,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
Pull: pull,
NetworkMode: networkMode,
Linked: t.linked,
ShmSize: *shmSize,
}
platforms, err := platformutil.Parse(t.Platforms)
@@ -1285,88 +972,52 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
bo.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider(dockerConfig))
secrets, err := buildflags.ParseSecretSpecs(t.Secrets)
if err != nil {
return nil, err
}
secretAttachment, err := controllerapi.CreateSecrets(secrets)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, secretAttachment)
bo.Session = append(bo.Session, secrets)
sshSpecs, err := buildflags.ParseSSHSpecs(t.SSH)
sshSpecs := t.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(contextPath) {
sshSpecs = []string{"default"}
}
ssh, err := buildflags.ParseSSHSpecs(sshSpecs)
if err != nil {
return nil, err
}
if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, sshAttachment)
bo.Session = append(bo.Session, ssh)
if t.Target != nil {
bo.Target = *t.Target
}
if t.Call != nil {
bo.PrintFunc = &build.PrintFunc{
Name: *t.Call,
}
}
cacheImports, err := buildflags.ParseCacheEntry(t.CacheFrom)
if err != nil {
return nil, err
}
bo.CacheFrom = controllerapi.CreateCaches(cacheImports)
bo.CacheFrom = cacheImports
cacheExports, err := buildflags.ParseCacheEntry(t.CacheTo)
if err != nil {
return nil, err
}
bo.CacheTo = controllerapi.CreateCaches(cacheExports)
bo.CacheTo = cacheExports
outputs, err := buildflags.ParseExports(t.Outputs)
outputs, err := buildflags.ParseOutputs(t.Outputs)
if err != nil {
return nil, err
}
bo.Exports, err = controllerapi.CreateExports(outputs)
if err != nil {
return nil, err
}
annotations, err := buildflags.ParseAnnotations(t.Annotations)
if err != nil {
return nil, err
}
for _, e := range bo.Exports {
for k, v := range annotations {
e.Attrs[k.String()] = v
}
}
bo.Exports = outputs
attests, err := buildflags.ParseAttests(t.Attest)
if err != nil {
return nil, err
}
bo.Attests = controllerapi.CreateAttestations(attests)
bo.SourcePolicy, err = build.ReadSourcePolicy()
if err != nil {
return nil, err
}
ulimits := dockeropts.NewUlimitOpt(nil)
for _, field := range t.Ulimits {
if err := ulimits.Set(field); err != nil {
return nil, err
}
}
bo.Ulimits = ulimits
bo.Attests = attests
return bo, nil
}
@@ -1392,109 +1043,27 @@ func removeDupes(s []string) []string {
return s[:i]
}
func removeAttestDupes(s []string) []string {
res := []string{}
m := map[string]int{}
for _, v := range s {
att, err := buildflags.ParseAttest(v)
if err != nil {
res = append(res, v)
continue
}
if i, ok := m[att.Type]; ok {
res[i] = v
} else {
m[att.Type] = len(res)
res = append(res, v)
}
}
return res
}
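For each attest type the later entry wins while keeping the position of the first occurrence. A minimal sketch (hypothetical test, mirroring `TestAttestDuplicates` in the test file below):

```go
func TestRemoveAttestDupesSketch(t *testing.T) {
	in := []string{"type=sbom", "type=sbom,generator=custom", "type=provenance,mode=max"}
	// The second sbom entry replaces the first, in place.
	require.Equal(t,
		[]string{"type=sbom,generator=custom", "type=provenance,mode=max"},
		removeAttestDupes(in))
}
```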
func parseOutput(str string) map[string]string {
fields, err := csvvalue.Fields(str, nil)
if err != nil {
return nil
}
res := map[string]string{}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) == 2 {
res[parts[0]] = parts[1]
}
}
return res
func isRemoteResource(str string) bool {
return urlutil.IsGitURL(str) || urlutil.IsURL(str)
}
func parseOutputType(str string) string {
if out := parseOutput(str); out != nil {
if v, ok := out["type"]; ok {
return v
csvReader := csv.NewReader(strings.NewReader(str))
fields, err := csvReader.Read()
if err != nil {
return ""
}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) == 2 {
if parts[0] == "type" {
return parts[1]
}
}
}
return ""
}
func setPushOverride(outputs []string, push bool) []string {
var out []string
setPush := true
for _, output := range outputs {
typ := parseOutputType(output)
if typ == "image" || typ == "registry" {
// no need to set push if image or registry types are already defined
setPush = false
if typ == "registry" {
if !push {
// don't set registry output if "push" is false
continue
}
// no need to set "push" attribute to true for registry
out = append(out, output)
continue
}
out = append(out, output+",push="+strconv.FormatBool(push))
} else {
if typ != "docker" {
// if there is any output that is not docker, don't set "push"
setPush = false
}
out = append(out, output)
}
}
if push && setPush {
out = append(out, "type=image,push=true")
}
return out
}
func setLoadOverride(outputs []string, load bool) []string {
if !load {
return outputs
}
setLoad := true
for _, output := range outputs {
if typ := parseOutputType(output); typ == "docker" {
if v := parseOutput(output); v != nil {
// dest set means we want to output as tar so don't set load
if _, ok := v["dest"]; !ok {
setLoad = false
break
}
}
} else if typ != "image" && typ != "registry" && typ != "oci" {
// if there is any output that is not an image, registry
// or oci, don't set "load" similar to push override
setLoad = false
break
}
}
if setLoad {
outputs = append(outputs, "type=docker")
}
return outputs
}
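Taken together, these two helpers implement the `--set *.push=...` and `--set *.load=...` semantics: push annotates existing image outputs (or adds one), while load adds a `type=docker` output when nothing incompatible is present. A minimal sketch of the expected behavior:

```go
func TestPushLoadHelpersSketch(t *testing.T) {
	// push=true annotates an existing image output rather than adding one.
	require.Equal(t,
		[]string{"type=image,compression=zstd,push=true"},
		setPushOverride([]string{"type=image,compression=zstd"}, true))
	// A registry output is dropped entirely when push=false.
	require.Empty(t, setPushOverride([]string{"type=registry"}, false))
	// load=true adds a docker output alongside a registry output.
	require.Equal(t,
		[]string{"type=registry", "type=docker"},
		setLoadOverride([]string{"type=registry"}, true))
}
```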
func validateTargetName(name string) error {
if !targetNamePattern.MatchString(name) {
return errors.Errorf("only %q are allowed", validTargetNameChars)

View File

@@ -2,17 +2,16 @@ package bake
import (
"context"
"os"
"path/filepath"
"sort"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestReadTargets(t *testing.T) {
t.Parallel()
fp := File{
Name: "config.hcl",
Data: []byte(`
@@ -22,8 +21,6 @@ target "webDEP" {
VAR_BOTH = "webDEP"
}
no-cache = true
shm-size = "128m"
ulimits = ["nofile=1024:1024"]
}
target "webapp" {
@@ -38,7 +35,6 @@ target "webapp" {
ctx := context.TODO()
t.Run("NoOverrides", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
@@ -47,8 +43,6 @@ target "webapp" {
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, true, *m["webapp"].NoCache)
require.Equal(t, "128m", *m["webapp"].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, m["webapp"].Ulimits)
require.Nil(t, m["webapp"].Pull)
require.Equal(t, 1, len(g))
@@ -56,7 +50,6 @@ target "webapp" {
})
t.Run("InvalidTargetOverrides", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
require.NotNil(t, err)
require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
@@ -98,7 +91,6 @@ target "webapp" {
// building leaf but overriding parent fields
t.Run("parent", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
"webDEP.args.VAR_INHERITED=override",
"webDEP.args.VAR_BOTH=override",
@@ -113,7 +105,6 @@ target "webapp" {
})
t.Run("ContextOverride", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
require.NotNil(t, err)
@@ -125,7 +116,6 @@ target "webapp" {
})
t.Run("NoCacheOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"}, nil)
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].NoCache)
@@ -133,14 +123,7 @@ target "webapp" {
require.Equal(t, []string{"webapp"}, g["default"].Targets)
})
t.Run("ShmSizeOverride", func(t *testing.T) {
m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.shm-size=256m"}, nil)
require.NoError(t, err)
require.Equal(t, "256m", *m["webapp"].ShmSize)
})
t.Run("PullOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].Pull)
@@ -149,7 +132,6 @@ target "webapp" {
})
t.Run("PatternOverride", func(t *testing.T) {
t.Parallel()
// same check for two cases
multiTargetCheck := func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
require.NoError(t, err)
@@ -217,252 +199,48 @@ target "webapp" {
}
func TestPushOverride(t *testing.T) {
t.Run("empty output", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
})
t.Parallel()
t.Run("type image", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,compression=zstd"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
})
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
t.Run("type image push false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
fp = File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,compression=zstd"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
})
}
ctx = context.TODO()
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
fp = File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=registry", m["app"].Outputs[0])
})
}
ctx = context.TODO()
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
t.Run("type registry push false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
require.Equal(t, 0, len(m["app"].Outputs))
})
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 1, len(m["bar"].Outputs))
require.Equal(t, []string{"type=image,push=true"}, m["bar"].Outputs)
})
}
func TestLoadOverride(t *testing.T) {
t.Run("empty output", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=docker", m["app"].Outputs[0])
})
t.Run("type docker", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=docker"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, []string{"type=docker"}, m["app"].Outputs)
})
t.Run("type image", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=image", "type=docker"}, m["app"].Outputs)
})
t.Run("type image load false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=false"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, []string{"type=image"}, m["app"].Outputs)
})
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=registry", "type=docker"}, m["app"].Outputs)
})
t.Run("type oci", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=oci,dest=out"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=oci,dest=out", "type=docker"}, m["app"].Outputs)
})
t.Run("type docker with dest", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=docker,dest=out"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=docker,dest=out", "type=docker"}, m["app"].Outputs)
})
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 1, len(m["bar"].Outputs))
require.Equal(t, []string{"type=docker"}, m["bar"].Outputs)
})
}
func TestLoadAndPushOverride(t *testing.T) {
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true", "*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
sort.Strings(m["foo"].Outputs)
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 2, len(m["bar"].Outputs))
sort.Strings(m["bar"].Outputs)
require.Equal(t, []string{"type=docker", "type=image,push=true"}, m["bar"].Outputs)
})
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=registry" ]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo"}, []string{"*.load=true", "*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
require.Equal(t, 2, len(m["foo"].Outputs))
sort.Strings(m["foo"].Outputs)
require.Equal(t, []string{"type=docker", "type=registry"}, m["foo"].Outputs)
})
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
}
func TestReadTargetsCompose(t *testing.T) {
@@ -589,7 +367,7 @@ services:
require.Equal(t, []string{"web_app"}, g["default"].Targets)
}
func TestHCLContextCwdPrefix(t *testing.T) {
func TestHCLCwdPrefix(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
@@ -602,49 +380,18 @@ func TestHCLContextCwdPrefix(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.Equal(t, 1, len(m))
_, ok := m["app"]
require.True(t, ok)
_, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, "test", *m["app"].Dockerfile)
require.Equal(t, "foo", *m["app"].Context)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
assert.Equal(t, "test", *m["app"].Dockerfile)
assert.Equal(t, "foo", *m["app"].Context)
assert.Equal(t, "foo/test", bo["app"].Inputs.DockerfilePath)
assert.Equal(t, "foo", bo["app"].Inputs.ContextPath)
}
func TestHCLDockerfileCwdPrefix(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
context = "."
dockerfile = "cwd://Dockerfile.app"
}`),
}
ctx := context.TODO()
cwd, err := os.Getwd()
require.NoError(t, err)
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
assert.Equal(t, "cwd://Dockerfile.app", *m["app"].Dockerfile)
assert.Equal(t, ".", *m["app"].Context)
assert.Equal(t, filepath.Join(cwd, "Dockerfile.app"), bo["app"].Inputs.DockerfilePath)
assert.Equal(t, ".", bo["app"].Inputs.ContextPath)
}
func TestOverrideMerge(t *testing.T) {
@@ -1528,7 +1275,7 @@ services:
v2: "bar"
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.foo"},
{Data: dt2, Name: "c2.bar"},
}, nil)
@@ -1611,117 +1358,3 @@ func TestJSONNullVars(t *testing.T) {
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
}
func TestReadLocalFilesDefault(t *testing.T) {
tests := []struct {
filenames []string
expected []string
}{
{
filenames: []string{"abc.yml", "docker-compose.yml"},
expected: []string{"docker-compose.yml"},
},
{
filenames: []string{"test.foo", "compose.yml", "docker-bake.hcl"},
expected: []string{"compose.yml", "docker-bake.hcl"},
},
{
filenames: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
expected: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
},
{
filenames: []string{"test.txt", "compsoe.yaml"}, // intentional misspell
expected: []string{},
},
}
pwd, err := os.Getwd()
require.NoError(t, err)
for _, tt := range tests {
t.Run(strings.Join(tt.filenames, "-"), func(t *testing.T) {
dir := t.TempDir()
t.Cleanup(func() { _ = os.Chdir(pwd) })
require.NoError(t, os.Chdir(dir))
for _, tf := range tt.filenames {
require.NoError(t, os.WriteFile(tf, []byte(tf), 0644))
}
files, err := ReadLocalFiles(nil, nil, nil)
require.NoError(t, err)
if len(files) == 0 {
require.Equal(t, len(tt.expected), len(files))
} else {
found := false
for _, exp := range tt.expected {
for _, f := range files {
if f.Name == exp {
found = true
break
}
}
require.True(t, found, exp)
}
}
})
}
}
func TestAttestDuplicates(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "default" {
attest = ["type=sbom", "type=sbom,generator=custom", "type=sbom,foo=bar", "type=provenance,mode=max"]
}`),
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
require.Equal(t, []string{"type=sbom,foo=bar", "type=provenance,mode=max"}, m["default"].Attest)
require.NoError(t, err)
opts, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{
"sbom": ptrstr("type=sbom,foo=bar"),
"provenance": ptrstr("type=provenance,mode=max"),
}, opts["default"].Attests)
m, _, err = ReadTargets(ctx, []File{fp}, []string{"default"}, []string{"*.attest=type=sbom,disabled=true"}, nil)
require.Equal(t, []string{"type=sbom,disabled=true", "type=provenance,mode=max"}, m["default"].Attest)
require.NoError(t, err)
opts, err = TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, map[string]*string{
"sbom": nil,
"provenance": ptrstr("type=provenance,mode=max"),
}, opts["default"].Attests)
}
func TestAnnotations(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,name=foo"]
annotations = ["manifest[linux/amd64]:foo=bar"]
}`),
}
ctx := context.TODO()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
bo, err := TargetsToBuildOpt(m, &Input{})
require.NoError(t, err)
require.Equal(t, 1, len(g))
require.Equal(t, []string{"app"}, g["default"].Targets)
require.Equal(t, 1, len(m))
require.Contains(t, m, "app")
require.Equal(t, "type=image,name=foo", m["app"].Outputs[0])
require.Equal(t, "manifest[linux/amd64]:foo=bar", m["app"].Annotations[0])
require.Len(t, bo["app"].Exports, 1)
require.Equal(t, "bar", bo["app"].Exports[0].Attrs["annotation-manifest[linux/amd64].foo"])
}

View File

@@ -1,18 +1,13 @@
package bake
import (
"context"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"github.com/compose-spec/compose-go/v2/dotenv"
"github.com/compose-spec/compose-go/v2/loader"
composetypes "github.com/compose-spec/compose-go/v2/types"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/compose-spec/compose-go/dotenv"
"github.com/compose-spec/compose-go/loader"
compose "github.com/compose-spec/compose-go/types"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
)
@@ -22,9 +17,9 @@ func ParseComposeFiles(fs []File) (*Config, error) {
if err != nil {
return nil, err
}
var cfgs []composetypes.ConfigFile
var cfgs []compose.ConfigFile
for _, f := range fs {
cfgs = append(cfgs, composetypes.ConfigFile{
cfgs = append(cfgs, compose.ConfigFile{
Filename: f.Name,
Content: f.Data,
})
@@ -32,17 +27,12 @@ func ParseComposeFiles(fs []File) (*Config, error) {
return ParseCompose(cfgs, envs)
}
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
cfg, err := loader.Load(compose.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
options.Profiles = []string{"*"}
})
if err != nil {
return nil, err
@@ -56,7 +46,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g := &Group{Name: "default"}
for _, s := range cfg.Services {
s := s
if s.Build == nil {
continue
}
@@ -76,44 +65,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
dockerfilePath := s.Build.Dockerfile
dockerfilePathP = &dockerfilePath
}
var dockerfileInlineP *string
if s.Build.DockerfileInline != "" {
dockerfileInline := s.Build.DockerfileInline
dockerfileInlineP = &dockerfileInline
}
var additionalContexts map[string]string
if s.Build.AdditionalContexts != nil {
additionalContexts = map[string]string{}
for k, v := range s.Build.AdditionalContexts {
additionalContexts[k] = v
}
}
var shmSize *string
if s.Build.ShmSize > 0 {
shmSizeBytes := dockeropts.MemBytes(s.Build.ShmSize)
shmSizeStr := shmSizeBytes.String()
shmSize = &shmSizeStr
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
ulimit, err := units.ParseUlimit(fmt.Sprintf("%s=%d:%d", n, u.Soft, u.Hard))
if err != nil {
return nil, err
}
ulimits = append(ulimits, ulimit.String())
}
}
var ssh []string
for _, bkey := range s.Build.SSH {
sshkey := composeToBuildkitSSH(bkey)
ssh = append(ssh, sshkey)
}
sort.Strings(ssh)
var secrets []string
for _, bs := range s.Build.Secrets {
@@ -133,13 +84,11 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g.Targets = append(g.Targets, targetName)
t := &Target{
Name: targetName,
Context: contextPathP,
Contexts: additionalContexts,
Dockerfile: dockerfilePathP,
DockerfileInline: dockerfileInlineP,
Tags: s.Build.Tags,
Labels: labels,
Name: targetName,
Context: contextPathP,
Dockerfile: dockerfilePathP,
Tags: s.Build.Tags,
Labels: labels,
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
if val, ok := s.Environment[val]; ok && val != nil {
return *val, true
@@ -150,10 +99,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
CacheFrom: s.Build.CacheFrom,
CacheTo: s.Build.CacheTo,
NetworkMode: &s.Build.Network,
SSH: ssh,
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err
@@ -191,15 +137,14 @@ func validateComposeFile(dt []byte, fn string) (bool, error) {
}
func validateCompose(dt []byte, envs map[string]string) error {
_, err := loader.Load(composetypes.ConfigDetails{
ConfigFiles: []composetypes.ConfigFile{
_, err := loader.Load(compose.ConfigDetails{
ConfigFiles: []compose.ConfigFile{
{
Content: dt,
},
},
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
// consistency is checked later in ParseCompose to ensure multiple
// compose files can be merged together
@@ -240,7 +185,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return nil, err
}
envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
envs, err := dotenv.UnmarshalBytes(dt)
if err != nil {
return nil, err
}
@@ -255,7 +200,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return curenv, nil
}
func flatten(in composetypes.MappingWithEquals) map[string]*string {
func flatten(in compose.MappingWithEquals) map[string]*string {
if len(in) == 0 {
return nil
}
@@ -284,7 +229,7 @@ type xbake struct {
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
Contexts stringMap `yaml:"contexts,omitempty"`
// don't forget to update documentation if you add a new field:
// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
}
type stringMap map[string]string
@@ -334,7 +279,6 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
}
if len(xb.SSH) > 0 {
t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
sort.Strings(t.SSH)
}
if len(xb.Platforms) > 0 {
t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
@@ -360,8 +304,8 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
// composeToBuildkitSecret converts a secret from compose format to buildkit's
// csv format.
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
if psecret.External {
func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
if psecret.External.External {
return "", errors.Errorf("unsupported external secret %s", psecret.Name)
}
@@ -378,17 +322,3 @@ func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret compo
return strings.Join(bkattrs, ","), nil
}
// composeToBuildkitSSH converts an SSH key from compose format to buildkit's
// csv format.
func composeToBuildkitSSH(sshKey composetypes.SSHKey) string {
var bkattrs []string
bkattrs = append(bkattrs, sshKey.ID)
if sshKey.Path != "" {
bkattrs = append(bkattrs, sshKey.Path)
}
return strings.Join(bkattrs, "=")
}
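A hypothetical example of the conversion above, assuming the `composetypes.SSHKey` shape implied by the function body (an `ID` plus an optional `Path`):

```go
func Example_composeToBuildkitSSH() {
	fmt.Println(composeToBuildkitSSH(composetypes.SSHKey{ID: "key", Path: "path/to/key"}))
	fmt.Println(composeToBuildkitSSH(composetypes.SSHKey{ID: "default"}))
	// Output:
	// key=path/to/key
	// default
}
```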

View File

@@ -6,7 +6,7 @@ import (
"sort"
"testing"
composetypes "github.com/compose-spec/compose-go/v2/types"
compose "github.com/compose-spec/compose-go/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -21,8 +21,6 @@ services:
webapp:
build:
context: ./dir
additional_contexts:
foo: ./bar
dockerfile: Dockerfile-alternate
network:
none
@@ -32,19 +30,9 @@ services:
- type=local,src=path/to/cache
cache_to:
- type=local,dest=path/to/cache
ssh:
- key=path/to/key
- default
secrets:
- token
- aws
webapp2:
profiles:
- test
build:
context: ./dir
dockerfile_inline: |
FROM alpine
secrets:
token:
environment: ENV_TOKEN
@@ -52,40 +40,34 @@ secrets:
file: /root/.aws/credentials
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
sort.Strings(c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp", "webapp2"}, c.Groups[0].Targets)
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 3, len(c.Targets))
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, "db", *c.Targets[0].Context)
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, "dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "bar"}, c.Targets[1].Contexts)
require.Equal(t, "./dir", *c.Targets[1].Context)
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[1].SSH)
require.Equal(t, []string{
"id=token,env=ENV_TOKEN",
"id=aws,src=/root/.aws/credentials",
}, c.Targets[1].Secrets)
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, "dir", *c.Targets[2].Context)
require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
}
func TestNoBuildOutOfTreeService(t *testing.T) {
@@ -96,7 +78,7 @@ services:
webapp:
build: ./db
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, 1, len(c.Targets))
@@ -115,7 +97,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
@@ -140,7 +122,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -171,7 +153,7 @@ services:
t.Setenv("BAR", "foo")
t.Setenv("ZZZ_BAR", "zzz_foo")
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
@@ -185,7 +167,7 @@ services:
entrypoint: echo 1
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.Error(t, err)
}
@@ -210,7 +192,7 @@ networks:
gateway: 10.5.0.254
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -227,7 +209,7 @@ services:
- bar
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
}
@@ -264,7 +246,7 @@ networks:
name: test-net
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -282,8 +264,6 @@ services:
- user/app:cache
tags:
- ct-addon:baz
ssh:
key: path/to/key
args:
CT_ECR: foo
CT_TAG: bar
@@ -293,9 +273,6 @@ services:
tags:
- ct-addon:foo
- ct-addon:alp
ssh:
- default
- other=path/to/otherkey
platforms:
- linux/amd64
- linux/arm64
@@ -312,11 +289,6 @@ services:
args:
CT_ECR: foo
CT_TAG: bar
shm_size: 128m
ulimits:
nofile:
soft: 1024
hard: 1024
x-bake:
secret:
- id=mysecret,src=/local/secret
@@ -327,7 +299,7 @@ services:
no-cache: true
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -338,7 +310,6 @@ services:
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key", "other=path/to/otherkey"}, c.Targets[0].SSH)
require.Equal(t, newBool(true), c.Targets[0].Pull)
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
@@ -347,8 +318,6 @@ services:
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
require.Equal(t, newBool(true), c.Targets[1].NoCache)
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
}
func TestComposeExtDedup(t *testing.T) {
@@ -363,8 +332,6 @@ services:
- user/app:cache
tags:
- ct-addon:foo
ssh:
- default
x-bake:
tags:
- ct-addon:foo
@@ -374,18 +341,14 @@ services:
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
ssh:
- default
- key=path/to/key
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
}
func TestEnv(t *testing.T) {
@@ -413,7 +376,7 @@ services:
- ` + envf.Name() + `
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
}
@@ -459,7 +422,7 @@ services:
published: "3306"
protocol: tcp
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -505,7 +468,7 @@ func TestServiceName(t *testing.T) {
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
services:
` + tt.svc + `:
build:
@@ -576,7 +539,7 @@ services:
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
require.Error(t, err)
} else {
@@ -674,103 +637,11 @@ services:
bar: "baz"
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
}
func TestDependsOn(t *testing.T) {
var dt = []byte(`
services:
foo:
build:
context: .
ports:
- 3306:3306
depends_on:
- bar
bar:
build:
context: .
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestInclude(t *testing.T) {
tmpdir := t.TempDir()
err := os.WriteFile(filepath.Join(tmpdir, "compose-foo.yml"), []byte(`
services:
foo:
build:
context: .
target: buildfoo
ports:
- 3306:3306
`), 0644)
require.NoError(t, err)
var dt = []byte(`
include:
- compose-foo.yml
services:
bar:
build:
context: .
target: buildbar
`)
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{
Name: "composetypes.yml",
Data: dt,
}})
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "bar", c.Targets[0].Name)
require.Equal(t, "buildbar", *c.Targets[0].Target)
require.Equal(t, "foo", c.Targets[1].Name)
require.Equal(t, "buildfoo", *c.Targets[1].Target)
}
func TestDevelop(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
develop:
watch:
- path: ./webapp/html
action: sync
target: /var/www
ignore:
- node_modules/
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestCgroup(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
cgroup: private
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
// chdir changes the current working directory to the named directory,
// and then restores the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {
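
The hunk is truncated before the helper's body. As a sketch under that assumption (not the exact buildx code; assumes the file's existing os and testing imports), such a helper typically records the working directory and restores it via t.Cleanup:

// chdirSketch is a hypothetical stand-in for the truncated helper above.
func chdirSketch(t *testing.T, dir string) {
	t.Helper()
	wd, err := os.Getwd() // remember where we started
	if err != nil {
		t.Fatal(err)
	}
	if err := os.Chdir(dir); err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		if err := os.Chdir(wd); err != nil {
			t.Fatal(err)
		}
	})
}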


@@ -273,7 +273,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
}
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -285,7 +285,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
t.Setenv("FOO", "def")
c, _, err = ParseFiles([]File{
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -322,7 +322,7 @@ func TestHCLVarsWithVars(t *testing.T) {
}
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -334,7 +334,7 @@ func TestHCLVarsWithVars(t *testing.T) {
t.Setenv("BASE", "new")
c, _, err = ParseFiles([]File{
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -612,7 +612,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
FOO="def"
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -623,7 +623,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
t.Setenv("FOO", "ghi")
c, _, err = ParseFiles([]File{
c, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -634,506 +634,6 @@ func TestHCLMultiFileAttrs(t *testing.T) {
require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
}
func TestHCLMultiFileGlobalAttrs(t *testing.T) {
dt := []byte(`
FOO = "abc"
target "app" {
args = {
v1 = "pre-${FOO}"
}
}
`)
dt2 := []byte(`
FOO = "def"
`)
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
}
func TestHCLDuplicateTarget(t *testing.T) {
dt := []byte(`
target "app" {
dockerfile = "x"
}
target "app" {
dockerfile = "y"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, "y", *c.Targets[0].Dockerfile)
}
func TestHCLRenameTarget(t *testing.T) {
dt := []byte(`
target "abc" {
name = "xyz"
dockerfile = "foo"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "requires matrix")
}
func TestHCLRenameGroup(t *testing.T) {
dt := []byte(`
group "foo" {
name = "bar"
targets = ["x", "y"]
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "not supported")
dt = []byte(`
group "foo" {
matrix = {
name = ["x", "y"]
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "not supported")
}
func TestHCLRenameTargetAttrs(t *testing.T) {
dt := []byte(`
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
target "def" {
dockerfile = target.xyz.dockerfile
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "xyz", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "def", c.Targets[1].Name)
require.Equal(t, "foo", *c.Targets[1].Dockerfile)
dt = []byte(`
target "def" {
dockerfile = target.xyz.dockerfile
}
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
`)
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "def", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "xyz", c.Targets[1].Name)
require.Equal(t, "foo", *c.Targets[1].Dockerfile)
dt = []byte(`
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
target "def" {
dockerfile = target.abc.dockerfile
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "abc")
dt = []byte(`
target "def" {
dockerfile = target.abc.dockerfile
}
target "abc" {
name = "xyz"
matrix = {}
dockerfile = "foo"
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "abc")
}
func TestHCLRenameSplit(t *testing.T) {
dt := []byte(`
target "x" {
name = "y"
matrix = {}
dockerfile = "foo"
}
target "x" {
name = "z"
matrix = {}
dockerfile = "bar"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "y", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
require.Equal(t, "z", c.Targets[1].Name)
require.Equal(t, "bar", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "x", c.Groups[0].Name)
require.Equal(t, []string{"y", "z"}, c.Groups[0].Targets)
}
func TestHCLRenameMultiFile(t *testing.T) {
dt := []byte(`
target "foo" {
name = "bar"
matrix = {}
dockerfile = "x"
}
`)
dt2 := []byte(`
target "foo" {
context = "y"
}
`)
dt3 := []byte(`
target "bar" {
target = "z"
}
`)
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
{Data: dt3, Name: "c3.hcl"},
}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "bar")
require.Equal(t, *c.Targets[0].Dockerfile, "x")
require.Equal(t, *c.Targets[0].Target, "z")
require.Equal(t, c.Targets[1].Name, "foo")
require.Equal(t, *c.Targets[1].Context, "y")
}
func TestHCLMatrixBasic(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["x", "y"]
}
name = foo
dockerfile = "${foo}.Dockerfile"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "x")
require.Equal(t, c.Targets[1].Name, "y")
require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.Equal(t, []string{"x", "y"}, c.Groups[0].Targets)
}
func TestHCLMatrixMultipleKeys(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["a"]
bar = ["b", "c"]
baz = ["d", "e", "f"]
}
name = "${foo}-${bar}-${baz}"
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 6, len(c.Targets))
names := make([]string, len(c.Targets))
for i, t := range c.Targets {
names[i] = t.Name
}
require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, names)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, c.Groups[0].Targets)
}
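
The six expected names are simply the cartesian product of the matrix lists (1 × 2 × 3 combinations). A self-contained sketch of that expansion, independent of the HCL machinery:

package main

import "fmt"

// product expands a matrix of value lists into every combination, the
// same shape of expansion bake performs before naming each target.
func product(lists ...[]string) [][]string {
	out := [][]string{{}}
	for _, list := range lists {
		var next [][]string
		for _, combo := range out {
			for _, v := range list {
				row := append(append([]string{}, combo...), v)
				next = append(next, row)
			}
		}
		out = next
	}
	return out
}

func main() {
	for _, c := range product([]string{"a"}, []string{"b", "c"}, []string{"d", "e", "f"}) {
		fmt.Printf("%s-%s-%s\n", c[0], c[1], c[2]) // a-b-d, a-b-e, ... a-c-f
	}
}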
func TestHCLMatrixLists(t *testing.T) {
dt := []byte(`
target "foo" {
matrix = {
aa = [["aa", "bb"], ["cc", "dd"]]
}
name = aa[0]
args = {
target = "val${aa[1]}"
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "aa", c.Targets[0].Name)
require.Equal(t, ptrstr("valbb"), c.Targets[0].Args["target"])
require.Equal(t, "cc", c.Targets[1].Name)
require.Equal(t, ptrstr("valdd"), c.Targets[1].Args["target"])
}
func TestHCLMatrixMaps(t *testing.T) {
dt := []byte(`
target "foo" {
matrix = {
aa = [
{
foo = "aa"
bar = "bb"
},
{
foo = "cc"
bar = "dd"
}
]
}
name = aa.foo
args = {
target = "val${aa.bar}"
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "aa")
require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
require.Equal(t, c.Targets[1].Name, "cc")
require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
}
func TestHCLMatrixMultipleTargets(t *testing.T) {
dt := []byte(`
target "x" {
matrix = {
foo = ["a", "b"]
}
name = foo
}
target "y" {
matrix = {
bar = ["c", "d"]
}
name = bar
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 4, len(c.Targets))
names := make([]string, len(c.Targets))
for i, t := range c.Targets {
names[i] = t.Name
}
require.ElementsMatch(t, []string{"a", "b", "c", "d"}, names)
require.Equal(t, 2, len(c.Groups))
names = make([]string, len(c.Groups))
for i, c := range c.Groups {
names[i] = c.Name
}
require.ElementsMatch(t, []string{"x", "y"}, names)
for _, g := range c.Groups {
switch g.Name {
case "x":
require.Equal(t, []string{"a", "b"}, g.Targets)
case "y":
require.Equal(t, []string{"c", "d"}, g.Targets)
}
}
}
func TestHCLMatrixDuplicateNames(t *testing.T) {
dt := []byte(`
target "default" {
matrix = {
foo = ["a", "b"]
}
name = "c"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
}
func TestHCLMatrixArgs(t *testing.T) {
dt := []byte(`
a = 1
variable "b" {
default = 2
}
target "default" {
matrix = {
foo = [a, b]
}
name = foo
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "1", c.Targets[0].Name)
require.Equal(t, "2", c.Targets[1].Name)
}
func TestHCLMatrixArgsOverride(t *testing.T) {
dt := []byte(`
variable "ABC" {
default = "def"
}
target "bar" {
matrix = {
aa = split(",", ABC)
}
name = "bar-${aa}"
args = {
foo = aa
}
}
`)
c, _, err := ParseFiles([]File{
{Data: dt, Name: "docker-bake.hcl"},
}, map[string]string{"ABC": "11,22,33"})
require.NoError(t, err)
require.Equal(t, 3, len(c.Targets))
require.Equal(t, "bar-11", c.Targets[0].Name)
require.Equal(t, "bar-22", c.Targets[1].Name)
require.Equal(t, "bar-33", c.Targets[2].Name)
require.Equal(t, ptrstr("11"), c.Targets[0].Args["foo"])
require.Equal(t, ptrstr("22"), c.Targets[1].Args["foo"])
require.Equal(t, ptrstr("33"), c.Targets[2].Args["foo"])
}
func TestHCLMatrixBadTypes(t *testing.T) {
dt := []byte(`
target "default" {
matrix = "test"
}
`)
_, err := ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = ["test"]
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
["a"] = ["b"]
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
1 = 2
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
dt = []byte(`
target "default" {
matrix = {
a = "b"
}
}
`)
_, err = ParseFile(dt, "docker-bake.hcl")
require.Error(t, err)
}
func TestHCLMatrixWithGlobalTarget(t *testing.T) {
dt := []byte(`
target "x" {
tags = ["a", "b"]
}
target "default" {
tags = target.x.tags
matrix = {
dummy = [""]
}
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "x", c.Targets[0].Name)
require.Equal(t, "default", c.Targets[1].Name)
require.Equal(t, []string{"a", "b"}, c.Targets[1].Tags)
}
func TestJSONAttributes(t *testing.T) {
dt := []byte(`{"FOO": "abc", "variable": {"BAR": {"default": "def"}}, "target": { "app": { "args": {"v1": "pre-${FOO}-${BAR}"}} } }`)
@@ -1236,7 +736,7 @@ services:
v2: "bar"
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.yml"},
}, nil)
@@ -1258,7 +758,7 @@ func TestHCLBuiltinVars(t *testing.T) {
}
`)
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
}, map[string]string{
"BAKE_CMD_CONTEXT": "foo",
@@ -1272,7 +772,7 @@ func TestHCLBuiltinVars(t *testing.T) {
}
func TestCombineHCLAndJSONTargets(t *testing.T) {
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{
Name: "docker-bake.hcl",
Data: []byte(`
@@ -1348,7 +848,7 @@ target "b" {
}
func TestCombineHCLAndJSONVars(t *testing.T) {
c, _, err := ParseFiles([]File{
c, err := ParseFiles([]File{
{
Name: "docker-bake.hcl",
Data: []byte(`
@@ -1445,41 +945,8 @@ func TestVarUnsupportedType(t *testing.T) {
require.Error(t, err)
}
func TestHCLIndexOfFunc(t *testing.T) {
dt := []byte(`
variable "APP_VERSIONS" {
default = [
"1.42.4",
"1.42.3"
]
}
target "default" {
args = {
APP_VERSION = app_version
}
matrix = {
app_version = APP_VERSIONS
}
name="app-${replace(app_version, ".", "-")}"
tags = [
"app:${app_version}",
indexof(APP_VERSIONS, app_version) == 0 ? "app:latest" : "",
]
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "app-1-42-4", c.Targets[0].Name)
require.Equal(t, "app:latest", c.Targets[0].Tags[1])
require.Equal(t, "app-1-42-3", c.Targets[1].Name)
require.Empty(t, c.Targets[1].Tags[1])
}
func ptrstr(s interface{}) *string {
var n *string
var n *string = nil
if reflect.ValueOf(s).Kind() == reflect.String {
ss := s.(string)
n = &ss


@@ -1,9 +1,7 @@
package hclparser
import (
"encoding/binary"
"fmt"
"hash/fnv"
"math"
"math/big"
"reflect"
@@ -25,11 +23,9 @@ type Opt struct {
}
type variable struct {
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Description string `json:"description,omitempty" hcl:"description,optional"`
Body hcl.Body `json:"-" hcl:",body"`
Remain hcl.Body `json:"-" hcl:",remain"`
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Body hcl.Body `json:"-" hcl:",body"`
}
type functionDef struct {
@@ -53,38 +49,29 @@ type parser struct {
attrs map[string]*hcl.Attribute
funcs map[string]*functionDef
blocks map[string]map[string][]*hcl.Block
blockValues map[*hcl.Block][]reflect.Value
blockEvalCtx map[*hcl.Block][]*hcl.EvalContext
blockNames map[*hcl.Block][]string
blockTypes map[string]reflect.Type
blocks map[string]map[string][]*hcl.Block
blockValues map[*hcl.Block]reflect.Value
blockTypes map[string]reflect.Type
ectx *hcl.EvalContext
progressV map[uint64]struct{}
progressF map[uint64]struct{}
progressB map[uint64]map[string]struct{}
doneB map[uint64]map[string]struct{}
}
type WithEvalContexts interface {
GetEvalContexts(base *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error)
}
type WithGetName interface {
GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
progress map[string]struct{}
progressF map[string]struct{}
progressB map[*hcl.Block]map[string]struct{}
doneF map[string]struct{}
doneB map[*hcl.Block]map[string]struct{}
}
var errUndefined = errors.New("undefined")
func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
fns, hcldiags := funcCalls(exp)
if hcldiags.HasErrors() {
return hcldiags
}
for _, fn := range fns {
if err := p.resolveFunction(ectx, fn); err != nil {
if err := p.resolveFunction(fn); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
@@ -137,16 +124,14 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
}
}
}
for _, block := range blocks {
if err := p.resolveBlock(block, target); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
if err := p.resolveBlock(blocks[0], target); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
}
} else {
if err := p.resolveValue(ectx, v.RootName()); err != nil {
if err := p.resolveValue(v.RootName()); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
continue
}
@@ -160,21 +145,21 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
// resolveFunction forces evaluation of a function, storing the result into the
// parser.
func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
if _, ok := p.ectx.Functions[name]; ok {
return nil
}
if _, ok := ectx.Functions[name]; ok {
func (p *parser) resolveFunction(name string) error {
if _, ok := p.doneF[name]; ok {
return nil
}
f, ok := p.funcs[name]
if !ok {
return errors.Wrapf(errUndefined, "function %q does not exist", name)
if _, ok := p.ectx.Functions[name]; ok {
return nil
}
return errors.Wrapf(errUndefined, "function %q does not exit", name)
}
if _, ok := p.progressF[key(ectx, name)]; ok {
if _, ok := p.progressF[name]; ok {
return errors.Errorf("function cycle not allowed for %s", name)
}
p.progressF[key(ectx, name)] = struct{}{}
p.progressF[name] = struct{}{}
if f.Result == nil {
return errors.Errorf("empty result not allowed for %s", name)
@@ -219,7 +204,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
return diags
}
if diags := p.loadDeps(p.ectx, f.Result.Expr, params, false); diags.HasErrors() {
if diags := p.loadDeps(f.Result.Expr, params, false); diags.HasErrors() {
return diags
}
@@ -229,6 +214,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
if diags.HasErrors() {
return diags
}
p.doneF[name] = struct{}{}
p.ectx.Functions[name] = v
return nil
@@ -236,17 +222,14 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
// resolveValue forces evaluation of a named value, storing the result into the
// parser.
func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
func (p *parser) resolveValue(name string) (err error) {
if _, ok := p.ectx.Variables[name]; ok {
return nil
}
if _, ok := ectx.Variables[name]; ok {
return nil
}
if _, ok := p.progressV[key(ectx, name)]; ok {
if _, ok := p.progress[name]; ok {
return errors.Errorf("variable cycle not allowed for %s", name)
}
p.progressV[key(ectx, name)] = struct{}{}
p.progress[name] = struct{}{}
var v *cty.Value
defer func() {
@@ -259,10 +242,9 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
if _, builtin := p.opt.Vars[name]; !ok && !builtin {
vr, ok := p.vars[name]
if !ok {
return errors.Wrapf(errUndefined, "variable %q does not exist", name)
return errors.Wrapf(errUndefined, "variable %q does not exit", name)
}
def = vr.Default
ectx = p.ectx
}
if def == nil {
@@ -275,10 +257,10 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
return
}
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
if diags := p.loadDeps(def.Expr, nil, true); diags.HasErrors() {
return diags
}
vv, diags := def.Expr.Value(ectx)
vv, diags := def.Expr.Value(p.ectx)
if diags.HasErrors() {
return diags
}
@@ -317,237 +299,147 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
// target schema is provided, only the attributes and blocks present in the
// schema will be evaluated.
func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err error) {
// prepare the variable map for this type
if _, ok := p.ectx.Variables[block.Type]; !ok {
p.ectx.Variables[block.Type] = cty.MapValEmpty(cty.Map(cty.String))
name := block.Labels[0]
if err := p.opt.ValidateLabel(name); err != nil {
return wrapErrorDiagnostic("Invalid name", err, &block.LabelRanges[0], &block.LabelRanges[0])
}
// prepare the output destination and evaluation context
if _, ok := p.doneB[block]; !ok {
p.doneB[block] = map[string]struct{}{}
}
if _, ok := p.progressB[block]; !ok {
p.progressB[block] = map[string]struct{}{}
}
if target != nil {
// filter out attributes and blocks that are already evaluated
original := target
target = &hcl.BodySchema{}
for _, a := range original.Attributes {
if _, ok := p.doneB[block][a.Name]; !ok {
target.Attributes = append(target.Attributes, a)
}
}
for _, b := range original.Blocks {
if _, ok := p.doneB[block][b.Type]; !ok {
target.Blocks = append(target.Blocks, b)
}
}
if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
return nil
}
}
if target != nil {
// detect reference cycles
for _, a := range target.Attributes {
if _, ok := p.progressB[block][a.Name]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
}
}
for _, b := range target.Blocks {
if _, ok := p.progressB[block][b.Type]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
}
}
for _, a := range target.Attributes {
p.progressB[block][a.Name] = struct{}{}
}
for _, b := range target.Blocks {
p.progressB[block][b.Type] = struct{}{}
}
}
// create a filtered body that contains only the target properties
body := func() hcl.Body {
if target != nil {
return FilterIncludeBody(block.Body, target)
}
filter := &hcl.BodySchema{}
for k := range p.doneB[block] {
filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
}
return FilterExcludeBody(block.Body, filter)
}
// load dependencies from all targeted properties
t, ok := p.blockTypes[block.Type]
if !ok {
return nil
}
var outputs []reflect.Value
var ectxs []*hcl.EvalContext
schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
content, _, diag := body().PartialContent(schema)
if diag.HasErrors() {
return diag
}
for _, a := range content.Attributes {
diag := p.loadDeps(a.Expr, nil, true)
if diag.HasErrors() {
return diag
}
}
for _, b := range content.Blocks {
err := p.resolveBlock(b, nil)
if err != nil {
return err
}
}
// decode!
var output reflect.Value
if prev, ok := p.blockValues[block]; ok {
outputs = prev
ectxs = p.blockEvalCtx[block]
output = prev
} else {
if v, ok := reflect.New(t).Interface().(WithEvalContexts); ok {
ectxs, err = v.GetEvalContexts(p.ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
return p.loadDeps(p.ectx, expr, nil, true)
})
if err != nil {
return err
}
for _, ectx := range ectxs {
if ectx != p.ectx && ectx.Parent() != p.ectx {
return errors.Errorf("EvalContext must return a context with the correct parent")
}
}
} else {
ectxs = append([]*hcl.EvalContext{}, p.ectx)
output = reflect.New(t)
setLabel(output, block.Labels[0]) // early attach labels, so we can reference them
}
diag = gohcl.DecodeBody(body(), p.ectx, output.Interface())
if diag.HasErrors() {
return diag
}
p.blockValues[block] = output
// mark all targeted properties as done
for _, a := range content.Attributes {
p.doneB[block][a.Name] = struct{}{}
}
for _, b := range content.Blocks {
p.doneB[block][b.Type] = struct{}{}
}
if target != nil {
for _, a := range target.Attributes {
p.doneB[block][a.Name] = struct{}{}
}
for range ectxs {
outputs = append(outputs, reflect.New(t))
for _, b := range target.Blocks {
p.doneB[block][b.Type] = struct{}{}
}
}
p.blockValues[block] = outputs
p.blockEvalCtx[block] = ectxs
for i, output := range outputs {
target := target
ectx := ectxs[i]
name := block.Labels[0]
if names, ok := p.blockNames[block]; ok {
name = names[i]
}
if _, ok := p.doneB[key(block, ectx)]; !ok {
p.doneB[key(block, ectx)] = map[string]struct{}{}
}
if _, ok := p.progressB[key(block, ectx)]; !ok {
p.progressB[key(block, ectx)] = map[string]struct{}{}
}
if target != nil {
// filter out attributes and blocks that are already evaluated
original := target
target = &hcl.BodySchema{}
for _, a := range original.Attributes {
if _, ok := p.doneB[key(block, ectx)][a.Name]; !ok {
target.Attributes = append(target.Attributes, a)
}
}
for _, b := range original.Blocks {
if _, ok := p.doneB[key(block, ectx)][b.Type]; !ok {
target.Blocks = append(target.Blocks, b)
}
}
if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
return nil
}
}
if target != nil {
// detect reference cycles
for _, a := range target.Attributes {
if _, ok := p.progressB[key(block, ectx)][a.Name]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
}
}
for _, b := range target.Blocks {
if _, ok := p.progressB[key(block, ectx)][b.Type]; ok {
return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
}
}
for _, a := range target.Attributes {
p.progressB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range target.Blocks {
p.progressB[key(block, ectx)][b.Type] = struct{}{}
}
}
// create a filtered body that contains only the target properties
body := func() hcl.Body {
if target != nil {
return FilterIncludeBody(block.Body, target)
}
filter := &hcl.BodySchema{}
for k := range p.doneB[key(block, ectx)] {
filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
}
return FilterExcludeBody(block.Body, filter)
}
// load dependencies from all targeted properties
schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
content, _, diag := body().PartialContent(schema)
if diag.HasErrors() {
return diag
}
for _, a := range content.Attributes {
diag := p.loadDeps(ectx, a.Expr, nil, true)
if diag.HasErrors() {
return diag
}
}
for _, b := range content.Blocks {
err := p.resolveBlock(b, nil)
if err != nil {
return err
}
}
// decode!
diag = gohcl.DecodeBody(body(), ectx, output.Interface())
if diag.HasErrors() {
return diag
}
// mark all targeted properties as done
for _, a := range content.Attributes {
p.doneB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range content.Blocks {
p.doneB[key(block, ectx)][b.Type] = struct{}{}
}
if target != nil {
for _, a := range target.Attributes {
p.doneB[key(block, ectx)][a.Name] = struct{}{}
}
for _, b := range target.Blocks {
p.doneB[key(block, ectx)][b.Type] = struct{}{}
}
}
// store the result into the evaluation context (so it can be referenced)
outputType, err := gocty.ImpliedType(output.Interface())
if err != nil {
return err
}
outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
if err != nil {
return err
}
var m map[string]cty.Value
if m2, ok := p.ectx.Variables[block.Type]; ok {
m = m2.AsValueMap()
}
if m == nil {
m = map[string]cty.Value{}
}
m[name] = outputValue
p.ectx.Variables[block.Type] = cty.MapVal(m)
// store the result into the evaluation context (so it can be referenced)
outputType, err := gocty.ImpliedType(output.Interface())
if err != nil {
return err
}
outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
if err != nil {
return err
}
var m map[string]cty.Value
if m2, ok := p.ectx.Variables[block.Type]; ok {
m = m2.AsValueMap()
}
if m == nil {
m = map[string]cty.Value{}
}
m[name] = outputValue
p.ectx.Variables[block.Type] = cty.MapVal(m)
return nil
}
// resolveBlockNames returns the names of the block, calling resolveBlock to
// evaluate any label fields to correctly resolve the name.
func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
if names, ok := p.blockNames[block]; ok {
return names, nil
}
if err := p.resolveBlock(block, &hcl.BodySchema{}); err != nil {
return nil, err
}
names := make([]string, 0, len(p.blockValues[block]))
for i, val := range p.blockValues[block] {
ectx := p.blockEvalCtx[block][i]
name := block.Labels[0]
if err := p.opt.ValidateLabel(name); err != nil {
return nil, err
}
if v, ok := val.Interface().(WithGetName); ok {
var err error
name, err = v.GetName(ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
return p.loadDeps(ectx, expr, nil, true)
})
if err != nil {
return nil, err
}
if err := p.opt.ValidateLabel(name); err != nil {
return nil, err
}
}
setName(val, name)
names = append(names, name)
}
found := map[string]struct{}{}
for _, name := range names {
if _, ok := found[name]; ok {
return nil, errors.Errorf("duplicate name %q", name)
}
found[name] = struct{}{}
}
p.blockNames[block] = names
return names, nil
}
type Variable struct {
Name string
Description string
Value *string
}
type ParseMeta struct {
Renamed map[string]map[string][]string
AllVariables []*Variable
}
func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
reserved := map[string]struct{}{}
schema, _ := gohcl.ImpliedBodySchema(val)
@@ -560,7 +452,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
var defs inputs
if err := gohcl.DecodeBody(b, nil, &defs); err != nil {
return nil, err
return err
}
defsSchema, _ := gohcl.ImpliedBodySchema(defs)
@@ -583,20 +475,20 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
attrs: map[string]*hcl.Attribute{},
funcs: map[string]*functionDef{},
blocks: map[string]map[string][]*hcl.Block{},
blockValues: map[*hcl.Block][]reflect.Value{},
blockEvalCtx: map[*hcl.Block][]*hcl.EvalContext{},
blockNames: map[*hcl.Block][]string{},
blockTypes: map[string]reflect.Type{},
blocks: map[string]map[string][]*hcl.Block{},
blockValues: map[*hcl.Block]reflect.Value{},
blockTypes: map[string]reflect.Type{},
progress: map[string]struct{}{},
progressF: map[string]struct{}{},
progressB: map[*hcl.Block]map[string]struct{}{},
doneF: map[string]struct{}{},
doneB: map[*hcl.Block]map[string]struct{}{},
ectx: &hcl.EvalContext{
Variables: map[string]cty.Value{},
Functions: Stdlib(),
Functions: stdlibFunctions,
},
progressV: map[uint64]struct{}{},
progressF: map[uint64]struct{}{},
progressB: map[uint64]map[string]struct{}{},
doneB: map[uint64]map[string]struct{}{},
}
for _, v := range defs.Variables {
@@ -616,18 +508,18 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
content, b, diags := b.PartialContent(schema)
if diags.HasErrors() {
return nil, diags
return diags
}
blocks, b, diags := b.PartialContent(defsSchema)
if diags.HasErrors() {
return nil, diags
return diags
}
attrs, diags := b.JustAttributes()
if diags.HasErrors() {
if d := removeAttributesDiags(diags, reserved, p.vars, attrs); len(d) > 0 {
return nil, d
if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
return d
}
}
@@ -640,72 +532,76 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
delete(p.attrs, "function")
for k := range p.opt.Vars {
_ = p.resolveValue(p.ectx, k)
_ = p.resolveValue(k)
}
for _, a := range content.Attributes {
a := a
return nil, hcl.Diagnostics{
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid attribute",
Detail: "global attributes currently not supported",
Subject: a.Range.Ptr(),
Context: a.Range.Ptr(),
Subject: &a.Range,
Context: &a.Range,
},
}
}
vars := make([]*Variable, 0, len(p.vars))
for k := range p.vars {
if err := p.resolveValue(p.ectx, k); err != nil {
if err := p.resolveValue(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
return diags
}
r := p.vars[k].Body.MissingItemRange()
return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
return wrapErrorDiagnostic("Invalid value", err, &r, &r)
}
v := &Variable{
Name: p.vars[k].Name,
Description: p.vars[k].Description,
}
if vv := p.ectx.Variables[k]; !vv.IsNull() {
var s string
switch vv.Type() {
case cty.String:
s = vv.AsString()
case cty.Bool:
s = strconv.FormatBool(vv.True())
}
v.Value = &s
}
vars = append(vars, v)
}
for k := range p.funcs {
if err := p.resolveFunction(p.ectx, k); err != nil {
if err := p.resolveFunction(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
return diags
}
var subject *hcl.Range
var context *hcl.Range
if p.funcs[k].Params != nil {
subject = p.funcs[k].Params.Range.Ptr()
subject = &p.funcs[k].Params.Range
context = subject
} else {
for _, block := range blocks.Blocks {
block := block
if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
subject = block.LabelRanges[0].Ptr()
context = block.DefRange.Ptr()
subject = &block.LabelRanges[0]
context = &block.DefRange
break
}
}
}
return nil, wrapErrorDiagnostic("Invalid function", err, subject, context)
return wrapErrorDiagnostic("Invalid function", err, subject, context)
}
}
for _, b := range content.Blocks {
if len(b.Labels) == 0 || len(b.Labels) > 1 {
return hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid block",
Detail: fmt.Sprintf("invalid block label: %v", b.Labels),
Subject: &b.LabelRanges[0],
Context: &b.LabelRanges[0],
},
}
}
bm, ok := p.blocks[b.Type]
if !ok {
bm = map[string][]*hcl.Block{}
p.blocks[b.Type] = bm
}
lbl := b.Labels[0]
bm[lbl] = append(bm[lbl], b)
}
type value struct {
reflect.Value
idx int
@@ -716,7 +612,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
values map[string]value
}
types := map[string]field{}
renamed := map[string]map[string][]string{}
vt := reflect.ValueOf(val).Elem().Type()
for i := 0; i < vt.NumField(); i++ {
tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
@@ -727,43 +623,10 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
typ: vt.Field(i).Type,
values: make(map[string]value),
}
renamed[tags[0]] = map[string][]string{}
}
tmpBlocks := map[string]map[string][]*hcl.Block{}
for _, b := range content.Blocks {
if len(b.Labels) == 0 || len(b.Labels) > 1 {
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid block",
Detail: fmt.Sprintf("invalid block label: %v", b.Labels),
Subject: &b.LabelRanges[0],
Context: &b.LabelRanges[0],
},
}
}
bm, ok := tmpBlocks[b.Type]
if !ok {
bm = map[string][]*hcl.Block{}
tmpBlocks[b.Type] = bm
}
names, err := p.resolveBlockNames(b)
if err != nil {
return nil, wrapErrorDiagnostic("Invalid name", err, &b.LabelRanges[0], &b.LabelRanges[0])
}
for _, name := range names {
bm[name] = append(bm[name], b)
renamed[b.Type][b.Labels[0]] = append(renamed[b.Type][b.Labels[0]], name)
}
}
p.blocks = tmpBlocks
diags = hcl.Diagnostics{}
for _, b := range content.Blocks {
b := b
v := reflect.ValueOf(val)
err := p.resolveBlock(b, nil)
@@ -774,60 +637,56 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
continue
}
} else {
return nil, wrapErrorDiagnostic("Invalid block", err, b.LabelRanges[0].Ptr(), b.DefRange.Ptr())
return wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
}
}
vvs := p.blockValues[b]
for _, vv := range vvs {
t := types[b.Type]
lblIndex, lblExists := getNameIndex(vv)
lblName, _ := getName(vv)
oldValue, exists := t.values[lblName]
if !exists && lblExists {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
break
}
vv := p.blockValues[b]
t := types[b.Type]
lblIndex := setLabel(vv, b.Labels[0])
oldValue, exists := t.values[b.Labels[0]]
if !exists && lblIndex != -1 {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
if b.Labels[0] == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
break
}
}
}
if exists {
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
}
}
if exists {
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
slice := v.Elem().Field(t.idx)
if slice.IsNil() {
slice = reflect.New(t.typ).Elem()
}
t.values[lblName] = value{Value: vv, idx: slice.Len()}
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
}
} else {
slice := v.Elem().Field(t.idx)
if slice.IsNil() {
slice = reflect.New(t.typ).Elem()
}
t.values[b.Labels[0]] = value{Value: vv, idx: slice.Len()}
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
}
}
if diags.HasErrors() {
return nil, diags
return diags
}
for k := range p.attrs {
if err := p.resolveValue(p.ectx, k); err != nil {
if err := p.resolveValue(k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
return nil, diags
return diags
}
return nil, wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
return wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
}
}
return &ParseMeta{
Renamed: renamed,
AllVariables: vars,
}, nil
return nil
}
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
@@ -851,45 +710,21 @@ func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context
}
}
func setName(v reflect.Value, name string) {
func setLabel(v reflect.Value, lbl string) int {
// cache field index?
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
for _, t := range strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",") {
if t == "label" {
v.Elem().Field(i).Set(reflect.ValueOf(name))
v.Elem().Field(i).Set(reflect.ValueOf(lbl))
return i
}
}
}
return -1
}
func getName(v reflect.Value) (string, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return v.Elem().Field(i).String(), true
}
}
}
return "", false
}
func getNameIndex(v reflect.Value) (int, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return i, true
}
}
}
return 0, false
}
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable, attrs hcl.Attributes) hcl.Diagnostics {
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
var fdiags hcl.Diagnostics
for _, d := range diags {
if fout := func(d *hcl.Diagnostic) bool {
@@ -911,12 +746,6 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
return true
}
}
for a := range attrs {
// Do the same for attributes
if strings.HasPrefix(d.Detail, fmt.Sprintf(`Argument "%s" was already set at `, a)) {
return true
}
}
return false
}(d); !fout {
fdiags = append(fdiags, d)
@@ -924,21 +753,3 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
}
return fdiags
}
// key returns a unique hash for the given values
func key(ks ...any) uint64 {
hash := fnv.New64a()
for _, k := range ks {
v := reflect.ValueOf(k)
switch v.Kind() {
case reflect.String:
hash.Write([]byte(v.String()))
case reflect.Pointer:
ptr := reflect.ValueOf(k).Pointer()
binary.Write(hash, binary.LittleEndian, uint64(ptr))
default:
panic(fmt.Sprintf("unknown key kind %s", v.Kind().String()))
}
}
return hash.Sum64()
}
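
key hashes strings by content and pointers by address, so the same (block, eval context) pair always lands in the same progressB/doneB bucket, while distinct blocks do not collide in practice. A self-contained restatement of the idea (compositeKey mirrors key above; blk is an illustrative type):

package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
	"reflect"
)

type blk struct{ name string }

// compositeKey hashes strings by value and pointers by identity, and
// is sensitive to argument order, like the key helper above.
func compositeKey(ks ...any) uint64 {
	h := fnv.New64a()
	for _, k := range ks {
		v := reflect.ValueOf(k)
		switch v.Kind() {
		case reflect.String:
			h.Write([]byte(v.String()))
		case reflect.Pointer:
			binary.Write(h, binary.LittleEndian, uint64(v.Pointer()))
		}
	}
	return h.Sum64()
}

func main() {
	a, b := &blk{"x"}, &blk{"x"}
	fmt.Println(compositeKey(a, "attr") == compositeKey(a, "attr")) // true: same pointer, same string
	fmt.Println(compositeKey(a, "attr") == compositeKey(b, "attr")) // false in practice: distinct allocations
}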


@@ -1,228 +0,0 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
package hclparser
import (
"fmt"
"github.com/hashicorp/hcl/v2"
)
// MergeFiles combines the given files to produce a single body that contains
// configuration from all of the given files.
//
// The ordering of the given files decides the order in which contained
// elements will be returned. If any top-level attributes are defined with
// the same name across multiple files, a diagnostic will be produced from
// the Content and PartialContent methods describing this error in a
// user-friendly way.
func MergeFiles(files []*hcl.File) hcl.Body {
var bodies []hcl.Body
for _, file := range files {
bodies = append(bodies, file.Body)
}
return MergeBodies(bodies)
}
// MergeBodies is like MergeFiles except it deals directly with bodies, rather
// than with entire files.
func MergeBodies(bodies []hcl.Body) hcl.Body {
if len(bodies) == 0 {
// Swap out for our singleton empty body, to reduce the number of
// empty slices we have hanging around.
return emptyBody
}
// If any of the given bodies are already merged bodies, we'll unpack
// to flatten to a single mergedBodies, since that's conceptually simpler.
// This also, as a side-effect, eliminates any empty bodies, since
// empties are merged bodies with no inner bodies.
var newLen int
var flatten bool
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
newLen += len(children)
flatten = true
} else {
newLen++
}
}
if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside
return mergedBodies(bodies)
}
if newLen == 0 {
// Don't allocate a new empty when we already have one
return emptyBody
}
n := make([]hcl.Body, 0, newLen)
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
n = append(n, children...)
} else {
n = append(n, body)
}
}
return mergedBodies(n)
}
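
A usage sketch for the merge helpers: parse two files with the upstream hclparse package, merge their bodies, and decode once. The buildx import path for this forked package is an assumption; hclparse and gohcl are the standard HCL packages.

package main

import (
	"fmt"

	"github.com/docker/buildx/bake/hclparser" // assumed path for this fork
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclparse"
)

type config struct {
	A string `hcl:"a,optional"`
	B string `hcl:"b,optional"`
}

func main() {
	p := hclparse.NewParser()
	f1, _ := p.ParseHCL([]byte(`a = "one"`), "c1.hcl")
	f2, _ := p.ParseHCL([]byte(`b = "two"`), "c2.hcl")

	// Attributes from both files are visible through the merged body;
	// defining "a" in both files would instead produce the
	// duplicate-argument diagnostic shown above.
	body := hclparser.MergeBodies([]hcl.Body{f1.Body, f2.Body})

	var c config
	if diags := gohcl.DecodeBody(body, nil, &c); diags.HasErrors() {
		panic(diags)
	}
	fmt.Println(c.A, c.B) // one two
}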
var emptyBody = mergedBodies([]hcl.Body{})
// EmptyBody returns a body with no content. This body can be used as a
// placeholder when a body is required but no body content is available.
func EmptyBody() hcl.Body {
return emptyBody
}
type mergedBodies []hcl.Body
// Content returns the content produced by applying the given schema to all
// of the merged bodies and merging the result.
//
// Although required attributes _are_ supported, they should be used sparingly
// with merged bodies since in this case there is no contextual information
// with which to return good diagnostics. Applications working with merged
// bodies may wish to mark all attributes as optional and then check for
// required attributes afterwards, to produce better diagnostics.
func (mb mergedBodies) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
// the returned body will always be empty in this case, because mergedContent
// will only ever call Content on the child bodies.
content, _, diags := mb.mergedContent(schema, false)
return content, diags
}
func (mb mergedBodies) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
return mb.mergedContent(schema, true)
}
func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
var diags hcl.Diagnostics
for _, body := range mb {
thisAttrs, thisDiags := body.JustAttributes()
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
}
}
return attrs, diags
}
func (mb mergedBodies) MissingItemRange() hcl.Range {
if len(mb) == 0 {
// Nothing useful to return here, so we'll return some garbage.
return hcl.Range{
Filename: "<empty>",
}
}
// arbitrarily use the first body's missing item range
return mb[0].MissingItemRange()
}
func (mb mergedBodies) mergedContent(schema *hcl.BodySchema, partial bool) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
// We need to produce a new schema with none of the attributes marked as
// required, since _any one_ of our bodies can contribute an attribute value.
// We'll separately check that all required attributes are present at
// the end.
mergedSchema := &hcl.BodySchema{
Blocks: schema.Blocks,
}
for _, attrS := range schema.Attributes {
mergedAttrS := attrS
mergedAttrS.Required = false
mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
}
var mergedLeftovers []hcl.Body
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
}
var diags hcl.Diagnostics
for _, body := range mb {
var thisContent *hcl.BodyContent
var thisLeftovers hcl.Body
var thisDiags hcl.Diagnostics
if partial {
thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
} else {
thisContent, thisDiags = body.Content(mergedSchema)
}
if thisLeftovers != nil {
mergedLeftovers = append(mergedLeftovers, thisLeftovers)
}
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisContent.Attributes != nil {
for name, attr := range thisContent.Attributes {
if existing := content.Attributes[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisContent.Attributes[name].NameRange.Ptr(),
})
}
content.Attributes[name] = attr
}
}
if len(thisContent.Blocks) != 0 {
content.Blocks = append(content.Blocks, thisContent.Blocks...)
}
}
// Finally, we check for required attributes.
for _, attrS := range schema.Attributes {
if !attrS.Required {
continue
}
if content.Attributes[attrS.Name] == nil {
// We don't have any context here to produce a good diagnostic,
// which is why we warn in the Content docstring to minimize the
// use of required attributes on merged bodies.
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required argument",
Detail: fmt.Sprintf(
"The argument %q is required, but was not set.",
attrS.Name,
),
})
}
}
leftoverBody := MergeBodies(mergedLeftovers)
return content, leftoverBody, diags
}


@@ -9,7 +9,6 @@ import (
"github.com/hashicorp/go-cty-funcs/uuid"
"github.com/hashicorp/hcl/v2/ext/tryfunc"
"github.com/hashicorp/hcl/v2/ext/typeexpr"
"github.com/pkg/errors"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/function/stdlib"
@@ -32,33 +31,32 @@ var stdlibFunctions = map[string]function.Function{
"cidrnetmask": cidr.NetmaskFunc,
"cidrsubnet": cidr.SubnetFunc,
"cidrsubnets": cidr.SubnetsFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"coalesce": stdlib.CoalesceFunc,
"coalescelist": stdlib.CoalesceListFunc,
"compact": stdlib.CompactFunc,
"concat": stdlib.ConcatFunc,
"contains": stdlib.ContainsFunc,
"convert": typeexpr.ConvertFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"distinct": stdlib.DistinctFunc,
"divide": stdlib.DivideFunc,
"element": stdlib.ElementFunc,
"equal": stdlib.EqualFunc,
"flatten": stdlib.FlattenFunc,
"floor": stdlib.FloorFunc,
"format": stdlib.FormatFunc,
"formatdate": stdlib.FormatDateFunc,
"format": stdlib.FormatFunc,
"formatlist": stdlib.FormatListFunc,
"greaterthan": stdlib.GreaterThanFunc,
"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
"hasindex": stdlib.HasIndexFunc,
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"indexof": indexOfFunc,
"int": stdlib.IntFunc,
"join": stdlib.JoinFunc,
"jsondecode": stdlib.JSONDecodeFunc,
"jsonencode": stdlib.JSONEncodeFunc,
"keys": stdlib.KeysFunc,
"join": stdlib.JoinFunc,
"length": stdlib.LengthFunc,
"lessthan": stdlib.LessThanFunc,
"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
@@ -72,16 +70,15 @@ var stdlibFunctions = map[string]function.Function{
"modulo": stdlib.ModuloFunc,
"multiply": stdlib.MultiplyFunc,
"negate": stdlib.NegateFunc,
"not": stdlib.NotFunc,
"notequal": stdlib.NotEqualFunc,
"not": stdlib.NotFunc,
"or": stdlib.OrFunc,
"parseint": stdlib.ParseIntFunc,
"pow": stdlib.PowFunc,
"range": stdlib.RangeFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"regex": stdlib.RegexFunc,
"regexall": stdlib.RegexAllFunc,
"replace": stdlib.ReplaceFunc,
"regex": stdlib.RegexFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"reverse": stdlib.ReverseFunc,
"reverselist": stdlib.ReverseListFunc,
"rsadecrypt": crypto.RsaDecryptFunc,
@@ -117,51 +114,6 @@ var stdlibFunctions = map[string]function.Function{
"zipmap": stdlib.ZipmapFunc,
}
// indexOfFunc constructs a function that finds the element index for a given
// value in a list.
var indexOfFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "list",
Type: cty.DynamicPseudoType,
},
{
Name: "value",
Type: cty.DynamicPseudoType,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
return cty.NilVal, errors.New("argument must be a list or tuple")
}
if !args[0].IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if args[0].LengthInt() == 0 { // Easy path
return cty.NilVal, errors.New("cannot search an empty list")
}
for it := args[0].ElementIterator(); it.Next(); {
i, v := it.Element()
eq, err := stdlib.Equal(v, args[1])
if err != nil {
return cty.NilVal, err
}
if !eq.IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if eq.True() {
return i, nil
}
}
return cty.NilVal, errors.New("item not found")
},
})
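
Direct invocation mirrors what indexof(APP_VERSIONS, app_version) evaluates to in a bake file; the dedicated TestIndexOf further down exercises more cases. A small in-package test sketch (assumes this file's cty import plus testing):

func TestIndexOfUsageSketch(t *testing.T) {
	got, err := indexOfFunc.Call([]cty.Value{
		cty.TupleVal([]cty.Value{cty.StringVal("1.42.4"), cty.StringVal("1.42.3")}),
		cty.StringVal("1.42.4"),
	})
	if err != nil || !got.RawEquals(cty.NumberIntVal(0)) {
		t.Fatalf("want index 0, got %#v (err: %v)", got, err)
	}
	// Searching for a value that is not present returns an
	// "item not found" error rather than -1.
}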
// timestampFunc constructs a function that returns a string representation of the current date and time.
//
// This function was imported from terraform's datetime utilities.
@@ -172,11 +124,3 @@ var timestampFunc = function.New(&function.Spec{
return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
},
})
func Stdlib() map[string]function.Function {
funcs := make(map[string]function.Function, len(stdlibFunctions))
for k, v := range stdlibFunctions {
funcs[k] = v
}
return funcs
}
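
Stdlib hands out a fresh copy precisely so callers can extend the function table without mutating the shared stdlibFunctions map. For example (sketch; assumes this file's cty and function imports, and "greet" is an illustrative name):

funcs := Stdlib()
funcs["greet"] = function.New(&function.Spec{
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.StringVal("hello"), nil
	},
})
// stdlibFunctions itself is untouched; a later Stdlib() call will not
// contain "greet".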


@@ -1,49 +0,0 @@
package hclparser
import (
"testing"
"github.com/zclconf/go-cty/cty"
)
func TestIndexOf(t *testing.T) {
type testCase struct {
input cty.Value
key cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"index 0": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("one"),
want: cty.NumberIntVal(0),
},
"index 3": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("four"),
want: cty.NumberIntVal(3),
},
"index -1": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("3"),
wantErr: true,
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := indexOfFunc.Call([]cty.Value{test.input, test.key})
if err != nil {
if test.wantErr {
return
}
t.Fatalf("unexpected error: %s", err)
}
if !got.RawEquals(test.want) {
t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.want)
}
})
}
}


@@ -4,18 +4,14 @@ import (
"archive/tar"
"bytes"
"context"
"os"
"strings"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/session"
"github.com/pkg/errors"
)
@@ -25,37 +21,10 @@ type Input struct {
}
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var sessions []session.Attachable
var filename string
st, ok := dockerui.DetectGitContext(url, false)
if ok {
if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
ID: "default",
Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
}}); err == nil {
sessions = append(sessions, ssh)
}
var gitAuthSecrets []*controllerapi.Secret
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthTokenKey,
Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
})
}
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthHeaderKey,
Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
})
}
if len(gitAuthSecrets) > 0 {
if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
sessions = append(sessions, secrets)
}
}
} else {
st, filename, ok = dockerui.DetectHTTPContext(url)
st, ok := detectGitContext(url)
if !ok {
st, filename, ok = detectHTTPContext(url)
if !ok {
return nil, nil, errors.Errorf("not url context")
}
@@ -82,7 +51,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
@@ -114,6 +83,51 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
return files, inp, nil
}
func IsRemoteURL(url string) bool {
if _, _, ok := detectHTTPContext(url); ok {
return true
}
if _, ok := detectGitContext(url); ok {
return true
}
return false
}
func detectHTTPContext(url string) (*llb.State, string, bool) {
if httpPrefix.MatchString(url) {
httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
return &httpContext, "context", true
}
return nil, "", false
}
func detectGitContext(ref string) (*llb.State, bool) {
found := false
if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
found = true
}
for _, prefix := range []string{"git://", "github.com/", "git@"} {
if strings.HasPrefix(ref, prefix) {
found = true
break
}
}
if !found {
return nil, false
}
parts := strings.SplitN(ref, "#", 2)
branch := ""
if len(parts) > 1 {
branch = parts[1]
}
gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
st := llb.Git(parts[0], branch, gitOpts...)
return &st, true
}
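
Together the two detectors back IsRemoteURL. What each kind of input classifies as (hypothetical snippet in this package; fmt import assumed):

fmt.Println(IsRemoteURL("https://example.com/bake.hcl"))     // true: HTTP context
fmt.Println(IsRemoteURL("git@github.com:docker/buildx.git")) // true: git context ("git@" prefix)
fmt.Println(IsRemoteURL("github.com/docker/buildx#master"))  // true: git context with fragment ref
fmt.Println(IsRemoteURL("./docker-bake.hcl"))                // false: local path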
func isArchive(header []byte) bool {
for _, m := range [][]byte{
{0x42, 0x5A, 0x68}, // bzip2

File diff suppressed because it is too large


@@ -1,62 +0,0 @@
package build
import (
"context"
stderrors "errors"
"net"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
if platform != nil {
pls = []v1.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}
resolved, err := resolveDrivers(ctx, nodes, opts, pw)
if err != nil {
return nil, err
}
var dialError error
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
p := *platform
var found bool
for _, pp := range rn.platforms {
if platforms.Only(p).Match(pp) {
found = true
break
}
}
if !found {
continue
}
}
conn, err := nodes[rn.driverIndex].Driver.Dial(ctx)
if err == nil {
return conn, nil
}
dialError = stderrors.Join(err)
}
}
return nil, errors.Wrap(dialError, "no nodes available")
}
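
Dial boots the resolved nodes, filters them by platform, and returns the first connection that succeeds, keeping only the most recent error otherwise. The shape of that loop, reduced to a standalone sketch (dialFirst and dialFns are illustrative names; assumes context, net, and github.com/pkg/errors imports):

func dialFirst(ctx context.Context, dialFns []func(context.Context) (net.Conn, error)) (net.Conn, error) {
	var lastErr error
	for _, dial := range dialFns {
		conn, err := dial(ctx)
		if err == nil {
			return conn, nil // first node that answers wins
		}
		lastErr = err
	}
	return nil, errors.Wrap(lastErr, "no nodes available")
}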


@@ -1,352 +0,0 @@
package build
import (
"context"
"fmt"
"sync"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/tracing"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
)
type resolvedNode struct {
resolver *nodeResolver
driverIndex int
platforms []specs.Platform
}
func (dp resolvedNode) Node() builder.Node {
return dp.resolver.nodes[dp.driverIndex]
}
func (dp resolvedNode) Client(ctx context.Context) (*client.Client, error) {
clients, err := dp.resolver.boot(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return nil, err
}
return clients[0], nil
}
func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error) {
opts, err := dp.resolver.opts(ctx, []int{dp.driverIndex}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
return opts[0], nil
}
type matchMaker func(specs.Platform) platforms.MatchComparer
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
cache map[int]T
cacheMu sync.Mutex
}
func newCachedGroup[T any]() cachedGroup[T] {
return cachedGroup[T]{
cache: map[int]T{},
}
}
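
cachedGroup pairs a flightcontrol.Group (in-flight request deduplication) with an index-keyed result cache, so each driver is booted and queried at most once even under concurrent Resolve calls. A sketch of the access pattern (do is a hypothetical accessor; the real code reaches the cache through the boot and opts methods below, and this assumes the file's existing fmt import):

func (g *cachedGroup[T]) do(ctx context.Context, idx int, fetch func(context.Context) (T, error)) (T, error) {
	// flightcontrol collapses concurrent calls with the same key into one.
	return g.g.Do(ctx, fmt.Sprintf("%d", idx), func(ctx context.Context) (T, error) {
		g.cacheMu.Lock()
		v, ok := g.cache[idx]
		g.cacheMu.Unlock()
		if ok {
			return v, nil // already fetched for this driver index
		}
		v, err := fetch(ctx)
		if err != nil {
			return v, err
		}
		g.cacheMu.Lock()
		g.cache[idx] = v
		g.cacheMu.Unlock()
		return v, nil
	})
}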
type nodeResolver struct {
nodes []builder.Node
clients cachedGroup[*client.Client]
buildOpts cachedGroup[gateway.BuildOpts]
}
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
driverRes := newDriverResolver(nodes)
drivers, err := driverRes.Resolve(ctx, opt, pw)
if err != nil {
return nil, err
}
return drivers, err
}
func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{
nodes: nodes,
clients: newCachedGroup[*client.Client](),
buildOpts: newCachedGroup[gateway.BuildOpts](),
}
return r
}
func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
if len(r.nodes) == 0 {
return nil, nil
}
nodes := map[string][]*resolvedNode{}
for k, opt := range opt {
node, perfect, err := r.resolve(ctx, opt.Platforms, pw, platforms.OnlyStrict, nil)
if err != nil {
return nil, err
}
if !perfect {
break
}
nodes[k] = node
}
if len(nodes) != len(opt) {
// if we didn't get a perfect match, we need to boot all drivers
allIndexes := make([]int, len(r.nodes))
for i := range allIndexes {
allIndexes[i] = i
}
clients, err := r.boot(ctx, allIndexes, pw)
if err != nil {
return nil, err
}
eg, egCtx := errgroup.WithContext(ctx)
workers := make([][]specs.Platform, len(clients))
for i, c := range clients {
i, c := i, c
if c == nil {
continue
}
eg.Go(func() error {
ww, err := c.ListWorkers(egCtx)
if err != nil {
return errors.Wrap(err, "listing workers")
}
ps := make(map[string]specs.Platform, len(ww))
for _, w := range ww {
for _, p := range w.Platforms {
pk := platforms.Format(platforms.Normalize(p))
ps[pk] = p
}
}
for _, p := range ps {
workers[i] = append(workers[i], p)
}
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
// then we can attempt to match against all the available platforms
// (this time we don't care about imperfect matches)
nodes = map[string][]*resolvedNode{}
for k, opt := range opt {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
return workers[idx]
})
if err != nil {
return nil, err
}
nodes[k] = node
}
}
idxs := make([]int, 0, len(r.nodes))
for _, nodes := range nodes {
for _, node := range nodes {
idxs = append(idxs, node.driverIndex)
}
}
// preload capabilities
span, ctx := tracing.StartSpan(ctx, "load buildkit capabilities", trace.WithSpanKind(trace.SpanKindInternal))
_, err := r.opts(ctx, idxs, pw)
tracing.FinishWithError(span, err)
if err != nil {
return nil, err
}
return nodes, nil
}
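// resolve picks one node per requested platform (falling back to node 0 when
// nothing matches), recombines picks that share a driver, and boots the
// selected drivers. The returned bool reports whether every platform found an
// exact match.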
func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
if len(r.nodes) == 0 {
return nil, true, nil
}
perfect := true
nodeIdxs := make([]int, 0)
for _, p := range ps {
idx := r.get(p, matcher, additional)
if idx == -1 {
idx = 0
perfect = false
}
nodeIdxs = append(nodeIdxs, idx)
}
var nodes []*resolvedNode
if len(nodeIdxs) == 0 {
nodes = append(nodes, &resolvedNode{
resolver: r,
driverIndex: 0,
})
nodeIdxs = append(nodeIdxs, 0)
} else {
for i, idx := range nodeIdxs {
node := &resolvedNode{
resolver: r,
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
}
nodes = recombineNodes(nodes)
if _, err := r.boot(ctx, nodeIdxs, pw); err != nil {
return nil, false, err
}
return nodes, perfect, nil
}
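// get returns the index of the node best matching platform p, or -1 if none
// match; among candidates it prefers the closest platform per the matcher.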
func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
best := -1
bestPlatform := specs.Platform{}
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {
platforms = append([]specs.Platform{}, platforms...)
platforms = append(platforms, additionalPlatforms(i, node)...)
}
for _, p2 := range platforms {
m := matcher(p2)
if !m.Match(p) {
continue
}
if best == -1 {
best = i
bestPlatform = p2
continue
}
if matcher(p2).Less(p, bestPlatform) {
best = i
bestPlatform = p2
}
}
}
return best
}
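// boot starts the drivers at the given indexes in parallel, deduplicating
// concurrent boots of the same node and caching the resulting clients.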
func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer) ([]*client.Client, error) {
clients := make([]*client.Client, len(idxs))
baseCtx := ctx
eg, ctx := errgroup.WithContext(ctx)
for i, idx := range idxs {
i, idx := i, idx
eg.Go(func() error {
c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil {
return nil, nil
}
r.clients.cacheMu.Lock()
c, ok := r.clients.cache[idx]
r.clients.cacheMu.Unlock()
if ok {
return c, nil
}
c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
if err != nil {
return nil, err
}
r.clients.cacheMu.Lock()
r.clients.cache[idx] = c
r.clients.cacheMu.Unlock()
return c, nil
})
if err != nil {
return err
}
clients[i] = c
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return clients, nil
}
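// opts fetches the gateway BuildOpts for each node by running an internal
// no-op build against its client, caching the result per node.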
func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer) ([]gateway.BuildOpts, error) {
clients, err := r.boot(ctx, idxs, pw)
if err != nil {
return nil, err
}
bopts := make([]gateway.BuildOpts, len(clients))
eg, ctx := errgroup.WithContext(ctx)
for i, idx := range idxs {
i, idx := i, idx
c := clients[i]
if c == nil {
continue
}
eg.Go(func() error {
opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
r.buildOpts.cacheMu.Lock()
opt, ok := r.buildOpts.cache[idx]
r.buildOpts.cacheMu.Unlock()
if ok {
return opt, nil
}
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts()
return nil, nil
}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
r.buildOpts.cacheMu.Lock()
r.buildOpts.cache[idx] = opt
r.buildOpts.cacheMu.Unlock()
return opt, err
})
if err != nil {
return err
}
bopts[i] = opt
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
return bopts, nil
}
// recombineNodes recombines resolved nodes that are on the same driver
// back together into a single node.
func recombineNodes(nodes []*resolvedNode) []*resolvedNode {
result := make([]*resolvedNode, 0, len(nodes))
lookup := map[int]int{}
for _, node := range nodes {
if idx, ok := lookup[node.driverIndex]; ok {
result[idx].platforms = append(result[idx].platforms, node.platforms...)
} else {
lookup[node.driverIndex] = len(result)
result = append(result, node)
}
}
return result
}


@@ -1,315 +0,0 @@
package build
import (
"context"
"sort"
"testing"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
)
func TestFindDriverSanity(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
r := makeTestResolver(nil)
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Nil(t, res)
}
func TestFindDriverWeirdName(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/foobar")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestFindDriverUnknown(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeSinglePlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
// find second platform
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
// find an unknown platform, should match the first driver
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSelectNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 1, res[0].driverIndex)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrict(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
})
// arm64 should match itself
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v7
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNonStrictARM(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
"ccc": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
}
func TestSelectNodeNonStrictLower(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
// v8 can't be built on v7 (so we should select the default)...
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
// ...but v6 can be built on v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferStart(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
"ccc": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodePreferExact(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/arm/v8")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Empty(t, res[0].platforms)
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
if n.Builder == "aaa" {
return []specs.Platform{platforms.MustParse("linux/arm/v7")}
}
return nil
})
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
}
func TestSplitNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/arm64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
})
// the "best" choice would be the node with both platforms, but we're using
// a naive algorithm that doesn't try to unify the platforms
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 2)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, "bbb", res[1].Node().Builder)
}
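// makeTestResolver builds a nodeResolver from a builder-name-to-platforms
// map, sorting nodes by name so driver indexes are deterministic.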
func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
var ns []builder.Node
for name, platforms := range nodes {
ns = append(ns, builder.Node{
Builder: name,
Platforms: platforms,
})
}
sort.Slice(ns, func(i, j int) bool {
return ns[i].Builder < ns[j].Builder
})
return newDriverResolver(ns)
}


@@ -9,18 +9,16 @@ import (
"strings"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(key, dir string, so *client.SolveOpt), error) {
res := make(map[string]string)
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
res = make(map[string]string)
if contextPath == "" {
return nil, nil, nil
return
}
setGitLabels := false
@@ -39,7 +37,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
if !setGitLabels && !setGitInfo {
return nil, nil, nil
return
}
// figure out in which directory the git command needs to run in
@@ -47,32 +45,27 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
if filepath.IsAbs(contextPath) {
wd = contextPath
} else {
wd, _ = filepath.Abs(filepath.Join(osutil.GetWd(), contextPath))
cwd, _ := os.Getwd()
wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
}
wd = osutil.SanitizePath(wd)
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return res, nil, errors.Wrap(err, "git was not found in the system")
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
}
return nil, nil, nil
return
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return nil, nil, nil
}
root, err := gitc.RootDir()
if err != nil {
return res, nil, errors.Wrap(err, "failed to get git root dir")
return res, nil
}
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return res, nil, errors.Wrap(err, "failed to get git commit")
return res, errors.Wrapf(err, "buildx: failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -100,32 +93,23 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
}
if setGitLabels && root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
dockerfilePath = filepath.Join(osutil.GetWd(), dockerfilePath)
}
if r, err := filepath.Rel(root, dockerfilePath); err == nil && !strings.HasPrefix(r, "..") {
res["label:"+DockerfileLabel] = r
if setGitLabels {
if root, err := gitc.RootDir(); err != nil {
return res, errors.Wrapf(err, "buildx: failed to get git root dir")
} else if root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
cwd, _ := os.Getwd()
dockerfilePath = filepath.Join(cwd, dockerfilePath)
}
dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
if !strings.HasPrefix(dockerfilePath, "..") {
res["label:"+DockerfileLabel] = dockerfilePath
}
}
}
return res, func(key, dir string, so *client.SolveOpt) {
if !setGitInfo || root == "" {
return
}
dir, err := filepath.Abs(dir)
if err != nil {
return
}
if lp, err := osutil.GetLongPathName(dir); err == nil {
dir = lp
}
dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+key] = r
}
}, nil
return
}


@@ -9,7 +9,6 @@ import (
"testing"
"github.com/docker/buildx/util/gitutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -31,7 +30,7 @@ func setupTest(tb testing.TB) {
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
}
@@ -39,14 +38,14 @@ func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
assert.Empty(t, gitattrs)
}
@@ -115,7 +114,7 @@ func TestGetGitAttributes(t *testing.T) {
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
for _, e := range tt.expected {
assert.Contains(t, gitattrs, e)
@@ -140,7 +139,7 @@ func TestGetGitAttributesDirty(t *testing.T) {
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
assert.Equal(t, 5, len(gitattrs))
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
@@ -155,55 +154,3 @@ func TestGetGitAttributesDirty(t *testing.T) {
assert.Contains(t, gitattrs, "vcs:revision")
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}
func TestLocalDirs(t *testing.T) {
setupTest(t)
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.NoError(t, setLocalMount("dockerfile", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
}
func TestLocalDirsSub(t *testing.T) {
gitutil.Mktmp(t)
c, err := gitutil.New()
require.NoError(t, err)
gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
assert.NoError(t, os.MkdirAll("app", 0755))
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.NoError(t, setLocalMount("dockerfile", "app", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
}


@@ -1,138 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"io"
"sync"
"sync/atomic"
"syscall"
controllerapi "github.com/docker/buildx/controller/pb"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type Container struct {
cancelOnce sync.Once
containerCancel func()
isUnavailable atomic.Bool
initStarted atomic.Bool
container gateway.Container
releaseCh chan struct{}
resultCtx *ResultHandle
}
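// NewContainer creates a container from the given build result. The gateway
// session that owns the container keeps running in the background until the
// container is cancelled or released.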
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
mainCtx := ctx
ctrCh := make(chan *Container)
errCh := make(chan error)
go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancel(ctx)
go func() {
<-mainCtx.Done()
cancel()
}()
containerCfg, err := resultCtx.getContainerConfig(cfg)
if err != nil {
return nil, err
}
containerCtx, containerCancel := context.WithCancel(ctx)
defer containerCancel()
bkContainer, err := c.NewContainer(containerCtx, containerCfg)
if err != nil {
return nil, err
}
releaseCh := make(chan struct{})
container := &Container{
containerCancel: containerCancel,
container: bkContainer,
releaseCh: releaseCh,
resultCtx: resultCtx,
}
doneCh := make(chan struct{})
defer close(doneCh)
resultCtx.registerCleanup(func() {
container.Cancel()
<-doneCh
})
ctrCh <- container
<-container.releaseCh
return nil, bkContainer.Release(ctx)
})
if err != nil {
errCh <- err
}
}()
select {
case ctr := <-ctrCh:
return ctr, nil
case err := <-errCh:
return nil, err
case <-mainCtx.Done():
return nil, mainCtx.Err()
}
}
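// Cancel tears the container down and unblocks the background build that
// owns it; it is safe to call multiple times.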
func (c *Container) Cancel() {
c.markUnavailable()
c.cancelOnce.Do(func() {
if c.containerCancel != nil {
c.containerCancel()
}
close(c.releaseCh)
})
}
func (c *Container) IsUnavailable() bool {
return c.isUnavailable.Load()
}
func (c *Container) markUnavailable() {
c.isUnavailable.Store(true)
}
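// Exec runs a process in the container. The first Exec is treated as the
// init process: once it exits, or if any process fails, the container is
// marked unavailable.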
func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
defer func() {
// container can't be used after init exits
c.markUnavailable()
}()
}
err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
if err != nil {
// Container becomes unavailable if one of the processes fails in it.
c.markUnavailable()
}
return err
}
func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
if err != nil {
return err
}
proc, err := ctr.Start(ctx, processCfg)
if err != nil {
return errors.Errorf("failed to start container: %v", err)
}
doneCh := make(chan struct{})
defer close(doneCh)
go func() {
select {
case <-ctx.Done():
if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
logrus.Warnf("failed to kill process: %v", err)
}
case <-doneCh:
}
}()
return proc.Wait()
}


@@ -1,43 +0,0 @@
package build
import (
"path/filepath"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/moby/buildkit/client"
)
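// saveLocalState records the build's target, context, and Dockerfile paths
// under the given ref so later commands can find the build's local inputs.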
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
var err error
if so.Ref == "" {
return nil
}
lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath
if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
if lp != "" && !IsRemoteURL(lp) && lp != "-" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(configDir)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}


@@ -1,637 +0,0 @@
package build
import (
"bufio"
"context"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"syscall"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/client/ociindex"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/entitlements"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/tonistiigi/fsutil"
)
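// toSolveOpt translates buildx build Options for a single node into a
// BuildKit client.SolveOpt, validating driver capabilities along the way.
// The returned release function frees any temporary resources.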
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Options, bopts gateway.BuildOpts, configDir string, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
for _, f := range defers {
f()
}
}
defer func() {
if err != nil {
releaseF()
}
}()
// inline cache from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
if v, _ := strconv.ParseBool(v); v {
opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
Type: "inline",
Attrs: map[string]string{},
})
}
}
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
}
}
cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
for _, e := range opt.CacheTo {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheTo = append(cacheTo, e)
}
cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
for _, e := range opt.CacheFrom {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheFrom = append(cacheFrom, e)
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalMounts: map[string]fsutil.FS{},
CacheExports: cacheTo,
CacheImports: cacheFrom,
AllowedEntitlements: opt.Allow,
SourcePolicy: opt.SourcePolicy,
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
if v, _ := strconv.ParseBool(v); v {
so.FrontendAttrs["multi-platform"] = "true"
}
}
if multiDriver {
// force creation of manifest list
so.FrontendAttrs["multi-platform"] = "true"
}
attests := make(map[string]string)
for k, v := range opt.Attests {
if v != nil {
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
noProv, err = strconv.ParseBool(v)
if err != nil {
return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
}
}
if !noProv {
so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
}
}
switch len(opt.Exports) {
case 1:
// valid
case 0:
if !noDefaultLoad() && opt.PrintFunc == nil {
if nodeDriver.IsMobyDriver() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
}
}
default:
if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
}
}
// fill in image exporter names from tags
if len(opt.Tags) > 0 {
tags := make([]string, len(opt.Tags))
for i, tag := range opt.Tags {
ref, err := reference.Parse(tag)
if err != nil {
return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
}
tags[i] = ref.String()
}
for i, e := range opt.Exports {
switch e.Type {
case "image", "oci", "docker":
opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
}
}
} else {
for _, e := range opt.Exports {
if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("tag is needed when pushing to registry")
}
}
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
if features[dockerutil.OCIImporter] && e.Output == nil {
// rely on oci importer if available (which supports
// multi-platform images), otherwise fall back to docker
opt.Exports[i].Type = "oci"
} else if len(opt.Platforms) > 1 || len(attests) > 0 {
if e.Output != nil {
return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
}
return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
}
if e.Output == nil {
if nodeDriver.IsMobyDriver() {
e.Type = "image"
} else {
w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
if err != nil {
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
opt.Exports[i].Type = "moby"
if e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
}
}
}
}
if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
// inline buildinfo attrs from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
opt.Exports[i].Attrs["buildinfo-attrs"] = v
}
}
}
so.Exports = opt.Exports
so.Session = opt.Session
releaseLoad, err := loadInputs(ctx, nodeDriver, opt.Inputs, addVCSLocalDir, pw, &so)
if err != nil {
return nil, nil, err
}
defers = append(defers, releaseLoad)
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + confutil.TryNodeIdentifier(configDir)
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
}
if len(opt.NoCacheFilter) > 0 {
so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
}
if opt.NoCache {
so.FrontendAttrs["no-cache"] = ""
}
for k, v := range opt.BuildArgs {
so.FrontendAttrs["build-arg:"+k] = v
}
for k, v := range opt.Labels {
so.FrontendAttrs["label:"+k] = v
}
for k, v := range node.ProxyConfig {
if _, ok := opt.BuildArgs[k]; !ok {
so.FrontendAttrs["build-arg:"+k] = v
}
}
// set platforms
if len(opt.Platforms) != 0 {
pp := make([]string, len(opt.Platforms))
for i, p := range opt.Platforms {
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
// setup networkmode
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
}
// setup extrahosts
extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
if err != nil {
return nil, nil, err
}
if len(extraHosts) > 0 {
so.FrontendAttrs["add-hosts"] = extraHosts
}
// setup shm size
if opt.ShmSize.Value() > 0 {
so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
}
// setup ulimits
ulimits, err := toBuildkitUlimits(opt.Ulimits)
if err != nil {
return nil, nil, err
} else if len(ulimits) > 0 {
so.FrontendAttrs["ulimit"] = ulimits
}
// mark info request as internal
if opt.PrintFunc != nil {
so.Internal = true
}
return &so, releaseF, nil
}
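// loadInputs wires the build context, Dockerfile, and named contexts (local
// directories, stdin, remote URLs, OCI layouts) into the SolveOpt. The
// returned func removes any temporary directories that were created.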
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
// TODO: handle stdin, symlinks, remote contexts, check files exist
var (
err error
dockerfileReader io.Reader
dockerfileDir string
dockerfileName = inp.DockerfilePath
toRemove []string
)
switch {
case inp.ContextState != nil:
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs["context"] = *inp.ContextState
target.FrontendInputs["dockerfile"] = *inp.ContextState
case inp.ContextPath == "-":
if inp.DockerfilePath == "-" {
return nil, errStdinConflict
}
buf := bufio.NewReader(inp.InStream)
magic, err := buf.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if !(err == io.EOF && len(magic) == 0) {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
target.FrontendAttrs["context"] = up.Add(buf)
target.Session = append(target.Session, up)
} else {
if inp.DockerfilePath != "" {
return nil, errDockerfileConflict
}
// stdin is dockerfile
dockerfileReader = buf
inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
toRemove = append(toRemove, inp.ContextPath)
if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
return nil, err
}
}
}
case osutil.IsLocalDir(inp.ContextPath):
if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
return nil, err
}
sharedKey := inp.ContextPath
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
target.SharedKey = sharedKey
switch inp.DockerfilePath {
case "-":
dockerfileReader = inp.InStream
case "":
dockerfileDir = inp.ContextPath
default:
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
}
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
default:
return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
}
if inp.DockerfileInline != "" {
dockerfileReader = strings.NewReader(inp.DockerfileInline)
}
if dockerfileReader != nil {
dockerfileDir, err = createTempDockerfile(dockerfileReader)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
if isHTTPURL(inp.DockerfilePath) {
dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
delete(target.FrontendInputs, "dockerfile")
}
if dockerfileName == "" {
dockerfileName = "Dockerfile"
}
if dockerfileDir != "" {
if err := setLocalMount("dockerfile", dockerfileDir, target, addVCSLocalDir); err != nil {
return nil, err
}
dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
}
target.FrontendAttrs["filename"] = dockerfileName
for k, v := range inp.NamedContexts {
target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
if v.State != nil {
target.FrontendAttrs["context:"+k] = "input:" + k
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs[k] = *v.State
continue
}
if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
target.FrontendAttrs["context:"+k] = v.Path
continue
}
// handle OCI layout
if strings.HasPrefix(v.Path, "oci-layout://") {
localPath := strings.TrimPrefix(v.Path, "oci-layout://")
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
tag = "latest"
}
if !hasDigest {
dig, err = resolveDigest(localPath, tag)
if err != nil {
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
}
}
store, err := local.NewStore(localPath)
if err != nil {
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
}
storeName := identity.NewID()
if target.OCIStores == nil {
target.OCIStores = map[string]content.Store{}
}
target.OCIStores[storeName] = store
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
continue
}
st, err := os.Stat(v.Path)
if err != nil {
return nil, errors.Wrapf(err, "failed to get build context %v", k)
}
if !st.IsDir() {
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
}
localName := k
if k == "context" || k == "dockerfile" {
localName = "_" + k // underscore to avoid collisions
}
if err := setLocalMount(localName, v.Path, target, addVCSLocalDir); err != nil {
return nil, err
}
target.FrontendAttrs["context:"+k] = "local:" + localName
}
release := func() {
for _, dir := range toRemove {
_ = os.RemoveAll(dir)
}
}
return release, nil
}
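// resolveDigest resolves a tag in an OCI layout index to its digest, falling
// back to the index's single entry when the tag lookup yields nothing.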
func resolveDigest(localPath, tag string) (dig string, _ error) {
idx := ociindex.NewStoreIndex(localPath)
// lookup by name
desc, err := idx.Get(tag)
if err != nil {
return "", err
}
if desc == nil {
// lookup single
desc, err = idx.GetSingle()
if err != nil {
return "", err
}
}
if desc == nil {
return "", errors.New("failed to resolve digest")
}
dig = string(desc.Digest)
_, err = digest.Parse(dig)
if err != nil {
return "", errors.Wrapf(err, "invalid digest %s", dig)
}
return dig, nil
}
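// setLocalMount registers a local directory as a named mount on the SolveOpt
// and, when provided, records its VCS-relative path via addVCSLocalDir.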
func setLocalMount(name, root string, so *client.SolveOpt, addVCSLocalDir func(key, dir string, so *client.SolveOpt)) error {
lm, err := fsutil.NewFS(root)
if err != nil {
return err
}
root, err = filepath.EvalSymlinks(root) // keep same behavior as fsutil.NewFS
if err != nil {
return err
}
if so.LocalMounts == nil {
so.LocalMounts = map[string]fsutil.FS{}
}
so.LocalMounts[name] = lm
if addVCSLocalDir != nil {
addVCSLocalDir(name, root, so)
}
return nil
}
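// createTempDockerfile copies the Dockerfile from r into a fresh temporary
// directory and returns that directory's path.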
func createTempDockerfile(r io.Reader) (string, error) {
dir, err := os.MkdirTemp("", "dockerfile")
if err != nil {
return "", err
}
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
if err != nil {
return "", err
}
defer f.Close()
if _, err := io.Copy(f, r); err != nil {
return "", err
}
return dir, nil
}
// handle https://github.com/moby/moby/pull/10858
func handleLowercaseDockerfile(dir, p string) string {
if filepath.Base(p) != "Dockerfile" {
return p
}
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
if err != nil {
return p
}
names, err := f.Readdirnames(-1)
if err != nil {
return p
}
foundLowerCase := false
for _, n := range names {
if n == "Dockerfile" {
return p
}
if n == "dockerfile" {
foundLowerCase = true
}
}
if foundLowerCase {
return filepath.Join(filepath.Dir(p), "dockerfile")
}
return p
}


@@ -1,156 +0,0 @@
package build
import (
"context"
"encoding/base64"
"encoding/json"
"io"
"strings"
"sync"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type provenancePredicate struct {
Builder *provenanceBuilder `json:"builder,omitempty"`
provenancetypes.ProvenancePredicate
}
type provenanceBuilder struct {
ID string `json:"id,omitempty"`
}
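// setRecordProvenance resolves provenance for the build ref from BuildKit's
// history API and merges it into the solve response's exporter metadata.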
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
}
pw = progress.ResetTime(pw)
return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
res, err := fetchProvenance(ctx, c, ref, mode)
if err != nil {
return err
}
for k, v := range res {
sr.ExporterResponse[k] = v
}
return nil
})
}
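// fetchProvenance streams build history records for ref and returns the
// provenance it finds, base64-encoded under "buildx.build.provenance",
// suffixed with the platform for multi-platform builds.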
func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
Ref: ref,
EarlyExit: true,
})
if err != nil {
return nil, err
}
var mu sync.Mutex
eg, ctx := errgroup.WithContext(ctx)
store := proxy.NewContentStore(c.ContentClient())
for {
ev, err := cl.Recv()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return nil, err
}
if ev.Record == nil {
continue
}
if ev.Record.Result != nil {
desc := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance"] = prv
mu.Unlock()
return nil
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
platform := platform
desc := lookupProvenance(res)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance/"+platform] = prv
mu.Unlock()
return nil
})
}
}
}
return out, eg.Wait()
}
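// lookupProvenance returns the descriptor of the in-toto SLSA provenance
// attestation attached to a build result, if any.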
func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: a.Digest,
Size: a.Size_,
MediaType: a.MediaType,
Annotations: a.Annotations,
}
}
}
return nil
}
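// encodeProvenance trims the provenance document according to mode and
// returns it as base64-encoded JSON.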
func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
var prv provenancePredicate
if err := json.Unmarshal(dt, &prv); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if prv.Builder != nil && prv.Builder.ID == "" {
// reset builder if id is empty
prv.Builder = nil
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
prv.BuildConfig = nil
prv.Metadata = nil
}
dtprv, err := json.Marshal(prv)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}
return base64.StdEncoding.EncodeToString(dtprv), nil
}


@@ -1,495 +0,0 @@
package build
import (
"context"
_ "crypto/sha256" // ensure digests can be computed
"encoding/json"
"io"
"sync"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/solver/result"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// NewResultHandle makes a call to client.Build, additionally returning an
// opaque ResultHandle alongside the standard response and error.
//
// This ResultHandle can be used to execute additional build steps in the same
// context as the build occurred, which can allow easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
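//
// A minimal usage sketch (statusCh wiring and buildFunc are assumed to be
// set up by the caller):
//
//	rh, resp, err := NewResultHandle(ctx, cc, opt, "buildx", buildFunc, statusCh)
//	if rh != nil {
//		defer rh.Done() // must be called, even when err != nil
//	}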
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
// Create a new context to wrap the original, and cancel it when the
// caller-provided context is cancelled.
//
// We derive the context from the background context so that we can forbid
// cancellation of the build request after <-done is closed (which we do
// before returning the ResultHandle).
baseCtx := ctx
ctx, cancel := context.WithCancelCause(context.Background())
done := make(chan struct{})
go func() {
select {
case <-baseCtx.Done():
cancel(baseCtx.Err())
case <-done:
// Once done is closed, we've recorded a ResultHandle, so we
// shouldn't allow cancelling the underlying build request anymore.
}
}()
// Create a new channel to forward status messages to the original.
//
// We do this so that we can discard status messages after the main portion
// of the build is complete. This is necessary for the solve error case,
// where the original gateway is kept open until the ResultHandle is
// closed - we don't want progress messages from operations in that
// ResultHandle to display after this function exits.
//
// Additionally, callers should wait for the progress channel to be closed.
// If we keep the session open and never close the progress channel, the
// caller will likely hang.
baseCh := ch
ch = make(chan *client.SolveStatus)
go func() {
for {
s, ok := <-ch
if !ok {
return
}
select {
case <-baseCh:
// base channel is closed, discard status messages
default:
baseCh <- s
}
}
}()
defer close(baseCh)
var resp *client.SolveResponse
var respErr error
var respHandle *ResultHandle
go func() {
defer cancel(context.Canceled) // ensure no dangling processes
var res *gateway.Result
var err error
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var err error
res, err = buildFunc(ctx, c)
if res != nil && err == nil {
// Force evaluation of the build result (otherwise, we likely
// won't get a solve error)
def, err2 := getDefinition(ctx, res)
if err2 != nil {
return nil, err2
}
res, err = evalDefinition(ctx, c, def)
}
if err != nil {
// Scenario 1: we failed to evaluate a node somewhere in the
// build graph.
//
// In this case, we construct a ResultHandle from this
// original Build session, and return it alongside the original
// build error. We then need to keep the gateway session open
// until the caller explicitly closes the ResultHandle.
var se *errdefs.SolveError
if errors.As(err, &se) {
respHandle = &ResultHandle{
done: make(chan struct{}),
solveErr: se,
gwClient: c,
gwCtx: ctx,
}
respErr = err // return original error to preserve stacktrace
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
}
}
return res, err
}, ch)
if respHandle != nil {
return
}
if err != nil {
// Something unexpected failed during the build, we didn't succeed,
// but we also didn't make it far enough to create a ResultHandle.
respErr = err
close(done)
return
}
// Scenario 2: we successfully built the image with no errors.
//
// In this case, the original gateway session has now been closed
// since the Build has been completed. So, we need to create a new
// gateway session to populate the ResultHandle. To do this, we
// need to re-evaluate the target result, in this new session. This
// should be instantaneous since the result should be cached.
def, err := getDefinition(ctx, res)
if err != nil {
respErr = err
close(done)
return
}
// NOTE: ideally this second connection should be lazily opened
opt := opt
opt.Ref = ""
opt.Exports = nil
opt.CacheExports = nil
opt.Internal = true
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
res, err := evalDefinition(ctx, c, def)
if err != nil {
// This should probably not happen, since we've previously
// successfully evaluated the same result with no issues.
return nil, errors.Wrap(err, "inconsistent solve result")
}
respHandle = &ResultHandle{
done: make(chan struct{}),
res: res,
gwClient: c,
gwCtx: ctx,
}
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
return nil, ctx.Err()
}, nil)
if respHandle != nil {
return
}
close(done)
}()
// Block until the other thread signals that it's completed the build.
select {
case <-done:
case <-baseCtx.Done():
if respErr == nil {
respErr = baseCtx.Err()
}
}
return respHandle, resp, respErr
}
// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
st, err := ref.ToState()
if err != nil {
return nil, err
}
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
}
return def.ToPB(), nil
})
}
// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
// force evaluation of all targets in parallel
results := make(map[*pb.Definition]*gateway.Result)
resultsMu := sync.Mutex{}
eg, egCtx := errgroup.WithContext(ctx)
defs.EachRef(func(def *pb.Definition) error {
eg.Go(func() error {
res, err := c.Solve(egCtx, gateway.SolveRequest{
Evaluate: true,
Definition: def,
})
if err != nil {
return err
}
resultsMu.Lock()
results[def] = res
resultsMu.Unlock()
return nil
})
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
if res, ok := results[def]; ok {
return res.Ref, nil
}
return nil, nil
})
return res, nil
}
// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
res *gateway.Result
solveErr *errdefs.SolveError
done chan struct{}
doneOnce sync.Once
gwClient gateway.Client
gwCtx context.Context
cleanups []func()
cleanupsMu sync.Mutex
}
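// Done releases the ResultHandle: it runs registered cleanups, unblocks the
// underlying gateway session, and waits for it to shut down. It is safe to
// call multiple times.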
func (r *ResultHandle) Done() {
r.doneOnce.Do(func() {
r.cleanupsMu.Lock()
cleanups := r.cleanups
r.cleanups = nil
r.cleanupsMu.Unlock()
for _, f := range cleanups {
f()
}
close(r.done)
<-r.gwCtx.Done()
})
}
func (r *ResultHandle) registerCleanup(f func()) {
r.cleanupsMu.Lock()
r.cleanups = append(r.cleanups, f)
r.cleanupsMu.Unlock()
}
func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
_, err = buildFunc(r.gwCtx, r.gwClient)
return err
}
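// [editor's note] Sketch of the ResultHandle lifecycle: cleanups registered
// while the handle is live run exactly once when Done is called, after which
// Done waits for the gateway session to shut down. tmpDir is hypothetical:
//
//	r.registerCleanup(func() { _ = os.RemoveAll(tmpDir) })
//	defer r.Done() // runs cleanups, closes r.done, then waits on r.gwCtx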
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(r.res, *cfg)
if err != nil {
return containerCfg, err
}
containerCfg = *ccfg
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
ccfg, err := containerConfigFromError(r.solveErr, *cfg)
if err != nil {
return containerCfg, errors.Wrapf(err, "no result nor error is available")
}
containerCfg = *ccfg
}
return containerCfg, nil
}
func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
processCfg := newStartRequest(stdin, stdout, stderr)
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
return processCfg, err
}
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
return processCfg, err
}
}
return processCfg, nil
}
func containerConfigFromResult(res *gateway.Result, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
ps, err := exptypes.ParsePlatforms(res.Metadata)
if err != nil {
return nil, err
}
ref, ok := res.FindRef(ps.Platforms[0].ID)
if !ok {
return nil, errors.Errorf("no reference found")
}
return &gateway.NewContainerRequest{
Mounts: []gateway.Mount{
{
Dest: "/",
MountType: pb.MountType_BIND,
Ref: ref,
},
},
}, nil
}
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
var img *specs.Image
if len(imgData) > 0 {
img = &specs.Image{}
if err := json.Unmarshal(imgData, img); err != nil {
return err
}
}
user := ""
if !cfg.NoUser {
user = cfg.User
} else if img != nil {
user = img.Config.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else if img != nil {
cwd = img.Config.WorkingDir
}
env := []string{}
if img != nil {
env = append(env, img.Config.Env...)
}
env = append(env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
} else if img != nil {
args = append(args, img.Config.Entrypoint...)
}
if !cfg.NoCmd {
args = append(args, cfg.Cmd...)
} else if img != nil {
args = append(args, img.Config.Cmd...)
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
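// [editor's note] Precedence in populateProcessConfigFromResult: an explicit
// InvokeConfig value wins, and the No* flags mean "not set by the user, fall
// back to the image config". For example, assuming an image config with
// User "app" and Cmd ["serve"]:
//
//	cfg := controllerapi.InvokeConfig{User: "root", NoCmd: true}
//	// -> req.User == "root" (explicit override)
//	// -> req.Args ends with "serve" (NoCmd falls back to the image's Cmd)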
func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
}
var mounts []gateway.Mount
for i, mnt := range exec.Mounts {
rid := solveErr.Solve.MountIDs[i]
if cfg.Initial {
rid = solveErr.Solve.InputIDs[i]
}
mounts = append(mounts, gateway.Mount{
Selector: mnt.Selector,
Dest: mnt.Dest,
ResultID: rid,
Readonly: mnt.Readonly,
MountType: mnt.MountType,
CacheOpt: mnt.CacheOpt,
SecretOpt: mnt.SecretOpt,
SSHOpt: mnt.SSHOpt,
})
}
return &gateway.NewContainerRequest{
Mounts: mounts,
NetMode: exec.Network,
}, nil
}
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err
}
meta := exec.Meta
user := ""
if !cfg.NoUser {
user = cfg.User
} else {
user = meta.User
}
cwd := ""
if !cfg.NoCwd {
cwd = cfg.Cwd
} else {
cwd = meta.Cwd
}
env := append(meta.Env, cfg.Env...)
args := []string{}
if cfg.Entrypoint != nil {
args = append(args, cfg.Entrypoint...)
}
if cfg.Cmd != nil {
args = append(args, cfg.Cmd...)
}
if len(args) == 0 {
args = meta.Args
}
req.Args = args
req.Env = env
req.User = user
req.Cwd = cwd
req.Tty = cfg.Tty
return nil
}
func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
if solveErr == nil {
return nil, errors.Errorf("no error is available")
}
switch op := solveErr.Solve.Op.GetOp().(type) {
case *pb.Op_Exec:
return op.Exec, nil
default:
return nil, errors.Errorf("invoke: unsupported error type")
}
// TODO: support other ops
}
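// [editor's note] Together, the helpers above allow re-entering a failed step
// as an interactive container. A hedged sketch, assuming c is the handle's
// gateway.Client and solveErr/cfg come from the failed build (error handling
// elided):
//
//	ccfg, _ := containerConfigFromError(solveErr, cfg) // mounts the failed state
//	ctr, _ := c.NewContainer(ctx, *ccfg)
//	req := newStartRequest(stdin, stdout, stderr)
//	_ = populateProcessConfigFromError(&req, solveErr, cfg)
//	proc, _ := ctr.Start(ctx, req) // e.g. a shell at the point of failure
//	_ = proc.Wait()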
func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
return gateway.StartRequest{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
}
}


@@ -13,7 +13,7 @@ import (
"github.com/pkg/errors"
)
-func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
+func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
c, err := driver.Boot(ctx, ctx, d, pw)
if err != nil {
return "", err
@@ -21,7 +21,7 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
var out string
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
-_, err = c.Build(ctx, client.SolveOpt{Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
+_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
if err != nil {
return nil, err


@@ -3,17 +3,12 @@ package build
import (
"archive/tar"
"bytes"
"context"
"net"
"os"
"strconv"
"strings"
"github.com/docker/buildx/driver"
"github.com/docker/cli/opts"
"github.com/moby/buildkit/util/gitutil"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
@@ -25,21 +20,9 @@ const (
mobyHostGatewayName = "host-gateway"
)
-// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
-// has a http:// or https:// scheme. No validation is performed to verify if the
-// URL is well-formed.
-func isHTTPURL(str string) bool {
-return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
-}
-func IsRemoteURL(c string) bool {
-if isHTTPURL(c) {
-return true
-}
-if _, err := gitutil.ParseGitRef(c); err == nil {
-return true
-}
-return false
-}
func isLocalDir(c string) bool {
st, err := os.Stat(c)
return err == nil && st.IsDir()
}
func isArchive(header []byte) bool {
@@ -62,34 +45,18 @@ func isArchive(header []byte) bool {
}
// toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
-func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.DriverHandle) (string, error) {
+func toBuildkitExtraHosts(inp []string, mobyDriver bool) (string, error) {
if len(inp) == 0 {
return "", nil
}
hosts := make([]string, 0, len(inp))
for _, h := range inp {
-host, ip, ok := strings.Cut(h, "=")
-if !ok {
-host, ip, ok = strings.Cut(h, ":")
-}
+host, ip, ok := strings.Cut(h, ":")
if !ok || host == "" || ip == "" {
return "", errors.Errorf("invalid host %s", h)
}
-// If the IP Address is a "host-gateway", replace this value with the
-// IP address provided by the worker's label.
-if ip == mobyHostGatewayName {
-hgip, err := nodeDriver.HostGatewayIP(ctx)
-if err != nil {
-return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
-}
-ip = hgip.String()
-} else {
-// If the address is enclosed in square brackets, extract it (for IPv6, but
-// permit it for IPv4 as well; we don't know the address family here, but it's
-// unambiguous).
-if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
-ip = ip[1 : len(ip)-1]
-}
+// Skip IP address validation for "host-gateway" string with moby driver
+if !mobyDriver || ip != mobyHostGatewayName {
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
@@ -110,21 +77,3 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
}
return strings.Join(ulimits, ","), nil
}
func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
return errors.Errorf(`%s is not supported for the %s driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at %s`, f, d.Factory().Name(), docs)
}
func noDefaultLoad() bool {
v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
if !ok {
return false
}
b, err := strconv.ParseBool(v)
if err != nil {
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
}
return b
}
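// [editor's note] Example of the conversion performed by the v0.10
// toBuildkitExtraHosts shown in this hunk, assuming (per the tests further
// below) that entries are joined as host=ip in a comma-separated list:
//
//	csv, err := toBuildkitExtraHosts([]string{"myhost:192.168.0.1", "db:10.0.0.2"}, false)
//	// csv == "myhost=192.168.0.1,db=10.0.0.2", err == nil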


@@ -1,148 +0,0 @@
package build
import (
"context"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestToBuildkitExtraHosts(t *testing.T) {
tests := []struct {
doc string
input []string
expectedOut string // Expect output==input if not set.
expectedErr string // Expect success if not set.
}{
{
doc: "IPv4, colon sep",
input: []string{`myhost:192.168.0.1`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "IPv4, eq sep",
input: []string{`myhost=192.168.0.1`},
},
{
doc: "Weird but permitted, IPv4 with brackets",
input: []string{`myhost=[192.168.0.1]`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "Host and domain",
input: []string{`host.and.domain.invalid:10.0.2.1`},
expectedOut: `host.and.domain.invalid=10.0.2.1`,
},
{
doc: "IPv6, colon sep",
input: []string{`anipv6host:2003:ab34:e::1`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, colon sep, brackets",
input: []string{`anipv6host:[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, eq sep, brackets",
input: []string{`anipv6host=[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6 localhost, colon sep",
input: []string{`ipv6local:::1`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, eq sep",
input: []string{`ipv6local=::1`},
},
{
doc: "IPv6 localhost, eq sep, brackets",
input: []string{`ipv6local=[::1]`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, non-canonical, colon sep",
input: []string{`ipv6local:0:0:0:0:0:0:0:1`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "Bad address, colon sep",
input: []string{`myhost:192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "Bad address, eq sep",
input: []string{`myhost=192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "No sep",
input: []string{`thathost-nosemicolon10.0.0.1`},
expectedErr: `bad format for add-host: "thathost-nosemicolon10.0.0.1"`,
},
{
doc: "Bad IPv6",
input: []string{`anipv6host:::::1`},
expectedErr: `invalid IP address in add-host: "::::1"`,
},
{
doc: "Bad IPv6, trailing colons",
input: []string{`ipv6local:::0::`},
expectedErr: `invalid IP address in add-host: "::0::"`,
},
{
doc: "Bad IPv6, missing close bracket",
input: []string{`ipv6addr=[::1`},
expectedErr: `invalid IP address in add-host: "[::1"`,
},
{
doc: "Bad IPv6, missing open bracket",
input: []string{`ipv6addr=::1]`},
expectedErr: `invalid IP address in add-host: "::1]"`,
},
{
doc: "Missing address, colon sep",
input: []string{`myhost.invalid:`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "Missing address, eq sep",
input: []string{`myhost.invalid=`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "No input",
input: []string{``},
expectedErr: `bad format for add-host: ""`,
},
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}
t.Run(tc.doc, func(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.Nil(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)
}
})
}
}


@@ -2,31 +2,18 @@ package builder
import (
"context"
"encoding/json"
"net/url"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"github.com/tonistiigi/go-csvvalue"
"golang.org/x/sync/errgroup"
)
@@ -121,7 +108,7 @@ func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
// Validate validates builder context
func (b *Builder) Validate() error {
-if b.NodeGroup != nil && b.NodeGroup.DockerContext {
+if b.NodeGroup.DockerContext {
list, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return err
@@ -170,14 +157,13 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
return false, nil
}
-printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.AutoMode)
+printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
if err != nil {
return false, err
}
baseCtx := ctx
eg, _ := errgroup.WithContext(ctx)
errCh := make(chan error, len(toBoot))
for _, idx := range toBoot {
func(idx int) {
eg.Go(func() error {
@@ -185,7 +171,6 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
if err != nil {
b.nodes[idx].Err = err
errCh <- err
}
return nil
})
@@ -193,15 +178,11 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
}
err = eg.Wait()
close(errCh)
err1 := printer.Wait()
if err == nil {
err = err1
}
if err == nil && len(errCh) == len(toBoot) {
return false, <-errCh
}
return true, err
}
@@ -226,7 +207,7 @@ type driverFactory struct {
}
// Factory returns the driver factory.
-func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_ driver.Factory, err error) {
+func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
b.driverFactory.once.Do(func() {
if b.Driver != "" {
b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
@@ -249,7 +230,7 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
if _, err = dockerapi.Ping(ctx); err != nil {
return
}
-b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false, dialMeta)
+b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
if err != nil {
return
}
@@ -259,28 +240,6 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
return b.driverFactory.Factory, err
}
func (b *Builder) MarshalJSON() ([]byte, error) {
var berr string
if b.err != nil {
berr = strings.TrimSpace(b.err.Error())
}
return json.Marshal(struct {
Name string
Driver string
LastActivity time.Time `json:",omitempty"`
Dynamic bool
Nodes []Node
Err string `json:",omitempty"`
}{
Name: b.Name,
Driver: b.Driver,
LastActivity: b.LastActivity,
Dynamic: b.Dynamic,
Nodes: b.nodes,
Err: berr,
})
}
// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
storeng, err := txn.List()
@@ -331,346 +290,3 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return builders, nil
}
type CreateOpts struct {
Name string
Driver string
NodeName string
Platforms []string
BuildkitdFlags string
BuildkitdConfigFile string
DriverOpts []string
Use bool
Endpoint string
Append bool
}
func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts CreateOpts) (*Builder, error) {
var err error
if opts.Name == "default" {
return nil, errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
} else if opts.Append && opts.Name == "" {
return nil, errors.Errorf("append requires a builder name")
}
name := opts.Name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return nil, err
}
}
if !opts.Append {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, c := range contexts {
if c.Name == name {
return nil, errors.Errorf("instance name %q already exists as context builder", name)
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if opts.Append && opts.Name != "" {
return nil, errors.Errorf("failed to find instance %q for append", opts.Name)
}
} else {
return nil, err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := opts.Driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if opts.Endpoint == "" && buildkitHost != "" {
driverName = "remote"
} else {
f, err := driver.GetDefaultFactory(ctx, opts.Endpoint, dockerCli.Client(), true, nil)
if err != nil {
return nil, err
}
if f == nil {
return nil, errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if opts.NodeName == "" && !opts.Append {
return nil, errors.Errorf("existing instance for %q but no append mode, specify the node name to make changes for existing instances", name)
}
if driverName != ng.Driver {
return nil, errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return nil, err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
driverOpts, err := csvToMap(opts.DriverOpts)
if err != nil {
return nil, err
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
if err != nil {
return nil, err
}
var ep string
var setEp bool
switch {
case driverName == "kubernetes":
if opts.Endpoint != "" {
return nil, errors.Errorf("kubernetes driver does not support endpoint args %q", opts.Endpoint)
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := opts.NodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return nil, err
}
}
// name the endpoint so that append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if opts.Endpoint != "" {
ep = opts.Endpoint
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return nil, errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return nil, err
}
setEp = true
case opts.Endpoint != "":
ep, err = validateEndpoint(dockerCli, opts.Endpoint)
if err != nil {
return nil, err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return nil, errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with context set to <context-name>")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
setEp = false
}
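// [editor's note] For the kubernetes driver the endpoint built above is a
// synthetic URL rather than a real address. Assuming the builder name
// "mybuilder", a generated node name "mybuilder0", and KUBECONFIG unset,
// it would look like:
//
//	kubernetes:///mybuilder?deployment=mybuilder0&kubeconfig=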
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
buildkitdConfigFile = f
}
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
if err := txn.Save(ng); err != nil {
return nil, err
}
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
return nil, err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
return nil, errors.Errorf("could not rollback to previous state: %s", err2)
}
return nil, err
}
}
if opts.Use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return nil, err
}
}
return b, nil
}
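// [editor's note] A hedged sketch of driving Create from library code,
// assuming txn comes from the store and dockerCli is an initialized CLI:
//
//	b, err := Create(ctx, txn, dockerCli, CreateOpts{
//		Driver:    "docker-container",
//		Platforms: []string{"linux/amd64", "linux/arm64"},
//		Use:       true,
//	})
//	if err != nil {
//		return err
//	}
//	fmt.Println(b.Name) // a generated name, since CreateOpts.Name was empty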
type LeaveOpts struct {
Name string
NodeName string
}
func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts LeaveOpts) error {
if opts.Name == "" {
return errors.Errorf("leave requires instance name")
}
if opts.NodeName == "" {
return errors.Errorf("leave requires node name")
}
ng, err := txn.NodeGroupByName(opts.Name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
return errors.Errorf("failed to find instance %q for leave", opts.Name)
}
return err
}
if err := ng.Leave(opts.NodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, opts.NodeName); err != nil {
return err
}
return txn.Save(ng)
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
fields, err := csvvalue.Fields(s, nil)
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
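// [editor's note] csvToMap example, matching the expectations in the test
// file further below: each input string is one CSV record, and quoting lets
// a value itself contain commas:
//
//	m, err := csvToMap([]string{`"tolerations=key=foo,value=bar",replicas=1`, "namespace=default"})
//	// m["tolerations"] == "key=foo,value=bar"
//	// m["replicas"] == "1", m["namespace"] == "default", err == nil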
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
// parseBuildkitdFlags parses buildkit flags
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
return nil, errors.Wrap(err, "failed to parse buildkit flags")
}
}
var allowInsecureEntitlements []string
flags := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
flags.Usage = func() {}
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
var hasNetworkHostEntitlement bool
for _, e := range allowInsecureEntitlements {
if e == "network.host" {
hasNetworkHostEntitlement = true
break
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
// set the network.host entitlement if the user did not provide any, since
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
}
return res, nil
}


@@ -1,139 +0,0 @@
package builder
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}
func TestParseBuildkitdFlags(t *testing.T) {
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
expected []string
wantErr bool
}{
{
"docker-container no flags",
"",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"kubernetes no flags",
"",
"kubernetes",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"remote no flags",
"",
"remote",
nil,
nil,
false,
},
{
"docker-container with insecure flag",
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with insecure and host flag",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with network host opt",
"",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with host flag and network host opt",
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with insecure, host flag and network host opt",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
nil,
true,
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.expected, flags)
})
}
}


@@ -2,11 +2,7 @@ package builder
import (
"context"
"encoding/json"
"sort"
"strings"
"github.com/containerd/platforms"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
"github.com/docker/buildx/store"
@@ -14,7 +10,6 @@ import (
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/platformutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -25,19 +20,13 @@ import (
type Node struct {
store.Node
-Builder string
-Driver *driver.DriverHandle
+Driver driver.Driver
DriverInfo *driver.Info
+Platforms []ocispecs.Platform
ImageOpt imagetools.Opt
ProxyConfig map[string]string
Version string
Err error
-// worker settings
-IDs []string
-Platforms []ocispecs.Platform
-GCPolicy []client.PruneInfo
-Labels map[string]string
}
// Nodes returns nodes for this builder.
@@ -45,35 +34,9 @@ func (b *Builder) Nodes() []Node {
return b.nodes
}
-type LoadNodesOption func(*loadNodesOptions)
-type loadNodesOptions struct {
-data bool
-dialMeta map[string][]string
-}
-func WithData() LoadNodesOption {
-return func(o *loadNodesOptions) {
-o.data = true
-}
-}
-func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
-return func(o *loadNodesOptions) {
-o.dialMeta = dialMeta
-}
-}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
-func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
-lno := loadNodesOptions{
-data: false,
-}
-for _, opt := range opts {
-opt(&lno)
-}
+func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
eg, _ := errgroup.WithContext(ctx)
b.nodes = make([]Node, len(b.NodeGroup.Nodes))
@@ -83,7 +46,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
}()
-factory, err := b.Factory(ctx, lno.dialMeta)
+factory, err := b.Factory(ctx)
if err != nil {
return nil, err
}
@@ -100,7 +63,6 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
Node: n,
ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
Platforms: n.Platforms,
-Builder: b.Name,
}
defer func() {
b.nodes[i] = node
@@ -115,12 +77,12 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
contextStore := b.opts.dockerCli.ContextStore()
var kcc driver.KubeClientConfig
-kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
+kcc, err = ctxkube.ConfigFromContext(n.Endpoint, contextStore)
if err != nil {
// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
// try again with name="default".
// FIXME(@AkihiroSuda): n should retain real context name.
kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
kcc, err = ctxkube.ConfigFromContext("default", contextStore)
if err != nil {
logrus.Error(err)
}
@@ -142,7 +104,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
}
-d, err := driver.GetDriver(ctx, driver.BuilderName(n.Name), factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
+d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
if err != nil {
node.Err = err
return nil
@@ -150,7 +112,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
node.Driver = d
node.ImageOpt = imageopt
-if lno.data {
+if withData {
if err := node.loadData(ctx); err != nil {
node.Err = err
}
@@ -165,7 +127,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
// TODO: This should be done in the routine loading driver data
-if lno.data {
+if withData {
kubernetesDriverCount := 0
for _, d := range b.nodes {
if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
@@ -186,7 +148,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl
}
-nodes = append(nodes, diClone)
+nodes = append(nodes, di)
}
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
}
@@ -202,51 +164,6 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
return b.nodes, nil
}
func (n *Node) MarshalJSON() ([]byte, error) {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
var nerr string
if n.Err != nil {
status = "error"
nerr = strings.TrimSpace(n.Err.Error())
}
var pp []string
for _, p := range n.Platforms {
pp = append(pp, platforms.Format(p))
}
return json.Marshal(struct {
Name string
Endpoint string
BuildkitdFlags []string `json:"Flags,omitempty"`
DriverOpts map[string]string `json:",omitempty"`
Files map[string][]byte `json:",omitempty"`
Status string `json:",omitempty"`
ProxyConfig map[string]string `json:",omitempty"`
Version string `json:",omitempty"`
Err string `json:",omitempty"`
IDs []string `json:",omitempty"`
Platforms []string `json:",omitempty"`
GCPolicy []client.PruneInfo `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
}{
Name: n.Name,
Endpoint: n.Endpoint,
BuildkitdFlags: n.BuildkitdFlags,
DriverOpts: n.DriverOpts,
Files: n.Files,
Status: status,
ProxyConfig: n.ProxyConfig,
Version: n.Version,
Err: nerr,
IDs: n.IDs,
Platforms: pp,
GCPolicy: n.GCPolicy,
Labels: n.Labels,
})
}
func (n *Node) loadData(ctx context.Context) error {
if n.Driver == nil {
return nil
@@ -265,15 +182,9 @@ func (n *Node) loadData(ctx context.Context) error {
if err != nil {
return errors.Wrap(err, "listing workers")
}
-for idx, w := range workers {
-n.IDs = append(n.IDs, w.ID)
+for _, w := range workers {
n.Platforms = append(n.Platforms, w.Platforms...)
-if idx == 0 {
-n.GCPolicy = w.GCPolicy
-n.Labels = w.Labels
-}
}
-sort.Strings(n.IDs)
n.Platforms = platformutil.Dedupe(n.Platforms)
inf, err := driverClient.Info(ctx)
if err != nil {


@@ -1,12 +1,11 @@
package main
import (
"context"
"fmt"
"os"
"github.com/containerd/containerd/pkg/seed"
"github.com/docker/buildx/commands"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/manager"
@@ -16,12 +15,11 @@ import (
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/stack"
"go.opentelemetry.io/otel"
//nolint:staticcheck // vendored dependencies may still use this
"github.com/containerd/containerd/pkg/seed"
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
_ "github.com/docker/buildx/driver/docker"
_ "github.com/docker/buildx/driver/docker-container"
@@ -30,9 +28,7 @@ import (
)
func init() {
//nolint:staticcheck
seed.WithTimeAndRand()
stack.SetVersionInfo(version.Version, version.Revision)
}
@@ -40,27 +36,10 @@ func runStandalone(cmd *command.DockerCli) error {
if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
return err
}
defer flushMetrics(cmd)
rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
return rootCmd.Execute()
}
// flushMetrics will manually flush metrics from the configured
// meter provider. This is needed when running in standalone mode
// because the meter provider is initialized by the cli library,
// but the mechanism for forcing it to report is not presently
// exposed and not invoked when run in standalone mode.
// There are plans to fix that in the next release, but this is
// needed temporarily until the API for this is more thorough.
func flushMetrics(cmd *command.DockerCli) {
if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
if err := mp.ForceFlush(context.Background()); err != nil {
otel.Handle(err)
}
}
}
func runPlugin(cmd *command.DockerCli) error {
rootCmd := commands.NewRootCmd("buildx", true, cmd)
return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
@@ -106,9 +85,6 @@ func main() {
} else {
fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
}
if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
ebr.Print(cmd.Err())
}
os.Exit(1)
}


@@ -4,6 +4,7 @@ import (
"github.com/moby/buildkit/util/tracing/detect"
"go.opentelemetry.io/otel"
_ "github.com/moby/buildkit/util/tracing/detect/delegated"
_ "github.com/moby/buildkit/util/tracing/env"
)


@@ -1,4 +1 @@
comment: false
ignore:
- "**/*.pb.go"


@@ -1,58 +1,36 @@
package commands
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"os"
"slices"
"strings"
"text/tabwriter"
"github.com/containerd/console"
"github.com/containerd/platforms"
"github.com/containerd/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type bakeOptions struct {
-files []string
-overrides []string
-printOnly bool
-listTargets bool
-listVars bool
sbom string
provenance string
-builder string
-metadataFile string
-exportPush bool
-exportLoad bool
-callFunc string
+files []string
+overrides []string
+printOnly bool
+commonOptions
}
-func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
+func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
+ctx := appcontext.Context()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
if err != nil {
return err
@@ -65,11 +43,11 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
cmdContext := "cwd://"
if len(targets) > 0 {
-if build.IsRemoteURL(targets[0]) {
+if bake.IsRemoteURL(targets[0]) {
url = targets[0]
targets = targets[1:]
if len(targets) > 0 {
-if build.IsRemoteURL(targets[0]) {
+if bake.IsRemoteURL(targets[0]) {
cmdContext = targets[0]
targets = targets[1:]
}
@@ -81,26 +59,20 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
targets = []string{"default"}
}
callFunc, err := buildflags.ParsePrintFunc(in.callFunc)
if err != nil {
return err
}
overrides := in.overrides
if in.exportPush {
+if in.exportLoad {
+return errors.Errorf("push and load may not be set together at the moment")
+}
overrides = append(overrides, "*.push=true")
+} else if in.exportLoad {
+overrides = append(overrides, "*.output=type=docker")
}
-if in.exportLoad {
-overrides = append(overrides, "*.load=true")
-}
+if in.noCache != nil {
+overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
+}
-if callFunc != nil {
-overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
-}
-if cFlags.noCache != nil {
-overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
-}
-if cFlags.pull != nil {
-overrides = append(overrides, fmt.Sprintf("*.pull=%t", *cFlags.pull))
-}
+if in.pull != nil {
+overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
}
if in.sbom != "" {
overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
@@ -112,9 +84,23 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
+printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+if err != nil {
+return err
+}
+defer func() {
+if printer != nil {
+err1 := printer.Wait()
+if err == nil {
+err = err1
+}
+}
+}()
var nodes []builder.Node
-var progressConsoleDesc, progressTextDesc string
+var files []bake.File
+var inp *bake.Input
// instance only needed for reading remote bake files or building
if url != "" || !in.printOnly {
@@ -128,126 +114,45 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
-nodes, err = b.LoadNodes(ctx)
+nodes, err = b.LoadNodes(ctx, false)
if err != nil {
return err
}
-progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
-progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
}
-var term bool
-if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
-term = true
-}
+if url != "" {
+files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
+} else {
+files, err = bake.ReadLocalFiles(in.files)
}
-progressMode := progressui.DisplayMode(cFlags.progress)
-var printer *progress.Printer
-printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
-progress.WithDesc(progressTextDesc, progressConsoleDesc),
-progress.WithOnClose(func() {
-printWarnings(os.Stderr, printer.Warnings(), progressMode)
-}),
-)
-if err != nil {
-return err
-}
-var resp map[string]*client.SolveResponse
-defer func() {
-if printer != nil {
-err1 := printer.Wait()
-if err == nil {
-err = err1
-}
-if err != nil {
-return
-}
-if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
-desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
-}
-if resp != nil && len(in.metadataFile) > 0 {
-dt := make(map[string]interface{})
-for t, r := range resp {
-dt[t] = decodeExporterResponse(r.ExporterResponse)
-}
-if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
-dt["buildx.build.warnings"] = warnings
-}
-err = writeMetadataFile(in.metadataFile, dt)
-}
-}
-}()
-files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
if err != nil {
return err
}
if len(files) == 0 {
return errors.New("couldn't find a bake definition")
}
-defaults := map[string]string{
+tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
"BAKE_CMD_CONTEXT": cmdContext,
-"BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
-}
-if in.listTargets || in.listVars {
-cfg, pm, err := bake.ParseFiles(files, defaults)
-if err != nil {
-return err
-}
-err = printer.Wait()
-printer = nil
-if err != nil {
-return err
-}
-if in.listTargets {
-return printTargetList(dockerCli.Out(), cfg)
-} else if in.listVars {
-return printVars(dockerCli.Out(), pm.AllVariables)
-}
-}
-tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults)
+"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
+})
if err != nil {
return err
}
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
// TODO: extract env var parsing to a method easily usable by library consumers
for _, t := range tgts {
if _, ok := t.Args["SOURCE_DATE_EPOCH"]; ok {
continue
}
if t.Args == nil {
t.Args = map[string]*string{}
}
t.Args["SOURCE_DATE_EPOCH"] = &v
}
}
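// [editor's note] SOURCE_DATE_EPOCH is the reproducible-builds convention for
// pinning timestamps (https://reproducible-builds.org/docs/source-date-epoch/).
// The block above forwards the caller's value to every target as a build arg
// unless the target already pins its own, so with SOURCE_DATE_EPOCH=1700000000
// in the environment:
//
//	// t.Args["SOURCE_DATE_EPOCH"] points at "1700000000" for each target t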
// this function can update target context string from the input so call before printOnly check
bo, err := bake.TargetsToBuildOpt(tgts, inp)
if err != nil {
return err
}
-def := struct {
-Group map[string]*bake.Group `json:"group,omitempty"`
-Target map[string]*bake.Target `json:"target"`
-}{
-Group: grps,
-Target: tgts,
-}
if in.printOnly {
-dt, err := json.MarshalIndent(def, "", " ")
+dt, err := json.MarshalIndent(struct {
+Group map[string]*bake.Group `json:"group,omitempty"`
+Target map[string]*bake.Target `json:"target"`
+}{
+grps,
+tgts,
+}, "", " ")
if err != nil {
return err
}
@@ -260,164 +165,26 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return nil
}
-for _, opt := range bo {
-if opt.PrintFunc != nil {
-cf, err := buildflags.ParsePrintFunc(opt.PrintFunc.Name)
-if err != nil {
-return err
-}
-opt.PrintFunc.Name = cf.Name
-}
-}
-prm := confutil.MetadataProvenance()
-if len(in.metadataFile) == 0 {
-prm = confutil.MetadataProvenanceModeDisabled
-}
-groupRef := identity.NewID()
-var refs []string
-for k, b := range bo {
-b.Ref = identity.NewID()
-b.GroupRef = groupRef
-b.ProvenanceResponseMode = prm
-refs = append(refs, b.Ref)
-bo[k] = b
-}
-dt, err := json.Marshal(def)
-if err != nil {
-return err
-}
-if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
-Definition: dt,
-Targets: targets,
-Inputs: overrides,
-Refs: refs,
-}); err != nil {
-return err
-}
-resp, err = build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
+resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
if err != nil {
return wrapBuildError(err, true)
}
-err = printer.Wait()
-if err != nil {
-return err
-}
-var callFormatJSON bool
-var jsonResults = map[string]map[string]any{}
-if callFunc != nil {
-callFormatJSON = callFunc.Format == "json"
-}
-var sep bool
-var exitCode int
-names := make([]string, 0, len(bo))
-for name := range bo {
-names = append(names, name)
-}
-slices.Sort(names)
-for _, name := range names {
-req := bo[name]
-if req.PrintFunc == nil {
-continue
-}
+if len(in.metadataFile) > 0 {
+dt := make(map[string]interface{})
+for t, r := range resp {
+dt[t] = decodeExporterResponse(r.ExporterResponse)
+}
-pf := &pb.PrintFunc{
-Name: req.PrintFunc.Name,
-Format: req.PrintFunc.Format,
-IgnoreStatus: req.PrintFunc.IgnoreStatus,
-}
-if callFunc != nil {
-pf.Format = callFunc.Format
-pf.IgnoreStatus = callFunc.IgnoreStatus
-}
-var res map[string]string
-if sp, ok := resp[name]; ok {
-res = sp.ExporterResponse
-}
-if callFormatJSON {
-jsonResults[name] = map[string]any{}
-buf := &bytes.Buffer{}
-if code, err := printResult(buf, pf, res); err != nil {
-jsonResults[name]["error"] = err.Error()
-exitCode = 1
-} else if code != 0 && exitCode == 0 {
-exitCode = code
-}
-m := map[string]*json.RawMessage{}
-if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
-for k, v := range m {
-jsonResults[name][k] = v
-}
-} else {
-jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
-}
-} else {
-if sep {
-fmt.Fprintln(dockerCli.Out())
-} else {
-sep = true
-}
-fmt.Fprintf(dockerCli.Out(), "%s\n", name)
-if descr := tgts[name].Description; descr != "" {
-fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
-}
-fmt.Fprintln(dockerCli.Out())
-if code, err := printResult(dockerCli.Out(), pf, res); err != nil {
-fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
-exitCode = 1
-} else if code != 0 && exitCode == 0 {
-exitCode = code
-}
-}
-}
-if callFormatJSON {
-out := struct {
-Group map[string]*bake.Group `json:"group,omitempty"`
-Target map[string]map[string]any `json:"target"`
-}{
-Group: grps,
-Target: map[string]map[string]any{},
-}
-for name, def := range tgts {
-out.Target[name] = map[string]any{
-"build": def,
-}
-if res, ok := jsonResults[name]; ok {
-printName := bo[name].PrintFunc.Name
-if printName == "lint" {
-printName = "check"
-}
-out.Target[name][printName] = res
-}
-}
-dt, err := json.MarshalIndent(out, "", " ")
-if err != nil {
+if err := writeMetadataFile(in.metadataFile, dt); err != nil {
return err
}
-fmt.Fprintln(dockerCli.Out(), string(dt))
}
-if exitCode != 0 {
-os.Exit(exitCode)
-}
-return nil
+return err
}
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options bakeOptions
var cFlags commonFlags
cmd := &cobra.Command{
Use: "bake [OPTIONS] [TARGET...]",
@@ -426,17 +193,14 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
// reset to nil so that an unset flag does not override the value
if !cmd.Flags().Lookup("no-cache").Changed {
-cFlags.noCache = nil
+options.noCache = nil
}
if !cmd.Flags().Lookup("pull").Changed {
-cFlags.pull = nil
+options.pull = nil
}
-options.builder = rootOpts.builder
-options.metadataFile = cFlags.metadataFile
-// Other common flags (noCache, pull and progress) are processed in runBake function.
-return runBake(cmd.Context(), dockerCli, args, options, cFlags)
+options.commonOptions.builder = rootOpts.builder
+return runBake(dockerCli, args, options)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
flags := cmd.Flags()
@@ -448,143 +212,8 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
cobrautil.MarkFlagsExperimental(flags, "list-targets")
flags.MarkHidden("list-targets")
flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
cobrautil.MarkFlagsExperimental(flags, "list-variables")
flags.MarkHidden("list-variables")
commonBuildFlags(&cFlags, flags)
commonBuildFlags(&options.commonOptions, flags)
return cmd
}
func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
l, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
return l.SaveGroup(ref, lsg)
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
tname := strings.TrimPrefix(v, "cwd://")
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
rnames = append(rnames, v)
anames = append(anames, v)
}
}
if url != "" {
var rfiles []bake.File
rfiles, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, rnames, pw)
if err != nil {
return nil, nil, err
}
files = append(files, rfiles...)
}
if len(lnames) > 0 || url == "" {
var lfiles []bake.File
progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
lfiles, err = bake.ReadLocalFiles(anames, stdin, sub)
}
return nil
})
if err != nil {
return nil, nil, err
}
files = append(files, lfiles...)
}
return
}
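// [editor's note] readBakeFiles splits the file arguments on the cwd:// prefix:
// prefixed names are always read locally, unprefixed names come from the remote
// context when url is set. A sketch with hypothetical file names:
//
//	files, inp, err := readBakeFiles(ctx, nodes, "https://github.com/user/repo.git",
//		[]string{"cwd://local.hcl", "docker-bake.hcl"}, os.Stdin, pw)
//	// local.hcl is read from the local working directory,
//	// docker-bake.hcl from the remote repository.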
func printVars(w io.Writer, vars []*hclparser.Variable) error {
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
return cmp.Compare(a.Name, b.Name)
})
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
if v.Value != nil {
value = *v.Value
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
}
return nil
}
func printTargetList(w io.Writer, cfg *bake.Config) error {
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("TARGET\tDESCRIPTION\n"))
type targetOrGroup struct {
name string
target *bake.Target
group *bake.Group
}
list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
for _, tgt := range cfg.Targets {
list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
}
for _, grp := range cfg.Groups {
list = append(list, targetOrGroup{name: grp.Name, group: grp})
}
slices.SortFunc(list, func(a, b targetOrGroup) int {
return cmp.Compare(a.name, b.name)
})
for _, tgt := range list {
if strings.HasPrefix(tgt.name, "_") {
// convention for a private target
continue
}
var descr string
if tgt.target != nil {
descr = tgt.target.Description
} else if tgt.group != nil {
descr = tgt.group.Description
if len(tgt.group.Targets) > 0 {
slices.Sort(tgt.group.Targets)
names := strings.Join(tgt.group.Targets, ", ")
if descr != "" {
descr += " (" + names + ")"
} else {
descr = names
}
}
}
fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
}
return nil
}
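// [editor's note] printTargetList writes a two-column, tab-aligned listing and
// skips names starting with "_" (private targets by convention). For a file
// with target "app" and group "default" (targets app, db), the output would
// resemble:
//
//	TARGET   DESCRIPTION
//	app
//	default  app, db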

File diff suppressed because it is too large


@@ -3,72 +3,283 @@ package commands
import (
"bytes"
"context"
"encoding/csv"
"fmt"
"net/url"
"os"
"strings"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type createOptions struct {
-name string
-driver string
-nodeName string
-platform []string
-actionAppend bool
-actionLeave bool
-use bool
-driverOpts []string
-buildkitdFlags string
-buildkitdConfigFile string
-bootstrap bool
+name string
+driver string
+nodeName string
+platform []string
+actionAppend bool
+actionLeave bool
+use bool
+flags string
+configFile string
+driverOpts []string
+bootstrap bool
// upgrade bool // perform upgrade of the driver
}
-func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
+func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
+ctx := appcontext.Context()
if in.name == "default" {
return errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
}
if in.actionLeave {
if in.name == "" {
return errors.Errorf("leave requires instance name")
}
if in.nodeName == "" {
return errors.Errorf("leave requires node name but --node not set")
}
}
if in.actionAppend {
if in.name == "" {
logrus.Warnf("append used without name, creating a new instance instead")
}
}
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
-// Ensure the file lock gets released no matter what happens.
defer release()
-if in.actionLeave {
-return builder.Leave(ctx, txn, dockerCli, builder.LeaveOpts{
-Name: in.name,
-NodeName: in.nodeName,
-})
-}
+name := in.name
+if name == "" {
+name, err = store.GenerateName(txn)
+if err != nil {
+return err
+}
+}
if !in.actionLeave && !in.actionAppend {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
for _, c := range contexts {
if c.Name == name {
logrus.Warnf("instance name %q already exists as context builder", name)
break
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if in.actionAppend && in.name != "" {
logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
}
if in.actionLeave {
return errors.Errorf("failed to find instance %q for leave", in.name)
}
} else {
return err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := in.driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if len(args) == 0 && buildkitHost != "" {
driverName = "remote"
} else {
var arg string
if len(args) > 0 {
arg = args[0]
}
f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
if err != nil {
return err
}
if f == nil {
return errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if in.nodeName == "" && !in.actionAppend {
return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
}
if driverName != ng.Driver {
return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
var flags []string
if in.flags != "" {
flags, err = shlex.Split(in.flags)
if err != nil {
return errors.Wrap(err, "failed to parse buildkit flags")
}
}
var ep string
-if len(args) > 0 {
-ep = args[0]
-}
+var setEp bool
if in.actionLeave {
if err := ng.Leave(in.nodeName); err != nil {
return err
}
} else {
switch {
case driverName == "kubernetes":
if len(args) > 0 {
logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
}
// name the endpoint so that --append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + in.name,
RawQuery: (&url.Values{
"deployment": {in.nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if len(args) > 0 {
ep = args[0]
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return err
}
setEp = true
case len(args) > 0:
ep, err = validateEndpoint(dockerCli, args[0])
if err != nil {
return err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
setEp = false
}
m, err := csvToMap(in.driverOpts)
if err != nil {
return err
}
if in.configFile == "" {
// if buildkit config is not provided, check if the default one is
// available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
logrus.Warnf("Using default BuildKit config in %s", f)
in.configFile = f
}
}
if err := ng.Update(in.nodeName, ep, in.platform, setEp, in.actionAppend, flags, in.configFile, m); err != nil {
return err
}
}
b, err := builder.Create(ctx, txn, dockerCli, builder.CreateOpts{
Name: in.name,
Driver: in.driver,
NodeName: in.nodeName,
Platforms: in.platform,
DriverOpts: in.driverOpts,
BuildkitdFlags: in.buildkitdFlags,
BuildkitdConfigFile: in.buildkitdConfigFile,
Use: in.use,
Endpoint: ep,
Append: in.actionAppend,
})
if err := txn.Save(ng); err != nil {
return err
}
b, err := builder.New(dockerCli,
builder.WithName(ng.Name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
// The store is no longer used from this point.
// Release it so we aren't holding the file lock during the boot.
release()
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
return err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
logrus.Warnf("Could not rollback to previous state: %s", err2)
}
return err
}
}
if in.use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return err
}
}
if in.bootstrap {
if _, err = b.Boot(ctx); err != nil {
@@ -76,7 +287,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
fmt.Printf("%s\n", b.Name)
fmt.Printf("%s\n", ng.Name)
return nil
}
@@ -96,9 +307,8 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
Short: "Create a new builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
@@ -106,16 +316,12 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
flags.StringVar(&options.name, "name", "", "Builder instance name")
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
flags.StringVar(&options.buildkitdFlags, "buildkitd-flags", "", "BuildKit daemon flags")
// we allow for both "--config" and "--buildkitd-config", although the latter is the recommended way to avoid ambiguity.
flags.StringVar(&options.buildkitdConfigFile, "buildkitd-config", "", "BuildKit daemon config file")
flags.StringVar(&options.buildkitdConfigFile, "config", "", "BuildKit daemon config file")
flags.MarkHidden("config")
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")
flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
flags.BoolVar(&options.use, "use", false, "Set the current builder instance")
@@ -125,3 +331,49 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
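// Illustration (not part of this patch): a hedged sketch of how csvToMap
// consumes --driver-opt values. encoding/csv handles the quoting, so commas
// inside a quoted field survive; the function name below is hypothetical.
func exampleCsvToMap() {
	m, err := csvToMap([]string{`"tolerations=key=foo,value=bar",replicas=1`, "namespace=default"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m["tolerations"]) // key=foo,value=bar
	fmt.Println(m["replicas"])    // 1
	fmt.Println(m["namespace"])   // default
}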
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}

commands/create_test.go (new file, 26 lines)

@@ -0,0 +1,26 @@
package commands
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}


@@ -1,92 +0,0 @@
package debug
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// DebugConfig is a user-specified configuration for the debugger.
type DebugConfig struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed in the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// DebuggableCmd is a command that supports the debugger and recognizes the user-specified DebugConfig.
type DebuggableCmd interface {
// NewDebugger returns a new *cobra.Command with debugger support, recognizing the DebugConfig.
NewDebugger(*DebugConfig) *cobra.Command
}
func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
var controlOptions control.ControlOptions
var progressMode string
var options DebugConfig
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))
}
return cmd
}
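For context, a minimal sketch of how a command plugs into this wiring; the type name is hypothetical and only illustrates the DebuggableCmd contract, under the assumption of this package's imports:

type debuggableBuild struct{ dockerCli command.Cli }

// NewDebugger returns the debug-wrapped variant of the command; cfg is the
// same DebugConfig that RootCmd populates from --invoke and --on.
func (b *debuggableBuild) NewDebugger(cfg *DebugConfig) *cobra.Command {
	return &cobra.Command{
		Use: "build [OPTIONS] PATH",
		RunE: func(cmd *cobra.Command, args []string) error {
			_ = cfg.InvokeFlag // the user-requested command to run in the debugger
			_ = cfg.OnFlag     // whether to break always or only on error
			return nil
		},
	}
}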


@@ -1,131 +0,0 @@
package commands
import (
"io"
"net"
"os"
"github.com/containerd/platforms"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type stdioOptions struct {
builder string
platform string
progress string
}
func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
ctx := appcontext.Context()
contextPathHash, _ := os.Getwd()
b, err := builder.New(dockerCli,
builder.WithName(opts.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
printer, err := progress.NewPrinter(ctx, os.Stderr, progressui.DisplayMode(opts.progress), progress.WithPhase("dial-stdio"), progress.WithDesc("builder: "+b.Name, "builder:"+b.Name))
if err != nil {
return err
}
var p *v1.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {
return errors.Wrapf(err, "invalid platform %q", opts.platform)
}
p = &pp
}
defer printer.Wait()
return progress.Wrap("Proxying to builder", printer.Write, func(sub progress.SubLogger) error {
var conn net.Conn
err := sub.Wrap("Dialing builder", func() error {
conn, err = build.Dial(ctx, nodes, printer, p)
if err != nil {
return err
}
return nil
})
if err != nil {
return err
}
defer conn.Close()
go func() {
<-ctx.Done()
closeWrite(conn)
}()
var eg errgroup.Group
eg.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
closeWrite(conn)
return err
})
eg.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
closeRead(conn)
return err
})
return eg.Wait()
})
}
func closeRead(conn net.Conn) error {
if c, ok := conn.(interface{ CloseRead() error }); ok {
return c.CloseRead()
}
return conn.Close()
}
func closeWrite(conn net.Conn) error {
if c, ok := conn.(interface{ CloseWrite() error }); ok {
return c.CloseWrite()
}
return conn.Close()
}
func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
opts := stdioOptions{}
cmd := &cobra.Command{
Use: "dial-stdio",
Short: "Proxy current stdio streams to builder instance",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
opts.builder = rootOpts.builder
return runDialStdio(dockerCli, opts)
},
}
flags := cmd.Flags()
flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
return cmd
}
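The closeRead/closeWrite helpers above probe optional half-close interfaces so each copy direction can signal EOF independently. A self-contained sketch of the same pattern (the target host is purely illustrative):

package main

import (
	"io"
	"net"
	"os"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	_, _ = io.WriteString(conn, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
	// Half-close the write side (a *net.TCPConn implements CloseWrite),
	// signalling EOF to the peer while the read side keeps draining.
	if c, ok := conn.(interface{ CloseWrite() error }); ok {
		_ = c.CloseWrite()
	}
	_, _ = io.Copy(os.Stdout, conn)
}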


@@ -1,7 +1,6 @@
package commands
import (
"context"
"fmt"
"io"
"os"
@@ -10,12 +9,12 @@ import (
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
@@ -26,7 +25,9 @@ type duOptions struct {
verbose bool
}
func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
ctx := appcontext.Context()
pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil {
return err
@@ -37,7 +38,7 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) er
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -112,9 +113,8 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runDiskUsage(cmd.Context(), dockerCli, options)
return runDiskUsage(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()


@@ -7,14 +7,12 @@ import (
"os"
"strings"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/docker/distribution/reference"
"github.com/moby/buildkit/util/appcontext"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -26,14 +24,12 @@ type createOptions struct {
builder string
files []string
tags []string
annotations []string
dryrun bool
actionAppend bool
progress string
preferIndex bool
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
if len(args) == 0 && len(in.files) == 0 {
return errors.Errorf("no sources specified")
}
@@ -114,6 +110,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
ctx := appcontext.Context()
b, err := builder.New(dockerCli, builder.WithName(in.builder))
if err != nil {
return err
@@ -155,12 +153,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
annotations, err := buildflags.ParseAnnotations(in.annotations)
if err != nil {
return errors.Wrapf(err, "failed to parse annotations")
}
dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
dt, desc, err := r.Combine(ctx, srcs)
if err != nil {
return err
}
@@ -175,7 +168,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
if err != nil {
return err
}
@@ -278,9 +271,8 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
return runCreate(cmd.Context(), dockerCli, options, args)
return runCreate(dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
@@ -288,9 +280,7 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
return cmd
}


@@ -1,14 +1,12 @@
package commands
import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -19,7 +17,9 @@ type inspectOptions struct {
raw bool
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions, name string) error {
func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
ctx := appcontext.Context()
if in.format != "" && in.raw {
return errors.Errorf("format and raw cannot be used together")
}
@@ -50,9 +50,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
Args: cli.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options, args[0])
return runInspect(dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()


@@ -1,7 +1,6 @@
package commands
import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
@@ -12,9 +11,8 @@ type RootOptions struct {
func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
Use: "imagetools",
Short: "Commands to work on images in registry",
}
cmd.AddCommand(


@@ -4,19 +4,15 @@ import (
"context"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -25,7 +21,9 @@ type inspectOptions struct {
builder string
}
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) error {
func runInspect(dockerCli command.Cli, in inspectOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -37,7 +35,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err := b.LoadNodes(timeoutCtx, true)
if in.bootstrap {
var ok bool
ok, err = b.Boot(ctx)
@@ -45,7 +43,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
return err
}
if ok {
nodes, err = b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err = b.LoadNodes(timeoutCtx, true)
}
}
@@ -84,48 +82,13 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
if len(n.BuildkitdFlags) > 0 {
fmt.Fprintf(w, "BuildKit daemon flags:\t%s\n", strings.Join(n.BuildkitdFlags, " "))
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
}
if nodes[i].Version != "" {
fmt.Fprintf(w, "BuildKit version:\t%s\n", nodes[i].Version)
}
platforms := platformutil.FormatInGroups(n.Node.Platforms, n.Platforms)
if len(platforms) > 0 {
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platforms, ", "))
}
if debug.IsEnabled() {
fmt.Fprintf(w, "Features:\n")
features := nodes[i].Driver.Features(ctx)
featKeys := make([]string, 0, len(features))
for k := range features {
featKeys = append(featKeys, string(k))
}
sort.Strings(featKeys)
for _, k := range featKeys {
fmt.Fprintf(w, "\t%s:\t%t\n", k, features[driver.Feature(k)])
}
}
if len(nodes[i].Labels) > 0 {
fmt.Fprintf(w, "Labels:\n")
for _, k := range sortedKeys(nodes[i].Labels) {
v := nodes[i].Labels[k]
fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
}
}
for ri, rule := range nodes[i].GCPolicy {
fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
if len(rule.Filter) > 0 {
fmt.Fprintf(w, "\tFilters:\t%s\n", strings.Join(rule.Filter, " "))
}
if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
}
if rule.KeepBytes > 0 {
fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
}
fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
}
}
}
@@ -147,9 +110,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runInspect(cmd.Context(), dockerCli, options)
return runInspect(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
@@ -157,14 +119,3 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
func sortedKeys(m map[string]string) []string {
s := make([]string, len(m))
i := 0
for k := range m {
s[i] = k
i++
}
sort.Strings(s)
return s
}


@@ -4,7 +4,6 @@ import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -15,7 +14,7 @@ import (
type installOptions struct {
}
func runInstall(_ command.Cli, _ installOptions) error {
func runInstall(dockerCli command.Cli, in installOptions) error {
dir := config.Dir()
if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "could not create docker config")
@@ -47,8 +46,7 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runInstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command


@@ -2,43 +2,29 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"sort"
"io"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
lsNameNodeHeader = "NAME/NODE"
lsDriverEndpointHeader = "DRIVER/ENDPOINT"
lsStatusHeader = "STATUS"
lsLastActivityHeader = "LAST ACTIVITY"
lsBuildkitHeader = "BUILDKIT"
lsPlatformsHeader = "PLATFORMS"
lsIndent = ` \_ `
lsDefaultTableFormat = "table {{.Name}}\t{{.DriverEndpoint}}\t{{.Status}}\t{{.Buildkit}}\t{{.Platforms}}"
)
type lsOptions struct {
format string
}
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
func runLs(dockerCli command.Cli, in lsOptions) error {
ctx := appcontext.Context()
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
@@ -62,7 +48,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
for _, b := range builders {
func(b *builder.Builder) {
eg.Go(func() error {
_, _ = b.LoadNodes(timeoutCtx, builder.WithData())
_, _ = b.LoadNodes(timeoutCtx, true)
return nil
})
}(b)
@@ -72,9 +58,22 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
return err
} else if hasErrors {
w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
printErr := false
for _, b := range builders {
if current.Name == b.Name {
b.Name += " *"
}
if ok := printBuilder(w, b); !ok {
printErr = true
}
}
w.Flush()
if printErr {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
for _, b := range builders {
if b.Err() != nil {
@@ -92,6 +91,31 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return nil
}
func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
ok = true
var err string
if b.Err() != nil {
ok = false
err = "error"
}
fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
if b.Err() == nil {
for _, n := range b.Nodes() {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
if n.Err != nil {
ok = false
fmt.Fprintf(w, " %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
} else {
fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
}
}
}
return
}
func lsCmd(dockerCli command.Cli) *cobra.Command {
var options lsOptions
@@ -100,175 +124,12 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
Short: "List builder instances",
Args: cli.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(cmd.Context(), dockerCli, options)
return runLs(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if format == formatter.TableFormatKey {
format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(format),
}
sort.SliceStable(builders, func(i, j int) bool {
ierr := builders[i].Err() != nil
jerr := builders[j].Err() != nil
if ierr && !jerr {
return false
} else if !ierr && jerr {
return true
}
return i < j
})
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
format: ctx.Format,
}); err != nil {
return err
}
if b.Err() != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
continue
}
for _, n := range b.Nodes() {
if n.Err != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
}
if err := format(&lsContext{
format: ctx.Format,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
node: n,
}); err != nil {
return err
}
}
}
return nil
}
lsCtx := lsContext{}
lsCtx.Header = formatter.SubHeaderContext{
"Name": lsNameNodeHeader,
"DriverEndpoint": lsDriverEndpointHeader,
"LastActivity": lsLastActivityHeader,
"Status": lsStatusHeader,
"Buildkit": lsBuildkitHeader,
"Platforms": lsPlatformsHeader,
}
return hasErrors, ctx.Write(&lsCtx, render)
}
type lsBuilder struct {
*builder.Builder
Current bool
}
type lsContext struct {
formatter.HeaderContext
Builder *lsBuilder
format formatter.Format
node builder.Node
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
}
func (c *lsContext) Name() string {
if c.node.Name == "" {
name := c.Builder.Name
if c.Builder.Current && c.format.IsTable() {
name += "*"
}
return name
}
if c.format.IsTable() {
return lsIndent + c.node.Name
}
return c.node.Name
}
func (c *lsContext) DriverEndpoint() string {
if c.node.Name == "" {
return c.Builder.Driver
}
if c.format.IsTable() {
return lsIndent + c.node.Endpoint
}
return c.node.Endpoint
}
func (c *lsContext) LastActivity() string {
if c.node.Name != "" || c.Builder.LastActivity.IsZero() {
return ""
}
return c.Builder.LastActivity.UTC().Format(time.RFC3339)
}
func (c *lsContext) Status() string {
if c.node.Name == "" {
if c.Builder.Err() != nil {
return "error"
}
return ""
}
if c.node.Err != nil {
return "error"
}
if c.node.DriverInfo != nil {
return c.node.DriverInfo.Status.String()
}
return ""
}
func (c *lsContext) Buildkit() string {
if c.node.Name == "" {
return ""
}
return c.node.Version
}
func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
}
func (c *lsContext) Error() string {
if c.node.Name != "" && c.node.Err != nil {
return c.node.Err.Error()
} else if err := c.Builder.Err(); err != nil {
return err.Error()
}
return ""
}
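Putting the formatter to work: the new --format flag takes a Go template evaluated against the lsContext fields above (.Name, .DriverEndpoint, .Status, .Buildkit, .Platforms, .LastActivity). An illustrative invocation:

docker buildx ls --format "{{.Name}}: {{.Status}} {{.Platforms}}"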

commands/print.go (new file, 48 lines)

@@ -0,0 +1,48 @@
package commands
import (
"fmt"
"io"
"log"
"os"
"github.com/docker/buildx/build"
"github.com/docker/docker/api/types/versions"
"github.com/moby/buildkit/frontend/subrequests"
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
)
func printResult(f *build.PrintFunc, res map[string]string) error {
switch f.Name {
case "outline":
return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
case "targets":
return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
case "subrequests.describe":
return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
default:
if dt, ok := res["result.txt"]; ok {
fmt.Print(dt)
} else {
log.Printf("%s %+v", f, res)
}
}
return nil
}
type printFunc func([]byte, io.Writer) error
func printValue(printer printFunc, version string, format string, res map[string]string) error {
if format == "json" {
fmt.Fprintln(os.Stdout, res["result.json"])
return nil
}
if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
// structure is too new and we don't know how to print it
fmt.Fprint(os.Stdout, res["result.txt"])
return nil
}
return printer([]byte(res["result.json"]), os.Stdout)
}
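A hedged sketch of feeding a frontend subrequest result through printResult; the map contents below are invented for illustration. With Format set to "json", printValue short-circuits and emits result.json verbatim:

res := map[string]string{
	"result.txt":  "TARGET   DESCRIPTION\nbinary   build the binary\n",
	"result.json": `{"targets":[{"name":"binary","description":"build the binary"}]}`,
	"version":     "1.0.0",
}
if err := printResult(&build.PrintFunc{Name: "targets", Format: "json"}, res); err != nil {
	log.Fatal(err)
}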


@@ -1,7 +1,6 @@
package commands
import (
"context"
"fmt"
"os"
"strings"
@@ -9,13 +8,13 @@ import (
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/opts"
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@@ -35,7 +34,9 @@ const (
allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
)
func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) error {
func runPrune(dockerCli command.Cli, opts pruneOptions) error {
ctx := appcontext.Context()
pruneFilters := opts.filter.Value()
pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
@@ -49,12 +50,8 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
warning = allCacheWarning
}
if !opts.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), warning); err != nil {
return err
} else if !ok {
return nil
}
if !opts.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), warning) {
return nil
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
@@ -62,7 +59,7 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -140,9 +137,8 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runPrune(cmd.Context(), dockerCli, options)
return runPrune(dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
@@ -195,8 +191,6 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
case 1:
if filterKey == "id" {
filters = append(filters, filterKey+"~="+values[0])
} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
filters = append(filters, filterKey+"="+values[0])
} else {
filters = append(filters, filterKey+"=="+values[0])
}
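In short, the filter key selects the BuildKit comparison operator. A sketch of the mapping, with the expected strings noted in comments (illustrative only):

f := filters.NewArgs()
f.Add("id", "abc123")          // special-cased to a prefix match: id~=abc123
f.Add("type!", "frontend")     // trailing ! or ~ keeps the operator: type!=frontend
f.Add("description", "docker") // default is an exact match: description==docker
if pi, err := toBuildkitPruneInfo(f); err == nil {
	fmt.Println(pi.Filter)
}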


@@ -8,15 +8,16 @@ import (
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builders []string
builder string
keepState bool
keepDaemon bool
allInactive bool
@@ -27,13 +28,11 @@ const (
rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
)
func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
if in.allInactive && !in.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), rmInactiveWarning); err != nil {
return err
} else if !ok {
return nil
}
func runRm(dockerCli command.Cli, in rmOptions) error {
ctx := appcontext.Context()
if in.allInactive && !in.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), rmInactiveWarning) {
return nil
}
txn, release, err := storeutil.GetStore(dockerCli)
@@ -46,52 +45,33 @@ func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
return rmAllInactive(ctx, txn, dockerCli, in)
}
eg, _ := errgroup.WithContext(ctx)
for _, name := range in.builders {
func(name string) {
eg.Go(func() (err error) {
defer func() {
if err == nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", name)
} else {
_, _ = fmt.Fprintf(dockerCli.Err(), "failed to remove %s: %v\n", name, err)
}
}()
b, err := builder.New(dockerCli,
builder.WithName(name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
return nil
})
}(name)
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
if err := eg.Wait(); err != nil {
return errors.New("failed to remove one or more builders")
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
return nil
}
@@ -99,24 +79,24 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [NAME] [NAME...]",
Short: "Remove one or more builder instances",
Use: "rm [NAME]",
Short: "Remove a builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builders = []string{rootOpts.builder}
options.builder = rootOpts.builder
if len(args) > 0 {
if options.allInactive {
return errors.New("cannot specify builder name when --all-inactive is set")
}
options.builders = args
options.builder = args[0]
}
return runRm(cmd.Context(), dockerCli, options)
return runRm(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the BuildKit daemon running")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the buildkitd daemon running")
flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
@@ -157,7 +137,7 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
for _, b := range builders {
func(b *builder.Builder) {
eg.Go(func() error {
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
nodes, err := b.LoadNodes(timeoutCtx, true)
if err != nil {
return errors.Wrapf(err, "cannot load %s", b.Name)
}


@@ -3,18 +3,12 @@ package commands
import (
"os"
debugcmd "github.com/docker/buildx/commands/debug"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/moby/buildkit/util/appcontext"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -28,18 +22,12 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
Annotations: map[string]string{
annotation.CodeDelimiter: `"`,
},
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
}
return plugin.PersistentPreRunE(cmd, args)
},
}
if !isPlugin {
if isPlugin {
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
return plugin.PersistentPreRunE(cmd, args)
}
} else {
// match plugin behavior for standalone mode
// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
cmd.SilenceUsage = true
@@ -47,11 +35,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
cmd.TraverseChildren = true
cmd.DisableFlagsInUseLine = true
cli.DisableFlagsInUseLine(cmd)
// DEBUG=1 should behave the same as --debug at the docker root level
if debug.IsEnabled() {
debug.Enable()
}
}
logrus.SetFormatter(&logutil.Formatter{})
@@ -64,9 +47,16 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
"using default config store",
))
if !confutil.IsExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
}
// filter out useless commandConn.CloseWrite warning message that can occur
// when listing builder instances with "buildx ls" for those that are
// unreachable: "commandConn.CloseWrite: commandconn: failed to wait: signal: killed"
// https://github.com/docker/cli/blob/3fb4fb83dfb5db0c0753a8316f21aea54dab32c5/cli/connhelper/commandconn/commandconn.go#L203-L214
logrus.AddHook(logutil.NewFilter([]logrus.Level{
logrus.WarnLevel,
},
"commandConn.CloseWrite:",
"commandConn.CloseRead:",
))
addCommands(cmd, dockerCli)
return cmd
@@ -81,10 +71,9 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
rootFlags(opts, cmd.PersistentFlags())
cmd.AddCommand(
buildCmd(dockerCli, opts, nil),
buildCmd(dockerCli, opts),
bakeCmd(dockerCli, opts),
createCmd(dockerCli),
dialStdioCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
lsCmd(dockerCli),
useCmd(dockerCli, opts),
@@ -97,17 +86,6 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if confutil.IsExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))
remote.AddControllerCommands(cmd, dockerCli)
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck
"builder",
completion.BuilderNames(dockerCli),
)
}
func rootFlags(options *rootOptions, flags *pflag.FlagSet) {


@@ -4,9 +4,9 @@ import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -14,7 +14,9 @@ type stopOptions struct {
builder string
}
func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
func runStop(dockerCli command.Cli, in stopOptions) error {
ctx := appcontext.Context()
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -22,7 +24,7 @@ func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
nodes, err := b.LoadNodes(ctx, false)
if err != nil {
return err
}
@@ -42,9 +44,8 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runStop(cmd.Context(), dockerCli, options)
return runStop(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
return cmd


@@ -4,7 +4,6 @@ import (
"os"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
@@ -15,7 +14,7 @@ import (
type uninstallOptions struct {
}
func runUninstall(_ command.Cli, _ uninstallOptions) error {
func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
dir := config.Dir()
cfg, err := config.Load(dir)
if err != nil {
@@ -53,8 +52,7 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runUninstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
}
// hide builder persistent flag for this command


@@ -4,7 +4,6 @@ import (
"os"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
@@ -35,7 +34,10 @@ func runUse(dockerCli command.Cli, in useOptions) error {
if err != nil {
return err
}
return txn.SetCurrent(ep, "", false, false)
if err := txn.SetCurrent(ep, "", false, false); err != nil {
return err
}
return nil
}
list, err := dockerCli.ContextStore().List()
if err != nil {
@@ -55,7 +57,11 @@ func runUse(dockerCli command.Cli, in useOptions) error {
if err != nil {
return err
}
return txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault)
if err := txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault); err != nil {
return err
}
return nil
}
func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -72,7 +78,6 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runUse(dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()


@@ -1,57 +0,0 @@
package commands
import (
"bufio"
"context"
"fmt"
"io"
"os"
"runtime"
"strings"
"github.com/docker/cli/cli/streams"
)
func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
done := make(chan struct{})
var ok bool
go func() {
ok = promptForConfirmation(ins, out, msg)
close(done)
}()
select {
case <-ctx.Done():
return false, context.Cause(ctx)
case <-done:
return ok, nil
}
}
// promptForConfirmation requests and checks confirmation from the user.
// This will display the provided message followed by ' [y/N] '. If
// the user inputs 'y' or 'Y' it returns true, otherwise false. If no
// message is provided "Are you sure you want to proceed? [y/N] "
// will be used instead.
//
// Copied from github.com/docker/cli since the upstream version changed
// recently with an incompatible change.
//
// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
// for discussion on the issue.
func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
if message == "" {
message = "Are you sure you want to proceed?"
}
message += " [y/N] "
_, _ = fmt.Fprint(outs, message)
// On Windows, force the use of the regular OS stdin stream.
if runtime.GOOS == "windows" {
ins = streams.NewIn(os.Stdin)
}
reader := bufio.NewReader(ins)
answer, _, _ := reader.ReadLine()
return strings.ToLower(string(answer)) == "y"
}
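For illustration, the context-aware wrapper above lets callers abandon a pending confirmation instead of blocking on stdin forever. A minimal sketch in the same package (the shutdown channel is hypothetical):

func confirmOrAbort(shutdown <-chan struct{}) (bool, error) {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		<-shutdown // e.g. closed on SIGINT
		cancel()
	}()
	// Returns (false, context.Canceled) if shutdown fires before the user
	// answers; otherwise the user's y/N answer.
	return prompt(ctx, os.Stdin, os.Stdout, rmInactiveWarning)
}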


@@ -4,14 +4,13 @@ import (
"fmt"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
func runVersion(_ command.Cli) error {
func runVersion(dockerCli command.Cli) error {
fmt.Println(version.Package, version.Version, version.Revision)
return nil
}
@@ -24,7 +23,6 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runVersion(dockerCli)
},
ValidArgsFunction: completion.Disable,
}
// hide builder persistent flag for this command


@@ -1,283 +0,0 @@
package build
import (
"context"
"io"
"os"
"path/filepath"
"strings"
"sync"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/container"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
)
const defaultTargetName = "default"
// RunBuild runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
for name, path := range in.NamedContexts {
contexts[name] = build.NamedContext{Path: path}
}
opts := build.Options{
Inputs: build.Inputs{
ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName,
InStream: inStream,
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
}
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, err
}
opts.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, secrets)
sshSpecs := in.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, err := controllerapi.CreateExports(in.Exports)
if err != nil {
return nil, nil, err
}
if in.ExportPush {
var pushUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterImage {
outputs[i].Attrs["push"] = "true"
pushUsed = true
}
}
if !pushUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterImage,
Attrs: map[string]string{
"push": "true",
},
})
}
}
if in.ExportLoad {
var loadUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterDocker {
if _, ok := outputs[i].Attrs["dest"]; !ok {
loadUsed = true
break
}
}
}
if !loadUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterDocker,
Attrs: map[string]string{},
})
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
}
}
opts.Exports = outputs
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
opts.Attests = controllerapi.CreateAttestations(in.Attests)
opts.SourcePolicy = in.SourcePolicy
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, err
}
opts.Allow = allow
if in.PrintFunc != nil {
opts.PrintFunc = &build.PrintFunc{
Name: in.PrintFunc.Name,
Format: in.PrintFunc.Format,
IgnoreStatus: in.PrintFunc.IgnoreStatus,
}
}
// key string used for kubernetes "sticky" mode
contextPathHash, err := filepath.Abs(in.ContextPath)
if err != nil {
contextPathHash = in.ContextPath
}
// TODO: this should not be loaded this side of the controller api
b, err := builder.New(dockerCli,
builder.WithName(in.Builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return nil, nil, err
}
resp, res, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, err
}
return resp, res, nil
}
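// runBuildExample is an illustrative sketch, not part of this patch: per the
// NOTE above, callers must consult the ResultHandle even when RunBuild fails,
// since the failed build stays attachable for debugging.
func runBuildExample(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, pw progress.Writer) error {
	resp, res, err := RunBuild(ctx, dockerCli, in, os.Stdin, pw, true)
	if err != nil {
		if res != nil {
			// res still wraps the failed build; a monitor or debugger can be
			// launched against it before the handle is discarded.
		}
		return err
	}
	_ = resp
	return nil
}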
// buildTargets runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle
var resp map[string]*client.SolveResponse
var err error
if generateResult {
var mu sync.Mutex
var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock()
defer mu.Unlock()
if res == nil || driverIndex < idx {
idx, res = driverIndex, gotRes
}
})
} else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
}
if err != nil {
return nil, res, err
}
return resp[defaultTargetName], res, err
}
func wrapBuildError(err error, bake bool) error {
if err == nil {
return nil
}
st, ok := grpcerrors.AsGRPCStatus(err)
if ok {
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
msg := "current frontend does not support --build-context."
if bake {
msg = "current frontend does not support defining additional contexts for targets."
}
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
return &wrapped{err, msg}
}
}
return err
}
type wrapped struct {
err error
msg string
}
func (w *wrapped) Error() string {
return w.msg
}
func (w *wrapped) Unwrap() error {
return w.err
}
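// wrappedExample is an illustrative sketch, not part of this patch: wrapped
// swaps the user-facing message while Unwrap keeps the chain intact, so
// errors.Is / errors.As still reach the underlying cause.
func wrappedExample(cause error) string {
	err := &wrapped{cause, "current frontend does not support --build-context."}
	_ = errors.Unwrap(err) // returns cause (pkg/errors >= v0.9 mirrors stdlib Unwrap)
	return err.Error()     // the friendlier message
}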
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
defer release()
return txn.UpdateLastActivity(ng)
}
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
if u == nil {
return nil
}
values := make(map[string]*container.Ulimit)
for k, v := range u.Values {
values[k] = &container.Ulimit{
Name: v.Name,
Hard: v.Hard,
Soft: v.Soft,
}
}
return dockeropts.NewUlimitOpt(&values)
}


@@ -1,32 +0,0 @@
package control
import (
"context"
"io"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
)
type BuildxController interface {
Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
// Invoke starts an IO session into the specified process.
// If pid doesn't match any running process, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error
Close() error
List(ctx context.Context) (refs []string, _ error)
Disconnect(ctx context.Context, ref string) error
ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
DisconnectProcess(ctx context.Context, ref, pid string) error
Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
}
type ControlOptions struct {
ServerConfig string
Root string
Detach bool
}


@@ -1,36 +0,0 @@
package controller
import (
"context"
"fmt"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/controller/local"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
)
func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
var name string
if opts.Detach {
name = "remote"
} else {
name = "local"
}
var c control.BuildxController
err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
if opts.Detach {
c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
} else {
c = local.NewLocalBuildxController(ctx, dockerCli, l)
}
return err
})
if err != nil {
return nil, errors.Wrap(err, "failed to start buildx controller")
}
return c, nil
}


@@ -1,34 +0,0 @@
package errdefs
import (
"github.com/containerd/typeurl/v2"
"github.com/moby/buildkit/util/grpcerrors"
)
func init() {
typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
}
type BuildError struct {
Build
error
}
func (e *BuildError) Unwrap() error {
return e.error
}
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return &e.Build
}
func WrapBuild(err error, ref string) error {
if err == nil {
return nil
}
return &BuildError{Build: Build{Ref: ref}, error: err}
}
func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: *b}
}
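A sketch of recovering the ref on the consumer side (stdlib errors and fmt assumed; the ref value is illustrative):

err := WrapBuild(errors.New("build failed"), "example-ref")
var be *BuildError
if errors.As(err, &be) {
	fmt.Println(be.Ref) // "example-ref", which a client can use to re-attach to the build
}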


@@ -1,77 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: errdefs.proto
package errdefs
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
_ "github.com/moby/buildkit/solver/pb"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Build struct {
Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Build) Reset() { *m = Build{} }
func (m *Build) String() string { return proto.CompactTextString(m) }
func (*Build) ProtoMessage() {}
func (*Build) Descriptor() ([]byte, []int) {
return fileDescriptor_689dc58a5060aff5, []int{0}
}
func (m *Build) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Build.Unmarshal(m, b)
}
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Build.Marshal(b, m, deterministic)
}
func (m *Build) XXX_Merge(src proto.Message) {
xxx_messageInfo_Build.Merge(m, src)
}
func (m *Build) XXX_Size() int {
return xxx_messageInfo_Build.Size(m)
}
func (m *Build) XXX_DiscardUnknown() {
xxx_messageInfo_Build.DiscardUnknown(m)
}
var xxx_messageInfo_Build proto.InternalMessageInfo
func (m *Build) GetRef() string {
if m != nil {
return m.Ref
}
return ""
}
func init() {
proto.RegisterType((*Build)(nil), "errdefs.Build")
}
func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
var fileDescriptor_689dc58a5060aff5 = []byte{
// 111 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
}

@@ -1,9 +0,0 @@
syntax = "proto3";
package errdefs;
import "github.com/moby/buildkit/solver/pb/ops.proto";
message Build {
string Ref = 1;
}

@@ -1,3 +0,0 @@
package errdefs
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto

@@ -1,146 +0,0 @@
package local
import (
"context"
"io"
"sync/atomic"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
return &localController{
dockerCli: dockerCli,
ref: "local",
processes: processes.NewManager(),
}
}
type buildConfig struct {
// TODO: these two structs should be merged
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
resultCtx *build.ResultHandle
buildOptions *controllerapi.BuildOptions
}
type localController struct {
dockerCli command.Cli
ref string
buildConfig buildConfig
processes *processes.Manager
buildOnGoing atomic.Bool
}
func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
if !b.buildOnGoing.CompareAndSwap(false, true) {
return "", nil, errors.New("build ongoing")
}
defer b.buildOnGoing.Store(false)
resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
// NOTE: RunBuild can return *build.ResultHandle even on error.
if res != nil {
b.buildConfig = buildConfig{
resultCtx: res,
buildOptions: &options,
}
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, b.ref)
}
}
if buildErr != nil {
return "", nil, buildErr
}
return b.ref, resp, nil
}
func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
}
return b.processes.ListProcesses(), nil
}
func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
}
return b.processes.DeleteProcess(pid)
}
func (b *localController) cancelRunningProcesses() {
b.processes.CancelRunningProcesses()
}
func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
}
proc, ok := b.processes.Get(pid)
if !ok {
// Start a new process.
if b.buildConfig.resultCtx == nil {
return errors.New("no build result is registered")
}
var err error
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
if err != nil {
return err
}
}
// Attach the provided IO streams to this process
ioCancelledCh := make(chan struct{})
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })
select {
case <-ioCancelledCh:
return errors.Errorf("io cancelled")
case err := <-proc.Done():
return err
case <-ctx.Done():
return ctx.Err()
}
}
func (b *localController) Kill(context.Context) error {
b.Close()
return nil
}
func (b *localController) Close() error {
b.cancelRunningProcesses()
if b.buildConfig.resultCtx != nil {
b.buildConfig.resultCtx.Done()
}
// TODO: cancel ongoing builds?
return nil
}
func (b *localController) List(ctx context.Context) (res []string, _ error) {
return []string{b.ref}, nil
}
func (b *localController) Disconnect(ctx context.Context, key string) error {
b.Close()
return nil
}
func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
}
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}

@@ -1,20 +0,0 @@
package pb
func CreateAttestations(attests []*Attest) map[string]*string {
result := map[string]*string{}
for _, attest := range attests {
// ignore duplicates
if _, ok := result[attest.Type]; ok {
continue
}
if attest.Disabled {
result[attest.Type] = nil
continue
}
attrs := attest.Attrs
result[attest.Type] = &attrs
}
return result
}
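
A worked example of the convention above (hypothetical values): a nil map entry disables an attestation type, a non-nil pointer carries its attribute string, and later duplicates of a type are dropped.

// Hypothetical worked example, not part of this changeset.
in := []*pb.Attest{
	{Type: "provenance", Attrs: "mode=max"},
	{Type: "sbom", Disabled: true},
	{Type: "provenance", Attrs: "mode=min"}, // duplicate type: ignored
}
out := pb.CreateAttestations(in)
// out["provenance"] -> pointer to "mode=max"
// out["sbom"]       -> nil (explicitly disabled)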

@@ -1,21 +0,0 @@
package pb
import "github.com/moby/buildkit/client"
func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
for k, v := range entry.Attrs {
out.Attrs[k] = v
}
outs = append(outs, out)
}
return outs
}
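
For example (hypothetical values), a registry cache entry converts one-to-one into the BuildKit client type, with the attribute map copied rather than shared:

// Hypothetical worked example, not part of this changeset.
entries := []*pb.CacheOptionsEntry{
	{Type: "registry", Attrs: map[string]string{"ref": "user/app:cache"}},
}
caches := pb.CreateCaches(entries)
// caches[0].Type == "registry"
// caches[0].Attrs["ref"] == "user/app:cache"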

File diff suppressed because it is too large

@@ -1,250 +0,0 @@
syntax = "proto3";
package buildx.controller.v1;
import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";
option go_package = "pb";
service Controller {
rpc Build(BuildRequest) returns (BuildResponse);
rpc Inspect(InspectRequest) returns (InspectResponse);
rpc Status(StatusRequest) returns (stream StatusResponse);
rpc Input(stream InputMessage) returns (InputResponse);
rpc Invoke(stream Message) returns (stream Message);
rpc List(ListRequest) returns (ListResponse);
rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
rpc Info(InfoRequest) returns (InfoResponse);
rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}
message ListProcessesRequest {
string Ref = 1;
}
message ListProcessesResponse {
repeated ProcessInfo Infos = 1;
}
message ProcessInfo {
string ProcessID = 1;
InvokeConfig InvokeConfig = 2;
}
message DisconnectProcessRequest {
string Ref = 1;
string ProcessID = 2;
}
message DisconnectProcessResponse {
}
message BuildRequest {
string Ref = 1;
BuildOptions Options = 2;
}
message BuildOptions {
string ContextPath = 1;
string DockerfileName = 2;
PrintFunc PrintFunc = 3;
map<string, string> NamedContexts = 4;
repeated string Allow = 5;
repeated Attest Attests = 6;
map<string, string> BuildArgs = 7;
repeated CacheOptionsEntry CacheFrom = 8;
repeated CacheOptionsEntry CacheTo = 9;
string CgroupParent = 10;
repeated ExportEntry Exports = 11;
repeated string ExtraHosts = 12;
map<string, string> Labels = 13;
string NetworkMode = 14;
repeated string NoCacheFilter = 15;
repeated string Platforms = 16;
repeated Secret Secrets = 17;
int64 ShmSize = 18;
repeated SSH SSH = 19;
repeated string Tags = 20;
string Target = 21;
UlimitOpt Ulimits = 22;
string Builder = 23;
bool NoCache = 24;
bool Pull = 25;
bool ExportPush = 26;
bool ExportLoad = 27;
moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
string Ref = 29;
string GroupRef = 30;
repeated string Annotations = 31;
string ProvenanceResponseMode = 32;
}
message ExportEntry {
string Type = 1;
map<string, string> Attrs = 2;
string Destination = 3;
}
message CacheOptionsEntry {
string Type = 1;
map<string, string> Attrs = 2;
}
message Attest {
string Type = 1;
bool Disabled = 2;
string Attrs = 3;
}
message SSH {
string ID = 1;
repeated string Paths = 2;
}
message Secret {
string ID = 1;
string FilePath = 2;
string Env = 3;
}
message PrintFunc {
string Name = 1;
string Format = 2;
bool IgnoreStatus = 3;
}
message InspectRequest {
string Ref = 1;
}
message InspectResponse {
BuildOptions Options = 1;
}
message UlimitOpt {
map<string, Ulimit> values = 1;
}
message Ulimit {
string Name = 1;
int64 Hard = 2;
int64 Soft = 3;
}
message BuildResponse {
map<string, string> ExporterResponse = 1;
}
message DisconnectRequest {
string Ref = 1;
}
message DisconnectResponse {}
message ListRequest {
string Ref = 1;
}
message ListResponse {
repeated string keys = 1;
}
message InputMessage {
oneof Input {
InputInitMessage Init = 1;
DataMessage Data = 2;
}
}
message InputInitMessage {
string Ref = 1;
}
message DataMessage {
bool EOF = 1; // true if eof was reached
bytes Data = 2; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message InputResponse {}
message Message {
oneof Input {
InitMessage Init = 1;
// FdMessage used from client to server for input (stdin) and
// from server to client for output (stdout, stderr)
FdMessage File = 2;
// ResizeMessage used from client to server for terminal resize events
ResizeMessage Resize = 3;
// SignalMessage is used from client to server to send signal events
SignalMessage Signal = 4;
}
}
message InitMessage {
string Ref = 1;
// If ProcessID already exists on the server, the server attaches to it
// instead of starting a new one. In this case, InvokeConfig is ignored.
string ProcessID = 2;
InvokeConfig InvokeConfig = 3;
}
message InvokeConfig {
repeated string Entrypoint = 1;
repeated string Cmd = 2;
bool NoCmd = 11; // Do not set cmd but use the image's default
repeated string Env = 3;
string User = 4;
bool NoUser = 5; // Do not set user but use the image's default
string Cwd = 6;
bool NoCwd = 7; // Do not set cwd but use the image's default
bool Tty = 8;
bool Rollback = 9; // Kill all processes in the container and recreate it.
bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}
message FdMessage {
uint32 Fd = 1; // what fd the data was from
bool EOF = 2; // true if eof was reached
bytes Data = 3; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message ResizeMessage {
uint32 Rows = 1;
uint32 Cols = 2;
}
message SignalMessage {
// we only send name (ie HUP, INT) because the int values
// are platform dependent.
string Name = 1;
}
message StatusRequest {
string Ref = 1;
}
message StatusResponse {
repeated moby.buildkit.v1.Vertex vertexes = 1;
repeated moby.buildkit.v1.VertexStatus statuses = 2;
repeated moby.buildkit.v1.VertexLog logs = 3;
repeated moby.buildkit.v1.VertexWarning warnings = 4;
}
message InfoRequest {}
message InfoResponse {
BuildxVersion buildxVersion = 1;
}
message BuildxVersion {
string package = 1;
string version = 2;
string revision = 3;
}

@@ -1,3 +0,0 @@
package pb
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto

@@ -1,181 +0,0 @@
package pb
import (
"path/filepath"
"strings"
"github.com/moby/buildkit/util/gitutil"
)
// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them with absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
localContext := false
if options.ContextPath != "" && options.ContextPath != "-" {
if !isRemoteURL(options.ContextPath) {
localContext = true
options.ContextPath, err = filepath.Abs(options.ContextPath)
if err != nil {
return nil, err
}
}
}
if options.DockerfileName != "" && options.DockerfileName != "-" {
if localContext && !isHTTPURL(options.DockerfileName) {
options.DockerfileName, err = filepath.Abs(options.DockerfileName)
if err != nil {
return nil, err
}
}
}
var contexts map[string]string
for k, v := range options.NamedContexts {
if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
// url prefix, this is a remote path
} else if strings.HasPrefix(v, "oci-layout://") {
// oci layout prefix, this is a local path
p := strings.TrimPrefix(v, "oci-layout://")
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
v = "oci-layout://" + p
} else {
// no prefix, assume local path
v, err = filepath.Abs(v)
if err != nil {
return nil, err
}
}
if contexts == nil {
contexts = make(map[string]string)
}
contexts[k] = v
}
options.NamedContexts = contexts
var cacheFrom []*CacheOptionsEntry
for _, co := range options.CacheFrom {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "src":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheFrom = append(cacheFrom, co)
default:
cacheFrom = append(cacheFrom, co)
}
}
options.CacheFrom = cacheFrom
var cacheTo []*CacheOptionsEntry
for _, co := range options.CacheTo {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "dest":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheTo = append(cacheTo, co)
default:
cacheTo = append(cacheTo, co)
}
}
options.CacheTo = cacheTo
var exports []*ExportEntry
for _, e := range options.Exports {
if e.Destination != "" && e.Destination != "-" {
e.Destination, err = filepath.Abs(e.Destination)
if err != nil {
return nil, err
}
}
exports = append(exports, e)
}
options.Exports = exports
var secrets []*Secret
for _, s := range options.Secrets {
if s.FilePath != "" {
s.FilePath, err = filepath.Abs(s.FilePath)
if err != nil {
return nil, err
}
}
secrets = append(secrets, s)
}
options.Secrets = secrets
var ssh []*SSH
for _, s := range options.SSH {
var ps []string
for _, pt := range s.Paths {
p := pt
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
ps = append(ps, p)
}
s.Paths = ps
ssh = append(ssh, s)
}
options.SSH = ssh
return options, nil
}
// isHTTPURL reports whether the provided string is an HTTP(S) URL by checking
// whether it has an http:// or https:// scheme. No validation is performed to
// verify that the URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func isRemoteURL(c string) bool {
if isHTTPURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
return true
}
return false
}

@@ -1,248 +0,0 @@
package pb
import (
"os"
"path/filepath"
"reflect"
"testing"
"github.com/stretchr/testify/require"
)
func TestResolvePaths(t *testing.T) {
tmpwd, err := os.MkdirTemp("", "testresolvepaths")
require.NoError(t, err)
defer os.Remove(tmpwd)
require.NoError(t, os.Chdir(tmpwd))
tests := []struct {
name string
options BuildOptions
want BuildOptions
}{
{
name: "contextpath",
options: BuildOptions{ContextPath: "test"},
want: BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
},
{
name: "contextpath-cwd",
options: BuildOptions{ContextPath: "."},
want: BuildOptions{ContextPath: tmpwd},
},
{
name: "contextpath-dash",
options: BuildOptions{ContextPath: "-"},
want: BuildOptions{ContextPath: "-"},
},
{
name: "contextpath-ssh",
options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "dockerfilename",
options: BuildOptions{DockerfileName: "test", ContextPath: "."},
want: BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
},
{
name: "dockerfilename-dash",
options: BuildOptions{DockerfileName: "-", ContextPath: "."},
want: BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
},
{
name: "dockerfilename-remote",
options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "contexts",
options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
},
{
name: "cache-from",
options: BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "cache-to",
options: BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "exports",
options: BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: "test1",
},
{
Type: "tar",
Destination: "test3",
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: "test4",
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
want: BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: filepath.Join(tmpwd, "test1"),
},
{
Type: "tar",
Destination: filepath.Join(tmpwd, "test3"),
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: filepath.Join(tmpwd, "test4"),
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
},
{
name: "secrets",
options: BuildOptions{
Secrets: []*Secret{
{
FilePath: "test1",
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: "test3",
},
},
},
want: BuildOptions{
Secrets: []*Secret{
{
FilePath: filepath.Join(tmpwd, "test1"),
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: filepath.Join(tmpwd, "test3"),
},
},
},
},
{
name: "ssh",
options: BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{"test1", "test2"},
},
{
ID: "a",
Paths: []string{"test3"},
},
},
},
want: BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
},
{
ID: "a",
Paths: []string{filepath.Join(tmpwd, "test3")},
},
},
},
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
got, err := ResolveOptionPaths(&tt.options)
require.NoError(t, err)
if !reflect.DeepEqual(tt.want, *got) {
t.Fatalf("expected %#v, got %#v", tt.want, *got)
}
})
}
}

@@ -1,126 +0,0 @@
package pb
import (
"github.com/docker/buildx/util/progress"
control "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/opencontainers/go-digest"
)
type writer struct {
ch chan<- *StatusResponse
}
func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
return &writer{ch: ch}
}
func (w *writer) Write(status *client.SolveStatus) {
w.ch <- ToControlStatus(status)
}
func (w *writer) WriteBuildRef(target string, ref string) {}
func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
return true
}
func (w *writer) ClearLogSource(interface{}) {}
func ToControlStatus(s *client.SolveStatus) *StatusResponse {
resp := StatusResponse{}
for _, v := range s.Vertexes {
resp.Vertexes = append(resp.Vertexes, &control.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range s.Statuses {
resp.Statuses = append(resp.Statuses, &control.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
})
}
for _, v := range s.Logs {
resp.Logs = append(resp.Logs, &control.VertexLog{
Vertex: v.Vertex,
Stream: int64(v.Stream),
Msg: v.Data,
Timestamp: v.Timestamp,
})
}
for _, v := range s.Warnings {
resp.Warnings = append(resp.Warnings, &control.VertexWarning{
Vertex: v.Vertex,
Level: int64(v.Level),
Short: v.Short,
Detail: v.Detail,
Url: v.URL,
Info: v.SourceInfo,
Ranges: v.Range,
})
}
return &resp
}
func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
s := client.SolveStatus{}
for _, v := range resp.Vertexes {
s.Vertexes = append(s.Vertexes, &client.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range resp.Statuses {
s.Statuses = append(s.Statuses, &client.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
})
}
for _, v := range resp.Logs {
s.Logs = append(s.Logs, &client.VertexLog{
Vertex: v.Vertex,
Stream: int(v.Stream),
Data: v.Msg,
Timestamp: v.Timestamp,
})
}
for _, v := range resp.Warnings {
s.Warnings = append(s.Warnings, &client.VertexWarning{
Vertex: v.Vertex,
Level: int(v.Level),
Short: v.Short,
Detail: v.Detail,
URL: v.Url,
SourceInfo: v.Info,
Range: v.Ranges,
})
}
return &s
}
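
A hedged sketch of wiring these converters together: the server side writes *client.SolveStatus into a channel as StatusResponse, and a consumer turns it back (hypothetical glue, not part of this changeset; p is assumed to be anything with a Write(*client.SolveStatus) method):

ch := make(chan *pb.StatusResponse)
w := pb.NewProgressWriter(ch) // server side: w.Write(status) sends into ch
go func() {
	for resp := range ch {
		p.Write(pb.FromControlStatus(resp)) // back to *client.SolveStatus
	}
}()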

@@ -1,22 +0,0 @@
package pb
import (
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/secrets/secretsprovider"
)
func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
fs := make([]secretsprovider.Source, 0, len(secrets))
for _, secret := range secrets {
fs = append(fs, secretsprovider.Source{
ID: secret.ID,
FilePath: secret.FilePath,
Env: secret.Env,
})
}
store, err := secretsprovider.NewStore(fs)
if err != nil {
return nil, err
}
return secretsprovider.NewSecretProvider(store), nil
}
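
The returned attachable is meant to be appended to the session of a BuildKit solve; a hedged sketch (hypothetical, not part of this changeset; solveOpt is an assumed client.SolveOpt):

attachable, err := pb.CreateSecrets([]*pb.Secret{
	{ID: "mytoken", Env: "MY_TOKEN"},             // resolved from the environment
	{ID: "npmrc", FilePath: "/home/user/.npmrc"}, // resolved from a file
})
if err != nil {
	return err
}
solveOpt.Session = append(solveOpt.Session, attachable)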

@@ -1,18 +0,0 @@
package pb
import (
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/sshforward/sshprovider"
)
func CreateSSH(ssh []*SSH) (session.Attachable, error) {
configs := make([]sshprovider.AgentConfig, 0, len(ssh))
for _, ssh := range ssh {
cfg := sshprovider.AgentConfig{
ID: ssh.ID,
Paths: append([]string{}, ssh.Paths...),
}
configs = append(configs, cfg)
}
return sshprovider.NewSSHAgentProvider(configs)
}

@@ -1,149 +0,0 @@
package processes
import (
"context"
"sync"
"sync/atomic"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/ioset"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// Process provides methods to control a process.
type Process struct {
inEnd *ioset.Forwarder
invokeConfig *pb.InvokeConfig
errCh chan error
processCancel func()
serveIOCancel func()
}
// ForwardIO forwards the process's IO to the specified reader/writers.
// Optionally specify ioCancelCallback, which is called when
// the process closes the specified IO. This is useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
p.inEnd.SetIn(in)
if f := p.serveIOCancel; f != nil {
f()
}
p.serveIOCancel = ioCancelCallback
}
// Done returns a channel where error or nil will be sent
// when the process exits.
// TODO: change this to Wait()
func (p *Process) Done() <-chan error {
return p.errCh
}
// Manager manages a set of processes.
type Manager struct {
container atomic.Value
processes sync.Map
}
// NewManager creates and returns a Manager.
func NewManager() *Manager {
return &Manager{}
}
// Get returns the specified process.
func (m *Manager) Get(id string) (*Process, bool) {
v, ok := m.processes.Load(id)
if !ok {
return nil, false
}
return v.(*Process), true
}
// CancelRunningProcesses cancels execution of all running processes.
func (m *Manager) CancelRunningProcesses() {
var funcs []func()
m.processes.Range(func(key, value any) bool {
funcs = append(funcs, value.(*Process).processCancel)
m.processes.Delete(key)
return true
})
for _, f := range funcs {
f()
}
}
// ListProcesses lists all running processes.
func (m *Manager) ListProcesses() (res []*pb.ProcessInfo) {
m.processes.Range(func(key, value any) bool {
res = append(res, &pb.ProcessInfo{
ProcessID: key.(string),
InvokeConfig: value.(*Process).invokeConfig,
})
return true
})
return res
}
// DeleteProcess deletes the specified process.
func (m *Manager) DeleteProcess(id string) error {
p, ok := m.processes.LoadAndDelete(id)
if !ok {
return errors.Errorf("unknown process %q", id)
}
p.(*Process).processCancel()
return nil
}
// StartProcess starts a process in the container.
// When a container isn't available (i.e. first time invoking or the container has exited) or cfg.Rollback is set,
// this method will start a new container and run the process in it. Otherwise, this method starts a new process in the
// existing container.
func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *pb.InvokeConfig) (*Process, error) {
// Get the target result to invoke a container from
var ctr *build.Container
if a := m.container.Load(); a != nil {
ctr = a.(*build.Container)
}
if cfg.Rollback || ctr == nil || ctr.IsUnavailable() {
go m.CancelRunningProcesses()
// (Re)create the container on rollback or on the first invocation of a process.
if ctr != nil {
go ctr.Cancel() // Finish the existing container
}
var err error
ctr, err = build.NewContainer(context.TODO(), resultCtx, cfg)
if err != nil {
return nil, errors.Errorf("failed to create container %v", err)
}
m.container.Store(ctr)
}
// [client(ForwardIO)] <-forwarder(switchable)-> [out] <-pipe-> [in] <- [process]
in, out := ioset.Pipe()
f := ioset.NewForwarder()
f.PropagateStdinClose = false
f.SetOut(&out)
// Register process
ctx, cancel := context.WithCancel(context.TODO())
var cancelOnce sync.Once
processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
p := &Process{
inEnd: f,
invokeConfig: cfg,
processCancel: processCancelFunc,
errCh: make(chan error),
}
m.processes.Store(pid, p)
go func() {
var err error
if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
logrus.Debugf("process error: %v", err)
}
logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
m.processes.Delete(pid)
processCancelFunc()
p.errCh <- err
}()
return p, nil
}
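
Putting the pieces above together, a hedged sketch of starting a process from a build result and waiting on it (hypothetical, not part of this changeset; resultCtx is assumed to be a *build.ResultHandle from a finished build, and stdio an *ioset.In):

m := processes.NewManager()
proc, err := m.StartProcess("p1", resultCtx, &pb.InvokeConfig{Tty: true})
if err != nil {
	return err
}
// Attach IO; the callback fires if the process side closes it.
proc.ForwardIO(stdio, func() { /* extra cleanup */ })
return <-proc.Done() // nil on success, or the process error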

@@ -1,240 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"time"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials/insecure"
)
func NewClient(ctx context.Context, addr string) (*Client, error) {
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
}
conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
if err != nil {
return nil, err
}
return &Client{conn: conn}, nil
}
type Client struct {
conn *grpc.ClientConn
closeOnce sync.Once
}
func (c *Client) Close() (err error) {
c.closeOnce.Do(func() {
err = c.conn.Close()
})
return
}
func (c *Client) Version(ctx context.Context) (string, string, string, error) {
res, err := c.client().Info(ctx, &pb.InfoRequest{})
if err != nil {
return "", "", "", err
}
v := res.BuildxVersion
return v.Package, v.Version, v.Revision, nil
}
func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
res, err := c.client().List(ctx, &pb.ListRequest{})
if err != nil {
return nil, err
}
return res.Keys, nil
}
func (c *Client) Disconnect(ctx context.Context, key string) error {
if key == "" {
return nil
}
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
return err
}
func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
if err != nil {
return nil, err
}
return res.Infos, nil
}
func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
return err
}
func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if ref == "" || pid == "" {
return errors.New("build reference must be specified")
}
stream, err := c.client().Invoke(ctx)
if err != nil {
return err
}
return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
stdin: in,
stdout: stdout,
stderr: stderr,
// TODO: Signal, Resize
})
}
func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
}
func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
ref := identity.NewID()
statusChan := make(chan *client.SolveStatus)
eg, egCtx := errgroup.WithContext(ctx)
var resp *client.SolveResponse
eg.Go(func() error {
defer close(statusChan)
var err error
resp, err = c.build(egCtx, ref, options, in, statusChan)
return err
})
eg.Go(func() error {
for s := range statusChan {
st := s
progress.Write(st)
}
return nil
})
return ref, resp, eg.Wait()
}
func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
eg, egCtx := errgroup.WithContext(ctx)
done := make(chan struct{})
var resp *client.SolveResponse
eg.Go(func() error {
defer close(done)
pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
Ref: ref,
Options: &options,
})
if err != nil {
return err
}
resp = &client.SolveResponse{
ExporterResponse: pbResp.ExporterResponse,
}
return nil
})
eg.Go(func() error {
stream, err := c.client().Status(egCtx, &pb.StatusRequest{
Ref: ref,
})
if err != nil {
return err
}
for {
resp, err := stream.Recv()
if err != nil {
if err == io.EOF {
return nil
}
return errors.Wrap(err, "failed to receive status")
}
statusChan <- pb.FromControlStatus(resp)
}
})
if in != nil {
eg.Go(func() error {
stream, err := c.client().Input(egCtx)
if err != nil {
return err
}
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Init{
Init: &pb.InputInitMessage{
Ref: ref,
},
},
}); err != nil {
return errors.Wrap(err, "failed to init input")
}
inReader, inWriter := io.Pipe()
eg2, _ := errgroup.WithContext(ctx)
eg2.Go(func() error {
<-done
return inWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(inWriter, in)
inWriter.Close()
}()
eg2.Go(func() error {
for {
buf := make([]byte, 32*1024)
n, err := inReader.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
EOF: true,
},
},
})
})
return eg2.Wait()
})
}
return resp, eg.Wait()
}
func (c *Client) client() pb.ControllerClient {
return pb.NewControllerClient(c.conn)
}
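
A hedged end-to-end sketch of this client against a running server socket (hypothetical, not part of this changeset; addr, opts as pb.BuildOptions, and pw as progress.Writer are assumed, error handling trimmed):

c, err := remote.NewClient(ctx, addr) // addr: path to the server's unix socket
if err != nil {
	return err
}
defer c.Close()
ref, resp, err := c.Build(ctx, opts, nil, pw) // nil: no stdin stream
if err != nil {
	return err
}
fmt.Println(resp.ExporterResponse) // e.g. image digest keys
return c.Disconnect(ctx, ref)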

@@ -1,333 +0,0 @@
//go:build linux
package remote
import (
"context"
"fmt"
"io"
"net"
"os"
"os/exec"
"os/signal"
"path/filepath"
"strconv"
"syscall"
"time"
"github.com/containerd/log"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pelletier/go-toml"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"google.golang.org/grpc"
)
const (
serveCommandName = "_INTERNAL_SERVE"
)
var (
defaultLogFilename = fmt.Sprintf("buildx.%s.log", version.Revision)
defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
defaultPIDFilename = fmt.Sprintf("buildx.%s.pid", version.Revision)
)
type serverConfig struct {
// Specify buildx server root
Root string `toml:"root"`
// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
LogLevel string `toml:"log_level"`
// Specify file to output buildx server log
LogFile string `toml:"log_file"`
}
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
rootDir := opts.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
serverRoot := filepath.Join(rootDir, "shared")
// connect to buildx server if it is already running
ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
if err != nil {
if !errors.Is(err, context.DeadlineExceeded) {
return nil, errors.Wrap(err, "cannot connect to the buildx server")
}
} else {
return &buildxController{c, serverRoot}, nil
}
// start buildx server via subcommand
err = logger.Wrap("no buildx server found; launching...", func() error {
launchFlags := []string{}
if opts.ServerConfig != "" {
launchFlags = append(launchFlags, "--config", opts.ServerConfig)
}
logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
if err != nil {
return err
}
wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
if err != nil {
return err
}
go wait()
// wait for buildx server to be ready
ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
if err != nil {
return errors.Wrap(err, "cannot connect to the buildx server")
}
return nil
})
if err != nil {
return nil, err
}
return &buildxController{c, serverRoot}, nil
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
cmd.AddCommand(
serveCmd(dockerCli),
)
}
func serveCmd(dockerCli command.Cli) *cobra.Command {
var serverConfigPath string
cmd := &cobra.Command{
Use: fmt.Sprintf("%s [OPTIONS]", serveCommandName),
Hidden: true,
RunE: func(cmd *cobra.Command, args []string) error {
// Parse config
config, err := getConfig(dockerCli, serverConfigPath)
if err != nil {
return err
}
if config.LogLevel == "" {
logrus.SetLevel(logrus.InfoLevel)
} else {
lvl, err := logrus.ParseLevel(config.LogLevel)
if err != nil {
return errors.Wrap(err, "failed to prepare logger")
}
logrus.SetLevel(lvl)
}
logrus.SetFormatter(&logrus.JSONFormatter{
TimestampFormat: log.RFC3339NanoFixed,
})
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return err
}
pidF := filepath.Join(root, defaultPIDFilename)
if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
return err
}
defer func() {
if err := os.Remove(pidF); err != nil {
logrus.Errorf("failed to clean up info file %q: %v", pidF, err)
}
}()
// prepare server
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
})
defer b.Close()
// serve server
addr := filepath.Join(root, defaultSocketFilename)
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
return err
}
defer func() {
if err := os.Remove(addr); err != nil {
logrus.Errorf("failed to clean up socket %q: %v", addr, err)
}
}()
logrus.Infof("starting server at %q", addr)
l, err := net.Listen("unix", addr)
if err != nil {
return err
}
rpc := grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
)
controllerapi.RegisterControllerServer(rpc, b)
doneCh := make(chan struct{})
errCh := make(chan error, 1)
go func() {
defer close(doneCh)
if err := rpc.Serve(l); err != nil {
errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
}
}()
var s os.Signal
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT)
signal.Notify(sigCh, syscall.SIGTERM)
select {
case err := <-errCh:
logrus.Errorf("got error %s, exiting", err)
return err
case s = <-sigCh:
logrus.Infof("got signal %s, exiting", s)
return nil
case <-doneCh:
logrus.Infof("rpc server done, exiting")
return nil
}
},
}
flags := cmd.Flags()
flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
return cmd
}
func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
config, err := getConfig(dockerCli, configPath)
if err != nil {
return "", err
}
if config.LogFile == "" {
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return "", err
}
return filepath.Join(root, defaultLogFilename), nil
}
return config.LogFile, nil
}
func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
var defaultConfigPath bool
if configPath == "" {
defaultRoot := rootDataDir(dockerCli)
configPath = filepath.Join(defaultRoot, "config.toml")
defaultConfigPath = true
}
var config serverConfig
tree, err := toml.LoadFile(configPath)
if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
return nil, errors.Wrapf(err, "failed to read config %q", configPath)
} else if err == nil {
if err := tree.Unmarshal(&config); err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
}
}
return &config, nil
}
func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
rootDir := config.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
if rootDir == "" {
return "", errors.New("buildx root dir must be determined")
}
if err := os.MkdirAll(rootDir, 0700); err != nil {
return "", err
}
serverRoot := filepath.Join(rootDir, "shared")
if err := os.MkdirAll(serverRoot, 0700); err != nil {
return "", err
}
return serverRoot, nil
}
func rootDataDir(dockerCli command.Cli) string {
return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
}
func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
c, err := NewClient(ctx, addr)
if err != nil {
return nil, err
}
p, v, r, err := c.Version(ctx)
if err != nil {
return nil, err
}
logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
if !(p == version.Package && v == version.Version && r == version.Revision) {
return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
}
return c, nil
}
type buildxController struct {
*Client
serverRoot string
}
func (c *buildxController) Kill(ctx context.Context) error {
pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
if err != nil {
return err
}
pid, err := strconv.ParseInt(string(pidB), 10, 64)
if err != nil {
return err
}
if pid <= 0 {
return errors.New("no PID is recorded for buildx server")
}
p, err := os.FindProcess(int(pid))
if err != nil {
return err
}
if err := p.Signal(syscall.SIGINT); err != nil {
return err
}
// TODO: Should we send SIGKILL if process doesn't finish?
return nil
}
func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
// set absolute path of binary, since we set the working directory to the root
pathname, err := os.Executable()
if err != nil {
return nil, err
}
bCmd := exec.CommandContext(ctx, pathname, args...)
if logFile != "" {
f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return nil, err
}
defer f.Close()
bCmd.Stdout = f
bCmd.Stderr = f
}
bCmd.Stdin = nil
bCmd.Dir = "/"
bCmd.SysProcAttr = &syscall.SysProcAttr{
Setsid: true,
}
if err := bCmd.Start(); err != nil {
return nil, err
}
return bCmd.Wait, nil
}

@@ -1,19 +0,0 @@
//go:build !linux
package remote
import (
"context"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
return nil, errors.New("remote buildx unsupported")
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}

@@ -1,430 +0,0 @@
package remote
import (
"context"
"io"
"syscall"
"time"
"github.com/docker/buildx/controller/pb"
"github.com/moby/sys/signal"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
type msgStream interface {
Send(*pb.Message) error
Recv() (*pb.Message, error)
}
type ioServerConfig struct {
stdin io.WriteCloser
stdout, stderr io.ReadCloser
// signalFn is a callback invoked when a signal message is received from the client.
signalFn func(context.Context, syscall.Signal) error
// resizeFn is a callback invoked when a resize event is received from the client.
resizeFn func(context.Context, winSize) error
}
func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
stream := &debugStream{srv, "server=" + time.Now().String()}
eg, ctx := errgroup.WithContext(attachCtx)
done := make(chan struct{})
msg, err := receive(ctx, stream)
if err != nil {
return err
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
}
ref := init.Ref
if ref == "" {
return errors.New("no ref is provided")
}
if err := initFn(init); err != nil {
return errors.Wrap(err, "failed to initialize IO server")
}
if stdout != nil {
stdoutReader, stdoutWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdoutWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdoutWriter, stdout)
stdoutWriter.Close()
}()
eg.Go(func() error {
defer stdoutReader.Close()
return copyToStream(1, stream, stdoutReader)
})
}
if stderr != nil {
stderrReader, stderrWriter := io.Pipe()
eg.Go(func() error {
<-done
return stderrWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stderrWriter, stderr)
stderrWriter.Close()
}()
eg.Go(func() error {
defer stderrReader.Close()
return copyToStream(2, stream, stderrReader)
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if file.Fd != 0 {
return errors.Errorf("unexpected fd: %v", file.Fd)
}
if stdin == nil {
continue // no stdin destination is specified so ignore the data
}
if len(file.Data) > 0 {
_, err := stdin.Write(file.Data)
if err != nil {
return err
}
}
if file.EOF {
stdin.Close()
}
} else if resize := msg.GetResize(); resize != nil {
if ioConfig.resizeFn != nil {
ioConfig.resizeFn(ctx, winSize{
cols: resize.Cols,
rows: resize.Rows,
})
}
} else if sig := msg.GetSignal(); sig != nil {
if ioConfig.signalFn != nil {
syscallSignal, ok := signal.SignalMap[sig.Name]
if !ok {
continue
}
ioConfig.signalFn(ctx, syscallSignal)
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
type ioAttachConfig struct {
stdin io.ReadCloser
stdout, stderr io.WriteCloser
signal <-chan syscall.Signal
resize <-chan winSize
}
type winSize struct {
rows uint32
cols uint32
}
func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
eg, ctx := errgroup.WithContext(ctx)
done := make(chan struct{})
if err := stream.Send(&pb.Message{
Input: &pb.Message_Init{
Init: initMessage,
},
}); err != nil {
return errors.Wrap(err, "failed to init")
}
if cfg.stdin != nil {
stdinReader, stdinWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdinWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdinWriter, cfg.stdin)
stdinWriter.Close()
}()
eg.Go(func() error {
defer stdinReader.Close()
return copyToStream(0, stream, stdinReader)
})
}
if cfg.signal != nil {
eg.Go(func() error {
for {
var sig syscall.Signal
select {
case sig = <-cfg.signal:
case <-done:
return nil
case <-ctx.Done():
return nil
}
name := sigToName[sig]
if name == "" {
continue
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Signal{
Signal: &pb.SignalMessage{
Name: name,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send signal")
}
}
})
}
if cfg.resize != nil {
eg.Go(func() error {
for {
var win winSize
select {
case win = <-cfg.resize:
case <-done:
return nil
case <-ctx.Done():
return nil
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Resize{
Resize: &pb.ResizeMessage{
Rows: win.rows,
Cols: win.cols,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send resize")
}
}
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
eofs := make(map[uint32]struct{})
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if _, ok := eofs[file.Fd]; ok {
continue
}
var out io.WriteCloser
switch file.Fd {
case 1:
out = cfg.stdout
case 2:
out = cfg.stderr
default:
return errors.Errorf("unsupported fd %d", file.Fd)
}
if out == nil {
logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
continue
}
if len(file.Data) > 0 {
if _, err := out.Write(file.Data); err != nil {
return err
}
}
if file.EOF {
eofs[file.Fd] = struct{}{}
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
msgCh := make(chan *pb.Message)
errCh := make(chan error)
go func() {
msg, err := stream.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
return
}
errCh <- err
return
}
msgCh <- msg
}()
select {
case msg := <-msgCh:
return msg, nil
case err := <-errCh:
return nil, err
case <-ctx.Done():
return nil, ctx.Err()
}
}
func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
for {
buf := make([]byte, 32*1024)
n, err := r.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if err := snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
EOF: true,
},
},
})
}
var sigToName = map[syscall.Signal]string{}
func init() {
for name, value := range signal.SignalMap {
sigToName[value] = name
}
}
type debugStream struct {
msgStream
prefix string
}
func (s *debugStream) Send(msg *pb.Message) error {
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
}
return s.msgStream.Send(msg)
}
func (s *debugStream) Recv() (*pb.Message, error) {
msg, err := s.msgStream.Recv()
if err != nil {
return nil, err
}
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
}
return msg, nil
}

@@ -1,439 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"sync/atomic"
"time"
"github.com/docker/buildx/build"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)
func NewServer(buildFunc BuildFunc) *Server {
return &Server{
buildFunc: buildFunc,
}
}
type Server struct {
buildFunc BuildFunc
session map[string]*session
sessionMu sync.Mutex
}
type session struct {
buildOnGoing atomic.Bool
statusChan chan *pb.StatusResponse
cancelBuild func()
buildOptions *pb.BuildOptions
inputPipe *io.PipeWriter
result *build.ResultHandle
processes *processes.Manager
}
func (s *session) cancelRunningProcesses() {
s.processes.CancelRunningProcesses()
}
func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
}
res = new(pb.ListProcessesResponse)
res.Infos = append(res.Infos, s.processes.ListProcesses()...)
return res, nil
}
func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
}
return res, s.processes.DeleteProcess(req.ProcessID)
}
func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
return &pb.InfoResponse{
BuildxVersion: &pb.BuildxVersion{
Package: version.Package,
Version: version.Version,
Revision: version.Revision,
},
}, nil
}
func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
keys := make(map[string]struct{})
m.sessionMu.Lock()
for k := range m.session {
keys[k] = struct{}{}
}
m.sessionMu.Unlock()
var keysL []string
for k := range keys {
keysL = append(keysL, k)
}
return &pb.ListResponse{
Keys: keysL,
}, nil
}
func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
key := req.Ref
if key == "" {
return nil, errors.New("disconnect: empty key")
}
m.sessionMu.Lock()
if s, ok := m.session[key]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
}
s.cancelRunningProcesses()
if s.result != nil {
s.result.Done()
}
}
delete(m.session, key)
m.sessionMu.Unlock()
return &pb.DisconnectResponse{}, nil
}
func (m *Server) Close() error {
m.sessionMu.Lock()
for k := range m.session {
if s, ok := m.session[k]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
}
s.cancelRunningProcesses()
}
}
m.sessionMu.Unlock()
return nil
}
func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("inspect: empty key")
}
var bo *pb.BuildOptions
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
bo = s.buildOptions
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("inspect: unknown key %v", ref)
}
m.sessionMu.Unlock()
return &pb.InspectResponse{Options: bo}, nil
}
func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("build: empty key")
}
// Prepare status channel and session
m.sessionMu.Lock()
if m.session == nil {
m.session = make(map[string]*session)
}
s, ok := m.session[ref]
if ok {
if !s.buildOnGoing.CompareAndSwap(false, true) {
m.sessionMu.Unlock()
return &pb.BuildResponse{}, errors.New("build ongoing")
}
s.cancelRunningProcesses()
s.result = nil
} else {
s = &session{}
s.buildOnGoing.Store(true)
}
s.processes = processes.NewManager()
statusChan := make(chan *pb.StatusResponse)
s.statusChan = statusChan
inR, inW := io.Pipe()
defer inR.Close()
s.inputPipe = inW
m.session[ref] = s
m.sessionMu.Unlock()
defer func() {
close(statusChan)
m.sessionMu.Lock()
s, ok := m.session[ref]
if ok {
s.statusChan = nil
s.buildOnGoing.Store(false)
}
m.sessionMu.Unlock()
}()
pw := pb.NewProgressWriter(statusChan)
// Build the specified request
ctx, cancel := context.WithCancel(ctx)
defer cancel()
resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
if res != nil {
s.result = res
s.cancelBuild = cancel
s.buildOptions = req.Options
m.session[ref] = s
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, ref)
}
}
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("build: unknown key %v", ref)
}
m.sessionMu.Unlock()
if buildErr != nil {
return nil, buildErr
}
if resp == nil {
resp = &client.SolveResponse{}
}
return &pb.BuildResponse{
ExporterResponse: resp.ExporterResponse,
}, nil
}
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
ref := req.Ref
if ref == "" {
return errors.New("status: empty key")
}
// Wait and get status channel prepared by Build()
var statusChan <-chan *pb.StatusResponse
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
statusChan = m.session[ref].statusChan
m.sessionMu.Unlock()
break
}
// forward status
for ss := range statusChan {
if ss == nil {
break
}
if err := stream.Send(ss); err != nil {
return err
}
}
return nil
}
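// Input forwards a client-side input stream (such as a build context supplied
// on stdin) into the pipe prepared by Build for this session.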
func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
// Get the target ref from init message
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
}
ref := init.Ref
if ref == "" {
return errors.New("input: no ref is provided")
}
// Wait and get input stream pipe prepared by Build()
var inputPipeW *io.PipeWriter
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
inputPipeW = m.session[ref].inputPipe
m.sessionMu.Unlock()
break
}
// Forward input stream
eg, ctx := errgroup.WithContext(context.TODO())
done := make(chan struct{})
msgCh := make(chan *pb.InputMessage)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() (retErr error) {
defer close(done)
defer func() {
if retErr != nil {
inputPipeW.CloseWithError(retErr)
return
}
inputPipeW.Close()
}()
for {
var msg *pb.InputMessage
select {
case msg = <-msgCh:
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "canceled")
}
if msg == nil {
return nil
}
if data := msg.GetData(); data != nil {
if len(data.Data) > 0 {
_, err := inputPipeW.Write(data.Data)
if err != nil {
return err
}
}
if data.EOF {
return nil
}
}
}
})
return eg.Wait()
}
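// Invoke attaches the client's IO stream to a process in the session's
// interactive container, starting the process first if the requested process
// ID is not running yet.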
func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
containerIn, containerOut := ioset.Pipe()
defer func() { containerOut.Close(); containerIn.Close() }()
initDoneCh := make(chan *processes.Process)
initErrCh := make(chan error)
eg, egCtx := errgroup.WithContext(context.TODO())
srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
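// Two goroutines run below: the first serves the gRPC IO stream and starts
// (or attaches to) the requested process; the second waits for init to
// complete and then for either the process or the IO stream to finish.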
eg.Go(func() error {
defer srvIOCancel()
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
defer func() {
if retErr != nil {
initErrCh <- retErr
}
}()
ref := initMessage.Ref
cfg := initMessage.InvokeConfig
m.sessionMu.Lock()
s, ok := m.session[ref]
if !ok {
m.sessionMu.Unlock()
return errors.Errorf("invoke: unknown key %v", ref)
}
m.sessionMu.Unlock()
pid := initMessage.ProcessID
if pid == "" {
return errors.Errorf("invoke: specify process ID")
}
proc, ok := s.processes.Get(pid)
if !ok {
// Start a new process.
if cfg == nil {
return errors.New("no container config is provided")
}
var err error
proc, err = s.processes.StartProcess(pid, s.result, cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
proc.ForwardIO(&containerIn, srvIOCancel)
initDoneCh <- proc
return nil
}, &ioServerConfig{
stdin: containerOut.Stdin,
stdout: containerOut.Stdout,
stderr: containerOut.Stderr,
// TODO: signal, resize
})
})
eg.Go(func() (rErr error) {
defer srvIOCancel()
// Wait for init done
var proc *processes.Process
select {
case p := <-initDoneCh:
proc = p
case err := <-initErrCh:
return err
case <-egCtx.Done():
return egCtx.Err()
}
// Wait for IO done
select {
case <-srvIOCtx.Done():
return srvIOCtx.Err()
case err := <-proc.Done():
return err
case <-egCtx.Done():
return egCtx.Err()
}
})
return eg.Wait()
}
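To make the Build/Status handshake above concrete, here is a minimal, hypothetical client-side sketch. It assumes the gRPC client generated from the `Controller` service (`pb.NewControllerClient` with the usual `Build` and `Status` methods); the helper name and error handling are illustrative, not code from this diff.

```go
package main

import (
	"context"
	"log"

	"github.com/docker/buildx/controller/pb"
	"google.golang.org/grpc"
)

func runBuild(ctx context.Context, conn *grpc.ClientConn, ref string, opts *pb.BuildOptions) error {
	c := pb.NewControllerClient(conn)

	// Stream progress in the background. Status blocks server-side until
	// Build has installed the session's status channel, so the order of
	// these two calls does not matter.
	go func() {
		stream, err := c.Status(ctx, &pb.StatusRequest{Ref: ref})
		if err != nil {
			log.Printf("status: %v", err)
			return
		}
		for {
			ss, err := stream.Recv()
			if err != nil {
				return // io.EOF once Build closes the status channel
			}
			_ = ss // render progress updates here
		}
	}()

	// Build blocks until the build finishes; the response carries the
	// exporter metadata.
	_, err := c.Build(ctx, &pb.BuildRequest{Ref: ref, Options: opts})
	return err
}
```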

View File

@@ -1,5 +1,5 @@
variable "GO_VERSION" {
default = null
default = "1.19"
}
variable "DOCS_FORMATS" {
default = "md"
@@ -7,12 +7,6 @@ variable "DOCS_FORMATS" {
variable "DESTDIR" {
default = "./bin"
}
variable "TEST_COVERAGE" {
default = null
}
variable "GOLANGCI_LINT_MULTIPLATFORM" {
default = ""
}
# Special target: https://github.com/docker/metadata-action#bake-definition
target "meta-helper" {
@@ -31,29 +25,13 @@ group "default" {
}
group "validate" {
targets = ["lint", "lint-gopls", "validate-vendor", "validate-docs"]
targets = ["lint", "validate-vendor", "validate-docs"]
}
target "lint" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
output = ["type=cacheonly"]
platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
"darwin/amd64",
"darwin/arm64",
"linux/amd64",
"linux/arm64",
"linux/s390x",
"linux/ppc64le",
"linux/riscv64",
"windows/amd64",
"windows/arm64"
] : []
}
target "lint-gopls" {
inherits = ["lint"]
target = "gopls-analyze"
}
target "validate-vendor" {
@@ -81,13 +59,6 @@ target "validate-authors" {
output = ["type=cacheonly"]
}
target "validate-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "validate"
output = ["type=cacheonly"]
}
target "update-vendor" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -113,13 +84,6 @@ target "update-authors" {
output = ["."]
}
target "update-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "update"
output = ["."]
}
target "mod-outdated" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -178,34 +142,3 @@ target "image-local" {
inherits = ["image"]
output = ["type=docker"]
}
variable "HTTP_PROXY" {
default = ""
}
variable "HTTPS_PROXY" {
default = ""
}
variable "NO_PROXY" {
default = ""
}
variable "TEST_BUILDKIT_TAG" {
default = null
}
target "integration-test-base" {
inherits = ["_common"]
args = {
GO_EXTRA_FLAGS = TEST_COVERAGE == "1" ? "-cover" : null
HTTP_PROXY = HTTP_PROXY
HTTPS_PROXY = HTTPS_PROXY
NO_PROXY = NO_PROXY
BUILDKIT_VERSION = TEST_BUILDKIT_TAG
}
target = "integration-test-base"
output = ["type=cacheonly"]
}
target "integration-test" {
inherits = ["integration-test-base"]
target = "integration-test"
}

View File

@@ -1,6 +1,4 @@
---
title: Bake file reference
---
# Bake file reference
A Bake file defines the build workflows that you run using `docker buildx bake`.
@@ -14,118 +12,18 @@ You can define your Bake file in the following file formats:
By default, Bake uses the following lookup order to find the configuration file:
1. `compose.yaml`
2. `compose.yml`
3. `docker-compose.yml`
4. `docker-compose.yaml`
5. `docker-bake.json`
6. `docker-bake.override.json`
7. `docker-bake.hcl`
8. `docker-bake.override.hcl`
1. `docker-bake.override.hcl`
2. `docker-bake.hcl`
3. `docker-bake.override.json`
4. `docker-bake.json`
5. `docker-compose.yaml`
6. `docker-compose.yml`
Bake searches for the file in the current working directory.
You can specify the file location explicitly using the `--file` flag:
```console
$ docker buildx bake --file ../docker/bake.hcl --print
```
If you don't specify a file explicitly, Bake searches for the file in the
current working directory. If more than one Bake file is found, all files are
merged into a single definition. Files are merged according to the lookup
order. That means that if your project contains both a `compose.yaml` file and
a `docker-bake.hcl` file, Bake loads the `compose.yaml` file first, and then
the `docker-bake.hcl` file.
If merged files contain duplicate attribute definitions, those definitions are
either merged or overridden by the last occurrence, depending on the attribute.
The following attributes are overridden by the last occurrence:
- `target.cache-to`
- `target.dockerfile-inline`
- `target.dockerfile`
- `target.outputs`
- `target.platforms`
- `target.pull`
- `target.tags`
- `target.target`
For example, if `compose.yaml` and `docker-bake.hcl` both define the `tags`
attribute, the value from `docker-bake.hcl` is used.
```console
$ cat compose.yaml
services:
webapp:
build:
context: .
tags:
- bar
$ cat docker-bake.hcl
target "webapp" {
tags = ["foo"]
}
$ docker buildx bake --print webapp
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"foo"
]
}
}
}
```
All other attributes are merged. For example, if `compose.yaml` and
`docker-bake.hcl` both define unique entries for the `labels` attribute, all
entries are included. If both files define an entry for the same label key,
the last occurrence wins.
```console
$ cat compose.yaml
services:
webapp:
build:
context: .
labels:
com.example.foo: "foo"
com.example.name: "Alice"
$ cat docker-bake.hcl
target "webapp" {
labels = {
"com.example.bar" = "bar"
"com.example.name" = "Bob"
}
}
$ docker buildx bake --print webapp
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"labels": {
"com.example.foo": "foo",
"com.example.bar": "bar",
"com.example.name": "Bob"
}
}
}
}
$ docker buildx bake --file=../docker/bake.hcl --print
```
## Syntax
@@ -171,16 +69,16 @@ The following example shows the same Bake file in the HCL format:
```hcl
variable "TAG" {
default = "latest"
"default" = "latest"
}
group "default" {
targets = ["webapp"]
"targets" = ["latest"]
}
target "webapp" {
dockerfile = "Dockerfile"
tags = ["docker.io/username/webapp:${TAG}"]
"dockerfile" = "Dockerfile"
"tags" = ["docker.io/username/webapp:${TAG}"]
}
```
@@ -215,9 +113,8 @@ target "webapp" {
The following table shows the complete list of attributes that you can assign to a target:
| Name | Type | Description |
|-------------------------------------------------|---------|----------------------------------------------------------------------|
| ----------------------------------------------- | ------- | -------------------------------------------------------------------- |
| [`args`](#targetargs) | Map | Build arguments |
| [`annotations`](#targetannotations) | List | Exporter annotations |
| [`attest`](#targetattest) | List | Build attestations |
| [`cache-from`](#targetcache-from) | List | External cache sources |
| [`cache-to`](#targetcache-to) | List | External cache destinations |
@@ -227,19 +124,15 @@ The following table shows the complete list of attributes that you can assign to
| [`dockerfile`](#targetdockerfile) | String | Dockerfile location |
| [`inherits`](#targetinherits) | List | Inherit attributes from other targets |
| [`labels`](#targetlabels) | Map | Metadata for images |
| [`matrix`](#targetmatrix) | Map | Define a set of variables that forks a target into multiple targets. |
| [`name`](#targetname) | String | Override the target name when using a matrix. |
| [`no-cache-filter`](#targetno-cache-filter) | List | Disable build cache for specific stages |
| [`no-cache`](#targetno-cache) | Boolean | Disable build cache completely |
| [`output`](#targetoutput) | List | Output destinations |
| [`platforms`](#targetplatforms) | List | Target platforms |
| [`pull`](#targetpull) | Boolean | Always pull images |
| [`secret`](#targetsecret) | List | Secrets to expose to the build |
| [`shm-size`](#targetshm-size) | List | Size of `/dev/shm` |
| [`ssh`](#targetssh) | List | SSH agent sockets or keys to expose to the build |
| [`tags`](#targettags) | List | Image names and tags |
| [`target`](#targettarget) | String | Target build stage |
| [`ulimits`](#targetulimits) | List | Ulimit options |
### `target.args`
@@ -276,41 +169,6 @@ target "db" {
}
```
### `target.annotations`
The `annotations` attribute lets you add annotations to images built with Bake.
The attribute takes a list of annotations in the format `KEY=VALUE`.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["org.opencontainers.image.authors=dvdksn"]
}
```
is the same as
```hcl
target "default" {
output = ["type=image,name=foo,annotation.org.opencontainers.image.authors=dvdksn"]
}
```
By default, the annotation is added to image manifests. You can configure the
level at which annotations are applied by prefixing the annotation with a
comma-separated list of the levels that you want to annotate. The following
example adds annotations to both the image index and the manifests.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["index,manifest:org.opencontainers.image.authors=dvdksn"]
}
```
Read about the supported levels in
[Specifying annotation levels](https://docs.docker.com/build/building/annotations/#specifying-annotation-levels).
### `target.attest`
The `attest` attribute lets you apply [build attestations][attestations] to the target.
@@ -618,138 +476,6 @@ target "default" {
It's possible to use a `null` value for labels.
If you do, the builder uses the label value specified in the Dockerfile.
### `target.matrix`
A matrix strategy lets you fork a single target into multiple different
variants, based on parameters that you specify.
This works in a similar way to [Matrix strategies for GitHub Actions].
You can use this to reduce duplication in your bake definition.
The `matrix` attribute is a map of parameter names to lists of values.
Bake builds each possible combination of values as a separate target.
Each generated target **must** have a unique name.
To specify how target names should resolve, use the `name` attribute.
The following example resolves the `app` target to `app-foo` and `app-bar`.
It also uses the matrix value to define the [target build stage](#targettarget).
```hcl
target "app" {
name = "app-${tgt}"
matrix = {
tgt = ["foo", "bar"]
}
target = tgt
}
```
```console
$ docker buildx bake --print app
[+] Building 0.0s (0/0)
{
"group": {
"app": {
"targets": [
"app-foo",
"app-bar"
]
},
"default": {
"targets": [
"app"
]
}
},
"target": {
"app-bar": {
"context": ".",
"dockerfile": "Dockerfile",
"target": "bar"
},
"app-foo": {
"context": ".",
"dockerfile": "Dockerfile",
"target": "foo"
}
}
}
```
#### Multiple axes
You can specify multiple keys in your matrix to fork a target on multiple axes.
When using multiple matrix keys, Bake builds every possible variant.
The following example builds four targets:
- `app-foo-1-0`
- `app-foo-2-0`
- `app-bar-1-0`
- `app-bar-2-0`
```hcl
target "app" {
name = "app-${tgt}-${replace(version, ".", "-")}"
matrix = {
tgt = ["foo", "bar"]
version = ["1.0", "2.0"]
}
target = tgt
args = {
VERSION = version
}
}
```
#### Multiple values per matrix target
If you want to differentiate the matrix on more than just a single value,
you can use maps as matrix values. Bake creates a target for each map,
and you can access the nested values using dot notation.
The following example builds two targets:
- `app-foo-1-0`
- `app-bar-2-0`
```hcl
target "app" {
name = "app-${item.tgt}-${replace(item.version, ".", "-")}"
matrix = {
item = [
{
tgt = "foo"
version = "1.0"
},
{
tgt = "bar"
version = "2.0"
}
]
}
target = item.tgt
args = {
VERSION = item.version
}
}
```
### `target.name`
Specify name resolution for targets that use a matrix strategy.
The following example resolves the `app` target to `app-foo` and `app-bar`.
```hcl
target "app" {
name = "app-${tgt}"
matrix = {
tgt = ["foo", "bar"]
}
target = tgt
}
```
### `target.no-cache-filter`
Don't use build cache for the specified stages.
@@ -836,29 +562,6 @@ RUN --mount=type=secret,id=KUBECONFIG \
KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
```
### `target.shm-size`
Sets the size of the shared memory allocated for build containers when using
`RUN` instructions.
The format is `<number><unit>`. `number` must be greater than `0`. Unit is
optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g`
(gigabytes). If you omit the unit, the system uses bytes.
This is the same as the `--shm-size` flag for `docker build`.
```hcl
target "default" {
shm-size = "128m"
}
```
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
### `target.ssh`
Defines SSH agent sockets or keys to expose to the build.
@@ -905,32 +608,6 @@ target "default" {
}
```
### `target.ulimits`
The `ulimits` attribute overrides the default ulimits of the build containers
when using `RUN` instructions. Limits are specified as a soft and hard limit
in the form `<type>=<soft limit>[:<hard limit>]`, for example:
```hcl
target "app" {
ulimits = [
"nofile=1024:1024"
]
}
```
> **Note**
>
> If you do not provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they are inherited from
> the default `ulimits` set on the daemon.
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
## Group
Groups allow you to invoke multiple builds (targets) at once.
@@ -1122,20 +799,20 @@ target "webapp-dev" {
[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/reference/cli/docker/image/build/#build-arg
[build-context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[build-arg]: https://docs.docker.com/engine/reference/commandline/build/#build-arg
[build-context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[cache-backends]: https://docs.docker.com/build/cache/backends/
[cache-from]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-from
[cache-to]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-to
[context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[file]: https://docs.docker.com/reference/cli/docker/image/build/#file
[cache-from]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from
[cache-to]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to
[context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[file]: https://docs.docker.com/engine/reference/commandline/build/#file
[go-cty]: https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib
[hcl-funcs]: https://docs.docker.com/build/bake/hcl-funcs/
[output]: https://docs.docker.com/reference/cli/docker/buildx/build/#output
[platform]: https://docs.docker.com/reference/cli/docker/buildx/build/#platform
[run_mount_secret]: https://docs.docker.com/reference/dockerfile/#run---mounttypesecret
[secret]: https://docs.docker.com/reference/cli/docker/buildx/build/#secret
[ssh]: https://docs.docker.com/reference/cli/docker/buildx/build/#ssh
[tag]: https://docs.docker.com/reference/cli/docker/image/build/#tag
[target]: https://docs.docker.com/reference/cli/docker/image/build/#target
[output]: https://docs.docker.com/engine/reference/commandline/buildx_build/#output
[platform]: https://docs.docker.com/engine/reference/commandline/buildx_build/#platform
[run_mount_secret]: https://docs.docker.com/engine/reference/builder/#run---mounttypesecret
[secret]: https://docs.docker.com/engine/reference/commandline/buildx_build/#secret
[ssh]: https://docs.docker.com/engine/reference/commandline/buildx_build/#ssh
[tag]: https://docs.docker.com/engine/reference/commandline/build/#tag
[target]: https://docs.docker.com/engine/reference/commandline/build/#target
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc

View File

@@ -1,166 +0,0 @@
# Debug monitor
To assist with creating and debugging complex builds, Buildx provides a
debugger to help you step through the build process and easily inspect the
state of the build environment at any point.
> **Note**
>
> The debug monitor is a new experimental feature in recent versions of Buildx.
> There are rough edges, known bugs, and missing features. Please try it out
> and let us know what you think!
## Starting the debugger
To start the debugger, first, ensure that `BUILDX_EXPERIMENTAL=1` is set in
your environment.
```console
$ export BUILDX_EXPERIMENTAL=1
```
To start a debug session for a build, use the `buildx debug` command with the `--invoke` flag to specify a command to launch in the resulting image.
The `buildx debug` command provides a `buildx debug build` subcommand that offers the same features as the normal `buildx build` command, but lets you launch a debugger session after the build.
Arguments available after `buildx debug build` are the same as for the normal `buildx build`.
```console
$ docker buildx debug --invoke /bin/sh build .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
...
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "dzz7pjb4pk1mj29xqrx0ac3oj". Press Ctrl-a-c to switch to the new container
Switched IO
/ #
```
This launches a `/bin/sh` process in the final stage of the image and allows
you to explore the contents of the image without needing to export or load it
outside of the builder.
For example, you can use `ls` to see the contents of the image:
```console
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr work
```
An optional long form lets you specify a detailed configuration for the process.
It must be given as CSV-style comma-separated key-value pairs.
Supported keys are `args` (can be in JSON array format), `entrypoint` (can be in JSON array format), `env` (can be in JSON array format), `user`, `cwd`, and `tty` (bool).
Example:
```
$ docker buildx debug --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' build .
```
#### `on` flag
If you want to start a debug session only when a build fails, use the
`--on=error` flag.
```console
$ docker buildx debug --on=error build .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
...
=> ERROR [shell 10/10] RUN bad-command
------
> [shell 10/10] RUN bad-command:
#0 0.049 /bin/sh: bad-command: not found
------
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Press Ctrl-a-c to switch to the new container
/ #
```
This allows you to explore the state of the image when the build failed.
#### Launch the debug session directly with `buildx debug` subcommand
If you want to drop into a debug session without first starting the build, you
can run the `buildx debug` command on its own to start one.
```
$ docker buildx debug
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
(buildx)
```
You can then use the commands available in [monitor mode](#monitor-mode) to
start and observe the build.
## Monitor mode
By default, when debugging, you'll be dropped into a shell in the final stage.
When you're in a debug shell, you can use the `Ctrl-a-c` key combination (press
`Ctrl`+`a` together, lift, then press `c`) to toggle between the debug shell
and the monitor mode. In monitor mode, you can run commands that control the
debug environment.
```console
(buildx) help
Available commands are:
attach attach to a buildx server or a process in the container
disconnect disconnect a client from a buildx server. Specific session ID can be specified an arg
exec execute a process in the interactive container
exit exits monitor
help shows this message. Optionally pass a command name as an argument to print the detailed usage.
kill kill buildx server
list list buildx sessions
ps list processes invoked by "exec". Use "attach" to attach IO to that process
reload reloads the context and build it
rollback re-runs the interactive container with the step's rootfs contents
```
## Build controllers
Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller is used, which
runs all builds in-process for a more stable experience. However, you can also
use the remote controller to detach the build process from the CLI.
To detach the build process from the CLI, you can use the `--detach=true` flag with
the build command.
```console
$ docker buildx debug --invoke /bin/sh build --detach=true .
```
If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug` command to
immediately enter the monitor mode.
```console
$ docker buildx debug
[+] Building 0.0s (1/1) FINISHED
=> [internal] connecting to remote controller
(buildx) list
ID CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID CURRENT_SESSION COMMAND
3ug8iqaufiwwnukimhqqt06jz false [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr work
/ #
```

View File

@@ -3,7 +3,6 @@ package main
import (
"log"
"os"
"strings"
"github.com/docker/buildx/commands"
clidocstool "github.com/docker/cli-docs-tool"
@@ -27,28 +26,6 @@ type options struct {
formats []string
}
// fixUpExperimentalCLI trims the " (EXPERIMENTAL)" suffix from the CLI output,
// as docs.docker.com already displays "experimental (CLI)",
//
// https://github.com/docker/buildx/pull/2188#issuecomment-1889487022
func fixUpExperimentalCLI(cmd *cobra.Command) {
const (
annotationExperimentalCLI = "experimentalCLI"
suffixExperimental = " (EXPERIMENTAL)"
)
if _, ok := cmd.Annotations[annotationExperimentalCLI]; ok {
cmd.Short = strings.TrimSuffix(cmd.Short, suffixExperimental)
}
cmd.Flags().VisitAll(func(f *pflag.Flag) {
if _, ok := f.Annotations[annotationExperimentalCLI]; ok {
f.Usage = strings.TrimSuffix(f.Usage, suffixExperimental)
}
})
for _, c := range cmd.Commands() {
fixUpExperimentalCLI(c)
}
}
func gen(opts *options) error {
log.SetFlags(0)
@@ -80,8 +57,6 @@ func gen(opts *options) error {
return err
}
case "yaml":
// fix up is needed only for yaml (used for generating docs.docker.com contents)
fixUpExperimentalCLI(cmd)
if err = c.GenYamlTree(cmd); err != nil {
return err
}

Some files were not shown because too many files have changed in this diff.