Mirror of https://gitea.com/Lydanne/buildx.git, synced 2025-09-06 19:09:08 +08:00.

Compare commits: v0.15.0-rc ... v0.10.4 (69 commits)
| SHA1 |
|---|
| c513d34049 |
| d455c07331 |
| 5ac3b4c4b6 |
| b1440b07f2 |
| a3286a0ab1 |
| b79345c63e |
| 23eb3c3ccd |
| 79e156beb1 |
| c960d16da5 |
| b5b9de69d9 |
| 45863c4f16 |
| f2feea8bed |
| a73d07ff7a |
| 0fad89c3b9 |
| 661af29d46 |
| 02cf539a08 |
| cc87bd104e |
| 582cc04be6 |
| ae278ce450 |
| b66988c824 |
| 00ed17df6d |
| cfb71fab97 |
| f62342768b |
| 7776652a4d |
| 5a4f80f3ce |
| b5ea79e277 |
| 481796f84f |
| 0090d49e57 |
| 389ac0c3d1 |
| 2bb8ce2f57 |
| 65cea456fd |
| f7bd5b99da |
| 8c14407fa2 |
| 5245a2b3ff |
| 44d99d4573 |
| 14942a266e |
| 123febf107 |
| 3f5f7c5228 |
| 6d935625a6 |
| e640dc6041 |
| 08244b12b5 |
| 78d8b926db |
| 19291d900e |
| ed9b4a7169 |
| 033d5629c0 |
| 7cd5add568 |
| 2a000096fa |
| b7781447d7 |
| f6ba0a23f8 |
| bf4b95fc3a |
| 467586dc8d |
| 8764628976 |
| 583fe71740 |
| 9fb3ff1a27 |
| 9d4f38c5fa |
| 793082f543 |
| fe6f697205 |
| fd3fb752d3 |
| 7fcea64eb4 |
| 05e0ce4953 |
| f8d9d1e776 |
| 8a7a221a7f |
| e4db8d2a21 |
| 7394853ddf |
| a8be6b576b |
| 8b960ededd |
| 4735a71fbd |
| 37fce8cc06 |
| 82476ab039 |
`.github/CONTRIBUTING.md` (vendored, 54 lines changed)

````diff
@@ -116,60 +116,6 @@ commit automatically with `git commit -s`.
 
 ### Run the unit- and integration-tests
 
-Running tests:
-
-```bash
-make test
-```
-
-This runs all unit and integration tests, in a containerized environment.
-Locally, every package can be tested separately with standard Go tools, but
-integration tests are skipped if local user doesn't have enough permissions or
-worker binaries are not installed.
-
-```bash
-# run unit tests only
-make test-unit
-
-# run integration tests only
-make test-integration
-
-# test a specific package
-TESTPKGS=./bake make test
-
-# run all integration tests with a specific worker
-TESTFLAGS="--run=//worker=remote -v" make test-integration
-
-# run a specific integration test
-TESTFLAGS="--run /TestBuild/worker=remote/ -v" make test-integration
-
-# run a selection of integration tests using a regexp
-TESTFLAGS="--run /TestBuild.*/worker=remote/ -v" make test-integration
-```
-
-> **Note**
->
-> Set `TEST_KEEP_CACHE=1` for the test framework to keep external dependant
-> images in a docker volume if you are repeatedly calling `make test`. This
-> helps to avoid rate limiting on the remote registry side.
-
-> **Note**
->
-> Set `TEST_DOCKERD=1` for the test framework to enable the docker workers,
-> specifically the `docker` and `docker-container` drivers.
->
-> The docker tests cannot be run in parallel, so require passing `--parallel=1`
-> in `TESTFLAGS`.
-
-> **Note**
->
-> If you are working behind a proxy, you can set some of or all
-> `HTTP_PROXY=http://ip:port`, `HTTPS_PROXY=http://ip:port`, `NO_PROXY=http://ip:port`
-> for the test framework to specify the proxy build args.
-
-
-### Run the helper commands
-
 To enter a demo container environment and experiment, you may run:
 
 ```
````
`.github/ISSUE_TEMPLATE/bug.yml` (vendored, deleted, 124 lines)

````yaml
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage

body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).

  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true

  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem

  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true

  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true

  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text

  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true

  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproducing your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        echo hello
        ```

        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```

        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true

  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.
````
`.github/ISSUE_TEMPLATE/config.yml` (vendored, deleted, 12 lines)

```yaml
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use Github Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.
```
`.github/ISSUE_TEMPLATE/feature.yml` (vendored, deleted, 15 lines)

```yaml
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage

body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true
```
`.github/SECURITY.md` (vendored, deleted, 12 lines)

```markdown
# Reporting security issues

The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!

**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).

Security reports are greatly appreciated, and we will publicly thank you for it.
We also like to send gifts—if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
```
`.github/dependabot.yml` (vendored, 5 lines changed)

```diff
@@ -5,11 +5,6 @@ updates:
     directory: "/"
     schedule:
       interval: "daily"
-    ignore:
-      # ignore this dependency
-      # it seems a bug with dependabot as pining to commit sha should not
-      # trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
-      - dependency-name: "docker/docs"
     labels:
       - "dependencies"
       - "bot"
```
`.github/releases.json` (vendored, deleted, 735 lines; the scrape truncates mid-file)

```json
{
  "latest": {
    "id": 90741208,
    "tag_name": "v0.10.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
    ]
  },
  "v0.10.2": {
    "id": 90741208,
    "tag_name": "v0.10.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
    ]
  },
  "v0.10.1": {
    "id": 90346950,
    "tag_name": "v0.10.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
    ]
  },
  "v0.10.0": {
    "id": 88388110,
    "tag_name": "v0.10.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
    ]
  },
  "v0.10.0-rc3": {
    "id": 88191592,
    "tag_name": "v0.10.0-rc3",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
    ]
  },
  "v0.10.0-rc2": {
    "id": 86248476,
    "tag_name": "v0.10.0-rc2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
```
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.0-rc1": {
|
|
||||||
"id": 85963900,
|
|
||||||
"tag_name": "v0.10.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.1": {
|
|
||||||
"id": 74760068,
|
|
||||||
"tag_name": "v0.9.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0": {
|
|
||||||
"id": 74546589,
|
|
||||||
"tag_name": "v0.9.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0-rc2": {
|
|
||||||
"id": 74052235,
|
|
||||||
"tag_name": "v0.9.0-rc2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0-rc1": {
|
|
||||||
"id": 73389692,
|
|
||||||
"tag_name": "v0.9.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.2": {
|
|
||||||
"id": 63479740,
|
|
||||||
"tag_name": "v0.8.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.1": {
|
|
||||||
"id": 62289050,
|
|
||||||
"tag_name": "v0.8.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.0": {
|
|
||||||
"id": 61423774,
|
|
||||||
"tag_name": "v0.8.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.0-rc1": {
|
|
||||||
"id": 60513568,
|
|
||||||
"tag_name": "v0.8.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.1": {
|
|
||||||
"id": 54098347,
|
|
||||||
"tag_name": "v0.7.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.0": {
|
|
||||||
"id": 53109422,
|
|
||||||
"tag_name": "v0.7.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.0-rc1": {
|
|
||||||
"id": 52726324,
|
|
||||||
"tag_name": "v0.7.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.3": {
|
|
||||||
"id": 48691641,
|
|
||||||
"tag_name": "v0.6.3",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.2": {
|
|
||||||
"id": 48207405,
|
|
||||||
"tag_name": "v0.6.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.1": {
|
|
||||||
"id": 47064772,
|
|
||||||
"tag_name": "v0.6.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.0": {
|
|
||||||
"id": 46343260,
|
|
||||||
"tag_name": "v0.6.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.0-rc1": {
|
|
||||||
"id": 46230351,
|
|
||||||
"tag_name": "v0.6.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.1": {
|
|
||||||
"id": 35276550,
|
|
||||||
"tag_name": "v0.5.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.0": {
|
|
||||||
"id": 35268960,
|
|
||||||
"tag_name": "v0.5.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.0-rc1": {
|
|
||||||
"id": 35015334,
|
|
||||||
"tag_name": "v0.5.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
-    ]
-  },
-  "v0.4.2": {
-    "id": 30007794,
-    "tag_name": "v0.4.2",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
-    ]
-  },
-  "v0.4.1": {
-    "id": 26067509,
-    "tag_name": "v0.4.1",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
-    ]
-  },
-  "v0.4.0": {
-    "id": 26028174,
-    "tag_name": "v0.4.0",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
-    ]
-  },
-  "v0.3.1": {
-    "id": 20316235,
-    "tag_name": "v0.3.1",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
-    ]
-  },
-  "v0.3.0": {
-    "id": 19029664,
-    "tag_name": "v0.3.0",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
-    ]
-  },
-  "v0.2.2": {
-    "id": 17671545,
-    "tag_name": "v0.2.2",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
-    ]
-  },
-  "v0.2.1": {
-    "id": 17582885,
-    "tag_name": "v0.2.1",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
-    ]
-  },
-  "v0.2.0": {
-    "id": 16965310,
-    "tag_name": "v0.2.0",
-    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
-    "assets": [
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
-      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
-    ]
-  }
-}
262
.github/workflows/build.yml
vendored
@@ -13,8 +13,10 @@ on:
     tags:
       - 'v*'
   pull_request:
+    branches:
+      - 'master'
+      - 'v[0-9]*'
     paths-ignore:
-      - '.github/releases.json'
       - 'README.md'
       - 'docs/**'
 
@@ -23,202 +25,43 @@ env:
   BUILDKIT_IMAGE: "moby/buildkit:latest"
   REPO_SLUG: "docker/buildx-bin"
   DESTDIR: "./bin"
-  TEST_CACHE_SCOPE: "test"
-  TESTFLAGS: "-v --parallel=6 --timeout=30m"
-  GOTESTSUM_FORMAT: "standard-verbose"
-  GO_VERSION: "1.21"
-  GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
 
 jobs:
-  test-integration:
+  test:
     runs-on: ubuntu-22.04
-    env:
-      TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
-      TEST_IMAGE_BUILD: "0"
-      TEST_IMAGE_ID: "buildx-tests"
-    strategy:
-      fail-fast: false
-      matrix:
-        buildkit:
-          - master
-          - latest
-          - buildx-stable-1
-          - v0.13.1
-          - v0.12.5
-          - v0.11.6
-        worker:
-          - docker-container
-          - remote
-        pkg:
-          - ./tests
-        mode:
-          - ""
-          - experimental
-        include:
-          - worker: docker
-            pkg: ./tests
-          - worker: docker+containerd # same as docker, but with containerd snapshotter
-            pkg: ./tests
-          - worker: docker
-            pkg: ./tests
-            mode: experimental
-          - worker: docker+containerd # same as docker, but with containerd snapshotter
-            pkg: ./tests
-            mode: experimental
     steps:
-      -
-        name: Prepare
-        run: |
-          echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
-          if [ -n "${{ matrix.buildkit }}" ]; then
-            echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
-          fi
-          testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
-          case "${{ matrix.worker }}" in
-            docker | docker+containerd)
-              echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
-              ;;
-            *)
-              echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
-              ;;
-          esac
-          if [[ "${{ matrix.worker }}" == "docker"* ]]; then
-            echo "TEST_DOCKERD=1" >> $GITHUB_ENV
-          fi
-          if [ "${{ matrix.mode }}" = "experimental" ]; then
-            echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
-          fi
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
-        with:
-          fetch-depth: 0
-      -
-        name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
         with:
           version: ${{ env.BUILDX_VERSION }}
           driver-opts: image=${{ env.BUILDKIT_IMAGE }}
           buildkitd-flags: --debug
       -
-        name: Build test image
+        name: Test
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v2
         with:
-          targets: integration-test
+          targets: test
           set: |
-            *.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
+            *.cache-from=type=gha,scope=test
+            *.cache-to=type=gha,scope=test
       -
-        name: Test
+        name: Upload coverage
-        run: |
+        uses: codecov/codecov-action@v3
-          ./hack/test
-        env:
-          TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
-          TESTPKGS: "${{ matrix.pkg }}"
-      -
-        name: Send to Codecov
-        if: always()
-        uses: codecov/codecov-action@v4
         with:
-          directory: ./bin/testreports
+          directory: ${{ env.DESTDIR }}/coverage
-          flags: integration
-          token: ${{ secrets.CODECOV_TOKEN }}
-      -
-        name: Generate annotations
-        if: always()
-        uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
-        with:
-          directory: ./bin/testreports
-      -
-        name: Upload test reports
-        if: always()
-        uses: actions/upload-artifact@v4
-        with:
-          name: test-reports-${{ env.TESTREPORTS_NAME }}
-          path: ./bin/testreports
 
-  test-unit:
+  prepare:
-    runs-on: ${{ matrix.os }}
-    strategy:
-      fail-fast: false
-      matrix:
-        os:
-          - ubuntu-22.04
-          - macos-12
-          - windows-2022
-    env:
-      SKIP_INTEGRATION_TESTS: 1
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Set up Go
-        uses: actions/setup-go@v5
-        with:
-          go-version: "${{ env.GO_VERSION }}"
-      -
-        name: Prepare
-        run: |
-          testreportsName=${{ github.job }}--${{ matrix.os }}
-          testreportsBaseDir=./bin/testreports
-          testreportsDir=$testreportsBaseDir/$testreportsName
-          echo "TESTREPORTS_NAME=$testreportsName" >> $GITHUB_ENV
-          echo "TESTREPORTS_BASEDIR=$testreportsBaseDir" >> $GITHUB_ENV
-          echo "TESTREPORTS_DIR=$testreportsDir" >> $GITHUB_ENV
-          mkdir -p $testreportsDir
-        shell: bash
-      -
-        name: Install gotestsum
-        run: |
-          go install gotest.tools/gotestsum@${{ env.GOTESTSUM_VERSION }}
-      -
-        name: Test
-        env:
-          TMPDIR: ${{ runner.temp }}
-        run: |
-          gotestsum \
-            --jsonfile="${{ env.TESTREPORTS_DIR }}/go-test-report.json" \
-            --junitfile="${{ env.TESTREPORTS_DIR }}/junit-report.xml" \
-            --packages="./..." \
-            -- \
-              "-mod=vendor" \
-              "-coverprofile" "${{ env.TESTREPORTS_DIR }}/coverage.txt" \
-              "-covermode" "atomic" ${{ env.TESTFLAGS }}
-        shell: bash
-      -
-        name: Send to Codecov
-        if: always()
-        uses: codecov/codecov-action@v4
-        with:
-          directory: ${{ env.TESTREPORTS_DIR }}
-          env_vars: RUNNER_OS
-          flags: unit
-          token: ${{ secrets.CODECOV_TOKEN }}
-      -
-        name: Generate annotations
-        if: always()
-        uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
-        with:
-          directory: ${{ env.TESTREPORTS_DIR }}
-      -
-        name: Upload test reports
-        if: always()
-        uses: actions/upload-artifact@v4
-        with:
-          name: test-reports-${{ env.TESTREPORTS_NAME }}
-          path: ${{ env.TESTREPORTS_BASEDIR }}
 
-  prepare-binaries:
     runs-on: ubuntu-22.04
     outputs:
       matrix: ${{ steps.platforms.outputs.matrix }}
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Create matrix
         id: platforms
@@ -232,11 +75,11 @@ jobs:
   binaries:
     runs-on: ubuntu-22.04
     needs:
-      - prepare-binaries
+      - prepare
    strategy:
      fail-fast: false
      matrix:
-        platform: ${{ fromJson(needs.prepare-binaries.outputs.matrix) }}
+        platform: ${{ fromJson(needs.prepare.outputs.matrix) }}
    steps:
      -
        name: Prepare
@@ -245,13 +88,13 @@ jobs:
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
+        uses: docker/setup-qemu-action@v2
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
         with:
           version: ${{ env.BUILDX_VERSION }}
           driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -266,28 +109,25 @@ jobs:
           CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
       -
         name: Upload artifacts
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
         with:
-          name: buildx-${{ env.PLATFORM_PAIR }}
+          name: buildx
           path: ${{ env.DESTDIR }}/*
           if-no-files-found: error
 
   bin-image:
     runs-on: ubuntu-22.04
-    needs:
+    if: ${{ github.event_name != 'pull_request' }}
-      - test-integration
-      - test-unit
-    if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
+        uses: docker/setup-qemu-action@v2
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
         with:
           version: ${{ env.BUILDX_VERSION }}
           driver-opts: image=${{ env.BUILDKIT_IMAGE }}
@@ -295,7 +135,7 @@ jobs:
       -
         name: Docker meta
         id: meta
-        uses: docker/metadata-action@v5
+        uses: docker/metadata-action@v4
         with:
           images: |
             ${{ env.REPO_SLUG }}
@@ -307,41 +147,39 @@ jobs:
       -
         name: Login to DockerHub
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@v3
+        uses: docker/login-action@v2
         with:
-          username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
       -
         name: Build and push image
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v2
         with:
           files: |
             ./docker-bake.hcl
             ${{ steps.meta.outputs.bake-file }}
           targets: image-cross
           push: ${{ github.event_name != 'pull_request' }}
-          sbom: true
           set: |
             *.cache-from=type=gha,scope=bin-image
             *.cache-to=type=gha,scope=bin-image,mode=max
+            *.attest=type=sbom
+            *.attest=type=provenance,mode=max,builder-id=https://github.com/${{ env.GITHUB_REPOSITORY }}/actions/runs/${{ env.GITHUB_RUN_ID }}
 
   release:
     runs-on: ubuntu-22.04
     needs:
-      - test-integration
-      - test-unit
       - binaries
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Download binaries
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v3
         with:
+          name: buildx
           path: ${{ env.DESTDIR }}
-          pattern: buildx-*
-          merge-multiple: true
       -
         name: Create checksums
         run: ./hack/hash-files
@@ -356,9 +194,33 @@ jobs:
       -
         name: GitHub Release
         if: startsWith(github.ref, 'refs/tags/v')
-        uses: softprops/action-gh-release@69320dbe05506a9a39fc8ae11030b214ec2d1f87 # v2.0.5
+        uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           draft: true
           files: ${{ env.DESTDIR }}/*
 
+  buildkit-edge:
+    runs-on: ubuntu-22.04
+    continue-on-error: true
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=moby/buildkit:master
+          buildkitd-flags: --debug
+      -
+        # Just run a bake target to check eveything runs fine
+        name: Build
+        uses: docker/bake-action@v2
+        with:
+          targets: binaries
42
.github/workflows/codeql.yml
vendored
@@ -1,42 +0,0 @@
-name: codeql
-
-on:
-  push:
-    branches:
-      - 'master'
-      - 'v[0-9]*'
-  pull_request:
-
-permissions:
-  actions: read
-  contents: read
-  security-events: write
-
-env:
-  GO_VERSION: "1.21"
-
-jobs:
-  codeql:
-    runs-on: ubuntu-latest
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Set up Go
-        uses: actions/setup-go@v5
-        with:
-          go-version: ${{ env.GO_VERSION }}
-      -
-        name: Initialize CodeQL
-        uses: github/codeql-action/init@v3
-        with:
-          languages: go
-      -
-        name: Autobuild
-        uses: github/codeql-action/autobuild@v3
-      -
-        name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v3
-        with:
-          category: "/language:go"
45
.github/workflows/docs-release.yml
vendored
@@ -1,11 +1,6 @@
 name: docs-release
 
 on:
-  workflow_dispatch:
-    inputs:
-      tag:
-        description: 'Git tag'
-        required: true
   release:
     types:
       - released
@@ -13,11 +8,11 @@ on:
 jobs:
   open-pr:
     runs-on: ubuntu-22.04
-    if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
+    if: "!github.event.release.prerelease"
     steps:
       -
         name: Checkout docs repo
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
         with:
           token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
           repository: docker/docs
@@ -25,47 +20,39 @@ jobs:
       -
         name: Prepare
         run: |
-          rm -rf ./data/buildx/*
+          rm -rf ./_data/buildx/*
-          if [ -n "${{ github.event.inputs.tag }}" ]; then
-            echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
-          else
-            echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
-          fi
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
       -
-        name: Generate yaml
+        name: Build docs
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v2
         with:
-          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
+          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
           targets: update-docs
-          provenance: false
           set: |
            *.output=/tmp/buildx-docs
         env:
           DOCS_FORMATS: yaml
       -
-        name: Copy yaml
+        name: Copy files
         run: |
-          cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
+          cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
       -
-        name: Update vendor
+        name: Commit changes
         run: |
-          make vendor
+          git add -A .
-        env:
-          VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
       -
         name: Create PR on docs repo
-        uses: peter-evans/create-pull-request@6d6857d36972b65feb161a90e484f2984215f83e # v6.0.5
+        uses: peter-evans/create-pull-request@2b011faafdcbc9ceb11414d64d0573f37c774b04
         with:
           token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
           push-to-fork: docker-tools-robot/docker.github.io
-          commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
+          commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
           signoff: true
-          branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
+          branch: dispatch/buildx-ref-${{ github.event.release.name }}
           delete-branch: true
-          title: Update buildx reference to ${{ env.RELEASE_NAME }}
+          title: Update buildx reference to ${{ github.event.release.name }}
           body: |
-            Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
+            Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
           draft: false
14
.github/workflows/docs-upstream.yml
vendored
@@ -26,18 +26,17 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
         with:
           version: latest
       -
         name: Build reference YAML docs
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v2
         with:
           targets: update-docs
-          provenance: false
           set: |
             *.output=/tmp/buildx-docs
             *.cache-from=type=gha,scope=docs-yaml
@@ -46,18 +45,17 @@ jobs:
           DOCS_FORMATS: yaml
       -
         name: Upload reference YAML docs
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
         with:
           name: docs-yaml
           path: /tmp/buildx-docs/out/reference
           retention-days: 1
 
   validate:
-    uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
+    uses: docker/docs/.github/workflows/validate-upstream.yml@main
     needs:
       - docs-yaml
     with:
-      module-name: docker/buildx
+      repo: https://github.com/${{ github.repository }}
      data-files-id: docs-yaml
      data-files-folder: buildx
-      create-placeholder-stubs: true
|||||||
85
.github/workflows/e2e.yml
vendored
@@ -11,8 +11,10 @@ on:
       - 'master'
       - 'v[0-9]*'
   pull_request:
+    branches:
+      - 'master'
+      - 'v[0-9]*'
     paths-ignore:
-      - '.github/releases.json'
       - 'README.md'
       - 'docs/**'
 
@@ -25,15 +27,15 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
+        uses: docker/setup-buildx-action@v2
         with:
           version: latest
       -
         name: Build
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v2
         with:
           targets: binaries
           set: |
@@ -46,7 +48,7 @@ jobs:
           mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
       -
         name: Upload artifacts
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v3
         with:
           name: binary
           path: ${{ env.DESTDIR }}/build
@@ -82,8 +84,6 @@ jobs:
             driver-opt: qemu.install=true
           - driver: remote
             endpoint: tcp://localhost:1234
-          - driver: docker-container
-            metadata-provenance: max
         exclude:
           - driver: docker
             multi-node: mnode-true
@@ -98,14 +98,14 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       -
         name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
+        uses: docker/setup-qemu-action@v2
         if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
       -
         name: Install buildx
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v3
         with:
           name: binary
           path: /home/runner/.docker/cli-plugins
@@ -131,15 +131,70 @@ jobs:
           else
             echo "MULTI_NODE=0" >> $GITHUB_ENV
           fi
-          if [ -n "${{ matrix.metadata-provenance }}" ]; then
-            echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
-          fi
       -
         name: Install k3s
         if: matrix.driver == 'kubernetes'
-        uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
+        uses: actions/github-script@v6
         with:
-          version: ${{ env.K3S_VERSION }}
+          script: |
+            const fs = require('fs');
+
+            let wait = function(milliseconds) {
+              return new Promise((resolve, reject) => {
+                if (typeof(milliseconds) !== 'number') {
+                  throw new Error('milleseconds not a number');
+                }
+                setTimeout(() => resolve("done!"), milliseconds)
+              });
+            }
+
+            try {
+              const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
+              core.info(`storing kubeconfig in ${kubeconfig}`);
+
+              await exec.exec('docker', ["run", "-d",
+                "--privileged",
+                "--name=buildkit-k3s",
+                "-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
+                "-e", "K3S_KUBECONFIG_MODE=666",
+                "-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
+                "-p", "6443:6443",
+                "-p", "80:80",
+                "-p", "443:443",
+                "-p", "8080:8080",
+                "rancher/k3s:${{ env.K3S_VERSION }}", "server"
+              ]);
+              await wait(10000);
+
+              core.exportVariable('KUBECONFIG', kubeconfig);
+
+              let nodeName;
+              for (let count = 1; count <= 5; count++) {
+                try {
+                  const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
+                  nodeName = nodeNameOutput.stdout
+                } catch (error) {
+                  core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
+                } finally {
+                  if (nodeName) {
+                    break;
+                  }
+                  await wait(5000);
+                }
+              }
+              if (!nodeName) {
+                throw new Error(`Unable to resolve node name after 5 attempts.`);
+              }
+
+              await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
+            } catch (error) {
+              core.setFailed(error.message);
+            }
+      -
|
||||||
|
name: Print KUBECONFIG
|
||||||
|
if: matrix.driver == 'kubernetes'
|
||||||
|
run: |
|
||||||
|
yq ${{ env.KUBECONFIG }}
|
||||||
-
|
-
|
||||||
name: Launch remote buildkitd
|
name: Launch remote buildkitd
|
||||||
if: matrix.driver == 'remote'
|
if: matrix.driver == 'remote'
|
||||||
|
|||||||
.github/workflows/validate.yml | 80 (vendored)

@@ -13,86 +13,30 @@ on:
    tags:
      - 'v*'
  pull_request:
-   paths-ignore:
-     - '.github/releases.json'
+   branches:
+     - 'master'
+     - 'v[0-9]*'
 
jobs:
-  prepare:
-    runs-on: ubuntu-22.04
-    outputs:
-      includes: ${{ steps.matrix.outputs.includes }}
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Matrix
-        id: matrix
-        uses: actions/github-script@v7
-        with:
-          script: |
-            let def = {};
-            await core.group(`Parsing definition`, async () => {
-              const printEnv = Object.assign({}, process.env, {
-                GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
-              });
-              const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
-                ignoreReturnCode: true,
-                env: printEnv
-              });
-              if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
-                throw new Error(res.stderr);
-              }
-              def = JSON.parse(resPrint.stdout.trim());
-            });
-            await core.group(`Generating matrix`, async () => {
-              const includes = [];
-              for (const targetName of Object.keys(def.target)) {
-                const target = def.target[targetName];
-                if (target.platforms && target.platforms.length > 0) {
-                  target.platforms.forEach(platform => {
-                    includes.push({
-                      target: targetName,
-                      platform: platform
-                    });
-                  });
-                } else {
-                  includes.push({
-                    target: targetName
-                  });
-                }
-              }
-              core.info(JSON.stringify(includes, null, 2));
-              core.setOutput('includes', JSON.stringify(includes));
-            });
-
  validate:
    runs-on: ubuntu-22.04
-   needs:
-     - prepare
    strategy:
      fail-fast: false
      matrix:
-       include: ${{ fromJson(needs.prepare.outputs.includes) }}
+       target:
+         - lint
+         - validate-vendor
+         - validate-docs
    steps:
-     -
-       name: Prepare
-       run: |
-         if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
-           echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
-         fi
      -
        name: Checkout
-       uses: actions/checkout@v4
+       uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
-       uses: docker/setup-buildx-action@v3
+       uses: docker/setup-buildx-action@v2
        with:
          version: latest
      -
-       name: Validate
-       uses: docker/bake-action@v4
-       with:
-         targets: ${{ matrix.target }}
-         set: |
-           *.platform=${{ matrix.platform }}
+       name: Run
+       run: |
+         make ${{ matrix.target }}
@@ -1,5 +1,5 @@
 run:
-  timeout: 30m
+  timeout: 10m
   skip-files:
     - ".*\\.pb\\.go$"
 
@@ -11,67 +11,30 @@ linters:
   enable:
     - gofmt
     - govet
+    - deadcode
     - depguard
     - goimports
     - ineffassign
     - misspell
     - unused
+    - varcheck
     - revive
     - staticcheck
     - typecheck
     - nolintlint
-    - gosec
-    - forbidigo
   disable-all: true
 
 linters-settings:
-  govet:
-    enable:
-      - nilness
-      - unusedwrite
-    # enable-all: true
-    # disable:
-    #  - fieldalignment
-    #  - shadow
   depguard:
-    rules:
-      main:
-        deny:
+    list-type: blacklist
+    include-go-root: true
+    packages:
       # The io/ioutil package has been deprecated.
       # https://go.dev/doc/go1.16#ioutil
-          - pkg: "io/ioutil"
-            desc: The io/ioutil package has been deprecated.
+      - io/ioutil
-  forbidigo:
-    forbid:
-      - '^fmt\.Errorf(# use errors\.Errorf instead)?$'
-  gosec:
-    excludes:
-      - G204 # Audit use of command execution
-      - G402 # TLS MinVersion too low
-    config:
-      G306: "0644"
 
 issues:
   exclude-rules:
     - linters:
         - revive
       text: "stutters"
-    - linters:
-        - revive
-      text: "empty-block"
-    - linters:
-        - revive
-      text: "superfluous-else"
-    - linters:
-        - revive
-      text: "unused-parameter"
-    - linters:
-        - revive
-      text: "redefines-builtin-id"
-    - linters:
-        - revive
-      text: "if-return"
-
-  # show all
-  max-issues-per-linter: 0
-  max-same-issues: 0
Dockerfile | 55

@@ -1,22 +1,15 @@
-# syntax=docker/dockerfile:1
+# syntax=docker/dockerfile-upstream:1.5.0
 
-ARG GO_VERSION=1.21
-ARG XX_VERSION=1.4.0
+ARG GO_VERSION=1.19
+ARG XX_VERSION=1.1.2
+ARG DOCKERD_VERSION=20.10.14
 
-# for testing
-ARG DOCKER_VERSION=26.0.0
-ARG GOTESTSUM_VERSION=v1.9.0
-ARG REGISTRY_VERSION=2.8.0
-ARG BUILDKIT_VERSION=v0.13.1
-ARG UNDOCK_VERSION=0.7.0
+FROM docker:$DOCKERD_VERSION AS dockerd-release
 
+# xx is a helper for cross-compilation
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
 
 FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
-FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
-FROM dockereng/cli-bin:$DOCKER_VERSION AS docker-cli
-FROM registry:$REGISTRY_VERSION AS registry
-FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
-FROM crazymax/undock:$UNDOCK_VERSION AS undock
 
 FROM golatest AS gobase
 COPY --from=xx / /
@@ -25,13 +18,6 @@ ENV GOFLAGS=-mod=vendor
 ENV CGO_ENABLED=0
 WORKDIR /src
 
-FROM gobase AS gotestsum
-ARG GOTESTSUM_VERSION
-ENV GOFLAGS=
-RUN --mount=target=/root/.cache,type=cache \
-    GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
-    /out/gotestsum --version
-
 FROM gobase AS buildx-version
 RUN --mount=type=bind,target=. <<EOT
   set -e
@@ -53,7 +39,6 @@ RUN --mount=type=bind,target=. \
 EOT
 
 FROM gobase AS test
-ENV SKIP_INTEGRATION_TESTS=1
 RUN --mount=type=bind,target=. \
     --mount=type=cache,target=/root/.cache \
     --mount=type=cache,target=/go/pkg/mod \
@@ -76,30 +61,6 @@ FROM binaries-$TARGETOS AS binaries
 # enable scanning for this stage
 ARG BUILDKIT_SBOM_SCAN_STAGE=true
 
-FROM gobase AS integration-test-base
-# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
-RUN apk add --no-cache \
-    btrfs-progs \
-    e2fsprogs \
-    e2fsprogs-extra \
-    ip6tables \
-    iptables \
-    openssl \
-    shadow-uidmap \
-    xfsprogs \
-    xz
-COPY --link --from=gotestsum /out/gotestsum /usr/bin/
-COPY --link --from=registry /bin/registry /usr/bin/
-COPY --link --from=docker-engine / /usr/bin/
-COPY --link --from=docker-cli / /usr/bin/
-COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
-COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
-COPY --link --from=undock /usr/local/bin/undock /usr/bin/
-COPY --link --from=binaries /buildx /usr/bin/
-
-FROM integration-test-base AS integration-test
-COPY . .
-
 # Release
 FROM --platform=$BUILDPLATFORM alpine AS releaser
 WORKDIR /work
@@ -115,7 +76,7 @@ FROM scratch AS release
 COPY --from=releaser /out/ /
 
 # Shell
-FROM docker:$DOCKER_VERSION AS dockerd-release
+FROM docker:$DOCKERD_VERSION AS dockerd-release
 FROM alpine AS shell
 RUN apk add --no-cache iptables tmux git vim less openssh
 RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
@@ -153,7 +153,6 @@ made through a pull request.
     "akihirosuda",
     "crazy-max",
     "jedevc",
-    "jsternberg",
     "tiborvass",
     "tonistiigi",
   ]
@@ -195,11 +194,6 @@ made through a pull request.
   Email = "me@jedevc.com"
   GitHub = "jedevc"
 
-  [people.jsternberg]
-  Name = "Jonathan Sternberg"
-  Email = "jonathan.sternberg@docker.com"
-  GitHub = "jsternberg"
-
  [people.thajeztah]
   Name = "Sebastiaan van Stijn"
   Email = "github@gone.nl"
Makefile | 40

@@ -8,8 +8,6 @@ endif
 
 export BUILDX_CMD ?= docker buildx
 
-BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
-
 .PHONY: all
 all: binaries
 
@@ -21,9 +19,13 @@ build:
 shell:
 	./hack/shell
 
-.PHONY: $(BAKE_TARGETS)
-$(BAKE_TARGETS):
-	$(BUILDX_CMD) bake $@
+.PHONY: binaries
+binaries:
+	$(BUILDX_CMD) bake binaries
 
+.PHONY: binaries-cross
+binaries-cross:
+	$(BUILDX_CMD) bake binaries-cross
+
 .PHONY: install
 install: binaries
@@ -35,19 +37,27 @@ release:
 	./hack/release
 
 .PHONY: validate-all
-validate-all: lint test validate-vendor validate-docs validate-generated-files
+validate-all: lint test validate-vendor validate-docs
 
+.PHONY: lint
+lint:
+	$(BUILDX_CMD) bake lint
+
 .PHONY: test
 test:
-	./hack/test
+	$(BUILDX_CMD) bake test
 
-.PHONY: test-unit
-test-unit:
-	TESTPKGS=./... SKIP_INTEGRATION_TESTS=1 ./hack/test
+.PHONY: validate-vendor
+validate-vendor:
+	$(BUILDX_CMD) bake validate-vendor
 
-.PHONY: test
-test-integration:
-	TESTPKGS=./tests ./hack/test
+.PHONY: validate-docs
+validate-docs:
+	$(BUILDX_CMD) bake validate-docs
 
+.PHONY: validate-authors
+validate-authors:
+	$(BUILDX_CMD) bake validate-authors
+
 .PHONY: test-driver
 test-driver:
@@ -68,7 +78,3 @@ authors:
 .PHONY: mod-outdated
 mod-outdated:
 	$(BUILDX_CMD) bake mod-outdated
-
-.PHONY: generated-files
-generated-files:
-	$(BUILDX_CMD) bake update-generated-files
README.md | 45

@@ -32,6 +32,19 @@ Key features:
 - [Building with buildx](#building-with-buildx)
 - [Working with builder instances](#working-with-builder-instances)
 - [Building multi-platform images](#building-multi-platform-images)
+- [Manuals](docs/manuals)
+  - [High-level build options with Bake](docs/manuals/bake/index.md)
+  - [Drivers](docs/manuals/drivers/index.md)
+  - [Exporters](docs/manuals/exporters/index.md)
+  - [Cache backends](docs/manuals/cache/backends/index.md)
+- [Guides](docs/guides)
+  - [CI/CD](docs/guides/cicd.md)
+  - [CNI networking](docs/guides/cni-networking.md)
+  - [Using a custom network](docs/guides/custom-network.md)
+  - [Using a custom registry configuration](docs/guides/custom-registry-config.md)
+  - [OpenTelemetry support](docs/guides/opentelemetry.md)
+  - [Registry mirror](docs/guides/registry-mirror.md)
+  - [Resource limiting](docs/guides/resource-limiting.md)
 - [Reference](docs/reference/buildx.md)
   - [`buildx bake`](docs/reference/buildx_bake.md)
   - [`buildx build`](docs/reference/buildx_build.md)
@@ -41,26 +54,21 @@ Key features:
   - [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
   - [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
   - [`buildx inspect`](docs/reference/buildx_inspect.md)
+  - [`buildx install`](docs/reference/buildx_install.md)
   - [`buildx ls`](docs/reference/buildx_ls.md)
   - [`buildx prune`](docs/reference/buildx_prune.md)
   - [`buildx rm`](docs/reference/buildx_rm.md)
   - [`buildx stop`](docs/reference/buildx_stop.md)
+  - [`buildx uninstall`](docs/reference/buildx_uninstall.md)
   - [`buildx use`](docs/reference/buildx_use.md)
   - [`buildx version`](docs/reference/buildx_version.md)
 - [Contributing](#contributing)
 
-For more information on how to use Buildx, see
-[Docker Build docs](https://docs.docker.com/build/).
-
 # Installing
 
-Using `buildx` with Docker requires Docker engine 19.03 or newer.
-> **Warning**
->
-> Using an incompatible version of Docker may result in unexpected behavior,
-> and will likely cause issues, especially when using Buildx builders with more
-> recent versions of BuildKit.
+Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer.
+A limited set of functionality works with older versions of Docker when
+invoking the binary directly.
 
 ## Windows and macOS
 
@@ -69,9 +77,8 @@ for Windows and macOS.
 
 ## Linux packages
 
-Docker Engine package repositories contain Docker Buildx packages when installed according to the
-[Docker Engine install documentation](https://docs.docker.com/engine/install/). Install the
-`docker-buildx-plugin` package to install the Buildx plugin.
+Docker Linux packages also include Docker Buildx when installed using the
+[DEB or RPM packages](https://docs.docker.com/engine/install/).
 
 ## Manual download
 
@@ -187,12 +194,12 @@ through various "drivers". Each driver defines how and where a build should
 run, and have different feature sets.
 
 We currently support the following drivers:
-- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
+- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
 
-For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
+For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
 
 ## Working with builder instances
 
@@ -309,7 +316,7 @@ cross-compilation helpers for more advanced use-cases.
 
 ## High-level build options
 
-See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.
+See [`docs/manuals/bake/index.md`](docs/manuals/bake/index.md) for more details.
 
 # Contributing
 
614
bake/bake.go
614
bake/bake.go
@@ -3,6 +3,7 @@ package bake
|
|||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"encoding/csv"
|
"encoding/csv"
|
||||||
|
"fmt"
|
||||||
"io"
|
"io"
|
||||||
"os"
|
"os"
|
||||||
"path"
|
"path"
|
||||||
@@ -11,27 +12,23 @@ import (
|
|||||||
"sort"
|
"sort"
|
||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
|
||||||
|
|
||||||
composecli "github.com/compose-spec/compose-go/v2/cli"
|
|
||||||
"github.com/docker/buildx/bake/hclparser"
|
"github.com/docker/buildx/bake/hclparser"
|
||||||
"github.com/docker/buildx/build"
|
"github.com/docker/buildx/build"
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/util/buildflags"
|
"github.com/docker/buildx/util/buildflags"
|
||||||
"github.com/docker/buildx/util/platformutil"
|
"github.com/docker/buildx/util/platformutil"
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/cli/cli/config"
|
"github.com/docker/cli/cli/config"
|
||||||
dockeropts "github.com/docker/cli/opts"
|
"github.com/docker/docker/builder/remotecontext/urlutil"
|
||||||
hcl "github.com/hashicorp/hcl/v2"
|
hcl "github.com/hashicorp/hcl/v2"
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/client/llb"
|
"github.com/moby/buildkit/client/llb"
|
||||||
"github.com/moby/buildkit/session/auth/authprovider"
|
"github.com/moby/buildkit/session/auth/authprovider"
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
"github.com/zclconf/go-cty/cty"
|
|
||||||
"github.com/zclconf/go-cty/cty/convert"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
|
httpPrefix = regexp.MustCompile(`^https?://`)
|
||||||
|
gitURLPathWithFragmentSuffix = regexp.MustCompile(`\.git(?:#.+)?$`)
|
||||||
|
|
||||||
validTargetNameChars = `[a-zA-Z0-9_-]+`
|
validTargetNameChars = `[a-zA-Z0-9_-]+`
|
||||||
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
|
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
|
||||||
)
|
)
|
||||||
@@ -47,18 +44,17 @@ type Override struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func defaultFilenames() []string {
|
func defaultFilenames() []string {
|
||||||
names := []string{}
|
return []string{
|
||||||
names = append(names, composecli.DefaultFileNames...)
|
"docker-compose.yml", // support app
|
||||||
names = append(names, []string{
|
"docker-compose.yaml", // support app
|
||||||
"docker-bake.json",
|
"docker-bake.json",
|
||||||
"docker-bake.override.json",
|
"docker-bake.override.json",
|
||||||
"docker-bake.hcl",
|
"docker-bake.hcl",
|
||||||
"docker-bake.override.hcl",
|
"docker-bake.override.hcl",
|
||||||
}...)
|
}
|
||||||
return names
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]File, error) {
|
func ReadLocalFiles(names []string) ([]File, error) {
|
||||||
isDefault := false
|
isDefault := false
|
||||||
if len(names) == 0 {
|
if len(names) == 0 {
|
||||||
isDefault = true
|
isDefault = true
|
||||||
@@ -66,26 +62,20 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
|
|||||||
}
|
}
|
||||||
out := make([]File, 0, len(names))
|
out := make([]File, 0, len(names))
|
||||||
|
|
||||||
setStatus := func(st *client.VertexStatus) {
|
|
||||||
if l != nil {
|
|
||||||
l.SetStatus(st)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, n := range names {
|
for _, n := range names {
|
||||||
var dt []byte
|
var dt []byte
|
||||||
var err error
|
var err error
|
||||||
if n == "-" {
|
if n == "-" {
|
||||||
dt, err = readWithProgress(stdin, setStatus)
|
dt, err = io.ReadAll(os.Stdin)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
dt, err = readFileWithProgress(n, isDefault, setStatus)
|
dt, err = os.ReadFile(n)
|
||||||
if dt == nil && err == nil {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
if isDefault && errors.Is(err, os.ErrNotExist) {
|
||||||
|
continue
|
||||||
|
}
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -94,103 +84,6 @@ func ReadLocalFiles(names []string, stdin io.Reader, l progress.SubLogger) ([]Fi
|
|||||||
return out, nil
|
return out, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func readFileWithProgress(fname string, isDefault bool, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
|
|
||||||
st := &client.VertexStatus{
|
|
||||||
ID: "reading " + fname,
|
|
||||||
}
|
|
||||||
|
|
||||||
defer func() {
|
|
||||||
now := time.Now()
|
|
||||||
st.Completed = &now
|
|
||||||
if dt != nil || err != nil {
|
|
||||||
setStatus(st)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
now := time.Now()
|
|
||||||
st.Started = &now
|
|
||||||
|
|
||||||
f, err := os.Open(fname)
|
|
||||||
if err != nil {
|
|
||||||
if isDefault && errors.Is(err, os.ErrNotExist) {
|
|
||||||
return nil, nil
|
|
||||||
}
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
defer f.Close()
|
|
||||||
setStatus(st)
|
|
||||||
|
|
||||||
info, err := f.Stat()
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
st.Total = info.Size()
|
|
||||||
setStatus(st)
|
|
||||||
|
|
||||||
buf := make([]byte, 1024)
|
|
||||||
for {
|
|
||||||
n, err := f.Read(buf)
|
|
||||||
if err == io.EOF {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
dt = append(dt, buf[:n]...)
|
|
||||||
st.Current += int64(n)
|
|
||||||
setStatus(st)
|
|
||||||
}
|
|
||||||
|
|
||||||
return dt, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func readWithProgress(r io.Reader, setStatus func(st *client.VertexStatus)) (dt []byte, err error) {
|
|
||||||
st := &client.VertexStatus{
|
|
||||||
ID: "reading from stdin",
|
|
||||||
}
|
|
||||||
|
|
||||||
defer func() {
|
|
||||||
now := time.Now()
|
|
||||||
st.Completed = &now
|
|
||||||
setStatus(st)
|
|
||||||
}()
|
|
||||||
|
|
||||||
now := time.Now()
|
|
||||||
st.Started = &now
|
|
||||||
setStatus(st)
|
|
||||||
|
|
||||||
buf := make([]byte, 1024)
|
|
||||||
for {
|
|
||||||
n, err := r.Read(buf)
|
|
||||||
if err == io.EOF {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
dt = append(dt, buf[:n]...)
|
|
||||||
st.Current += int64(n)
|
|
||||||
setStatus(st)
|
|
||||||
}
|
|
||||||
|
|
||||||
return dt, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func ListTargets(files []File) ([]string, error) {
|
|
||||||
c, err := ParseFiles(files, nil)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
var targets []string
|
|
||||||
for _, g := range c.Groups {
|
|
||||||
targets = append(targets, g.Name)
|
|
||||||
}
|
|
||||||
for _, t := range c.Targets {
|
|
||||||
targets = append(targets, t.Name)
|
|
||||||
}
|
|
||||||
return dedupSlice(targets), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
|
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
|
||||||
c, err := ParseFiles(files, defaults)
|
c, err := ParseFiles(files, defaults)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -247,6 +140,19 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Propagate SOURCE_DATE_EPOCH from the client env.
|
||||||
|
// The logic is purposely duplicated from `build/build`.go for keeping this visible in `bake --print`.
|
||||||
|
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
|
||||||
|
for _, f := range m {
|
||||||
|
if f.Args == nil {
|
||||||
|
f.Args = make(map[string]*string)
|
||||||
|
}
|
||||||
|
if _, ok := f.Args["SOURCE_DATE_EPOCH"]; !ok {
|
||||||
|
f.Args["SOURCE_DATE_EPOCH"] = &v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return m, n, nil
|
return m, n, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -322,7 +228,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
|
|||||||
}
|
}
|
||||||
hclFiles = append(hclFiles, hf)
|
hclFiles = append(hclFiles, hf)
|
||||||
} else if composeErr != nil {
|
} else if composeErr != nil {
|
||||||
return nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
|
return nil, fmt.Errorf("failed to parse %s: parsing yaml: %v, parsing hcl: %w", f.Name, composeErr, err)
|
||||||
} else {
|
} else {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
```diff
@@ -339,28 +245,13 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
 	}
 
 	if len(hclFiles) > 0 {
-		renamed, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
+		if err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
 			LookupVar:     os.LookupEnv,
 			Vars:          defaults,
 			ValidateLabel: validateTargetName,
-		}, &c)
-		if err.HasErrors() {
+		}, &c); err.HasErrors() {
 			return nil, err
 		}
-
-		for _, renamed := range renamed {
-			for oldName, newNames := range renamed {
-				newNames = dedupSlice(newNames)
-				if len(newNames) == 1 && oldName == newNames[0] {
-					continue
-				}
-				c.Groups = append(c.Groups, &Group{
-					Name:    oldName,
-					Targets: newNames,
-				})
-			}
-		}
-		c = dedupeConfig(c)
 	}
 
 	return &c, nil
```
```diff
@@ -678,10 +569,9 @@ type Target struct {
 	Name string `json:"-" hcl:"name,label" cty:"name"`
 
 	// Inherits is the only field that cannot be overridden with --set
+	Attest   []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
 	Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
 
-	Annotations []string          `json:"annotations,omitempty" hcl:"annotations,optional" cty:"annotations"`
-	Attest      []string          `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
 	Context     *string           `json:"context,omitempty" hcl:"context,optional" cty:"context"`
 	Contexts    map[string]string `json:"contexts,omitempty" hcl:"contexts,optional" cty:"contexts"`
 	Dockerfile  *string           `json:"dockerfile,omitempty" hcl:"dockerfile,optional" cty:"dockerfile"`
```
```diff
@@ -700,22 +590,14 @@ type Target struct {
 	NoCache       *bool    `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
 	NetworkMode   *string  `json:"-" hcl:"-" cty:"-"`
 	NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
-	ShmSize       *string  `json:"shm-size,omitempty" hcl:"shm-size,optional"`
-	Ulimits       []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
-	// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
+	// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/manuals/bake/file-definition.md.
 
 	// linked is a private field to mark a target used as a linked one
 	linked bool
 }
 
-var _ hclparser.WithEvalContexts = &Target{}
-var _ hclparser.WithGetName = &Target{}
-var _ hclparser.WithEvalContexts = &Group{}
-var _ hclparser.WithGetName = &Group{}
-
 func (t *Target) normalize() {
-	t.Annotations = removeDupes(t.Annotations)
-	t.Attest = removeAttestDupes(t.Attest)
+	t.Attest = removeDupes(t.Attest)
 	t.Tags = removeDupes(t.Tags)
 	t.Secrets = removeDupes(t.Secrets)
 	t.SSH = removeDupes(t.SSH)
```
```diff
@@ -724,7 +606,6 @@ func (t *Target) normalize() {
 	t.CacheTo = removeDupes(t.CacheTo)
 	t.Outputs = removeDupes(t.Outputs)
 	t.NoCacheFilter = removeDupes(t.NoCacheFilter)
-	t.Ulimits = removeDupes(t.Ulimits)
 
 	for k, v := range t.Contexts {
 		if v == "" {
```
```diff
@@ -776,12 +657,8 @@ func (t *Target) Merge(t2 *Target) {
 	if t2.Target != nil {
 		t.Target = t2.Target
 	}
-	if t2.Annotations != nil { // merge
-		t.Annotations = append(t.Annotations, t2.Annotations...)
-	}
 	if t2.Attest != nil { // merge
 		t.Attest = append(t.Attest, t2.Attest...)
-		t.Attest = removeAttestDupes(t.Attest)
 	}
 	if t2.Secrets != nil { // merge
 		t.Secrets = append(t.Secrets, t2.Secrets...)
```
```diff
@@ -813,12 +690,6 @@ func (t *Target) Merge(t2 *Target) {
 	if t2.NoCacheFilter != nil { // merge
 		t.NoCacheFilter = append(t.NoCacheFilter, t2.NoCacheFilter...)
 	}
-	if t2.ShmSize != nil { // no merge
-		t.ShmSize = t2.ShmSize
-	}
-	if t2.Ulimits != nil { // merge
-		t.Ulimits = append(t.Ulimits, t2.Ulimits...)
-	}
 	t.Inherits = append(t.Inherits, t2.Inherits...)
 }
 
```
```diff
@@ -871,8 +742,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Platforms = o.ArrValue
 		case "output":
 			t.Outputs = o.ArrValue
-		case "annotations":
-			t.Annotations = append(t.Annotations, o.ArrValue...)
 		case "attest":
 			t.Attest = append(t.Attest, o.ArrValue...)
 		case "no-cache":
```
```diff
@@ -883,10 +752,6 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.NoCache = &noCache
 		case "no-cache-filter":
 			t.NoCacheFilter = o.ArrValue
-		case "shm-size":
-			t.ShmSize = &value
-		case "ulimits":
-			t.Ulimits = o.ArrValue
 		case "pull":
 			pull, err := strconv.ParseBool(value)
 			if err != nil {
```
```diff
@@ -894,17 +759,19 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			}
 			t.Pull = &pull
 		case "push":
-			push, err := strconv.ParseBool(value)
+			_, err := strconv.ParseBool(value)
 			if err != nil {
 				return errors.Errorf("invalid value %s for boolean key push", value)
 			}
-			t.Outputs = setPushOverride(t.Outputs, push)
-		case "load":
-			load, err := strconv.ParseBool(value)
-			if err != nil {
-				return errors.Errorf("invalid value %s for boolean key load", value)
+			if len(t.Outputs) == 0 {
+				t.Outputs = append(t.Outputs, "type=image,push=true")
+			} else {
+				for i, output := range t.Outputs {
+					if typ := parseOutputType(output); typ == "image" || typ == "registry" {
+						t.Outputs[i] = t.Outputs[i] + ",push=" + value
+					}
+				}
 			}
-			t.Outputs = setLoadOverride(t.Outputs, load)
 		default:
 			return errors.Errorf("unknown key: %s", keys[0])
 		}
```
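The hunk above replaces the `setPushOverride` helper with inline logic: with no outputs configured, a `push=true` override adds `type=image,push=true`; otherwise `,push=<value>` is appended to existing `image`/`registry` outputs. A minimal standalone sketch of that inline behavior (`parseOutputType` here is a simplified stand-in using `strings.Split`, not the CSV-based helper from the real code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOutputType is a simplified sketch of the helper: scan a
// comma-separated output spec for its "type=" field.
func parseOutputType(spec string) string {
	for _, field := range strings.Split(spec, ",") {
		if strings.HasPrefix(field, "type=") {
			return strings.TrimPrefix(field, "type=")
		}
	}
	return ""
}

// applyPushOverride mirrors the inline logic from the hunk: no outputs
// means add a pushing image output; otherwise tag image/registry
// outputs with the push value.
func applyPushOverride(outputs []string, value string) []string {
	if len(outputs) == 0 {
		return append(outputs, "type=image,push=true")
	}
	for i, output := range outputs {
		if typ := parseOutputType(output); typ == "image" || typ == "registry" {
			outputs[i] = outputs[i] + ",push=" + value
		}
	}
	return outputs
}

func main() {
	fmt.Println(applyPushOverride(nil, "true"))
	fmt.Println(applyPushOverride([]string{"type=image,name=app"}, "true"))
	fmt.Println(applyPushOverride([]string{"type=local,dest=out"}, "true"))
}
```

Note that a `type=local` output passes through untouched, which is why the later version reworked this logic into the dedicated `setPushOverride`/`setLoadOverride` helpers removed elsewhere in this diff.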
```diff
@@ -912,128 +779,13 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 	return nil
 }
 
-func (g *Group) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
-	content, _, err := block.Body.PartialContent(&hcl.BodySchema{
-		Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
-	})
-	if err != nil {
-		return nil, err
-	}
-	if _, ok := content.Attributes["matrix"]; ok {
-		return nil, errors.Errorf("matrix is not supported for groups")
-	}
-	return []*hcl.EvalContext{ectx}, nil
-}
-
-func (t *Target) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
-	content, _, err := block.Body.PartialContent(&hcl.BodySchema{
-		Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
-	})
-	if err != nil {
-		return nil, err
-	}
-
-	attr, ok := content.Attributes["matrix"]
-	if !ok {
-		return []*hcl.EvalContext{ectx}, nil
-	}
-	if diags := loadDeps(attr.Expr); diags.HasErrors() {
-		return nil, diags
-	}
-	value, err := attr.Expr.Value(ectx)
-	if err != nil {
-		return nil, err
-	}
-
-	if !value.Type().IsMapType() && !value.Type().IsObjectType() {
-		return nil, errors.Errorf("matrix must be a map")
-	}
-	matrix := value.AsValueMap()
-
-	ectxs := []*hcl.EvalContext{ectx}
-	for k, expr := range matrix {
-		if !expr.CanIterateElements() {
-			return nil, errors.Errorf("matrix values must be a list")
-		}
-
-		ectxs2 := []*hcl.EvalContext{}
-		for _, v := range expr.AsValueSlice() {
-			for _, e := range ectxs {
-				e2 := ectx.NewChild()
-				e2.Variables = make(map[string]cty.Value)
-				if e != ectx {
-					for k, v := range e.Variables {
-						e2.Variables[k] = v
-					}
-				}
-				e2.Variables[k] = v
-				ectxs2 = append(ectxs2, e2)
-			}
-		}
-		ectxs = ectxs2
-	}
-	return ectxs, nil
-}
-
-func (g *Group) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
-	content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
-		Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
-	})
-	if diags != nil {
-		return "", diags
-	}
-
-	if _, ok := content.Attributes["name"]; ok {
-		return "", errors.Errorf("name is not supported for groups")
-	}
-	if _, ok := content.Attributes["matrix"]; ok {
-		return "", errors.Errorf("matrix is not supported for groups")
-	}
-	return block.Labels[0], nil
-}
-
-func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
-	content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
-		Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
-	})
-	if diags != nil {
-		return "", diags
-	}
-
-	attr, ok := content.Attributes["name"]
-	if !ok {
-		return block.Labels[0], nil
-	}
-	if _, ok := content.Attributes["matrix"]; !ok {
-		return "", errors.Errorf("name requires matrix")
-	}
-	if diags := loadDeps(attr.Expr); diags.HasErrors() {
-		return "", diags
-	}
-	value, diags := attr.Expr.Value(ectx)
-	if diags != nil {
-		return "", diags
-	}
-
-	value, err := convert.Convert(value, cty.String)
-	if err != nil {
-		return "", err
-	}
-	return value.AsString(), nil
-}
-
 func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
-	// make sure local credentials are loaded multiple times for different targets
-	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
-	authProvider := authprovider.NewDockerAuthProvider(dockerConfig, nil)
-
 	m2 := make(map[string]build.Options, len(m))
 	for k, v := range m {
 		bo, err := toBuildOpt(v, inp)
 		if err != nil {
 			return nil, err
 		}
-		bo.Session = append(bo.Session, authProvider)
 		m2[k] = *bo
 	}
 	return m2, nil
```
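The removed `Target.GetEvalContexts` implements bake's `matrix` feature: each matrix key multiplies the set of evaluation contexts by its list of values, yielding the cartesian product. A standalone sketch of that expansion loop, with plain string maps standing in for `*hcl.EvalContext`:

```go
package main

import "fmt"

// expand mirrors the removed loop: start with one (empty) context and,
// for each matrix key, replace the context set with one copy per value.
func expand(matrix map[string][]string) []map[string]string {
	ctxs := []map[string]string{{}}
	for k, values := range matrix {
		var next []map[string]string
		for _, v := range values {
			for _, c := range ctxs {
				// copy the parent context, then bind this matrix variable
				child := map[string]string{}
				for ck, cv := range c {
					child[ck] = cv
				}
				child[k] = v
				next = append(next, child)
			}
		}
		ctxs = next
	}
	return ctxs
}

func main() {
	// 2 platforms x 3 flavors -> 6 build contexts
	fmt.Println(len(expand(map[string][]string{
		"platform": {"linux/amd64", "linux/arm64"},
		"flavor":   {"alpine", "debian", "slim"},
	})))
}
```

This is why a matrix target must also set `name` (the removed `GetName` errors with "name requires matrix"): each expanded context needs a distinct target name derived from its matrix variables.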
```diff
@@ -1051,7 +803,7 @@ func updateContext(t *build.Inputs, inp *Input) {
 		if strings.HasPrefix(v.Path, "cwd://") || strings.HasPrefix(v.Path, "target:") || strings.HasPrefix(v.Path, "docker-image:") {
 			continue
 		}
-		if build.IsRemoteURL(v.Path) {
+		if IsRemoteURL(v.Path) {
 			continue
 		}
 		st := llb.Scratch().File(llb.Copy(*inp.State, v.Path, "/"), llb.WithCustomNamef("set context %s to %s", k, v.Path))
```
```diff
@@ -1065,15 +817,10 @@ func updateContext(t *build.Inputs, inp *Input) {
 	if strings.HasPrefix(t.ContextPath, "cwd://") {
 		return
 	}
-	if build.IsRemoteURL(t.ContextPath) {
+	if IsRemoteURL(t.ContextPath) {
 		return
 	}
-	st := llb.Scratch().File(
-		llb.Copy(*inp.State, t.ContextPath, "/", &llb.CopyInfo{
-			CopyDirContentsOnly: true,
-		}),
-		llb.WithCustomNamef("set context to %s", t.ContextPath),
-	)
+	st := llb.Scratch().File(llb.Copy(*inp.State, t.ContextPath, "/"), llb.WithCustomNamef("set context to %s", t.ContextPath))
 	t.ContextState = &st
 }
 
```
```diff
@@ -1106,7 +853,7 @@ func validateContextsEntitlements(t build.Inputs, inp *Input) error {
 }
 
 func checkPath(p string) error {
-	if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
+	if IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
 		return nil
 	}
 	p, err := filepath.EvalSymlinks(p)
```
```diff
@@ -1116,10 +863,6 @@ func checkPath(p string) error {
 		}
 		return err
 	}
-	p, err = filepath.Abs(p)
-	if err != nil {
-		return err
-	}
 	wd, err := os.Getwd()
 	if err != nil {
 		return err
```
```diff
@@ -1128,8 +871,7 @@ func checkPath(p string) error {
 	if err != nil {
 		return err
 	}
-	parts := strings.Split(rel, string(os.PathSeparator))
-	if parts[0] == ".." {
+	if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
 		return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
 	}
 	return nil
```
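`checkPath` guards remote bake invocations against path traversal: a referenced path is made relative to the working directory, and a leading `..` component means it escapes the tree. A minimal sketch of that check (the `isOutside` helper name is mine, and this uses the split-on-separator form from the left side of the hunk):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// isOutside resolves p against the working directory and reports whether
// the relative form starts with "..", i.e. the path escapes the tree.
func isOutside(p string) (bool, error) {
	abs, err := filepath.Abs(p)
	if err != nil {
		return false, err
	}
	wd, err := os.Getwd()
	if err != nil {
		return false, err
	}
	rel, err := filepath.Rel(wd, abs)
	if err != nil {
		return false, err
	}
	parts := strings.Split(rel, string(os.PathSeparator))
	return parts[0] == "..", nil
}

func main() {
	inside, err := isOutside("subdir/file")
	fmt.Println(inside, err)
	outside, err := isOutside("../elsewhere")
	fmt.Println(outside, err)
}
```

Comparing components (`parts[0] == ".."`) rather than raw string prefixes avoids false positives on sibling names like `..foo`, which is also what the `HasPrefix(rel, ".."+sep)` form on the right side of the hunk achieves.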
```diff
@@ -1147,75 +889,17 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if t.Context != nil {
 		contextPath = *t.Context
 	}
-	if !strings.HasPrefix(contextPath, "cwd://") && !build.IsRemoteURL(contextPath) {
+	if !strings.HasPrefix(contextPath, "cwd://") && !IsRemoteURL(contextPath) {
 		contextPath = path.Clean(contextPath)
 	}
 	dockerfilePath := "Dockerfile"
 	if t.Dockerfile != nil {
 		dockerfilePath = *t.Dockerfile
 	}
-	if !strings.HasPrefix(dockerfilePath, "cwd://") {
-		dockerfilePath = path.Clean(dockerfilePath)
-	}
 
-	bi := build.Inputs{
-		ContextPath:    contextPath,
-		DockerfilePath: dockerfilePath,
-		NamedContexts:  toNamedContexts(t.Contexts),
+	if !isRemoteResource(contextPath) && !path.IsAbs(dockerfilePath) {
+		dockerfilePath = path.Join(contextPath, dockerfilePath)
 	}
-	if t.DockerfileInline != nil {
-		bi.DockerfileInline = *t.DockerfileInline
-	}
-	updateContext(&bi, inp)
-	if strings.HasPrefix(bi.DockerfilePath, "cwd://") {
-		// If Dockerfile is local for a remote invocation, we first check if
-		// it's not outside the working directory and then resolve it to an
-		// absolute path.
-		bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
-		if err := checkPath(bi.DockerfilePath); err != nil {
-			return nil, err
-		}
-		var err error
-		bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
-		if err != nil {
-			return nil, err
-		}
-	} else if !build.IsRemoteURL(bi.DockerfilePath) && strings.HasPrefix(bi.ContextPath, "cwd://") && (inp != nil && build.IsRemoteURL(inp.URL)) {
-		// We don't currently support reading a remote Dockerfile with a local
-		// context when doing a remote invocation because we automatically
-		// derive the dockerfile from the context atm:
-		//
-		// target "default" {
-		//   context = BAKE_CMD_CONTEXT
-		//   dockerfile = "Dockerfile.app"
-		// }
-		//
-		// > docker buildx bake https://github.com/foo/bar.git
-		// failed to solve: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount3004544897/Dockerfile.app: no such file or directory
-		//
-		// To avoid mistakenly reading a local Dockerfile, we check if the
-		// Dockerfile exists locally and if so, we error out.
-		if _, err := os.Stat(filepath.Join(path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://")), bi.DockerfilePath)); err == nil {
-			return nil, errors.Errorf("reading a dockerfile for a remote build invocation is currently not supported")
-		}
-	}
-	if strings.HasPrefix(bi.ContextPath, "cwd://") {
-		bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
-	}
-	if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
-		bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
-	}
-	for k, v := range bi.NamedContexts {
-		if strings.HasPrefix(v.Path, "cwd://") {
-			bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
-		}
-	}
-
-	if err := validateContextsEntitlements(bi, inp); err != nil {
-		return nil, err
-	}
-
-	t.Context = &bi.ContextPath
-
 	args := map[string]string{}
 	for k, v := range t.Args {
```
```diff
@@ -1245,13 +929,31 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if t.NetworkMode != nil {
 		networkMode = *t.NetworkMode
 	}
-	shmSize := new(dockeropts.MemBytes)
-	if t.ShmSize != nil {
-		if err := shmSize.Set(*t.ShmSize); err != nil {
-			return nil, errors.Errorf("invalid value %s for membytes key shm-size", *t.ShmSize)
+	bi := build.Inputs{
+		ContextPath:    contextPath,
+		DockerfilePath: dockerfilePath,
+		NamedContexts:  toNamedContexts(t.Contexts),
+	}
+	if t.DockerfileInline != nil {
+		bi.DockerfileInline = *t.DockerfileInline
+	}
+	updateContext(&bi, inp)
+	if strings.HasPrefix(bi.ContextPath, "cwd://") {
+		bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
+	}
+	for k, v := range bi.NamedContexts {
+		if strings.HasPrefix(v.Path, "cwd://") {
+			bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
 		}
 	}
 
+	if err := validateContextsEntitlements(bi, inp); err != nil {
+		return nil, err
+	}
+
+	t.Context = &bi.ContextPath
+
 	bo := &build.Options{
 		Inputs: bi,
 		Tags:   t.Tags,
```
```diff
@@ -1262,7 +964,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		Pull:        pull,
 		NetworkMode: networkMode,
 		Linked:      t.linked,
-		ShmSize:     *shmSize,
 	}
 
 	platforms, err := platformutil.Parse(t.Platforms)
```
```diff
@@ -1271,28 +972,24 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	}
 	bo.Platforms = platforms
 
+	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
+	bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider(dockerConfig))
+
 	secrets, err := buildflags.ParseSecretSpecs(t.Secrets)
 	if err != nil {
 		return nil, err
 	}
-	secretAttachment, err := controllerapi.CreateSecrets(secrets)
-	if err != nil {
-		return nil, err
-	}
-	bo.Session = append(bo.Session, secretAttachment)
+	bo.Session = append(bo.Session, secrets)
 
-	sshSpecs, err := buildflags.ParseSSHSpecs(t.SSH)
+	sshSpecs := t.SSH
+	if len(sshSpecs) == 0 && buildflags.IsGitSSH(contextPath) {
+		sshSpecs = []string{"default"}
+	}
+	ssh, err := buildflags.ParseSSHSpecs(sshSpecs)
 	if err != nil {
 		return nil, err
 	}
-	if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
-		sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
-	}
-	sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
-	if err != nil {
-		return nil, err
-	}
-	bo.Session = append(bo.Session, sshAttachment)
+	bo.Session = append(bo.Session, ssh)
 
 	if t.Target != nil {
 		bo.Target = *t.Target
```
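Both sides of the hunk above share one behavior: when no `ssh` mounts are configured but the build context is an SSH git remote, a `default` spec is injected so the user's SSH agent is forwarded. A sketch of that fallback, where `isGitSSH` is my own simplified heuristic standing in for `buildflags.IsGitSSH` (it only recognizes `ssh://` URLs and scp-style `git@host:path` remotes):

```go
package main

import (
	"fmt"
	"strings"
)

// isGitSSH is a simplified stand-in for buildflags.IsGitSSH: accept
// explicit ssh:// URLs and scp-style "user@host:path" remotes.
func isGitSSH(url string) bool {
	if strings.HasPrefix(url, "ssh://") {
		return true
	}
	return strings.Contains(url, "@") &&
		strings.Contains(url, ":") &&
		!strings.Contains(url, "://")
}

// defaultSSHSpecs mirrors the fallback in the hunk: an empty spec list
// plus an SSH git context yields the "default" agent-socket spec.
func defaultSSHSpecs(specs []string, contextPath string) []string {
	if len(specs) == 0 && isGitSSH(contextPath) {
		return []string{"default"}
	}
	return specs
}

func main() {
	fmt.Println(defaultSSHSpecs(nil, "git@github.com:docker/buildx.git"))
	fmt.Println(defaultSSHSpecs(nil, "https://github.com/docker/buildx.git"))
}
```

The newer side additionally consults `inp.URL`, so a remote bake file fetched over SSH also triggers the fallback even when the target's own context is local.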
```diff
@@ -1302,51 +999,25 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if err != nil {
 		return nil, err
 	}
-	bo.CacheFrom = controllerapi.CreateCaches(cacheImports)
+	bo.CacheFrom = cacheImports
 
 	cacheExports, err := buildflags.ParseCacheEntry(t.CacheTo)
 	if err != nil {
 		return nil, err
 	}
-	bo.CacheTo = controllerapi.CreateCaches(cacheExports)
+	bo.CacheTo = cacheExports
 
-	outputs, err := buildflags.ParseExports(t.Outputs)
+	outputs, err := buildflags.ParseOutputs(t.Outputs)
 	if err != nil {
 		return nil, err
 	}
-	bo.Exports, err = controllerapi.CreateExports(outputs)
-	if err != nil {
-		return nil, err
-	}
-
-	annotations, err := buildflags.ParseAnnotations(t.Annotations)
-	if err != nil {
-		return nil, err
-	}
-	for _, e := range bo.Exports {
-		for k, v := range annotations {
-			e.Attrs[k.String()] = v
-		}
-	}
+	bo.Exports = outputs
 
 	attests, err := buildflags.ParseAttests(t.Attest)
 	if err != nil {
 		return nil, err
 	}
-	bo.Attests = controllerapi.CreateAttestations(attests)
-
-	bo.SourcePolicy, err = build.ReadSourcePolicy()
-	if err != nil {
-		return nil, err
-	}
-
-	ulimits := dockeropts.NewUlimitOpt(nil)
-	for _, field := range t.Ulimits {
-		if err := ulimits.Set(field); err != nil {
-			return nil, err
-		}
-	}
-	bo.Ulimits = ulimits
+	bo.Attests = attests
 
 	return bo, nil
 }
```
```diff
@@ -1372,110 +1043,27 @@ func removeDupes(s []string) []string {
 	return s[:i]
 }
 
-func removeAttestDupes(s []string) []string {
-	res := []string{}
-	m := map[string]int{}
-	for _, v := range s {
-		att, err := buildflags.ParseAttest(v)
-		if err != nil {
-			res = append(res, v)
-			continue
-		}
-
-		if i, ok := m[att.Type]; ok {
-			res[i] = v
-		} else {
-			m[att.Type] = len(res)
-			res = append(res, v)
-		}
-	}
-	return res
-}
-
-func parseOutput(str string) map[string]string {
-	csvReader := csv.NewReader(strings.NewReader(str))
-	fields, err := csvReader.Read()
-	if err != nil {
-		return nil
-	}
-	res := map[string]string{}
-	for _, field := range fields {
-		parts := strings.SplitN(field, "=", 2)
-		if len(parts) == 2 {
-			res[parts[0]] = parts[1]
-		}
-	}
-	return res
+func isRemoteResource(str string) bool {
+	return urlutil.IsGitURL(str) || urlutil.IsURL(str)
 }
 
 func parseOutputType(str string) string {
-	if out := parseOutput(str); out != nil {
-		if v, ok := out["type"]; ok {
-			return v
+	csvReader := csv.NewReader(strings.NewReader(str))
+	fields, err := csvReader.Read()
+	if err != nil {
+		return ""
+	}
+	for _, field := range fields {
+		parts := strings.SplitN(field, "=", 2)
+		if len(parts) == 2 {
+			if parts[0] == "type" {
+				return parts[1]
+			}
 		}
 	}
 	return ""
 }
 
-func setPushOverride(outputs []string, push bool) []string {
-	var out []string
-	setPush := true
-	for _, output := range outputs {
-		typ := parseOutputType(output)
-		if typ == "image" || typ == "registry" {
-			// no need to set push if image or registry types already defined
-			setPush = false
-			if typ == "registry" {
-				if !push {
-					// don't set registry output if "push" is false
-					continue
-				}
-				// no need to set "push" attribute to true for registry
-				out = append(out, output)
-				continue
-			}
-			out = append(out, output+",push="+strconv.FormatBool(push))
-		} else {
-			if typ != "docker" {
-				// if there is any output that is not docker, don't set "push"
-				setPush = false
-			}
-			out = append(out, output)
-		}
-	}
-	if push && setPush {
-		out = append(out, "type=image,push=true")
-	}
-	return out
-}
-
-func setLoadOverride(outputs []string, load bool) []string {
-	if !load {
-		return outputs
-	}
-	setLoad := true
-	for _, output := range outputs {
-		if typ := parseOutputType(output); typ == "docker" {
-			if v := parseOutput(output); v != nil {
-				// dest set means we want to output as tar so don't set load
-				if _, ok := v["dest"]; !ok {
-					setLoad = false
-					break
-				}
-			}
-		} else if typ != "image" && typ != "registry" && typ != "oci" {
-			// if there is any output that is not an image, registry
-			// or oci, don't set "load" similar to push override
-			setLoad = false
-			break
-		}
-	}
-	if setLoad {
-		outputs = append(outputs, "type=docker")
-	}
-	return outputs
-}
-
 func validateTargetName(name string) error {
 	if !targetNamePattern.MatchString(name) {
 		return errors.Errorf("only %q are allowed", validTargetNameChars)
```
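The `parseOutput`/`parseOutputType` helpers in the hunk above treat an output spec such as `type=docker,dest=out.tar` as a single CSV record of `key=value` fields; using `encoding/csv` (rather than a plain comma split) lets quoted values contain commas. A self-contained sketch of the `parseOutput` side:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseOutput mirrors the helper above: read the spec as one CSV record
// and split each field on the first "=" into a key/value pair.
func parseOutput(spec string) map[string]string {
	fields, err := csv.NewReader(strings.NewReader(spec)).Read()
	if err != nil {
		return nil
	}
	res := map[string]string{}
	for _, field := range fields {
		parts := strings.SplitN(field, "=", 2)
		if len(parts) == 2 {
			res[parts[0]] = parts[1]
		}
	}
	return res
}

func main() {
	out := parseOutput("type=docker,dest=out.tar")
	fmt.Println(out["type"], out["dest"])
	// quoted field: the comma inside the value is not a field separator
	quoted := parseOutput(`type=image,"annotation.org.example=a,b"`)
	fmt.Println(quoted["annotation.org.example"])
}
```

`setLoadOverride` builds on this: a `docker` output with a `dest` key means "write a tar", so the automatic `type=docker` load output is suppressed.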
```diff
@@ -2,17 +2,16 @@ package bake
 
 import (
 	"context"
-	"os"
-	"path/filepath"
 	"sort"
 	"strings"
 	"testing"
 
-	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
 func TestReadTargets(t *testing.T) {
+	t.Parallel()
+
 	fp := File{
 		Name: "config.hcl",
 		Data: []byte(`
```
```diff
@@ -22,8 +21,6 @@ target "webDEP" {
     VAR_BOTH = "webDEP"
   }
   no-cache = true
-  shm-size = "128m"
-  ulimits = ["nofile=1024:1024"]
 }
 
 target "webapp" {
```
```diff
@@ -38,7 +35,6 @@ target "webapp" {
 	ctx := context.TODO()
 
 	t.Run("NoOverrides", func(t *testing.T) {
-		t.Parallel()
 		m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, nil, nil)
 		require.NoError(t, err)
 		require.Equal(t, 1, len(m))
```
```diff
@@ -47,8 +43,6 @@ target "webapp" {
 		require.Equal(t, ".", *m["webapp"].Context)
 		require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
 		require.Equal(t, true, *m["webapp"].NoCache)
-		require.Equal(t, "128m", *m["webapp"].ShmSize)
-		require.Equal(t, []string{"nofile=1024:1024"}, m["webapp"].Ulimits)
 		require.Nil(t, m["webapp"].Pull)
 
 		require.Equal(t, 1, len(g))
```
```diff
@@ -56,7 +50,6 @@ target "webapp" {
 	})
 
 	t.Run("InvalidTargetOverrides", func(t *testing.T) {
-		t.Parallel()
 		_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
 		require.NotNil(t, err)
 		require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
```
@@ -98,7 +91,6 @@ target "webapp" {
|
|||||||
|
|
||||||
// building leaf but overriding parent fields
|
// building leaf but overriding parent fields
|
||||||
t.Run("parent", func(t *testing.T) {
|
t.Run("parent", func(t *testing.T) {
|
||||||
t.Parallel()
|
|
||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
|
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
|
||||||
"webDEP.args.VAR_INHERITED=override",
|
"webDEP.args.VAR_INHERITED=override",
|
||||||
"webDEP.args.VAR_BOTH=override",
|
"webDEP.args.VAR_BOTH=override",
|
||||||
@@ -113,7 +105,6 @@ target "webapp" {
|
|||||||
})
|
})
|
||||||
|
|
||||||
t.Run("ContextOverride", func(t *testing.T) {
|
t.Run("ContextOverride", func(t *testing.T) {
|
||||||
t.Parallel()
|
|
||||||
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
|
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
|
||||||
require.NotNil(t, err)
|
require.NotNil(t, err)
|
||||||
|
|
||||||
@@ -125,7 +116,6 @@ target "webapp" {
|
|||||||
})
|
})
|
||||||
|
|
||||||
t.Run("NoCacheOverride", func(t *testing.T) {
|
t.Run("NoCacheOverride", func(t *testing.T) {
|
||||||
t.Parallel()
|
|
||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"}, nil)
|
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, false, *m["webapp"].NoCache)
|
require.Equal(t, false, *m["webapp"].NoCache)
|
||||||
@@ -133,14 +123,7 @@ target "webapp" {
|
|||||||
require.Equal(t, []string{"webapp"}, g["default"].Targets)
|
require.Equal(t, []string{"webapp"}, g["default"].Targets)
|
||||||
})
|
})
|
||||||
|
|
||||||
t.Run("ShmSizeOverride", func(t *testing.T) {
|
|
||||||
m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.shm-size=256m"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, "256m", *m["webapp"].ShmSize)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("PullOverride", func(t *testing.T) {
|
t.Run("PullOverride", func(t *testing.T) {
|
||||||
t.Parallel()
|
|
||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
|
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, false, *m["webapp"].Pull)
|
require.Equal(t, false, *m["webapp"].Pull)
|
||||||
@@ -149,7 +132,6 @@ target "webapp" {
|
|||||||
})
|
})
|
||||||
|
|
||||||
t.Run("PatternOverride", func(t *testing.T) {
|
t.Run("PatternOverride", func(t *testing.T) {
|
||||||
t.Parallel()
|
|
||||||
// same check for two cases
|
// same check for two cases
|
||||||
multiTargetCheck := func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
|
multiTargetCheck := func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
@@ -217,252 +199,48 @@ target "webapp" {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func TestPushOverride(t *testing.T) {
|
func TestPushOverride(t *testing.T) {
|
||||||
t.Run("empty output", func(t *testing.T) {
|
t.Parallel()
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type image", func(t *testing.T) {
|
fp := File{
|
||||||
fp := File{
|
Name: "docker-bake.hcl",
|
||||||
Name: "docker-bake.hcl",
|
Data: []byte(
|
||||||
Data: []byte(
|
`target "app" {
|
||||||
`target "app" {
|
|
||||||
output = ["type=image,compression=zstd"]
|
output = ["type=image,compression=zstd"]
|
||||||
}`),
|
}`),
|
||||||
}
|
}
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
|
ctx := context.TODO()
|
||||||
require.NoError(t, err)
|
m, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
require.NoError(t, err)
|
||||||
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type image push false", func(t *testing.T) {
|
require.Equal(t, 1, len(m["app"].Outputs))
|
||||||
fp := File{
|
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
fp = File{
|
||||||
`target "app" {
|
Name: "docker-bake.hcl",
|
||||||
|
Data: []byte(
|
||||||
|
`target "app" {
|
||||||
output = ["type=image,compression=zstd"]
|
output = ["type=image,compression=zstd"]
|
||||||
}`),
|
}`),
|
||||||
}
|
}
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
|
ctx = context.TODO()
|
||||||
require.NoError(t, err)
|
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
require.NoError(t, err)
|
||||||
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type registry", func(t *testing.T) {
|
require.Equal(t, 1, len(m["app"].Outputs))
|
||||||
fp := File{
|
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
fp = File{
|
||||||
`target "app" {
|
Name: "docker-bake.hcl",
|
||||||
output = ["type=registry"]
|
Data: []byte(
|
||||||
|
`target "app" {
|
||||||
}`),
|
}`),
|
||||||
}
|
}
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
|
ctx = context.TODO()
|
||||||
require.NoError(t, err)
|
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
require.NoError(t, err)
|
||||||
require.Equal(t, "type=registry", m["app"].Outputs[0])
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type registry push false", func(t *testing.T) {
|
require.Equal(t, 1, len(m["app"].Outputs))
|
||||||
fp := File{
|
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=registry"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 0, len(m["app"].Outputs))
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type local and empty target", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "foo" {
|
|
||||||
output = [ "type=local,dest=out" ]
|
|
||||||
}
|
|
||||||
target "bar" {
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.push=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m))
|
|
||||||
require.Equal(t, 1, len(m["foo"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
|
|
||||||
require.Equal(t, 1, len(m["bar"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=image,push=true"}, m["bar"].Outputs)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestLoadOverride(t *testing.T) {
|
|
||||||
t.Run("empty output", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, "type=docker", m["app"].Outputs[0])
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type docker", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=docker"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=docker"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type image", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=image"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=image", "type=docker"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type image load false", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=image"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=false"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 1, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=image"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type registry", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=registry"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=registry", "type=docker"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type oci", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=oci,dest=out"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=oci,dest=out", "type=docker"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type docker with dest", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=docker,dest=out"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m["app"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=docker,dest=out", "type=docker"}, m["app"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type local and empty target", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "foo" {
|
|
||||||
output = [ "type=local,dest=out" ]
|
|
||||||
}
|
|
||||||
target "bar" {
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m))
|
|
||||||
require.Equal(t, 1, len(m["foo"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
|
|
||||||
require.Equal(t, 1, len(m["bar"].Outputs))
|
|
||||||
require.Equal(t, []string{"type=docker"}, m["bar"].Outputs)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestLoadAndPushOverride(t *testing.T) {
|
|
||||||
t.Run("type local and empty target", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "foo" {
|
|
||||||
output = [ "type=local,dest=out" ]
|
|
||||||
}
|
|
||||||
target "bar" {
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true", "*.push=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 2, len(m))
|
|
||||||
|
|
||||||
require.Equal(t, 1, len(m["foo"].Outputs))
|
|
||||||
sort.Strings(m["foo"].Outputs)
|
|
||||||
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
|
|
||||||
|
|
||||||
require.Equal(t, 2, len(m["bar"].Outputs))
|
|
||||||
sort.Strings(m["bar"].Outputs)
|
|
||||||
require.Equal(t, []string{"type=docker", "type=image,push=true"}, m["bar"].Outputs)
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("type registry", func(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "foo" {
|
|
||||||
output = [ "type=registry" ]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo"}, []string{"*.load=true", "*.push=true"}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, 1, len(m))
|
|
||||||
|
|
||||||
require.Equal(t, 2, len(m["foo"].Outputs))
|
|
||||||
sort.Strings(m["foo"].Outputs)
|
|
||||||
require.Equal(t, []string{"type=docker", "type=registry"}, m["foo"].Outputs)
|
|
||||||
})
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestReadTargetsCompose(t *testing.T) {
|
func TestReadTargetsCompose(t *testing.T) {
|
||||||
@@ -589,7 +367,7 @@ services:
|
|||||||
require.Equal(t, []string{"web_app"}, g["default"].Targets)
|
require.Equal(t, []string{"web_app"}, g["default"].Targets)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLContextCwdPrefix(t *testing.T) {
|
func TestHCLCwdPrefix(t *testing.T) {
|
||||||
fp := File{
|
fp := File{
|
||||||
Name: "docker-bake.hcl",
|
Name: "docker-bake.hcl",
|
||||||
Data: []byte(
|
Data: []byte(
|
||||||
@@ -602,49 +380,18 @@ func TestHCLContextCwdPrefix(t *testing.T) {
|
|||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
|
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
bo, err := TargetsToBuildOpt(m, &Input{})
|
require.Equal(t, 1, len(m))
|
||||||
|
_, ok := m["app"]
|
||||||
|
require.True(t, ok)
|
||||||
|
|
||||||
|
_, err = TargetsToBuildOpt(m, &Input{})
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, "test", *m["app"].Dockerfile)
|
||||||
|
require.Equal(t, "foo", *m["app"].Context)
|
||||||
|
|
||||||
require.Equal(t, 1, len(g))
|
require.Equal(t, 1, len(g))
|
||||||
require.Equal(t, []string{"app"}, g["default"].Targets)
|
require.Equal(t, []string{"app"}, g["default"].Targets)
|
||||||
|
|
||||||
require.Equal(t, 1, len(m))
|
|
||||||
require.Contains(t, m, "app")
|
|
||||||
assert.Equal(t, "test", *m["app"].Dockerfile)
|
|
||||||
assert.Equal(t, "foo", *m["app"].Context)
|
|
||||||
assert.Equal(t, "foo/test", bo["app"].Inputs.DockerfilePath)
|
|
||||||
assert.Equal(t, "foo", bo["app"].Inputs.ContextPath)
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestHCLDockerfileCwdPrefix(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
context = "."
|
|
||||||
dockerfile = "cwd://Dockerfile.app"
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
ctx := context.TODO()
|
|
||||||
|
|
||||||
cwd, err := os.Getwd()
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
bo, err := TargetsToBuildOpt(m, &Input{})
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
require.Equal(t, 1, len(g))
|
|
||||||
require.Equal(t, []string{"app"}, g["default"].Targets)
|
|
||||||
|
|
||||||
require.Equal(t, 1, len(m))
|
|
||||||
require.Contains(t, m, "app")
|
|
||||||
assert.Equal(t, "cwd://Dockerfile.app", *m["app"].Dockerfile)
|
|
||||||
assert.Equal(t, ".", *m["app"].Context)
|
|
||||||
assert.Equal(t, filepath.Join(cwd, "Dockerfile.app"), bo["app"].Inputs.DockerfilePath)
|
|
||||||
assert.Equal(t, ".", bo["app"].Inputs.ContextPath)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestOverrideMerge(t *testing.T) {
|
func TestOverrideMerge(t *testing.T) {
|
||||||
@@ -1611,117 +1358,3 @@ func TestJSONNullVars(t *testing.T) {
|
|||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
|
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestReadLocalFilesDefault(t *testing.T) {
|
|
||||||
tests := []struct {
|
|
||||||
filenames []string
|
|
||||||
expected []string
|
|
||||||
}{
|
|
||||||
{
|
|
||||||
filenames: []string{"abc.yml", "docker-compose.yml"},
|
|
||||||
expected: []string{"docker-compose.yml"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
filenames: []string{"test.foo", "compose.yml", "docker-bake.hcl"},
|
|
||||||
expected: []string{"compose.yml", "docker-bake.hcl"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
filenames: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
|
|
||||||
expected: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
filenames: []string{"test.txt", "compsoe.yaml"}, // intentional misspell
|
|
||||||
expected: []string{},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
pwd, err := os.Getwd()
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
for _, tt := range tests {
|
|
||||||
t.Run(strings.Join(tt.filenames, "-"), func(t *testing.T) {
|
|
||||||
dir := t.TempDir()
|
|
||||||
t.Cleanup(func() { _ = os.Chdir(pwd) })
|
|
||||||
require.NoError(t, os.Chdir(dir))
|
|
||||||
for _, tf := range tt.filenames {
|
|
||||||
require.NoError(t, os.WriteFile(tf, []byte(tf), 0644))
|
|
||||||
}
|
|
||||||
files, err := ReadLocalFiles(nil, nil, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
if len(files) == 0 {
|
|
||||||
require.Equal(t, len(tt.expected), len(files))
|
|
||||||
} else {
|
|
||||||
found := false
|
|
||||||
for _, exp := range tt.expected {
|
|
||||||
for _, f := range files {
|
|
||||||
if f.Name == exp {
|
|
||||||
found = true
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
require.True(t, found, exp)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestAttestDuplicates(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "default" {
|
|
||||||
attest = ["type=sbom", "type=sbom,generator=custom", "type=sbom,foo=bar", "type=provenance,mode=max"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
ctx := context.TODO()
|
|
||||||
|
|
||||||
m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
|
|
||||||
require.Equal(t, []string{"type=sbom,foo=bar", "type=provenance,mode=max"}, m["default"].Attest)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
opts, err := TargetsToBuildOpt(m, &Input{})
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, map[string]*string{
|
|
||||||
"sbom": ptrstr("type=sbom,foo=bar"),
|
|
||||||
"provenance": ptrstr("type=provenance,mode=max"),
|
|
||||||
}, opts["default"].Attests)
|
|
||||||
|
|
||||||
m, _, err = ReadTargets(ctx, []File{fp}, []string{"default"}, []string{"*.attest=type=sbom,disabled=true"}, nil)
|
|
||||||
require.Equal(t, []string{"type=sbom,disabled=true", "type=provenance,mode=max"}, m["default"].Attest)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
opts, err = TargetsToBuildOpt(m, &Input{})
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Equal(t, map[string]*string{
|
|
||||||
"sbom": nil,
|
|
||||||
"provenance": ptrstr("type=provenance,mode=max"),
|
|
||||||
}, opts["default"].Attests)
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestAnnotations(t *testing.T) {
|
|
||||||
fp := File{
|
|
||||||
Name: "docker-bake.hcl",
|
|
||||||
Data: []byte(
|
|
||||||
`target "app" {
|
|
||||||
output = ["type=image,name=foo"]
|
|
||||||
annotations = ["manifest[linux/amd64]:foo=bar"]
|
|
||||||
}`),
|
|
||||||
}
|
|
||||||
ctx := context.TODO()
|
|
||||||
m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
bo, err := TargetsToBuildOpt(m, &Input{})
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
require.Equal(t, 1, len(g))
|
|
||||||
require.Equal(t, []string{"app"}, g["default"].Targets)
|
|
||||||
|
|
||||||
require.Equal(t, 1, len(m))
|
|
||||||
require.Contains(t, m, "app")
|
|
||||||
require.Equal(t, "type=image,name=foo", m["app"].Outputs[0])
|
|
||||||
require.Equal(t, "manifest[linux/amd64]:foo=bar", m["app"].Annotations[0])
|
|
||||||
|
|
||||||
require.Len(t, bo["app"].Exports, 1)
|
|
||||||
require.Equal(t, "bar", bo["app"].Exports[0].Attrs["annotation-manifest[linux/amd64].foo"])
|
|
||||||
}
|
|
||||||
|
|||||||
108
bake/compose.go
108
bake/compose.go
@@ -1,18 +1,13 @@
|
|||||||
package bake
|
package bake
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
|
||||||
"fmt"
|
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
"sort"
|
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/compose-spec/compose-go/v2/dotenv"
|
"github.com/compose-spec/compose-go/dotenv"
|
||||||
"github.com/compose-spec/compose-go/v2/loader"
|
"github.com/compose-spec/compose-go/loader"
|
||||||
composetypes "github.com/compose-spec/compose-go/v2/types"
|
compose "github.com/compose-spec/compose-go/types"
|
||||||
dockeropts "github.com/docker/cli/opts"
|
|
||||||
"github.com/docker/go-units"
|
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
"gopkg.in/yaml.v3"
|
"gopkg.in/yaml.v3"
|
||||||
)
|
)
|
||||||
@@ -22,9 +17,9 @@ func ParseComposeFiles(fs []File) (*Config, error) {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
var cfgs []composetypes.ConfigFile
|
var cfgs []compose.ConfigFile
|
||||||
for _, f := range fs {
|
for _, f := range fs {
|
||||||
cfgs = append(cfgs, composetypes.ConfigFile{
|
cfgs = append(cfgs, compose.ConfigFile{
|
||||||
Filename: f.Name,
|
Filename: f.Name,
|
||||||
Content: f.Data,
|
Content: f.Data,
|
||||||
})
|
})
|
||||||
@@ -32,17 +27,12 @@ func ParseComposeFiles(fs []File) (*Config, error) {
|
|||||||
return ParseCompose(cfgs, envs)
|
return ParseCompose(cfgs, envs)
|
||||||
}
|
}
|
||||||
|
|
||||||
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
|
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
|
||||||
if envs == nil {
|
cfg, err := loader.Load(compose.ConfigDetails{
|
||||||
envs = make(map[string]string)
|
|
||||||
}
|
|
||||||
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
|
|
||||||
ConfigFiles: cfgs,
|
ConfigFiles: cfgs,
|
||||||
Environment: envs,
|
Environment: envs,
|
||||||
}, func(options *loader.Options) {
|
}, func(options *loader.Options) {
|
||||||
options.SetProjectName("bake", false)
|
|
||||||
options.SkipNormalization = true
|
options.SkipNormalization = true
|
||||||
options.Profiles = []string{"*"}
|
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -56,7 +46,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
|
|||||||
g := &Group{Name: "default"}
|
g := &Group{Name: "default"}
|
||||||
|
|
||||||
for _, s := range cfg.Services {
|
for _, s := range cfg.Services {
|
||||||
s := s
|
|
||||||
if s.Build == nil {
|
if s.Build == nil {
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
@@ -76,44 +65,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
|
|||||||
dockerfilePath := s.Build.Dockerfile
|
dockerfilePath := s.Build.Dockerfile
|
||||||
dockerfilePathP = &dockerfilePath
|
dockerfilePathP = &dockerfilePath
|
||||||
}
|
}
|
||||||
var dockerfileInlineP *string
|
|
||||||
if s.Build.DockerfileInline != "" {
|
|
||||||
dockerfileInline := s.Build.DockerfileInline
|
|
||||||
dockerfileInlineP = &dockerfileInline
|
|
||||||
}
|
|
||||||
|
|
||||||
var additionalContexts map[string]string
|
|
||||||
if s.Build.AdditionalContexts != nil {
|
|
||||||
additionalContexts = map[string]string{}
|
|
||||||
for k, v := range s.Build.AdditionalContexts {
|
|
||||||
additionalContexts[k] = v
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var shmSize *string
|
|
||||||
if s.Build.ShmSize > 0 {
|
|
||||||
shmSizeBytes := dockeropts.MemBytes(s.Build.ShmSize)
|
|
||||||
shmSizeStr := shmSizeBytes.String()
|
|
||||||
shmSize = &shmSizeStr
|
|
||||||
}
|
|
||||||
|
|
||||||
var ulimits []string
|
|
||||||
if s.Build.Ulimits != nil {
|
|
||||||
for n, u := range s.Build.Ulimits {
|
|
||||||
ulimit, err := units.ParseUlimit(fmt.Sprintf("%s=%d:%d", n, u.Soft, u.Hard))
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
ulimits = append(ulimits, ulimit.String())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var ssh []string
|
|
||||||
for _, bkey := range s.Build.SSH {
|
|
||||||
sshkey := composeToBuildkitSSH(bkey)
|
|
||||||
ssh = append(ssh, sshkey)
|
|
||||||
}
|
|
||||||
sort.Strings(ssh)
|
|
||||||
|
|
||||||
var secrets []string
|
var secrets []string
|
||||||
for _, bs := range s.Build.Secrets {
|
for _, bs := range s.Build.Secrets {
|
||||||
@@ -133,13 +84,11 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
|
|||||||
|
|
||||||
g.Targets = append(g.Targets, targetName)
|
g.Targets = append(g.Targets, targetName)
|
||||||
t := &Target{
|
t := &Target{
|
||||||
Name: targetName,
|
Name: targetName,
|
||||||
Context: contextPathP,
|
Context: contextPathP,
|
||||||
Contexts: additionalContexts,
|
Dockerfile: dockerfilePathP,
|
||||||
Dockerfile: dockerfilePathP,
|
Tags: s.Build.Tags,
|
||||||
DockerfileInline: dockerfileInlineP,
|
Labels: labels,
|
||||||
Tags: s.Build.Tags,
|
|
||||||
Labels: labels,
|
|
||||||
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
|
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
|
||||||
if val, ok := s.Environment[val]; ok && val != nil {
|
if val, ok := s.Environment[val]; ok && val != nil {
|
||||||
return *val, true
|
return *val, true
|
||||||
@@ -150,10 +99,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
|
|||||||
CacheFrom: s.Build.CacheFrom,
|
CacheFrom: s.Build.CacheFrom,
|
||||||
CacheTo: s.Build.CacheTo,
|
CacheTo: s.Build.CacheTo,
|
||||||
NetworkMode: &s.Build.Network,
|
NetworkMode: &s.Build.Network,
|
||||||
SSH: ssh,
|
|
||||||
Secrets: secrets,
|
Secrets: secrets,
|
||||||
ShmSize: shmSize,
|
|
||||||
Ulimits: ulimits,
|
|
||||||
}
|
}
|
||||||
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
|
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -191,15 +137,14 @@ func validateComposeFile(dt []byte, fn string) (bool, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func validateCompose(dt []byte, envs map[string]string) error {
|
func validateCompose(dt []byte, envs map[string]string) error {
|
||||||
_, err := loader.Load(composetypes.ConfigDetails{
|
_, err := loader.Load(compose.ConfigDetails{
|
||||||
ConfigFiles: []composetypes.ConfigFile{
|
ConfigFiles: []compose.ConfigFile{
|
||||||
{
|
{
|
||||||
Content: dt,
|
Content: dt,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
Environment: envs,
|
Environment: envs,
|
||||||
}, func(options *loader.Options) {
|
}, func(options *loader.Options) {
|
||||||
options.SetProjectName("bake", false)
|
|
||||||
options.SkipNormalization = true
|
options.SkipNormalization = true
|
||||||
// consistency is checked later in ParseCompose to ensure multiple
|
// consistency is checked later in ParseCompose to ensure multiple
|
||||||
// compose files can be merged together
|
// compose files can be merged together
|
||||||
@@ -240,7 +185,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
|
|||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
|
envs, err := dotenv.UnmarshalBytes(dt)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@@ -255,7 +200,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
|
|||||||
return curenv, nil
|
return curenv, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func flatten(in composetypes.MappingWithEquals) map[string]*string {
|
func flatten(in compose.MappingWithEquals) map[string]*string {
|
||||||
if len(in) == 0 {
|
 	if len(in) == 0 {
 		return nil
 	}
 }
@@ -284,7 +229,7 @@ type xbake struct {
 	NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
 	Contexts      stringMap   `yaml:"contexts,omitempty"`
 	// don't forget to update documentation if you add a new field:
-	// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
+	// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
 }
 
 type stringMap map[string]string
@@ -334,7 +279,6 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 	}
 	if len(xb.SSH) > 0 {
 		t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
-		sort.Strings(t.SSH)
 	}
 	if len(xb.Platforms) > 0 {
 		t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
@@ -360,8 +304,8 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 
 // composeToBuildkitSecret converts secret from compose format to buildkit's
 // csv format.
-func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
-	if psecret.External {
+func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
+	if psecret.External.External {
 		return "", errors.Errorf("unsupported external secret %s", psecret.Name)
 	}
 
@@ -378,17 +322,3 @@ func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret compo
 
 	return strings.Join(bkattrs, ","), nil
 }
-
-// composeToBuildkitSSH converts secret from compose format to buildkit's
-// csv format.
-func composeToBuildkitSSH(sshKey composetypes.SSHKey) string {
-	var bkattrs []string
-
-	bkattrs = append(bkattrs, sshKey.ID)
-
-	if sshKey.Path != "" {
-		bkattrs = append(bkattrs, sshKey.Path)
-	}
-
-	return strings.Join(bkattrs, "=")
-}
bake/compose_test.go
@@ -6,7 +6,7 @@ import (
 	"sort"
 	"testing"
 
-	composetypes "github.com/compose-spec/compose-go/v2/types"
+	compose "github.com/compose-spec/compose-go/types"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -21,8 +21,6 @@ services:
   webapp:
     build:
       context: ./dir
-      additional_contexts:
-        foo: ./bar
       dockerfile: Dockerfile-alternate
       network:
         none
@@ -32,19 +30,9 @@ services:
         - type=local,src=path/to/cache
       cache_to:
         - type=local,dest=path/to/cache
-      ssh:
-        - key=path/to/key
-        - default
       secrets:
         - token
         - aws
-  webapp2:
-    profiles:
-      - test
-    build:
-      context: ./dir
-      dockerfile_inline: |
-        FROM alpine
 secrets:
   token:
     environment: ENV_TOKEN
@@ -52,40 +40,34 @@ secrets:
     file: /root/.aws/credentials
 `)
 
-	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Groups))
 	require.Equal(t, "default", c.Groups[0].Name)
 	sort.Strings(c.Groups[0].Targets)
-	require.Equal(t, []string{"db", "webapp", "webapp2"}, c.Groups[0].Targets)
+	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
 
-	require.Equal(t, 3, len(c.Targets))
+	require.Equal(t, 2, len(c.Targets))
 	sort.Slice(c.Targets, func(i, j int) bool {
 		return c.Targets[i].Name < c.Targets[j].Name
 	})
 	require.Equal(t, "db", c.Targets[0].Name)
-	require.Equal(t, "db", *c.Targets[0].Context)
+	require.Equal(t, "./db", *c.Targets[0].Context)
 	require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
 
 	require.Equal(t, "webapp", c.Targets[1].Name)
-	require.Equal(t, "dir", *c.Targets[1].Context)
-	require.Equal(t, map[string]string{"foo": "bar"}, c.Targets[1].Contexts)
+	require.Equal(t, "./dir", *c.Targets[1].Context)
 	require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
 	require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
 	require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
 	require.Equal(t, "none", *c.Targets[1].NetworkMode)
-	require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[1].SSH)
 	require.Equal(t, []string{
 		"id=token,env=ENV_TOKEN",
 		"id=aws,src=/root/.aws/credentials",
 	}, c.Targets[1].Secrets)
 
-	require.Equal(t, "webapp2", c.Targets[2].Name)
-	require.Equal(t, "dir", *c.Targets[2].Context)
-	require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
 }
 
 func TestNoBuildOutOfTreeService(t *testing.T) {
@@ -96,7 +78,7 @@ services:
   webapp:
     build: ./db
 `)
-	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Groups))
 	require.Equal(t, 1, len(c.Targets))
@@ -115,7 +97,7 @@ services:
       target: webapp
 `)
 
-	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 2, len(c.Targets))
@@ -140,7 +122,7 @@ services:
       target: webapp
 `)
 
-	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 2, len(c.Targets))
 	sort.Slice(c.Targets, func(i, j int) bool {
@@ -171,7 +153,7 @@ services:
 	t.Setenv("BAR", "foo")
 	t.Setenv("ZZZ_BAR", "zzz_foo")
 
-	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
 	require.NoError(t, err)
 	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
 	require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
@@ -185,7 +167,7 @@ services:
     entrypoint: echo 1
 `)
 
-	_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.Error(t, err)
 }
 
}
|
}
|
||||||
|
|
||||||
@@ -210,7 +192,7 @@ networks:
|
|||||||
gateway: 10.5.0.254
|
gateway: 10.5.0.254
|
||||||
`)
|
`)
|
||||||
|
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -227,7 +209,7 @@ services:
|
|||||||
- bar
|
- bar
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
|
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
|
||||||
}
|
}
|
||||||
@@ -264,7 +246,7 @@ networks:
|
|||||||
name: test-net
|
name: test-net
|
||||||
`)
|
`)
|
||||||
|
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -282,8 +264,6 @@ services:
|
|||||||
- user/app:cache
|
- user/app:cache
|
||||||
tags:
|
tags:
|
||||||
- ct-addon:baz
|
- ct-addon:baz
|
||||||
ssh:
|
|
||||||
key: path/to/key
|
|
||||||
args:
|
args:
|
||||||
CT_ECR: foo
|
CT_ECR: foo
|
||||||
CT_TAG: bar
|
CT_TAG: bar
|
||||||
@@ -293,9 +273,6 @@ services:
|
|||||||
tags:
|
tags:
|
||||||
- ct-addon:foo
|
- ct-addon:foo
|
||||||
- ct-addon:alp
|
- ct-addon:alp
|
||||||
ssh:
|
|
||||||
- default
|
|
||||||
- other=path/to/otherkey
|
|
||||||
platforms:
|
platforms:
|
||||||
- linux/amd64
|
- linux/amd64
|
||||||
- linux/arm64
|
- linux/arm64
|
||||||
@@ -312,11 +289,6 @@ services:
|
|||||||
args:
|
args:
|
||||||
CT_ECR: foo
|
CT_ECR: foo
|
||||||
CT_TAG: bar
|
CT_TAG: bar
|
||||||
shm_size: 128m
|
|
||||||
ulimits:
|
|
||||||
nofile:
|
|
||||||
soft: 1024
|
|
||||||
hard: 1024
|
|
||||||
x-bake:
|
x-bake:
|
||||||
secret:
|
secret:
|
||||||
- id=mysecret,src=/local/secret
|
- id=mysecret,src=/local/secret
|
||||||
@@ -327,7 +299,7 @@ services:
|
|||||||
no-cache: true
|
no-cache: true
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 2, len(c.Targets))
|
require.Equal(t, 2, len(c.Targets))
|
||||||
sort.Slice(c.Targets, func(i, j int) bool {
|
sort.Slice(c.Targets, func(i, j int) bool {
|
||||||
@@ -338,7 +310,6 @@ services:
|
|||||||
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
|
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
|
||||||
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
||||||
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
||||||
require.Equal(t, []string{"default", "key=path/to/key", "other=path/to/otherkey"}, c.Targets[0].SSH)
|
|
||||||
require.Equal(t, newBool(true), c.Targets[0].Pull)
|
require.Equal(t, newBool(true), c.Targets[0].Pull)
|
||||||
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
|
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
|
||||||
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
|
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
|
||||||
@@ -347,8 +318,6 @@ services:
|
|||||||
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
|
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
|
||||||
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
|
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
|
||||||
require.Equal(t, newBool(true), c.Targets[1].NoCache)
|
require.Equal(t, newBool(true), c.Targets[1].NoCache)
|
||||||
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
|
|
||||||
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestComposeExtDedup(t *testing.T) {
|
func TestComposeExtDedup(t *testing.T) {
|
||||||
@@ -363,8 +332,6 @@ services:
|
|||||||
- user/app:cache
|
- user/app:cache
|
||||||
tags:
|
tags:
|
||||||
- ct-addon:foo
|
- ct-addon:foo
|
||||||
ssh:
|
|
||||||
- default
|
|
||||||
x-bake:
|
x-bake:
|
||||||
tags:
|
tags:
|
||||||
- ct-addon:foo
|
- ct-addon:foo
|
||||||
@@ -374,18 +341,14 @@ services:
|
|||||||
- type=local,src=path/to/cache
|
- type=local,src=path/to/cache
|
||||||
cache-to:
|
cache-to:
|
||||||
- type=local,dest=path/to/cache
|
- type=local,dest=path/to/cache
|
||||||
ssh:
|
|
||||||
- default
|
|
||||||
- key=path/to/key
|
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
|
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
|
||||||
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
||||||
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
||||||
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestEnv(t *testing.T) {
|
func TestEnv(t *testing.T) {
|
||||||
@@ -413,7 +376,7 @@ services:
|
|||||||
- ` + envf.Name() + `
|
- ` + envf.Name() + `
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
|
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
|
||||||
}
|
}
|
||||||
@@ -459,7 +422,7 @@ services:
|
|||||||
published: "3306"
|
published: "3306"
|
||||||
protocol: tcp
|
protocol: tcp
|
||||||
`)
|
`)
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -505,7 +468,7 @@ func TestServiceName(t *testing.T) {
|
|||||||
for _, tt := range cases {
|
for _, tt := range cases {
|
||||||
tt := tt
|
tt := tt
|
||||||
t.Run(tt.svc, func(t *testing.T) {
|
t.Run(tt.svc, func(t *testing.T) {
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
|
_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
|
||||||
services:
|
services:
|
||||||
` + tt.svc + `:
|
` + tt.svc + `:
|
||||||
build:
|
build:
|
||||||
@@ -576,7 +539,7 @@ services:
|
|||||||
for _, tt := range cases {
|
for _, tt := range cases {
|
||||||
tt := tt
|
tt := tt
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
|
||||||
if tt.wantErr {
|
if tt.wantErr {
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
} else {
|
} else {
|
||||||
@@ -674,90 +637,11 @@ services:
|
|||||||
bar: "baz"
|
bar: "baz"
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
|
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestDependsOn(t *testing.T) {
|
|
||||||
var dt = []byte(`
|
|
||||||
services:
|
|
||||||
foo:
|
|
||||||
build:
|
|
||||||
context: .
|
|
||||||
ports:
|
|
||||||
- 3306:3306
|
|
||||||
depends_on:
|
|
||||||
- bar
|
|
||||||
bar:
|
|
||||||
build:
|
|
||||||
context: .
|
|
||||||
`)
|
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestInclude(t *testing.T) {
|
|
||||||
tmpdir := t.TempDir()
|
|
||||||
|
|
||||||
err := os.WriteFile(filepath.Join(tmpdir, "compose-foo.yml"), []byte(`
|
|
||||||
services:
|
|
||||||
foo:
|
|
||||||
build:
|
|
||||||
context: .
|
|
||||||
target: buildfoo
|
|
||||||
ports:
|
|
||||||
- 3306:3306
|
|
||||||
`), 0644)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
var dt = []byte(`
|
|
||||||
include:
|
|
||||||
- compose-foo.yml
|
|
||||||
|
|
||||||
services:
|
|
||||||
bar:
|
|
||||||
build:
|
|
||||||
context: .
|
|
||||||
target: buildbar
|
|
||||||
`)
|
|
||||||
|
|
||||||
chdir(t, tmpdir)
|
|
||||||
c, err := ParseComposeFiles([]File{{
|
|
||||||
Name: "composetypes.yml",
|
|
||||||
Data: dt,
|
|
||||||
}})
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
require.Equal(t, 2, len(c.Targets))
|
|
||||||
sort.Slice(c.Targets, func(i, j int) bool {
|
|
||||||
return c.Targets[i].Name < c.Targets[j].Name
|
|
||||||
})
|
|
||||||
require.Equal(t, "bar", c.Targets[0].Name)
|
|
||||||
require.Equal(t, "buildbar", *c.Targets[0].Target)
|
|
||||||
require.Equal(t, "foo", c.Targets[1].Name)
|
|
||||||
require.Equal(t, "buildfoo", *c.Targets[1].Target)
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestDevelop(t *testing.T) {
|
|
||||||
var dt = []byte(`
|
|
||||||
services:
|
|
||||||
scratch:
|
|
||||||
build:
|
|
||||||
context: ./webapp
|
|
||||||
develop:
|
|
||||||
watch:
|
|
||||||
- path: ./webapp/html
|
|
||||||
action: sync
|
|
||||||
target: /var/www
|
|
||||||
ignore:
|
|
||||||
- node_modules/
|
|
||||||
`)
|
|
||||||
|
|
||||||
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
|
|
||||||
require.NoError(t, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// chdir changes the current working directory to the named directory,
|
// chdir changes the current working directory to the named directory,
|
||||||
// and then restore the original working directory at the end of the test.
|
// and then restore the original working directory at the end of the test.
|
||||||
func chdir(t *testing.T, dir string) {
|
func chdir(t *testing.T, dir string) {
|
||||||
|
|||||||
535  bake/hcl_test.go
@@ -634,506 +634,6 @@ func TestHCLMultiFileAttrs(t *testing.T) {
 	require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
 }
 
-func TestHCLMultiFileGlobalAttrs(t *testing.T) {
-	dt := []byte(`
-FOO = "abc"
-target "app" {
-	args = {
-		v1 = "pre-${FOO}"
-	}
-}
-`)
-	dt2 := []byte(`
-FOO = "def"
-`)
-
-	c, err := ParseFiles([]File{
-		{Data: dt, Name: "c1.hcl"},
-		{Data: dt2, Name: "c2.hcl"},
-	}, nil)
-	require.NoError(t, err)
-	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
-	require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
-}
-
-func TestHCLDuplicateTarget(t *testing.T) {
-	dt := []byte(`
-target "app" {
-	dockerfile = "x"
-}
-target "app" {
-	dockerfile = "y"
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, "app", c.Targets[0].Name)
-	require.Equal(t, "y", *c.Targets[0].Dockerfile)
-}
-
-func TestHCLRenameTarget(t *testing.T) {
-	dt := []byte(`
-target "abc" {
-	name = "xyz"
-	dockerfile = "foo"
-}
-`)
-
-	_, err := ParseFile(dt, "docker-bake.hcl")
-	require.ErrorContains(t, err, "requires matrix")
-}
-
-func TestHCLRenameGroup(t *testing.T) {
-	dt := []byte(`
-group "foo" {
-	name = "bar"
-	targets = ["x", "y"]
-}
-`)
-
-	_, err := ParseFile(dt, "docker-bake.hcl")
-	require.ErrorContains(t, err, "not supported")
-
-	dt = []byte(`
-group "foo" {
-	matrix = {
-		name = ["x", "y"]
-	}
-}
-`)
-
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.ErrorContains(t, err, "not supported")
-}
-
-func TestHCLRenameTargetAttrs(t *testing.T) {
-	dt := []byte(`
-target "abc" {
-	name = "xyz"
-	matrix = {}
-	dockerfile = "foo"
-}
-
-target "def" {
-	dockerfile = target.xyz.dockerfile
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "xyz", c.Targets[0].Name)
-	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
-	require.Equal(t, "def", c.Targets[1].Name)
-	require.Equal(t, "foo", *c.Targets[1].Dockerfile)
-
-	dt = []byte(`
-target "def" {
-	dockerfile = target.xyz.dockerfile
-}
-
-target "abc" {
-	name = "xyz"
-	matrix = {}
-	dockerfile = "foo"
-}
-`)
-
-	c, err = ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "def", c.Targets[0].Name)
-	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
-	require.Equal(t, "xyz", c.Targets[1].Name)
-	require.Equal(t, "foo", *c.Targets[1].Dockerfile)
-
-	dt = []byte(`
-target "abc" {
-	name = "xyz"
-	matrix = {}
-	dockerfile = "foo"
-}
-
-target "def" {
-	dockerfile = target.abc.dockerfile
-}
-`)
-
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.ErrorContains(t, err, "abc")
-
-	dt = []byte(`
-target "def" {
-	dockerfile = target.abc.dockerfile
-}
-
-target "abc" {
-	name = "xyz"
-	matrix = {}
-	dockerfile = "foo"
-}
-`)
-
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.ErrorContains(t, err, "abc")
-}
-
-func TestHCLRenameSplit(t *testing.T) {
-	dt := []byte(`
-target "x" {
-	name = "y"
-	matrix = {}
-	dockerfile = "foo"
-}
-
-target "x" {
-	name = "z"
-	matrix = {}
-	dockerfile = "bar"
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "y", c.Targets[0].Name)
-	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
-	require.Equal(t, "z", c.Targets[1].Name)
-	require.Equal(t, "bar", *c.Targets[1].Dockerfile)
-
-	require.Equal(t, 1, len(c.Groups))
-	require.Equal(t, "x", c.Groups[0].Name)
-	require.Equal(t, []string{"y", "z"}, c.Groups[0].Targets)
-}
-
-func TestHCLRenameMultiFile(t *testing.T) {
-	dt := []byte(`
-target "foo" {
-	name = "bar"
-	matrix = {}
-	dockerfile = "x"
-}
-`)
-	dt2 := []byte(`
-target "foo" {
-	context = "y"
-}
-`)
-	dt3 := []byte(`
-target "bar" {
-	target = "z"
-}
-`)
-
-	c, err := ParseFiles([]File{
-		{Data: dt, Name: "c1.hcl"},
-		{Data: dt2, Name: "c2.hcl"},
-		{Data: dt3, Name: "c3.hcl"},
-	}, nil)
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-
-	require.Equal(t, c.Targets[0].Name, "bar")
-	require.Equal(t, *c.Targets[0].Dockerfile, "x")
-	require.Equal(t, *c.Targets[0].Target, "z")
-
-	require.Equal(t, c.Targets[1].Name, "foo")
-	require.Equal(t, *c.Targets[1].Context, "y")
-}
-
-func TestHCLMatrixBasic(t *testing.T) {
-	dt := []byte(`
-target "default" {
-	matrix = {
-		foo = ["x", "y"]
-	}
-	name = foo
-	dockerfile = "${foo}.Dockerfile"
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "x")
-	require.Equal(t, c.Targets[1].Name, "y")
-	require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
-	require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
-
-	require.Equal(t, 1, len(c.Groups))
-	require.Equal(t, "default", c.Groups[0].Name)
-	require.Equal(t, []string{"x", "y"}, c.Groups[0].Targets)
-}
-
-func TestHCLMatrixMultipleKeys(t *testing.T) {
-	dt := []byte(`
-target "default" {
-	matrix = {
-		foo = ["a"]
-		bar = ["b", "c"]
-		baz = ["d", "e", "f"]
-	}
-	name = "${foo}-${bar}-${baz}"
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 6, len(c.Targets))
-	names := make([]string, len(c.Targets))
-	for i, t := range c.Targets {
-		names[i] = t.Name
-	}
-	require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, names)
-
-	require.Equal(t, 1, len(c.Groups))
-	require.Equal(t, "default", c.Groups[0].Name)
-	require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, c.Groups[0].Targets)
-}
-
-func TestHCLMatrixLists(t *testing.T) {
-	dt := []byte(`
-target "foo" {
-	matrix = {
-		aa = [["aa", "bb"], ["cc", "dd"]]
-	}
-	name = aa[0]
-	args = {
-		target = "val${aa[1]}"
-	}
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "aa", c.Targets[0].Name)
-	require.Equal(t, ptrstr("valbb"), c.Targets[0].Args["target"])
-	require.Equal(t, "cc", c.Targets[1].Name)
-	require.Equal(t, ptrstr("valdd"), c.Targets[1].Args["target"])
-}
-
-func TestHCLMatrixMaps(t *testing.T) {
-	dt := []byte(`
-target "foo" {
-	matrix = {
-		aa = [
-			{
-				foo = "aa"
-				bar = "bb"
-			},
-			{
-				foo = "cc"
-				bar = "dd"
-			}
-		]
-	}
-	name = aa.foo
-	args = {
-		target = "val${aa.bar}"
-	}
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "aa")
-	require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
-	require.Equal(t, c.Targets[1].Name, "cc")
-	require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
-}
-
-func TestHCLMatrixMultipleTargets(t *testing.T) {
-	dt := []byte(`
-target "x" {
-	matrix = {
-		foo = ["a", "b"]
-	}
-	name = foo
-}
-target "y" {
-	matrix = {
-		bar = ["c", "d"]
-	}
-	name = bar
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 4, len(c.Targets))
-	names := make([]string, len(c.Targets))
-	for i, t := range c.Targets {
-		names[i] = t.Name
-	}
-	require.ElementsMatch(t, []string{"a", "b", "c", "d"}, names)
-
-	require.Equal(t, 2, len(c.Groups))
-	names = make([]string, len(c.Groups))
-	for i, c := range c.Groups {
-		names[i] = c.Name
-	}
-	require.ElementsMatch(t, []string{"x", "y"}, names)
-
-	for _, g := range c.Groups {
-		switch g.Name {
-		case "x":
-			require.Equal(t, []string{"a", "b"}, g.Targets)
-		case "y":
-			require.Equal(t, []string{"c", "d"}, g.Targets)
-		}
-	}
-}
-
-func TestHCLMatrixDuplicateNames(t *testing.T) {
-	dt := []byte(`
-target "default" {
-	matrix = {
-		foo = ["a", "b"]
-	}
-	name = "c"
-}
-`)
-
-	_, err := ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-}
-
-func TestHCLMatrixArgs(t *testing.T) {
-	dt := []byte(`
-a = 1
-variable "b" {
-	default = 2
-}
-target "default" {
-	matrix = {
-		foo = [a, b]
-	}
-	name = foo
-}
-`)
-
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "1", c.Targets[0].Name)
-	require.Equal(t, "2", c.Targets[1].Name)
-}
-
-func TestHCLMatrixArgsOverride(t *testing.T) {
-	dt := []byte(`
-variable "ABC" {
-	default = "def"
-}
-
-target "bar" {
-	matrix = {
-		aa = split(",", ABC)
-	}
-	name = "bar-${aa}"
-	args = {
-		foo = aa
-	}
-}
-`)
-
-	c, err := ParseFiles([]File{
-		{Data: dt, Name: "docker-bake.hcl"},
-	}, map[string]string{"ABC": "11,22,33"})
-	require.NoError(t, err)
-
-	require.Equal(t, 3, len(c.Targets))
-	require.Equal(t, "bar-11", c.Targets[0].Name)
-	require.Equal(t, "bar-22", c.Targets[1].Name)
-	require.Equal(t, "bar-33", c.Targets[2].Name)
-
-	require.Equal(t, ptrstr("11"), c.Targets[0].Args["foo"])
-	require.Equal(t, ptrstr("22"), c.Targets[1].Args["foo"])
-	require.Equal(t, ptrstr("33"), c.Targets[2].Args["foo"])
-}
-
-func TestHCLMatrixBadTypes(t *testing.T) {
-	dt := []byte(`
-target "default" {
-	matrix = "test"
-}
-`)
-	_, err := ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-
-	dt = []byte(`
-target "default" {
-	matrix = ["test"]
-}
-`)
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-
-	dt = []byte(`
-target "default" {
-	matrix = {
-		["a"] = ["b"]
-	}
-}
-`)
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-
-	dt = []byte(`
-target "default" {
-	matrix = {
-		1 = 2
-	}
-}
-`)
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-
-	dt = []byte(`
-target "default" {
-	matrix = {
-		a = "b"
-	}
-}
-`)
-	_, err = ParseFile(dt, "docker-bake.hcl")
-	require.Error(t, err)
-}
-
-func TestHCLMatrixWithGlobalTarget(t *testing.T) {
-	dt := []byte(`
-target "x" {
-	tags = ["a", "b"]
-}
-
-target "default" {
-	tags = target.x.tags
-	matrix = {
-		dummy = [""]
-	}
-}
-`)
-	c, err := ParseFile(dt, "docker-bake.hcl")
-	require.NoError(t, err)
-	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, "x", c.Targets[0].Name)
-	require.Equal(t, "default", c.Targets[1].Name)
-	require.Equal(t, []string{"a", "b"}, c.Targets[1].Tags)
-}
 
||||||
func TestJSONAttributes(t *testing.T) {
|
func TestJSONAttributes(t *testing.T) {
|
||||||
dt := []byte(`{"FOO": "abc", "variable": {"BAR": {"default": "def"}}, "target": { "app": { "args": {"v1": "pre-${FOO}-${BAR}"}} } }`)
|
dt := []byte(`{"FOO": "abc", "variable": {"BAR": {"default": "def"}}, "target": { "app": { "args": {"v1": "pre-${FOO}-${BAR}"}} } }`)
|
||||||
|
|
||||||
@@ -1445,41 +945,8 @@ func TestVarUnsupportedType(t *testing.T) {
|
|||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLIndexOfFunc(t *testing.T) {
|
|
||||||
dt := []byte(`
|
|
||||||
variable "APP_VERSIONS" {
|
|
||||||
default = [
|
|
||||||
"1.42.4",
|
|
||||||
"1.42.3"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
target "default" {
|
|
||||||
args = {
|
|
||||||
APP_VERSION = app_version
|
|
||||||
}
|
|
||||||
matrix = {
|
|
||||||
app_version = APP_VERSIONS
|
|
||||||
}
|
|
||||||
name="app-${replace(app_version, ".", "-")}"
|
|
||||||
tags = [
|
|
||||||
"app:${app_version}",
|
|
||||||
indexof(APP_VERSIONS, app_version) == 0 ? "app:latest" : "",
|
|
||||||
]
|
|
||||||
}
|
|
||||||
`)
|
|
||||||
|
|
||||||
c, err := ParseFile(dt, "docker-bake.hcl")
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
require.Equal(t, 2, len(c.Targets))
|
|
||||||
require.Equal(t, "app-1-42-4", c.Targets[0].Name)
|
|
||||||
require.Equal(t, "app:latest", c.Targets[0].Tags[1])
|
|
||||||
require.Equal(t, "app-1-42-3", c.Targets[1].Name)
|
|
||||||
require.Empty(t, c.Targets[1].Tags[1])
|
|
||||||
}
|
|
||||||
|
|
||||||
func ptrstr(s interface{}) *string {
|
func ptrstr(s interface{}) *string {
|
||||||
var n *string
|
var n *string = nil
|
||||||
if reflect.ValueOf(s).Kind() == reflect.String {
|
if reflect.ValueOf(s).Kind() == reflect.String {
|
||||||
ss := s.(string)
|
ss := s.(string)
|
||||||
n = &ss
|
n = &ss
|
||||||
|
|||||||
@@ -1,9 +1,7 @@
 package hclparser
 
 import (
-	"encoding/binary"
 	"fmt"
-	"hash/fnv"
 	"math"
 	"math/big"
 	"reflect"
@@ -51,38 +49,29 @@ type parser struct {
 	attrs map[string]*hcl.Attribute
 	funcs map[string]*functionDef
 
 	blocks       map[string]map[string][]*hcl.Block
-	blockValues  map[*hcl.Block][]reflect.Value
-	blockEvalCtx map[*hcl.Block][]*hcl.EvalContext
-	blockNames   map[*hcl.Block][]string
-	blockTypes   map[string]reflect.Type
+	blockValues  map[*hcl.Block]reflect.Value
+	blockTypes   map[string]reflect.Type
 
 	ectx *hcl.EvalContext
 
-	progressV map[uint64]struct{}
-	progressF map[uint64]struct{}
-	progressB map[uint64]map[string]struct{}
-	doneB     map[uint64]map[string]struct{}
-}
+	progress  map[string]struct{}
+	progressF map[string]struct{}
+	progressB map[*hcl.Block]map[string]struct{}
+	doneF     map[string]struct{}
+	doneB     map[*hcl.Block]map[string]struct{}
 
-type WithEvalContexts interface {
-	GetEvalContexts(base *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error)
-}
-
-type WithGetName interface {
-	GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
 }
 
 var errUndefined = errors.New("undefined")
 
-func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
+func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
 	fns, hcldiags := funcCalls(exp)
 	if hcldiags.HasErrors() {
 		return hcldiags
 	}
 
 	for _, fn := range fns {
-		if err := p.resolveFunction(ectx, fn); err != nil {
+		if err := p.resolveFunction(fn); err != nil {
 			if allowMissing && errors.Is(err, errUndefined) {
 				continue
 			}
@@ -135,16 +124,14 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 				}
 			}
 		}
-			for _, block := range blocks {
-				if err := p.resolveBlock(block, target); err != nil {
-					if allowMissing && errors.Is(err, errUndefined) {
-						continue
-					}
-					return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
-				}
+			if err := p.resolveBlock(blocks[0], target); err != nil {
+				if allowMissing && errors.Is(err, errUndefined) {
+					continue
+				}
+				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
 			}
 		} else {
-			if err := p.resolveValue(ectx, v.RootName()); err != nil {
+			if err := p.resolveValue(v.RootName()); err != nil {
 				if allowMissing && errors.Is(err, errUndefined) {
 					continue
 				}
@@ -158,21 +145,21 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 
 // resolveFunction forces evaluation of a function, storing the result into the
 // parser.
-func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
-	if _, ok := p.ectx.Functions[name]; ok {
-		return nil
-	}
-	if _, ok := ectx.Functions[name]; ok {
+func (p *parser) resolveFunction(name string) error {
+	if _, ok := p.doneF[name]; ok {
 		return nil
 	}
 	f, ok := p.funcs[name]
 	if !ok {
-		return errors.Wrapf(errUndefined, "function %q does not exist", name)
+		if _, ok := p.ectx.Functions[name]; ok {
+			return nil
+		}
+		return errors.Wrapf(errUndefined, "function %q does not exit", name)
 	}
-	if _, ok := p.progressF[key(ectx, name)]; ok {
+	if _, ok := p.progressF[name]; ok {
 		return errors.Errorf("function cycle not allowed for %s", name)
 	}
-	p.progressF[key(ectx, name)] = struct{}{}
+	p.progressF[name] = struct{}{}
 
 	if f.Result == nil {
 		return errors.Errorf("empty result not allowed for %s", name)
@@ -217,7 +204,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
 		return diags
 	}
 
-	if diags := p.loadDeps(p.ectx, f.Result.Expr, params, false); diags.HasErrors() {
+	if diags := p.loadDeps(f.Result.Expr, params, false); diags.HasErrors() {
 		return diags
 	}
 
@@ -227,6 +214,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
 	if diags.HasErrors() {
 		return diags
 	}
+	p.doneF[name] = struct{}{}
 	p.ectx.Functions[name] = v
 
 	return nil
@@ -234,17 +222,14 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
 
 // resolveValue forces evaluation of a named value, storing the result into the
 // parser.
-func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
+func (p *parser) resolveValue(name string) (err error) {
 	if _, ok := p.ectx.Variables[name]; ok {
 		return nil
 	}
-	if _, ok := ectx.Variables[name]; ok {
-		return nil
-	}
-	if _, ok := p.progressV[key(ectx, name)]; ok {
+	if _, ok := p.progress[name]; ok {
 		return errors.Errorf("variable cycle not allowed for %s", name)
 	}
-	p.progressV[key(ectx, name)] = struct{}{}
+	p.progress[name] = struct{}{}
 
 	var v *cty.Value
 	defer func() {
@@ -257,10 +242,9 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
 	if _, builtin := p.opt.Vars[name]; !ok && !builtin {
 		vr, ok := p.vars[name]
 		if !ok {
-			return errors.Wrapf(errUndefined, "variable %q does not exist", name)
+			return errors.Wrapf(errUndefined, "variable %q does not exit", name)
 		}
 		def = vr.Default
-		ectx = p.ectx
 	}
 
 	if def == nil {
@@ -273,10 +257,10 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
 		return
 	}
 
-	if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
+	if diags := p.loadDeps(def.Expr, nil, true); diags.HasErrors() {
 		return diags
 	}
-	vv, diags := def.Expr.Value(ectx)
+	vv, diags := def.Expr.Value(p.ectx)
 	if diags.HasErrors() {
 		return diags
 	}
@@ -315,226 +299,147 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
 // target schema is provided, only the attributes and blocks present in the
 // schema will be evaluated.
 func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err error) {
-	// prepare the variable map for this type
-	if _, ok := p.ectx.Variables[block.Type]; !ok {
-		p.ectx.Variables[block.Type] = cty.MapValEmpty(cty.Map(cty.String))
-	}
-
-	// prepare the output destination and evaluation context
+	name := block.Labels[0]
+	if err := p.opt.ValidateLabel(name); err != nil {
+		return wrapErrorDiagnostic("Invalid name", err, &block.LabelRanges[0], &block.LabelRanges[0])
+	}
+
+	if _, ok := p.doneB[block]; !ok {
+		p.doneB[block] = map[string]struct{}{}
+	}
+	if _, ok := p.progressB[block]; !ok {
+		p.progressB[block] = map[string]struct{}{}
+	}
+
+	if target != nil {
+		// filter out attributes and blocks that are already evaluated
+		original := target
+		target = &hcl.BodySchema{}
+		for _, a := range original.Attributes {
+			if _, ok := p.doneB[block][a.Name]; !ok {
+				target.Attributes = append(target.Attributes, a)
+			}
+		}
+		for _, b := range original.Blocks {
+			if _, ok := p.doneB[block][b.Type]; !ok {
+				target.Blocks = append(target.Blocks, b)
+			}
+		}
+		if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
+			return nil
+		}
+	}
+
+	if target != nil {
+		// detect reference cycles
+		for _, a := range target.Attributes {
+			if _, ok := p.progressB[block][a.Name]; ok {
+				return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
+			}
+		}
+		for _, b := range target.Blocks {
+			if _, ok := p.progressB[block][b.Type]; ok {
+				return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
+			}
+		}
+		for _, a := range target.Attributes {
+			p.progressB[block][a.Name] = struct{}{}
+		}
+		for _, b := range target.Blocks {
+			p.progressB[block][b.Type] = struct{}{}
+		}
+	}
+
+	// create a filtered body that contains only the target properties
+	body := func() hcl.Body {
+		if target != nil {
+			return FilterIncludeBody(block.Body, target)
+		}
+
+		filter := &hcl.BodySchema{}
+		for k := range p.doneB[block] {
+			filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
+			filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
+		}
+		return FilterExcludeBody(block.Body, filter)
+	}
+
+	// load dependencies from all targeted properties
 	t, ok := p.blockTypes[block.Type]
 	if !ok {
 		return nil
 	}
-	var outputs []reflect.Value
-	var ectxs []*hcl.EvalContext
+	schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
+	content, _, diag := body().PartialContent(schema)
+	if diag.HasErrors() {
+		return diag
+	}
+	for _, a := range content.Attributes {
+		diag := p.loadDeps(a.Expr, nil, true)
+		if diag.HasErrors() {
+			return diag
+		}
+	}
+	for _, b := range content.Blocks {
+		err := p.resolveBlock(b, nil)
+		if err != nil {
+			return err
+		}
+	}
+
+	// decode!
+	var output reflect.Value
 	if prev, ok := p.blockValues[block]; ok {
-		outputs = prev
-		ectxs = p.blockEvalCtx[block]
+		output = prev
 	} else {
-		if v, ok := reflect.New(t).Interface().(WithEvalContexts); ok {
-			ectxs, err = v.GetEvalContexts(p.ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
-				return p.loadDeps(p.ectx, expr, nil, true)
-			})
-			if err != nil {
-				return err
-			}
-			for _, ectx := range ectxs {
-				if ectx != p.ectx && ectx.Parent() != p.ectx {
-					return errors.Errorf("EvalContext must return a context with the correct parent")
-				}
-			}
-		} else {
-			ectxs = append([]*hcl.EvalContext{}, p.ectx)
-		}
-		for range ectxs {
-			outputs = append(outputs, reflect.New(t))
-		}
+		output = reflect.New(t)
+		setLabel(output, block.Labels[0]) // early attach labels, so we can reference them
 	}
-	p.blockValues[block] = outputs
-	p.blockEvalCtx[block] = ectxs
-
-	for i, output := range outputs {
-		target := target
-		ectx := ectxs[i]
-		name := block.Labels[0]
-		if names, ok := p.blockNames[block]; ok {
-			name = names[i]
-		}
-
-		if _, ok := p.doneB[key(block, ectx)]; !ok {
-			p.doneB[key(block, ectx)] = map[string]struct{}{}
-		}
-		if _, ok := p.progressB[key(block, ectx)]; !ok {
-			p.progressB[key(block, ectx)] = map[string]struct{}{}
-		}
-
-		if target != nil {
-			// filter out attributes and blocks that are already evaluated
-			original := target
-			target = &hcl.BodySchema{}
-			for _, a := range original.Attributes {
-				if _, ok := p.doneB[key(block, ectx)][a.Name]; !ok {
-					target.Attributes = append(target.Attributes, a)
-				}
-			}
-			for _, b := range original.Blocks {
-				if _, ok := p.doneB[key(block, ectx)][b.Type]; !ok {
-					target.Blocks = append(target.Blocks, b)
-				}
-			}
-			if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
-				return nil
-			}
-		}
-
-		if target != nil {
-			// detect reference cycles
-			for _, a := range target.Attributes {
-				if _, ok := p.progressB[key(block, ectx)][a.Name]; ok {
-					return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
-				}
-			}
-			for _, b := range target.Blocks {
-				if _, ok := p.progressB[key(block, ectx)][b.Type]; ok {
-					return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
-				}
-			}
-			for _, a := range target.Attributes {
-				p.progressB[key(block, ectx)][a.Name] = struct{}{}
-			}
-			for _, b := range target.Blocks {
-				p.progressB[key(block, ectx)][b.Type] = struct{}{}
-			}
-		}
-
-		// create a filtered body that contains only the target properties
-		body := func() hcl.Body {
-			if target != nil {
-				return FilterIncludeBody(block.Body, target)
-			}
-
-			filter := &hcl.BodySchema{}
-			for k := range p.doneB[key(block, ectx)] {
-				filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
-				filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
-			}
-			return FilterExcludeBody(block.Body, filter)
-		}
-
-		// load dependencies from all targeted properties
-		schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
-		content, _, diag := body().PartialContent(schema)
-		if diag.HasErrors() {
-			return diag
-		}
-		for _, a := range content.Attributes {
-			diag := p.loadDeps(ectx, a.Expr, nil, true)
-			if diag.HasErrors() {
-				return diag
-			}
-		}
-		for _, b := range content.Blocks {
-			err := p.resolveBlock(b, nil)
-			if err != nil {
-				return err
-			}
-		}
-
-		// decode!
-		diag = gohcl.DecodeBody(body(), ectx, output.Interface())
-		if diag.HasErrors() {
-			return diag
-		}
-
-		// mark all targeted properties as done
-		for _, a := range content.Attributes {
-			p.doneB[key(block, ectx)][a.Name] = struct{}{}
-		}
-		for _, b := range content.Blocks {
-			p.doneB[key(block, ectx)][b.Type] = struct{}{}
-		}
-		if target != nil {
-			for _, a := range target.Attributes {
-				p.doneB[key(block, ectx)][a.Name] = struct{}{}
-			}
-			for _, b := range target.Blocks {
-				p.doneB[key(block, ectx)][b.Type] = struct{}{}
-			}
-		}
-
-		// store the result into the evaluation context (so it can be referenced)
-		outputType, err := gocty.ImpliedType(output.Interface())
-		if err != nil {
-			return err
-		}
-		outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
-		if err != nil {
-			return err
-		}
-		var m map[string]cty.Value
-		if m2, ok := p.ectx.Variables[block.Type]; ok {
-			m = m2.AsValueMap()
-		}
-		if m == nil {
-			m = map[string]cty.Value{}
-		}
-		m[name] = outputValue
-		p.ectx.Variables[block.Type] = cty.MapVal(m)
+	diag = gohcl.DecodeBody(body(), p.ectx, output.Interface())
+	if diag.HasErrors() {
+		return diag
+	}
+	p.blockValues[block] = output
+
+	// mark all targeted properties as done
+	for _, a := range content.Attributes {
+		p.doneB[block][a.Name] = struct{}{}
+	}
+	for _, b := range content.Blocks {
+		p.doneB[block][b.Type] = struct{}{}
+	}
+	if target != nil {
+		for _, a := range target.Attributes {
+			p.doneB[block][a.Name] = struct{}{}
+		}
+		for _, b := range target.Blocks {
+			p.doneB[block][b.Type] = struct{}{}
+		}
 	}
+
+	// store the result into the evaluation context (so if can be referenced)
+	outputType, err := gocty.ImpliedType(output.Interface())
+	if err != nil {
+		return err
+	}
+	outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
+	if err != nil {
+		return err
+	}
+	var m map[string]cty.Value
+	if m2, ok := p.ectx.Variables[block.Type]; ok {
+		m = m2.AsValueMap()
+	}
+	if m == nil {
+		m = map[string]cty.Value{}
+	}
+	m[name] = outputValue
+	p.ectx.Variables[block.Type] = cty.MapVal(m)
 
 	return nil
 }
 
-// resolveBlockNames returns the names of the block, calling resolveBlock to
-// evaluate any label fields to correctly resolve the name.
-func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
-	if names, ok := p.blockNames[block]; ok {
-		return names, nil
-	}
-
-	if err := p.resolveBlock(block, &hcl.BodySchema{}); err != nil {
-		return nil, err
-	}
-
-	names := make([]string, 0, len(p.blockValues[block]))
-	for i, val := range p.blockValues[block] {
-		ectx := p.blockEvalCtx[block][i]
-
-		name := block.Labels[0]
-		if err := p.opt.ValidateLabel(name); err != nil {
-			return nil, err
-		}
-
-		if v, ok := val.Interface().(WithGetName); ok {
-			var err error
-			name, err = v.GetName(ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
-				return p.loadDeps(ectx, expr, nil, true)
-			})
-			if err != nil {
-				return nil, err
-			}
-			if err := p.opt.ValidateLabel(name); err != nil {
-				return nil, err
-			}
-		}
-
-		setName(val, name)
-		names = append(names, name)
-	}
-
-	found := map[string]struct{}{}
-	for _, name := range names {
-		if _, ok := found[name]; ok {
-			return nil, errors.Errorf("duplicate name %q", name)
-		}
-		found[name] = struct{}{}
-	}
-
-	p.blockNames[block] = names
-	return names, nil
-}
-
-func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
+func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
 	reserved := map[string]struct{}{}
 	schema, _ := gohcl.ImpliedBodySchema(val)
 
@@ -547,7 +452,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 
 	var defs inputs
 	if err := gohcl.DecodeBody(b, nil, &defs); err != nil {
-		return nil, err
+		return err
 	}
 	defsSchema, _ := gohcl.ImpliedBodySchema(defs)
 
@@ -570,20 +475,20 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 		attrs: map[string]*hcl.Attribute{},
 		funcs: map[string]*functionDef{},
 
 		blocks:       map[string]map[string][]*hcl.Block{},
-		blockValues:  map[*hcl.Block][]reflect.Value{},
-		blockEvalCtx: map[*hcl.Block][]*hcl.EvalContext{},
-		blockNames:   map[*hcl.Block][]string{},
-		blockTypes:   map[string]reflect.Type{},
+		blockValues:  map[*hcl.Block]reflect.Value{},
+		blockTypes:   map[string]reflect.Type{},
+
+		progress:  map[string]struct{}{},
+		progressF: map[string]struct{}{},
+		progressB: map[*hcl.Block]map[string]struct{}{},
+
+		doneF: map[string]struct{}{},
+		doneB: map[*hcl.Block]map[string]struct{}{},
 		ectx: &hcl.EvalContext{
 			Variables: map[string]cty.Value{},
-			Functions: Stdlib(),
+			Functions: stdlibFunctions,
 		},
-
-		progressV: map[uint64]struct{}{},
-		progressF: map[uint64]struct{}{},
-		progressB: map[uint64]map[string]struct{}{},
-		doneB:     map[uint64]map[string]struct{}{},
 	}
 
 	for _, v := range defs.Variables {
@@ -603,18 +508,18 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 
 	content, b, diags := b.PartialContent(schema)
 	if diags.HasErrors() {
-		return nil, diags
+		return diags
 	}
 
 	blocks, b, diags := b.PartialContent(defsSchema)
 	if diags.HasErrors() {
-		return nil, diags
+		return diags
 	}
 
 	attrs, diags := b.JustAttributes()
 	if diags.HasErrors() {
-		if d := removeAttributesDiags(diags, reserved, p.vars, attrs); len(d) > 0 {
-			return nil, d
+		if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
+			return d
 		}
 	}
 
@@ -627,56 +532,76 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 	delete(p.attrs, "function")
 
 	for k := range p.opt.Vars {
-		_ = p.resolveValue(p.ectx, k)
+		_ = p.resolveValue(k)
 	}
 
 	for _, a := range content.Attributes {
-		a := a
-		return nil, hcl.Diagnostics{
+		return hcl.Diagnostics{
 			&hcl.Diagnostic{
 				Severity: hcl.DiagError,
 				Summary:  "Invalid attribute",
 				Detail:   "global attributes currently not supported",
-				Subject:  a.Range.Ptr(),
-				Context:  a.Range.Ptr(),
+				Subject:  &a.Range,
+				Context:  &a.Range,
 			},
 		}
 	}
 
 	for k := range p.vars {
-		if err := p.resolveValue(p.ectx, k); err != nil {
+		if err := p.resolveValue(k); err != nil {
 			if diags, ok := err.(hcl.Diagnostics); ok {
-				return nil, diags
+				return diags
 			}
 			r := p.vars[k].Body.MissingItemRange()
-			return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
+			return wrapErrorDiagnostic("Invalid value", err, &r, &r)
 		}
 	}
 
 	for k := range p.funcs {
-		if err := p.resolveFunction(p.ectx, k); err != nil {
+		if err := p.resolveFunction(k); err != nil {
 			if diags, ok := err.(hcl.Diagnostics); ok {
-				return nil, diags
+				return diags
 			}
 			var subject *hcl.Range
 			var context *hcl.Range
 			if p.funcs[k].Params != nil {
-				subject = p.funcs[k].Params.Range.Ptr()
+				subject = &p.funcs[k].Params.Range
 				context = subject
 			} else {
 				for _, block := range blocks.Blocks {
-					block := block
 					if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
-						subject = block.LabelRanges[0].Ptr()
-						context = block.DefRange.Ptr()
+						subject = &block.LabelRanges[0]
+						context = &block.DefRange
 						break
 					}
 				}
 			}
-			return nil, wrapErrorDiagnostic("Invalid function", err, subject, context)
+			return wrapErrorDiagnostic("Invalid function", err, subject, context)
 		}
 	}
 
+	for _, b := range content.Blocks {
+		if len(b.Labels) == 0 || len(b.Labels) > 1 {
+			return hcl.Diagnostics{
+				&hcl.Diagnostic{
+					Severity: hcl.DiagError,
+					Summary:  "Invalid block",
+					Detail:   fmt.Sprintf("invalid block label: %v", b.Labels),
+					Subject:  &b.LabelRanges[0],
+					Context:  &b.LabelRanges[0],
+				},
+			}
+		}
+		bm, ok := p.blocks[b.Type]
+		if !ok {
+			bm = map[string][]*hcl.Block{}
+			p.blocks[b.Type] = bm
+		}
+
+		lbl := b.Labels[0]
+		bm[lbl] = append(bm[lbl], b)
+	}
+
 	type value struct {
 		reflect.Value
 		idx int
@@ -687,7 +612,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 		values map[string]value
 	}
 	types := map[string]field{}
-	renamed := map[string]map[string][]string{}
 	vt := reflect.ValueOf(val).Elem().Type()
 	for i := 0; i < vt.NumField(); i++ {
 		tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
@@ -698,43 +623,10 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
 			typ:    vt.Field(i).Type,
 			values: make(map[string]value),
 		}
-		renamed[tags[0]] = map[string][]string{}
 	}
 
-	tmpBlocks := map[string]map[string][]*hcl.Block{}
-	for _, b := range content.Blocks {
-		if len(b.Labels) == 0 || len(b.Labels) > 1 {
-			return nil, hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid block",
-					Detail:   fmt.Sprintf("invalid block label: %v", b.Labels),
-					Subject:  &b.LabelRanges[0],
-					Context:  &b.LabelRanges[0],
-				},
-			}
-		}
-
-		bm, ok := tmpBlocks[b.Type]
-		if !ok {
-			bm = map[string][]*hcl.Block{}
-			tmpBlocks[b.Type] = bm
-		}
-
-		names, err := p.resolveBlockNames(b)
-		if err != nil {
-			return nil, wrapErrorDiagnostic("Invalid name", err, &b.LabelRanges[0], &b.LabelRanges[0])
-		}
-		for _, name := range names {
-			bm[name] = append(bm[name], b)
-			renamed[b.Type][b.Labels[0]] = append(renamed[b.Type][b.Labels[0]], name)
-		}
-	}
-	p.blocks = tmpBlocks
-
 	diags = hcl.Diagnostics{}
 	for _, b := range content.Blocks {
|
||||||
b := b
|
|
||||||
v := reflect.ValueOf(val)
|
v := reflect.ValueOf(val)
|
||||||
|
|
||||||
err := p.resolveBlock(b, nil)
|
err := p.resolveBlock(b, nil)
|
||||||
@@ -745,57 +637,56 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
|
|||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
return nil, wrapErrorDiagnostic("Invalid block", err, b.LabelRanges[0].Ptr(), b.DefRange.Ptr())
|
return wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
vvs := p.blockValues[b]
|
vv := p.blockValues[b]
|
||||||
for _, vv := range vvs {
|
|
||||||
t := types[b.Type]
|
t := types[b.Type]
|
||||||
lblIndex, lblExists := getNameIndex(vv)
|
lblIndex := setLabel(vv, b.Labels[0])
|
||||||
lblName, _ := getName(vv)
|
|
||||||
oldValue, exists := t.values[lblName]
|
oldValue, exists := t.values[b.Labels[0]]
|
||||||
if !exists && lblExists {
|
if !exists && lblIndex != -1 {
|
||||||
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
|
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
|
||||||
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
|
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
|
||||||
if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
|
if b.Labels[0] == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
|
||||||
exists = true
|
exists = true
|
||||||
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
|
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
|
||||||
break
|
break
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if exists {
|
}
|
||||||
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
|
if exists {
|
||||||
m.Call([]reflect.Value{vv})
|
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
|
||||||
} else {
|
m.Call([]reflect.Value{vv})
|
||||||
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
|
|
||||||
}
|
|
||||||
} else {
|
} else {
|
||||||
slice := v.Elem().Field(t.idx)
|
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
|
||||||
if slice.IsNil() {
|
|
||||||
slice = reflect.New(t.typ).Elem()
|
|
||||||
}
|
|
||||||
t.values[lblName] = value{Value: vv, idx: slice.Len()}
|
|
||||||
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
|
|
||||||
}
|
}
|
||||||
|
} else {
|
||||||
|
slice := v.Elem().Field(t.idx)
|
||||||
|
if slice.IsNil() {
|
||||||
|
slice = reflect.New(t.typ).Elem()
|
||||||
|
}
|
||||||
|
t.values[b.Labels[0]] = value{Value: vv, idx: slice.Len()}
|
||||||
|
v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if diags.HasErrors() {
|
if diags.HasErrors() {
|
||||||
return nil, diags
|
return diags
|
||||||
}
|
}
|
||||||
|
|
||||||
for k := range p.attrs {
|
for k := range p.attrs {
|
||||||
if err := p.resolveValue(p.ectx, k); err != nil {
|
if err := p.resolveValue(k); err != nil {
|
||||||
if diags, ok := err.(hcl.Diagnostics); ok {
|
if diags, ok := err.(hcl.Diagnostics); ok {
|
||||||
return nil, diags
|
return diags
|
||||||
}
|
}
|
||||||
return nil, wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
|
return wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
return renamed, nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
|
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
|
||||||
@@ -819,45 +710,21 @@ func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func setName(v reflect.Value, name string) {
|
func setLabel(v reflect.Value, lbl string) int {
|
||||||
|
// cache field index?
|
||||||
numFields := v.Elem().Type().NumField()
|
numFields := v.Elem().Type().NumField()
|
||||||
for i := 0; i < numFields; i++ {
|
for i := 0; i < numFields; i++ {
|
||||||
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
|
for _, t := range strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",") {
|
||||||
for _, t := range parts[1:] {
|
|
||||||
if t == "label" {
|
if t == "label" {
|
||||||
v.Elem().Field(i).Set(reflect.ValueOf(name))
|
v.Elem().Field(i).Set(reflect.ValueOf(lbl))
|
||||||
|
return i
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
return -1
|
||||||
}
|
}
|
||||||
|
|
||||||
func getName(v reflect.Value) (string, bool) {
|
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
|
||||||
numFields := v.Elem().Type().NumField()
|
|
||||||
for i := 0; i < numFields; i++ {
|
|
||||||
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
|
|
||||||
for _, t := range parts[1:] {
|
|
||||||
if t == "label" {
|
|
||||||
return v.Elem().Field(i).String(), true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return "", false
|
|
||||||
}
|
|
||||||
|
|
||||||
func getNameIndex(v reflect.Value) (int, bool) {
|
|
||||||
numFields := v.Elem().Type().NumField()
|
|
||||||
for i := 0; i < numFields; i++ {
|
|
||||||
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
|
|
||||||
for _, t := range parts[1:] {
|
|
||||||
if t == "label" {
|
|
||||||
return i, true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return 0, false
|
|
||||||
}
|
|
||||||
|
|
||||||
func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable, attrs hcl.Attributes) hcl.Diagnostics {
|
|
||||||
var fdiags hcl.Diagnostics
|
var fdiags hcl.Diagnostics
|
||||||
for _, d := range diags {
|
for _, d := range diags {
|
||||||
if fout := func(d *hcl.Diagnostic) bool {
|
if fout := func(d *hcl.Diagnostic) bool {
|
||||||
@@ -879,12 +746,6 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
|
|||||||
return true
|
return true
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
for a := range attrs {
|
|
||||||
// Do the same for attributes
|
|
||||||
if strings.HasPrefix(d.Detail, fmt.Sprintf(`Argument "%s" was already set at `, a)) {
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return false
|
return false
|
||||||
}(d); !fout {
|
}(d); !fout {
|
||||||
fdiags = append(fdiags, d)
|
fdiags = append(fdiags, d)
|
||||||
@@ -892,21 +753,3 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
|
|||||||
}
|
}
|
||||||
return fdiags
|
return fdiags
|
||||||
}
|
}
|
||||||
|
|
||||||
// key returns a unique hash for the given values
|
|
||||||
func key(ks ...any) uint64 {
|
|
||||||
hash := fnv.New64a()
|
|
||||||
for _, k := range ks {
|
|
||||||
v := reflect.ValueOf(k)
|
|
||||||
switch v.Kind() {
|
|
||||||
case reflect.String:
|
|
||||||
hash.Write([]byte(v.String()))
|
|
||||||
case reflect.Pointer:
|
|
||||||
ptr := reflect.ValueOf(k).Pointer()
|
|
||||||
binary.Write(hash, binary.LittleEndian, uint64(ptr))
|
|
||||||
default:
|
|
||||||
panic(fmt.Sprintf("unknown key kind %s", v.Kind().String()))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return hash.Sum64()
|
|
||||||
}
|
|
||||||
|
|||||||
```diff
@@ -1,230 +0,0 @@
-// Copyright (c) HashiCorp, Inc.
-// SPDX-License-Identifier: MPL-2.0
-
-// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
-
-package hclparser
-
-import (
-	"fmt"
-
-	"github.com/hashicorp/hcl/v2"
-)
-
-// MergeFiles combines the given files to produce a single body that contains
-// configuration from all of the given files.
-//
-// The ordering of the given files decides the order in which contained
-// elements will be returned. If any top-level attributes are defined with
-// the same name across multiple files, a diagnostic will be produced from
-// the Content and PartialContent methods describing this error in a
-// user-friendly way.
-func MergeFiles(files []*hcl.File) hcl.Body {
-	var bodies []hcl.Body
-	for _, file := range files {
-		bodies = append(bodies, file.Body)
-	}
-	return MergeBodies(bodies)
-}
-
-// MergeBodies is like MergeFiles except it deals directly with bodies, rather
-// than with entire files.
-func MergeBodies(bodies []hcl.Body) hcl.Body {
-	if len(bodies) == 0 {
-		// Swap out for our singleton empty body, to reduce the number of
-		// empty slices we have hanging around.
-		return emptyBody
-	}
-
-	// If any of the given bodies are already merged bodies, we'll unpack
-	// to flatten to a single mergedBodies, since that's conceptually simpler.
-	// This also, as a side-effect, eliminates any empty bodies, since
-	// empties are merged bodies with no inner bodies.
-	var newLen int
-	var flatten bool
-	for _, body := range bodies {
-		if children, merged := body.(mergedBodies); merged {
-			newLen += len(children)
-			flatten = true
-		} else {
-			newLen++
-		}
-	}
-
-	if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside
-		return mergedBodies(bodies)
-	}
-
-	if newLen == 0 {
-		// Don't allocate a new empty when we already have one
-		return emptyBody
-	}
-
-	n := make([]hcl.Body, 0, newLen)
-	for _, body := range bodies {
-		if children, merged := body.(mergedBodies); merged {
-			n = append(n, children...)
-		} else {
-			n = append(n, body)
-		}
-	}
-	return mergedBodies(n)
-}
-
-var emptyBody = mergedBodies([]hcl.Body{})
-
-// EmptyBody returns a body with no content. This body can be used as a
-// placeholder when a body is required but no body content is available.
-func EmptyBody() hcl.Body {
-	return emptyBody
-}
-
-type mergedBodies []hcl.Body
-
-// Content returns the content produced by applying the given schema to all
-// of the merged bodies and merging the result.
-//
-// Although required attributes _are_ supported, they should be used sparingly
-// with merged bodies since in this case there is no contextual information
-// with which to return good diagnostics. Applications working with merged
-// bodies may wish to mark all attributes as optional and then check for
-// required attributes afterwards, to produce better diagnostics.
-func (mb mergedBodies) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
-	// the returned body will always be empty in this case, because mergedContent
-	// will only ever call Content on the child bodies.
-	content, _, diags := mb.mergedContent(schema, false)
-	return content, diags
-}
-
-func (mb mergedBodies) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
-	return mb.mergedContent(schema, true)
-}
-
-func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
-	attrs := make(map[string]*hcl.Attribute)
-	var diags hcl.Diagnostics
-
-	for _, body := range mb {
-		thisAttrs, thisDiags := body.JustAttributes()
-
-		if len(thisDiags) != 0 {
-			diags = append(diags, thisDiags...)
-		}
-
-		if thisAttrs != nil {
-			for name, attr := range thisAttrs {
-				if existing := attrs[name]; existing != nil {
-					diags = diags.Append(&hcl.Diagnostic{
-						Severity: hcl.DiagError,
-						Summary:  "Duplicate argument",
-						Detail: fmt.Sprintf(
-							"Argument %q was already set at %s",
-							name, existing.NameRange.String(),
-						),
-						Subject: thisAttrs[name].NameRange.Ptr(),
-					})
-				}
-				attrs[name] = attr
-			}
-		}
-	}
-
-	return attrs, diags
-}
-
-func (mb mergedBodies) MissingItemRange() hcl.Range {
-	if len(mb) == 0 {
-		// Nothing useful to return here, so we'll return some garbage.
-		return hcl.Range{
-			Filename: "<empty>",
-		}
-	}
-
-	// arbitrarily use the first body's missing item range
-	return mb[0].MissingItemRange()
-}
-
-func (mb mergedBodies) mergedContent(schema *hcl.BodySchema, partial bool) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
-	// We need to produce a new schema with none of the attributes marked as
-	// required, since _any one_ of our bodies can contribute an attribute value.
-	// We'll separately check that all required attributes are present at
-	// the end.
-	mergedSchema := &hcl.BodySchema{
-		Blocks: schema.Blocks,
-	}
-	for _, attrS := range schema.Attributes {
-		mergedAttrS := attrS
-		mergedAttrS.Required = false
-		mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
-	}
-
-	var mergedLeftovers []hcl.Body
-	content := &hcl.BodyContent{
-		Attributes: map[string]*hcl.Attribute{},
-	}
-
-	var diags hcl.Diagnostics
-	for _, body := range mb {
-		var thisContent *hcl.BodyContent
-		var thisLeftovers hcl.Body
-		var thisDiags hcl.Diagnostics
-
-		if partial {
-			thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
-		} else {
-			thisContent, thisDiags = body.Content(mergedSchema)
-		}
-
-		if thisLeftovers != nil {
-			mergedLeftovers = append(mergedLeftovers, thisLeftovers)
-		}
-		if len(thisDiags) != 0 {
-			diags = append(diags, thisDiags...)
-		}
-
-		if thisContent.Attributes != nil {
-			for name, attr := range thisContent.Attributes {
-				if existing := content.Attributes[name]; existing != nil {
-					diags = diags.Append(&hcl.Diagnostic{
-						Severity: hcl.DiagError,
-						Summary:  "Duplicate argument",
-						Detail: fmt.Sprintf(
-							"Argument %q was already set at %s",
-							name, existing.NameRange.String(),
-						),
-						Subject: thisContent.Attributes[name].NameRange.Ptr(),
-					})
-				}
-				content.Attributes[name] = attr
-			}
-		}
-
-		if len(thisContent.Blocks) != 0 {
-			content.Blocks = append(content.Blocks, thisContent.Blocks...)
-		}
-	}
-
-	// Finally, we check for required attributes.
-	for _, attrS := range schema.Attributes {
-		if !attrS.Required {
-			continue
-		}
-
-		if content.Attributes[attrS.Name] == nil {
-			// We don't have any context here to produce a good diagnostic,
-			// which is why we warn in the Content docstring to minimize the
-			// use of required attributes on merged bodies.
-			diags = diags.Append(&hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Missing required argument",
-				Detail: fmt.Sprintf(
-					"The argument %q is required, but was not set.",
-					attrS.Name,
-				),
-			})
-		}
-	}
-
-	leftoverBody := MergeBodies(mergedLeftovers)
-	return content, leftoverBody, diags
-}
```
```diff
@@ -9,7 +9,6 @@ import (
 	"github.com/hashicorp/go-cty-funcs/uuid"
 	"github.com/hashicorp/hcl/v2/ext/tryfunc"
 	"github.com/hashicorp/hcl/v2/ext/typeexpr"
-	"github.com/pkg/errors"
 	"github.com/zclconf/go-cty/cty"
 	"github.com/zclconf/go-cty/cty/function"
 	"github.com/zclconf/go-cty/cty/function/stdlib"
@@ -32,33 +31,32 @@ var stdlibFunctions = map[string]function.Function{
 	"cidrnetmask": cidr.NetmaskFunc,
 	"cidrsubnet": cidr.SubnetFunc,
 	"cidrsubnets": cidr.SubnetsFunc,
+	"csvdecode": stdlib.CSVDecodeFunc,
 	"coalesce": stdlib.CoalesceFunc,
 	"coalescelist": stdlib.CoalesceListFunc,
 	"compact": stdlib.CompactFunc,
 	"concat": stdlib.ConcatFunc,
 	"contains": stdlib.ContainsFunc,
 	"convert": typeexpr.ConvertFunc,
-	"csvdecode": stdlib.CSVDecodeFunc,
 	"distinct": stdlib.DistinctFunc,
 	"divide": stdlib.DivideFunc,
 	"element": stdlib.ElementFunc,
 	"equal": stdlib.EqualFunc,
 	"flatten": stdlib.FlattenFunc,
 	"floor": stdlib.FloorFunc,
-	"format": stdlib.FormatFunc,
 	"formatdate": stdlib.FormatDateFunc,
+	"format": stdlib.FormatFunc,
 	"formatlist": stdlib.FormatListFunc,
 	"greaterthan": stdlib.GreaterThanFunc,
 	"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
 	"hasindex": stdlib.HasIndexFunc,
 	"indent": stdlib.IndentFunc,
 	"index": stdlib.IndexFunc,
-	"indexof": indexOfFunc,
 	"int": stdlib.IntFunc,
-	"join": stdlib.JoinFunc,
 	"jsondecode": stdlib.JSONDecodeFunc,
 	"jsonencode": stdlib.JSONEncodeFunc,
 	"keys": stdlib.KeysFunc,
+	"join": stdlib.JoinFunc,
 	"length": stdlib.LengthFunc,
 	"lessthan": stdlib.LessThanFunc,
 	"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
@@ -72,16 +70,15 @@ var stdlibFunctions = map[string]function.Function{
 	"modulo": stdlib.ModuloFunc,
 	"multiply": stdlib.MultiplyFunc,
 	"negate": stdlib.NegateFunc,
-	"not": stdlib.NotFunc,
 	"notequal": stdlib.NotEqualFunc,
+	"not": stdlib.NotFunc,
 	"or": stdlib.OrFunc,
 	"parseint": stdlib.ParseIntFunc,
 	"pow": stdlib.PowFunc,
 	"range": stdlib.RangeFunc,
-	"regex_replace": stdlib.RegexReplaceFunc,
-	"regex": stdlib.RegexFunc,
 	"regexall": stdlib.RegexAllFunc,
-	"replace": stdlib.ReplaceFunc,
+	"regex": stdlib.RegexFunc,
+	"regex_replace": stdlib.RegexReplaceFunc,
 	"reverse": stdlib.ReverseFunc,
 	"reverselist": stdlib.ReverseListFunc,
 	"rsadecrypt": crypto.RsaDecryptFunc,
@@ -117,51 +114,6 @@ var stdlibFunctions = map[string]function.Function{
 	"zipmap": stdlib.ZipmapFunc,
 }
 
-// indexOfFunc constructs a function that finds the element index for a given
-// value in a list.
-var indexOfFunc = function.New(&function.Spec{
-	Params: []function.Parameter{
-		{
-			Name: "list",
-			Type: cty.DynamicPseudoType,
-		},
-		{
-			Name: "value",
-			Type: cty.DynamicPseudoType,
-		},
-	},
-	Type: function.StaticReturnType(cty.Number),
-	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
-		if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
-			return cty.NilVal, errors.New("argument must be a list or tuple")
-		}
-
-		if !args[0].IsKnown() {
-			return cty.UnknownVal(cty.Number), nil
-		}
-
-		if args[0].LengthInt() == 0 { // Easy path
-			return cty.NilVal, errors.New("cannot search an empty list")
-		}
-
-		for it := args[0].ElementIterator(); it.Next(); {
-			i, v := it.Element()
-			eq, err := stdlib.Equal(v, args[1])
-			if err != nil {
-				return cty.NilVal, err
-			}
-			if !eq.IsKnown() {
-				return cty.UnknownVal(cty.Number), nil
-			}
-			if eq.True() {
-				return i, nil
-			}
-		}
-		return cty.NilVal, errors.New("item not found")
-	},
-})
-
 // timestampFunc constructs a function that returns a string representation of the current date and time.
 //
 // This function was imported from terraform's datetime utilities.
@@ -172,11 +124,3 @@ var timestampFunc = function.New(&function.Spec{
 		return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
 	},
 })
-
-func Stdlib() map[string]function.Function {
-	funcs := make(map[string]function.Function, len(stdlibFunctions))
-	for k, v := range stdlibFunctions {
-		funcs[k] = v
-	}
-	return funcs
-}
```
```diff
@@ -1,49 +0,0 @@
-package hclparser
-
-import (
-	"testing"
-
-	"github.com/zclconf/go-cty/cty"
-)
-
-func TestIndexOf(t *testing.T) {
-	type testCase struct {
-		input   cty.Value
-		key     cty.Value
-		want    cty.Value
-		wantErr bool
-	}
-	tests := map[string]testCase{
-		"index 0": {
-			input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
-			key:   cty.StringVal("one"),
-			want:  cty.NumberIntVal(0),
-		},
-		"index 3": {
-			input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
-			key:   cty.StringVal("four"),
-			want:  cty.NumberIntVal(3),
-		},
-		"index -1": {
-			input:   cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
-			key:     cty.StringVal("3"),
-			wantErr: true,
-		},
-	}
-
-	for name, test := range tests {
-		name, test := name, test
-		t.Run(name, func(t *testing.T) {
-			got, err := indexOfFunc.Call([]cty.Value{test.input, test.key})
-			if err != nil {
-				if test.wantErr {
-					return
-				}
-				t.Fatalf("unexpected error: %s", err)
-			}
-			if !got.RawEquals(test.want) {
-				t.Errorf("wrong result\ngot:  %#v\nwant: %#v", got, test.want)
-			}
-		})
-	}
-}
```
```diff
@@ -4,18 +4,14 @@ import (
 	"archive/tar"
 	"bytes"
 	"context"
-	"os"
 	"strings"
 
 	"github.com/docker/buildx/builder"
-	controllerapi "github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/driver"
 	"github.com/docker/buildx/util/progress"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
-	"github.com/moby/buildkit/frontend/dockerui"
 	gwclient "github.com/moby/buildkit/frontend/gateway/client"
-	"github.com/moby/buildkit/session"
 	"github.com/pkg/errors"
 )
 
@@ -25,37 +21,10 @@ type Input struct {
 }
 
 func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
-	var sessions []session.Attachable
 	var filename string
-	st, ok := dockerui.DetectGitContext(url, false)
-	if ok {
-		if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
-			ID:    "default",
-			Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
-		}}); err == nil {
-			sessions = append(sessions, ssh)
-		}
-		var gitAuthSecrets []*controllerapi.Secret
-		if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
-			gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
-				ID:  llb.GitAuthTokenKey,
-				Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
-			})
-		}
-		if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
-			gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
-				ID:  llb.GitAuthHeaderKey,
-				Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
-			})
-		}
-		if len(gitAuthSecrets) > 0 {
-			if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
-				sessions = append(sessions, secrets)
-			}
-		}
-	} else {
-		st, filename, ok = dockerui.DetectHTTPContext(url)
+	st, ok := detectGitContext(url)
+	if !ok {
+		st, filename, ok = detectHTTPContext(url)
 		if !ok {
 			return nil, nil, errors.Errorf("not url context")
 		}
@@ -82,7 +51,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
 
 	ch, done := progress.NewChannel(pw)
 	defer func() { <-done }()
-	_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
+	_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
 		def, err := st.Marshal(ctx)
 		if err != nil {
 			return nil, err
@@ -114,6 +83,51 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
 	return files, inp, nil
 }
 
+func IsRemoteURL(url string) bool {
+	if _, _, ok := detectHTTPContext(url); ok {
+		return true
+	}
+	if _, ok := detectGitContext(url); ok {
+		return true
+	}
+	return false
+}
+
+func detectHTTPContext(url string) (*llb.State, string, bool) {
+	if httpPrefix.MatchString(url) {
+		httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
+		return &httpContext, "context", true
+	}
+	return nil, "", false
+}
+
+func detectGitContext(ref string) (*llb.State, bool) {
+	found := false
+	if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
+		found = true
+	}
+	for _, prefix := range []string{"git://", "github.com/", "git@"} {
+		if strings.HasPrefix(ref, prefix) {
+			found = true
+			break
+		}
+	}
+	if !found {
+		return nil, false
+	}
+
+	parts := strings.SplitN(ref, "#", 2)
+	branch := ""
+	if len(parts) > 1 {
+		branch = parts[1]
+	}
+	gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
+
+	st := llb.Git(parts[0], branch, gitOpts...)
+	return &st, true
+}
+
 func isArchive(header []byte) bool {
 	for _, m := range [][]byte{
 		{0x42, 0x5A, 0x68}, // bzip2
```
|
|||||||
1373
build/build.go
1373
build/build.go
File diff suppressed because it is too large
Load Diff
62	build/dial.go
@@ -1,62 +0,0 @@
-package build
-
-import (
-	"context"
-	stderrors "errors"
-	"net"
-
-	"github.com/containerd/containerd/platforms"
-	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/progress"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
-	"github.com/pkg/errors"
-)
-
-func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
-	nodes, err := filterAvailableNodes(nodes)
-	if err != nil {
-		return nil, err
-	}
-
-	if len(nodes) == 0 {
-		return nil, errors.New("no nodes available")
-	}
-
-	var pls []v1.Platform
-	if platform != nil {
-		pls = []v1.Platform{*platform}
-	}
-
-	opts := map[string]Options{"default": {Platforms: pls}}
-	resolved, err := resolveDrivers(ctx, nodes, opts, pw)
-	if err != nil {
-		return nil, err
-	}
-
-	var dialError error
-	for _, ls := range resolved {
-		for _, rn := range ls {
-			if platform != nil {
-				p := *platform
-				var found bool
-				for _, pp := range rn.platforms {
-					if platforms.Only(p).Match(pp) {
-						found = true
-						break
-					}
-				}
-				if !found {
-					continue
-				}
-			}
-
-			conn, err := nodes[rn.driverIndex].Driver.Dial(ctx)
-			if err == nil {
-				return conn, nil
-			}
-			dialError = stderrors.Join(err)
-		}
-	}
-
-	return nil, errors.Wrap(dialError, "no nodes available")
-}
352	build/driver.go
@@ -1,352 +0,0 @@
-package build
-
-import (
-	"context"
-	"fmt"
-	"sync"
-
-	"github.com/containerd/containerd/platforms"
-	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/driver"
-	"github.com/docker/buildx/util/progress"
-	"github.com/moby/buildkit/client"
-	gateway "github.com/moby/buildkit/frontend/gateway/client"
-	"github.com/moby/buildkit/util/flightcontrol"
-	"github.com/moby/buildkit/util/tracing"
-	specs "github.com/opencontainers/image-spec/specs-go/v1"
-	"github.com/pkg/errors"
-	"go.opentelemetry.io/otel/trace"
-	"golang.org/x/sync/errgroup"
-)
-
-type resolvedNode struct {
-	resolver    *nodeResolver
-	driverIndex int
-	platforms   []specs.Platform
-}
-
-func (dp resolvedNode) Node() builder.Node {
-	return dp.resolver.nodes[dp.driverIndex]
-}
-
-func (dp resolvedNode) Client(ctx context.Context) (*client.Client, error) {
-	clients, err := dp.resolver.boot(ctx, []int{dp.driverIndex}, nil)
-	if err != nil {
-		return nil, err
-	}
-	return clients[0], nil
-}
-
-func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error) {
-	opts, err := dp.resolver.opts(ctx, []int{dp.driverIndex}, nil)
-	if err != nil {
-		return gateway.BuildOpts{}, err
-	}
-	return opts[0], nil
-}
-
-type matchMaker func(specs.Platform) platforms.MatchComparer
-
-type cachedGroup[T any] struct {
-	g       flightcontrol.Group[T]
-	cache   map[int]T
-	cacheMu sync.Mutex
-}
-
-func newCachedGroup[T any]() cachedGroup[T] {
-	return cachedGroup[T]{
-		cache: map[int]T{},
-	}
-}
-
-type nodeResolver struct {
-	nodes     []builder.Node
-	clients   cachedGroup[*client.Client]
-	buildOpts cachedGroup[gateway.BuildOpts]
-}
-
-func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
-	driverRes := newDriverResolver(nodes)
-	drivers, err := driverRes.Resolve(ctx, opt, pw)
-	if err != nil {
-		return nil, err
-	}
-	return drivers, err
-}
-
-func newDriverResolver(nodes []builder.Node) *nodeResolver {
-	r := &nodeResolver{
-		nodes:     nodes,
-		clients:   newCachedGroup[*client.Client](),
-		buildOpts: newCachedGroup[gateway.BuildOpts](),
-	}
-	return r
-}
-
-func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
-	if len(r.nodes) == 0 {
-		return nil, nil
-	}
-
-	nodes := map[string][]*resolvedNode{}
-	for k, opt := range opt {
-		node, perfect, err := r.resolve(ctx, opt.Platforms, pw, platforms.OnlyStrict, nil)
-		if err != nil {
-			return nil, err
-		}
-		if !perfect {
-			break
-		}
-		nodes[k] = node
-	}
-	if len(nodes) != len(opt) {
-		// if we didn't get a perfect match, we need to boot all drivers
-		allIndexes := make([]int, len(r.nodes))
-		for i := range allIndexes {
-			allIndexes[i] = i
-		}
-
-		clients, err := r.boot(ctx, allIndexes, pw)
-		if err != nil {
-			return nil, err
-		}
-		eg, egCtx := errgroup.WithContext(ctx)
-		workers := make([][]specs.Platform, len(clients))
-		for i, c := range clients {
-			i, c := i, c
-			if c == nil {
-				continue
-			}
-			eg.Go(func() error {
-				ww, err := c.ListWorkers(egCtx)
-				if err != nil {
-					return errors.Wrap(err, "listing workers")
-				}
-
-				ps := make(map[string]specs.Platform, len(ww))
-				for _, w := range ww {
-					for _, p := range w.Platforms {
-						pk := platforms.Format(platforms.Normalize(p))
-						ps[pk] = p
-					}
-				}
-				for _, p := range ps {
-					workers[i] = append(workers[i], p)
-				}
-				return nil
-			})
-		}
-		if err := eg.Wait(); err != nil {
-			return nil, err
-		}
-
-		// then we can attempt to match against all the available platforms
-		// (this time we don't care about imperfect matches)
-		nodes = map[string][]*resolvedNode{}
-		for k, opt := range opt {
-			node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
-				return workers[idx]
-			})
-			if err != nil {
-				return nil, err
-			}
-			nodes[k] = node
-		}
-	}
-
-	idxs := make([]int, 0, len(r.nodes))
-	for _, nodes := range nodes {
-		for _, node := range nodes {
-			idxs = append(idxs, node.driverIndex)
-		}
-	}
-
-	// preload capabilities
-	span, ctx := tracing.StartSpan(ctx, "load buildkit capabilities", trace.WithSpanKind(trace.SpanKindInternal))
-	_, err := r.opts(ctx, idxs, pw)
-	tracing.FinishWithError(span, err)
-	if err != nil {
-		return nil, err
-	}
-
-	return nodes, nil
-}
-
-func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
-	if len(r.nodes) == 0 {
-		return nil, true, nil
-	}
-
-	perfect := true
-	nodeIdxs := make([]int, 0)
-	for _, p := range ps {
-		idx := r.get(p, matcher, additional)
-		if idx == -1 {
-			idx = 0
-			perfect = false
-		}
-		nodeIdxs = append(nodeIdxs, idx)
-	}
-
-	var nodes []*resolvedNode
-	if len(nodeIdxs) == 0 {
-		nodes = append(nodes, &resolvedNode{
-			resolver:    r,
-			driverIndex: 0,
-		})
-		nodeIdxs = append(nodeIdxs, 0)
-	} else {
-		for i, idx := range nodeIdxs {
-			node := &resolvedNode{
-				resolver:    r,
-				driverIndex: idx,
-			}
-			if len(ps) > 0 {
-				node.platforms = []specs.Platform{ps[i]}
-			}
-			nodes = append(nodes, node)
-		}
-	}
-
-	nodes = recombineNodes(nodes)
-	if _, err := r.boot(ctx, nodeIdxs, pw); err != nil {
-		return nil, false, err
-	}
-	return nodes, perfect, nil
-}
-
-func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
-	best := -1
-	bestPlatform := specs.Platform{}
-	for i, node := range r.nodes {
-		platforms := node.Platforms
-		if additionalPlatforms != nil {
-			platforms = append([]specs.Platform{}, platforms...)
-			platforms = append(platforms, additionalPlatforms(i, node)...)
-		}
-		for _, p2 := range platforms {
-			m := matcher(p2)
-			if !m.Match(p) {
-				continue
-			}
-
-			if best == -1 {
-				best = i
-				bestPlatform = p2
-				continue
-			}
-			if matcher(p2).Less(p, bestPlatform) {
-				best = i
-				bestPlatform = p2
-			}
-		}
-	}
-	return best
-}
-
-func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer) ([]*client.Client, error) {
-	clients := make([]*client.Client, len(idxs))
-
-	baseCtx := ctx
-	eg, ctx := errgroup.WithContext(ctx)
-
-	for i, idx := range idxs {
-		i, idx := i, idx
-		eg.Go(func() error {
-			c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
-				if r.nodes[idx].Driver == nil {
-					return nil, nil
-				}
-				r.clients.cacheMu.Lock()
-				c, ok := r.clients.cache[idx]
-				r.clients.cacheMu.Unlock()
-				if ok {
-					return c, nil
-				}
-				c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
-				if err != nil {
-					return nil, err
-				}
-				r.clients.cacheMu.Lock()
-				r.clients.cache[idx] = c
-				r.clients.cacheMu.Unlock()
-				return c, nil
-			})
-			if err != nil {
-				return err
-			}
-			clients[i] = c
-			return nil
-		})
-	}
-	if err := eg.Wait(); err != nil {
-		return nil, err
-	}
-
-	return clients, nil
-}
-
-func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer) ([]gateway.BuildOpts, error) {
-	clients, err := r.boot(ctx, idxs, pw)
-	if err != nil {
-		return nil, err
-	}
-
-	bopts := make([]gateway.BuildOpts, len(clients))
-	eg, ctx := errgroup.WithContext(ctx)
-	for i, idxs := range idxs {
-		i, idx := i, idxs
-		c := clients[i]
-		if c == nil {
-			continue
-		}
-		eg.Go(func() error {
-			opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
-				r.buildOpts.cacheMu.Lock()
-				opt, ok := r.buildOpts.cache[idx]
-				r.buildOpts.cacheMu.Unlock()
-				if ok {
-					return opt, nil
-				}
-				_, err := c.Build(ctx, client.SolveOpt{
-					Internal: true,
-				}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
-					opt = c.BuildOpts()
-					return nil, nil
-				}, nil)
-				if err != nil {
-					return gateway.BuildOpts{}, err
-				}
-				r.buildOpts.cacheMu.Lock()
-				r.buildOpts.cache[idx] = opt
-				r.buildOpts.cacheMu.Unlock()
-				return opt, err
-			})
-			if err != nil {
-				return err
-			}
-			bopts[i] = opt
-			return nil
-		})
-	}
-	if err := eg.Wait(); err != nil {
-		return nil, err
-	}
-	return bopts, nil
-}
-
-// recombineDriverPairs recombines resolved nodes that are on the same driver
-// back together into a single node.
-func recombineNodes(nodes []*resolvedNode) []*resolvedNode {
-	result := make([]*resolvedNode, 0, len(nodes))
-	lookup := map[int]int{}
-	for _, node := range nodes {
-		if idx, ok := lookup[node.driverIndex]; ok {
-			result[idx].platforms = append(result[idx].platforms, node.platforms...)
-		} else {
-			lookup[node.driverIndex] = len(result)
-			result = append(result, node)
-		}
-	}
-	return result
-}
315	build/driver_test.go
@@ -1,315 +0,0 @@
-package build
-
-import (
-	"context"
-	"sort"
-	"testing"
-
-	"github.com/containerd/containerd/platforms"
-	"github.com/docker/buildx/builder"
-	specs "github.com/opencontainers/image-spec/specs-go/v1"
-	"github.com/stretchr/testify/require"
-)
-
-func TestFindDriverSanity(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.DefaultSpec()},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-	require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
-}
-
-func TestFindDriverEmpty(t *testing.T) {
-	r := makeTestResolver(nil)
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Nil(t, res)
-}
-
-func TestFindDriverWeirdName(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/foobar")},
-	})
-
-	// find first platform
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 1, res[0].driverIndex)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestFindDriverUnknown(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.False(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-}
-
-func TestSelectNodeSinglePlatform(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/riscv64")},
-	})
-
-	// find first platform
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-
-	// find second platform
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 1, res[0].driverIndex)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-
-	// find an unknown platform, should match the first driver
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.False(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-}
-
-func TestSelectNodeMultiPlatform(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
-		"bbb": {platforms.MustParse("linux/riscv64")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 0, res[0].driverIndex)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, 1, res[0].driverIndex)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestSelectNodeNonStrict(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/arm64")},
-	})
-
-	// arm64 should match itself
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-
-	// arm64 may support arm/v8
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-
-	// arm64 may support arm/v7
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestSelectNodeNonStrictARM(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/arm64")},
-		"ccc": {platforms.MustParse("linux/arm/v8")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "ccc", res[0].Node().Builder)
-
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "ccc", res[0].Node().Builder)
-}
-
-func TestSelectNodeNonStrictLower(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/arm/v7")},
-	})
-
-	// v8 can't be built on v7 (so we should select the default)...
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.False(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-
-	// ...but v6 can be built on v8
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestSelectNodePreferStart(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/riscv64")},
-		"ccc": {platforms.MustParse("linux/riscv64")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestSelectNodePreferExact(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/arm/v8")},
-		"bbb": {platforms.MustParse("linux/arm/v7")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-}
-
-func TestSelectNodeNoPlatform(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/foobar")},
-		"bbb": {platforms.DefaultSpec()},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-	require.Empty(t, res[0].platforms)
-}
-
-func TestSelectNodeAdditionalPlatforms(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/arm/v8")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "bbb", res[0].Node().Builder)
-
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
-		if n.Builder == "aaa" {
-			return []specs.Platform{platforms.MustParse("linux/arm/v7")}
-		}
-		return nil
-	})
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-}
-
-func TestSplitNodeMultiPlatform(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
-		"bbb": {platforms.MustParse("linux/riscv64")},
-	})
-
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
-		platforms.MustParse("linux/amd64"),
-		platforms.MustParse("linux/arm64"),
-	}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 1)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-
-	res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
-		platforms.MustParse("linux/amd64"),
-		platforms.MustParse("linux/riscv64"),
-	}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 2)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-	require.Equal(t, "bbb", res[1].Node().Builder)
-}
-
-func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
-	r := makeTestResolver(map[string][]specs.Platform{
-		"aaa": {platforms.MustParse("linux/amd64")},
-		"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
-	})
-
-	// the "best" choice would be the node with both platforms, but we're using
-	// a naive algorithm that doesn't try to unify the platforms
-	res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
-		platforms.MustParse("linux/amd64"),
-		platforms.MustParse("linux/riscv64"),
-	}, nil, platforms.Only, nil)
-	require.NoError(t, err)
-	require.True(t, perfect)
-	require.Len(t, res, 2)
-	require.Equal(t, "aaa", res[0].Node().Builder)
-	require.Equal(t, "bbb", res[1].Node().Builder)
-}
-
-func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
-	var ns []builder.Node
-	for name, platforms := range nodes {
-		ns = append(ns, builder.Node{
-			Builder:   name,
-			Platforms: platforms,
-		})
-	}
-	sort.Slice(ns, func(i, j int) bool {
-		return ns[i].Builder < ns[j].Builder
-	})
-	return newDriverResolver(ns)
-}
72	build/git.go
@@ -9,18 +9,16 @@ import (
 	"strings"
 
 	"github.com/docker/buildx/util/gitutil"
-	"github.com/docker/buildx/util/osutil"
-	"github.com/moby/buildkit/client"
 	specs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 )
 
 const DockerfileLabel = "com.docker.image.source.entrypoint"
 
-func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(key, dir string, so *client.SolveOpt), error) {
-	res := make(map[string]string)
+func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
+	res = make(map[string]string)
 	if contextPath == "" {
-		return nil, nil, nil
+		return
 	}
 
 	setGitLabels := false
@@ -39,7 +37,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
 	}
 
 	if !setGitLabels && !setGitInfo {
-		return nil, nil, nil
+		return
 	}
 
 	// figure out in which directory the git command needs to run in
@@ -47,32 +45,27 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
 	if filepath.IsAbs(contextPath) {
 		wd = contextPath
 	} else {
-		wd, _ = filepath.Abs(filepath.Join(osutil.GetWd(), contextPath))
+		cwd, _ := os.Getwd()
+		wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
 	}
-	wd = osutil.SanitizePath(wd)
 
 	gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
 	if err != nil {
-		if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
-			return res, nil, errors.Wrap(err, "git was not found in the system")
+		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
+			return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
 		}
-		return nil, nil, nil
+		return
 	}
 
 	if !gitc.IsInsideWorkTree() {
 		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
-			return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
+			return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
 		}
-		return nil, nil, nil
-	}
-
-	root, err := gitc.RootDir()
-	if err != nil {
-		return res, nil, errors.Wrap(err, "failed to get git root dir")
+		return res, nil
 	}
 
 	if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
-		return res, nil, errors.Wrap(err, "failed to get git commit")
+		return res, errors.Wrapf(err, "buildx: failed to get git commit")
 	} else if sha != "" {
 		checkDirty := false
 		if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -100,32 +93,23 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
 		}
 	}
 
-	if setGitLabels && root != "" {
-		if dockerfilePath == "" {
-			dockerfilePath = filepath.Join(wd, "Dockerfile")
-		}
-		if !filepath.IsAbs(dockerfilePath) {
-			dockerfilePath = filepath.Join(osutil.GetWd(), dockerfilePath)
-		}
-		if r, err := filepath.Rel(root, dockerfilePath); err == nil && !strings.HasPrefix(r, "..") {
-			res["label:"+DockerfileLabel] = r
+	if setGitLabels {
+		if root, err := gitc.RootDir(); err != nil {
+			return res, errors.Wrapf(err, "buildx: failed to get git root dir")
+		} else if root != "" {
+			if dockerfilePath == "" {
+				dockerfilePath = filepath.Join(wd, "Dockerfile")
+			}
+			if !filepath.IsAbs(dockerfilePath) {
+				cwd, _ := os.Getwd()
+				dockerfilePath = filepath.Join(cwd, dockerfilePath)
+			}
+			dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
+			if !strings.HasPrefix(dockerfilePath, "..") {
+				res["label:"+DockerfileLabel] = dockerfilePath
+			}
 		}
 	}
 
-	return res, func(key, dir string, so *client.SolveOpt) {
-		if !setGitInfo || root == "" {
-			return
-		}
-		dir, err := filepath.Abs(dir)
-		if err != nil {
-			return
+	return
}
|
|
||||||
if lp, err := osutil.GetLongPathName(dir); err == nil {
|
|
||||||
dir = lp
|
|
||||||
}
|
|
||||||
dir = osutil.SanitizePath(dir)
|
|
||||||
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
|
|
||||||
so.FrontendAttrs["vcs:localdir:"+key] = r
|
|
||||||
}
|
|
||||||
}, nil
|
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -9,7 +9,6 @@ import (
|
|||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
"github.com/docker/buildx/util/gitutil"
|
"github.com/docker/buildx/util/gitutil"
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||||
"github.com/stretchr/testify/assert"
|
"github.com/stretchr/testify/assert"
|
||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
@@ -31,7 +30,7 @@ func setupTest(tb testing.TB) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func TestGetGitAttributesNotGitRepo(t *testing.T) {
|
func TestGetGitAttributesNotGitRepo(t *testing.T) {
|
||||||
_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
|
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
|
||||||
assert.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -39,14 +38,14 @@ func TestGetGitAttributesBadGitRepo(t *testing.T) {
|
|||||||
tmp := t.TempDir()
|
tmp := t.TempDir()
|
||||||
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
|
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
|
||||||
|
|
||||||
_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
|
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
|
||||||
assert.Error(t, err)
|
assert.Error(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestGetGitAttributesNoContext(t *testing.T) {
|
func TestGetGitAttributesNoContext(t *testing.T) {
|
||||||
setupTest(t)
|
setupTest(t)
|
||||||
|
|
||||||
gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
|
gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
|
||||||
assert.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
assert.Empty(t, gitattrs)
|
assert.Empty(t, gitattrs)
|
||||||
}
|
}
|
||||||
@@ -115,7 +114,7 @@ func TestGetGitAttributes(t *testing.T) {
|
|||||||
if tt.envGitInfo != "" {
|
if tt.envGitInfo != "" {
|
||||||
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
|
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
|
||||||
}
|
}
|
||||||
gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
|
gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
for _, e := range tt.expected {
|
for _, e := range tt.expected {
|
||||||
assert.Contains(t, gitattrs, e)
|
assert.Contains(t, gitattrs, e)
|
||||||
@@ -140,7 +139,7 @@ func TestGetGitAttributesDirty(t *testing.T) {
|
|||||||
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
|
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
|
||||||
|
|
||||||
t.Setenv("BUILDX_GIT_LABELS", "true")
|
t.Setenv("BUILDX_GIT_LABELS", "true")
|
||||||
gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
|
gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
|
||||||
assert.Equal(t, 5, len(gitattrs))
|
assert.Equal(t, 5, len(gitattrs))
|
||||||
|
|
||||||
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
|
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
|
||||||
@@ -155,55 +154,3 @@ func TestGetGitAttributesDirty(t *testing.T) {
|
|||||||
assert.Contains(t, gitattrs, "vcs:revision")
|
assert.Contains(t, gitattrs, "vcs:revision")
|
||||||
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
|
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestLocalDirs(t *testing.T) {
|
|
||||||
setupTest(t)
|
|
||||||
|
|
||||||
so := &client.SolveOpt{
|
|
||||||
FrontendAttrs: map[string]string{},
|
|
||||||
}
|
|
||||||
|
|
||||||
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.NotNil(t, addVCSLocalDir)
|
|
||||||
|
|
||||||
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
|
|
||||||
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
|
|
||||||
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
|
|
||||||
|
|
||||||
require.NoError(t, setLocalMount("dockerfile", ".", so, addVCSLocalDir))
|
|
||||||
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
|
|
||||||
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestLocalDirsSub(t *testing.T) {
|
|
||||||
gitutil.Mktmp(t)
|
|
||||||
|
|
||||||
c, err := gitutil.New()
|
|
||||||
require.NoError(t, err)
|
|
||||||
gitutil.GitInit(c, t)
|
|
||||||
|
|
||||||
df := []byte("FROM alpine:latest\n")
|
|
||||||
assert.NoError(t, os.MkdirAll("app", 0755))
|
|
||||||
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
|
|
||||||
|
|
||||||
gitutil.GitAdd(c, t, "app/Dockerfile")
|
|
||||||
gitutil.GitCommit(c, t, "initial commit")
|
|
||||||
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
|
|
||||||
|
|
||||||
so := &client.SolveOpt{
|
|
||||||
FrontendAttrs: map[string]string{},
|
|
||||||
}
|
|
||||||
|
|
||||||
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.NotNil(t, addVCSLocalDir)
|
|
||||||
|
|
||||||
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
|
|
||||||
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
|
|
||||||
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
|
|
||||||
|
|
||||||
require.NoError(t, setLocalMount("dockerfile", "app", so, addVCSLocalDir))
|
|
||||||
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
|
|
||||||
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
|
|
||||||
}
|
|
||||||
|
|||||||
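The v0.10.4 side above derives the Dockerfile label by making the path absolute against the working directory, relativizing it against the git root, and keeping it only when it does not escape the repository. A self-contained sketch of that path logic (the helper name `dockerfileLabelValue` and the single `cwd` argument are illustrative stand-ins, not buildx API):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// dockerfileLabelValue mirrors the label logic in the diff: default the
// Dockerfile path, make it absolute against cwd, relativize it against the
// git root, and reject it when it escapes the repository ("../...").
func dockerfileLabelValue(root, cwd, dockerfilePath string) (string, bool) {
	if dockerfilePath == "" {
		dockerfilePath = filepath.Join(cwd, "Dockerfile")
	}
	if !filepath.IsAbs(dockerfilePath) {
		dockerfilePath = filepath.Join(cwd, dockerfilePath)
	}
	rel, err := filepath.Rel(root, dockerfilePath)
	if err != nil || strings.HasPrefix(rel, "..") {
		return "", false
	}
	return rel, true
}

func main() {
	v, ok := dockerfileLabelValue("/repo", "/repo/app", "Dockerfile")
	fmt.Println(v, ok) // app/Dockerfile true
	_, ok = dockerfileLabelValue("/repo", "/tmp", "Dockerfile")
	fmt.Println(ok) // false: /tmp/Dockerfile is outside /repo
}
```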
138 build/invoke.go

```diff
@@ -1,138 +0,0 @@
-package build
-
-import (
-	"context"
-	_ "crypto/sha256" // ensure digests can be computed
-	"io"
-	"sync"
-	"sync/atomic"
-	"syscall"
-
-	controllerapi "github.com/docker/buildx/controller/pb"
-	gateway "github.com/moby/buildkit/frontend/gateway/client"
-	"github.com/pkg/errors"
-	"github.com/sirupsen/logrus"
-)
-
-type Container struct {
-	cancelOnce      sync.Once
-	containerCancel func()
-	isUnavailable   atomic.Bool
-	initStarted     atomic.Bool
-	container       gateway.Container
-	releaseCh       chan struct{}
-	resultCtx       *ResultHandle
-}
-
-func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
-	mainCtx := ctx
-
-	ctrCh := make(chan *Container)
-	errCh := make(chan error)
-	go func() {
-		err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
-			ctx, cancel := context.WithCancel(ctx)
-			go func() {
-				<-mainCtx.Done()
-				cancel()
-			}()
-
-			containerCfg, err := resultCtx.getContainerConfig(cfg)
-			if err != nil {
-				return nil, err
-			}
-			containerCtx, containerCancel := context.WithCancel(ctx)
-			defer containerCancel()
-			bkContainer, err := c.NewContainer(containerCtx, containerCfg)
-			if err != nil {
-				return nil, err
-			}
-			releaseCh := make(chan struct{})
-			container := &Container{
-				containerCancel: containerCancel,
-				container:       bkContainer,
-				releaseCh:       releaseCh,
-				resultCtx:       resultCtx,
-			}
-			doneCh := make(chan struct{})
-			defer close(doneCh)
-			resultCtx.registerCleanup(func() {
-				container.Cancel()
-				<-doneCh
-			})
-			ctrCh <- container
-			<-container.releaseCh
-
-			return nil, bkContainer.Release(ctx)
-		})
-		if err != nil {
-			errCh <- err
-		}
-	}()
-	select {
-	case ctr := <-ctrCh:
-		return ctr, nil
-	case err := <-errCh:
-		return nil, err
-	case <-mainCtx.Done():
-		return nil, mainCtx.Err()
-	}
-}
-
-func (c *Container) Cancel() {
-	c.markUnavailable()
-	c.cancelOnce.Do(func() {
-		if c.containerCancel != nil {
-			c.containerCancel()
-		}
-		close(c.releaseCh)
-	})
-}
-
-func (c *Container) IsUnavailable() bool {
-	return c.isUnavailable.Load()
-}
-
-func (c *Container) markUnavailable() {
-	c.isUnavailable.Store(true)
-}
-
-func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
-	if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
-		defer func() {
-			// container can't be used after init exits
-			c.markUnavailable()
-		}()
-	}
-	err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
-	if err != nil {
-		// Container becomes unavailable if one of the processes fails in it.
-		c.markUnavailable()
-	}
-	return err
-}
-
-func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
-	processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
-	if err != nil {
-		return err
-	}
-	proc, err := ctr.Start(ctx, processCfg)
-	if err != nil {
-		return errors.Errorf("failed to start container: %v", err)
-	}
-
-	doneCh := make(chan struct{})
-	defer close(doneCh)
-	go func() {
-		select {
-		case <-ctx.Done():
-			if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
-				logrus.Warnf("failed to kill process: %v", err)
-			}
-		case <-doneCh:
-		}
-	}()

-	return proc.Wait()
-}
```
```diff
@@ -1,43 +0,0 @@
-package build
-
-import (
-	"path/filepath"
-
-	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/localstate"
-	"github.com/moby/buildkit/client"
-)
-
-func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
-	var err error
-	if so.Ref == "" {
-		return nil
-	}
-	lp := opts.Inputs.ContextPath
-	dp := opts.Inputs.DockerfilePath
-	if lp != "" || dp != "" {
-		if lp != "" {
-			lp, err = filepath.Abs(lp)
-			if err != nil {
-				return err
-			}
-		}
-		if dp != "" {
-			dp, err = filepath.Abs(dp)
-			if err != nil {
-				return err
-			}
-		}
-		l, err := localstate.New(configDir)
-		if err != nil {
-			return err
-		}
-		return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
-			Target:         target,
-			LocalPath:      lp,
-			DockerfilePath: dp,
-			GroupRef:       opts.GroupRef,
-		})
-	}
-	return nil
-}
```
638 build/opt.go

```diff
@@ -1,638 +0,0 @@
-package build
-
-import (
-	"bufio"
-	"context"
-	"io"
-	"os"
-	"path/filepath"
-	"strconv"
-	"strings"
-	"syscall"
-
-	"github.com/containerd/containerd/content"
-	"github.com/containerd/containerd/content/local"
-	"github.com/containerd/containerd/platforms"
-	"github.com/distribution/reference"
-	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/driver"
-	"github.com/docker/buildx/util/confutil"
-	"github.com/docker/buildx/util/dockerutil"
-	"github.com/docker/buildx/util/osutil"
-	"github.com/docker/buildx/util/progress"
-	"github.com/moby/buildkit/client"
-	"github.com/moby/buildkit/client/llb"
-	"github.com/moby/buildkit/client/ociindex"
-	gateway "github.com/moby/buildkit/frontend/gateway/client"
-	"github.com/moby/buildkit/identity"
-	"github.com/moby/buildkit/session/upload/uploadprovider"
-	"github.com/moby/buildkit/solver/pb"
-	"github.com/moby/buildkit/util/apicaps"
-	"github.com/moby/buildkit/util/entitlements"
-	"github.com/opencontainers/go-digest"
-	"github.com/pkg/errors"
-	"github.com/tonistiigi/fsutil"
-)
-
-func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Options, bopts gateway.BuildOpts, configDir string, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
-	nodeDriver := node.Driver
-	defers := make([]func(), 0, 2)
-	releaseF := func() {
-		for _, f := range defers {
-			f()
-		}
-	}
-
-	defer func() {
-		if err != nil {
-			releaseF()
-		}
-	}()
-
-	// inline cache from build arg
-	if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
-		if v, _ := strconv.ParseBool(v); v {
-			opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
-				Type:  "inline",
-				Attrs: map[string]string{},
-			})
-		}
-	}
-
-	for _, e := range opt.CacheTo {
-		if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
-			return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
-		}
-	}
-
-	cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
-	for _, e := range opt.CacheTo {
-		if e.Type == "gha" {
-			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
-				continue
-			}
-		} else if e.Type == "s3" {
-			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
-				continue
-			}
-		}
-		cacheTo = append(cacheTo, e)
-	}
-
-	cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
-	for _, e := range opt.CacheFrom {
-		if e.Type == "gha" {
-			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
-				continue
-			}
-		} else if e.Type == "s3" {
-			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
-				continue
-			}
-		}
-		cacheFrom = append(cacheFrom, e)
-	}
-
-	so := client.SolveOpt{
-		Ref:                 opt.Ref,
-		Frontend:            "dockerfile.v0",
-		FrontendAttrs:       map[string]string{},
-		LocalMounts:         map[string]fsutil.FS{},
-		CacheExports:        cacheTo,
-		CacheImports:        cacheFrom,
-		AllowedEntitlements: opt.Allow,
-		SourcePolicy:        opt.SourcePolicy,
-	}
-
-	if so.Ref == "" {
-		so.Ref = identity.NewID()
-	}
-
-	if opt.CgroupParent != "" {
-		so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
-	}
-
-	if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
-		if v, _ := strconv.ParseBool(v); v {
-			so.FrontendAttrs["multi-platform"] = "true"
-		}
-	}
-
-	if multiDriver {
-		// force creation of manifest list
-		so.FrontendAttrs["multi-platform"] = "true"
-	}
-
-	attests := make(map[string]string)
-	for k, v := range opt.Attests {
-		if v != nil {
-			attests[k] = *v
-		}
-	}
-
-	supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
-	if len(attests) > 0 {
-		if !supportAttestations {
-			if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
-				return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
-			}
-			return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
-		}
-		for k, v := range attests {
-			so.FrontendAttrs["attest:"+k] = v
-		}
-	}
-
-	if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
-		const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
-		var noProv bool
-		if v, ok := os.LookupEnv(noAttestEnv); ok {
-			noProv, err = strconv.ParseBool(v)
-			if err != nil {
-				return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
-			}
-		}
-		if !noProv {
-			so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
-		}
-	}
-
-	switch len(opt.Exports) {
-	case 1:
-		// valid
-	case 0:
-		if !noDefaultLoad() && opt.PrintFunc == nil {
-			if nodeDriver.IsMobyDriver() {
-				// backwards compat for docker driver only:
-				// this ensures the build results in a docker image.
-				opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
-			} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
-				opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
-			}
-		}
-	default:
-		if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
-			return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
-		}
-	}
-
-	// fill in image exporter names from tags
-	if len(opt.Tags) > 0 {
-		tags := make([]string, len(opt.Tags))
-		for i, tag := range opt.Tags {
-			ref, err := reference.Parse(tag)
-			if err != nil {
-				return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
-			}
-			tags[i] = ref.String()
-		}
-		for i, e := range opt.Exports {
-			switch e.Type {
-			case "image", "oci", "docker":
-				opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
-			}
-		}
-	} else {
-		for _, e := range opt.Exports {
-			if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
-				if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
-					return nil, nil, errors.Errorf("tag is needed when pushing to registry")
-				}
-			}
-		}
-	}
-
-	// cacheonly is a fake exporter to opt out of default behaviors
-	exports := make([]client.ExportEntry, 0, len(opt.Exports))
-	for _, e := range opt.Exports {
-		if e.Type != "cacheonly" {
-			exports = append(exports, e)
-		}
-	}
-	opt.Exports = exports
-
-	// set up exporters
-	for i, e := range opt.Exports {
-		if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
-			return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
-		}
-		if e.Type == "docker" {
-			features := docker.Features(ctx, e.Attrs["context"])
-			if features[dockerutil.OCIImporter] && e.Output == nil {
-				// rely on oci importer if available (which supports
-				// multi-platform images), otherwise fall back to docker
-				opt.Exports[i].Type = "oci"
-			} else if len(opt.Platforms) > 1 || len(attests) > 0 {
-				if e.Output != nil {
-					return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
-				}
-				return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
-			}
-			if e.Output == nil {
-				if nodeDriver.IsMobyDriver() {
-					e.Type = "image"
-				} else {
-					w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
-					if err != nil {
-						return nil, nil, err
-					}
-					defers = append(defers, cancel)
-					opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
-						return w, nil
-					}
-				}
-			} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
-				return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
-			}
-		}
-		if e.Type == "image" && nodeDriver.IsMobyDriver() {
-			opt.Exports[i].Type = "moby"
-			if e.Attrs["push"] != "" {
-				if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
-					if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
-						return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
-					}
-				}
-			}
-		}
-		if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
-			// inline buildinfo attrs from build arg
-			if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
-				e.Attrs["buildinfo-attrs"] = v
-			}
-		}
-	}
-
-	so.Exports = opt.Exports
-	so.Session = opt.Session
-
-	releaseLoad, err := loadInputs(ctx, nodeDriver, opt.Inputs, addVCSLocalDir, pw, &so)
-	if err != nil {
-		return nil, nil, err
-	}
-	defers = append(defers, releaseLoad)
-
-	if sharedKey := so.LocalDirs["context"]; sharedKey != "" {
-		if p, err := filepath.Abs(sharedKey); err == nil {
-			sharedKey = filepath.Base(p)
-		}
-		so.SharedKey = sharedKey + ":" + confutil.TryNodeIdentifier(configDir)
-	}
-
-	if opt.Pull {
-		so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
-	} else if nodeDriver.IsMobyDriver() {
-		// moby driver always resolves local images by default
-		so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
-	}
-	if opt.Target != "" {
-		so.FrontendAttrs["target"] = opt.Target
-	}
-	if len(opt.NoCacheFilter) > 0 {
-		so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
-	}
-	if opt.NoCache {
-		so.FrontendAttrs["no-cache"] = ""
-	}
-	for k, v := range opt.BuildArgs {
-		so.FrontendAttrs["build-arg:"+k] = v
-	}
-	for k, v := range opt.Labels {
-		so.FrontendAttrs["label:"+k] = v
-	}
-
-	for k, v := range node.ProxyConfig {
-		if _, ok := opt.BuildArgs[k]; !ok {
-			so.FrontendAttrs["build-arg:"+k] = v
-		}
-	}
-
-	// set platforms
-	if len(opt.Platforms) != 0 {
-		pp := make([]string, len(opt.Platforms))
-		for i, p := range opt.Platforms {
-			pp[i] = platforms.Format(p)
-		}
-		if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
-			return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
-		}
-		so.FrontendAttrs["platform"] = strings.Join(pp, ",")
-	}
-
-	// setup networkmode
-	switch opt.NetworkMode {
-	case "host":
-		so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
-		so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
-	case "none":
-		so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
-	case "", "default":
-	default:
-		return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
-	}
-
-	// setup extrahosts
-	extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
-	if err != nil {
-		return nil, nil, err
-	}
-	if len(extraHosts) > 0 {
-		so.FrontendAttrs["add-hosts"] = extraHosts
-	}
-
-	// setup shm size
-	if opt.ShmSize.Value() > 0 {
-		so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
-	}
-
-	// setup ulimits
-	ulimits, err := toBuildkitUlimits(opt.Ulimits)
-	if err != nil {
-		return nil, nil, err
-	} else if len(ulimits) > 0 {
-		so.FrontendAttrs["ulimit"] = ulimits
-	}
-
-	// mark info request as internal
-	if opt.PrintFunc != nil {
-		so.Internal = true
-	}
-
-	return &so, releaseF, nil
-}
-
-func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, target *client.SolveOpt) (func(), error) {
-	if inp.ContextPath == "" {
-		return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
-	}
-
-	// TODO: handle stdin, symlinks, remote contexts, check files exist
-
-	var (
-		err              error
-		dockerfileReader io.Reader
-		dockerfileDir    string
-		dockerfileName   = inp.DockerfilePath
-		toRemove         []string
-	)
-
-	switch {
-	case inp.ContextState != nil:
-		if target.FrontendInputs == nil {
-			target.FrontendInputs = make(map[string]llb.State)
-		}
-		target.FrontendInputs["context"] = *inp.ContextState
-		target.FrontendInputs["dockerfile"] = *inp.ContextState
-	case inp.ContextPath == "-":
-		if inp.DockerfilePath == "-" {
-			return nil, errStdinConflict
-		}
-
-		buf := bufio.NewReader(inp.InStream)
-		magic, err := buf.Peek(archiveHeaderSize * 2)
-		if err != nil && err != io.EOF {
-			return nil, errors.Wrap(err, "failed to peek context header from STDIN")
-		}
-		if !(err == io.EOF && len(magic) == 0) {
-			if isArchive(magic) {
-				// stdin is context
-				up := uploadprovider.New()
-				target.FrontendAttrs["context"] = up.Add(buf)
-				target.Session = append(target.Session, up)
-			} else {
-				if inp.DockerfilePath != "" {
-					return nil, errDockerfileConflict
-				}
-				// stdin is dockerfile
-				dockerfileReader = buf
-				inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
-				toRemove = append(toRemove, inp.ContextPath)
-				if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
-					return nil, err
-				}
-			}
-		}
-	case osutil.IsLocalDir(inp.ContextPath):
-		if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
-			return nil, err
-		}
-		switch inp.DockerfilePath {
-		case "-":
-			dockerfileReader = inp.InStream
-		case "":
-			dockerfileDir = inp.ContextPath
-		default:
-			dockerfileDir = filepath.Dir(inp.DockerfilePath)
-			dockerfileName = filepath.Base(inp.DockerfilePath)
-		}
-	case IsRemoteURL(inp.ContextPath):
-		if inp.DockerfilePath == "-" {
-			dockerfileReader = inp.InStream
-		} else if filepath.IsAbs(inp.DockerfilePath) {
-			dockerfileDir = filepath.Dir(inp.DockerfilePath)
-			dockerfileName = filepath.Base(inp.DockerfilePath)
-			target.FrontendAttrs["dockerfilekey"] = "dockerfile"
-		}
-		target.FrontendAttrs["context"] = inp.ContextPath
-	default:
-		return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
-	}
-
-	if inp.DockerfileInline != "" {
-		dockerfileReader = strings.NewReader(inp.DockerfileInline)
-	}
-
-	if dockerfileReader != nil {
-		dockerfileDir, err = createTempDockerfile(dockerfileReader)
-		if err != nil {
-			return nil, err
-		}
-		toRemove = append(toRemove, dockerfileDir)
-		dockerfileName = "Dockerfile"
-		target.FrontendAttrs["dockerfilekey"] = "dockerfile"
-	}
-	if isHTTPURL(inp.DockerfilePath) {
-		dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
-		if err != nil {
-			return nil, err
-		}
-		toRemove = append(toRemove, dockerfileDir)
-		dockerfileName = "Dockerfile"
-		target.FrontendAttrs["dockerfilekey"] = "dockerfile"
-		delete(target.FrontendInputs, "dockerfile")
-	}
-
-	if dockerfileName == "" {
-		dockerfileName = "Dockerfile"
-	}
-
-	if dockerfileDir != "" {
-		if err := setLocalMount("dockerfile", dockerfileDir, target, addVCSLocalDir); err != nil {
-			return nil, err
-		}
-		dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
-	}
-
-	target.FrontendAttrs["filename"] = dockerfileName
-
-	for k, v := range inp.NamedContexts {
-		target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
-		if v.State != nil {
-			target.FrontendAttrs["context:"+k] = "input:" + k
-			if target.FrontendInputs == nil {
-				target.FrontendInputs = make(map[string]llb.State)
-			}
-			target.FrontendInputs[k] = *v.State
-			continue
-		}
-
-		if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
-			target.FrontendAttrs["context:"+k] = v.Path
-			continue
-		}
-
-		// handle OCI layout
-		if strings.HasPrefix(v.Path, "oci-layout://") {
-			localPath := strings.TrimPrefix(v.Path, "oci-layout://")
-			localPath, dig, hasDigest := strings.Cut(localPath, "@")
-			localPath, tag, hasTag := strings.Cut(localPath, ":")
```
|
|
||||||
if !hasTag {
|
|
||||||
tag = "latest"
|
|
||||||
}
|
|
||||||
if !hasDigest {
|
|
||||||
dig, err = resolveDigest(localPath, tag)
|
|
||||||
if err != nil {
|
|
||||||
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
store, err := local.NewStore(localPath)
|
|
||||||
if err != nil {
|
|
||||||
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
|
|
||||||
}
|
|
||||||
storeName := identity.NewID()
|
|
||||||
if target.OCIStores == nil {
|
|
||||||
target.OCIStores = map[string]content.Store{}
|
|
||||||
}
|
|
||||||
target.OCIStores[storeName] = store
|
|
||||||
|
|
||||||
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
st, err := os.Stat(v.Path)
|
|
||||||
if err != nil {
|
|
||||||
return nil, errors.Wrapf(err, "failed to get build context %v", k)
|
|
||||||
}
|
|
||||||
if !st.IsDir() {
|
|
||||||
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
|
|
||||||
}
|
|
||||||
localName := k
|
|
||||||
if k == "context" || k == "dockerfile" {
|
|
||||||
localName = "_" + k // underscore to avoid collisions
|
|
||||||
}
|
|
||||||
if err := setLocalMount(localName, v.Path, target, addVCSLocalDir); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
target.FrontendAttrs["context:"+k] = "local:" + localName
|
|
||||||
}
|
|
||||||
|
|
||||||
release := func() {
|
|
||||||
for _, dir := range toRemove {
|
|
||||||
_ = os.RemoveAll(dir)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return release, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func resolveDigest(localPath, tag string) (dig string, _ error) {
|
|
||||||
idx := ociindex.NewStoreIndex(localPath)
|
|
||||||
|
|
||||||
// lookup by name
|
|
||||||
desc, err := idx.Get(tag)
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
if desc == nil {
|
|
||||||
// lookup single
|
|
||||||
desc, err = idx.GetSingle()
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if desc == nil {
|
|
||||||
return "", errors.New("failed to resolve digest")
|
|
||||||
}
|
|
||||||
|
|
||||||
dig = string(desc.Digest)
|
|
||||||
_, err = digest.Parse(dig)
|
|
||||||
if err != nil {
|
|
||||||
return "", errors.Wrapf(err, "invalid digest %s", dig)
|
|
||||||
}
|
|
||||||
|
|
||||||
return dig, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func setLocalMount(name, root string, so *client.SolveOpt, addVCSLocalDir func(key, dir string, so *client.SolveOpt)) error {
|
|
||||||
lm, err := fsutil.NewFS(root)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
root, err = filepath.EvalSymlinks(root) // keep same behavior as fsutil.NewFS
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if so.LocalMounts == nil {
|
|
||||||
so.LocalMounts = map[string]fsutil.FS{}
|
|
||||||
}
|
|
||||||
so.LocalMounts[name] = lm
|
|
||||||
if addVCSLocalDir != nil {
|
|
||||||
addVCSLocalDir(name, root, so)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func createTempDockerfile(r io.Reader) (string, error) {
|
|
||||||
dir, err := os.MkdirTemp("", "dockerfile")
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
defer f.Close()
|
|
||||||
if _, err := io.Copy(f, r); err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
return dir, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// handle https://github.com/moby/moby/pull/10858
|
|
||||||
func handleLowercaseDockerfile(dir, p string) string {
|
|
||||||
if filepath.Base(p) != "Dockerfile" {
|
|
||||||
return p
|
|
||||||
}
|
|
||||||
|
|
||||||
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
|
|
||||||
if err != nil {
|
|
||||||
return p
|
|
||||||
}
|
|
||||||
|
|
||||||
names, err := f.Readdirnames(-1)
|
|
||||||
if err != nil {
|
|
||||||
return p
|
|
||||||
}
|
|
||||||
|
|
||||||
foundLowerCase := false
|
|
||||||
for _, n := range names {
|
|
||||||
if n == "Dockerfile" {
|
|
||||||
return p
|
|
||||||
}
|
|
||||||
if n == "dockerfile" {
|
|
||||||
foundLowerCase = true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if foundLowerCase {
|
|
||||||
return filepath.Join(filepath.Dir(p), "dockerfile")
|
|
||||||
}
|
|
||||||
return p
|
|
||||||
}
|
|
||||||
@@ -1,157 +0,0 @@
package build

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"io"
	"strings"
	"sync"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/content/proxy"
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/progress"
	controlapi "github.com/moby/buildkit/api/services/control"
	"github.com/moby/buildkit/client"
	provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

type provenancePredicate struct {
	Builder *provenanceBuilder `json:"builder,omitempty"`
	provenancetypes.ProvenancePredicate
}

type provenanceBuilder struct {
	ID string `json:"id,omitempty"`
}

func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, pw progress.Writer) error {
	mode := confutil.MetadataProvenance()
	if mode == confutil.MetadataProvenanceModeDisabled {
		return nil
	}
	pw = progress.ResetTime(pw)
	return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
		res, err := fetchProvenance(ctx, c, ref, mode)
		if err != nil {
			return err
		}
		for k, v := range res {
			sr.ExporterResponse[k] = v
		}
		return nil
	})
}

func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
	cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
		Ref:       ref,
		EarlyExit: true,
	})
	if err != nil {
		return nil, err
	}

	var mu sync.Mutex
	eg, ctx := errgroup.WithContext(ctx)
	store := proxy.NewContentStore(c.ContentClient())
	for {
		ev, err := cl.Recv()
		if errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			return nil, err
		}
		if ev.Record == nil {
			continue
		}
		if ev.Record.Result != nil {
			desc := lookupProvenance(ev.Record.Result)
			if desc == nil {
				continue
			}
			eg.Go(func() error {
				dt, err := content.ReadBlob(ctx, store, *desc)
				if err != nil {
					return errors.Wrapf(err, "failed to load provenance blob from build record")
				}
				prv, err := encodeProvenance(dt, mode)
				if err != nil {
					return err
				}
				mu.Lock()
				if out == nil {
					out = make(map[string]string)
				}
				out["buildx.build.provenance"] = prv
				mu.Unlock()
				return nil
			})
		} else if ev.Record.Results != nil {
			for platform, res := range ev.Record.Results {
				platform := platform
				desc := lookupProvenance(res)
				if desc == nil {
					continue
				}
				eg.Go(func() error {
					dt, err := content.ReadBlob(ctx, store, *desc)
					if err != nil {
						return errors.Wrapf(err, "failed to load provenance blob from build record")
					}
					prv, err := encodeProvenance(dt, mode)
					if err != nil {
						return err
					}
					mu.Lock()
					if out == nil {
						out = make(map[string]string)
					}
					out["buildx.build.provenance/"+platform] = prv
					mu.Unlock()
					return nil
				})
			}
		}
	}
	return out, eg.Wait()
}

func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
	for _, a := range res.Attestations {
		if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
			return &ocispecs.Descriptor{
				Digest:      a.Digest,
				Size:        a.Size_,
				MediaType:   a.MediaType,
				Annotations: a.Annotations,
			}
		}
	}
	return nil
}

func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
	var prv provenancePredicate
	if err := json.Unmarshal(dt, &prv); err != nil {
		return "", errors.Wrapf(err, "failed to unmarshal provenance")
	}
	if prv.Builder != nil && prv.Builder.ID == "" {
		// reset builder if id is empty
		prv.Builder = nil
	}
	if mode == confutil.MetadataProvenanceModeMin {
		// reset fields for minimal provenance
		prv.BuildConfig = nil
		prv.Metadata = nil
	}
	dtprv, err := json.Marshal(prv)
	if err != nil {
		return "", errors.Wrapf(err, "failed to marshal provenance")
	}
	return base64.StdEncoding.EncodeToString(dtprv), nil
}
build/result.go
@@ -1,495 +0,0 @@
|
|||||||
package build
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
_ "crypto/sha256" // ensure digests can be computed
|
|
||||||
"encoding/json"
|
|
||||||
"io"
|
|
||||||
"sync"
|
|
||||||
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/exporter/containerimage/exptypes"
|
|
||||||
gateway "github.com/moby/buildkit/frontend/gateway/client"
|
|
||||||
"github.com/moby/buildkit/solver/errdefs"
|
|
||||||
"github.com/moby/buildkit/solver/pb"
|
|
||||||
"github.com/moby/buildkit/solver/result"
|
|
||||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
)
|
|
||||||
|
|
||||||
// NewResultHandle makes a call to client.Build, additionally returning a
|
|
||||||
// opaque ResultHandle alongside the standard response and error.
|
|
||||||
//
|
|
||||||
// This ResultHandle can be used to execute additional build steps in the same
|
|
||||||
// context as the build occurred, which can allow easy debugging of build
|
|
||||||
// failures and successes.
|
|
||||||
//
|
|
||||||
// If the returned ResultHandle is not nil, the caller must call Done() on it.
|
|
||||||
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
|
|
||||||
// Create a new context to wrap the original, and cancel it when the
|
|
||||||
// caller-provided context is cancelled.
|
|
||||||
//
|
|
||||||
// We derive the context from the background context so that we can forbid
|
|
||||||
// cancellation of the build request after <-done is closed (which we do
|
|
||||||
// before returning the ResultHandle).
|
|
||||||
baseCtx := ctx
|
|
||||||
ctx, cancel := context.WithCancelCause(context.Background())
|
|
||||||
done := make(chan struct{})
|
|
||||||
go func() {
|
|
||||||
select {
|
|
||||||
case <-baseCtx.Done():
|
|
||||||
cancel(baseCtx.Err())
|
|
||||||
case <-done:
|
|
||||||
// Once done is closed, we've recorded a ResultHandle, so we
|
|
||||||
// shouldn't allow cancelling the underlying build request anymore.
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
// Create a new channel to forward status messages to the original.
|
|
||||||
//
|
|
||||||
// We do this so that we can discard status messages after the main portion
|
|
||||||
// of the build is complete. This is necessary for the solve error case,
|
|
||||||
// where the original gateway is kept open until the ResultHandle is
|
|
||||||
// closed - we don't want progress messages from operations in that
|
|
||||||
// ResultHandle to display after this function exits.
|
|
||||||
//
|
|
||||||
// Additionally, callers should wait for the progress channel to be closed.
|
|
||||||
// If we keep the session open and never close the progress channel, the
|
|
||||||
// caller will likely hang.
|
|
||||||
baseCh := ch
|
|
||||||
ch = make(chan *client.SolveStatus)
|
|
||||||
go func() {
|
|
||||||
for {
|
|
||||||
s, ok := <-ch
|
|
||||||
if !ok {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
select {
|
|
||||||
case <-baseCh:
|
|
||||||
// base channel is closed, discard status messages
|
|
||||||
default:
|
|
||||||
baseCh <- s
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
defer close(baseCh)
|
|
||||||
|
|
||||||
var resp *client.SolveResponse
|
|
||||||
var respErr error
|
|
||||||
var respHandle *ResultHandle
|
|
||||||
|
|
||||||
go func() {
|
|
||||||
defer cancel(context.Canceled) // ensure no dangling processes
|
|
||||||
|
|
||||||
var res *gateway.Result
|
|
||||||
var err error
|
|
||||||
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
|
|
||||||
var err error
|
|
||||||
res, err = buildFunc(ctx, c)
|
|
||||||
|
|
||||||
if res != nil && err == nil {
|
|
||||||
// Force evaluation of the build result (otherwise, we likely
|
|
||||||
// won't get a solve error)
|
|
||||||
def, err2 := getDefinition(ctx, res)
|
|
||||||
if err2 != nil {
|
|
||||||
return nil, err2
|
|
||||||
}
|
|
||||||
res, err = evalDefinition(ctx, c, def)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
// Scenario 1: we failed to evaluate a node somewhere in the
|
|
||||||
// build graph.
|
|
||||||
//
|
|
||||||
// In this case, we construct a ResultHandle from this
|
|
||||||
// original Build session, and return it alongside the original
|
|
||||||
// build error. We then need to keep the gateway session open
|
|
||||||
// until the caller explicitly closes the ResultHandle.
|
|
||||||
|
|
||||||
var se *errdefs.SolveError
|
|
||||||
if errors.As(err, &se) {
|
|
||||||
respHandle = &ResultHandle{
|
|
||||||
done: make(chan struct{}),
|
|
||||||
solveErr: se,
|
|
||||||
gwClient: c,
|
|
||||||
gwCtx: ctx,
|
|
||||||
}
|
|
||||||
respErr = err // return original error to preserve stacktrace
|
|
||||||
close(done)
|
|
||||||
|
|
||||||
// Block until the caller closes the ResultHandle.
|
|
||||||
select {
|
|
||||||
case <-respHandle.done:
|
|
||||||
case <-ctx.Done():
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return res, err
|
|
||||||
}, ch)
|
|
||||||
if respHandle != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
// Something unexpected failed during the build, we didn't succeed,
|
|
||||||
// but we also didn't make it far enough to create a ResultHandle.
|
|
||||||
respErr = err
|
|
||||||
close(done)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Scenario 2: we successfully built the image with no errors.
|
|
||||||
//
|
|
||||||
// In this case, the original gateway session has now been closed
|
|
||||||
// since the Build has been completed. So, we need to create a new
|
|
||||||
// gateway session to populate the ResultHandle. To do this, we
|
|
||||||
// need to re-evaluate the target result, in this new session. This
|
|
||||||
// should be instantaneous since the result should be cached.
|
|
||||||
|
|
||||||
def, err := getDefinition(ctx, res)
|
|
||||||
if err != nil {
|
|
||||||
respErr = err
|
|
||||||
close(done)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// NOTE: ideally this second connection should be lazily opened
|
|
||||||
opt := opt
|
|
||||||
opt.Ref = ""
|
|
||||||
opt.Exports = nil
|
|
||||||
opt.CacheExports = nil
|
|
||||||
opt.Internal = true
|
|
||||||
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
|
|
||||||
res, err := evalDefinition(ctx, c, def)
|
|
||||||
if err != nil {
|
|
||||||
// This should probably not happen, since we've previously
|
|
||||||
// successfully evaluated the same result with no issues.
|
|
||||||
return nil, errors.Wrap(err, "inconsistent solve result")
|
|
||||||
}
|
|
||||||
respHandle = &ResultHandle{
|
|
||||||
done: make(chan struct{}),
|
|
||||||
res: res,
|
|
||||||
gwClient: c,
|
|
||||||
gwCtx: ctx,
|
|
||||||
}
|
|
||||||
close(done)
|
|
||||||
|
|
||||||
// Block until the caller closes the ResultHandle.
|
|
||||||
select {
|
|
||||||
case <-respHandle.done:
|
|
||||||
case <-ctx.Done():
|
|
||||||
}
|
|
||||||
return nil, ctx.Err()
|
|
||||||
}, nil)
|
|
||||||
if respHandle != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
close(done)
|
|
||||||
}()
|
|
||||||
|
|
||||||
// Block until the other thread signals that it's completed the build.
|
|
||||||
select {
|
|
||||||
case <-done:
|
|
||||||
case <-baseCtx.Done():
|
|
||||||
if respErr == nil {
|
|
||||||
respErr = baseCtx.Err()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return respHandle, resp, respErr
|
|
||||||
}
|
|
||||||
|
|
||||||
// getDefinition converts a gateway result into a collection of definitions for
|
|
||||||
// each ref in the result.
|
|
||||||
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
|
|
||||||
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
|
|
||||||
st, err := ref.ToState()
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
def, err := st.Marshal(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return def.ToPB(), nil
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
// evalDefinition performs the reverse of getDefinition, converting a
|
|
||||||
// collection of definitions into a gateway result.
|
|
||||||
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
|
|
||||||
// force evaluation of all targets in parallel
|
|
||||||
results := make(map[*pb.Definition]*gateway.Result)
|
|
||||||
resultsMu := sync.Mutex{}
|
|
||||||
eg, egCtx := errgroup.WithContext(ctx)
|
|
||||||
defs.EachRef(func(def *pb.Definition) error {
|
|
||||||
eg.Go(func() error {
|
|
||||||
res, err := c.Solve(egCtx, gateway.SolveRequest{
|
|
||||||
Evaluate: true,
|
|
||||||
Definition: def,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
resultsMu.Lock()
|
|
||||||
results[def] = res
|
|
||||||
resultsMu.Unlock()
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
if err := eg.Wait(); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
|
|
||||||
if res, ok := results[def]; ok {
|
|
||||||
return res.Ref, nil
|
|
||||||
}
|
|
||||||
return nil, nil
|
|
||||||
})
|
|
||||||
return res, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// ResultHandle is a build result with the client that built it.
|
|
||||||
type ResultHandle struct {
|
|
||||||
res *gateway.Result
|
|
||||||
solveErr *errdefs.SolveError
|
|
||||||
|
|
||||||
done chan struct{}
|
|
||||||
doneOnce sync.Once
|
|
||||||
|
|
||||||
gwClient gateway.Client
|
|
||||||
gwCtx context.Context
|
|
||||||
|
|
||||||
cleanups []func()
|
|
||||||
cleanupsMu sync.Mutex
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *ResultHandle) Done() {
|
|
||||||
r.doneOnce.Do(func() {
|
|
||||||
r.cleanupsMu.Lock()
|
|
||||||
cleanups := r.cleanups
|
|
||||||
r.cleanups = nil
|
|
||||||
r.cleanupsMu.Unlock()
|
|
||||||
for _, f := range cleanups {
|
|
||||||
f()
|
|
||||||
}
|
|
||||||
|
|
||||||
close(r.done)
|
|
||||||
<-r.gwCtx.Done()
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *ResultHandle) registerCleanup(f func()) {
|
|
||||||
r.cleanupsMu.Lock()
|
|
||||||
r.cleanups = append(r.cleanups, f)
|
|
||||||
r.cleanupsMu.Unlock()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
|
|
||||||
_, err = buildFunc(r.gwCtx, r.gwClient)
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
|
|
||||||
if r.res != nil && r.solveErr == nil {
|
|
||||||
logrus.Debugf("creating container from successful build")
|
|
||||||
ccfg, err := containerConfigFromResult(r.res, *cfg)
|
|
||||||
if err != nil {
|
|
||||||
return containerCfg, err
|
|
||||||
}
|
|
||||||
containerCfg = *ccfg
|
|
||||||
} else {
|
|
||||||
logrus.Debugf("creating container from failed build %+v", cfg)
|
|
||||||
ccfg, err := containerConfigFromError(r.solveErr, *cfg)
|
|
||||||
if err != nil {
|
|
||||||
return containerCfg, errors.Wrapf(err, "no result nor error is available")
|
|
||||||
}
|
|
||||||
containerCfg = *ccfg
|
|
||||||
}
|
|
||||||
return containerCfg, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
|
|
||||||
processCfg := newStartRequest(stdin, stdout, stderr)
|
|
||||||
if r.res != nil && r.solveErr == nil {
|
|
||||||
logrus.Debugf("creating container from successful build")
|
|
||||||
if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
|
|
||||||
return processCfg, err
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
logrus.Debugf("creating container from failed build %+v", cfg)
|
|
||||||
if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
|
|
||||||
return processCfg, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return processCfg, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func containerConfigFromResult(res *gateway.Result, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
|
|
||||||
if cfg.Initial {
|
|
||||||
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
|
|
||||||
}
|
|
||||||
|
|
||||||
ps, err := exptypes.ParsePlatforms(res.Metadata)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
ref, ok := res.FindRef(ps.Platforms[0].ID)
|
|
||||||
if !ok {
|
|
||||||
return nil, errors.Errorf("no reference found")
|
|
||||||
}
|
|
||||||
|
|
||||||
return &gateway.NewContainerRequest{
|
|
||||||
Mounts: []gateway.Mount{
|
|
||||||
{
|
|
||||||
Dest: "/",
|
|
||||||
MountType: pb.MountType_BIND,
|
|
||||||
Ref: ref,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
|
|
||||||
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
|
|
||||||
var img *specs.Image
|
|
||||||
if len(imgData) > 0 {
|
|
||||||
img = &specs.Image{}
|
|
||||||
if err := json.Unmarshal(imgData, img); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
user := ""
|
|
||||||
if !cfg.NoUser {
|
|
||||||
user = cfg.User
|
|
||||||
} else if img != nil {
|
|
||||||
user = img.Config.User
|
|
||||||
}
|
|
||||||
|
|
||||||
cwd := ""
|
|
||||||
if !cfg.NoCwd {
|
|
||||||
cwd = cfg.Cwd
|
|
||||||
} else if img != nil {
|
|
||||||
cwd = img.Config.WorkingDir
|
|
||||||
}
|
|
||||||
|
|
||||||
env := []string{}
|
|
||||||
if img != nil {
|
|
||||||
env = append(env, img.Config.Env...)
|
|
||||||
}
|
|
||||||
env = append(env, cfg.Env...)
|
|
||||||
|
|
||||||
args := []string{}
|
|
||||||
if cfg.Entrypoint != nil {
|
|
||||||
args = append(args, cfg.Entrypoint...)
|
|
||||||
} else if img != nil {
|
|
||||||
args = append(args, img.Config.Entrypoint...)
|
|
||||||
}
|
|
||||||
if !cfg.NoCmd {
|
|
||||||
args = append(args, cfg.Cmd...)
|
|
||||||
} else if img != nil {
|
|
||||||
args = append(args, img.Config.Cmd...)
|
|
||||||
}
|
|
||||||
|
|
||||||
req.Args = args
|
|
||||||
req.Env = env
|
|
||||||
req.User = user
|
|
||||||
req.Cwd = cwd
|
|
||||||
req.Tty = cfg.Tty
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
|
|
||||||
exec, err := execOpFromError(solveErr)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
var mounts []gateway.Mount
|
|
||||||
for i, mnt := range exec.Mounts {
|
|
||||||
rid := solveErr.Solve.MountIDs[i]
|
|
||||||
if cfg.Initial {
|
|
||||||
rid = solveErr.Solve.InputIDs[i]
|
|
||||||
}
|
|
||||||
mounts = append(mounts, gateway.Mount{
|
|
||||||
Selector: mnt.Selector,
|
|
||||||
Dest: mnt.Dest,
|
|
||||||
ResultID: rid,
|
|
||||||
Readonly: mnt.Readonly,
|
|
||||||
MountType: mnt.MountType,
|
|
||||||
CacheOpt: mnt.CacheOpt,
|
|
||||||
SecretOpt: mnt.SecretOpt,
|
|
||||||
SSHOpt: mnt.SSHOpt,
|
|
||||||
})
|
|
||||||
}
|
|
||||||
return &gateway.NewContainerRequest{
|
|
||||||
Mounts: mounts,
|
|
||||||
NetMode: exec.Network,
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
|
|
||||||
exec, err := execOpFromError(solveErr)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
meta := exec.Meta
|
|
||||||
user := ""
|
|
||||||
if !cfg.NoUser {
|
|
||||||
user = cfg.User
|
|
||||||
} else {
|
|
||||||
user = meta.User
|
|
||||||
}
|
|
||||||
|
|
||||||
cwd := ""
|
|
||||||
if !cfg.NoCwd {
|
|
||||||
cwd = cfg.Cwd
|
|
||||||
} else {
|
|
||||||
cwd = meta.Cwd
|
|
||||||
}
|
|
||||||
|
|
||||||
env := append(meta.Env, cfg.Env...)
|
|
||||||
|
|
||||||
args := []string{}
|
|
||||||
if cfg.Entrypoint != nil {
|
|
||||||
args = append(args, cfg.Entrypoint...)
|
|
||||||
}
|
|
||||||
if cfg.Cmd != nil {
|
|
||||||
args = append(args, cfg.Cmd...)
|
|
||||||
}
|
|
||||||
if len(args) == 0 {
|
|
||||||
args = meta.Args
|
|
||||||
}
|
|
||||||
|
|
||||||
req.Args = args
|
|
||||||
req.Env = env
|
|
||||||
req.User = user
|
|
||||||
req.Cwd = cwd
|
|
||||||
req.Tty = cfg.Tty
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
|
|
||||||
if solveErr == nil {
|
|
||||||
return nil, errors.Errorf("no error is available")
|
|
||||||
}
|
|
||||||
switch op := solveErr.Solve.Op.GetOp().(type) {
|
|
||||||
case *pb.Op_Exec:
|
|
||||||
return op.Exec, nil
|
|
||||||
default:
|
|
||||||
return nil, errors.Errorf("invoke: unsupported error type")
|
|
||||||
}
|
|
||||||
// TODO: support other ops
|
|
||||||
}
|
|
||||||
|
|
||||||
func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
|
|
||||||
return gateway.StartRequest{
|
|
||||||
Stdin: stdin,
|
|
||||||
Stdout: stdout,
|
|
||||||
Stderr: stderr,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -13,7 +13,7 @@ import (
 	"github.com/pkg/errors"
 )
 
-func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
+func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
 	c, err := driver.Boot(ctx, ctx, d, pw)
 	if err != nil {
 		return "", err
@@ -21,7 +21,7 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
 	var out string
 	ch, done := progress.NewChannel(pw)
 	defer func() { <-done }()
-	_, err = c.Build(ctx, client.SolveOpt{Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
+	_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
 		def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
 		if err != nil {
 			return nil, err
@@ -3,17 +3,12 @@ package build
 import (
 	"archive/tar"
 	"bytes"
-	"context"
 	"net"
 	"os"
-	"strconv"
 	"strings"
 
-	"github.com/docker/buildx/driver"
 	"github.com/docker/cli/opts"
-	"github.com/moby/buildkit/util/gitutil"
 	"github.com/pkg/errors"
-	"github.com/sirupsen/logrus"
 )
 
 const (
@@ -25,21 +20,9 @@ const (
 	mobyHostGatewayName = "host-gateway"
 )
 
-// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
-// has a http:// or https:// scheme. No validation is performed to verify if the
-// URL is well-formed.
-func isHTTPURL(str string) bool {
-	return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
-}
-
-func IsRemoteURL(c string) bool {
-	if isHTTPURL(c) {
-		return true
-	}
-	if _, err := gitutil.ParseGitRef(c); err == nil {
-		return true
-	}
-	return false
-}
+func isLocalDir(c string) bool {
+	st, err := os.Stat(c)
+	return err == nil && st.IsDir()
+}
 
 func isArchive(header []byte) bool {
@@ -62,34 +45,18 @@ func isArchive(header []byte) bool {
 }
 
 // toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
-func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.DriverHandle) (string, error) {
+func toBuildkitExtraHosts(inp []string, mobyDriver bool) (string, error) {
 	if len(inp) == 0 {
 		return "", nil
 	}
 	hosts := make([]string, 0, len(inp))
 	for _, h := range inp {
-		host, ip, ok := strings.Cut(h, "=")
-		if !ok {
-			host, ip, ok = strings.Cut(h, ":")
-		}
+		host, ip, ok := strings.Cut(h, ":")
 		if !ok || host == "" || ip == "" {
 			return "", errors.Errorf("invalid host %s", h)
 		}
 		// If the IP Address is a "host-gateway", replace this value with the
|
// Skip IP address validation for "host-gateway" string with moby driver
|
||||||
// IP address provided by the worker's label.
|
if !mobyDriver || ip != mobyHostGatewayName {
|
||||||
if ip == mobyHostGatewayName {
|
|
||||||
hgip, err := nodeDriver.HostGatewayIP(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
|
|
||||||
}
|
|
||||||
ip = hgip.String()
|
|
||||||
} else {
|
|
||||||
// If the address is enclosed in square brackets, extract it (for IPv6, but
|
|
||||||
// permit it for IPv4 as well; we don't know the address family here, but it's
|
|
||||||
// unambiguous).
|
|
||||||
if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
|
|
||||||
ip = ip[1 : len(ip)-1]
|
|
||||||
}
|
|
||||||
if net.ParseIP(ip) == nil {
|
if net.ParseIP(ip) == nil {
|
||||||
return "", errors.Errorf("invalid host %s", h)
|
return "", errors.Errorf("invalid host %s", h)
|
||||||
}
|
}
|
||||||
@@ -110,21 +77,3 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
|
|||||||
}
|
}
|
||||||
return strings.Join(ulimits, ","), nil
|
return strings.Join(ulimits, ","), nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
|
|
||||||
return errors.Errorf(`%s is not supported for the %s driver.
|
|
||||||
Switch to a different driver, or turn on the containerd image store, and try again.
|
|
||||||
Learn more at %s`, f, d.Factory().Name(), docs)
|
|
||||||
}
|
|
||||||
|
|
||||||
func noDefaultLoad() bool {
|
|
||||||
v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
|
|
||||||
if !ok {
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
b, err := strconv.ParseBool(v)
|
|
||||||
if err != nil {
|
|
||||||
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
|
|
||||||
}
|
|
||||||
return b
|
|
||||||
}
|
|
||||||
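The newer (left-hand) `toBuildkitExtraHosts` above accepts both `host=ip` and `host:ip`, strips optional square brackets around the address, and resolves the `host-gateway` sentinel through the driver. A minimal standalone sketch of just the string handling, under a hypothetical `parseExtraHost` helper name, with the driver-backed host-gateway resolution omitted:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseExtraHost mirrors the string handling in toBuildkitExtraHosts:
// accept "host=ip" or "host:ip", strip optional [brackets] around the
// address (IPv6 form), validate the address, and emit "host=ip".
// The host-gateway special case is intentionally left out here.
func parseExtraHost(h string) (string, error) {
	host, ip, ok := strings.Cut(h, "=")
	if !ok {
		host, ip, ok = strings.Cut(h, ":")
	}
	if !ok || host == "" || ip == "" {
		return "", fmt.Errorf("invalid host %s", h)
	}
	// If the address is enclosed in square brackets, extract it.
	if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
		ip = ip[1 : len(ip)-1]
	}
	if net.ParseIP(ip) == nil {
		return "", fmt.Errorf("invalid host %s", h)
	}
	return host + "=" + ip, nil
}

func main() {
	for _, in := range []string{"myhost:192.168.0.1", "anipv6host=[2003:ab34:e::1]"} {
		out, err := parseExtraHost(in)
		fmt.Println(out, err)
	}
}
```

Cutting on `=` first is what lets IPv6 values such as `anipv6host=[2003:ab34:e::1]` survive; cutting on `:` alone still works for `host:ip` inputs because `strings.Cut` splits on the first separator only.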
```diff
@@ -1,148 +0,0 @@
-package build
-
-import (
-	"context"
-	"strings"
-	"testing"
-
-	"github.com/stretchr/testify/require"
-)
-
-func TestToBuildkitExtraHosts(t *testing.T) {
-	tests := []struct {
-		doc         string
-		input       []string
-		expectedOut string // Expect output==input if not set.
-		expectedErr string // Expect success if not set.
-	}{
-		{
-			doc:         "IPv4, colon sep",
-			input:       []string{`myhost:192.168.0.1`},
-			expectedOut: `myhost=192.168.0.1`,
-		},
-		{
-			doc:   "IPv4, eq sep",
-			input: []string{`myhost=192.168.0.1`},
-		},
-		{
-			doc:         "Weird but permitted, IPv4 with brackets",
-			input:       []string{`myhost=[192.168.0.1]`},
-			expectedOut: `myhost=192.168.0.1`,
-		},
-		{
-			doc:         "Host and domain",
-			input:       []string{`host.and.domain.invalid:10.0.2.1`},
-			expectedOut: `host.and.domain.invalid=10.0.2.1`,
-		},
-		{
-			doc:         "IPv6, colon sep",
-			input:       []string{`anipv6host:2003:ab34:e::1`},
-			expectedOut: `anipv6host=2003:ab34:e::1`,
-		},
-		{
-			doc:         "IPv6, colon sep, brackets",
-			input:       []string{`anipv6host:[2003:ab34:e::1]`},
-			expectedOut: `anipv6host=2003:ab34:e::1`,
-		},
-		{
-			doc:         "IPv6, eq sep, brackets",
-			input:       []string{`anipv6host=[2003:ab34:e::1]`},
-			expectedOut: `anipv6host=2003:ab34:e::1`,
-		},
-		{
-			doc:         "IPv6 localhost, colon sep",
-			input:       []string{`ipv6local:::1`},
-			expectedOut: `ipv6local=::1`,
-		},
-		{
-			doc:   "IPv6 localhost, eq sep",
-			input: []string{`ipv6local=::1`},
-		},
-		{
-			doc:         "IPv6 localhost, eq sep, brackets",
-			input:       []string{`ipv6local=[::1]`},
-			expectedOut: `ipv6local=::1`,
-		},
-		{
-			doc:         "IPv6 localhost, non-canonical, colon sep",
-			input:       []string{`ipv6local:0:0:0:0:0:0:0:1`},
-			expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
-		},
-		{
-			doc:   "IPv6 localhost, non-canonical, eq sep",
-			input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
-		},
-		{
-			doc:         "IPv6 localhost, non-canonical, eq sep, brackets",
-			input:       []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
-			expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
-		},
-		{
-			doc:         "Bad address, colon sep",
-			input:       []string{`myhost:192.notanipaddress.1`},
-			expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
-		},
-		{
-			doc:         "Bad address, eq sep",
-			input:       []string{`myhost=192.notanipaddress.1`},
-			expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
-		},
-		{
-			doc:         "No sep",
-			input:       []string{`thathost-nosemicolon10.0.0.1`},
-			expectedErr: `bad format for add-host: "thathost-nosemicolon10.0.0.1"`,
-		},
-		{
-			doc:         "Bad IPv6",
-			input:       []string{`anipv6host:::::1`},
-			expectedErr: `invalid IP address in add-host: "::::1"`,
-		},
-		{
-			doc:         "Bad IPv6, trailing colons",
-			input:       []string{`ipv6local:::0::`},
-			expectedErr: `invalid IP address in add-host: "::0::"`,
-		},
-		{
-			doc:         "Bad IPv6, missing close bracket",
-			input:       []string{`ipv6addr=[::1`},
-			expectedErr: `invalid IP address in add-host: "[::1"`,
-		},
-		{
-			doc:         "Bad IPv6, missing open bracket",
-			input:       []string{`ipv6addr=::1]`},
-			expectedErr: `invalid IP address in add-host: "::1]"`,
-		},
-		{
-			doc:         "Missing address, colon sep",
-			input:       []string{`myhost.invalid:`},
-			expectedErr: `invalid IP address in add-host: ""`,
-		},
-		{
-			doc:         "Missing address, eq sep",
-			input:       []string{`myhost.invalid=`},
-			expectedErr: `invalid IP address in add-host: ""`,
-		},
-		{
-			doc:         "No input",
-			input:       []string{``},
-			expectedErr: `bad format for add-host: ""`,
-		},
-	}
-
-	for _, tc := range tests {
-		tc := tc
-		if tc.expectedOut == "" {
-			tc.expectedOut = strings.Join(tc.input, ",")
-		}
-		t.Run(tc.doc, func(t *testing.T) {
-			actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
-			if tc.expectedErr == "" {
-				require.Equal(t, tc.expectedOut, actualOut)
-				require.Nil(t, actualErr)
-			} else {
-				require.Zero(t, actualOut)
-				require.Error(t, actualErr, tc.expectedErr)
-			}
-		})
-	}
-}
```
```diff
@@ -2,31 +2,18 @@ package builder
 
 import (
 	"context"
-	"encoding/csv"
-	"encoding/json"
-	"net/url"
 	"os"
 	"sort"
-	"strings"
 	"sync"
-	"time"
 
 	"github.com/docker/buildx/driver"
-	k8sutil "github.com/docker/buildx/driver/kubernetes/util"
-	remoteutil "github.com/docker/buildx/driver/remote/util"
-	"github.com/docker/buildx/localstate"
 	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
-	"github.com/docker/buildx/util/confutil"
 	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
-	dopts "github.com/docker/cli/opts"
-	"github.com/google/shlex"
-	"github.com/moby/buildkit/util/progress/progressui"
 	"github.com/pkg/errors"
-	"github.com/spf13/pflag"
 	"golang.org/x/sync/errgroup"
 )
 
@@ -121,7 +108,7 @@ func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
 
 // Validate validates builder context
 func (b *Builder) Validate() error {
-	if b.NodeGroup != nil && b.NodeGroup.DockerContext {
+	if b.NodeGroup.DockerContext {
 		list, err := b.opts.dockerCli.ContextStore().List()
 		if err != nil {
 			return err
@@ -170,14 +157,13 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
 		return false, nil
 	}
 
-	printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.AutoMode)
+	printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
 	if err != nil {
 		return false, err
 	}
 
 	baseCtx := ctx
 	eg, _ := errgroup.WithContext(ctx)
-	errCh := make(chan error, len(toBoot))
 	for _, idx := range toBoot {
 		func(idx int) {
 			eg.Go(func() error {
@@ -185,7 +171,6 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
 				_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
 				if err != nil {
 					b.nodes[idx].Err = err
-					errCh <- err
 				}
 				return nil
 			})
@@ -193,15 +178,11 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
 	}
 
 	err = eg.Wait()
-	close(errCh)
 	err1 := printer.Wait()
 	if err == nil {
 		err = err1
 	}
 
-	if err == nil && len(errCh) == len(toBoot) {
-		return false, <-errCh
-	}
 	return true, err
 }
 
@@ -226,7 +207,7 @@ type driverFactory struct {
 }
 
 // Factory returns the driver factory.
-func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_ driver.Factory, err error) {
+func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
 	b.driverFactory.once.Do(func() {
 		if b.Driver != "" {
 			b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
@@ -249,7 +230,7 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
 		if _, err = dockerapi.Ping(ctx); err != nil {
 			return
 		}
-		b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false, dialMeta)
+		b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
 		if err != nil {
 			return
 		}
@@ -259,28 +240,6 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
 	return b.driverFactory.Factory, err
 }
 
-func (b *Builder) MarshalJSON() ([]byte, error) {
-	var berr string
-	if b.err != nil {
-		berr = strings.TrimSpace(b.err.Error())
-	}
-	return json.Marshal(struct {
-		Name         string
-		Driver       string
-		LastActivity time.Time `json:",omitempty"`
-		Dynamic      bool
-		Nodes        []Node
-		Err          string `json:",omitempty"`
-	}{
-		Name:         b.Name,
-		Driver:       b.Driver,
-		LastActivity: b.LastActivity,
-		Dynamic:      b.Dynamic,
-		Nodes:        b.nodes,
-		Err:          berr,
-	})
-}
-
 // GetBuilders returns all builders
 func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
 	storeng, err := txn.List()
@@ -331,347 +290,3 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
 
 	return builders, nil
 }
-
-type CreateOpts struct {
-	Name                string
-	Driver              string
-	NodeName            string
-	Platforms           []string
-	BuildkitdFlags      string
-	BuildkitdConfigFile string
-	DriverOpts          []string
-	Use                 bool
-	Endpoint            string
-	Append              bool
-}
-
-func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts CreateOpts) (*Builder, error) {
-	var err error
-
-	if opts.Name == "default" {
-		return nil, errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
-	} else if opts.Append && opts.Name == "" {
-		return nil, errors.Errorf("append requires a builder name")
-	}
-
-	name := opts.Name
-	if name == "" {
-		name, err = store.GenerateName(txn)
-		if err != nil {
-			return nil, err
-		}
-	}
-
-	if !opts.Append {
-		contexts, err := dockerCli.ContextStore().List()
-		if err != nil {
-			return nil, err
-		}
-		for _, c := range contexts {
-			if c.Name == name {
-				return nil, errors.Errorf("instance name %q already exists as context builder", name)
-			}
-		}
-	}
-
-	ng, err := txn.NodeGroupByName(name)
-	if err != nil {
-		if os.IsNotExist(errors.Cause(err)) {
-			if opts.Append && opts.Name != "" {
-				return nil, errors.Errorf("failed to find instance %q for append", opts.Name)
-			}
-		} else {
-			return nil, err
-		}
-	}
-
-	buildkitHost := os.Getenv("BUILDKIT_HOST")
-
-	driverName := opts.Driver
-	if driverName == "" {
-		if ng != nil {
-			driverName = ng.Driver
-		} else if opts.Endpoint == "" && buildkitHost != "" {
-			driverName = "remote"
-		} else {
-			f, err := driver.GetDefaultFactory(ctx, opts.Endpoint, dockerCli.Client(), true, nil)
-			if err != nil {
-				return nil, err
-			}
-			if f == nil {
-				return nil, errors.Errorf("no valid drivers found")
-			}
-			driverName = f.Name()
-		}
-	}
-
-	if ng != nil {
-		if opts.NodeName == "" && !opts.Append {
-			return nil, errors.Errorf("existing instance for %q but no append mode, specify the node name to make changes for existing instances", name)
-		}
-		if driverName != ng.Driver {
-			return nil, errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
-		}
-	}
-
-	if _, err := driver.GetFactory(driverName, true); err != nil {
-		return nil, err
-	}
-
-	ngOriginal := ng
-	if ngOriginal != nil {
-		ngOriginal = ngOriginal.Copy()
-	}
-
-	if ng == nil {
-		ng = &store.NodeGroup{
-			Name:   name,
-			Driver: driverName,
-		}
-	}
-
-	driverOpts, err := csvToMap(opts.DriverOpts)
-	if err != nil {
-		return nil, err
-	}
-
-	buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
-	if err != nil {
-		return nil, err
-	}
-
-	var ep string
-	var setEp bool
-	switch {
-	case driverName == "kubernetes":
-		if opts.Endpoint != "" {
-			return nil, errors.Errorf("kubernetes driver does not support endpoint args %q", opts.Endpoint)
-		}
-		// generate node name if not provided to avoid duplicated endpoint
-		// error: https://github.com/docker/setup-buildx-action/issues/215
-		nodeName := opts.NodeName
-		if nodeName == "" {
-			nodeName, err = k8sutil.GenerateNodeName(name, txn)
-			if err != nil {
-				return nil, err
-			}
-		}
-		// naming endpoint to make append works
-		ep = (&url.URL{
-			Scheme: driverName,
-			Path:   "/" + name,
-			RawQuery: (&url.Values{
-				"deployment": {nodeName},
-				"kubeconfig": {os.Getenv("KUBECONFIG")},
-			}).Encode(),
-		}).String()
-		setEp = false
-	case driverName == "remote":
-		if opts.Endpoint != "" {
-			ep = opts.Endpoint
-		} else if buildkitHost != "" {
-			ep = buildkitHost
-		} else {
-			return nil, errors.Errorf("no remote endpoint provided")
-		}
-		ep, err = validateBuildkitEndpoint(ep)
-		if err != nil {
-			return nil, err
-		}
-		setEp = true
-	case opts.Endpoint != "":
-		ep, err = validateEndpoint(dockerCli, opts.Endpoint)
-		if err != nil {
-			return nil, err
-		}
-		setEp = true
-	default:
-		if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
-			return nil, errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with context set to <context-name>")
-		}
-		ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
-		if err != nil {
-			return nil, err
-		}
-		setEp = false
-	}
-
-	buildkitdConfigFile := opts.BuildkitdConfigFile
-	if buildkitdConfigFile == "" {
-		// if buildkit daemon config is not provided, check if the default one
-		// is available and use it
-		if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
-			buildkitdConfigFile = f
-		}
-	}
-
-	if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
-		return nil, err
-	}
-
-	if err := txn.Save(ng); err != nil {
-		return nil, err
-	}
-
-	b, err := New(dockerCli,
-		WithName(ng.Name),
-		WithStore(txn),
-		WithSkippedValidation(),
-	)
-	if err != nil {
-		return nil, err
-	}
-
-	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
-	defer cancel()
-
-	nodes, err := b.LoadNodes(timeoutCtx, WithData())
-	if err != nil {
-		return nil, err
-	}
-
-	for _, node := range nodes {
-		if err := node.Err; err != nil {
-			err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
-			var err2 error
-			if ngOriginal == nil {
-				err2 = txn.Remove(ng.Name)
-			} else {
-				err2 = txn.Save(ngOriginal)
-			}
-			if err2 != nil {
-				return nil, errors.Errorf("could not rollback to previous state: %s", err2)
-			}
-			return nil, err
-		}
-	}
-
-	if opts.Use && ep != "" {
-		current, err := dockerutil.GetCurrentEndpoint(dockerCli)
-		if err != nil {
-			return nil, err
-		}
-		if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
-			return nil, err
-		}
-	}
-
-	return b, nil
-}
-
-type LeaveOpts struct {
-	Name     string
-	NodeName string
-}
-
-func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts LeaveOpts) error {
-	if opts.Name == "" {
-		return errors.Errorf("leave requires instance name")
-	}
-	if opts.NodeName == "" {
-		return errors.Errorf("leave requires node name")
-	}
-
-	ng, err := txn.NodeGroupByName(opts.Name)
-	if err != nil {
-		if os.IsNotExist(errors.Cause(err)) {
-			return errors.Errorf("failed to find instance %q for leave", opts.Name)
-		}
-		return err
-	}
-
-	if err := ng.Leave(opts.NodeName); err != nil {
-		return err
-	}
-
-	ls, err := localstate.New(confutil.ConfigDir(dockerCli))
-	if err != nil {
-		return err
-	}
-	if err := ls.RemoveBuilderNode(ng.Name, opts.NodeName); err != nil {
-		return err
-	}
-
-	return txn.Save(ng)
-}
-
-func csvToMap(in []string) (map[string]string, error) {
-	if len(in) == 0 {
-		return nil, nil
-	}
-	m := make(map[string]string, len(in))
-	for _, s := range in {
-		csvReader := csv.NewReader(strings.NewReader(s))
-		fields, err := csvReader.Read()
-		if err != nil {
-			return nil, err
-		}
-		for _, v := range fields {
-			p := strings.SplitN(v, "=", 2)
-			if len(p) != 2 {
-				return nil, errors.Errorf("invalid value %q, expecting k=v", v)
-			}
-			m[p[0]] = p[1]
-		}
-	}
-	return m, nil
-}
-
-// validateEndpoint validates that endpoint is either a context or a docker host
-func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
-	dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
-	if err == nil && dem != nil {
-		if ep == "default" {
-			return dem.Host, nil
-		}
-		return ep, nil
-	}
-	h, err := dopts.ParseHost(true, ep)
-	if err != nil {
-		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
-	}
-	return h, nil
-}
-
-// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
-func validateBuildkitEndpoint(ep string) (string, error) {
-	if err := remoteutil.IsValidEndpoint(ep); err != nil {
-		return "", err
-	}
-	return ep, nil
-}
-
-// parseBuildkitdFlags parses buildkit flags
-func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
-	if inp != "" {
-		res, err = shlex.Split(inp)
-		if err != nil {
-			return nil, errors.Wrap(err, "failed to parse buildkit flags")
-		}
-	}
-
-	var allowInsecureEntitlements []string
-	flags := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
-	flags.Usage = func() {}
-	flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
-	_ = flags.Parse(res)
-
-	var hasNetworkHostEntitlement bool
-	for _, e := range allowInsecureEntitlements {
-		if e == "network.host" {
-			hasNetworkHostEntitlement = true
-			break
-		}
-	}
-
-	if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
-		// always set network.host entitlement if user has set network=host
-		res = append(res, "--allow-insecure-entitlement=network.host")
-	} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
-		// set network.host entitlement if user does not provide any as
-		// network is isolated for container drivers.
-		res = append(res, "--allow-insecure-entitlement=network.host")
-	}
-
-	return res, nil
-}
```
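The `csvToMap` helper removed in the diff above parses each `--driver-opt` argument as one CSV record of `k=v` fields, so a quoted field may itself contain commas (as in the tolerations example from the repository's own tests). A standalone sketch of the same behavior, with `strings.Cut` standing in for the original `strings.SplitN`:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// csvToMap reads each input string as a single CSV record and splits
// every field on the first '=' into a key/value pair, mirroring the
// helper shown in the diff above.
func csvToMap(in []string) (map[string]string, error) {
	m := make(map[string]string, len(in))
	for _, s := range in {
		fields, err := csv.NewReader(strings.NewReader(s)).Read()
		if err != nil {
			return nil, err
		}
		for _, v := range fields {
			k, val, ok := strings.Cut(v, "=")
			if !ok {
				return nil, fmt.Errorf("invalid value %q, expecting k=v", v)
			}
			m[k] = val
		}
	}
	return m, nil
}

func main() {
	m, err := csvToMap([]string{
		`"tolerations=key=foo,value=bar",replicas=1`,
		"namespace=default",
	})
	fmt.Println(m, err)
}
```

Because the CSV reader handles the quoting, `tolerations` keeps its full `key=foo,value=bar` value while `replicas=1` is still parsed as a separate field of the same record.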
```diff
@@ -1,139 +0,0 @@
-package builder
-
-import (
-	"testing"
-
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-func TestCsvToMap(t *testing.T) {
-	d := []string{
-		"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
-		"namespace=default",
-	}
-	r, err := csvToMap(d)
-
-	require.NoError(t, err)
-
-	require.Contains(t, r, "tolerations")
-	require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
-
-	require.Contains(t, r, "replicas")
-	require.Equal(t, r["replicas"], "1")
-
-	require.Contains(t, r, "namespace")
-	require.Equal(t, r["namespace"], "default")
-}
-
-func TestParseBuildkitdFlags(t *testing.T) {
-	testCases := []struct {
-		name       string
-		flags      string
-		driver     string
-		driverOpts map[string]string
-		expected   []string
-		wantErr    bool
-	}{
-		{
-			"docker-container no flags",
-			"",
-			"docker-container",
-			nil,
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-			},
-			false,
-		},
-		{
-			"kubernetes no flags",
-			"",
-			"kubernetes",
-			nil,
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-			},
-			false,
-		},
-		{
-			"remote no flags",
-			"",
-			"remote",
-			nil,
-			nil,
-			false,
-		},
-		{
-			"docker-container with insecure flag",
-			"--allow-insecure-entitlement=security.insecure",
-			"docker-container",
-			nil,
-			[]string{
-				"--allow-insecure-entitlement=security.insecure",
-			},
-			false,
-		},
-		{
-			"docker-container with insecure and host flag",
-			"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
-			"docker-container",
-			nil,
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-				"--allow-insecure-entitlement=security.insecure",
-			},
-			false,
-		},
-		{
-			"docker-container with network host opt",
-			"",
-			"docker-container",
-			map[string]string{"network": "host"},
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-			},
-			false,
-		},
-		{
-			"docker-container with host flag and network host opt",
-			"--allow-insecure-entitlement=network.host",
-			"docker-container",
-			map[string]string{"network": "host"},
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-			},
-			false,
-		},
-		{
-			"docker-container with insecure, host flag and network host opt",
-			"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
-			"docker-container",
-			map[string]string{"network": "host"},
-			[]string{
-				"--allow-insecure-entitlement=network.host",
-				"--allow-insecure-entitlement=security.insecure",
-			},
-			false,
-		},
-		{
-			"error parsing flags",
-			"foo'",
-			"docker-container",
-			nil,
-			nil,
-			true,
-		},
-	}
-	for _, tt := range testCases {
-		tt := tt
-		t.Run(tt.name, func(t *testing.T) {
-			flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
-			if tt.wantErr {
-				require.Error(t, err)
-				return
-			}
-			require.NoError(t, err)
-			assert.Equal(t, tt.expected, flags)
-		})
-	}
-}
```
111
builder/node.go
111
builder/node.go
```diff
@@ -2,11 +2,7 @@ package builder
 import (
 	"context"
-	"encoding/json"
-	"sort"
-	"strings"
 
-	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/driver"
 	ctxkube "github.com/docker/buildx/driver/kubernetes/context"
 	"github.com/docker/buildx/store"
@@ -14,7 +10,6 @@ import (
 	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/buildx/util/platformutil"
-	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/util/grpcerrors"
 	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
@@ -25,19 +20,13 @@ import (
 type Node struct {
 	store.Node
-	Builder     string
-	Driver      *driver.DriverHandle
+	Driver      driver.Driver
 	DriverInfo  *driver.Info
+	Platforms   []ocispecs.Platform
 	ImageOpt    imagetools.Opt
 	ProxyConfig map[string]string
 	Version     string
 	Err         error
-
-	// worker settings
-	IDs       []string
-	Platforms []ocispecs.Platform
-	GCPolicy  []client.PruneInfo
-	Labels    map[string]string
 }
 
 // Nodes returns nodes for this builder.
@@ -45,35 +34,9 @@ func (b *Builder) Nodes() []Node {
 	return b.nodes
 }
 
-type LoadNodesOption func(*loadNodesOptions)
-
-type loadNodesOptions struct {
-	data     bool
-	dialMeta map[string][]string
-}
-
-func WithData() LoadNodesOption {
-	return func(o *loadNodesOptions) {
-		o.data = true
-	}
-}
-
-func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
-	return func(o *loadNodesOptions) {
-		o.dialMeta = dialMeta
-	}
-}
-
 // LoadNodes loads and returns nodes for this builder.
 // TODO: this should be a method on a Node object and lazy load data for each driver.
-func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
-	lno := loadNodesOptions{
-		data: false,
-	}
-	for _, opt := range opts {
-		opt(&lno)
-	}
-
+func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
 	eg, _ := errgroup.WithContext(ctx)
 	b.nodes = make([]Node, len(b.NodeGroup.Nodes))
@@ -83,7 +46,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 		}
 	}()
 
-	factory, err := b.Factory(ctx, lno.dialMeta)
+	factory, err := b.Factory(ctx)
 	if err != nil {
 		return nil, err
 	}
@@ -100,7 +63,6 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 				Node:        n,
 				ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
 				Platforms:   n.Platforms,
-				Builder:     b.Name,
 			}
 			defer func() {
 				b.nodes[i] = node
@@ -115,12 +77,12 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 				contextStore := b.opts.dockerCli.ContextStore()
 
 				var kcc driver.KubeClientConfig
-				kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
+				kcc, err = ctxkube.ConfigFromContext(n.Endpoint, contextStore)
 				if err != nil {
 					// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
 					// try again with name="default".
 					// FIXME(@AkihiroSuda): n should retain real context name.
-					kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
+					kcc, err = ctxkube.ConfigFromContext("default", contextStore)
 					if err != nil {
 						logrus.Error(err)
 					}
@@ -142,7 +104,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 				}
 			}
 
-			d, err := driver.GetDriver(ctx, driver.BuilderName(n.Name), factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
+			d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
 			if err != nil {
 				node.Err = err
 				return nil
@@ -150,7 +112,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 			node.Driver = d
 			node.ImageOpt = imageopt
 
-			if lno.data {
+			if withData {
 				if err := node.loadData(ctx); err != nil {
 					node.Err = err
 				}
@@ -165,7 +127,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 	}
 
 	// TODO: This should be done in the routine loading driver data
-	if lno.data {
+	if withData {
 		kubernetesDriverCount := 0
 		for _, d := range b.nodes {
 			if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
@@ -186,7 +148,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 				if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
 					diClone.Platforms = pl
 				}
-				nodes = append(nodes, diClone)
+				nodes = append(nodes, di)
 			}
 			dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
 		}
@@ -202,51 +164,6 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 	return b.nodes, nil
 }
 
-func (n *Node) MarshalJSON() ([]byte, error) {
-	var status string
-	if n.DriverInfo != nil {
-		status = n.DriverInfo.Status.String()
-	}
-	var nerr string
-	if n.Err != nil {
-		status = "error"
-		nerr = strings.TrimSpace(n.Err.Error())
-	}
-	var pp []string
-	for _, p := range n.Platforms {
-		pp = append(pp, platforms.Format(p))
-	}
-	return json.Marshal(struct {
-		Name           string
-		Endpoint       string
-		BuildkitdFlags []string           `json:"Flags,omitempty"`
-		DriverOpts     map[string]string  `json:",omitempty"`
-		Files          map[string][]byte  `json:",omitempty"`
-		Status         string             `json:",omitempty"`
-		ProxyConfig    map[string]string  `json:",omitempty"`
-		Version        string             `json:",omitempty"`
-		Err            string             `json:",omitempty"`
-		IDs            []string           `json:",omitempty"`
-		Platforms      []string           `json:",omitempty"`
-		GCPolicy       []client.PruneInfo `json:",omitempty"`
-		Labels         map[string]string  `json:",omitempty"`
-	}{
-		Name:           n.Name,
-		Endpoint:       n.Endpoint,
-		BuildkitdFlags: n.BuildkitdFlags,
-		DriverOpts:     n.DriverOpts,
-		Files:          n.Files,
-		Status:         status,
-		ProxyConfig:    n.ProxyConfig,
-		Version:        n.Version,
-		Err:            nerr,
-		IDs:            n.IDs,
-		Platforms:      pp,
-		GCPolicy:       n.GCPolicy,
-		Labels:         n.Labels,
-	})
-}
-
 func (n *Node) loadData(ctx context.Context) error {
 	if n.Driver == nil {
 		return nil
@@ -265,15 +182,9 @@ func (n *Node) loadData(ctx context.Context) error {
 	if err != nil {
 		return errors.Wrap(err, "listing workers")
 	}
-	for idx, w := range workers {
-		n.IDs = append(n.IDs, w.ID)
+	for _, w := range workers {
 		n.Platforms = append(n.Platforms, w.Platforms...)
-		if idx == 0 {
-			n.GCPolicy = w.GCPolicy
-			n.Labels = w.Labels
-		}
 	}
-	sort.Strings(n.IDs)
 	n.Platforms = platformutil.Dedupe(n.Platforms)
 	inf, err := driverClient.Info(ctx)
 	if err != nil {
```
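For context on the `LoadNodes` signature change above: the newer side of the diff replaces the plain `withData bool` parameter with functional options (`WithData`, `WithDialMeta`). The idiom keeps call sites readable and lets new options be added without breaking existing callers. A minimal illustrative sketch of the pattern (the names mirror the diff, but this is not the real builder package):

```go
package main

import "fmt"

// loadNodesOptions collects all optional settings in one struct.
type loadNodesOptions struct {
	data     bool
	dialMeta map[string][]string
}

// LoadNodesOption mutates the option struct; callers pass zero or more.
type LoadNodesOption func(*loadNodesOptions)

func WithData() LoadNodesOption {
	return func(o *loadNodesOptions) { o.data = true }
}

func WithDialMeta(m map[string][]string) LoadNodesOption {
	return func(o *loadNodesOptions) { o.dialMeta = m }
}

// loadNodes starts from zero-value defaults, then applies each option in order.
func loadNodes(opts ...LoadNodesOption) loadNodesOptions {
	lno := loadNodesOptions{}
	for _, opt := range opts {
		opt(&lno)
	}
	return lno
}

func main() {
	o := loadNodes(WithData())
	fmt.Println(o.data, o.dialMeta == nil)
}
```

A caller that previously wrote `b.LoadNodes(ctx, true)` becomes `b.LoadNodes(ctx, builder.WithData())`, and adding dial metadata later needs no signature change.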
```diff
@@ -1,12 +1,11 @@
 package main
 
 import (
-	"context"
 	"fmt"
 	"os"
 
+	"github.com/containerd/containerd/pkg/seed"
 	"github.com/docker/buildx/commands"
-	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli-plugins/manager"
@@ -16,12 +15,11 @@ import (
 	cliflags "github.com/docker/cli/cli/flags"
 	"github.com/moby/buildkit/solver/errdefs"
 	"github.com/moby/buildkit/util/stack"
-	"go.opentelemetry.io/otel"
-
-	//nolint:staticcheck // vendored dependencies may still use this
-	"github.com/containerd/containerd/pkg/seed"
 
+	_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
+	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
 	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
+	_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
 
 	_ "github.com/docker/buildx/driver/docker"
 	_ "github.com/docker/buildx/driver/docker-container"
@@ -30,9 +28,7 @@ import (
 )
 
 func init() {
-	//nolint:staticcheck
 	seed.WithTimeAndRand()
-
 	stack.SetVersionInfo(version.Version, version.Revision)
 }
 
@@ -40,27 +36,10 @@ func runStandalone(cmd *command.DockerCli) error {
 	if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
 		return err
 	}
-	defer flushMetrics(cmd)
-
 	rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
 	return rootCmd.Execute()
 }
 
-// flushMetrics will manually flush metrics from the configured
-// meter provider. This is needed when running in standalone mode
-// because the meter provider is initialized by the cli library,
-// but the mechanism for forcing it to report is not presently
-// exposed and not invoked when run in standalone mode.
-// There are plans to fix that in the next release, but this is
-// needed temporarily until the API for this is more thorough.
-func flushMetrics(cmd *command.DockerCli) {
-	if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
-		if err := mp.ForceFlush(context.Background()); err != nil {
-			otel.Handle(err)
-		}
-	}
-}
-
 func runPlugin(cmd *command.DockerCli) error {
 	rootCmd := commands.NewRootCmd("buildx", true, cmd)
 	return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
@@ -106,9 +85,6 @@ func main() {
 	} else {
 		fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
 	}
-	if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
-		ebr.Print(cmd.Err())
-	}
-
 	os.Exit(1)
 }
```
```diff
@@ -4,6 +4,7 @@ import (
 	"github.com/moby/buildkit/util/tracing/detect"
 	"go.opentelemetry.io/otel"
 
+	_ "github.com/moby/buildkit/util/tracing/detect/delegated"
 	_ "github.com/moby/buildkit/util/tracing/env"
 )
```
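The underscore imports in the hunks above (`_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"`, `_ "github.com/docker/buildx/driver/docker-container"`, the trace detectors) rely on Go's blank-import side effects: the imported package is never referenced, but its `init` function runs and registers itself into a shared registry. A small sketch of that registration pattern, with a registry invented for illustration:

```go
package main

import "fmt"

// registry maps driver names to constructors; packages add themselves in init.
var registry = map[string]func() string{}

func register(name string, f func() string) {
	registry[name] = f
}

// In real code this init would live in its own package, pulled in from main
// with a blank import such as `_ "example.com/driver/dockercontainer"`.
func init() {
	register("docker-container", func() string { return "container driver" })
}

func main() {
	// A lookup succeeds only because the blank-imported package's init ran.
	if f, ok := registry["docker-container"]; ok {
		fmt.Println(f())
	}
}
```

This is why the diff can add or remove whole drivers and auth plugins by editing only the import block: availability is decided at link time, not by any call site.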
commands/bake.go (218 changed lines)
```diff
@@ -4,44 +4,33 @@ import (
 	"context"
 	"encoding/json"
 	"fmt"
-	"io"
 	"os"
-	"strings"
 
-	"github.com/containerd/console"
 	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/bake"
 	"github.com/docker/buildx/build"
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/localstate"
 	"github.com/docker/buildx/util/buildflags"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/confutil"
-	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/buildx/util/tracing"
 	"github.com/docker/cli/cli/command"
-	"github.com/moby/buildkit/identity"
-	"github.com/moby/buildkit/util/progress/progressui"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
 
 type bakeOptions struct {
 	files     []string
 	overrides []string
 	printOnly bool
-	sbom       string
-	provenance string
-
-	builder      string
-	metadataFile string
-	exportPush   bool
-	exportLoad   bool
+	commonOptions
 }
 
-func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
+func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
+	ctx := appcontext.Context()
+
 	ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
 	if err != nil {
 		return err
@@ -54,11 +43,11 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 	cmdContext := "cwd://"
 
 	if len(targets) > 0 {
-		if build.IsRemoteURL(targets[0]) {
+		if bake.IsRemoteURL(targets[0]) {
 			url = targets[0]
 			targets = targets[1:]
 			if len(targets) > 0 {
-				if build.IsRemoteURL(targets[0]) {
+				if bake.IsRemoteURL(targets[0]) {
 					cmdContext = targets[0]
 					targets = targets[1:]
 				}
@@ -72,16 +61,18 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 
 	overrides := in.overrides
 	if in.exportPush {
+		if in.exportLoad {
+			return errors.Errorf("push and load may not be set together at the moment")
+		}
 		overrides = append(overrides, "*.push=true")
+	} else if in.exportLoad {
+		overrides = append(overrides, "*.output=type=docker")
 	}
-	if in.exportLoad {
-		overrides = append(overrides, "*.load=true")
+	if in.noCache != nil {
+		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
 	}
-	if cFlags.noCache != nil {
-		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
-	}
-	if cFlags.pull != nil {
-		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *cFlags.pull))
+	if in.pull != nil {
+		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
 	}
 	if in.sbom != "" {
 		overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
@@ -93,9 +84,23 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 
 	ctx2, cancel := context.WithCancel(context.TODO())
 	defer cancel()
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+	if err != nil {
+		return err
+	}
+
+	defer func() {
+		if printer != nil {
+			err1 := printer.Wait()
+			if err == nil {
+				err = err1
+			}
+		}
+	}()
+
 	var nodes []builder.Node
-	var progressConsoleDesc, progressTextDesc string
+	var files []bake.File
+	var inp *bake.Input
 
 	// instance only needed for reading remote bake files or building
 	if url != "" || !in.printOnly {
@@ -109,51 +114,24 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
 			return errors.Wrapf(err, "failed to update builder last activity time")
 		}
-		nodes, err = b.LoadNodes(ctx)
+		nodes, err = b.LoadNodes(ctx, false)
 		if err != nil {
 			return err
 		}
-		progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
-		progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
 	}
 
-	var term bool
-	if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
-		term = true
+	if url != "" {
+		files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
+	} else {
+		files, err = bake.ReadLocalFiles(in.files)
 	}
-
-	progressMode := progressui.DisplayMode(cFlags.progress)
-	printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
-		progress.WithDesc(progressTextDesc, progressConsoleDesc),
-	)
 	if err != nil {
 		return err
 	}
-
-	defer func() {
-		if printer != nil {
-			err1 := printer.Wait()
-			if err == nil {
-				err = err1
-			}
-			if err == nil && progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
-				desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
-			}
-		}
-	}()
-
-	files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
-	if err != nil {
-		return err
-	}
-
-	if len(files) == 0 {
-		return errors.New("couldn't find a bake definition")
-	}
-
 	tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
 		// don't forget to update documentation if you add a new
-		// built-in variable: docs/bake-reference.md#built-in-variables
+		// built-in variable: docs/manuals/bake/file-definition.md#built-in-variables
 		"BAKE_CMD_CONTEXT":    cmdContext,
 		"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
 	})
@@ -161,35 +139,20 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		return err
 	}
 
-	if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
-		// TODO: extract env var parsing to a method easily usable by library consumers
-		for _, t := range tgts {
-			if _, ok := t.Args["SOURCE_DATE_EPOCH"]; ok {
-				continue
-			}
-			if t.Args == nil {
-				t.Args = map[string]*string{}
-			}
-			t.Args["SOURCE_DATE_EPOCH"] = &v
-		}
-	}
-
 	// this function can update target context string from the input so call before printOnly check
 	bo, err := bake.TargetsToBuildOpt(tgts, inp)
 	if err != nil {
 		return err
 	}
 
-	def := struct {
-		Group  map[string]*bake.Group  `json:"group,omitempty"`
-		Target map[string]*bake.Target `json:"target"`
-	}{
-		Group:  grps,
-		Target: tgts,
-	}
-
 	if in.printOnly {
-		dt, err := json.MarshalIndent(def, "", "  ")
+		dt, err := json.MarshalIndent(struct {
+			Group  map[string]*bake.Group  `json:"group,omitempty"`
+			Target map[string]*bake.Target `json:"target"`
+		}{
+			grps,
+			tgts,
+		}, "", "  ")
 		if err != nil {
 			return err
 		}
@@ -202,28 +165,6 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 		return nil
 	}
 
-	groupRef := identity.NewID()
-	var refs []string
-	for k, b := range bo {
-		b.Ref = identity.NewID()
-		b.GroupRef = groupRef
-		b.WithProvenanceResponse = len(in.metadataFile) > 0
-		refs = append(refs, b.Ref)
-		bo[k] = b
-	}
-	dt, err := json.Marshal(def)
-	if err != nil {
-		return err
-	}
-	if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
-		Definition: dt,
-		Targets:    targets,
-		Inputs:     overrides,
-		Refs:       refs,
-	}); err != nil {
-		return err
-	}
-
 	resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
 	if err != nil {
 		return wrapBuildError(err, true)
@@ -244,7 +185,6 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 
 func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options bakeOptions
-	var cFlags commonFlags
 
 	cmd := &cobra.Command{
 		Use:   "bake [OPTIONS] [TARGET...]",
@@ -253,17 +193,14 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			// reset to nil to avoid override is unset
 			if !cmd.Flags().Lookup("no-cache").Changed {
-				cFlags.noCache = nil
+				options.noCache = nil
 			}
 			if !cmd.Flags().Lookup("pull").Changed {
-				cFlags.pull = nil
+				options.pull = nil
 			}
-			options.builder = rootOpts.builder
-			options.metadataFile = cFlags.metadataFile
-			// Other common flags (noCache, pull and progress) are processed in runBake function.
-			return runBake(cmd.Context(), dockerCli, args, options, cFlags)
+			options.commonOptions.builder = rootOpts.builder
+			return runBake(dockerCli, args, options)
 		},
-		ValidArgsFunction: completion.BakeTargets(options.files),
 	}
 
 	flags := cmd.Flags()
@@ -276,58 +213,7 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
 	flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
 
-	commonBuildFlags(&cFlags, flags)
+	commonBuildFlags(&options.commonOptions, flags)
 
 	return cmd
 }
 
-func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
-	l, err := localstate.New(confutil.ConfigDir(dockerCli))
-	if err != nil {
-		return err
-	}
-	return l.SaveGroup(ref, lsg)
-}
-
-func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
-	var lnames []string // local
-	var rnames []string // remote
-	var anames []string // both
-	for _, v := range names {
-		if strings.HasPrefix(v, "cwd://") {
-			tname := strings.TrimPrefix(v, "cwd://")
-			lnames = append(lnames, tname)
-			anames = append(anames, tname)
-		} else {
-			rnames = append(rnames, v)
-			anames = append(anames, v)
-		}
-	}
-
-	if url != "" {
-		var rfiles []bake.File
-		rfiles, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, rnames, pw)
-		if err != nil {
-			return nil, nil, err
-		}
-		files = append(files, rfiles...)
-	}
-
-	if len(lnames) > 0 || url == "" {
-		var lfiles []bake.File
-		progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
-			if url != "" {
-				lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
-			} else {
-				lfiles, err = bake.ReadLocalFiles(anames, stdin, sub)
-			}
-			return nil
-		})
-		if err != nil {
-			return nil, nil, err
-		}
-		files = append(files, lfiles...)
-	}
-
-	return
-}
```
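The override handling in the bake diff above translates CLI flags into `--set`-style patterns applied to every target: the v0.10 side rejects `--push` together with `--load` and maps `--load` to `*.output=type=docker`, while nil pointers for `no-cache` and `pull` mean "flag not passed". A standalone sketch of that v0.10-style translation, covering only the four flags visible in the diff:

```go
package main

import (
	"errors"
	"fmt"
)

// bakeOverrides mirrors the v0.10 logic shown above: push and load are
// mutually exclusive, and a nil pointer means the flag was not set.
func bakeOverrides(push, load bool, noCache, pull *bool) ([]string, error) {
	var overrides []string
	if push {
		if load {
			return nil, errors.New("push and load may not be set together at the moment")
		}
		overrides = append(overrides, "*.push=true")
	} else if load {
		// --load becomes an output override applied to all targets ("*").
		overrides = append(overrides, "*.output=type=docker")
	}
	if noCache != nil {
		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *noCache))
	}
	if pull != nil {
		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *pull))
	}
	return overrides, nil
}

func main() {
	t := true
	o, err := bakeOverrides(false, true, &t, nil)
	fmt.Println(o, err)
}
```

The `*.` prefix is a target pattern: the same mechanism the user-facing `--set targetpattern.key=value` flag uses, here applied to all targets at once.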
commands/build.go (1195 changed lines): file diff suppressed because it is too large
commands/create.go
@@ -3,72 +3,283 @@ package commands

 import (
 	"bytes"
 	"context"
+	"encoding/csv"
 	"fmt"
+	"net/url"
+	"os"
+	"strings"
+	"time"

 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/driver"
+	remoteutil "github.com/docker/buildx/driver/remote/util"
+	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/cobrautil/completion"
+	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	dopts "github.com/docker/cli/opts"
+	"github.com/google/shlex"
+	"github.com/moby/buildkit/util/appcontext"
+	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
 )

 type createOptions struct {
 	name         string
 	driver       string
 	nodeName     string
 	platform     []string
 	actionAppend bool
 	actionLeave  bool
 	use          bool
-	driverOpts          []string
-	buildkitdFlags      string
-	buildkitdConfigFile string
+	flags        string
+	configFile   string
+	driverOpts   []string
 	bootstrap    bool
 	// upgrade      bool // perform upgrade of the driver
 }

-func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
+func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
+	ctx := appcontext.Context()
+
+	if in.name == "default" {
+		return errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
+	}
+
+	if in.actionLeave {
+		if in.name == "" {
+			return errors.Errorf("leave requires instance name")
+		}
+		if in.nodeName == "" {
+			return errors.Errorf("leave requires node name but --node not set")
+		}
+	}
+
+	if in.actionAppend {
+		if in.name == "" {
+			logrus.Warnf("append used without name, creating a new instance instead")
+		}
+	}
+
 	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
 	}
-	// Ensure the file lock gets released no matter what happens.
 	defer release()

-	if in.actionLeave {
-		return builder.Leave(ctx, txn, dockerCli, builder.LeaveOpts{
-			Name:     in.name,
-			NodeName: in.nodeName,
-		})
+	name := in.name
+	if name == "" {
+		name, err = store.GenerateName(txn)
+		if err != nil {
+			return err
+		}
+	}
+
+	if !in.actionLeave && !in.actionAppend {
+		contexts, err := dockerCli.ContextStore().List()
+		if err != nil {
+			return err
+		}
+		for _, c := range contexts {
+			if c.Name == name {
+				logrus.Warnf("instance name %q already exists as context builder", name)
+				break
+			}
+		}
+	}
+
+	ng, err := txn.NodeGroupByName(name)
+	if err != nil {
+		if os.IsNotExist(errors.Cause(err)) {
+			if in.actionAppend && in.name != "" {
+				logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
+			}
+			if in.actionLeave {
+				return errors.Errorf("failed to find instance %q for leave", in.name)
+			}
+		} else {
+			return err
+		}
+	}
+
+	buildkitHost := os.Getenv("BUILDKIT_HOST")
+
+	driverName := in.driver
+	if driverName == "" {
+		if ng != nil {
+			driverName = ng.Driver
+		} else if len(args) == 0 && buildkitHost != "" {
+			driverName = "remote"
+		} else {
+			var arg string
+			if len(args) > 0 {
+				arg = args[0]
+			}
+			f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
+			if err != nil {
+				return err
+			}
+			if f == nil {
+				return errors.Errorf("no valid drivers found")
+			}
+			driverName = f.Name()
+		}
+	}
+
+	if ng != nil {
+		if in.nodeName == "" && !in.actionAppend {
+			return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
+		}
+		if driverName != ng.Driver {
+			return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
+		}
+	}
+
+	if _, err := driver.GetFactory(driverName, true); err != nil {
+		return err
+	}
+
+	ngOriginal := ng
+	if ngOriginal != nil {
+		ngOriginal = ngOriginal.Copy()
+	}
+
+	if ng == nil {
+		ng = &store.NodeGroup{
+			Name:   name,
+			Driver: driverName,
+		}
+	}
+
+	var flags []string
+	if in.flags != "" {
+		flags, err = shlex.Split(in.flags)
+		if err != nil {
+			return errors.Wrap(err, "failed to parse buildkit flags")
+		}
 	}

 	var ep string
-	if len(args) > 0 {
-		ep = args[0]
+	var setEp bool
+	if in.actionLeave {
+		if err := ng.Leave(in.nodeName); err != nil {
+			return err
+		}
+	} else {
+		switch {
+		case driverName == "kubernetes":
+			if len(args) > 0 {
+				logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
+			}
+			// naming endpoint to make --append works
+			ep = (&url.URL{
+				Scheme: driverName,
+				Path:   "/" + in.name,
+				RawQuery: (&url.Values{
+					"deployment": {in.nodeName},
+					"kubeconfig": {os.Getenv("KUBECONFIG")},
+				}).Encode(),
+			}).String()
+			setEp = false
+		case driverName == "remote":
+			if len(args) > 0 {
+				ep = args[0]
+			} else if buildkitHost != "" {
+				ep = buildkitHost
+			} else {
+				return errors.Errorf("no remote endpoint provided")
+			}
+			ep, err = validateBuildkitEndpoint(ep)
+			if err != nil {
+				return err
+			}
+			setEp = true
+		case len(args) > 0:
+			ep, err = validateEndpoint(dockerCli, args[0])
+			if err != nil {
+				return err
+			}
+			setEp = true
+		default:
+			if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
+				return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
+			}
+			ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
+			if err != nil {
+				return err
+			}
+			setEp = false
+		}
+
+		m, err := csvToMap(in.driverOpts)
+		if err != nil {
+			return err
+		}
+
+		if in.configFile == "" {
+			// if buildkit config is not provided, check if the default one is
+			// available and use it
+			if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
+				logrus.Warnf("Using default BuildKit config in %s", f)
+				in.configFile = f
+			}
+		}
+
+		if err := ng.Update(in.nodeName, ep, in.platform, setEp, in.actionAppend, flags, in.configFile, m); err != nil {
+			return err
+		}
 	}

-	b, err := builder.Create(ctx, txn, dockerCli, builder.CreateOpts{
-		Name:                in.name,
-		Driver:              in.driver,
-		NodeName:            in.nodeName,
-		Platforms:           in.platform,
-		DriverOpts:          in.driverOpts,
-		BuildkitdFlags:      in.buildkitdFlags,
-		BuildkitdConfigFile: in.buildkitdConfigFile,
-		Use:                 in.use,
-		Endpoint:            ep,
-		Append:              in.actionAppend,
-	})
+	if err := txn.Save(ng); err != nil {
+		return err
+	}
+
+	b, err := builder.New(dockerCli,
+		builder.WithName(ng.Name),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}

-	// The store is no longer used from this point.
-	// Release it so we aren't holding the file lock during the boot.
-	release()
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	defer cancel()
+
+	nodes, err := b.LoadNodes(timeoutCtx, true)
+	if err != nil {
+		return err
+	}
+
+	for _, node := range nodes {
+		if err := node.Err; err != nil {
+			err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
+			var err2 error
+			if ngOriginal == nil {
+				err2 = txn.Remove(ng.Name)
+			} else {
+				err2 = txn.Save(ngOriginal)
+			}
+			if err2 != nil {
+				logrus.Warnf("Could not rollback to previous state: %s", err2)
+			}
+			return err
+		}
+	}
+
+	if in.use && ep != "" {
+		current, err := dockerutil.GetCurrentEndpoint(dockerCli)
+		if err != nil {
+			return err
+		}
+		if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
+			return err
+		}
+	}

 	if in.bootstrap {
 		if _, err = b.Boot(ctx); err != nil {
@@ -76,7 +287,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 		}
 	}

-	fmt.Printf("%s\n", b.Name)
+	fmt.Printf("%s\n", ng.Name)
 	return nil
 }

@@ -96,9 +307,8 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 		Short: "Create a new builder instance",
 		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runCreate(cmd.Context(), dockerCli, options, args)
+			return runCreate(dockerCli, options, args)
 		},
-		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
@@ -106,16 +316,12 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 	flags.StringVar(&options.name, "name", "", "Builder instance name")
 	flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
 	flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
+	flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
+	flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
 	flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
 	flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
-	flags.StringVar(&options.buildkitdFlags, "buildkitd-flags", "", "BuildKit daemon flags")
-
-	// we allow for both "--config" and "--buildkitd-config", although the latter is the recommended way to avoid ambiguity.
-	flags.StringVar(&options.buildkitdConfigFile, "buildkitd-config", "", "BuildKit daemon config file")
-	flags.StringVar(&options.buildkitdConfigFile, "config", "", "BuildKit daemon config file")
-	flags.MarkHidden("config")

 	flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")

 	flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
 	flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
 	flags.BoolVar(&options.use, "use", false, "Set the current builder instance")
@@ -125,3 +331,49 @@ func createCmd(dockerCli command.Cli) *cobra.Command {

 	return cmd
 }
+
+func csvToMap(in []string) (map[string]string, error) {
+	if len(in) == 0 {
+		return nil, nil
+	}
+	m := make(map[string]string, len(in))
+	for _, s := range in {
+		csvReader := csv.NewReader(strings.NewReader(s))
+		fields, err := csvReader.Read()
+		if err != nil {
+			return nil, err
+		}
+		for _, v := range fields {
+			p := strings.SplitN(v, "=", 2)
+			if len(p) != 2 {
+				return nil, errors.Errorf("invalid value %q, expecting k=v", v)
+			}
+			m[p[0]] = p[1]
+		}
+	}
+	return m, nil
+}
+
+// validateEndpoint validates that endpoint is either a context or a docker host
+func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
+	dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
+	if err == nil && dem != nil {
+		if ep == "default" {
+			return dem.Host, nil
+		}
+		return ep, nil
+	}
+	h, err := dopts.ParseHost(true, ep)
+	if err != nil {
+		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
+	}
+	return h, nil
+}
+
+// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
+func validateBuildkitEndpoint(ep string) (string, error) {
+	if err := remoteutil.IsValidEndpoint(ep); err != nil {
+		return "", err
+	}
+	return ep, nil
+}
commands/create_test.go (new file, 26 lines)
@@ -0,0 +1,26 @@
+package commands
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestCsvToMap(t *testing.T) {
+	d := []string{
+		"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
+		"namespace=default",
+	}
+	r, err := csvToMap(d)
+
+	require.NoError(t, err)
+
+	require.Contains(t, r, "tolerations")
+	require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
+
+	require.Contains(t, r, "replicas")
+	require.Equal(t, r["replicas"], "1")
+
+	require.Contains(t, r, "namespace")
+	require.Equal(t, r["namespace"], "default")
+}
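For reference, the `--driver-opt` parsing exercised by TestCsvToMap above can be run standalone. The `csvToMap` function below is a copy of the helper added to commands/create.go in this compare: each argument is read as one CSV record of `k=v` fields, so quoting a field lets a single value carry commas.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// csvToMap is copied from the helper added in commands/create.go:
// every input string is parsed as one CSV record of k=v fields.
func csvToMap(in []string) (map[string]string, error) {
	if len(in) == 0 {
		return nil, nil
	}
	m := make(map[string]string, len(in))
	for _, s := range in {
		fields, err := csv.NewReader(strings.NewReader(s)).Read()
		if err != nil {
			return nil, err
		}
		for _, v := range fields {
			p := strings.SplitN(v, "=", 2)
			if len(p) != 2 {
				return nil, fmt.Errorf("invalid value %q, expecting k=v", v)
			}
			m[p[0]] = p[1]
		}
	}
	return m, nil
}

func main() {
	// Same fixture as TestCsvToMap: the quoted field keeps its inner commas.
	m, err := csvToMap([]string{
		`"tolerations=key=foo,value=bar;key=foo2,value=bar2",replicas=1`,
		"namespace=default",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(m["tolerations"]) // key=foo,value=bar;key=foo2,value=bar2
	fmt.Println(m["replicas"], m["namespace"]) // 1 default
}
```

The only deviation from the original is using `fmt.Errorf` instead of `errors.Errorf` from github.com/pkg/errors, to keep the sketch dependency-free.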
commands/debug/root.go (deleted file)
@@ -1,92 +0,0 @@
-package debug
-
-import (
-	"context"
-	"os"
-	"runtime"
-
-	"github.com/containerd/console"
-	"github.com/docker/buildx/controller"
-	"github.com/docker/buildx/controller/control"
-	controllerapi "github.com/docker/buildx/controller/pb"
-	"github.com/docker/buildx/monitor"
-	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/progress"
-	"github.com/docker/cli/cli/command"
-	"github.com/moby/buildkit/util/progress/progressui"
-	"github.com/pkg/errors"
-	"github.com/sirupsen/logrus"
-	"github.com/spf13/cobra"
-)
-
-// DebugConfig is a user-specified configuration for the debugger.
-type DebugConfig struct {
-	// InvokeFlag is a flag to configure the launched debugger and the commaned executed on the debugger.
-	InvokeFlag string
-
-	// OnFlag is a flag to configure the timing of launching the debugger.
-	OnFlag string
-}
-
-// DebuggableCmd is a command that supports debugger with recognizing the user-specified DebugConfig.
-type DebuggableCmd interface {
-	// NewDebugger returns the new *cobra.Command with support for the debugger with recognizing DebugConfig.
-	NewDebugger(*DebugConfig) *cobra.Command
-}
-
-func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
-	var controlOptions control.ControlOptions
-	var progressMode string
-	var options DebugConfig
-
-	cmd := &cobra.Command{
-		Use:   "debug",
-		Short: "Start debugger",
-		Args:  cobra.NoArgs,
-		RunE: func(cmd *cobra.Command, args []string) error {
-			printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
-			if err != nil {
-				return err
-			}
-
-			ctx := context.TODO()
-			c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
-			if err != nil {
-				return err
-			}
-			defer func() {
-				if err := c.Close(); err != nil {
-					logrus.Warnf("failed to close server connection %v", err)
-				}
-			}()
-			con := console.Current()
-			if err := con.SetRaw(); err != nil {
-				return errors.Errorf("failed to configure terminal: %v", err)
-			}
-
-			_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
-				Tty: true,
-			}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
-			con.Reset()
-			return err
-		},
-	}
-	cobrautil.MarkCommandExperimental(cmd)
-
-	flags := cmd.Flags()
-	flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
-	flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
-
-	flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
-	flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
-	flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
-	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
-
-	cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
-
-	for _, c := range children {
-		cmd.AddCommand(c.NewDebugger(&options))
-	}
-
-	return cmd
-}
commands/dial-stdio.go (deleted file)
@@ -1,131 +0,0 @@
-package commands
-
-import (
-	"io"
-	"net"
-	"os"
-
-	"github.com/containerd/containerd/platforms"
-	"github.com/docker/buildx/build"
-	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/progress"
-	"github.com/docker/cli/cli/command"
-	"github.com/moby/buildkit/util/appcontext"
-	"github.com/moby/buildkit/util/progress/progressui"
-	v1 "github.com/opencontainers/image-spec/specs-go/v1"
-	"github.com/pkg/errors"
-	"github.com/spf13/cobra"
-	"golang.org/x/sync/errgroup"
-)
-
-type stdioOptions struct {
-	builder  string
-	platform string
-	progress string
-}
-
-func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
-	ctx := appcontext.Context()
-
-	contextPathHash, _ := os.Getwd()
-	b, err := builder.New(dockerCli,
-		builder.WithName(opts.builder),
-		builder.WithContextPathHash(contextPathHash),
-	)
-	if err != nil {
-		return err
-	}
-
-	if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
-		return errors.Wrapf(err, "failed to update builder last activity time")
-	}
-	nodes, err := b.LoadNodes(ctx)
-	if err != nil {
-		return err
-	}
-
-	printer, err := progress.NewPrinter(ctx, os.Stderr, progressui.DisplayMode(opts.progress), progress.WithPhase("dial-stdio"), progress.WithDesc("builder: "+b.Name, "builder:"+b.Name))
-	if err != nil {
-		return err
-	}
-
-	var p *v1.Platform
-	if opts.platform != "" {
-		pp, err := platforms.Parse(opts.platform)
-		if err != nil {
-			return errors.Wrapf(err, "invalid platform %q", opts.platform)
-		}
-		p = &pp
-	}
-
-	defer printer.Wait()
-
-	return progress.Wrap("Proxying to builder", printer.Write, func(sub progress.SubLogger) error {
-		var conn net.Conn
-
-		err := sub.Wrap("Dialing builder", func() error {
-			conn, err = build.Dial(ctx, nodes, printer, p)
-			if err != nil {
-				return err
-			}
-			return nil
-		})
-		if err != nil {
-			return err
-		}
-
-		defer conn.Close()
-
-		go func() {
-			<-ctx.Done()
-			closeWrite(conn)
-		}()
-
-		var eg errgroup.Group
-
-		eg.Go(func() error {
-			_, err := io.Copy(conn, os.Stdin)
-			closeWrite(conn)
-			return err
-		})
-		eg.Go(func() error {
-			_, err := io.Copy(os.Stdout, conn)
-			closeRead(conn)
-			return err
-		})
-		return eg.Wait()
-	})
-}
-
-func closeRead(conn net.Conn) error {
-	if c, ok := conn.(interface{ CloseRead() error }); ok {
-		return c.CloseRead()
-	}
-	return conn.Close()
-}
-
-func closeWrite(conn net.Conn) error {
-	if c, ok := conn.(interface{ CloseWrite() error }); ok {
-		return c.CloseWrite()
-	}
-	return conn.Close()
-}
-
-func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
-	opts := stdioOptions{}
-
-	cmd := &cobra.Command{
-		Use:   "dial-stdio",
-		Short: "Proxy current stdio streams to builder instance",
-		Args:  cobra.NoArgs,
-		RunE: func(cmd *cobra.Command, args []string) error {
-			opts.builder = rootOpts.builder
-			return runDialStdio(dockerCli, opts)
-		},
-	}
-
-	flags := cmd.Flags()
-	flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
-	flags.StringVar(&opts.progress, "progress", "quiet", "Set type of progress output (auto, plain, tty).")
-	return cmd
-}
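The closeWrite/closeRead helpers in the deleted dial-stdio.go above implement TCP-style half-close for the stdio proxy: when stdin ends, the send direction is shut while the read direction keeps draining the builder's output. A minimal sketch of the same pattern, using a local TCP echo loop as a stand-in for the builder connection (echoRoundTrip is a name invented here for illustration):

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// closeWrite mirrors the helper from the deleted dial-stdio.go: prefer a
// half-close of the send side when the conn supports it (*net.TCPConn does),
// falling back to a full Close.
func closeWrite(conn net.Conn) error {
	if c, ok := conn.(interface{ CloseWrite() error }); ok {
		return c.CloseWrite()
	}
	return conn.Close()
}

// echoRoundTrip sends msg to a local TCP echo server, half-closes the write
// side to signal EOF, then drains the reply. The half-close is what lets the
// server's copy loop terminate while our read direction stays usable.
func echoRoundTrip(msg string) (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()

	go func() {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		io.Copy(c, c) // echo until the client half-closes
		c.Close()
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte(msg)); err != nil {
		return "", err
	}
	closeWrite(conn) // server sees EOF; our read side is still open
	b, err := io.ReadAll(conn)
	return string(b), err
}

func main() {
	out, err := echoRoundTrip("ping")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "ping"
}
```

Without the half-close, the echo server's `io.Copy` would never see EOF and both sides would block; this is the same reason the stdio proxy half-closes when `os.Stdin` is exhausted.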
commands/du.go
@@ -1,7 +1,6 @@
 package commands

 import (
-	"context"
 	"fmt"
 	"io"
 	"os"
@@ -10,12 +9,12 @@ import (
 	"time"

 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/opts"
 	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
 )
@@ -26,7 +25,9 @@ type duOptions struct {
 	verbose bool
 }

-func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
+func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
+	ctx := appcontext.Context()
+
 	pi, err := toBuildkitPruneInfo(opts.filter.Value())
 	if err != nil {
 		return err
@@ -37,7 +38,7 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) er
 		return err
 	}

-	nodes, err := b.LoadNodes(ctx)
+	nodes, err := b.LoadNodes(ctx, false)
 	if err != nil {
 		return err
 	}
@@ -112,9 +113,8 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 		Args: cli.NoArgs,
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.builder = rootOpts.builder
-			return runDiskUsage(cmd.Context(), dockerCli, options)
+			return runDiskUsage(dockerCli, options)
 		},
-		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
commands/imagetools/create.go
@@ -7,13 +7,12 @@ import (
 	"os"
 	"strings"

-	"github.com/distribution/reference"
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
-	"github.com/moby/buildkit/util/progress/progressui"
+	"github.com/docker/distribution/reference"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/opencontainers/go-digest"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
@@ -25,14 +24,12 @@ type createOptions struct {
 	builder      string
 	files        []string
 	tags         []string
-	annotations  []string
 	dryrun       bool
 	actionAppend bool
 	progress     string
-	preferIndex  bool
 }

-func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
+func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	if len(args) == 0 && len(in.files) == 0 {
 		return errors.Errorf("no sources specified")
 	}
@@ -113,6 +110,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 		}
 	}

+	ctx := appcontext.Context()
+
 	b, err := builder.New(dockerCli, builder.WithName(in.builder))
 	if err != nil {
 		return err
@@ -154,7 +153,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
 		}
 	}

-	dt, desc, err := r.Combine(ctx, srcs, in.annotations, in.preferIndex)
+	dt, desc, err := r.Combine(ctx, srcs)
 	if err != nil {
 		return err
 	}
@@ -169,7 +168,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg

 	ctx2, cancel := context.WithCancel(context.TODO())
 	defer cancel()
-	printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
 	if err != nil {
 		return err
 	}
@@ -272,9 +271,8 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 		Short: "Create a new image based on source images",
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.builder = *opts.Builder
-			return runCreate(cmd.Context(), dockerCli, options, args)
+			return runCreate(dockerCli, options, args)
 		},
-		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
@@ -283,8 +281,6 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
 	flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
 	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
-	flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
-	flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")

 	return cmd
 }
@@ -1,14 +1,12 @@
|
|||||||
package commands
|
package commands
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/builder"
|
"github.com/docker/buildx/builder"
|
||||||
"github.com/docker/buildx/util/cobrautil/completion"
|
|
||||||
"github.com/docker/buildx/util/imagetools"
|
"github.com/docker/buildx/util/imagetools"
|
||||||
"github.com/docker/cli-docs-tool/annotation"
|
"github.com/docker/cli-docs-tool/annotation"
|
||||||
"github.com/docker/cli/cli"
|
"github.com/docker/cli/cli"
|
||||||
"github.com/docker/cli/cli/command"
|
"github.com/docker/cli/cli/command"
|
||||||
|
"github.com/moby/buildkit/util/appcontext"
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
)
|
)
|
||||||
@@ -19,7 +17,9 @@ type inspectOptions struct {
 	raw bool
 }
 
-func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions, name string) error {
+func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
+	ctx := appcontext.Context()
+
 	if in.format != "" && in.raw {
 		return errors.Errorf("format and raw cannot be used together")
 	}
@@ -50,9 +50,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 		Args: cli.ExactArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.builder = *rootOpts.Builder
-			return runInspect(cmd.Context(), dockerCli, options, args[0])
+			return runInspect(dockerCli, options, args[0])
 		},
-		ValidArgsFunction: completion.Disable,
 	}
 
 	flags := cmd.Flags()
@@ -1,7 +1,6 @@
 package commands
 
 import (
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli/command"
 	"github.com/spf13/cobra"
 )
@@ -12,9 +11,8 @@ type RootOptions struct {
 
 func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "imagetools",
 		Short: "Commands to work on images in registry",
-		ValidArgsFunction: completion.Disable,
 	}
 
 	cmd.AddCommand(
@@ -4,19 +4,15 @@ import (
 	"context"
 	"fmt"
 	"os"
-	"sort"
 	"strings"
 	"text/tabwriter"
 	"time"
 
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/driver"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/debug"
-	"github.com/docker/go-units"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/spf13/cobra"
 )
 
@@ -25,7 +21,9 @@ type inspectOptions struct {
 	builder string
 }
 
-func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) error {
+func runInspect(dockerCli command.Cli, in inspectOptions) error {
+	ctx := appcontext.Context()
+
 	b, err := builder.New(dockerCli,
 		builder.WithName(in.builder),
 		builder.WithSkippedValidation(),
@@ -37,7 +35,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
 	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()
 
-	nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
+	nodes, err := b.LoadNodes(timeoutCtx, true)
 	if in.bootstrap {
 		var ok bool
 		ok, err = b.Boot(ctx)
@@ -45,7 +43,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
 			return err
 		}
 		if ok {
-			nodes, err = b.LoadNodes(timeoutCtx, builder.WithData())
+			nodes, err = b.LoadNodes(timeoutCtx, true)
 		}
 	}
 
@@ -84,48 +82,13 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
 				fmt.Fprintf(w, "Error:\t%s\n", err.Error())
 			} else {
 				fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
-				if len(n.BuildkitdFlags) > 0 {
-					fmt.Fprintf(w, "BuildKit daemon flags:\t%s\n", strings.Join(n.BuildkitdFlags, " "))
+				if len(n.Flags) > 0 {
+					fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
 				}
 				if nodes[i].Version != "" {
-					fmt.Fprintf(w, "BuildKit version:\t%s\n", nodes[i].Version)
+					fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
 				}
-				platforms := platformutil.FormatInGroups(n.Node.Platforms, n.Platforms)
-				if len(platforms) > 0 {
-					fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platforms, ", "))
-				}
-				if debug.IsEnabled() {
-					fmt.Fprintf(w, "Features:\n")
-					features := nodes[i].Driver.Features(ctx)
-					featKeys := make([]string, 0, len(features))
-					for k := range features {
-						featKeys = append(featKeys, string(k))
-					}
-					sort.Strings(featKeys)
-					for _, k := range featKeys {
-						fmt.Fprintf(w, "\t%s:\t%t\n", k, features[driver.Feature(k)])
-					}
-				}
-				if len(nodes[i].Labels) > 0 {
-					fmt.Fprintf(w, "Labels:\n")
-					for _, k := range sortedKeys(nodes[i].Labels) {
-						v := nodes[i].Labels[k]
-						fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
-					}
-				}
-				for ri, rule := range nodes[i].GCPolicy {
-					fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
-					fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
-					if len(rule.Filter) > 0 {
-						fmt.Fprintf(w, "\tFilters:\t%s\n", strings.Join(rule.Filter, " "))
-					}
-					if rule.KeepDuration > 0 {
-						fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
-					}
-					if rule.KeepBytes > 0 {
-						fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
-					}
-				}
+				fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
 			}
 		}
 	}
@@ -147,9 +110,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			if len(args) > 0 {
 				options.builder = args[0]
 			}
-			return runInspect(cmd.Context(), dockerCli, options)
+			return runInspect(dockerCli, options)
 		},
-		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	flags := cmd.Flags()
@@ -157,14 +119,3 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 
 	return cmd
 }
-
-func sortedKeys(m map[string]string) []string {
-	s := make([]string, len(m))
-	i := 0
-	for k := range m {
-		s[i] = k
-		i++
-	}
-	sort.Strings(s)
-	return s
-}
@@ -4,7 +4,6 @@ import (
 	"os"
 
 	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -15,7 +14,7 @@ import (
 type installOptions struct {
 }
 
-func runInstall(_ command.Cli, _ installOptions) error {
+func runInstall(dockerCli command.Cli, in installOptions) error {
 	dir := config.Dir()
 	if err := os.MkdirAll(dir, 0755); err != nil {
 		return errors.Wrap(err, "could not create docker config")
@@ -47,8 +46,7 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runInstall(dockerCli, options)
 		},
 		Hidden: true,
-		ValidArgsFunction: completion.Disable,
 	}
 
 	// hide builder persistent flag for this command
237	commands/ls.go
@@ -2,43 +2,29 @@ package commands
 
 import (
 	"context"
-	"encoding/json"
 	"fmt"
-	"sort"
+	"io"
 	"strings"
+	"text/tabwriter"
 	"time"
 
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/command/formatter"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
 )
 
-const (
-	lsNameNodeHeader       = "NAME/NODE"
-	lsDriverEndpointHeader = "DRIVER/ENDPOINT"
-	lsStatusHeader         = "STATUS"
-	lsLastActivityHeader   = "LAST ACTIVITY"
-	lsBuildkitHeader       = "BUILDKIT"
-	lsPlatformsHeader      = "PLATFORMS"
-
-	lsIndent = ` \_ `
-
-	lsDefaultTableFormat = "table {{.Name}}\t{{.DriverEndpoint}}\t{{.Status}}\t{{.Buildkit}}\t{{.Platforms}}"
-)
-
 type lsOptions struct {
-	format string
 }
 
-func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
+func runLs(dockerCli command.Cli, in lsOptions) error {
+	ctx := appcontext.Context()
+
 	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
@@ -62,7 +48,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
 	for _, b := range builders {
 		func(b *builder.Builder) {
 			eg.Go(func() error {
-				_, _ = b.LoadNodes(timeoutCtx, builder.WithData())
+				_, _ = b.LoadNodes(timeoutCtx, true)
 				return nil
 			})
 		}(b)
@@ -72,9 +58,22 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
 		return err
 	}
 
-	if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
-		return err
-	} else if hasErrors {
+	w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
+	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
+
+	printErr := false
+	for _, b := range builders {
+		if current.Name == b.Name {
+			b.Name += " *"
+		}
+		if ok := printBuilder(w, b); !ok {
+			printErr = true
+		}
+	}
+
+	w.Flush()
+
+	if printErr {
 		_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
 		for _, b := range builders {
 			if b.Err() != nil {
@@ -92,6 +91,31 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
 	return nil
 }
 
+func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
+	ok = true
+	var err string
+	if b.Err() != nil {
+		ok = false
+		err = "error"
+	}
+	fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
+	if b.Err() == nil {
+		for _, n := range b.Nodes() {
+			var status string
+			if n.DriverInfo != nil {
+				status = n.DriverInfo.Status.String()
+			}
+			if n.Err != nil {
+				ok = false
+				fmt.Fprintf(w, " %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
+			} else {
+				fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
+			}
+		}
+	}
+	return
+}
+
 func lsCmd(dockerCli command.Cli) *cobra.Command {
 	var options lsOptions
 
@@ -100,175 +124,12 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
 		Short: "List builder instances",
 		Args:  cli.ExactArgs(0),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runLs(cmd.Context(), dockerCli, options)
+			return runLs(dockerCli, options)
 		},
-		ValidArgsFunction: completion.Disable,
 	}
 
-	flags := cmd.Flags()
-	flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
-
 	// hide builder persistent flag for this command
 	cobrautil.HideInheritedFlags(cmd, "builder")
 
 	return cmd
 }
-
-func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
-	if format == formatter.TableFormatKey {
-		format = lsDefaultTableFormat
-	}
-
-	ctx := formatter.Context{
-		Output: dockerCli.Out(),
-		Format: formatter.Format(format),
-	}
-
-	sort.SliceStable(builders, func(i, j int) bool {
-		ierr := builders[i].Err() != nil
-		jerr := builders[j].Err() != nil
-		if ierr && !jerr {
-			return false
-		} else if !ierr && jerr {
-			return true
-		}
-		return i < j
-	})
-
-	render := func(format func(subContext formatter.SubContext) error) error {
-		for _, b := range builders {
-			if err := format(&lsContext{
-				Builder: &lsBuilder{
-					Builder: b,
-					Current: b.Name == current.Name,
-				},
-				format: ctx.Format,
-			}); err != nil {
-				return err
-			}
-			if b.Err() != nil {
-				if ctx.Format.IsTable() {
-					hasErrors = true
-				}
-				continue
-			}
-			for _, n := range b.Nodes() {
-				if n.Err != nil {
-					if ctx.Format.IsTable() {
-						hasErrors = true
-					}
-				}
-				if err := format(&lsContext{
-					format: ctx.Format,
-					Builder: &lsBuilder{
-						Builder: b,
-						Current: b.Name == current.Name,
-					},
-					node: n,
-				}); err != nil {
-					return err
-				}
-			}
-		}
-		return nil
-	}
-
-	lsCtx := lsContext{}
-	lsCtx.Header = formatter.SubHeaderContext{
-		"Name":           lsNameNodeHeader,
-		"DriverEndpoint": lsDriverEndpointHeader,
-		"LastActivity":   lsLastActivityHeader,
-		"Status":         lsStatusHeader,
-		"Buildkit":       lsBuildkitHeader,
-		"Platforms":      lsPlatformsHeader,
-	}
-
-	return hasErrors, ctx.Write(&lsCtx, render)
-}
-
-type lsBuilder struct {
-	*builder.Builder
-	Current bool
-}
-
-type lsContext struct {
-	formatter.HeaderContext
-	Builder *lsBuilder
-
-	format formatter.Format
-	node   builder.Node
-}
-
-func (c *lsContext) MarshalJSON() ([]byte, error) {
-	return json.Marshal(c.Builder)
-}
-
-func (c *lsContext) Name() string {
-	if c.node.Name == "" {
-		name := c.Builder.Name
-		if c.Builder.Current && c.format.IsTable() {
-			name += "*"
-		}
-		return name
-	}
-	if c.format.IsTable() {
-		return lsIndent + c.node.Name
-	}
-	return c.node.Name
-}
-
-func (c *lsContext) DriverEndpoint() string {
-	if c.node.Name == "" {
-		return c.Builder.Driver
-	}
-	if c.format.IsTable() {
-		return lsIndent + c.node.Endpoint
-	}
-	return c.node.Endpoint
-}
-
-func (c *lsContext) LastActivity() string {
-	if c.node.Name != "" || c.Builder.LastActivity.IsZero() {
-		return ""
-	}
-	return c.Builder.LastActivity.UTC().Format(time.RFC3339)
-}
-
-func (c *lsContext) Status() string {
-	if c.node.Name == "" {
-		if c.Builder.Err() != nil {
-			return "error"
-		}
-		return ""
-	}
-	if c.node.Err != nil {
-		return "error"
-	}
-	if c.node.DriverInfo != nil {
-		return c.node.DriverInfo.Status.String()
-	}
-	return ""
-}
-
-func (c *lsContext) Buildkit() string {
-	if c.node.Name == "" {
-		return ""
-	}
-	return c.node.Version
-}
-
-func (c *lsContext) Platforms() string {
-	if c.node.Name == "" {
-		return ""
-	}
-	return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
-}
-
-func (c *lsContext) Error() string {
-	if c.node.Name != "" && c.node.Err != nil {
-		return c.node.Err.Error()
-	} else if err := c.Builder.Err(); err != nil {
-		return err.Error()
-	}
-	return ""
-}
48	commands/print.go	Normal file
@@ -0,0 +1,48 @@
+package commands
+
+import (
+	"fmt"
+	"io"
+	"log"
+	"os"
+
+	"github.com/docker/buildx/build"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/moby/buildkit/frontend/subrequests"
+	"github.com/moby/buildkit/frontend/subrequests/outline"
+	"github.com/moby/buildkit/frontend/subrequests/targets"
+)
+
+func printResult(f *build.PrintFunc, res map[string]string) error {
+	switch f.Name {
+	case "outline":
+		return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
+	case "targets":
+		return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
+	case "subrequests.describe":
+		return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
+	default:
+		if dt, ok := res["result.txt"]; ok {
+			fmt.Print(dt)
+		} else {
+			log.Printf("%s %+v", f, res)
+		}
+	}
+	return nil
+}
+
+type printFunc func([]byte, io.Writer) error
+
+func printValue(printer printFunc, version string, format string, res map[string]string) error {
+	if format == "json" {
+		fmt.Fprintln(os.Stdout, res["result.json"])
+		return nil
+	}
+
+	if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
+		// structure is too new and we don't know how to print it
+		fmt.Fprint(os.Stdout, res["result.txt"])
+		return nil
+	}
+	return printer([]byte(res["result.json"]), os.Stdout)
+}
@@ -1,7 +1,6 @@
 package commands
 
 import (
-	"context"
 	"fmt"
 	"os"
 	"strings"
@@ -9,13 +8,13 @@ import (
 	"time"
 
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/opts"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
@@ -35,7 +34,9 @@ const (
 	allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
 )
 
-func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) error {
+func runPrune(dockerCli command.Cli, opts pruneOptions) error {
+	ctx := appcontext.Context()
+
 	pruneFilters := opts.filter.Value()
 	pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
 
@@ -49,12 +50,8 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
 		warning = allCacheWarning
 	}
 
-	if !opts.force {
-		if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), warning); err != nil {
-			return err
-		} else if !ok {
-			return nil
-		}
+	if !opts.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), warning) {
+		return nil
 	}
 
 	b, err := builder.New(dockerCli, builder.WithName(opts.builder))
@@ -62,7 +59,7 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
 		return err
 	}
 
-	nodes, err := b.LoadNodes(ctx)
+	nodes, err := b.LoadNodes(ctx, false)
 	if err != nil {
 		return err
 	}
@@ -140,9 +137,8 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 		Args: cli.NoArgs,
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.builder = rootOpts.builder
-			return runPrune(cmd.Context(), dockerCli, options)
+			return runPrune(dockerCli, options)
 		},
-		ValidArgsFunction: completion.Disable,
 	}
 
 	flags := cmd.Flags()
@@ -195,8 +191,6 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 		case 1:
 			if filterKey == "id" {
 				filters = append(filters, filterKey+"~="+values[0])
-			} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
-				filters = append(filters, filterKey+"="+values[0])
 			} else {
 				filters = append(filters, filterKey+"=="+values[0])
 			}
100	commands/rm.go
@@ -8,15 +8,16 @@ import (
 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
-	"github.com/docker/buildx/util/cobrautil/completion"
+	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
 )
 
 type rmOptions struct {
-	builders    []string
+	builder     string
 	keepState   bool
 	keepDaemon  bool
 	allInactive bool
@@ -27,13 +28,11 @@ const (
 	rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
 )
 
-func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
-	if in.allInactive && !in.force {
-		if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), rmInactiveWarning); err != nil {
-			return err
-		} else if !ok {
-			return nil
-		}
+func runRm(dockerCli command.Cli, in rmOptions) error {
+	ctx := appcontext.Context()
+
+	if in.allInactive && !in.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), rmInactiveWarning) {
+		return nil
 	}
 
 	txn, release, err := storeutil.GetStore(dockerCli)
@@ -46,52 +45,33 @@ func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
 		return rmAllInactive(ctx, txn, dockerCli, in)
 	}
 
-	eg, _ := errgroup.WithContext(ctx)
-	for _, name := range in.builders {
-		func(name string) {
-			eg.Go(func() (err error) {
-				defer func() {
-					if err == nil {
-						_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", name)
-					} else {
-						_, _ = fmt.Fprintf(dockerCli.Err(), "failed to remove %s: %v\n", name, err)
-					}
-				}()
-
-				b, err := builder.New(dockerCli,
-					builder.WithName(name),
-					builder.WithStore(txn),
-					builder.WithSkippedValidation(),
-				)
-				if err != nil {
-					return err
-				}
-
-				nodes, err := b.LoadNodes(ctx)
-				if err != nil {
-					return err
-				}
-
-				if cb := b.ContextName(); cb != "" {
-					return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
-				}
-
-				err1 := rm(ctx, nodes, in)
-				if err := txn.Remove(b.Name); err != nil {
-					return err
-				}
-				if err1 != nil {
-					return err1
-				}
-
-				return nil
-			})
-		}(name)
-	}
-
-	if err := eg.Wait(); err != nil {
-		return errors.New("failed to remove one or more builders")
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
+	if err != nil {
+		return err
 	}
 
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+
+	if cb := b.ContextName(); cb != "" {
+		return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
+	}
+
+	err1 := rm(ctx, nodes, in)
+	if err := txn.Remove(b.Name); err != nil {
+		return err
+	}
+	if err1 != nil {
+		return err1
+	}
+
+	_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
 	return nil
 }
@@ -99,24 +79,24 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options rmOptions
 
 	cmd := &cobra.Command{
-		Use:   "rm [OPTIONS] [NAME] [NAME...]",
-		Short: "Remove one or more builder instances",
+		Use:   "rm [NAME]",
+		Short: "Remove a builder instance",
+		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			options.builders = []string{rootOpts.builder}
+			options.builder = rootOpts.builder
 			if len(args) > 0 {
 				if options.allInactive {
 					return errors.New("cannot specify builder name when --all-inactive is set")
 				}
-				options.builders = args
+				options.builder = args[0]
 			}
-			return runRm(cmd.Context(), dockerCli, options)
+			return runRm(dockerCli, options)
 		},
-		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	flags := cmd.Flags()
 	flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
-	flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the BuildKit daemon running")
+	flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the buildkitd daemon running")
 	flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
 	flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
 
@@ -157,7 +137,7 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
|
|||||||
for _, b := range builders {
|
for _, b := range builders {
|
||||||
func(b *builder.Builder) {
|
func(b *builder.Builder) {
|
||||||
eg.Go(func() error {
|
eg.Go(func() error {
|
||||||
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
|
nodes, err := b.LoadNodes(timeoutCtx, true)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return errors.Wrapf(err, "cannot load %s", b.Name)
|
return errors.Wrapf(err, "cannot load %s", b.Name)
|
||||||
}
|
}
|
||||||
|
|||||||
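The hunk above shows the interface change behind this downgrade: the newer `rm` accepts several builder names (`options.builders`), while v0.10 takes at most one (`Args: cli.RequiresMaxArgs(1)`). A minimal stand-alone sketch of the newer name-resolution rule (the function name `collectBuilders` is ours, not buildx's):

```go
package main

import "fmt"

// collectBuilders sketches how the newer rm command decides which builders
// to remove: fall back to the root --builder value, but let positional
// arguments override it, keeping every name given.
func collectBuilders(rootBuilder string, args []string) []string {
	builders := []string{rootBuilder}
	if len(args) > 0 {
		builders = args
	}
	return builders
}

func main() {
	fmt.Println(collectBuilders("default", nil))                // falls back to the root builder
	fmt.Println(collectBuilders("default", []string{"a", "b"})) // positional names win
}
```

In v0.10 the same resolution collapses to a single `options.builder = args[0]`, which is why the diff also swaps the slice field for a string.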
@@ -3,18 +3,12 @@ package commands
 import (
 	"os"
 
-	debugcmd "github.com/docker/buildx/commands/debug"
 	imagetoolscmd "github.com/docker/buildx/commands/imagetools"
-	"github.com/docker/buildx/controller/remote"
-	"github.com/docker/buildx/util/cobrautil/completion"
-	"github.com/docker/buildx/util/confutil"
 	"github.com/docker/buildx/util/logutil"
 	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli-plugins/plugin"
 	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/debug"
-	"github.com/moby/buildkit/util/appcontext"
 	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
 	"github.com/spf13/pflag"
@@ -28,18 +22,12 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		Annotations: map[string]string{
 			annotation.CodeDelimiter: `"`,
 		},
-		CompletionOptions: cobra.CompletionOptions{
-			HiddenDefaultCmd: true,
-		},
-		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
-			cmd.SetContext(appcontext.Context())
-			if !isPlugin {
-				return nil
-			}
-			return plugin.PersistentPreRunE(cmd, args)
-		},
 	}
-	if !isPlugin {
+	if isPlugin {
+		cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
+			return plugin.PersistentPreRunE(cmd, args)
+		}
+	} else {
 		// match plugin behavior for standalone mode
 		// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
 		cmd.SilenceUsage = true
@@ -47,11 +35,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		cmd.TraverseChildren = true
 		cmd.DisableFlagsInUseLine = true
 		cli.DisableFlagsInUseLine(cmd)
-
-		// DEBUG=1 should perform the same as --debug at the docker root level
-		if debug.IsEnabled() {
-			debug.Enable()
-		}
 	}
 
 	logrus.SetFormatter(&logutil.Formatter{})
@@ -64,9 +47,16 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		"using default config store",
 	))
 
-	if !confutil.IsExperimental() {
-		cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
-	}
+	// filter out useless commandConn.CloseWrite warning message that can occur
+	// when listing builder instances with "buildx ls" for those that are
+	// unreachable: "commandConn.CloseWrite: commandconn: failed to wait: signal: killed"
+	// https://github.com/docker/cli/blob/3fb4fb83dfb5db0c0753a8316f21aea54dab32c5/cli/connhelper/commandconn/commandconn.go#L203-L214
+	logrus.AddHook(logutil.NewFilter([]logrus.Level{
+		logrus.WarnLevel,
+	},
+		"commandConn.CloseWrite:",
+		"commandConn.CloseRead:",
+	))
 
 	addCommands(cmd, dockerCli)
 	return cmd
@@ -81,10 +71,9 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
 	rootFlags(opts, cmd.PersistentFlags())
 
 	cmd.AddCommand(
-		buildCmd(dockerCli, opts, nil),
+		buildCmd(dockerCli, opts),
 		bakeCmd(dockerCli, opts),
 		createCmd(dockerCli),
-		dialStdioCmd(dockerCli, opts),
 		rmCmd(dockerCli, opts),
 		lsCmd(dockerCli),
 		useCmd(dockerCli, opts),
@@ -97,17 +86,6 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
 		duCmd(dockerCli, opts),
 		imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
 	)
-	if confutil.IsExperimental() {
-		cmd.AddCommand(debugcmd.RootCmd(dockerCli,
-			newDebuggableBuild(dockerCli, opts),
-		))
-		remote.AddControllerCommands(cmd, dockerCli)
-	}
-
-	cmd.RegisterFlagCompletionFunc( //nolint:errcheck
-		"builder",
-		completion.BuilderNames(dockerCli),
-	)
 }
 
 func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
@@ -4,9 +4,9 @@ import (
 	"context"
 
 	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	"github.com/moby/buildkit/util/appcontext"
 	"github.com/spf13/cobra"
 )
 
@@ -14,7 +14,9 @@ type stopOptions struct {
 	builder string
 }
 
-func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
+func runStop(dockerCli command.Cli, in stopOptions) error {
+	ctx := appcontext.Context()
+
 	b, err := builder.New(dockerCli,
 		builder.WithName(in.builder),
 		builder.WithSkippedValidation(),
@@ -22,7 +24,7 @@ func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
 	if err != nil {
 		return err
 	}
-	nodes, err := b.LoadNodes(ctx)
+	nodes, err := b.LoadNodes(ctx, false)
 	if err != nil {
 		return err
 	}
@@ -42,9 +44,8 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			if len(args) > 0 {
 				options.builder = args[0]
 			}
-			return runStop(cmd.Context(), dockerCli, options)
+			return runStop(dockerCli, options)
 		},
-		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	return cmd
@@ -4,7 +4,6 @@ import (
 	"os"
 
 	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -15,7 +14,7 @@ import (
 type uninstallOptions struct {
 }
 
-func runUninstall(_ command.Cli, _ uninstallOptions) error {
+func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
 	dir := config.Dir()
 	cfg, err := config.Load(dir)
 	if err != nil {
@@ -53,8 +52,7 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runUninstall(dockerCli, options)
 		},
 		Hidden: true,
-		ValidArgsFunction: completion.Disable,
 	}
 
 	// hide builder persistent flag for this command
@@ -4,7 +4,6 @@ import (
 	"os"
 
 	"github.com/docker/buildx/store/storeutil"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -35,7 +34,10 @@ func runUse(dockerCli command.Cli, in useOptions) error {
 		if err != nil {
 			return err
 		}
-		return txn.SetCurrent(ep, "", false, false)
+		if err := txn.SetCurrent(ep, "", false, false); err != nil {
+			return err
+		}
+		return nil
 	}
 	list, err := dockerCli.ContextStore().List()
 	if err != nil {
@@ -55,7 +57,11 @@ func runUse(dockerCli command.Cli, in useOptions) error {
 	if err != nil {
 		return err
 	}
-	return txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault)
+	if err := txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault); err != nil {
+		return err
+	}
+
+	return nil
 }
 
 func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -72,7 +78,6 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runUse(dockerCli, options)
 		},
-		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	flags := cmd.Flags()
@@ -1,57 +0,0 @@
-package commands
-
-import (
-	"bufio"
-	"context"
-	"fmt"
-	"io"
-	"os"
-	"runtime"
-	"strings"
-
-	"github.com/docker/cli/cli/streams"
-)
-
-func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
-	done := make(chan struct{})
-	var ok bool
-	go func() {
-		ok = promptForConfirmation(ins, out, msg)
-		close(done)
-	}()
-	select {
-	case <-ctx.Done():
-		return false, context.Cause(ctx)
-	case <-done:
-		return ok, nil
-	}
-}
-
-// promptForConfirmation requests and checks confirmation from user.
-// This will display the provided message followed by ' [y/N] '. If
-// the user input 'y' or 'Y' it returns true other false. If no
-// message is provided "Are you sure you want to proceed? [y/N] "
-// will be used instead.
-//
-// Copied from github.com/docker/cli since the upstream version changed
-// recently with an incompatible change.
-//
-// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
-// for discussion on the issue.
-func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
-	if message == "" {
-		message = "Are you sure you want to proceed?"
-	}
-	message += " [y/N] "
-
-	_, _ = fmt.Fprint(outs, message)
-
-	// On Windows, force the use of the regular OS stdin stream.
-	if runtime.GOOS == "windows" {
-		ins = streams.NewIn(os.Stdin)
-	}
-
-	reader := bufio.NewReader(ins)
-	answer, _, _ := reader.ReadLine()
-	return strings.ToLower(string(answer)) == "y"
-}
@@ -4,14 +4,13 @@ import (
 	"fmt"
 
 	"github.com/docker/buildx/util/cobrautil"
-	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/spf13/cobra"
 )
 
-func runVersion(_ command.Cli) error {
+func runVersion(dockerCli command.Cli) error {
 	fmt.Println(version.Package, version.Version, version.Revision)
 	return nil
 }
@@ -24,7 +23,6 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runVersion(dockerCli)
 		},
-		ValidArgsFunction: completion.Disable,
 	}
 
 	// hide builder persistent flag for this command
@@ -1,282 +0,0 @@
-package build
-
-import (
-	"context"
-	"io"
-	"os"
-	"path/filepath"
-	"strings"
-	"sync"
-
-	"github.com/docker/buildx/build"
-	"github.com/docker/buildx/builder"
-	controllerapi "github.com/docker/buildx/controller/pb"
-	"github.com/docker/buildx/store"
-	"github.com/docker/buildx/store/storeutil"
-	"github.com/docker/buildx/util/buildflags"
-	"github.com/docker/buildx/util/confutil"
-	"github.com/docker/buildx/util/dockerutil"
-	"github.com/docker/buildx/util/platformutil"
-	"github.com/docker/buildx/util/progress"
-	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/config"
-	dockeropts "github.com/docker/cli/opts"
-	"github.com/docker/go-units"
-	"github.com/moby/buildkit/client"
-	"github.com/moby/buildkit/session/auth/authprovider"
-	"github.com/moby/buildkit/util/grpcerrors"
-	"github.com/pkg/errors"
-	"google.golang.org/grpc/codes"
-)
-
-const defaultTargetName = "default"
-
-// RunBuild runs the specified build and returns the result.
-//
-// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
-// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
-// inspect the result and debug the cause of that error.
-func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
-	if in.NoCache && len(in.NoCacheFilter) > 0 {
-		return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
-	}
-
-	contexts := map[string]build.NamedContext{}
-	for name, path := range in.NamedContexts {
-		contexts[name] = build.NamedContext{Path: path}
-	}
-
-	opts := build.Options{
-		Inputs: build.Inputs{
-			ContextPath:    in.ContextPath,
-			DockerfilePath: in.DockerfileName,
-			InStream:       inStream,
-			NamedContexts:  contexts,
-		},
-		Ref:                    in.Ref,
-		BuildArgs:              in.BuildArgs,
-		CgroupParent:           in.CgroupParent,
-		ExtraHosts:             in.ExtraHosts,
-		Labels:                 in.Labels,
-		NetworkMode:            in.NetworkMode,
-		NoCache:                in.NoCache,
-		NoCacheFilter:          in.NoCacheFilter,
-		Pull:                   in.Pull,
-		ShmSize:                dockeropts.MemBytes(in.ShmSize),
-		Tags:                   in.Tags,
-		Target:                 in.Target,
-		Ulimits:                controllerUlimitOpt2DockerUlimit(in.Ulimits),
-		GroupRef:               in.GroupRef,
-		WithProvenanceResponse: in.WithProvenanceResponse,
-	}
-
-	platforms, err := platformutil.Parse(in.Platforms)
-	if err != nil {
-		return nil, nil, err
-	}
-	opts.Platforms = platforms
-
-	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
-	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
-
-	secrets, err := controllerapi.CreateSecrets(in.Secrets)
-	if err != nil {
-		return nil, nil, err
-	}
-	opts.Session = append(opts.Session, secrets)
-
-	sshSpecs := in.SSH
-	if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
-		sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
-	}
-	ssh, err := controllerapi.CreateSSH(sshSpecs)
-	if err != nil {
-		return nil, nil, err
-	}
-	opts.Session = append(opts.Session, ssh)
-
-	outputs, err := controllerapi.CreateExports(in.Exports)
-	if err != nil {
-		return nil, nil, err
-	}
-	if in.ExportPush {
-		var pushUsed bool
-		for i := range outputs {
-			if outputs[i].Type == client.ExporterImage {
-				outputs[i].Attrs["push"] = "true"
-				pushUsed = true
-			}
-		}
-		if !pushUsed {
-			outputs = append(outputs, client.ExportEntry{
-				Type: client.ExporterImage,
-				Attrs: map[string]string{
-					"push": "true",
-				},
-			})
-		}
-	}
-	if in.ExportLoad {
-		var loadUsed bool
-		for i := range outputs {
-			if outputs[i].Type == client.ExporterDocker {
-				if _, ok := outputs[i].Attrs["dest"]; !ok {
-					loadUsed = true
-					break
-				}
-			}
-		}
-		if !loadUsed {
-			outputs = append(outputs, client.ExportEntry{
-				Type:  client.ExporterDocker,
-				Attrs: map[string]string{},
-			})
-		}
-	}
-
-	annotations, err := buildflags.ParseAnnotations(in.Annotations)
-	if err != nil {
-		return nil, nil, err
-	}
-	for _, o := range outputs {
-		for k, v := range annotations {
-			o.Attrs[k.String()] = v
-		}
-	}
-
-	opts.Exports = outputs
-
-	opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
-	opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
-
-	opts.Attests = controllerapi.CreateAttestations(in.Attests)
-
-	opts.SourcePolicy = in.SourcePolicy
-
-	allow, err := buildflags.ParseEntitlements(in.Allow)
-	if err != nil {
-		return nil, nil, err
-	}
-	opts.Allow = allow
-
-	if in.PrintFunc != nil {
-		opts.PrintFunc = &build.PrintFunc{
-			Name:         in.PrintFunc.Name,
-			Format:       in.PrintFunc.Format,
-			IgnoreStatus: in.PrintFunc.IgnoreStatus,
-		}
-	}
-
-	// key string used for kubernetes "sticky" mode
-	contextPathHash, err := filepath.Abs(in.ContextPath)
-	if err != nil {
-		contextPathHash = in.ContextPath
-	}
-
-	// TODO: this should not be loaded this side of the controller api
-	b, err := builder.New(dockerCli,
-		builder.WithName(in.Builder),
-		builder.WithContextPathHash(contextPathHash),
-	)
-	if err != nil {
-		return nil, nil, err
-	}
-	if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
-		return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
-	}
-	nodes, err := b.LoadNodes(ctx)
-	if err != nil {
-		return nil, nil, err
-	}
-
-	resp, res, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
-	err = wrapBuildError(err, false)
-	if err != nil {
-		// NOTE: buildTargets can return *build.ResultHandle even on error.
-		return nil, res, err
-	}
-	return resp, res, nil
-}
-
-// buildTargets runs the specified build and returns the result.
-//
-// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
-// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
-// inspect the result and debug the cause of that error.
-func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
-	var res *build.ResultHandle
-	var resp map[string]*client.SolveResponse
-	var err error
-	if generateResult {
-		var mu sync.Mutex
-		var idx int
-		resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
-			mu.Lock()
-			defer mu.Unlock()
-			if res == nil || driverIndex < idx {
-				idx, res = driverIndex, gotRes
-			}
-		})
-	} else {
-		resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
-	}
-	if err != nil {
-		return nil, res, err
-	}
-	return resp[defaultTargetName], res, err
-}
-
-func wrapBuildError(err error, bake bool) error {
-	if err == nil {
-		return nil
-	}
-	st, ok := grpcerrors.AsGRPCStatus(err)
-	if ok {
-		if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
-			msg := "current frontend does not support --build-context."
-			if bake {
-				msg = "current frontend does not support defining additional contexts for targets."
-			}
-			msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
-			return &wrapped{err, msg}
-		}
-	}
-	return err
-}
-
-type wrapped struct {
-	err error
-	msg string
-}
-
-func (w *wrapped) Error() string {
-	return w.msg
-}
-
-func (w *wrapped) Unwrap() error {
-	return w.err
-}
-
-func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
-	txn, release, err := storeutil.GetStore(dockerCli)
-	if err != nil {
-		return err
-	}
-	defer release()
-	return txn.UpdateLastActivity(ng)
-}
-
-func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
-	if u == nil {
-		return nil
-	}
-	values := make(map[string]*units.Ulimit)
-	for k, v := range u.Values {
-		values[k] = &units.Ulimit{
-			Name: v.Name,
-			Hard: v.Hard,
-			Soft: v.Soft,
-		}
-	}
-	return dockeropts.NewUlimitOpt(&values)
-}
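One detail worth noting in the deleted `RunBuild` above is how `--push` is reconciled with the configured exporters: the flag flips `push=true` on any existing image exporter, and only appends a fresh one when none is present. A reduced stand-alone sketch of that rule, using a trimmed stand-in for BuildKit's `client.ExportEntry` (the `applyPush` name is ours):

```go
package main

import "fmt"

// ExportEntry is a trimmed stand-in for buildkit's client.ExportEntry.
type ExportEntry struct {
	Type  string
	Attrs map[string]string
}

// applyPush mirrors the ExportPush branch of the removed RunBuild: mark every
// existing image exporter with push=true, or append one if none is configured.
func applyPush(outputs []ExportEntry) []ExportEntry {
	pushUsed := false
	for i := range outputs {
		if outputs[i].Type == "image" {
			outputs[i].Attrs["push"] = "true"
			pushUsed = true
		}
	}
	if !pushUsed {
		outputs = append(outputs, ExportEntry{
			Type:  "image",
			Attrs: map[string]string{"push": "true"},
		})
	}
	return outputs
}

func main() {
	// No exporter configured: an image exporter with push=true is appended.
	out := applyPush(nil)
	fmt.Println(out[0].Type, out[0].Attrs["push"])
}
```

The `ExportLoad` branch in the diff follows the same shape for the `docker` exporter, except it only counts an existing exporter as satisfying `--load` when it has no `dest` attribute.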
@@ -1,32 +0,0 @@
-package control
-
-import (
-	"context"
-	"io"
-
-	controllerapi "github.com/docker/buildx/controller/pb"
-	"github.com/docker/buildx/util/progress"
-	"github.com/moby/buildkit/client"
-)
-
-type BuildxController interface {
-	Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
-	// Invoke starts an IO session into the specified process.
-	// If pid doesn't matche to any running processes, it starts a new process with the specified config.
-	// If there is no container running or InvokeConfig.Rollback is speicfied, the process will start in a newly created container.
-	// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
-	Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
-	Kill(ctx context.Context) error
-	Close() error
-	List(ctx context.Context) (refs []string, _ error)
-	Disconnect(ctx context.Context, ref string) error
-	ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
-	DisconnectProcess(ctx context.Context, ref, pid string) error
-	Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
-}
-
-type ControlOptions struct {
-	ServerConfig string
-	Root         string
-	Detach       bool
-}
@@ -1,36 +0,0 @@
-package controller
-
-import (
-	"context"
-	"fmt"
-
-	"github.com/docker/buildx/controller/control"
-	"github.com/docker/buildx/controller/local"
-	"github.com/docker/buildx/controller/remote"
-	"github.com/docker/buildx/util/progress"
-	"github.com/docker/cli/cli/command"
-	"github.com/pkg/errors"
-)
-
-func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
-	var name string
-	if opts.Detach {
-		name = "remote"
-	} else {
-		name = "local"
-	}
-
-	var c control.BuildxController
-	err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
-		if opts.Detach {
-			c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
-		} else {
-			c = local.NewLocalBuildxController(ctx, dockerCli, l)
-		}
-		return err
-	})
-	if err != nil {
-		return nil, errors.Wrap(err, "failed to start buildx controller")
-	}
-	return c, nil
-}
@@ -1,34 +0,0 @@
-package errdefs
-
-import (
-	"github.com/containerd/typeurl/v2"
-	"github.com/moby/buildkit/util/grpcerrors"
-)
-
-func init() {
-	typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
-}
-
-type BuildError struct {
-	Build
-	error
-}
-
-func (e *BuildError) Unwrap() error {
-	return e.error
-}
-
-func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
-	return &e.Build
-}
-
-func WrapBuild(err error, ref string) error {
-	if err == nil {
-		return nil
-	}
-	return &BuildError{Build: Build{Ref: ref}, error: err}
-}
-
-func (b *Build) WrapError(err error) error {
-	return &BuildError{error: err, Build: *b}
-}
@@ -1,77 +0,0 @@
-// Code generated by protoc-gen-gogo. DO NOT EDIT.
-// source: errdefs.proto
-
-package errdefs
-
-import (
-	fmt "fmt"
-	proto "github.com/gogo/protobuf/proto"
-	_ "github.com/moby/buildkit/solver/pb"
-	math "math"
-)
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the proto package it is being compiled against.
-// A compilation error at this line likely means your copy of the
-// proto package needs to be updated.
-const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
-
-type Build struct {
-	Ref                  string   `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
-	XXX_NoUnkeyedLiteral struct{} `json:"-"`
-	XXX_unrecognized     []byte   `json:"-"`
-	XXX_sizecache        int32    `json:"-"`
-}
-
-func (m *Build) Reset()         { *m = Build{} }
-func (m *Build) String() string { return proto.CompactTextString(m) }
-func (*Build) ProtoMessage()    {}
-func (*Build) Descriptor() ([]byte, []int) {
-	return fileDescriptor_689dc58a5060aff5, []int{0}
-}
-func (m *Build) XXX_Unmarshal(b []byte) error {
-	return xxx_messageInfo_Build.Unmarshal(m, b)
-}
-func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
-	return xxx_messageInfo_Build.Marshal(b, m, deterministic)
-}
-func (m *Build) XXX_Merge(src proto.Message) {
-	xxx_messageInfo_Build.Merge(m, src)
-}
-func (m *Build) XXX_Size() int {
-	return xxx_messageInfo_Build.Size(m)
-}
-func (m *Build) XXX_DiscardUnknown() {
-	xxx_messageInfo_Build.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_Build proto.InternalMessageInfo
-
-func (m *Build) GetRef() string {
-	if m != nil {
-		return m.Ref
-	}
-	return ""
-}
-
-func init() {
-	proto.RegisterType((*Build)(nil), "errdefs.Build")
-}
-
-func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
-
-var fileDescriptor_689dc58a5060aff5 = []byte{
-	// 111 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
-	0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
-	0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
-	0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
-	0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
-	0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
-	0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
-}
@@ -1,9 +0,0 @@
syntax = "proto3";

package errdefs;

import "github.com/moby/buildkit/solver/pb/ops.proto";

message Build {
  string Ref = 1;
}
@@ -1,3 +0,0 @@
package errdefs

//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto
@@ -1,146 +0,0 @@
package local

import (
	"context"
	"io"
	"sync/atomic"

	"github.com/docker/buildx/build"
	cbuild "github.com/docker/buildx/controller/build"
	"github.com/docker/buildx/controller/control"
	controllererrors "github.com/docker/buildx/controller/errdefs"
	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/processes"
	"github.com/docker/buildx/util/ioset"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
	return &localController{
		dockerCli: dockerCli,
		ref:       "local",
		processes: processes.NewManager(),
	}
}

type buildConfig struct {
	// TODO: these two structs should be merged
	// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
	resultCtx    *build.ResultHandle
	buildOptions *controllerapi.BuildOptions
}

type localController struct {
	dockerCli   command.Cli
	ref         string
	buildConfig buildConfig
	processes   *processes.Manager

	buildOnGoing atomic.Bool
}

func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
	if !b.buildOnGoing.CompareAndSwap(false, true) {
		return "", nil, errors.New("build ongoing")
	}
	defer b.buildOnGoing.Store(false)

	resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
	// NOTE: RunBuild can return *build.ResultHandle even on error.
	if res != nil {
		b.buildConfig = buildConfig{
			resultCtx:    res,
			buildOptions: &options,
		}
		if buildErr != nil {
			buildErr = controllererrors.WrapBuild(buildErr, b.ref)
		}
	}
	if buildErr != nil {
		return "", nil, buildErr
	}
	return b.ref, resp, nil
}

func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
	if ref != b.ref {
		return nil, errors.Errorf("unknown ref %q", ref)
	}
	return b.processes.ListProcesses(), nil
}

func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
	if ref != b.ref {
		return errors.Errorf("unknown ref %q", ref)
	}
	return b.processes.DeleteProcess(pid)
}

func (b *localController) cancelRunningProcesses() {
	b.processes.CancelRunningProcesses()
}

func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
	if ref != b.ref {
		return errors.Errorf("unknown ref %q", ref)
	}

	proc, ok := b.processes.Get(pid)
	if !ok {
		// Start a new process.
		if b.buildConfig.resultCtx == nil {
			return errors.New("no build result is registered")
		}
		var err error
		proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
		if err != nil {
			return err
		}
	}

	// Attach containerIn to this process
	ioCancelledCh := make(chan struct{})
	proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })

	select {
	case <-ioCancelledCh:
		return errors.Errorf("io cancelled")
	case err := <-proc.Done():
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (b *localController) Kill(context.Context) error {
	b.Close()
	return nil
}

func (b *localController) Close() error {
	b.cancelRunningProcesses()
	if b.buildConfig.resultCtx != nil {
		b.buildConfig.resultCtx.Done()
	}
	// TODO: cancel ongoing builds?
	return nil
}

func (b *localController) List(ctx context.Context) (res []string, _ error) {
	return []string{b.ref}, nil
}

func (b *localController) Disconnect(ctx context.Context, key string) error {
	b.Close()
	return nil
}

func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
	if ref != b.ref {
		return nil, errors.Errorf("unknown ref %q", ref)
	}
	return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}
@@ -1,20 +0,0 @@
package pb

func CreateAttestations(attests []*Attest) map[string]*string {
	result := map[string]*string{}
	for _, attest := range attests {
		// ignore duplicates
		if _, ok := result[attest.Type]; ok {
			continue
		}

		if attest.Disabled {
			result[attest.Type] = nil
			continue
		}

		attrs := attest.Attrs
		result[attest.Type] = &attrs
	}
	return result
}
@@ -1,21 +0,0 @@
package pb

import "github.com/moby/buildkit/client"

func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
	var outs []client.CacheOptionsEntry
	if len(entries) == 0 {
		return nil
	}
	for _, entry := range entries {
		out := client.CacheOptionsEntry{
			Type:  entry.Type,
			Attrs: map[string]string{},
		}
		for k, v := range entry.Attrs {
			out.Attrs[k] = v
		}
		outs = append(outs, out)
	}
	return outs
}
File diff suppressed because it is too large
@@ -1,250 +0,0 @@
syntax = "proto3";

package buildx.controller.v1;

import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";

option go_package = "pb";

service Controller {
  rpc Build(BuildRequest) returns (BuildResponse);
  rpc Inspect(InspectRequest) returns (InspectResponse);
  rpc Status(StatusRequest) returns (stream StatusResponse);
  rpc Input(stream InputMessage) returns (InputResponse);
  rpc Invoke(stream Message) returns (stream Message);
  rpc List(ListRequest) returns (ListResponse);
  rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
  rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}

message ListProcessesRequest {
  string Ref = 1;
}

message ListProcessesResponse {
  repeated ProcessInfo Infos = 1;
}

message ProcessInfo {
  string ProcessID = 1;
  InvokeConfig InvokeConfig = 2;
}

message DisconnectProcessRequest {
  string Ref = 1;
  string ProcessID = 2;
}

message DisconnectProcessResponse {
}

message BuildRequest {
  string Ref = 1;
  BuildOptions Options = 2;
}

message BuildOptions {
  string ContextPath = 1;
  string DockerfileName = 2;
  PrintFunc PrintFunc = 3;
  map<string, string> NamedContexts = 4;

  repeated string Allow = 5;
  repeated Attest Attests = 6;
  map<string, string> BuildArgs = 7;
  repeated CacheOptionsEntry CacheFrom = 8;
  repeated CacheOptionsEntry CacheTo = 9;
  string CgroupParent = 10;
  repeated ExportEntry Exports = 11;
  repeated string ExtraHosts = 12;
  map<string, string> Labels = 13;
  string NetworkMode = 14;
  repeated string NoCacheFilter = 15;
  repeated string Platforms = 16;
  repeated Secret Secrets = 17;
  int64 ShmSize = 18;
  repeated SSH SSH = 19;
  repeated string Tags = 20;
  string Target = 21;
  UlimitOpt Ulimits = 22;

  string Builder = 23;
  bool NoCache = 24;
  bool Pull = 25;
  bool ExportPush = 26;
  bool ExportLoad = 27;
  moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
  string Ref = 29;
  string GroupRef = 30;
  repeated string Annotations = 31;
  bool WithProvenanceResponse = 32;
}

message ExportEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
  string Destination = 3;
}

message CacheOptionsEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
}

message Attest {
  string Type = 1;
  bool Disabled = 2;
  string Attrs = 3;
}

message SSH {
  string ID = 1;
  repeated string Paths = 2;
}

message Secret {
  string ID = 1;
  string FilePath = 2;
  string Env = 3;
}

message PrintFunc {
  string Name = 1;
  string Format = 2;
  bool IgnoreStatus = 3;
}

message InspectRequest {
  string Ref = 1;
}

message InspectResponse {
  BuildOptions Options = 1;
}

message UlimitOpt {
  map<string, Ulimit> values = 1;
}

message Ulimit {
  string Name = 1;
  int64 Hard = 2;
  int64 Soft = 3;
}

message BuildResponse {
  map<string, string> ExporterResponse = 1;
}

message DisconnectRequest {
  string Ref = 1;
}

message DisconnectResponse {}

message ListRequest {
  string Ref = 1;
}

message ListResponse {
  repeated string keys = 1;
}

message InputMessage {
  oneof Input {
    InputInitMessage Init = 1;
    DataMessage Data = 2;
  }
}

message InputInitMessage {
  string Ref = 1;
}

message DataMessage {
  bool EOF = 1;   // true if eof was reached
  bytes Data = 2; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message InputResponse {}

message Message {
  oneof Input {
    InitMessage Init = 1;
    // FdMessage used from client to server for input (stdin) and
    // from server to client for output (stdout, stderr)
    FdMessage File = 2;
    // ResizeMessage used from client to server for terminal resize events
    ResizeMessage Resize = 3;
    // SignalMessage is used from client to server to send signal events
    SignalMessage Signal = 4;
  }
}

message InitMessage {
  string Ref = 1;

  // If ProcessID already exists in the server, it tries to connect to it
  // instead of invoking the new one. In this case, InvokeConfig will be ignored.
  string ProcessID = 2;
  InvokeConfig InvokeConfig = 3;
}

message InvokeConfig {
  repeated string Entrypoint = 1;
  repeated string Cmd = 2;
  bool NoCmd = 11; // Do not set cmd but use the image's default
  repeated string Env = 3;
  string User = 4;
  bool NoUser = 5; // Do not set user but use the image's default
  string Cwd = 6;
  bool NoCwd = 7; // Do not set cwd but use the image's default
  bool Tty = 8;
  bool Rollback = 9; // Kill all process in the container and recreate it.
  bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}

message FdMessage {
  uint32 Fd = 1;  // what fd the data was from
  bool EOF = 2;   // true if eof was reached
  bytes Data = 3; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message ResizeMessage {
  uint32 Rows = 1;
  uint32 Cols = 2;
}

message SignalMessage {
  // we only send name (ie HUP, INT) because the int values
  // are platform dependent.
  string Name = 1;
}

message StatusRequest {
  string Ref = 1;
}

message StatusResponse {
  repeated moby.buildkit.v1.Vertex vertexes = 1;
  repeated moby.buildkit.v1.VertexStatus statuses = 2;
  repeated moby.buildkit.v1.VertexLog logs = 3;
  repeated moby.buildkit.v1.VertexWarning warnings = 4;
}

message InfoRequest {}

message InfoResponse {
  BuildxVersion buildxVersion = 1;
}

message BuildxVersion {
  string package = 1;
  string version = 2;
  string revision = 3;
}
@@ -1,3 +0,0 @@
package pb

//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto
@@ -1,181 +0,0 @@
package pb

import (
	"path/filepath"
	"strings"

	"github.com/moby/buildkit/util/gitutil"
)

// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them to absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
	localContext := false
	if options.ContextPath != "" && options.ContextPath != "-" {
		if !isRemoteURL(options.ContextPath) {
			localContext = true
			options.ContextPath, err = filepath.Abs(options.ContextPath)
			if err != nil {
				return nil, err
			}
		}
	}
	if options.DockerfileName != "" && options.DockerfileName != "-" {
		if localContext && !isHTTPURL(options.DockerfileName) {
			options.DockerfileName, err = filepath.Abs(options.DockerfileName)
			if err != nil {
				return nil, err
			}
		}
	}

	var contexts map[string]string
	for k, v := range options.NamedContexts {
		if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
			// url prefix, this is a remote path
		} else if strings.HasPrefix(v, "oci-layout://") {
			// oci layout prefix, this is a local path
			p := strings.TrimPrefix(v, "oci-layout://")
			p, err = filepath.Abs(p)
			if err != nil {
				return nil, err
			}
			v = "oci-layout://" + p
		} else {
			// no prefix, assume local path
			v, err = filepath.Abs(v)
			if err != nil {
				return nil, err
			}
		}

		if contexts == nil {
			contexts = make(map[string]string)
		}
		contexts[k] = v
	}
	options.NamedContexts = contexts

	var cacheFrom []*CacheOptionsEntry
	for _, co := range options.CacheFrom {
		switch co.Type {
		case "local":
			var attrs map[string]string
			for k, v := range co.Attrs {
				if attrs == nil {
					attrs = make(map[string]string)
				}
				switch k {
				case "src":
					p := v
					if p != "" {
						p, err = filepath.Abs(p)
						if err != nil {
							return nil, err
						}
					}
					attrs[k] = p
				default:
					attrs[k] = v
				}
			}
			co.Attrs = attrs
			cacheFrom = append(cacheFrom, co)
		default:
			cacheFrom = append(cacheFrom, co)
		}
	}
	options.CacheFrom = cacheFrom

	var cacheTo []*CacheOptionsEntry
	for _, co := range options.CacheTo {
		switch co.Type {
		case "local":
			var attrs map[string]string
			for k, v := range co.Attrs {
				if attrs == nil {
					attrs = make(map[string]string)
				}
				switch k {
				case "dest":
					p := v
					if p != "" {
						p, err = filepath.Abs(p)
						if err != nil {
							return nil, err
						}
					}
					attrs[k] = p
				default:
					attrs[k] = v
				}
			}
			co.Attrs = attrs
			cacheTo = append(cacheTo, co)
		default:
			cacheTo = append(cacheTo, co)
		}
	}
	options.CacheTo = cacheTo
	var exports []*ExportEntry
	for _, e := range options.Exports {
		if e.Destination != "" && e.Destination != "-" {
			e.Destination, err = filepath.Abs(e.Destination)
			if err != nil {
				return nil, err
			}
		}
		exports = append(exports, e)
	}
	options.Exports = exports

	var secrets []*Secret
	for _, s := range options.Secrets {
		if s.FilePath != "" {
			s.FilePath, err = filepath.Abs(s.FilePath)
			if err != nil {
				return nil, err
			}
		}
		secrets = append(secrets, s)
	}
	options.Secrets = secrets

	var ssh []*SSH
	for _, s := range options.SSH {
		var ps []string
		for _, pt := range s.Paths {
			p := pt
			if p != "" {
				p, err = filepath.Abs(p)
				if err != nil {
					return nil, err
				}
			}
			ps = append(ps, p)
		}
		s.Paths = ps
		ssh = append(ssh, s)
	}
	options.SSH = ssh

	return options, nil
}

// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has a http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
	return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}

func isRemoteURL(c string) bool {
	if isHTTPURL(c) {
		return true
	}
	if _, err := gitutil.ParseGitRef(c); err == nil {
		return true
	}
	return false
}
@@ -1,248 +0,0 @@
package pb

import (
	"os"
	"path/filepath"
	"reflect"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestResolvePaths(t *testing.T) {
	tmpwd, err := os.MkdirTemp("", "testresolvepaths")
	require.NoError(t, err)
	defer os.Remove(tmpwd)
	require.NoError(t, os.Chdir(tmpwd))
	tests := []struct {
		name    string
		options BuildOptions
		want    BuildOptions
	}{
		{
			name:    "contextpath",
			options: BuildOptions{ContextPath: "test"},
			want:    BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
		},
		{
			name:    "contextpath-cwd",
			options: BuildOptions{ContextPath: "."},
			want:    BuildOptions{ContextPath: tmpwd},
		},
		{
			name:    "contextpath-dash",
			options: BuildOptions{ContextPath: "-"},
			want:    BuildOptions{ContextPath: "-"},
		},
		{
			name:    "contextpath-ssh",
			options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
			want:    BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
		},
		{
			name:    "dockerfilename",
			options: BuildOptions{DockerfileName: "test", ContextPath: "."},
			want:    BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
		},
		{
			name:    "dockerfilename-dash",
			options: BuildOptions{DockerfileName: "-", ContextPath: "."},
			want:    BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
		},
		{
			name:    "dockerfilename-remote",
			options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
			want:    BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
		},
		{
			name: "contexts",
			options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
			want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
		},
		{
			name: "cache-from",
			options: BuildOptions{
				CacheFrom: []*CacheOptionsEntry{
					{
						Type:  "local",
						Attrs: map[string]string{"src": "test"},
					},
					{
						Type:  "registry",
						Attrs: map[string]string{"ref": "user/app"},
					},
				},
			},
			want: BuildOptions{
				CacheFrom: []*CacheOptionsEntry{
					{
						Type:  "local",
						Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
					},
					{
						Type:  "registry",
						Attrs: map[string]string{"ref": "user/app"},
					},
				},
			},
		},
		{
			name: "cache-to",
			options: BuildOptions{
				CacheTo: []*CacheOptionsEntry{
					{
						Type:  "local",
						Attrs: map[string]string{"dest": "test"},
					},
					{
						Type:  "registry",
						Attrs: map[string]string{"ref": "user/app"},
					},
				},
			},
			want: BuildOptions{
				CacheTo: []*CacheOptionsEntry{
					{
						Type:  "local",
						Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
					},
					{
						Type:  "registry",
						Attrs: map[string]string{"ref": "user/app"},
					},
				},
			},
		},
		{
			name: "exports",
			options: BuildOptions{
				Exports: []*ExportEntry{
					{
						Type:        "local",
						Destination: "-",
					},
					{
						Type:        "local",
						Destination: "test1",
					},
					{
						Type:        "tar",
						Destination: "test3",
					},
					{
						Type:        "oci",
						Destination: "-",
					},
					{
						Type:        "docker",
						Destination: "test4",
					},
					{
						Type:  "image",
						Attrs: map[string]string{"push": "true"},
					},
				},
			},
			want: BuildOptions{
				Exports: []*ExportEntry{
					{
						Type:        "local",
						Destination: "-",
					},
					{
						Type:        "local",
						Destination: filepath.Join(tmpwd, "test1"),
					},
					{
						Type:        "tar",
						Destination: filepath.Join(tmpwd, "test3"),
					},
					{
						Type:        "oci",
						Destination: "-",
					},
					{
						Type:        "docker",
						Destination: filepath.Join(tmpwd, "test4"),
					},
					{
						Type:  "image",
						Attrs: map[string]string{"push": "true"},
					},
				},
			},
		},
		{
			name: "secrets",
			options: BuildOptions{
				Secrets: []*Secret{
					{
						FilePath: "test1",
					},
					{
						ID:  "val",
						Env: "a",
					},
					{
						ID:       "test",
						FilePath: "test3",
					},
				},
			},
			want: BuildOptions{
				Secrets: []*Secret{
					{
						FilePath: filepath.Join(tmpwd, "test1"),
					},
					{
						ID:  "val",
						Env: "a",
					},
					{
						ID:       "test",
						FilePath: filepath.Join(tmpwd, "test3"),
					},
				},
			},
		},
		{
			name: "ssh",
			options: BuildOptions{
				SSH: []*SSH{
					{
						ID:    "default",
						Paths: []string{"test1", "test2"},
					},
					{
						ID:    "a",
						Paths: []string{"test3"},
					},
				},
			},
			want: BuildOptions{
				SSH: []*SSH{
					{
						ID:    "default",
						Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
					},
					{
						ID:    "a",
						Paths: []string{filepath.Join(tmpwd, "test3")},
					},
				},
			},
		},
	}
	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			got, err := ResolveOptionPaths(&tt.options)
			require.NoError(t, err)
			if !reflect.DeepEqual(tt.want, *got) {
				t.Fatalf("expected %#v, got %#v", tt.want, *got)
			}
		})
	}
}
@@ -1,126 +0,0 @@
package pb

import (
	"github.com/docker/buildx/util/progress"
	control "github.com/moby/buildkit/api/services/control"
	"github.com/moby/buildkit/client"
	"github.com/opencontainers/go-digest"
)

type writer struct {
	ch chan<- *StatusResponse
}

func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
	return &writer{ch: ch}
}

func (w *writer) Write(status *client.SolveStatus) {
	w.ch <- ToControlStatus(status)
}

func (w *writer) WriteBuildRef(target string, ref string) {
	return
}

func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
	return true
}

func (w *writer) ClearLogSource(interface{}) {}

func ToControlStatus(s *client.SolveStatus) *StatusResponse {
	resp := StatusResponse{}
	for _, v := range s.Vertexes {
		resp.Vertexes = append(resp.Vertexes, &control.Vertex{
			Digest:        v.Digest,
			Inputs:        v.Inputs,
			Name:          v.Name,
			Started:       v.Started,
			Completed:     v.Completed,
			Error:         v.Error,
			Cached:        v.Cached,
			ProgressGroup: v.ProgressGroup,
		})
	}
	for _, v := range s.Statuses {
		resp.Statuses = append(resp.Statuses, &control.VertexStatus{
			ID:        v.ID,
			Vertex:    v.Vertex,
			Name:      v.Name,
			Total:     v.Total,
			Current:   v.Current,
			Timestamp: v.Timestamp,
			Started:   v.Started,
			Completed: v.Completed,
		})
	}
	for _, v := range s.Logs {
		resp.Logs = append(resp.Logs, &control.VertexLog{
			Vertex:    v.Vertex,
			Stream:    int64(v.Stream),
			Msg:       v.Data,
			Timestamp: v.Timestamp,
		})
	}
	for _, v := range s.Warnings {
		resp.Warnings = append(resp.Warnings, &control.VertexWarning{
			Vertex: v.Vertex,
			Level:  int64(v.Level),
			Short:  v.Short,
			Detail: v.Detail,
			Url:    v.URL,
			Info:   v.SourceInfo,
			Ranges: v.Range,
		})
	}
	return &resp
}

func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
	s := client.SolveStatus{}
	for _, v := range resp.Vertexes {
		s.Vertexes = append(s.Vertexes, &client.Vertex{
			Digest:        v.Digest,
			Inputs:        v.Inputs,
			Name:          v.Name,
			Started:       v.Started,
			Completed:     v.Completed,
			Error:         v.Error,
			Cached:        v.Cached,
			ProgressGroup: v.ProgressGroup,
		})
	}
	for _, v := range resp.Statuses {
		s.Statuses = append(s.Statuses, &client.VertexStatus{
			ID:        v.ID,
			Vertex:    v.Vertex,
			Name:      v.Name,
			Total:     v.Total,
			Current:   v.Current,
			Timestamp: v.Timestamp,
			Started:   v.Started,
			Completed: v.Completed,
		})
	}
	for _, v := range resp.Logs {
		s.Logs = append(s.Logs, &client.VertexLog{
			Vertex:    v.Vertex,
			Stream:    int(v.Stream),
			Data:      v.Msg,
			Timestamp: v.Timestamp,
		})
	}
	for _, v := range resp.Warnings {
		s.Warnings = append(s.Warnings, &client.VertexWarning{
			Vertex:     v.Vertex,
			Level:      int(v.Level),
			Short:      v.Short,
			Detail:     v.Detail,
			URL:        v.Url,
			SourceInfo: v.Info,
			Range:      v.Ranges,
		})
	}
	return &s
}
@@ -1,22 +0,0 @@
package pb

import (
	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/secrets/secretsprovider"
)

func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
	fs := make([]secretsprovider.Source, 0, len(secrets))
	for _, secret := range secrets {
		fs = append(fs, secretsprovider.Source{
			ID:       secret.ID,
			FilePath: secret.FilePath,
			Env:      secret.Env,
		})
	}
	store, err := secretsprovider.NewStore(fs)
	if err != nil {
		return nil, err
	}
	return secretsprovider.NewSecretProvider(store), nil
}
@@ -1,18 +0,0 @@
package pb

import (
	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/sshforward/sshprovider"
)

func CreateSSH(ssh []*SSH) (session.Attachable, error) {
	configs := make([]sshprovider.AgentConfig, 0, len(ssh))
	for _, ssh := range ssh {
		cfg := sshprovider.AgentConfig{
			ID:    ssh.ID,
			Paths: append([]string{}, ssh.Paths...),
		}
		configs = append(configs, cfg)
	}
	return sshprovider.NewSSHAgentProvider(configs)
}
@@ -1,149 +0,0 @@
package processes

import (
	"context"
	"sync"
	"sync/atomic"

	"github.com/docker/buildx/build"
	"github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/ioset"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// Process provides methods to control a process.
type Process struct {
	inEnd         *ioset.Forwarder
	invokeConfig  *pb.InvokeConfig
	errCh         chan error
	processCancel func()
	serveIOCancel func()
}

// ForwardIO forwards the process's IO to the specified reader/writer.
// Optionally specify ioCancelCallback, which will be called when
// the process closes the specified IO. This is useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
	p.inEnd.SetIn(in)
	if f := p.serveIOCancel; f != nil {
		f()
	}
	p.serveIOCancel = ioCancelCallback
}

// Done returns a channel where error or nil will be sent
// when the process exits.
// TODO: change this to Wait()
func (p *Process) Done() <-chan error {
	return p.errCh
}

// Manager manages a set of processes.
type Manager struct {
	container atomic.Value
	processes sync.Map
}

// NewManager creates and returns a Manager.
func NewManager() *Manager {
	return &Manager{}
}

// Get returns the specified process.
func (m *Manager) Get(id string) (*Process, bool) {
	v, ok := m.processes.Load(id)
	if !ok {
		return nil, false
	}
	return v.(*Process), true
}

// CancelRunningProcesses cancels execution of all running processes.
func (m *Manager) CancelRunningProcesses() {
	var funcs []func()
	m.processes.Range(func(key, value any) bool {
		funcs = append(funcs, value.(*Process).processCancel)
		m.processes.Delete(key)
		return true
	})
	for _, f := range funcs {
		f()
	}
}

// ListProcesses lists all running processes.
func (m *Manager) ListProcesses() (res []*pb.ProcessInfo) {
	m.processes.Range(func(key, value any) bool {
		res = append(res, &pb.ProcessInfo{
			ProcessID:    key.(string),
			InvokeConfig: value.(*Process).invokeConfig,
		})
		return true
	})
	return res
}

// DeleteProcess deletes the specified process.
func (m *Manager) DeleteProcess(id string) error {
	p, ok := m.processes.LoadAndDelete(id)
	if !ok {
		return errors.Errorf("unknown process %q", id)
	}
	p.(*Process).processCancel()
	return nil
}

// StartProcess starts a process in the container.
// When a container isn't available (i.e. first time invoking or the container has exited) or cfg.Rollback is set,
// this method will start a new container and run the process in it. Otherwise, this method starts a new process in the
// existing container.
func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *pb.InvokeConfig) (*Process, error) {
	// Get the target result to invoke a container from
	var ctr *build.Container
	if a := m.container.Load(); a != nil {
		ctr = a.(*build.Container)
	}
	if cfg.Rollback || ctr == nil || ctr.IsUnavailable() {
		go m.CancelRunningProcesses()
		// (Re)create a new container if this is a rollback or the first time a process is invoked.
		if ctr != nil {
			go ctr.Cancel() // Finish the existing container
		}
		var err error
		ctr, err = build.NewContainer(context.TODO(), resultCtx, cfg)
		if err != nil {
			return nil, errors.Errorf("failed to create container %v", err)
		}
		m.container.Store(ctr)
	}
	// [client(ForwardIO)] <-forwarder(switchable)-> [out] <-pipe-> [in] <- [process]
	in, out := ioset.Pipe()
	f := ioset.NewForwarder()
	f.PropagateStdinClose = false
	f.SetOut(&out)

	// Register process
	ctx, cancel := context.WithCancel(context.TODO())
	var cancelOnce sync.Once
	processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
	p := &Process{
		inEnd:         f,
		invokeConfig:  cfg,
		processCancel: processCancelFunc,
		errCh:         make(chan error),
	}
	m.processes.Store(pid, p)
	go func() {
		var err error
		if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
			logrus.Debugf("process error: %v", err)
		}
		logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
		m.processes.Delete(pid)
		processCancelFunc()
		p.errCh <- err
	}()

	return p, nil
}
@@ -1,240 +0,0 @@
package remote

import (
	"context"
	"io"
	"sync"
	"time"

	"github.com/containerd/containerd/defaults"
	"github.com/containerd/containerd/pkg/dialer"
	"github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/identity"
	"github.com/moby/buildkit/util/grpcerrors"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
	"google.golang.org/grpc/credentials/insecure"
)

func NewClient(ctx context.Context, addr string) (*Client, error) {
	backoffConfig := backoff.DefaultConfig
	backoffConfig.MaxDelay = 3 * time.Second
	connParams := grpc.ConnectParams{
		Backoff: backoffConfig,
	}
	gopts := []grpc.DialOption{
		grpc.WithBlock(),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithConnectParams(connParams),
		grpc.WithContextDialer(dialer.ContextDialer),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
		grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
		grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
		grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
	}
	conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
	if err != nil {
		return nil, err
	}
	return &Client{conn: conn}, nil
}

type Client struct {
	conn      *grpc.ClientConn
	closeOnce sync.Once
}

func (c *Client) Close() (err error) {
	c.closeOnce.Do(func() {
		err = c.conn.Close()
	})
	return
}

func (c *Client) Version(ctx context.Context) (string, string, string, error) {
	res, err := c.client().Info(ctx, &pb.InfoRequest{})
	if err != nil {
		return "", "", "", err
	}
	v := res.BuildxVersion
	return v.Package, v.Version, v.Revision, nil
}

func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
	res, err := c.client().List(ctx, &pb.ListRequest{})
	if err != nil {
		return nil, err
	}
	return res.Keys, nil
}

func (c *Client) Disconnect(ctx context.Context, key string) error {
	if key == "" {
		return nil
	}
	_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
	return err
}

func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
	res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
	if err != nil {
		return nil, err
	}
	return res.Infos, nil
}

func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
	_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
	return err
}

func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
	if ref == "" || pid == "" {
		return errors.New("build reference must be specified")
	}
	stream, err := c.client().Invoke(ctx)
	if err != nil {
		return err
	}
	return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
		stdin:  in,
		stdout: stdout,
		stderr: stderr,
		// TODO: Signal, Resize
	})
}

func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
	return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
}

func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
	ref := identity.NewID()
	statusChan := make(chan *client.SolveStatus)
	eg, egCtx := errgroup.WithContext(ctx)
	var resp *client.SolveResponse
	eg.Go(func() error {
		defer close(statusChan)
		var err error
		resp, err = c.build(egCtx, ref, options, in, statusChan)
		return err
	})
	eg.Go(func() error {
		for s := range statusChan {
			st := s
			progress.Write(st)
		}
		return nil
	})
	return ref, resp, eg.Wait()
}

func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
	eg, egCtx := errgroup.WithContext(ctx)
	done := make(chan struct{})

	var resp *client.SolveResponse

	eg.Go(func() error {
		defer close(done)
		pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
			Ref:     ref,
			Options: &options,
		})
		if err != nil {
			return err
		}
		resp = &client.SolveResponse{
			ExporterResponse: pbResp.ExporterResponse,
		}
		return nil
	})
	eg.Go(func() error {
		stream, err := c.client().Status(egCtx, &pb.StatusRequest{
			Ref: ref,
		})
		if err != nil {
			return err
		}
		for {
			resp, err := stream.Recv()
			if err != nil {
				if err == io.EOF {
					return nil
				}
				return errors.Wrap(err, "failed to receive status")
			}
			statusChan <- pb.FromControlStatus(resp)
		}
	})
	if in != nil {
		eg.Go(func() error {
			stream, err := c.client().Input(egCtx)
			if err != nil {
				return err
			}
			if err := stream.Send(&pb.InputMessage{
				Input: &pb.InputMessage_Init{
					Init: &pb.InputInitMessage{
						Ref: ref,
					},
				},
			}); err != nil {
				return errors.Wrap(err, "failed to init input")
			}

			inReader, inWriter := io.Pipe()
			eg2, _ := errgroup.WithContext(ctx)
			eg2.Go(func() error {
				<-done
				return inWriter.Close()
			})
			go func() {
				// do not wait for read completion but return here and let the caller send EOF
				// this allows us to return on ctx.Done() without being blocked by this reader.
				io.Copy(inWriter, in)
				inWriter.Close()
			}()
			eg2.Go(func() error {
				for {
					buf := make([]byte, 32*1024)
					n, err := inReader.Read(buf)
					if err != nil {
						if err == io.EOF {
							break // break loop and send EOF
						}
						return err
					} else if n > 0 {
						if err := stream.Send(&pb.InputMessage{
							Input: &pb.InputMessage_Data{
								Data: &pb.DataMessage{
									Data: buf[:n],
								},
							},
						}); err != nil {
							return err
						}
					}
				}
				return stream.Send(&pb.InputMessage{
					Input: &pb.InputMessage_Data{
						Data: &pb.DataMessage{
							EOF: true,
						},
					},
				})
			})
			return eg2.Wait()
		})
	}
	return resp, eg.Wait()
}

func (c *Client) client() pb.ControllerClient {
	return pb.NewControllerClient(c.conn)
}
@@ -1,333 +0,0 @@
//go:build linux

package remote

import (
	"context"
	"fmt"
	"io"
	"net"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"strconv"
	"syscall"
	"time"

	"github.com/containerd/log"
	"github.com/docker/buildx/build"
	cbuild "github.com/docker/buildx/controller/build"
	"github.com/docker/buildx/controller/control"
	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/buildx/version"
	"github.com/docker/cli/cli/command"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/util/grpcerrors"
	"github.com/pelletier/go-toml"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"google.golang.org/grpc"
)

const (
	serveCommandName = "_INTERNAL_SERVE"
)

var (
	defaultLogFilename    = fmt.Sprintf("buildx.%s.log", version.Revision)
	defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
	defaultPIDFilename    = fmt.Sprintf("buildx.%s.pid", version.Revision)
)

type serverConfig struct {
	// Specify buildx server root
	Root string `toml:"root"`

	// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
	LogLevel string `toml:"log_level"`

	// Specify file to output buildx server log
	LogFile string `toml:"log_file"`
}

func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
	rootDir := opts.Root
	if rootDir == "" {
		rootDir = rootDataDir(dockerCli)
	}
	serverRoot := filepath.Join(rootDir, "shared")

	// connect to buildx server if it is already running
	ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
	c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
	cancel()
	if err != nil {
		if !errors.Is(err, context.DeadlineExceeded) {
			return nil, errors.Wrap(err, "cannot connect to the buildx server")
		}
	} else {
		return &buildxController{c, serverRoot}, nil
	}

	// start buildx server via subcommand
	err = logger.Wrap("no buildx server found; launching...", func() error {
		launchFlags := []string{}
		if opts.ServerConfig != "" {
			launchFlags = append(launchFlags, "--config", opts.ServerConfig)
		}
		logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
		if err != nil {
			return err
		}
		wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
		if err != nil {
			return err
		}
		go wait()

		// wait for buildx server to be ready
		ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
		c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
		cancel()
		if err != nil {
			return errors.Wrap(err, "cannot connect to the buildx server")
		}
		return nil
	})
	if err != nil {
		return nil, err
	}
	return &buildxController{c, serverRoot}, nil
}

func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
	cmd.AddCommand(
		serveCmd(dockerCli),
	)
}

func serveCmd(dockerCli command.Cli) *cobra.Command {
	var serverConfigPath string
	cmd := &cobra.Command{
		Use:    fmt.Sprintf("%s [OPTIONS]", serveCommandName),
		Hidden: true,
		RunE: func(cmd *cobra.Command, args []string) error {
			// Parse config
			config, err := getConfig(dockerCli, serverConfigPath)
			if err != nil {
				return err
			}
			if config.LogLevel == "" {
				logrus.SetLevel(logrus.InfoLevel)
			} else {
				lvl, err := logrus.ParseLevel(config.LogLevel)
				if err != nil {
					return errors.Wrap(err, "failed to prepare logger")
				}
				logrus.SetLevel(lvl)
			}
			logrus.SetFormatter(&logrus.JSONFormatter{
				TimestampFormat: log.RFC3339NanoFixed,
			})
			root, err := prepareRootDir(dockerCli, config)
			if err != nil {
				return err
			}
			pidF := filepath.Join(root, defaultPIDFilename)
			if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
				return err
			}
			defer func() {
				if err := os.Remove(pidF); err != nil {
					logrus.Errorf("failed to clean up info file %q: %v", pidF, err)
				}
			}()

			// prepare server
			b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
				return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
			})
			defer b.Close()

			// serve server
			addr := filepath.Join(root, defaultSocketFilename)
			if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
				return err
			}
			defer func() {
				if err := os.Remove(addr); err != nil {
					logrus.Errorf("failed to clean up socket %q: %v", addr, err)
				}
			}()
			logrus.Infof("starting server at %q", addr)
			l, err := net.Listen("unix", addr)
			if err != nil {
				return err
			}
			rpc := grpc.NewServer(
				grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
				grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
			)
			controllerapi.RegisterControllerServer(rpc, b)
			doneCh := make(chan struct{})
			errCh := make(chan error, 1)
			go func() {
				defer close(doneCh)
				if err := rpc.Serve(l); err != nil {
					errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
				}
			}()

			var s os.Signal
			sigCh := make(chan os.Signal, 1)
			signal.Notify(sigCh, syscall.SIGINT)
			signal.Notify(sigCh, syscall.SIGTERM)
			select {
			case err := <-errCh:
				logrus.Errorf("got error %s, exiting", err)
				return err
			case s = <-sigCh:
				logrus.Infof("got signal %s, exiting", s)
				return nil
			case <-doneCh:
				logrus.Infof("rpc server done, exiting")
				return nil
			}
		},
	}

	flags := cmd.Flags()
	flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
	return cmd
}

func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
	config, err := getConfig(dockerCli, configPath)
	if err != nil {
		return "", err
	}
	if config.LogFile == "" {
		root, err := prepareRootDir(dockerCli, config)
		if err != nil {
			return "", err
		}
		return filepath.Join(root, defaultLogFilename), nil
	}
	return config.LogFile, nil
}

func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
	var defaultConfigPath bool
	if configPath == "" {
		defaultRoot := rootDataDir(dockerCli)
		configPath = filepath.Join(defaultRoot, "config.toml")
		defaultConfigPath = true
	}
	var config serverConfig
	tree, err := toml.LoadFile(configPath)
	if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
		return nil, errors.Wrapf(err, "failed to read config %q", configPath)
	} else if err == nil {
		if err := tree.Unmarshal(&config); err != nil {
			return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
		}
	}
	return &config, nil
}

func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
	rootDir := config.Root
	if rootDir == "" {
		rootDir = rootDataDir(dockerCli)
	}
	if rootDir == "" {
		return "", errors.New("buildx root dir must be determined")
	}
	if err := os.MkdirAll(rootDir, 0700); err != nil {
		return "", err
	}
	serverRoot := filepath.Join(rootDir, "shared")
	if err := os.MkdirAll(serverRoot, 0700); err != nil {
		return "", err
	}
	return serverRoot, nil
}

func rootDataDir(dockerCli command.Cli) string {
	return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
}

func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
	c, err := NewClient(ctx, addr)
	if err != nil {
		return nil, err
	}
	p, v, r, err := c.Version(ctx)
	if err != nil {
		return nil, err
	}
	logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
	if !(p == version.Package && v == version.Version && r == version.Revision) {
		return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
	}
	return c, nil
}

type buildxController struct {
	*Client
	serverRoot string
}

func (c *buildxController) Kill(ctx context.Context) error {
	pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
	if err != nil {
		return err
	}
	pid, err := strconv.ParseInt(string(pidB), 10, 64)
	if err != nil {
		return err
	}
	if pid <= 0 {
		return errors.New("no PID is recorded for buildx server")
	}
	p, err := os.FindProcess(int(pid))
	if err != nil {
		return err
	}
	if err := p.Signal(syscall.SIGINT); err != nil {
		return err
	}
	// TODO: Should we send SIGKILL if process doesn't finish?
	return nil
}

func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
	// set absolute path of binary, since we set the working directory to the root
	pathname, err := os.Executable()
	if err != nil {
		return nil, err
	}
	bCmd := exec.CommandContext(ctx, pathname, args...)
	if logFile != "" {
		f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		bCmd.Stdout = f
		bCmd.Stderr = f
	}
	bCmd.Stdin = nil
	bCmd.Dir = "/"
	bCmd.SysProcAttr = &syscall.SysProcAttr{
		Setsid: true,
	}
	if err := bCmd.Start(); err != nil {
		return nil, err
	}
	return bCmd.Wait, nil
}
@@ -1,19 +0,0 @@
//go:build !linux

package remote

import (
	"context"

	"github.com/docker/buildx/controller/control"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
	return nil, errors.New("remote buildx unsupported")
}

func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}
@@ -1,430 +0,0 @@
package remote

import (
	"context"
	"io"
	"syscall"
	"time"

	"github.com/docker/buildx/controller/pb"
	"github.com/moby/sys/signal"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
)

type msgStream interface {
	Send(*pb.Message) error
	Recv() (*pb.Message, error)
}

type ioServerConfig struct {
	stdin          io.WriteCloser
	stdout, stderr io.ReadCloser

	// signalFn is a callback function called when a signal is received from the client.
	signalFn func(context.Context, syscall.Signal) error

	// resizeFn is a callback function called when a resize event is received from the client.
	resizeFn func(context.Context, winSize) error
}

func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
	stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
	stream := &debugStream{srv, "server=" + time.Now().String()}
	eg, ctx := errgroup.WithContext(attachCtx)
	done := make(chan struct{})

	msg, err := receive(ctx, stream)
	if err != nil {
		return err
	}
	init := msg.GetInit()
	if init == nil {
		return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
	}
	ref := init.Ref
	if ref == "" {
		return errors.New("no ref is provided")
	}
	if err := initFn(init); err != nil {
		return errors.Wrap(err, "failed to initialize IO server")
	}

	if stdout != nil {
		stdoutReader, stdoutWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stdoutWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF.
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stdoutWriter, stdout)
			stdoutWriter.Close()
		}()

		eg.Go(func() error {
			defer stdoutReader.Close()
			return copyToStream(1, stream, stdoutReader)
		})
	}

	if stderr != nil {
		stderrReader, stderrWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stderrWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF.
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stderrWriter, stderr)
			stderrWriter.Close()
		}()

		eg.Go(func() error {
			defer stderrReader.Close()
			return copyToStream(2, stream, stderrReader)
		})
	}

	msgCh := make(chan *pb.Message)
	eg.Go(func() error {
		defer close(msgCh)
		for {
			msg, err := receive(ctx, stream)
			if err != nil {
				return err
			}
			select {
			case msgCh <- msg:
			case <-done:
				return nil
			case <-ctx.Done():
				return nil
			}
		}
	})

	eg.Go(func() error {
		defer close(done)
		for {
			var msg *pb.Message
			select {
			case msg = <-msgCh:
			case <-ctx.Done():
				return nil
			}
			if msg == nil {
				return nil
			}
			if file := msg.GetFile(); file != nil {
				if file.Fd != 0 {
					return errors.Errorf("unexpected fd: %v", file.Fd)
				}
				if stdin == nil {
					continue // no stdin destination is specified so ignore the data
				}
				if len(file.Data) > 0 {
					_, err := stdin.Write(file.Data)
					if err != nil {
						return err
					}
				}
				if file.EOF {
					stdin.Close()
				}
			} else if resize := msg.GetResize(); resize != nil {
				if ioConfig.resizeFn != nil {
					ioConfig.resizeFn(ctx, winSize{
						cols: resize.Cols,
						rows: resize.Rows,
					})
				}
			} else if sig := msg.GetSignal(); sig != nil {
				if ioConfig.signalFn != nil {
					syscallSignal, ok := signal.SignalMap[sig.Name]
					if !ok {
						continue
					}
					ioConfig.signalFn(ctx, syscallSignal)
				}
			} else {
				return errors.Errorf("unexpected message: %T", msg.GetInput())
			}
		}
	})

	return eg.Wait()
}

type ioAttachConfig struct {
	stdin          io.ReadCloser
	stdout, stderr io.WriteCloser
	signal         <-chan syscall.Signal
	resize         <-chan winSize
}

type winSize struct {
	rows uint32
	cols uint32
}

func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
	eg, ctx := errgroup.WithContext(ctx)
	done := make(chan struct{})

	if err := stream.Send(&pb.Message{
		Input: &pb.Message_Init{
			Init: initMessage,
		},
	}); err != nil {
		return errors.Wrap(err, "failed to init")
	}

	if cfg.stdin != nil {
		stdinReader, stdinWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stdinWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF.
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stdinWriter, cfg.stdin)
			stdinWriter.Close()
		}()

		eg.Go(func() error {
			defer stdinReader.Close()
			return copyToStream(0, stream, stdinReader)
		})
	}

	if cfg.signal != nil {
		eg.Go(func() error {
			for {
				var sig syscall.Signal
				select {
				case sig = <-cfg.signal:
				case <-done:
					return nil
				case <-ctx.Done():
					return nil
				}
				name := sigToName[sig]
				if name == "" {
					continue
				}
				if err := stream.Send(&pb.Message{
					Input: &pb.Message_Signal{
						Signal: &pb.SignalMessage{
							Name: name,
						},
					},
				}); err != nil {
					return errors.Wrap(err, "failed to send signal")
				}
			}
		})
	}

	if cfg.resize != nil {
		eg.Go(func() error {
			for {
				var win winSize
				select {
				case win = <-cfg.resize:
				case <-done:
					return nil
				case <-ctx.Done():
					return nil
				}
				if err := stream.Send(&pb.Message{
					Input: &pb.Message_Resize{
						Resize: &pb.ResizeMessage{
							Rows: win.rows,
							Cols: win.cols,
						},
					},
				}); err != nil {
					return errors.Wrap(err, "failed to send resize")
				}
			}
		})
	}

	msgCh := make(chan *pb.Message)
	eg.Go(func() error {
		defer close(msgCh)
		for {
			msg, err := receive(ctx, stream)
			if err != nil {
				return err
			}
			select {
			case msgCh <- msg:
			case <-done:
				return nil
			case <-ctx.Done():
				return nil
			}
		}
	})

	eg.Go(func() error {
		eofs := make(map[uint32]struct{})
		defer close(done)
		for {
			var msg *pb.Message
			select {
			case msg = <-msgCh:
			case <-ctx.Done():
				return nil
			}
			if msg == nil {
				return nil
			}
			if file := msg.GetFile(); file != nil {
				if _, ok := eofs[file.Fd]; ok {
					continue
				}
				var out io.WriteCloser
				switch file.Fd {
				case 1:
					out = cfg.stdout
				case 2:
					out = cfg.stderr
				default:
					return errors.Errorf("unsupported fd %d", file.Fd)
				}
				if out == nil {
					logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
					continue
				}
				if len(file.Data) > 0 {
					if _, err := out.Write(file.Data); err != nil {
						return err
					}
				}
				if file.EOF {
					eofs[file.Fd] = struct{}{}
				}
			} else {
				return errors.Errorf("unexpected message: %T", msg.GetInput())
			}
		}
	})

	return eg.Wait()
}

func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
	msgCh := make(chan *pb.Message)
	errCh := make(chan error)
	go func() {
		msg, err := stream.Recv()
		if err != nil {
			if errors.Is(err, io.EOF) {
				return
			}
			errCh <- err
			return
		}
		msgCh <- msg
	}()
	select {
	case msg := <-msgCh:
		return msg, nil
	case err := <-errCh:
		return nil, err
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
	for {
		buf := make([]byte, 32*1024)
		n, err := r.Read(buf)
		if err != nil {
			if err == io.EOF {
				break // break loop and send EOF
			}
			return err
		} else if n > 0 {
			if err := snd.Send(&pb.Message{
				Input: &pb.Message_File{
					File: &pb.FdMessage{
						Fd:   fd,
						Data: buf[:n],
					},
				},
			}); err != nil {
				return err
			}
		}
	}
	return snd.Send(&pb.Message{
		Input: &pb.Message_File{
			File: &pb.FdMessage{
				Fd:  fd,
				EOF: true,
			},
		},
	})
}

var sigToName = map[syscall.Signal]string{}

func init() {
	for name, value := range signal.SignalMap {
		sigToName[value] = name
	}
}

type debugStream struct {
	msgStream
	prefix string
}

func (s *debugStream) Send(msg *pb.Message) error {
	switch m := msg.GetInput().(type) {
	case *pb.Message_File:
		if m.File.EOF {
			logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
		} else {
			logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
		}
	case *pb.Message_Resize:
		logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
	case *pb.Message_Signal:
		logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
	}
	return s.msgStream.Send(msg)
}

func (s *debugStream) Recv() (*pb.Message, error) {
	msg, err := s.msgStream.Recv()
	if err != nil {
		return nil, err
	}
	switch m := msg.GetInput().(type) {
	case *pb.Message_File:
		if m.File.EOF {
			logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
		} else {
			logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
		}
	case *pb.Message_Resize:
		logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
	case *pb.Message_Signal:
		logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
	}
	return msg, nil
}
@@ -1,439 +0,0 @@
package remote

import (
	"context"
	"io"
	"sync"
	"sync/atomic"
	"time"

	"github.com/docker/buildx/build"
	controllererrors "github.com/docker/buildx/controller/errdefs"
	"github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/processes"
	"github.com/docker/buildx/util/ioset"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/buildx/version"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)

func NewServer(buildFunc BuildFunc) *Server {
	return &Server{
		buildFunc: buildFunc,
	}
}

type Server struct {
	buildFunc BuildFunc
	session   map[string]*session
	sessionMu sync.Mutex
}

type session struct {
	buildOnGoing atomic.Bool
	statusChan   chan *pb.StatusResponse
	cancelBuild  func()
	buildOptions *pb.BuildOptions
	inputPipe    *io.PipeWriter

	result *build.ResultHandle

	processes *processes.Manager
}

func (s *session) cancelRunningProcesses() {
	s.processes.CancelRunningProcesses()
}

func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
	m.sessionMu.Lock()
	defer m.sessionMu.Unlock()
	s, ok := m.session[req.Ref]
	if !ok {
		return nil, errors.Errorf("unknown ref %q", req.Ref)
	}
	res = new(pb.ListProcessesResponse)
	res.Infos = append(res.Infos, s.processes.ListProcesses()...)
	return res, nil
}

func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
	m.sessionMu.Lock()
	defer m.sessionMu.Unlock()
	s, ok := m.session[req.Ref]
	if !ok {
		return nil, errors.Errorf("unknown ref %q", req.Ref)
	}
	return res, s.processes.DeleteProcess(req.ProcessID)
}

func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
	return &pb.InfoResponse{
		BuildxVersion: &pb.BuildxVersion{
			Package:  version.Package,
			Version:  version.Version,
			Revision: version.Revision,
		},
	}, nil
}

func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
	keys := make(map[string]struct{})

	m.sessionMu.Lock()
	for k := range m.session {
		keys[k] = struct{}{}
	}
	m.sessionMu.Unlock()

	var keysL []string
	for k := range keys {
		keysL = append(keysL, k)
	}
	return &pb.ListResponse{
		Keys: keysL,
	}, nil
}

func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
	key := req.Ref
	if key == "" {
		return nil, errors.New("disconnect: empty key")
	}

	m.sessionMu.Lock()
	if s, ok := m.session[key]; ok {
		if s.cancelBuild != nil {
			s.cancelBuild()
		}
		s.cancelRunningProcesses()
		if s.result != nil {
			s.result.Done()
		}
	}
	delete(m.session, key)
	m.sessionMu.Unlock()

	return &pb.DisconnectResponse{}, nil
}

func (m *Server) Close() error {
	m.sessionMu.Lock()
	for k := range m.session {
		if s, ok := m.session[k]; ok {
			if s.cancelBuild != nil {
				s.cancelBuild()
			}
			s.cancelRunningProcesses()
		}
	}
	m.sessionMu.Unlock()
	return nil
}

func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
	ref := req.Ref
	if ref == "" {
		return nil, errors.New("inspect: empty key")
	}
	var bo *pb.BuildOptions
	m.sessionMu.Lock()
	if s, ok := m.session[ref]; ok {
		bo = s.buildOptions
	} else {
		m.sessionMu.Unlock()
		return nil, errors.Errorf("inspect: unknown key %v", ref)
	}
	m.sessionMu.Unlock()
	return &pb.InspectResponse{Options: bo}, nil
}

func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
	ref := req.Ref
	if ref == "" {
		return nil, errors.New("build: empty key")
	}

	// Prepare status channel and session
	m.sessionMu.Lock()
	if m.session == nil {
		m.session = make(map[string]*session)
	}
	s, ok := m.session[ref]
	if ok {
		if !s.buildOnGoing.CompareAndSwap(false, true) {
			m.sessionMu.Unlock()
			return &pb.BuildResponse{}, errors.New("build ongoing")
		}
		s.cancelRunningProcesses()
		s.result = nil
	} else {
		s = &session{}
		s.buildOnGoing.Store(true)
	}

	s.processes = processes.NewManager()
	statusChan := make(chan *pb.StatusResponse)
	s.statusChan = statusChan
	inR, inW := io.Pipe()
	defer inR.Close()
	s.inputPipe = inW
	m.session[ref] = s
	m.sessionMu.Unlock()
	defer func() {
		close(statusChan)
		m.sessionMu.Lock()
		s, ok := m.session[ref]
		if ok {
			s.statusChan = nil
			s.buildOnGoing.Store(false)
		}
		m.sessionMu.Unlock()
	}()

	pw := pb.NewProgressWriter(statusChan)

	// Build the specified request
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
	m.sessionMu.Lock()
	if s, ok := m.session[ref]; ok {
		// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
		if res != nil {
			s.result = res
			s.cancelBuild = cancel
			s.buildOptions = req.Options
			m.session[ref] = s
			if buildErr != nil {
				buildErr = controllererrors.WrapBuild(buildErr, ref)
			}
		}
	} else {
		m.sessionMu.Unlock()
		return nil, errors.Errorf("build: unknown key %v", ref)
	}
	m.sessionMu.Unlock()

	if buildErr != nil {
		return nil, buildErr
	}

	if resp == nil {
		resp = &client.SolveResponse{}
	}
	return &pb.BuildResponse{
		ExporterResponse: resp.ExporterResponse,
	}, nil
}

func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
	ref := req.Ref
	if ref == "" {
		return errors.New("status: empty key")
	}

	// Wait and get status channel prepared by Build()
	var statusChan <-chan *pb.StatusResponse
	for {
		// TODO: timeout?
		m.sessionMu.Lock()
		if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
			m.sessionMu.Unlock()
			time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
			continue
		}
		statusChan = m.session[ref].statusChan
		m.sessionMu.Unlock()
		break
	}

	// forward status
	for ss := range statusChan {
		if ss == nil {
			break
		}
		if err := stream.Send(ss); err != nil {
			return err
		}
	}

	return nil
}

func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
	// Get the target ref from init message
	msg, err := stream.Recv()
	if err != nil {
		if !errors.Is(err, io.EOF) {
			return err
		}
		return nil
	}
	init := msg.GetInit()
	if init == nil {
		return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
	}
	ref := init.Ref
	if ref == "" {
		return errors.New("input: no ref is provided")
	}

	// Wait and get input stream pipe prepared by Build()
	var inputPipeW *io.PipeWriter
	for {
		// TODO: timeout?
		m.sessionMu.Lock()
		if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
			m.sessionMu.Unlock()
			time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
			continue
		}
		inputPipeW = m.session[ref].inputPipe
		m.sessionMu.Unlock()
		break
	}

	// Forward input stream
	eg, ctx := errgroup.WithContext(context.TODO())
	done := make(chan struct{})
	msgCh := make(chan *pb.InputMessage)
	eg.Go(func() error {
		defer close(msgCh)
		for {
			msg, err := stream.Recv()
			if err != nil {
				if !errors.Is(err, io.EOF) {
					return err
				}
				return nil
			}
			select {
			case msgCh <- msg:
			case <-done:
				return nil
			case <-ctx.Done():
				return nil
			}
		}
	})
	eg.Go(func() (retErr error) {
		defer close(done)
		defer func() {
			if retErr != nil {
				inputPipeW.CloseWithError(retErr)
				return
			}
			inputPipeW.Close()
		}()
		for {
			var msg *pb.InputMessage
			select {
			case msg = <-msgCh:
			case <-ctx.Done():
				return errors.Wrap(ctx.Err(), "canceled")
			}
			if msg == nil {
				return nil
			}
			if data := msg.GetData(); data != nil {
				if len(data.Data) > 0 {
					_, err := inputPipeW.Write(data.Data)
					if err != nil {
						return err
					}
				}
				if data.EOF {
					return nil
				}
			}
		}
	})

	return eg.Wait()
}

func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
	containerIn, containerOut := ioset.Pipe()
	defer func() { containerOut.Close(); containerIn.Close() }()

	initDoneCh := make(chan *processes.Process)
	initErrCh := make(chan error)
	eg, egCtx := errgroup.WithContext(context.TODO())
	srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
	eg.Go(func() error {
		defer srvIOCancel()
		return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
			defer func() {
				if retErr != nil {
					initErrCh <- retErr
				}
			}()
			ref := initMessage.Ref
			cfg := initMessage.InvokeConfig

			m.sessionMu.Lock()
			s, ok := m.session[ref]
			if !ok {
				m.sessionMu.Unlock()
				return errors.Errorf("invoke: unknown key %v", ref)
			}
			m.sessionMu.Unlock()

			pid := initMessage.ProcessID
			if pid == "" {
				return errors.Errorf("invoke: specify process ID")
			}
			proc, ok := s.processes.Get(pid)
			if !ok {
				// Start a new process.
				if cfg == nil {
					return errors.New("no container config is provided")
				}
				var err error
				proc, err = s.processes.StartProcess(pid, s.result, cfg)
				if err != nil {
					return err
				}
			}
			// Attach containerIn to this process
			proc.ForwardIO(&containerIn, srvIOCancel)
			initDoneCh <- proc
			return nil
		}, &ioServerConfig{
			stdin:  containerOut.Stdin,
			stdout: containerOut.Stdout,
			stderr: containerOut.Stderr,
			// TODO: signal, resize
		})
	})
	eg.Go(func() (rErr error) {
		defer srvIOCancel()
		// Wait for init done
		var proc *processes.Process
		select {
		case p := <-initDoneCh:
			proc = p
		case err := <-initErrCh:
			return err
		case <-egCtx.Done():
			return egCtx.Err()
		}

		// Wait for IO done
		select {
		case <-srvIOCtx.Done():
			return srvIOCtx.Err()
		case err := <-proc.Done():
			return err
		case <-egCtx.Done():
			return egCtx.Err()
		}
	})

	return eg.Wait()
}
@@ -1,5 +1,5 @@
 variable "GO_VERSION" {
-  default = null
+  default = "1.19"
 }
 variable "DOCS_FORMATS" {
   default = "md"
@@ -7,9 +7,6 @@ variable "DOCS_FORMATS" {
 variable "DESTDIR" {
   default = "./bin"
 }
-variable "GOLANGCI_LINT_MULTIPLATFORM" {
-  default = ""
-}

 # Special target: https://github.com/docker/metadata-action#bake-definition
 target "meta-helper" {
@@ -28,29 +25,13 @@ group "default" {
 }

 group "validate" {
-  targets = ["lint", "lint-gopls", "validate-vendor", "validate-docs"]
+  targets = ["lint", "validate-vendor", "validate-docs"]
 }

 target "lint" {
   inherits = ["_common"]
   dockerfile = "./hack/dockerfiles/lint.Dockerfile"
   output = ["type=cacheonly"]
-  platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
-    "darwin/amd64",
-    "darwin/arm64",
-    "linux/amd64",
-    "linux/arm64",
-    "linux/s390x",
-    "linux/ppc64le",
-    "linux/riscv64",
-    "windows/amd64",
-    "windows/arm64"
-  ] : []
-}
-
-target "lint-gopls" {
-  inherits = ["lint"]
-  target = "gopls-analyze"
 }

 target "validate-vendor" {
@@ -78,13 +59,6 @@ target "validate-authors" {
   output = ["type=cacheonly"]
 }

-target "validate-generated-files" {
-  inherits = ["_common"]
-  dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
-  target = "validate"
-  output = ["type=cacheonly"]
-}
-
 target "update-vendor" {
   inherits = ["_common"]
   dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -110,13 +84,6 @@ target "update-authors" {
   output = ["."]
 }

-target "update-generated-files" {
-  inherits = ["_common"]
-  dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
-  target = "update"
-  output = ["."]
-}
-
 target "mod-outdated" {
   inherits = ["_common"]
   dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -175,33 +142,3 @@ target "image-local" {
   inherits = ["image"]
   output = ["type=docker"]
 }
-
-variable "HTTP_PROXY" {
-  default = ""
-}
-variable "HTTPS_PROXY" {
-  default = ""
-}
-variable "NO_PROXY" {
-  default = ""
-}
-variable "TEST_BUILDKIT_TAG" {
-  default = null
-}
-
-target "integration-test-base" {
-  inherits = ["_common"]
-  args = {
-    HTTP_PROXY = HTTP_PROXY
-    HTTPS_PROXY = HTTPS_PROXY
-    NO_PROXY = NO_PROXY
-    BUILDKIT_VERSION = TEST_BUILDKIT_TAG
-  }
-  target = "integration-test-base"
-  output = ["type=cacheonly"]
-}
-
-target "integration-test" {
-  inherits = ["integration-test-base"]
-  target = "integration-test"
-}
File diff suppressed because it is too large
@@ -1,166 +0,0 @@
|
|||||||
# Debug monitor
|
|
||||||
|
|
||||||
To assist with creating and debugging complex builds, Buildx provides a
|
|
||||||
debugger to help you step through the build process and easily inspect the
|
|
||||||
state of the build environment at any point.
|
|
||||||
|
|
||||||
> **Note**
|
|
||||||
>
|
|
||||||
> The debug monitor is a new experimental feature in recent versions of Buildx.
|
|
||||||
> There are rough edges, known bugs, and missing features. Please try it out
|
|
||||||
> and let us know what you think!
|
|
||||||
|
|
||||||
## Starting the debugger
|
|
||||||
|
|
||||||
To start the debugger, first, ensure that `BUILDX_EXPERIMENTAL=1` is set in
|
|
||||||
your environment.
|
|
||||||
|
|
||||||
```console
|
|
||||||
$ export BUILDX_EXPERIMENTAL=1
|
|
||||||
```
|
|
||||||
|
|
||||||
To start a debug session for a build, you can use the `buildx debug` command with `--invoke` flag to specify a command to launch in the resulting image.
|
|
||||||
`buildx debug` command provides `buildx debug build` subcommand that provides the same features as the normal `buildx build` command but allows launching the debugger session after the build.
|
|
||||||
|
|
||||||
Arguments available after `buildx debug build` are the same as the normal `buildx build`.
|
|
||||||
|
|
||||||
```console
|
|
||||||
$ docker buildx debug --invoke /bin/sh build .
|
|
||||||
[+] Building 4.2s (19/19) FINISHED
|
|
||||||
=> [internal] connecting to local controller 0.0s
|
|
||||||
=> [internal] load build definition from Dockerfile 0.0s
|
|
||||||
=> => transferring dockerfile: 32B 0.0s
|
|
||||||
=> [internal] load .dockerignore 0.0s
|
|
||||||
=> => transferring context: 34B 0.0s
|
|
||||||
...
|
|
||||||
Launching interactive container. Press Ctrl-a-c to switch to monitor console
|
|
||||||
Interactive container was restarted with process "dzz7pjb4pk1mj29xqrx0ac3oj". Press Ctrl-a-c to switch to the new container
|
|
||||||
Switched IO
|
|
||||||
/ #
|
|
||||||
```
|
|
||||||
|
|
||||||
This launches a `/bin/sh` process in the final stage of the image, and allows
|
|
||||||
you to explore the contents of the image, without needing to export or load the
|
|
||||||
image outside of the builder.
|
|
||||||
|
|
||||||
For example, you can use `ls` to see the contents of the image:
|
|
||||||
|
|
||||||

```console
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
```

An optional long form lets you specify a detailed configuration for the
process. It must be CSV-style comma-separated key-value pairs.
Supported keys are `args` (can be in JSON array format), `entrypoint` (can be
in JSON array format), `env` (can be in JSON array format), `user`, `cwd` and
`tty` (bool).

Example:

```console
$ docker buildx debug --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' build .
```
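
Because embedded double quotes inside a CSV-quoted field must be doubled,
composing the long form by hand is easy to get wrong. As a small sketch
(the variable names are only for illustration), the value used above can be
assembled piecewise in a shell:

```shell
# Sketch: assembling a long-form --invoke value (illustrative values).
# Inside a CSV-quoted field, each embedded double quote is doubled.
entrypoint='entrypoint=["sh"]'
args='"args=[""-c"", ""env | grep -e FOO -e AAA""]"'
env_kv='"env=[""FOO=bar"", ""AAA=bbb""]"'
invoke="${entrypoint},${args},${env_kv}"
echo "$invoke"
# The assembled value can then be passed as:
#   docker buildx debug --invoke "$invoke" build .
```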

#### `on` flag

To start a debug session only when a build fails, use the `--on=error` flag:

```console
$ docker buildx debug --on=error build .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                              0.0s
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 32B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 34B                                           0.0s
...
 => ERROR [shell 10/10] RUN bad-command
------
 > [shell 10/10] RUN bad-command:
#0 0.049 /bin/sh: bad-command: not found
------
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Press Ctrl-a-c to switch to the new container
/ #
```
This allows you to explore the state of the image when the build failed.
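
To try this out, you need a build that fails. A minimal sketch of such a
Dockerfile (the stage name `shell` and the failing step mirror the output
above; everything else is illustrative):

```dockerfile
# Illustrative Dockerfile whose build fails, for experimenting with --on=error.
FROM alpine AS shell
# Fails: the command does not exist in the base image.
RUN bad-command
```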

#### Launch the debug session directly with the `buildx debug` subcommand

If you want to drop into a debug session without first starting the build, you
can use the `buildx debug` command to start a debug session.

```console
$ docker buildx debug
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                              0.0s
(buildx)
```

You can then use the commands available in [monitor mode](#monitor-mode) to
start and observe the build.

## Monitor mode

By default, when debugging, you'll be dropped into a shell in the final stage.

When you're in a debug shell, you can use the `Ctrl-a-c` key combination (press
`Ctrl`+`a` together, lift, then press `c`) to toggle between the debug shell
and the monitor mode. In monitor mode, you can run commands that control the
debug environment.

```console
(buildx) help
Available commands are:
  attach       attach to a buildx server or a process in the container
  disconnect   disconnect a client from a buildx server. Specific session ID can be specified an arg
  exec         execute a process in the interactive container
  exit         exits monitor
  help         shows this message. Optionally pass a command name as an argument to print the detailed usage.
  kill         kill buildx server
  list         list buildx sessions
  ps           list processes invoked by "exec". Use "attach" to attach IO to that process
  reload       reloads the context and build it
  rollback     re-runs the interactive container with the step's rootfs contents
```

## Build controllers

Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller is used, which
runs all builds in-process for a more stable experience. However, you can also
use the remote controller to detach the build process from the CLI.

To detach the build process from the CLI, use the `--detach=true` flag with
the build command:

```console
$ docker buildx debug --invoke /bin/sh build --detach=true .
```

If you start a debugging session using the `--invoke` flag with a detached
build, you can attach to it with the `buildx debug` command and immediately
enter monitor mode:

```console
$ docker buildx debug
[+] Building 0.0s (1/1) FINISHED
 => [internal] connecting to remote controller
(buildx) list
ID                         CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t  false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID                        CURRENT_SESSION  COMMAND
3ug8iqaufiwwnukimhqqt06jz  false            [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
/ #
```

```diff
@@ -3,7 +3,6 @@ package main
 import (
 	"log"
 	"os"
-	"strings"
 
 	"github.com/docker/buildx/commands"
 	clidocstool "github.com/docker/cli-docs-tool"
@@ -27,28 +26,6 @@ type options struct {
 	formats []string
 }
 
-// fixUpExperimentalCLI trims the " (EXPERIMENTAL)" suffix from the CLI output,
-// as docs.docker.com already displays "experimental (CLI)",
-//
-// https://github.com/docker/buildx/pull/2188#issuecomment-1889487022
-func fixUpExperimentalCLI(cmd *cobra.Command) {
-	const (
-		annotationExperimentalCLI = "experimentalCLI"
-		suffixExperimental        = " (EXPERIMENTAL)"
-	)
-	if _, ok := cmd.Annotations[annotationExperimentalCLI]; ok {
-		cmd.Short = strings.TrimSuffix(cmd.Short, suffixExperimental)
-	}
-	cmd.Flags().VisitAll(func(f *pflag.Flag) {
-		if _, ok := f.Annotations[annotationExperimentalCLI]; ok {
-			f.Usage = strings.TrimSuffix(f.Usage, suffixExperimental)
-		}
-	})
-	for _, c := range cmd.Commands() {
-		fixUpExperimentalCLI(c)
-	}
-}
-
 func gen(opts *options) error {
 	log.SetFlags(0)
 
@@ -80,8 +57,6 @@ func gen(opts *options) error {
 		return err
 	}
 	case "yaml":
-		// fix up is needed only for yaml (used for generating docs.docker.com contents)
-		fixUpExperimentalCLI(cmd)
 		if err = c.GenYamlTree(cmd); err != nil {
 			return err
 		}
```

48	docs/guides/cicd.md	Normal file
@@ -0,0 +1,48 @@

# CI/CD

## GitHub Actions

Docker provides a [GitHub Action that will build and push your image](https://github.com/docker/build-push-action/#about)
using Buildx. Here is a simple workflow:

```yaml
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
```

In this example we are also using 3 other actions:

* The [`setup-buildx`](https://github.com/docker/setup-buildx-action) action creates and boots a builder, by default using the
  `docker-container` [builder driver](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
  This is **not required, but recommended** to be able to build multi-platform images, export cache, etc.
* The [`setup-qemu`](https://github.com/docker/setup-qemu-action) action can be useful if you want
  to add emulation support with QEMU to be able to build against more platforms.
* The [`login`](https://github.com/docker/login-action) action takes care of logging
  in against a Docker registry.
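
A common extension of this workflow is a multi-platform build. As a sketch
(the platform list here is an assumption for illustration), the final step can
pass a `platforms` input once QEMU and Buildx are set up:

```yaml
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          # Assumed platform list for illustration; non-native platforms
          # are built via the QEMU emulation set up earlier.
          platforms: linux/amd64,linux/arm64
          tags: user/app:latest
```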

23	docs/guides/cni-networking.md	Normal file
@@ -0,0 +1,23 @@

# CNI networking

It can be useful to use a bridge network for your builder if, for example, you
encounter network port contention during multiple builds. If you're using
the BuildKit image, CNI is not yet available in it, but you can create
[a custom BuildKit image with CNI support](https://github.com/moby/buildkit/blob/master/docs/cni-networking.md).

Now build this image:

```console
$ docker buildx build --tag buildkit-cni:local --load .
```

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) that
will use this image:

```console
$ docker buildx create --use \
    --name mybuilder \
    --driver docker-container \
    --driver-opt "image=buildkit-cni:local" \
    --buildkitd-flags "--oci-worker-net=cni"
```

20	docs/guides/color-output.md	Normal file
@@ -0,0 +1,20 @@

# Color output controls

Buildx has support for modifying the colors that are used to output information
to the terminal. You can set the environment variable `BUILDKIT_COLORS` to
something like `run=123,20,245:error=yellow:cancel=blue:warning=white` to set
the colors that you would like to use:

![Progress output custom colors]()
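
The value is a colon-separated list of `key=value` entries, where each value is
a named color or an `R,G,B` triple. Purely as an illustration of the format
(this is not buildx code), one entry can be pulled out like so:

```shell
# Illustrative parsing of a BUILDKIT_COLORS-style value (not buildx code).
BUILDKIT_COLORS='run=123,20,245:error=yellow:cancel=blue:warning=white'
# Split entries on ':' and print the value assigned to "error".
error_color=$(printf '%s' "$BUILDKIT_COLORS" | tr ':' '\n' | sed -n 's/^error=//p')
echo "$error_color"   # prints "yellow"
```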

Setting `NO_COLOR` to anything will disable any colorized output as recommended
by [no-color.org](https://no-color.org/):

![Progress output no color]()

> **Note**
>
> Parsing errors will be reported but ignored. This will result in default
> color values being used where needed.

See also [the list of pre-defined colors](https://github.com/moby/buildkit/blob/master/util/progress/progressui/colors.go).

Some files were not shown because too many files have changed in this diff.