Compare commits

587 Commits

Author SHA1 Message Date
Tõnis Tiigi
34c195271a Merge pull request #2604 from crazy-max/v0.16.1_cherry-picks
[v0.16] cherry-picks for v0.16.1
2024-07-18 09:29:55 -07:00
Talon Bowler
dd3bb69a1e clarify the appropriate place to use the debug flag when viewing warnings
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
(cherry picked from commit cedbc5d68d)
2024-07-18 18:18:03 +02:00
CrazyMax
9d8cf0bed3 test: bake print
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit bd0b425734)
2024-07-18 17:47:17 +02:00
CrazyMax
fb773fa805 bake: check printer before printing warnings
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 7823a2dc01)
2024-07-18 17:47:17 +02:00
Tõnis Tiigi
10c9ff901c Merge pull request #2590 from tonistiigi/vendor-buildkit-v0.15.0
[v0.16] vendor: update buildkit to v0.15.0
2024-07-11 11:35:48 -07:00
Tonis Tiigi
470e45e599 vendor: update buildkit to v0.15.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-11 11:23:39 -07:00
Tõnis Tiigi
2a2648b1db Merge pull request #2588 from tonistiigi/vendor-buildkit-v0.15.0-rc2
vendor: update buildkit to v0.15.0-rc2
2024-07-10 15:35:25 -07:00
Tonis Tiigi
ac930bda69 vendor: update buildkit to v0.15.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-10 15:03:14 -07:00
Tõnis Tiigi
6791ecb628 Merge pull request #2587 from tonistiigi/update-moby-27.0.3
Dockerfile: update moby for testing to v27.0.3
2024-07-10 13:06:16 -07:00
Tonis Tiigi
d717237e4f Dockerfile: update moby for testing to v27.0.3
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-10 11:16:38 -07:00
Tõnis Tiigi
ee642ecc4c Merge pull request #2578 from crazy-max/driver-client-opt
driver: allow arbitrary client opts
2024-07-10 09:55:15 -07:00
Tõnis Tiigi
06d96d665e Merge pull request #2584 from tonistiigi/bake-test-fix
bake: fix testing json formatted output
2024-07-09 12:40:51 -07:00
Tõnis Tiigi
dc83501a5b Merge pull request #2583 from tonistiigi/bake-implicit-cacheonly
bake: use cacheonly exporter for implicit targets
2024-07-09 12:40:21 -07:00
Tonis Tiigi
0f74f9a794 bake: fix testing json formatted output
Because the test checked the combined output, it
could pick up internal warning messages written to stderr.
The JSON output is only guaranteed on stdout.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-08 18:32:37 -07:00
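
For illustration, a minimal Go sketch of the stdout-versus-combined-output distinction described above (the command and parsing here are assumptions, not the actual test code):

    package example

    import (
        "encoding/json"
        "os/exec"
    )

    // bakePrint parses the JSON definition emitted by `docker buildx bake --print`.
    func bakePrint(target string) (map[string]any, error) {
        cmd := exec.Command("docker", "buildx", "bake", "--print", target)
        // cmd.CombinedOutput() would also capture warnings written to stderr and
        // break json.Unmarshal; Output() returns stdout only, where the JSON lives.
        out, err := cmd.Output()
        if err != nil {
            return nil, err
        }
        var def map[string]any
        if err := json.Unmarshal(out, &def); err != nil {
            return nil, err
        }
        return def, nil
    }
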
Tonis Tiigi
6d6adc11a1 bake: use cacheonly exporter for implicit targets
Clearing the exporter may result in default export
behavior from the driver.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-08 17:53:52 -07:00
Tõnis Tiigi
68076909b9 Merge pull request #2579 from crazy-max/fix-compose-project-name
bake: use compose project name from env if set
2024-07-08 11:20:42 -07:00
CrazyMax
7957b73a30 bake: use compose project name from env if set
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-04 16:37:07 +02:00
CrazyMax
1dceb49a27 driver: allow arbitrary client opts
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-04 16:36:55 +02:00
Tõnis Tiigi
b96ad59f64 Merge pull request #2577 from tonistiigi/vendor-buildkit-v0.15.0-rc1
vendor: update buildkit to v0.15.0-rc1
2024-07-03 12:51:46 -07:00
Tonis Tiigi
50aa895477 vendor: update buildkit to v0.15.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 12:43:04 -07:00
Tõnis Tiigi
74374ea418 Merge pull request #2576 from crazy-max/bake-call-check-docs
docs: link to build ref page for --call and --check ref with bake
2024-07-03 11:35:52 -07:00
CrazyMax
6bbe59697a docs: link to build ref page for --call and --check ref with bake
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 20:19:56 +02:00
Tõnis Tiigi
c51004e2e4 Merge pull request #2556 from tonistiigi/bake-call
bake: add call methods support and printing
2024-07-03 10:40:32 -07:00
Tõnis Tiigi
8535c6b455 Merge pull request #2562 from dvdksn/docs-buildx-b-call
docs: reference description for --call and --check
2024-07-03 10:17:33 -07:00
CrazyMax
153e5ed274 mark list-targets and list-variables as hidden and experimental
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 09:54:09 -07:00
Tonis Tiigi
cc097db675 bake: fix printer reset before metadata written
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:09 -07:00
Tonis Tiigi
35313e865f bake: add tests for call and list
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi
233b869c63 bake: add list-variables option
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi
7460f049f2 bake: add list-targets options to list available targets/groups
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi
8f4c8b094a bake: allow text descriptions for targets
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi
8da28574b0 bake: add call methods support and printing
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:02 -07:00
Tõnis Tiigi
7e49141c4e Merge pull request #2574 from crazy-max/update-mod-outdated
dockerfile: update go-mod-outdated to v0.9.0
2024-07-03 09:42:04 -07:00
David Karlsson
5ec703ba10 docs: reference description for --call and --check
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 18:34:51 +02:00
Tõnis Tiigi
1ffc6f1d58 Merge pull request #2572 from crazy-max/build-ref-multi-nodes
build: set same ref when building on multiple nodes
2024-07-03 09:07:46 -07:00
CrazyMax
f65631546d dockerfile: update go-mod-outdated to v0.9.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 15:37:28 +02:00
CrazyMax
6fc19c4024 build: set same ref when building on multiple nodes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 15:06:53 +02:00
CrazyMax
5656c98133 Merge pull request #2565 from dvdksn/cli-docs-tool-v0.8.0
cli docs tool v0.8.0
2024-07-03 11:55:18 +02:00
David Karlsson
263a9ddaee chore: regenerate docs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 11:48:11 +02:00
David Karlsson
1774aa0cf0 vendor: github.com/docker/cli-docs-tool v0.8.0
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 11:47:04 +02:00
CrazyMax
7b80ad7069 Merge pull request #2569 from dvdksn/fix-alias
fix: buildx b alias
2024-07-03 10:14:45 +02:00
CrazyMax
c0c4d7172b Merge pull request #2567 from tonistiigi/update-buildkit-0702
vendor: update buildkit to f7bda278b7e2
2024-07-03 10:09:10 +02:00
CrazyMax
e498ba9c27 Merge pull request #2568 from tonistiigi/go-1.22
update Go to 1.22
2024-07-03 10:08:25 +02:00
David Karlsson
2e7e7abe42 test: add test for building with alias "buildx b"
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 10:04:09 +02:00
David Karlsson
048ef1fbf8 fix: buildx b alias
the shorthand "b" alias was accidentally removed in 19d838a

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 10:03:01 +02:00
Tonis Tiigi
cbe7901667 update Go to 1.22
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-02 22:27:43 -07:00
Tonis Tiigi
f374f64d2f vendor: update buildkit to f7bda278b7e2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-02 22:24:55 -07:00
Tõnis Tiigi
4be2259719 Merge pull request #2501 from tonistiigi/remote-client-cache
remote: ensure that client connection is not established twice
2024-07-02 09:30:32 -07:00
Tõnis Tiigi
6627f315cb Merge pull request #2397 from dvdksn/buildx_build_canonical
docs: make buildx build the canonical doc
2024-07-02 09:17:01 -07:00
David Karlsson
19d838a3f4 docs: make buildx build the canonical doc
Move descriptions of flags common with the legacy build client to the buildx
build reference doc.

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-02 15:54:10 +02:00
CrazyMax
17878d641e Merge pull request #2534 from tonistiigi/bake-warnings
bake: print warnings on progress
2024-07-01 18:16:52 +02:00
Tõnis Tiigi
63eb73d9cf Merge pull request #2560 from crazy-max/fix-localstate-remote
build: fix localstate for remote context and stdin
2024-06-28 16:56:53 -07:00
Tõnis Tiigi
59a0ffcf83 Merge pull request #2546 from treuherz/multinode-annotations
Pass in index annotations from builds on multiple nodes
2024-06-28 16:46:20 -07:00
Tõnis Tiigi
2b17f277a1 Merge pull request #2549 from daghack/warning-free-msg
Add message when --check does not produce warnings or errors
2024-06-28 16:45:57 -07:00
Tõnis Tiigi
ea7c8e83d2 Merge pull request #2559 from tonistiigi/update-buildkit-0627
use csvvalue package for parsing csv inputs
2024-06-28 16:42:10 -07:00
Tõnis Tiigi
9358c45b46 Merge pull request #2558 from tonistiigi/fix-sharedkey-for-context
build: fix sharedkey computation for local context
2024-06-28 16:41:30 -07:00
CrazyMax
cfb7fc4fb5 build: fix localstate for remote context and stdin
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-28 14:56:45 +02:00
CrazyMax
d4b112ab05 test: build remote
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-28 11:06:09 +02:00
Tonis Tiigi
f7a32361ea use csvvalue package for parsing csv inputs
This package is better suited for parsing single-line
CSV strings.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 21:31:11 -07:00
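
For illustration, this is the kind of single-line CSV value such flags accept, parsed here with the standard library; per the commit message, the csvvalue package handles this single-record case more efficiently:

    package main

    import (
        "encoding/csv"
        "fmt"
        "strings"
    )

    func main() {
        // A typical single-line CSV flag value, e.g. for --output.
        fields, err := csv.NewReader(strings.NewReader("type=image,name=example,push=true")).Read()
        if err != nil {
            panic(err)
        }
        fmt.Println(fields) // [type=image name=example push=true]
    }
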
Tonis Tiigi
af902caeaa vendor: update buildkit to 8397d0b9
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 20:44:07 -07:00
Tõnis Tiigi
04000db8da Merge pull request #2499 from thaJeztah/bump_buildkit
vendor: buildkit, docker/docker and docker/cli v27.0.1
2024-06-27 20:42:35 -07:00
Tonis Tiigi
b8da14166c build: fix sharedkey computation for local context
When LocalDirs were changed to LocalMounts, this broke the
sharedKey computation that was based on the context directory
path. SharedKey defines whether a directory is a valid candidate for
incremental context transfer; if it is not set properly, different
directories do metadata-based transfers to the same destination.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 17:53:22 -07:00
Tonis Tiigi
c1f680df14 bake: print warnings on progress
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 17:31:15 -07:00
Talon Bowler
b6482ab6bb Add message when --check does not produce warnings or errors
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-27 08:24:00 -07:00
Eli Treuherz
6f45b0ea06 Get annotations from exports
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-27 13:26:07 +01:00
Eli Treuherz
3971361ed2 Pass in index annotations from builds on multiple nodes
Fixes #2540

Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-27 13:26:07 +01:00
Tõnis Tiigi
818045482e Merge pull request #2522 from treuherz/annotation-per-type
Make multi-type annotation settings match docs
2024-06-26 10:07:27 -07:00
CrazyMax
f8e1746d0d Merge pull request #2557 from thaJeztah/test_bump-buildkit
dockerfile, gha: update buildkit to 0.13.2, 0.14.1
2024-06-26 16:54:44 +02:00
Sebastiaan van Stijn
92a6799514 dockerfile, gha: update buildkit to 0.13.2, 0.14.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:52:28 +02:00
Sebastiaan van Stijn
9358f84668 vendor: buildkit, docker/docker and docker/cli v27.0.1
diffs:

- https://github.com/docker/cli/compare/v26.1.4..v27.0.1
- https://github.com/docker/docker/compare/v26.1.4..v27.0.1
- https://github.com/moby/buildkit/compare/v0.14.1...aaaf86e5470bffbb395f5c15ad4a1c152642ea30

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:31:47 +02:00
Sebastiaan van Stijn
dbdd3601eb vendor: github.com/containerd/ttrpc v1.2.5
full diff: https://github.com/containerd/ttrpc/compare/v1.2.4...v1.2.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:40 +02:00
Sebastiaan van Stijn
a3c8a72b54 vendor: github.com/klauspost/compress v1.17.9
full diff: https://github.com/klauspost/compress/compare/v1.17.4...v1.17.9

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:40 +02:00
Sebastiaan van Stijn
4c3af9becf vendor: golang.org/x/sys v0.20.0
full diff: https://github.com/golang/sys/compare/v0.18.0...v0.20.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:37 +02:00
CrazyMax
d8c9ebde1f Merge pull request #2551 from crazy-max/metadata-warnings-2
build: opt to set progress warnings in response
2024-06-26 08:16:29 +02:00
CrazyMax
01a50aac42 printer: dedup warnings
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-26 06:53:35 +02:00
CrazyMax
f7bcafed21 build: opt to set progress warnings in response
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-26 06:53:35 +02:00
Tõnis Tiigi
e5ded4b2de Merge pull request #2521 from crazy-max/fix-buildinfo
fix assignment of buildinfo-attrs for exporter
2024-06-25 11:36:14 -07:00
Tõnis Tiigi
6ef443de41 Merge pull request #2550 from crazy-max/provenance-mode-commands
build: read provenance response mode in commands pkg
2024-06-25 11:34:23 -07:00
Tõnis Tiigi
076e19d0ce Merge pull request #2555 from crazy-max/compose-test-cgroup
test: compose cgroup property
2024-06-25 11:25:59 -07:00
CrazyMax
5599699d29 test: compose cgroup property
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-25 18:22:50 +02:00
CrazyMax
d155747029 build: read provenance response mode in commands pkg
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-24 14:32:26 +02:00
CrazyMax
9cebd0c80f Merge pull request #2545 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.6
build(deps): bump softprops/action-gh-release from 2.0.5 to 2.0.6
2024-06-24 14:30:17 +02:00
CrazyMax
7b1ec7211d Merge pull request #2547 from glours/bump-compose-go-v2.1.3
bump compose-go to version v2.1.3
2024-06-21 15:20:18 +02:00
Guillaume Lours
689fd74104 bump compose-go to version v2.1.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-06-21 15:04:10 +02:00
dependabot[bot]
0dfd315daa build(deps): bump softprops/action-gh-release from 2.0.5 to 2.0.6
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.5 to 2.0.6.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](69320dbe05...a74c6b72af)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-20 18:46:57 +00:00
CrazyMax
9b100c2552 Merge pull request #2541 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.1.0
build(deps): bump peter-evans/create-pull-request from 6.0.5 to 6.1.0
2024-06-20 19:28:48 +02:00
Tõnis Tiigi
92aaaa8f67 Merge pull request #2524 from thaJeztah/test_engine_27.0
Dockerfile: update docker engine to 27.0.0-rc.2
2024-06-20 10:28:01 -07:00
Tõnis Tiigi
6111d9a00d Merge pull request #2531 from dvdksn/docs-xref-exportermanuals
docs: link to exporter descriptions from reference docs
2024-06-20 10:26:22 -07:00
Tõnis Tiigi
310aaf1891 Merge pull request #2543 from thaJeztah/bump_testify
vendor: github.com/stretchr/testify v1.9.0
2024-06-20 10:25:37 -07:00
Tõnis Tiigi
6c7e65c789 Merge pull request #2544 from thaJeztah/bump_cobra
vendor: github.com/spf13/cobra v1.8.1
2024-06-20 10:25:16 -07:00
CrazyMax
66b0abf078 Merge pull request #2536 from thompson-shaun/pr-labeler
ci: add pr-labeler
2024-06-20 15:26:28 +02:00
Sebastiaan van Stijn
6efa26c2de vendor: github.com/spf13/cobra v1.8.1
full diff: https://github.com/spf13/cobra/compare/v1.8.0...v1.8.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-20 15:07:27 +02:00
Sebastiaan van Stijn
5b726afa5e vendor: github.com/stretchr/testify v1.9.0
full diff: https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-20 15:05:45 +02:00
dependabot[bot]
009f318bbd build(deps): bump peter-evans/create-pull-request from 6.0.5 to 6.1.0
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.5 to 6.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](6d6857d369...c5a7806660)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 19:03:17 +00:00
Tõnis Tiigi
9f7c8ea3fb Merge pull request #2538 from tonistiigi/lint-fallback-v1.8.1
build: update lint fallback image to dockerfile 1.8.1
2024-06-18 09:51:41 -07:00
Tonis Tiigi
be12199eb9 build: update lint fallback image to dockerfile 1.8.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-18 09:33:55 -07:00
Tõnis Tiigi
94355517c4 Merge pull request #2537 from tonistiigi/update-buildkit-v0.14.1
vendor: update buildkit v0.14.1
2024-06-18 09:20:28 -07:00
Tonis Tiigi
cb1be7214a vendor: update buildkit v0.14.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-18 09:07:21 -07:00
CrazyMax
f42a4a1e94 Merge pull request #2533 from docker/dependabot/github_actions/docker/bake-action-5
build(deps): bump docker/bake-action from 4 to 5
2024-06-18 17:58:21 +02:00
Shaun Thompson
4d7365018c ci: add pr-labeler
Signed-off-by: Shaun Thompson <shaun.thompson@docker.com>
2024-06-18 09:10:01 -04:00
Eli Treuherz
3d0951b800 Reduce regex usage in annotation parser
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
Eli Treuherz
bcd04d5a64 Style fixes to test
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
Eli Treuherz
b00001d8ac Make multi-type annotation settings match docs
The Docker docs in multiple places describe passing an annotation at the
command line like "index,manifest:com.example.name=my-cool-image", and
say that this will result in the annotation being applied to both the
index and the manifest. It doesn't seem like this was actually
implemented, and instead it just results in an annotation key with
"index,manifest:" at the beginning being applied to the manifest.

This change splits the part of the key before the colon by comma, and
creates an annotation for each type/platform given, so the
implementation should now match the docs.

Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
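
A simplified Go sketch of the splitting behavior described above (function name and edge-case handling are illustrative only):

    package example

    import "strings"

    // expandAnnotationTypes turns "index,manifest:com.example.name=my-cool-image"
    // into one annotation spec per type, instead of a single literal key starting
    // with "index,manifest:". Values containing ":" need extra care in practice.
    func expandAnnotationTypes(arg string) []string {
        prefix, rest, ok := strings.Cut(arg, ":")
        if !ok {
            return []string{arg} // no type/platform prefix
        }
        var out []string
        for _, typ := range strings.Split(prefix, ",") {
            out = append(out, typ+":"+rest)
        }
        return out
    }
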
Tõnis Tiigi
31187735de Merge pull request #2535 from thaJeztah/bump_credshelpers
vendor: github.com/docker/docker-credential-helpers v0.8.2
2024-06-17 16:39:23 -07:00
Sebastiaan van Stijn
3373a27f1f vendor: github.com/docker/docker-credential-helpers v0.8.2
full diff: https://github.com/docker/docker-credential-helpers/compare/v0.8.0...v0.8.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-17 23:58:24 +02:00
Sebastiaan van Stijn
56698805a9 Dockerfile: update docker engine to 27.0.0-rc.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-17 23:55:05 +02:00
dependabot[bot]
4c2e0c4307 build(deps): bump docker/bake-action from 4 to 5
Bumps [docker/bake-action](https://github.com/docker/bake-action) from 4 to 5.
- [Release notes](https://github.com/docker/bake-action/releases)
- [Commits](https://github.com/docker/bake-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/bake-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-17 18:12:10 +00:00
David Karlsson
fb6a3178c9 docs: link to exporter descriptions from reference docs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-06-17 17:51:10 +02:00
Tõnis Tiigi
8ca18dee2d Merge pull request #2518 from daghack/handle-build-err-during-lint-request
update the lint subrequest call to error
2024-06-14 19:44:13 -07:00
Tõnis Tiigi
917d2f4a0a Merge pull request #2523 from thaJeztah/test_engine_26.1
Dockerfile: update docker engine to 26.1.4
2024-06-14 19:32:47 -07:00
Talon Bowler
366328ba6a Add comment to document the purpose behind the non-standard handling of the error
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-13 16:11:35 -07:00
Sebastiaan van Stijn
5f822b36d3 Dockerfile: update docker engine to 26.1.4
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-13 19:22:19 +02:00
Tõnis Tiigi
e423d096a6 Merge pull request #2508 from crazy-max/integration-tests-coverage
test: setup integration tests coverage
2024-06-13 10:10:32 -07:00
Talon Bowler
927fb6731c update the lint subrequest call to error when a build error was encountered during linting
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-13 09:47:05 -07:00
CrazyMax
314ca32446 fix assignment of buildinfo-attrs for exporter
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-13 10:56:32 +02:00
CrazyMax
3b25e3fa5c Merge pull request #2516 from thaJeztah/remove_c8d_errdefs
remove use of deprecated containerd/containerd/errdefs
2024-06-12 09:35:17 +02:00
CrazyMax
41d369120b ci: enable disable_file_fixes in codecov action
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:47:48 +02:00
CrazyMax
56ffe55f81 codecov: exclude generated files
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:47:47 +02:00
CrazyMax
6d5823beb1 test: setup integration tests coverage
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:46:49 +02:00
Sebastiaan van Stijn
c116af7b82 remove use of deprecated containerd/containerd/errdefs
This package has moved to a separate module. Also added linting
rules to prevent accidental reintroduction.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-12 01:12:59 +02:00
CrazyMax
fb130243f8 Merge pull request #2515 from crazy-max/bump-buildkit
testing: update buildkit to 0.14.0
2024-06-11 23:48:33 +02:00
Tõnis Tiigi
29c8107b85 Merge pull request #2514 from crazy-max/align-build-checks-tests
test: align build call tests
2024-06-11 13:51:46 -07:00
CrazyMax
ee3baa54f7 dockerfile: update buildkit to 0.14.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 20:38:39 +02:00
CrazyMax
9de95d81eb test: align build call tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 20:07:23 +02:00
Tõnis Tiigi
d3a53189f7 Merge pull request #2513 from tonistiigi/lint-fallback-1.8.0
build: update lint fallback image to dockerfile 1.8.0
2024-06-11 10:26:20 -07:00
Tonis Tiigi
0496dae9d5 build: update lint fallback image to dockerfile 1.8.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-11 10:12:52 -07:00
Tõnis Tiigi
40fcf992b1 Merge pull request #2512 from tonistiigi/0611-update-buildkit
vendor: update buildkit to v0.14.0
2024-06-11 10:11:34 -07:00
Tonis Tiigi
85c25f719c vendor: update buildkit to v0.14.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-11 09:56:59 -07:00
CrazyMax
875e4cd52e Merge pull request #2510 from crazy-max/ci-ubuntu24.04
ci: switch to ubuntu-24.04 runner
2024-06-11 15:36:04 +02:00
CrazyMax
24cedc6c0f ci: switch to ubuntu-24.04 runner
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 14:32:54 +02:00
Tõnis Tiigi
59f52c9505 Merge pull request #2507 from daghack/update-lint-metric-regex
Update the lint metrics to match against the rule URL
2024-06-10 13:33:25 -07:00
Talon Bowler
1e916ae6c6 add length check for lint message regex result
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:26:32 -07:00
Talon Bowler
d342cb9d03 vendor golang.org/x/text dependency
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:17:48 -07:00
Talon Bowler
9fdc99dc76 Update the lint metrics to match against the rule URL rather than a prefix on the lint rule
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:11:50 -07:00
Akihiro Suda
ab835fd904 Merge pull request #2504 from thaJeztah/bump_pty
vendor: github.com/creack/pty v1.1.21
2024-06-10 06:43:36 +09:00
Sebastiaan van Stijn
87efbd43b5 vendor: github.com/creack/pty v1.1.21
full diff: https://github.com/creack/pty/compare/v1.1.18...v1.1.21

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-08 17:47:33 +02:00
Tõnis Tiigi
39db6159f9 Merge pull request #2503 from tonistiigi/20240606-update-buildkit
update buildkit to v0.14.0-rc2
2024-06-06 16:30:31 -07:00
Tonis Tiigi
922328cbaf build: update lint fallback image to v1.8.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-06 16:18:43 -07:00
Tonis Tiigi
aa0f90fdd6 vendor: update buildkit to v0.14.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-06 16:17:17 -07:00
CrazyMax
82b6826cd7 Merge pull request #2500 from dvdksn/doc-rawjson
docs: mention rawjson progress output mode
2024-06-06 17:26:14 +02:00
David Karlsson
1e3aec1ae2 docs: mention rawjson progress output mode
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-06-06 16:51:42 +02:00
CrazyMax
cfef22ddf0 Merge pull request #2502 from thaJeztah/bump_compose_go
vendor: github.com/compose-spec/compose-go/v2 v2.1.2
2024-06-06 15:31:51 +02:00
Sebastiaan van Stijn
9e5ba66553 vendor: github.com/compose-spec/compose-go/v2 v2.1.2
Replaces uses of the github.com/mitchellh/mapstructure module, which
was deprecated by the owner and moved to new maintainership at
github.com/go-viper/mapstructure.

The old module is still referenced as an indirect dependency (through
docker/cli and theupdateframework/notary), but not used in code, and
should eventually go away.

full diff: https://github.com/compose-spec/compose-go/compare/v2.1.1...v2.1.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-06 12:44:51 +02:00
Tonis Tiigi
9ceda78057 remote: ensure that client connection is not established twice
Because the remote driver implements Info() by calling
Client() internally, two instances of Client were created,
backed by separate TCP connections. This hack avoids that
and improves performance.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-05 22:04:59 -07:00
CrazyMax
747b75a217 Merge pull request #2497 from crazy-max/fix-k8s-kubeconfig
k8s: fix concurrent kubeconfig access when loading nodes
2024-06-04 12:10:44 +02:00
Tõnis Tiigi
d8de5bb345 Merge pull request #2498 from tonistiigi/0603-lint-fallback-update
build: update lint fallback image
2024-06-03 13:28:21 -07:00
Tonis Tiigi
eff1850d53 build: update lint fallback image
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 13:17:52 -07:00
Tõnis Tiigi
a24043e9f1 Merge pull request #2487 from daghack/call-lint
Rename --print to --call and make previous name hidden
2024-06-03 12:39:43 -07:00
Tonis Tiigi
0902294e1a ensure call aliases also work with formatting parameters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 12:09:47 -07:00
Jonathan A. Sternberg
ef4a165e48 commands: add an alias for --check to be the same as --call=check
This adds an alias for `--check` that causes it to behave the same as
`--call=check`. This is done using `BoolFunc` to call a function when
the option is seen and to set it to the correct value. This should allow
command line flags like `--check --call=targets` to work correctly (even
though they conflict) by making it so the first invocation sets the
print function to `check` and the second overwrites the first. This is
the expected behavior for these types of boolean flags.

`BoolFunc` itself is part of the standard library flag package, but
never seems to have made it into pflag, possibly because it was added in
Go 1.21.

https://pkg.go.dev/flag#FlagSet.BoolFunc

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-06-03 13:25:21 -05:00
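
A standalone sketch of the mechanism using the standard library's flag.BoolFunc (buildx builds on pflag, so the names and wiring here are illustrative, not the actual implementation):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        callFunc := "build" // default call method

        flag.StringVar(&callFunc, "call", callFunc, "method for evaluating build (check, outline, targets)")
        // --check behaves like --call=check; when both are given, the last flag parsed wins.
        flag.BoolFunc("check", "shorthand for --call=check", func(string) error {
            callFunc = "check"
            return nil
        })

        flag.Parse()
        fmt.Println("call:", callFunc)
    }
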
Tonis Tiigi
89810dc998 build: set default call method name to build
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 10:42:35 -07:00
Talon Bowler
250cd44d70 Adds a --call flag as an alias for the --print flag and hides the latter.
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-03 10:30:30 -07:00
Tõnis Tiigi
5afb210d43 Merge pull request #2491 from jsternberg/update-buildkit
vendor: update buildkit to v0.14.0-rc1
2024-06-03 10:23:26 -07:00
Tõnis Tiigi
03f84d2e83 Merge pull request #2496 from crazy-max/dial-cmd-flags
dial-stdio: remove extra cmd.flags()
2024-06-03 09:08:32 -07:00
CrazyMax
945e774a02 k8s: fix concurrent kubeconfig access when loading nodes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 16:16:24 +02:00
CrazyMax
947d6023e4 Merge pull request #2492 from crazy-max/k8s-timeout
k8s: opt to customize timeout during deployment
2024-06-03 16:10:05 +02:00
CrazyMax
c58599ca50 dial-stdio: remove extra cmd.flags()
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 14:14:55 +02:00
CrazyMax
f30e143428 k8s: rename timeout opt and move it out of deployment manifest
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 10:30:06 +02:00
Arnold Sobanski
53b7cbc5cb Add parameter provisioningTimeout to Kubernetes driver options.
Signed-off-by: Arnold Sobanski <arnold@l4g.dev>
2024-06-03 10:08:03 +02:00
Tonis Tiigi
9a30215886 tests: avoid early shutdown of sandbox
Because the sandbox is closed down when the main test
that created it returns, it can't have subtests
that mark themselves as parallel, as they would continue
to run in a different lifecycle.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 17:38:32 -07:00
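
A self-contained sketch of the lifecycle issue being avoided (the sandbox type is a stand-in, not the real test helper):

    package example

    import "testing"

    type sandbox struct{ closed bool }

    func (s *sandbox) Close() { s.closed = true }

    func TestWithSandbox(t *testing.T) {
        sb := &sandbox{}
        defer sb.Close() // runs as soon as TestWithSandbox returns...

        t.Run("subtest", func(t *testing.T) {
            t.Parallel()   // ...which happens before this parallel body resumes,
            if sb.closed { // so the subtest would observe a closed sandbox
                t.Skip("sandbox already shut down")
            }
        })
    }
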
Jonathan A. Sternberg
b1cb658a31 vendor: update buildkit to v0.14.0-rc1
Update buildkit dependency to v0.14.0-rc1. Update the tracing
infrastructure to use the new detect API which updates how the delegated
exporter is configured.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-31 16:23:41 -05:00
Tõnis Tiigi
bc83ecb538 Merge pull request #2490 from tonistiigi/lint-fallback-update
build: update --print fallback image to 1.8.0-rc1
2024-05-31 14:08:08 -07:00
Tonis Tiigi
ceaa4534f9 build: update --print fallback image to 1.8.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 13:57:56 -07:00
Tõnis Tiigi
9b6c4103af Merge pull request #2488 from tonistiigi/add-jonathan
add Jonathan to buildx maintainers
2024-05-31 10:07:21 -07:00
Tõnis Tiigi
4549283f44 Merge pull request #2482 from rvoh-tismith/fix/single_source_create
Add `--prefer-index` flag for `imagetools create` on a single source
2024-05-31 09:44:43 -07:00
Tonis Tiigi
b2e907d5c2 add Jonathan to buildx maintainers
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 08:36:15 -07:00
Tõnis Tiigi
7427adb9b0 Merge pull request #2484 from jsternberg/lint-metrics
metrics: record the number of times lint rules are triggered during a build
2024-05-30 15:15:03 -07:00
Jonathan A. Sternberg
1a93bbd3a5 metrics: record the number of times lint rules are triggered during a build
This metric records the number of times it sees a lint warning in the
progress stream and categorizes the number of times each rule has been
triggered. This will only record whether a lint warning was triggered
and not whether the linter was even used or which rules were present.
That information isn't presently part of the stream.

With this change, we might be reaching some of the limitations that
spying on the progress stream gives us for metrics and may want to
consider another way for the build to communicate metrics back to the
client.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-30 15:05:00 -05:00
thompson-shaun
1f28985d20 Merge pull request #2425 from glours/bump-compose-go-2.1.0
bump compose-go to v2.1.1
2024-05-30 12:16:33 -04:00
CrazyMax
33a5528003 bump compose-go to v2.1.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-05-30 15:37:13 +02:00
Tim Smith
7bfae2b809 Updated the docs.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 22:54:09 -04:00
Tim Smith
117c9016e1 Updated tests further to make sure the new flag doesn't affect copying an index, regardless of what value you specify.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 21:56:22 -04:00
Tim Smith
388af3576a Updated tests to test new --prefer-index flag
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 21:39:14 -04:00
Tim Smith
2061550bc1 Slightly refactored the mediaType check on a single source so that we now return the original bytes without filtering on mediaType, based on the preferIndex preference.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 14:20:53 -04:00
Tim Smith
abf6c77d91 Add a --prefer-index flag that allows you to specify the preferred behavior when deciding how to create an image/manifest from a single source.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 14:07:28 -04:00
Justin Chadwell
9ad116aa8e Merge pull request #2478 from thaJeztah/extract_resolve_digest
build: loadInputs: extract resolving digest to a separate function
2024-05-29 11:00:54 +01:00
Tõnis Tiigi
e3d5e64ec9 Merge pull request #2475 from thaJeztah/remove_urlutil
remove uses of github.com/docker/docker/builder/remotecontext package
2024-05-28 22:51:36 -07:00
Tim Smith
0808747add Added the application/vnd.docker.distribution.manifest.v2+json mediatype to the list of mediatypes we return the original bytes for when calling *Resolver.Combine, rather than adding it to a newly created manifest list.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-28 23:01:14 -04:00
Tõnis Tiigi
2e7da01560 Merge pull request #2473 from tonistiigi/prune-negative-filter
prune: allow negative and prefix filters
2024-05-28 13:53:06 -07:00
Sebastiaan van Stijn
38d7d36f0a build: loadInputs: extract resolving digest to a separate function
This makes the code slightly more idiomatic, but the errors produced will
change slightly to avoid having to pass NamedContext as an argument.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-27 19:31:32 +02:00
CrazyMax
55c86543ca Merge pull request #2477 from thaJeztah/remove_redundant_checks
build: loadInputs: remove redundant checks for hasTag, hasDigest
2024-05-27 16:04:07 +02:00
CrazyMax
f98ef00ec7 Merge pull request #2454 from kariya-mitsuru/fix-k8s-driver
Fix k8s driver with certs being unable to boot
2024-05-27 12:32:38 +02:00
Sebastiaan van Stijn
b948b07e2d remove uses of github.com/docker/docker/builder/remotecontext package
This package is part of the classic builder, and was only used
for the IsURL utility, which is a very rudimentary check for a string having
an "https://" or "http://" scheme.

This patch copies the code as non-exported functions where it's used, to
remove the dependency.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-26 11:06:02 +02:00
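
A sketch of the rudimentary scheme check being copied (the helper name is an assumption):

    package build

    import "strings"

    // isURL reports whether the build context string uses an http(s) scheme.
    func isURL(s string) bool {
        return strings.HasPrefix(s, "https://") || strings.HasPrefix(s, "http://")
    }
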
Sebastiaan van Stijn
17c0a3794b build: loadInputs: remove redundant check for hasDigest
hasDigest would always be true when reaching this code, because the function
would return with an error when failing to resolve the digest;

    if !hasDigest {
        return nil, errors.Errorf("oci-layout reference %q could not be resolved", v.Path)
    }

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-25 16:36:52 +02:00
Sebastiaan van Stijn
c0a986b43b build: loadInputs: remove redundant check for hasTag
hasTag was always true as it was set to "true" when missing, in which case
the default (`:latest`) tag was applied;

    localPath, tag, hasTag := strings.Cut(localPath, ":")
    if !hasTag {
        tag = "latest"
        hasTag = true
    }

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-25 16:32:37 +02:00
Tonis Tiigi
781dcbd196 prune: allow negative and prefix filters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-24 16:57:25 -07:00
Tõnis Tiigi
37c4ff0944 Merge pull request #2467 from tonistiigi/fix-resolvednode-cache-panic
build: fix resolvedNode cache and panic protection
2024-05-22 07:07:36 -07:00
thompson-shaun
6211f56b8d Merge pull request #2461 from jsternberg/v0.14.1-picks
[v0.14] cherry-picks for v0.14.1
2024-05-21 14:12:17 -04:00
Tõnis Tiigi
cc9ea87142 Merge pull request #2460 from jsternberg/vendor-update
vendor: update buildx to latest docker/cli
2024-05-21 09:20:09 -07:00
Tonis Tiigi
035236a5ed driver: handle nil logger for bootstrap
resolveNode methods can call bootstrap with a nil logger. Although
the results should already be cached in the resolver,
this makes the protection more explicit.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-20 16:38:01 -07:00
Tonis Tiigi
99777eaf34 build: add cache to resolvedNode
Currently it is possible for boot() to be called
multiple times, resulting in multiple slow requests to
establish a connection (e.g. multiple container inspects
for the container driver).

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-20 16:31:42 -07:00
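
A generic sketch of the caching pattern described above; types and names are illustrative, not the actual buildx driver API:

    package example

    import (
        "context"
        "sync"
    )

    type resolvedNode struct {
        once sync.Once
        conn string // stand-in for the real driver client/connection
        err  error
    }

    // boot establishes the connection at most once, even when called repeatedly,
    // so callers don't each trigger a slow container inspect.
    func (n *resolvedNode) boot(ctx context.Context, dial func(context.Context) (string, error)) (string, error) {
        n.once.Do(func() {
            n.conn, n.err = dial(ctx)
        })
        return n.conn, n.err
    }
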
Jonathan A. Sternberg
cf68b5b878 vendor: update buildx to latest docker/cli
This version of docker/cli has changes to remove compose-cli wrapper and
move all CLI metrics to OTEL.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
(cherry picked from commit 4fc4bc07ae)
2024-05-16 12:14:08 -05:00
Tonis Tiigi
3f1aaa68d5 build: fix multiple named contexts pointing to same bake target
Contexts using the target: schema are replaced by input: pointing
to the previous build result before the build request is sent. Previously,
this replacement did not work if multiple contexts pointed to
the same target name.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit f8c6a97edc)
2024-05-16 12:14:08 -05:00
jaihwan104
f6830f3b86 build: exit 1 when manifest merge failed
Signed-off-by: jaihwan104 <42341126+jaihwan104@users.noreply.github.com>
(cherry picked from commit f2823515db)
2024-05-16 12:13:59 -05:00
Jonathan A. Sternberg
4fc4bc07ae vendor: update buildx to latest docker/cli
This version of docker/cli has changes to remove compose-cli wrapper and
move all CLI metrics to OTEL.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-16 12:07:13 -05:00
CrazyMax
f6e57cf5b5 build: don't generate metadata file when print flag is used
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit ba264138d6)
2024-05-16 11:31:58 -05:00
Tonis Tiigi
b77648d5f8 build: avoid default load with --print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit fbb0f9b424)
2024-05-16 11:29:59 -05:00
Akihiro Suda
afcb609966 Merge pull request #2456 from thaJeztah/rm_k8s_apiserver
driver/kubernetes/util: remove k8s.io/apiserver dependency
2024-05-14 21:30:50 +09:00
Sebastiaan van Stijn
946e0a5d74 driver/kubernetes/util: remove k8s.io/apiserver dependency
Use a simplified local implementation that follows the same semantics,
so that we don't need k8s.io/apiserver as a dependency.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-14 13:58:56 +02:00
CrazyMax
c4db5b252a Merge pull request #2445 from sumnerwarren/bake-compose-ssh
Bake: support compose ssh config
2024-05-14 10:15:48 +02:00
CrazyMax
8afeb56a3b Merge pull request #2455 from dvdksn/docs-bakefile-reference-frontmatter
docs: move Bake file reference title to front matter
2024-05-13 18:37:55 +02:00
David Karlsson
fd801a12c1 docs: move Bake file reference title to front matter
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-05-13 18:15:17 +02:00
Tõnis Tiigi
2f98e6f3ac Merge pull request #2444 from tonistiigi/fix-base-duplicate-target-ref
build: fix multiple named contexts pointing to same bake target
2024-05-13 08:48:55 -07:00
Sumner Warren
224c6a59bf Bake: support compose ssh config
Signed-off-by: Sumner Warren <sumner.warren@gmail.com>
2024-05-13 08:46:17 -04:00
Mitsuru Kariya
cbb75bbfd5 Fix k8s driver with certs being unable to boot
Signed-off-by: Mitsuru Kariya <mitsuru.kariya@nttdata.com>
2024-05-13 10:33:15 +09:00
CrazyMax
72085dbdf0 Merge pull request #2449 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.5
build(deps): bump softprops/action-gh-release from 2.0.4 to 2.0.5
2024-05-10 11:32:35 +02:00
dependabot[bot]
480b53f529 build(deps): bump softprops/action-gh-release from 2.0.4 to 2.0.5
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.4 to 2.0.5.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](9d7c94cfd0...69320dbe05)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-07 18:47:50 +00:00
Tonis Tiigi
f8c6a97edc build: fix multiple named contexts pointing to same bake target
Contexts using the target: schema are replaced by input: pointing
to the previous build result before the build request is sent. Previously,
this replacement did not work if multiple contexts pointed to
the same target name.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-02 18:08:45 -07:00
CrazyMax
d4f088e689 Merge pull request #2442 from crazy-max/ci-fix-validate-matrix
ci(validate): fix GOLANGCI_LINT_MULTIPLATFORM type for multiplatform lint
2024-05-02 14:58:55 +02:00
CrazyMax
db3a8ad7ca ci(validate): fix GOLANGCI_LINT_MULTIPLATFORM type for multiplatform lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-05-02 11:17:05 +02:00
Tõnis Tiigi
1d88c4b169 Merge pull request #2439 from crazy-max/ci-split-validate
ci(validate): split lint
2024-05-01 13:58:20 -07:00
CrazyMax
6d95fb586e ci(validate): split lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-30 10:19:06 +02:00
Tõnis Tiigi
1fb5d2a9ee Merge pull request #2422 from crazy-max/skip-provenance-internal
build: don't generate metadata file when print flag is used
2024-04-29 17:12:20 -07:00
CrazyMax
ba264138d6 build: don't generate metadata file when print flag is used
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-26 10:53:46 +02:00
CrazyMax
6375dc7230 Merge pull request #2432 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.5
build(deps): bump peter-evans/create-pull-request from 6.0.4 to 6.0.5
2024-04-26 09:06:56 +02:00
CrazyMax
9cc6c7df70 Merge pull request #2431 from tonistiigi/make-bake-tidy
make: tidy redirects to bake
2024-04-26 09:06:00 +02:00
Tonis Tiigi
7ea5cffb98 make: tidy redirects to bake
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-25 17:08:21 -07:00
dependabot[bot]
d2d21577fb build(deps): bump peter-evans/create-pull-request from 6.0.4 to 6.0.5
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.4 to 6.0.5.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](9153d834b6...6d6857d369)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-25 18:36:26 +00:00
CrazyMax
e344e2251b Merge pull request #2430 from tonistiigi/linter-updates
linter updates and gopls linting
2024-04-25 09:16:56 +02:00
CrazyMax
833fe3b04f Merge pull request #2427 from dvdksn/remove-doc-stubs
docs: remove stub files and update links
2024-04-25 09:15:11 +02:00
Tonis Tiigi
d0cc9ed0cb hack: add gopls based linters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 18:11:30 -07:00
Tonis Tiigi
b30566438b lint: gopls fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:58:17 -07:00
Tonis Tiigi
ec98985b4e hack: linter updates
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:20:27 -07:00
Tonis Tiigi
9428447cd2 lint: unusedwrite fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:19:52 -07:00
Tonis Tiigi
6112c41637 lint: nilness fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:19:32 -07:00
Tõnis Tiigi
a727de7d5f Merge pull request #2421 from tonistiigi/print-default-load
build: avoid default load with --print
2024-04-24 16:38:21 -07:00
Tõnis Tiigi
4a8fcb7aa0 Merge pull request #2423 from crazy-max/test-build-print
test: build print
2024-04-24 16:38:03 -07:00
Tõnis Tiigi
771e66bf7a Merge pull request #2424 from jaihwan104/exit-1-when-manifest-merge-failed
fix exit code when manifest merge failed
2024-04-24 16:37:18 -07:00
David Karlsson
7e0ab1a003 docs: remove stub files and update links
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-04-23 13:39:56 +02:00
Guillaume Lours
e3e16ad088 bump compose-go to v2.1.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-04-23 10:28:28 +02:00
jaihwan104
f2823515db build: exit 1 when manifest merge failed
Signed-off-by: jaihwan104 <42341126+jaihwan104@users.noreply.github.com>
2024-04-22 23:56:10 +09:00
CrazyMax
5ac9b78384 test: build print
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-19 10:51:27 +02:00
Tonis Tiigi
fbb0f9b424 build: avoid default load with --print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-18 18:12:39 -07:00
CrazyMax
699fa43f7f Merge pull request #2419 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.4
build(deps): bump peter-evans/create-pull-request from 6.0.3 to 6.0.4
2024-04-18 16:57:16 +02:00
dependabot[bot]
bdf27ee797 build(deps): bump peter-evans/create-pull-request from 6.0.3 to 6.0.4
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.3 to 6.0.4.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](c55203cfde...9153d834b6)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-17 18:53:16 +00:00
Tõnis Tiigi
171fcbeb69 Merge pull request #2417 from tonistiigi/update-buildkit-240417
vendor: update buildkit to 71f99c52a669
2024-04-17 10:02:29 -07:00
Tonis Tiigi
370a5aa127 update lint fallback image
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-17 09:18:52 -07:00
Tonis Tiigi
13653fb84d vendor: update buildkit to 71f99c52a669
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-17 08:21:11 -07:00
Tõnis Tiigi
1b16594f4a Merge pull request #2415 from igaskin/scheduler-name
feat: adding option to add scheduler name to kubernetes driver
2024-04-17 08:18:23 -07:00
Tõnis Tiigi
3905e8cf06 Merge pull request #2416 from crazy-max/print-internal
build: mark information requests as internal
2024-04-17 08:15:55 -07:00
CrazyMax
177b95c972 build: mark information requests as internal
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-17 16:56:43 +02:00
Isaac Gaskin
74fdbb5e7f feat: adding option to add scheduler name to kubernetes driver
this allows for custom scheduling of deployments

Signed-off-by: Isaac Gaskin <isaac.gaskin@circle.com>
2024-04-16 14:51:59 -07:00
Tõnis Tiigi
ac331d3569 Merge pull request #2401 from crazy-max/ci-k3s-update
ci: switch to reusable workflow to install k3s
2024-04-15 16:00:55 -07:00
Tõnis Tiigi
07c9b45bae Merge pull request #2408 from tonistiigi/print-statuscode
build: support statuscode response for print requests
2024-04-15 15:58:52 -07:00
Tõnis Tiigi
b91957444b Merge pull request #2406 from tonistiigi/print-lint-fallback
build: add fallback image for --print=lint
2024-04-15 15:58:34 -07:00
Tonis Tiigi
46c44c58ae build: support statuscode response for print requests
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-15 10:38:54 -07:00
CrazyMax
6aed54c35a Merge pull request #2405 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.3
build(deps): bump peter-evans/create-pull-request from 6.0.2 to 6.0.3
2024-04-13 14:54:34 +02:00
Tonis Tiigi
126fe653c7 build: refactor print fallbacks to own function
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-12 17:09:43 -07:00
Tonis Tiigi
f0cbc95eaf build: add fallback image for --print=lint
Fallback to known supporting image if lint called
on old frontend.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-12 17:09:38 -07:00
dependabot[bot]
1a0f9fa96c build(deps): bump peter-evans/create-pull-request from 6.0.2 to 6.0.3
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.2 to 6.0.3.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](70a41aba78...c55203cfde)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-12 18:33:29 +00:00
CrazyMax
df7a3db947 Merge pull request #2384 from Usual-Coder/feature-hcl-index
bake: add `indexof` hcl func
2024-04-11 17:27:21 +02:00
Tõnis Tiigi
d294232cb5 Merge pull request #2404 from tonistiigi/buildkit-vendor-lint-update
vendor: update buildkit v0.14-dev version 549891b
2024-04-11 08:24:21 -07:00
CrazyMax
0a7f5c4d94 bake: test indexof hcl func and make it private
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 17:11:44 +02:00
Usual Coder
5777d980b5 bake: add indexof hcl func
Signed-off-by: Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
2024-04-11 17:01:53 +02:00
Tonis Tiigi
46cf94092c commands: use vendored formatter for lint responses
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-11 07:52:07 -07:00
Tonis Tiigi
da3435ed3a vendor: update buildkit v0.14-dev version 549891b
Brings in formatter for lint requests.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-11 07:49:31 -07:00
Tõnis Tiigi
3e90cc4b84 Merge pull request #2280 from crazy-max/provenance-metadata
build: set record provenance in response
2024-04-11 07:31:12 -07:00
CrazyMax
6418669e75 Merge pull request #2402 from crazy-max/bump-docker
vendor: github.com/docker/cli b6c552212837 (v26.1.0-dev)
2024-04-11 15:14:05 +02:00
CrazyMax
188495aa93 vendor: github.com/docker/cli b6c552212837 (v26.1.0-dev)
full diff: 155dc5e4e4...b6c5522128

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 14:57:31 +02:00
CrazyMax
54a5c1ff93 ci: switch to reusable workflow to install k3s
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 10:15:37 +02:00
CrazyMax
2e2f9f571f build: set record provenance in response
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 10:11:27 +02:00
CrazyMax
d2ac1f2d6e Merge pull request #2322 from crazy-max/test-buildkit-multi-ver
tests: matrix with buildkit versions
2024-04-11 10:10:21 +02:00
CrazyMax
7e3acad9f4 ci: remove buildkit-edge job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:55:00 +02:00
CrazyMax
e04637cf34 ci: use string type for experimental so it can appear on actions page
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:55:00 +02:00
CrazyMax
b9c5f9f1ee ci: run docker worker in dedicated matrix
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:48:32 +02:00
CrazyMax
92ab188781 dockerfile: update buildkit to 0.13.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:43:14 +02:00
CrazyMax
dd4d52407f tests: skip according to buildkit version constraint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:43:14 +02:00
CrazyMax
7432b483ce dockerfile: add undock for integration tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:42:19 +02:00
CrazyMax
6e3164dc6f tests: matrix with buildkit versions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:42:19 +02:00
CrazyMax
2fdb1682f8 Merge pull request #2399 from thaJeztah/bump_x_net
vendor: golang.org/x/sys v0.18.0, golang.org/x/term v0.18.0, golang.org/x/crypto v0.21.0, golang.org/x/net v0.23.0
2024-04-10 19:20:40 +02:00
Sebastiaan van Stijn
7f1eaa2a8a vendor: golang.org/x/net v0.23.0
full diff: https://github.com/golang/net/compare/v0.22.0...v0.23.0

Includes a fix for CVE-2023-45288, which is also addressed in go1.22.2
and go1.21.9;

> http2: close connections when receiving too many headers
>
> Maintaining HPACK state requires that we parse and process
> all HEADERS and CONTINUATION frames on a connection.
> When a request's headers exceed MaxHeaderBytes, we don't
> allocate memory to store the excess headers but we do
> parse them. This permits an attacker to cause an HTTP/2
> endpoint to read arbitrary amounts of data, all associated
> with a request which is going to be rejected.
>
> Set a limit on the amount of excess header frames we
> will process before closing a connection.
>
> Thanks to Bartek Nowotarski for reporting this issue.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:22:06 +02:00
Sebastiaan van Stijn
fbddc9ebea vendor: golang.org/x/net v0.22.0, golang.org/x/crypto v0.21.0
full diffs (changes relevant to vendored code):

- https://github.com/golang/net/compare/v0.20.0...v0.22.0
    - http2: remove suspicious uint32->v conversion in frame code
    - http2: send an error of FLOW_CONTROL_ERROR when exceed the maximum octets
- https://github.com/golang/crypto/compare/v0.18.0...v0.21.0
    - x/crypto/internal/poly1305: improve sum_ppc64le.s

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:14:09 +02:00
Sebastiaan van Stijn
d347499112 vendor: golang.org/x/term v0.18.0
no changes in vendored code

full diff: https://github.com/golang/term/compare/v0.16.0...v0.18.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:02:36 +02:00
Sebastiaan van Stijn
b1fb67f44a vendor: golang.org/x/sys v0.18.0
full diff: https://github.com/golang/sys/compare/v0.16.0...v0.18.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:01:00 +02:00
CrazyMax
a9575a872a Merge pull request #2392 from crazy-max/update-hcl
vendor: update hcl dependencies
2024-04-10 08:48:10 +02:00
Tõnis Tiigi
60f48059a7 Merge pull request #2394 from crazy-max/fix-stdin-controller
build: fix stdin handling when building with controller
2024-04-09 09:57:31 -07:00
CrazyMax
ffff87be03 build: fix stdin handling when building with controller
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 14:49:30 +02:00
CrazyMax
0a3e5e5257 Merge pull request #2393 from crazy-max/fix-go-mod
go.mod: move indirect deps to the right require block
2024-04-09 10:17:10 +02:00
CrazyMax
151b0de8f2 go.mod: move indirect deps to the right require block
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 10:01:07 +02:00
CrazyMax
e40c630758 Merge pull request #2391 from crazy-max/update-compose
vendor: update compose-go to v2.0.2
2024-04-09 09:58:30 +02:00
CrazyMax
ea3338c3f3 vendor: update github.com/zclconf/go-cty to v1.14.4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:41:03 +02:00
CrazyMax
744c055560 vendor: update github.com/hashicorp/hcl/v2 to v2.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:39:15 +02:00
CrazyMax
ca0b583f5a vendor: update compose-go to v2.0.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:20:12 +02:00
CrazyMax
e7f2da9c4f Merge pull request #2385 from davix/patch-1
Fix typo in buildx_build.md
2024-04-09 09:14:30 +02:00
CrazyMax
d805c784f2 Merge pull request #2378 from dvdksn/docs-crossref-secrets
docs: add cross-reference about build secrets
2024-04-09 08:52:42 +02:00
Wei
a2866b79e3 Fix typo in buildx_build.md
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 08:49:25 +02:00
Akihiro Suda
12e1f65eb3 Merge pull request #2370 from Moleus/feat-ephemeral-storage-opts
driver: add ephemeral-storage options to kubernetes-driver
2024-04-09 09:04:25 +09:00
Tõnis Tiigi
0d6b3a9d1d Merge pull request #2336 from crazy-max/bake-load-override
bake: load override
2024-04-08 16:12:22 -07:00
CrazyMax
4b3c3c8401 Merge pull request #2259 from namespacelabs/master
Implement ability to load images by default in non-Docker build drivers.
2024-04-05 16:13:14 +02:00
Niklas Gehlen
ccc314a823 Implement new driver-opt: default-load
This eases build driver migrations, as it allows aligning the default behavior.
See also https://docs.docker.com/build/drivers/

Signed-off-by: Niklas Gehlen <niklas@namespacelabs.com>
2024-04-05 15:30:33 +02:00
CrazyMax
dc4b4c36bd bake: load override
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-05 13:03:15 +02:00
CrazyMax
5c29e6e26e Merge pull request #2374 from tonistiigi/print-json-format
handle json formatting for print
2024-04-05 09:08:27 +02:00
CrazyMax
6a0d5b771f Merge pull request #2376 from crazy-max/ci-test-experimental
tests: test with buildx experimental
2024-04-04 19:51:10 +02:00
CrazyMax
59cc10767e Merge pull request #2363 from crazy-max/bake-remote-token
bake: git auth support for remote definitions
2024-04-04 19:37:16 +02:00
CrazyMax
b61b29f603 tests: test with buildx experimental
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 19:32:20 +02:00
CrazyMax
7cfef05661 Merge pull request #2381 from crazy-max/test-secret
tests: build secret
2024-04-04 19:23:03 +02:00
CrazyMax
4d39259f8e bake: git auth support for remote definitions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 14:12:48 +02:00
CrazyMax
15fd39ebec tests: build secret
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 13:09:42 +02:00
CrazyMax
a7d59ae332 Merge pull request #2373 from jsternberg/docker-cli-meter-provider
metricutil: switch to using the cli meter provider
2024-04-04 11:10:46 +02:00
David Karlsson
e18a2f6e58 docs: add cross-reference about build secrets
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-04-03 10:37:17 +02:00
Tõnis Tiigi
38fbd9a85c Merge pull request #2377 from crazy-max/test-stdin
tests: build from stdin
2024-04-02 09:54:45 -07:00
CrazyMax
84ddbc2b3b Merge pull request #2375 from crazy-max/bump-docker-26
vendor: github.com/docker/docker v26.0.0
2024-04-02 16:40:14 +02:00
Jonathan A. Sternberg
b4799f9d16 metricutil: switch to using the cli meter provider
The meter provider initialization that was located here has now been
moved to a common area in the docker cli. This upgrades our CLI version
and then uses this common code instead of our own version.

As a piece of additional functionality, the docker OTEL endpoint can now
be overwritten with `DOCKER_CLI_OTEL_EXPORTER_OTLP_ENDPOINT` for
testing.

This removes the OTLP exporter from the CLI that was previously locked
behind `BUILDX_EXPERIMENTAL`. I do plan for this to return, but as a
proper part of the `docker/cli` implementation rather than something
special with `buildx`.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-04-02 09:36:55 -05:00
CrazyMax
7cded6b33b tests: build from stdin
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-02 15:10:18 +02:00
CrazyMax
1b36bd0c4a vendor: github.com/docker/docker v26.0.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-02 11:29:15 +02:00
CrazyMax
7dc5639216 Merge pull request #2372 from jsternberg/bump-docker
vendor: github.com/docker/docker and github.com/docker/cli v26.0.0
2024-04-02 11:20:38 +02:00
Tonis Tiigi
858e347306 handle json formatting for print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-01 16:46:04 -07:00
Jonathan A. Sternberg
adb9bc86e5 vendor: github.com/docker/docker and github.com/docker/cli v26.0.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-04-01 13:05:55 -05:00
Moleus
ef2e30deba driver: add ephemeral-storage options to kubernetes-driver
Signed-off-by: Moleus <fafufuburr@gmail.com>
2024-04-01 13:10:44 +03:00
Tõnis Tiigi
c690d460e8 Merge pull request #2362 from jsternberg/single-tracer-delegate-client
driver: initialize tracer delegate in driver handle instead of individual plugins
2024-03-29 11:47:41 -07:00
Tõnis Tiigi
35781a6c78 Merge pull request #2366 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit 25bec7145b39 (v0.14.0-dev)
2024-03-29 10:59:43 -07:00
CrazyMax
de5efcb03b vendor: github.com/moby/buildkit 25bec7145b39 (v0.14.0-dev)
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-28 17:51:45 +01:00
Jonathan A. Sternberg
5c89004bb6 driver: initialize tracer delegate in driver handle instead of individual plugins
This refactors the driver handle to initialize the tracer delegate
inside of the driver handle instead of the individual plugins.

This provides more uniformity to how the tracer delegate is created by
allowing the driver handle to pass additional client options to the
drivers when they create the client. It also avoids creating the tracer
delegate client multiple times because the driver handle will only
initialize the client once. This prevents some drivers, like the remote
driver, from accidentally registering multiple clients as tracer
delegates.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-27 15:13:43 -05:00
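A minimal Go sketch of the initialize-once behavior described above, using sync.Once; the field and method names here are hypothetical illustrations, not the buildx driver handle API:

    package main

    import (
        "fmt"
        "sync"
    )

    // handle creates the shared tracer-delegate client at most once,
    // no matter how many drivers ask for it.
    type handle struct {
        once   sync.Once
        client string // stand-in for the real tracer delegate client
    }

    func (h *handle) tracerClient() string {
        h.once.Do(func() {
            h.client = "tracer-delegate" // hypothetical: construct the client here
        })
        return h.client
    }

    func main() {
        h := &handle{}
        fmt.Println(h.tracerClient(), h.tracerClient()) // same instance both times
    }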
Tõnis Tiigi
8abef59087 Merge pull request #2344 from jsternberg/progress-metrics-non-experimental
progress: remove the experimental label from progress metrics
2024-03-22 09:23:39 -07:00
Jonathan A. Sternberg
4999908fbc progress: remove the experimental label from progress metrics
Removes the experimental label from progress metrics. User-metrics
themselves are still experimental so this is still blocked behind the
experimental flag, but this will allow the docker otlp endpoint to
receive these metrics.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-19 08:23:32 -05:00
Tõnis Tiigi
4af0ed5159 Merge pull request #2323 from jsternberg/build-idle-time-metric
metrics: measure idle time during builds
2024-03-18 15:15:29 -07:00
Jonathan A. Sternberg
a4a8846e46 metrics: measure idle time during builds
This measures the amount of time spent idle during the build. This is
done by collecting the set of task times, determining which sections
contain gaps where no task is running, and aggregating that duration
into a metric.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-18 08:43:15 -05:00
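A self-contained Go sketch of the idle-time computation described above: sort the task intervals, then sum the gaps where no task is running (an illustration of the idea, not the actual buildx metrics code):

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    type span struct{ start, end time.Time }

    // idleTime sums the gaps where no task is running between the earliest
    // start and the latest end of the given spans.
    func idleTime(spans []span) time.Duration {
        if len(spans) == 0 {
            return 0
        }
        sort.Slice(spans, func(i, j int) bool { return spans[i].start.Before(spans[j].start) })
        var idle time.Duration
        cursor := spans[0].end
        for _, s := range spans[1:] {
            if s.start.After(cursor) {
                idle += s.start.Sub(cursor) // gap with no running task
            }
            if s.end.After(cursor) {
                cursor = s.end
            }
        }
        return idle
    }

    func main() {
        t := time.Now()
        fmt.Println(idleTime([]span{
            {t, t.Add(2 * time.Second)},
            {t.Add(5 * time.Second), t.Add(6 * time.Second)}, // 3s idle before this task
        }))
    }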
Tõnis Tiigi
520dc5968a Merge pull request #2298 from LaurentGoderre/imagetools-inspect-tests
Add tests for imagetools inspect
2024-03-15 13:04:06 -07:00
Tõnis Tiigi
324afe60ad Merge pull request #2341 from crazy-max/tests-refactor-worker-handling
tests: refactor worker handling in sandbox
2024-03-15 12:53:27 -07:00
CrazyMax
c0c3a55fca Merge pull request #2343 from crazy-max/experimental-ref
chore: check experimental from confutil
2024-03-15 19:24:44 +01:00
CrazyMax
2a30229916 chore: check experimental from confutil
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-15 11:52:41 +01:00
Tõnis Tiigi
ed76661b0d Merge pull request #2317 from jsternberg/build-export-image-metric
metrics: measure export image operation
2024-03-14 14:59:35 -07:00
Jonathan A. Sternberg
a0cce9b31e metrics: measure export image operation
This measures the amount of time it takes to export to a specific
format.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-14 16:08:19 -05:00
Tõnis Tiigi
d410597f5a Merge pull request #2316 from jsternberg/build-exec-command-time
metrics: measure run operations for exec operations
2024-03-14 13:13:51 -07:00
Jonathan A. Sternberg
9016d85718 metrics: measure run operations for exec operations
This measures the duration of exec operations. It does not factor in
whether the operation was cached, so the measurement also includes the
time spent determining whether an operation was cached.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-14 14:51:27 -05:00
Tõnis Tiigi
2565c74a89 Merge pull request #2254 from crazy-max/rm-local-dirs
chore: switch to LocalMounts implementation
2024-03-14 11:34:12 -07:00
Tõnis Tiigi
eab5cccbb4 Merge pull request #2271 from jsternberg/build-image-transfer-metric
metrics: measure image transfers for image source operations
2024-03-14 10:28:50 -07:00
CrazyMax
e2be765e7b tests: refactor worker handling in sandbox
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-14 13:42:37 +01:00
CrazyMax
276dd5150f Merge pull request #2339 from crazy-max/ci-lint-multi
ci: enable multi-platform lint only for upstream repo
2024-03-14 10:59:34 +01:00
CrazyMax
5c69fa267f ci: enable multi-platform lint only for upstream repo
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-14 10:39:50 +01:00
CrazyMax
b240a00def chore: switch to LocalMounts implementation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 18:59:14 +01:00
Tõnis Tiigi
a8af6fa013 Merge pull request #2332 from crazy-max/build-move-opts
build: move funcs related to solve opts handling
2024-03-13 10:58:26 -07:00
CrazyMax
7eb3dfbd22 Merge pull request #2335 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.4
build(deps): bump softprops/action-gh-release from 2.0.3 to 2.0.4
2024-03-13 10:12:48 +01:00
CrazyMax
4b24f66a10 Merge pull request #2334 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.2
build(deps): bump peter-evans/create-pull-request from 6.0.1 to 6.0.2
2024-03-13 10:12:33 +01:00
CrazyMax
8d5b967f2d ci: set comment version for peter-evans/create-pull-request
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 09:44:40 +01:00
CrazyMax
8842e19869 ci: update comment version for softprops/action-gh-release update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 09:43:39 +01:00
dependabot[bot]
a0ce8bec97 build(deps): bump softprops/action-gh-release from 2.0.3 to 2.0.4
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.3 to 2.0.4.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](3198ee18f8...9d7c94cfd0)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-12 18:19:57 +00:00
dependabot[bot]
84d79df93b build(deps): bump peter-evans/create-pull-request from 6.0.1 to 6.0.2
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](a4f52f8033...70a41aba78)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-12 18:19:54 +00:00
Tõnis Tiigi
df4b13320d Merge pull request #2330 from crazy-max/fix-bake-load-push
bake: fix output handling for push
2024-03-12 09:34:07 -07:00
Tõnis Tiigi
bb511110d6 Merge pull request #2327 from tonistiigi/remote-connhelper-fix
remote: fix connhelpers with custom dialer
2024-03-12 09:01:23 -07:00
CrazyMax
47cf4a5dbe bake: fix output handling for push
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 13:13:13 +01:00
CrazyMax
cfbed42fa7 Merge pull request #2331 from docker/dependabot/github_actions/softprops/action-gh-release-2
build(deps): bump softprops/action-gh-release from 1 to 2
2024-03-12 10:38:23 +01:00
CrazyMax
ff27ab7e86 ci: update comment version for softprops/action-gh-release update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 09:24:28 +01:00
CrazyMax
5655e5e2b6 build: don't export LoadInputs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:48:45 +01:00
CrazyMax
4b516af1f6 build: move funcs related to solve opts handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:48:45 +01:00
CrazyMax
b1490ed5ce tests: create remote with container helper
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:44:36 +01:00
dependabot[bot]
ea830c9758 build(deps): bump softprops/action-gh-release from 1 to 2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](de2c0eb89a...3198ee18f8)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-11 18:14:17 +00:00
Tonis Tiigi
8f576e5790 remote: fix connhelpers with custom dialer
With the new dial-stdio command, the dialer is split
from the `Client` function so that it can be accessed directly.

This breaks the custom connhelpers functionality,
as support for connhelpers is a feature of the default
dialer. If the client defines a custom dialer, then only
it is used, without extra modifications. This means
that the remote driver's dialer needs to detect the
connhelpers on its own.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-03-08 18:35:53 -08:00
CrazyMax
4327ee73b1 Merge pull request #2321 from crazy-max/docker-use-bin-images
dockerfile: use moby-bin and cli-bin images for docker binaries
2024-03-07 13:46:01 +01:00
CrazyMax
70a28fed12 dockerfile: use moby-bin and cli-bin images for docker binaries
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-07 13:10:01 +01:00
CrazyMax
fc22d39d6d Merge pull request #2319 from dvdksn/doc-securitysandbox-link
docs: fix link to new target in dockerfile reference
2024-03-07 10:36:03 +01:00
David Karlsson
1cc5e39cb8 docs: fix link to new target in dockerfile reference
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-03-07 10:07:43 +01:00
CrazyMax
1815e4d9b2 Merge pull request #2314 from dvdksn/docs-vendor
ci: use make target for vendoring docs release
2024-03-06 14:42:03 +01:00
David Karlsson
2ec1dbd1b6 ci: use make target for vendoring docs release
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-03-06 14:25:49 +01:00
CrazyMax
a6163470b7 Merge pull request #2312 from crazy-max/ci-docs-no-provenance
ci: disable provenance for docs generation
2024-03-06 09:29:31 +01:00
CrazyMax
3dfb102f82 ci: disable provenance for docs generation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 09:09:43 +01:00
CrazyMax
253cbee5c7 Merge pull request #2310 from crazy-max/fix-docs-release
ci(docs-release): fix vendoring step
2024-03-06 08:59:11 +01:00
CrazyMax
c1dfa74b98 ci(docs-release): manual trigger support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 08:40:44 +01:00
CrazyMax
647491dd99 ci(docs-release): fix vendoring step
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 08:40:43 +01:00
Jonathan A. Sternberg
9a71895a48 metrics: measure image transfers for image source operations
This measures the transfer size and duration for image pulls along with
the time spent extracting the image contents.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-05 16:33:20 -06:00
Laurent Goderre
abff444562 Added test for imagetool inspect load
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:56:46 -05:00
Laurent Goderre
1d0b542b1b Add unit test for SBOM and Provenance scanning
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:15:21 -05:00
Laurent Goderre
6c485a98be Add tests for imagetools inspect
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:13:23 -05:00
Tõnis Tiigi
9ebfde4897 Merge pull request #2302 from crazy-max/multi-load-push
build: handle push/load shorthands for multi exporters
2024-03-05 09:09:30 -08:00
Tõnis Tiigi
e4ee2ca1fd Merge pull request #2308 from tonistiigi/vendor-buildkit-240305
vendor: update to buildkit v0.13.0
2024-03-05 09:09:07 -08:00
Tonis Tiigi
849456c198 vendor: update to buildkit v0.13.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-03-05 08:53:44 -08:00
CrazyMax
9a2536dd0d test: multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-05 17:05:59 +01:00
CrazyMax
a03263acf8 build: handle push/load shorthands for multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-05 17:05:59 +01:00
CrazyMax
0c0dcb7c8c Merge pull request #2299 from vvoland/vendor-moby-v26
vendor: github.com/docker/docker v26.0.0-rc1
2024-03-05 08:58:41 +01:00
Paweł Gronowski
9bce433154 vendor: github.com/docker/docker v26.0.0-rc1
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-01 12:29:55 +01:00
Paweł Gronowski
04f0fc5871 Replace deprecated docker types usage
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-01 12:29:54 +01:00
CrazyMax
e7da2b0686 Merge pull request #2296 from dvdksn/docs-release-fix-dirnames
ci(fix): remove underscore in docs data dir
2024-02-29 12:02:09 +01:00
David Karlsson
eab565afe7 ci(fix): remove underscore in docs data dir
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-29 11:29:28 +01:00
CrazyMax
7d952441ea Merge pull request #2295 from dvdksn/fix-docs-release-workflow
ci: fix docs-release workflow
2024-02-29 11:26:58 +01:00
David Karlsson
835a6b1096 ci: fix docs-release workflow
Automatically create PR for updating docs on release

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-29 10:43:57 +01:00
Tõnis Tiigi
e273a53c88 Merge pull request #2194 from LaurentGoderre/sbom-dsse
Add support for DSSE envelope for attestation in imagetools
2024-02-28 20:08:07 -08:00
Tonis Tiigi
dcdcce6c52 imagetools: suppress warnings for dsse mediatypes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-02-28 19:25:42 -08:00
Tõnis Tiigi
c5b4ce9e7b Merge pull request #2292 from crazy-max/go-minor
pin to go 1.21
2024-02-28 14:26:28 -08:00
Tõnis Tiigi
8f484f6ac1 Merge pull request #2290 from tonistiigi/multi-export
build: allow multiple exports if supported by buildkit
2024-02-28 14:20:21 -08:00
Laurent Goderre
b748185f48 Add support for DSSE envelope for attestation and provenance in imagetools
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-02-28 16:45:51 -05:00
CrazyMax
a6228ed78f Merge pull request #2293 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.1
build(deps): bump peter-evans/create-pull-request from 6.0.0 to 6.0.1
2024-02-28 22:31:55 +01:00
Tonis Tiigi
fcbe2803c8 build: allow multiple exports if supported by buildkit
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-02-28 13:16:15 -08:00
Tõnis Tiigi
83c30c6c5a Merge pull request #2291 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit v0.13.0-rc3
2024-02-28 13:13:06 -08:00
Tõnis Tiigi
8db86e4031 Merge pull request #2287 from iankingori/fix-dialer
remote: use winio DialPipeContext for named pipes
2024-02-28 13:12:29 -08:00
Tõnis Tiigi
e705cafcd5 Merge pull request #2289 from tonistiigi/prompt-cancel
commands: handle ctrl-c on active prompt
2024-02-28 13:11:48 -08:00
dependabot[bot]
32f17b0de1 build(deps): bump peter-evans/create-pull-request from 6.0.0 to 6.0.1
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](b1ddad2c99...a4f52f8033)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-28 18:23:29 +00:00
Ian King'ori
d40c4bb046 remote: use winio DialPipeContext for named pipes
Signed-off-by: Ian King'ori <kingorim.ian@gmail.com>
2024-02-28 16:19:58 +03:00
CrazyMax
25f8011825 pin to go 1.21
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-28 13:18:42 +01:00
CrazyMax
d0f9655aa2 vendor: github.com/moby/buildkit v0.13.0-rc3
full diff: https://github.com/moby/buildkit/compare/v0.13.0-rc2...v0.13.0-rc3

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-28 09:46:36 +01:00
Tonis Tiigi
ce9a486a0e commands: handle ctrl-c on active prompt
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-02-27 17:35:27 -08:00
CrazyMax
85abcc413e Merge pull request #2283 from crazy-max/update-compose
vendor: update compose-go to v2.0.0-rc.8
2024-02-27 08:55:04 +01:00
Tõnis Tiigi
e5acb010c9 Merge pull request #2284 from crazy-max/update-uuid
vendor: update github.com/google/uuid to v1.6.0
2024-02-26 08:39:01 -08:00
Akihiro Suda
79f50ad924 Merge pull request #2285 from crazy-max/update-hashring
vendor: github.com/serialx/hashring 22c0c7ab6b1b (master)
2024-02-26 21:00:42 +09:00
Tõnis Tiigi
5723ceefb6 Merge pull request #2281 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit v0.13.0-rc2
2024-02-25 22:19:45 -08:00
CrazyMax
95185e9525 vendor: update compose-go to v2.0.0-rc.8
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-24 17:15:27 +01:00
CrazyMax
e423a67f7b vendor: github.com/moby/buildkit v0.13.0-rc2
full diff: https://github.com/moby/buildkit/compare/8e3fe35738c2...v0.13.0-rc2

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-24 17:14:01 +01:00
CrazyMax
545a5c97c6 Merge pull request #2282 from crazy-max/update-k8s
vendor: bump k8s dependencies to v0.29.2
2024-02-24 17:12:39 +01:00
CrazyMax
625d90b983 vendor: github.com/serialx/hashring 22c0c7ab6b1b (master)
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-24 16:56:59 +01:00
CrazyMax
9999fc63e8 vendor: update github.com/google/uuid to v1.6.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-24 16:52:21 +01:00
CrazyMax
303e509bbf vendor: bump k8s dependencies to v0.29.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-24 16:41:41 +01:00
CrazyMax
ae0a5e495a Merge pull request #2263 from crazy-max/resp-build-ref
build: set build ref in response
2024-02-23 23:21:26 +01:00
CrazyMax
2edb7a04a9 build: set build ref in response
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-23 23:01:41 +01:00
CrazyMax
a0599c1c31 Merge pull request #2279 from crazy-max/build-test-shmsize-ulimit
test: build shm-size and ulimit
2024-02-23 23:00:31 +01:00
CrazyMax
eedf9f10e8 test: build shm-size and ulimit
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-23 22:44:30 +01:00
CrazyMax
d891634fc6 Merge pull request #2266 from crazy-max/container-driver-host-entl
driver: set network.host entitlement by default for container drivers
2024-02-23 22:41:12 +01:00
Tõnis Tiigi
af75d0bd7d Merge pull request #2235 from jsternberg/build-context-transfer-metric
metrics: measure context transfers for local source operations
2024-02-23 13:25:58 -08:00
CrazyMax
e008b846bb driver: set network.host entitlement by default for container drivers
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-23 22:23:27 +01:00
Tõnis Tiigi
fd11d93381 Merge pull request #2275 from crazy-max/buildkitd-flags-network-mode
driver: docs to set buildkitd network mode and add tests
2024-02-23 13:20:20 -08:00
CrazyMax
aa518f9b88 driver: test bridge network mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 23:07:50 +01:00
CrazyMax
b16bd02f95 vendor: github.com/moby/buildkit 8e3fe35738c2 (v0.13.0-dev)
full diff: 8e3fe35738...d6e142600e

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 23:07:50 +01:00
CrazyMax
69bd408964 docs(driver): set buildkitd network mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 23:07:50 +01:00
Tõnis Tiigi
d8e9c7f5b5 Merge pull request #2268 from crazy-max/dirver-align-flags
driver: make buildkitd "config" and "flags" names consistent
2024-02-22 13:50:24 -08:00
CrazyMax
fd54daf184 Merge pull request #2276 from crazy-max/codecov-token
ci: set codecov token
2024-02-22 22:43:11 +01:00
CrazyMax
9057bd27af ci: set codecov token
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 18:05:08 +01:00
CrazyMax
5a466918f9 build: enhance error message for unsupported attestations
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 15:30:53 +01:00
CrazyMax
56fc68eb7e driver: make buildkitd "config" and "flags" names consistent
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 10:26:18 +01:00
CrazyMax
ccfcf4bc37 Merge pull request #2272 from crazy-max/fix-docs-upstream
ci: update docs-upstream workflow
2024-02-22 10:25:28 +01:00
CrazyMax
560eaf0e78 ci: update docs-upstream workflow
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-22 10:07:48 +01:00
Tõnis Tiigi
daaa8f2482 Merge pull request #2242 from crazy-max/bake-ulimits-shmsize
bake: ulimits and shm-size support
2024-02-21 16:30:27 -08:00
Jonathan A. Sternberg
97052cf203 metrics: measure context transfers for local source operations
Measure the transfer size and duration of context transfers for various
categories of local source transfers from the progress stream that's
returned during the build.

Local source transfers are split into one of four categories:
* context
* dockerfile
* dockerignore
* namedcontext

Named contexts with different names will be categorized under the
same metric.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-02-21 14:52:03 -06:00
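A hedged Go sketch of folding local source names into the four categories above; the literal names "context", "dockerfile", "dockerignore" and the "context:" prefix for named contexts are assumptions for illustration, not necessarily the exact identifiers buildx uses:

    package main

    import (
        "fmt"
        "strings"
    )

    // category folds a local-source name into one of the four metric buckets.
    func category(name string) string {
        switch {
        case name == "context":
            return "context"
        case name == "dockerfile":
            return "dockerfile"
        case name == "dockerignore":
            return "dockerignore"
        case strings.HasPrefix(name, "context:"): // assumed prefix for named contexts
            return "namedcontext"
        default:
            return "other"
        }
    }

    func main() {
        fmt.Println(category("context:app")) // namedcontext
    }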
Tõnis Tiigi
2eccaadce5 Merge pull request #2258 from jsternberg/docker-otel-non-experimental
metricutil: remove BUILDX_EXPERIMENTAL from internal docker reporting
2024-02-21 11:45:23 -08:00
Tõnis Tiigi
aa4317bfce Merge pull request #2267 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit db304eb93126 (v0.13.0-dev)
2024-02-21 10:22:14 -08:00
CrazyMax
953cbf6696 vendor: github.com/moby/buildkit db304eb93126 (v0.13.0-dev)
full diff: d6e142600e...db304eb931

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-21 11:54:00 +01:00
CrazyMax
414f215929 Merge pull request #2265 from tonistiigi/bake-parent-eval
bake: avoid evaluating parent targets before child LLB loaded
2024-02-21 11:46:12 +01:00
Tonis Tiigi
698eb840a3 bake: avoid evaluating parent targets before child LLB loaded
Because of the way BuildKit cache works, if a request uses an
external cache and some of its vertices have already been evaluated
and are available in the shared graph, BuildKit will not load cache
keys from the external source for those vertices. This may mean that
children of such vertices cannot load cache because there is no cache
path through the parent.

To work around this, wait until the child definition is loaded before
evaluating the parent.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-02-20 19:52:17 -08:00
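A minimal concurrency sketch of the ordering described above, where the parent's evaluation waits until the child definition has been loaded (illustrative channels only, not the bake solver code):

    package main

    import "fmt"

    func main() {
        childLoaded := make(chan struct{})

        go func() {
            fmt.Println("loading child LLB definition")
            close(childLoaded) // signal that the child definition is in the graph
        }()

        <-childLoaded // wait before evaluating the parent
        fmt.Println("evaluating parent target")
    }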
Justin Chadwell
714b85aaaf Merge pull request #2262 from Granjow/patch-1
Fix typo in URL
2024-02-20 14:45:18 +00:00
Simon A. Eugster
fb604d4b57 Fix typo in URL
Signed-off-by: Simon A. Eugster <simon.eu@gmail.com>
2024-02-20 14:19:04 +01:00
CrazyMax
73d8969158 docs: more context around shm-size and ulimit usage
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-20 11:29:13 +01:00
CrazyMax
64e2b2532a bake: ulimits support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-20 11:23:42 +01:00
CrazyMax
c2befc0c12 bake: shm-size support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-20 11:23:42 +01:00
CrazyMax
345551ae0d test: fix message output
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-20 11:23:42 +01:00
CrazyMax
97e8fa7aaf Merge pull request #2253 from dvdksn/docs-cli-reference-urlscheme
docs: use absolute links and update link targets
2024-02-20 09:29:14 +01:00
David Karlsson
cdfc35d0b6 docs: update external link paths
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-20 08:55:41 +01:00
David Karlsson
ce66d8830d vendor: github.com/docker/cli-docs-tool v0.7.0
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-20 08:34:13 +01:00
Jonathan A. Sternberg
fe08cf2981 metricutil: remove BUILDX_EXPERIMENTAL from internal docker reporting
The `BUILDX_EXPERIMENTAL` check is removed from the docker otel
collector. We'll send metrics to the OTLP endpoint for docker desktop if
it is present and enabled regardless of experimental status.

The user-facing `OTEL` endpoints for enabling metric reporting for
external use are still hidden behind the experimental flag. We'll likely
remove the experimental flag for this feature in v0.14.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-02-15 14:17:58 -06:00
Tõnis Tiigi
c9d1c41d20 Merge pull request #2225 from jsternberg/command-duration-metric
metrics: add build command duration metric
2024-02-14 15:38:41 -08:00
Jonathan A. Sternberg
bda968ad5d metrics: add build command duration metric
This adds a build duration metric for the build command with attributes
related to the buildx driver, the error type (if any), and which options
were used to perform the build from a subset of the options.

This also refactors some of the utility methods used by the git tool to
determine filepaths into its own separate package so they can be reused
in another place.

Also adds a test to ensure the resource is initialized correctly and
doesn't error. The otel handler logging message is suppressed on buildx
invocations so we never see the error if there's a problem with the
schema url. It's so easy to mess up the schema url when upgrading OTEL
that we need a proper test to make sure we haven't broken the
functionality.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-02-14 15:58:52 -06:00
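A rough Go sketch of the measurement itself; the record helper and the attribute names stand in for the real OTEL histogram and are hypothetical:

    package main

    import (
        "fmt"
        "time"
    )

    // record is a hypothetical stand-in for an OTEL histogram Record call.
    func record(d time.Duration, attrs map[string]string) {
        fmt.Println(d, attrs)
    }

    func runBuild() error { return nil } // hypothetical build invocation

    func errType(err error) string {
        if err == nil {
            return ""
        }
        return "generic"
    }

    func main() {
        start := time.Now()
        err := runBuild()
        record(time.Since(start), map[string]string{
            "driver":     "docker-container", // example attribute
            "error.type": errType(err),
        })
    }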
Tõnis Tiigi
481384b185 Merge pull request #2112 from cpuguy83/dialstdio
Add dial-stdio command
2024-02-09 17:13:46 -08:00
Tõnis Tiigi
67d9385ce0 Merge pull request #2252 from ndeloof/rawjson
don't print build details when progress is rawjson
2024-02-09 16:40:50 -08:00
Nicolas De Loof
598bc16e5d don't print build details when progress is rawjson
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2024-02-09 13:28:44 +01:00
Brian Goff
760244ee3e Add dial-stdio command
This allows the buildx CLI to act as a proxy to the configured instance.
It allows external code to use buildx itself as a driver for connecting
to buildkitd instances.

Instance and node selection should follow the same semantics as
`buildx build`, including taking into account the `BUILDX_BUILDER` env
var and the `--builder` global flag.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2024-02-08 22:16:00 +00:00
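A minimal Go sketch of the stdio-proxy idea behind dial-stdio: copy stdin to the remote connection and the connection back to stdout (the TCP address is a placeholder; buildx resolves the actual builder endpoint itself):

    package main

    import (
        "io"
        "net"
        "os"
    )

    func main() {
        // hypothetical endpoint; the real command dials the configured builder
        conn, err := net.Dial("tcp", "127.0.0.1:1234")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        go io.Copy(conn, os.Stdin) // stdin -> buildkitd
        io.Copy(os.Stdout, conn)   // buildkitd -> stdout
    }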
CrazyMax
d0177c6da3 Merge pull request #1271 from crazy-max/ctn-restart
docker-container: restart-policy opt
2024-02-08 19:31:57 +01:00
Justin Chadwell
8f8ed68b61 Merge pull request #2250 from iankingori/add-npipe
driver: add npipe url scheme support
2024-02-07 13:46:01 +00:00
Ian King'ori
981cc8c2aa add npipe url scheme support
- enables remote builder and buildx create on windows
Signed-off-by: Ian King'ori <kingorim.ian@gmail.com>
2024-02-07 13:47:12 +03:00
CrazyMax
9822409b67 docker-container: restart-policy opt
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-06 14:33:36 +01:00
CrazyMax
328666dc6a Merge pull request #2249 from crazy-max/dockerfile-bump-docker
Dockerfile: update to Docker Engine v25.0.2
2024-02-06 11:03:35 +01:00
CrazyMax
42d2719b08 Merge pull request #2248 from crazy-max/bump-xx
update xx to 1.4.0
2024-02-06 11:00:09 +01:00
CrazyMax
3b33ac48d2 Dockerfile: update to Docker Engine v25.0.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-06 10:45:59 +01:00
CrazyMax
e0303dd65a update xx to 1.4.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-06 10:44:03 +01:00
CrazyMax
dab7af617a Merge pull request #2245 from thaJeztah/bump_buildkit
vendor: github.com/moby/buildkit 6bd81372ad6f (v0.13.0-dev)
2024-02-05 23:15:42 +01:00
CrazyMax
0326d2a5b1 Merge pull request #2246 from thaJeztah/bump_c8d_console
vendor: github.com/containerd/console v1.0.4
2024-02-05 18:49:20 +01:00
Sebastiaan van Stijn
b4c81a4d27 vendor: github.com/containerd/console v1.0.4
no diff; same commit, but tagged: https://github.com/containerd/console/compare/8f6c4e4faef5...v1.0.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 18:24:02 +01:00
Sebastiaan van Stijn
7b3c4fc714 vendor: github.com/moby/buildkit 6bd81372ad6f (v0.13.0-dev)
full diff: 6bd81372ad...d6e142600e

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 18:10:02 +01:00
Sebastiaan van Stijn
43ed470208 vendor: github.com/aws/aws-sdk-go-v2/config v1.26.6
vendor github.com/aws/aws-sdk-go-v2/config v1.26.6 and related dependencies.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 18:08:03 +01:00
Sebastiaan van Stijn
089982153f vendor: github.com/docker/cli v25.0.2
no changes in vendored code

full diff: https://github.com/docker/cli/compare/v25.0.1...v25.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 18:03:10 +01:00
Sebastiaan van Stijn
7393650008 vendor: github.com/docker/docker v25.0.2
no changes in vendored code

full diff: https://github.com/docker/docker/compare/v25.0.1...v25.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 18:01:55 +01:00
CrazyMax
b36c5196dd Merge pull request #2243 from crazy-max/bump-codecov
ci: bump codecov/codecov-action to 4
2024-02-05 16:29:37 +01:00
CrazyMax
1484862a50 Merge pull request #2244 from kushmansingh/upgrade-buildkit
Upgrade buildkit to v0.12.5
2024-02-05 16:25:19 +01:00
Kushagra Mansingh
e5c3fa5293 Upgrade buildkit to v0.12.5
Contains important security fixes https://github.com/moby/buildkit/releases/tag/v0.12.5

Signed-off-by: Kushagra Mansingh <12158241+kushmansingh@users.noreply.github.com>
2024-02-05 09:54:07 -05:00
CrazyMax
2c58e6003f ci: bump codecov/codecov-action to 4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-05 13:54:55 +01:00
CrazyMax
30ae5ceb6e Merge pull request #2238 from crazy-max/fix-win-console
vendor: github.com/containerd/console 8f6c4e4
2024-02-05 13:10:50 +01:00
CrazyMax
6ffb77dcda vendor: github.com/containerd/console 8f6c4e4
full diff: https://github.com/containerd/console/compare/v1.0.3...8f6c4e4faef5a326d2cd907097d04c0239ee5e2f

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-05 12:16:39 +01:00
CrazyMax
2c1f46450a Merge pull request #2237 from crazy-max/fix-bake-read-order
bake: fix definitions merge order
2024-02-02 13:42:17 +01:00
CrazyMax
052f279de7 bake: fix definitions merge order
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-02-02 10:55:25 +01:00
Justin Chadwell
89684021b3 Merge pull request #2230 from jedevc/imagetools-resolver-copy-dupe
fix: avoid modifying source during resolver.Copy
2024-02-01 10:30:32 +00:00
Justin Chadwell
95bdecc145 fix: avoid modifying source during resolver.Copy
Signed-off-by: Justin Chadwell <me@jedevc.com>
2024-01-31 14:44:10 +00:00
CrazyMax
082d5d70b2 Merge pull request #2228 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.0
build(deps): bump peter-evans/create-pull-request from 5.0.2 to 6.0.0
2024-01-31 15:06:24 +01:00
dependabot[bot]
5b75930a6d build(deps): bump peter-evans/create-pull-request from 5.0.2 to 6.0.0
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 5.0.2 to 6.0.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](153407881e...b1ddad2c99)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-31 13:46:27 +00:00
CrazyMax
e41ab8d10d Merge pull request #2227 from crazy-max/fix-dependabot-conf
chore: ignore docker/docs deps with dependabot
2024-01-31 14:46:00 +01:00
CrazyMax
4b408c79fe Merge pull request #2205 from crazy-max/bump-compose-go
vendor: update compose-go to v2.0.0-rc.3
2024-01-31 14:31:50 +01:00
CrazyMax
cff7baff1c chore: ignore docker/docs deps with dependabot
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-31 14:23:34 +01:00
CrazyMax
5130700981 test: revert non-deterministic compose context path
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-31 14:15:57 +01:00
CrazyMax
13beda8b11 vendor: update compose-go to v2.0.0-rc.3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-31 14:15:57 +01:00
Tõnis Tiigi
8babd5a147 Merge pull request #2215 from thaJeztah/bump_cobra_1.8
vendor: github.com/spf13/cobra v1.8.0
2024-01-30 17:57:41 -08:00
Tõnis Tiigi
cb856682e9 Merge pull request #2224 from jsternberg/otelsdk-package
otel: include service instance id attribute to resource and move to metricutil package
2024-01-30 14:45:24 -08:00
Jonathan A. Sternberg
c65b7ed24f otel: include service instance id attribute to resource and move to metricutil package
Add the service instance id to the resource attributes to prevent
downstream OTEL processors and exporters from thinking that the CLI
invocations are a single process that keeps restarting. The unique id
can be removed through downstream aggregation to prevent cardinality
issues, but we need some way to tell OTEL that it shouldn't reset the
counters.

Move the check for the experimental flag to its own package and then use
that invocation to prevent creating exporters so metrics are disabled
completely. This makes it so we don't have to check for the experimental
flag in every place we add metrics until we decide to make metrics
stable in general.

This also moves the OTEL initialization to a `util/metricutil` package
to be more consistent with the existing util naming and to differentiate
it from the upstream `metric` name. Using both `metrics` and `metric` as
import names was confusing since `metric` was an upstream dependency and
`metrics` was a local utility. `metricutil` matches with the existing
utilities and makes clear that it isn't a spelling mistake.

The record version metric has been removed since we weren't planning on
keeping that metric anyway and most of the information is now included
in the instrumentation library name and version. That function is
included as a utility in the `otel/sdk/metric` package to retrieve the
appropriate meter from the meter provider.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-01-30 16:27:02 -06:00
Tõnis Tiigi
2c3d7dab3f Merge pull request #2212 from crazy-max/bump-gotest-annotations
bump gotest-annotations to fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
2024-01-28 17:34:36 -08:00
Sebastiaan van Stijn
13467c1f5d vendor: github.com/spf13/cobra v1.8.0
- release notes: https://github.com/spf13/cobra/releases/tag/v1.8.0
- full diff: https://github.com/spf13/cobra/compare/v1.7.0...v1.8.0

Release notes highlights:

Features

- Support usage as plugin for tools like kubectl - this means that programs
  that utilize a "plugin-like" structure have much better support and usage
  (like for completions, command paths, etc.)
- Move documentation sources to site/content
- Add 'one required flag' group - this includes a new MarkFlagsOneRequired API
  for flags which can be used to mark a flag group as required and cause command
  failure if at least one is not used when invoked.
- Customizable error message prefix - This adds the SetErrPrefix and ErrPrefix
  APIs on the Command struct to allow for setting a custom prefix for errors
- feat: add getters for flag completions
- Feature: allow running persistent run hooks of all parents
- Improve API to get flag completion function

Bug fixes

- Fix typo in fish completions
- Fix grammar: 'allows to'
- powershell: escape variable with curly brackets
- Don't complete --help flag when flag parsing disabled
- Replace all non-alphanumerics in active help env var program prefix

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 16:11:21 +01:00
CrazyMax
d0c4bed484 Merge pull request #2198 from thaJeztah/bump_buildkit
vendor: github.com/moby/buildkit 6bd81372ad6f (master)
2024-01-26 14:35:29 +01:00
Sebastiaan van Stijn
dbaad32f49 vendor: github.com/moby/buildkit 6bd81372ad6f (master)
- tests: implement NetNSDetached method

full diff: 6e200afad5...6bd81372ad

Co-authored-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 13:05:26 +01:00
Sebastiaan van Stijn
528e3ba259 vendor: github.com/docker/cli v25.0.1
- cli-plugins: move socket code into common package
- cli-plugins: don't use abstract sockets on macOS
    - fixes CLI leaving behind plugin socket mount-points
- socket: return from loop after EOF

full diff: https://github.com/docker/cli/compare/v25.0.0-rc.1...v25.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 13:02:24 +01:00
Sebastiaan van Stijn
1ff261d38e vendor: github.com/docker/docker v25.0.1
- pkg/system: return even richer xattr errors

full diff: https://github.com/moby/moby/compare/v25.0.0-rc.1...v25.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:56:23 +01:00
Sebastiaan van Stijn
bef5d567b0 vendor: github.com/moby/sys/mountinfo v0.7.1
full diff: https://github.com/moby/sys/compare/mountinfo/v0.6.2...mountinfo/v0.7.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:40 +01:00
Sebastiaan van Stijn
da95d9f0ca vendor: golang.org/x/tools v0.14.0, golang.org/x/mod v0.13.0, golang.org/x/sync v0.4.0
full diff:

- https://github.com/golang/tools/compare/v0.10.0...v0.14.0
- https://github.com/golang/mod/compare/v0.11.0...v0.13.0
- https://github.com/golang/sync/compare/v0.3.0...v0.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:40 +01:00
Sebastiaan van Stijn
7206e2d179 vendor: golang.org/x/sys v0.16.0
full diff: https://github.com/golang/sys/compare/v0.15.0...v0.16.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:40 +01:00
Sebastiaan van Stijn
736094794c vendor: github.com/google/uuid v1.5.0
full diff: https://github.com/google/uuid/compare/v1.3.1...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:40 +01:00
Sebastiaan van Stijn
a399a97949 vendor: github.com/google/go-cmp v0.6.0
full diff: https://github.com/google/go-cmp/compare/v0.5.9...v0.6.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:39 +01:00
Sebastiaan van Stijn
62a416fe12 vendor: github.com/containerd/containerd v1.7.12
full diff: https://github.com/containerd/containerd/compare/v1.7.11...v1.7.12

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-26 12:54:37 +01:00
CrazyMax
f6564c3147 bump gotest-annotations to fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-26 10:04:30 +01:00
CrazyMax
b49911416c Merge pull request #2210 from crazy-max/bump-artifacts-action
bump actions/upload-artifact and actions/download-artifact to 4
2024-01-26 10:01:04 +01:00
CrazyMax
22c2538466 Merge pull request #2211 from docker/dependabot/github_actions/actions/setup-go-5
build(deps): bump actions/setup-go from 4 to 5
2024-01-26 09:46:20 +01:00
CrazyMax
1861405b1e ci(docs-upstream): pin reusable workflow
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-26 09:37:40 +01:00
CrazyMax
c9aeca19ce bump actions/upload-artifact and actions/download-artifact to 4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-26 09:33:50 +01:00
dependabot[bot]
59827f5c27 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-26 08:28:22 +00:00
CrazyMax
827622421e Merge pull request #2206 from crazy-max/test-os
ci: test-unit job matrix for win/macos/ubuntu
2024-01-26 09:09:28 +01:00
Tõnis Tiigi
f0c5dfaf48 Merge pull request #2184 from laurazard/cli-signal-handling
Use Cobra's `command.Context()`, cancel execution when CLI is signalled
2024-01-25 10:37:45 -08:00
CrazyMax
703c765ec8 gitutil: check git bash env when testing
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-25 15:06:32 +01:00
CrazyMax
fb2c62a038 build: resolve 8.3 filename format to long one on Windows
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-25 15:06:32 +01:00
CrazyMax
eabbee797b ci: test-unit job matrix for win/macos/ubuntu
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-25 10:53:01 +01:00
CrazyMax
7e4021a43d Merge pull request #2204 from thaJeztah/ci_update_docker_version
Dockerfile: update to Docker Engine v25.0.1
2024-01-24 11:59:38 +01:00
Sebastiaan van Stijn
2478f300aa Dockerfile: update to Docker Engine v25.0.1
Update DOCKER_VERSION to the current release, so that tests run
against that version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-24 11:01:43 +01:00
CrazyMax
620c57c86c Merge pull request #2203 from thaJeztah/ci_update_buildkit_version
Dockerfile: update to buildkit v0.12.4
2024-01-24 10:30:24 +01:00
Sebastiaan van Stijn
8bea1cb417 Dockerfile: update to BuildKit v0.12.4
Update BUILDKIT_VERSION to the current release, so that tests run
against that version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-24 09:35:18 +01:00
Tonis Tiigi
147c7135b0 simplify signal handling for cobra context
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-01-19 17:18:42 -08:00
Laura Brehm
650a7af0ae cobra/commands: cancel command context on signal
See https://github.com/docker/cli/pull/4599 and
https://github.com/docker/cli/pull/4769.

Since we switched to using the cobra command context instead of
`appcontext`, we need to set up the signal handling that was being
provided by `appcontext`, as well as configuring the context with
the OTEL tracing utilities also used by `appcontext`.

This commit introduces `cobrautil.ConfigureContext` which implements
the pre-existing signal handling logic from `appcontext` and cancels
the command's context when a signal is received, as well as doing
the relevant OTEL config.

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
2024-01-19 16:30:52 -08:00
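A compact sketch of wiring signal handling into the cobra command context, using os/signal.NotifyContext and cobra's ExecuteContext (simplified; the real ConfigureContext also sets up OTEL):

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/signal"
        "syscall"

        "github.com/spf13/cobra"
    )

    func main() {
        // cancel the command context on SIGINT/SIGTERM
        ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
        defer stop()

        cmd := &cobra.Command{
            Use: "demo",
            RunE: func(cmd *cobra.Command, _ []string) error {
                <-cmd.Context().Done() // returns once a signal arrives
                fmt.Println("cancelled:", cmd.Context().Err())
                return nil
            },
        }
        if err := cmd.ExecuteContext(ctx); err != nil {
            os.Exit(1)
        }
    }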
Laura Brehm
4f738020fd deps: remove appcontext, use cmd.Context
Signed-off-by: Laura Brehm <laurabrehm@hey.com>
2024-01-19 16:29:12 -08:00
CrazyMax
d852568a29 Merge pull request #2186 from dvdksn/docs-fix-cli-reference-link
docs: update link to `docker build` reference
2024-01-19 16:20:54 +01:00
David Karlsson
68c3ac4f66 docs: update link to docker build reference
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-01-19 15:28:37 +01:00
Tõnis Tiigi
38afdf1f52 Merge pull request #2188 from AkihiroSuda/mark-experimental-flags
Mark experimental flags in `--help`
2024-01-17 21:21:29 -08:00
CrazyMax
b2e723e2a3 Merge pull request #2190 from thaJeztah/bump_golang
update to go1.21.6
2024-01-17 10:53:34 +01:00
Akihiro Suda
02c2073feb Mark experimental flags in --help
Prior to this commit, experimental flags were not distinguishable from
regular flags in `--help`

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-01-13 19:53:56 +09:00
Sebastiaan van Stijn
61dff684ad update to go1.21.6
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-12 23:28:24 +01:00
Tõnis Tiigi
78adfc80a9 Merge pull request #2155 from jsternberg/otel-exporter
metrics: send metrics to the otel collector endpoint when active
2024-01-08 14:46:58 -08:00
Tõnis Tiigi
7c590ecb9a Merge pull request #2140 from crazy-max/rm-multi
rm: support removing multiple builders at once
2024-01-08 11:08:46 -08:00
CrazyMax
24e043e375 rm: support removing multiple builders at once
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-08 01:02:03 +01:00
Jonathan A. Sternberg
7094eb86c9 metrics: send metrics to the otel collector endpoint when active
Introduce a meter provider to the buildx cli that will send metrics to
the otel-collector included in docker desktop if enabled.

This will send usage metrics to the desktop application but also send
metrics to a user-provided otlp receiver endpoint through the standard
environment variables.

This introduces a single metric which is the cli count for build and
bake along with the command name and a few additional attributes.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-01-05 14:50:54 -06:00
Tõnis Tiigi
81ea718ea4 Merge pull request #2179 from thaJeztah/go_connection_0.5.0
vendor: github.com/docker/go-connections v0.5.0
2024-01-05 12:46:33 -08:00
Tõnis Tiigi
9060cab077 Merge pull request #2175 from jsternberg/vendor-update
deps: update buildkit, vendor changes
2024-01-05 12:45:07 -08:00
Sebastiaan van Stijn
3cd6d8d6e4 vendor: github.com/docker/go-connections v0.5.0
no diff, as the tag is the same commit as we used already;
https://github.com/docker/go-connections/compare/fa09c952e3ea...v0.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-05 18:29:16 +01:00
Jonathan A. Sternberg
ba43fe08f4 deps: update buildkit, vendor changes
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-01-05 11:17:43 -06:00
Tõnis Tiigi
6b63e7e3de Merge pull request #2176 from crazy-max/fix-builder-creation
driver(container): fix conditional statement for error handling
2024-01-05 08:31:00 -08:00
CrazyMax
57d737a13c driver(container): fix conditional statement for error handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-05 16:35:09 +01:00
Tõnis Tiigi
671347dc35 Merge pull request #2156 from crazy-max/frontend-attr-localdirs
build: set local dirs as frontend attributes
2024-01-02 23:58:32 -08:00
CrazyMax
02bc4e8992 build: set local dirs as frontend attributes
Set local dirs metadata if they are relative to the VCS directory, so
the Dockerfile path is tracked accurately even when VCS information
is not available.

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-21 11:38:11 +01:00
CrazyMax
1cdefbe901 Merge pull request #2167 from crazy-max/fix-git-root-wsl
gitutil: sanitize root dir on WSL
2023-12-20 11:03:28 +01:00
CrazyMax
7694f0b9d8 Merge pull request #2166 from crazy-max/multi-golangci-lint
enable golangci-lint for supported platforms
2023-12-20 11:02:58 +01:00
CrazyMax
fa9126c61f Merge pull request #2171 from dvdksn/docs-no-cache-filter
docs: add --no-cache-filter example
2023-12-20 10:59:53 +01:00
David Karlsson
ebae070f7e docs: add --no-cache-filter example
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-12-20 08:58:52 +01:00
CrazyMax
617f538cb3 Merge pull request #2170 from laurazard/bump-buildkit
deps: update buildkit, vendor changes
2023-12-19 15:08:29 +01:00
Laura Brehm
0f45b629ad deps: update buildkit, vendor changes
Signed-off-by: Laura Brehm <laurabrehm@hey.com>
2023-12-19 14:01:05 +00:00
CrazyMax
ac5b3241b1 gitutil: sanitize root dir on WSL
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-19 09:27:32 +01:00
CrazyMax
ee24a36c4f enable golangci-lint for supported platforms
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-14 22:49:37 +01:00
CrazyMax
8484fcdd57 Merge pull request #2162 from crazy-max/ci-username-var
ci: use org-wide var as username
2023-12-14 14:35:19 +01:00
CrazyMax
45deb29f09 ci: use org-wide var as username
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-14 14:24:56 +01:00
CrazyMax
6641167e7d Merge pull request #2159 from docker/dependabot/github_actions/github/codeql-action-3
build(deps): bump github/codeql-action from 2 to 3
2023-12-14 10:13:58 +01:00
CrazyMax
9f4987997c Merge pull request #2154 from dvdksn/doc-annotations
docs: annotations
2023-12-14 10:07:18 +01:00
CrazyMax
8337c25fa4 Merge pull request #2157 from crazy-max/swtich-account-push-bin
ci: use public bot account to push bin image
2023-12-13 20:49:23 +01:00
dependabot[bot]
6b048e2316 build(deps): bump github/codeql-action from 2 to 3
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2 to 3.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-13 18:52:37 +00:00
CrazyMax
54a1f0f0ea ci: use public bot account to push bin image
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-13 15:33:22 +01:00
CrazyMax
57dc45774a Merge pull request #2131 from crazy-max/fix-driver-infer-platform
build: infer platform from first node if none set
2023-12-12 13:27:51 +01:00
CrazyMax
9d8ac1ce2d build: move solveOpt to local struct type
*client.SolveOpt in driver code is only used by build code.
For a clear separation of concerns, move it to an internal
struct type only accessible by BuildWithResultHandler func.

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-11 19:53:05 +01:00
CrazyMax
0a0252d9b3 Merge pull request #2150 from crazy-max/builder-create
builder: move builder creation logic to builder package
2023-12-11 01:34:15 -08:00
David Karlsson
c6535e9675 docs: add levels to bake file target.annotations
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-12-10 21:16:08 +01:00
David Karlsson
d762c76a68 docs: build --annotation
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-12-10 21:16:08 +01:00
David Karlsson
1091707bd5 docs: add lang tag for plaintext code blocks
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-12-10 20:35:08 +01:00
David Karlsson
a4c392f4db docs: imagetools create --annotation
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-12-10 20:35:08 +01:00
CrazyMax
e4880c5dd1 Merge pull request #2151 from docker/dependabot/github_actions/actions/setup-go-5
build(deps): bump actions/setup-go from 4 to 5
2023-12-07 07:43:59 -08:00
dependabot[bot]
b2510c6b94 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-06 18:12:24 +00:00
CrazyMax
5b5c4c8c9d Merge pull request #2146 from tonistiigi/gitutil-default-remote
gitutil: find default remote by tracking branch
2023-12-04 08:08:11 -08:00
CrazyMax
ceb5bc807c builder: move builder creation logic to builder package
This moves the builder creation logic to the builder package. We
plan to allow builder creation in the build UI of Docker Desktop, so
this avoids drifting from the same logic in the command package.

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-04 14:08:08 +01:00
CrazyMax
b2f705ad71 Merge pull request #2147 from tonistiigi/bake-shared-authprovider
bake: use same auth provider for bake targets
2023-12-01 03:24:17 -08:00
Tonis Tiigi
6028094e6b gitutil: find default remote by tracking branch
This command resolves the remote based on the remote
tracking branch of the currently selected branch, and
should be more precise when we can't predict whether the
user uses origin for their upstream or for their fork.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2023-11-30 23:33:51 -08:00
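A small Go sketch of resolving the tracking remote for the current branch; it assumes `git rev-parse --abbrev-ref --symbolic-full-name @{upstream}` (which prints, for example, origin/main), and the real gitutil helper may differ:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func defaultRemote() (string, error) {
        out, err := exec.Command("git", "rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{upstream}").Output()
        if err != nil {
            return "", err
        }
        ref := strings.TrimSpace(string(out)) // e.g. "origin/main"
        remote, _, _ := strings.Cut(ref, "/")
        return remote, nil
    }

    func main() {
        remote, err := defaultRemote()
        fmt.Println(remote, err)
    }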
Tonis Tiigi
9516ce8e25 bake: use same auth provider for bake targets
The results from credential plugins are cached
and this reduces the lookup times.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2023-11-30 22:44:53 -08:00
CrazyMax
d82637582c Merge pull request #2145 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit c9ee8491d74f (master)
2023-11-30 06:57:58 -08:00
CrazyMax
1e80c70990 vendor: github.com/moby/buildkit c9ee8491d74f (master)
full diff:

- https://github.com/containerd/containerd/compare/v1.7.8...v1.7.9
- 5ae9b23c40...c9ee8491d7

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-11-29 16:40:13 +01:00
CrazyMax
54032316f9 build: infer platform from first node if none set
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-11-28 11:55:47 +01:00
Justin Chadwell
aac7a47469 build: fix incorrect solve opt platform from being set
This regression was introduced in 616fb3e55c,
with the node resolution refactor.

The core issue is that we would unconditionally set the solve opt's
platform to the default current platform, which was incorrect. We can
prevent this by special-casing the default case, as before, and not
setting the platforms field there (while keeping the node resolution
behavior that was introduced).

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-11-28 10:11:51 +00:00
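A minimal sketch of the fix with hypothetical names (the actual change lives
in buildx's build package): only set the platforms field when the caller asked
for specific platforms, otherwise leave it empty so node resolution decides.

    // requestedPlatforms is what the user passed via --platform (may be empty).
    if len(requestedPlatforms) > 0 {
        opt.Platforms = requestedPlatforms
    }
    // else: leave opt.Platforms unset instead of forcing the current default
    // platform, keeping the node resolution behavior from 616fb3e55c.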
Justin Chadwell
aa0aeac297 build: move solve opt out of duplicate map
This was more error-prone than the approach used prior to
616fb3e55c.

Signed-off-by: Justin Chadwell <me@jedevc.com>
2023-11-28 10:10:50 +00:00
CrazyMax
cec4496d3b Merge pull request #1787 from crazy-max/ls-format
ls: format opt
2023-11-23 06:00:17 -08:00
CrazyMax
9368ecb67e Merge pull request #2121 from robmry/align_add-host_with_cli
Permit '=' separator and '[ipv6]' in --add-host
2023-11-23 02:33:23 -08:00
CrazyMax
20c947990c ls: format opt
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2023-11-22 21:11:24 +01:00
Rob Murray
eeeff1cf23 Permit '=' separator and '[ipv6]' in --add-host
Fixes docker/cli#4648

Make it easier to specify IPv6 addresses in the '--add-host' option by
permitting 'host=ip' in addition to 'host:ip', and allowing square
brackets around the address.

For example:

    --add-host=hostname:127.0.0.1
    --add-host=hostname:::1
    --add-host=hostname=::1
    --add-host=hostname=[::1]

Signed-off-by: Rob Murray <rob.murray@docker.com>
2023-11-22 10:52:14 +00:00
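A standalone sketch of the accepted forms (not the docker/cli implementation;
parseAddHost is an illustrative name):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseAddHost accepts "host:ip", "host=ip" and "host=[ipv6]" and returns
    // the host and the bare address.
    func parseAddHost(val string) (host, addr string, err error) {
        if h, a, ok := strings.Cut(val, "="); ok {
            host, addr = h, a
        } else if i := strings.Index(val, ":"); i >= 0 {
            // with ':' as separator, everything after the first ':' is the
            // address, which is what allows "hostname:::1" for IPv6 loopback
            host, addr = val[:i], val[i+1:]
        } else {
            return "", "", fmt.Errorf("bad format for add-host: %q", val)
        }
        addr = strings.TrimSuffix(strings.TrimPrefix(addr, "["), "]")
        return host, addr, nil
    }

    func main() {
        for _, v := range []string{
            "hostname:127.0.0.1", "hostname:::1", "hostname=::1", "hostname=[::1]",
        } {
            h, a, err := parseAddHost(v)
            fmt.Println(h, a, err)
        }
    }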
Tõnis Tiigi
752680e289 Merge pull request #2136 from dvdksn/docs-du-verbose
docs: add buildx du verbose example
2023-11-21 16:49:55 -08:00
David Karlsson
5bf02d9f7b docs: add buildx du verbose example
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-11-21 15:55:50 +01:00
Tõnis Tiigi
0962fdbb04 Merge pull request #2130 from jsternberg/remote-bootstrap-timeout
driver: add status reporting and a timeout to the remote driver bootstrap
2023-11-17 14:12:33 -08:00
Jonathan A. Sternberg
1f5562315b driver: add status reporting and a timeout to the remote driver bootstrap
This adds status reporting for the remote driver so it shows how long
it has been waiting for the connection. If the service is already
present, this logger isn't shown; otherwise it gives the user a
message explaining why the build is stalled.

A timeout of 20 seconds has been added to the bootstrap.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2023-11-17 15:33:28 -06:00
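A hypothetical sketch of the bootstrap behavior described above (tryConnect,
endpoint and logger are stand-in names, not the actual driver code): wait for
the remote endpoint with a 20-second timeout and report elapsed time while
waiting.

    ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
    defer cancel()

    start := time.Now()
    for {
        if err := tryConnect(ctx, endpoint); err == nil {
            break // service reachable; no status message needed
        }
        select {
        case <-ctx.Done():
            return errors.Wrap(ctx.Err(), "timed out waiting for remote buildkitd")
        case <-time.After(time.Second):
            logger.Printf("waiting for %s (%s elapsed)", endpoint,
                time.Since(start).Round(time.Second))
        }
    }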
CrazyMax
a102d33738 Merge pull request #2128 from dvdksn/docs-du-reftypes
docs: update du cmd description
2023-11-17 04:51:11 -08:00
David Karlsson
940e0a4a3c docs: update du cmd description
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-11-17 11:02:20 +01:00
Tõnis Tiigi
a978b2b7a3 Merge pull request #2117 from dvdksn/docs-cli-reference-refresh
docs: minor cli reference editorial updates
2023-11-16 11:25:52 -08:00
David Karlsson
1326634c7d chore: add docs reminder comments for driver opts
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-11-16 17:00:37 +01:00
David Karlsson
7a724ac445 docs: minor cli reference editorial updates
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2023-11-16 16:05:38 +01:00
CrazyMax
55e164a540 Merge pull request #2122 from crazy-max/docs-fix-image-inspect
docs: fix imagetools inspect json format
2023-11-16 06:47:37 -08:00
CrazyMax
707ae87060 docs: fix imagetools inspect json format
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-11-16 14:50:56 +01:00
CrazyMax
cb37886658 Merge pull request #2118 from thaJeztah/update_engine_25
vendor: buildkit 5ae9b23c40a9 (master / v0.13.0-dev), docker v25.0.0-beta.1
2023-11-15 07:32:06 -08:00
Sebastiaan van Stijn
c855277d53 vendor: github.com/moby/buildkit 5ae9b23c40a9 (master / v0.13.0-dev)
full diff:

- 36ef4d8c0d...f098008783
- d5c1d785b0...5ae9b23c40

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:59:23 +01:00
Sebastiaan van Stijn
898a8eeddf vendor: github.com/docker/docker, github.com/docker/cli v25.0.0-beta.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:55:37 +01:00
Sebastiaan van Stijn
c857eaa380 vendor: github.com/docker/go-connections fa09c952e3ea (v0.5.0-dev)
full diff: https://github.com/docker/go-connections/compare/v0.4.0...fa09c952e3ea

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:54:44 +01:00
Sebastiaan van Stijn
55db25c21c vendor: github.com/docker/docker-credential-helpers v0.8.0
full diff: https://github.com/docker/docker-credential-helpers/compare/v0.7.0...v0.8.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:53:58 +01:00
Sebastiaan van Stijn
f353814390 vendor: github.com/klauspost/compress v1.17.2
full diff: https://github.com/klauspost/compress/compare/v1.16.3...v1.17.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:52:35 +01:00
Sebastiaan van Stijn
271a467612 vendor: github.com/go-logr/logr v1.2.4
full diff: https://github.com/go-logr/logr/compare/v1.2.3...v1.2.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:51:13 +01:00
CrazyMax
b3b8c62ad4 Merge pull request #2111 from docker/dependabot/github_actions/actions/github-script-7
build(deps): bump actions/github-script from 6 to 7
2023-11-15 06:50:56 -08:00
Sebastiaan van Stijn
eacf2bdf3d vendor: github.com/cenkalti/backoff/v4 v4.2.1
no changes to vendored files

full diff: https://github.com/cenkalti/backoff/compare/v4.2.0...v4.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-15 15:48:18 +01:00
Sebastiaan van Stijn
da5f853b44 vendor: github.com/containerd/containerd v1.7.8
full diff: https://github.com/containerd/containerd/compare/v1.7.7...v1.7.8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-14 14:33:27 +01:00
Sebastiaan van Stijn
4932eecc3f vendor: google.golang.org/grpc v1.58.3
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-14 14:31:24 +01:00
Sebastiaan van Stijn
7f93616ff1 vendor: golang.org/x/oauth2 v0.10.0
full diff: https://github.com/golang/oauth2/compare/v0.5.0...v0.10.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-14 14:30:28 +01:00
Sebastiaan van Stijn
ab58333311 vendor: google.golang.org/protobuf v1.31.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-14 14:29:26 +01:00
Sebastiaan van Stijn
9efaa2793d vendor: golang.org/x/tools v0.10.0
full diff: https://github.com/golang/tools/compare/v0.7.0...v0.10.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-14 14:26:38 +01:00
dependabot[bot]
8819ac1b65 build(deps): bump actions/github-script from 6 to 7
Bumps [actions/github-script](https://github.com/actions/github-script) from 6 to 7.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-13 18:55:01 +00:00
3047 changed files with 227213 additions and 79040 deletions

View File

@@ -5,6 +5,11 @@ updates:
directory: "/"
schedule:
interval: "daily"
ignore:
# ignore this dependency
# it seems to be a bug with dependabot, as pinning to a commit sha should not
# trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
- dependency-name: "docker/docs"
labels:
- "dependencies"
- "bot"

.github/labeler.yml (new file, 104 lines)
View File

@@ -0,0 +1,104 @@
# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
area/project:
- all:
- changed-files:
- any-glob-to-any-file:
- .github/**
- LICENSE
- AUTHORS
- MAINTAINERS
- PROJECT.md
- README.md
- .gitignore
- codecov.yml
- all-globs-to-all-files: '!.github/workflows/*'
# Add 'area/github-actions' label to changes in the .github/workflows folder
area/ci:
- changed-files:
- any-glob-to-any-file: '.github/workflows/**'
# Add 'area/bake' label to changes in the bake
area/bake:
- changed-files:
- any-glob-to-any-file: 'bake/**'
# Add 'area/bake/compose' label to changes in the bake+compose
area/bake/compose:
- changed-files:
- any-glob-to-any-file:
- bake/compose.go
- bake/compose_test.go
# Add 'area/build' label to changes in build files
area/build:
- changed-files:
- any-glob-to-any-file: 'build/**'
# Add 'area/builder' label to changes in builder files
area/builder:
- changed-files:
- any-glob-to-any-file: 'builder/**'
# Add 'area/cli' label to changes in the CLI
area/cli:
- changed-files:
- any-glob-to-any-file:
- cmd/**
- commands/**
# Add 'area/controller' label to changes in the controller
area/controller:
- changed-files:
- any-glob-to-any-file: 'controller/**'
# Add 'area/docs' label to markdown files in the docs folder
area/docs:
- changed-files:
- any-glob-to-any-file: 'docs/**/*.md'
# Add 'area/dependencies' label to changes in go dependency files
area/dependencies:
- changed-files:
- any-glob-to-any-file:
- go.mod
- go.sum
- vendor/**
# Add 'area/driver' label to changes in the driver folder
area/driver:
- changed-files:
- any-glob-to-any-file: 'driver/**'
# Add 'area/driver/docker' label to changes in the docker driver
area/driver/docker:
- changed-files:
- any-glob-to-any-file: 'driver/docker/**'
# Add 'area/driver/docker-container' label to changes in the docker-container driver
area/driver/docker-container:
- changed-files:
- any-glob-to-any-file: 'driver/docker-container/**'
# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
area/driver/kubernetes:
- changed-files:
- any-glob-to-any-file: 'driver/kubernetes/**'
# Add 'area/driver/remote' label to changes in the remote driver
area/driver/remote:
- changed-files:
- any-glob-to-any-file: 'driver/remote/**'
# Add 'area/hack' label to changes in the hack folder
area/hack:
- changed-files:
- any-glob-to-any-file: 'hack/**'
# Add 'area/tests' label to changes in test files
area/tests:
- changed-files:
- any-glob-to-any-file:
- tests/**
- '**/*_test.go'

View File

@@ -24,57 +24,71 @@ env:
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.22"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
jobs:
prepare-test:
runs-on: ubuntu-22.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v4
with:
targets: integration-test-base
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
test:
runs-on: ubuntu-22.04
needs:
- prepare-test
test-integration:
runs-on: ubuntu-24.04
env:
TESTFLAGS: "-v --parallel=6 --timeout=30m"
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
TEST_COVERAGE: "1"
strategy:
fail-fast: false
matrix:
buildkit:
- master
- latest
- buildx-stable-1
- v0.14.1
- v0.13.2
- v0.12.5
worker:
- docker
- docker\+containerd # same as docker, but with containerd snapshotter
- docker-container
- remote
pkg:
- ./tests
mode:
- ""
- experimental
include:
- pkg: ./...
skip-integration-tests: 1
- worker: docker
pkg: ./tests
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: docker
pkg: ./tests
mode: experimental
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
steps:
-
name: Prepare
run: |
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
if [ -n "${{ matrix.buildkit }}" ]; then
echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
fi
testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
case "${{ matrix.worker }}" in
docker | docker+containerd)
echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
;;
*)
echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
;;
esac
if [[ "${{ matrix.worker }}" == "docker"* ]]; then
echo "TEST_DOCKERD=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.mode }}" = "experimental" ]; then
echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
@@ -92,44 +106,116 @@ jobs:
buildkitd-flags: --debug
-
name: Build test image
uses: docker/bake-action@v4
uses: docker/bake-action@v5
with:
targets: integration-test
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
run: |
export TEST_REPORT_SUFFIX=-${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.skip-integration-tests }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')
./hack/test
env:
TEST_DOCKERD: "${{ startsWith(matrix.worker, 'docker') && '1' || '0' }}"
TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker\\+containerd') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
TESTPKGS: "${{ matrix.pkg }}"
SKIP_INTEGRATION_TESTS: "${{ matrix.skip-integration-tests }}"
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
directory: ./bin/testreports
flags: integration
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@1a64ea6d01db9a48aa61954cb20e265782c167d9
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ./bin/testreports
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: test-reports
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ./bin/testreports
test-unit:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os:
- ubuntu-24.04
- macos-12
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "${{ env.GO_VERSION }}"
-
name: Prepare
run: |
testreportsName=${{ github.job }}--${{ matrix.os }}
testreportsBaseDir=./bin/testreports
testreportsDir=$testreportsBaseDir/$testreportsName
echo "TESTREPORTS_NAME=$testreportsName" >> $GITHUB_ENV
echo "TESTREPORTS_BASEDIR=$testreportsBaseDir" >> $GITHUB_ENV
echo "TESTREPORTS_DIR=$testreportsDir" >> $GITHUB_ENV
mkdir -p $testreportsDir
shell: bash
-
name: Install gotestsum
run: |
go install gotest.tools/gotestsum@${{ env.GOTESTSUM_VERSION }}
-
name: Test
env:
TMPDIR: ${{ runner.temp }}
run: |
gotestsum \
--jsonfile="${{ env.TESTREPORTS_DIR }}/go-test-report.json" \
--junitfile="${{ env.TESTREPORTS_DIR }}/junit-report.xml" \
--packages="./..." \
-- \
"-mod=vendor" \
"-coverprofile" "${{ env.TESTREPORTS_DIR }}/coverage.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
shell: bash
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
uses: crazy-max/.github/.github/actions/gotest-annotations@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
directory: ${{ env.TESTREPORTS_DIR }}
-
name: Upload test reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ${{ env.TESTREPORTS_BASEDIR }}
prepare-binaries:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
@@ -147,7 +233,7 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
binaries:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- prepare-binaries
strategy:
@@ -183,16 +269,17 @@ jobs:
CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
-
name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: buildx
name: buildx-${{ env.PLATFORM_PAIR }}
path: ${{ env.DESTDIR }}/*
if-no-files-found: error
bin-image:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- test
- test-integration
- test-unit
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
steps:
-
@@ -225,11 +312,11 @@ jobs:
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
-
name: Build and push image
uses: docker/bake-action@v4
uses: docker/bake-action@v5
with:
files: |
./docker-bake.hcl
@@ -242,9 +329,10 @@ jobs:
*.cache-to=type=gha,scope=bin-image,mode=max
release:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- test
- test-integration
- test-unit
- binaries
steps:
-
@@ -252,10 +340,11 @@ jobs:
uses: actions/checkout@v4
-
name: Download binaries
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: buildx
path: ${{ env.DESTDIR }}
pattern: buildx-*
merge-multiple: true
-
name: Create checksums
run: ./hack/hash-files
@@ -270,33 +359,9 @@ jobs:
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
uses: softprops/action-gh-release@a74c6b72af54cfa997e81df42d94703d6313a2d0 # v2.0.6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
draft: true
files: ${{ env.DESTDIR }}/*
buildkit-edge:
runs-on: ubuntu-22.04
continue-on-error: true
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=moby/buildkit:master
buildkitd-flags: --debug
-
# Just run a bake target to check everything runs fine
name: Build
uses: docker/bake-action@v4
with:
targets: binaries

View File

@@ -13,30 +13,30 @@ permissions:
security-events: write
env:
GO_VERSION: 1.21.3
GO_VERSION: "1.22"
jobs:
codeql:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Initialize CodeQL
uses: github/codeql-action/init@v2
uses: github/codeql-action/init@v3
with:
languages: go
-
name: Autobuild
uses: github/codeql-action/autobuild@v2
uses: github/codeql-action/autobuild@v3
-
name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"

View File

@@ -1,14 +1,19 @@
name: docs-release
on:
workflow_dispatch:
inputs:
tag:
description: 'Git tag'
required: true
release:
types:
- released
jobs:
open-pr:
runs-on: ubuntu-22.04
if: ${{ github.event.release.prerelease != true && github.repository == 'docker/buildx' }}
runs-on: ubuntu-24.04
if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
steps:
-
name: Checkout docs repo
@@ -20,39 +25,47 @@ jobs:
-
name: Prepare
run: |
rm -rf ./_data/buildx/*
rm -rf ./data/buildx/*
if [ -n "${{ github.event.inputs.tag }}" ]; then
echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
else
echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
fi
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build docs
uses: docker/bake-action@v4
name: Generate yaml
uses: docker/bake-action@v5
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
env:
DOCS_FORMATS: yaml
-
name: Copy files
name: Copy yaml
run: |
cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
-
name: Commit changes
name: Update vendor
run: |
git add -A .
make vendor
env:
VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@153407881ec5c347639a548ade7d8ad1d6740e38
uses: peter-evans/create-pull-request@c5a7806660adbe173f04e3e038b0ccdcd758773c # v6.1.0
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io
commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
signoff: true
branch: dispatch/buildx-ref-${{ github.event.release.name }}
branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
delete-branch: true
title: Update buildx reference to ${{ github.event.release.name }}
title: Update buildx reference to ${{ env.RELEASE_NAME }}
body: |
Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
draft: false

View File

@@ -22,7 +22,7 @@ on:
jobs:
docs-yaml:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
steps:
-
name: Checkout
@@ -34,9 +34,10 @@ jobs:
version: latest
-
name: Build reference YAML docs
uses: docker/bake-action@v4
uses: docker/bake-action@v5
with:
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
*.cache-from=type=gha,scope=docs-yaml
@@ -45,18 +46,18 @@ jobs:
DOCS_FORMATS: yaml
-
name: Upload reference YAML docs
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: docs-yaml
path: /tmp/buildx-docs/out/reference
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@main
uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
needs:
- docs-yaml
with:
module-name: docker/buildx
data-files-id: docs-yaml
data-files-folder: buildx
data-files-placeholder-folder: engine/reference/commandline
create-placeholder-stubs: true

View File

@@ -22,7 +22,7 @@ env:
jobs:
build:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -33,7 +33,7 @@ jobs:
version: latest
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v5
with:
targets: binaries
set: |
@@ -46,7 +46,7 @@ jobs:
mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
-
name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: binary
path: ${{ env.DESTDIR }}/build
@@ -82,6 +82,10 @@ jobs:
driver-opt: qemu.install=true
- driver: remote
endpoint: tcp://localhost:1234
- driver: docker-container
metadata-provenance: max
- driver: docker-container
metadata-warnings: true
exclude:
- driver: docker
multi-node: mnode-true
@@ -103,7 +107,7 @@ jobs:
if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
-
name: Install buildx
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binary
path: /home/runner/.docker/cli-plugins
@@ -129,70 +133,18 @@ jobs:
else
echo "MULTI_NODE=0" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-provenance }}" ]; then
echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-warnings }}" ]; then
echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
fi
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: actions/github-script@v6
uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
with:
script: |
const fs = require('fs');
let wait = function(milliseconds) {
return new Promise((resolve, reject) => {
if (typeof(milliseconds) !== 'number') {
throw new Error('milleseconds not a number');
}
setTimeout(() => resolve("done!"), milliseconds)
});
}
try {
const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
core.info(`storing kubeconfig in ${kubeconfig}`);
await exec.exec('docker', ["run", "-d",
"--privileged",
"--name=buildkit-k3s",
"-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
"-e", "K3S_KUBECONFIG_MODE=666",
"-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
"-p", "6443:6443",
"-p", "80:80",
"-p", "443:443",
"-p", "8080:8080",
"rancher/k3s:${{ env.K3S_VERSION }}", "server"
]);
await wait(10000);
core.exportVariable('KUBECONFIG', kubeconfig);
let nodeName;
for (let count = 1; count <= 5; count++) {
try {
const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
nodeName = nodeNameOutput.stdout
} catch (error) {
core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
} finally {
if (nodeName) {
break;
}
await wait(5000);
}
}
if (!nodeName) {
throw new Error(`Unable to resolve node name after 5 attempts.`);
}
await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
} catch (error) {
core.setFailed(error.message);
}
-
name: Print KUBECONFIG
if: matrix.driver == 'kubernetes'
run: |
yq ${{ env.KUBECONFIG }}
version: ${{ env.K3S_VERSION }}
-
name: Launch remote buildkitd
if: matrix.driver == 'remote'

.github/workflows/labeler.yml (new file, 19 lines)
View File

@@ -0,0 +1,19 @@
name: labeler
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request_target:
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
-
name: Run
uses: actions/labeler@v5

View File

@@ -17,17 +17,70 @@ on:
- '.github/releases.json'
jobs:
prepare:
runs-on: ubuntu-24.04
outputs:
includes: ${{ steps.matrix.outputs.includes }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Matrix
id: matrix
uses: actions/github-script@v7
with:
script: |
let def = {};
await core.group(`Parsing definition`, async () => {
const printEnv = Object.assign({}, process.env, {
GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
});
const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
ignoreReturnCode: true,
env: printEnv
});
if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
throw new Error(resPrint.stderr);
}
def = JSON.parse(resPrint.stdout.trim());
});
await core.group(`Generating matrix`, async () => {
const includes = [];
for (const targetName of Object.keys(def.target)) {
const target = def.target[targetName];
if (target.platforms && target.platforms.length > 0) {
target.platforms.forEach(platform => {
includes.push({
target: targetName,
platform: platform
});
});
} else {
includes.push({
target: targetName
});
}
}
core.info(JSON.stringify(includes, null, 2));
core.setOutput('includes', JSON.stringify(includes));
});
validate:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- prepare
strategy:
fail-fast: false
matrix:
target:
- lint
- validate-vendor
- validate-docs
- validate-generated-files
include: ${{ fromJson(needs.prepare.outputs.includes) }}
steps:
-
name: Prepare
run: |
if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
@@ -37,6 +90,9 @@ jobs:
with:
version: latest
-
name: Run
run: |
make ${{ matrix.target }}
name: Validate
uses: docker/bake-action@v5
with:
targets: ${{ matrix.target }}
set: |
*.platform=${{ matrix.platform }}

View File

@@ -1,5 +1,5 @@
run:
timeout: 10m
timeout: 30m
skip-files:
- ".*\\.pb\\.go$"
@@ -25,17 +25,30 @@ linters:
disable-all: true
linters-settings:
govet:
enable:
- nilness
- unusedwrite
# enable-all: true
# disable:
# - fieldalignment
# - shadow
depguard:
rules:
main:
deny:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
- '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
gosec:
excludes:
- G204 # Audit use of command execution

View File

@@ -1,17 +1,22 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.3
ARG XX_VERSION=1.2.1
ARG GO_VERSION=1.22
ARG XX_VERSION=1.4.0
ARG DOCKER_VERSION=24.0.6
# for testing
ARG DOCKER_VERSION=27.0.3
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.11.6
ARG BUILDKIT_VERSION=v0.14.1
ARG UNDOCK_VERSION=0.7.0
# xx is a helper for cross-compilation
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
FROM dockereng/cli-bin:$DOCKER_VERSION AS docker-cli
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM crazymax/undock:$UNDOCK_VERSION AS undock
FROM golatest AS gobase
COPY --from=xx / /
@@ -20,32 +25,38 @@ ENV GOFLAGS=-mod=vendor
ENV CGO_ENABLED=0
WORKDIR /src
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM gobase AS docker
ARG TARGETPLATFORM
ARG DOCKER_VERSION
WORKDIR /opt/docker
RUN DOCKER_ARCH=$(case ${TARGETPLATFORM:-linux/amd64} in \
"linux/amd64") echo "x86_64" ;; \
"linux/arm/v6") echo "armel" ;; \
"linux/arm/v7") echo "armhf" ;; \
"linux/arm64") echo "aarch64" ;; \
"linux/ppc64le") echo "ppc64le" ;; \
"linux/s390x") echo "s390x" ;; \
*) echo "" ;; esac) \
&& echo "DOCKER_ARCH=$DOCKER_ARCH" \
&& wget -qO- "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz" | tar xvz --strip 1
RUN ./dockerd --version && ./containerd --version && ./ctr --version && ./runc --version
FROM gobase AS gotestsum
ARG GOTESTSUM_VERSION
ENV GOFLAGS=
RUN --mount=target=/root/.cache,type=cache \
GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
/out/gotestsum --version
ENV GOFLAGS=""
RUN --mount=target=/root/.cache,type=cache <<EOT
set -ex
go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
go install "github.com/wadey/gocovmerge@latest"
mkdir /out
/go/bin/gotestsum --version
mv /go/bin/gotestsum /out
mv /go/bin/gocovmerge /out
EOT
COPY --chmod=755 <<"EOF" /out/gotestsumandcover
#!/bin/sh
set -x
if [ -z "$GO_TEST_COVERPROFILE" ]; then
exec gotestsum "$@"
fi
coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
mkdir -p "$coverdir/helpers"
gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
ecode=$?
go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
rm "$coverdir/helpers-report.txt"
for f in "$coverdir/helpers"/*; do
rm "$f"
done
rmdir "$coverdir/helpers"
exit $ecode
EOF
FROM gobase AS buildx-version
RUN --mount=type=bind,target=. <<EOT
@@ -57,6 +68,7 @@ EOT
FROM gobase AS buildx-build
ARG TARGETPLATFORM
ARG GO_EXTRA_FLAGS
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
@@ -103,11 +115,13 @@ RUN apk add --no-cache \
shadow-uidmap \
xfsprogs \
xz
COPY --link --from=gotestsum /out/gotestsum /usr/bin/
COPY --link --from=gotestsum /out /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=docker /opt/docker/* /usr/bin/
COPY --link --from=docker-engine / /usr/bin/
COPY --link --from=docker-cli / /usr/bin/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=undock /usr/local/bin/undock /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
FROM integration-test-base AS integration-test

View File

@@ -153,6 +153,7 @@ made through a pull request.
"akihirosuda",
"crazy-max",
"jedevc",
"jsternberg",
"tiborvass",
"tonistiigi",
]
@@ -194,6 +195,11 @@ made through a pull request.
Email = "me@jedevc.com"
GitHub = "jedevc"
[people.jsternberg]
Name = "Jonathan Sternberg"
Email = "jonathan.sternberg@docker.com"
GitHub = "jsternberg"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"

View File

@@ -8,6 +8,8 @@ endif
export BUILDX_CMD ?= docker buildx
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
.PHONY: all
all: binaries
@@ -19,13 +21,9 @@ build:
shell:
./hack/shell
.PHONY: binaries
binaries:
$(BUILDX_CMD) bake binaries
.PHONY: binaries-cross
binaries-cross:
$(BUILDX_CMD) bake binaries-cross
.PHONY: $(BAKE_TARGETS)
$(BAKE_TARGETS):
$(BUILDX_CMD) bake $@
.PHONY: install
install: binaries
@@ -39,10 +37,6 @@ release:
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
.PHONY: lint
lint:
$(BUILDX_CMD) bake lint
.PHONY: test
test:
./hack/test
@@ -55,22 +49,6 @@ test-unit:
test-integration:
TESTPKGS=./tests ./hack/test
.PHONY: validate-vendor
validate-vendor:
$(BUILDX_CMD) bake validate-vendor
.PHONY: validate-docs
validate-docs:
$(BUILDX_CMD) bake validate-docs
.PHONY: validate-authors
validate-authors:
$(BUILDX_CMD) bake validate-authors
.PHONY: validate-generated-files
validate-generated-files:
$(BUILDX_CMD) bake validate-generated-files
.PHONY: test-driver
test-driver:
./hack/test-driver

View File

@@ -187,12 +187,12 @@ through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
We currently support the following drivers:
- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
## Working with builder instances

View File

@@ -2,7 +2,6 @@ package bake
import (
"context"
"encoding/csv"
"io"
"os"
"path"
@@ -13,7 +12,7 @@ import (
"strings"
"time"
composecli "github.com/compose-spec/compose-go/cli"
composecli "github.com/compose-spec/compose-go/v2/cli"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
@@ -21,11 +20,13 @@ import (
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
hcl "github.com/hashicorp/hcl/v2"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/pkg/errors"
"github.com/tonistiigi/go-csvvalue"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
@@ -176,7 +177,7 @@ func readWithProgress(r io.Reader, setStatus func(st *client.VertexStatus)) (dt
}
func ListTargets(files []File) ([]string, error) {
c, err := ParseFiles(files, nil)
c, _, err := ParseFiles(files, nil)
if err != nil {
return nil, err
}
@@ -191,7 +192,7 @@ func ListTargets(files []File) ([]string, error) {
}
func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
c, err := ParseFiles(files, defaults)
c, _, err := ParseFiles(files, defaults)
if err != nil {
return nil, nil, err
}
@@ -297,7 +298,7 @@ func sliceToMap(env []string) (res map[string]string) {
return
}
func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error) {
func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclparser.ParseMeta, err error) {
defer func() {
err = formatHCLError(err, files)
}()
@@ -309,7 +310,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
isCompose, composeErr := validateComposeFile(f.Data, f.Name)
if isCompose {
if composeErr != nil {
return nil, composeErr
return nil, nil, composeErr
}
composeFiles = append(composeFiles, f)
}
@@ -317,13 +318,13 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
hf, isHCL, err := ParseHCLFile(f.Data, f.Name)
if isHCL {
if err != nil {
return nil, err
return nil, nil, err
}
hclFiles = append(hclFiles, hf)
} else if composeErr != nil {
return nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
return nil, nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
} else {
return nil, err
return nil, nil, err
}
}
}
@@ -331,23 +332,24 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
if len(composeFiles) > 0 {
cfg, cmperr := ParseComposeFiles(composeFiles)
if cmperr != nil {
return nil, errors.Wrap(cmperr, "failed to parse compose file")
return nil, nil, errors.Wrap(cmperr, "failed to parse compose file")
}
c = mergeConfig(c, *cfg)
c = dedupeConfig(c)
}
var pm hclparser.ParseMeta
if len(hclFiles) > 0 {
renamed, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
res, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
LookupVar: os.LookupEnv,
Vars: defaults,
ValidateLabel: validateTargetName,
}, &c)
if err.HasErrors() {
return nil, err
return nil, nil, err
}
for _, renamed := range renamed {
for _, renamed := range res.Renamed {
for oldName, newNames := range renamed {
newNames = dedupSlice(newNames)
if len(newNames) == 1 && oldName == newNames[0] {
@@ -360,9 +362,10 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
}
}
c = dedupeConfig(c)
pm = *res
}
return &c, nil
return &c, &pm, nil
}
func dedupeConfig(c Config) Config {
@@ -387,7 +390,8 @@ func dedupeConfig(c Config) Config {
}
func ParseFile(dt []byte, fn string) (*Config, error) {
return ParseFiles([]File{{Data: dt, Name: fn}}, nil)
c, _, err := ParseFiles([]File{{Data: dt, Name: fn}}, nil)
return c, err
}
type Config struct {
@@ -490,7 +494,7 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
if err != nil {
return err
}
t2.Outputs = nil
t2.Outputs = []string{"type=cacheonly"}
t2.linked = true
m[target] = t2
}
@@ -668,13 +672,15 @@ func (c Config) target(name string, visited map[string]*Target, overrides map[st
}
type Group struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Targets []string `json:"targets" hcl:"targets" cty:"targets"`
Name string `json:"-" hcl:"name,label" cty:"name"`
Description string `json:"description,omitempty" hcl:"description,optional" cty:"description"`
Targets []string `json:"targets" hcl:"targets" cty:"targets"`
// Target // TODO?
}
type Target struct {
Name string `json:"-" hcl:"name,label" cty:"name"`
Name string `json:"-" hcl:"name,label" cty:"name"`
Description string `json:"description,omitempty" hcl:"description,optional" cty:"description"`
// Inherits is the only field that cannot be overridden with --set
Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
@@ -699,7 +705,10 @@ type Target struct {
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
NetworkMode *string `json:"-" hcl:"-" cty:"-"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
ShmSize *string `json:"shm-size,omitempty" hcl:"shm-size,optional"`
Ulimits []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
Call *string `json:"call,omitempty" hcl:"call,optional" cty:"call"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides/AddOverrides and docs/bake-reference.md.
// linked is a private field to mark a target used as a linked one
linked bool
@@ -721,6 +730,7 @@ func (t *Target) normalize() {
t.CacheTo = removeDupes(t.CacheTo)
t.Outputs = removeDupes(t.Outputs)
t.NoCacheFilter = removeDupes(t.NoCacheFilter)
t.Ulimits = removeDupes(t.Ulimits)
for k, v := range t.Contexts {
if v == "" {
@@ -772,6 +782,9 @@ func (t *Target) Merge(t2 *Target) {
if t2.Target != nil {
t.Target = t2.Target
}
if t2.Call != nil {
t.Call = t2.Call
}
if t2.Annotations != nil { // merge
t.Annotations = append(t.Annotations, t2.Annotations...)
}
@@ -809,6 +822,15 @@ func (t *Target) Merge(t2 *Target) {
if t2.NoCacheFilter != nil { // merge
t.NoCacheFilter = append(t.NoCacheFilter, t2.NoCacheFilter...)
}
if t2.ShmSize != nil { // no merge
t.ShmSize = t2.ShmSize
}
if t2.Ulimits != nil { // merge
t.Ulimits = append(t.Ulimits, t2.Ulimits...)
}
if t2.Description != "" {
t.Description = t2.Description
}
t.Inherits = append(t.Inherits, t2.Inherits...)
}
@@ -853,6 +875,8 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.CacheTo = o.ArrValue
case "target":
t.Target = &value
case "call":
t.Call = &value
case "secrets":
t.Secrets = o.ArrValue
case "ssh":
@@ -873,6 +897,10 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.NoCache = &noCache
case "no-cache-filter":
t.NoCacheFilter = o.ArrValue
case "shm-size":
t.ShmSize = &value
case "ulimits":
t.Ulimits = o.ArrValue
case "pull":
pull, err := strconv.ParseBool(value)
if err != nil {
@@ -880,19 +908,17 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
}
t.Pull = &pull
case "push":
_, err := strconv.ParseBool(value)
push, err := strconv.ParseBool(value)
if err != nil {
return errors.Errorf("invalid value %s for boolean key push", value)
}
if len(t.Outputs) == 0 {
t.Outputs = append(t.Outputs, "type=image,push=true")
} else {
for i, output := range t.Outputs {
if typ := parseOutputType(output); typ == "image" || typ == "registry" {
t.Outputs[i] = t.Outputs[i] + ",push=" + value
}
}
t.Outputs = setPushOverride(t.Outputs, push)
case "load":
load, err := strconv.ParseBool(value)
if err != nil {
return errors.Errorf("invalid value %s for boolean key load", value)
}
t.Outputs = setLoadOverride(t.Outputs, load)
default:
return errors.Errorf("unknown key: %s", keys[0])
}
@@ -1011,12 +1037,17 @@ func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(
}
func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
// make sure local credentials are loaded multiple times for different targets
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
authProvider := authprovider.NewDockerAuthProvider(dockerConfig, nil)
m2 := make(map[string]build.Options, len(m))
for k, v := range m {
bo, err := toBuildOpt(v, inp)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, authProvider)
m2[k] = *bo
}
return m2, nil
@@ -1228,6 +1259,12 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if t.NetworkMode != nil {
networkMode = *t.NetworkMode
}
shmSize := new(dockeropts.MemBytes)
if t.ShmSize != nil {
if err := shmSize.Set(*t.ShmSize); err != nil {
return nil, errors.Errorf("invalid value %s for membytes key shm-size", *t.ShmSize)
}
}
bo := &build.Options{
Inputs: bi,
@@ -1239,6 +1276,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
Pull: pull,
NetworkMode: networkMode,
Linked: t.linked,
ShmSize: *shmSize,
}
platforms, err := platformutil.Parse(t.Platforms)
@@ -1247,9 +1285,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
bo.Platforms = platforms
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))
secrets, err := buildflags.ParseSecretSpecs(t.Secrets)
if err != nil {
return nil, err
@@ -1277,6 +1312,12 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
bo.Target = *t.Target
}
if t.Call != nil {
bo.PrintFunc = &build.PrintFunc{
Name: *t.Call,
}
}
cacheImports, err := buildflags.ParseCacheEntry(t.CacheFrom)
if err != nil {
return nil, err
@@ -1319,6 +1360,14 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
return nil, err
}
ulimits := dockeropts.NewUlimitOpt(nil)
for _, field := range t.Ulimits {
if err := ulimits.Set(field); err != nil {
return nil, err
}
}
bo.Ulimits = ulimits
return bo, nil
}
@@ -1363,23 +1412,89 @@ func removeAttestDupes(s []string) []string {
return res
}
func parseOutputType(str string) string {
csvReader := csv.NewReader(strings.NewReader(str))
fields, err := csvReader.Read()
func parseOutput(str string) map[string]string {
fields, err := csvvalue.Fields(str, nil)
if err != nil {
return ""
return nil
}
res := map[string]string{}
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) == 2 {
if parts[0] == "type" {
return parts[1]
}
res[parts[0]] = parts[1]
}
}
return res
}
func parseOutputType(str string) string {
if out := parseOutput(str); out != nil {
if v, ok := out["type"]; ok {
return v
}
}
return ""
}
func setPushOverride(outputs []string, push bool) []string {
var out []string
setPush := true
for _, output := range outputs {
typ := parseOutputType(output)
if typ == "image" || typ == "registry" {
// no need to set push if image or registry types already defined
setPush = false
if typ == "registry" {
if !push {
// don't set registry output if "push" is false
continue
}
// no need to set "push" attribute to true for registry
out = append(out, output)
continue
}
out = append(out, output+",push="+strconv.FormatBool(push))
} else {
if typ != "docker" {
// if there is any output that is not docker, don't set "push"
setPush = false
}
out = append(out, output)
}
}
if push && setPush {
out = append(out, "type=image,push=true")
}
return out
}
func setLoadOverride(outputs []string, load bool) []string {
if !load {
return outputs
}
setLoad := true
for _, output := range outputs {
if typ := parseOutputType(output); typ == "docker" {
if v := parseOutput(output); v != nil {
// dest set means we want to output as tar so don't set load
if _, ok := v["dest"]; !ok {
setLoad = false
break
}
}
} else if typ != "image" && typ != "registry" && typ != "oci" {
// if there is any output that is not an image, registry
// or oci, don't set "load" similar to push override
setLoad = false
break
}
}
if setLoad {
outputs = append(outputs, "type=docker")
}
return outputs
}
func validateTargetName(name string) error {
if !targetNamePattern.MatchString(name) {
return errors.Errorf("only %q are allowed", validTargetNameChars)

View File

@@ -22,6 +22,8 @@ target "webDEP" {
VAR_BOTH = "webDEP"
}
no-cache = true
shm-size = "128m"
ulimits = ["nofile=1024:1024"]
}
target "webapp" {
@@ -45,6 +47,8 @@ target "webapp" {
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, true, *m["webapp"].NoCache)
require.Equal(t, "128m", *m["webapp"].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, m["webapp"].Ulimits)
require.Nil(t, m["webapp"].Pull)
require.Equal(t, 1, len(g))
@@ -129,6 +133,12 @@ target "webapp" {
require.Equal(t, []string{"webapp"}, g["default"].Targets)
})
t.Run("ShmSizeOverride", func(t *testing.T) {
m, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.shm-size=256m"}, nil)
require.NoError(t, err)
require.Equal(t, "256m", *m["webapp"].ShmSize)
})
t.Run("PullOverride", func(t *testing.T) {
t.Parallel()
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
@@ -207,48 +217,252 @@ target "webapp" {
}
func TestPushOverride(t *testing.T) {
t.Parallel()
t.Run("empty output", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
})
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
t.Run("type image", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,compression=zstd"]
}`),
}
ctx := context.TODO()
m, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
})
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
fp = File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
t.Run("type image push false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image,compression=zstd"]
}`),
}
ctx = context.TODO()
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
})
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
fp = File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
}`),
}
ctx = context.TODO()
m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=registry", m["app"].Outputs[0])
})
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
t.Run("type registry push false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
require.NoError(t, err)
require.Equal(t, 0, len(m["app"].Outputs))
})
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 1, len(m["bar"].Outputs))
require.Equal(t, []string{"type=image,push=true"}, m["bar"].Outputs)
})
}
func TestLoadOverride(t *testing.T) {
t.Run("empty output", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, "type=docker", m["app"].Outputs[0])
})
t.Run("type docker", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=docker"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, []string{"type=docker"}, m["app"].Outputs)
})
t.Run("type image", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=image", "type=docker"}, m["app"].Outputs)
})
t.Run("type image load false", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=image"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=false"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m["app"].Outputs))
require.Equal(t, []string{"type=image"}, m["app"].Outputs)
})
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=registry"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=registry", "type=docker"}, m["app"].Outputs)
})
t.Run("type oci", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=oci,dest=out"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=oci,dest=out", "type=docker"}, m["app"].Outputs)
})
t.Run("type docker with dest", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "app" {
output = ["type=docker,dest=out"]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m["app"].Outputs))
require.Equal(t, []string{"type=docker,dest=out", "type=docker"}, m["app"].Outputs)
})
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 1, len(m["bar"].Outputs))
require.Equal(t, []string{"type=docker"}, m["bar"].Outputs)
})
}
func TestLoadAndPushOverride(t *testing.T) {
t.Run("type local and empty target", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=local,dest=out" ]
}
target "bar" {
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true", "*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(m))
require.Equal(t, 1, len(m["foo"].Outputs))
sort.Strings(m["foo"].Outputs)
require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
require.Equal(t, 2, len(m["bar"].Outputs))
sort.Strings(m["bar"].Outputs)
require.Equal(t, []string{"type=docker", "type=image,push=true"}, m["bar"].Outputs)
})
t.Run("type registry", func(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(
`target "foo" {
output = [ "type=registry" ]
}`),
}
m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo"}, []string{"*.load=true", "*.push=true"}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
require.Equal(t, 2, len(m["foo"].Outputs))
sort.Strings(m["foo"].Outputs)
require.Equal(t, []string{"type=docker", "type=registry"}, m["foo"].Outputs)
})
}
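The two override tests above encode the same rule from different angles: `*.load=true` appends a `type=docker` output unless the target already writes to a local or tar destination, while `*.push=true` rewrites the default output to `type=image,push=true`. A minimal sketch in the same style, covering the push-only case; the expected outputs are an assumption inferred from the rules exercised above, not a test copied from the suite.

func TestPushOnlyOverrideSketch(t *testing.T) {
    fp := File{
        Name: "docker-bake.hcl",
        Data: []byte(
            `target "app" {
            }`),
    }
    m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
    require.NoError(t, err)
    require.Equal(t, 1, len(m["app"].Outputs))
    require.Equal(t, []string{"type=image,push=true"}, m["app"].Outputs)
}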
func TestReadTargetsCompose(t *testing.T) {
@@ -297,9 +511,6 @@ services:
ctx := context.TODO()
cwd, err := os.Getwd()
require.NoError(t, err)
m, g, err := ReadTargets(ctx, []File{fp, fp2, fp3}, []string{"default"}, nil, nil)
require.NoError(t, err)
@@ -308,7 +519,7 @@ services:
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
require.Equal(t, cwd, *m["webapp"].Context)
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, ptrstr("1"), m["webapp"].Args["buildno"])
require.Equal(t, ptrstr("12"), m["webapp"].Args["buildno2"])
@@ -347,9 +558,6 @@ services:
ctx := context.TODO()
cwd, err := os.Getwd()
require.NoError(t, err)
m, _, err := ReadTargets(ctx, []File{fp}, []string{"web.app"}, nil, nil)
require.NoError(t, err)
require.Equal(t, 1, len(m))
@@ -372,7 +580,7 @@ services:
_, ok = m["web_app"]
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["web_app"].Dockerfile)
require.Equal(t, cwd, *m["web_app"].Context)
require.Equal(t, ".", *m["web_app"].Context)
require.Equal(t, ptrstr("1"), m["web_app"].Args["buildno"])
require.Equal(t, ptrstr("12"), m["web_app"].Args["buildno2"])
@@ -581,9 +789,6 @@ services:
ctx := context.TODO()
cwd, err := os.Getwd()
require.NoError(t, err)
m, _, err := ReadTargets(ctx, []File{fp, fp2}, []string{"app1", "app2"}, nil, nil)
require.NoError(t, err)
@@ -596,7 +801,7 @@ services:
require.Equal(t, "Dockerfile", *m["app1"].Dockerfile)
require.Equal(t, ".", *m["app1"].Context)
require.Equal(t, "Dockerfile", *m["app2"].Dockerfile)
require.Equal(t, cwd, *m["app2"].Context)
require.Equal(t, ".", *m["app2"].Context)
}
func TestReadContextFromTargetChain(t *testing.T) {
@@ -633,7 +838,8 @@ func TestReadContextFromTargetChain(t *testing.T) {
mid, ok := m["mid"]
require.True(t, ok)
require.Equal(t, 0, len(mid.Outputs))
require.Equal(t, 1, len(mid.Outputs))
require.Equal(t, "type=cacheonly", mid.Outputs[0])
require.Equal(t, 1, len(mid.Contexts))
base, ok := m["base"]
@@ -1323,7 +1529,7 @@ services:
v2: "bar"
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.foo"},
{Data: dt2, Name: "c2.bar"},
}, nil)


@@ -2,13 +2,18 @@ package bake
import (
"context"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"github.com/compose-spec/compose-go/dotenv"
"github.com/compose-spec/compose-go/loader"
compose "github.com/compose-spec/compose-go/types"
"github.com/compose-spec/compose-go/v2/consts"
"github.com/compose-spec/compose-go/v2/dotenv"
"github.com/compose-spec/compose-go/v2/loader"
composetypes "github.com/compose-spec/compose-go/v2/types"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
)
@@ -18,9 +23,9 @@ func ParseComposeFiles(fs []File) (*Config, error) {
if err != nil {
return nil, err
}
var cfgs []compose.ConfigFile
var cfgs []composetypes.ConfigFile
for _, f := range fs {
cfgs = append(cfgs, compose.ConfigFile{
cfgs = append(cfgs, composetypes.ConfigFile{
Filename: f.Name,
Content: f.Data,
})
@@ -28,15 +33,19 @@ func ParseComposeFiles(fs []File) (*Config, error) {
return ParseCompose(cfgs, envs)
}
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.LoadWithContext(context.Background(), compose.ConfigDetails{
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
projectName := "bake"
if v, ok := envs[consts.ComposeProjectName]; ok && v != "" {
projectName = v
}
options.SetProjectName(projectName, false)
options.SkipNormalization = true
options.Profiles = []string{"*"}
})
@@ -86,6 +95,31 @@ func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, e
}
}
var shmSize *string
if s.Build.ShmSize > 0 {
shmSizeBytes := dockeropts.MemBytes(s.Build.ShmSize)
shmSizeStr := shmSizeBytes.String()
shmSize = &shmSizeStr
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
ulimit, err := units.ParseUlimit(fmt.Sprintf("%s=%d:%d", n, u.Soft, u.Hard))
if err != nil {
return nil, err
}
ulimits = append(ulimits, ulimit.String())
}
}
var ssh []string
for _, bkey := range s.Build.SSH {
sshkey := composeToBuildkitSSH(bkey)
ssh = append(ssh, sshkey)
}
sort.Strings(ssh)
var secrets []string
for _, bs := range s.Build.Secrets {
secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
@@ -121,7 +155,10 @@ func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, e
CacheFrom: s.Build.CacheFrom,
CacheTo: s.Build.CacheTo,
NetworkMode: &s.Build.Network,
SSH: ssh,
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err
@@ -159,8 +196,8 @@ func validateComposeFile(dt []byte, fn string) (bool, error) {
}
func validateCompose(dt []byte, envs map[string]string) error {
_, err := loader.Load(compose.ConfigDetails{
ConfigFiles: []compose.ConfigFile{
_, err := loader.Load(composetypes.ConfigDetails{
ConfigFiles: []composetypes.ConfigFile{
{
Content: dt,
},
@@ -223,7 +260,7 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return curenv, nil
}
func flatten(in compose.MappingWithEquals) map[string]*string {
func flatten(in composetypes.MappingWithEquals) map[string]*string {
if len(in) == 0 {
return nil
}
@@ -252,7 +289,7 @@ type xbake struct {
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
Contexts stringMap `yaml:"contexts,omitempty"`
// don't forget to update documentation if you add a new field:
// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
}
type stringMap map[string]string
@@ -302,6 +339,7 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
}
if len(xb.SSH) > 0 {
t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
sort.Strings(t.SSH)
}
if len(xb.Platforms) > 0 {
t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
@@ -327,8 +365,8 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
// composeToBuildkitSecret converts a secret from compose format to buildkit's
// csv format.
func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
if psecret.External.External {
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
if psecret.External {
return "", errors.Errorf("unsupported external secret %s", psecret.Name)
}
@@ -345,3 +383,17 @@ func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.Se
return strings.Join(bkattrs, ","), nil
}
// composeToBuildkitSSH converts an SSH key from compose format to buildkit's
// csv format.
func composeToBuildkitSSH(sshKey composetypes.SSHKey) string {
var bkattrs []string
bkattrs = append(bkattrs, sshKey.ID)
if sshKey.Path != "" {
bkattrs = append(bkattrs, sshKey.Path)
}
return strings.Join(bkattrs, "=")
}
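composeToBuildkitSSH joins the compose SSH key ID and optional path with "=", which matches the CSV-style token BuildKit expects. A minimal sketch of the conversion, assuming it runs alongside the code above (composetypes is the compose-go v2 types alias already imported):

keys := []composetypes.SSHKey{
    {ID: "default"},                  // no path: only the ID is emitted
    {ID: "key", Path: "path/to/key"}, // keyed entry becomes "key=path/to/key"
}
var ssh []string
for _, k := range keys {
    ssh = append(ssh, composeToBuildkitSSH(k))
}
sort.Strings(ssh)
// ssh == []string{"default", "key=path/to/key"}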


@@ -6,7 +6,7 @@ import (
"sort"
"testing"
compose "github.com/compose-spec/compose-go/types"
composetypes "github.com/compose-spec/compose-go/v2/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -32,6 +32,9 @@ services:
- type=local,src=path/to/cache
cache_to:
- type=local,dest=path/to/cache
ssh:
- key=path/to/key
- default
secrets:
- token
- aws
@@ -49,10 +52,7 @@ secrets:
file: /root/.aws/credentials
`)
cwd, err := os.Getwd()
require.NoError(t, err)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
@@ -65,25 +65,26 @@ secrets:
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, filepath.Join(cwd, "db"), *c.Targets[0].Context)
require.Equal(t, "db", *c.Targets[0].Context)
require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, filepath.Join(cwd, "dir"), *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": filepath.Join(cwd, "bar")}, c.Targets[1].Contexts)
require.Equal(t, "dir", *c.Targets[1].Context)
require.Equal(t, map[string]string{"foo": "bar"}, c.Targets[1].Contexts)
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[1].SSH)
require.Equal(t, []string{
"id=token,env=ENV_TOKEN",
"id=aws,src=/root/.aws/credentials",
}, c.Targets[1].Secrets)
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, filepath.Join(cwd, "dir"), *c.Targets[2].Context)
require.Equal(t, "dir", *c.Targets[2].Context)
require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
}
@@ -95,7 +96,7 @@ services:
webapp:
build: ./db
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, 1, len(c.Targets))
@@ -114,7 +115,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
@@ -139,7 +140,7 @@ services:
target: webapp
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -170,7 +171,7 @@ services:
t.Setenv("BAR", "foo")
t.Setenv("ZZZ_BAR", "zzz_foo")
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
require.NoError(t, err)
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
@@ -184,7 +185,7 @@ services:
entrypoint: echo 1
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.Error(t, err)
}
@@ -209,7 +210,7 @@ networks:
gateway: 10.5.0.254
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -226,7 +227,7 @@ services:
- bar
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
}
@@ -263,7 +264,7 @@ networks:
name: test-net
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -281,6 +282,8 @@ services:
- user/app:cache
tags:
- ct-addon:baz
ssh:
key: path/to/key
args:
CT_ECR: foo
CT_TAG: bar
@@ -290,6 +293,9 @@ services:
tags:
- ct-addon:foo
- ct-addon:alp
ssh:
- default
- other=path/to/otherkey
platforms:
- linux/amd64
- linux/arm64
@@ -306,6 +312,11 @@ services:
args:
CT_ECR: foo
CT_TAG: bar
shm_size: 128m
ulimits:
nofile:
soft: 1024
hard: 1024
x-bake:
secret:
- id=mysecret,src=/local/secret
@@ -316,7 +327,7 @@ services:
no-cache: true
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
@@ -327,6 +338,7 @@ services:
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key", "other=path/to/otherkey"}, c.Targets[0].SSH)
require.Equal(t, newBool(true), c.Targets[0].Pull)
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
@@ -335,6 +347,8 @@ services:
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
require.Equal(t, newBool(true), c.Targets[1].NoCache)
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
}
func TestComposeExtDedup(t *testing.T) {
@@ -349,6 +363,8 @@ services:
- user/app:cache
tags:
- ct-addon:foo
ssh:
- default
x-bake:
tags:
- ct-addon:foo
@@ -358,14 +374,18 @@ services:
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
ssh:
- default
- key=path/to/key
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
}
func TestEnv(t *testing.T) {
@@ -393,7 +413,7 @@ services:
- ` + envf.Name() + `
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
}
@@ -439,7 +459,7 @@ services:
published: "3306"
protocol: tcp
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -485,7 +505,7 @@ func TestServiceName(t *testing.T) {
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
services:
` + tt.svc + `:
build:
@@ -556,7 +576,7 @@ services:
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
require.Error(t, err)
} else {
@@ -654,7 +674,7 @@ services:
bar: "baz"
`)
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
}
@@ -673,7 +693,7 @@ services:
build:
context: .
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
@@ -704,7 +724,7 @@ services:
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{
Name: "compose.yml",
Name: "composetypes.yml",
Data: dt,
}})
require.NoError(t, err)
@@ -734,10 +754,50 @@ services:
- node_modules/
`)
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestCgroup(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
cgroup: private
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestProjectName(t *testing.T) {
var dt = []byte(`
services:
scratch:
build:
context: ./webapp
args:
PROJECT_NAME: ${COMPOSE_PROJECT_NAME}
`)
t.Run("default", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("bake")}, c.Targets[0].Args)
})
t.Run("env", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, map[string]string{"COMPOSE_PROJECT_NAME": "foo"})
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("foo")}, c.Targets[0].Args)
})
}
// chdir changes the current working directory to the named directory,
// and then restores the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {


@@ -273,7 +273,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
}
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -285,7 +285,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
t.Setenv("FOO", "def")
c, err = ParseFiles([]File{
c, _, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -322,7 +322,7 @@ func TestHCLVarsWithVars(t *testing.T) {
}
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -334,7 +334,7 @@ func TestHCLVarsWithVars(t *testing.T) {
t.Setenv("BASE", "new")
c, err = ParseFiles([]File{
c, _, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -612,7 +612,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
FOO="def"
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -623,7 +623,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
t.Setenv("FOO", "ghi")
c, err = ParseFiles([]File{
c, _, err = ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -647,7 +647,7 @@ func TestHCLMultiFileGlobalAttrs(t *testing.T) {
FOO = "def"
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
}, nil)
@@ -830,7 +830,7 @@ func TestHCLRenameMultiFile(t *testing.T) {
}
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.hcl"},
{Data: dt3, Name: "c3.hcl"},
@@ -1050,7 +1050,7 @@ func TestHCLMatrixArgsOverride(t *testing.T) {
}
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "docker-bake.hcl"},
}, map[string]string{"ABC": "11,22,33"})
require.NoError(t, err)
@@ -1236,7 +1236,7 @@ services:
v2: "bar"
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
{Data: dt2, Name: "c2.yml"},
}, nil)
@@ -1258,7 +1258,7 @@ func TestHCLBuiltinVars(t *testing.T) {
}
`)
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{Data: dt, Name: "c1.hcl"},
}, map[string]string{
"BAKE_CMD_CONTEXT": "foo",
@@ -1272,7 +1272,7 @@ func TestHCLBuiltinVars(t *testing.T) {
}
func TestCombineHCLAndJSONTargets(t *testing.T) {
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{
Name: "docker-bake.hcl",
Data: []byte(`
@@ -1348,7 +1348,7 @@ target "b" {
}
func TestCombineHCLAndJSONVars(t *testing.T) {
c, err := ParseFiles([]File{
c, _, err := ParseFiles([]File{
{
Name: "docker-bake.hcl",
Data: []byte(`
@@ -1445,6 +1445,39 @@ func TestVarUnsupportedType(t *testing.T) {
require.Error(t, err)
}
func TestHCLIndexOfFunc(t *testing.T) {
dt := []byte(`
variable "APP_VERSIONS" {
default = [
"1.42.4",
"1.42.3"
]
}
target "default" {
args = {
APP_VERSION = app_version
}
matrix = {
app_version = APP_VERSIONS
}
name="app-${replace(app_version, ".", "-")}"
tags = [
"app:${app_version}",
indexof(APP_VERSIONS, app_version) == 0 ? "app:latest" : "",
]
}
`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, "app-1-42-4", c.Targets[0].Name)
require.Equal(t, "app:latest", c.Targets[0].Tags[1])
require.Equal(t, "app-1-42-3", c.Targets[1].Name)
require.Empty(t, c.Targets[1].Tags[1])
}
func ptrstr(s interface{}) *string {
var n *string
if reflect.ValueOf(s).Kind() == reflect.String {


@@ -25,9 +25,11 @@ type Opt struct {
}
type variable struct {
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Body hcl.Body `json:"-" hcl:",body"`
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Description string `json:"description,omitempty" hcl:"description,optional"`
Body hcl.Body `json:"-" hcl:",body"`
Remain hcl.Body `json:"-" hcl:",remain"`
}
type functionDef struct {
@@ -534,7 +536,18 @@ func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
return names, nil
}
func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
type Variable struct {
Name string
Description string
Value *string
}
type ParseMeta struct {
Renamed map[string]map[string][]string
AllVariables []*Variable
}
func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
reserved := map[string]struct{}{}
schema, _ := gohcl.ImpliedBodySchema(val)
@@ -643,6 +656,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
vars := make([]*Variable, 0, len(p.vars))
for k := range p.vars {
if err := p.resolveValue(p.ectx, k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
@@ -651,6 +665,21 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
r := p.vars[k].Body.MissingItemRange()
return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
}
v := &Variable{
Name: p.vars[k].Name,
Description: p.vars[k].Description,
}
if vv := p.ectx.Variables[k]; !vv.IsNull() {
var s string
switch vv.Type() {
case cty.String:
s = vv.AsString()
case cty.Bool:
s = strconv.FormatBool(vv.True())
}
v.Value = &s
}
vars = append(vars, v)
}
for k := range p.funcs {
@@ -795,7 +824,10 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
return renamed, nil
return &ParseMeta{
Renamed: renamed,
AllVariables: vars,
}, nil
}
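Since Parse now returns a *ParseMeta instead of the bare rename map, callers also gain access to every resolved variable. A hypothetical caller sketch; the surrounding names (body, opt, config) are assumptions, not the actual buildx call site:

meta, diags := Parse(body, opt, &config)
if diags.HasErrors() {
    return diags
}
for _, v := range meta.AllVariables {
    val := "<unset>"
    if v.Value != nil {
        val = *v.Value
    }
    fmt.Printf("%s\t%s\t%s\n", v.Name, val, v.Description)
}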
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.


@@ -111,21 +111,19 @@ func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
diags = append(diags, thisDiags...)
}
if thisAttrs != nil {
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
}
}


@@ -9,6 +9,7 @@ import (
"github.com/hashicorp/go-cty-funcs/uuid"
"github.com/hashicorp/hcl/v2/ext/tryfunc"
"github.com/hashicorp/hcl/v2/ext/typeexpr"
"github.com/pkg/errors"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/function/stdlib"
@@ -52,6 +53,7 @@ var stdlibFunctions = map[string]function.Function{
"hasindex": stdlib.HasIndexFunc,
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"indexof": indexOfFunc,
"int": stdlib.IntFunc,
"join": stdlib.JoinFunc,
"jsondecode": stdlib.JSONDecodeFunc,
@@ -115,6 +117,51 @@ var stdlibFunctions = map[string]function.Function{
"zipmap": stdlib.ZipmapFunc,
}
// indexOfFunc constructs a function that finds the element index for a given
// value in a list.
var indexOfFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "list",
Type: cty.DynamicPseudoType,
},
{
Name: "value",
Type: cty.DynamicPseudoType,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
return cty.NilVal, errors.New("argument must be a list or tuple")
}
if !args[0].IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if args[0].LengthInt() == 0 { // Easy path
return cty.NilVal, errors.New("cannot search an empty list")
}
for it := args[0].ElementIterator(); it.Next(); {
i, v := it.Element()
eq, err := stdlib.Equal(v, args[1])
if err != nil {
return cty.NilVal, err
}
if !eq.IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if eq.True() {
return i, nil
}
}
return cty.NilVal, errors.New("item not found")
},
})
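The spec above is what backs the indexof HCL function used in the bake test earlier. A minimal sketch of calling it directly through the go-cty function API; the list and key values are illustrative:

idx, err := indexOfFunc.Call([]cty.Value{
    cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c")}),
    cty.StringVal("b"),
})
if err == nil {
    fmt.Println(idx.AsBigFloat()) // 1
}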
// timestampFunc constructs a function that returns a string representation of the current date and time.
//
// This function was imported from terraform's datetime utilities.


@@ -0,0 +1,49 @@
package hclparser
import (
"testing"
"github.com/zclconf/go-cty/cty"
)
func TestIndexOf(t *testing.T) {
type testCase struct {
input cty.Value
key cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"index 0": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("one"),
want: cty.NumberIntVal(0),
},
"index 3": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("four"),
want: cty.NumberIntVal(3),
},
"index -1": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("3"),
wantErr: true,
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := indexOfFunc.Call([]cty.Value{test.input, test.key})
if err != nil {
if test.wantErr {
return
}
t.Fatalf("unexpected error: %s", err)
}
if !got.RawEquals(test.want) {
t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.want)
}
})
}
}


@@ -4,6 +4,8 @@ import (
"archive/tar"
"bytes"
"context"
"os"
"strings"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
@@ -23,13 +25,34 @@ type Input struct {
}
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var session []session.Attachable
var sessions []session.Attachable
var filename string
st, ok := dockerui.DetectGitContext(url, false)
if ok {
ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
if err == nil {
session = append(session, ssh)
if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
ID: "default",
Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
}}); err == nil {
sessions = append(sessions, ssh)
}
var gitAuthSecrets []*controllerapi.Secret
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthTokenKey,
Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
})
}
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
ID: llb.GitAuthHeaderKey,
Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
})
}
if len(gitAuthSecrets) > 0 {
if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
sessions = append(sessions, secrets)
}
}
} else {
st, filename, ok = dockerui.DetectHTTPContext(url)
@@ -59,7 +82,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err

File diff suppressed because it is too large.

build/dial.go (new file)

@@ -0,0 +1,62 @@
package build
import (
"context"
stderrors "errors"
"net"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
if platform != nil {
pls = []v1.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}
resolved, err := resolveDrivers(ctx, nodes, opts, pw)
if err != nil {
return nil, err
}
var dialError error
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
p := *platform
var found bool
for _, pp := range rn.platforms {
if platforms.Only(p).Match(pp) {
found = true
break
}
}
if !found {
continue
}
}
conn, err := nodes[rn.driverIndex].Driver.Dial(ctx)
if err == nil {
return conn, nil
}
dialError = stderrors.Join(dialError, err)
}
}
return nil, errors.Wrap(dialError, "no nodes available")
}
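Dial boots the first matching node and hands back a raw net.Conn to its BuildKit instance. A hypothetical caller sketch; ctx, nodes and pw are assumed to come from the usual builder and progress setup:

p := platforms.MustParse("linux/amd64")
conn, err := Dial(ctx, nodes, pw, &p)
if err != nil {
    return err
}
defer conn.Close()
// conn can now back a gRPC client connection to the selected BuildKit node.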


@@ -3,8 +3,9 @@ package build
import (
"context"
"fmt"
"sync"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
@@ -46,10 +47,22 @@ func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error)
type matchMaker func(specs.Platform) platforms.MatchComparer
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
cache map[int]T
cacheMu sync.Mutex
}
func newCachedGroup[T any]() cachedGroup[T] {
return cachedGroup[T]{
cache: map[int]T{},
}
}
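cachedGroup pairs a flightcontrol.Group, which collapses concurrent requests for the same key, with a mutex-guarded map so the result also survives after the in-flight call returns; boot and opts below inline exactly this pattern. A minimal sketch of the access pattern (the do helper is an illustration, not a method that exists in this file):

func (cg *cachedGroup[T]) do(ctx context.Context, idx int, fn func(context.Context) (T, error)) (T, error) {
    return cg.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (T, error) {
        cg.cacheMu.Lock()
        v, ok := cg.cache[idx]
        cg.cacheMu.Unlock()
        if ok {
            return v, nil // previously computed: reuse without re-running fn
        }
        v, err := fn(ctx)
        if err != nil {
            return v, err
        }
        cg.cacheMu.Lock()
        cg.cache[idx] = v
        cg.cacheMu.Unlock()
        return v, nil
    })
}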
type nodeResolver struct {
nodes []builder.Node
clients flightcontrol.Group[*client.Client]
opt flightcontrol.Group[gateway.BuildOpts]
nodes []builder.Node
clients cachedGroup[*client.Client]
buildOpts cachedGroup[gateway.BuildOpts]
}
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
@@ -63,7 +76,9 @@ func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Op
func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{
nodes: nodes,
nodes: nodes,
clients: newCachedGroup[*client.Client](),
buildOpts: newCachedGroup[gateway.BuildOpts](),
}
return r
}
@@ -162,10 +177,6 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
return nil, true, nil
}
if len(ps) == 0 {
ps = []specs.Platform{platforms.DefaultSpec()}
}
perfect := true
nodeIdxs := make([]int, 0)
for _, p := range ps {
@@ -178,13 +189,25 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
}
var nodes []*resolvedNode
for i, idx := range nodeIdxs {
if len(nodeIdxs) == 0 {
nodes = append(nodes, &resolvedNode{
resolver: r,
driverIndex: idx,
platforms: []specs.Platform{ps[i]},
driverIndex: 0,
})
nodeIdxs = append(nodeIdxs, 0)
} else {
for i, idx := range nodeIdxs {
node := &resolvedNode{
resolver: r,
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
}
nodes = recombineNodes(nodes)
if _, err := r.boot(ctx, nodeIdxs, pw); err != nil {
return nil, false, err
@@ -230,11 +253,24 @@ func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer)
for i, idx := range idxs {
i, idx := i, idx
eg.Go(func() error {
c, err := r.clients.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil {
return nil, nil
}
return driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
r.clients.cacheMu.Lock()
c, ok := r.clients.cache[idx]
r.clients.cacheMu.Unlock()
if ok {
return c, nil
}
c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
if err != nil {
return nil, err
}
r.clients.cacheMu.Lock()
r.clients.cache[idx] = c
r.clients.cacheMu.Unlock()
return c, nil
})
if err != nil {
return err
@@ -265,14 +301,25 @@ func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer)
continue
}
eg.Go(func() error {
opt, err := r.opt.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
opt := gateway.BuildOpts{}
opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
r.buildOpts.cacheMu.Lock()
opt, ok := r.buildOpts.cache[idx]
r.buildOpts.cacheMu.Unlock()
if ok {
return opt, nil
}
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts()
return nil, nil
}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
r.buildOpts.cacheMu.Lock()
r.buildOpts.cache[idx] = opt
r.buildOpts.cacheMu.Unlock()
return opt, err
})
if err != nil {


@@ -5,7 +5,7 @@ import (
"sort"
"testing"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
@@ -22,6 +22,7 @@ func TestFindDriverSanity(t *testing.T) {
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
@@ -216,7 +217,7 @@ func TestSelectNodePreferExact(t *testing.T) {
require.Equal(t, "bbb", res[0].Node().Builder)
}
func TestSelectNodeCurrentPlatform(t *testing.T) {
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
@@ -226,7 +227,8 @@ func TestSelectNodeCurrentPlatform(t *testing.T) {
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Empty(t, res[0].platforms)
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {


@@ -9,16 +9,18 @@ import (
"strings"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
res = make(map[string]string)
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(key, dir string, so *client.SolveOpt), error) {
res := make(map[string]string)
if contextPath == "" {
return
return nil, nil, nil
}
setGitLabels := false
@@ -37,7 +39,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
if !setGitLabels && !setGitInfo {
return
return nil, nil, nil
}
// figure out in which directory the git command needs to run
@@ -45,27 +47,32 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
if filepath.IsAbs(contextPath) {
wd = contextPath
} else {
cwd, _ := os.Getwd()
wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
wd, _ = filepath.Abs(filepath.Join(osutil.GetWd(), contextPath))
}
wd = osutil.SanitizePath(wd)
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return res, errors.Wrap(err, "git was not found in the system")
return res, nil, errors.Wrap(err, "git was not found in the system")
}
return
return nil, nil, nil
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return res, nil
return nil, nil, nil
}
root, err := gitc.RootDir()
if err != nil {
return res, nil, errors.Wrap(err, "failed to get git root dir")
}
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return res, errors.Wrap(err, "failed to get git commit")
return res, nil, errors.Wrap(err, "failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -93,23 +100,32 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
}
if setGitLabels {
if root, err := gitc.RootDir(); err != nil {
return res, errors.Wrap(err, "failed to get git root dir")
} else if root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
cwd, _ := os.Getwd()
dockerfilePath = filepath.Join(cwd, dockerfilePath)
}
dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
if !strings.HasPrefix(dockerfilePath, "..") {
res["label:"+DockerfileLabel] = dockerfilePath
}
if setGitLabels && root != "" {
if dockerfilePath == "" {
dockerfilePath = filepath.Join(wd, "Dockerfile")
}
if !filepath.IsAbs(dockerfilePath) {
dockerfilePath = filepath.Join(osutil.GetWd(), dockerfilePath)
}
if r, err := filepath.Rel(root, dockerfilePath); err == nil && !strings.HasPrefix(r, "..") {
res["label:"+DockerfileLabel] = r
}
}
return
return res, func(key, dir string, so *client.SolveOpt) {
if !setGitInfo || root == "" {
return
}
dir, err := filepath.Abs(dir)
if err != nil {
return
}
if lp, err := osutil.GetLongPathName(dir); err == nil {
dir = lp
}
dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+key] = r
}
}, nil
}
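getGitAttributes now returns the VCS attributes together with a callback that records, for each local mount, the directory's path relative to the git root. A hypothetical wiring sketch; contextPath, dockerfilePath and so stand in for the real call site in the build package:

attrs, addVCSLocalDir, err := getGitAttributes(ctx, contextPath, dockerfilePath)
if err != nil {
    return err
}
for k, v := range attrs {
    so.FrontendAttrs[k] = v
}
// setLocalMount (see build/opt.go) invokes the callback so the frontend also
// receives vcs:localdir:<key> attributes for local contexts.
if err := setLocalMount("context", contextPath, so, addVCSLocalDir); err != nil {
    return err
}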


@@ -9,6 +9,7 @@ import (
"testing"
"github.com/docker/buildx/util/gitutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -30,7 +31,7 @@ func setupTest(tb testing.TB) {
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
}
@@ -38,14 +39,14 @@ func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
assert.Empty(t, gitattrs)
}
@@ -114,7 +115,7 @@ func TestGetGitAttributes(t *testing.T) {
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
for _, e := range tt.expected {
assert.Contains(t, gitattrs, e)
@@ -139,7 +140,7 @@ func TestGetGitAttributesDirty(t *testing.T) {
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
assert.Equal(t, 5, len(gitattrs))
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
@@ -154,3 +155,55 @@ func TestGetGitAttributesDirty(t *testing.T) {
assert.Contains(t, gitattrs, "vcs:revision")
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}
func TestLocalDirs(t *testing.T) {
setupTest(t)
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.NoError(t, setLocalMount("dockerfile", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
}
func TestLocalDirsSub(t *testing.T) {
gitutil.Mktmp(t)
c, err := gitutil.New()
require.NoError(t, err)
gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
assert.NoError(t, os.MkdirAll("app", 0755))
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
require.NoError(t, setLocalMount("context", ".", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.NoError(t, setLocalMount("dockerfile", "app", so, addVCSLocalDir))
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
}


@@ -37,7 +37,7 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
cancel()
}()
containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
containerCfg, err := resultCtx.getContainerConfig(cfg)
if err != nil {
return nil, err
}


@@ -15,29 +15,29 @@ func saveLocalState(so *client.SolveOpt, target string, opts Options, node build
}
lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath
if lp != "" || dp != "" {
if lp != "" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if dp != "" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
l, err := localstate.New(configDir)
if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}
return nil
if lp != "" && !IsRemoteURL(lp) && lp != "-" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(configDir)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}

build/opt.go (new file)

@@ -0,0 +1,637 @@
package build
import (
"bufio"
"context"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"syscall"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/client/ociindex"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/entitlements"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/tonistiigi/fsutil"
)
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Options, bopts gateway.BuildOpts, configDir string, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
for _, f := range defers {
f()
}
}
defer func() {
if err != nil {
releaseF()
}
}()
// inline cache from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
if v, _ := strconv.ParseBool(v); v {
opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
Type: "inline",
Attrs: map[string]string{},
})
}
}
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
}
}
cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
for _, e := range opt.CacheTo {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheTo = append(cacheTo, e)
}
cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
for _, e := range opt.CacheFrom {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheFrom = append(cacheFrom, e)
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalMounts: map[string]fsutil.FS{},
CacheExports: cacheTo,
CacheImports: cacheFrom,
AllowedEntitlements: opt.Allow,
SourcePolicy: opt.SourcePolicy,
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
if v, _ := strconv.ParseBool(v); v {
so.FrontendAttrs["multi-platform"] = "true"
}
}
if multiDriver {
// force creation of manifest list
so.FrontendAttrs["multi-platform"] = "true"
}
attests := make(map[string]string)
for k, v := range opt.Attests {
if v != nil {
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
noProv, err = strconv.ParseBool(v)
if err != nil {
return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
}
}
if !noProv {
so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
}
}
switch len(opt.Exports) {
case 1:
// valid
case 0:
if !noDefaultLoad() && opt.PrintFunc == nil {
if nodeDriver.IsMobyDriver() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
}
}
default:
if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
}
}
// fill in image exporter names from tags
if len(opt.Tags) > 0 {
tags := make([]string, len(opt.Tags))
for i, tag := range opt.Tags {
ref, err := reference.Parse(tag)
if err != nil {
return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
}
tags[i] = ref.String()
}
for i, e := range opt.Exports {
switch e.Type {
case "image", "oci", "docker":
opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
}
}
} else {
for _, e := range opt.Exports {
if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("tag is needed when pushing to registry")
}
}
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
if features[dockerutil.OCIImporter] && e.Output == nil {
// rely on oci importer if available (which supports
// multi-platform images), otherwise fall back to docker
opt.Exports[i].Type = "oci"
} else if len(opt.Platforms) > 1 || len(attests) > 0 {
if e.Output != nil {
return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
}
return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
}
if e.Output == nil {
if nodeDriver.IsMobyDriver() {
e.Type = "image"
} else {
w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
if err != nil {
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
opt.Exports[i].Type = "moby"
if e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
}
}
}
}
if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
// inline buildinfo attrs from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
opt.Exports[i].Attrs["buildinfo-attrs"] = v
}
}
}
so.Exports = opt.Exports
so.Session = opt.Session
releaseLoad, err := loadInputs(ctx, nodeDriver, opt.Inputs, addVCSLocalDir, pw, &so)
if err != nil {
return nil, nil, err
}
defers = append(defers, releaseLoad)
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + confutil.TryNodeIdentifier(configDir)
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
}
if len(opt.NoCacheFilter) > 0 {
so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
}
if opt.NoCache {
so.FrontendAttrs["no-cache"] = ""
}
for k, v := range opt.BuildArgs {
so.FrontendAttrs["build-arg:"+k] = v
}
for k, v := range opt.Labels {
so.FrontendAttrs["label:"+k] = v
}
for k, v := range node.ProxyConfig {
if _, ok := opt.BuildArgs[k]; !ok {
so.FrontendAttrs["build-arg:"+k] = v
}
}
// set platforms
if len(opt.Platforms) != 0 {
pp := make([]string, len(opt.Platforms))
for i, p := range opt.Platforms {
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
// setup networkmode
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
}
// setup extrahosts
extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
if err != nil {
return nil, nil, err
}
if len(extraHosts) > 0 {
so.FrontendAttrs["add-hosts"] = extraHosts
}
// setup shm size
if opt.ShmSize.Value() > 0 {
so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
}
// setup ulimits
ulimits, err := toBuildkitUlimits(opt.Ulimits)
if err != nil {
return nil, nil, err
} else if len(ulimits) > 0 {
so.FrontendAttrs["ulimit"] = ulimits
}
// mark info request as internal
if opt.PrintFunc != nil {
so.Internal = true
}
return &so, releaseF, nil
}
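toSolveOpt assembles the complete client.SolveOpt for one node and returns a release function that must run once the solve is done. A hypothetical caller sketch; node, opt, bopts, configDir, addVCSLocalDir, pw and docker are assumed to be prepared by the surrounding build code:

so, release, err := toSolveOpt(ctx, node, false, opt, bopts, configDir, addVCSLocalDir, pw, docker)
if err != nil {
    return err
}
defer release()
// *so is then passed to the BuildKit client's Build/Solve call for this node.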
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, addVCSLocalDir func(key, dir string, so *client.SolveOpt), pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
// TODO: handle stdin, symlinks, remote contexts, check files exist
var (
err error
dockerfileReader io.Reader
dockerfileDir string
dockerfileName = inp.DockerfilePath
toRemove []string
)
switch {
case inp.ContextState != nil:
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs["context"] = *inp.ContextState
target.FrontendInputs["dockerfile"] = *inp.ContextState
case inp.ContextPath == "-":
if inp.DockerfilePath == "-" {
return nil, errStdinConflict
}
buf := bufio.NewReader(inp.InStream)
magic, err := buf.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if !(err == io.EOF && len(magic) == 0) {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
target.FrontendAttrs["context"] = up.Add(buf)
target.Session = append(target.Session, up)
} else {
if inp.DockerfilePath != "" {
return nil, errDockerfileConflict
}
// stdin is dockerfile
dockerfileReader = buf
inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
toRemove = append(toRemove, inp.ContextPath)
if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
return nil, err
}
}
}
case osutil.IsLocalDir(inp.ContextPath):
if err := setLocalMount("context", inp.ContextPath, target, addVCSLocalDir); err != nil {
return nil, err
}
sharedKey := inp.ContextPath
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
target.SharedKey = sharedKey
switch inp.DockerfilePath {
case "-":
dockerfileReader = inp.InStream
case "":
dockerfileDir = inp.ContextPath
default:
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
}
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
default:
return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
}
if inp.DockerfileInline != "" {
dockerfileReader = strings.NewReader(inp.DockerfileInline)
}
if dockerfileReader != nil {
dockerfileDir, err = createTempDockerfile(dockerfileReader)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
if isHTTPURL(inp.DockerfilePath) {
dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
delete(target.FrontendInputs, "dockerfile")
}
if dockerfileName == "" {
dockerfileName = "Dockerfile"
}
if dockerfileDir != "" {
if err := setLocalMount("dockerfile", dockerfileDir, target, addVCSLocalDir); err != nil {
return nil, err
}
dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
}
target.FrontendAttrs["filename"] = dockerfileName
for k, v := range inp.NamedContexts {
target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
if v.State != nil {
target.FrontendAttrs["context:"+k] = "input:" + k
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs[k] = *v.State
continue
}
if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
target.FrontendAttrs["context:"+k] = v.Path
continue
}
// handle OCI layout
if strings.HasPrefix(v.Path, "oci-layout://") {
localPath := strings.TrimPrefix(v.Path, "oci-layout://")
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
tag = "latest"
}
if !hasDigest {
dig, err = resolveDigest(localPath, tag)
if err != nil {
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
}
}
store, err := local.NewStore(localPath)
if err != nil {
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
}
storeName := identity.NewID()
if target.OCIStores == nil {
target.OCIStores = map[string]content.Store{}
}
target.OCIStores[storeName] = store
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
continue
}
st, err := os.Stat(v.Path)
if err != nil {
return nil, errors.Wrapf(err, "failed to get build context %v", k)
}
if !st.IsDir() {
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
}
localName := k
if k == "context" || k == "dockerfile" {
localName = "_" + k // underscore to avoid collisions
}
if err := setLocalMount(localName, v.Path, target, addVCSLocalDir); err != nil {
return nil, err
}
target.FrontendAttrs["context:"+k] = "local:" + localName
}
release := func() {
for _, dir := range toRemove {
_ = os.RemoveAll(dir)
}
}
return release, nil
}
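When the context path is "-", loadInputs sniffs the stream: archive magic bytes mean stdin is the build context, otherwise stdin is treated as the Dockerfile and an empty temporary context directory is mounted. A hedged, partial sketch of that dispatch follows; classifyStdin and the reduced magic-byte list are illustrative, and the real isArchive recognizes more formats.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// isArchiveSketch is a reduced stand-in for isArchive: it only checks a few
// compression magic numbers.
func isArchiveSketch(header []byte) bool {
	for _, m := range [][]byte{
		{0x42, 0x5A, 0x68}, // bzip2
		{0x1F, 0x8B, 0x08}, // gzip
	} {
		if bytes.HasPrefix(header, m) {
			return true
		}
	}
	return false
}

// classifyStdin peeks at the stream and decides whether stdin carries a
// context archive or a plain Dockerfile, as loadInputs does for "-".
func classifyStdin(r io.Reader) (string, error) {
	buf := bufio.NewReader(r)
	magic, err := buf.Peek(3)
	if err != nil && err != io.EOF {
		return "", err
	}
	if isArchiveSketch(magic) {
		return "context archive", nil
	}
	return "dockerfile", nil
}

func main() {
	kind, _ := classifyStdin(strings.NewReader("FROM alpine\n"))
	fmt.Println(kind) // dockerfile
}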
func resolveDigest(localPath, tag string) (dig string, _ error) {
idx := ociindex.NewStoreIndex(localPath)
// lookup by name
desc, err := idx.Get(tag)
if err != nil {
return "", err
}
if desc == nil {
// lookup single
desc, err = idx.GetSingle()
if err != nil {
return "", err
}
}
if desc == nil {
return "", errors.New("failed to resolve digest")
}
dig = string(desc.Digest)
_, err = digest.Parse(dig)
if err != nil {
return "", errors.Wrapf(err, "invalid digest %s", dig)
}
return dig, nil
}
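The oci-layout:// branch above splits a named-context reference into store path, tag, and digest before resolving a missing digest from the layout's index. A standalone sketch of just the string splitting; splitOCILayoutRef is illustrative and not the buildx code.

package main

import (
	"fmt"
	"strings"
)

// splitOCILayoutRef mirrors the strings.Cut logic used for oci-layout://
// named contexts: digest is cut first, then the tag, defaulting to "latest".
func splitOCILayoutRef(ref string) (path, tag, dig string) {
	rest := strings.TrimPrefix(ref, "oci-layout://")
	rest, dig, _ = strings.Cut(rest, "@")
	path, tag, ok := strings.Cut(rest, ":")
	if !ok {
		tag = "latest" // default tag when none is given
	}
	return path, tag, dig
}

func main() {
	p, t, d := splitOCILayoutRef("oci-layout:///tmp/layout:v1@sha256:abc")
	fmt.Println(p, t, d) // /tmp/layout v1 sha256:abc
}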
func setLocalMount(name, root string, so *client.SolveOpt, addVCSLocalDir func(key, dir string, so *client.SolveOpt)) error {
lm, err := fsutil.NewFS(root)
if err != nil {
return err
}
root, err = filepath.EvalSymlinks(root) // keep same behavior as fsutil.NewFS
if err != nil {
return err
}
if so.LocalMounts == nil {
so.LocalMounts = map[string]fsutil.FS{}
}
so.LocalMounts[name] = lm
if addVCSLocalDir != nil {
addVCSLocalDir(name, root, so)
}
return nil
}
func createTempDockerfile(r io.Reader) (string, error) {
dir, err := os.MkdirTemp("", "dockerfile")
if err != nil {
return "", err
}
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
if err != nil {
return "", err
}
defer f.Close()
if _, err := io.Copy(f, r); err != nil {
return "", err
}
return dir, err
}
// handle https://github.com/moby/moby/pull/10858
func handleLowercaseDockerfile(dir, p string) string {
if filepath.Base(p) != "Dockerfile" {
return p
}
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
if err != nil {
return p
}
names, err := f.Readdirnames(-1)
if err != nil {
return p
}
foundLowerCase := false
for _, n := range names {
if n == "Dockerfile" {
return p
}
if n == "dockerfile" {
foundLowerCase = true
}
}
if foundLowerCase {
return filepath.Join(filepath.Dir(p), "dockerfile")
}
return p
}
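handleLowercaseDockerfile preserves the old moby behavior of falling back to a lowercase "dockerfile" when no "Dockerfile" exists in the directory. An illustrative standalone sketch of that fallback; pickDockerfile is not the buildx function.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pickDockerfile prefers "Dockerfile" but falls back to "dockerfile" if only
// the lowercase name exists in dir.
func pickDockerfile(dir string) string {
	if _, err := os.Stat(filepath.Join(dir, "Dockerfile")); err == nil {
		return "Dockerfile"
	}
	if _, err := os.Stat(filepath.Join(dir, "dockerfile")); err == nil {
		return "dockerfile"
	}
	return "Dockerfile"
}

func main() {
	dir, _ := os.MkdirTemp("", "example")
	defer os.RemoveAll(dir)
	_ = os.WriteFile(filepath.Join(dir, "dockerfile"), []byte("FROM alpine\n"), 0o644)
	fmt.Println(pickDockerfile(dir)) // dockerfile
}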

build/provenance.go (new file, 156 lines)

@@ -0,0 +1,156 @@
package build
import (
"context"
"encoding/base64"
"encoding/json"
"io"
"strings"
"sync"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type provenancePredicate struct {
Builder *provenanceBuilder `json:"builder,omitempty"`
provenancetypes.ProvenancePredicate
}
type provenanceBuilder struct {
ID string `json:"id,omitempty"`
}
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
}
pw = progress.ResetTime(pw)
return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
res, err := fetchProvenance(ctx, c, ref, mode)
if err != nil {
return err
}
for k, v := range res {
sr.ExporterResponse[k] = v
}
return nil
})
}
func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
Ref: ref,
EarlyExit: true,
})
if err != nil {
return nil, err
}
var mu sync.Mutex
eg, ctx := errgroup.WithContext(ctx)
store := proxy.NewContentStore(c.ContentClient())
for {
ev, err := cl.Recv()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return nil, err
}
if ev.Record == nil {
continue
}
if ev.Record.Result != nil {
desc := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance"] = prv
mu.Unlock()
return nil
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
platform := platform
desc := lookupProvenance(res)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance/"+platform] = prv
mu.Unlock()
return nil
})
}
}
}
return out, eg.Wait()
}
func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: a.Digest,
Size: a.Size_,
MediaType: a.MediaType,
Annotations: a.Annotations,
}
}
}
return nil
}
func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
var prv provenancePredicate
if err := json.Unmarshal(dt, &prv); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if prv.Builder != nil && prv.Builder.ID == "" {
// reset builder if id is empty
prv.Builder = nil
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
prv.BuildConfig = nil
prv.Metadata = nil
}
dtprv, err := json.Marshal(prv)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}
return base64.StdEncoding.EncodeToString(dtprv), nil
}
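The provenance recorded here ends up base64-encoded in the exporter response under the buildx.build.provenance key (or buildx.build.provenance/<platform> for multi-platform builds). A hypothetical consumer-side sketch, assuming a metadata file written with --metadata-file that contains that key:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("metadata.json") // file written with --metadata-file
	if err != nil {
		panic(err)
	}
	var md map[string]json.RawMessage
	if err := json.Unmarshal(raw, &md); err != nil {
		panic(err)
	}
	var b64 string
	if err := json.Unmarshal(md["buildx.build.provenance"], &b64); err != nil {
		panic(err)
	}
	predicate, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(predicate)) // SLSA provenance predicate JSON
}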


@@ -292,10 +292,10 @@ func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
return err
}
func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
ccfg, err := containerConfigFromResult(r.res, *cfg)
if err != nil {
return containerCfg, err
}
@@ -327,7 +327,7 @@ func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin i
return processCfg, nil
}
func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
func containerConfigFromResult(res *gateway.Result, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}


@@ -6,13 +6,14 @@ import (
"context"
"net"
"os"
"strconv"
"strings"
"github.com/docker/buildx/driver"
"github.com/docker/cli/opts"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/moby/buildkit/util/gitutil"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
@@ -24,8 +25,15 @@ const (
mobyHostGatewayName = "host-gateway"
)
// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has a http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func IsRemoteURL(c string) bool {
if urlutil.IsURL(c) {
if isHTTPURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
@@ -34,11 +42,6 @@ func IsRemoteURL(c string) bool {
return false
}
func isLocalDir(c string) bool {
st, err := os.Stat(c)
return err == nil && st.IsDir()
}
func isArchive(header []byte) bool {
for _, m := range [][]byte{
{0x42, 0x5A, 0x68}, // bzip2
@@ -65,7 +68,10 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
}
hosts := make([]string, 0, len(inp))
for _, h := range inp {
host, ip, ok := strings.Cut(h, ":")
host, ip, ok := strings.Cut(h, "=")
if !ok {
host, ip, ok = strings.Cut(h, ":")
}
if !ok || host == "" || ip == "" {
return "", errors.Errorf("invalid host %s", h)
}
@@ -77,8 +83,16 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
}
ip = hgip.String()
} else if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
} else {
// If the address is enclosed in square brackets, extract it (for IPv6, but
// permit it for IPv4 as well; we don't know the address family here, but it's
// unambiguous).
if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
ip = ip[1 : len(ip)-1]
}
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
}
}
hosts = append(hosts, host+"="+ip)
}
@@ -96,3 +110,21 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
}
return strings.Join(ulimits, ","), nil
}
func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
return errors.Errorf(`%s is not supported for the %s driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at %s`, f, d.Factory().Name(), docs)
}
func noDefaultLoad() bool {
v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
if !ok {
return false
}
b, err := strconv.ParseBool(v)
if err != nil {
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
}
return b
}
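The change to toBuildkitExtraHosts above accepts both "host=ip" and the legacy "host:ip" separator and strips optional square brackets around the address. A minimal standalone sketch of that parsing for a single entry; parseAddHost is illustrative, not the buildx function, and host-gateway handling is omitted.

package main

import (
	"fmt"
	"net"
	"strings"
)

// parseAddHost normalizes one add-host entry to "host=ip", accepting either
// "=" or ":" as the separator and optional [] around the address.
func parseAddHost(h string) (string, error) {
	host, ip, ok := strings.Cut(h, "=")
	if !ok {
		host, ip, ok = strings.Cut(h, ":")
	}
	if !ok || host == "" || ip == "" {
		return "", fmt.Errorf("invalid host %s", h)
	}
	if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
		ip = ip[1 : len(ip)-1] // allow bracketed addresses, e.g. [::1]
	}
	if net.ParseIP(ip) == nil {
		return "", fmt.Errorf("invalid host %s", h)
	}
	return host + "=" + ip, nil
}

func main() {
	out, _ := parseAddHost("anipv6host:[2003:ab34:e::1]")
	fmt.Println(out) // anipv6host=2003:ab34:e::1
}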

build/utils_test.go (new file, 148 lines)

@@ -0,0 +1,148 @@
package build
import (
"context"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestToBuildkitExtraHosts(t *testing.T) {
tests := []struct {
doc string
input []string
expectedOut string // Expect output==input if not set.
expectedErr string // Expect success if not set.
}{
{
doc: "IPv4, colon sep",
input: []string{`myhost:192.168.0.1`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "IPv4, eq sep",
input: []string{`myhost=192.168.0.1`},
},
{
doc: "Weird but permitted, IPv4 with brackets",
input: []string{`myhost=[192.168.0.1]`},
expectedOut: `myhost=192.168.0.1`,
},
{
doc: "Host and domain",
input: []string{`host.and.domain.invalid:10.0.2.1`},
expectedOut: `host.and.domain.invalid=10.0.2.1`,
},
{
doc: "IPv6, colon sep",
input: []string{`anipv6host:2003:ab34:e::1`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, colon sep, brackets",
input: []string{`anipv6host:[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6, eq sep, brackets",
input: []string{`anipv6host=[2003:ab34:e::1]`},
expectedOut: `anipv6host=2003:ab34:e::1`,
},
{
doc: "IPv6 localhost, colon sep",
input: []string{`ipv6local:::1`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, eq sep",
input: []string{`ipv6local=::1`},
},
{
doc: "IPv6 localhost, eq sep, brackets",
input: []string{`ipv6local=[::1]`},
expectedOut: `ipv6local=::1`,
},
{
doc: "IPv6 localhost, non-canonical, colon sep",
input: []string{`ipv6local:0:0:0:0:0:0:0:1`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
expectedOut: `ipv6local=0:0:0:0:0:0:0:1`,
},
{
doc: "Bad address, colon sep",
input: []string{`myhost:192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "Bad address, eq sep",
input: []string{`myhost=192.notanipaddress.1`},
expectedErr: `invalid IP address in add-host: "192.notanipaddress.1"`,
},
{
doc: "No sep",
input: []string{`thathost-nosemicolon10.0.0.1`},
expectedErr: `bad format for add-host: "thathost-nosemicolon10.0.0.1"`,
},
{
doc: "Bad IPv6",
input: []string{`anipv6host:::::1`},
expectedErr: `invalid IP address in add-host: "::::1"`,
},
{
doc: "Bad IPv6, trailing colons",
input: []string{`ipv6local:::0::`},
expectedErr: `invalid IP address in add-host: "::0::"`,
},
{
doc: "Bad IPv6, missing close bracket",
input: []string{`ipv6addr=[::1`},
expectedErr: `invalid IP address in add-host: "[::1"`,
},
{
doc: "Bad IPv6, missing open bracket",
input: []string{`ipv6addr=::1]`},
expectedErr: `invalid IP address in add-host: "::1]"`,
},
{
doc: "Missing address, colon sep",
input: []string{`myhost.invalid:`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "Missing address, eq sep",
input: []string{`myhost.invalid=`},
expectedErr: `invalid IP address in add-host: ""`,
},
{
doc: "No input",
input: []string{``},
expectedErr: `bad format for add-host: ""`,
},
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}
t.Run(tc.doc, func(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.Nil(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)
}
})
}
}


@@ -2,19 +2,31 @@ package builder
import (
"context"
"encoding/json"
"net/url"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"github.com/tonistiigi/go-csvvalue"
"golang.org/x/sync/errgroup"
)
@@ -247,6 +259,28 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
return b.driverFactory.Factory, err
}
func (b *Builder) MarshalJSON() ([]byte, error) {
var berr string
if b.err != nil {
berr = strings.TrimSpace(b.err.Error())
}
return json.Marshal(struct {
Name string
Driver string
LastActivity time.Time `json:",omitempty"`
Dynamic bool
Nodes []Node
Err string `json:",omitempty"`
}{
Name: b.Name,
Driver: b.Driver,
LastActivity: b.LastActivity,
Dynamic: b.Dynamic,
Nodes: b.nodes,
Err: berr,
})
}
// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
storeng, err := txn.List()
@@ -297,3 +331,346 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return builders, nil
}
type CreateOpts struct {
Name string
Driver string
NodeName string
Platforms []string
BuildkitdFlags string
BuildkitdConfigFile string
DriverOpts []string
Use bool
Endpoint string
Append bool
}
func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts CreateOpts) (*Builder, error) {
var err error
if opts.Name == "default" {
return nil, errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
} else if opts.Append && opts.Name == "" {
return nil, errors.Errorf("append requires a builder name")
}
name := opts.Name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return nil, err
}
}
if !opts.Append {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
for _, c := range contexts {
if c.Name == name {
return nil, errors.Errorf("instance name %q already exists as context builder", name)
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if opts.Append && opts.Name != "" {
return nil, errors.Errorf("failed to find instance %q for append", opts.Name)
}
} else {
return nil, err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := opts.Driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if opts.Endpoint == "" && buildkitHost != "" {
driverName = "remote"
} else {
f, err := driver.GetDefaultFactory(ctx, opts.Endpoint, dockerCli.Client(), true, nil)
if err != nil {
return nil, err
}
if f == nil {
return nil, errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if opts.NodeName == "" && !opts.Append {
return nil, errors.Errorf("existing instance for %q but no append mode, specify the node name to make changes for existing instances", name)
}
if driverName != ng.Driver {
return nil, errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return nil, err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
driverOpts, err := csvToMap(opts.DriverOpts)
if err != nil {
return nil, err
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
if err != nil {
return nil, err
}
var ep string
var setEp bool
switch {
case driverName == "kubernetes":
if opts.Endpoint != "" {
return nil, errors.Errorf("kubernetes driver does not support endpoint args %q", opts.Endpoint)
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := opts.NodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return nil, err
}
}
// naming the endpoint so that append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if opts.Endpoint != "" {
ep = opts.Endpoint
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return nil, errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return nil, err
}
setEp = true
case opts.Endpoint != "":
ep, err = validateEndpoint(dockerCli, opts.Endpoint)
if err != nil {
return nil, err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return nil, errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with context set to <context-name>")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
setEp = false
}
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
buildkitdConfigFile = f
}
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
if err := txn.Save(ng); err != nil {
return nil, err
}
b, err := New(dockerCli,
WithName(ng.Name),
WithStore(txn),
WithSkippedValidation(),
)
if err != nil {
return nil, err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
return nil, err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
return nil, errors.Errorf("could not rollback to previous state: %s", err2)
}
return nil, err
}
}
if opts.Use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return nil, err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return nil, err
}
}
return b, nil
}
type LeaveOpts struct {
Name string
NodeName string
}
func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts LeaveOpts) error {
if opts.Name == "" {
return errors.Errorf("leave requires instance name")
}
if opts.NodeName == "" {
return errors.Errorf("leave requires node name")
}
ng, err := txn.NodeGroupByName(opts.Name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
return errors.Errorf("failed to find instance %q for leave", opts.Name)
}
return err
}
if err := ng.Leave(opts.NodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, opts.NodeName); err != nil {
return err
}
return txn.Save(ng)
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
fields, err := csvvalue.Fields(s, nil)
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
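csvToMap turns each --driver-opt argument into key/value pairs, using CSV parsing so that quoted values may themselves contain commas, as the tolerations example in builder_test.go below shows. A hedged sketch of the same idea using the standard encoding/csv package in place of go-csvvalue:

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// csvToMapSketch parses each argument as one CSV record of k=v fields.
func csvToMapSketch(in []string) (map[string]string, error) {
	m := map[string]string{}
	for _, s := range in {
		fields, err := csv.NewReader(strings.NewReader(s)).Read()
		if err != nil {
			return nil, err
		}
		for _, v := range fields {
			k, val, ok := strings.Cut(v, "=")
			if !ok {
				return nil, fmt.Errorf("invalid value %q, expecting k=v", v)
			}
			m[k] = val
		}
	}
	return m, nil
}

func main() {
	m, _ := csvToMapSketch([]string{`"tolerations=key=foo,value=bar",replicas=1`})
	fmt.Println(m["tolerations"], m["replicas"]) // key=foo,value=bar 1
}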
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}
// parseBuildkitdFlags parses buildkit flags
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
return nil, errors.Wrap(err, "failed to parse buildkit flags")
}
}
var allowInsecureEntitlements []string
flags := pflag.NewFlagSet("buildkitd", pflag.ContinueOnError)
flags.Usage = func() {}
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
var hasNetworkHostEntitlement bool
for _, e := range allowInsecureEntitlements {
if e == "network.host" {
hasNetworkHostEntitlement = true
break
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
// set network.host entitlement if user does not provide any as
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
}
return res, nil
}

builder/builder_test.go (new file, 139 lines)

@@ -0,0 +1,139 @@
package builder
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}
func TestParseBuildkitdFlags(t *testing.T) {
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
expected []string
wantErr bool
}{
{
"docker-container no flags",
"",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"kubernetes no flags",
"",
"kubernetes",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"remote no flags",
"",
"remote",
nil,
nil,
false,
},
{
"docker-container with insecure flag",
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with insecure and host flag",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with network host opt",
"",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with host flag and network host opt",
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
{
"docker-container with insecure, host flag and network host opt",
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
nil,
true,
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.expected, flags)
})
}
}


@@ -2,8 +2,11 @@ package builder
import (
"context"
"encoding/json"
"sort"
"strings"
"github.com/containerd/platforms"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
"github.com/docker/buildx/store"
@@ -45,8 +48,9 @@ func (b *Builder) Nodes() []Node {
type LoadNodesOption func(*loadNodesOptions)
type loadNodesOptions struct {
data bool
dialMeta map[string][]string
data bool
dialMeta map[string][]string
clientOpt []client.ClientOpt
}
func WithData() LoadNodesOption {
@@ -61,6 +65,12 @@ func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
}
}
func WithClientOpt(clientOpt ...client.ClientOpt) LoadNodesOption {
return func(o *loadNodesOptions) {
o.clientOpt = clientOpt
}
}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
@@ -139,7 +149,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
d, err := driver.GetDriver(ctx, driver.BuilderName(n.Name), factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
if err != nil {
node.Err = err
return nil
@@ -148,7 +158,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
node.ImageOpt = imageopt
if lno.data {
if err := node.loadData(ctx); err != nil {
if err := node.loadData(ctx, lno.clientOpt...); err != nil {
node.Err = err
}
}
@@ -183,7 +193,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl
}
nodes = append(nodes, di)
nodes = append(nodes, diClone)
}
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
}
@@ -199,7 +209,52 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
return b.nodes, nil
}
func (n *Node) loadData(ctx context.Context) error {
func (n *Node) MarshalJSON() ([]byte, error) {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
var nerr string
if n.Err != nil {
status = "error"
nerr = strings.TrimSpace(n.Err.Error())
}
var pp []string
for _, p := range n.Platforms {
pp = append(pp, platforms.Format(p))
}
return json.Marshal(struct {
Name string
Endpoint string
BuildkitdFlags []string `json:"Flags,omitempty"`
DriverOpts map[string]string `json:",omitempty"`
Files map[string][]byte `json:",omitempty"`
Status string `json:",omitempty"`
ProxyConfig map[string]string `json:",omitempty"`
Version string `json:",omitempty"`
Err string `json:",omitempty"`
IDs []string `json:",omitempty"`
Platforms []string `json:",omitempty"`
GCPolicy []client.PruneInfo `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
}{
Name: n.Name,
Endpoint: n.Endpoint,
BuildkitdFlags: n.BuildkitdFlags,
DriverOpts: n.DriverOpts,
Files: n.Files,
Status: status,
ProxyConfig: n.ProxyConfig,
Version: n.Version,
Err: nerr,
IDs: n.IDs,
Platforms: pp,
GCPolicy: n.GCPolicy,
Labels: n.Labels,
})
}
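The new MarshalJSON on Node emits a computed Status string and hides empty fields instead of serializing the raw struct. A simplified sketch of that pattern with stand-in types; node here is not the buildx type.

package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Name string
	err  error
}

// MarshalJSON reports a derived Status and omits empty optional fields.
func (n node) MarshalJSON() ([]byte, error) {
	status := "running"
	var nerr string
	if n.err != nil {
		status = "error"
		nerr = n.err.Error()
	}
	return json.Marshal(struct {
		Name   string
		Status string `json:",omitempty"`
		Err    string `json:",omitempty"`
	}{Name: n.Name, Status: status, Err: nerr})
}

func main() {
	dt, _ := json.Marshal(node{Name: "builder0"})
	fmt.Println(string(dt)) // {"Name":"builder0","Status":"running"}
}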
func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) error {
if n.Driver == nil {
return nil
}
@@ -209,7 +264,7 @@ func (n *Node) loadData(ctx context.Context) error {
}
n.DriverInfo = info
if n.DriverInfo.Status == driver.Running {
driverClient, err := n.Driver.Client(ctx)
driverClient, err := n.Driver.Client(ctx, clientOpt...)
if err != nil {
return err
}


@@ -1,6 +1,7 @@
package main
import (
"context"
"fmt"
"os"
@@ -15,6 +16,7 @@ import (
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/stack"
"go.opentelemetry.io/otel"
//nolint:staticcheck // vendored dependencies may still use this
"github.com/containerd/containerd/pkg/seed"
@@ -38,10 +40,27 @@ func runStandalone(cmd *command.DockerCli) error {
if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
return err
}
defer flushMetrics(cmd)
rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
return rootCmd.Execute()
}
// flushMetrics will manually flush metrics from the configured
// meter provider. This is needed when running in standalone mode
// because the meter provider is initialized by the cli library,
// but the mechanism for forcing it to report is not presently
// exposed and not invoked when run in standalone mode.
// There are plans to fix that in the next release, but this is
// needed temporarily until the API for this is more thorough.
func flushMetrics(cmd *command.DockerCli) {
if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
if err := mp.ForceFlush(context.Background()); err != nil {
otel.Handle(err)
}
}
}
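flushMetrics works around the CLI not flushing its meter provider in standalone mode by calling ForceFlush directly. A minimal sketch of the same idea against the OpenTelemetry SDK meter provider; the SDK provider here stands in for the CLI's command.MeterProvider.

package main

import (
	"context"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	mp := sdkmetric.NewMeterProvider()
	// ForceFlush pushes any buffered metrics to the configured readers;
	// without it, short-lived processes may exit before metrics are reported.
	if err := mp.ForceFlush(context.Background()); err != nil {
		panic(err)
	}
	_ = mp.Shutdown(context.Background())
}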
func runPlugin(cmd *command.DockerCli) error {
rootCmd := commands.NewRootCmd("buildx", true, cmd)
return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{


@@ -4,7 +4,6 @@ import (
"github.com/moby/buildkit/util/tracing/detect"
"go.opentelemetry.io/otel"
_ "github.com/moby/buildkit/util/tracing/detect/delegated"
_ "github.com/moby/buildkit/util/tracing/env"
)


@@ -1 +1,4 @@
comment: false
ignore:
- "**/*.pb.go"


@@ -1,20 +1,27 @@
package commands
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"os"
"slices"
"strings"
"text/tabwriter"
"github.com/containerd/console"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
@@ -22,29 +29,30 @@ import (
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type bakeOptions struct {
files []string
overrides []string
printOnly bool
sbom string
provenance string
files []string
overrides []string
printOnly bool
listTargets bool
listVars bool
sbom string
provenance string
builder string
metadataFile string
exportPush bool
exportLoad bool
callFunc string
}
func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
ctx := appcontext.Context()
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
if err != nil {
return err
@@ -73,14 +81,20 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
targets = []string{"default"}
}
callFunc, err := buildflags.ParsePrintFunc(in.callFunc)
if err != nil {
return err
}
overrides := in.overrides
if in.exportPush {
if in.exportLoad {
return errors.Errorf("push and load may not be set together at the moment")
}
overrides = append(overrides, "*.push=true")
} else if in.exportLoad {
overrides = append(overrides, "*.output=type=docker")
}
if in.exportLoad {
overrides = append(overrides, "*.load=true")
}
if callFunc != nil {
overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
}
if cFlags.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
@@ -128,22 +142,43 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
}
progressMode := progressui.DisplayMode(cFlags.progress)
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
var printer *progress.Printer
printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
progress.WithOnClose(func() {
if p := printer; p != nil {
printWarnings(os.Stderr, p.Warnings(), progressMode)
}
}),
)
if err != nil {
return err
}
var resp map[string]*client.SolveResponse
defer func() {
if printer != nil {
err1 := printer.Wait()
if err == nil {
err = err1
}
if err == nil && progressMode != progressui.QuietMode {
if err != nil {
return
}
if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
if resp != nil && len(in.metadataFile) > 0 {
dt := make(map[string]interface{})
for t, r := range resp {
dt[t] = decodeExporterResponse(r.ExporterResponse)
}
if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
dt["buildx.build.warnings"] = warnings
}
err = writeMetadataFile(in.metadataFile, dt)
}
}
}()
@@ -156,12 +191,32 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
return errors.New("couldn't find a bake definition")
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
defaults := map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
"BAKE_CMD_CONTEXT": cmdContext,
"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
})
"BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
}
if in.listTargets || in.listVars {
cfg, pm, err := bake.ParseFiles(files, defaults)
if err != nil {
return err
}
err = printer.Wait()
printer = nil
if err != nil {
return err
}
if in.listTargets {
return printTargetList(dockerCli.Out(), cfg)
} else if in.listVars {
return printVars(dockerCli.Out(), pm.AllVariables)
}
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults)
if err != nil {
return err
}
@@ -207,12 +262,27 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
return nil
}
// local state group
for _, opt := range bo {
if opt.PrintFunc != nil {
cf, err := buildflags.ParsePrintFunc(opt.PrintFunc.Name)
if err != nil {
return err
}
opt.PrintFunc.Name = cf.Name
}
}
prm := confutil.MetadataProvenance()
if len(in.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
}
groupRef := identity.NewID()
var refs []string
for k, b := range bo {
b.Ref = identity.NewID()
b.GroupRef = groupRef
b.ProvenanceResponseMode = prm
refs = append(refs, b.Ref)
bo[k] = b
}
@@ -229,22 +299,122 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags com
return err
}
resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
resp, err = build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
if err != nil {
return wrapBuildError(err, true)
}
if len(in.metadataFile) > 0 {
dt := make(map[string]interface{})
for t, r := range resp {
dt[t] = decodeExporterResponse(r.ExporterResponse)
}
if err := writeMetadataFile(in.metadataFile, dt); err != nil {
return err
}
err = printer.Wait()
if err != nil {
return err
}
return err
var callFormatJSON bool
var jsonResults = map[string]map[string]any{}
if callFunc != nil {
callFormatJSON = callFunc.Format == "json"
}
var sep bool
var exitCode int
names := make([]string, 0, len(bo))
for name := range bo {
names = append(names, name)
}
slices.Sort(names)
for _, name := range names {
req := bo[name]
if req.PrintFunc == nil {
continue
}
pf := &pb.PrintFunc{
Name: req.PrintFunc.Name,
Format: req.PrintFunc.Format,
IgnoreStatus: req.PrintFunc.IgnoreStatus,
}
if callFunc != nil {
pf.Format = callFunc.Format
pf.IgnoreStatus = callFunc.IgnoreStatus
}
var res map[string]string
if sp, ok := resp[name]; ok {
res = sp.ExporterResponse
}
if callFormatJSON {
jsonResults[name] = map[string]any{}
buf := &bytes.Buffer{}
if code, err := printResult(buf, pf, res); err != nil {
jsonResults[name]["error"] = err.Error()
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
m := map[string]*json.RawMessage{}
if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
for k, v := range m {
jsonResults[name][k] = v
}
} else {
jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
}
} else {
if sep {
fmt.Fprintln(dockerCli.Out())
} else {
sep = true
}
fmt.Fprintf(dockerCli.Out(), "%s\n", name)
if descr := tgts[name].Description; descr != "" {
fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
}
fmt.Fprintln(dockerCli.Out())
if code, err := printResult(dockerCli.Out(), pf, res); err != nil {
fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
}
}
if callFormatJSON {
out := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]map[string]any `json:"target"`
}{
Group: grps,
Target: map[string]map[string]any{},
}
for name, def := range tgts {
out.Target[name] = map[string]any{
"build": def,
}
if res, ok := jsonResults[name]; ok {
printName := bo[name].PrintFunc.Name
if printName == "lint" {
printName = "check"
}
out.Target[name][printName] = res
}
}
dt, err := json.MarshalIndent(out, "", " ")
if err != nil {
return err
}
fmt.Fprintln(dockerCli.Out(), string(dt))
}
if exitCode != 0 {
os.Exit(exitCode)
}
return nil
}
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -266,7 +436,7 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in runBake function.
return runBake(dockerCli, args, options, cFlags)
return runBake(cmd.Context(), dockerCli, args, options, cFlags)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
@@ -280,6 +450,18 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
cobrautil.MarkFlagsExperimental(flags, "list-targets")
flags.MarkHidden("list-targets")
flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
cobrautil.MarkFlagsExperimental(flags, "list-variables")
flags.MarkHidden("list-variables")
commonBuildFlags(&cFlags, flags)
@@ -295,13 +477,17 @@ func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.State
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
var lnames []string
var rnames []string
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
lnames = append(lnames, strings.TrimPrefix(v, "cwd://"))
tname := strings.TrimPrefix(v, "cwd://")
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
rnames = append(rnames, v)
anames = append(anames, v)
}
}
@@ -320,7 +506,7 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
lfiles, err = bake.ReadLocalFiles(append(lnames, rnames...), stdin, sub)
lfiles, err = bake.ReadLocalFiles(anames, stdin, sub)
}
return nil
})
@@ -332,3 +518,75 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
return
}
func printVars(w io.Writer, vars []*hclparser.Variable) error {
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
return cmp.Compare(a.Name, b.Name)
})
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
if v.Value != nil {
value = *v.Value
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
}
return nil
}
func printTargetList(w io.Writer, cfg *bake.Config) error {
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("TARGET\tDESCRIPTION\n"))
type targetOrGroup struct {
name string
target *bake.Target
group *bake.Group
}
list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
for _, tgt := range cfg.Targets {
list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
}
for _, grp := range cfg.Groups {
list = append(list, targetOrGroup{name: grp.Name, group: grp})
}
slices.SortFunc(list, func(a, b targetOrGroup) int {
return cmp.Compare(a.name, b.name)
})
for _, tgt := range list {
if strings.HasPrefix(tgt.name, "_") {
// convention for a private target
continue
}
var descr string
if tgt.target != nil {
descr = tgt.target.Description
} else if tgt.group != nil {
descr = tgt.group.Description
if len(tgt.group.Targets) > 0 {
slices.Sort(tgt.group.Targets)
names := strings.Join(tgt.group.Targets, ", ")
if descr != "" {
descr += " (" + names + ")"
} else {
descr = names
}
}
}
fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
}
return nil
}
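printVars and printTargetList both rely on text/tabwriter to align the VARIABLE/VALUE/DESCRIPTION and TARGET/DESCRIPTION columns. A tiny standalone sketch of that layout; the rows are made up.

package main

import (
	"os"
	"text/tabwriter"
)

func main() {
	tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
	defer tw.Flush()
	tw.Write([]byte("TARGET\tDESCRIPTION\n"))
	tw.Write([]byte("app\tbuild the application image\n"))
	tw.Write([]byte("lint\trun Dockerfile checks\n"))
}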


@@ -3,16 +3,18 @@ package commands
import (
"bytes"
"context"
"crypto/sha256"
"encoding/base64"
"encoding/csv"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"log"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
"time"
"github.com/containerd/console"
"github.com/docker/buildx/build"
@@ -27,11 +29,14 @@ import (
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/metricutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dockeropts "github.com/docker/cli/opts"
@@ -40,10 +45,10 @@ import (
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
"github.com/moby/buildkit/frontend/subrequests"
"github.com/moby/buildkit/frontend/subrequests/lint"
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/morikuni/aec"
@@ -51,6 +56,9 @@ import (
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/tonistiigi/go-csvvalue"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"google.golang.org/grpc/codes"
)
@@ -196,6 +204,12 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
return nil, err
}
prm := confutil.MetadataProvenance()
if opts.PrintFunc != nil || len(o.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
}
opts.ProvenanceResponseMode = string(prm)
return &opts, nil
}
@@ -210,8 +224,55 @@ func (o *buildOptions) toDisplayMode() (progressui.DisplayMode, error) {
return progress, nil
}
func runBuild(dockerCli command.Cli, options buildOptions) (err error) {
ctx := appcontext.Context()
func buildMetricAttributes(dockerCli command.Cli, b *builder.Builder, options *buildOptions) attribute.Set {
return attribute.NewSet(
attribute.String("command.name", "build"),
attribute.Stringer("command.options.hash", &buildOptionsHash{
buildOptions: options,
configDir: confutil.ConfigDir(dockerCli),
}),
attribute.String("driver.name", options.builder),
attribute.String("driver.type", b.Driver),
)
}
// buildOptionsHash computes a hash for the buildOptions when the String method is invoked.
// This is done so we can delay the computation of the hash until needed by OTEL using
// the fmt.Stringer interface.
type buildOptionsHash struct {
*buildOptions
configDir string
result string
resultOnce sync.Once
}
func (o *buildOptionsHash) String() string {
o.resultOnce.Do(func() {
target := o.target
contextPath := o.contextPath
dockerfile := o.dockerfileName
if dockerfile == "" {
dockerfile = "Dockerfile"
}
if contextPath != "-" && osutil.IsLocalDir(contextPath) {
contextPath = osutil.ToAbs(contextPath)
}
salt := confutil.TryNodeIdentifier(o.configDir)
h := sha256.New()
for _, s := range []string{target, contextPath, dockerfile, salt} {
_, _ = io.WriteString(h, s)
h.Write([]byte{0})
}
o.result = hex.EncodeToString(h.Sum(nil))
})
return o.result
}
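buildOptionsHash defers the sha256 digest until OTEL actually formats the attribute, using sync.Once behind the fmt.Stringer interface. A self-contained sketch of the same lazy-hash pattern; lazyHash and its fields are illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"sync"
)

type lazyHash struct {
	parts  []string
	once   sync.Once
	result string
}

// String computes the digest on first use and caches it for later calls.
func (l *lazyHash) String() string {
	l.once.Do(func() {
		h := sha256.New()
		for _, s := range l.parts {
			_, _ = io.WriteString(h, s)
			h.Write([]byte{0}) // separator so "ab","c" differs from "a","bc"
		}
		l.result = hex.EncodeToString(h.Sum(nil))
	})
	return l.result
}

func main() {
	h := &lazyHash{parts: []string{"target", "./app", "Dockerfile"}}
	fmt.Println(h) // digest computed lazily via fmt.Stringer
}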
func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions) (err error) {
mp := dockerCli.MeterProvider()
ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
if err != nil {
return err
@@ -252,6 +313,7 @@ func runBuild(dockerCli command.Cli, options buildOptions) (err error) {
if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
term = true
}
attributes := buildMetricAttributes(dockerCli, b, &options)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
@@ -265,6 +327,7 @@ func runBuild(dockerCli command.Cli, options buildOptions) (err error) {
fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver),
fmt.Sprintf("%s:%s", b.Driver, b.Name),
),
progress.WithMetrics(mp, attributes),
progress.WithOnClose(func() {
printWarnings(os.Stderr, printer.Warnings(), progressMode)
}),
@@ -273,38 +336,49 @@ func runBuild(dockerCli command.Cli, options buildOptions) (err error) {
return err
}
done := timeBuildCommand(mp, attributes)
var resp *client.SolveResponse
var retErr error
if isExperimental() {
if confutil.IsExperimental() {
resp, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
} else {
resp, retErr = runBasicBuild(ctx, dockerCli, opts, options, printer)
resp, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
}
if err := printer.Wait(); retErr == nil {
retErr = err
}
done(retErr)
if retErr != nil {
return retErr
}
if progressMode != progressui.QuietMode {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
} else {
switch progressMode {
case progressui.RawJSONMode:
// no additional display
case progressui.QuietMode:
fmt.Println(getImageID(resp.ExporterResponse))
default:
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
if options.imageIDFile != "" {
if err := os.WriteFile(options.imageIDFile, []byte(getImageID(resp.ExporterResponse)), 0644); err != nil {
return errors.Wrap(err, "writing image ID file")
}
}
if options.metadataFile != "" {
if err := writeMetadataFile(options.metadataFile, decodeExporterResponse(resp.ExporterResponse)); err != nil {
return err
}
}
if opts.PrintFunc != nil {
if err := printResult(opts.PrintFunc, resp.ExporterResponse); err != nil {
if exitcode, err := printResult(dockerCli.Out(), opts.PrintFunc, resp.ExporterResponse); err != nil {
return err
} else if exitcode != 0 {
os.Exit(exitcode)
}
} else if options.metadataFile != "" {
dt := decodeExporterResponse(resp.ExporterResponse)
if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
dt["buildx.build.warnings"] = warnings
}
if err := writeMetadataFile(options.metadataFile, dt); err != nil {
return err
}
}
@@ -320,7 +394,7 @@ func getImageID(resp map[string]string) string {
return dgst
}
func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
resp, res, err := cbuild.RunBuild(ctx, dockerCli, *opts, dockerCli.In(), printer, false)
if res != nil {
res.Done()
@@ -353,14 +427,22 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
var ref string
var retErr error
var resp *client.SolveResponse
f := ioset.NewSingleForwarder()
f.SetReader(dockerCli.In())
pr, pw := io.Pipe()
f.SetWriter(pw, func() io.WriteCloser {
pw.Close() // propagate EOF
logrus.Debug("propagating stdin close")
return nil
})
var f *ioset.SingleForwarder
var pr io.ReadCloser
var pw io.WriteCloser
if options.invokeConfig == nil {
pr = dockerCli.In()
} else {
f = ioset.NewSingleForwarder()
f.SetReader(dockerCli.In())
pr, pw = io.Pipe()
f.SetWriter(pw, func() io.WriteCloser {
pw.Close() // propagate EOF
logrus.Debug("propagating stdin close")
return nil
})
}
ref, resp, err = c.Build(ctx, *opts, pr, printer)
if err != nil {
@@ -374,11 +456,13 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
}
}
if err := pw.Close(); err != nil {
logrus.Debug("failed to close stdin pipe writer")
}
if err := pr.Close(); err != nil {
logrus.Debug("failed to close stdin pipe reader")
if options.invokeConfig != nil {
if err := pw.Close(); err != nil {
logrus.Debug("failed to close stdin pipe writer")
}
if err := pr.Close(); err != nil {
logrus.Debug("failed to close stdin pipe reader")
}
}
if options.invokeConfig != nil && options.invokeConfig.needsDebug(retErr) {
@@ -446,9 +530,12 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
cmd := &cobra.Command{
Use: "build [OPTIONS] PATH | URL | -",
Aliases: []string{"b"},
Short: "Start a build",
Args: cli.ExactArgs(1),
Aliases: []string{"b"},
Annotations: map[string]string{
"aliases": "docker build, docker builder build, docker image build, docker buildx b",
},
RunE: func(cmd *cobra.Command, args []string) error {
options.contextPath = args[0]
options.builder = rootOpts.builder
@@ -472,7 +559,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
options.invokeConfig = iConfig
}
return runBuild(dockerCli, *options)
return runBuild(cmd.Context(), dockerCli, *options)
},
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return nil, cobra.ShellCompDirectiveFilterDirs
@@ -487,7 +574,6 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags := cmd.Flags()
flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#add-host"})
flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
@@ -500,14 +586,12 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, `Cache export destinations (e.g., "user/app:cache", "type=local,dest=path/to/dir")`)
flags.StringVar(&options.cgroupParent, "cgroup-parent", "", `Set the parent cgroup for the "RUN" instructions during build`)
flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#cgroup-parent"})
flags.StringArrayVar(&options.contexts, "build-context", []string{}, "Additional build contexts (e.g., name=path)")
flags.StringVarP(&options.dockerfileName, "file", "f", "", `Name of the Dockerfile (default: "PATH/Dockerfile")`)
flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#file"})
flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to a file")
flags.StringArrayVar(&options.labels, "label", []string{}, "Set metadata for an image")
@@ -521,26 +605,19 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build")
if isExperimental() {
flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets)")
flags.SetAnnotation("print", "experimentalCLI", nil)
}
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`)
flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success")
flags.StringArrayVar(&options.secrets, "secret", []string{}, `Secret to expose to the build (format: "id=mysecret[,src=/local/secret]")`)
flags.Var(&options.shmSize, "shm-size", `Size of "/dev/shm"`)
flags.Var(&options.shmSize, "shm-size", `Shared memory size for build containers`)
flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`)
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`)
flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#tag"})
flags.StringVar(&options.target, "target", "", "Set the target build stage to build")
flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#target"})
options.ulimits = dockeropts.NewUlimitOpt(nil)
flags.Var(options.ulimits, "ulimit", "Ulimit options")
@@ -549,22 +626,28 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--attest=type=provenance"`)
if isExperimental() {
if confutil.IsExperimental() {
// TODO: move this to debug command if needed
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
flags.SetAnnotation("root", "experimentalCLI", nil)
flags.BoolVar(&options.Detach, "detach", false, "Detach buildx server (supported only on linux)")
flags.SetAnnotation("detach", "experimentalCLI", nil)
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
flags.SetAnnotation("server-config", "experimentalCLI", nil)
cobrautil.MarkFlagsExperimental(flags, "root", "detach", "server-config")
}
flags.StringVar(&options.printFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.VarPF(callAlias(&options.printFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
// hidden flags
var ignore string
var ignoreSlice []string
var ignoreBool bool
var ignoreInt int64
flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets)")
cobrautil.MarkFlagsExperimental(flags, "print")
flags.MarkHidden("print")
flags.BoolVar(&ignoreBool, "compress", false, "Compress the build context using gzip")
flags.MarkHidden("compress")
@@ -579,7 +662,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.BoolVar(&ignoreBool, "squash", false, "Squash newly built layers into a single new layer")
flags.MarkHidden("squash")
flags.SetAnnotation("squash", "flag-warn", []string{"experimental flag squash is removed with BuildKit. You should squash inside build using a multi-stage Dockerfile for efficiency."})
flags.SetAnnotation("squash", "experimentalCLI", nil)
cobrautil.MarkFlagsExperimental(flags, "squash")
flags.StringVarP(&ignore, "memory", "m", "", "Memory limit")
flags.MarkHidden("memory")
@@ -622,9 +705,9 @@ type commonFlags struct {
func commonBuildFlags(options *commonFlags, flags *pflag.FlagSet) {
options.noCache = flags.Bool("no-cache", false, "Do not use cache when building the image")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
options.pull = flags.Bool("pull", false, "Always attempt to pull all referenced images")
flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to the file")
flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to a file")
}
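Note: the help text above now lists "rawjson" as a progress mode; for example, `docker buildx build --progress=rawjson .` should stream the raw solve status as JSON instead of the interactive TTY display (a usage sketch inferred from the flag description, not shown elsewhere in this diff).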
func checkWarnedFlags(f *pflag.Flag) {
@@ -696,14 +779,6 @@ func (w *wrapped) Unwrap() error {
return w.err
}
func isExperimental() bool {
if v, ok := os.LookupEnv("BUILDX_EXPERIMENTAL"); ok {
vv, _ := strconv.ParseBool(v)
return vv
}
return false
}
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
@@ -749,7 +824,7 @@ func dockerUlimitToControllerUlimit(u *dockeropts.UlimitOpt) *controllerapi.Ulim
}
func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui.DisplayMode) {
if len(warnings) == 0 || mode == progressui.QuietMode {
if len(warnings) == 0 || mode == progressui.QuietMode || mode == progressui.RawJSONMode {
return
}
fmt.Fprintf(w, "\n ")
@@ -760,7 +835,7 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
fmt.Fprintf(sb, "%d warnings found", len(warnings))
}
if logrus.GetLevel() < logrus.DebugLevel {
fmt.Fprintf(sb, " (use --debug to expand)")
fmt.Fprintf(sb, " (use docker --debug to expand)")
}
fmt.Fprintf(sb, ":\n")
fmt.Fprint(w, aec.Apply(sb.String(), aec.YellowF))
@@ -788,38 +863,78 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
}
}
func printResult(f *controllerapi.PrintFunc, res map[string]string) error {
func printResult(w io.Writer, f *controllerapi.PrintFunc, res map[string]string) (int, error) {
switch f.Name {
case "outline":
return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
return 0, printValue(w, outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
case "targets":
return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
return 0, printValue(w, targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
case "subrequests.describe":
return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
return 0, printValue(w, subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
case "lint":
err := printValue(w, lint.PrintLintViolations, lint.SubrequestLintDefinition.Version, f.Format, res)
if err != nil {
return 0, err
}
lintResults := lint.LintResults{}
if result, ok := res["result.json"]; ok {
if err := json.Unmarshal([]byte(result), &lintResults); err != nil {
return 0, err
}
}
if lintResults.Error != nil {
// Print the error message and the source
// Normally, we would use `errdefs.WithSource` to attach the source to the
// error and let the error be printed by the handling that's already in place,
// but here we want to print the error in a way that's consistent with how
// the lint warnings are printed via the `lint.PrintLintViolations` function,
// which differs from the default error printing.
if f.Format != "json" && len(lintResults.Warnings) > 0 {
fmt.Fprintln(w)
}
lintBuf := bytes.NewBuffer([]byte(lintResults.Error.Message + "\n"))
sourceInfo := lintResults.Sources[lintResults.Error.Location.SourceIndex]
source := errdefs.Source{
Info: sourceInfo,
Ranges: lintResults.Error.Location.Ranges,
}
source.Print(lintBuf)
return 0, errors.New(lintBuf.String())
} else if len(lintResults.Warnings) == 0 && f.Format != "json" {
fmt.Fprintln(w, "Check complete, no warnings found.")
}
default:
if dt, ok := res["result.txt"]; ok {
fmt.Print(dt)
if dt, ok := res["result.json"]; ok && f.Format == "json" {
fmt.Fprintln(w, dt)
} else if dt, ok := res["result.txt"]; ok {
fmt.Fprint(w, dt)
} else {
log.Printf("%s %+v", f, res)
fmt.Fprintf(w, "%s %+v\n", f, res)
}
}
return nil
if v, ok := res["result.statuscode"]; !f.IgnoreStatus && ok {
if n, err := strconv.Atoi(v); err == nil && n != 0 {
return n, nil
}
}
return 0, nil
}
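For context, a minimal sketch of how a caller might consume the new (int, error) return from printResult to surface a non-zero "result.statuscode"; the variable names and the use of resp.ExporterResponse are assumptions for illustration, not taken from this diff:
// Hypothetical caller of printResult (sketch only).
exitCode, err := printResult(dockerCli.Out(), opts.PrintFunc, resp.ExporterResponse)
if err != nil {
	return err
}
if exitCode != 0 {
	os.Exit(exitCode) // e.g. a failed --call=check run
}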
type printFunc func([]byte, io.Writer) error
func printValue(printer printFunc, version string, format string, res map[string]string) error {
func printValue(w io.Writer, printer printFunc, version string, format string, res map[string]string) error {
if format == "json" {
fmt.Fprintln(os.Stdout, res["result.json"])
fmt.Fprintln(w, res["result.json"])
return nil
}
if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
// structure is too new and we don't know how to print it
fmt.Fprint(os.Stdout, res["result.txt"])
fmt.Fprint(w, res["result.txt"])
return nil
}
return printer([]byte(res["result.json"]), os.Stdout)
return printer([]byte(res["result.json"]), w)
}
type invokeConfig struct {
@@ -869,9 +984,9 @@ func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
return nil
}
csvReader := csv.NewReader(strings.NewReader(invoke))
csvReader.LazyQuotes = true
fields, err := csvReader.Read()
csvParser := csvvalue.NewParser()
csvParser.LazyQuotes = true
fields, err := csvParser.Fields(invoke, nil)
if err != nil {
return err
}
@@ -926,3 +1041,52 @@ func maybeJSONArray(v string) []string {
}
return []string{v}
}
func callAlias(target *string, value string) cobrautil.BoolFuncValue {
return func(s string) error {
v, err := strconv.ParseBool(s)
if err != nil {
return err
}
if v {
*target = value
}
return nil
}
}
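In effect, `--check` acts as a boolean alias: when the flag parses to true, callAlias rewrites options.printFunc to "check", so `docker buildx build --check .` behaves the same as `docker buildx build --call=check .` (consistent with the `Shorthand for "--call=check"` help text above).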
// timeBuildCommand starts a timer for the build command. When the returned
// function is invoked, it records the elapsed time into a metric.
func timeBuildCommand(mp metric.MeterProvider, attrs attribute.Set) func(err error) {
meter := metricutil.Meter(mp)
counter, _ := meter.Float64Counter("command.time",
metric.WithDescription("Measures the duration of the build command."),
metric.WithUnit("ms"),
)
start := time.Now()
return func(err error) {
dur := float64(time.Since(start)) / float64(time.Millisecond)
extraAttrs := attribute.NewSet()
if err != nil {
extraAttrs = attribute.NewSet(
attribute.String("error.type", otelErrorType(err)),
)
}
counter.Add(context.Background(), dur,
metric.WithAttributeSet(attrs),
metric.WithAttributeSet(extraAttrs),
)
}
}
// otelErrorType returns the error type name based on the error category.
// Unrecognized (or nil) errors are reported as "generic".
func otelErrorType(err error) string {
name := "generic"
if errors.Is(err, context.Canceled) {
name = "canceled"
}
return name
}
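A minimal usage sketch for timeBuildCommand, assuming a meter provider and attribute set are already in scope (the attribute names here are illustrative, not from this diff):
end := timeBuildCommand(meterProvider, attribute.NewSet(
	attribute.String("command.name", "build"),
))
err := runBuild(cmd.Context(), dockerCli, *options)
end(err) // records the duration in ms, tagged with error.type on failure
return err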


@@ -3,71 +3,34 @@ package commands
import (
"bytes"
"context"
"encoding/csv"
"fmt"
"net/url"
"os"
"strings"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
k8sutil "github.com/docker/buildx/driver/kubernetes/util"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dopts "github.com/docker/cli/opts"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type createOptions struct {
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
flags string
configFile string
driverOpts []string
bootstrap bool
name string
driver string
nodeName string
platform []string
actionAppend bool
actionLeave bool
use bool
driverOpts []string
buildkitdFlags string
buildkitdConfigFile string
bootstrap bool
// upgrade bool // perform upgrade of the driver
}
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
ctx := appcontext.Context()
if in.name == "default" {
return errors.Errorf("default is a reserved name and cannot be used to identify builder instance")
}
if in.actionLeave {
if in.name == "" {
return errors.Errorf("leave requires instance name")
}
if in.nodeName == "" {
return errors.Errorf("leave requires node name but --node not set")
}
}
if in.actionAppend {
if in.name == "" {
logrus.Warnf("append used without name, creating a new instance instead")
}
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
@@ -75,232 +38,34 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
// Ensure the file lock gets released no matter what happens.
defer release()
name := in.name
if name == "" {
name, err = store.GenerateName(txn)
if err != nil {
return err
}
}
if !in.actionLeave && !in.actionAppend {
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return err
}
for _, c := range contexts {
if c.Name == name {
logrus.Warnf("instance name %q already exists as context builder", name)
break
}
}
}
ng, err := txn.NodeGroupByName(name)
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
if in.actionAppend && in.name != "" {
logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
}
if in.actionLeave {
return errors.Errorf("failed to find instance %q for leave", in.name)
}
} else {
return err
}
}
buildkitHost := os.Getenv("BUILDKIT_HOST")
driverName := in.driver
if driverName == "" {
if ng != nil {
driverName = ng.Driver
} else if len(args) == 0 && buildkitHost != "" {
driverName = "remote"
} else {
var arg string
if len(args) > 0 {
arg = args[0]
}
f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true, nil)
if err != nil {
return err
}
if f == nil {
return errors.Errorf("no valid drivers found")
}
driverName = f.Name()
}
}
if ng != nil {
if in.nodeName == "" && !in.actionAppend {
return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
}
if driverName != ng.Driver {
return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
}
}
if _, err := driver.GetFactory(driverName, true); err != nil {
return err
}
ngOriginal := ng
if ngOriginal != nil {
ngOriginal = ngOriginal.Copy()
}
if ng == nil {
ng = &store.NodeGroup{
Name: name,
Driver: driverName,
}
}
var flags []string
if in.flags != "" {
flags, err = shlex.Split(in.flags)
if err != nil {
return errors.Wrap(err, "failed to parse buildkit flags")
}
if in.actionLeave {
return builder.Leave(ctx, txn, dockerCli, builder.LeaveOpts{
Name: in.name,
NodeName: in.nodeName,
})
}
var ep string
var setEp bool
if in.actionLeave {
if err := ng.Leave(in.nodeName); err != nil {
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
if err != nil {
return err
}
if err := ls.RemoveBuilderNode(ng.Name, in.nodeName); err != nil {
return err
}
} else {
switch {
case driverName == "kubernetes":
if len(args) > 0 {
logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
}
// generate node name if not provided to avoid duplicated endpoint
// error: https://github.com/docker/setup-buildx-action/issues/215
nodeName := in.nodeName
if nodeName == "" {
nodeName, err = k8sutil.GenerateNodeName(name, txn)
if err != nil {
return err
}
}
// name the endpoint so that --append works
ep = (&url.URL{
Scheme: driverName,
Path: "/" + name,
RawQuery: (&url.Values{
"deployment": {nodeName},
"kubeconfig": {os.Getenv("KUBECONFIG")},
}).Encode(),
}).String()
setEp = false
case driverName == "remote":
if len(args) > 0 {
ep = args[0]
} else if buildkitHost != "" {
ep = buildkitHost
} else {
return errors.Errorf("no remote endpoint provided")
}
ep, err = validateBuildkitEndpoint(ep)
if err != nil {
return err
}
setEp = true
case len(args) > 0:
ep, err = validateEndpoint(dockerCli, args[0])
if err != nil {
return err
}
setEp = true
default:
if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
}
ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
setEp = false
}
m, err := csvToMap(in.driverOpts)
if err != nil {
return err
}
if in.configFile == "" {
// if buildkit config is not provided, check if the default one is
// available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
logrus.Warnf("Using default BuildKit config in %s", f)
in.configFile = f
}
}
if err := ng.Update(in.nodeName, ep, in.platform, setEp, in.actionAppend, flags, in.configFile, m); err != nil {
return err
}
if len(args) > 0 {
ep = args[0]
}
if err := txn.Save(ng); err != nil {
return err
}
b, err := builder.New(dockerCli,
builder.WithName(ng.Name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
b, err := builder.Create(ctx, txn, dockerCli, builder.CreateOpts{
Name: in.name,
Driver: in.driver,
NodeName: in.nodeName,
Platforms: in.platform,
DriverOpts: in.driverOpts,
BuildkitdFlags: in.buildkitdFlags,
BuildkitdConfigFile: in.buildkitdConfigFile,
Use: in.use,
Endpoint: ep,
Append: in.actionAppend,
})
if err != nil {
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if err != nil {
return err
}
for _, node := range nodes {
if err := node.Err; err != nil {
err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
var err2 error
if ngOriginal == nil {
err2 = txn.Remove(ng.Name)
} else {
err2 = txn.Save(ngOriginal)
}
if err2 != nil {
logrus.Warnf("Could not rollback to previous state: %s", err2)
}
return err
}
}
if in.use && ep != "" {
current, err := dockerutil.GetCurrentEndpoint(dockerCli)
if err != nil {
return err
}
if err := txn.SetCurrent(current, ng.Name, false, false); err != nil {
return err
}
}
// The store is no longer used from this point.
// Release it so we aren't holding the file lock during the boot.
release()
@@ -311,7 +76,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
}
fmt.Printf("%s\n", ng.Name)
fmt.Printf("%s\n", b.Name)
return nil
}
@@ -331,7 +96,7 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
Short: "Create a new builder instance",
Args: cli.RequiresMaxArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(dockerCli, options, args)
return runCreate(cmd.Context(), dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
@@ -341,12 +106,16 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
flags.StringVar(&options.name, "name", "", "Builder instance name")
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")
flags.StringVar(&options.buildkitdFlags, "buildkitd-flags", "", "BuildKit daemon flags")
// we allow for both "--config" and "--buildkitd-config", although the latter is the recommended way to avoid ambiguity.
flags.StringVar(&options.buildkitdConfigFile, "buildkitd-config", "", "BuildKit daemon config file")
flags.StringVar(&options.buildkitdConfigFile, "config", "", "BuildKit daemon config file")
flags.MarkHidden("config")
flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")
flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
flags.BoolVar(&options.use, "use", false, "Set the current builder instance")
@@ -356,49 +125,3 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func csvToMap(in []string) (map[string]string, error) {
if len(in) == 0 {
return nil, nil
}
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}
// validateEndpoint validates that endpoint is either a context or a docker host
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
if err == nil && dem != nil {
if ep == "default" {
return dem.Host, nil
}
return ep, nil
}
h, err := dopts.ParseHost(true, ep)
if err != nil {
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
}
return h, nil
}
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
func validateBuildkitEndpoint(ep string) (string, error) {
if err := remoteutil.IsValidEndpoint(ep); err != nil {
return "", err
}
return ep, nil
}


@@ -1,26 +0,0 @@
package commands
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestCsvToMap(t *testing.T) {
d := []string{
"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
"namespace=default",
}
r, err := csvToMap(d)
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
}


@@ -10,6 +10,7 @@ import (
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
@@ -42,9 +43,6 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
Annotations: map[string]string{
"experimentalCLI": "",
},
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
@@ -73,20 +71,18 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.SetAnnotation("invoke", "experimentalCLI", nil)
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.SetAnnotation("on", "experimentalCLI", nil)
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.SetAnnotation("root", "experimentalCLI", nil)
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.SetAnnotation("detach", "experimentalCLI", nil)
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.SetAnnotation("server-config", "experimentalCLI", nil)
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))

commands/dial_stdio.go (new file)

@@ -0,0 +1,131 @@
package commands
import (
"io"
"net"
"os"
"github.com/containerd/platforms"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type stdioOptions struct {
builder string
platform string
progress string
}
func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
ctx := appcontext.Context()
contextPathHash, _ := os.Getwd()
b, err := builder.New(dockerCli,
builder.WithName(opts.builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
printer, err := progress.NewPrinter(ctx, os.Stderr, progressui.DisplayMode(opts.progress), progress.WithPhase("dial-stdio"), progress.WithDesc("builder: "+b.Name, "builder:"+b.Name))
if err != nil {
return err
}
var p *v1.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {
return errors.Wrapf(err, "invalid platform %q", opts.platform)
}
p = &pp
}
defer printer.Wait()
return progress.Wrap("Proxying to builder", printer.Write, func(sub progress.SubLogger) error {
var conn net.Conn
err := sub.Wrap("Dialing builder", func() error {
conn, err = build.Dial(ctx, nodes, printer, p)
if err != nil {
return err
}
return nil
})
if err != nil {
return err
}
defer conn.Close()
go func() {
<-ctx.Done()
closeWrite(conn)
}()
var eg errgroup.Group
eg.Go(func() error {
_, err := io.Copy(conn, os.Stdin)
closeWrite(conn)
return err
})
eg.Go(func() error {
_, err := io.Copy(os.Stdout, conn)
closeRead(conn)
return err
})
return eg.Wait()
})
}
func closeRead(conn net.Conn) error {
if c, ok := conn.(interface{ CloseRead() error }); ok {
return c.CloseRead()
}
return conn.Close()
}
func closeWrite(conn net.Conn) error {
if c, ok := conn.(interface{ CloseWrite() error }); ok {
return c.CloseWrite()
}
return conn.Close()
}
func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
opts := stdioOptions{}
cmd := &cobra.Command{
Use: "dial-stdio",
Short: "Proxy current stdio streams to builder instance",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
opts.builder = rootOpts.builder
return runDialStdio(dockerCli, opts)
},
}
flags := cmd.Flags()
flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
return cmd
}


@@ -1,6 +1,7 @@
package commands
import (
"context"
"fmt"
"io"
"os"
@@ -15,7 +16,6 @@ import (
"github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
@@ -26,9 +26,7 @@ type duOptions struct {
verbose bool
}
func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
ctx := appcontext.Context()
func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil {
return err
@@ -114,7 +112,7 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runDiskUsage(dockerCli, options)
return runDiskUsage(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}


@@ -9,11 +9,11 @@ import (
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@@ -30,9 +30,10 @@ type createOptions struct {
dryrun bool
actionAppend bool
progress string
preferIndex bool
}
func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
if len(args) == 0 && len(in.files) == 0 {
return errors.Errorf("no sources specified")
}
@@ -113,8 +114,6 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
}
ctx := appcontext.Context()
b, err := builder.New(dockerCli, builder.WithName(in.builder))
if err != nil {
return err
@@ -156,7 +155,12 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
}
}
dt, desc, err := r.Combine(ctx, srcs, in.annotations)
annotations, err := buildflags.ParseAnnotations(in.annotations)
if err != nil {
return errors.Wrapf(err, "failed to parse annotations")
}
dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
if err != nil {
return err
}
@@ -274,7 +278,7 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
return runCreate(dockerCli, options, args)
return runCreate(cmd.Context(), dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
}
@@ -284,8 +288,9 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")
return cmd
}


@@ -1,13 +1,14 @@
package commands
import (
"context"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -18,9 +19,7 @@ type inspectOptions struct {
raw bool
}
func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
ctx := appcontext.Context()
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions, name string) error {
if in.format != "" && in.raw {
return errors.Errorf("format and raw cannot be used together")
}
@@ -51,7 +50,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
Args: cli.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runInspect(dockerCli, options, args[0])
return runInspect(cmd.Context(), dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
}


@@ -17,7 +17,6 @@ import (
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -26,9 +25,7 @@ type inspectOptions struct {
builder string
}
func runInspect(dockerCli command.Cli, in inspectOptions) error {
ctx := appcontext.Context()
func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) error {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -87,11 +84,11 @@ func runInspect(dockerCli command.Cli, in inspectOptions) error {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
if len(n.BuildkitdFlags) > 0 {
fmt.Fprintf(w, "BuildKit daemon flags:\t%s\n", strings.Join(n.BuildkitdFlags, " "))
}
if nodes[i].Version != "" {
fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
fmt.Fprintf(w, "BuildKit version:\t%s\n", nodes[i].Version)
}
platforms := platformutil.FormatInGroups(n.Node.Platforms, n.Platforms)
if len(platforms) > 0 {
@@ -150,7 +147,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runInspect(dockerCli, options)
return runInspect(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}


@@ -15,7 +15,7 @@ import (
type installOptions struct {
}
func runInstall(dockerCli command.Cli, in installOptions) error {
func runInstall(_ command.Cli, _ installOptions) error {
dir := config.Dir()
if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "could not create docker config")


@@ -2,30 +2,43 @@ package commands
import (
"context"
"encoding/json"
"fmt"
"io"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/docker/cli/cli/command/formatter"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
lsNameNodeHeader = "NAME/NODE"
lsDriverEndpointHeader = "DRIVER/ENDPOINT"
lsStatusHeader = "STATUS"
lsLastActivityHeader = "LAST ACTIVITY"
lsBuildkitHeader = "BUILDKIT"
lsPlatformsHeader = "PLATFORMS"
lsIndent = ` \_ `
lsDefaultTableFormat = "table {{.Name}}\t{{.DriverEndpoint}}\t{{.Status}}\t{{.Buildkit}}\t{{.Platforms}}"
)
type lsOptions struct {
format string
}
func runLs(dockerCli command.Cli, in lsOptions) error {
ctx := appcontext.Context()
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
@@ -59,22 +72,9 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
return err
}
w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
printErr := false
for _, b := range builders {
if current.Name == b.Name {
b.Name += " *"
}
if ok := printBuilder(w, b); !ok {
printErr = true
}
}
w.Flush()
if printErr {
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
return err
} else if hasErrors {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
for _, b := range builders {
if b.Err() != nil {
@@ -92,31 +92,6 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
return nil
}
func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
ok = true
var err string
if b.Err() != nil {
ok = false
err = "error"
}
fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
if b.Err() == nil {
for _, n := range b.Nodes() {
var status string
if n.DriverInfo != nil {
status = n.DriverInfo.Status.String()
}
if n.Err != nil {
ok = false
fmt.Fprintf(w, " %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
} else {
fmt.Fprintf(w, " %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
}
}
}
return
}
func lsCmd(dockerCli command.Cli) *cobra.Command {
var options lsOptions
@@ -125,13 +100,175 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
Short: "List builder instances",
Args: cli.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(dockerCli, options)
return runLs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if format == formatter.TableFormatKey {
format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(format),
}
sort.SliceStable(builders, func(i, j int) bool {
ierr := builders[i].Err() != nil
jerr := builders[j].Err() != nil
if ierr && !jerr {
return false
} else if !ierr && jerr {
return true
}
return i < j
})
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
format: ctx.Format,
}); err != nil {
return err
}
if b.Err() != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
continue
}
for _, n := range b.Nodes() {
if n.Err != nil {
if ctx.Format.IsTable() {
hasErrors = true
}
}
if err := format(&lsContext{
format: ctx.Format,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
node: n,
}); err != nil {
return err
}
}
}
return nil
}
lsCtx := lsContext{}
lsCtx.Header = formatter.SubHeaderContext{
"Name": lsNameNodeHeader,
"DriverEndpoint": lsDriverEndpointHeader,
"LastActivity": lsLastActivityHeader,
"Status": lsStatusHeader,
"Buildkit": lsBuildkitHeader,
"Platforms": lsPlatformsHeader,
}
return hasErrors, ctx.Write(&lsCtx, render)
}
type lsBuilder struct {
*builder.Builder
Current bool
}
type lsContext struct {
formatter.HeaderContext
Builder *lsBuilder
format formatter.Format
node builder.Node
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
}
func (c *lsContext) Name() string {
if c.node.Name == "" {
name := c.Builder.Name
if c.Builder.Current && c.format.IsTable() {
name += "*"
}
return name
}
if c.format.IsTable() {
return lsIndent + c.node.Name
}
return c.node.Name
}
func (c *lsContext) DriverEndpoint() string {
if c.node.Name == "" {
return c.Builder.Driver
}
if c.format.IsTable() {
return lsIndent + c.node.Endpoint
}
return c.node.Endpoint
}
func (c *lsContext) LastActivity() string {
if c.node.Name != "" || c.Builder.LastActivity.IsZero() {
return ""
}
return c.Builder.LastActivity.UTC().Format(time.RFC3339)
}
func (c *lsContext) Status() string {
if c.node.Name == "" {
if c.Builder.Err() != nil {
return "error"
}
return ""
}
if c.node.Err != nil {
return "error"
}
if c.node.DriverInfo != nil {
return c.node.DriverInfo.Status.String()
}
return ""
}
func (c *lsContext) Buildkit() string {
if c.node.Name == "" {
return ""
}
return c.node.Version
}
func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
}
func (c *lsContext) Error() string {
if c.node.Name != "" && c.node.Err != nil {
return c.node.Err.Error()
} else if err := c.Builder.Err(); err != nil {
return err.Error()
}
return ""
}
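Because the template keys map to the lsContext methods above (Name, DriverEndpoint, Status, Buildkit, Platforms, LastActivity), a custom table can presumably be requested through the new --format flag, e.g. `docker buildx ls --format 'table {{.Name}}\t{{.Status}}\t{{.Platforms}}'` (a sketch assuming standard docker formatter semantics; the default template is lsDefaultTableFormat shown above).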


@@ -1,6 +1,7 @@
package commands
import (
"context"
"fmt"
"os"
"strings"
@@ -15,7 +16,6 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@@ -35,9 +35,7 @@ const (
allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
)
func runPrune(dockerCli command.Cli, opts pruneOptions) error {
ctx := appcontext.Context()
func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) error {
pruneFilters := opts.filter.Value()
pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
@@ -51,8 +49,12 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
warning = allCacheWarning
}
if !opts.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), warning) {
return nil
if !opts.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), warning); err != nil {
return err
} else if !ok {
return nil
}
}
b, err := builder.New(dockerCli, builder.WithName(opts.builder))
@@ -138,7 +140,7 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
return runPrune(dockerCli, options)
return runPrune(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
}
@@ -193,6 +195,8 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
case 1:
if filterKey == "id" {
filters = append(filters, filterKey+"~="+values[0])
} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
filters = append(filters, filterKey+"="+values[0])
} else {
filters = append(filters, filterKey+"=="+values[0])
}


@@ -9,16 +9,14 @@ import (
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builder string
builders []string
keepState bool
keepDaemon bool
allInactive bool
@@ -29,11 +27,13 @@ const (
rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
)
func runRm(dockerCli command.Cli, in rmOptions) error {
ctx := appcontext.Context()
if in.allInactive && !in.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), rmInactiveWarning) {
return nil
func runRm(ctx context.Context, dockerCli command.Cli, in rmOptions) error {
if in.allInactive && !in.force {
if ok, err := prompt(ctx, dockerCli.In(), dockerCli.Out(), rmInactiveWarning); err != nil {
return err
} else if !ok {
return nil
}
}
txn, release, err := storeutil.GetStore(dockerCli)
@@ -46,33 +46,52 @@ func runRm(dockerCli command.Cli, in rmOptions) error {
return rmAllInactive(ctx, txn, dockerCli, in)
}
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
eg, _ := errgroup.WithContext(ctx)
for _, name := range in.builders {
func(name string) {
eg.Go(func() (err error) {
defer func() {
if err == nil {
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", name)
} else {
_, _ = fmt.Fprintf(dockerCli.Err(), "failed to remove %s: %v\n", name, err)
}
}()
b, err := builder.New(dockerCli,
builder.WithName(name),
builder.WithStore(txn),
builder.WithSkippedValidation(),
)
if err != nil {
return err
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
return nil
})
}(name)
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return err
if err := eg.Wait(); err != nil {
return errors.New("failed to remove one or more builders")
}
if cb := b.ContextName(); cb != "" {
return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
}
err1 := rm(ctx, nodes, in)
if err := txn.Remove(b.Name); err != nil {
return err
}
if err1 != nil {
return err1
}
_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
return nil
}
@@ -80,25 +99,24 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [NAME]",
Short: "Remove a builder instance",
Args: cli.RequiresMaxArgs(1),
Use: "rm [OPTIONS] [NAME] [NAME...]",
Short: "Remove one or more builder instances",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = rootOpts.builder
options.builders = []string{rootOpts.builder}
if len(args) > 0 {
if options.allInactive {
return errors.New("cannot specify builder name when --all-inactive is set")
}
options.builder = args[0]
options.builders = args
}
return runRm(dockerCli, options)
return runRm(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}
flags := cmd.Flags()
flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the buildkitd daemon running")
flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the BuildKit daemon running")
flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")


@@ -7,12 +7,14 @@ import (
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/moby/buildkit/util/appcontext"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -29,12 +31,15 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
}
if isPlugin {
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
}
return plugin.PersistentPreRunE(cmd, args)
}
} else {
},
}
if !isPlugin {
// match plugin behavior for standalone mode
// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
cmd.SilenceUsage = true
@@ -59,6 +64,10 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
"using default config store",
))
if !confutil.IsExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
}
addCommands(cmd, dockerCli)
return cmd
}
@@ -75,6 +84,7 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
buildCmd(dockerCli, opts, nil),
bakeCmd(dockerCli, opts),
createCmd(dockerCli),
dialStdioCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
lsCmd(dockerCli),
useCmd(dockerCli, opts),
@@ -87,7 +97,7 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if isExperimental() {
if confutil.IsExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))


@@ -7,7 +7,6 @@ import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/spf13/cobra"
)
@@ -15,9 +14,7 @@ type stopOptions struct {
builder string
}
func runStop(dockerCli command.Cli, in stopOptions) error {
ctx := appcontext.Context()
func runStop(ctx context.Context, dockerCli command.Cli, in stopOptions) error {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithSkippedValidation(),
@@ -45,7 +42,7 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if len(args) > 0 {
options.builder = args[0]
}
return runStop(dockerCli, options)
return runStop(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
}


@@ -15,7 +15,7 @@ import (
type uninstallOptions struct {
}
func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
func runUninstall(_ command.Cli, _ uninstallOptions) error {
dir := config.Dir()
cfg, err := config.Load(dir)
if err != nil {

commands/util.go (new file)

@@ -0,0 +1,57 @@
package commands
import (
"bufio"
"context"
"fmt"
"io"
"os"
"runtime"
"strings"
"github.com/docker/cli/cli/streams"
)
func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
done := make(chan struct{})
var ok bool
go func() {
ok = promptForConfirmation(ins, out, msg)
close(done)
}()
select {
case <-ctx.Done():
return false, context.Cause(ctx)
case <-done:
return ok, nil
}
}
// promptForConfirmation requests and checks confirmation from user.
// This will display the provided message followed by ' [y/N] '. If
// the user inputs 'y' or 'Y' it returns true, otherwise false. If no
// message is provided "Are you sure you want to proceed? [y/N] "
// will be used instead.
//
// Copied from github.com/docker/cli since the upstream version changed
// recently with an incompatible change.
//
// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
// for discussion on the issue.
func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
if message == "" {
message = "Are you sure you want to proceed?"
}
message += " [y/N] "
_, _ = fmt.Fprint(outs, message)
// On Windows, force the use of the regular OS stdin stream.
if runtime.GOOS == "windows" {
ins = streams.NewIn(os.Stdin)
}
reader := bufio.NewReader(ins)
answer, _, _ := reader.ReadLine()
return strings.ToLower(string(answer)) == "y"
}
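The goroutine-plus-select wrapper in prompt exists because a blocking read on stdin cannot be interrupted directly; selecting on ctx.Done() lets the command return as soon as the context is cancelled. A minimal usage sketch, mirroring the runPrune/runRm call sites earlier in this changeset:
ok, err := prompt(cmd.Context(), dockerCli.In(), dockerCli.Out(), "Are you sure you want to continue?")
if err != nil {
	return err // context was cancelled while waiting for input
}
if !ok {
	return nil
}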


@@ -11,7 +11,7 @@ import (
"github.com/spf13/cobra"
)
func runVersion(dockerCli command.Cli) error {
func runVersion(_ command.Cli) error {
fmt.Println(version.Package, version.Version, version.Revision)
return nil
}


@@ -21,7 +21,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/config"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/docker/docker/api/types/container"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors"
@@ -53,20 +53,21 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
InStream: inStream,
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
}
platforms, err := platformutil.Parse(in.Platforms)
@@ -99,44 +100,45 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
return nil, nil, err
}
if in.ExportPush {
if in.ExportLoad {
return nil, nil, errors.Errorf("push and load may not be set together at the moment")
var pushUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterImage {
outputs[i].Attrs["push"] = "true"
pushUsed = true
}
}
if len(outputs) == 0 {
outputs = []client.ExportEntry{{
Type: "image",
if !pushUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterImage,
Attrs: map[string]string{
"push": "true",
},
}}
} else {
switch outputs[0].Type {
case "image":
outputs[0].Attrs["push"] = "true"
default:
return nil, nil, errors.Errorf("push and %q output can't be used together", outputs[0].Type)
}
})
}
}
if in.ExportLoad {
if len(outputs) == 0 {
outputs = []client.ExportEntry{{
Type: "docker",
Attrs: map[string]string{},
}}
} else {
switch outputs[0].Type {
case "docker":
default:
return nil, nil, errors.Errorf("load and %q output can't be used together", outputs[0].Type)
var loadUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterDocker {
if _, ok := outputs[i].Attrs["dest"]; !ok {
loadUsed = true
break
}
}
}
if !loadUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterDocker,
Attrs: map[string]string{},
})
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, err
return nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
@@ -160,8 +162,9 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
if in.PrintFunc != nil {
opts.PrintFunc = &build.PrintFunc{
Name: in.PrintFunc.Name,
Format: in.PrintFunc.Format,
Name: in.PrintFunc.Name,
Format: in.PrintFunc.Format,
IgnoreStatus: in.PrintFunc.IgnoreStatus,
}
}
@@ -187,7 +190,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
return nil, nil, err
}
resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
resp, res, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
@@ -201,7 +204,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle
var resp map[string]*client.SolveResponse
var err error
@@ -268,9 +271,9 @@ func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.Ul
if u == nil {
return nil
}
values := make(map[string]*units.Ulimit)
values := make(map[string]*container.Ulimit)
for k, v := range u.Values {
values[k] = &units.Ulimit{
values[k] = &container.Ulimit{
Name: v.Name,
Hard: v.Hard,
Soft: v.Soft,


@@ -271,40 +271,41 @@ func (m *BuildRequest) GetOptions() *BuildOptions {
}
type BuildOptions struct {
ContextPath string `protobuf:"bytes,1,opt,name=ContextPath,proto3" json:"ContextPath,omitempty"`
DockerfileName string `protobuf:"bytes,2,opt,name=DockerfileName,proto3" json:"DockerfileName,omitempty"`
PrintFunc *PrintFunc `protobuf:"bytes,3,opt,name=PrintFunc,proto3" json:"PrintFunc,omitempty"`
NamedContexts map[string]string `protobuf:"bytes,4,rep,name=NamedContexts,proto3" json:"NamedContexts,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
Allow []string `protobuf:"bytes,5,rep,name=Allow,proto3" json:"Allow,omitempty"`
Attests []*Attest `protobuf:"bytes,6,rep,name=Attests,proto3" json:"Attests,omitempty"`
BuildArgs map[string]string `protobuf:"bytes,7,rep,name=BuildArgs,proto3" json:"BuildArgs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
CacheFrom []*CacheOptionsEntry `protobuf:"bytes,8,rep,name=CacheFrom,proto3" json:"CacheFrom,omitempty"`
CacheTo []*CacheOptionsEntry `protobuf:"bytes,9,rep,name=CacheTo,proto3" json:"CacheTo,omitempty"`
CgroupParent string `protobuf:"bytes,10,opt,name=CgroupParent,proto3" json:"CgroupParent,omitempty"`
Exports []*ExportEntry `protobuf:"bytes,11,rep,name=Exports,proto3" json:"Exports,omitempty"`
ExtraHosts []string `protobuf:"bytes,12,rep,name=ExtraHosts,proto3" json:"ExtraHosts,omitempty"`
Labels map[string]string `protobuf:"bytes,13,rep,name=Labels,proto3" json:"Labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
NetworkMode string `protobuf:"bytes,14,opt,name=NetworkMode,proto3" json:"NetworkMode,omitempty"`
NoCacheFilter []string `protobuf:"bytes,15,rep,name=NoCacheFilter,proto3" json:"NoCacheFilter,omitempty"`
Platforms []string `protobuf:"bytes,16,rep,name=Platforms,proto3" json:"Platforms,omitempty"`
Secrets []*Secret `protobuf:"bytes,17,rep,name=Secrets,proto3" json:"Secrets,omitempty"`
ShmSize int64 `protobuf:"varint,18,opt,name=ShmSize,proto3" json:"ShmSize,omitempty"`
SSH []*SSH `protobuf:"bytes,19,rep,name=SSH,proto3" json:"SSH,omitempty"`
Tags []string `protobuf:"bytes,20,rep,name=Tags,proto3" json:"Tags,omitempty"`
Target string `protobuf:"bytes,21,opt,name=Target,proto3" json:"Target,omitempty"`
Ulimits *UlimitOpt `protobuf:"bytes,22,opt,name=Ulimits,proto3" json:"Ulimits,omitempty"`
Builder string `protobuf:"bytes,23,opt,name=Builder,proto3" json:"Builder,omitempty"`
NoCache bool `protobuf:"varint,24,opt,name=NoCache,proto3" json:"NoCache,omitempty"`
Pull bool `protobuf:"varint,25,opt,name=Pull,proto3" json:"Pull,omitempty"`
ExportPush bool `protobuf:"varint,26,opt,name=ExportPush,proto3" json:"ExportPush,omitempty"`
ExportLoad bool `protobuf:"varint,27,opt,name=ExportLoad,proto3" json:"ExportLoad,omitempty"`
SourcePolicy *pb.Policy `protobuf:"bytes,28,opt,name=SourcePolicy,proto3" json:"SourcePolicy,omitempty"`
Ref string `protobuf:"bytes,29,opt,name=Ref,proto3" json:"Ref,omitempty"`
GroupRef string `protobuf:"bytes,30,opt,name=GroupRef,proto3" json:"GroupRef,omitempty"`
Annotations []string `protobuf:"bytes,31,rep,name=Annotations,proto3" json:"Annotations,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
ContextPath string `protobuf:"bytes,1,opt,name=ContextPath,proto3" json:"ContextPath,omitempty"`
DockerfileName string `protobuf:"bytes,2,opt,name=DockerfileName,proto3" json:"DockerfileName,omitempty"`
PrintFunc *PrintFunc `protobuf:"bytes,3,opt,name=PrintFunc,proto3" json:"PrintFunc,omitempty"`
NamedContexts map[string]string `protobuf:"bytes,4,rep,name=NamedContexts,proto3" json:"NamedContexts,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
Allow []string `protobuf:"bytes,5,rep,name=Allow,proto3" json:"Allow,omitempty"`
Attests []*Attest `protobuf:"bytes,6,rep,name=Attests,proto3" json:"Attests,omitempty"`
BuildArgs map[string]string `protobuf:"bytes,7,rep,name=BuildArgs,proto3" json:"BuildArgs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
CacheFrom []*CacheOptionsEntry `protobuf:"bytes,8,rep,name=CacheFrom,proto3" json:"CacheFrom,omitempty"`
CacheTo []*CacheOptionsEntry `protobuf:"bytes,9,rep,name=CacheTo,proto3" json:"CacheTo,omitempty"`
CgroupParent string `protobuf:"bytes,10,opt,name=CgroupParent,proto3" json:"CgroupParent,omitempty"`
Exports []*ExportEntry `protobuf:"bytes,11,rep,name=Exports,proto3" json:"Exports,omitempty"`
ExtraHosts []string `protobuf:"bytes,12,rep,name=ExtraHosts,proto3" json:"ExtraHosts,omitempty"`
Labels map[string]string `protobuf:"bytes,13,rep,name=Labels,proto3" json:"Labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
NetworkMode string `protobuf:"bytes,14,opt,name=NetworkMode,proto3" json:"NetworkMode,omitempty"`
NoCacheFilter []string `protobuf:"bytes,15,rep,name=NoCacheFilter,proto3" json:"NoCacheFilter,omitempty"`
Platforms []string `protobuf:"bytes,16,rep,name=Platforms,proto3" json:"Platforms,omitempty"`
Secrets []*Secret `protobuf:"bytes,17,rep,name=Secrets,proto3" json:"Secrets,omitempty"`
ShmSize int64 `protobuf:"varint,18,opt,name=ShmSize,proto3" json:"ShmSize,omitempty"`
SSH []*SSH `protobuf:"bytes,19,rep,name=SSH,proto3" json:"SSH,omitempty"`
Tags []string `protobuf:"bytes,20,rep,name=Tags,proto3" json:"Tags,omitempty"`
Target string `protobuf:"bytes,21,opt,name=Target,proto3" json:"Target,omitempty"`
Ulimits *UlimitOpt `protobuf:"bytes,22,opt,name=Ulimits,proto3" json:"Ulimits,omitempty"`
Builder string `protobuf:"bytes,23,opt,name=Builder,proto3" json:"Builder,omitempty"`
NoCache bool `protobuf:"varint,24,opt,name=NoCache,proto3" json:"NoCache,omitempty"`
Pull bool `protobuf:"varint,25,opt,name=Pull,proto3" json:"Pull,omitempty"`
ExportPush bool `protobuf:"varint,26,opt,name=ExportPush,proto3" json:"ExportPush,omitempty"`
ExportLoad bool `protobuf:"varint,27,opt,name=ExportLoad,proto3" json:"ExportLoad,omitempty"`
SourcePolicy *pb.Policy `protobuf:"bytes,28,opt,name=SourcePolicy,proto3" json:"SourcePolicy,omitempty"`
Ref string `protobuf:"bytes,29,opt,name=Ref,proto3" json:"Ref,omitempty"`
GroupRef string `protobuf:"bytes,30,opt,name=GroupRef,proto3" json:"GroupRef,omitempty"`
Annotations []string `protobuf:"bytes,31,rep,name=Annotations,proto3" json:"Annotations,omitempty"`
ProvenanceResponseMode string `protobuf:"bytes,32,opt,name=ProvenanceResponseMode,proto3" json:"ProvenanceResponseMode,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *BuildOptions) Reset() { *m = BuildOptions{} }
@@ -548,6 +549,13 @@ func (m *BuildOptions) GetAnnotations() []string {
return nil
}
func (m *BuildOptions) GetProvenanceResponseMode() string {
if m != nil {
return m.ProvenanceResponseMode
}
return ""
}
type ExportEntry struct {
Type string `protobuf:"bytes,1,opt,name=Type,proto3" json:"Type,omitempty"`
Attrs map[string]string `protobuf:"bytes,2,rep,name=Attrs,proto3" json:"Attrs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
@@ -805,6 +813,7 @@ func (m *Secret) GetEnv() string {
type PrintFunc struct {
Name string `protobuf:"bytes,1,opt,name=Name,proto3" json:"Name,omitempty"`
Format string `protobuf:"bytes,2,opt,name=Format,proto3" json:"Format,omitempty"`
IgnoreStatus bool `protobuf:"varint,3,opt,name=IgnoreStatus,proto3" json:"IgnoreStatus,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
@@ -848,6 +857,13 @@ func (m *PrintFunc) GetFormat() string {
return ""
}
func (m *PrintFunc) GetIgnoreStatus() bool {
if m != nil {
return m.IgnoreStatus
}
return false
}
type InspectRequest struct {
Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
@@ -2078,128 +2094,130 @@ func init() {
func init() { proto.RegisterFile("controller.proto", fileDescriptor_ed7f10298fa1d90f) }
var fileDescriptor_ed7f10298fa1d90f = []byte{
// 1922 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x58, 0x5f, 0x73, 0x1b, 0x49,
0x11, 0x67, 0x25, 0x59, 0x7f, 0x5a, 0x96, 0xcf, 0x19, 0x9c, 0x30, 0xd9, 0xe4, 0x12, 0x67, 0x93,
0x1c, 0x2a, 0x42, 0xc9, 0x77, 0x3e, 0x82, 0x2f, 0x97, 0xbb, 0x2a, 0x6c, 0xd9, 0xc2, 0xbe, 0x4a,
0x6c, 0xd7, 0xca, 0xc9, 0x15, 0x50, 0xc5, 0xd5, 0x5a, 0x1a, 0xcb, 0x5b, 0x5a, 0xed, 0x88, 0x9d,
0x91, 0x6d, 0xf1, 0xc4, 0x03, 0xbc, 0x51, 0x14, 0x5f, 0x83, 0xe2, 0x23, 0xf0, 0xc4, 0x37, 0xe2,
0x23, 0x50, 0xd3, 0x33, 0xbb, 0x5a, 0x59, 0x5a, 0xd9, 0x86, 0x27, 0x4d, 0xf7, 0xfe, 0xba, 0x7b,
0xba, 0xa7, 0xa7, 0xbb, 0x47, 0xb0, 0xda, 0xe1, 0xa1, 0x8c, 0x78, 0x10, 0xb0, 0xa8, 0x31, 0x8c,
0xb8, 0xe4, 0x64, 0xed, 0x74, 0xe4, 0x07, 0xdd, 0xab, 0x46, 0xea, 0xc3, 0xc5, 0x17, 0xf6, 0xdb,
0x9e, 0x2f, 0xcf, 0x47, 0xa7, 0x8d, 0x0e, 0x1f, 0x6c, 0x0c, 0xf8, 0xe9, 0x78, 0x03, 0x51, 0x7d,
0x5f, 0x6e, 0x78, 0x43, 0x7f, 0x43, 0xb0, 0xe8, 0xc2, 0xef, 0x30, 0xb1, 0x61, 0x84, 0xe2, 0x5f,
0xad, 0xd2, 0x7e, 0x9d, 0x29, 0x2c, 0xf8, 0x28, 0xea, 0xb0, 0x21, 0x0f, 0xfc, 0xce, 0x78, 0x63,
0x78, 0xba, 0xa1, 0x57, 0x5a, 0xcc, 0xa9, 0xc3, 0xda, 0x3b, 0x5f, 0xc8, 0xe3, 0x88, 0x77, 0x98,
0x10, 0x4c, 0xb8, 0xec, 0x0f, 0x23, 0x26, 0x24, 0x59, 0x85, 0xbc, 0xcb, 0xce, 0xa8, 0xb5, 0x6e,
0xd5, 0x2b, 0xae, 0x5a, 0x3a, 0xc7, 0x70, 0xff, 0x1a, 0x52, 0x0c, 0x79, 0x28, 0x18, 0xd9, 0x82,
0xa5, 0x83, 0xf0, 0x8c, 0x0b, 0x6a, 0xad, 0xe7, 0xeb, 0xd5, 0xcd, 0x67, 0x8d, 0x79, 0xce, 0x35,
0x8c, 0x9c, 0x42, 0xba, 0x1a, 0xef, 0x08, 0xa8, 0xa6, 0xb8, 0xe4, 0x31, 0x54, 0x62, 0x72, 0xd7,
0x18, 0x9e, 0x30, 0x48, 0x0b, 0x96, 0x0f, 0xc2, 0x0b, 0xde, 0x67, 0x4d, 0x1e, 0x9e, 0xf9, 0x3d,
0x9a, 0x5b, 0xb7, 0xea, 0xd5, 0x4d, 0x67, 0xbe, 0xb1, 0x34, 0xd2, 0x9d, 0x92, 0x73, 0xbe, 0x03,
0xba, 0xeb, 0x8b, 0x0e, 0x0f, 0x43, 0xd6, 0x89, 0x9d, 0xc9, 0x74, 0x7a, 0x7a, 0x4f, 0xb9, 0x6b,
0x7b, 0x72, 0x1e, 0xc1, 0xc3, 0x39, 0xba, 0x74, 0x58, 0x9c, 0xdf, 0xc3, 0xf2, 0x8e, 0xda, 0x5b,
0xb6, 0xf2, 0x6f, 0xa0, 0x74, 0x34, 0x94, 0x3e, 0x0f, 0xc5, 0x62, 0x6f, 0x50, 0x8d, 0x41, 0xba,
0xb1, 0x88, 0xf3, 0xf7, 0x65, 0x63, 0xc0, 0x30, 0xc8, 0x3a, 0x54, 0x9b, 0x3c, 0x94, 0xec, 0x4a,
0x1e, 0x7b, 0xf2, 0xdc, 0x18, 0x4a, 0xb3, 0xc8, 0x67, 0xb0, 0xb2, 0xcb, 0x3b, 0x7d, 0x16, 0x9d,
0xf9, 0x01, 0x3b, 0xf4, 0x06, 0xcc, 0xb8, 0x74, 0x8d, 0x4b, 0xbe, 0x55, 0x5e, 0xfb, 0xa1, 0x6c,
0x8d, 0xc2, 0x0e, 0xcd, 0xe3, 0xd6, 0x9e, 0x66, 0x9d, 0xaa, 0x81, 0xb9, 0x13, 0x09, 0xf2, 0x3b,
0xa8, 0x29, 0x35, 0x5d, 0x63, 0x5a, 0xd0, 0x02, 0x26, 0xc6, 0xeb, 0x9b, 0xbd, 0x6b, 0x4c, 0xc9,
0xed, 0x85, 0x32, 0x1a, 0xbb, 0xd3, 0xba, 0xc8, 0x1a, 0x2c, 0x6d, 0x07, 0x01, 0xbf, 0xa4, 0x4b,
0xeb, 0xf9, 0x7a, 0xc5, 0xd5, 0x04, 0xf9, 0x25, 0x94, 0xb6, 0xa5, 0x64, 0x42, 0x0a, 0x5a, 0x44,
0x63, 0x8f, 0xe7, 0x1b, 0xd3, 0x20, 0x37, 0x06, 0x93, 0x23, 0xa8, 0xa0, 0xfd, 0xed, 0xa8, 0x27,
0x68, 0x09, 0x25, 0xbf, 0xb8, 0xc5, 0x36, 0x13, 0x19, 0xbd, 0xc5, 0x89, 0x0e, 0xb2, 0x07, 0x95,
0xa6, 0xd7, 0x39, 0x67, 0xad, 0x88, 0x0f, 0x68, 0x19, 0x15, 0xfe, 0x74, 0xbe, 0x42, 0x84, 0x19,
0x85, 0x46, 0x4d, 0x22, 0x49, 0xb6, 0xa1, 0x84, 0xc4, 0x09, 0xa7, 0x95, 0xbb, 0x29, 0x89, 0xe5,
0x88, 0x03, 0xcb, 0xcd, 0x5e, 0xc4, 0x47, 0xc3, 0x63, 0x2f, 0x62, 0xa1, 0xa4, 0x80, 0x47, 0x3d,
0xc5, 0x23, 0x6f, 0xa1, 0xb4, 0x77, 0x35, 0xe4, 0x91, 0x14, 0xb4, 0xba, 0xe8, 0xf2, 0x6a, 0x90,
0x31, 0x60, 0x24, 0xc8, 0x13, 0x80, 0xbd, 0x2b, 0x19, 0x79, 0xfb, 0x5c, 0x85, 0x7d, 0x19, 0x8f,
0x23, 0xc5, 0x21, 0x2d, 0x28, 0xbe, 0xf3, 0x4e, 0x59, 0x20, 0x68, 0x0d, 0x75, 0x37, 0x6e, 0x11,
0x58, 0x2d, 0xa0, 0x0d, 0x19, 0x69, 0x95, 0xd7, 0x87, 0x4c, 0x5e, 0xf2, 0xa8, 0xff, 0x9e, 0x77,
0x19, 0x5d, 0xd1, 0x79, 0x9d, 0x62, 0x91, 0x17, 0x50, 0x3b, 0xe4, 0x3a, 0x78, 0x7e, 0x20, 0x59,
0x44, 0x3f, 0xc1, 0xcd, 0x4c, 0x33, 0xf1, 0x2e, 0x07, 0x9e, 0x3c, 0xe3, 0xd1, 0x40, 0xd0, 0x55,
0x44, 0x4c, 0x18, 0x2a, 0x83, 0xda, 0xac, 0x13, 0x31, 0x29, 0xe8, 0xbd, 0x45, 0x19, 0xa4, 0x41,
0x6e, 0x0c, 0x26, 0x14, 0x4a, 0xed, 0xf3, 0x41, 0xdb, 0xff, 0x23, 0xa3, 0x64, 0xdd, 0xaa, 0xe7,
0xdd, 0x98, 0x24, 0xaf, 0x20, 0xdf, 0x6e, 0xef, 0xd3, 0x1f, 0xa3, 0xb6, 0x87, 0x19, 0xda, 0xda,
0xfb, 0xae, 0x42, 0x11, 0x02, 0x85, 0x13, 0xaf, 0x27, 0xe8, 0x1a, 0xee, 0x0b, 0xd7, 0xe4, 0x01,
0x14, 0x4f, 0xbc, 0xa8, 0xc7, 0x24, 0xbd, 0x8f, 0x3e, 0x1b, 0x8a, 0xbc, 0x81, 0xd2, 0x87, 0xc0,
0x1f, 0xf8, 0x52, 0xd0, 0x07, 0x8b, 0x2e, 0xa7, 0x06, 0x1d, 0x0d, 0xa5, 0x1b, 0xe3, 0xd5, 0x6e,
0x31, 0xde, 0x2c, 0xa2, 0x3f, 0x41, 0x9d, 0x31, 0xa9, 0xbe, 0x98, 0x70, 0x51, 0xba, 0x6e, 0xd5,
0xcb, 0x6e, 0x4c, 0xaa, 0xad, 0x1d, 0x8f, 0x82, 0x80, 0x3e, 0x44, 0x36, 0xae, 0xf5, 0xd9, 0xab,
0x34, 0x38, 0x1e, 0x89, 0x73, 0x6a, 0xe3, 0x97, 0x14, 0x67, 0xf2, 0xfd, 0x1d, 0xf7, 0xba, 0xf4,
0x51, 0xfa, 0xbb, 0xe2, 0x90, 0x03, 0x58, 0x6e, 0x63, 0x5b, 0x3a, 0xc6, 0x66, 0x44, 0x1f, 0xa3,
0x1f, 0x2f, 0x1b, 0xaa, 0x73, 0x35, 0xe2, 0xce, 0xa5, 0x7c, 0x48, 0x37, 0xaf, 0x86, 0x06, 0xbb,
0x53, 0xa2, 0x71, 0x5d, 0xfd, 0x74, 0x52, 0x57, 0x6d, 0x28, 0xff, 0x5a, 0x25, 0xb9, 0x62, 0x3f,
0x41, 0x76, 0x42, 0xab, 0x64, 0xda, 0x0e, 0x43, 0x2e, 0x3d, 0x5d, 0x77, 0x9f, 0x62, 0xb8, 0xd3,
0x2c, 0xfb, 0x57, 0x40, 0x66, 0xab, 0x90, 0xb2, 0xd2, 0x67, 0xe3, 0xb8, 0x7a, 0xf7, 0xd9, 0x58,
0x15, 0xa2, 0x0b, 0x2f, 0x18, 0xc5, 0x35, 0x54, 0x13, 0x5f, 0xe7, 0xbe, 0xb2, 0xec, 0x6f, 0x60,
0x65, 0xba, 0x40, 0xdc, 0x49, 0xfa, 0x0d, 0x54, 0x53, 0xb7, 0xe0, 0x2e, 0xa2, 0xce, 0xbf, 0x2d,
0xa8, 0xa6, 0xae, 0x2a, 0x26, 0xd5, 0x78, 0xc8, 0x8c, 0x30, 0xae, 0xc9, 0x0e, 0x2c, 0x6d, 0x4b,
0x19, 0xa9, 0x96, 0xa3, 0xf2, 0xf2, 0xe7, 0x37, 0x5e, 0xf8, 0x06, 0xc2, 0xf5, 0x95, 0xd4, 0xa2,
0x2a, 0x88, 0xbb, 0x4c, 0x48, 0x3f, 0xc4, 0x90, 0x61, 0x87, 0xa8, 0xb8, 0x69, 0x96, 0xfd, 0x15,
0xc0, 0x44, 0xec, 0x4e, 0x3e, 0xfc, 0xd3, 0x82, 0x7b, 0x33, 0x55, 0x6d, 0xae, 0x27, 0xfb, 0xd3,
0x9e, 0x6c, 0xde, 0xb2, 0x42, 0xce, 0xfa, 0xf3, 0x7f, 0xec, 0xf6, 0x10, 0x8a, 0xba, 0x95, 0xcc,
0xdd, 0xa1, 0x0d, 0xe5, 0x5d, 0x5f, 0x78, 0xa7, 0x01, 0xeb, 0xa2, 0x68, 0xd9, 0x4d, 0x68, 0xec,
0x63, 0xb8, 0x7b, 0x1d, 0x3d, 0x4d, 0x38, 0xba, 0x66, 0x90, 0x15, 0xc8, 0x25, 0x33, 0x50, 0xee,
0x60, 0x57, 0x81, 0x55, 0x03, 0xd7, 0xae, 0x56, 0x5c, 0x4d, 0x38, 0x2d, 0x28, 0xea, 0x2a, 0x34,
0x83, 0xb7, 0xa1, 0xdc, 0xf2, 0x03, 0x86, 0x73, 0x80, 0xde, 0x73, 0x42, 0x2b, 0xf7, 0xf6, 0xc2,
0x0b, 0x63, 0x56, 0x2d, 0x9d, 0xad, 0x54, 0xbb, 0x57, 0x7e, 0xe0, 0x64, 0x60, 0xfc, 0xc0, 0x79,
0xe0, 0x01, 0x14, 0x5b, 0x3c, 0x1a, 0x78, 0xd2, 0x28, 0x33, 0x94, 0xe3, 0xc0, 0xca, 0x41, 0x28,
0x86, 0xac, 0x23, 0xb3, 0xc7, 0xc6, 0x23, 0xf8, 0x24, 0xc1, 0x98, 0x81, 0x31, 0x35, 0xf7, 0x58,
0x77, 0x9f, 0x7b, 0xfe, 0x61, 0x41, 0x25, 0xa9, 0x6c, 0xa4, 0x09, 0x45, 0x3c, 0x8d, 0x78, 0xfa,
0x7c, 0x75, 0x43, 0x29, 0x6c, 0x7c, 0x44, 0xb4, 0xe9, 0x30, 0x5a, 0xd4, 0xfe, 0x1e, 0xaa, 0x29,
0xf6, 0x9c, 0x04, 0xd8, 0x4c, 0x27, 0x40, 0x66, 0x6b, 0xd0, 0x46, 0xd2, 0xe9, 0xb1, 0x0b, 0x45,
0xcd, 0x9c, 0x1b, 0x56, 0x02, 0x85, 0x7d, 0x2f, 0xd2, 0xa9, 0x91, 0x77, 0x71, 0xad, 0x78, 0x6d,
0x7e, 0x26, 0xf1, 0x78, 0xf2, 0x2e, 0xae, 0x9d, 0x7f, 0x59, 0x50, 0x33, 0xa3, 0xa4, 0x89, 0x20,
0x83, 0x55, 0x7d, 0x43, 0x59, 0x14, 0xf3, 0x8c, 0xff, 0x6f, 0x16, 0x84, 0x32, 0x86, 0x36, 0xae,
0xcb, 0xea, 0x68, 0xcc, 0xa8, 0xb4, 0x9b, 0x70, 0x7f, 0x2e, 0xf4, 0x4e, 0x57, 0xe4, 0x25, 0xdc,
0x9b, 0x0c, 0xc9, 0xd9, 0x79, 0xb2, 0x06, 0x24, 0x0d, 0x33, 0x43, 0xf4, 0x53, 0xa8, 0xaa, 0x47,
0x47, 0xb6, 0x98, 0x03, 0xcb, 0x1a, 0x60, 0x22, 0x43, 0xa0, 0xd0, 0x67, 0x63, 0x9d, 0x0d, 0x15,
0x17, 0xd7, 0xce, 0xdf, 0x2c, 0xf5, 0x76, 0x18, 0x8e, 0xe4, 0x7b, 0x26, 0x84, 0xd7, 0x53, 0x09,
0x58, 0x38, 0x08, 0x7d, 0x69, 0xb2, 0xef, 0xb3, 0xac, 0x37, 0xc4, 0x70, 0x24, 0x15, 0xcc, 0x48,
0xed, 0xff, 0xc8, 0x45, 0x29, 0xb2, 0x05, 0x85, 0x5d, 0x4f, 0x7a, 0x26, 0x17, 0x32, 0x26, 0x26,
0x85, 0x48, 0x09, 0x2a, 0x72, 0xa7, 0xa4, 0x1e, 0x4a, 0xc3, 0x91, 0x74, 0x5e, 0xc0, 0xea, 0x75,
0xed, 0x73, 0x5c, 0xfb, 0x12, 0xaa, 0x29, 0x2d, 0x78, 0x6f, 0x8f, 0x5a, 0x08, 0x28, 0xbb, 0x6a,
0xa9, 0x7c, 0x4d, 0x36, 0xb2, 0xac, 0x6d, 0x38, 0x9f, 0x40, 0x0d, 0x55, 0x27, 0x11, 0xfc, 0x53,
0x0e, 0x4a, 0xb1, 0x8a, 0xad, 0x29, 0xbf, 0x9f, 0x65, 0xf9, 0x3d, 0xeb, 0xf2, 0x6b, 0x28, 0xa8,
0xfa, 0x61, 0x5c, 0xce, 0x18, 0x37, 0x5a, 0xdd, 0x94, 0x98, 0x82, 0x93, 0x6f, 0xa1, 0xe8, 0x32,
0xa1, 0x46, 0x23, 0xfd, 0x88, 0x78, 0x3e, 0x5f, 0x50, 0x63, 0x26, 0xc2, 0x46, 0x48, 0x89, 0xb7,
0xfd, 0x5e, 0xe8, 0x05, 0xb4, 0xb0, 0x48, 0x5c, 0x63, 0x52, 0xe2, 0x9a, 0x31, 0x09, 0xf7, 0x5f,
0x2c, 0xa8, 0x2e, 0x0c, 0xf5, 0xe2, 0x67, 0xde, 0xcc, 0xd3, 0x33, 0xff, 0x3f, 0x3e, 0x3d, 0xff,
0x9c, 0x9b, 0x56, 0x84, 0x53, 0x92, 0xba, 0x4f, 0x43, 0xee, 0x87, 0xd2, 0xa4, 0x6c, 0x8a, 0xa3,
0x36, 0xda, 0x1c, 0x74, 0x4d, 0xd1, 0x57, 0x4b, 0x75, 0xcd, 0x0e, 0xb9, 0xe2, 0x55, 0x31, 0x0d,
0x34, 0x31, 0x29, 0xe9, 0x79, 0x53, 0xd2, 0x55, 0x6a, 0x7c, 0x10, 0x2c, 0xc2, 0xc0, 0x55, 0x5c,
0x5c, 0xab, 0x2a, 0x7e, 0xc8, 0x91, 0xbb, 0x84, 0xc2, 0x86, 0x42, 0x2b, 0x97, 0x5d, 0x5a, 0xd4,
0xe1, 0x68, 0x5e, 0xc6, 0x56, 0x2e, 0xbb, 0xb4, 0x94, 0x58, 0xb9, 0x44, 0x2b, 0x27, 0x72, 0x4c,
0xcb, 0x3a, 0x01, 0x4f, 0xe4, 0x58, 0xb5, 0x19, 0x97, 0x07, 0xc1, 0xa9, 0xd7, 0xe9, 0xd3, 0x8a,
0xee, 0x6f, 0x31, 0xad, 0xe6, 0x49, 0x15, 0x73, 0xdf, 0x0b, 0xf0, 0xe5, 0x51, 0x76, 0x63, 0xd2,
0xd9, 0x86, 0x4a, 0x92, 0x2a, 0xaa, 0x73, 0xb5, 0xba, 0x78, 0x14, 0x35, 0x37, 0xd7, 0xea, 0xc6,
0x59, 0x9e, 0x9b, 0xcd, 0xf2, 0x7c, 0x2a, 0xcb, 0xb7, 0xa0, 0x36, 0x95, 0x34, 0x0a, 0xe4, 0xf2,
0x4b, 0x61, 0x14, 0xe1, 0x5a, 0xf1, 0x9a, 0x3c, 0xd0, 0x6f, 0xeb, 0x9a, 0x8b, 0x6b, 0xe7, 0x39,
0xd4, 0xa6, 0xd2, 0x65, 0x5e, 0x5d, 0x76, 0x9e, 0x41, 0xad, 0x2d, 0x3d, 0x39, 0x5a, 0xf0, 0x67,
0xc8, 0x7f, 0x2c, 0x58, 0x89, 0x31, 0xa6, 0xf2, 0xfc, 0x02, 0xca, 0x17, 0x2c, 0x92, 0xec, 0x2a,
0xe9, 0x45, 0x74, 0x76, 0x9c, 0xfd, 0x88, 0x08, 0x37, 0x41, 0x92, 0xaf, 0xa1, 0x2c, 0x50, 0x0f,
0x8b, 0xe7, 0x98, 0x27, 0x59, 0x52, 0xc6, 0x5e, 0x82, 0x27, 0x1b, 0x50, 0x08, 0x78, 0x4f, 0xe0,
0xb9, 0x57, 0x37, 0x1f, 0x65, 0xc9, 0xbd, 0xe3, 0x3d, 0x17, 0x81, 0xe4, 0x2d, 0x94, 0x2f, 0xbd,
0x28, 0xf4, 0xc3, 0x5e, 0xfc, 0x26, 0x7f, 0x9a, 0x25, 0xf4, 0xbd, 0xc6, 0xb9, 0x89, 0x80, 0x53,
0x53, 0x97, 0xe8, 0x8c, 0x9b, 0x98, 0x38, 0xbf, 0x51, 0xb9, 0xac, 0x48, 0xe3, 0xfe, 0x01, 0xd4,
0xf4, 0x7d, 0xf8, 0xc8, 0x22, 0xa1, 0xa6, 0x42, 0x6b, 0xd1, 0x9d, 0xdd, 0x49, 0x43, 0xdd, 0x69,
0x49, 0xe7, 0x07, 0xd3, 0xee, 0x62, 0x86, 0xca, 0xa5, 0xa1, 0xd7, 0xe9, 0x7b, 0xbd, 0xf8, 0x9c,
0x62, 0x52, 0x7d, 0xb9, 0x30, 0xf6, 0xf4, 0xb5, 0x8d, 0x49, 0x95, 0x9b, 0x11, 0xbb, 0xf0, 0xc5,
0x64, 0x40, 0x4d, 0xe8, 0xcd, 0xbf, 0x96, 0x00, 0x9a, 0xc9, 0x7e, 0xc8, 0x31, 0x2c, 0xa1, 0x3d,
0xe2, 0x2c, 0x6c, 0x9e, 0xe8, 0xb7, 0xfd, 0xfc, 0x16, 0x0d, 0x96, 0x7c, 0x54, 0xc9, 0x8f, 0x43,
0x0f, 0x79, 0x91, 0x55, 0x26, 0xd2, 0x73, 0x93, 0xfd, 0xf2, 0x06, 0x94, 0xd1, 0xfb, 0x01, 0x8a,
0x3a, 0x0b, 0x48, 0x56, 0x2d, 0x4c, 0xe7, 0xad, 0xfd, 0x62, 0x31, 0x48, 0x2b, 0xfd, 0xdc, 0x22,
0xae, 0xa9, 0x94, 0xc4, 0x59, 0xd0, 0x0a, 0xcd, 0x8d, 0xc9, 0x0a, 0xc0, 0x54, 0xd7, 0xa9, 0x5b,
0xe4, 0x3b, 0x28, 0xea, 0x5a, 0x47, 0x3e, 0x9d, 0x2f, 0x10, 0xeb, 0x5b, 0xfc, 0xb9, 0x6e, 0x7d,
0x6e, 0x91, 0xf7, 0x50, 0x50, 0x4d, 0x9e, 0x64, 0x74, 0xac, 0xd4, 0x84, 0x60, 0x3b, 0x8b, 0x20,
0x26, 0x8a, 0x3f, 0x00, 0x4c, 0x46, 0x0d, 0x92, 0xf1, 0xcf, 0xca, 0xcc, 0xcc, 0x62, 0xd7, 0x6f,
0x06, 0x1a, 0x03, 0xef, 0x55, 0x9f, 0x3d, 0xe3, 0x24, 0xb3, 0xc3, 0x26, 0xd7, 0xc8, 0x76, 0x16,
0x41, 0x8c, 0xba, 0x73, 0xa8, 0x4d, 0xfd, 0xf3, 0x4a, 0x7e, 0x96, 0xed, 0xe4, 0xf5, 0x3f, 0x72,
0xed, 0x57, 0xb7, 0xc2, 0x1a, 0x4b, 0x32, 0x3d, 0xab, 0x99, 0xcf, 0xa4, 0x71, 0x93, 0xdf, 0xd3,
0xff, 0xa2, 0xda, 0x1b, 0xb7, 0xc6, 0x6b, 0xab, 0x3b, 0x85, 0xdf, 0xe6, 0x86, 0xa7, 0xa7, 0x45,
0xfc, 0x43, 0xfa, 0xcb, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xe3, 0x77, 0x0e, 0x2f, 0x2e, 0x17,
0x00, 0x00,
// 1957 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x58, 0x5f, 0x73, 0x1b, 0xb7,
0x11, 0xef, 0x91, 0x14, 0xff, 0x2c, 0x45, 0xd9, 0x46, 0x6d, 0x17, 0x3e, 0x3b, 0xb6, 0x7c, 0xb6,
0x53, 0x4e, 0xdd, 0xa1, 0x12, 0xa5, 0x8e, 0xe3, 0x38, 0x99, 0xa9, 0x44, 0x89, 0x95, 0x32, 0xb6,
0xa4, 0x01, 0x65, 0x67, 0xda, 0xcc, 0x34, 0x73, 0x22, 0x21, 0xea, 0x46, 0xa7, 0x03, 0x7b, 0x00,
0xf5, 0xa7, 0x4f, 0x7d, 0x68, 0xdf, 0x3a, 0xfd, 0x1e, 0x9d, 0x7e, 0x84, 0x3e, 0xf5, 0xad, 0x1f,
0xa7, 0x1f, 0xa1, 0x83, 0x05, 0xee, 0x78, 0x14, 0x79, 0x94, 0xd4, 0x3e, 0x11, 0xbb, 0xf8, 0xed,
0x2e, 0x76, 0x6f, 0xb1, 0xbb, 0x20, 0xdc, 0xee, 0x89, 0x48, 0xc5, 0x22, 0x0c, 0x79, 0xdc, 0x1a,
0xc6, 0x42, 0x09, 0x72, 0xf7, 0x60, 0x14, 0x84, 0xfd, 0xf3, 0x56, 0x66, 0xe3, 0xf4, 0x73, 0xf7,
0xed, 0x20, 0x50, 0x47, 0xa3, 0x83, 0x56, 0x4f, 0x9c, 0xac, 0x9c, 0x88, 0x83, 0x8b, 0x15, 0x44,
0x1d, 0x07, 0x6a, 0xc5, 0x1f, 0x06, 0x2b, 0x92, 0xc7, 0xa7, 0x41, 0x8f, 0xcb, 0x15, 0x2b, 0x94,
0xfc, 0x1a, 0x95, 0xee, 0xab, 0x5c, 0x61, 0x29, 0x46, 0x71, 0x8f, 0x0f, 0x45, 0x18, 0xf4, 0x2e,
0x56, 0x86, 0x07, 0x2b, 0x66, 0x65, 0xc4, 0xbc, 0x26, 0xdc, 0x7d, 0x17, 0x48, 0xb5, 0x17, 0x8b,
0x1e, 0x97, 0x92, 0x4b, 0xc6, 0xff, 0x30, 0xe2, 0x52, 0x91, 0xdb, 0x50, 0x64, 0xfc, 0x90, 0x3a,
0xcb, 0x4e, 0xb3, 0xc6, 0xf4, 0xd2, 0xdb, 0x83, 0x7b, 0x97, 0x90, 0x72, 0x28, 0x22, 0xc9, 0xc9,
0x6b, 0x58, 0xd8, 0x8e, 0x0e, 0x85, 0xa4, 0xce, 0x72, 0xb1, 0x59, 0x5f, 0x7d, 0xda, 0x9a, 0xe5,
0x5c, 0xcb, 0xca, 0x69, 0x24, 0x33, 0x78, 0x4f, 0x42, 0x3d, 0xc3, 0x25, 0x8f, 0xa0, 0x96, 0x90,
0x1b, 0xd6, 0xf0, 0x98, 0x41, 0x3a, 0xb0, 0xb8, 0x1d, 0x9d, 0x8a, 0x63, 0xde, 0x16, 0xd1, 0x61,
0x30, 0xa0, 0x85, 0x65, 0xa7, 0x59, 0x5f, 0xf5, 0x66, 0x1b, 0xcb, 0x22, 0xd9, 0x84, 0x9c, 0xf7,
0x1d, 0xd0, 0x8d, 0x40, 0xf6, 0x44, 0x14, 0xf1, 0x5e, 0xe2, 0x4c, 0xae, 0xd3, 0x93, 0x67, 0x2a,
0x5c, 0x3a, 0x93, 0xf7, 0x10, 0x1e, 0xcc, 0xd0, 0x65, 0xc2, 0xe2, 0xfd, 0x1e, 0x16, 0xd7, 0xf5,
0xd9, 0xf2, 0x95, 0x7f, 0x03, 0x95, 0xdd, 0xa1, 0x0a, 0x44, 0x24, 0xe7, 0x7b, 0x83, 0x6a, 0x2c,
0x92, 0x25, 0x22, 0xde, 0xbf, 0x17, 0xad, 0x01, 0xcb, 0x20, 0xcb, 0x50, 0x6f, 0x8b, 0x48, 0xf1,
0x73, 0xb5, 0xe7, 0xab, 0x23, 0x6b, 0x28, 0xcb, 0x22, 0x9f, 0xc2, 0xd2, 0x86, 0xe8, 0x1d, 0xf3,
0xf8, 0x30, 0x08, 0xf9, 0x8e, 0x7f, 0xc2, 0xad, 0x4b, 0x97, 0xb8, 0xe4, 0x5b, 0xed, 0x75, 0x10,
0xa9, 0xce, 0x28, 0xea, 0xd1, 0x22, 0x1e, 0xed, 0x49, 0xde, 0x57, 0xb5, 0x30, 0x36, 0x96, 0x20,
0x3f, 0x40, 0x43, 0xab, 0xe9, 0x5b, 0xd3, 0x92, 0x96, 0x30, 0x31, 0x5e, 0x5d, 0xed, 0x5d, 0x6b,
0x42, 0x6e, 0x33, 0x52, 0xf1, 0x05, 0x9b, 0xd4, 0x45, 0xee, 0xc2, 0xc2, 0x5a, 0x18, 0x8a, 0x33,
0xba, 0xb0, 0x5c, 0x6c, 0xd6, 0x98, 0x21, 0xc8, 0x97, 0x50, 0x59, 0x53, 0x8a, 0x4b, 0x25, 0x69,
0x19, 0x8d, 0x3d, 0x9a, 0x6d, 0xcc, 0x80, 0x58, 0x02, 0x26, 0xbb, 0x50, 0x43, 0xfb, 0x6b, 0xf1,
0x40, 0xd2, 0x0a, 0x4a, 0x7e, 0x7e, 0x8d, 0x63, 0xa6, 0x32, 0xe6, 0x88, 0x63, 0x1d, 0x64, 0x13,
0x6a, 0x6d, 0xbf, 0x77, 0xc4, 0x3b, 0xb1, 0x38, 0xa1, 0x55, 0x54, 0xf8, 0xf3, 0xd9, 0x0a, 0x11,
0x66, 0x15, 0x5a, 0x35, 0xa9, 0x24, 0x59, 0x83, 0x0a, 0x12, 0xfb, 0x82, 0xd6, 0x6e, 0xa6, 0x24,
0x91, 0x23, 0x1e, 0x2c, 0xb6, 0x07, 0xb1, 0x18, 0x0d, 0xf7, 0xfc, 0x98, 0x47, 0x8a, 0x02, 0x7e,
0xea, 0x09, 0x1e, 0x79, 0x0b, 0x95, 0xcd, 0xf3, 0xa1, 0x88, 0x95, 0xa4, 0xf5, 0x79, 0x97, 0xd7,
0x80, 0xac, 0x01, 0x2b, 0x41, 0x1e, 0x03, 0x6c, 0x9e, 0xab, 0xd8, 0xdf, 0x12, 0x3a, 0xec, 0x8b,
0xf8, 0x39, 0x32, 0x1c, 0xd2, 0x81, 0xf2, 0x3b, 0xff, 0x80, 0x87, 0x92, 0x36, 0x50, 0x77, 0xeb,
0x1a, 0x81, 0x35, 0x02, 0xc6, 0x90, 0x95, 0xd6, 0x79, 0xbd, 0xc3, 0xd5, 0x99, 0x88, 0x8f, 0xdf,
0x8b, 0x3e, 0xa7, 0x4b, 0x26, 0xaf, 0x33, 0x2c, 0xf2, 0x1c, 0x1a, 0x3b, 0xc2, 0x04, 0x2f, 0x08,
0x15, 0x8f, 0xe9, 0x2d, 0x3c, 0xcc, 0x24, 0x13, 0xef, 0x72, 0xe8, 0xab, 0x43, 0x11, 0x9f, 0x48,
0x7a, 0x1b, 0x11, 0x63, 0x86, 0xce, 0xa0, 0x2e, 0xef, 0xc5, 0x5c, 0x49, 0x7a, 0x67, 0x5e, 0x06,
0x19, 0x10, 0x4b, 0xc0, 0x84, 0x42, 0xa5, 0x7b, 0x74, 0xd2, 0x0d, 0xfe, 0xc8, 0x29, 0x59, 0x76,
0x9a, 0x45, 0x96, 0x90, 0xe4, 0x25, 0x14, 0xbb, 0xdd, 0x2d, 0xfa, 0x53, 0xd4, 0xf6, 0x20, 0x47,
0x5b, 0x77, 0x8b, 0x69, 0x14, 0x21, 0x50, 0xda, 0xf7, 0x07, 0x92, 0xde, 0xc5, 0x73, 0xe1, 0x9a,
0xdc, 0x87, 0xf2, 0xbe, 0x1f, 0x0f, 0xb8, 0xa2, 0xf7, 0xd0, 0x67, 0x4b, 0x91, 0x37, 0x50, 0xf9,
0x10, 0x06, 0x27, 0x81, 0x92, 0xf4, 0xfe, 0xbc, 0xcb, 0x69, 0x40, 0xbb, 0x43, 0xc5, 0x12, 0xbc,
0x3e, 0x2d, 0xc6, 0x9b, 0xc7, 0xf4, 0x67, 0xa8, 0x33, 0x21, 0xf5, 0x8e, 0x0d, 0x17, 0xa5, 0xcb,
0x4e, 0xb3, 0xca, 0x12, 0x52, 0x1f, 0x6d, 0x6f, 0x14, 0x86, 0xf4, 0x01, 0xb2, 0x71, 0x6d, 0xbe,
0xbd, 0x4e, 0x83, 0xbd, 0x91, 0x3c, 0xa2, 0x2e, 0xee, 0x64, 0x38, 0xe3, 0xfd, 0x77, 0xc2, 0xef,
0xd3, 0x87, 0xd9, 0x7d, 0xcd, 0x21, 0xdb, 0xb0, 0xd8, 0xc5, 0xb6, 0xb4, 0x87, 0xcd, 0x88, 0x3e,
0x42, 0x3f, 0x5e, 0xb4, 0x74, 0xe7, 0x6a, 0x25, 0x9d, 0x4b, 0xfb, 0x90, 0x6d, 0x5e, 0x2d, 0x03,
0x66, 0x13, 0xa2, 0x49, 0x5d, 0xfd, 0x64, 0x5c, 0x57, 0x5d, 0xa8, 0xfe, 0x46, 0x27, 0xb9, 0x66,
0x3f, 0x46, 0x76, 0x4a, 0xeb, 0x64, 0x5a, 0x8b, 0x22, 0xa1, 0x7c, 0x53, 0x77, 0x9f, 0x60, 0xb8,
0xb3, 0x2c, 0xf2, 0x25, 0xdc, 0xdf, 0x8b, 0xc5, 0x29, 0x8f, 0xfc, 0xa8, 0xc7, 0x93, 0x6a, 0x8e,
0x99, 0xb7, 0x8c, 0xba, 0x72, 0x76, 0xdd, 0x5f, 0x03, 0x99, 0xae, 0x5e, 0xfa, 0x74, 0xc7, 0xfc,
0x22, 0xa9, 0xfa, 0xc7, 0xfc, 0x42, 0x17, 0xb0, 0x53, 0x3f, 0x1c, 0x25, 0xb5, 0xd7, 0x10, 0x5f,
0x17, 0xbe, 0x72, 0xdc, 0x6f, 0x60, 0x69, 0xb2, 0xb0, 0xdc, 0x48, 0xfa, 0x0d, 0xd4, 0x33, 0xb7,
0xe7, 0x26, 0xa2, 0xde, 0xbf, 0x1c, 0xa8, 0x67, 0xae, 0x38, 0x26, 0xe3, 0xc5, 0x90, 0x5b, 0x61,
0x5c, 0x93, 0x75, 0x58, 0x58, 0x53, 0x2a, 0xd6, 0xad, 0x4a, 0xe7, 0xf3, 0x2f, 0xaf, 0x2c, 0x14,
0x2d, 0x84, 0x9b, 0xab, 0x6c, 0x44, 0x75, 0xf0, 0x37, 0xb8, 0x54, 0x41, 0x84, 0xa1, 0xc6, 0xce,
0x52, 0x63, 0x59, 0x96, 0xfb, 0x15, 0xc0, 0x58, 0xec, 0x46, 0x3e, 0xfc, 0xc3, 0x81, 0x3b, 0x53,
0xd5, 0x70, 0xa6, 0x27, 0x5b, 0x93, 0x9e, 0xac, 0x5e, 0xb3, 0xb2, 0x4e, 0xfb, 0xf3, 0x7f, 0x9c,
0x76, 0x07, 0xca, 0xa6, 0x05, 0xcd, 0x3c, 0xa1, 0x0b, 0xd5, 0x8d, 0x40, 0xfa, 0x07, 0x21, 0xef,
0xa3, 0x68, 0x95, 0xa5, 0x34, 0xf6, 0x3f, 0x3c, 0xbd, 0x89, 0x9e, 0x21, 0x3c, 0x53, 0x6b, 0xc8,
0x12, 0x14, 0xd2, 0xd9, 0xa9, 0xb0, 0xbd, 0xa1, 0xc1, 0xba, 0xf1, 0x1b, 0x57, 0x6b, 0xcc, 0x10,
0x5e, 0x07, 0xca, 0xa6, 0x7a, 0x4d, 0xe1, 0x5d, 0xa8, 0x76, 0x82, 0x90, 0xe3, 0xfc, 0x60, 0xce,
0x9c, 0xd2, 0xda, 0xbd, 0xcd, 0xe8, 0xd4, 0x9a, 0xd5, 0x4b, 0xef, 0x87, 0xcc, 0x98, 0xa0, 0xfd,
0xc0, 0x89, 0xc2, 0xfa, 0x81, 0x73, 0xc4, 0x7d, 0x28, 0x77, 0x44, 0x7c, 0xe2, 0x2b, 0xab, 0xcc,
0x52, 0xba, 0x35, 0x6d, 0x0f, 0x22, 0x11, 0xf3, 0xae, 0xf2, 0xd5, 0xc8, 0xb8, 0x52, 0x65, 0x13,
0x3c, 0xcf, 0x83, 0xa5, 0xed, 0x48, 0x0e, 0x79, 0x4f, 0xe5, 0x8f, 0xa4, 0xbb, 0x70, 0x2b, 0xc5,
0xd8, 0x61, 0x34, 0x33, 0x53, 0x39, 0x37, 0x9f, 0xa9, 0xfe, 0xee, 0x40, 0x2d, 0xad, 0x9a, 0xa4,
0x0d, 0x65, 0xfc, 0x62, 0xc9, 0x64, 0xfb, 0xf2, 0x8a, 0x32, 0xdb, 0xfa, 0x88, 0x68, 0xdb, 0xbd,
0x8c, 0xa8, 0xfb, 0x3d, 0xd4, 0x33, 0xec, 0x19, 0x49, 0xb2, 0x9a, 0x4d, 0x92, 0xdc, 0xb6, 0x63,
0x8c, 0x64, 0x53, 0x68, 0x03, 0xca, 0x86, 0x39, 0x33, 0xf4, 0x04, 0x4a, 0x5b, 0x7e, 0x6c, 0xd2,
0xa7, 0xc8, 0x70, 0xad, 0x79, 0x5d, 0x71, 0xa8, 0x30, 0xdc, 0x45, 0x86, 0x6b, 0xef, 0x9f, 0x0e,
0x34, 0xec, 0x98, 0x6a, 0x23, 0xc8, 0xe1, 0xb6, 0xb9, 0xc5, 0x3c, 0x4e, 0x78, 0xd6, 0xff, 0x37,
0x73, 0x42, 0x99, 0x40, 0x5b, 0x97, 0x65, 0x4d, 0x34, 0xa6, 0x54, 0xba, 0x6d, 0xb8, 0x37, 0x13,
0x7a, 0xa3, 0x6b, 0xf4, 0x02, 0xee, 0x8c, 0x07, 0xf0, 0xfc, 0x3c, 0xb9, 0x0b, 0x24, 0x0b, 0xb3,
0x03, 0xfa, 0x13, 0xa8, 0xeb, 0x07, 0x4d, 0xbe, 0x98, 0x07, 0x8b, 0x06, 0x60, 0x23, 0x43, 0xa0,
0x74, 0xcc, 0x2f, 0x4c, 0x36, 0xd4, 0x18, 0xae, 0xbd, 0xbf, 0x39, 0xfa, 0x5d, 0x32, 0x1c, 0xa9,
0xf7, 0x5c, 0x4a, 0x7f, 0xa0, 0x13, 0xb0, 0xb4, 0x1d, 0x05, 0xca, 0x66, 0xdf, 0xa7, 0x79, 0xef,
0x93, 0xe1, 0x48, 0x69, 0x98, 0x95, 0xda, 0xfa, 0x09, 0x43, 0x29, 0xf2, 0x1a, 0x4a, 0x1b, 0xbe,
0xf2, 0x6d, 0x2e, 0xe4, 0x4c, 0x63, 0x1a, 0x91, 0x11, 0xd4, 0xe4, 0x7a, 0x45, 0x3f, 0xc2, 0x86,
0x23, 0xe5, 0x3d, 0x87, 0xdb, 0x97, 0xb5, 0xcf, 0x70, 0xed, 0x0b, 0xa8, 0x67, 0xb4, 0xe0, 0xdd,
0xde, 0xed, 0x20, 0xa0, 0xca, 0xf4, 0x52, 0xfb, 0x9a, 0x1e, 0x64, 0xd1, 0xd8, 0xf0, 0x6e, 0x41,
0x03, 0x55, 0xa7, 0x11, 0xfc, 0x53, 0x01, 0x2a, 0x89, 0x8a, 0xd7, 0x13, 0x7e, 0x3f, 0xcd, 0xf3,
0x7b, 0xda, 0xe5, 0x57, 0x50, 0xd2, 0x35, 0xc6, 0xba, 0x9c, 0x33, 0xca, 0x74, 0xfa, 0x19, 0x31,
0x0d, 0x27, 0xdf, 0x42, 0x99, 0x71, 0xa9, 0xc7, 0x2e, 0xf3, 0x40, 0x79, 0x36, 0x5b, 0xd0, 0x60,
0xc6, 0xc2, 0x56, 0x48, 0x8b, 0x77, 0x83, 0x41, 0xe4, 0x87, 0xb4, 0x34, 0x4f, 0xdc, 0x60, 0x32,
0xe2, 0x86, 0x31, 0x0e, 0xf7, 0x5f, 0x1c, 0xa8, 0xcf, 0x0d, 0xf5, 0xfc, 0x27, 0xe4, 0xd4, 0xb3,
0xb6, 0xf8, 0x3f, 0x3e, 0x6b, 0xff, 0x5c, 0x98, 0x54, 0x84, 0x13, 0x98, 0xbe, 0x4f, 0x43, 0x11,
0x44, 0xca, 0xa6, 0x6c, 0x86, 0xa3, 0x0f, 0xda, 0x3e, 0xe9, 0xdb, 0xc6, 0xa0, 0x97, 0xfa, 0x9a,
0xed, 0x08, 0xcd, 0xab, 0x63, 0x1a, 0x18, 0x62, 0x5c, 0xf6, 0x8b, 0xb6, 0xec, 0xeb, 0xd4, 0xf8,
0x20, 0x79, 0x8c, 0x81, 0xab, 0x31, 0x5c, 0xeb, 0x4a, 0xbf, 0x23, 0x90, 0xbb, 0x80, 0xc2, 0x96,
0x42, 0x2b, 0x67, 0x7d, 0x5a, 0x36, 0xe1, 0x68, 0x9f, 0x25, 0x56, 0xce, 0xfa, 0xb4, 0x92, 0x5a,
0x39, 0x43, 0x2b, 0xfb, 0xea, 0x82, 0x56, 0x4d, 0x02, 0xee, 0xab, 0x0b, 0xdd, 0x8a, 0x98, 0x08,
0xc3, 0x03, 0xbf, 0x77, 0x4c, 0x6b, 0xa6, 0x07, 0x26, 0xb4, 0x9e, 0x55, 0x75, 0xcc, 0x03, 0x3f,
0xc4, 0x57, 0x4d, 0x95, 0x25, 0xa4, 0xb7, 0x06, 0xb5, 0x34, 0x55, 0x74, 0x77, 0xeb, 0xf4, 0xf1,
0x53, 0x34, 0x58, 0xa1, 0xd3, 0x4f, 0xb2, 0xbc, 0x30, 0x9d, 0xe5, 0xc5, 0x4c, 0x96, 0xbf, 0x86,
0xc6, 0x44, 0xd2, 0x68, 0x10, 0x13, 0x67, 0xd2, 0x2a, 0xc2, 0xb5, 0xe6, 0xb5, 0x45, 0x68, 0xde,
0xed, 0x0d, 0x86, 0x6b, 0xef, 0x19, 0x34, 0x26, 0xd2, 0x65, 0x56, 0x5d, 0xf6, 0x9e, 0x42, 0xc3,
0x34, 0xb8, 0xfc, 0xb2, 0xf3, 0x1f, 0x07, 0x96, 0x12, 0x8c, 0xad, 0x3c, 0xbf, 0x82, 0xea, 0x29,
0x8f, 0x15, 0x3f, 0x4f, 0x7b, 0x11, 0x9d, 0x1e, 0x95, 0x3f, 0x22, 0x82, 0xa5, 0x48, 0xf2, 0x35,
0x54, 0x25, 0xea, 0xe1, 0xc9, 0xac, 0xf3, 0x38, 0x4f, 0xca, 0xda, 0x4b, 0xf1, 0x64, 0x05, 0x4a,
0xa1, 0x18, 0x48, 0xfc, 0xee, 0xf5, 0xd5, 0x87, 0x79, 0x72, 0xef, 0xc4, 0x80, 0x21, 0x90, 0xbc,
0x85, 0xea, 0x99, 0x1f, 0x47, 0x41, 0x34, 0x48, 0xde, 0xfb, 0x4f, 0xf2, 0x84, 0xbe, 0x37, 0x38,
0x96, 0x0a, 0x78, 0x0d, 0x7d, 0x89, 0x0e, 0x85, 0x8d, 0x89, 0xf7, 0x5b, 0x9d, 0xcb, 0x9a, 0xb4,
0xee, 0x6f, 0x43, 0xc3, 0xdc, 0x87, 0x8f, 0x3c, 0x96, 0x7a, 0x72, 0x74, 0xe6, 0xdd, 0xd9, 0xf5,
0x2c, 0x94, 0x4d, 0x4a, 0x7a, 0x3f, 0xda, 0x76, 0x97, 0x30, 0x74, 0x2e, 0x0d, 0xfd, 0xde, 0xb1,
0x3f, 0x48, 0xbe, 0x53, 0x42, 0xea, 0x9d, 0x53, 0x6b, 0xcf, 0x5c, 0xdb, 0x84, 0xd4, 0xb9, 0x19,
0xf3, 0xd3, 0x40, 0x8e, 0x87, 0xd8, 0x94, 0x5e, 0xfd, 0x6b, 0x05, 0xa0, 0x9d, 0x9e, 0x87, 0xec,
0xc1, 0x02, 0xda, 0x23, 0xde, 0xdc, 0xe6, 0x89, 0x7e, 0xbb, 0xcf, 0xae, 0xd1, 0x60, 0xc9, 0x47,
0x9d, 0xfc, 0x38, 0xf4, 0x90, 0xe7, 0x79, 0x65, 0x22, 0x3b, 0x37, 0xb9, 0x2f, 0xae, 0x40, 0x59,
0xbd, 0x1f, 0xa0, 0x6c, 0xb2, 0x80, 0xe4, 0xd5, 0xc2, 0x6c, 0xde, 0xba, 0xcf, 0xe7, 0x83, 0x8c,
0xd2, 0xcf, 0x1c, 0xc2, 0x6c, 0xa5, 0x24, 0xde, 0x9c, 0x56, 0x68, 0x6f, 0x4c, 0x5e, 0x00, 0x26,
0xba, 0x4e, 0xd3, 0x21, 0xdf, 0x41, 0xd9, 0xd4, 0x3a, 0xf2, 0xc9, 0x6c, 0x81, 0x44, 0xdf, 0xfc,
0xed, 0xa6, 0xf3, 0x99, 0x43, 0xde, 0x43, 0x49, 0x37, 0x79, 0x92, 0xd3, 0xb1, 0x32, 0x13, 0x82,
0xeb, 0xcd, 0x83, 0xd8, 0x28, 0xfe, 0x08, 0x30, 0x1e, 0x35, 0x48, 0xce, 0xbf, 0x36, 0x53, 0x33,
0x8b, 0xdb, 0xbc, 0x1a, 0x68, 0x0d, 0xbc, 0xd7, 0x7d, 0xf6, 0x50, 0x90, 0xdc, 0x0e, 0x9b, 0x5e,
0x23, 0xd7, 0x9b, 0x07, 0xb1, 0xea, 0x8e, 0xa0, 0x31, 0xf1, 0xaf, 0x2e, 0xf9, 0x45, 0xbe, 0x93,
0x97, 0xff, 0x24, 0x76, 0x5f, 0x5e, 0x0b, 0x6b, 0x2d, 0xa9, 0xec, 0xac, 0x66, 0xb7, 0x49, 0xeb,
0x2a, 0xbf, 0x27, 0xff, 0xa1, 0x75, 0x57, 0xae, 0x8d, 0x37, 0x56, 0xd7, 0x4b, 0xbf, 0x2b, 0x0c,
0x0f, 0x0e, 0xca, 0xf8, 0x67, 0xf7, 0x17, 0xff, 0x0d, 0x00, 0x00, 0xff, 0xff, 0xa2, 0x32, 0x20,
0xaa, 0x8a, 0x17, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.

View File

@@ -80,6 +80,7 @@ message BuildOptions {
string Ref = 29;
string GroupRef = 30;
repeated string Annotations = 31;
string ProvenanceResponseMode = 32;
}
message ExportEntry {
@@ -111,8 +112,9 @@ message Secret {
}
message PrintFunc {
string Name = 1;
string Format = 2;
string Name = 1;
string Format = 2;
bool IgnoreStatus = 3;
}
message InspectRequest {
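With the new `IgnoreStatus` field, a controller client can request a print function (for example a lint report) while suppressing its non-zero status. A hedged sketch of constructing such options, assuming the generated package is imported as `pb` from `github.com/docker/buildx/controller/pb` as elsewhere in this diff, and with illustrative values:

```go
package main

import (
	"fmt"

	"github.com/docker/buildx/controller/pb"
)

func main() {
	// Request a print function and ignore its exit status,
	// using the IgnoreStatus field added in this change.
	opts := &pb.BuildOptions{
		ContextPath: ".",
		PrintFunc: &pb.PrintFunc{
			Name:         "lint", // illustrative name
			Format:       "json",
			IgnoreStatus: true,
		},
	}
	fmt.Println(opts.PrintFunc.GetIgnoreStatus())
}
```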

View File

@@ -15,6 +15,7 @@ func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, error) {
if len(entries) == 0 {
return nil, nil
}
var stdoutUsed bool
for _, entry := range entries {
if entry.Type == "" {
return nil, errors.Errorf("type is required for output")
@@ -68,10 +69,14 @@ func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, error) {
entry.Destination = "-"
}
if entry.Destination == "-" {
if stdoutUsed {
return nil, errors.Errorf("multiple outputs configured to write to stdout")
}
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = wrapWriteCloser(os.Stdout)
stdoutUsed = true
} else if entry.Destination != "" {
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
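The `stdoutUsed` guard added above prevents two export entries from both claiming stdout. A small illustrative sketch of the same pattern, not the real `CreateExports` and using a simplified entry type:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// entry is a simplified, hypothetical stand-in for the controller's ExportEntry.
type entry struct {
	Type        string
	Destination string
}

// checkStdout returns an error if more than one entry writes to stdout ("-"),
// mirroring the stdoutUsed guard in the diff above.
func checkStdout(entries []entry) error {
	var stdoutUsed bool
	for _, e := range entries {
		if e.Destination != "-" {
			continue
		}
		if stdoutUsed {
			return errors.Errorf("multiple outputs configured to write to stdout")
		}
		stdoutUsed = true
	}
	return nil
}

func main() {
	err := checkStdout([]entry{
		{Type: "tar", Destination: "-"},
		{Type: "oci", Destination: "-"},
	})
	fmt.Println(err) // multiple outputs configured to write to stdout
}
```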

View File

@@ -4,7 +4,6 @@ import (
"path/filepath"
"strings"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/moby/buildkit/util/gitutil"
)
@@ -22,7 +21,7 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
}
}
if options.DockerfileName != "" && options.DockerfileName != "-" {
if localContext && !urlutil.IsURL(options.DockerfileName) {
if localContext && !isHTTPURL(options.DockerfileName) {
options.DockerfileName, err = filepath.Abs(options.DockerfileName)
if err != nil {
return nil, err
@@ -164,8 +163,15 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
return options, nil
}
// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has an http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func isRemoteURL(c string) bool {
if urlutil.IsURL(c) {
if isHTTPURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
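The new `isHTTPURL` helper replaces `urlutil.IsURL` with a plain prefix check. A quick standalone illustration of how it classifies a few inputs; the helper is copied from the diff above and the sample inputs are arbitrary:

```go
package main

import (
	"fmt"
	"strings"
)

// isHTTPURL reports whether str has an explicit http:// or https:// scheme.
// No further validation of the URL is performed (copied from the diff above).
func isHTTPURL(str string) bool {
	return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}

func main() {
	for _, s := range []string{
		"https://example.com/app.Dockerfile", // true
		"git@github.com:docker/buildx.git",   // false: not an HTTP(S) URL
		"./Dockerfile",                       // false: local path
	} {
		fmt.Printf("%-40s %v\n", s, isHTTPURL(s))
	}
}
```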

View File

@@ -210,7 +210,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
}
return err
} else if n > 0 {
if stream.Send(&pb.InputMessage{
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
Data: buf[:n],

View File

@@ -358,7 +358,7 @@ func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
}
return err
} else if n > 0 {
if snd.Send(&pb.Message{
if err := snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
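Both hunks fix the same class of bug: the result of `Send` was not assigned to `err`, so a failed send could go unnoticed. A minimal sketch of the corrected copy loop, using a hypothetical `sender` interface rather than the generated gRPC stream types:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// sender is a hypothetical stand-in for the generated gRPC stream's Send method.
type sender interface {
	Send(data []byte) error
}

// copyToSender forwards everything read from r to snd, propagating Send errors
// instead of dropping them (the bug fixed in the diffs above).
func copyToSender(snd sender, r io.Reader) error {
	buf := make([]byte, 32*1024)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			if err := snd.Send(buf[:n]); err != nil {
				return err
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

type printSender struct{}

func (printSender) Send(data []byte) error {
	fmt.Printf("sent %d bytes\n", len(data))
	return nil
}

func main() {
	_ = copyToSender(printSender{}, strings.NewReader("hello world"))
}
```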

View File

@@ -7,6 +7,12 @@ variable "DOCS_FORMATS" {
variable "DESTDIR" {
default = "./bin"
}
variable "TEST_COVERAGE" {
default = null
}
variable "GOLANGCI_LINT_MULTIPLATFORM" {
default = ""
}
# Special target: https://github.com/docker/metadata-action#bake-definition
target "meta-helper" {
@@ -25,13 +31,29 @@ group "default" {
}
group "validate" {
targets = ["lint", "validate-vendor", "validate-docs"]
targets = ["lint", "lint-gopls", "validate-vendor", "validate-docs"]
}
target "lint" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
output = ["type=cacheonly"]
platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
"darwin/amd64",
"darwin/arm64",
"linux/amd64",
"linux/arm64",
"linux/s390x",
"linux/ppc64le",
"linux/riscv64",
"windows/amd64",
"windows/arm64"
] : []
}
target "lint-gopls" {
inherits = ["lint"]
target = "gopls-analyze"
}
target "validate-vendor" {
@@ -166,13 +188,18 @@ variable "HTTPS_PROXY" {
variable "NO_PROXY" {
default = ""
}
variable "TEST_BUILDKIT_TAG" {
default = null
}
target "integration-test-base" {
inherits = ["_common"]
args = {
GO_EXTRA_FLAGS = TEST_COVERAGE == "1" ? "-cover" : null
HTTP_PROXY = HTTP_PROXY
HTTPS_PROXY = HTTPS_PROXY
NO_PROXY = NO_PROXY
BUILDKIT_VERSION = TEST_BUILDKIT_TAG
}
target = "integration-test-base"
output = ["type=cacheonly"]

View File

@@ -1,4 +1,6 @@
# Bake file reference
---
title: Bake file reference
---
The Bake file defines workflows that you run using `docker buildx bake`.
@@ -213,7 +215,7 @@ target "webapp" {
The following table shows the complete list of attributes that you can assign to a target:
| Name | Type | Description |
| ----------------------------------------------- | ------- | -------------------------------------------------------------------- |
|-------------------------------------------------|---------|----------------------------------------------------------------------|
| [`args`](#targetargs) | Map | Build arguments |
| [`annotations`](#targetannotations) | List | Exporter annotations |
| [`attest`](#targetattest) | List | Build attestations |
@@ -233,9 +235,11 @@ The following table shows the complete list of attributes that you can assign to
| [`platforms`](#targetplatforms) | List | Target platforms |
| [`pull`](#targetpull) | Boolean | Always pull images |
| [`secret`](#targetsecret) | List | Secrets to expose to the build |
| [`shm-size`](#targetshm-size) | List | Size of `/dev/shm` |
| [`ssh`](#targetssh) | List | SSH agent sockets or keys to expose to the build |
| [`tags`](#targettags) | List | Image names and tags |
| [`target`](#targettarget) | String | Target build stage |
| [`ulimits`](#targetulimits) | List | Ulimit options |
### `target.args`
@@ -274,13 +278,13 @@ target "db" {
### `target.annotations`
The `annotations` attribute is a shortcut to allow you to easily set a list of
annotations on the target.
The `annotations` attribute lets you add annotations to images built with bake.
The key takes a list of annotations, in the format of `KEY=VALUE`.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["key=value"]
annotations = ["org.opencontainers.image.authors=dvdksn"]
}
```
@@ -288,10 +292,25 @@ is the same as
```hcl
target "default" {
output = ["type=image,name=foo,annotation.key=value"]
output = ["type=image,name=foo,annotation.org.opencontainers.image.authors=dvdksn"]
}
```
By default, the annotation is added to image manifests. You can configure the
level of the annotations by adding a prefix to the annotation, containing a
comma-separated list of all the levels that you want to annotate. The following
example adds annotations to both the image index and manifests.
```hcl
target "default" {
output = ["type=image,name=foo"]
annotations = ["index,manifest:org.opencontainers.image.authors=dvdksn"]
}
```
Read about the supported levels in
[Specifying annotation levels](https://docs.docker.com/build/building/annotations/#specifying-annotation-levels).
### `target.attest`
The `attest` attribute lets you apply [build attestations][attestations] to the target.
@@ -817,6 +836,29 @@ RUN --mount=type=secret,id=KUBECONFIG \
KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
```
### `target.shm-size`
Sets the size of the shared memory allocated for build containers when using
`RUN` instructions.
The format is `<number><unit>`. `number` must be greater than `0`. Unit is
optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g`
(gigabytes). If you omit the unit, the system uses bytes.
This is the same as the `--shm-size` flag for `docker build`.
```hcl
target "default" {
shm-size = "128m"
}
```
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
### `target.ssh`
Defines SSH agent sockets or keys to expose to the build.
@@ -863,6 +905,32 @@ target "default" {
}
```
### `target.ulimits`
The `ulimits` attribute overrides the default ulimits of the build containers
when using `RUN` instructions. Limits are specified as a soft and optional hard
limit in the form `<type>=<soft limit>[:<hard limit>]`, for example:
```hcl
target "app" {
ulimits = [
"nofile=1024:1024"
]
}
```
> **Note**
>
> If you do not provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they are inherited from
> the default `ulimits` set on the daemon.
> **Note**
>
> In most cases, it is recommended to let the builder automatically determine
> the appropriate configurations. Manual adjustments should only be considered
> when specific performance tuning is required for complex build scenarios.
## Group
Groups allow you to invoke multiple builds (targets) at once.
@@ -1054,20 +1122,20 @@ target "webapp-dev" {
[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/engine/reference/commandline/build/#build-arg
[build-context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[build-arg]: https://docs.docker.com/reference/cli/docker/image/build/#build-arg
[build-context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[cache-backends]: https://docs.docker.com/build/cache/backends/
[cache-from]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from
[cache-to]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to
[context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[file]: https://docs.docker.com/engine/reference/commandline/build/#file
[cache-from]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-from
[cache-to]: https://docs.docker.com/reference/cli/docker/buildx/build/#cache-to
[context]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-context
[file]: https://docs.docker.com/reference/cli/docker/image/build/#file
[go-cty]: https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib
[hcl-funcs]: https://docs.docker.com/build/bake/hcl-funcs/
[output]: https://docs.docker.com/engine/reference/commandline/buildx_build/#output
[platform]: https://docs.docker.com/engine/reference/commandline/buildx_build/#platform
[run_mount_secret]: https://docs.docker.com/engine/reference/builder/#run---mounttypesecret
[secret]: https://docs.docker.com/engine/reference/commandline/buildx_build/#secret
[ssh]: https://docs.docker.com/engine/reference/commandline/buildx_build/#ssh
[tag]: https://docs.docker.com/engine/reference/commandline/build/#tag
[target]: https://docs.docker.com/engine/reference/commandline/build/#target
[output]: https://docs.docker.com/reference/cli/docker/buildx/build/#output
[platform]: https://docs.docker.com/reference/cli/docker/buildx/build/#platform
[run_mount_secret]: https://docs.docker.com/reference/dockerfile/#run---mounttypesecret
[secret]: https://docs.docker.com/reference/cli/docker/buildx/build/#secret
[ssh]: https://docs.docker.com/reference/cli/docker/buildx/build/#ssh
[tag]: https://docs.docker.com/reference/cli/docker/image/build/#tag
[target]: https://docs.docker.com/reference/cli/docker/image/build/#target
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc

View File

@@ -3,6 +3,7 @@ package main
import (
"log"
"os"
"strings"
"github.com/docker/buildx/commands"
clidocstool "github.com/docker/cli-docs-tool"
@@ -26,6 +27,28 @@ type options struct {
formats []string
}
// fixUpExperimentalCLI trims the " (EXPERIMENTAL)" suffix from the CLI output,
// as docs.docker.com already displays "experimental (CLI)",
//
// https://github.com/docker/buildx/pull/2188#issuecomment-1889487022
func fixUpExperimentalCLI(cmd *cobra.Command) {
const (
annotationExperimentalCLI = "experimentalCLI"
suffixExperimental = " (EXPERIMENTAL)"
)
if _, ok := cmd.Annotations[annotationExperimentalCLI]; ok {
cmd.Short = strings.TrimSuffix(cmd.Short, suffixExperimental)
}
cmd.Flags().VisitAll(func(f *pflag.Flag) {
if _, ok := f.Annotations[annotationExperimentalCLI]; ok {
f.Usage = strings.TrimSuffix(f.Usage, suffixExperimental)
}
})
for _, c := range cmd.Commands() {
fixUpExperimentalCLI(c)
}
}
func gen(opts *options) error {
log.SetFlags(0)
@@ -57,6 +80,8 @@ func gen(opts *options) error {
return err
}
case "yaml":
// fix up is needed only for yaml (used for generating docs.docker.com contents)
fixUpExperimentalCLI(cmd)
if err = c.GenYamlTree(cmd); err != nil {
return err
}
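To see what the YAML fix-up does in isolation, here is a self-contained sketch that applies the same trimming logic to a toy cobra command; it re-declares a local copy of the helper, since the real one is unexported in the docgen tool:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

// fixUpExperimentalCLI mirrors the docgen helper above: it trims the
// " (EXPERIMENTAL)" suffix from commands and flags annotated as experimental.
func fixUpExperimentalCLI(cmd *cobra.Command) {
	const (
		annotationExperimentalCLI = "experimentalCLI"
		suffixExperimental        = " (EXPERIMENTAL)"
	)
	if _, ok := cmd.Annotations[annotationExperimentalCLI]; ok {
		cmd.Short = strings.TrimSuffix(cmd.Short, suffixExperimental)
	}
	cmd.Flags().VisitAll(func(f *pflag.Flag) {
		if _, ok := f.Annotations[annotationExperimentalCLI]; ok {
			f.Usage = strings.TrimSuffix(f.Usage, suffixExperimental)
		}
	})
	for _, c := range cmd.Commands() {
		fixUpExperimentalCLI(c)
	}
}

func main() {
	cmd := &cobra.Command{
		Use:         "demo",
		Short:       "Do something (EXPERIMENTAL)",
		Annotations: map[string]string{"experimentalCLI": ""},
	}
	fixUpExperimentalCLI(cmd)
	fmt.Println(cmd.Short) // "Do something"
}
```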

View File

@@ -1,3 +0,0 @@
# CI/CD
This page has moved to [Docker Docs website](https://docs.docker.com/build/ci/)

View File

@@ -1,3 +0,0 @@
# CNI networking
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#cni-networking)

View File

@@ -1,3 +0,0 @@
# Color output controls
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/env-vars/#buildkit_colors)

View File

@@ -1,3 +0,0 @@
# Using a custom network
This page has moved to [Docker Docs website](https://docs.docker.com/build/drivers/docker-container/#custom-network)

View File

@@ -1,3 +0,0 @@
# Using a custom registry configuration
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#setting-registry-certificates)

View File

@@ -1,3 +0,0 @@
# OpenTelemetry support
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/opentelemetry/)

View File

@@ -1,3 +0,0 @@
# Registry mirror
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#registry-mirror)

View File

@@ -1,3 +0,0 @@
# Resource limiting
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#resource-limiting)

View File

@@ -1,3 +0,0 @@
# Defining additional build contexts and linking targets
This page has moved to [Docker Docs website](https://docs.docker.com/build/bake/build-contexts)

View File

@@ -1,3 +0,0 @@
# Building from Compose file
This page has moved to [Docker Docs website](https://docs.docker.com/build/bake/compose-file)

View File

@@ -1,3 +0,0 @@
# Configuring builds
This page has moved to [Docker Docs website](https://docs.docker.com/build/bake/configuring-build)

View File

@@ -1,3 +0,0 @@
# Bake file definition
This page has moved to [docs/bake-reference.md](../../bake-reference.md)

View File

@@ -1,3 +0,0 @@
# User defined HCL functions
This page has moved to [Docker Docs website](https://docs.docker.com/build/bake/hcl-funcs)

View File

@@ -1,3 +0,0 @@
# High-level build options with Bake
This page has moved to [Docker Docs website](https://docs.docker.com/build/bake)

View File

@@ -1,3 +0,0 @@
# Azure Blob Storage cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/azblob)

View File

@@ -1,3 +0,0 @@
# GitHub Actions cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/gha)

View File

@@ -1,3 +0,0 @@
# Cache storage backends
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends)

View File

@@ -1,3 +0,0 @@
# Inline cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/inline)

View File

@@ -1,3 +0,0 @@
# Local cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/local)

View File

@@ -1,3 +0,0 @@
# Registry cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/registry)

View File

@@ -1,3 +0,0 @@
# Amazon S3 cache storage
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/cache/backends/s3)

View File

@@ -1,3 +0,0 @@
# Docker container driver
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/drivers/docker-container)

View File

@@ -1,3 +0,0 @@
# Docker driver
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/drivers/docker)

View File

@@ -1,3 +0,0 @@
# Buildx drivers overview
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/drivers)

View File

@@ -1,3 +0,0 @@
# Kubernetes driver
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/drivers/kubernetes)

View File

@@ -1,3 +0,0 @@
# Remote driver
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/drivers/remote)

View File

@@ -1,3 +0,0 @@
# Image and registry exporters
This page has moved to [Docker Docs website](https://docs.docker.com/build/building/exporters/image-registry)

Some files were not shown because too many files have changed in this diff.