Compare commits


163 Commits

Author SHA1 Message Date
Tõnis Tiigi
5113f9ea89 Merge pull request #2824 from jsternberg/bake-revert-composable-attributes
[v0.19] revert: "bake: initial set of composable bake attributes"
2024-11-27 09:37:55 -08:00
CrazyMax
8b029626f3 bake: additional test for empty variable
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-27 18:28:43 +01:00
Jonathan A. Sternberg
cd017e98ed revert: "bake: initial set of composable bake attributes"
This reverts commit 3ccbb88e6a.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-11-27 09:39:57 -06:00
Tõnis Tiigi
71c7889719 Merge pull request #2821 from tonistiigi/update-buildkit-v0.18.0
vendor: update buildkit to v0.18.0
2024-11-26 14:49:31 -08:00
Tonis Tiigi
a3418e0178 vendor: update buildkit to v0.18.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-26 13:57:25 -08:00
Tõnis Tiigi
6a1cf78879 Merge pull request #2818 from tonistiigi/vendor-buildkit-v0.18.0-rc2
vendor: update buildkit to v0.18.0-rc2
2024-11-25 17:52:46 -08:00
Tonis Tiigi
ec1f712328 vendor: update buildkit to v0.18.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 17:42:30 -08:00
CrazyMax
5ce6597c07 Merge pull request #2812 from crazy-max/bake-win-fs-ent
bake: add wildcard to fs entitlements to allow any paths
2024-11-25 20:29:14 +01:00
CrazyMax
9c75071793 bake: add wildcard to fs entitlements to allow any paths
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-25 20:13:27 +01:00
Tõnis Tiigi
d612139b19 Merge pull request #2811 from crazy-max/update-buildkit
dockerfile: update buildkit to v0.17.2
2024-11-25 10:11:09 -08:00
Tõnis Tiigi
42f7898c53 Merge pull request #2815 from tonistiigi/entitlements-symlink-tests
bake: fix entitlement test when running from symlink temp
2024-11-25 10:08:19 -08:00
Tonis Tiigi
3148c098a2 bake: remove unnecessary GetLongPathName calls
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 08:26:02 -08:00
Tonis Tiigi
f95d574f94 bake: fix entitlement test when running from symlink temp
As the paths returned by the validator have the symlinks resolved,
the test needs to resolve the symlinks in the expected values as well.
Previously this would fail if t.TempDir() or os.Getwd() returned a path
that contained a symlink.

The issue was purely in the test and not in the entitlements
validation logic.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 00:03:54 -08:00
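
A minimal sketch of the normalization this fix applies on the test side (illustrative names, not the actual buildx test code): expected paths get the same symlink and absolute-path resolution that the validator applies to the paths it returns.

```go
package bake_test

import (
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

// resolveExpected mirrors what the entitlement validator does to the paths it
// returns: resolve symlinks, then make the path absolute. Expected values in
// the test must get the same treatment, otherwise a symlinked t.TempDir()
// (for example /var -> /private/var on macOS) makes the comparison fail even
// though the validation logic itself is correct.
func resolveExpected(t *testing.T, p string) string {
	t.Helper()
	resolved, err := filepath.EvalSymlinks(p)
	require.NoError(t, err)
	abs, err := filepath.Abs(resolved)
	require.NoError(t, err)
	return abs
}
```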
CrazyMax
60822781be ci: update buildkit to v0.17.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-22 13:00:07 +01:00
CrazyMax
4c83475703 dockerfile: update buildkit to v0.17.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-22 11:27:33 +01:00
Tõnis Tiigi
17eff25fe5 Merge pull request #2807 from tonistiigi/buildkit-v0.18.0-rc1
vendor: update buildkit to v0.18.0-rc1
2024-11-21 14:29:29 -08:00
Tõnis Tiigi
9c8ffb77d6 Merge pull request #2806 from tonistiigi/vendor-compose-v2.4.4
vendor: update compose to v2.4.4
2024-11-21 14:29:18 -08:00
Tonis Tiigi
13a426fca6 vendor: update buildkit to v0.18.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 12:57:27 -08:00
Tõnis Tiigi
1a039115bc Merge pull request #2758 from jsternberg/bake-composable-attributes
bake: initial set of composable bake attributes
2024-11-21 12:54:54 -08:00
Tonis Tiigi
07d58782b8 vendor: update compose to v2.4.4
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 10:32:02 -08:00
Jonathan A. Sternberg
3ccbb88e6a bake: initial set of composable bake attributes
This allows using either the csv syntax or object syntax to specify
certain attributes.

This applies to the following fields:
- output
- cache-from
- cache-to
- secret
- ssh

There are still some remaining fields to translate. Specifically
ulimits, annotations, and attest.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-11-21 12:31:11 -06:00
Tõnis Tiigi
a34c641bc4 Merge pull request #2796 from tonistiigi/fs-entitlements
bake: add filesystem entitlements support
2024-11-21 09:51:49 -08:00
CrazyMax
f10be074b4 bake: handle root evaluation with available drives for windows entitlement
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:13 +01:00
CrazyMax
9f429965c0 bake: windows entitlement path fixes take 2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:12 +01:00
CrazyMax
f3929447d7 fix lint issues
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:12 +01:00
Tonis Tiigi
615f4f6759 bake: windows entitlement path fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:12 +01:00
Tonis Tiigi
9a7b028bab bake: add fs entitlements for context paths
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:11 +01:00
Tonis Tiigi
1af4f05ba4 bake: add filesystem entitlements support
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:11 +01:00
CrazyMax
4b5d78db9b Merge pull request #2801 from tonistiigi/enable-testifylint
lint: enable testifylint
2024-11-21 13:57:54 +01:00
Tonis Tiigi
d2c512a95b lint: enable testifylint
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-20 10:53:11 -08:00
Tõnis Tiigi
5937ba0e00 Merge pull request #2307 from crazy-max/test-docker-multi-ver
tests: handle multiple docker versions
2024-11-20 09:53:57 -08:00
Tõnis Tiigi
21fb026aa3 Merge pull request #2775 from crazy-max/openbsd
build openbsd
2024-11-20 09:49:49 -08:00
CrazyMax
bc45641086 build openbsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:42:10 +01:00
CrazyMax
96689e5d05 Merge pull request #2782 from crazy-max/go-1.23
update to go 1.23
2024-11-20 11:40:54 +01:00
CrazyMax
50a8f11f0f dockerfile: missing xx update to 1.5.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax
11cf38bd97 update to go 1.23
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax
300d56b3ff update gopls and golangci-lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax
e04da86aca fix golangci-lint issues
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:17 +01:00
Akihiro Suda
9f1fc99018 Merge pull request #2797 from crazy-max/ci-macos-14
ci: update runner to macos-14
2024-11-20 18:16:59 +09:00
CrazyMax
26bbddb5d6 Merge pull request #2798 from tonistiigi/linter-updates
Improve linter checks
2024-11-20 10:05:37 +01:00
Tonis Tiigi
58fd190c31 lint: enable importas rules from buildkit
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 18:29:04 -08:00
Tonis Tiigi
e7a53fb829 lint: enable forbidigo context rules
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 18:27:25 -08:00
Tonis Tiigi
c0fd64f4f8 lint: enable linters from buildkit
Skipping errname and testifylint

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:51:24 -08:00
Tonis Tiigi
0c629335ac lint: sort linters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:40:42 -08:00
Tonis Tiigi
f216b71ad2 lint: enable gosimple
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:39:22 -08:00
CrazyMax
debe8c0187 ci: update runner to macos-14
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 01:02:43 +01:00
CrazyMax
a69d857b8a tests: handle multiple docker versions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 00:59:09 +01:00
Tõnis Tiigi
a6ef9db84d Merge pull request #2794 from crazy-max/bake-var-req
bake: basic variable validation
2024-11-19 12:23:44 -08:00
CrazyMax
9c27be752c Merge pull request #2791 from docker/dependabot/github_actions/codecov/codecov-action-5
build(deps): bump codecov/codecov-action from 4 to 5
2024-11-19 19:06:56 +01:00
CrazyMax
82a65d4f9b Merge pull request #2792 from thaJeztah/test_engine_27.4
Dockerfile: update to docker v27.4.0-rc.2
2024-11-19 18:36:00 +01:00
Sebastiaan van Stijn
8647f408ac Dockerfile: update to docker v27.4.0-rc.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-19 17:31:21 +01:00
CrazyMax
e51cdcac50 bake: basic variable validation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-19 12:41:06 +01:00
CrazyMax
55a544d976 Merge pull request #2795 from dvdksn/bake-docs-call-check
docs: add "call" attribute for target
2024-11-18 16:12:57 +01:00
Tõnis Tiigi
3b943bd4ba Merge pull request #2790 from crazy-max/fix-network-attr-yaml
bake: check for empty build network with compose
2024-11-14 18:55:33 -08:00
dependabot[bot]
502bb51a3b build(deps): bump codecov/codecov-action from 4 to 5
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 18:33:10 +00:00
CrazyMax
48977780ad bake: check for empty build network with compose
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-14 19:27:54 +01:00
Tõnis Tiigi
e540bb03a4 Merge pull request #2773 from crazy-max/dockerfile-bump-versions
dockerfile: update testing tools
2024-11-13 15:54:31 -08:00
CrazyMax
919c52395d dockerfile: update gotestsum to 1.12.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:31 +01:00
CrazyMax
7f01c63be7 dockerfile: update registry to 2.8.3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:31 +01:00
CrazyMax
076d2f19d5 dockerfile: update undock to 0.8.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax
3c3150b8d3 dockerfile: update docker to 27.3.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax
b03d8c52e1 dockerfile: update buildkit to v0.17.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax
e67ccb080b Merge pull request #2787 from docker/dependabot/github_actions/softprops/action-gh-release-2.1.0
build(deps): bump softprops/action-gh-release from 2.0.9 to 2.1.0
2024-11-13 12:14:53 +01:00
dependabot[bot]
dab02c347e build(deps): bump softprops/action-gh-release from 2.0.9 to 2.1.0
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.9 to 2.1.0.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](e7a8f85e1c...01570a1f39)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-12 18:14:26 +00:00
Tõnis Tiigi
6caa151e98 Merge pull request #2777 from LaurentGoderre/metadata-list-support
Add ability to output json lists in metadata build file
2024-11-11 13:52:09 -08:00
Laurent Goderre
be6d8326a8 Add ability to output json lists in metadata build file
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-11-11 16:36:45 -05:00
Tõnis Tiigi
7855f8324b Merge pull request #2781 from crazy-max/update-fsutil
vendor: github.com/tonistiigi/fsutil 8d32dbdd27d3
2024-11-10 20:21:13 -08:00
CrazyMax
850e5330ad Merge pull request #2778 from jsternberg/improve-missing-name-set-error
bake: improve error when using incorrect format for setting labels
2024-11-06 12:07:26 +01:00
CrazyMax
b7ea25eb59 vendor: github.com/tonistiigi/fsutil 8d32dbdd27d3
full diff: 397af5306b...8d32dbdd27

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-06 12:02:13 +01:00
Tõnis Tiigi
8cdeac54ab Merge pull request #2780 from glours/bump-compose-go-v2.4.3
bump compose-go to version v2.4.3
2024-11-05 09:48:20 -08:00
Guillaume Lours
752c70a06c bump compose-go to version v2.4.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-11-05 18:29:03 +01:00
Tõnis Tiigi
83dd969dc1 Merge pull request #2774 from crazy-max/freebsd
build freebsd
2024-11-05 08:30:26 -08:00
Jonathan A. Sternberg
a5bb117ff0 bake: improve error when using incorrect format for setting labels
Improves the error message when using an incorrect format for setting
labels. This includes the intended format directly in the error message
instead of assuming the user knows what the format is.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-11-04 14:38:23 -06:00
CrazyMax
735b7f68fe build freebsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-03 10:39:28 +01:00
Tõnis Tiigi
bcac44f658 Merge pull request #2771 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.9
build(deps): bump softprops/action-gh-release from 2.0.8 to 2.0.9
2024-10-31 16:59:55 -07:00
dependabot[bot]
d46595eed8 build(deps): bump softprops/action-gh-release from 2.0.8 to 2.0.9
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.8 to 2.0.9.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c062e08bd5...e7a8f85e1c)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-31 18:34:55 +00:00
Tõnis Tiigi
62407927fa Merge pull request #2757 from dvdksn/pprof-dev-docs
docs: add dev instructions on generating/analyzing pprof samples
2024-10-30 15:09:19 -07:00
Tõnis Tiigi
c7b0a84c6a Merge pull request #2767 from tonistiigi/buildkit-v0.17.0
vendor: update buildkit to v0.17.0
2024-10-30 14:41:33 -07:00
Tonis Tiigi
1aac809c63 vendor: update buildkit to v0.17.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-30 12:04:42 -07:00
Tõnis Tiigi
9b0575b589 Merge pull request #2766 from tonistiigi/prune-caps-detection
prune: detect if buildkit supports newer storage filters
2024-10-29 13:55:29 -07:00
Tonis Tiigi
9f3a578149 prune: detect if buildkit supports newer storage filters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-29 13:18:04 -07:00
CrazyMax
14b31d8b77 Merge pull request #2765 from crazy-max/ci-fix-content-read
ci: keep contents read permissions in jobs
2024-10-29 19:04:45 +01:00
CrazyMax
e26911f403 ci: keep contents read permissions in jobs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-29 18:48:42 +01:00
Tõnis Tiigi
cd8d61a9d7 Merge pull request #2763 from neumantm/feat/listWithoutBuilder
Skip Builder Init For Bake List Flags
2024-10-29 10:20:58 -07:00
Tõnis Tiigi
3a56161d03 Merge pull request #2761 from crazy-max/fix-workflow-perms
ci: fix workflow permissions
2024-10-29 10:19:04 -07:00
Tim Neumann
0fd935b0ca Skip Builder Init For Bake List Flags
Add the flags --list-targets and --list-variables to the cases
where initializing the builder can be skipped.

This allows the listing of targets and variables
when no builder is available.

Resolves: docker/buildx#2755
Signed-off-by: Tim Neumann <git@neumann-tim.de>
2024-10-29 10:34:20 +01:00
CrazyMax
704b2cc52d Merge pull request #2760 from tonistiigi/update-compose-v2.4.1
vendor: update compose to v2.4.1
2024-10-29 10:28:24 +01:00
CrazyMax
6b2dc8ce56 ci: fix workflow permissions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-29 09:48:47 +01:00
Tonis Tiigi
a585faf3d2 vendor: update compose to v2.4.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 17:26:28 -07:00
Tõnis Tiigi
181348397c Merge pull request #2742 from tonistiigi/otel-build
build: add OTEL span around build function
2024-10-28 16:16:08 -07:00
Tõnis Tiigi
ad371e428e Merge pull request #2759 from tonistiigi/vendor-buildkit-v0.17.0-rc2
vendor: update buildkit to v0.17.0-rc2
2024-10-28 16:15:19 -07:00
Tonis Tiigi
f35dae3726 build: add OTEL span around build function
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 15:53:22 -07:00
Tonis Tiigi
6fcc6853d9 vendor: update buildkit to v0.17.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 15:39:50 -07:00
Tõnis Tiigi
202c390fca Merge pull request #2722 from crazy-max/test-details-link-exp
build: fix build details link in experimental mode
2024-10-28 10:03:10 -07:00
David Karlsson
ca502cc9a5 docs: add dev instructions on generating/analyzing pprof samples
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-28 17:15:27 +01:00
Tõnis Tiigi
2bdf451b68 Merge pull request #2754 from crazy-max/call-localstate
build: don't generate local state for subrequests
2024-10-25 11:06:22 -07:00
CrazyMax
658ed584c7 Merge pull request #2746 from jsternberg/buildx-profiles
pprof: take cpu and memory profiles by setting environment variables
2024-10-25 15:52:35 +02:00
CrazyMax
886ae21e93 build: don't generate local state for subrequests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-25 11:06:25 +02:00
Jonathan A. Sternberg
cf7a9aa084 pprof: take cpu and memory profiles by setting environment variables
When run in standalone mode, the environment variables
`DOCKER_BUILDX_CPU_PROFILE` and `DOCKER_BUILDX_MEM_PROFILE` will cause
profiles to be written by the CLI.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-24 09:56:27 -05:00
CrazyMax
eb15c667b9 controller: rename ref to sessionID and set buildRef back to ref
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-24 15:37:18 +02:00
CrazyMax
1060328a96 build: fix build details link in experimental mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-23 20:31:17 +02:00
Tõnis Tiigi
746eadd16e Merge pull request #2745 from crazy-max/detect-sudo
config: fix file/folder ownership
2024-10-23 10:04:38 -07:00
CrazyMax
f89f861999 config: fix file/folder ownership
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-23 18:23:14 +02:00
Tõnis Tiigi
08a973a148 Merge pull request #2741 from crazy-max/cli-fix-unknown-command
cli: error out on unknown command
2024-10-23 08:47:44 -07:00
CrazyMax
cc286e2ef5 cli: error out on unknown command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-18 14:04:16 +02:00
David Karlsson
8056a3dc7c docs: add "call" attribute for target
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-17 14:09:19 +02:00
CrazyMax
9f0ebd2643 Merge pull request #2744 from dvdksn/bake-docs-pull-bool
docs: bake pull attr is a boolean
2024-10-17 10:36:10 +02:00
David Karlsson
680cdf1179 docs: bake pull attr is a boolean
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-17 10:26:29 +02:00
Tõnis Tiigi
8d32cabc22 Merge pull request #2740 from dvdksn/src-attr-secret-env
docs: clarify options for secret types (file, env)
2024-10-16 12:20:58 -07:00
David Karlsson
239930c998 chore: fix FromAsCasing in Dockerfile example
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-16 15:58:34 +02:00
David Karlsson
8d7f69883f docs: clarify options for secret types (file, env)
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-16 15:57:58 +02:00
Tõnis Tiigi
1de332530f Merge pull request #2729 from thaJeztah/touchup_security
touch-up security policy
2024-10-10 09:57:55 -07:00
CrazyMax
65c4756473 Merge pull request #2728 from thaJeztah/gha_permissions
gha: set default permissions to "contents: read"
2024-10-09 09:43:33 +02:00
Tõnis Tiigi
d3ff70ace0 Merge pull request #2724 from jsternberg/vtproto
hack: generate vtproto files for buildx
2024-10-08 17:04:19 -07:00
Tonis Tiigi
14de641bec vendor: update buildkit to v0.17.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-08 16:54:03 -07:00
Sebastiaan van Stijn
1ce3e6a221 touch-up security policy
Touch-up the security policy to make the OpenSSF scorecard
slightly happier;
https://securityscorecards.dev/viewer/?uri=github.com/docker/buildx

    Warn: One or no descriptive hints of disclosure, vulnerability, and/or timelines in security policy

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-09 01:22:26 +02:00
Sebastiaan van Stijn
b1a13bb740 gha: set default permissions to "contents: read"
make the OpenSSF scorecard slightly happier;
https://securityscorecards.dev/viewer/?uri=github.com/docker/buildx

    Warn: no topLevel permission defined: .github/workflows/build.yml:1
    Warn: topLevel 'security-events' permission set to 'write': .github/workflows/codeql.yml:13
    Warn: no topLevel permission defined: .github/workflows/docs-release.yml:1
    Warn: no topLevel permission defined: .github/workflows/docs-upstream.yml:1
    Warn: no topLevel permission defined: .github/workflows/e2e.yml:1
    Warn: no topLevel permission defined: .github/workflows/labeler.yml:1
    Warn: no topLevel permission defined: .github/workflows/validate.yml:1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-09 01:07:18 +02:00
Jonathan A. Sternberg
64c5139ab6 hack: generate vtproto files for buildx
Integrates vtproto into buildx. The generated-files Dockerfile has been
modified to copy the equivalent BuildKit file to ensure files are laid
out in the appropriate way for imports.

An import has also been included to change the grpc codec to the version
in buildkit that supports vtproto. This will allow buildx to utilize the
speed and memory improvements from that.

Also updates the gc control options for prune.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-08 13:35:06 -05:00
Tõnis Tiigi
d353f5f6ba Merge pull request #2717 from crazy-max/fix-ls-notrunc
ls: ensure deterministic output for truncated platforms
2024-10-04 12:52:45 -07:00
Tõnis Tiigi
4507a492da Merge pull request #2719 from jsternberg/bake-remote-size
bake: raise maximum size limit and fix size check
2024-10-04 12:51:28 -07:00
Jonathan A. Sternberg
9fc6f39d71 bake: raise maximum size limit and fix size check
Similar to https://github.com/docker/buildx/pull/2716.

Use the file size rather than the proto size, raise the allowed limit to
the same value for consistency, and improve the error message to include
the limit in human units.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-04 09:11:07 -05:00
CrazyMax
f6a27a664b ls: ensure deterministic output for truncated platforms
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-04 09:27:03 +02:00
Tõnis Tiigi
48153169d8 Merge pull request #2716 from jsternberg/dockerfile-size-limit
build: raise maximum size limit for dockerfile and fix size check
2024-10-03 14:25:31 -07:00
Jonathan A. Sternberg
d7de22c61f build: raise maximum size limit for dockerfile and fix size check
Raise the maximum size limit for the dockerfile and correct the size
check. The size check was intended to use the size attribute from the
file stat, but the original gogo version confused the `Size()`
method (which returned the size of the proto message) with the `Size`
attribute (which was named `Size_`).

During the conversion, we noticed the mistake but kept the incorrect
behavior for the sake of keeping the conversion simple.

This also raises the maximum limit because 512 kB is likely a bit too
conservative. The limit has been raised to 2 MB and the limit has been
included in the error message.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-03 12:12:40 -05:00
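
A hedged illustration of the gogo naming pitfall described above (simplified types, not the actual buildx code): gogo renames a proto field called `size` to `Size_` because the generated message already exposes a `Size()` method that reports the encoded length of the message itself, which is what the old check accidentally compared against the limit.

```go
package main

import "fmt"

// FileStat stands in for a generated stat message. The proto field "size"
// becomes Size_ in Go because Size() is already taken by the message API.
type FileStat struct {
	Size_ int64 // size of the file on disk
}

// Size reports the encoded length of the proto message, not the file size.
func (s *FileStat) Size() int { return 10 /* placeholder for proto.Size(s) */ }

func checkDockerfileSize(st *FileStat, limit int64) error {
	// The original check used st.Size(), so it measured the tiny proto
	// message and effectively never triggered; comparing st.Size_ restores
	// the intended file-size limit.
	if st.Size_ > limit {
		return fmt.Errorf("Dockerfile is %d bytes, exceeds the %d byte limit", st.Size_, limit)
	}
	return nil
}

func main() {
	fmt.Println(checkDockerfileSize(&FileStat{Size_: 3 << 20}, 2<<20))
}
```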
Tõnis Tiigi
7c91f3d0dd Merge pull request #2138 from crazy-max/ls-notrunc
ls: no-trunc opt
2024-10-03 08:21:09 -07:00
CrazyMax
820f5e77ed ls: no-trunc opt
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-03 11:15:46 +02:00
Tõnis Tiigi
1db8f6789f Merge pull request #2713 from jsternberg/gogoproto-remove
protobuf: remove gogoproto
2024-10-02 15:39:47 -07:00
Jonathan A. Sternberg
b35a0f4718 protobuf: remove gogoproto
Removes gogo/protobuf from buildx and updates to a version of
moby/buildkit where gogo is removed.

This also changes how the proto files are generated. This is because
newer versions of protobuf are more strict about name conflicts. If two
files have the same name (even if they are relative paths) and are used
in different protoc commands, they'll conflict in the registry.

Since protobuf file generation doesn't work very well with
`paths=source_relative`, this removes the `go:generate` expression and
just relies on the dockerfile to perform the generation.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-02 15:51:59 -05:00
CrazyMax
8e47387d02 Merge pull request #2701 from tonistiigi/fix-link-entitlements
bake: fix linking to targets with entitlements
2024-09-25 10:43:21 +02:00
CrazyMax
fdda92f304 Merge pull request #2703 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.5
build(deps): bump peter-evans/create-pull-request from 7.0.3 to 7.0.5
2024-09-25 10:42:46 +02:00
CrazyMax
d078a3047d Merge pull request #2705 from tonistiigi/call-fallback
build: use better references for --call fallback images
2024-09-25 10:42:24 +02:00
Tõnis Tiigi
f102ad73a8 Merge pull request #2672 from daghack/dockerfile-path-on-warnings
build: display Dockerfile path on check warnings
2024-09-19 08:30:48 -07:00
Talon Bowler
671bd1b54d Update to pass DockerMappingSrc and Dst in with Inputs, and return Inputs through Build
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-09-18 20:56:31 -07:00
Tonis Tiigi
f8657e8798 build: use better references for --call fallback images
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-18 18:43:40 -07:00
dependabot[bot]
61d9f1d981 build(deps): bump peter-evans/create-pull-request from 7.0.3 to 7.0.5
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.3 to 7.0.5.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](6cd32fd936...5e914681df)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-18 18:49:37 +00:00
Tõnis Tiigi
9eb0318ee6 Merge pull request #2696 from crazy-max/test-fix-cleanup
test: fix missing envs when cleaning up some workers
2024-09-17 20:27:29 -07:00
CrazyMax
4528269102 Merge pull request #2699 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.3
build(deps): bump peter-evans/create-pull-request from 7.0.2 to 7.0.3
2024-09-17 09:27:20 +02:00
CrazyMax
8d3d32e376 Merge pull request #2700 from tonistiigi/fix-link-itself
bake: fix validation for linking to itself
2024-09-17 09:25:26 +02:00
Tonis Tiigi
c60afbb25b bake: fix linking to targets with entitlements
When a linked target requires an entitlement, the same entitlement
is also needed by the caller. Otherwise, the request will
fail when the build is processed.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-16 16:31:22 -07:00
Tonis Tiigi
9bfa8603f6 bake: fix validation for linking to itself
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-16 16:29:32 -07:00
dependabot[bot]
30e60628bf build(deps): bump peter-evans/create-pull-request from 7.0.2 to 7.0.3
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.2 to 7.0.3.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](d121e62763...6cd32fd936)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 18:36:21 +00:00
CrazyMax
df0270d0cc test: fix missing envs when cleaning up some workers
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-13 14:19:46 +02:00
CrazyMax
056cf8a7ca Merge pull request #2693 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.2
build(deps): bump peter-evans/create-pull-request from 7.0.1 to 7.0.2
2024-09-12 22:48:06 +02:00
dependabot[bot]
15c596a091 build(deps): bump peter-evans/create-pull-request from 7.0.1 to 7.0.2
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.1 to 7.0.2.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](8867c4aba1...d121e62763)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 18:30:42 +00:00
CrazyMax
e950b2e478 Merge pull request #2692 from glours/bump-compose-go-v2.2.0
bump compose-go to v2.2.0
2024-09-12 19:04:35 +02:00
Guillaume Lours
4da753da79 bump compose-go to v2.2.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-09-12 18:14:18 +02:00
CrazyMax
3f81293fd4 Merge pull request #2691 from crazy-max/ci-fix-perms
ci: fix golvulncheck job permissions
2024-09-12 16:36:29 +02:00
CrazyMax
120578091f ci: fix golvulncheck job permissions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-12 15:23:33 +02:00
Tõnis Tiigi
604b723007 Merge pull request #2684 from crazy-max/inspect-buildkitd-conf
inspect: display buildkit daemon configuration file
2024-09-11 17:32:25 -07:00
CrazyMax
528181c759 inspect: display buildkit daemon configuration file
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-12 00:16:24 +02:00
Tõnis Tiigi
cd5381900c Merge pull request #2688 from crazy-max/bump-xx
dockerfile: update xx to 1.5.0
2024-09-11 10:50:58 -07:00
Tõnis Tiigi
bba2bb4b89 Merge pull request #2686 from crazy-max/bump-buildkit
dockerfile, ci: update buildkit to latest stable
2024-09-11 10:50:40 -07:00
Tõnis Tiigi
8fd27b8c23 Merge pull request #2685 from crazy-max/skip-networkhost-conf
builder: do not set network.host entitlement flag if already set in buildkitd conf
2024-09-11 10:39:29 -07:00
Tõnis Tiigi
6dcc8d8b84 Merge pull request #2689 from crazy-max/bake-fix-network-field
bake: fix missing omitempty and optional tags for network field
2024-09-11 10:35:33 -07:00
CrazyMax
9fb8b04b64 bake: fix missing omitempty and optional tags for network field
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 14:47:01 +02:00
CrazyMax
6ba5802958 Merge pull request #2687 from crazy-max/bump-docker
dockerfile: update docker to 27.2.1
2024-09-11 13:57:09 +02:00
CrazyMax
f039670961 dockerfile: update xx to 1.5.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:59:36 +02:00
CrazyMax
4ec12e7e68 dockerfile: update docker to 27.2.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:58:33 +02:00
CrazyMax
66ed7d6162 dockerfile, ci: update buildkit to latest stable
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:51:20 +02:00
CrazyMax
617d59d70b builder: do not set network.host entitlement flag if already set in buildkitd conf
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:27:29 +02:00
CrazyMax
40f444f4b8 Merge pull request #2680 from crazy-max/update-buildkit
vendor: update buildkit to v0.16.0
2024-09-10 18:35:06 +02:00
CrazyMax
8201d301d5 vendor: update buildkit to v0.16.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-10 18:18:16 +02:00
Talon Bowler
f1b92e9e6c update Build commands to return dockerfile mapping for use in printing rule check warnings
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-09-06 07:34:13 -07:00
1326 changed files with 153553 additions and 119122 deletions

View File

@@ -188,6 +188,89 @@ To generate new vendored files with go modules run:
$ make vendor
```
### Generate profiling data
You can configure Buildx to generate [`pprof`](https://github.com/google/pprof)
memory and CPU profiles to analyze and optimize your builds. These profiles are
useful for identifying performance bottlenecks, detecting memory
inefficiencies, and ensuring the program (Buildx) runs efficiently.
The following environment variables control whether Buildx generates profiling
data for builds:
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
```
When set, Buildx emits profiling samples for the builds to the location
specified by the environment variable.
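
The sketch below shows, in broad strokes, how env-var-gated profiling like this is typically wired up in a Go CLI. It is an assumption-level illustration using the variable names from this section, not the actual Buildx implementation.

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// setupProfiling starts a CPU profile if BUILDX_CPU_PROFILE is set and
// returns a stop function that finalizes the CPU profile and, if
// BUILDX_MEM_PROFILE is set, writes a heap profile on exit.
func setupProfiling() (stop func()) {
	var cpuFile *os.File
	if path := os.Getenv("BUILDX_CPU_PROFILE"); path != "" {
		if f, err := os.Create(path); err == nil {
			cpuFile = f
			_ = pprof.StartCPUProfile(f)
		}
	}
	return func() {
		if cpuFile != nil {
			pprof.StopCPUProfile()
			cpuFile.Close()
		}
		if path := os.Getenv("BUILDX_MEM_PROFILE"); path != "" {
			if f, err := os.Create(path); err == nil {
				runtime.GC() // flush up-to-date allocation data into the heap profile
				_ = pprof.WriteHeapProfile(f)
				f.Close()
			}
		}
	}
}

func main() {
	stop := setupProfiling()
	defer stop()
	// ... run the build ...
}
```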
To analyze and visualize profiling samples, you need `pprof` from the Go
toolchain, and (optionally) GraphViz for visualization in a graphical format.
To inspect profiling data with `pprof`:
1. Build a local binary of Buildx from source.
```console
$ docker buildx bake
```
The binary gets exported to `./bin/build/buildx`.
2. Run a build with the environment variables set to generate profiling data.
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
$ ./bin/build/buildx bake
```
This creates `buildx_cpu.prof` and `buildx_mem.prof` for the build.
3. Start `pprof` and specify the filename of the profile that you want to
analyze.
```console
$ go tool pprof buildx_cpu.prof
```
This opens the `pprof` interactive console. From here, you can inspect the
profiling sample using various commands. For example, use the `top 10` command
to view the 10 most time-consuming entries.
```plaintext
(pprof) top 10
Showing nodes accounting for 3.04s, 91.02% of 3.34s total
Dropped 123 nodes (cum <= 0.02s)
Showing top 10 nodes out of 159
flat flat% sum% cum cum%
1.14s 34.13% 34.13% 1.14s 34.13% syscall.syscall
0.91s 27.25% 61.38% 0.91s 27.25% runtime.kevent
0.35s 10.48% 71.86% 0.35s 10.48% runtime.pthread_cond_wait
0.22s 6.59% 78.44% 0.22s 6.59% runtime.pthread_cond_signal
0.15s 4.49% 82.93% 0.15s 4.49% runtime.usleep
0.10s 2.99% 85.93% 0.10s 2.99% runtime.memclrNoHeapPointers
0.10s 2.99% 88.92% 0.10s 2.99% runtime.memmove
0.03s 0.9% 89.82% 0.03s 0.9% runtime.madvise
0.02s 0.6% 90.42% 0.02s 0.6% runtime.(*mspan).typePointersOfUnchecked
0.02s 0.6% 91.02% 0.02s 0.6% runtime.pcvalue
```
To view the call graph in a GUI, run `go tool pprof -http=:8081 <sample>`.
> [!NOTE]
> Requires [GraphViz](https://www.graphviz.org/) to be installed.
```console
$ go tool pprof -http=:8081 buildx_cpu.prof
Serving web UI on http://127.0.0.1:8081
http://127.0.0.1:8081
```
For more information about using `pprof` and how to interpret the call graph,
refer to the [`pprof` README](https://github.com/google/pprof/blob/main/doc/README.md).
### Conventions

.github/SECURITY.md vendored
View File

@@ -1,12 +1,44 @@
# Reporting security issues
# Security Policy
The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
The maintainers of Docker Buildx take security seriously. If you discover
a security issue, please bring it to their attention right away!
**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
## Reporting a Vulnerability
Security reports are greatly appreciated, and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis is
performed to confirm the report and determine its scope. We may request
additional information in this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not currently
offer a paid security bounty program at this time.
## Supported Versions
Once a new feature release is cut, support for the previous feature release is
discontinued. An exception may be made for urgent security releases that occur
shortly after a new feature release. Buildx does not offer LTS (Long-Term Support)
releases. Refer to the [Support Policy](https://github.com/docker/buildx/blob/master/PROJECT.md#support-policy)
for further details.

View File

@@ -1,5 +1,14 @@
name: build
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -27,7 +36,7 @@ env:
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.22"
GO_VERSION: "1.23"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
jobs:
@@ -45,9 +54,9 @@ jobs:
- master
- latest
- buildx-stable-1
- v0.14.1
- v0.13.2
- v0.12.5
- v0.17.2
- v0.16.0
- v0.15.2
worker:
- docker-container
- remote
@@ -67,6 +76,16 @@ jobs:
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
- worker: "docker@26.1"
pkg: ./tests
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: "docker@26.1"
pkg: ./tests
mode: experimental
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
steps:
-
name: Prepare
@@ -77,7 +96,7 @@ jobs:
fi
testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
case "${{ matrix.worker }}" in
docker | docker+containerd)
docker | docker+containerd | docker@* | docker+containerd@*)
echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
;;
*)
@@ -122,7 +141,7 @@ jobs:
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
directory: ./bin/testreports
flags: integration
@@ -149,7 +168,7 @@ jobs:
matrix:
os:
- ubuntu-24.04
- macos-12
- macos-14
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
@@ -194,7 +213,7 @@ jobs:
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
@@ -218,6 +237,8 @@ jobs:
govulncheck:
runs-on: ubuntu-24.04
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
steps:
@@ -363,6 +384,8 @@ jobs:
runs-on: ubuntu-24.04
if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
needs:
@@ -393,6 +416,9 @@ jobs:
release:
runs-on: ubuntu-24.04
permissions:
# required to create GitHub release
contents: write
needs:
- test-integration
- test-unit
@@ -422,7 +448,7 @@ jobs:
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@c062e08bd532815e2082a85e87e3ef29c3e6d191 # v2.0.8
uses: softprops/action-gh-release@01570a1f39cb168c169c802c3bceb9e93fb10974 # v2.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -1,5 +1,14 @@
name: codeql
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
@@ -7,17 +16,16 @@ on:
- 'v[0-9]*'
pull_request:
permissions:
actions: read
contents: read
security-events: write
env:
GO_VERSION: "1.22"
GO_VERSION: "1.23"
jobs:
codeql:
runs-on: ubuntu-24.04
permissions:
contents: read
actions: read
security-events: write
steps:
-
name: Checkout

View File

@@ -1,5 +1,14 @@
name: docs-release
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_dispatch:
inputs:
@@ -14,6 +23,9 @@ jobs:
open-pr:
runs-on: ubuntu-24.04
if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
permissions:
contents: write
pull-requests: write
steps:
-
name: Checkout docs repo
@@ -57,7 +69,7 @@ jobs:
VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@8867c4aba1b742c39f8d0ba35429c2dfa4b6cb20 # v7.0.1
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7.0.5
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io

View File

@@ -3,6 +3,15 @@
# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
name: docs-upstream
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

View File

@@ -1,5 +1,14 @@
name: e2e
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

View File

@@ -1,5 +1,14 @@
name: labeler
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -9,10 +18,12 @@ on:
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
permissions:
# same as global permission
contents: read
# required for writing labels
pull-requests: write
steps:
-
name: Run

View File

@@ -1,5 +1,14 @@
name: validate
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

View File

@@ -1,26 +1,52 @@
run:
timeout: 30m
modules-download-mode: vendor
# default uses Go version from the go.mod file, fallback on the env var
# `GOVERSION`, fallback on 1.17: https://golangci-lint.run/usage/configuration/#run-configuration
go: "1.23"
linters:
enable:
- gofmt
- govet
- bodyclose
- depguard
- forbidigo
- gocritic
- gofmt
- goimports
- gosec
- gosimple
- govet
- ineffassign
- makezero
- misspell
- unused
- noctx
- nolintlint
- revive
- staticcheck
- testifylint
- typecheck
- nolintlint
- gosec
- forbidigo
- unused
- whitespace
disable-all: true
linters-settings:
gocritic:
disabled-checks:
- "ifElseChain"
- "assignOp"
- "appendAssign"
- "singleCaseSwitch"
- "exitAfterDefer" # FIXME
importas:
alias:
# Enforce alias to prevent it accidentally being used instead of
# buildkit errdefs package (or vice-versa).
- pkg: "github.com/containerd/errdefs"
alias: "cerrdefs"
- pkg: "github.com/opencontainers/image-spec/specs-go/v1"
alias: "ocispecs"
- pkg: "github.com/opencontainers/go-digest"
alias: "digest"
govet:
enable:
- nilness
@@ -43,14 +69,27 @@ linters-settings:
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^context\.WithCancel(# use context\.WithCancelCause instead)?$'
- '^context\.WithDeadline(# use context\.WithDeadlineCause instead)?$'
- '^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$'
- '^ctx\.Err(# use context\.Cause instead)?$'
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
- '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
- G115 # integer overflow conversion (TODO: verify these)
config:
G306: "0644"
testifylint:
disable:
# disable rules that reduce the test condition
- "empty"
- "bool-compare"
- "len"
- "negative-positive"
issues:
exclude-files:
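
As a reading aid for the `forbidigo` context rules above, here is a small hedged example (not taken from the buildx codebase) of the pattern the linter pushes toward: the `*Cause` variants record why a context was cancelled, and `context.Cause` retrieves that reason instead of the generic `ctx.Err()`.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errBuildAborted = errors.New("build aborted by user")

func main() {
	// context.WithCancel is forbidden by the rule above; WithCancelCause
	// lets the canceller attach a reason.
	ctx, cancel := context.WithCancelCause(context.Background())
	go func() {
		time.Sleep(10 * time.Millisecond)
		cancel(errBuildAborted)
	}()

	<-ctx.Done()
	// ctx.Err() is also flagged: it only reports context.Canceled, while
	// context.Cause returns the error passed to cancel.
	fmt.Println(context.Cause(ctx)) // build aborted by user
}
```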

View File

@@ -1,20 +1,23 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22
ARG XX_VERSION=1.4.0
ARG GO_VERSION=1.23
ARG XX_VERSION=1.5.0
# for testing
ARG DOCKER_VERSION=27.1.1
ARG DOCKER_VERSION=27.4.0-rc.2
ARG DOCKER_VERSION_ALT_26=26.1.3
ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.14.1
ARG UNDOCK_VERSION=0.7.0
ARG GOTESTSUM_VERSION=v1.12.0
ARG REGISTRY_VERSION=2.8.3
ARG BUILDKIT_VERSION=v0.17.2
ARG UNDOCK_VERSION=0.8.0
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt
FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM crazymax/undock:$UNDOCK_VERSION AS undock
@@ -77,6 +80,7 @@ RUN --mount=type=bind,target=. \
set -e
xx-go --wrap
DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
file /usr/bin/docker-buildx
xx-verify --static /usr/bin/docker-buildx
EOT
@@ -95,7 +99,9 @@ FROM scratch AS binaries-unix
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
FROM binaries-unix AS binaries-darwin
FROM binaries-unix AS binaries-freebsd
FROM binaries-unix AS binaries-linux
FROM binaries-unix AS binaries-openbsd
FROM scratch AS binaries-windows
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
@@ -120,10 +126,13 @@ COPY --link --from=gotestsum /out /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=docker-engine / /usr/bin/
COPY --link --from=docker-cli / /usr/bin/
COPY --link --from=docker-engine-alt / /opt/docker-alt-26/
COPY --link --from=docker-cli-alt / /opt/docker-alt-26/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=undock /usr/local/bin/undock /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
ENV TEST_DOCKER_EXTRA="docker@26.1=/opt/docker-alt-26"
FROM integration-test-base AS integration-test
COPY . .

View File

@@ -7,6 +7,7 @@ import (
"path"
"path/filepath"
"regexp"
"slices"
"sort"
"strconv"
"strings"
@@ -480,7 +481,7 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
for _, v := range t.Contexts {
if strings.HasPrefix(v, "target:") {
target := strings.TrimPrefix(v, "target:")
if target == t.Name {
if target == name {
return errors.Errorf("target %s cannot link to itself", target)
}
for _, v := range visited {
@@ -502,6 +503,14 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
if err := c.loadLinks(target, t2, m, o, visited); err != nil {
return err
}
// entitlements are inherited from linked targets
for _, ent := range t2.Entitlements {
if !slices.Contains(t.Entitlements, ent) {
t.Entitlements = append(t.Entitlements, ent)
}
}
if len(t.Platforms) > 1 && len(t2.Platforms) > 1 {
if !sliceEqual(t.Platforms, t2.Platforms) {
return errors.Errorf("target %s can't be used by %s because it is defined for different platforms %v and %v", target, name, t2.Platforms, t.Platforms)
@@ -704,7 +713,7 @@ type Target struct {
Outputs []string `json:"output,omitempty" hcl:"output,optional" cty:"output"`
Pull *bool `json:"pull,omitempty" hcl:"pull,optional" cty:"pull"`
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
NetworkMode *string `json:"network" hcl:"network" cty:"network"`
NetworkMode *string `json:"network,omitempty" hcl:"network,optional" cty:"network"`
NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
ShmSize *string `json:"shm-size,omitempty" hcl:"shm-size,optional"`
Ulimits []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
@@ -716,10 +725,12 @@ type Target struct {
linked bool
}
var _ hclparser.WithEvalContexts = &Target{}
var _ hclparser.WithGetName = &Target{}
var _ hclparser.WithEvalContexts = &Group{}
var _ hclparser.WithGetName = &Group{}
var (
_ hclparser.WithEvalContexts = &Target{}
_ hclparser.WithGetName = &Target{}
_ hclparser.WithEvalContexts = &Group{}
_ hclparser.WithGetName = &Group{}
)
func (t *Target) normalize() {
t.Annotations = removeDupes(t.Annotations)
@@ -856,7 +867,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Dockerfile = &value
case "args":
if len(keys) != 2 {
return errors.Errorf("args require name")
return errors.Errorf("invalid format for args, expecting args.<name>=<value>")
}
if t.Args == nil {
t.Args = map[string]*string{}
@@ -864,7 +875,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Args[keys[1]] = &value
case "contexts":
if len(keys) != 2 {
return errors.Errorf("contexts require name")
return errors.Errorf("invalid format for contexts, expecting contexts.<name>=<value>")
}
if t.Contexts == nil {
t.Contexts = map[string]string{}
@@ -872,7 +883,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
t.Contexts[keys[1]] = value
case "labels":
if len(keys) != 2 {
return errors.Errorf("labels require name")
return errors.Errorf("invalid format for labels, expecting labels.<name>=<value>")
}
if t.Labels == nil {
t.Labels = map[string]*string{}
@@ -1106,62 +1117,34 @@ func updateContext(t *build.Inputs, inp *Input) {
t.ContextState = &st
}
// validateContextsEntitlements is a basic check to ensure contexts do not
// escape local directories when loaded from remote sources. This is to be
// replaced with proper entitlements support in the future.
func validateContextsEntitlements(t build.Inputs, inp *Input) error {
if inp == nil || inp.State == nil {
return nil
}
if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
if vv, _ := strconv.ParseBool(v); vv {
return nil
}
}
func collectLocalPaths(t build.Inputs) []string {
var out []string
if t.ContextState == nil {
if err := checkPath(t.ContextPath); err != nil {
return err
if v, ok := isLocalPath(t.ContextPath); ok {
out = append(out, v)
}
if v, ok := isLocalPath(t.DockerfilePath); ok {
out = append(out, v)
}
} else if strings.HasPrefix(t.ContextPath, "cwd://") {
out = append(out, strings.TrimPrefix(t.ContextPath, "cwd://"))
}
for _, v := range t.NamedContexts {
if v.State != nil {
continue
}
if err := checkPath(v.Path); err != nil {
return err
if v, ok := isLocalPath(v.Path); ok {
out = append(out, v)
}
}
return nil
return out
}
func checkPath(p string) error {
func isLocalPath(p string) (string, bool) {
if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
return nil
return "", false
}
p, err := filepath.EvalSymlinks(p)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
p, err = filepath.Abs(p)
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return err
}
rel, err := filepath.Rel(wd, p)
if err != nil {
return err
}
parts := strings.Split(rel, string(os.PathSeparator))
if parts[0] == ".." {
return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
}
return nil
return strings.TrimPrefix(p, "cwd://"), true
}
func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
@@ -1201,9 +1184,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
// it's not outside the working directory and then resolve it to an
// absolute path.
bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
if err := checkPath(bi.DockerfilePath); err != nil {
return nil, err
}
var err error
bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
if err != nil {
@@ -1240,10 +1220,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
}
if err := validateContextsEntitlements(bi, inp); err != nil {
return nil, err
}
t.Context = &bi.ContextPath
args := map[string]string{}
@@ -1304,6 +1280,8 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if err != nil {
return nil, err
}
bo.SecretSpecs = secrets
secretAttachment, err := controllerapi.CreateSecrets(secrets)
if err != nil {
return nil, err
@@ -1317,6 +1295,8 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
bo.SSHSpecs = sshSpecs
sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, err

View File

@@ -59,8 +59,8 @@ target "webapp" {
t.Run("InvalidTargetOverrides", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
require.NotNil(t, err)
require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
require.Error(t, err)
require.Equal(t, "could not find any target matching 'nosuchtarget'", err.Error())
})
t.Run("ArgsOverrides", func(t *testing.T) {
@@ -116,7 +116,7 @@ target "webapp" {
t.Run("ContextOverride", func(t *testing.T) {
t.Parallel()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
require.NotNil(t, err)
require.Error(t, err)
m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context=foo"}, nil)
require.NoError(t, err)
@@ -203,8 +203,8 @@ target "webapp" {
// NOTE: I am unsure whether failing to match should always error out
// instead of simply skipping that override.
// Let's enforce the error and we can relax it later if users complain.
require.NotNil(t, err)
require.Equal(t, err.Error(), "could not find any target matching 'nomatch*'")
require.Error(t, err)
require.Equal(t, "could not find any target matching 'nomatch*'", err.Error())
},
},
}
@@ -1856,3 +1856,175 @@ func TestNetNone(t *testing.T) {
require.Len(t, bo["app"].Allow, 0)
require.Equal(t, "none", bo["app"].NetworkMode)
}
func TestVariableValidation(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(`
variable "FOO" {
validation {
condition = FOO != ""
error_message = "FOO is required."
}
}
target "app" {
args = {
FOO = FOO
}
}
`),
}
ctx := context.TODO()
t.Run("Valid", func(t *testing.T) {
t.Setenv("FOO", "bar")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
})
t.Run("Invalid", func(t *testing.T) {
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "FOO is required.")
})
}
func TestVariableValidationMulti(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(`
variable "FOO" {
validation {
condition = FOO != ""
error_message = "FOO is required."
}
validation {
condition = strlen(FOO) > 4
error_message = "FOO must be longer than 4 characters."
}
}
target "app" {
args = {
FOO = FOO
}
}
`),
}
ctx := context.TODO()
t.Run("Valid", func(t *testing.T) {
t.Setenv("FOO", "barbar")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
})
t.Run("InvalidLength", func(t *testing.T) {
t.Setenv("FOO", "bar")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "FOO must be longer than 4 characters.")
})
t.Run("InvalidEmpty", func(t *testing.T) {
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "FOO is required.")
})
}
func TestVariableValidationWithDeps(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(`
variable "FOO" {}
variable "BAR" {
validation {
condition = FOO != ""
error_message = "BAR requires FOO to be set."
}
}
target "app" {
args = {
BAR = BAR
}
}
`),
}
ctx := context.TODO()
t.Run("Valid", func(t *testing.T) {
t.Setenv("FOO", "bar")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
})
t.Run("SetBar", func(t *testing.T) {
t.Setenv("FOO", "bar")
t.Setenv("BAR", "baz")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
})
t.Run("Invalid", func(t *testing.T) {
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "BAR requires FOO to be set.")
})
}
func TestVariableValidationTyped(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(`
variable "FOO" {
default = 0
validation {
condition = FOO > 5
error_message = "FOO must be greater than 5."
}
}
target "app" {
args = {
FOO = FOO
}
}
`),
}
ctx := context.TODO()
t.Run("Valid", func(t *testing.T) {
t.Setenv("FOO", "10")
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
})
t.Run("Invalid", func(t *testing.T) {
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "FOO must be greater than 5.")
})
}
// https://github.com/docker/buildx/issues/2822
func TestVariableEmpty(t *testing.T) {
fp := File{
Name: "docker-bake.hcl",
Data: []byte(`
variable "FOO" {
default = ""
}
target "app" {
output = [FOO]
}
`),
}
ctx := context.TODO()
_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
require.NoError(t, err)
}


@@ -102,6 +102,12 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
shmSize = &shmSizeStr
}
var networkModeP *string
if s.Build.Network != "" {
networkMode := s.Build.Network
networkModeP = &networkMode
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
@@ -154,7 +160,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
})),
CacheFrom: s.Build.CacheFrom,
CacheTo: s.Build.CacheTo,
NetworkMode: &s.Build.Network,
NetworkMode: networkModeP,
SSH: ssh,
Secrets: secrets,
ShmSize: shmSize,
@@ -173,7 +179,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
c.Targets = append(c.Targets, t)
}
c.Groups = append(c.Groups, g)
}
return &c, nil


@@ -2,15 +2,21 @@ package bake
import (
"bufio"
"cmp"
"context"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
)
@@ -67,10 +73,8 @@ func ParseEntitlements(in []string) (EntitlementConf, error) {
conf.ImagePush = append(conf.ImagePush, v)
conf.ImageLoad = append(conf.ImageLoad, v)
default:
return conf, errors.Errorf("uknown entitlement key %q", k)
return conf, errors.Errorf("unknown entitlement key %q", k)
}
// TODO: dedupe slices and parent paths
}
}
return conf, nil
@@ -101,10 +105,73 @@ func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) erro
}
}
}
rwPaths := map[string]struct{}{}
roPaths := map[string]struct{}{}
for _, p := range collectLocalPaths(bo.Inputs) {
roPaths[p] = struct{}{}
}
for _, out := range bo.Exports {
if out.Type == "local" {
if dest, ok := out.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
if out.Type == "tar" {
if dest, ok := out.Attrs["dest"]; ok && dest != "-" {
rwPaths[dest] = struct{}{}
}
}
}
for _, ce := range bo.CacheTo {
if ce.Type == "local" {
if dest, ok := ce.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
}
for _, ci := range bo.CacheFrom {
if ci.Type == "local" {
if src, ok := ci.Attrs["src"]; ok {
roPaths[src] = struct{}{}
}
}
}
for _, secret := range bo.SecretSpecs {
if secret.FilePath != "" {
roPaths[secret.FilePath] = struct{}{}
}
}
for _, ssh := range bo.SSHSpecs {
for _, p := range ssh.Paths {
roPaths[p] = struct{}{}
}
if len(ssh.Paths) == 0 {
expected.SSH = true
}
}
var err error
expected.FSRead, err = findMissingPaths(c.FSRead, roPaths)
if err != nil {
return err
}
expected.FSWrite, err = findMissingPaths(c.FSWrite, rwPaths)
if err != nil {
return err
}
return nil
}
func (c EntitlementConf) Prompt(ctx context.Context, out io.Writer) error {
func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Writer) error {
var term bool
if _, err := console.ConsoleFromFile(os.Stdin); err == nil {
term = true
@@ -113,32 +180,78 @@ func (c EntitlementConf) Prompt(ctx context.Context, out io.Writer) error {
var msgs []string
var flags []string
// these warnings are currently disabled to give users time to update
var msgsFS []string
var flagsFS []string
if c.NetworkHost {
msgs = append(msgs, " - Running build containers that can access host network")
flags = append(flags, "network.host")
flags = append(flags, string(EntitlementKeyNetworkHost))
}
if c.SecurityInsecure {
msgs = append(msgs, " - Running privileged containers that can make system changes")
flags = append(flags, "security.insecure")
flags = append(flags, string(EntitlementKeySecurityInsecure))
}
if len(msgs) == 0 {
if c.SSH {
msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
flagsFS = append(flagsFS, string(EntitlementKeySSH))
}
roPaths, rwPaths, commonPaths := groupSamePaths(c.FSRead, c.FSWrite)
wd, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "failed to get current working directory")
}
wd, err = filepath.EvalSymlinks(wd)
if err != nil {
return errors.Wrap(err, "failed to evaluate working directory")
}
roPaths = toRelativePaths(roPaths, wd)
rwPaths = toRelativePaths(rwPaths, wd)
commonPaths = toRelativePaths(commonPaths, wd)
if len(commonPaths) > 0 {
for _, p := range commonPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read and write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFS)+"="+p)
}
}
if len(roPaths) > 0 {
for _, p := range roPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSRead)+"="+p)
}
}
if len(rwPaths) > 0 {
for _, p := range rwPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSWrite)+"="+p)
}
}
if len(msgs) == 0 && len(msgsFS) == 0 {
return nil
}
fmt.Fprintf(out, "Your build is requesting privileges for following possibly insecure capabilities:\n\n")
for _, m := range msgs {
for _, m := range slices.Concat(msgs, msgsFS) {
fmt.Fprintf(out, "%s\n", m)
}
for i, f := range flags {
flags[i] = "--allow=" + f
}
for i, f := range flagsFS {
flagsFS[i] = "--allow=" + f
}
if term {
fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(flags, " "))
fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
} else {
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(flags, " "))
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
}
args := append([]string(nil), os.Args...)
@@ -149,7 +262,35 @@ func (c EntitlementConf) Prompt(ctx context.Context, out io.Writer) error {
if idx != -1 {
fmt.Fprintf(out, "\nYour full command with requested privileges:\n\n")
fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(flags, " "), strings.Join(args[idx+1:], " "))
fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(slices.Concat(flags, flagsFS), " "), strings.Join(args[idx+1:], " "))
}
fsEntitlementsEnabled := false
if isRemote {
if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BAKE_ALLOW_REMOTE_FS_ACCESS value %q", v)
}
fsEntitlementsEnabled = !vv
} else {
fsEntitlementsEnabled = true
}
}
v, fsEntitlementsSet := os.LookupEnv("BUILDX_BAKE_ENTITLEMENTS_FS")
if fsEntitlementsSet {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BUILDX_BAKE_ENTITLEMENTS_FS value %q", v)
}
fsEntitlementsEnabled = vv
}
if !fsEntitlementsEnabled && len(msgs) == 0 {
if !fsEntitlementsSet {
fmt.Fprintf(out, "This warning will become an error in a future release. To enable filesystem entitlements checks at the moment, set BUILDX_BAKE_ENTITLEMENTS_FS=1 .\n\n")
}
return nil
}
if term {
@@ -173,3 +314,288 @@ func (c EntitlementConf) Prompt(ctx context.Context, out io.Writer) error {
return errors.Errorf("additional privileges requested")
}
func isParentOrEqualPath(p, parent string) bool {
if p == parent || parent == "/" {
return true
}
if strings.HasPrefix(p, filepath.Clean(parent+string(filepath.Separator))) {
return true
}
return false
}
func findMissingPaths(set []string, paths map[string]struct{}) ([]string, error) {
set, allowAny, err := evaluatePaths(set)
if err != nil {
return nil, err
} else if allowAny {
return nil, nil
}
paths, err = evaluateToExistingPaths(paths)
if err != nil {
return nil, err
}
paths, err = dedupPaths(paths)
if err != nil {
return nil, err
}
out := make([]string, 0, len(paths))
loop0:
for p := range paths {
for _, c := range set {
if isParentOrEqualPath(p, c) {
continue loop0
}
}
out = append(out, p)
}
if len(out) == 0 {
return nil, nil
}
slices.Sort(out)
return out, nil
}
func dedupPaths(in map[string]struct{}) (map[string]struct{}, error) {
arr := make([]string, 0, len(in))
for p := range in {
arr = append(arr, filepath.Clean(p))
}
slices.SortFunc(arr, func(a, b string) int {
return cmp.Compare(len(a), len(b))
})
m := make(map[string]struct{}, len(arr))
loop0:
for _, p := range arr {
for parent := range m {
if strings.HasPrefix(p, parent+string(filepath.Separator)) {
continue loop0
}
}
m[p] = struct{}{}
}
return m, nil
}
func toRelativePaths(in []string, wd string) []string {
out := make([]string, 0, len(in))
for _, p := range in {
rel, err := filepath.Rel(wd, p)
if err == nil {
// allow up to one level of ".." in the path
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)+"..") {
out = append(out, rel)
continue
}
}
out = append(out, p)
}
return out
}
func groupSamePaths(in1, in2 []string) ([]string, []string, []string) {
if in1 == nil || in2 == nil {
return in1, in2, nil
}
slices.Sort(in1)
slices.Sort(in2)
common := []string{}
i, j := 0, 0
for i < len(in1) && j < len(in2) {
switch {
case in1[i] == in2[j]:
common = append(common, in1[i])
i++
j++
case in1[i] < in2[j]:
i++
default:
j++
}
}
in1 = removeCommonPaths(in1, common)
in2 = removeCommonPaths(in2, common)
return in1, in2, common
}
func removeCommonPaths(in, common []string) []string {
filtered := make([]string, 0, len(in))
commonIndex := 0
for _, path := range in {
if commonIndex < len(common) && path == common[commonIndex] {
commonIndex++
continue
}
filtered = append(filtered, path)
}
return filtered
}
func evaluatePaths(in []string) ([]string, bool, error) {
out := make([]string, 0, len(in))
allowAny := false
for _, p := range in {
if p == "*" {
allowAny = true
continue
}
v, err := filepath.Abs(p)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = filepath.EvalSymlinks(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
out = append(out, v)
}
return out, allowAny, nil
}
func evaluateToExistingPaths(in map[string]struct{}) (map[string]struct{}, error) {
m := make(map[string]struct{}, len(in))
for p := range in {
v, err := evaluateToExistingPath(p)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
m[v] = struct{}{}
}
return m, nil
}
func evaluateToExistingPath(in string) (string, error) {
in, err := filepath.Abs(in)
if err != nil {
return "", err
}
volLen := volumeNameLen(in)
pathSeparator := string(os.PathSeparator)
if volLen < len(in) && os.IsPathSeparator(in[volLen]) {
volLen++
}
vol := in[:volLen]
dest := vol
linksWalked := 0
var end int
for start := volLen; start < len(in); start = end {
for start < len(in) && os.IsPathSeparator(in[start]) {
start++
}
end = start
for end < len(in) && !os.IsPathSeparator(in[end]) {
end++
}
if end == start {
break
} else if in[start:end] == "." {
continue
} else if in[start:end] == ".." {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen || dest[r+1:] == ".." {
if len(dest) > volLen {
dest += pathSeparator
}
dest += ".."
} else {
dest = dest[:r]
}
continue
}
if len(dest) > volumeNameLen(dest) && !os.IsPathSeparator(dest[len(dest)-1]) {
dest += pathSeparator
}
dest += in[start:end]
fi, err := os.Lstat(dest)
if err != nil {
// If the component doesn't exist, return the last valid path
if os.IsNotExist(err) {
for r := len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
return dest[:r], nil
}
}
return vol, nil
}
return "", err
}
if fi.Mode()&fs.ModeSymlink == 0 {
if !fi.Mode().IsDir() && end < len(in) {
return "", syscall.ENOTDIR
}
continue
}
linksWalked++
if linksWalked > 255 {
return "", errors.New("too many symlinks")
}
link, err := os.Readlink(dest)
if err != nil {
return "", err
}
in = link + in[end:]
v := volumeNameLen(link)
if v > 0 {
if v < len(link) && os.IsPathSeparator(link[v]) {
v++
}
vol = link[:v]
dest = vol
end = len(vol)
} else if len(link) > 0 && os.IsPathSeparator(link[0]) {
dest = link[:1]
end = 1
vol = link[:1]
volLen = 1
} else {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen {
dest = vol
} else {
dest = dest[:r]
}
end = 0
}
}
return filepath.Clean(dest), nil
}
func volumeNameLen(s string) int {
return len(filepath.VolumeName(s))
}
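
For orientation, the helpers above are meant to be driven in sequence: parse the --allow flags, validate them against the resolved build options, then prompt for whatever is still missing. Below is a minimal sketch of that flow, assuming only the exported signatures shown in this diff; the function name, variable names and wiring are illustrative, not code from the changeset.

// Illustrative wiring. Assumed imports: "context", "os",
//   bake "github.com/docker/buildx/bake"
//   build "github.com/docker/buildx/build"
func confirmEntitlements(ctx context.Context, allow []string, opts map[string]build.Options, isRemote bool) error {
	conf, err := bake.ParseEntitlements(allow) // e.g. "network.host", "fs.read=/src"
	if err != nil {
		return err
	}
	// Validate reports the entitlements requested by the builds but not yet
	// granted, including the fs.read/fs.write paths collected in check().
	missing, err := conf.Validate(opts)
	if err != nil {
		return err
	}
	// Prompt prints the capability/path warnings and, for remote bake
	// definitions, honors BAKE_ALLOW_REMOTE_FS_ACCESS and BUILDX_BAKE_ENTITLEMENTS_FS.
	return missing.Prompt(ctx, isRemote, os.Stderr)
}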

bake/entitlements_test.go (new file)

@@ -0,0 +1,460 @@
package bake
import (
"fmt"
"os"
"path/filepath"
"slices"
"testing"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/util/entitlements"
"github.com/stretchr/testify/require"
)
func TestEvaluateToExistingPath(t *testing.T) {
tempDir, err := osutil.GetLongPathName(t.TempDir())
require.NoError(t, err)
// Setup temporary directory structure for testing
existingFile := filepath.Join(tempDir, "existing_file")
require.NoError(t, os.WriteFile(existingFile, []byte("test"), 0644))
existingDir := filepath.Join(tempDir, "existing_dir")
require.NoError(t, os.Mkdir(existingDir, 0755))
symlinkToFile := filepath.Join(tempDir, "symlink_to_file")
require.NoError(t, os.Symlink(existingFile, symlinkToFile))
symlinkToDir := filepath.Join(tempDir, "symlink_to_dir")
require.NoError(t, os.Symlink(existingDir, symlinkToDir))
nonexistentPath := filepath.Join(tempDir, "nonexistent", "path", "file.txt")
tests := []struct {
name string
input string
expected string
expectErr bool
}{
{
name: "Existing file",
input: existingFile,
expected: existingFile,
expectErr: false,
},
{
name: "Existing directory",
input: existingDir,
expected: existingDir,
expectErr: false,
},
{
name: "Symlink to file",
input: symlinkToFile,
expected: existingFile,
expectErr: false,
},
{
name: "Symlink to directory",
input: symlinkToDir,
expected: existingDir,
expectErr: false,
},
{
name: "Non-existent path",
input: nonexistentPath,
expected: tempDir,
expectErr: false,
},
{
name: "Non-existent intermediate path",
input: filepath.Join(tempDir, "nonexistent", "file.txt"),
expected: tempDir,
expectErr: false,
},
{
name: "Root path",
input: "/",
expected: func() string {
root, _ := filepath.Abs("/")
return root
}(),
expectErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, err := evaluateToExistingPath(tt.input)
if tt.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, tt.expected, result)
}
})
}
}
func TestDedupePaths(t *testing.T) {
wd := osutil.GetWd()
tcases := []struct {
in map[string]struct{}
out map[string]struct{}
}{
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
out: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
},
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/c/d": {},
"/a/b/c/d/e": {},
"/a/b/../b/c": {},
},
out: map[string]struct{}{
"/a/b/c": {},
},
},
{
in: map[string]struct{}{
filepath.Join(wd, "a/b/c"): {},
filepath.Join(wd, "../aa"): {},
filepath.Join(wd, "a/b"): {},
filepath.Join(wd, "a/b/d"): {},
filepath.Join(wd, "../aa/b"): {},
filepath.Join(wd, "../../bb"): {},
},
out: map[string]struct{}{
"a/b": {},
"../aa": {},
filepath.Join(wd, "../../bb"): {},
},
},
}
for i, tc := range tcases {
t.Run(fmt.Sprintf("case%d", i), func(t *testing.T) {
out, err := dedupPaths(tc.in)
if err != nil {
require.NoError(t, err)
}
// convert to relative paths as that is shown to user
arr := make([]string, 0, len(out))
for k := range out {
arr = append(arr, k)
}
require.NoError(t, err)
arr = toRelativePaths(arr, wd)
m := make(map[string]struct{})
for _, v := range arr {
m[filepath.ToSlash(v)] = struct{}{}
}
o := make(map[string]struct{}, len(tc.out))
for k := range tc.out {
o[filepath.ToSlash(k)] = struct{}{}
}
require.Equal(t, o, m)
})
}
}
func TestValidateEntitlements(t *testing.T) {
dir1 := t.TempDir()
dir2 := t.TempDir()
// the paths returned by entitlements validation will have symlinks resolved
expDir1, err := filepath.EvalSymlinks(dir1)
require.NoError(t, err)
expDir2, err := filepath.EvalSymlinks(dir2)
require.NoError(t, err)
escapeLink := filepath.Join(dir1, "escape_link")
require.NoError(t, os.Symlink("../../aa", escapeLink))
wd, err := os.Getwd()
require.NoError(t, err)
expWd, err := filepath.EvalSymlinks(wd)
require.NoError(t, err)
tcases := []struct {
name string
conf EntitlementConf
opt build.Options
expected EntitlementConf
}{
{
name: "No entitlements",
opt: build.Options{
Inputs: build.Inputs{
ContextState: &llb.State{},
},
},
},
{
name: "NetworkHostMissing",
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
},
},
expected: EntitlementConf{
NetworkHost: true,
FSRead: []string{expWd},
},
},
{
name: "NetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
},
},
expected: EntitlementConf{
FSRead: []string{expWd},
},
},
{
name: "SecurityAndNetworkHostMissing",
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
entitlements.EntitlementSecurityInsecure,
},
},
expected: EntitlementConf{
NetworkHost: true,
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SecurityMissingAndNetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []entitlements.Entitlement{
entitlements.EntitlementNetworkHost,
entitlements.EntitlementSecurityInsecure,
},
},
expected: EntitlementConf{
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SSHMissing",
opt: build.Options{
SSHSpecs: []*pb.SSH{
{
ID: "test",
},
},
},
expected: EntitlementConf{
SSH: true,
FSRead: []string{expWd},
},
},
{
name: "ExportLocal",
opt: build.Options{
Exports: []client.ExportEntry{
{
Type: "local",
Attrs: map[string]string{
"dest": dir1,
},
},
{
Type: "local",
Attrs: map[string]string{
"dest": filepath.Join(dir1, "subdir"),
},
},
{
Type: "local",
Attrs: map[string]string{
"dest": dir2,
},
},
},
},
expected: EntitlementConf{
FSWrite: func() []string {
exp := []string{expDir1, expDir2}
slices.Sort(exp)
return exp
}(),
FSRead: []string{expWd},
},
},
{
name: "SecretFromSubFile",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: filepath.Join(dir1, "subfile"),
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
},
{
name: "SecretFromEscapeLink",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
expected: EntitlementConf{
FSRead: []string{filepath.Join(expDir1, "../..")},
},
},
{
name: "SecretFromEscapeLinkAllowRoot",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"/"},
},
expected: EntitlementConf{
FSRead: func() []string {
// on windows root (/) is only allowed if it is the same volume as wd
if filepath.VolumeName(wd) == filepath.VolumeName(escapeLink) {
return nil
}
// if not, then escapeLink is not allowed
exp, err := evaluateToExistingPath(escapeLink)
require.NoError(t, err)
exp, err = filepath.EvalSymlinks(exp)
require.NoError(t, err)
return []string{exp}
}(),
},
},
{
name: "SecretFromEscapeLinkAllowAny",
opt: build.Options{
SecretSpecs: []*pb.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"*"},
},
expected: EntitlementConf{},
},
}
for _, tc := range tcases {
t.Run(tc.name, func(t *testing.T) {
expected, err := tc.conf.Validate(map[string]build.Options{"test": tc.opt})
require.NoError(t, err)
require.Equal(t, tc.expected, expected)
})
}
}
func TestGroupSamePaths(t *testing.T) {
tests := []struct {
name string
in1 []string
in2 []string
expected1 []string
expected2 []string
expectedC []string
}{
{
name: "All common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/a", "/path/b", "/path/c"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
{
name: "No common paths",
in1: []string{"/path/a", "/path/b"},
in2: []string{"/path/c", "/path/d"},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{"/path/c", "/path/d"},
expectedC: []string{},
},
{
name: "Some common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/b", "/path/c", "/path/d"},
expected1: []string{"/path/a"},
expected2: []string{"/path/d"},
expectedC: []string{"/path/b", "/path/c"},
},
{
name: "Empty inputs",
in1: []string{},
in2: []string{},
expected1: []string{},
expected2: []string{},
expectedC: []string{},
},
{
name: "One empty input",
in1: []string{"/path/a", "/path/b"},
in2: []string{},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{},
expectedC: []string{},
},
{
name: "Unsorted inputs with common paths",
in1: []string{"/path/c", "/path/a", "/path/b"},
in2: []string{"/path/b", "/path/c", "/path/a"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
out1, out2, common := groupSamePaths(tt.in1, tt.in2)
require.Equal(t, tt.expected1, out1, "in1 should match expected1")
require.Equal(t, tt.expected2, out2, "in2 should match expected2")
require.Equal(t, tt.expectedC, common, "common should match expectedC")
})
}
}


@@ -56,7 +56,7 @@ func formatHCLError(err error, files []File) error {
break
}
}
src := errdefs.Source{
src := &errdefs.Source{
Info: &pb.SourceInfo{
Filename: d.Subject.Filename,
Data: dt,
@@ -72,7 +72,7 @@ func formatHCLError(err error, files []File) error {
func toErrRange(in *hcl.Range) *pb.Range {
return &pb.Range{
Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
End: pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
Start: &pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
End: &pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
}
}


@@ -49,18 +49,18 @@ func TestHCLBasic(t *testing.T) {
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 4, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "db")
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, c.Targets[2].Name, "cross")
require.Equal(t, "cross", c.Targets[2].Name)
require.Equal(t, 2, len(c.Targets[2].Platforms))
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
require.Equal(t, c.Targets[3].Name, "webapp-plus")
require.Equal(t, "webapp-plus", c.Targets[3].Name)
require.Equal(t, 1, len(c.Targets[3].Args))
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
}
@@ -109,18 +109,18 @@ func TestHCLBasicInJSON(t *testing.T) {
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 4, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "db")
require.Equal(t, "db", c.Targets[0].Name)
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, c.Targets[2].Name, "cross")
require.Equal(t, "cross", c.Targets[2].Name)
require.Equal(t, 2, len(c.Targets[2].Platforms))
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
require.Equal(t, c.Targets[3].Name, "webapp-plus")
require.Equal(t, "webapp-plus", c.Targets[3].Name)
require.Equal(t, 1, len(c.Targets[3].Args))
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
}
@@ -146,7 +146,7 @@ func TestHCLWithFunctions(t *testing.T) {
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
}
@@ -176,7 +176,7 @@ func TestHCLWithUserDefinedFunctions(t *testing.T) {
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
}
@@ -205,7 +205,7 @@ func TestHCLWithVariables(t *testing.T) {
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])
t.Setenv("BUILD_NUMBER", "456")
@@ -218,7 +218,7 @@ func TestHCLWithVariables(t *testing.T) {
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
}
@@ -241,7 +241,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)
t.Setenv("REPO", "docker/buildx")
@@ -250,7 +250,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "webapp")
require.Equal(t, "webapp", c.Targets[0].Name)
require.Equal(t, []string{"docker/buildx:v1"}, c.Targets[0].Tags)
}
@@ -279,7 +279,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])
@@ -292,7 +292,7 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
}
@@ -328,7 +328,7 @@ func TestHCLVarsWithVars(t *testing.T) {
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])
@@ -341,7 +341,7 @@ func TestHCLVarsWithVars(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
}
@@ -366,7 +366,7 @@ func TestHCLTypedVariables(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])
@@ -377,7 +377,7 @@ func TestHCLTypedVariables(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])
@@ -475,7 +475,7 @@ func TestHCLAttrs(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
// env does not apply if no variable
@@ -484,7 +484,7 @@ func TestHCLAttrs(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
// attr-multifile
}
@@ -592,7 +592,7 @@ func TestHCLAttrsCustomType(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
}
@@ -618,7 +618,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
t.Setenv("FOO", "ghi")
@@ -630,7 +630,7 @@ func TestHCLMultiFileAttrs(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
}
@@ -653,7 +653,7 @@ func TestHCLMultiFileGlobalAttrs(t *testing.T) {
}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
}
@@ -839,12 +839,12 @@ func TestHCLRenameMultiFile(t *testing.T) {
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "bar")
require.Equal(t, *c.Targets[0].Dockerfile, "x")
require.Equal(t, *c.Targets[0].Target, "z")
require.Equal(t, "bar", c.Targets[0].Name)
require.Equal(t, "x", *c.Targets[0].Dockerfile)
require.Equal(t, "z", *c.Targets[0].Target)
require.Equal(t, c.Targets[1].Name, "foo")
require.Equal(t, *c.Targets[1].Context, "y")
require.Equal(t, "foo", c.Targets[1].Name)
require.Equal(t, "y", *c.Targets[1].Context)
}
func TestHCLMatrixBasic(t *testing.T) {
@@ -862,10 +862,10 @@ func TestHCLMatrixBasic(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "x")
require.Equal(t, c.Targets[1].Name, "y")
require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
require.Equal(t, "x", c.Targets[0].Name)
require.Equal(t, "y", c.Targets[1].Name)
require.Equal(t, "x.Dockerfile", *c.Targets[0].Dockerfile)
require.Equal(t, "y.Dockerfile", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
@@ -948,9 +948,9 @@ func TestHCLMatrixMaps(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "aa")
require.Equal(t, "aa", c.Targets[0].Name)
require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
require.Equal(t, c.Targets[1].Name, "cc")
require.Equal(t, "cc", c.Targets[1].Name)
require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
}
@@ -1141,7 +1141,7 @@ func TestJSONAttributes(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
}
@@ -1166,7 +1166,7 @@ func TestJSONFunctions(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
}
@@ -1184,7 +1184,7 @@ func TestJSONInvalidFunctions(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
}
@@ -1212,7 +1212,7 @@ func TestHCLFunctionInAttr(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
}
@@ -1243,7 +1243,7 @@ services:
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
require.Equal(t, "dir", *c.Targets[0].Context)
@@ -1266,7 +1266,7 @@ func TestHCLBuiltinVars(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "app")
require.Equal(t, "app", c.Targets[0].Name)
require.Equal(t, "foo", *c.Targets[0].Context)
require.Equal(t, "test", *c.Targets[0].Dockerfile)
}
@@ -1332,17 +1332,17 @@ target "b" {
require.Equal(t, 4, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "metadata-a")
require.Equal(t, "metadata-a", c.Targets[0].Name)
require.Equal(t, []string{"app/a:1.0.0", "app/a:latest"}, c.Targets[0].Tags)
require.Equal(t, c.Targets[1].Name, "metadata-b")
require.Equal(t, "metadata-b", c.Targets[1].Name)
require.Equal(t, []string{"app/b:1.0.0", "app/b:latest"}, c.Targets[1].Tags)
require.Equal(t, c.Targets[2].Name, "a")
require.Equal(t, "a", c.Targets[2].Name)
require.Equal(t, ".", *c.Targets[2].Context)
require.Equal(t, "a", *c.Targets[2].Target)
require.Equal(t, c.Targets[3].Name, "b")
require.Equal(t, "b", c.Targets[3].Name)
require.Equal(t, ".", *c.Targets[3].Context)
require.Equal(t, "b", *c.Targets[3].Target)
}
@@ -1389,10 +1389,10 @@ target "two" {
require.Equal(t, 2, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "one")
require.Equal(t, "one", c.Targets[0].Name)
require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)
require.Equal(t, c.Targets[1].Name, "two")
require.Equal(t, "two", c.Targets[1].Name)
require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
}


@@ -28,10 +28,16 @@ type variable struct {
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Description string `json:"description,omitempty" hcl:"description,optional"`
Validations []*variableValidation `json:"validation,omitempty" hcl:"validation,block"`
Body hcl.Body `json:"-" hcl:",body"`
Remain hcl.Body `json:"-" hcl:",remain"`
}
type variableValidation struct {
Condition hcl.Expression `json:"condition" hcl:"condition"`
ErrorMessage hcl.Expression `json:"error_message" hcl:"error_message"`
}
type functionDef struct {
Name string `json:"-" hcl:"name,label"`
Params *hcl.Attribute `json:"params,omitempty" hcl:"params"`
@@ -541,6 +547,33 @@ func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
return names, nil
}
func (p *parser) validateVariables(vars map[string]*variable, ectx *hcl.EvalContext) hcl.Diagnostics {
var diags hcl.Diagnostics
for _, v := range vars {
for _, validation := range v.Validations {
condition, condDiags := validation.Condition.Value(ectx)
if condDiags.HasErrors() {
diags = append(diags, condDiags...)
continue
}
if !condition.True() {
message, msgDiags := validation.ErrorMessage.Value(ectx)
if msgDiags.HasErrors() {
diags = append(diags, msgDiags...)
continue
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Validation failed",
Detail: message.AsString(),
Subject: validation.Condition.Range().Ptr(),
})
}
}
}
return diags
}
type Variable struct {
Name string
Description string
@@ -686,6 +719,9 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
}
vars = append(vars, v)
}
if diags := p.validateVariables(p.vars, p.ectx); diags.HasErrors() {
return nil, diags
}
for k := range p.funcs {
if err := p.resolveFunction(p.ectx, k); err != nil {
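
A note on how these diagnostics reach callers: Parse returns them as ordinary hcl.Diagnostics, so a failed validation block behaves like any other parse error. The sketch below is illustrative only; the import path and names are assumptions, not part of this diff.

// Illustrative caller. Assumed imports:
//   "github.com/hashicorp/hcl/v2"
//   hclparser "github.com/docker/buildx/bake/hclparser"
func parseWithValidation(body hcl.Body, opt hclparser.Opt, dest interface{}) (*hclparser.ParseMeta, error) {
	meta, diags := hclparser.Parse(body, opt, dest)
	if diags.HasErrors() {
		// Each failed "validation" block surfaces as a diagnostic with
		// summary "Validation failed" and its error_message as the detail.
		return nil, diags
	}
	return meta, nil
}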


@@ -170,7 +170,6 @@ func indexOfFunc() function.Function {
}
}
return cty.NilVal, errors.New("item not found")
},
})
}


@@ -11,6 +11,7 @@ import (
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui"
@@ -19,6 +20,8 @@ import (
"github.com/pkg/errors"
)
const maxBakeDefinitionSize = 2 * 1024 * 1024 // 2 MB
type Input struct {
State *llb.State
URL string
@@ -106,7 +109,6 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
}
return nil, err
}, ch)
if err != nil {
return nil, nil, err
}
@@ -178,9 +180,9 @@ func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Refere
name := inp.URL
inp.URL = ""
if len(dt) > stat.Size() {
if stat.Size() > 1024*512 {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
if int64(len(dt)) > stat.Size {
if stat.Size > maxBakeDefinitionSize {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size (%s)", units.HumanSize(maxBakeDefinitionSize))
}
dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{


@@ -18,6 +18,7 @@ import (
"github.com/containerd/containerd/images"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
@@ -50,11 +51,12 @@ import (
fstypes "github.com/tonistiigi/fsutil/types"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
"google.golang.org/protobuf/proto"
)
const (
printFallbackImage = "docker/dockerfile:1.5@sha256:dbbd5e059e8a07ff7ea6233b213b36aa516b4c53c645f1817a4dd18b83cbea56"
printLintFallbackImage = "docker.io/docker/dockerfile-upstream:1.8.1@sha256:e87caa74dcb7d46cd820352bfea12591f3dba3ddc4285e19c7dcd13359f7cefd"
printFallbackImage = "docker/dockerfile:1.7.1@sha256:a57df69d0ea827fb7266491f2813635de6f17269be881f696fbfdf2d83dda33e"
printLintFallbackImage = "docker/dockerfile:1.8.1@sha256:e87caa74dcb7d46cd820352bfea12591f3dba3ddc4285e19c7dcd13359f7cefd"
)
type Options struct {
@@ -75,6 +77,8 @@ type Options struct {
NoCacheFilter []string
Platforms []specs.Platform
Pull bool
SecretSpecs []*controllerapi.Secret
SSHSpecs []*controllerapi.SSH
ShmSize opts.MemBytes
Tags []string
Target string
@@ -101,6 +105,9 @@ type Inputs struct {
ContextState *llb.State
DockerfileInline string
NamedContexts map[string]NamedContext
// DockerfileMappingSrc and DockerfileMappingDst are filled in by the builder.
DockerfileMappingSrc string
DockerfileMappingDst string
}
type NamedContext struct {
@@ -147,11 +154,11 @@ func toRepoOnly(in string) (string, error) {
return strings.Join(out, ","), nil
}
func Build(ctx context.Context, nodes []builder.Node, opt map[string]Options, docker *dockerutil.Client, configDir string, w progress.Writer) (resp map[string]*client.SolveResponse, err error) {
return BuildWithResultHandler(ctx, nodes, opt, docker, configDir, w, nil)
func Build(ctx context.Context, nodes []builder.Node, opts map[string]Options, docker *dockerutil.Client, cfg *confutil.Config, w progress.Writer) (resp map[string]*client.SolveResponse, err error) {
return BuildWithResultHandler(ctx, nodes, opts, docker, cfg, w, nil)
}
func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[string]Options, docker *dockerutil.Client, configDir string, w progress.Writer, resultHandleFunc func(driverIndex int, rCtx *ResultHandle)) (resp map[string]*client.SolveResponse, err error) {
func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[string]Options, docker *dockerutil.Client, cfg *confutil.Config, w progress.Writer, resultHandleFunc func(driverIndex int, rCtx *ResultHandle)) (resp map[string]*client.SolveResponse, err error) {
if len(nodes) == 0 {
return nil, errors.Errorf("driver required for build")
}
@@ -169,9 +176,9 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
}
if noMobyDriver != nil && !noDefaultLoad() && noCallFunc(opt) {
if noMobyDriver != nil && !noDefaultLoad() && noCallFunc(opts) {
var noOutputTargets []string
for name, opt := range opt {
for name, opt := range opts {
if noMobyDriver.Features(ctx)[driver.DefaultLoad] {
continue
}
@@ -192,7 +199,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
}
drivers, err := resolveDrivers(ctx, nodes, opt, w)
drivers, err := resolveDrivers(ctx, nodes, opts, w)
if err != nil {
return nil, err
}
@@ -209,7 +216,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
reqForNodes := make(map[string][]*reqForNode)
eg, ctx := errgroup.WithContext(ctx)
for k, opt := range opt {
for k, opt := range opts {
multiDriver := len(drivers[k]) > 1
hasMobyDriver := false
addGitAttrs, err := getGitAttributes(ctx, opt.Inputs.ContextPath, opt.Inputs.DockerfilePath)
@@ -229,11 +236,13 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
if err != nil {
return nil, err
}
so, release, err := toSolveOpt(ctx, np.Node(), multiDriver, opt, gatewayOpts, configDir, w, docker)
localOpt := opt
so, release, err := toSolveOpt(ctx, np.Node(), multiDriver, &localOpt, gatewayOpts, cfg, w, docker)
opts[k] = localOpt
if err != nil {
return nil, err
}
if err := saveLocalState(so, k, opt, np.Node(), configDir); err != nil {
if err := saveLocalState(so, k, opt, np.Node(), cfg); err != nil {
return nil, err
}
addGitAttrs(so)
@@ -269,7 +278,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
}
// validate that all links between targets use same drivers
for name := range opt {
for name := range opts {
dps := reqForNodes[name]
for i, dp := range dps {
so := reqForNodes[name][i].so
@@ -305,10 +314,10 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
var respMu sync.Mutex
results := waitmap.New()
multiTarget := len(opt) > 1
childTargets := calculateChildTargets(reqForNodes, opt)
multiTarget := len(opts) > 1
childTargets := calculateChildTargets(reqForNodes, opts)
for k, opt := range opt {
for k, opt := range opts {
err := func(k string) (err error) {
opt := opt
dps := drivers[k]
@@ -494,7 +503,9 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opt map[s
resultHandle, rr, err = NewResultHandle(ctx, cc, *so, "buildx", buildFunc, ch)
resultHandleFunc(dp.driverIndex, resultHandle)
} else {
span, ctx := tracing.StartSpan(ctx, "build")
rr, err = c.Build(ctx, *so, "buildx", buildFunc, ch)
tracing.FinishWithError(span, err)
}
if !so.Internal && desktop.BuildBackendEnabled() && node.Driver.HistoryAPISupported(ctx) {
if err != nil {
@@ -1131,7 +1142,7 @@ func ReadSourcePolicy() (*spb.Policy, error) {
var pol spb.Policy
if err := json.Unmarshal(data, &pol); err != nil {
// maybe it's in protobuf format?
e2 := pol.Unmarshal(data)
e2 := proto.Unmarshal(data, &pol)
if e2 != nil {
return nil, errors.Wrap(err, "failed to parse source policy")
}


@@ -23,7 +23,7 @@ func setupTest(tb testing.TB) {
gitutil.GitInit(c, tb)
df := []byte("FROM alpine:latest\n")
assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
require.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
gitutil.GitAdd(c, tb, "Dockerfile")
gitutil.GitCommit(c, tb, "initial commit")
@@ -32,7 +32,7 @@ func setupTest(tb testing.TB) {
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
require.NoError(t, err)
}
func TestGetGitAttributesBadGitRepo(t *testing.T) {
@@ -47,7 +47,7 @@ func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
addGitAttrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
assert.Empty(t, so.FrontendAttrs)
@@ -195,8 +195,8 @@ func TestLocalDirsSub(t *testing.T) {
gitutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
assert.NoError(t, os.MkdirAll("app", 0755))
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
require.NoError(t, os.MkdirAll("app", 0755))
require.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")


@@ -16,7 +16,7 @@ import (
type Container struct {
cancelOnce sync.Once
containerCancel func()
containerCancel func(error)
isUnavailable atomic.Bool
initStarted atomic.Bool
container gateway.Container
@@ -31,18 +31,18 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
errCh := make(chan error)
go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancel(ctx)
ctx, cancel := context.WithCancelCause(ctx)
go func() {
<-mainCtx.Done()
cancel()
cancel(errors.WithStack(context.Canceled))
}()
containerCfg, err := resultCtx.getContainerConfig(cfg)
if err != nil {
return nil, err
}
containerCtx, containerCancel := context.WithCancel(ctx)
defer containerCancel()
containerCtx, containerCancel := context.WithCancelCause(ctx)
defer containerCancel(errors.WithStack(context.Canceled))
bkContainer, err := c.NewContainer(containerCtx, containerCfg)
if err != nil {
return nil, err
@@ -83,7 +83,7 @@ func (c *Container) Cancel() {
c.markUnavailable()
c.cancelOnce.Do(func() {
if c.containerCancel != nil {
c.containerCancel()
c.containerCancel(errors.WithStack(context.Canceled))
}
close(c.releaseCh)
})


@@ -5,12 +5,13 @@ import (
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/confutil"
"github.com/moby/buildkit/client"
)
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, cfg *confutil.Config) error {
var err error
if so.Ref == "" {
if so.Ref == "" || opts.CallFunc != nil {
return nil
}
lp := opts.Inputs.ContextPath
@@ -30,7 +31,7 @@ func saveLocalState(so *client.SolveOpt, target string, opts Options, node build
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(configDir)
l, err := localstate.New(cfg)
if err != nil {
return err
}


@@ -35,7 +35,7 @@ import (
"github.com/tonistiigi/fsutil"
)
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Options, bopts gateway.BuildOpts, configDir string, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *Options, bopts gateway.BuildOpts, cfg *confutil.Config, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
@@ -263,7 +263,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
so.Exports = opt.Exports
so.Session = slices.Clone(opt.Session)
releaseLoad, err := loadInputs(ctx, nodeDriver, opt.Inputs, pw, &so)
releaseLoad, err := loadInputs(ctx, nodeDriver, &opt.Inputs, pw, &so)
if err != nil {
return nil, nil, err
}
@@ -271,7 +271,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + confutil.TryNodeIdentifier(configDir)
so.SharedKey += ":" + cfg.TryNodeIdentifier()
}
if opt.Pull {
@@ -356,7 +356,7 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt Op
return &so, releaseF, nil
}
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
@@ -368,6 +368,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw prog
dockerfileReader io.ReadCloser
dockerfileDir string
dockerfileName = inp.DockerfilePath
dockerfileSrcName = inp.DockerfilePath
toRemove []string
)
@@ -440,6 +441,11 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw prog
if inp.DockerfileInline != "" {
dockerfileReader = io.NopCloser(strings.NewReader(inp.DockerfileInline))
dockerfileSrcName = "inline"
} else if inp.DockerfilePath == "-" {
dockerfileSrcName = "stdin"
} else if inp.DockerfilePath == "" {
dockerfileSrcName = filepath.Join(inp.ContextPath, "Dockerfile")
}
if dockerfileReader != nil {
@@ -540,6 +546,9 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp Inputs, pw prog
_ = os.RemoveAll(dir)
}
}
inp.DockerfileMappingSrc = dockerfileSrcName
inp.DockerfileMappingDst = dockerfileName
return release, nil
}


@@ -15,6 +15,7 @@ import (
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
digest "github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
@@ -124,8 +125,8 @@ func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: a.Digest,
Size: a.Size_,
Digest: digest.Digest(a.Digest),
Size: a.Size,
MediaType: a.MediaType,
Annotations: a.Annotations,
}


@@ -10,7 +10,6 @@ import (
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func generateRandomData(size int) []byte {
@@ -57,7 +56,7 @@ func TestSyncMultiReaderParallel(t *testing.T) {
return
}
require.NoError(t, err, "Reader %d error", readerId)
assert.NoError(t, err, "Reader %d error", readerId)
if mathrand.Intn(1000) == 0 { //nolint:gosec
t.Logf("Reader %d closing", readerId)


@@ -82,7 +82,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
var respHandle *ResultHandle
go func() {
defer cancel(context.Canceled) // ensure no dangling processes
defer func() { cancel(errors.WithStack(context.Canceled)) }() // ensure no dangling processes
var res *gateway.Result
var err error
@@ -181,7 +181,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
case <-respHandle.done:
case <-ctx.Done():
}
return nil, ctx.Err()
return nil, context.Cause(ctx)
}, nil)
if respHandle != nil {
return
@@ -295,14 +295,14 @@ func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(r.res, *cfg)
ccfg, err := containerConfigFromResult(r.res, cfg)
if err != nil {
return containerCfg, err
}
containerCfg = *ccfg
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
ccfg, err := containerConfigFromError(r.solveErr, *cfg)
ccfg, err := containerConfigFromError(r.solveErr, cfg)
if err != nil {
return containerCfg, errors.Wrapf(err, "no result nor error is available")
}
@@ -315,19 +315,19 @@ func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin i
processCfg := newStartRequest(stdin, stdout, stderr)
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
if err := populateProcessConfigFromResult(&processCfg, r.res, cfg); err != nil {
return processCfg, err
}
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
if err := populateProcessConfigFromError(&processCfg, r.solveErr, cfg); err != nil {
return processCfg, err
}
}
return processCfg, nil
}
func containerConfigFromResult(res *gateway.Result, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
@@ -352,7 +352,7 @@ func containerConfigFromResult(res *gateway.Result, cfg controllerapi.InvokeConf
}, nil
}
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *controllerapi.InvokeConfig) error {
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
var img *specs.Image
if len(imgData) > 0 {
@@ -403,7 +403,7 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
return nil
}
func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
@@ -431,7 +431,7 @@ func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.In
}, nil
}
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err

View File

@@ -7,12 +7,15 @@ import (
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
)
const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB
func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
c, err := driver.Boot(ctx, ctx, d, pw)
if err != nil {
@@ -43,8 +46,8 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
if err != nil {
return nil, err
}
if stat.Size() > 512*1024 {
return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
if stat.Size > maxDockerfileSize {
return nil, errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", url, units.HumanSize(maxDockerfileSize))
}
dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
@@ -63,7 +66,6 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
out = dir
return nil, nil
}, ch)
if err != nil {
return "", err
}
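For reference, a small standalone sketch of the size guard introduced above, using github.com/docker/go-units to render the limit. The 2 MB constant mirrors maxDockerfileSize; the helper name and URL are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/docker/go-units"
)

const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB, same limit as above

// checkSize rejects remote Dockerfiles larger than the allowed maximum and
// reports the limit in human-readable form via units.HumanSize.
func checkSize(name string, size int64) error {
	if size > maxDockerfileSize {
		return fmt.Errorf("Dockerfile %s bigger than allowed max size (%s)", name, units.HumanSize(maxDockerfileSize))
	}
	return nil
}

func main() {
	fmt.Println(checkSize("https://example.com/Dockerfile", 3*1024*1024))
}
```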

View File

@@ -138,7 +138,7 @@ func TestToBuildkitExtraHosts(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.Nil(t, actualErr)
require.NoError(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)

View File

@@ -288,7 +288,15 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return nil, err
}
builders := make([]*Builder, len(storeng))
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
builders := make([]*Builder, len(storeng), len(storeng)+len(contexts))
seen := make(map[string]struct{})
for i, ng := range storeng {
b, err := New(dockerCli,
@@ -303,14 +311,6 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
seen[b.NodeGroup.Name] = struct{}{}
}
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
for _, c := range contexts {
// if a context has the same name as an instance from the store, do not
// add it to the builders list. An instance from the store takes
@@ -435,7 +435,16 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
return nil, err
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.NewConfig(dockerCli).BuildKitConfigFile(); ok {
buildkitdConfigFile = f
}
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts, buildkitdConfigFile)
if err != nil {
return nil, err
}
@@ -496,15 +505,6 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
setEp = false
}
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
buildkitdConfigFile = f
}
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
@@ -522,8 +522,9 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
return nil, err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
cancelCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
@@ -584,7 +585,7 @@ func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Leav
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
@@ -641,7 +642,7 @@ func validateBuildkitEndpoint(ep string) (string, error) {
}
// parseBuildkitdFlags parses buildkit flags
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string, buildkitdConfigFile string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
@@ -663,10 +664,27 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
}
}
var hasNetworkHostEntitlementInConf bool
if buildkitdConfigFile != "" {
btoml, err := confutil.LoadConfigTree(buildkitdConfigFile)
if err != nil {
return nil, err
} else if btoml != nil {
if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
for _, e := range ies.([]string) {
if e == "network.host" {
hasNetworkHostEntitlementInConf = true
break
}
}
}
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
} else if len(allowInsecureEntitlements) == 0 && !hasNetworkHostEntitlementInConf && (driver == "kubernetes" || driver == "docker-container") {
// set network.host entitlement if user does not provide any as
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
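The hunk above consults the buildkitd daemon config so that the default `--allow-insecure-entitlement=network.host` flag is not appended when the config file already grants it. A rough standalone sketch of reading that field follows; it uses BurntSushi/toml purely for illustration, whereas buildx itself goes through confutil.LoadConfigTree:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// buildkitdConfig models only the field we care about from buildkitd.toml.
type buildkitdConfig struct {
	InsecureEntitlements []string `toml:"insecure-entitlements"`
}

// hasNetworkHostEntitlement reports whether the daemon config already
// allows the network.host entitlement.
func hasNetworkHostEntitlement(data []byte) (bool, error) {
	var cfg buildkitdConfig
	if err := toml.Unmarshal(data, &cfg); err != nil {
		return false, err
	}
	for _, e := range cfg.InsecureEntitlements {
		if e == "network.host" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	conf := []byte(`insecure-entitlements = [ "network.host", "security.insecure" ]`)
	ok, err := hasNetworkHostEntitlement(conf)
	fmt.Println(ok, err) // true <nil>
}
```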

View File

@@ -1,6 +1,8 @@
package builder
import (
"os"
"path"
"testing"
"github.com/stretchr/testify/assert"
@@ -17,21 +19,35 @@ func TestCsvToMap(t *testing.T) {
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Equal(t, "key=foo,value=bar;key=foo2,value=bar2", r["tolerations"])
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Equal(t, "1", r["replicas"])
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
require.Equal(t, "default", r["namespace"])
}
func TestParseBuildkitdFlags(t *testing.T) {
buildkitdConf := `
# debug enables additional debug logging
debug = true
# insecure-entitlements allows insecure entitlements, disabled by default.
insecure-entitlements = [ "network.host", "security.insecure" ]
[log]
# log formatter: json or text
format = "text"
`
dirConf := t.TempDir()
buildkitdConfPath := path.Join(dirConf, "buildkitd-conf.toml")
require.NoError(t, os.WriteFile(buildkitdConfPath, []byte(buildkitdConf), 0644))
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
buildkitdConfigFile string
expected []string
wantErr bool
}{
@@ -40,6 +56,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@@ -50,6 +67,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"kubernetes",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@@ -60,6 +78,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"remote",
nil,
"",
nil,
false,
},
@@ -68,6 +87,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=security.insecure",
},
@@ -78,6 +98,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
@@ -89,6 +110,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@@ -99,6 +121,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@@ -109,17 +132,28 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with buildkitd conf setting network.host entitlement",
"",
"docker-container",
nil,
buildkitdConfPath,
nil,
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
"",
nil,
true,
},
@@ -127,7 +161,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
if tt.wantErr {
require.Error(t, err)
return

cmd/buildx/debug.go (new file, 75 lines)
View File

@@ -0,0 +1,75 @@
package main
import (
"context"
"os"
"runtime"
"runtime/pprof"
"github.com/moby/buildkit/util/bklog"
"github.com/sirupsen/logrus"
)
func setupDebugProfiles(ctx context.Context) (stop func()) {
var stopFuncs []func()
if fn := setupCPUProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
if fn := setupHeapProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
return func() {
for _, fn := range stopFuncs {
fn()
}
}
}
func setupCPUProfile(ctx context.Context) (stop func()) {
if cpuProfile := os.Getenv("BUILDX_CPU_PROFILE"); cpuProfile != "" {
f, err := os.Create(cpuProfile)
if err != nil {
bklog.G(ctx).Warn("could not create cpu profile", logrus.WithError(err))
return nil
}
if err := pprof.StartCPUProfile(f); err != nil {
bklog.G(ctx).Warn("could not start cpu profile", logrus.WithError(err))
_ = f.Close()
return nil
}
return func() {
pprof.StopCPUProfile()
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for cpu profile", logrus.WithError(err))
}
}
}
return nil
}
func setupHeapProfile(ctx context.Context) (stop func()) {
if heapProfile := os.Getenv("BUILDX_MEM_PROFILE"); heapProfile != "" {
// Memory profile is only created on stop.
return func() {
f, err := os.Create(heapProfile)
if err != nil {
bklog.G(ctx).Warn("could not create memory profile", logrus.WithError(err))
return
}
// get up-to-date statistics
runtime.GC()
if err := pprof.WriteHeapProfile(f); err != nil {
bklog.G(ctx).Warn("could not write memory profile", logrus.WithError(err))
}
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for memory profile", logrus.WithError(err))
}
}
}
return nil
}
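The new file gates CPU and heap profiling behind the BUILDX_CPU_PROFILE and BUILDX_MEM_PROFILE environment variables: the CPU profile streams for the whole run, the heap profile is written once at shutdown after a GC. A condensed standalone sketch of the same env-gated pprof pattern (env var names and file paths here are placeholders, not the buildx ones):

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// CPU profile: started up front, flushed when main returns.
	if path := os.Getenv("CPU_PROFILE"); path != "" {
		if f, err := os.Create(path); err == nil {
			_ = pprof.StartCPUProfile(f)
			defer func() {
				pprof.StopCPUProfile()
				_ = f.Close()
			}()
		}
	}

	doWork()

	// Heap profile: written once at the end, after a GC for fresh stats.
	if path := os.Getenv("MEM_PROFILE"); path != "" {
		if f, err := os.Create(path); err == nil {
			runtime.GC()
			_ = pprof.WriteHeapProfile(f)
			_ = f.Close()
		}
	}
}

func doWork() {
	// placeholder for the real workload
	_ = make([]byte, 1<<20)
}
```

The resulting files can then be inspected with the standard `go tool pprof` workflow.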

View File

@@ -6,6 +6,7 @@ import (
"os"
"github.com/docker/buildx/commands"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
@@ -16,6 +17,7 @@ import (
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/stack"
"github.com/pkg/errors"
"go.opentelemetry.io/otel"
//nolint:staticcheck // vendored dependencies may still use this
@@ -27,6 +29,9 @@ import (
_ "github.com/docker/buildx/driver/docker-container"
_ "github.com/docker/buildx/driver/kubernetes"
_ "github.com/docker/buildx/driver/remote"
// Use custom grpc codec to utilize vtprotobuf
_ "github.com/moby/buildkit/util/grpcutil/encoding/proto"
)
func init() {
@@ -70,6 +75,16 @@ func runPlugin(cmd *command.DockerCli) error {
})
}
func run(cmd *command.DockerCli) error {
stopProfiles := setupDebugProfiles(context.TODO())
defer stopProfiles()
if plugin.RunningStandalone() {
return runStandalone(cmd)
}
return runPlugin(cmd)
}
func main() {
cmd, err := command.NewDockerCli()
if err != nil {
@@ -77,15 +92,11 @@ func main() {
os.Exit(1)
}
if plugin.RunningStandalone() {
err = runStandalone(cmd)
} else {
err = runPlugin(cmd)
}
if err == nil {
if err = run(cmd); err == nil {
return
}
// Check the error from the run function above.
if sterr, ok := err.(cli.StatusError); ok {
if sterr.Status != "" {
fmt.Fprintln(cmd.Err(), sterr.Status)
@@ -106,8 +117,15 @@ func main() {
} else {
fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
}
if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
var ebr *desktop.ErrorWithBuildRef
if errors.As(err, &ebr) {
ebr.Print(cmd.Err())
} else {
var be *controllererrors.BuildError
if errors.As(err, &be) {
be.PrintBuildDetails(cmd.Err())
}
}
os.Exit(1)
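The plain type assertion on the error is replaced with errors.As so that values wrapped further down the chain (for example via errors.Wrap) are still recognized. A minimal sketch of the difference, using a made-up error type rather than desktop.ErrorWithBuildRef:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// refError is a stand-in for a custom error type carried through wrapping.
type refError struct{ Ref string }

func (e *refError) Error() string { return "build failed, ref " + e.Ref }

func main() {
	err := errors.Wrap(&refError{Ref: "abc123"}, "failed to build")

	// A direct type assertion misses the wrapped value...
	if _, ok := err.(*refError); !ok {
		fmt.Println("type assertion: not found")
	}

	// ...while errors.As walks the wrap chain and finds it.
	var re *refError
	if errors.As(err, &re) {
		fmt.Println("errors.As:", re.Ref)
	}
}
```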

View File

@@ -107,16 +107,23 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return errors.Wrapf(err, "failed to get current working directory")
}
// filesystem access under the current working directory is allowed by default
ent.FSRead = append(ent.FSRead, wd)
ent.FSWrite = append(ent.FSWrite, wd)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
ctx2, cancel := context.WithCancelCause(context.TODO())
defer cancel(errors.WithStack(context.Canceled))
var nodes []builder.Node
var progressConsoleDesc, progressTextDesc string
// instance only needed for reading remote bake files or building
var driverType string
if url != "" || !in.printOnly {
if url != "" || !(in.printOnly || in.listTargets || in.listVars) {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithContextPathHash(contextPathHash),
@@ -250,7 +257,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if err != nil {
return err
}
if err := exp.Prompt(ctx, &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
if err := exp.Prompt(ctx, url != "", &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
return err
}
if printer.IsDone() {
@@ -265,7 +272,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
}
done := timeBuildCommand(mp, attributes)
resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), printer)
if err := printer.Wait(); retErr == nil {
retErr = err
}
@@ -335,7 +342,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
if callFormatJSON {
jsonResults[name] = map[string]any{}
buf := &bytes.Buffer{}
if code, err := printResult(buf, pf, res); err != nil {
if code, err := printResult(buf, pf, res, name, &req.Inputs); err != nil {
jsonResults[name]["error"] = err.Error()
exitCode = 1
} else if code != 0 && exitCode == 0 {
@@ -361,7 +368,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
}
fmt.Fprintln(dockerCli.Out())
if code, err := printResult(dockerCli.Out(), pf, res); err != nil {
if code, err := printResult(dockerCli.Out(), pf, res, name, &req.Inputs); err != nil {
fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
exitCode = 1
} else if code != 0 && exitCode == 0 {
@@ -464,13 +471,19 @@ func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string
groupRef := identity.NewID()
refs := make([]string, 0, len(bo))
for k, b := range bo {
if b.CallFunc != nil {
continue
}
b.Ref = identity.NewID()
b.GroupRef = groupRef
b.ProvenanceResponseMode = prm
refs = append(refs, b.Ref)
bo[k] = b
}
l, err := localstate.New(confutil.ConfigDir(dockerCli))
if len(refs) == 0 {
return nil
}
l, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
@@ -621,7 +634,7 @@ func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext str
commandNameAttribute.String("bake"),
attribute.Stringer(string(commandOptionsHash), &bakeOptionsHash{
bakeOptions: options,
configDir: confutil.ConfigDir(dockerCli),
cfg: confutil.NewConfig(dockerCli),
url: url,
cmdContext: cmdContext,
targets: targets,
@@ -633,7 +646,7 @@ func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext str
type bakeOptionsHash struct {
*bakeOptions
configDir string
cfg *confutil.Config
url string
cmdContext string
targets []string
@@ -657,7 +670,7 @@ func (o *bakeOptionsHash) String() string {
joinedFiles := strings.Join(files, ",")
joinedTargets := strings.Join(targets, ",")
salt := confutil.TryNodeIdentifier(o.configDir)
salt := o.cfg.TryNodeIdentifier()
h := sha256.New()
for _, s := range []string{url, cmdContext, joinedFiles, joinedTargets, salt} {

View File

@@ -49,6 +49,7 @@ import (
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
"github.com/moby/buildkit/solver/errdefs"
solverpb "github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/morikuni/aec"
@@ -60,6 +61,7 @@ import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"google.golang.org/grpc/codes"
"google.golang.org/protobuf/proto"
)
type buildOptions struct {
@@ -236,7 +238,7 @@ func buildMetricAttributes(dockerCli command.Cli, driverType string, options *bu
commandNameAttribute.String("build"),
attribute.Stringer(string(commandOptionsHash), &buildOptionsHash{
buildOptions: options,
configDir: confutil.ConfigDir(dockerCli),
cfg: confutil.NewConfig(dockerCli),
}),
driverNameAttribute.String(options.builder),
driverTypeAttribute.String(driverType),
@@ -248,7 +250,7 @@ func buildMetricAttributes(dockerCli command.Cli, driverType string, options *bu
// the fmt.Stringer interface.
type buildOptionsHash struct {
*buildOptions
configDir string
cfg *confutil.Config
result string
resultOnce sync.Once
}
@@ -265,7 +267,7 @@ func (o *buildOptionsHash) String() string {
if contextPath != "-" && osutil.IsLocalDir(contextPath) {
contextPath = osutil.ToAbs(contextPath)
}
salt := confutil.TryNodeIdentifier(o.configDir)
salt := o.cfg.TryNodeIdentifier()
h := sha256.New()
for _, s := range []string{target, contextPath, dockerfile, salt} {
@@ -323,8 +325,8 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
}
attributes := buildMetricAttributes(dockerCli, driverType, &options)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
ctx2, cancel := context.WithCancelCause(context.TODO())
defer func() { cancel(errors.WithStack(context.Canceled)) }()
progressMode, err := options.toDisplayMode()
if err != nil {
return err
@@ -346,11 +348,12 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
done := timeBuildCommand(mp, attributes)
var resp *client.SolveResponse
var inputs *build.Inputs
var retErr error
if confutil.IsExperimental() {
resp, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
resp, inputs, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
} else {
resp, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
resp, inputs, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
}
if err := printer.Wait(); retErr == nil {
@@ -387,7 +390,7 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
}
}
if opts.CallFunc != nil {
if exitcode, err := printResult(dockerCli.Out(), opts.CallFunc, resp.ExporterResponse); err != nil {
if exitcode, err := printResult(dockerCli.Out(), opts.CallFunc, resp.ExporterResponse, options.target, inputs); err != nil {
return err
} else if exitcode != 0 {
os.Exit(exitcode)
@@ -405,22 +408,22 @@ func getImageID(resp map[string]string) string {
return dgst
}
func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
resp, res, err := cbuild.RunBuild(ctx, dockerCli, *opts, dockerCli.In(), printer, false)
func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
resp, res, dfmap, err := cbuild.RunBuild(ctx, dockerCli, opts, dockerCli.In(), printer, false)
if res != nil {
res.Done()
}
return resp, err
return resp, dfmap, err
}
func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
if options.invokeConfig != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
// stdin must be usable for monitor
return nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
return nil, nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
}
c, err := controller.NewController(ctx, options.ControlOptions, dockerCli, printer)
if err != nil {
return nil, err
return nil, nil, err
}
defer func() {
if err := c.Close(); err != nil {
@@ -432,12 +435,13 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
// so we need to resolve paths to absolute ones in the client.
opts, err = controllerapi.ResolveOptionPaths(opts)
if err != nil {
return nil, err
return nil, nil, err
}
var ref string
var retErr error
var resp *client.SolveResponse
var inputs *build.Inputs
var f *ioset.SingleForwarder
var pr io.ReadCloser
@@ -455,7 +459,7 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
})
}
ref, resp, err = c.Build(ctx, *opts, pr, printer)
ref, resp, inputs, err = c.Build(ctx, opts, pr, printer)
if err != nil {
var be *controllererrors.BuildError
if errors.As(err, &be) {
@@ -463,7 +467,7 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
retErr = err
// We can proceed to monitor
} else {
return nil, errors.Wrapf(err, "failed to build")
return nil, nil, errors.Wrapf(err, "failed to build")
}
}
@@ -504,7 +508,7 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
}
}
return resp, retErr
return resp, inputs, retErr
}
func printError(err error, printer *progress.Printer) error {
@@ -759,9 +763,12 @@ func decodeExporterResponse(exporterResponse map[string]string) map[string]inter
}
var raw map[string]interface{}
if err = json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
var rawList []map[string]interface{}
if err = json.Unmarshal(dt, &rawList); err != nil || len(rawList) == 0 {
out[k] = v
continue
}
}
out[k] = json.RawMessage(dt)
}
return out
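With this change decodeExporterResponse also accepts exporter values that decode to a JSON array rather than an object. A small standalone sketch of the object-or-array probe; the helper name is made up and the surrounding base64 handling of decodeExporterResponse is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// passthroughOrRaw keeps v as an opaque string unless it parses as a
// non-empty JSON object or array, in which case the raw JSON is preserved.
func passthroughOrRaw(v string) interface{} {
	dt := []byte(v)
	var obj map[string]interface{}
	if err := json.Unmarshal(dt, &obj); err == nil && len(obj) > 0 {
		return json.RawMessage(dt)
	}
	var list []map[string]interface{}
	if err := json.Unmarshal(dt, &list); err == nil && len(list) > 0 {
		return json.RawMessage(dt)
	}
	return v
}

func main() {
	fmt.Printf("%s\n", passthroughOrRaw(`{"digest":"sha256:abc"}`)) // kept as raw JSON
	fmt.Printf("%s\n", passthroughOrRaw(`[{"name":"img"}]`))        // arrays now accepted too
	fmt.Printf("%s\n", passthroughOrRaw(`plain-text`))              // left untouched
}
```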
@@ -878,11 +885,10 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
src.Print(w)
}
fmt.Fprintf(w, "\n")
}
}
func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string) (int, error) {
func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string, target string, inp *build.Inputs) (int, error) {
switch f.Name {
case "outline":
return 0, printValue(w, outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
@@ -908,8 +914,27 @@ func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string)
}
fmt.Fprintf(w, "Check complete, %s\n", warningCountMsg)
}
sourceInfoMap := func(sourceInfo *solverpb.SourceInfo) *solverpb.SourceInfo {
if sourceInfo == nil || inp == nil {
return sourceInfo
}
if target == "" {
target = "default"
}
err := printValue(w, printLintViolationsWrapper, lint.SubrequestLintDefinition.Version, f.Format, res)
if inp.DockerfileMappingSrc != "" {
newSourceInfo := proto.Clone(sourceInfo).(*solverpb.SourceInfo)
newSourceInfo.Filename = inp.DockerfileMappingSrc
return newSourceInfo
}
return sourceInfo
}
printLintWarnings := func(dt []byte, w io.Writer) error {
return lintResults.PrintTo(w, sourceInfoMap)
}
err := printValue(w, printLintWarnings, lint.SubrequestLintDefinition.Version, f.Format, res)
if err != nil {
return 0, err
}
@@ -924,13 +949,8 @@ func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string)
if f.Format != "json" && len(lintResults.Warnings) > 0 {
fmt.Fprintln(w)
}
lintBuf := bytes.NewBuffer([]byte(lintResults.Error.Message + "\n"))
sourceInfo := lintResults.Sources[lintResults.Error.Location.SourceIndex]
source := errdefs.Source{
Info: sourceInfo,
Ranges: lintResults.Error.Location.Ranges,
}
source.Print(lintBuf)
lintBuf := bytes.NewBuffer(nil)
lintResults.PrintErrorTo(lintBuf, sourceInfoMap)
return 0, errors.New(lintBuf.String())
} else if len(lintResults.Warnings) == 0 && f.Format != "json" {
fmt.Fprintln(w, "Check complete, no warnings found.")
@@ -968,11 +988,6 @@ func printValue(w io.Writer, printer callFunc, version string, format string, re
return printer([]byte(res["result.json"]), w)
}
// FIXME: remove once https://github.com/docker/buildx/pull/2672 is sorted
func printLintViolationsWrapper(dt []byte, w io.Writer) error {
return lint.PrintLintViolations(dt, w, nil)
}
type invokeConfig struct {
controllerapi.InvokeConfig
onFlag string
@@ -1000,7 +1015,7 @@ func (cfg *invokeConfig) runDebug(ctx context.Context, ref string, options *cont
return nil, errors.Errorf("failed to configure terminal: %v", err)
}
defer con.Reset()
return monitor.RunMonitor(ctx, ref, options, cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
return monitor.RunMonitor(ctx, ref, options, &cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
}
func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {

View File

@@ -64,7 +64,7 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
_, err = monitor.RunMonitor(ctx, "", nil, &controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()

View File

@@ -42,7 +42,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
return errors.Errorf("can't push with no tags specified, please set --tag or --dry-run")
}
fileArgs := make([]string, len(in.files))
fileArgs := make([]string, len(in.files), len(in.files)+len(args))
for i, f := range in.files {
dt, err := os.ReadFile(f)
if err != nil {
@@ -173,8 +173,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
// new resolver because we need new auth
r = imagetools.New(imageopt)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
ctx2, cancel := context.WithCancelCause(context.TODO())
defer func() { cancel(errors.WithStack(context.Canceled)) }()
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
if err != nil {
return err

View File

@@ -10,11 +10,12 @@ type RootOptions struct {
Builder *string
}
func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
RunE: rootcmd.RunE,
}
cmd.AddCommand(

View File

@@ -17,6 +17,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -34,8 +35,9 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if in.bootstrap {
@@ -122,8 +124,20 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
}
if rule.KeepBytes > 0 {
fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
if rule.ReservedSpace > 0 {
fmt.Fprintf(w, "\tReserved Space:\t%s\n", units.BytesSize(float64(rule.ReservedSpace)))
}
if rule.MaxUsedSpace > 0 {
fmt.Fprintf(w, "\tMax Used Space:\t%s\n", units.BytesSize(float64(rule.MaxUsedSpace)))
}
if rule.MinFreeSpace > 0 {
fmt.Fprintf(w, "\tMin Free Space:\t%s\n", units.BytesSize(float64(rule.MinFreeSpace)))
}
}
for f, dt := range nodes[i].Files {
fmt.Fprintf(w, "File#%s:\n", f)
for _, line := range strings.Split(string(dt), "\n") {
fmt.Fprintf(w, "\t> %s\n", line)
}
}
}

View File

@@ -8,6 +8,7 @@ import (
"strings"
"time"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
@@ -17,6 +18,7 @@ import (
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
@@ -36,6 +38,7 @@ const (
type lsOptions struct {
format string
noTrunc bool
}
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
@@ -55,8 +58,9 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {
@@ -72,7 +76,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
if hasErrors, err := lsPrint(dockerCli, current, builders, in); err != nil {
return err
} else if hasErrors {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
@@ -107,6 +111,7 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
@@ -114,14 +119,15 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if format == formatter.TableFormatKey {
format = lsDefaultTableFormat
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, in lsOptions) (hasErrors bool, _ error) {
if in.format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(format),
Format: formatter.Format(in.format),
Trunc: !in.noTrunc,
}
sort.SliceStable(builders, func(i, j int) bool {
@@ -138,11 +144,12 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
format: ctx.Format,
}); err != nil {
return err
}
@@ -160,6 +167,7 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
}
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
@@ -196,6 +204,7 @@ type lsContext struct {
Builder *lsBuilder
format formatter.Format
trunc bool
node builder.Node
}
@@ -261,7 +270,11 @@ func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
pfs := platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms)
if c.trunc && c.format.IsTable() {
return truncPlatforms(pfs, 4).String()
}
return strings.Join(pfs, ", ")
}
func (c *lsContext) Error() string {
@@ -272,3 +285,133 @@ func (c *lsContext) Error() string {
}
return ""
}
var truncMajorPlatforms = []string{
"linux/amd64",
"linux/arm64",
"linux/arm",
"linux/ppc64le",
"linux/s390x",
"linux/riscv64",
"linux/mips64",
}
type truncatedPlatforms struct {
res map[string][]string
input []string
max int
}
func (tp truncatedPlatforms) List() map[string][]string {
return tp.res
}
func (tp truncatedPlatforms) String() string {
var out []string
var count int
var keys []string
for k := range tp.res {
keys = append(keys, k)
}
sort.Strings(keys)
seen := make(map[string]struct{})
for _, mpf := range truncMajorPlatforms {
if tpf, ok := tp.res[mpf]; ok {
seen[mpf] = struct{}{}
if len(tpf) == 1 {
out = append(out, tpf[0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tpf {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tpf)))
count += len(tpf)
}
}
}
for _, mpf := range keys {
if len(out) >= tp.max {
break
}
if _, ok := seen[mpf]; ok {
continue
}
if len(tp.res[mpf]) == 1 {
out = append(out, tp.res[mpf][0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tp.res[mpf] {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tp.res[mpf])))
count += len(tp.res[mpf])
}
}
left := len(tp.input) - count
if left > 0 {
out = append(out, fmt.Sprintf("(%d more)", left))
}
return strings.Join(out, ", ")
}
func truncPlatforms(pfs []string, max int) truncatedPlatforms {
res := make(map[string][]string)
for _, mpf := range truncMajorPlatforms {
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
if pp.OS+"/"+pp.Architecture == mpf {
res[mpf] = append(res[mpf], pf)
}
}
}
left := make(map[string][]string)
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
ppf := strings.TrimSuffix(pp.OS+"/"+pp.Architecture, "*")
if _, ok := res[ppf]; !ok {
left[ppf] = append(left[ppf], pf)
}
}
for k, v := range left {
res[k] = v
}
return truncatedPlatforms{
res: res,
input: pfs,
max: max,
}
}
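A quick sketch of how the new truncation reads for a long platform list; it assumes it sits next to the unexported truncPlatforms helper in package commands, and the input and expected output mirror the first case in the new commands/ls_test.go below:

```go
package commands

import "fmt"

// Example_truncPlatforms renders a long platform list truncated to four groups.
func Example_truncPlatforms() {
	pfs := []string{
		"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64",
		"linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le",
		"linux/mips64", "linux/arm/v7", "linux/arm/v6",
	}
	fmt.Println(truncPlatforms(pfs, 4).String())
	// Output: linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)
}
```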

commands/ls_test.go (new file, 174 lines)
View File

@@ -0,0 +1,174 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestTruncPlatforms(t *testing.T) {
tests := []struct {
name string
platforms []string
max int
expectedList map[string][]string
expectedOut string
}{
{
name: "arm64 preferred and emulated",
platforms: []string{"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64*",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "riscv64 preferred only",
platforms: []string{"linux/riscv64*"},
max: 4,
expectedList: map[string][]string{
"linux/riscv64": {
"linux/riscv64*",
},
},
expectedOut: "linux/riscv64*",
},
{
name: "amd64 no preferred and emulated",
platforms: []string{"linux/amd64", "linux/amd64/v2", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
}},
expectedOut: "linux/amd64 (+3), linux/arm64, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "amd64 no preferred",
platforms: []string{"linux/amd64", "linux/386"},
max: 4,
expectedList: map[string][]string{
"linux/386": {
"linux/386",
},
"linux/amd64": {
"linux/amd64",
},
},
expectedOut: "linux/amd64, linux/386",
},
{
name: "arm64 no preferred",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
},
expectedOut: "linux/arm64, linux/arm (+2)",
},
{
name: "all preferred",
platforms: []string{"darwin/arm64*", "linux/arm64*", "linux/arm/v5*", "linux/arm/v6*", "linux/arm/v7*", "windows/arm64*"},
max: 4,
expectedList: map[string][]string{
"darwin/arm64": {
"darwin/arm64*",
},
"linux/arm": {
"linux/arm/v5*",
"linux/arm/v6*",
"linux/arm/v7*",
},
"linux/arm64": {
"linux/arm64*",
},
"windows/arm64": {
"windows/arm64*",
},
},
expectedOut: "linux/arm64*, linux/arm* (+3), darwin/arm64*, windows/arm64*",
},
{
name: "no major preferred",
platforms: []string{"linux/amd64/v2*", "linux/arm/v6*", "linux/mips64le*", "linux/amd64", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64", "linux/arm/v7"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64/v2*",
"linux/amd64",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v6*",
"linux/arm/v7",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64* (+3), linux/arm64, linux/arm* (+2), linux/ppc64le, (5 more)",
},
{
name: "no major with multiple variants",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6", "linux/mips64le/softfloat", "linux/mips64le/hardfloat"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/mips64le": {
"linux/mips64le/softfloat",
"linux/mips64le/hardfloat",
},
},
expectedOut: "linux/arm64, linux/arm (+2), linux/mips64le (+2)",
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
tpfs := truncPlatforms(tt.platforms, tt.max)
assert.Equal(t, tt.expectedList, tpfs.List())
assert.Equal(t, tt.expectedOut, tpfs.String())
})
}
}

View File

@@ -16,6 +16,9 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
pb "github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@@ -25,7 +28,9 @@ type pruneOptions struct {
builder string
all bool
filter opts.FilterOpt
keepStorage opts.MemBytes
reservedSpace opts.MemBytes
maxUsedSpace opts.MemBytes
minFreeSpace opts.MemBytes
force bool
verbose bool
}
@@ -105,8 +110,19 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
if err != nil {
return err
}
// check if the client supports newer prune options
if opts.maxUsedSpace.Value() != 0 || opts.minFreeSpace.Value() != 0 {
caps, err := loadLLBCaps(ctx, c)
if err != nil {
return errors.Wrap(err, "failed to load buildkit capabilities for prune")
}
if caps.Supports(pb.CapGCFreeSpaceFilter) != nil {
return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
}
}
popts := []client.PruneOption{
client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
client.WithKeepOpt(pi.KeepDuration, opts.reservedSpace.Value(), opts.maxUsedSpace.Value(), opts.minFreeSpace.Value()),
client.WithFilter(pi.Filter),
}
if opts.all {
@@ -131,6 +147,17 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
return nil
}
func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
var caps apicaps.CapSet
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
caps = c.BuildOpts().LLBCaps
return nil, nil
}, nil)
return caps, err
}
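loadLLBCaps runs a no-op internal solve solely to read the worker's LLB capability set, which the prune command then uses to gate the newer space filters. A sketch of how that inline check could be factored into a helper, assuming the imports already present in this file; note that CapSet.Supports returns nil when the daemon advertises the capability and an error describing the gap otherwise:

```go
// a minimal sketch, reusing loadLLBCaps from above
func ensurePruneCaps(ctx context.Context, c *client.Client) error {
	caps, err := loadLLBCaps(ctx, c)
	if err != nil {
		return errors.Wrap(err, "failed to load buildkit capabilities for prune")
	}
	// Supports == nil means the GC free-space filters are available.
	if err := caps.Supports(pb.CapGCFreeSpaceFilter); err != nil {
		return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
	}
	return nil
}
```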
func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options := pruneOptions{filter: opts.NewFilterOpt()}
@@ -148,10 +175,15 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags := cmd.Flags()
flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache")
flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to max-storage")
return cmd
}

View File

@@ -150,8 +150,9 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {

View File

@@ -1,6 +1,7 @@
package commands
import (
"fmt"
"os"
debugcmd "github.com/docker/buildx/commands/debug"
@@ -36,13 +37,22 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
if opt.debug {
debug.Enable()
}
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
}
return plugin.PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return cmd.Help()
}
_ = cmd.Help()
return cli.StatusError{
StatusCode: 1,
Status: fmt.Sprintf("ERROR: unknown command: %q", args[0]),
}
},
}
if !isPlugin {
// match plugin behavior for standalone mode
@@ -95,7 +105,7 @@ func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
versionCmd(dockerCli),
pruneCmd(dockerCli, opts),
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
imagetoolscmd.RootCmd(cmd, dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
)
if confutil.IsExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,

View File

@@ -46,7 +46,6 @@ func runUse(dockerCli command.Cli, in useOptions) error {
return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder)
}
}
}
return errors.Wrapf(err, "failed to find instance %q", in.builder)
}

View File

@@ -34,9 +34,9 @@ const defaultTargetName = "default"
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
return nil, nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
@@ -70,7 +70,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
opts.Platforms = platforms
@@ -79,7 +79,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
opts.Session = append(opts.Session, secrets)
@@ -89,13 +89,13 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
}
ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, err := controllerapi.CreateExports(in.Exports)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if in.ExportPush {
var pushUsed bool
@@ -134,7 +134,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, errors.Wrap(err, "parse annotations")
return nil, nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
@@ -154,7 +154,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
opts.Allow = allow
@@ -178,23 +178,28 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
return nil, nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
resp, res, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
var inputs *build.Inputs
buildOptions := map[string]build.Options{defaultTargetName: opts}
resp, res, err := buildTargets(ctx, dockerCli, nodes, buildOptions, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, err
return nil, res, nil, err
}
return resp, res, nil
if i, ok := buildOptions[defaultTargetName]; ok {
inputs = &i.Inputs
}
return resp, res, inputs, nil
}
// buildTargets runs the specified build and returns the result.
@@ -209,7 +214,7 @@ func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.No
if generateResult {
var mu sync.Mutex
var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock()
defer mu.Unlock()
if res == nil || driverIndex < idx {
@@ -217,7 +222,7 @@ func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.No
}
})
} else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress)
}
if err != nil {
return nil, res, err

View File

@@ -4,18 +4,19 @@ import (
"context"
"io"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
)
type BuildxController interface {
Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error)
// Invoke starts an IO session into the specified process.
// If pid doesn't match any running processes, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error
Close() error
List(ctx context.Context) (refs []string, _ error)

View File

@@ -1,7 +1,10 @@
package errdefs
import (
"io"
"github.com/containerd/typeurl/v2"
"github.com/docker/buildx/util/desktop"
"github.com/moby/buildkit/util/grpcerrors"
)
@@ -10,7 +13,7 @@ func init() {
}
type BuildError struct {
Build
*Build
error
}
@@ -19,16 +22,27 @@ func (e *BuildError) Unwrap() error {
}
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return &e.Build
return e.Build
}
func WrapBuild(err error, ref string) error {
func (e *BuildError) PrintBuildDetails(w io.Writer) error {
if e.Ref == "" {
return nil
}
ebr := &desktop.ErrorWithBuildRef{
Ref: e.Ref,
Err: e.error,
}
return ebr.Print(w)
}
func WrapBuild(err error, sessionID string, ref string) error {
if err == nil {
return nil
}
return &BuildError{Build: Build{Ref: ref}, error: err}
return &BuildError{Build: &Build{SessionID: sessionID, Ref: ref}, error: err}
}
func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: *b}
return &BuildError{error: err, Build: b}
}
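BuildError now embeds a pointer to the generated Build message and carries the session ID alongside the build ref; callers recover it with errors.As and can print the Docker Desktop build link via PrintBuildDetails. A small sketch of the round trip, with placeholder session and ref values:

```go
package main

import (
	"errors"
	"fmt"
	"os"

	cerrdefs "github.com/docker/buildx/controller/errdefs"
)

func main() {
	err := cerrdefs.WrapBuild(errors.New("process exited with status 1"), "sessionid123", "buildref456")

	var be *cerrdefs.BuildError
	if errors.As(err, &be) {
		// SessionID and Ref travel with the wrapped error across the controller API.
		fmt.Println(be.SessionID, be.Ref)
		_ = be.PrintBuildDetails(os.Stderr)
	}
}
```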

View File

@@ -1,77 +1,157 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: errdefs.proto
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc v3.11.4
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
_ "github.com/moby/buildkit/solver/pb"
math "math"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Build struct {
Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
SessionID string `protobuf:"bytes,1,opt,name=SessionID,proto3" json:"SessionID,omitempty"`
Ref string `protobuf:"bytes,2,opt,name=Ref,proto3" json:"Ref,omitempty"`
}
func (x *Build) Reset() {
*x = Build{}
if protoimpl.UnsafeEnabled {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Build) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (m *Build) Reset() { *m = Build{} }
func (m *Build) String() string { return proto.CompactTextString(m) }
func (*Build) ProtoMessage() {}
func (x *Build) ProtoReflect() protoreflect.Message {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Build.ProtoReflect.Descriptor instead.
func (*Build) Descriptor() ([]byte, []int) {
return fileDescriptor_689dc58a5060aff5, []int{0}
}
func (m *Build) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Build.Unmarshal(m, b)
}
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Build.Marshal(b, m, deterministic)
}
func (m *Build) XXX_Merge(src proto.Message) {
xxx_messageInfo_Build.Merge(m, src)
}
func (m *Build) XXX_Size() int {
return xxx_messageInfo_Build.Size(m)
}
func (m *Build) XXX_DiscardUnknown() {
xxx_messageInfo_Build.DiscardUnknown(m)
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP(), []int{0}
}
var xxx_messageInfo_Build proto.InternalMessageInfo
func (m *Build) GetRef() string {
if m != nil {
return m.Ref
func (x *Build) GetSessionID() string {
if x != nil {
return x.SessionID
}
return ""
}
func init() {
proto.RegisterType((*Build)(nil), "errdefs.Build")
func (x *Build) GetRef() string {
if x != nil {
return x.Ref
}
return ""
}
func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
var File_github_com_docker_buildx_controller_errdefs_errdefs_proto protoreflect.FileDescriptor
var fileDescriptor_689dc58a5060aff5 = []byte{
// 111 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = []byte{
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72,
0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x2f, 0x65, 0x72,
0x72, 0x64, 0x65, 0x66, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x15, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2e, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2e, 0x65, 0x72, 0x72, 0x64, 0x65,
0x66, 0x73, 0x22, 0x37, 0x0a, 0x05, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x53,
0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09,
0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x10, 0x0a, 0x03, 0x52, 0x65, 0x66,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x52, 0x65, 0x66, 0x42, 0x2d, 0x5a, 0x2b, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, 0x6b, 0x65, 0x72,
0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c,
0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce sync.Once
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc
)
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP() []byte {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce.Do(func() {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData)
})
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = []interface{}{
(*Build)(nil), // 0: docker.buildx.errdefs.Build
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() }
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() {
if File_github_com_docker_buildx_controller_errdefs_errdefs_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Build); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes,
DependencyIndexes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs,
MessageInfos: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes,
}.Build()
File_github_com_docker_buildx_controller_errdefs_errdefs_proto = out.File
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = nil
}

View File

@@ -1,9 +1,10 @@
 syntax = "proto3";

-package errdefs;
+package docker.buildx.errdefs;

-import "github.com/moby/buildkit/solver/pb/ops.proto";
+option go_package = "github.com/docker/buildx/controller/errdefs";

 message Build {
-  string Ref = 1;
+  string SessionID = 1;
+  string Ref = 2;
 }
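For orientation, here is a minimal sketch of how the regenerated message is used once the schema above lands: SessionID now occupies field 1 and Ref moved to field 2. The Build type and its getters come from the generated code shown earlier in this diff; the proto.Marshal/proto.Unmarshal round trip is standard google.golang.org/protobuf usage and is only illustrative, not code from this change.

package main

import (
    "fmt"

    "github.com/docker/buildx/controller/errdefs"
    "google.golang.org/protobuf/proto"
)

func main() {
    in := &errdefs.Build{SessionID: "local", Ref: "build-ref"} // tags 1 and 2, as declared above

    raw, err := proto.Marshal(in)
    if err != nil {
        panic(err)
    }

    out := &errdefs.Build{}
    if err := proto.Unmarshal(raw, out); err != nil {
        panic(err)
    }
    fmt.Println(out.GetSessionID(), out.GetRef()) // local build-ref
}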

View File

@@ -0,0 +1,241 @@
// Code generated by protoc-gen-go-vtproto. DO NOT EDIT.
// protoc-gen-go-vtproto version: v0.6.1-0.20240319094008-0393e58bdf10
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
fmt "fmt"
protohelpers "github.com/planetscale/vtprotobuf/protohelpers"
proto "google.golang.org/protobuf/proto"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
io "io"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
func (m *Build) CloneVT() *Build {
if m == nil {
return (*Build)(nil)
}
r := new(Build)
r.SessionID = m.SessionID
r.Ref = m.Ref
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
}
return r
}
func (m *Build) CloneMessageVT() proto.Message {
return m.CloneVT()
}
func (this *Build) EqualVT(that *Build) bool {
if this == that {
return true
} else if this == nil || that == nil {
return false
}
if this.SessionID != that.SessionID {
return false
}
if this.Ref != that.Ref {
return false
}
return string(this.unknownFields) == string(that.unknownFields)
}
func (this *Build) EqualMessageVT(thatMsg proto.Message) bool {
that, ok := thatMsg.(*Build)
if !ok {
return false
}
return this.EqualVT(that)
}
func (m *Build) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
size := m.SizeVT()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBufferVT(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Build) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
func (m *Build) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
i := len(dAtA)
_ = i
var l int
_ = l
if m.unknownFields != nil {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
if len(m.Ref) > 0 {
i -= len(m.Ref)
copy(dAtA[i:], m.Ref)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ref)))
i--
dAtA[i] = 0x12
}
if len(m.SessionID) > 0 {
i -= len(m.SessionID)
copy(dAtA[i:], m.SessionID)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SessionID)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *Build) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.SessionID)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
l = len(m.Ref)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
func (m *Build) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Build: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Build: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SessionID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Ref", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Ref = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return protohelpers.ErrInvalidLength
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
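Taken together, the vtproto helpers above give the message an allocation-light fast path next to the reflection-based API. A short sketch of how they compose, placed in the same package as the generated code; roundTrip is an illustrative helper, not part of this diff:

func roundTrip(b *Build) (*Build, error) {
    data, err := b.MarshalVT() // size-calculated encode, no reflection
    if err != nil {
        return nil, err
    }
    out := &Build{}
    if err := out.UnmarshalVT(data); err != nil { // hand-rolled decoder defined above
        return nil, err
    }
    if !b.EqualVT(out) {
        return nil, fmt.Errorf("round trip mismatch")
    }
    return out.CloneVT(), nil // deep copy, unknown fields included
}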

View File

@@ -1,3 +0,0 @@
package errdefs
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto

View File

@@ -11,6 +11,7 @@ import (
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
@@ -21,7 +22,7 @@ import (
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
return &localController{
dockerCli: dockerCli,
ref: "local",
sessionID: "local",
processes: processes.NewManager(),
}
}
@@ -35,46 +36,51 @@ type buildConfig struct {
type localController struct {
dockerCli command.Cli
ref string
sessionID string
buildConfig buildConfig
processes *processes.Manager
buildOnGoing atomic.Bool
}
func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
func (b *localController) Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
if !b.buildOnGoing.CompareAndSwap(false, true) {
return "", nil, errors.New("build ongoing")
return "", nil, nil, errors.New("build ongoing")
}
defer b.buildOnGoing.Store(false)
resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
resp, res, dockerfileMappings, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
// NOTE: RunBuild can return *build.ResultHandle even on error.
if res != nil {
b.buildConfig = buildConfig{
resultCtx: res,
buildOptions: &options,
buildOptions: options,
}
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, b.ref)
var ref string
var ebr *desktop.ErrorWithBuildRef
if errors.As(buildErr, &ebr) {
ref = ebr.Ref
}
buildErr = controllererrors.WrapBuild(buildErr, b.sessionID, ref)
}
}
if buildErr != nil {
return "", nil, buildErr
return "", nil, nil, buildErr
}
return b.ref, resp, nil
return b.sessionID, resp, dockerfileMappings, nil
}
func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
func (b *localController) ListProcesses(ctx context.Context, sessionID string) (infos []*controllerapi.ProcessInfo, retErr error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.ListProcesses(), nil
}
func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
func (b *localController) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.DeleteProcess(pid)
}
@@ -83,9 +89,9 @@ func (b *localController) cancelRunningProcesses() {
b.processes.CancelRunningProcesses()
}
func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if ref != b.ref {
return errors.Errorf("unknown ref %q", ref)
func (b *localController) Invoke(ctx context.Context, sessionID string, pid string, cfg *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
proc, ok := b.processes.Get(pid)
@@ -95,7 +101,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf
return errors.New("no build result is registered")
}
var err error
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, cfg)
if err != nil {
return err
}
@@ -103,7 +109,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf
// Attach containerIn to this process
ioCancelledCh := make(chan struct{})
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func(error) { close(ioCancelledCh) })
select {
case <-ioCancelledCh:
@@ -111,7 +117,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf
case err := <-proc.Done():
return err
case <-ctx.Done():
return ctx.Err()
return context.Cause(ctx)
}
}
@@ -130,7 +136,7 @@ func (b *localController) Close() error {
}
func (b *localController) List(ctx context.Context) (res []string, _ error) {
return []string{b.ref}, nil
return []string{b.sessionID}, nil
}
func (b *localController) Disconnect(ctx context.Context, key string) error {
@@ -138,9 +144,9 @@ func (b *localController) Disconnect(ctx context.Context, key string) error {
return nil
}
func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
if ref != b.ref {
return nil, errors.Errorf("unknown ref %q", ref)
func (b *localController) Inspect(ctx context.Context, sessionID string) (*controllerapi.InspectResponse, error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}
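The notable behavioral change above is how build errors are wrapped: the controller now records its own session ID and, when Docker Desktop attached one, the build ref recovered from the error chain. A sketch of that pattern in isolation; wrapAndReport is a hypothetical helper, while desktop.ErrorWithBuildRef and controllererrors.WrapBuild are the calls used in this diff:

func wrapAndReport(buildErr error, sessionID string) error {
    if buildErr == nil {
        return nil
    }
    var ref string
    var ebr *desktop.ErrorWithBuildRef
    if errors.As(buildErr, &ebr) { // ref is only known when Desktop decorated the error
        ref = ebr.Ref
    }
    return controllererrors.WrapBuild(buildErr, sessionID, ref)
}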

File diff suppressed because it is too large

View File

@@ -5,7 +5,7 @@ package buildx.controller.v1;
 import "github.com/moby/buildkit/api/services/control/control.proto";
 import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";

-option go_package = "pb";
+option go_package = "github.com/docker/buildx/controller/pb";

 service Controller {
   rpc Build(BuildRequest) returns (BuildResponse);
@@ -21,7 +21,7 @@ service Controller {
 }

 message ListProcessesRequest {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message ListProcessesResponse {
@@ -34,7 +34,7 @@ message ProcessInfo {
 }

 message DisconnectProcessRequest {
-  string Ref = 1;
+  string SessionID = 1;
   string ProcessID = 2;
 }
@@ -42,7 +42,7 @@ message DisconnectProcessResponse {
 }

 message BuildRequest {
-  string Ref = 1;
+  string SessionID = 1;
   BuildOptions Options = 2;
 }
@@ -118,7 +118,7 @@ message CallFunc {
 }

 message InspectRequest {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message InspectResponse {
@@ -140,13 +140,13 @@ message BuildResponse {
 }

 message DisconnectRequest {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message DisconnectResponse {}

 message ListRequest {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message ListResponse {
@@ -161,7 +161,7 @@ message InputMessage {
 }

 message InputInitMessage {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message DataMessage {
@@ -186,7 +186,7 @@ message Message {
 }

 message InitMessage {
-  string Ref = 1;
+  string SessionID = 1;

   // If ProcessID already exists in the server, it tries to connect to it
   // instead of invoking the new one. In this case, InvokeConfig will be ignored.
@@ -227,7 +227,7 @@ message SignalMessage {
 }

 message StatusRequest {
-  string Ref = 1;
+  string SessionID = 1;
 }

 message StatusResponse {

View File

@@ -0,0 +1,452 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.11.4
// source: github.com/docker/buildx/controller/pb/controller.proto
package pb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Controller_Build_FullMethodName = "/buildx.controller.v1.Controller/Build"
Controller_Inspect_FullMethodName = "/buildx.controller.v1.Controller/Inspect"
Controller_Status_FullMethodName = "/buildx.controller.v1.Controller/Status"
Controller_Input_FullMethodName = "/buildx.controller.v1.Controller/Input"
Controller_Invoke_FullMethodName = "/buildx.controller.v1.Controller/Invoke"
Controller_List_FullMethodName = "/buildx.controller.v1.Controller/List"
Controller_Disconnect_FullMethodName = "/buildx.controller.v1.Controller/Disconnect"
Controller_Info_FullMethodName = "/buildx.controller.v1.Controller/Info"
Controller_ListProcesses_FullMethodName = "/buildx.controller.v1.Controller/ListProcesses"
Controller_DisconnectProcess_FullMethodName = "/buildx.controller.v1.Controller/DisconnectProcess"
)
// ControllerClient is the client API for Controller service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type ControllerClient interface {
Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error)
Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error)
Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error)
Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error)
Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error)
List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error)
Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error)
DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error)
}
type controllerClient struct {
cc grpc.ClientConnInterface
}
func NewControllerClient(cc grpc.ClientConnInterface) ControllerClient {
return &controllerClient{cc}
}
func (c *controllerClient) Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BuildResponse)
err := c.cc.Invoke(ctx, Controller_Build_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InspectResponse)
err := c.cc.Invoke(ctx, Controller_Inspect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[0], Controller_Status_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[StatusRequest, StatusResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusClient = grpc.ServerStreamingClient[StatusResponse]
func (c *controllerClient) Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[1], Controller_Input_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[InputMessage, InputResponse]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputClient = grpc.ClientStreamingClient[InputMessage, InputResponse]
func (c *controllerClient) Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[2], Controller_Invoke_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[Message, Message]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeClient = grpc.BidiStreamingClient[Message, Message]
func (c *controllerClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListResponse)
err := c.cc.Invoke(ctx, Controller_List_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectResponse)
err := c.cc.Invoke(ctx, Controller_Disconnect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InfoResponse)
err := c.cc.Invoke(ctx, Controller_Info_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListProcessesResponse)
err := c.cc.Invoke(ctx, Controller_ListProcesses_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectProcessResponse)
err := c.cc.Invoke(ctx, Controller_DisconnectProcess_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ControllerServer is the server API for Controller service.
// All implementations should embed UnimplementedControllerServer
// for forward compatibility.
type ControllerServer interface {
Build(context.Context, *BuildRequest) (*BuildResponse, error)
Inspect(context.Context, *InspectRequest) (*InspectResponse, error)
Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error
Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error
Invoke(grpc.BidiStreamingServer[Message, Message]) error
List(context.Context, *ListRequest) (*ListResponse, error)
Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error)
Info(context.Context, *InfoRequest) (*InfoResponse, error)
ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error)
DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error)
}
// UnimplementedControllerServer should be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedControllerServer struct{}
func (UnimplementedControllerServer) Build(context.Context, *BuildRequest) (*BuildResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Build not implemented")
}
func (UnimplementedControllerServer) Inspect(context.Context, *InspectRequest) (*InspectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Inspect not implemented")
}
func (UnimplementedControllerServer) Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error {
return status.Errorf(codes.Unimplemented, "method Status not implemented")
}
func (UnimplementedControllerServer) Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error {
return status.Errorf(codes.Unimplemented, "method Input not implemented")
}
func (UnimplementedControllerServer) Invoke(grpc.BidiStreamingServer[Message, Message]) error {
return status.Errorf(codes.Unimplemented, "method Invoke not implemented")
}
func (UnimplementedControllerServer) List(context.Context, *ListRequest) (*ListResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method List not implemented")
}
func (UnimplementedControllerServer) Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Disconnect not implemented")
}
func (UnimplementedControllerServer) Info(context.Context, *InfoRequest) (*InfoResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
}
func (UnimplementedControllerServer) ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListProcesses not implemented")
}
func (UnimplementedControllerServer) DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DisconnectProcess not implemented")
}
func (UnimplementedControllerServer) testEmbeddedByValue() {}
// UnsafeControllerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ControllerServer will
// result in compilation errors.
type UnsafeControllerServer interface {
mustEmbedUnimplementedControllerServer()
}
func RegisterControllerServer(s grpc.ServiceRegistrar, srv ControllerServer) {
// If the following call panics, it indicates UnimplementedControllerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Controller_ServiceDesc, srv)
}
func _Controller_Build_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BuildRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Build(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Build_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Build(ctx, req.(*BuildRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Inspect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InspectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Inspect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Inspect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Inspect(ctx, req.(*InspectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Status_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(StatusRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(ControllerServer).Status(m, &grpc.GenericServerStream[StatusRequest, StatusResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusServer = grpc.ServerStreamingServer[StatusResponse]
func _Controller_Input_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Input(&grpc.GenericServerStream[InputMessage, InputResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputServer = grpc.ClientStreamingServer[InputMessage, InputResponse]
func _Controller_Invoke_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Invoke(&grpc.GenericServerStream[Message, Message]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeServer = grpc.BidiStreamingServer[Message, Message]
func _Controller_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).List(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_List_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).List(ctx, req.(*ListRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Disconnect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Disconnect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Disconnect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Disconnect(ctx, req.(*DisconnectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InfoRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Info(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Info_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Info(ctx, req.(*InfoRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_ListProcesses_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListProcessesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).ListProcesses(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_ListProcesses_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).ListProcesses(ctx, req.(*ListProcessesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_DisconnectProcess_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectProcessRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).DisconnectProcess(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_DisconnectProcess_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).DisconnectProcess(ctx, req.(*DisconnectProcessRequest))
}
return interceptor(ctx, in, info, handler)
}
// Controller_ServiceDesc is the grpc.ServiceDesc for Controller service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Controller_ServiceDesc = grpc.ServiceDesc{
ServiceName: "buildx.controller.v1.Controller",
HandlerType: (*ControllerServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Build",
Handler: _Controller_Build_Handler,
},
{
MethodName: "Inspect",
Handler: _Controller_Inspect_Handler,
},
{
MethodName: "List",
Handler: _Controller_List_Handler,
},
{
MethodName: "Disconnect",
Handler: _Controller_Disconnect_Handler,
},
{
MethodName: "Info",
Handler: _Controller_Info_Handler,
},
{
MethodName: "ListProcesses",
Handler: _Controller_ListProcesses_Handler,
},
{
MethodName: "DisconnectProcess",
Handler: _Controller_DisconnectProcess_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "Status",
Handler: _Controller_Status_Handler,
ServerStreams: true,
},
{
StreamName: "Input",
Handler: _Controller_Input_Handler,
ClientStreams: true,
},
{
StreamName: "Invoke",
Handler: _Controller_Invoke_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "github.com/docker/buildx/controller/pb/controller.proto",
}
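The regenerated service code keeps the usual protoc-gen-go-grpc wiring: embed UnimplementedControllerServer by value, override the methods you need, and register the implementation with a grpc.Server. A minimal sketch follows; myController, the List body, and the socket path are illustrative, not part of this diff, and the usual context, net and grpc imports are assumed.

type myController struct {
    UnimplementedControllerServer // embed by value for forward compatibility
}

func (c *myController) List(ctx context.Context, req *ListRequest) (*ListResponse, error) {
    return &ListResponse{Keys: []string{"local"}}, nil
}

func serve() error {
    l, err := net.Listen("unix", "/tmp/buildx-controller.sock")
    if err != nil {
        return err
    }
    s := grpc.NewServer()
    RegisterControllerServer(s, &myController{}) // panics early if the embed is a nil pointer
    return s.Serve(l)
}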

File diff suppressed because it is too large

View File

@@ -1,3 +0,0 @@
package pb
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto

View File

@@ -153,7 +153,6 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
}
}
ps = append(ps, p)
}
s.Paths = ps
ssh = append(ssh, s)

View File

@@ -3,10 +3,10 @@ package pb
import (
"os"
"path/filepath"
"reflect"
"testing"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/proto"
)
func TestResolvePaths(t *testing.T) {
@@ -16,54 +16,58 @@ func TestResolvePaths(t *testing.T) {
require.NoError(t, os.Chdir(tmpwd))
tests := []struct {
name string
options BuildOptions
want BuildOptions
options *BuildOptions
want *BuildOptions
}{
{
name: "contextpath",
options: BuildOptions{ContextPath: "test"},
want: BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
options: &BuildOptions{ContextPath: "test"},
want: &BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
},
{
name: "contextpath-cwd",
options: BuildOptions{ContextPath: "."},
want: BuildOptions{ContextPath: tmpwd},
options: &BuildOptions{ContextPath: "."},
want: &BuildOptions{ContextPath: tmpwd},
},
{
name: "contextpath-dash",
options: BuildOptions{ContextPath: "-"},
want: BuildOptions{ContextPath: "-"},
options: &BuildOptions{ContextPath: "-"},
want: &BuildOptions{ContextPath: "-"},
},
{
name: "contextpath-ssh",
options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
options: &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
want: &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "dockerfilename",
options: BuildOptions{DockerfileName: "test", ContextPath: "."},
want: BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
options: &BuildOptions{DockerfileName: "test", ContextPath: "."},
want: &BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
},
{
name: "dockerfilename-dash",
options: BuildOptions{DockerfileName: "-", ContextPath: "."},
want: BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
options: &BuildOptions{DockerfileName: "-", ContextPath: "."},
want: &BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
},
{
name: "dockerfilename-remote",
options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
want: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
options: &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
want: &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "contexts",
options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
options: &BuildOptions{NamedContexts: map[string]string{
"a": "test1", "b": "test2",
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
}},
want: &BuildOptions{NamedContexts: map[string]string{
"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
}},
},
{
name: "cache-from",
options: BuildOptions{
options: &BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
@@ -75,7 +79,7 @@ func TestResolvePaths(t *testing.T) {
},
},
},
want: BuildOptions{
want: &BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
@@ -90,7 +94,7 @@ func TestResolvePaths(t *testing.T) {
},
{
name: "cache-to",
options: BuildOptions{
options: &BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
@@ -102,7 +106,7 @@ func TestResolvePaths(t *testing.T) {
},
},
},
want: BuildOptions{
want: &BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
@@ -117,7 +121,7 @@ func TestResolvePaths(t *testing.T) {
},
{
name: "exports",
options: BuildOptions{
options: &BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
@@ -145,7 +149,7 @@ func TestResolvePaths(t *testing.T) {
},
},
},
want: BuildOptions{
want: &BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
@@ -176,7 +180,7 @@ func TestResolvePaths(t *testing.T) {
},
{
name: "secrets",
options: BuildOptions{
options: &BuildOptions{
Secrets: []*Secret{
{
FilePath: "test1",
@@ -191,7 +195,7 @@ func TestResolvePaths(t *testing.T) {
},
},
},
want: BuildOptions{
want: &BuildOptions{
Secrets: []*Secret{
{
FilePath: filepath.Join(tmpwd, "test1"),
@@ -209,7 +213,7 @@ func TestResolvePaths(t *testing.T) {
},
{
name: "ssh",
options: BuildOptions{
options: &BuildOptions{
SSH: []*SSH{
{
ID: "default",
@@ -221,7 +225,7 @@ func TestResolvePaths(t *testing.T) {
},
},
},
want: BuildOptions{
want: &BuildOptions{
SSH: []*SSH{
{
ID: "default",
@@ -238,10 +242,10 @@ func TestResolvePaths(t *testing.T) {
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
got, err := ResolveOptionPaths(&tt.options)
got, err := ResolveOptionPaths(tt.options)
require.NoError(t, err)
if !reflect.DeepEqual(tt.want, *got) {
t.Fatalf("expected %#v, got %#v", tt.want, *got)
if !proto.Equal(tt.want, got) {
t.Fatalf("expected %#v, got %#v", tt.want, got)
}
})
}
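The assertion swap at the end is the point of this change: messages generated with the new protobuf runtime carry internal bookkeeping (message state, size cache, unknown fields), so reflect.DeepEqual over two structurally identical messages can disagree, while proto.Equal compares only the declared field values. A tiny illustration, assuming the generated BuildOptions type:

a := &BuildOptions{ContextPath: "/tmp/ctx"}
b := proto.Clone(a).(*BuildOptions)

_ = reflect.DeepEqual(a, b) // may be false once protoimpl's lazy internal state diverges
_ = proto.Equal(a, b)       // true: compares declared protobuf fields only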

View File

@@ -1,10 +1,13 @@
package pb
import (
"time"
"github.com/docker/buildx/util/progress"
control "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/opencontainers/go-digest"
"google.golang.org/protobuf/types/known/timestamppb"
)
type writer struct {
@@ -19,9 +22,7 @@ func (w *writer) Write(status *client.SolveStatus) {
w.ch <- ToControlStatus(status)
}
func (w *writer) WriteBuildRef(target string, ref string) {
return
}
func (w *writer) WriteBuildRef(target string, ref string) {}
func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
return true
@@ -33,11 +34,11 @@ func ToControlStatus(s *client.SolveStatus) *StatusResponse {
resp := StatusResponse{}
for _, v := range s.Vertexes {
resp.Vertexes = append(resp.Vertexes, &control.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Digest: string(v.Digest),
Inputs: digestSliceToPB(v.Inputs),
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Started: timestampToPB(v.Started),
Completed: timestampToPB(v.Completed),
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
@@ -46,26 +47,26 @@ func ToControlStatus(s *client.SolveStatus) *StatusResponse {
for _, v := range s.Statuses {
resp.Statuses = append(resp.Statuses, &control.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Vertex: string(v.Vertex),
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
Timestamp: timestamppb.New(v.Timestamp),
Started: timestampToPB(v.Started),
Completed: timestampToPB(v.Completed),
})
}
for _, v := range s.Logs {
resp.Logs = append(resp.Logs, &control.VertexLog{
Vertex: v.Vertex,
Vertex: string(v.Vertex),
Stream: int64(v.Stream),
Msg: v.Data,
Timestamp: v.Timestamp,
Timestamp: timestamppb.New(v.Timestamp),
})
}
for _, v := range s.Warnings {
resp.Warnings = append(resp.Warnings, &control.VertexWarning{
Vertex: v.Vertex,
Vertex: string(v.Vertex),
Level: int64(v.Level),
Short: v.Short,
Detail: v.Detail,
@@ -81,11 +82,11 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
s := client.SolveStatus{}
for _, v := range resp.Vertexes {
s.Vertexes = append(s.Vertexes, &client.Vertex{
Digest: v.Digest,
Inputs: v.Inputs,
Digest: digest.Digest(v.Digest),
Inputs: digestSliceFromPB(v.Inputs),
Name: v.Name,
Started: v.Started,
Completed: v.Completed,
Started: timestampFromPB(v.Started),
Completed: timestampFromPB(v.Completed),
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
@@ -94,26 +95,26 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
for _, v := range resp.Statuses {
s.Statuses = append(s.Statuses, &client.VertexStatus{
ID: v.ID,
Vertex: v.Vertex,
Vertex: digest.Digest(v.Vertex),
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp,
Started: v.Started,
Completed: v.Completed,
Timestamp: v.Timestamp.AsTime(),
Started: timestampFromPB(v.Started),
Completed: timestampFromPB(v.Completed),
})
}
for _, v := range resp.Logs {
s.Logs = append(s.Logs, &client.VertexLog{
Vertex: v.Vertex,
Vertex: digest.Digest(v.Vertex),
Stream: int(v.Stream),
Data: v.Msg,
Timestamp: v.Timestamp,
Timestamp: v.Timestamp.AsTime(),
})
}
for _, v := range resp.Warnings {
s.Warnings = append(s.Warnings, &client.VertexWarning{
Vertex: v.Vertex,
Vertex: digest.Digest(v.Vertex),
Level: int(v.Level),
Short: v.Short,
Detail: v.Detail,
@@ -124,3 +125,38 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
}
return &s
}
func timestampFromPB(ts *timestamppb.Timestamp) *time.Time {
if ts == nil {
return nil
}
t := ts.AsTime()
if t.IsZero() {
return nil
}
return &t
}
func timestampToPB(ts *time.Time) *timestamppb.Timestamp {
if ts == nil {
return nil
}
return timestamppb.New(*ts)
}
func digestSliceFromPB(elems []string) []digest.Digest {
clone := make([]digest.Digest, len(elems))
for i, e := range elems {
clone[i] = digest.Digest(e)
}
return clone
}
func digestSliceToPB(elems []digest.Digest) []string {
clone := make([]string, len(elems))
for i, e := range elems {
clone[i] = string(e)
}
return clone
}
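These helpers adapt between BuildKit's client-side types (digest.Digest, *time.Time) and the protobuf wire types (string, *timestamppb.Timestamp) used by the regenerated control API. A short sketch of the round trip through the two converters defined above; the values are placeholders:

now := time.Now()
in := &client.SolveStatus{
    Vertexes: []*client.Vertex{{
        Digest:  digest.Digest("sha256:deadbeef"),
        Name:    "[internal] load build definition",
        Started: &now, // optional timestamps stay nil-able through timestampToPB/timestampFromPB
    }},
}

resp := ToControlStatus(in)     // client types -> *StatusResponse
back := FromControlStatus(resp) // and back; digests and timestamps survive the trip
_ = back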

View File

@@ -18,16 +18,16 @@ type Process struct {
invokeConfig *pb.InvokeConfig
errCh chan error
processCancel func()
serveIOCancel func()
serveIOCancel func(error)
}
// ForwardIO forwards process's io to the specified reader/writer.
// Optionally specify ioCancelCallback which will be called when
// the process closes the specified IO. This will be useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func(error)) {
p.inEnd.SetIn(in)
if f := p.serveIOCancel; f != nil {
f()
f(errors.WithStack(context.Canceled))
}
p.serveIOCancel = ioCancelCallback
}
@@ -124,9 +124,16 @@ func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *p
f.SetOut(&out)
// Register process
ctx, cancel := context.WithCancel(context.TODO())
ctx, cancel := context.WithCancelCause(context.TODO())
var cancelOnce sync.Once
processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
processCancelFunc := func() {
cancelOnce.Do(func() {
cancel(errors.WithStack(context.Canceled))
f.Close()
in.Close()
out.Close()
})
}
p := &Process{
inEnd: f,
invokeConfig: cfg,
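The process manager now threads a cancellation cause through these cancel functions: context.WithCancelCause returns a cancel that records why the context ended, and readers use context.Cause rather than ctx.Err() to see it. A standalone sketch of the standard-library pattern, independent of the buildx types:

package main

import (
    "context"
    "errors"
    "fmt"
)

func main() {
    ctx, cancel := context.WithCancelCause(context.Background())

    // Mirrors cancel(errors.WithStack(context.Canceled)) above, here with a custom cause.
    cancel(errors.New("process detached"))

    <-ctx.Done()
    fmt.Println(ctx.Err())          // context.Canceled
    fmt.Println(context.Cause(ctx)) // process detached
}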

View File

@@ -8,6 +8,7 @@ import (
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
@@ -27,6 +28,7 @@ func NewClient(ctx context.Context, addr string) (*Client, error) {
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
//nolint:staticcheck // ignore SA1019: WithBlock is deprecated and does not work with NewClient.
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
@@ -36,6 +38,7 @@ func NewClient(ctx context.Context, addr string) (*Client, error) {
grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
}
//nolint:staticcheck // ignore SA1019: Recommended NewClient has different behavior from DialContext.
conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
if err != nil {
return nil, err
@@ -72,36 +75,36 @@ func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
return res.Keys, nil
}
func (c *Client) Disconnect(ctx context.Context, key string) error {
if key == "" {
func (c *Client) Disconnect(ctx context.Context, sessionID string) error {
if sessionID == "" {
return nil
}
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{SessionID: sessionID})
return err
}
func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
func (c *Client) ListProcesses(ctx context.Context, sessionID string) (infos []*pb.ProcessInfo, retErr error) {
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{SessionID: sessionID})
if err != nil {
return nil, err
}
return res.Infos, nil
}
func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
func (c *Client) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{SessionID: sessionID, ProcessID: pid})
return err
}
func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if ref == "" || pid == "" {
return errors.New("build reference must be specified")
func (c *Client) Invoke(ctx context.Context, sessionID string, pid string, invokeConfig *pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if sessionID == "" || pid == "" {
return errors.New("build session ID must be specified")
}
stream, err := c.client().Invoke(ctx)
if err != nil {
return err
}
return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
return attachIO(ctx, stream, &pb.InitMessage{SessionID: sessionID, ProcessID: pid, InvokeConfig: invokeConfig}, ioAttachConfig{
stdin: in,
stdout: stdout,
stderr: stderr,
@@ -109,11 +112,11 @@ func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfi
})
}
func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
func (c *Client) Inspect(ctx context.Context, sessionID string) (*pb.InspectResponse, error) {
return c.client().Inspect(ctx, &pb.InspectRequest{SessionID: sessionID})
}
func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
func (c *Client) Build(ctx context.Context, options *pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
ref := identity.NewID()
statusChan := make(chan *client.SolveStatus)
eg, egCtx := errgroup.WithContext(ctx)
@@ -131,10 +134,10 @@ func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadC
}
return nil
})
return ref, resp, eg.Wait()
return ref, resp, nil, eg.Wait()
}
func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
func (c *Client) build(ctx context.Context, sessionID string, options *pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
eg, egCtx := errgroup.WithContext(ctx)
done := make(chan struct{})
@@ -143,8 +146,8 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
eg.Go(func() error {
defer close(done)
pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
Ref: ref,
Options: &options,
SessionID: sessionID,
Options: options,
})
if err != nil {
return err
@@ -156,7 +159,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
})
eg.Go(func() error {
stream, err := c.client().Status(egCtx, &pb.StatusRequest{
Ref: ref,
SessionID: sessionID,
})
if err != nil {
return err
@@ -181,7 +184,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Init{
Init: &pb.InputInitMessage{
Ref: ref,
SessionID: sessionID,
},
},
}); err != nil {
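From the caller's side, the remote client is now addressed by session ID end to end. A hedged sketch of driving a build through the client API shown above; the socket path, build options and helper name are placeholders, and error handling is trimmed:

func runRemoteBuild(ctx context.Context, pw progress.Writer, stdin io.ReadCloser) error {
    c, err := NewClient(ctx, "/path/to/shared/buildx.sock")
    if err != nil {
        return err
    }
    opts := &pb.BuildOptions{ContextPath: "."}
    sessionID, resp, _, err := c.Build(ctx, opts, stdin, pw) // session ID is generated client-side
    if err != nil {
        return err
    }
    defer c.Disconnect(ctx, sessionID)
    fmt.Println(sessionID, resp.ExporterResponse)
    return nil
}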

View File

@@ -62,9 +62,10 @@ func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts
serverRoot := filepath.Join(rootDir, "shared")
// connect to buildx server if it is already running
ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
ctx2, cancel := context.WithCancelCause(ctx)
ctx2, _ = context.WithTimeoutCause(ctx2, 1*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
cancel(errors.WithStack(context.Canceled))
if err != nil {
if !errors.Is(err, context.DeadlineExceeded) {
return nil, errors.Wrap(err, "cannot connect to the buildx server")
@@ -90,9 +91,10 @@ func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts
go wait()
// wait for buildx server to be ready
ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
ctx2, cancel = context.WithCancelCause(ctx)
ctx2, _ = context.WithTimeoutCause(ctx2, 10*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel()
cancel(errors.WithStack(context.Canceled))
if err != nil {
return errors.Wrap(err, "cannot connect to the buildx server")
}
@@ -148,8 +150,8 @@ func serveCmd(dockerCli command.Cli) *cobra.Command {
}()
// prepare server
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
return cbuild.RunBuild(ctx, dockerCli, options, stdin, progress, true)
})
defer b.Close()
@@ -258,7 +260,7 @@ func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error)
}
func rootDataDir(dockerCli command.Cli) string {
return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
return filepath.Join(confutil.NewConfig(dockerCli).Dir(), "controller")
}
func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {

View File

@@ -43,9 +43,9 @@ func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessa
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
}
ref := init.Ref
if ref == "" {
return errors.New("no ref is provided")
sessionID := init.SessionID
if sessionID == "" {
return errors.New("no session ID is provided")
}
if err := initFn(init); err != nil {
return errors.Wrap(err, "failed to initialize IO server")
@@ -302,7 +302,6 @@ func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage
out = cfg.stderr
default:
return errors.Errorf("unsupported fd %d", file.Fd)
}
if out == nil {
logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
@@ -345,7 +344,7 @@ func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
case err := <-errCh:
return nil, err
case <-ctx.Done():
return nil, ctx.Err()
return nil, context.Cause(ctx)
}
}

View File

@@ -11,6 +11,7 @@ import (
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
@@ -19,7 +20,7 @@ import (
"golang.org/x/sync/errgroup"
)
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, inp *build.Inputs, err error)
func NewServer(buildFunc BuildFunc) *Server {
return &Server{
@@ -36,7 +37,7 @@ type Server struct {
type session struct {
buildOnGoing atomic.Bool
statusChan chan *pb.StatusResponse
cancelBuild func()
cancelBuild func(error)
buildOptions *pb.BuildOptions
inputPipe *io.PipeWriter
@@ -52,9 +53,9 @@ func (s *session) cancelRunningProcesses() {
func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
s, ok := m.session[req.SessionID]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
return nil, errors.Errorf("unknown session ID %q", req.SessionID)
}
res = new(pb.ListProcessesResponse)
res.Infos = append(res.Infos, s.processes.ListProcesses()...)
@@ -64,9 +65,9 @@ func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest
func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.Ref]
s, ok := m.session[req.SessionID]
if !ok {
return nil, errors.Errorf("unknown ref %q", req.Ref)
return nil, errors.Errorf("unknown session ID %q", req.SessionID)
}
return res, s.processes.DeleteProcess(req.ProcessID)
}
@@ -100,22 +101,22 @@ func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListRes
}
func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
key := req.Ref
if key == "" {
return nil, errors.New("disconnect: empty key")
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("disconnect: empty session ID")
}
m.sessionMu.Lock()
if s, ok := m.session[key]; ok {
if s, ok := m.session[sessionID]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
s.cancelBuild(errors.WithStack(context.Canceled))
}
s.cancelRunningProcesses()
if s.result != nil {
s.result.Done()
}
}
delete(m.session, key)
delete(m.session, sessionID)
m.sessionMu.Unlock()
return &pb.DisconnectResponse{}, nil
@@ -126,7 +127,7 @@ func (m *Server) Close() error {
for k := range m.session {
if s, ok := m.session[k]; ok {
if s.cancelBuild != nil {
s.cancelBuild()
s.cancelBuild(errors.WithStack(context.Canceled))
}
s.cancelRunningProcesses()
}
@@ -136,26 +137,26 @@ func (m *Server) Close() error {
}
func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("inspect: empty key")
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("inspect: empty session ID")
}
var bo *pb.BuildOptions
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
if s, ok := m.session[sessionID]; ok {
bo = s.buildOptions
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("inspect: unknown key %v", ref)
return nil, errors.Errorf("inspect: unknown key %v", sessionID)
}
m.sessionMu.Unlock()
return &pb.InspectResponse{Options: bo}, nil
}
func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
ref := req.Ref
if ref == "" {
return nil, errors.New("build: empty key")
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("build: empty session ID")
}
// Prepare status channel and session
@@ -163,7 +164,7 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
if m.session == nil {
m.session = make(map[string]*session)
}
s, ok := m.session[ref]
s, ok := m.session[sessionID]
if ok {
if !s.buildOnGoing.CompareAndSwap(false, true) {
m.sessionMu.Unlock()
@@ -182,12 +183,12 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
inR, inW := io.Pipe()
defer inR.Close()
s.inputPipe = inW
m.session[ref] = s
m.session[sessionID] = s
m.sessionMu.Unlock()
defer func() {
close(statusChan)
m.sessionMu.Lock()
s, ok := m.session[ref]
s, ok := m.session[sessionID]
if ok {
s.statusChan = nil
s.buildOnGoing.Store(false)
@@ -198,24 +199,29 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
pw := pb.NewProgressWriter(statusChan)
// Build the specified request
ctx, cancel := context.WithCancel(ctx)
defer cancel()
resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
ctx, cancel := context.WithCancelCause(ctx)
defer func() { cancel(errors.WithStack(context.Canceled)) }()
resp, res, _, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
m.sessionMu.Lock()
if s, ok := m.session[ref]; ok {
if s, ok := m.session[sessionID]; ok {
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
if res != nil {
s.result = res
s.cancelBuild = cancel
s.buildOptions = req.Options
m.session[ref] = s
m.session[sessionID] = s
if buildErr != nil {
buildErr = controllererrors.WrapBuild(buildErr, ref)
var ref string
var ebr *desktop.ErrorWithBuildRef
if errors.As(buildErr, &ebr) {
ref = ebr.Ref
}
buildErr = controllererrors.WrapBuild(buildErr, sessionID, ref)
}
}
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("build: unknown key %v", ref)
return nil, errors.Errorf("build: unknown session ID %v", sessionID)
}
m.sessionMu.Unlock()
@@ -232,9 +238,9 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
}
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
ref := req.Ref
if ref == "" {
return errors.New("status: empty key")
sessionID := req.SessionID
if sessionID == "" {
return errors.New("status: empty session ID")
}
// Wait and get status channel prepared by Build()
@@ -242,12 +248,12 @@ func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
if _, ok := m.session[sessionID]; !ok || m.session[sessionID].statusChan == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
statusChan = m.session[ref].statusChan
statusChan = m.session[sessionID].statusChan
m.sessionMu.Unlock()
break
}
@@ -278,9 +284,9 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
}
ref := init.Ref
if ref == "" {
return errors.New("input: no ref is provided")
sessionID := init.SessionID
if sessionID == "" {
return errors.New("input: no session ID is provided")
}
// Wait and get input stream pipe prepared by Build()
@@ -288,12 +294,12 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
if _, ok := m.session[sessionID]; !ok || m.session[sessionID].inputPipe == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
inputPipeW = m.session[ref].inputPipe
inputPipeW = m.session[sessionID].inputPipe
m.sessionMu.Unlock()
break
}
@@ -335,7 +341,7 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
select {
case msg = <-msgCh:
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "canceled")
return context.Cause(ctx)
}
if msg == nil {
return nil
@@ -364,23 +370,23 @@ func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
initDoneCh := make(chan *processes.Process)
initErrCh := make(chan error)
eg, egCtx := errgroup.WithContext(context.TODO())
srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
srvIOCtx, srvIOCancel := context.WithCancelCause(egCtx)
eg.Go(func() error {
defer srvIOCancel()
defer srvIOCancel(errors.WithStack(context.Canceled))
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
defer func() {
if retErr != nil {
initErrCh <- retErr
}
}()
ref := initMessage.Ref
sessionID := initMessage.SessionID
cfg := initMessage.InvokeConfig
m.sessionMu.Lock()
s, ok := m.session[ref]
s, ok := m.session[sessionID]
if !ok {
m.sessionMu.Unlock()
return errors.Errorf("invoke: unknown key %v", ref)
return errors.Errorf("invoke: unknown session ID %v", sessionID)
}
m.sessionMu.Unlock()
@@ -412,7 +418,7 @@ func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
})
})
eg.Go(func() (rErr error) {
defer srvIOCancel()
defer srvIOCancel(errors.WithStack(context.Canceled))
// Wait for init done
var proc *processes.Process
select {

View File

@@ -41,11 +41,15 @@ target "lint" {
platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
"darwin/amd64",
"darwin/arm64",
"freebsd/amd64",
"freebsd/arm64",
"linux/amd64",
"linux/arm64",
"linux/s390x",
"linux/ppc64le",
"linux/riscv64",
"openbsd/amd64",
"openbsd/arm64",
"windows/amd64",
"windows/arm64"
] : []
@@ -154,6 +158,8 @@ target "binaries-cross" {
platforms = [
"darwin/amd64",
"darwin/arm64",
"freebsd/amd64",
"freebsd/arm64",
"linux/amd64",
"linux/arm/v6",
"linux/arm/v7",
@@ -161,6 +167,8 @@ target "binaries-cross" {
"linux/ppc64le",
"linux/riscv64",
"linux/s390x",
"openbsd/amd64",
"openbsd/arm64",
"windows/amd64",
"windows/arm64"
]

View File

@@ -359,6 +359,21 @@ target "app" {
}
```
### `target.call`
Specifies the frontend method to use. Frontend methods let you, for example,
execute build checks only, instead of running a build. This is the same as the
`--call` flag.
```hcl
target "app" {
call = "check"
}
```
For more information about frontend methods, refer to the CLI reference for
[`docker buildx build --call`](https://docs.docker.com/reference/cli/docker/buildx/build/#call).
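For comparison, the same check-only run can be triggered directly with the build command; an illustrative invocation:
```console
$ docker buildx build --call=check .
```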
### `target.context`
Specifies the location of the build context to use for this target.
@@ -844,7 +859,7 @@ The following example forces the builder to always pull all images referenced in
```hcl
target "default" {
pull = "always"
pull = true
}
```
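For a single build, the `--pull` flag on the build command gives the equivalent behavior (illustrative example):
```console
$ docker buildx build --pull .
```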

View File

@@ -626,7 +626,7 @@ For example, the following Dockerfile contains four stages:
```dockerfile
# syntax=docker/dockerfile:1
FROM oven/bun:1 as base
FROM oven/bun:1 AS base
WORKDIR /app
FROM base AS install
@@ -912,17 +912,39 @@ For more information about how to use build secrets, see
Supported types are:
- [`file`](#file)
- [`env`](#env)
- [`type=file`](#typefile)
- [`type=env`](#typeenv)
Buildx attempts to detect the `type` automatically if unset.
Buildx attempts to detect the `type` automatically if unset. If an environment
variable with the same key as `id` is set, then Buildx uses `type=env` and the
variable value becomes the secret. If no such environment variable is set, and
`type` is not set, then Buildx falls back to `type=file`.
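If you prefer not to rely on auto-detection, you can always pass the type explicitly, for example:
```console
$ docker buildx build --secret type=file,id=aws,src=$HOME/.aws/credentials .
```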
#### `file`
#### `type=file`
Attribute keys:
Source a build secret from a file.
- `id` - ID of the secret. Defaults to base name of the `src` path.
- `src`, `source` - Secret filename. `id` used if unset.
##### `type=file` synopsis
```console
$ docker buildx build --secret [type=file,]id=<ID>[,src=<FILEPATH>] .
```
##### `type=file` attributes
| Key | Description | Default |
| --------------- | ----------------------------------------------------------------------------------------------------- | -------------------------- |
| `id` | ID of the secret. | N/A (this key is required) |
| `src`, `source` | Filepath of the file containing the secret value (absolute or relative to current working directory). | `id` if unset. |
###### `type=file` usage
In the following example, `type=file` is automatically detected because no
environment variable matching `aws` (the ID) is set.
```console
$ docker buildx build --secret id=aws,src=$HOME/.aws/credentials .
```
```dockerfile
# syntax=docker/dockerfile:1
@@ -932,16 +954,31 @@ RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
aws s3 cp s3://... ...
```
#### `type=env`
Source a build secret from an environment variable.
##### `type=env` synopsis
```console
$ docker buildx build --secret id=aws,src=$HOME/.aws/credentials .
$ docker buildx build --secret [type=env,]id=<ID>[,env=<VARIABLE>] .
```
#### `env`
##### `type=env` attributes
Attribute keys:
| Key | Description | Default |
| ---------------------- | ----------------------------------------------- | -------------------------- |
| `id` | ID of the secret. | N/A (this key is required) |
| `env`, `src`, `source` | Environment variable to source the secret from. | `id` if unset. |
- `id` - ID of the secret. Defaults to `env` name.
- `env` - Secret environment variable. `id` used if unset, otherwise will look for `src`, `source` if `id` unset.
##### `type=env` usage
In the following example, `type=env` is automatically detected because an
environment variable matching `id` is set.
```console
$ SECRET_TOKEN=token docker buildx build --secret id=SECRET_TOKEN .
```
```dockerfile
# syntax=docker/dockerfile:1
@@ -951,10 +988,26 @@ RUN --mount=type=bind,target=. \
yarn run test
```
In the following example, the secret with ID `SECRET_TOKEN` is set to contain
the value of the environment variable `API_KEY`.
```console
$ SECRET_TOKEN=token docker buildx build --secret id=SECRET_TOKEN .
$ API_KEY=token docker buildx build --secret id=SECRET_TOKEN,env=API_KEY .
```
You can also specify the name of the environment variable with `src` or `source`:
```console
$ API_KEY=token docker buildx build --secret type=env,id=SECRET_TOKEN,src=API_KEY .
```
> [!NOTE]
> When you specify the environment variable name with `src` or `source`, you
> must set `type=env` explicitly. Otherwise, Buildx assumes that the secret is
> `type=file` and looks for a file with the name of `src` or `source` (in this
> case, a file named `API_KEY` relative to the location where the `docker
> buildx build` command was executed).
### <a name="shm-size"></a> Shared memory size for build containers (--shm-size)
Sets the size of the shared memory allocated for build containers when using

View File

@@ -10,9 +10,10 @@ List builder instances
### Options
| Name | Type | Default | Description |
|:----------------------|:---------|:--------|:---------------------|
|:----------------------|:---------|:--------|:----------------------|
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--format`](#format) | `string` | `table` | Format the output |
| `--no-trunc` | `bool` | | Don't truncate output |
<!---MARKER_GEN_END-->

View File

@@ -10,13 +10,15 @@ Remove build cache
### Options
| Name | Type | Default | Description |
|:------------------------|:---------|:--------|:------------------------------------------|
|:------------------------|:---------|:--------|:-------------------------------------------------------|
| `-a`, `--all` | `bool` | | Include internal/frontend images |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--filter` | `filter` | | Provide filter values (e.g., `until=24h`) |
| `-f`, `--force` | `bool` | | Do not prompt for confirmation |
| `--keep-storage` | `bytes` | `0` | Amount of disk space to keep for cache |
| `--max-used-space` | `bytes` | `0` | Maximum amount of disk space allowed to keep for cache |
| `--min-free-space` | `bytes` | `0` | Target amount of free disk space after pruning |
| `--reserved-space` | `bytes` | `0` | Amount of disk space always allowed to keep for cache |
| `--verbose` | `bool` | | Provide a more verbose output |

View File

@@ -177,7 +177,6 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
break
}
}
}
_, err := d.DockerAPI.ContainerCreate(ctx, cfg, hc, &network.NetworkingConfig{}, nil, d.Name)
if err != nil && !errdefs.IsConflict(err) {
@@ -213,7 +212,7 @@ func (d *Driver) wait(ctx context.Context, l progress.SubLogger) error {
}
select {
case <-ctx.Done():
return ctx.Err()
return context.Cause(ctx)
case <-time.After(time.Duration(try*120) * time.Millisecond):
try++
continue

View File

@@ -29,7 +29,7 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
func (d *Driver) Info(ctx context.Context) (*driver.Info, error) {
_, err := d.DockerAPI.ServerVersion(ctx)
if err != nil {
return nil, errors.Wrapf(driver.ErrNotConnecting{}, err.Error())
return nil, errors.Wrap(driver.ErrNotConnecting{}, err.Error())
}
return &driver.Info{
Status: driver.Running,
@@ -39,7 +39,7 @@ func (d *Driver) Info(ctx context.Context) (*driver.Info, error) {
func (d *Driver) Version(ctx context.Context) (string, error) {
v, err := d.DockerAPI.ServerVersion(ctx)
if err != nil {
return "", errors.Wrapf(driver.ErrNotConnecting{}, err.Error())
return "", errors.Wrap(driver.ErrNotConnecting{}, err.Error())
}
if bkversion, _ := resolveBuildKitVersion(v.Version); bkversion != "" {
return bkversion, nil

View File

@@ -176,11 +176,6 @@ func resolveBuildKitVersion(ver string) (string, error) {
if err != nil {
return "", err
}
//if _, errs := c.Validate(mobyVersion); len(errs) > 0 {
// for _, err := range errs {
// fmt.Printf("%s: %v\n", m.MobyVersionConstraint, err)
// }
//}
if !c.Check(mobyVersion) {
continue
}

View File

@@ -191,7 +191,7 @@ func checkClientConfig(t *testing.T, ep Endpoint, server, namespace string, ca,
require.NoError(t, err)
assert.Equal(t, proxyURLString, proxyURL.String())
} else {
assert.True(t, cfg.Proxy == nil, "expected proxy to be nil, but is not nil instead")
assert.Nil(t, cfg.Proxy, "expected proxy to be nil, but is not nil instead")
}
}
@@ -224,7 +224,7 @@ func TestSaveLoadGKEConfig(t *testing.T) {
persistedMetadata, err := store.GetMetadata("gke-context")
require.NoError(t, err)
persistedEPMeta := EndpointFromContext(persistedMetadata)
assert.True(t, persistedEPMeta != nil)
assert.NotNil(t, persistedEPMeta)
persistedEP, err := persistedEPMeta.WithTLSData(store, "gke-context")
require.NoError(t, err)
persistedCfg := persistedEP.KubernetesConfig()
@@ -249,7 +249,7 @@ func TestSaveLoadEKSConfig(t *testing.T) {
persistedMetadata, err := store.GetMetadata("eks-context")
require.NoError(t, err)
persistedEPMeta := EndpointFromContext(persistedMetadata)
assert.True(t, persistedEPMeta != nil)
assert.NotNil(t, persistedEPMeta)
persistedEP, err := persistedEPMeta.WithTLSData(store, "eks-context")
require.NoError(t, err)
persistedCfg := persistedEP.KubernetesConfig()
@@ -274,14 +274,14 @@ func TestSaveLoadK3SConfig(t *testing.T) {
persistedMetadata, err := store.GetMetadata("k3s-context")
require.NoError(t, err)
persistedEPMeta := EndpointFromContext(persistedMetadata)
assert.True(t, persistedEPMeta != nil)
assert.NotNil(t, persistedEPMeta)
persistedEP, err := persistedEPMeta.WithTLSData(store, "k3s-context")
require.NoError(t, err)
persistedCfg := persistedEP.KubernetesConfig()
actualCfg, err := persistedCfg.ClientConfig()
require.NoError(t, err)
assert.True(t, len(actualCfg.Username) > 0)
assert.True(t, len(actualCfg.Password) > 0)
assert.Greater(t, len(actualCfg.Username), 0)
assert.Greater(t, len(actualCfg.Password), 0)
assert.Equal(t, expectedCfg.Username, actualCfg.Username)
assert.Equal(t, expectedCfg.Password, actualCfg.Password)
}

View File

@@ -112,7 +112,7 @@ func (d *Driver) wait(ctx context.Context) error {
for {
select {
case <-ctx.Done():
return ctx.Err()
return context.Cause(ctx)
case <-timeoutChan:
return err
case <-ticker.C:

View File

@@ -47,8 +47,9 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
return err
}
return progress.Wrap("[internal] waiting for connection", l, func(_ progress.SubLogger) error {
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
cancelCtx, cancel := context.WithCancelCause(ctx)
ctx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
return c.Wait(ctx)
})
}

go.mod
View File

@@ -1,35 +1,33 @@
module github.com/docker/buildx
go 1.21.0
go 1.22.0
require (
github.com/Masterminds/semver/v3 v3.2.1
github.com/Microsoft/go-winio v0.6.2
github.com/aws/aws-sdk-go-v2/config v1.26.6
github.com/compose-spec/compose-go/v2 v2.1.6
github.com/compose-spec/compose-go/v2 v2.4.4
github.com/containerd/console v1.0.4
github.com/containerd/containerd v1.7.21
github.com/containerd/continuity v0.4.3
github.com/containerd/errdefs v0.1.0
github.com/containerd/containerd v1.7.24
github.com/containerd/continuity v0.4.5
github.com/containerd/errdefs v0.3.0
github.com/containerd/log v0.1.0
github.com/containerd/platforms v0.2.1
github.com/containerd/typeurl/v2 v2.2.0
github.com/containerd/typeurl/v2 v2.2.3
github.com/creack/pty v1.1.21
github.com/distribution/reference v0.6.0
github.com/docker/cli v27.2.0+incompatible
github.com/docker/cli v27.4.0-rc.2+incompatible
github.com/docker/cli-docs-tool v0.8.0
github.com/docker/docker v27.2.0+incompatible
github.com/docker/docker v27.4.0-rc.2+incompatible
github.com/docker/go-units v0.5.0
github.com/gofrs/flock v0.12.1
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.4
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/uuid v1.6.0
github.com/hashicorp/go-cty-funcs v0.0.0-20230405223818-a090f58aa992
github.com/hashicorp/hcl/v2 v2.20.1
github.com/in-toto/in-toto-golang v0.5.0
github.com/mitchellh/hashstructure/v2 v2.0.2
github.com/moby/buildkit v0.16.0-rc2
github.com/moby/buildkit v0.18.0
github.com/moby/sys/mountinfo v0.7.2
github.com/moby/sys/signal v0.7.1
github.com/morikuni/aec v1.0.0
@@ -37,24 +35,27 @@ require (
github.com/opencontainers/image-spec v1.1.0
github.com/pelletier/go-toml v1.9.5
github.com/pkg/errors v0.9.1
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10
github.com/serialx/hashring v0.0.0-20200727003509-22c0c7ab6b1b
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.9.0
github.com/tonistiigi/fsutil v0.0.0-20240424095704-91a3fc46842c
github.com/tonistiigi/fsutil v0.0.0-20241121093142-31cf1f437184
github.com/tonistiigi/go-csvvalue v0.0.0-20240710180619-ddb21b71c0b4
github.com/zclconf/go-cty v1.14.4
go.opentelemetry.io/otel v1.21.0
go.opentelemetry.io/otel/metric v1.21.0
go.opentelemetry.io/otel/sdk v1.21.0
go.opentelemetry.io/otel/trace v1.21.0
golang.org/x/mod v0.17.0
golang.org/x/sync v0.7.0
golang.org/x/sys v0.22.0
golang.org/x/term v0.20.0
golang.org/x/text v0.15.0
google.golang.org/grpc v1.62.0
go.opentelemetry.io/otel v1.28.0
go.opentelemetry.io/otel/metric v1.28.0
go.opentelemetry.io/otel/sdk v1.28.0
go.opentelemetry.io/otel/trace v1.28.0
golang.org/x/mod v0.21.0
golang.org/x/sync v0.8.0
golang.org/x/sys v0.26.0
golang.org/x/term v0.24.0
golang.org/x/text v0.18.0
google.golang.org/grpc v1.66.3
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1
google.golang.org/protobuf v1.35.1
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.29.2
k8s.io/apimachinery v0.29.2
@@ -81,11 +82,11 @@ require (
github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 // indirect
github.com/aws/smithy-go v1.19.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/containerd/api v1.7.19 // indirect
github.com/containerd/ttrpc v1.2.5 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.2 // indirect
@@ -95,19 +96,20 @@ require (
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fvbommel/sortorder v1.0.1 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-viper/mapstructure/v2 v2.0.0 // indirect
github.com/gogo/googleapis v1.4.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
@@ -115,11 +117,10 @@ require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
@@ -127,7 +128,7 @@ require (
github.com/moby/locker v1.0.1 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/sequential v0.6.0 // indirect
github.com/moby/sys/user v0.3.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
@@ -136,15 +137,16 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.17.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/client_golang v1.20.2 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/shibumi/go-pathspec v1.3.0 // indirect
github.com/theupdateframework/notary v0.7.0 // indirect
github.com/tonistiigi/dchapes-mode v0.0.0-20241001053921-ca0759fec205 // indirect
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea // indirect
github.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
@@ -152,25 +154,23 @@ require (
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/oauth2 v0.16.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.17.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.28.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
golang.org/x/crypto v0.27.0 // indirect
golang.org/x/exp v0.0.0-20240909161429-701f63a606c0 // indirect
golang.org/x/net v0.29.0 // indirect
golang.org/x/oauth2 v0.21.0 // indirect
golang.org/x/time v0.6.0 // indirect
golang.org/x/tools v0.25.0 // indirect
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/protobuf v1.33.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/klog/v2 v2.110.1 // indirect

go.sum
View File

@@ -1,8 +1,7 @@
cloud.google.com/go v0.112.0 h1:tpFCD7hpHFlQ8yPwT3x+QeXqc2T6+n6T+hmABHfDUSM=
cloud.google.com/go/compute v1.23.3 h1:6sVlXXBmbd7jNX0Ipq0trII3e4n1/MsADLK6a+aiVlk=
cloud.google.com/go/compute v1.23.3/go.mod h1:VCgBUoMnIVIR0CscqQiPJLAG25E3ZRZMzcFZeQ+h8CI=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 h1:59MxjQVfjXsBpLy+dbd2/ELV5ofnUkUZBvWSC85sheA=
@@ -15,8 +14,8 @@ github.com/Masterminds/semver/v3 v3.2.1 h1:RN9w6+7QoMeJVGyfmbcgs28Br8cvmnucEXnY0
github.com/Masterminds/semver/v3 v3.2.1/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/Microsoft/hcsshim v0.11.7 h1:vl/nj3Bar/CvJSYo7gIQPyRWc9f3c6IeSNavBTSZNZQ=
github.com/Microsoft/hcsshim v0.11.7/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
github.com/Microsoft/hcsshim v0.12.8 h1:BtDWYlFMcWhorrvSSo2M7z0csPdw6t7no/C3FsSvqiI=
github.com/Microsoft/hcsshim v0.12.8/go.mod h1:cibQ4BqhJ32FXDwPdQhKhwrwophnh3FuT4nwQZF907w=
github.com/Shopify/logrus-bugsnag v0.0.0-20170309145241-6dbc35f2c30d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
@@ -74,30 +73,31 @@ github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembj
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM=
github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004 h1:lkAMpLVBDaj17e85keuznYcH5rqI438v41pKcBl4ZxQ=
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa h1:jQCWAUqqlij9Pgj2i/PB79y4KOPYVyFYdROxgaCwdTQ=
github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa/go.mod h1:x/1Gn8zydmfq8dk6e9PdstVsDgu9RuyIIJqAaF//0IM=
github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b h1:ga8SEFjZ60pxLcmhnThWgvH2wg8376yUJmPhEH4H3kw=
github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE=
github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4=
github.com/compose-spec/compose-go/v2 v2.1.6 h1:d0Cs0DffmOwmSzs0YPHwKCskknGq2jfGg4uGowlEpps=
github.com/compose-spec/compose-go/v2 v2.1.6/go.mod h1:lFN0DrMxIncJGYAXTfWuajfwj5haBJqrBkarHcnjJKc=
github.com/compose-spec/compose-go/v2 v2.4.4 h1:cvHBl5Jf1iNBmRrZCICmHvaoskYc1etTPEMLKVwokAY=
github.com/compose-spec/compose-go/v2 v2.4.4/go.mod h1:lFN0DrMxIncJGYAXTfWuajfwj5haBJqrBkarHcnjJKc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/cgroups/v3 v3.0.3 h1:S5ByHZ/h9PMe5IOQoN7E+nMc2UcLEM/V48DGDJ9kip0=
github.com/containerd/cgroups/v3 v3.0.3/go.mod h1:8HBe7V3aWGLFPd/k03swSIsGjZhHI2WzJmticMgVuz0=
github.com/containerd/console v1.0.4 h1:F2g4+oChYvBTsASRTz8NP6iIAi97J3TtSAsLbIFn4ro=
github.com/containerd/console v1.0.4/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/containerd v1.7.21 h1:USGXRK1eOC/SX0L195YgxTHb0a00anxajOzgfN0qrCA=
github.com/containerd/containerd v1.7.21/go.mod h1:e3Jz1rYRUZ2Lt51YrH9Rz0zPyJBOlSvB3ghr2jbVD8g=
github.com/containerd/containerd v1.7.24 h1:zxszGrGjrra1yYJW/6rhm9cJ1ZQ8rkKBR48brqsa7nA=
github.com/containerd/containerd v1.7.24/go.mod h1:7QUzfURqZWCZV7RLNEn1XjUCQLEf0bkaK4GjUaZehxw=
github.com/containerd/containerd/api v1.7.19 h1:VWbJL+8Ap4Ju2mx9c9qS1uFSB1OVYr5JJrW2yT5vFoA=
github.com/containerd/containerd/api v1.7.19/go.mod h1:fwGavl3LNwAV5ilJ0sbrABL44AQxmNjDRcwheXDb6Ig=
github.com/containerd/continuity v0.4.3 h1:6HVkalIp+2u1ZLH1J/pYX2oBVXlJZvh1X1A7bEZ9Su8=
github.com/containerd/continuity v0.4.3/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
github.com/containerd/errdefs v0.1.0 h1:m0wCRBiu1WJT/Fr+iOoQHMQS/eP5myQ8lCv4Dz5ZURM=
github.com/containerd/errdefs v0.1.0/go.mod h1:YgWiiHtLmSeBrvpw+UfPijzbLaB77mEG1WwJTDETIV0=
github.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4=
github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=
github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
@@ -111,10 +111,11 @@ github.com/containerd/stargz-snapshotter/estargz v0.15.1 h1:eXJjw9RbkLFgioVaTG+G
github.com/containerd/stargz-snapshotter/estargz v0.15.1/go.mod h1:gr2RNwukQ/S9Nv33Lt6UC7xEx58C+LHRdoqbEKjz1Kk=
github.com/containerd/ttrpc v1.2.5 h1:IFckT1EFQoFBMG4c3sMdT8EP3/aKfumK1msY+Ze4oLU=
github.com/containerd/ttrpc v1.2.5/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.2.0 h1:6NBDbQzr7I5LHgp34xAXYF5DOTQDn05X58lsPEmzLso=
github.com/containerd/typeurl/v2 v2.2.0/go.mod h1:8XOOxnyatxSWuG8OfsZXVnAF4iZfedjS/8UHSPJnX4g=
github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4=
github.com/containerd/typeurl/v2 v2.2.3 h1:yNA/94zxWdvYACdYO8zofhrTVuQY73fFU1y++dYSw40=
github.com/containerd/typeurl/v2 v2.2.3/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.21 h1:1/QdRyBaHHJP61QkWMXlOIBfsgdDeeKfK8SYVUWJKf0=
github.com/creack/pty v1.1.21/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
@@ -124,15 +125,15 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/cli v27.2.0+incompatible h1:yHD1QEB1/0vr5eBNpu8tncu8gWxg8EydFPOSKHzXSMM=
github.com/docker/cli v27.2.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v27.4.0-rc.2+incompatible h1:A0GZwegDlt2wdt3tpmrUzkVOZmbhvd7i05wPSf7Oo74=
github.com/docker/cli v27.4.0-rc.2+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli-docs-tool v0.8.0 h1:YcDWl7rQJC3lJ7WVZRwSs3bc9nka97QLWfyJQli8yJU=
github.com/docker/cli-docs-tool v0.8.0/go.mod h1:8TQQ3E7mOXoYUs811LiPdUnAhXrcVsBIrW21a5pUbdk=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v27.2.0+incompatible h1:Rk9nIVdfH3+Vz4cyI/uhbINhEZ/oLmc+CBXmH6fbNk4=
github.com/docker/docker v27.2.0+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v27.4.0-rc.2+incompatible h1:9OJjVGtelk/zGC3TyKweJ29b9Axzh0s/0vtU4mneumE=
github.com/docker/docker v27.4.0-rc.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=
github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
@@ -165,8 +166,8 @@ github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
@@ -186,15 +187,11 @@ github.com/go-viper/mapstructure/v2 v2.0.0 h1:dhn8MZ1gZ0mzeodTG3jt5Vj/o87xZKuNAp
github.com/go-viper/mapstructure/v2 v2.0.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/gogo/googleapis v1.4.1 h1:1Yx4Myt7BxzvUr5ldGSbwYiZG6t9wGBZ+8/fX3Wvtq0=
github.com/gogo/googleapis v1.4.1/go.mod h1:2lpHqI5OcWCtVElxXnPt+s8oJvMpySlOyM6xDCrzib4=
github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/glog v1.2.0 h1:uCdmnmatrKCgMBlM4rMuJZWOkPDqdbZPnrMXDY4gI68=
github.com/golang/glog v1.2.0/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -202,8 +199,6 @@ github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5y
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/certificate-transparency-go v1.0.10-0.20180222191210-5ab67e519c93 h1:jc2UWq7CbdszqeH6qu1ougXMIUBfSy8Pbh/anURYbGI=
@@ -212,7 +207,6 @@ github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvR
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
@@ -232,8 +226,8 @@ github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWS
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 h1:YBftPWNWd4WwGqtY2yeZL2ef8rHAxPBD8KFhJpmcqms=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0/go.mod h1:YN5jB8ie0yfIUg6VvR9Kz84aCaG7AsGZnLjhHbUqwPg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -270,8 +264,8 @@ github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVE
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
@@ -283,6 +277,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lib/pq v0.0.0-20150723085316-0dad96c0b94f/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/magiconair/properties v1.5.3 h1:C8fxWnhYyME3n0klPOhVM7PtYUB3eV1W3DeFmN3j53Y=
github.com/magiconair/properties v1.5.3/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
@@ -294,8 +290,6 @@ github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebG
github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/mattn/go-sqlite3 v1.6.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/pkcs11 v1.0.2/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
@@ -307,8 +301,8 @@ github.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/z
github.com/mitchellh/mapstructure v0.0.0-20150613213606-2caf8efc9366/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/buildkit v0.16.0-rc2 h1:5uFWrGujJOWHu0duIcz0tBagS97xddogZ3OxNEfezrU=
github.com/moby/buildkit v0.16.0-rc2/go.mod h1:9STuyLZDJNGenp/smTiR01mnvqlO5u5ZW/0/aWHQcio=
github.com/moby/buildkit v0.18.0 h1:KSelhNINJcNA3FCWBbGCytvicjP+kjU5kZlZhkTUkVo=
github.com/moby/buildkit v0.18.0/go.mod h1:vCR5CX8NGsPTthTg681+9kdmfvkvqJBXEv71GZe5msU=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
@@ -319,8 +313,8 @@ github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/sys/mountinfo v0.7.2 h1:1shs6aH5s4o5H2zQLn796ADW1wMrIwHsyJ2v9KouLrg=
github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/signal v0.7.1 h1:PrQxdvxcGijdo6UXXo/lU/TvHUWyPhj7UOpSo8tuvk0=
github.com/moby/sys/signal v0.7.1/go.mod h1:Se1VGehYokAkrSQwL4tDzHvETwUZlnY7S5XtQ50mQp8=
github.com/moby/sys/user v0.3.0 h1:9ni5DlcW5an3SvRSx4MouotOygvzaXbaSrc/wGDFWPo=
@@ -371,24 +365,26 @@ github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.0-pre1.0.20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_golang v1.20.2 h1:5ctymQzZlyOON1666svgwn3s6IKWgfbjsejTMiXIyjg=
github.com/prometheus/client_golang v1.20.2/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -397,8 +393,8 @@ github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0leargg
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/secure-systems-lab/go-securesystemslib v0.4.0 h1:b23VGrQhTA8cN2CbBw7/FulN9fTtqYUdS5+Oxzt+DUE=
@@ -443,8 +439,10 @@ github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsT
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/theupdateframework/notary v0.7.0 h1:QyagRZ7wlSpjT5N2qQAh/pN+DVqgekv4DzbAiAiEL3c=
github.com/theupdateframework/notary v0.7.0/go.mod h1:c9DRxcmhHmVLDay4/2fUYdISnHqbFDGRSlXPO0AhYWw=
github.com/tonistiigi/fsutil v0.0.0-20240424095704-91a3fc46842c h1:+6wg/4ORAbnSoGDzg2Q1i3CeMcT/jjhye/ZfnBHy7/M=
github.com/tonistiigi/fsutil v0.0.0-20240424095704-91a3fc46842c/go.mod h1:vbbYqJlnswsbJqWUcJN8fKtBhnEgldDrcagTgnBVKKM=
github.com/tonistiigi/dchapes-mode v0.0.0-20241001053921-ca0759fec205 h1:eUk79E1w8yMtXeHSzjKorxuC8qJOnyXQnLaJehxpJaI=
github.com/tonistiigi/dchapes-mode v0.0.0-20241001053921-ca0759fec205/go.mod h1:3Iuxbr0P7D3zUzBMAZB+ois3h/et0shEz0qApgHYGpY=
github.com/tonistiigi/fsutil v0.0.0-20241121093142-31cf1f437184 h1:RgyoSI38Y36zjQaszel/0RAcIehAnjA1B0RiUV9SDO4=
github.com/tonistiigi/fsutil v0.0.0-20241121093142-31cf1f437184/go.mod h1:Dl/9oEjK7IqnjAm21Okx/XIxUCFJzvh+XdVHUlBwXTw=
github.com/tonistiigi/go-csvvalue v0.0.0-20240710180619-ddb21b71c0b4 h1:7I5c2Ig/5FgqkYOh/N87NzoyI9U15qUPXhDD8uCupv8=
github.com/tonistiigi/go-csvvalue v0.0.0-20240710180619-ddb21b71c0b4/go.mod h1:278M4p8WsNh3n4a1eqiFcV2FGk7wE5fwUpUom9mK9lE=
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea h1:SXhTLE6pb6eld/v/cCndK0AMpt1wiVFb/YYmqB3/QG0=
@@ -463,7 +461,6 @@ github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zclconf/go-cty v1.4.0/go.mod h1:nHzOclRkoj++EU9ZjSrZvRG0BXIWt8c7loYc0qXAFGQ=
github.com/zclconf/go-cty v1.14.4 h1:uXXczd9QDGsgu0i/QFR/hzI5NYCHLf6NQw/atrbnhq8=
github.com/zclconf/go-cty v1.14.4/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
@@ -475,30 +472,30 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.4
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1/go.mod h1:4UoMYEZOC0yN/sPGH76KPkkU7zgiEWYWL9vwmbnTJPE=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.46.1 h1:gbhw/u49SS3gkPWiYweQNJGm/uJN5GkI/FrosxSHT7A=
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.46.1/go.mod h1:GnOaBaFQ2we3b9AGWJpsBa7v1S5RlQzlC3O7dRMxZhM=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 h1:aFJWCqJMNjENlcleuuOkGAPH82y0yULBScfXcIEdS24=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1/go.mod h1:sEGXWArGqc3tVa+ekntsN65DmVbVeW+7lTKTjZF3/Fo=
go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc=
go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg=
go.opentelemetry.io/otel v1.28.0 h1:/SqNcYk+idO0CxKEUOtKQClMK/MimZihKYMruSMViUo=
go.opentelemetry.io/otel v1.28.0/go.mod h1:q68ijF8Fc8CnMHKyzqL6akLO46ePnjkgfIMIjUIX9z4=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.44.0 h1:jd0+5t/YynESZqsSyPz+7PAFdEop0dlN0+PkyHYo8oI=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.44.0/go.mod h1:U707O40ee1FpQGyhvqnzmCJm1Wh6OX6GGBVn0E6Uyyk=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.44.0 h1:bflGWrfYyuulcdxf14V6n9+CoQcu5SAAdHmDPAJnlps=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.44.0/go.mod h1:qcTO4xHAxZLaLxPd60TdE88rxtItPHgHWqOhOGRr0as=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 h1:tIqheXEFWAZ7O8A7m+J0aPTmpJN3YQ7qetUAdkkkKpk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0/go.mod h1:nUeKExfxAQVbiVFn32YXpXZZHZ61Cc3s3Rn1pDBGAb0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkEZCJWobwBqMwC0cwCq8/wkkRy/OowZg5OArWZrM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4=
go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM=
go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8=
go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E=
go.opentelemetry.io/otel/sdk/metric v1.21.0 h1:smhI5oD714d6jHE6Tie36fPx4WDFIg+Y6RfAY4ICcR0=
go.opentelemetry.io/otel/sdk/metric v1.21.0/go.mod h1:FJ8RAsoPGv/wYMgBdUJXOm+6pzFY3YdljnXtv1SBE8Q=
go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc=
go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ=
go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 h1:R3X6ZXmNPRR8ul6i3WgFURCHzaXjHdm0karRG/+dj3s=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0/go.mod h1:QWFXnDavXWwMx2EEcZsf3yxgEKAqsxQ+Syjp+seyInw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0 h1:j9+03ymgYhPKmeXGk5Zu+cIZOlVzd9Zv7QIiyItjFBU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0/go.mod h1:Y5+XiUG4Emn1hTfciPzGPJaSI+RpDts6BnCIir0SLqk=
go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6bOeuA5Q=
go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s=
go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE=
go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg=
go.opentelemetry.io/otel/sdk/metric v1.28.0 h1:OkuaKgKrgAbYrrY0t92c+cC+2F6hsFNnCQArXCKlg08=
go.opentelemetry.io/otel/sdk/metric v1.28.0/go.mod h1:cWPjykihLAPvXKi4iZc1dpER3Jdq2Z0YLse3moQUCpg=
go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g=
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -509,16 +506,14 @@ golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200422194213-44a606286825/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201117144127-c1f2f97bffc9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 h1:hNQpMuAJe5CtcUqCXaWga3FHu+kQvCqcsoVaQgSV60o=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3/go.mod h1:idGWGoKP1toJGkd5/ig9ZLuPcZBC3ewk7SzmH0uou08=
golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A=
golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70=
golang.org/x/exp v0.0.0-20240909161429-701f63a606c0 h1:e66Fs6Z+fZTbFBAxKfP3PALWBtpfqks2bwGcexMxgtk=
golang.org/x/exp v0.0.0-20240909161429-701f63a606c0/go.mod h1:2TbTHSBQa924w8M6Xs1QcRcFwyucIwBGpK1p2f1YFFY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -526,21 +521,18 @@ golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/oauth2 v0.16.0 h1:aDkGMBSYxElaoP81NpoUoz2oo2R2wHdZpGToUxfyQrQ=
golang.org/x/oauth2 v0.16.0/go.mod h1:hqZ+0LWXsiVoZpeld6jVt06P3adbS2Uu911W1SsJv2o=
golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -550,56 +542,45 @@ golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.20.0 h1:VnkxpohqXaOBYJtBmEppKUG6mXpi+4O6purfc2+sMhw=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.24.0 h1:Mh5cbb+Zk2hqqXNO7S1iTjEphVL+jb8ZWaqh/g+JWkM=
golang.org/x/term v0.24.0/go.mod h1:lOBK/LVxemqiMij05LGJ0tzNr8xlmwBRJ81PX6wVLH8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224=
golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc=
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps=
golang.org/x/tools v0.25.0 h1:oFU9pkj/iJgs+0DT+VMHrx+oBKs/LJMV+Uvg78sl+fE=
golang.org/x/tools v0.25.0/go.mod h1:/vtpO8WL1N9cQC3FN5zPqb//fRXskFHbLKk4OW1Q7rg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 h1:Lj5rbfG876hIAYFjqiJnPHfhXbv+nzTWfm04Fg/XSVU=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80/go.mod h1:4jWUdICTdgc3Ibxmr8nAJiiLHwQBY0UI0XZcEMaFKaA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 h1:AjyfHzEPEFp/NpvfN5g+KDla3EMojjhRVZc1i7cj+oM=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 h1:0+ozOGcrp+Y8Aq8TLNN2Aliibms5LEzsq99ZZmAGYm0=
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094/go.mod h1:fJ/e3If/Q67Mj99hin0hMhiNyCRmt6BQ2aWIJshUSJw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 h1:BwIjyKYGsK9dMCBOorzRri8MQwmi7mT9rGHsCEinZkA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
google.golang.org/grpc v1.0.5/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.62.0 h1:HQKZ/fa1bXkX1oFOvSjmZEUL8wLSaZTjCcLAlmZRtdk=
google.golang.org/grpc v1.62.0/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/grpc v1.66.3 h1:TWlsh8Mv0QI/1sIbs1W36lqRclxrmF+eFJ4DbI0fuhA=
google.golang.org/grpc v1.66.3/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 h1:F29+wU6Ee6qgu9TddPgooOdaqsxTMunOoj8KA5yuS5A=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1/go.mod h1:5KF+wpkbTSbGcR9zteSqZV6fqFOWBl4Yde8En8MryZA=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/cenkalti/backoff.v2 v2.2.1 h1:eJ9UAg01/HIHG987TwxvnzK2MgxXq97YY6rYDpY9aII=


@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22
ARG GO_VERSION=1.23
ARG FORMATS=md,yaml
FROM golang:${GO_VERSION}-alpine AS docsgen


@@ -5,12 +5,15 @@
# Copyright The Buildx Authors.
# Licensed under the Apache License, Version 2.0
ARG GO_VERSION="1.22"
ARG PROTOC_VERSION="3.11.4"
ARG GO_VERSION=1.23
ARG PROTOC_VERSION=3.11.4
ARG PROTOC_GOOGLEAPIS_VERSION=2af421884dd468d565137215c946ebe4e245ae26
# protoc is dynamically linked to glibc so can't use alpine base
FROM golang:${GO_VERSION}-bookworm AS base
RUN apt-get update && apt-get --no-install-recommends install -y git unzip
FROM base AS protoc
ARG PROTOC_VERSION
ARG TARGETOS
ARG TARGETARCH
@@ -18,34 +21,69 @@ RUN <<EOT
set -e
arch=$(echo $TARGETARCH | sed -e s/amd64/x86_64/ -e s/arm64/aarch_64/)
wget -q https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-${TARGETOS}-${arch}.zip
unzip protoc-${PROTOC_VERSION}-${TARGETOS}-${arch}.zip -d /usr/local
unzip protoc-${PROTOC_VERSION}-${TARGETOS}-${arch}.zip -d /opt/protoc
EOT
WORKDIR /go/src/github.com/docker/buildx
FROM base AS tools
RUN --mount=type=bind,target=.,rw \
FROM base AS googleapis
ARG PROTOC_GOOGLEAPIS_VERSION
RUN <<EOT
set -e
wget -q https://github.com/googleapis/googleapis/archive/${PROTOC_GOOGLEAPIS_VERSION}.zip -O googleapis.zip
unzip googleapis.zip '*.proto' -d /opt
mkdir -p /opt/googleapis
mv /opt/googleapis-${PROTOC_GOOGLEAPIS_VERSION} /opt/googleapis/include
EOT
FROM base AS gobuild-base
WORKDIR /app
FROM gobuild-base AS vtprotobuf
RUN --mount=type=bind,source=go.mod,target=/app/go.mod \
--mount=type=bind,source=go.sum,target=/app/go.sum \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod <<EOT
set -e
mkdir -p /opt/vtprotobuf
go mod download github.com/planetscale/vtprotobuf
cp -r $(go list -m -f='{{.Dir}}' github.com/planetscale/vtprotobuf)/include /opt/vtprotobuf
EOT
FROM gobuild-base AS vendored
RUN --mount=type=bind,source=vendor,target=/app <<EOT
set -e
mkdir -p /opt/vendored/include
find . -name '*.proto' | tar -cf - --files-from - | tar -C /opt/vendored/include -xf -
EOT
FROM gobuild-base AS tools
RUN --mount=type=bind,source=go.mod,target=/app/go.mod,ro \
--mount=type=bind,source=go.sum,target=/app/go.sum,ro \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
go install \
github.com/gogo/protobuf/protoc-gen-gogo \
github.com/gogo/protobuf/protoc-gen-gogofaster \
github.com/gogo/protobuf/protoc-gen-gogoslick \
github.com/golang/protobuf/protoc-gen-go
github.com/planetscale/vtprotobuf/cmd/protoc-gen-go-vtproto \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc
COPY --link --from=protoc /opt/protoc /usr/local
COPY --link --from=googleapis /opt/googleapis /usr/local
COPY --link --from=vtprotobuf /opt/vtprotobuf /usr/local
COPY --link --from=vendored /opt/vendored /usr/local
FROM tools AS generated
RUN --mount=type=bind,target=.,rw <<EOT
RUN --mount=type=bind,target=github.com/docker/buildx,ro <<EOT
set -ex
go generate -mod=vendor -v ./...
mkdir /out
git ls-files -m --others -- ':!vendor' '**/*.pb.go' | tar -cf - --files-from - | tar -C /out -xf -
find github.com/docker/buildx -name '*.proto' -o -name vendor -prune -false | xargs \
protoc --go_out=/out --go-grpc_out=require_unimplemented_servers=false:/out \
--go-vtproto_out=features=marshal+unmarshal+size+equal+pool+clone:/out
EOT
FROM scratch AS update
COPY --from=generated /out /
COPY --from=generated /out/github.com/docker/buildx /
FROM base AS validate
FROM gobuild-base AS validate
RUN --mount=type=bind,target=.,rw \
--mount=type=bind,from=generated,source=/out,target=/generated-files <<EOT
--mount=type=bind,from=update,target=/generated-files <<EOT
set -e
git add -A
if [ "$(ls -A /generated-files)" ]; then


@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION="1.22"
ARG GOVULNCHECK_VERSION="v1.1.3"
ARG GO_VERSION=1.23
ARG GOVULNCHECK_VERSION=v1.1.3
ARG FORMAT="text"
FROM golang:${GO_VERSION}-alpine AS base


@@ -1,12 +1,11 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22
ARG XX_VERSION=1.3.0
ARG GOLANGCI_LINT_VERSION=1.57.2
ARG GOPLS_VERSION=v0.20.0
ARG GO_VERSION=1.23
ARG XX_VERSION=1.5.0
ARG GOLANGCI_LINT_VERSION=1.62.0
ARG GOPLS_VERSION=v0.26.0
# disabled: deprecated unusedvariable simplifyrange
ARG GOPLS_ANALYZERS="embeddirective fillreturns infertypeargs nonewvars noresultvalues simplifycompositelit simplifyslice stubmethods undeclaredname unusedparams useany"
ARG GOPLS_ANALYZERS="embeddirective fillreturns infertypeargs nonewvars norangeoverfunc noresultvalues simplifycompositelit simplifyslice undeclaredname unusedparams useany"
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx


@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.22
ARG GO_VERSION=1.23
ARG MODOUTDATED_VERSION=v0.9.0
FROM golang:${GO_VERSION}-alpine AS base


@@ -8,7 +8,7 @@ import (
"path/filepath"
"sync"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/buildx/util/confutil"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
@@ -42,18 +42,18 @@ type StateGroup struct {
}
type LocalState struct {
root string
cfg *confutil.Config
}
func New(root string) (*LocalState, error) {
if root == "" {
return nil, errors.Errorf("root dir empty")
func New(cfg *confutil.Config) (*LocalState, error) {
if cfg.Dir() == "" {
return nil, errors.Errorf("config dir empty")
}
if err := os.MkdirAll(filepath.Join(root, refsDir), 0700); err != nil {
if err := cfg.MkdirAll(refsDir, 0700); err != nil {
return nil, err
}
return &LocalState{
root: root,
cfg: cfg,
}, nil
}
@@ -61,7 +61,7 @@ func (ls *LocalState) ReadRef(builderName, nodeName, id string) (*State, error)
if err := ls.validate(builderName, nodeName, id); err != nil {
return nil, err
}
dt, err := os.ReadFile(filepath.Join(ls.root, refsDir, builderName, nodeName, id))
dt, err := os.ReadFile(filepath.Join(ls.cfg.Dir(), refsDir, builderName, nodeName, id))
if err != nil {
return nil, err
}
@@ -76,19 +76,19 @@ func (ls *LocalState) SaveRef(builderName, nodeName, id string, st State) error
if err := ls.validate(builderName, nodeName, id); err != nil {
return err
}
refDir := filepath.Join(ls.root, refsDir, builderName, nodeName)
if err := os.MkdirAll(refDir, 0700); err != nil {
refDir := filepath.Join(refsDir, builderName, nodeName)
if err := ls.cfg.MkdirAll(refDir, 0700); err != nil {
return err
}
dt, err := json.Marshal(st)
if err != nil {
return err
}
return ioutils.AtomicWriteFile(filepath.Join(refDir, id), dt, 0600)
return ls.cfg.AtomicWriteFile(filepath.Join(refDir, id), dt, 0644)
}
func (ls *LocalState) ReadGroup(id string) (*StateGroup, error) {
dt, err := os.ReadFile(filepath.Join(ls.root, refsDir, groupDir, id))
dt, err := os.ReadFile(filepath.Join(ls.cfg.Dir(), refsDir, groupDir, id))
if err != nil {
return nil, err
}
@@ -100,15 +100,15 @@ func (ls *LocalState) ReadGroup(id string) (*StateGroup, error) {
}
func (ls *LocalState) SaveGroup(id string, stg StateGroup) error {
refDir := filepath.Join(ls.root, refsDir, groupDir)
if err := os.MkdirAll(refDir, 0700); err != nil {
refDir := filepath.Join(refsDir, groupDir)
if err := ls.cfg.MkdirAll(refDir, 0700); err != nil {
return err
}
dt, err := json.Marshal(stg)
if err != nil {
return err
}
return ioutils.AtomicWriteFile(filepath.Join(refDir, id), dt, 0600)
return ls.cfg.AtomicWriteFile(filepath.Join(refDir, id), dt, 0600)
}
func (ls *LocalState) RemoveBuilder(builderName string) error {
@@ -116,7 +116,7 @@ func (ls *LocalState) RemoveBuilder(builderName string) error {
return errors.Errorf("builder name empty")
}
dir := filepath.Join(ls.root, refsDir, builderName)
dir := filepath.Join(ls.cfg.Dir(), refsDir, builderName)
if _, err := os.Lstat(dir); err != nil {
if !os.IsNotExist(err) {
return err
@@ -147,7 +147,7 @@ func (ls *LocalState) RemoveBuilderNode(builderName string, nodeName string) err
return errors.Errorf("node name empty")
}
dir := filepath.Join(ls.root, refsDir, builderName, nodeName)
dir := filepath.Join(ls.cfg.Dir(), refsDir, builderName, nodeName)
if _, err := os.Lstat(dir); err != nil {
if !os.IsNotExist(err) {
return err
@@ -208,7 +208,7 @@ func (ls *LocalState) removeGroup(id string) error {
if id == "" {
return errors.Errorf("group ref empty")
}
f := filepath.Join(ls.root, refsDir, groupDir, id)
f := filepath.Join(ls.cfg.Dir(), refsDir, groupDir, id)
if _, err := os.Lstat(f); err != nil {
if !os.IsNotExist(err) {
return err
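
A minimal, hypothetical sketch of how callers construct the local state store after this refactor (the config directory is an arbitrary example path; confutil.NewConfig and confutil.WithDir mirror the test changes further down):

package main

import (
	"log"

	"github.com/docker/buildx/localstate"
	"github.com/docker/buildx/util/confutil"
)

func main() {
	// Root a confutil.Config at an explicit directory (example path), then
	// open the local build state; localstate.New creates its refs directory
	// through cfg.MkdirAll instead of writing under an arbitrary root path.
	cfg := confutil.NewConfig(nil, confutil.WithDir("/tmp/buildx-config"))
	ls, err := localstate.New(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = ls // e.g. ls.ReadRef(builder, node, id) to load a saved build ref
}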


@@ -4,6 +4,7 @@ import (
"path/filepath"
"testing"
"github.com/docker/buildx/util/confutil"
"github.com/stretchr/testify/require"
)
@@ -39,10 +40,10 @@ func newls(t *testing.T) *LocalState {
t.Helper()
tmpdir := t.TempDir()
l, err := New(tmpdir)
l, err := New(confutil.NewConfig(nil, confutil.WithDir(tmpdir)))
require.NoError(t, err)
require.DirExists(t, filepath.Join(tmpdir, refsDir))
require.Equal(t, tmpdir, l.root)
require.Equal(t, tmpdir, l.cfg.Dir())
require.NoError(t, l.SaveRef(testBuilderName, testNodeName, testStateRefID, testStateRef))


@@ -13,11 +13,11 @@ import (
type ExecCmd struct {
m types.Monitor
invokeConfig controllerapi.InvokeConfig
invokeConfig *controllerapi.InvokeConfig
stdout io.WriteCloser
}
func NewExecCmd(m types.Monitor, invokeConfig controllerapi.InvokeConfig, stdout io.WriteCloser) types.Command {
func NewExecCmd(m types.Monitor, invokeConfig *controllerapi.InvokeConfig, stdout io.WriteCloser) types.Command {
return &ExecCmd{m, invokeConfig, stdout}
}
@@ -41,7 +41,7 @@ func (cm *ExecCmd) Exec(ctx context.Context, args []string) error {
if len(args) < 2 {
return errors.Errorf("command must be passed")
}
cfg := controllerapi.InvokeConfig{
cfg := &controllerapi.InvokeConfig{
Entrypoint: []string{args[1]},
Cmd: args[2:],
NoCmd: false,


@@ -20,10 +20,10 @@ type ReloadCmd struct {
progress *progress.Printer
options *controllerapi.BuildOptions
invokeConfig controllerapi.InvokeConfig
invokeConfig *controllerapi.InvokeConfig
}
func NewReloadCmd(m types.Monitor, stdout io.WriteCloser, progress *progress.Printer, options *controllerapi.BuildOptions, invokeConfig controllerapi.InvokeConfig) types.Command {
func NewReloadCmd(m types.Monitor, stdout io.WriteCloser, progress *progress.Printer, options *controllerapi.BuildOptions, invokeConfig *controllerapi.InvokeConfig) types.Command {
return &ReloadCmd{m, stdout, progress, options, invokeConfig}
}
@@ -61,7 +61,7 @@ func (cm *ReloadCmd) Exec(ctx context.Context, args []string) error {
}
var resultUpdated bool
cm.progress.Unpause()
ref, _, err := cm.m.Build(ctx, *bo, nil, cm.progress) // TODO: support stdin, hold build ref
ref, _, _, err := cm.m.Build(ctx, bo, nil, cm.progress) // TODO: support stdin, hold build ref
cm.progress.Pause()
if err != nil {
var be *controllererrors.BuildError


@@ -13,11 +13,11 @@ import (
type RollbackCmd struct {
m types.Monitor
invokeConfig controllerapi.InvokeConfig
invokeConfig *controllerapi.InvokeConfig
stdout io.WriteCloser
}
func NewRollbackCmd(m types.Monitor, invokeConfig controllerapi.InvokeConfig, stdout io.WriteCloser) types.Command {
func NewRollbackCmd(m types.Monitor, invokeConfig *controllerapi.InvokeConfig, stdout io.WriteCloser) types.Command {
return &RollbackCmd{m, invokeConfig, stdout}
}


@@ -10,6 +10,7 @@ import (
"text/tabwriter"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor/commands"
@@ -30,7 +31,7 @@ type MonitorBuildResult struct {
}
// RunMonitor provides an interactive session for running and managing containers via specified IO.
func RunMonitor(ctx context.Context, curRef string, options *controllerapi.BuildOptions, invokeConfig controllerapi.InvokeConfig, c control.BuildxController, stdin io.ReadCloser, stdout io.WriteCloser, stderr console.File, progress *progress.Printer) (*MonitorBuildResult, error) {
func RunMonitor(ctx context.Context, curRef string, options *controllerapi.BuildOptions, invokeConfig *controllerapi.InvokeConfig, c control.BuildxController, stdin io.ReadCloser, stdout io.WriteCloser, stderr console.File, progress *progress.Printer) (*MonitorBuildResult, error) {
defer func() {
if err := c.Disconnect(ctx, curRef); err != nil {
logrus.Warnf("disconnect error: %v", err)
@@ -243,8 +244,8 @@ type monitor struct {
lastBuildResult *MonitorBuildResult
}
func (m *monitor) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error) {
ref, resp, err = m.BuildxController.Build(ctx, options, in, progress)
func (m *monitor) Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, input *build.Inputs, err error) {
ref, resp, _, err = m.BuildxController.Build(ctx, options, in, progress)
m.lastBuildResult = &MonitorBuildResult{Resp: resp, Err: err} // Record build result
return
}
@@ -261,19 +262,19 @@ func (m *monitor) AttachedSessionID() string {
return m.ref.Load().(string)
}
func (m *monitor) Rollback(ctx context.Context, cfg controllerapi.InvokeConfig) string {
func (m *monitor) Rollback(ctx context.Context, cfg *controllerapi.InvokeConfig) string {
pid := identity.NewID()
cfg1 := cfg
cfg1.Rollback = true
return m.startInvoke(ctx, pid, cfg1)
}
func (m *monitor) Exec(ctx context.Context, cfg controllerapi.InvokeConfig) string {
func (m *monitor) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig) string {
return m.startInvoke(ctx, identity.NewID(), cfg)
}
func (m *monitor) Attach(ctx context.Context, pid string) {
m.startInvoke(ctx, pid, controllerapi.InvokeConfig{})
m.startInvoke(ctx, pid, &controllerapi.InvokeConfig{})
}
func (m *monitor) Detach() {
@@ -290,7 +291,7 @@ func (m *monitor) close() {
m.Detach()
}
func (m *monitor) startInvoke(ctx context.Context, pid string, cfg controllerapi.InvokeConfig) string {
func (m *monitor) startInvoke(ctx context.Context, pid string, cfg *controllerapi.InvokeConfig) string {
if m.invokeCancel != nil {
m.invokeCancel() // Finish existing attach
}
@@ -316,7 +317,7 @@ func (m *monitor) startInvoke(ctx context.Context, pid string, cfg controllerapi
return pid
}
func (m *monitor) invoke(ctx context.Context, pid string, cfg controllerapi.InvokeConfig) error {
func (m *monitor) invoke(ctx context.Context, pid string, cfg *controllerapi.InvokeConfig) error {
m.muxIO.Enable(1)
defer m.muxIO.Disable(1)
if err := m.muxIO.SwitchTo(1); err != nil {
@@ -325,7 +326,7 @@ func (m *monitor) invoke(ctx context.Context, pid string, cfg controllerapi.Invo
if m.AttachedSessionID() == "" {
return nil
}
invokeCtx, invokeCancel := context.WithCancel(ctx)
invokeCtx, invokeCancel := context.WithCancelCause(ctx)
containerIn, containerOut := ioset.Pipe()
m.invokeIO.SetOut(&containerOut)
@@ -335,7 +336,7 @@ func (m *monitor) invoke(ctx context.Context, pid string, cfg controllerapi.Invo
cancelOnce.Do(func() {
containerIn.Close()
m.invokeIO.SetOut(nil)
invokeCancel()
invokeCancel(errors.WithStack(context.Canceled))
})
<-waitInvokeDoneCh
}


@@ -12,10 +12,10 @@ type Monitor interface {
control.BuildxController
// Rollback re-runs the interactive container with initial rootfs contents.
Rollback(ctx context.Context, cfg controllerapi.InvokeConfig) string
Rollback(ctx context.Context, cfg *controllerapi.InvokeConfig) string
// Exec executes a process in the interactive container.
Exec(ctx context.Context, cfg controllerapi.InvokeConfig) string
Exec(ctx context.Context, cfg *controllerapi.InvokeConfig) string
// Attach attaches IO to a process in the container.
Attach(ctx context.Context, pid string)
@@ -50,7 +50,6 @@ type CommandInfo struct {
// Command represents a command for debugging.
type Command interface {
// Exec executes the command.
Exec(ctx context.Context, args []string) error
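
As a hypothetical usage sketch of the pointer-based signatures above (the entrypoint/cmd values are arbitrary examples, and the monitor/types import path is assumed from the surrounding diffs):

package example

import (
	"context"

	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/monitor/types"
)

// runShell starts a process in the attached container and returns the pid
// reported by the monitor; InvokeConfig is now passed by pointer.
func runShell(ctx context.Context, m types.Monitor) string {
	cfg := &controllerapi.InvokeConfig{
		Entrypoint: []string{"sh"}, // example values, mirroring the exec command change
		Cmd:        []string{"-c", "echo hi"},
		NoCmd:      false,
	}
	return m.Exec(ctx, cfg)
}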


@@ -17,13 +17,13 @@ func TestNodeGroupUpdate(t *testing.T) {
err = ng.Update("foo1", "foo1", []string{"linux/arm64", "linux/arm/v7"}, true, true, nil, "", nil)
require.NoError(t, err)
require.Equal(t, len(ng.Nodes), 2)
require.Equal(t, 2, len(ng.Nodes))
// update
err = ng.Update("foo", "foo2", []string{"linux/amd64", "linux/arm"}, true, false, nil, "", nil)
require.NoError(t, err)
require.Equal(t, len(ng.Nodes), 2)
require.Equal(t, 2, len(ng.Nodes))
require.Equal(t, []string{"linux/amd64", "linux/arm/v7"}, platformutil.Format(ng.Nodes[0].Platforms))
require.Equal(t, []string{"linux/arm64"}, platformutil.Format(ng.Nodes[1].Platforms))
@@ -39,6 +39,6 @@ func TestNodeGroupUpdate(t *testing.T) {
err = ng.Leave("foo")
require.NoError(t, err)
require.Equal(t, len(ng.Nodes), 1)
require.Equal(t, 1, len(ng.Nodes))
require.Equal(t, []string{"linux/arm64"}, platformutil.Format(ng.Nodes[0].Platforms))
}


@@ -8,7 +8,7 @@ import (
"time"
"github.com/docker/buildx/localstate"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/buildx/util/confutil"
"github.com/gofrs/flock"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
@@ -20,25 +20,25 @@ const (
activityDir = "activity"
)
func New(root string) (*Store, error) {
if err := os.MkdirAll(filepath.Join(root, instanceDir), 0700); err != nil {
func New(cfg *confutil.Config) (*Store, error) {
if err := cfg.MkdirAll(instanceDir, 0700); err != nil {
return nil, err
}
if err := os.MkdirAll(filepath.Join(root, defaultsDir), 0700); err != nil {
if err := cfg.MkdirAll(defaultsDir, 0700); err != nil {
return nil, err
}
if err := os.MkdirAll(filepath.Join(root, activityDir), 0700); err != nil {
if err := cfg.MkdirAll(activityDir, 0700); err != nil {
return nil, err
}
return &Store{root: root}, nil
return &Store{cfg: cfg}, nil
}
type Store struct {
root string
cfg *confutil.Config
}
func (s *Store) Txn() (*Txn, func(), error) {
l := flock.New(filepath.Join(s.root, ".lock"))
l := flock.New(filepath.Join(s.cfg.Dir(), ".lock"))
if err := l.Lock(); err != nil {
return nil, nil, err
}
@@ -54,7 +54,7 @@ type Txn struct {
}
func (t *Txn) List() ([]*NodeGroup, error) {
pp := filepath.Join(t.s.root, instanceDir)
pp := filepath.Join(t.s.cfg.Dir(), instanceDir)
fis, err := os.ReadDir(pp)
if err != nil {
return nil, err
@@ -84,7 +84,7 @@ func (t *Txn) NodeGroupByName(name string) (*NodeGroup, error) {
if err != nil {
return nil, err
}
dt, err := os.ReadFile(filepath.Join(t.s.root, instanceDir, name))
dt, err := os.ReadFile(filepath.Join(t.s.cfg.Dir(), instanceDir, name))
if err != nil {
return nil, err
}
@@ -110,7 +110,7 @@ func (t *Txn) Save(ng *NodeGroup) error {
if err != nil {
return err
}
return ioutils.AtomicWriteFile(filepath.Join(t.s.root, instanceDir, name), dt, 0600)
return t.s.cfg.AtomicWriteFile(filepath.Join(instanceDir, name), dt, 0600)
}
func (t *Txn) Remove(name string) error {
@@ -121,14 +121,14 @@ func (t *Txn) Remove(name string) error {
if err := t.RemoveLastActivity(name); err != nil {
return err
}
ls, err := localstate.New(t.s.root)
ls, err := localstate.New(t.s.cfg)
if err != nil {
return err
}
if err := ls.RemoveBuilder(name); err != nil {
return err
}
return os.RemoveAll(filepath.Join(t.s.root, instanceDir, name))
return os.RemoveAll(filepath.Join(t.s.cfg.Dir(), instanceDir, name))
}
func (t *Txn) SetCurrent(key, name string, global, def bool) error {
@@ -141,28 +141,28 @@ func (t *Txn) SetCurrent(key, name string, global, def bool) error {
if err != nil {
return err
}
if err := ioutils.AtomicWriteFile(filepath.Join(t.s.root, "current"), dt, 0600); err != nil {
if err := t.s.cfg.AtomicWriteFile("current", dt, 0600); err != nil {
return err
}
h := toHash(key)
if def {
if err := ioutils.AtomicWriteFile(filepath.Join(t.s.root, defaultsDir, h), []byte(name), 0600); err != nil {
if err := t.s.cfg.AtomicWriteFile(filepath.Join(defaultsDir, h), []byte(name), 0600); err != nil {
return err
}
} else {
os.RemoveAll(filepath.Join(t.s.root, defaultsDir, h)) // ignore error
os.RemoveAll(filepath.Join(t.s.cfg.Dir(), defaultsDir, h)) // ignore error
}
return nil
}
func (t *Txn) UpdateLastActivity(ng *NodeGroup) error {
return ioutils.AtomicWriteFile(filepath.Join(t.s.root, activityDir, ng.Name), []byte(time.Now().UTC().Format(time.RFC3339)), 0600)
return t.s.cfg.AtomicWriteFile(filepath.Join(activityDir, ng.Name), []byte(time.Now().UTC().Format(time.RFC3339)), 0600)
}
func (t *Txn) GetLastActivity(ng *NodeGroup) (la time.Time, _ error) {
dt, err := os.ReadFile(filepath.Join(t.s.root, activityDir, ng.Name))
dt, err := os.ReadFile(filepath.Join(t.s.cfg.Dir(), activityDir, ng.Name))
if err != nil {
if os.IsNotExist(errors.Cause(err)) {
return la, nil
@@ -177,7 +177,7 @@ func (t *Txn) RemoveLastActivity(name string) error {
if err != nil {
return err
}
return os.RemoveAll(filepath.Join(t.s.root, activityDir, name))
return os.RemoveAll(filepath.Join(t.s.cfg.Dir(), activityDir, name))
}
func (t *Txn) reset(key string) error {
@@ -185,11 +185,11 @@ func (t *Txn) reset(key string) error {
if err != nil {
return err
}
return ioutils.AtomicWriteFile(filepath.Join(t.s.root, "current"), dt, 0600)
return t.s.cfg.AtomicWriteFile("current", dt, 0600)
}
func (t *Txn) Current(key string) (*NodeGroup, error) {
dt, err := os.ReadFile(filepath.Join(t.s.root, "current"))
dt, err := os.ReadFile(filepath.Join(t.s.cfg.Dir(), "current"))
if err != nil {
if !os.IsNotExist(err) {
return nil, err
@@ -220,7 +220,7 @@ func (t *Txn) Current(key string) (*NodeGroup, error) {
h := toHash(key)
dt, err = os.ReadFile(filepath.Join(t.s.root, defaultsDir, h))
dt, err = os.ReadFile(filepath.Join(t.s.cfg.Dir(), defaultsDir, h))
if err != nil {
if os.IsNotExist(err) {
t.reset(key)
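
A small, hypothetical sketch of reading the builder instance store with the new constructor (the directory is an example path; store.New, Txn, and List are the calls shown in this diff):

package main

import (
	"fmt"
	"log"

	"github.com/docker/buildx/store"
	"github.com/docker/buildx/util/confutil"
)

func main() {
	// The instance store is rooted at a confutil.Config; Txn takes a file
	// lock on <dir>/.lock and must be released via the returned func.
	s, err := store.New(confutil.NewConfig(nil, confutil.WithDir("/tmp/buildx-config")))
	if err != nil {
		log.Fatal(err)
	}
	txn, release, err := s.Txn()
	if err != nil {
		log.Fatal(err)
	}
	defer release()

	ngs, err := txn.List() // node groups saved under the instances directory
	if err != nil {
		log.Fatal(err)
	}
	for _, ng := range ngs {
		fmt.Println(ng.Name)
	}
}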


@@ -5,7 +5,9 @@ import (
"testing"
"time"
"github.com/docker/buildx/util/confutil"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -15,7 +17,7 @@ func TestEmptyStartup(t *testing.T) {
require.NoError(t, err)
defer os.RemoveAll(tmpdir)
s, err := New(tmpdir)
s, err := New(confutil.NewConfig(nil, confutil.WithDir(tmpdir)))
require.NoError(t, err)
txn, release, err := s.Txn()
@@ -33,7 +35,7 @@ func TestNodeLocking(t *testing.T) {
require.NoError(t, err)
defer os.RemoveAll(tmpdir)
s, err := New(tmpdir)
s, err := New(confutil.NewConfig(nil, confutil.WithDir(tmpdir)))
require.NoError(t, err)
_, release, err := s.Txn()
@@ -43,7 +45,7 @@ func TestNodeLocking(t *testing.T) {
go func() {
_, release, err := s.Txn()
require.NoError(t, err)
assert.NoError(t, err)
release()
close(ready)
}()
@@ -68,7 +70,7 @@ func TestNodeManagement(t *testing.T) {
require.NoError(t, err)
defer os.RemoveAll(tmpdir)
s, err := New(tmpdir)
s, err := New(confutil.NewConfig(nil, confutil.WithDir(tmpdir)))
require.NoError(t, err)
txn, release, err := s.Txn()
@@ -240,7 +242,7 @@ func TestNodeInvalidName(t *testing.T) {
t.Parallel()
tmpdir := t.TempDir()
s, err := New(tmpdir)
s, err := New(confutil.NewConfig(nil, confutil.WithDir(tmpdir)))
require.NoError(t, err)
txn, release, err := s.Txn()


@@ -17,7 +17,7 @@ import (
// GetStore returns current builder instance store
func GetStore(dockerCli command.Cli) (*store.Txn, func(), error) {
s, err := store.New(confutil.ConfigDir(dockerCli))
s, err := store.New(confutil.NewConfig(dockerCli))
if err != nil {
return nil, nil, err
}


@@ -63,22 +63,44 @@ var bakeTests = []func(t *testing.T, sb integration.Sandbox){
}
func testBakePrint(t *testing.T, sb integration.Sandbox) {
dockerfile := []byte(`
FROM busybox
ARG HELLO
RUN echo "Hello ${HELLO}"
`)
bakefile := []byte(`
testCases := []struct {
name string
f string
dt []byte
}{
{
"HCL",
"docker-bake.hcl",
[]byte(`
target "build" {
args = {
HELLO = "foo"
}
}
`)
`)},
{
"Compose",
"compose.yml",
[]byte(`
services:
build:
build:
context: .
args:
HELLO: foo
`)},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
dir := tmpdir(
t,
fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
fstest.CreateFile("Dockerfile", dockerfile, 0600),
fstest.CreateFile(tc.f, tc.dt, 0600),
fstest.CreateFile("Dockerfile", []byte(`
FROM busybox
ARG HELLO
RUN echo "Hello ${HELLO}"
`), 0600),
)
cmd := buildxCmd(sb, withDir(dir), withArgs("bake", "--print", "build"))
@@ -103,6 +125,28 @@ target "build" {
require.Equal(t, ".", *def.Target["build"].Context)
require.Equal(t, "Dockerfile", *def.Target["build"].Dockerfile)
require.Equal(t, map[string]*string{"HELLO": ptrstr("foo")}, def.Target["build"].Args)
require.JSONEq(t, `{
"group": {
"default": {
"targets": [
"build"
]
}
},
"target": {
"build": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"HELLO": "foo"
}
}
}
}
`, stdout.String())
})
}
}
func testBakeLocal(t *testing.T, sb integration.Sandbox) {
@@ -464,7 +508,8 @@ EOT
withArgs(addr, "--set", "*.output=type=local,dest="+dirDest),
)
require.Error(t, err, out)
require.Contains(t, out, "outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS")
require.Contains(t, out, "Your build is requesting privileges for following possibly insecure capabilities")
require.Contains(t, out, "Read access to path ../")
out, err = bakeCmd(
sb,
@@ -511,7 +556,8 @@ EOT
withArgs(addr, "--set", "*.output=type=local,dest="+dirDest),
)
require.Error(t, err, out)
require.Contains(t, out, "outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS")
require.Contains(t, out, "Your build is requesting privileges for following possibly insecure capabilities")
require.Contains(t, out, "Read access to path ..")
out, err = bakeCmd(
sb,
@@ -934,7 +980,6 @@ func testBakeMultiPlatform(t *testing.T, sb integration.Sandbox) {
require.NotNil(t, img)
img = imgs.Find("linux/arm64")
require.NotNil(t, img)
} else {
require.Error(t, err, string(out))
require.Contains(t, string(out), "Multi-platform build is not supported")
@@ -1398,4 +1443,48 @@ target "second" {
require.Contains(t, stdout.String(), "Check complete, 1 warning has been found!")
require.Contains(t, stdout.String(), "Check complete, 2 warnings have been found!")
})
t.Run("check for Dockerfile path printed with context when displaying rule check warnings with multiple build targets", func(t *testing.T) {
dockerfile := []byte(`
FROM busybox
copy Dockerfile .
`)
bakefile := []byte(`
target "first" {
dockerfile = "Dockerfile"
}
target "second" {
dockerfile = "subdir/Dockerfile"
}
target "third" {
dockerfile = "subdir/subsubdir/Dockerfile"
}
`)
dir := tmpdir(
t,
fstest.CreateDir("subdir", 0700),
fstest.CreateDir("subdir/subsubdir", 0700),
fstest.CreateFile("Dockerfile", dockerfile, 0600),
fstest.CreateFile("subdir/Dockerfile", dockerfile, 0600),
fstest.CreateFile("subdir/subsubdir/Dockerfile", dockerfile, 0600),
fstest.CreateFile("docker-bake.hcl", bakefile, 0600),
)
dockerfilePathFirst := "Dockerfile"
dockerfilePathSecond := filepath.Join("subdir", "Dockerfile")
dockerfilePathThird := filepath.Join("subdir", "subsubdir", "Dockerfile")
cmd := buildxCmd(
sb,
withDir(dir),
withArgs("bake", "--call", "check", "first", "second", "third"),
)
stdout := bytes.Buffer{}
stderr := bytes.Buffer{}
cmd.Stdout = &stdout
cmd.Stderr = &stderr
require.Error(t, cmd.Run(), stdout.String(), stderr.String())
require.Contains(t, stdout.String(), dockerfilePathFirst+":3")
require.Contains(t, stdout.String(), dockerfilePathSecond+":3")
require.Contains(t, stdout.String(), dockerfilePathThird+":3")
})
}


@@ -17,6 +17,7 @@ import (
"github.com/containerd/platforms"
"github.com/creack/pty"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/gitutil"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/frontend/subrequests/lint"
@@ -167,7 +168,7 @@ COPY --from=base /etc/bar /bar
err = json.Unmarshal(dt, &md)
require.NoError(t, err)
ls, err := localstate.New(buildxConfig(sb))
ls, err := localstate.New(confutil.NewConfig(nil, confutil.WithDir(buildxConfig(sb))))
require.NoError(t, err)
refParts := strings.Split(md.BuildRef, "/")
@@ -209,7 +210,7 @@ COPY --from=base /etc/bar /bar
err = json.Unmarshal(dt, &md)
require.NoError(t, err)
ls, err := localstate.New(buildxConfig(sb))
ls, err := localstate.New(confutil.NewConfig(nil, confutil.WithDir(buildxConfig(sb))))
require.NoError(t, err)
refParts := strings.Split(md.BuildRef, "/")
@@ -261,7 +262,7 @@ COPY foo /foo
err = json.Unmarshal(dt, &md)
require.NoError(t, err)
ls, err := localstate.New(buildxConfig(sb))
ls, err := localstate.New(confutil.NewConfig(nil, confutil.WithDir(buildxConfig(sb))))
require.NoError(t, err)
refParts := strings.Split(md.BuildRef, "/")
@@ -470,7 +471,7 @@ RUN echo foo > /bar`)
cmd := buildxCmd(sb, withArgs("build", "--output=type=cacheonly", dir))
out, err := cmd.CombinedOutput()
require.NoError(t, err, string(out))
require.False(t, buildDetailsPattern.MatchString(string(out)), fmt.Sprintf("build details link not expected in output, got %q", out))
require.False(t, buildDetailsPattern.MatchString(string(out)), "build details link not expected in output, got %q", out)
// create desktop-build .lastaccess file
home, err := os.UserHomeDir() // TODO: sandbox should create a temp home dir and expose it through its interface
@@ -490,12 +491,7 @@ RUN echo foo > /bar`)
cmd = buildxCmd(sb, withArgs("build", "--output=type=cacheonly", dir))
out, err = cmd.CombinedOutput()
require.NoError(t, err, string(out))
require.True(t, buildDetailsPattern.MatchString(string(out)), fmt.Sprintf("expected build details link in output, got %q", out))
if isExperimental() {
// FIXME: https://github.com/docker/buildx/issues/2382
t.Skip("build details link not displayed in experimental mode when build fails: https://github.com/docker/buildx/issues/2382")
}
require.True(t, buildDetailsPattern.MatchString(string(out)), "expected build details link in output, got %q", out)
// build erroneous dockerfile
dockerfile = []byte(`FROM busybox:latest
@@ -504,12 +500,12 @@ RUN exit 1`)
cmd = buildxCmd(sb, withArgs("build", "--output=type=cacheonly", dir))
out, err = cmd.CombinedOutput()
require.Error(t, err, string(out))
require.True(t, buildDetailsPattern.MatchString(string(out)), fmt.Sprintf("expected build details link in output, got %q", out))
require.True(t, buildDetailsPattern.MatchString(string(out)), "expected build details link in output, got %q", out)
}
func testBuildProgress(t *testing.T, sb integration.Sandbox) {
dir := createTestProject(t)
sbDriver, _ := driverName(sb.Name())
sbDriver, _, _ := driverName(sb.Name())
name := sb.Address()
// progress=tty
@@ -582,7 +578,7 @@ func testBuildBuildArgNoKey(t *testing.T, sb integration.Sandbox) {
cmd := buildxCmd(sb, withArgs("build", "--build-arg", "=TEST_STRING", dir))
out, err := cmd.CombinedOutput()
require.Error(t, err, string(out))
require.Equal(t, strings.TrimSpace(string(out)), `ERROR: invalid key-value pair "=TEST_STRING": empty key`)
require.Equal(t, `ERROR: invalid key-value pair "=TEST_STRING": empty key`, strings.TrimSpace(string(out)))
}
func testBuildLabelNoKey(t *testing.T, sb integration.Sandbox) {
@@ -590,7 +586,7 @@ func testBuildLabelNoKey(t *testing.T, sb integration.Sandbox) {
cmd := buildxCmd(sb, withArgs("build", "--label", "=TEST_STRING", dir))
out, err := cmd.CombinedOutput()
require.Error(t, err, string(out))
require.Equal(t, strings.TrimSpace(string(out)), `ERROR: invalid key-value pair "=TEST_STRING": empty key`)
require.Equal(t, `ERROR: invalid key-value pair "=TEST_STRING": empty key`, strings.TrimSpace(string(out)))
}
func testBuildCacheExportNotSupported(t *testing.T, sb integration.Sandbox) {
@@ -653,7 +649,6 @@ func testBuildMultiPlatform(t *testing.T, sb integration.Sandbox) {
require.NotNil(t, img)
img = imgs.Find("linux/arm64")
require.NotNil(t, img)
} else {
require.Error(t, err, string(out))
require.Contains(t, string(out), "Multi-platform build is not supported")
@@ -1291,6 +1286,29 @@ cOpy Dockerfile .
require.Error(t, cmd.Run(), stdout.String(), stderr.String())
require.Contains(t, stdout.String(), "Check complete, 2 warnings have been found!")
})
t.Run("check for Dockerfile path printed with context when displaying rule check warnings", func(t *testing.T) {
dockerfile := []byte(`
frOM busybox as base
cOpy Dockerfile .
`)
dir := tmpdir(
t,
fstest.CreateDir("subdir", 0700),
fstest.CreateFile("subdir/Dockerfile", dockerfile, 0600),
)
dockerfilePath := filepath.Join(dir, "subdir", "Dockerfile")
cmd := buildxCmd(sb, withArgs("build", "--call=check", "-f", dockerfilePath, dir))
stdout := bytes.Buffer{}
stderr := bytes.Buffer{}
cmd.Stdout = &stdout
cmd.Stderr = &stderr
require.Error(t, cmd.Run(), stdout.String(), stderr.String())
require.Contains(t, stdout.String(), "Check complete, 2 warnings have been found!")
require.Contains(t, stdout.String(), dockerfilePath+":2")
require.Contains(t, stdout.String(), dockerfilePath+":3")
})
}
func createTestProject(t *testing.T) string {

tests/common.go

@@ -0,0 +1,29 @@
package tests
import (
"testing"
"github.com/moby/buildkit/util/testutil/integration"
"github.com/stretchr/testify/require"
)
var commonTests = []func(t *testing.T, sb integration.Sandbox){
testUnknownCommand,
testUnknownFlag,
}
func testUnknownCommand(t *testing.T, sb integration.Sandbox) {
cmd := buildxCmd(sb, withArgs("foo"))
out, err := cmd.CombinedOutput()
require.Error(t, err, string(out))
cmd = buildxCmd(sb, withArgs("imagetools", "foo"))
out, err = cmd.CombinedOutput()
require.Error(t, err, string(out))
}
func testUnknownFlag(t *testing.T, sb integration.Sandbox) {
cmd := buildxCmd(sb, withArgs("build", "--foo=bar"))
out, err := cmd.CombinedOutput()
require.Error(t, err, string(out))
}

Some files were not shown because too many files have changed in this diff.