Mirror of https://gitea.com/Lydanne/buildx.git, synced 2025-09-15 07:19:07 +08:00.

Compare commits: v0.13.0-rc...v0.19.1 (646 commits)
Commits in this range, by abbreviated SHA1 (the author and date columns of the source table were empty):

```text
5113f9ea89 8b029626f3 cd017e98ed 71c7889719 a3418e0178 6a1cf78879 ec1f712328 5ce6597c07 9c75071793 d612139b19
42f7898c53 3148c098a2 f95d574f94 60822781be 4c83475703 17eff25fe5 9c8ffb77d6 13a426fca6 1a039115bc 07d58782b8
3ccbb88e6a a34c641bc4 f10be074b4 9f429965c0 f3929447d7 615f4f6759 9a7b028bab 1af4f05ba4 4b5d78db9b d2c512a95b
5937ba0e00 21fb026aa3 bc45641086 96689e5d05 50a8f11f0f 11cf38bd97 300d56b3ff e04da86aca 9f1fc99018 26bbddb5d6
58fd190c31 e7a53fb829 c0fd64f4f8 0c629335ac f216b71ad2 debe8c0187 a69d857b8a a6ef9db84d 9c27be752c 82a65d4f9b
8647f408ac e51cdcac50 55a544d976 3b943bd4ba 502bb51a3b 48977780ad e540bb03a4 919c52395d 7f01c63be7 076d2f19d5
3c3150b8d3 b03d8c52e1 e67ccb080b dab02c347e 6caa151e98 be6d8326a8 7855f8324b 850e5330ad b7ea25eb59 8cdeac54ab
752c70a06c 83dd969dc1 a5bb117ff0 735b7f68fe bcac44f658 d46595eed8 62407927fa c7b0a84c6a 1aac809c63 9b0575b589
9f3a578149 14b31d8b77 e26911f403 cd8d61a9d7 3a56161d03 0fd935b0ca 704b2cc52d 6b2dc8ce56 a585faf3d2 181348397c
ad371e428e f35dae3726 6fcc6853d9 202c390fca ca502cc9a5 2bdf451b68 658ed584c7 886ae21e93 cf7a9aa084 eb15c667b9
1060328a96 746eadd16e f89f861999 08a973a148 cc286e2ef5 8056a3dc7c 9f0ebd2643 680cdf1179 8d32cabc22 239930c998
8d7f69883f 1de332530f 65c4756473 d3ff70ace0 14de641bec 1ce3e6a221 b1a13bb740 64c5139ab6 d353f5f6ba 4507a492da
9fc6f39d71 f6a27a664b 48153169d8 d7de22c61f 7c91f3d0dd 820f5e77ed 1db8f6789f b35a0f4718 8e47387d02 fdda92f304
d078a3047d f102ad73a8 671bd1b54d f8657e8798 61d9f1d981 9eb0318ee6 4528269102 8d3d32e376 c60afbb25b 9bfa8603f6
30e60628bf df0270d0cc 056cf8a7ca 15c596a091 e950b2e478 4da753da79 3f81293fd4 120578091f 604b723007 528181c759
cd5381900c bba2bb4b89 8fd27b8c23 6dcc8d8b84 9fb8b04b64 6ba5802958 f039670961 4ec12e7e68 66ed7d6162 617d59d70b
40f444f4b8 8201d301d5 40ef3446f5 7213b2a814 9cfa25ab40 6db3444a25 15e930b691 abc5eaed88 f1b92e9e6c ad9a5196b3
db117855da ecfe98df6f 479177eaf9 194f523fe1 29d367bdd4 ed341bafd0 c887c2c62a 7c481aae20 f0f8876902 fa1d19bb1e
7bea00f3dd 83d5c0c61b e58a1d35d1 b920b08ad3 f369377d74 b7486e5cd5 5ecff53e0c 48faab5890 f77866f5b4 203fd8aee5
806ccd3545 d6e030eda7 96eb69aea4 d1d8d6e19c dc7f679ab1 e403ab2d63 b6a2c96926 7a7a9c8e01 fa8f859159 8411a763d9
6c5279da54 0e64eb4f8b adbcc2225e e00efeb399 d03c13b947 4787b5c046 1c66f293c7 246a36d463 a4adae3d6b 36cd88f8ca
07a85a544b f64b85afe6 4b27fb3022 38a8261f05 a3e6f4be15 6467a86427 58571ff6d6 71174c3041 16860e6dd2 8e02b1a2f7
531c6d4ff1 238a3e03dd 9a0c320588 acf0216292 5a50d13641 2810f20f3a e2f6808457 39bbb9e478 771f0139ac 6034c58285
199890ff51 d391b1d3e6 f4da6b8f69 386d599309 d130f8ef0a b691a10379 e628f9ea14 0fb0b6db0d 6efb1d7cdc bc2748da59
d4c4632cf6 cdd46af015 b62d64b2b5 64171cb13e f28dff7598 3d542f3d31 30dbdcfa3e 16518091cd 897fc91802 c4d3011a98
a47f761c55 aa35c954f3 56df4e98a0 9f00a9eafa 56cb197c0a 466006849a 738f5ee9db 9b49cf3ae6 bd0b425734 7823a2dc01
cedbc5d68d 12d431d1b4 ca452c47d8 d8f26f79ed 4304d388ef 96509847b9 52bb668085 85cf3bace9 b92bfb53d2 6c929a45c7
d296d5d46a 6e433da23f 3005743f7c d64d3a4caf 0d37d68efd 03a691a0a5 fa392a2dca 470e45e599 2a2648b1db ac930bda69
6791ecb628 d717237e4f ee642ecc4c 06d96d665e dc83501a5b 0f74f9a794 6d6adc11a1 68076909b9 7957b73a30 1dceb49a27
b96ad59f64 50aa895477 74374ea418 6bbe59697a c51004e2e4 8535c6b455 153e5ed274 cc097db675 35313e865f 233b869c63
7460f049f2 8f4c8b094a 8da28574b0 7e49141c4e 5ec703ba10 1ffc6f1d58 f65631546d 6fc19c4024 5656c98133 263a9ddaee
1774aa0cf0 7b80ad7069 c0c4d7172b e498ba9c27 2e7e7abe42 048ef1fbf8 cbe7901667 f374f64d2f 4be2259719 6627f315cb
19d838a3f4 17878d641e 63eb73d9cf 59a0ffcf83 2b17f277a1 ea7c8e83d2 9358c45b46 cfb7fc4fb5 d4b112ab05 f7a32361ea
af902caeaa 04000db8da b8da14166c c1f680df14 b6482ab6bb 6f45b0ea06 3971361ed2 818045482e f8e1746d0d 92a6799514
9358f84668 dbdd3601eb a3c8a72b54 4c3af9becf d8c9ebde1f 01a50aac42 f7bcafed21 e5ded4b2de 6ef443de41 076e19d0ce
5599699d29 d155747029 9cebd0c80f 7b1ec7211d 689fd74104 0dfd315daa 9b100c2552 92aaaa8f67 6111d9a00d 310aaf1891
6c7e65c789 66b0abf078 6efa26c2de 5b726afa5e 009f318bbd 9f7c8ea3fb be12199eb9 94355517c4 cb1be7214a f42a4a1e94
4d7365018c 3d0951b800 bcd04d5a64 b00001d8ac 31187735de 3373a27f1f 56698805a9 4c2e0c4307 fb6a3178c9 8ca18dee2d
917d2f4a0a 366328ba6a 5f822b36d3 e423d096a6 927fb6731c 314ca32446 3b25e3fa5c 41d369120b 56ffe55f81 6d5823beb1
c116af7b82 fb130243f8 29c8107b85 ee3baa54f7 9de95d81eb d3a53189f7 0496dae9d5 40fcf992b1 85c25f719c 875e4cd52e
24cedc6c0f 59f52c9505 1e916ae6c6 d342cb9d03 9fdc99dc76 ab835fd904 87efbd43b5 39db6159f9 922328cbaf aa0f90fdd6
82b6826cd7 1e3aec1ae2 cfef22ddf0 9e5ba66553 9ceda78057 747b75a217 d8de5bb345 eff1850d53 a24043e9f1 0902294e1a
ef4a165e48 89810dc998 250cd44d70 5afb210d43 03f84d2e83 945e774a02 947d6023e4 c58599ca50 f30e143428 53b7cbc5cb
9a30215886 b1cb658a31 bc83ecb538 ceaa4534f9 9b6c4103af 4549283f44 b2e907d5c2 7427adb9b0 1a93bbd3a5 1f28985d20
33a5528003 7bfae2b809 117c9016e1 388af3576a 2061550bc1 abf6c77d91 9ad116aa8e e3d5e64ec9 0808747add 2e7da01560
38d7d36f0a 55c86543ca f98ef00ec7 b948b07e2d 17c0a3794b c0a986b43b 781dcbd196 37c4ff0944 6211f56b8d cc9ea87142
035236a5ed 99777eaf34 cf68b5b878 3f1aaa68d5 f6830f3b86 4fc4bc07ae f6e57cf5b5 b77648d5f8 afcb609966 946e0a5d74
c4db5b252a 8afeb56a3b fd801a12c1 2f98e6f3ac 224c6a59bf cbb75bbfd5 72085dbdf0 480b53f529 f8c6a97edc d4f088e689
db3a8ad7ca 1d88c4b169 6d95fb586e 1fb5d2a9ee ba264138d6 6375dc7230 9cc6c7df70 7ea5cffb98 d2d21577fb e344e2251b
833fe3b04f d0cc9ed0cb b30566438b ec98985b4e 9428447cd2 6112c41637 a727de7d5f 4a8fcb7aa0 771e66bf7a 7e0ab1a003
e3e16ad088 f2823515db 5ac9b78384 fbb0f9b424 699fa43f7f bdf27ee797 171fcbeb69 370a5aa127 13653fb84d 1b16594f4a
3905e8cf06 177b95c972 74fdbb5e7f ac331d3569 07c9b45bae b91957444b 46c44c58ae 6aed54c35a 126fe653c7 f0cbc95eaf
1a0f9fa96c df7a3db947 d294232cb5 0a7f5c4d94 5777d980b5 46cf94092c da3435ed3a 3e90cc4b84 6418669e75 188495aa93
54a5c1ff93 2e2f9f571f d2ac1f2d6e 7e3acad9f4 e04637cf34 b9c5f9f1ee 92ab188781 dd4d52407f 7432b483ce 6e3164dc6f
2fdb1682f8 7f1eaa2a8a fbddc9ebea d347499112 b1fb67f44a a9575a872a 60f48059a7 ffff87be03 0a3e5e5257 151b0de8f2
e40c630758 ea3338c3f3 744c055560 ca0b583f5a e7f2da9c4f d805c784f2 a2866b79e3 12e1f65eb3 0d6b3a9d1d 4b3c3c8401
ccc314a823 dc4b4c36bd 5c29e6e26e 6a0d5b771f 59cc10767e b61b29f603 7cfef05661 4d39259f8e 15fd39ebec a7d59ae332
e18a2f6e58 38fbd9a85c 84ddbc2b3b b4799f9d16 7cded6b33b 1b36bd0c4a 7dc5639216 858e347306 adb9bc86e5 ef2e30deba
c690d460e8 35781a6c78 de5efcb03b 5c89004bb6 8abef59087 4999908fbc 4af0ed5159 a4a8846e46 520dc5968a 324afe60ad
c0c3a55fca 2a30229916 ed76661b0d a0cce9b31e d410597f5a 9016d85718 2565c74a89 eab5cccbb4 e2be765e7b 276dd5150f
5c69fa267f b240a00def a8af6fa013 7eb3dfbd22 4b24f66a10 8d5b967f2d 8842e19869 a0ce8bec97 84d79df93b df4b13320d
bb511110d6 47cf4a5dbe cfbed42fa7 ff27ab7e86 5655e5e2b6 4b516af1f6 b1490ed5ce ea830c9758 8f576e5790 4327ee73b1
70a28fed12 fc22d39d6d 1cc5e39cb8 1815e4d9b2 2ec1dbd1b6 a6163470b7 3dfb102f82 253cbee5c7 c1dfa74b98 647491dd99
9a71895a48 abff444562 1d0b542b1b 6c485a98be 9ebfde4897 e4ee2ca1fd 849456c198 9a2536dd0d a03263acf8 0c0dcb7c8c
9bce433154 04f0fc5871 e7da2b0686 eab565afe7 7d952441ea 835a6b1096
```
.github/CONTRIBUTING.md (vendored, 83 changed lines):

````diff
@@ -188,6 +188,89 @@ To generate new vendored files with go modules run:
 $ make vendor
 ```
+
+### Generate profiling data
+
+You can configure Buildx to generate [`pprof`](https://github.com/google/pprof)
+memory and CPU profiles to analyze and optimize your builds. These profiles are
+useful for identifying performance bottlenecks, detecting memory
+inefficiencies, and ensuring the program (Buildx) runs efficiently.
+
+The following environment variables control whether Buildx generates profiling
+data for builds:
+
+```console
+$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
+$ export BUILDX_MEM_PROFILE=buildx_mem.prof
+```
+
+When set, Buildx emits profiling samples for the builds to the location
+specified by the environment variable.
+
+To analyze and visualize profiling samples, you need `pprof` from the Go
+toolchain, and (optionally) GraphViz for visualization in a graphical format.
+
+To inspect profiling data with `pprof`:
+
+1. Build a local binary of Buildx from source.
+
+   ```console
+   $ docker buildx bake
+   ```
+
+   The binary gets exported to `./bin/build/buildx`.
+
+2. Run a build with the environment variables set to generate profiling data.
+
+   ```console
+   $ export BUILDX_CPU_PROFILE=buildx_cpu.prof
+   $ export BUILDX_MEM_PROFILE=buildx_mem.prof
+   $ ./bin/build/buildx bake
+   ```
+
+   This creates `buildx_cpu.prof` and `buildx_mem.prof` for the build.
+
+3. Start `pprof` and specify the filename of the profile that you want to
+   analyze.
+
+   ```console
+   $ go tool pprof buildx_cpu.prof
+   ```
+
+   This opens the `pprof` interactive console. From here, you can inspect the
+   profiling sample using various commands. For example, use the `top 10`
+   command to view the top 10 most time-consuming entries.
+
+   ```plaintext
+   (pprof) top 10
+   Showing nodes accounting for 3.04s, 91.02% of 3.34s total
+   Dropped 123 nodes (cum <= 0.02s)
+   Showing top 10 nodes out of 159
+         flat  flat%   sum%        cum   cum%
+        1.14s 34.13% 34.13%      1.14s 34.13%  syscall.syscall
+        0.91s 27.25% 61.38%      0.91s 27.25%  runtime.kevent
+        0.35s 10.48% 71.86%      0.35s 10.48%  runtime.pthread_cond_wait
+        0.22s  6.59% 78.44%      0.22s  6.59%  runtime.pthread_cond_signal
+        0.15s  4.49% 82.93%      0.15s  4.49%  runtime.usleep
+        0.10s  2.99% 85.93%      0.10s  2.99%  runtime.memclrNoHeapPointers
+        0.10s  2.99% 88.92%      0.10s  2.99%  runtime.memmove
+        0.03s   0.9% 89.82%      0.03s   0.9%  runtime.madvise
+        0.02s   0.6% 90.42%      0.02s   0.6%  runtime.(*mspan).typePointersOfUnchecked
+        0.02s   0.6% 91.02%      0.02s   0.6%  runtime.pcvalue
+   ```
+
+To view the call graph in a GUI, run `go tool pprof -http=:8081 <sample>`.
+
+> [!NOTE]
+> Requires [GraphViz](https://www.graphviz.org/) to be installed.
+
+```console
+$ go tool pprof -http=:8081 buildx_cpu.prof
+Serving web UI on http://127.0.0.1:8081
+http://127.0.0.1:8081
+```
+
+For more information about using `pprof` and how to interpret the call graph,
+refer to the [`pprof` README](https://github.com/google/pprof/blob/main/doc/README.md).
+
 ### Conventions
````
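The profiling workflow documented in this change builds on Go's standard `runtime/pprof` package. The sketch below shows one way the environment-variable-gated pattern can be wired up; it is an illustration under stated assumptions, not Buildx's actual implementation. Only the `BUILDX_CPU_PROFILE` and `BUILDX_MEM_PROFILE` variable names come from the diff above; the `setupProfiling` helper and its call site are hypothetical.

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// setupProfiling starts a CPU profile and/or schedules a heap snapshot based
// on the environment, returning a stop function to run when the build ends.
// Hypothetical helper; only the env var names come from the docs above.
func setupProfiling() (stop func(), err error) {
	var cpuFile *os.File
	if path := os.Getenv("BUILDX_CPU_PROFILE"); path != "" {
		cpuFile, err = os.Create(path)
		if err != nil {
			return nil, err
		}
		if err := pprof.StartCPUProfile(cpuFile); err != nil {
			cpuFile.Close()
			return nil, err
		}
	}
	return func() {
		if cpuFile != nil {
			pprof.StopCPUProfile() // flush collected CPU samples
			cpuFile.Close()
		}
		if path := os.Getenv("BUILDX_MEM_PROFILE"); path != "" {
			memFile, err := os.Create(path)
			if err != nil {
				return
			}
			defer memFile.Close()
			runtime.GC() // force a GC so allocation statistics are current
			pprof.WriteHeapProfile(memFile)
		}
	}, nil
}

func main() {
	stop, err := setupProfiling()
	if err != nil {
		panic(err)
	}
	defer stop()
	// ... run the build here ...
}
```

Profiles produced this way are exactly the format `go tool pprof` expects as input in the steps above.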
.github/SECURITY.md (vendored, 50 changed lines):

```diff
@@ -1,12 +1,44 @@
-# Reporting security issues
+# Security Policy
 
-The project maintainers take security seriously. If you discover a security
-issue, please bring it to their attention right away!
+The maintainers of Docker Buildx take security seriously. If you discover
+a security issue, please bring it to their attention right away!
 
-**Please _DO NOT_ file a public issue**, instead send your report privately to
-[security@docker.com](mailto:security@docker.com).
+## Reporting a Vulnerability
 
-Security reports are greatly appreciated, and we will publicly thank you for it.
-We also like to send gifts—if you're into schwag, make sure to let
-us know. We currently do not offer a paid security bounty program, but are not
-ruling it out in the future.
+Please **DO NOT** file a public issue, instead send your report privately
+to [security@docker.com](mailto:security@docker.com).
+
+Reporter(s) can expect a response within 72 hours, acknowledging the issue was
+received.
+
+## Review Process
+
+After receiving the report, an initial triage and technical analysis is
+performed to confirm the report and determine its scope. We may request
+additional information in this stage of the process.
+
+Once a reviewer has confirmed the relevance of the report, a draft security
+advisory will be created on GitHub. The draft advisory will be used to discuss
+the issue with maintainers, the reporter(s), and where applicable, other
+affected parties under embargo.
+
+If the vulnerability is accepted, a timeline for developing a patch, public
+disclosure, and patch release will be determined. If there is an embargo period
+on public disclosure before the patch release, the reporter(s) are expected to
+participate in the discussion of the timeline and abide by agreed upon dates
+for public disclosure.
+
+## Accreditation
+
+Security reports are greatly appreciated and we will publicly thank you,
+although we will keep your name confidential if you request it. We also like to
+send gifts - if you're into swag, make sure to let us know. We do not currently
+offer a paid security bounty program at this time.
+
+## Supported Versions
+
+Once a new feature release is cut, support for the previous feature release is
+discontinued. An exception may be made for urgent security releases that occur
+shortly after a new feature release. Buildx does not offer LTS (Long-Term Support)
+releases. Refer to the [Support Policy](https://github.com/docker/buildx/blob/master/PROJECT.md#support-policy)
+for further details.
```
.github/dependabot.yml (vendored, 2 changed lines):

```diff
@@ -11,5 +11,5 @@ updates:
       # trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
       - dependency-name: "docker/docs"
     labels:
-      - "dependencies"
+      - "area/dependencies"
       - "bot"
```
.github/labeler.yml (vendored, new file, 104 lines):

```diff
@@ -0,0 +1,104 @@
+
+# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
+area/project:
+  - all:
+      - changed-files:
+          - any-glob-to-any-file:
+              - .github/**
+              - LICENSE
+              - AUTHORS
+              - MAINTAINERS
+              - PROJECT.md
+              - README.md
+              - .gitignore
+              - codecov.yml
+          - all-globs-to-all-files: '!.github/workflows/*'
+
+# Add 'area/github-actions' label to changes in the .github/workflows folder
+area/ci:
+  - changed-files:
+      - any-glob-to-any-file: '.github/workflows/**'
+
+# Add 'area/bake' label to changes in the bake
+area/bake:
+  - changed-files:
+      - any-glob-to-any-file: 'bake/**'
+
+# Add 'area/bake/compose' label to changes in the bake+compose
+area/bake/compose:
+  - changed-files:
+      - any-glob-to-any-file:
+          - bake/compose.go
+          - bake/compose_test.go
+
+# Add 'area/build' label to changes in build files
+area/build:
+  - changed-files:
+      - any-glob-to-any-file: 'build/**'
+
+# Add 'area/builder' label to changes in builder files
+area/builder:
+  - changed-files:
+      - any-glob-to-any-file: 'builder/**'
+
+# Add 'area/cli' label to changes in the CLI
+area/cli:
+  - changed-files:
+      - any-glob-to-any-file:
+          - cmd/**
+          - commands/**
+
+# Add 'area/controller' label to changes in the controller
+area/controller:
+  - changed-files:
+      - any-glob-to-any-file: 'controller/**'
+
+# Add 'area/docs' label to markdown files in the docs folder
+area/docs:
+  - changed-files:
+      - any-glob-to-any-file: 'docs/**/*.md'
+
+# Add 'area/dependencies' label to changes in go dependency files
+area/dependencies:
+  - changed-files:
+      - any-glob-to-any-file:
+          - go.mod
+          - go.sum
+          - vendor/**
+
+# Add 'area/driver' label to changes in the driver folder
+area/driver:
+  - changed-files:
+      - any-glob-to-any-file: 'driver/**'
+
+# Add 'area/driver/docker' label to changes in the docker driver
+area/driver/docker:
+  - changed-files:
+      - any-glob-to-any-file: 'driver/docker/**'
+
+# Add 'area/driver/docker-container' label to changes in the docker-container driver
+area/driver/docker-container:
+  - changed-files:
+      - any-glob-to-any-file: 'driver/docker-container/**'
+
+# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
+area/driver/kubernetes:
+  - changed-files:
+      - any-glob-to-any-file: 'driver/kubernetes/**'
+
+# Add 'area/driver/remote' label to changes in the remote driver
+area/driver/remote:
+  - changed-files:
+      - any-glob-to-any-file: 'driver/remote/**'
+
+# Add 'area/hack' label to changes in the hack folder
+area/hack:
+  - changed-files:
+      - any-glob-to-any-file: 'hack/**'
+
+# Add 'area/tests' label to changes in test files
+area/tests:
+  - changed-files:
+      - any-glob-to-any-file:
+          - tests/**
+          - '**/*_test.go'
```
.github/workflows/build.yml (vendored, 215 changed lines):

```diff
@@ -1,5 +1,14 @@
 name: build
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -21,63 +30,85 @@ on:
 env:
   BUILDX_VERSION: "latest"
   BUILDKIT_IMAGE: "moby/buildkit:latest"
+  SCOUT_VERSION: "1.11.0"
   REPO_SLUG: "docker/buildx-bin"
   DESTDIR: "./bin"
   TEST_CACHE_SCOPE: "test"
   TESTFLAGS: "-v --parallel=6 --timeout=30m"
   GOTESTSUM_FORMAT: "standard-verbose"
-  GO_VERSION: "1.21"
+  GO_VERSION: "1.23"
   GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
 
 jobs:
-  prepare-test-integration:
-    runs-on: ubuntu-22.04
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-      -
-        name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-        with:
-          version: ${{ env.BUILDX_VERSION }}
-          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
-          buildkitd-flags: --debug
-      -
-        name: Build
-        uses: docker/bake-action@v4
-        with:
-          targets: integration-test-base
-          set: |
-            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
-            *.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
-
   test-integration:
-    runs-on: ubuntu-22.04
-    needs:
-      - prepare-test-integration
+    runs-on: ubuntu-24.04
     env:
       TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
       TEST_IMAGE_BUILD: "0"
       TEST_IMAGE_ID: "buildx-tests"
+      TEST_COVERAGE: "1"
     strategy:
       fail-fast: false
       matrix:
+        buildkit:
+          - master
+          - latest
+          - buildx-stable-1
+          - v0.17.2
+          - v0.16.0
+          - v0.15.2
         worker:
-          - docker
-          - docker\+containerd # same as docker, but with containerd snapshotter
           - docker-container
          - remote
         pkg:
           - ./tests
+        mode:
+          - ""
+          - experimental
+        include:
+          - worker: docker
+            pkg: ./tests
+          - worker: docker+containerd # same as docker, but with containerd snapshotter
+            pkg: ./tests
+          - worker: docker
+            pkg: ./tests
+            mode: experimental
+          - worker: docker+containerd # same as docker, but with containerd snapshotter
+            pkg: ./tests
+            mode: experimental
+          - worker: "docker@26.1"
+            pkg: ./tests
+          - worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
+            pkg: ./tests
+          - worker: "docker@26.1"
+            pkg: ./tests
+            mode: experimental
+          - worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
+            pkg: ./tests
+            mode: experimental
     steps:
       -
         name: Prepare
         run: |
-          echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
+          echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
+          if [ -n "${{ matrix.buildkit }}" ]; then
+            echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
+          fi
+          testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
+          case "${{ matrix.worker }}" in
+            docker | docker+containerd | docker@* | docker+containerd@*)
+              echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
+              ;;
+            *)
+              echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
+              ;;
+          esac
+          if [[ "${{ matrix.worker }}" == "docker"* ]]; then
+            echo "TEST_DOCKERD=1" >> $GITHUB_ENV
+          fi
+          if [ "${{ matrix.mode }}" = "experimental" ]; then
+            echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
+          fi
       -
         name: Checkout
         uses: actions/checkout@v4
@@ -95,11 +126,10 @@ jobs:
           buildkitd-flags: --debug
       -
         name: Build test image
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v5
         with:
           targets: integration-test
           set: |
-            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
             *.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
       -
         name: Test
@@ -107,17 +137,16 @@ jobs:
             ./hack/test
         env:
           TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
-          TEST_DOCKERD: "${{ startsWith(matrix.worker, 'docker') && '1' || '0' }}"
-          TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker\\+containerd') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
           TESTPKGS: "${{ matrix.pkg }}"
       -
         name: Send to Codecov
         if: always()
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v5
         with:
           directory: ./bin/testreports
           flags: integration
           token: ${{ secrets.CODECOV_TOKEN }}
+          disable_file_fixes: true
       -
         name: Generate annotations
         if: always()
@@ -138,8 +167,8 @@ jobs:
       fail-fast: false
       matrix:
         os:
-          - ubuntu-22.04
-          - macos-12
+          - ubuntu-24.04
+          - macos-14
           - windows-2022
     env:
       SKIP_INTEGRATION_TESTS: 1
@@ -184,12 +213,13 @@ jobs:
       -
         name: Send to Codecov
         if: always()
-        uses: codecov/codecov-action@v4
+        uses: codecov/codecov-action@v5
         with:
           directory: ${{ env.TESTREPORTS_DIR }}
           env_vars: RUNNER_OS
           flags: unit
           token: ${{ secrets.CODECOV_TOKEN }}
+          disable_file_fixes: true
       -
         name: Generate annotations
         if: always()
@@ -204,8 +234,40 @@ jobs:
           name: test-reports-${{ env.TESTREPORTS_NAME }}
           path: ${{ env.TESTREPORTS_BASEDIR }}
 
+  govulncheck:
+    runs-on: ubuntu-24.04
+    permissions:
+      # same as global permission
+      contents: read
+      # required to write sarif report
+      security-events: write
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
+      -
+        name: Run
+        uses: docker/bake-action@v5
+        with:
+          targets: govulncheck
+        env:
+          GOVULNCHECK_FORMAT: sarif
+      -
+        name: Upload SARIF report
+        if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
+        uses: github/codeql-action/upload-sarif@v3
+        with:
+          sarif_file: ${{ env.DESTDIR }}/govulncheck.out
+
   prepare-binaries:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
     outputs:
       matrix: ${{ steps.platforms.outputs.matrix }}
     steps:
@@ -223,7 +285,7 @@ jobs:
           echo ${{ steps.platforms.outputs.matrix }}
 
   binaries:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
     needs:
       - prepare-binaries
     strategy:
@@ -266,7 +328,7 @@ jobs:
           if-no-files-found: error
 
   bin-image:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
     needs:
       - test-integration
       - test-unit
@@ -306,7 +368,7 @@ jobs:
           password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
       -
         name: Build and push image
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v5
         with:
           files: |
             ./docker-bake.hcl
@@ -318,8 +380,45 @@ jobs:
             *.cache-from=type=gha,scope=bin-image
             *.cache-to=type=gha,scope=bin-image,mode=max
 
+  scout:
+    runs-on: ubuntu-24.04
+    if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
+    permissions:
+      # same as global permission
+      contents: read
+      # required to write sarif report
+      security-events: write
+    needs:
+      - bin-image
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Login to DockerHub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
+          password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
+      -
+        name: Scout
+        id: scout
+        uses: crazy-max/.github/.github/actions/docker-scout@ccae1c98f1237b5c19e4ef77ace44fa68b3bc7e4
+        with:
+          version: ${{ env.SCOUT_VERSION }}
+          format: sarif
+          image: registry://${{ env.REPO_SLUG }}:master
+      -
+        name: Upload SARIF report
+        uses: github/codeql-action/upload-sarif@v3
+        with:
+          sarif_file: ${{ steps.scout.outputs.result-file }}
+
   release:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
+    permissions:
+      # required to create GitHub release
+      contents: write
     needs:
       - test-integration
       - test-unit
@@ -349,33 +448,9 @@ jobs:
       -
         name: GitHub Release
         if: startsWith(github.ref, 'refs/tags/v')
-        uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
+        uses: softprops/action-gh-release@01570a1f39cb168c169c802c3bceb9e93fb10974 # v2.1.0
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           draft: true
           files: ${{ env.DESTDIR }}/*
-
-  buildkit-edge:
-    runs-on: ubuntu-22.04
-    continue-on-error: true
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v4
-      -
-        name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-      -
-        name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-        with:
-          version: ${{ env.BUILDX_VERSION }}
-          driver-opts: image=moby/buildkit:master
-          buildkitd-flags: --debug
-      -
-        # Just run a bake target to check eveything runs fine
-        name: Build
-        uses: docker/bake-action@v4
-        with:
-          targets: binaries
```
.github/workflows/codeql.yml (vendored, 22 changed lines):

```diff
@@ -1,5 +1,14 @@
 name: codeql
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 on:
   push:
     branches:
@@ -7,17 +16,16 @@ on:
       - 'v[0-9]*'
   pull_request:
 
-permissions:
-  actions: read
-  contents: read
-  security-events: write
-
 env:
-  GO_VERSION: "1.21"
+  GO_VERSION: "1.23"
 
 jobs:
   codeql:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-24.04
+    permissions:
+      contents: read
+      actions: read
+      security-events: write
     steps:
       -
         name: Checkout
```
.github/workflows/docs-release.yml (vendored, 55 changed lines):

```diff
@@ -1,14 +1,31 @@
 name: docs-release
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 on:
+  workflow_dispatch:
+    inputs:
+      tag:
+        description: 'Git tag'
+        required: true
   release:
     types:
       - released
 
 jobs:
   open-pr:
-    runs-on: ubuntu-22.04
-    if: ${{ github.event.release.prerelease != true && github.repository == 'docker/buildx' }}
+    runs-on: ubuntu-24.04
+    if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
+    permissions:
+      contents: write
+      pull-requests: write
     steps:
       -
         name: Checkout docs repo
@@ -20,39 +37,47 @@ jobs:
       -
         name: Prepare
         run: |
-          rm -rf ./_data/buildx/*
+          rm -rf ./data/buildx/*
+          if [ -n "${{ github.event.inputs.tag }}" ]; then
+            echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
+          else
+            echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
+          fi
       -
         name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
       -
-        name: Build docs
-        uses: docker/bake-action@v4
+        name: Generate yaml
+        uses: docker/bake-action@v5
         with:
-          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
+          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
           targets: update-docs
+          provenance: false
           set: |
             *.output=/tmp/buildx-docs
         env:
           DOCS_FORMATS: yaml
       -
-        name: Copy files
+        name: Copy yaml
         run: |
-          cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
+          cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
       -
-        name: Commit changes
+        name: Update vendor
         run: |
-          git add -A .
+          make vendor
+        env:
+          VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
       -
         name: Create PR on docs repo
-        uses: peter-evans/create-pull-request@a4f52f8033a6168103c2538976c07b467e8163bc
+        uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f # v7.0.5
         with:
           token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
           push-to-fork: docker-tools-robot/docker.github.io
-          commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
+          commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
           signoff: true
-          branch: dispatch/buildx-ref-${{ github.event.release.name }}
+          branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
           delete-branch: true
-          title: Update buildx reference to ${{ github.event.release.name }}
+          title: Update buildx reference to ${{ env.RELEASE_NAME }}
           body: |
-            Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
+            Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
           draft: false
```
.github/workflows/docs-upstream.yml (vendored, 14 changed lines):

```diff
@@ -3,6 +3,15 @@
 # https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
 name: docs-upstream
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -22,7 +31,7 @@ on:
 
 jobs:
   docs-yaml:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
     steps:
       -
         name: Checkout
@@ -34,9 +43,10 @@ jobs:
           version: latest
       -
         name: Build reference YAML docs
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v5
         with:
           targets: update-docs
+          provenance: false
           set: |
             *.output=/tmp/buildx-docs
             *.cache-from=type=gha,scope=docs-yaml
```
.github/workflows/e2e.yml (vendored, 85 changed lines):

```diff
@@ -1,5 +1,14 @@
 name: e2e
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -22,7 +31,7 @@ env:
 
 jobs:
   build:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-24.04
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -33,7 +42,7 @@ jobs:
           version: latest
       -
         name: Build
-        uses: docker/bake-action@v4
+        uses: docker/bake-action@v5
         with:
           targets: binaries
           set: |
@@ -82,6 +91,10 @@ jobs:
           driver-opt: qemu.install=true
         - driver: remote
           endpoint: tcp://localhost:1234
+        - driver: docker-container
+          metadata-provenance: max
+        - driver: docker-container
+          metadata-warnings: true
       exclude:
         - driver: docker
          multi-node: mnode-true
@@ -129,70 +142,18 @@ jobs:
           else
             echo "MULTI_NODE=0" >> $GITHUB_ENV
           fi
+          if [ -n "${{ matrix.metadata-provenance }}" ]; then
+            echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
+          fi
+          if [ -n "${{ matrix.metadata-warnings }}" ]; then
+            echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
+          fi
       -
         name: Install k3s
         if: matrix.driver == 'kubernetes'
-        uses: actions/github-script@v7
+        uses: crazy-max/.github/.github/actions/install-k3s@fa6141aedf23596fb8bdcceab9cce8dadaa31bd9
         with:
-          script: |
-            const fs = require('fs');
-
-            let wait = function(milliseconds) {
-              return new Promise((resolve, reject) => {
-                if (typeof(milliseconds) !== 'number') {
-                  throw new Error('milleseconds not a number');
-                }
-                setTimeout(() => resolve("done!"), milliseconds)
-              });
-            }
-
-            try {
-              const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
-              core.info(`storing kubeconfig in ${kubeconfig}`);
-
-              await exec.exec('docker', ["run", "-d",
-                "--privileged",
-                "--name=buildkit-k3s",
-                "-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
-                "-e", "K3S_KUBECONFIG_MODE=666",
-                "-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
-                "-p", "6443:6443",
-                "-p", "80:80",
-                "-p", "443:443",
-                "-p", "8080:8080",
-                "rancher/k3s:${{ env.K3S_VERSION }}", "server"
-              ]);
-              await wait(10000);
-
-              core.exportVariable('KUBECONFIG', kubeconfig);
-
-              let nodeName;
-              for (let count = 1; count <= 5; count++) {
-                try {
-                  const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
-                  nodeName = nodeNameOutput.stdout
-                } catch (error) {
-                  core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
-                } finally {
-                  if (nodeName) {
-                    break;
-                  }
-                  await wait(5000);
-                }
-              }
-              if (!nodeName) {
-                throw new Error(`Unable to resolve node name after 5 attempts.`);
-              }
-
-              await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
-            } catch (error) {
-              core.setFailed(error.message);
-            }
-      -
-        name: Print KUBECONFIG
-        if: matrix.driver == 'kubernetes'
-        run: |
-          yq ${{ env.KUBECONFIG }}
+          version: ${{ env.K3S_VERSION }}
       -
         name: Launch remote buildkitd
         if: matrix.driver == 'remote'
```
.github/workflows/labeler.yml (vendored, new file, 32 lines):

```diff
@@ -0,0 +1,32 @@
+name: labeler
+
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+on:
+  pull_request_target:
+
+jobs:
+  labeler:
+    runs-on: ubuntu-latest
+    permissions:
+      # same as global permission
+      contents: read
+      # required for writing labels
+      pull-requests: write
+    steps:
+      -
+        name: Run
+        uses: actions/labeler@v5
+        with:
+          sync-labels: true
```
.github/workflows/validate.yml (vendored, 85 changed lines):

```diff
@@ -1,5 +1,14 @@
 name: validate
 
+# Default to 'contents: read', which grants actions to read commits.
+#
+# If any permission is set, any permission not included in the list is
+# implicitly set to "none".
+#
+# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
@@ -17,19 +26,70 @@ on:
       - '.github/releases.json'
 
 jobs:
+  prepare:
+    runs-on: ubuntu-24.04
+    outputs:
+      includes: ${{ steps.matrix.outputs.includes }}
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Matrix
+        id: matrix
+        uses: actions/github-script@v7
+        with:
+          script: |
+            let def = {};
+            await core.group(`Parsing definition`, async () => {
+              const printEnv = Object.assign({}, process.env, {
+                GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
+              });
+              const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
+                ignoreReturnCode: true,
+                env: printEnv
+              });
+              if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
+                throw new Error(res.stderr);
+              }
+              def = JSON.parse(resPrint.stdout.trim());
+            });
+            await core.group(`Generating matrix`, async () => {
+              const includes = [];
+              for (const targetName of Object.keys(def.target)) {
+                const target = def.target[targetName];
+                if (target.platforms && target.platforms.length > 0) {
+                  target.platforms.forEach(platform => {
+                    includes.push({
+                      target: targetName,
+                      platform: platform
+                    });
+                  });
+                } else {
+                  includes.push({
+                    target: targetName
+                  });
+                }
+              }
+              core.info(JSON.stringify(includes, null, 2));
+              core.setOutput('includes', JSON.stringify(includes));
+            });
+
   validate:
-    runs-on: ubuntu-22.04
-    env:
-      GOLANGCI_LINT_MULTIPLATFORM: 1
+    runs-on: ubuntu-24.04
+    needs:
+      - prepare
     strategy:
       fail-fast: false
       matrix:
-        target:
-          - lint
-          - validate-vendor
-          - validate-docs
-          - validate-generated-files
+        include: ${{ fromJson(needs.prepare.outputs.includes) }}
     steps:
+      -
+        name: Prepare
+        run: |
+          if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
+            echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
+          fi
       -
         name: Checkout
         uses: actions/checkout@v4
@@ -39,6 +99,9 @@ jobs:
         with:
           version: latest
       -
-        name: Run
-        run: |
-          make ${{ matrix.target }}
+        name: Validate
+        uses: docker/bake-action@v5
+        with:
+          targets: ${{ matrix.target }}
+          set: |
+            *.platform=${{ matrix.platform }}
```
.golangci.yml

```diff
@@ -1,49 +1,99 @@
 run:
   timeout: 30m
-  skip-files:
-    - ".*\\.pb\\.go$"
 
   modules-download-mode: vendor
-  build-tags:
+  # default uses Go version from the go.mod file, fallback on the env var
+  # `GOVERSION`, fallback on 1.17: https://golangci-lint.run/usage/configuration/#run-configuration
+  go: "1.23"
 
 linters:
   enable:
-    - gofmt
-    - govet
+    - bodyclose
     - depguard
+    - forbidigo
+    - gocritic
+    - gofmt
     - goimports
+    - gosec
+    - gosimple
+    - govet
     - ineffassign
+    - makezero
     - misspell
-    - unused
+    - noctx
+    - nolintlint
     - revive
     - staticcheck
+    - testifylint
     - typecheck
-    - nolintlint
-    - gosec
-    - forbidigo
+    - unused
+    - whitespace
   disable-all: true
 
 linters-settings:
+  gocritic:
+    disabled-checks:
+      - "ifElseChain"
+      - "assignOp"
+      - "appendAssign"
+      - "singleCaseSwitch"
+      - "exitAfterDefer" # FIXME
+  importas:
+    alias:
+      # Enforce alias to prevent it accidentally being used instead of
+      # buildkit errdefs package (or vice-versa).
+      - pkg: "github.com/containerd/errdefs"
+        alias: "cerrdefs"
+      - pkg: "github.com/opencontainers/image-spec/specs-go/v1"
+        alias: "ocispecs"
+      - pkg: "github.com/opencontainers/go-digest"
+        alias: "digest"
+  govet:
+    enable:
+      - nilness
+      - unusedwrite
+    # enable-all: true
+    # disable:
+    #   - fieldalignment
+    #   - shadow
   depguard:
     rules:
       main:
         deny:
-          # The io/ioutil package has been deprecated.
-          # https://go.dev/doc/go1.16#ioutil
+          - pkg: "github.com/containerd/containerd/errdefs"
+            desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
+          - pkg: "github.com/containerd/containerd/log"
+            desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
+          - pkg: "github.com/containerd/containerd/platforms"
+            desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
           - pkg: "io/ioutil"
             desc: The io/ioutil package has been deprecated.
   forbidigo:
     forbid:
+      - '^context\.WithCancel(# use context\.WithCancelCause instead)?$'
+      - '^context\.WithDeadline(# use context\.WithDeadline instead)?$'
+      - '^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$'
+      - '^ctx\.Err(# use context\.Cause instead)?$'
       - '^fmt\.Errorf(# use errors\.Errorf instead)?$'
+      - '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
   gosec:
     excludes:
       - G204 # Audit use of command execution
       - G402 # TLS MinVersion too low
+      - G115 # integer overflow conversion (TODO: verify these)
     config:
       G306: "0644"
+  testifylint:
+    disable:
+      # disable rules that reduce the test condition
+      - "empty"
+      - "bool-compare"
+      - "len"
+      - "negative-positive"
 
 
 issues:
+  exclude-files:
+    - ".*\\.pb\\.go$"
   exclude-rules:
     - linters:
         - revive
@@ -64,6 +114,6 @@ issues:
         - revive
       text: "if-return"
 
   # show all
   max-issues-per-linter: 0
   max-same-issues: 0
```
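To make the effect of the stricter lint configuration concrete, here is a minimal sketch of Go code that satisfies the new `importas`, `depguard`, and `forbidigo` rules (the `run`/`doWork` helpers and the error message are hypothetical; the aliases and forbidden calls come straight from the config above):

```go
package main

import (
	"context"

	cerrdefs "github.com/containerd/errdefs" // importas: enforced "cerrdefs" alias
	"github.com/pkg/errors"                  // forbidigo: errors.Errorf instead of fmt.Errorf
)

// run is a hypothetical helper illustrating the patterns the linters now expect.
func run(ctx context.Context) error {
	// forbidigo: context.WithCancelCause replaces context.WithCancel.
	ctx, cancel := context.WithCancelCause(ctx)
	defer cancel(nil)

	if err := doWork(ctx); err != nil {
		// depguard: errdefs now comes from its own module, not
		// github.com/containerd/containerd/errdefs.
		if cerrdefs.IsNotFound(err) {
			return errors.Errorf("work item not found: %v", err)
		}
		return err
	}
	return nil
}

func doWork(context.Context) error { return nil }

func main() {
	if err := run(context.Background()); err != nil {
		panic(err)
	}
}
```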
.mailmap (14 changes)

```diff
@@ -1,11 +1,25 @@
 # This file lists all individuals having contributed content to the repository.
 # For how it is generated, see hack/dockerfiles/authors.Dockerfile.
 
+Batuhan Apaydın <batuhan.apaydin@trendyol.com>
+Batuhan Apaydın <batuhan.apaydin@trendyol.com> <developerguy2@gmail.com>
 CrazyMax <github@crazymax.dev>
 CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
 CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
+David Karlsson <david.karlsson@docker.com>
+David Karlsson <david.karlsson@docker.com> <35727626+dvdksn@users.noreply.github.com>
+jaihwan104 <jaihwan104@woowahan.com>
+jaihwan104 <jaihwan104@woowahan.com> <42341126+jaihwan104@users.noreply.github.com>
+Kenyon Ralph <kenyon@kenyonralph.com>
+Kenyon Ralph <kenyon@kenyonralph.com> <quic_kralph@quicinc.com>
 Sebastiaan van Stijn <github@gone.nl>
 Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
+Shaun Thompson <shaun.thompson@docker.com>
+Shaun Thompson <shaun.thompson@docker.com> <shaun.b.thompson@gmail.com>
+Silvin Lubecki <silvin.lubecki@docker.com>
+Silvin Lubecki <silvin.lubecki@docker.com> <31478878+silvin-lubecki@users.noreply.github.com>
+Talon Bowler <talon.bowler@docker.com>
+Talon Bowler <talon.bowler@docker.com> <nolat301@gmail.com>
 Tibor Vass <tibor@docker.com>
 Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
 Tõnis Tiigi <tonistiigi@gmail.com>
```
AUTHORS (69 changes)

```diff
@@ -1,45 +1,112 @@
 # This file lists all individuals having contributed content to the repository.
 # For how it is generated, see hack/dockerfiles/authors.Dockerfile.
 
+accetto <34798830+accetto@users.noreply.github.com>
 Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
+Aleksa Sarai <cyphar@cyphar.com>
 Alex Couture-Beil <alex@earthly.dev>
 Andrew Haines <andrew.haines@zencargo.com>
+Andy Caldwell <andrew.caldwell@metaswitch.com>
 Andy MacKinlay <admackin@users.noreply.github.com>
 Anthony Poschen <zanven42@gmail.com>
+Arnold Sobanski <arnold@l4g.dev>
 Artur Klauser <Artur.Klauser@computer.org>
-Batuhan Apaydın <developerguy2@gmail.com>
+Avi Deitcher <avi@deitcher.net>
+Batuhan Apaydın <batuhan.apaydin@trendyol.com>
+Ben Peachey <potherca@gmail.com>
+Bertrand Paquet <bertrand.paquet@gmail.com>
 Bin Du <bindu@microsoft.com>
 Brandon Philips <brandon@ifup.org>
 Brian Goff <cpuguy83@gmail.com>
+Bryce Lampe <bryce@pulumi.com>
+Cameron Adams <pnzreba@gmail.com>
+Christian Dupuis <cd@atomist.com>
+Cory Snider <csnider@mirantis.com>
 CrazyMax <github@crazymax.dev>
+David Gageot <david.gageot@docker.com>
+David Karlsson <david.karlsson@docker.com>
+David Scott <dave@recoil.org>
 dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
 Devin Bayer <dev@doubly.so>
 Djordje Lukic <djordje.lukic@docker.com>
+Dmitry Makovey <dmakovey@gitlab.com>
 Dmytro Makovey <dmytro.makovey@docker.com>
 Donghui Wang <977675308@qq.com>
+Doug Borg <dougborg@apple.com>
+Edgar Lee <edgarl@netflix.com>
+Eli Treuherz <et@arenko.group>
+Eliott Wiener <eliottwiener@gmail.com>
+Elran Shefer <elran.shefer@velocity.tech>
 faust <faustin@fala.red>
 Felipe Santos <felipecassiors@gmail.com>
+Felix de Souza <fdesouza@palantir.com>
 Fernando Miguel <github@FernandoMiguel.net>
 gfrancesco <gfrancesco@users.noreply.github.com>
 gracenoah <gracenoahgh@gmail.com>
+Guillaume Lours <705411+glours@users.noreply.github.com>
+guoguangwu <guoguangwu@magic-shield.com>
 Hollow Man <hollowman@hollowman.ml>
+Ian King'ori <kingorim.ian@gmail.com>
+idnandre <andre@idntimes.com>
 Ilya Dmitrichenko <errordeveloper@gmail.com>
+Isaac Gaskin <isaac.gaskin@circle.com>
 Jack Laxson <jackjrabbit@gmail.com>
+jaihwan104 <jaihwan104@woowahan.com>
 Jean-Yves Gastaud <jygastaud@gmail.com>
+Jhan S. Álvarez <51450231+yastanotheruser@users.noreply.github.com>
+Jonathan A. Sternberg <jonathan.sternberg@docker.com>
+Jonathan Piché <jpiche@coveo.com>
+Justin Chadwell <me@jedevc.com>
+Kenyon Ralph <kenyon@kenyonralph.com>
 khs1994 <khs1994@khs1994.com>
+Kijima Daigo <norimaking777@gmail.com>
+Kohei Tokunaga <ktokunaga.mail@gmail.com>
 Kotaro Adachi <k33asby@gmail.com>
+Kushagra Mansingh <12158241+kushmansingh@users.noreply.github.com>
 l00397676 <lujingxiao@huawei.com>
+Laura Brehm <laurabrehm@hey.com>
+Laurent Goderre <laurent.goderre@docker.com>
+Mark Hildreth <113933455+markhildreth-gravity@users.noreply.github.com>
+Mayeul Blanzat <mayeul.blanzat@datadoghq.com>
 Michal Augustyn <michal.augustyn@mail.com>
+Milas Bowman <milas.bowman@docker.com>
+Mitsuru Kariya <mitsuru.kariya@nttdata.com>
+Moleus <fafufuburr@gmail.com>
+Nick Santos <nick.santos@docker.com>
+Nick Sieger <nick@nicksieger.com>
+Nicolas De Loof <nicolas.deloof@gmail.com>
+Niklas Gehlen <niklas@namespacelabs.com>
 Patrick Van Stee <patrick@vanstee.me>
+Paweł Gronowski <pawel.gronowski@docker.com>
+Phong Tran <tran.pho@northeastern.edu>
+Qasim Sarfraz <qasimsarfraz@microsoft.com>
+Rob Murray <rob.murray@docker.com>
+robertlestak <robert.lestak@umusic.com>
 Saul Shanabrook <s.shanabrook@gmail.com>
+Sean P. Kane <spkane00@gmail.com>
 Sebastiaan van Stijn <github@gone.nl>
+Shaun Thompson <shaun.thompson@docker.com>
 SHIMA Tatsuya <ts1s1andn@gmail.com>
 Silvin Lubecki <silvin.lubecki@docker.com>
+Simon A. Eugster <simon.eu@gmail.com>
 Solomon Hykes <sh.github.6811@hykes.org>
+Sumner Warren <sumner.warren@gmail.com>
 Sune Keller <absukl@almbrand.dk>
+Talon Bowler <talon.bowler@docker.com>
+Tianon Gravi <admwiggin@gmail.com>
 Tibor Vass <tibor@docker.com>
+Tim Smith <tismith@rvohealth.com>
+Timofey Kirillov <timofey.kirillov@flant.com>
+Tyler Smith <tylerlwsmith@gmail.com>
 Tõnis Tiigi <tonistiigi@gmail.com>
 Ulysses Souza <ulyssessouza@gmail.com>
+Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
 Wang Jinglei <morlay.null@gmail.com>
+Wei <daviseago@gmail.com>
+Wojciech M <wmiedzybrodzki@outlook.com>
 Xiang Dai <764524258@qq.com>
+Zachary Povey <zachary.povey@autotrader.co.uk>
 zelahi <elahi.zuhayr@gmail.com>
+Zero <tobewhatwewant@gmail.com>
+zhyon404 <zhyong4@gmail.com>
+Zsolt <zsolt.szeberenyi@figured.com>
```
Dockerfile (92 changes)

```diff
@@ -1,17 +1,26 @@
 # syntax=docker/dockerfile:1
 
-ARG GO_VERSION=1.21
-ARG XX_VERSION=1.4.0
+ARG GO_VERSION=1.23
+ARG XX_VERSION=1.5.0
 
-ARG DOCKER_VERSION=25.0.2
-ARG GOTESTSUM_VERSION=v1.9.0
-ARG REGISTRY_VERSION=2.8.0
-ARG BUILDKIT_VERSION=v0.12.5
+# for testing
+ARG DOCKER_VERSION=27.4.0-rc.2
+ARG DOCKER_VERSION_ALT_26=26.1.3
+ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
+ARG GOTESTSUM_VERSION=v1.12.0
+ARG REGISTRY_VERSION=2.8.3
+ARG BUILDKIT_VERSION=v0.17.2
+ARG UNDOCK_VERSION=0.8.0
 
-# xx is a helper for cross-compilation
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
-
 FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
+FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
+FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
+FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt
+FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt
+FROM registry:$REGISTRY_VERSION AS registry
+FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
+FROM crazymax/undock:$UNDOCK_VERSION AS undock
 
 FROM golatest AS gobase
 COPY --from=xx / /
@@ -20,32 +29,38 @@ ENV GOFLAGS=-mod=vendor
 ENV CGO_ENABLED=0
 WORKDIR /src
 
-FROM registry:$REGISTRY_VERSION AS registry
-
-FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
-
-FROM gobase AS docker
-ARG TARGETPLATFORM
-ARG DOCKER_VERSION
-WORKDIR /opt/docker
-RUN DOCKER_ARCH=$(case ${TARGETPLATFORM:-linux/amd64} in \
-      "linux/amd64") echo "x86_64" ;; \
-      "linux/arm/v6") echo "armel" ;; \
-      "linux/arm/v7") echo "armhf" ;; \
-      "linux/arm64") echo "aarch64" ;; \
-      "linux/ppc64le") echo "ppc64le" ;; \
-      "linux/s390x") echo "s390x" ;; \
-      *) echo "" ;; esac) \
-    && echo "DOCKER_ARCH=$DOCKER_ARCH" \
-    && wget -qO- "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz" | tar xvz --strip 1
-RUN ./dockerd --version && ./containerd --version && ./ctr --version && ./runc --version
-
 FROM gobase AS gotestsum
 ARG GOTESTSUM_VERSION
-ENV GOFLAGS=
-RUN --mount=target=/root/.cache,type=cache \
-    GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
-    /out/gotestsum --version
+ENV GOFLAGS=""
+RUN --mount=target=/root/.cache,type=cache <<EOT
+  set -ex
+  go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
+  go install "github.com/wadey/gocovmerge@latest"
+  mkdir /out
+  /go/bin/gotestsum --version
+  mv /go/bin/gotestsum /out
+  mv /go/bin/gocovmerge /out
+EOT
+COPY --chmod=755 <<"EOF" /out/gotestsumandcover
+#!/bin/sh
+set -x
+if [ -z "$GO_TEST_COVERPROFILE" ]; then
+  exec gotestsum "$@"
+fi
+coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
+mkdir -p "$coverdir/helpers"
+gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
+ecode=$?
+go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
+gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
+mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
+rm "$coverdir/helpers-report.txt"
+for f in "$coverdir/helpers"/*; do
+  rm "$f"
+done
+rmdir "$coverdir/helpers"
+exit $ecode
+EOF
 
 FROM gobase AS buildx-version
 RUN --mount=type=bind,target=. <<EOT
@@ -57,6 +72,7 @@ EOT
 
 FROM gobase AS buildx-build
 ARG TARGETPLATFORM
+ARG GO_EXTRA_FLAGS
 RUN --mount=type=bind,target=. \
     --mount=type=cache,target=/root/.cache \
     --mount=type=cache,target=/go/pkg/mod \
@@ -64,6 +80,7 @@ RUN --mount=type=bind,target=. \
   set -e
   xx-go --wrap
   DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
+  file /usr/bin/docker-buildx
   xx-verify --static /usr/bin/docker-buildx
 EOT
 
@@ -82,7 +99,9 @@ FROM scratch AS binaries-unix
 COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
 
 FROM binaries-unix AS binaries-darwin
+FROM binaries-unix AS binaries-freebsd
 FROM binaries-unix AS binaries-linux
+FROM binaries-unix AS binaries-openbsd
 
 FROM scratch AS binaries-windows
 COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
@@ -103,12 +122,17 @@ RUN apk add --no-cache \
     shadow-uidmap \
     xfsprogs \
     xz
-COPY --link --from=gotestsum /out/gotestsum /usr/bin/
+COPY --link --from=gotestsum /out /usr/bin/
 COPY --link --from=registry /bin/registry /usr/bin/
-COPY --link --from=docker /opt/docker/* /usr/bin/
+COPY --link --from=docker-engine / /usr/bin/
+COPY --link --from=docker-cli / /usr/bin/
+COPY --link --from=docker-engine-alt / /opt/docker-alt-26/
+COPY --link --from=docker-cli-alt / /opt/docker-alt-26/
 COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
 COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
+COPY --link --from=undock /usr/local/bin/undock /usr/bin/
 COPY --link --from=binaries /buildx /usr/bin/
+ENV TEST_DOCKER_EXTRA="docker@26.1=/opt/docker-alt-26"
 
 FROM integration-test-base AS integration-test
 COPY . .
```
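The net effect of these Dockerfile changes: instead of fetching a static Docker tarball with `wget`, the test image now `COPY`s the engine and CLI from the `moby/moby-bin` and `dockereng/cli-bin` images; a second, older engine (26.1.3) is staged under `/opt/docker-alt-26` and advertised to the test suite through `TEST_DOCKER_EXTRA`; and `gotestsum` gains the `gotestsumandcover` wrapper, which, when `GO_TEST_COVERPROFILE` is set, merges the helper coverage profiles into the main report with `gocovmerge`.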
MAINTAINERS

```diff
@@ -153,6 +153,7 @@ made through a pull request.
 		"akihirosuda",
 		"crazy-max",
 		"jedevc",
+		"jsternberg",
 		"tiborvass",
 		"tonistiigi",
 	]
@@ -194,6 +195,11 @@ made through a pull request.
 	Email = "me@jedevc.com"
 	GitHub = "jedevc"
 
+	[people.jsternberg]
+	Name = "Jonathan Sternberg"
+	Email = "jonathan.sternberg@docker.com"
+	GitHub = "jsternberg"
+
 	[people.thajeztah]
 	Name = "Sebastiaan van Stijn"
 	Email = "github@gone.nl"
```
Makefile (32 changes)

```diff
@@ -8,6 +8,8 @@ endif
 
 export BUILDX_CMD ?= docker buildx
 
+BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
+
 .PHONY: all
 all: binaries
 
@@ -19,13 +21,9 @@ build:
 shell:
 	./hack/shell
 
-.PHONY: binaries
-binaries:
-	$(BUILDX_CMD) bake binaries
+.PHONY: $(BAKE_TARGETS)
+$(BAKE_TARGETS):
+	$(BUILDX_CMD) bake $@
 
-.PHONY: binaries-cross
-binaries-cross:
-	$(BUILDX_CMD) bake binaries-cross
-
 .PHONY: install
 install: binaries
@@ -39,10 +37,6 @@ release:
 .PHONY: validate-all
 validate-all: lint test validate-vendor validate-docs validate-generated-files
 
-.PHONY: lint
-lint:
-	$(BUILDX_CMD) bake lint
-
 .PHONY: test
 test:
 	./hack/test
@@ -55,22 +49,6 @@ test-unit:
 test-integration:
 	TESTPKGS=./tests ./hack/test
 
-.PHONY: validate-vendor
-validate-vendor:
-	$(BUILDX_CMD) bake validate-vendor
-
-.PHONY: validate-docs
-validate-docs:
-	$(BUILDX_CMD) bake validate-docs
-
-.PHONY: validate-authors
-validate-authors:
-	$(BUILDX_CMD) bake validate-authors
-
-.PHONY: validate-generated-files
-validate-generated-files:
-	$(BUILDX_CMD) bake validate-generated-files
-
 .PHONY: test-driver
 test-driver:
 	./hack/test-driver
```
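The Makefile change collapses a run of near-identical wrapper stanzas into one pattern: every name in `BAKE_TARGETS` becomes a phony target whose shared recipe, `$(BUILDX_CMD) bake $@`, forwards the invoked target name to bake, so commands such as `make lint` or `make validate-docs` behave exactly as before with far less repetition.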
PROJECT.md (new file, 453 lines)

# Project processing guide <!-- omit from toc -->

- [Project scope](#project-scope)
- [Labels](#labels)
  - [Global](#global)
  - [`area/`](#area)
  - [`exp/`](#exp)
  - [`impact/`](#impact)
  - [`kind/`](#kind)
  - [`needs/`](#needs)
  - [`priority/`](#priority)
  - [`status/`](#status)
- [Types of releases](#types-of-releases)
  - [Feature releases](#feature-releases)
    - [Release Candidates](#release-candidates)
    - [Support Policy](#support-policy)
    - [Contributing to Releases](#contributing-to-releases)
  - [Patch releases](#patch-releases)
- [Milestones](#milestones)
- [Triage process](#triage-process)
  - [Verify essential information](#verify-essential-information)
  - [Classify the issue](#classify-the-issue)
- [Prioritization guidelines for `kind/bug`](#prioritization-guidelines-for-kindbug)
- [Issue lifecycle](#issue-lifecycle)
  - [Examples](#examples)
    - [Submitting a bug](#submitting-a-bug)
- [Pull request review process](#pull-request-review-process)
- [Handling stalled issues and pull requests](#handling-stalled-issues-and-pull-requests)
- [Moving to a discussion](#moving-to-a-discussion)
- [Workflow automation](#workflow-automation)
  - [Exempting an issue/PR from stale bot processing](#exempting-an-issuepr-from-stale-bot-processing)
- [Updating dependencies](#updating-dependencies)

---

## Project scope

**Docker Buildx** is a Docker CLI plugin designed to extend build capabilities using BuildKit. It provides advanced features for building container images, supporting multiple builder instances, multi-node builds, and high-level build constructs. Buildx enhances the Docker build process, making it more efficient and flexible, and is compatible with both Docker and Kubernetes environments. Key features include:

- **Familiar user experience:** Buildx offers a user experience similar to legacy `docker build`, ensuring a smooth transition from legacy commands
- **Full BuildKit capabilities:** Leverage the full feature set of [`moby/buildkit`](https://github.com/moby/buildkit) when using the container driver
- **Multiple builder instances:** Supports the use of multiple builder instances, allowing concurrent builds and effective management and monitoring of these builders
- **Multi-node builds:** Use multiple nodes to build cross-platform images
- **Compose integration:** Build complex, multi-service files as defined in compose
- **High-level build constructs via `bake`:** Introduces high-level build constructs for more complex build workflows
- **In-container driver support:** Supports in-container drivers for both Docker and Kubernetes environments to provide isolation/security

## Labels

Below are common groups, labels, and their intended usage to support issues, pull requests, and discussion processing.

### Global

General attributes that can apply to nearly any issue or pull request.

| Label | Applies to | Description |
| ------------------ | ----------- | ------------------------------------------------------------------------- |
| `bot` | Issues, PRs | Created by a bot |
| `good first issue` | Issues | Suitable for first-time contributors |
| `help wanted` | Issues, PRs | Assistance requested |
| `lgtm` | PRs | “Looks good to me” approval |
| `stale` | Issues, PRs | The issue/PR has not had activity for a while |
| `rotten` | Issues, PRs | The issue/PR has not had activity since being marked stale and was closed |
| `frozen` | Issues, PRs | The issue/PR should be skipped by the stale-bot |
| `dco/no` | PRs | The PR is missing a developer certificate of origin sign-off |

### `area/`

Area or component of the project affected. Please note that the table below may not be inclusive of all current options.

| Label | Applies to | Description |
| ------------------------------ | ---------- | -------------------------- |
| `area/bake` | Any | `bake` |
| `area/bake/compose` | Any | `bake/compose` |
| `area/build` | Any | `build` |
| `area/builder` | Any | `builder` |
| `area/buildkit` | Any | Relates to `moby/buildkit` |
| `area/cache` | Any | `cache` |
| `area/checks` | Any | `checks` |
| `area/ci` | Any | Project CI |
| `area/cli` | Any | `cli` |
| `area/controller` | Any | `controller` |
| `area/debug` | Any | `debug` |
| `area/dependencies` | Any | Project dependencies |
| `area/dockerfile` | Any | `dockerfile` |
| `area/docs` | Any | `docs` |
| `area/driver` | Any | `driver` |
| `area/driver/docker` | Any | `driver/docker` |
| `area/driver/docker-container` | Any | `driver/docker-container` |
| `area/driver/kubernetes` | Any | `driver/kubernetes` |
| `area/driver/remote` | Any | `driver/remote` |
| `area/feature-parity` | Any | `feature-parity` |
| `area/github-actions` | Any | `github-actions` |
| `area/hack` | Any | Project hack/support |
| `area/imagetools` | Any | `imagetools` |
| `area/metrics` | Any | `metrics` |
| `area/moby` | Any | Relates to `moby/moby` |
| `area/project` | Any | Project support |
| `area/qemu` | Any | `qemu` |
| `area/tests` | Any | Project testing |
| `area/windows` | Any | `windows` |

### `exp/`

Estimated experience level to complete the item.

| Label | Applies to | Description |
| ------------------ | ---------- | -------------------------------------------------------------------------------- |
| `exp/beginner` | Issue | Suitable for contributors new to the project or technology stack |
| `exp/intermediate` | Issue | Requires some familiarity with the project and technology |
| `exp/expert` | Issue | Requires deep understanding and advanced skills with the project and technology |

### `impact/`

Potential impact areas of the issue or pull request.

| Label | Applies to | Description |
| -------------------- | ---------- | -------------------------------------------------- |
| `impact/breaking` | PR | Change is API-breaking |
| `impact/changelog` | PR | When complete, the item should be in the changelog |
| `impact/deprecation` | PR | Change is a deprecation of a feature |

### `kind/`

The type of issue, pull request, or discussion.

| Label | Applies to | Description |
| ------------------ | ----------------- | ------------------------------------------------------- |
| `kind/bug` | Issue, PR | Confirmed bug |
| `kind/chore` | Issue, PR | Project support tasks |
| `kind/docs` | Issue, PR | Additions or modifications to the documentation |
| `kind/duplicate` | Any | Duplicate of another item |
| `kind/enhancement` | Any | Enhancement of an existing feature |
| `kind/feature` | Any | A brand new feature |
| `kind/maybe-bug` | Issue, PR | Unconfirmed bug, turns into kind/bug when confirmed |
| `kind/proposal` | Issue, Discussion | A proposed major change |
| `kind/refactor` | Issue, PR | Refactor of existing code |
| `kind/support` | Any | A question, discussion, or other user support item |
| `kind/tests` | Issue, PR | Additions or modifications to the project testing suite |

### `needs/`

Actions or missing requirements needed by the issue or pull request.

| Label | Applies to | Description |
| --------------------------- | ---------- | ----------------------------------------------------- |
| `needs/assignee` | Issue, PR | Needs an assignee |
| `needs/code-review` | PR | Needs review of code |
| `needs/design-review` | Issue, PR | Needs review of design |
| `needs/docs-review` | Issue, PR | Needs review by the documentation team |
| `needs/docs-update` | Issue, PR | Needs an update to the docs |
| `needs/follow-on-work` | Issue, PR | Needs follow-on work/PR |
| `needs/issue` | PR | Needs an issue |
| `needs/maintainer-decision` | Issue, PR | Needs maintainer discussion/decision before advancing |
| `needs/milestone` | Issue, PR | Needs milestone assignment |
| `needs/more-info` | Any | Needs more information from the author |
| `needs/more-investigation` | Issue, PR | Needs further investigation |
| `needs/priority` | Issue, PR | Needs priority assignment |
| `needs/pull-request` | Issue | Needs a pull request |
| `needs/rebase` | PR | Needs rebase to target branch |
| `needs/reproduction` | Issue, PR | Needs reproduction steps |

### `priority/`

Level of urgency of a `kind/bug` issue or pull request.

| Label | Applies to | Description |
| ------------- | ---------- | ----------------------------------------------------------------------- |
| `priority/P0` | Issue, PR | Urgent: Security, critical bugs, blocking issues. |
| `priority/P1` | Issue, PR | Important: This is a top priority and a must-have for the next release. |
| `priority/P2` | Issue, PR | Normal: Default priority |

### `status/`

Current lifecycle state of the issue or pull request.

| Label | Applies to | Description |
| --------------------- | ---------- | ---------------------------------------------------------------------- |
| `status/accepted` | Issue, PR | The issue has been reviewed and accepted for implementation |
| `status/active` | PR | The PR is actively being worked on by a maintainer or community member |
| `status/blocked` | Issue, PR | The issue/PR is blocked from advancing to another status |
| `status/do-not-merge` | PR | Should not be merged pending further review or changes |
| `status/transfer` | Any | Transferred to another project |
| `status/triage` | Any | The item needs to be sorted by maintainers |
| `status/wontfix` | Issue, PR | The issue/PR will not be fixed or addressed as described |

## Types of releases

This project has feature releases, patch releases, and security releases.

### Feature releases

Feature releases are made from the development branch, after which a release branch is cut for future patch releases; this may also happen during the code freeze period.

#### Release Candidates

Users can expect 2-3 release candidate (RC) test releases prior to a feature release. The first RC is typically released about one to two weeks before the final release.

#### Support Policy

Once a new feature release is cut, support for the previous feature release is discontinued. An exception may be made for urgent security releases that occur shortly after a new feature release. Buildx does not offer LTS (Long-Term Support) releases.

#### Contributing to Releases

Anyone can request that an issue or PR be included in the next feature or patch release milestone, provided it meets the necessary requirements.

### Patch releases

Patch releases should only include the most critical patches. Stability is vital, so everyone should always use the latest patch release.

If a fix is needed but does not qualify for a patch release because of its code size or other criteria that make it too unpredictable, we will prioritize cutting a new feature release sooner rather than making an exception for backporting.

The following PRs are included in patch releases:

- `priority/P0` fixes
- `priority/P1` fixes, assuming maintainers don’t object because of the patch size
- `priority/P2` fixes, only if (both required):
  - proposed by a maintainer
  - the patch is trivial and self-contained
- Documentation-only patches
- Vendored dependency updates, only if:
  - fixing a (qualifying) bug or security issue in Buildx
  - the patch is small, else a forked version of the dependency with only the patches required

New features do not qualify for a patch release.

## Milestones

Milestones are used to help identify what releases a contribution will be in.

- The `v0.next` milestone collects unblocked items planned for the next 2-3 feature releases but not yet assigned to a specific version milestone.
- The `v0.backlog` milestone gathers all triaged items considered for the long-term (beyond the next 3 feature releases) or currently unfit for a future release due to certain conditions. These items may be blocked and need to be unblocked before progressing.

## Triage process

Triage provides an important way to contribute to an open-source project. This process also applies to pull requests submitted without a linked issue. Triage helps ensure work items are resolved quickly by:

- Ensuring the issue's intent and purpose are described precisely. This is necessary because it can be difficult for an issue to explain how an end user experiences a problem and what actions they took to arrive at the problem.
- Giving a contributor the information they need before they commit to resolving an issue.
- Lowering the issue count by preventing duplicate issues.
- Streamlining the development process by preventing duplicate discussions.

If you don't have time to code, consider helping with triage. The community will thank you for saving them time by spending some of yours. The same basic process should be applied upon receipt of a new issue:

1. Verify essential information
2. Classify the issue
3. Prioritize the issue

### Verify essential information

Before advancing the triage process, ensure the issue contains all necessary information to be properly understood and assessed. The required information may vary by issue type, but typically includes the system environment, version numbers, reproduction steps, expected outcomes, and actual results.

- **Exercising Judgment**: Use your best judgment to assess the issue description’s completeness.
- **Communicating Needs**: If the information provided is insufficient, kindly request additional details from the author. Explain that this information is crucial for clarity and resolution of the issue, and apply the `needs/more-info` label to indicate a response from the author is required.

### Classify the issue

An issue will typically have multiple labels. These are used to help communicate key information about context, requirements, and status. At a minimum, a properly classified issue should have:

- (Required) One or more [`area/*`](#area) labels
- (Required) One [`kind/*`](#kind) label to indicate the type of issue
- (Required if `kind/bug`) A [`priority/*`](#priority) label

When a decision is assigned, the following labels should be present:

- (Required) One [`status/*`](#status) label to indicate lifecycle status

Additional labels can provide more clarity:

- Zero or more [`needs/*`](#needs) labels to indicate missing items
- Zero or more [`impact/*`](#impact) labels
- One [`exp/*`](#exp) label

## Prioritization guidelines for `kind/bug`

When an issue or pull request of `kind/bug` is correctly categorized and attached to a milestone, the labels indicate the urgency with which it should be completed.

**priority/P0**

Fixing this item is the highest priority. A patch release will follow as soon as a patch is available and verified. This level is used exclusively for bugs.

Examples:

- Regression in a critical code path
- Panic in a critical code path
- Corruption in a critical code path or the rest of the system
- Leaked zero-day critical security vulnerability

**priority/P1**

Items with this label should be fixed with high priority and almost always included in a patch release. Unless waiting for another issue, patch releases should happen within a week. This level is not used for features or enhancements.

Examples:

- Any regression, panic
- Measurable performance regression
- A major bug in a new feature in the latest release
- Incompatibility with an upgraded external dependency

**priority/P2**

This is the default priority and is implied in the absence of a `priority/` label. Bugs with this priority should be included in the next feature release but may land in a patch release if they are ready and unlikely to impact other functionality adversely. Non-bug issues with this priority should also be included in the next feature release if they are available and ready.

Examples:

- Confirmed bugs
- Bugs in non-default configurations
- Most enhancements

## Issue lifecycle

```mermaid
flowchart LR
    create([New issue]) --> triage
    subgraph triage[Triage Loop]
        review[Review]
    end
    subgraph decision[Decision]
        accept[Accept]
        close[Close]
    end
    triage -- if accepted --> accept[Assign status, milestone]
    triage -- if rejected --> close[Assign status, close issue]
```

### Examples

#### Submitting a bug

To help illustrate the issue lifecycle, let’s walk through submitting an issue as a potential bug in CI that enters a feedback loop and is eventually accepted as P2 priority and placed on the backlog.

```mermaid
flowchart LR

    new([New issue])

    subgraph triage[Triage]
        direction LR

        create["Action: Submit issue via Bug form\nLabels: kind/maybe-bug, status/triage"]
        style create text-align:left

        subgraph review[Review]
            direction TB
            classify["Action: Maintainer reviews issue, requests more info\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
            style classify text-align:left

            update["Action: Author updates issue\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
            style update text-align:left

            classify --> update
            update --> classify
        end

        create --> review
    end

    subgraph decision[Decision]
        accept["Action: Maintainer reviews updates, accepts, assigns milestone\nLabels: kind/bug, priority/P2, status/accepted, area/*, impact/*"]
        style accept text-align: left
    end

    new --> triage
    triage --> decision
```

## Pull request review process

A thorough and timely review process for pull requests (PRs) is crucial for maintaining the integrity and quality of the project while fostering a collaborative environment.

- **Labeling**: Most labels should be inherited from a linked issue. If no issue is linked, an extended review process may be required.
- **Continuous Integration**: With few exceptions, it is crucial that all Continuous Integration (CI) workflows pass successfully.
- **Draft Status**: Incomplete or long-running PRs should be placed in "Draft" status. They may revert to "Draft" status upon initial review if significant rework is required.

```mermaid
flowchart LR
    triage([Triage])
    draft[Draft PR]
    review[PR Review]
    closed{{Close PR}}
    merge{{Merge PR}}

    subgraph feedback1[Feedback Loop]
        draft
    end
    subgraph feedback2[Feedback Loop]
        review
    end

    triage --> draft
    draft --> review
    review --> closed
    review --> draft
    review --> merge
```

## Handling stalled issues and pull requests

Unfortunately, some issues or pull requests can remain inactive for extended periods. To mitigate this, automation is employed to prompt both the author and maintainers, ensuring that all contributions receive appropriate attention.

**For Authors:**

- **Closure of Inactive Items**: If your issue or PR becomes irrelevant or is no longer needed, please close it to help keep the project clean.
- **Prompt Responses**: If additional information is requested, please respond promptly to facilitate progress.

**For Maintainers:**

- **Timely Responses**: Endeavor to address issues and PRs within a reasonable timeframe to keep the community actively engaged.
- **Engagement with Stale Issues**: If an issue becomes stale due to maintainer inaction, re-engage with the author to reassess and revitalize the discussion.

**Stale and Rotten Policy:**

- An issue or PR will be labeled as **`stale`** after 14 calendar days of inactivity. If it remains inactive for another 30 days, it will be labeled as **`rotten`** and closed.
- Authors whose issues or PRs have been closed are welcome to re-open them or create new ones and link to the original.

**Skipping Stale Processing:**

- To prevent an issue or PR from being marked as stale, label it as **`frozen`**.

**Exceptions to Stale Processing:**

- Issues or PRs marked as **`frozen`**.
- Issues or PRs assigned to a milestone.

## Moving to a discussion

Sometimes, an issue or pull request may not be the appropriate medium for what is essentially a discussion. In such cases, the issue or PR will either be converted to a discussion or a new discussion will be created. The original item will then be labeled appropriately (**`kind/discussion`** or **`kind/question`**) and closed.

If you believe this conversion was made in error, please express your concerns in the new discussion thread. If necessary, a reversal to the original issue or PR format can be facilitated.

## Workflow automation

To help expedite common operations, avoid errors, and reduce toil, some workflow automation is used by the project. This can include:

- Stale issue or pull request processing
- Auto-labeling actions
- Auto-response actions
- Label carry-over from issue to pull request

### Exempting an issue/PR from stale bot processing

The stale item handling is configured in the [repository](link-to-config-file). To exempt an issue or PR from stale processing you can:

- Add the item to a milestone
- Add the `frozen` label to the item

## Updating dependencies

- **Runtime Dependencies**: Use the latest stable release available when the first Release Candidate (RC) of a new feature release is cut. For patch releases, update to the latest corresponding patch release of the dependency.
- **Other Dependencies**: Always permitted to update to the latest patch release in the development branch. Updates to a new feature release require justification, unless the dependency is outdated. Prefer tagged versions of dependencies unless a specific untagged commit is needed. Go modules should specify the lowest compatible version; there is no requirement to update all dependencies to their latest versions before cutting a new Buildx feature release.
- **Patch Releases**: Vendored dependency updates are considered for patch releases, except in the rare cases specified previously.
- **Security Considerations**: A security scanner report indicating a non-exploitable issue via Buildx does not justify backports.
README.md (19 changes)

```diff
@@ -56,8 +56,7 @@ For more information on how to use Buildx, see
 
 Using `buildx` with Docker requires Docker engine 19.03 or newer.
 
-> **Warning**
->
+> [!WARNING]
 > Using an incompatible version of Docker may result in unexpected behavior,
 > and will likely cause issues, especially when using Buildx builders with more
 > recent versions of BuildKit.
@@ -75,8 +74,7 @@ Docker Engine package repositories contain Docker Buildx packages when installed
 
 ## Manual download
 
-> **Important**
->
+> [!IMPORTANT]
 > This section is for unattended installation of the buildx component. These
 > instructions are mostly suitable for testing purposes. We do not recommend
 > installing buildx using manual download in production environments as they
@@ -107,8 +105,7 @@ On Windows:
 * `C:\ProgramData\Docker\cli-plugins`
 * `C:\Program Files\Docker\cli-plugins`
 
-> **Note**
->
+> [!NOTE]
 > On Unix environments, it may also be necessary to make it executable with `chmod +x`:
 > ```shell
 > $ chmod +x ~/.docker/cli-plugins/docker-buildx
@@ -187,12 +184,12 @@ through various "drivers". Each driver defines how and where a build should
 run, and have different feature sets.
 
 We currently support the following drivers:
-- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
+- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
 
-For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
+For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
 
 ## Working with builder instances
```
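The README edits swap the old bold-quote admonitions (`> **Note**`) for GitHub's alert syntax (`> [!NOTE]`, `> [!WARNING]`, `> [!IMPORTANT]`), which renders as styled callouts on github.com, and repoint the driver guides from in-repo `docs/manuals` paths to their published docs.docker.com equivalents.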
bake/bake.go (265 changes)

```diff
@@ -2,12 +2,12 @@ package bake
 
 import (
 	"context"
-	"encoding/csv"
 	"io"
 	"os"
 	"path"
 	"path/filepath"
 	"regexp"
+	"slices"
 	"sort"
 	"strconv"
 	"strings"
@@ -26,7 +26,9 @@ import (
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
 	"github.com/moby/buildkit/session/auth/authprovider"
+	"github.com/moby/buildkit/util/entitlements"
 	"github.com/pkg/errors"
+	"github.com/tonistiigi/go-csvvalue"
 	"github.com/zclconf/go-cty/cty"
 	"github.com/zclconf/go-cty/cty/convert"
 )
@@ -177,7 +179,7 @@ func readWithProgress(r io.Reader, setStatus func(st *client.VertexStatus)) (dt
 }
 
 func ListTargets(files []File) ([]string, error) {
-	c, err := ParseFiles(files, nil)
+	c, _, err := ParseFiles(files, nil)
 	if err != nil {
 		return nil, err
 	}
@@ -192,7 +194,7 @@ func ListTargets(files []File) ([]string, error) {
 }
 
 func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
-	c, err := ParseFiles(files, defaults)
+	c, _, err := ParseFiles(files, defaults)
 	if err != nil {
 		return nil, nil, err
 	}
@@ -298,7 +300,7 @@ func sliceToMap(env []string) (res map[string]string) {
 	return
 }
 
-func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error) {
+func ParseFiles(files []File, defaults map[string]string) (_ *Config, _ *hclparser.ParseMeta, err error) {
 	defer func() {
 		err = formatHCLError(err, files)
 	}()
@@ -310,7 +312,7 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
 		isCompose, composeErr := validateComposeFile(f.Data, f.Name)
 		if isCompose {
 			if composeErr != nil {
-				return nil, composeErr
+				return nil, nil, composeErr
 			}
 			composeFiles = append(composeFiles, f)
 		}
@@ -318,13 +320,13 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
 			hf, isHCL, err := ParseHCLFile(f.Data, f.Name)
 			if isHCL {
 				if err != nil {
-					return nil, err
+					return nil, nil, err
 				}
 				hclFiles = append(hclFiles, hf)
 			} else if composeErr != nil {
-				return nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
+				return nil, nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
 			} else {
-				return nil, err
+				return nil, nil, err
 			}
 		}
 	}
@@ -332,23 +334,24 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
 	if len(composeFiles) > 0 {
 		cfg, cmperr := ParseComposeFiles(composeFiles)
 		if cmperr != nil {
-			return nil, errors.Wrap(cmperr, "failed to parse compose file")
+			return nil, nil, errors.Wrap(cmperr, "failed to parse compose file")
 		}
 		c = mergeConfig(c, *cfg)
 		c = dedupeConfig(c)
 	}
 
+	var pm hclparser.ParseMeta
 	if len(hclFiles) > 0 {
-		renamed, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
+		res, err := hclparser.Parse(hclparser.MergeFiles(hclFiles), hclparser.Opt{
 			LookupVar:     os.LookupEnv,
 			Vars:          defaults,
 			ValidateLabel: validateTargetName,
 		}, &c)
 		if err.HasErrors() {
-			return nil, err
+			return nil, nil, err
 		}
 
-		for _, renamed := range renamed {
+		for _, renamed := range res.Renamed {
 			for oldName, newNames := range renamed {
 				newNames = dedupSlice(newNames)
 				if len(newNames) == 1 && oldName == newNames[0] {
@@ -361,9 +364,10 @@
 			}
 		}
 		c = dedupeConfig(c)
+		pm = *res
 	}
 
-	return &c, nil
+	return &c, &pm, nil
 }
 
 func dedupeConfig(c Config) Config {
```
@@ -388,7 +392,8 @@ func dedupeConfig(c Config) Config {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func ParseFile(dt []byte, fn string) (*Config, error) {
|
func ParseFile(dt []byte, fn string) (*Config, error) {
|
||||||
return ParseFiles([]File{{Data: dt, Name: fn}}, nil)
|
c, _, err := ParseFiles([]File{{Data: dt, Name: fn}}, nil)
|
||||||
|
return c, err
|
||||||
}
|
}
|
||||||
|
|
||||||
type Config struct {
|
type Config struct {
|
||||||
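Note on the new ParseFiles contract (a sketch, not part of the diff): callers now receive the HCL parse metadata as a second result, and wrappers like ParseFile above simply discard it. A minimal sketch, assuming only the Renamed field that this diff itself uses:

	// Sketch under stated assumptions: File, ParseFiles and
	// hclparser.ParseMeta.Renamed are exactly as they appear in this diff.
	dt := []byte(`target "app" {}`)
	cfg, meta, err := ParseFiles([]File{{Name: "docker-bake.hcl", Data: dt}}, nil)
	if err != nil {
		return err
	}
	for _, renamed := range meta.Renamed {
		for oldName, newNames := range renamed {
			fmt.Printf("target %q renamed to %v\n", oldName, newNames)
		}
	}
	_ = cfg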
@@ -476,7 +481,7 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
 	for _, v := range t.Contexts {
 		if strings.HasPrefix(v, "target:") {
 			target := strings.TrimPrefix(v, "target:")
-			if target == t.Name {
+			if target == name {
 				return errors.Errorf("target %s cannot link to itself", target)
 			}
 			for _, v := range visited {
@@ -491,13 +496,21 @@ func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[st
 			if err != nil {
 				return err
 			}
-			t2.Outputs = nil
+			t2.Outputs = []string{"type=cacheonly"}
 			t2.linked = true
 			m[target] = t2
 		}
 		if err := c.loadLinks(target, t2, m, o, visited); err != nil {
 			return err
 		}
+
+		// entitlements are inherited from linked targets
+		for _, ent := range t2.Entitlements {
+			if !slices.Contains(t.Entitlements, ent) {
+				t.Entitlements = append(t.Entitlements, ent)
+			}
+		}
+
 		if len(t.Platforms) > 1 && len(t2.Platforms) > 1 {
 			if !sliceEqual(t.Platforms, t2.Platforms) {
 				return errors.Errorf("target %s can't be used by %s because it is defined for different platforms %v and %v", target, name, t2.Platforms, t.Platforms)
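A consequence of the two changes above, shown as a hedged sketch rather than as part of the diff: a target referenced through a target: context now keeps a cache-only output instead of an empty output list, and its entitlements bubble up to the referencing target (the TestReadContextFromTargetChain update later in this diff asserts the same shape):

	// Assumes a bake file where target "app" sets context = "target:mid".
	m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, nil, nil)
	if err == nil {
		fmt.Println(m["mid"].Outputs) // ["type=cacheonly"], not an empty list
	}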
@@ -539,7 +552,7 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 		o := t[kk[1]]
 
 		switch keys[1] {
-		case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest":
+		case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest", "entitlements", "network":
 			if len(parts) == 2 {
 				o.ArrValue = append(o.ArrValue, parts[1])
 			}
@@ -670,12 +683,14 @@ func (c Config) target(name string, visited map[string]*Target, overrides map[st
 
 type Group struct {
 	Name        string   `json:"-" hcl:"name,label" cty:"name"`
+	Description string   `json:"description,omitempty" hcl:"description,optional" cty:"description"`
 	Targets     []string `json:"targets" hcl:"targets" cty:"targets"`
 	// Target // TODO?
 }
 
 type Target struct {
 	Name        string `json:"-" hcl:"name,label" cty:"name"`
+	Description string `json:"description,omitempty" hcl:"description,optional" cty:"description"`
 
 	// Inherits is the only field that cannot be overridden with --set
 	Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
@@ -698,20 +713,24 @@ type Target struct {
 	Outputs       []string `json:"output,omitempty" hcl:"output,optional" cty:"output"`
 	Pull          *bool    `json:"pull,omitempty" hcl:"pull,optional" cty:"pull"`
 	NoCache       *bool    `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
-	NetworkMode   *string  `json:"-" hcl:"-" cty:"-"`
+	NetworkMode   *string  `json:"network,omitempty" hcl:"network,optional" cty:"network"`
 	NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
 	ShmSize       *string  `json:"shm-size,omitempty" hcl:"shm-size,optional"`
 	Ulimits       []string `json:"ulimits,omitempty" hcl:"ulimits,optional"`
-	// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
+	Call          *string  `json:"call,omitempty" hcl:"call,optional" cty:"call"`
+	Entitlements  []string `json:"entitlements,omitempty" hcl:"entitlements,optional" cty:"entitlements"`
+	// IMPORTANT: if you add more fields here, do not forget to update newOverrides/AddOverrides and docs/bake-reference.md.
 
 	// linked is a private field to mark a target used as a linked one
 	linked bool
 }
 
-var _ hclparser.WithEvalContexts = &Target{}
-var _ hclparser.WithGetName = &Target{}
-var _ hclparser.WithEvalContexts = &Group{}
-var _ hclparser.WithGetName = &Group{}
+var (
+	_ hclparser.WithEvalContexts = &Target{}
+	_ hclparser.WithGetName      = &Target{}
+	_ hclparser.WithEvalContexts = &Group{}
+	_ hclparser.WithGetName      = &Group{}
+)
 
 func (t *Target) normalize() {
 	t.Annotations = removeDupes(t.Annotations)
@@ -726,6 +745,12 @@ func (t *Target) normalize() {
 	t.NoCacheFilter = removeDupes(t.NoCacheFilter)
 	t.Ulimits = removeDupes(t.Ulimits)
 
+	if t.NetworkMode != nil && *t.NetworkMode == "host" {
+		t.Entitlements = append(t.Entitlements, "network.host")
+	}
+
+	t.Entitlements = removeDupes(t.Entitlements)
+
 	for k, v := range t.Contexts {
 		if v == "" {
 			delete(t.Contexts, k)
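The normalize addition above means that network = "host" in a bake file implies the network.host entitlement without it being spelled out, and removeDupes keeps the list stable if it was already present. A small sketch under that assumption:

	host := "host"
	t := &Target{NetworkMode: &host, Entitlements: []string{"network.host"}}
	t.normalize()
	fmt.Println(t.Entitlements) // ["network.host"] exactly once, thanks to removeDupes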
@@ -776,6 +801,9 @@ func (t *Target) Merge(t2 *Target) {
 	if t2.Target != nil {
 		t.Target = t2.Target
 	}
+	if t2.Call != nil {
+		t.Call = t2.Call
+	}
 	if t2.Annotations != nil { // merge
 		t.Annotations = append(t.Annotations, t2.Annotations...)
 	}
@@ -819,6 +847,12 @@ func (t *Target) Merge(t2 *Target) {
 	if t2.Ulimits != nil { // merge
 		t.Ulimits = append(t.Ulimits, t2.Ulimits...)
 	}
+	if t2.Description != "" {
+		t.Description = t2.Description
+	}
+	if t2.Entitlements != nil { // merge
+		t.Entitlements = append(t.Entitlements, t2.Entitlements...)
+	}
 	t.Inherits = append(t.Inherits, t2.Inherits...)
 }
 
@@ -833,7 +867,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Dockerfile = &value
 		case "args":
 			if len(keys) != 2 {
-				return errors.Errorf("args require name")
+				return errors.Errorf("invalid format for args, expecting args.<name>=<value>")
 			}
 			if t.Args == nil {
 				t.Args = map[string]*string{}
@@ -841,7 +875,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Args[keys[1]] = &value
 		case "contexts":
 			if len(keys) != 2 {
-				return errors.Errorf("contexts require name")
+				return errors.Errorf("invalid format for contexts, expecting contexts.<name>=<value>")
 			}
 			if t.Contexts == nil {
 				t.Contexts = map[string]string{}
@@ -849,7 +883,7 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Contexts[keys[1]] = value
 		case "labels":
 			if len(keys) != 2 {
-				return errors.Errorf("labels require name")
+				return errors.Errorf("invalid format for labels, expecting labels.<name>=<value>")
 			}
 			if t.Labels == nil {
 				t.Labels = map[string]*string{}
@@ -863,6 +897,8 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.CacheTo = o.ArrValue
 		case "target":
 			t.Target = &value
+		case "call":
+			t.Call = &value
 		case "secrets":
 			t.Secrets = o.ArrValue
 		case "ssh":
@@ -871,6 +907,8 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Platforms = o.ArrValue
 		case "output":
 			t.Outputs = o.ArrValue
+		case "entitlements":
+			t.Entitlements = append(t.Entitlements, o.ArrValue...)
 		case "annotations":
 			t.Annotations = append(t.Annotations, o.ArrValue...)
 		case "attest":
@@ -887,6 +925,8 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.ShmSize = &value
 		case "ulimits":
 			t.Ulimits = o.ArrValue
+		case "network":
+			t.NetworkMode = &value
 		case "pull":
 			pull, err := strconv.ParseBool(value)
 			if err != nil {
@@ -894,19 +934,17 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			}
 			t.Pull = &pull
 		case "push":
-			_, err := strconv.ParseBool(value)
+			push, err := strconv.ParseBool(value)
 			if err != nil {
 				return errors.Errorf("invalid value %s for boolean key push", value)
 			}
-			if len(t.Outputs) == 0 {
-				t.Outputs = append(t.Outputs, "type=image,push=true")
-			} else {
-				for i, output := range t.Outputs {
-					if typ := parseOutputType(output); typ == "image" || typ == "registry" {
-						t.Outputs[i] = t.Outputs[i] + ",push=" + value
-					}
-				}
-			}
+			t.Outputs = setPushOverride(t.Outputs, push)
+		case "load":
+			load, err := strconv.ParseBool(value)
+			if err != nil {
+				return errors.Errorf("invalid value %s for boolean key load", value)
+			}
+			t.Outputs = setLoadOverride(t.Outputs, load)
 		default:
 			return errors.Errorf("unknown key: %s", keys[0])
 		}
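For orientation, a hypothetical sketch of how the rewritten push/load cases are reached; the Override value fields are assumptions inferred from their use above (o.ArrValue and the value variable), not a documented API:

	// Roughly what `--set app.push=true --set app.load=true` boils down to.
	err := t.AddOverrides(map[string]Override{
		"push": {Value: "true"}, // t.Outputs = setPushOverride(t.Outputs, true)
		"load": {Value: "true"}, // t.Outputs = setLoadOverride(t.Outputs, true)
	})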
@@ -1079,62 +1117,34 @@ func updateContext(t *build.Inputs, inp *Input) {
 	t.ContextState = &st
 }
 
-// validateContextsEntitlements is a basic check to ensure contexts do not
-// escape local directories when loaded from remote sources. This is to be
-// replaced with proper entitlements support in the future.
-func validateContextsEntitlements(t build.Inputs, inp *Input) error {
-	if inp == nil || inp.State == nil {
-		return nil
-	}
-	if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
-		if vv, _ := strconv.ParseBool(v); vv {
-			return nil
-		}
-	}
+func collectLocalPaths(t build.Inputs) []string {
+	var out []string
 	if t.ContextState == nil {
-		if err := checkPath(t.ContextPath); err != nil {
-			return err
+		if v, ok := isLocalPath(t.ContextPath); ok {
+			out = append(out, v)
 		}
+		if v, ok := isLocalPath(t.DockerfilePath); ok {
+			out = append(out, v)
+		}
+	} else if strings.HasPrefix(t.ContextPath, "cwd://") {
+		out = append(out, strings.TrimPrefix(t.ContextPath, "cwd://"))
 	}
 	for _, v := range t.NamedContexts {
 		if v.State != nil {
 			continue
 		}
-		if err := checkPath(v.Path); err != nil {
-			return err
+		if v, ok := isLocalPath(v.Path); ok {
+			out = append(out, v)
 		}
 	}
-	return nil
+	return out
 }
 
-func checkPath(p string) error {
+func isLocalPath(p string) (string, bool) {
 	if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
-		return nil
+		return "", false
 	}
-	p, err := filepath.EvalSymlinks(p)
-	if err != nil {
-		if os.IsNotExist(err) {
-			return nil
-		}
-		return err
-	}
-	p, err = filepath.Abs(p)
-	if err != nil {
-		return err
-	}
-	wd, err := os.Getwd()
-	if err != nil {
-		return err
-	}
-	rel, err := filepath.Rel(wd, p)
-	if err != nil {
-		return err
-	}
-	parts := strings.Split(rel, string(os.PathSeparator))
-	if parts[0] == ".." {
-		return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
-	}
-	return nil
+	return strings.TrimPrefix(p, "cwd://"), true
 }
 
 func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
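isLocalPath drops the old filesystem checks entirely: it only classifies a path and strips any cwd:// prefix. A quick sketch of the classification implied by the code above:

	for _, p := range []string{
		"https://github.com/docker/buildx.git", // remote URL: "", false
		"docker-image://alpine",                // image context: "", false
		"target:base",                          // linked target: "", false
		"cwd://src",                            // local: "src", true
		"./src",                                // local: "./src", true
	} {
		v, ok := isLocalPath(p)
		fmt.Println(v, ok)
	}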
@@ -1174,9 +1184,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		// it's not outside the working directory and then resolve it to an
 		// absolute path.
 		bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
-		if err := checkPath(bi.DockerfilePath); err != nil {
-			return nil, err
-		}
 		var err error
 		bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
 		if err != nil {
@@ -1213,10 +1220,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		}
 	}
 
-	if err := validateContextsEntitlements(bi, inp); err != nil {
-		return nil, err
-	}
-
 	t.Context = &bi.ContextPath
 
 	args := map[string]string{}
@@ -1277,6 +1280,8 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if err != nil {
 		return nil, err
 	}
+	bo.SecretSpecs = secrets
+
 	secretAttachment, err := controllerapi.CreateSecrets(secrets)
 	if err != nil {
 		return nil, err
@@ -1290,6 +1295,8 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
 		sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
 	}
+	bo.SSHSpecs = sshSpecs
+
 	sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
 	if err != nil {
 		return nil, err
@@ -1300,6 +1307,12 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		bo.Target = *t.Target
 	}
 
+	if t.Call != nil {
+		bo.CallFunc = &build.CallFunc{
+			Name: *t.Call,
+		}
+	}
+
 	cacheImports, err := buildflags.ParseCacheEntry(t.CacheFrom)
 	if err != nil {
 		return nil, err
@@ -1350,6 +1363,10 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	}
 	bo.Ulimits = ulimits
 
+	for _, ent := range t.Entitlements {
+		bo.Allow = append(bo.Allow, entitlements.Entitlement(ent))
+	}
+
 	return bo, nil
 }
 
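The loop added above is the whole bridge from bake-level entitlement strings to the build options: each string is converted with a plain type cast. The entitlement tests later in this diff pin the two values used here to the buildkit constants; a sketch:

	t := &Target{Entitlements: []string{"security.insecure", "network.host"}}
	var allow []entitlements.Entitlement
	for _, ent := range t.Entitlements {
		allow = append(allow, entitlements.Entitlement(ent))
	}
	// allow[0] == entitlements.EntitlementSecurityInsecure
	// allow[1] == entitlements.EntitlementNetworkHost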
@@ -1394,23 +1411,89 @@ func removeAttestDupes(s []string) []string {
 	return res
 }
 
-func parseOutputType(str string) string {
-	csvReader := csv.NewReader(strings.NewReader(str))
-	fields, err := csvReader.Read()
+func parseOutput(str string) map[string]string {
+	fields, err := csvvalue.Fields(str, nil)
 	if err != nil {
-		return ""
+		return nil
 	}
+	res := map[string]string{}
 	for _, field := range fields {
 		parts := strings.SplitN(field, "=", 2)
 		if len(parts) == 2 {
-			if parts[0] == "type" {
-				return parts[1]
-			}
+			res[parts[0]] = parts[1]
 		}
 	}
+	return res
+}
+
+func parseOutputType(str string) string {
+	if out := parseOutput(str); out != nil {
+		if v, ok := out["type"]; ok {
+			return v
+		}
 	}
 	return ""
 }
+
+func setPushOverride(outputs []string, push bool) []string {
+	var out []string
+	setPush := true
+	for _, output := range outputs {
+		typ := parseOutputType(output)
+		if typ == "image" || typ == "registry" {
+			// no need to set push if image or registry types already defined
+			setPush = false
+			if typ == "registry" {
+				if !push {
+					// don't set registry output if "push" is false
+					continue
+				}
+				// no need to set "push" attribute to true for registry
+				out = append(out, output)
+				continue
+			}
+			out = append(out, output+",push="+strconv.FormatBool(push))
+		} else {
+			if typ != "docker" {
+				// if there is any output that is not docker, don't set "push"
+				setPush = false
+			}
+			out = append(out, output)
+		}
+	}
+	if push && setPush {
+		out = append(out, "type=image,push=true")
+	}
+	return out
+}
+
+func setLoadOverride(outputs []string, load bool) []string {
+	if !load {
+		return outputs
+	}
+	setLoad := true
+	for _, output := range outputs {
+		if typ := parseOutputType(output); typ == "docker" {
+			if v := parseOutput(output); v != nil {
+				// dest set means we want to output as tar so don't set load
+				if _, ok := v["dest"]; !ok {
+					setLoad = false
+					break
+				}
+			}
+		} else if typ != "image" && typ != "registry" && typ != "oci" {
+			// if there is any output that is not an image, registry
+			// or oci, don't set "load" similar to push override
+			setLoad = false
+			break
+		}
+	}
+	if setLoad {
+		outputs = append(outputs, "type=docker")
+	}
+	return outputs
+}
 
 func validateTargetName(name string) error {
 	if !targetNamePattern.MatchString(name) {
 		return errors.Errorf("only %q are allowed", validTargetNameChars)
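Taken together, the two helpers give --set '*.push=...' and --set '*.load=...' the rewrite behavior that the test changes below pin down; a few input/output pairs for orientation:

	setPushOverride(nil, true)                                     // ["type=image,push=true"]
	setPushOverride([]string{"type=image,compression=zstd"}, true) // ["type=image,compression=zstd,push=true"]
	setPushOverride([]string{"type=registry"}, false)              // [] (registry output dropped when push=false)
	setPushOverride([]string{"type=local,dest=out"}, true)         // unchanged: a non-image output blocks the implicit push

	setLoadOverride(nil, true)                              // ["type=docker"]
	setLoadOverride([]string{"type=image"}, true)           // ["type=image", "type=docker"]
	setLoadOverride([]string{"type=docker,dest=out"}, true) // ["type=docker,dest=out", "type=docker"] (dest means tar output)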
diff --git a/bake/bake_test.go b/bake/bake_test.go
--- a/bake/bake_test.go
+++ b/bake/bake_test.go
@@ -8,6 +8,7 @@ import (
 	"strings"
 	"testing"
 
+	"github.com/moby/buildkit/util/entitlements"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -58,8 +59,8 @@ target "webapp" {
 	t.Run("InvalidTargetOverrides", func(t *testing.T) {
 		t.Parallel()
 		_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
-		require.NotNil(t, err)
-		require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
+		require.Error(t, err)
+		require.Equal(t, "could not find any target matching 'nosuchtarget'", err.Error())
 	})
 
 	t.Run("ArgsOverrides", func(t *testing.T) {
@@ -115,7 +116,7 @@ target "webapp" {
 	t.Run("ContextOverride", func(t *testing.T) {
 		t.Parallel()
 		_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
-		require.NotNil(t, err)
+		require.Error(t, err)
 
 		m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context=foo"}, nil)
 		require.NoError(t, err)
@@ -202,8 +203,8 @@ target "webapp" {
 			// NOTE: I am unsure whether failing to match should always error out
 			// instead of simply skipping that override.
 			// Let's enforce the error and we can relax it later if users complain.
-			require.NotNil(t, err)
-			require.Equal(t, err.Error(), "could not find any target matching 'nomatch*'")
+			require.Error(t, err)
+			require.Equal(t, "could not find any target matching 'nomatch*'", err.Error())
 		},
 	},
 }
@@ -217,8 +218,20 @@ target "webapp" {
 }
 
 func TestPushOverride(t *testing.T) {
-	t.Parallel()
+	t.Run("empty output", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
+	})
+
+	t.Run("type image", func(t *testing.T) {
 		fp := File{
 			Name: "docker-bake.hcl",
 			Data: []byte(
@@ -226,39 +239,231 @@ func TestPushOverride(t *testing.T) {
 				`target "app" {
				output = ["type=image,compression=zstd"]
			}`),
 		}
-	ctx := context.TODO()
-	m, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
-	require.NoError(t, err)
-
-	require.Equal(t, 1, len(m["app"].Outputs))
-	require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
-
-	fp = File{
-		Name: "docker-bake.hcl",
-		Data: []byte(
-			`target "app" {
-				output = ["type=image,compression=zstd"]
-			}`),
-	}
-	ctx = context.TODO()
-	m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
-	require.NoError(t, err)
-
-	require.Equal(t, 1, len(m["app"].Outputs))
-	require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
-
-	fp = File{
-		Name: "docker-bake.hcl",
-		Data: []byte(
-			`target "app" {
-			}`),
-	}
-	ctx = context.TODO()
-	m, _, err = ReadTargets(ctx, []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
-	require.NoError(t, err)
-
-	require.Equal(t, 1, len(m["app"].Outputs))
-	require.Equal(t, "type=image,push=true", m["app"].Outputs[0])
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, "type=image,compression=zstd,push=true", m["app"].Outputs[0])
+	})
+
+	t.Run("type image push false", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=image,compression=zstd"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, "type=image,compression=zstd,push=false", m["app"].Outputs[0])
+	})
+
+	t.Run("type registry", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=registry"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, "type=registry", m["app"].Outputs[0])
+	})
+
+	t.Run("type registry push false", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=registry"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.push=false"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 0, len(m["app"].Outputs))
+	})
+
+	t.Run("type local and empty target", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "foo" {
+					output = [ "type=local,dest=out" ]
+				}
+				target "bar" {
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m))
+		require.Equal(t, 1, len(m["foo"].Outputs))
+		require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
+		require.Equal(t, 1, len(m["bar"].Outputs))
+		require.Equal(t, []string{"type=image,push=true"}, m["bar"].Outputs)
+	})
+}
+
+func TestLoadOverride(t *testing.T) {
+	t.Run("empty output", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, "type=docker", m["app"].Outputs[0])
+	})
+
+	t.Run("type docker", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=docker"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=docker"}, m["app"].Outputs)
+	})
+
+	t.Run("type image", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=image"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=image", "type=docker"}, m["app"].Outputs)
+	})
+
+	t.Run("type image load false", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=image"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=false"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=image"}, m["app"].Outputs)
+	})
+
+	t.Run("type registry", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=registry"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=registry", "type=docker"}, m["app"].Outputs)
+	})
+
+	t.Run("type oci", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=oci,dest=out"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=oci,dest=out", "type=docker"}, m["app"].Outputs)
+	})
+
+	t.Run("type docker with dest", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "app" {
+				output = ["type=docker,dest=out"]
+			}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"app"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m["app"].Outputs))
+		require.Equal(t, []string{"type=docker,dest=out", "type=docker"}, m["app"].Outputs)
+	})
+
+	t.Run("type local and empty target", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "foo" {
+					output = [ "type=local,dest=out" ]
+				}
+				target "bar" {
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m))
+		require.Equal(t, 1, len(m["foo"].Outputs))
+		require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
+		require.Equal(t, 1, len(m["bar"].Outputs))
+		require.Equal(t, []string{"type=docker"}, m["bar"].Outputs)
+	})
+}
+
+func TestLoadAndPushOverride(t *testing.T) {
+	t.Run("type local and empty target", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "foo" {
+					output = [ "type=local,dest=out" ]
+				}
+				target "bar" {
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo", "bar"}, []string{"*.load=true", "*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 2, len(m))
+
+		require.Equal(t, 1, len(m["foo"].Outputs))
+		sort.Strings(m["foo"].Outputs)
+		require.Equal(t, []string{"type=local,dest=out"}, m["foo"].Outputs)
+
+		require.Equal(t, 2, len(m["bar"].Outputs))
+		sort.Strings(m["bar"].Outputs)
+		require.Equal(t, []string{"type=docker", "type=image,push=true"}, m["bar"].Outputs)
+	})
+
+	t.Run("type registry", func(t *testing.T) {
+		fp := File{
+			Name: "docker-bake.hcl",
+			Data: []byte(
+				`target "foo" {
+					output = [ "type=registry" ]
+				}`),
+		}
+		m, _, err := ReadTargets(context.TODO(), []File{fp}, []string{"foo"}, []string{"*.load=true", "*.push=true"}, nil)
+		require.NoError(t, err)
+		require.Equal(t, 1, len(m))
+
+		require.Equal(t, 2, len(m["foo"].Outputs))
+		sort.Strings(m["foo"].Outputs)
+		require.Equal(t, []string{"type=docker", "type=registry"}, m["foo"].Outputs)
+	})
 }
 
 func TestReadTargetsCompose(t *testing.T) {
@@ -634,7 +839,8 @@ func TestReadContextFromTargetChain(t *testing.T) {
 
 	mid, ok := m["mid"]
 	require.True(t, ok)
-	require.Equal(t, 0, len(mid.Outputs))
+	require.Equal(t, 1, len(mid.Outputs))
+	require.Equal(t, "type=cacheonly", mid.Outputs[0])
 	require.Equal(t, 1, len(mid.Contexts))
 
 	base, ok := m["base"]
@@ -1324,7 +1530,7 @@ services:
    v2: "bar"
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.foo"},
 		{Data: dt2, Name: "c2.bar"},
 	}, nil)
@@ -1521,3 +1727,304 @@ func TestAnnotations(t *testing.T) {
 	require.Len(t, bo["app"].Exports, 1)
 	require.Equal(t, "bar", bo["app"].Exports[0].Attrs["annotation-manifest[linux/amd64].foo"])
 }
+
+func TestHCLEntitlements(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`target "app" {
+				entitlements = ["security.insecure", "network.host"]
+			}`),
+	}
+	ctx := context.TODO()
+	m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+	require.NoError(t, err)
+
+	bo, err := TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(g))
+	require.Equal(t, []string{"app"}, g["default"].Targets)
+
+	require.Equal(t, 1, len(m))
+	require.Contains(t, m, "app")
+	require.Len(t, m["app"].Entitlements, 2)
+	require.Equal(t, "security.insecure", m["app"].Entitlements[0])
+	require.Equal(t, "network.host", m["app"].Entitlements[1])
+
+	require.Len(t, bo["app"].Allow, 2)
+	require.Equal(t, entitlements.EntitlementSecurityInsecure, bo["app"].Allow[0])
+	require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[1])
+}
+
+func TestEntitlementsForNetHostCompose(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`target "app" {
+				dockerfile = "app.Dockerfile"
+			}`),
+	}
+
+	fp2 := File{
+		Name: "docker-compose.yml",
+		Data: []byte(
+			`services:
+  app:
+    build:
+      network: "host"
+`),
+	}
+
+	ctx := context.TODO()
+	m, g, err := ReadTargets(ctx, []File{fp, fp2}, []string{"app"}, nil, nil)
+	require.NoError(t, err)
+
+	bo, err := TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(g))
+	require.Equal(t, []string{"app"}, g["default"].Targets)
+
+	require.Equal(t, 1, len(m))
+	require.Contains(t, m, "app")
+	require.Len(t, m["app"].Entitlements, 1)
+	require.Equal(t, "network.host", m["app"].Entitlements[0])
+	require.Equal(t, "host", *m["app"].NetworkMode)
+
+	require.Len(t, bo["app"].Allow, 1)
+	require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[0])
+	require.Equal(t, "host", bo["app"].NetworkMode)
+}
+
+func TestEntitlementsForNetHost(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`target "app" {
+				dockerfile = "app.Dockerfile"
+				network = "host"
+			}`),
+	}
+
+	ctx := context.TODO()
+	m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+	require.NoError(t, err)
+
+	bo, err := TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(g))
+	require.Equal(t, []string{"app"}, g["default"].Targets)
+
+	require.Equal(t, 1, len(m))
+	require.Contains(t, m, "app")
+	require.Len(t, m["app"].Entitlements, 1)
+	require.Equal(t, "network.host", m["app"].Entitlements[0])
+	require.Equal(t, "host", *m["app"].NetworkMode)
+
+	require.Len(t, bo["app"].Allow, 1)
+	require.Equal(t, entitlements.EntitlementNetworkHost, bo["app"].Allow[0])
+	require.Equal(t, "host", bo["app"].NetworkMode)
+}
+
+func TestNetNone(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`target "app" {
+				dockerfile = "app.Dockerfile"
+				network = "none"
+			}`),
+	}
+
+	ctx := context.TODO()
+	m, g, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+	require.NoError(t, err)
+
+	bo, err := TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(g))
+	require.Equal(t, []string{"app"}, g["default"].Targets)
+
+	require.Equal(t, 1, len(m))
+	require.Contains(t, m, "app")
+	require.Len(t, m["app"].Entitlements, 0)
+	require.Equal(t, "none", *m["app"].NetworkMode)
+
+	require.Len(t, bo["app"].Allow, 0)
+	require.Equal(t, "none", bo["app"].NetworkMode)
+}
+
+func TestVariableValidation(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+variable "FOO" {
+  validation {
+    condition = FOO != ""
+    error_message = "FOO is required."
+  }
+}
+target "app" {
+  args = {
+    FOO = FOO
+  }
+}
+`),
+	}
+
+	ctx := context.TODO()
+
+	t.Run("Valid", func(t *testing.T) {
+		t.Setenv("FOO", "bar")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.NoError(t, err)
+	})
+
+	t.Run("Invalid", func(t *testing.T) {
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.Error(t, err)
+		require.Contains(t, err.Error(), "FOO is required.")
+	})
+}
+
+func TestVariableValidationMulti(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+variable "FOO" {
+  validation {
+    condition = FOO != ""
+    error_message = "FOO is required."
+  }
+  validation {
+    condition = strlen(FOO) > 4
+    error_message = "FOO must be longer than 4 characters."
+  }
+}
+target "app" {
+  args = {
+    FOO = FOO
+  }
+}
+`),
+	}
+
+	ctx := context.TODO()
+
+	t.Run("Valid", func(t *testing.T) {
+		t.Setenv("FOO", "barbar")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.NoError(t, err)
+	})
+
+	t.Run("InvalidLength", func(t *testing.T) {
+		t.Setenv("FOO", "bar")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.Error(t, err)
+		require.Contains(t, err.Error(), "FOO must be longer than 4 characters.")
+	})
+
+	t.Run("InvalidEmpty", func(t *testing.T) {
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.Error(t, err)
+		require.Contains(t, err.Error(), "FOO is required.")
+	})
+}
+
+func TestVariableValidationWithDeps(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+variable "FOO" {}
+variable "BAR" {
+  validation {
+    condition = FOO != ""
+    error_message = "BAR requires FOO to be set."
+  }
+}
+target "app" {
+  args = {
+    BAR = BAR
+  }
+}
+`),
+	}
+
+	ctx := context.TODO()
+
+	t.Run("Valid", func(t *testing.T) {
+		t.Setenv("FOO", "bar")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.NoError(t, err)
+	})
+
+	t.Run("SetBar", func(t *testing.T) {
+		t.Setenv("FOO", "bar")
+		t.Setenv("BAR", "baz")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.NoError(t, err)
+	})
+
+	t.Run("Invalid", func(t *testing.T) {
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.Error(t, err)
+		require.Contains(t, err.Error(), "BAR requires FOO to be set.")
+	})
+}
+
+func TestVariableValidationTyped(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+variable "FOO" {
+  default = 0
+  validation {
+    condition = FOO > 5
+    error_message = "FOO must be greater than 5."
+  }
+}
+target "app" {
+  args = {
+    FOO = FOO
+  }
+}
+`),
+	}
+
+	ctx := context.TODO()
+
+	t.Run("Valid", func(t *testing.T) {
+		t.Setenv("FOO", "10")
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.NoError(t, err)
+	})
+
+	t.Run("Invalid", func(t *testing.T) {
+		_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+		require.Error(t, err)
+		require.Contains(t, err.Error(), "FOO must be greater than 5.")
+	})
+}
+
+// https://github.com/docker/buildx/issues/2822
+func TestVariableEmpty(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+variable "FOO" {
+  default = ""
+}
+target "app" {
+  output = [FOO]
+}
+`),
+	}
+
+	ctx := context.TODO()
+
+	_, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil)
+	require.NoError(t, err)
+}
diff --git a/bake/compose.go b/bake/compose.go
--- a/bake/compose.go
+++ b/bake/compose.go
@@ -5,8 +5,10 @@ import (
 	"fmt"
 	"os"
 	"path/filepath"
+	"sort"
 	"strings"
 
+	"github.com/compose-spec/compose-go/v2/consts"
 	"github.com/compose-spec/compose-go/v2/dotenv"
 	"github.com/compose-spec/compose-go/v2/loader"
 	composetypes "github.com/compose-spec/compose-go/v2/types"
@@ -39,7 +41,11 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 		ConfigFiles: cfgs,
 		Environment: envs,
 	}, func(options *loader.Options) {
-		options.SetProjectName("bake", false)
+		projectName := "bake"
+		if v, ok := envs[consts.ComposeProjectName]; ok && v != "" {
+			projectName = v
+		}
+		options.SetProjectName(projectName, false)
 		options.SkipNormalization = true
 		options.Profiles = []string{"*"}
 	})
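With the change above, the loader's project name is only defaulted to "bake" when the environment passed to ParseCompose does not carry COMPOSE_PROJECT_NAME (the consts.ComposeProjectName key); the TestProjectName addition at the end of this diff exercises the default path. A hedged sketch of the non-default path, assuming cfgs is any parsed compose file set:

	// With this env, ${COMPOSE_PROJECT_NAME} inside the compose file should
	// resolve to "myproj" instead of the "bake" fallback.
	envs := map[string]string{consts.ComposeProjectName: "myproj"}
	c, err := ParseCompose(cfgs, envs)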
@@ -96,6 +102,12 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 			shmSize = &shmSizeStr
 		}
 
+		var networkModeP *string
+		if s.Build.Network != "" {
+			networkMode := s.Build.Network
+			networkModeP = &networkMode
+		}
+
 		var ulimits []string
 		if s.Build.Ulimits != nil {
 			for n, u := range s.Build.Ulimits {
@@ -107,6 +119,13 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 			}
 		}
 
+		var ssh []string
+		for _, bkey := range s.Build.SSH {
+			sshkey := composeToBuildkitSSH(bkey)
+			ssh = append(ssh, sshkey)
+		}
+		sort.Strings(ssh)
+
 		var secrets []string
 		for _, bs := range s.Build.Secrets {
 			secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
@@ -141,7 +160,8 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				})),
 				CacheFrom:   s.Build.CacheFrom,
 				CacheTo:     s.Build.CacheTo,
-				NetworkMode: &s.Build.Network,
+				NetworkMode: networkModeP,
+				SSH:         ssh,
 				Secrets:     secrets,
 				ShmSize:     shmSize,
 				Ulimits:     ulimits,
@@ -159,7 +179,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 			c.Targets = append(c.Targets, t)
 		}
 		c.Groups = append(c.Groups, g)
-
 	}
 
 	return &c, nil
@@ -275,7 +294,7 @@ type xbake struct {
 	NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
 	Contexts      stringMap   `yaml:"contexts,omitempty"`
 	// don't forget to update documentation if you add a new field:
-	// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
+	// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
 }
 
 type stringMap map[string]string
@@ -325,6 +344,7 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 	}
 	if len(xb.SSH) > 0 {
 		t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
+		sort.Strings(t.SSH)
 	}
 	if len(xb.Platforms) > 0 {
 		t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
@@ -368,3 +388,17 @@ func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret compo
 
 	return strings.Join(bkattrs, ","), nil
 }
+
+// composeToBuildkitSSH converts secret from compose format to buildkit's
+// csv format.
+func composeToBuildkitSSH(sshKey composetypes.SSHKey) string {
+	var bkattrs []string
+
+	bkattrs = append(bkattrs, sshKey.ID)
+
+	if sshKey.Path != "" {
+		bkattrs = append(bkattrs, sshKey.Path)
+	}
+
+	return strings.Join(bkattrs, "=")
+}
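composeToBuildkitSSH joins the key ID and the optional path with "="; together with the sort.Strings calls above this produces the deterministic SSH lists that the compose tests below assert. For example:

	composeToBuildkitSSH(composetypes.SSHKey{ID: "default"})                  // "default"
	composeToBuildkitSSH(composetypes.SSHKey{ID: "key", Path: "path/to/key"}) // "key=path/to/key"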
|
@@ -32,6 +32,9 @@ services:
         - type=local,src=path/to/cache
       cache_to:
         - type=local,dest=path/to/cache
+      ssh:
+        - key=path/to/key
+        - default
       secrets:
         - token
         - aws
@@ -74,6 +77,7 @@ secrets:
 	require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
 	require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
 	require.Equal(t, "none", *c.Targets[1].NetworkMode)
+	require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[1].SSH)
 	require.Equal(t, []string{
 		"id=token,env=ENV_TOKEN",
 		"id=aws,src=/root/.aws/credentials",
@@ -278,6 +282,8 @@ services:
         - user/app:cache
       tags:
         - ct-addon:baz
+      ssh:
+        key: path/to/key
       args:
         CT_ECR: foo
         CT_TAG: bar
@@ -287,6 +293,9 @@ services:
         tags:
           - ct-addon:foo
           - ct-addon:alp
+        ssh:
+          - default
+          - other=path/to/otherkey
         platforms:
           - linux/amd64
           - linux/arm64
@@ -329,6 +338,7 @@ services:
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
 	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
 	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
+	require.Equal(t, []string{"default", "key=path/to/key", "other=path/to/otherkey"}, c.Targets[0].SSH)
 	require.Equal(t, newBool(true), c.Targets[0].Pull)
 	require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
 	require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
@@ -353,6 +363,8 @@ services:
         - user/app:cache
       tags:
         - ct-addon:foo
+      ssh:
+        - default
       x-bake:
         tags:
           - ct-addon:foo
@@ -362,6 +374,9 @@ services:
           - type=local,src=path/to/cache
         cache-to:
           - type=local,dest=path/to/cache
+        ssh:
+          - default
+          - key=path/to/key
 `)
 
 	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
@@ -370,6 +385,7 @@ services:
 	require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
 	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
 	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
+	require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
 }
 
 func TestEnv(t *testing.T) {
@@ -742,6 +758,46 @@ services:
 	require.NoError(t, err)
 }
 
+func TestCgroup(t *testing.T) {
+	var dt = []byte(`
+services:
+  scratch:
+    build:
+      context: ./webapp
+      cgroup: private
+`)
+
+	_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+}
+
+func TestProjectName(t *testing.T) {
+	var dt = []byte(`
+services:
+  scratch:
+    build:
+      context: ./webapp
+      args:
+        PROJECT_NAME: ${COMPOSE_PROJECT_NAME}
+`)
+
+	t.Run("default", func(t *testing.T) {
+		c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+		require.NoError(t, err)
+		require.Len(t, c.Targets, 1)
+		require.Len(t, c.Targets[0].Args, 1)
+		require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("bake")}, c.Targets[0].Args)
+	})
+
+	t.Run("env", func(t *testing.T) {
+		c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, map[string]string{"COMPOSE_PROJECT_NAME": "foo"})
+		require.NoError(t, err)
+		require.Len(t, c.Targets, 1)
+		require.Len(t, c.Targets[0].Args, 1)
+		require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("foo")}, c.Targets[0].Args)
+	})
+}
+
 // chdir changes the current working directory to the named directory,
 // and then restore the original working directory at the end of the test.
 func chdir(t *testing.T, dir string) {
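A hedged usage sketch tying the assertions above together: ssh entries come back from the parser sorted and deduplicated, which is why an input order of [key=path/to/key, default] is asserted as [default, key=path/to/key]. It assumes only the ParseCompose signature already used in these tests.

    package bake

    import (
    	"testing"

    	composetypes "github.com/compose-spec/compose-go/v2/types" // assumed import path
    	"github.com/stretchr/testify/require"
    )

    // Sketch: input order of ssh entries does not survive parsing; the
    // resulting Target.SSH is sorted, per the hunks above.
    func TestComposeSSHSortedSketch(t *testing.T) {
    	dt := []byte(`
    services:
      app:
        build:
          context: .
          ssh:
            - key=path/to/key
            - default
    `)
    	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
    	require.NoError(t, err)
    	require.Equal(t, []string{"default", "key=path/to/key"}, c.Targets[0].SSH)
    }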
bake/entitlements.go (new file, 601 lines)
@@ -0,0 +1,601 @@
+package bake
+
+import (
+	"bufio"
+	"cmp"
+	"context"
+	"fmt"
+	"io"
+	"io/fs"
+	"os"
+	"path/filepath"
+	"slices"
+	"strconv"
+	"strings"
+	"syscall"
+
+	"github.com/containerd/console"
+	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/util/osutil"
+	"github.com/moby/buildkit/util/entitlements"
+	"github.com/pkg/errors"
+)
+
+type EntitlementKey string
+
+const (
+	EntitlementKeyNetworkHost      EntitlementKey = "network.host"
+	EntitlementKeySecurityInsecure EntitlementKey = "security.insecure"
+	EntitlementKeyFSRead           EntitlementKey = "fs.read"
+	EntitlementKeyFSWrite          EntitlementKey = "fs.write"
+	EntitlementKeyFS               EntitlementKey = "fs"
+	EntitlementKeyImagePush        EntitlementKey = "image.push"
+	EntitlementKeyImageLoad        EntitlementKey = "image.load"
+	EntitlementKeyImage            EntitlementKey = "image"
+	EntitlementKeySSH              EntitlementKey = "ssh"
+)
+
+type EntitlementConf struct {
+	NetworkHost      bool
+	SecurityInsecure bool
+	FSRead           []string
+	FSWrite          []string
+	ImagePush        []string
+	ImageLoad        []string
+	SSH              bool
+}
+
+func ParseEntitlements(in []string) (EntitlementConf, error) {
+	var conf EntitlementConf
+	for _, e := range in {
+		switch e {
+		case string(EntitlementKeyNetworkHost):
+			conf.NetworkHost = true
+		case string(EntitlementKeySecurityInsecure):
+			conf.SecurityInsecure = true
+		case string(EntitlementKeySSH):
+			conf.SSH = true
+		default:
+			k, v, _ := strings.Cut(e, "=")
+			switch k {
+			case string(EntitlementKeyFSRead):
+				conf.FSRead = append(conf.FSRead, v)
+			case string(EntitlementKeyFSWrite):
+				conf.FSWrite = append(conf.FSWrite, v)
+			case string(EntitlementKeyFS):
+				conf.FSRead = append(conf.FSRead, v)
+				conf.FSWrite = append(conf.FSWrite, v)
+			case string(EntitlementKeyImagePush):
+				conf.ImagePush = append(conf.ImagePush, v)
+			case string(EntitlementKeyImageLoad):
+				conf.ImageLoad = append(conf.ImageLoad, v)
+			case string(EntitlementKeyImage):
+				conf.ImagePush = append(conf.ImagePush, v)
+				conf.ImageLoad = append(conf.ImageLoad, v)
+			default:
+				return conf, errors.Errorf("unknown entitlement key %q", k)
+			}
+		}
+	}
+	return conf, nil
+}
+
+func (c EntitlementConf) Validate(m map[string]build.Options) (EntitlementConf, error) {
+	var expected EntitlementConf
+
+	for _, v := range m {
+		if err := c.check(v, &expected); err != nil {
+			return EntitlementConf{}, err
+		}
+	}
+
+	return expected, nil
+}
+
+func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) error {
+	for _, e := range bo.Allow {
+		switch e {
+		case entitlements.EntitlementNetworkHost:
+			if !c.NetworkHost {
+				expected.NetworkHost = true
+			}
+		case entitlements.EntitlementSecurityInsecure:
+			if !c.SecurityInsecure {
+				expected.SecurityInsecure = true
+			}
+		}
+	}
+
+	rwPaths := map[string]struct{}{}
+	roPaths := map[string]struct{}{}
+
+	for _, p := range collectLocalPaths(bo.Inputs) {
+		roPaths[p] = struct{}{}
+	}
+
+	for _, out := range bo.Exports {
+		if out.Type == "local" {
+			if dest, ok := out.Attrs["dest"]; ok {
+				rwPaths[dest] = struct{}{}
+			}
+		}
+		if out.Type == "tar" {
+			if dest, ok := out.Attrs["dest"]; ok && dest != "-" {
+				rwPaths[dest] = struct{}{}
+			}
+		}
+	}
+
+	for _, ce := range bo.CacheTo {
+		if ce.Type == "local" {
+			if dest, ok := ce.Attrs["dest"]; ok {
+				rwPaths[dest] = struct{}{}
+			}
+		}
+	}
+
+	for _, ci := range bo.CacheFrom {
+		if ci.Type == "local" {
+			if src, ok := ci.Attrs["src"]; ok {
+				roPaths[src] = struct{}{}
+			}
+		}
+	}
+
+	for _, secret := range bo.SecretSpecs {
+		if secret.FilePath != "" {
+			roPaths[secret.FilePath] = struct{}{}
+		}
+	}
+
+	for _, ssh := range bo.SSHSpecs {
+		for _, p := range ssh.Paths {
+			roPaths[p] = struct{}{}
+		}
+		if len(ssh.Paths) == 0 {
+			expected.SSH = true
+		}
+	}
+
+	var err error
+	expected.FSRead, err = findMissingPaths(c.FSRead, roPaths)
+	if err != nil {
+		return err
+	}
+
+	expected.FSWrite, err = findMissingPaths(c.FSWrite, rwPaths)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Writer) error {
+	var term bool
+	if _, err := console.ConsoleFromFile(os.Stdin); err == nil {
+		term = true
+	}
+
+	var msgs []string
+	var flags []string
+
+	// these warnings are currently disabled to give users time to update
+	var msgsFS []string
+	var flagsFS []string
+
+	if c.NetworkHost {
+		msgs = append(msgs, " - Running build containers that can access host network")
+		flags = append(flags, string(EntitlementKeyNetworkHost))
+	}
+	if c.SecurityInsecure {
+		msgs = append(msgs, " - Running privileged containers that can make system changes")
+		flags = append(flags, string(EntitlementKeySecurityInsecure))
+	}
+
+	if c.SSH {
+		msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
+		flagsFS = append(flagsFS, string(EntitlementKeySSH))
+	}
+
+	roPaths, rwPaths, commonPaths := groupSamePaths(c.FSRead, c.FSWrite)
+	wd, err := os.Getwd()
+	if err != nil {
+		return errors.Wrap(err, "failed to get current working directory")
+	}
+	wd, err = filepath.EvalSymlinks(wd)
+	if err != nil {
+		return errors.Wrap(err, "failed to evaluate working directory")
+	}
+	roPaths = toRelativePaths(roPaths, wd)
+	rwPaths = toRelativePaths(rwPaths, wd)
+	commonPaths = toRelativePaths(commonPaths, wd)
+
+	if len(commonPaths) > 0 {
+		for _, p := range commonPaths {
+			msgsFS = append(msgsFS, fmt.Sprintf(" - Read and write access to path %s", p))
+			flagsFS = append(flagsFS, string(EntitlementKeyFS)+"="+p)
+		}
+	}
+
+	if len(roPaths) > 0 {
+		for _, p := range roPaths {
+			msgsFS = append(msgsFS, fmt.Sprintf(" - Read access to path %s", p))
+			flagsFS = append(flagsFS, string(EntitlementKeyFSRead)+"="+p)
+		}
+	}
+
+	if len(rwPaths) > 0 {
+		for _, p := range rwPaths {
+			msgsFS = append(msgsFS, fmt.Sprintf(" - Write access to path %s", p))
+			flagsFS = append(flagsFS, string(EntitlementKeyFSWrite)+"="+p)
+		}
+	}
+
+	if len(msgs) == 0 && len(msgsFS) == 0 {
+		return nil
+	}
+
+	fmt.Fprintf(out, "Your build is requesting privileges for following possibly insecure capabilities:\n\n")
+	for _, m := range slices.Concat(msgs, msgsFS) {
+		fmt.Fprintf(out, "%s\n", m)
+	}
+
+	for i, f := range flags {
+		flags[i] = "--allow=" + f
+	}
+	for i, f := range flagsFS {
+		flagsFS[i] = "--allow=" + f
+	}
+
+	if term {
+		fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
+	} else {
+		fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
+	}
+
+	args := append([]string(nil), os.Args...)
+	if v, ok := os.LookupEnv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND"); ok && v != "" {
+		args[0] = v
+	}
+	idx := slices.Index(args, "bake")
+
+	if idx != -1 {
+		fmt.Fprintf(out, "\nYour full command with requested privileges:\n\n")
+		fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(slices.Concat(flags, flagsFS), " "), strings.Join(args[idx+1:], " "))
+	}
+
+	fsEntitlementsEnabled := false
+	if isRemote {
+		if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
+			vv, err := strconv.ParseBool(v)
+			if err != nil {
+				return errors.Wrapf(err, "failed to parse BAKE_ALLOW_REMOTE_FS_ACCESS value %q", v)
+			}
+			fsEntitlementsEnabled = !vv
+		} else {
+			fsEntitlementsEnabled = true
+		}
+	}
+	v, fsEntitlementsSet := os.LookupEnv("BUILDX_BAKE_ENTITLEMENTS_FS")
+	if fsEntitlementsSet {
+		vv, err := strconv.ParseBool(v)
+		if err != nil {
+			return errors.Wrapf(err, "failed to parse BUILDX_BAKE_ENTITLEMENTS_FS value %q", v)
+		}
+		fsEntitlementsEnabled = vv
+	}
+
+	if !fsEntitlementsEnabled && len(msgs) == 0 {
+		if !fsEntitlementsSet {
+			fmt.Fprintf(out, "This warning will become an error in a future release. To enable filesystem entitlements checks at the moment, set BUILDX_BAKE_ENTITLEMENTS_FS=1 .\n\n")
+		}
+		return nil
+	}
+
+	if term {
+		fmt.Fprintf(out, "Do you want to grant requested privileges and continue? [y/N] ")
+		reader := bufio.NewReader(os.Stdin)
+		answerCh := make(chan string, 1)
+		go func() {
+			answer, _, _ := reader.ReadLine()
+			answerCh <- string(answer)
+			close(answerCh)
+		}()
+
+		select {
+		case <-ctx.Done():
+		case answer := <-answerCh:
+			if strings.ToLower(string(answer)) == "y" {
+				return nil
+			}
+		}
+	}
+
+	return errors.Errorf("additional privileges requested")
+}
+
+func isParentOrEqualPath(p, parent string) bool {
+	if p == parent || parent == "/" {
+		return true
+	}
+	if strings.HasPrefix(p, filepath.Clean(parent+string(filepath.Separator))) {
+		return true
+	}
+	return false
+}
+
+func findMissingPaths(set []string, paths map[string]struct{}) ([]string, error) {
+	set, allowAny, err := evaluatePaths(set)
+	if err != nil {
+		return nil, err
+	} else if allowAny {
+		return nil, nil
+	}
+
+	paths, err = evaluateToExistingPaths(paths)
+	if err != nil {
+		return nil, err
+	}
+	paths, err = dedupPaths(paths)
+	if err != nil {
+		return nil, err
+	}
+
+	out := make([]string, 0, len(paths))
+loop0:
+	for p := range paths {
+		for _, c := range set {
+			if isParentOrEqualPath(p, c) {
+				continue loop0
+			}
+		}
+		out = append(out, p)
+	}
+	if len(out) == 0 {
+		return nil, nil
+	}
+
+	slices.Sort(out)
+
+	return out, nil
+}
+
+func dedupPaths(in map[string]struct{}) (map[string]struct{}, error) {
+	arr := make([]string, 0, len(in))
+	for p := range in {
+		arr = append(arr, filepath.Clean(p))
+	}
+
+	slices.SortFunc(arr, func(a, b string) int {
+		return cmp.Compare(len(a), len(b))
+	})
+
+	m := make(map[string]struct{}, len(arr))
+loop0:
+	for _, p := range arr {
+		for parent := range m {
+			if strings.HasPrefix(p, parent+string(filepath.Separator)) {
+				continue loop0
+			}
+		}
+		m[p] = struct{}{}
+	}
+	return m, nil
+}
+
+func toRelativePaths(in []string, wd string) []string {
+	out := make([]string, 0, len(in))
+	for _, p := range in {
+		rel, err := filepath.Rel(wd, p)
+		if err == nil {
+			// allow up to one level of ".." in the path
+			if !strings.HasPrefix(rel, ".."+string(filepath.Separator)+"..") {
+				out = append(out, rel)
+				continue
+			}
+		}
+		out = append(out, p)
+	}
+	return out
+}
+
+func groupSamePaths(in1, in2 []string) ([]string, []string, []string) {
+	if in1 == nil || in2 == nil {
+		return in1, in2, nil
+	}
+
+	slices.Sort(in1)
+	slices.Sort(in2)
+
+	common := []string{}
+	i, j := 0, 0
+
+	for i < len(in1) && j < len(in2) {
+		switch {
+		case in1[i] == in2[j]:
+			common = append(common, in1[i])
+			i++
+			j++
+		case in1[i] < in2[j]:
+			i++
+		default:
+			j++
+		}
+	}
+
+	in1 = removeCommonPaths(in1, common)
+	in2 = removeCommonPaths(in2, common)
+
+	return in1, in2, common
+}
+
+func removeCommonPaths(in, common []string) []string {
+	filtered := make([]string, 0, len(in))
+	commonIndex := 0
+	for _, path := range in {
+		if commonIndex < len(common) && path == common[commonIndex] {
+			commonIndex++
+			continue
+		}
+		filtered = append(filtered, path)
+	}
+	return filtered
+}
+
+func evaluatePaths(in []string) ([]string, bool, error) {
+	out := make([]string, 0, len(in))
+	allowAny := false
+	for _, p := range in {
+		if p == "*" {
+			allowAny = true
+			continue
+		}
+		v, err := filepath.Abs(p)
+		if err != nil {
+			return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
+		}
+		v, err = filepath.EvalSymlinks(v)
+		if err != nil {
+			return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
+		}
+		out = append(out, v)
+	}
+	return out, allowAny, nil
+}
+
+func evaluateToExistingPaths(in map[string]struct{}) (map[string]struct{}, error) {
+	m := make(map[string]struct{}, len(in))
+	for p := range in {
+		v, err := evaluateToExistingPath(p)
+		if err != nil {
+			return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
+		}
+		v, err = osutil.GetLongPathName(v)
+		if err != nil {
+			return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
+		}
+		m[v] = struct{}{}
+	}
+	return m, nil
+}
+
+func evaluateToExistingPath(in string) (string, error) {
+	in, err := filepath.Abs(in)
+	if err != nil {
+		return "", err
+	}
+
+	volLen := volumeNameLen(in)
+	pathSeparator := string(os.PathSeparator)
+
+	if volLen < len(in) && os.IsPathSeparator(in[volLen]) {
+		volLen++
+	}
+	vol := in[:volLen]
+	dest := vol
+	linksWalked := 0
+	var end int
+	for start := volLen; start < len(in); start = end {
+		for start < len(in) && os.IsPathSeparator(in[start]) {
+			start++
+		}
+		end = start
+		for end < len(in) && !os.IsPathSeparator(in[end]) {
+			end++
+		}
+
+		if end == start {
+			break
+		} else if in[start:end] == "." {
+			continue
+		} else if in[start:end] == ".." {
+			var r int
+			for r = len(dest) - 1; r >= volLen; r-- {
+				if os.IsPathSeparator(dest[r]) {
+					break
+				}
+			}
+			if r < volLen || dest[r+1:] == ".." {
+				if len(dest) > volLen {
+					dest += pathSeparator
+				}
+				dest += ".."
+			} else {
+				dest = dest[:r]
+			}
+			continue
+		}
+
+		if len(dest) > volumeNameLen(dest) && !os.IsPathSeparator(dest[len(dest)-1]) {
+			dest += pathSeparator
+		}
+		dest += in[start:end]
+
+		fi, err := os.Lstat(dest)
+		if err != nil {
+			// If the component doesn't exist, return the last valid path
+			if os.IsNotExist(err) {
+				for r := len(dest) - 1; r >= volLen; r-- {
+					if os.IsPathSeparator(dest[r]) {
+						return dest[:r], nil
+					}
+				}
+				return vol, nil
+			}
+			return "", err
+		}
+
+		if fi.Mode()&fs.ModeSymlink == 0 {
+			if !fi.Mode().IsDir() && end < len(in) {
+				return "", syscall.ENOTDIR
+			}
+			continue
+		}
+
+		linksWalked++
+		if linksWalked > 255 {
+			return "", errors.New("too many symlinks")
+		}
+
+		link, err := os.Readlink(dest)
+		if err != nil {
+			return "", err
+		}
+
+		in = link + in[end:]
+
+		v := volumeNameLen(link)
+		if v > 0 {
+			if v < len(link) && os.IsPathSeparator(link[v]) {
+				v++
+			}
+			vol = link[:v]
+			dest = vol
+			end = len(vol)
+		} else if len(link) > 0 && os.IsPathSeparator(link[0]) {
+			dest = link[:1]
+			end = 1
+			vol = link[:1]
+			volLen = 1
+		} else {
+			var r int
+			for r = len(dest) - 1; r >= volLen; r-- {
+				if os.IsPathSeparator(dest[r]) {
+					break
+				}
+			}
+			if r < volLen {
+				dest = vol
+			} else {
+				dest = dest[:r]
+			}
+			end = 0
+		}
+	}
+	return filepath.Clean(dest), nil
+}
+
+func volumeNameLen(s string) int {
+	return len(filepath.VolumeName(s))
+}
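For readers skimming the new file: a small sketch of the flag grammar ParseEntitlements accepts, derived directly from the switch above. The "bogus.key" entry is a made-up example.

    package bake

    import (
    	"testing"

    	"github.com/stretchr/testify/require"
    )

    // Sketch: bare keys toggle booleans; "fs.read=", "fs.write=" and the
    // "fs=" shorthand collect paths ("fs=" feeds both lists); unknown
    // keys error out.
    func TestParseEntitlementsSketch(t *testing.T) {
    	conf, err := ParseEntitlements([]string{
    		"network.host",
    		"ssh",
    		"fs.read=/src",
    		"fs=/cache",
    	})
    	require.NoError(t, err)
    	require.True(t, conf.NetworkHost)
    	require.True(t, conf.SSH)
    	require.Equal(t, []string{"/src", "/cache"}, conf.FSRead)
    	require.Equal(t, []string{"/cache"}, conf.FSWrite)

    	_, err = ParseEntitlements([]string{"bogus.key=1"})
    	require.Error(t, err) // unknown entitlement key "bogus.key"
    }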
bake/entitlements_test.go (new file, 460 lines)
@@ -0,0 +1,460 @@
+package bake
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"slices"
+	"testing"
+
+	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/controller/pb"
+	"github.com/docker/buildx/util/osutil"
+	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/client/llb"
+	"github.com/moby/buildkit/util/entitlements"
+	"github.com/stretchr/testify/require"
+)
+
+func TestEvaluateToExistingPath(t *testing.T) {
+	tempDir, err := osutil.GetLongPathName(t.TempDir())
+	require.NoError(t, err)
+
+	// Setup temporary directory structure for testing
+	existingFile := filepath.Join(tempDir, "existing_file")
+	require.NoError(t, os.WriteFile(existingFile, []byte("test"), 0644))
+
+	existingDir := filepath.Join(tempDir, "existing_dir")
+	require.NoError(t, os.Mkdir(existingDir, 0755))
+
+	symlinkToFile := filepath.Join(tempDir, "symlink_to_file")
+	require.NoError(t, os.Symlink(existingFile, symlinkToFile))
+
+	symlinkToDir := filepath.Join(tempDir, "symlink_to_dir")
+	require.NoError(t, os.Symlink(existingDir, symlinkToDir))
+
+	nonexistentPath := filepath.Join(tempDir, "nonexistent", "path", "file.txt")
+
+	tests := []struct {
+		name      string
+		input     string
+		expected  string
+		expectErr bool
+	}{
+		{
+			name:      "Existing file",
+			input:     existingFile,
+			expected:  existingFile,
+			expectErr: false,
+		},
+		{
+			name:      "Existing directory",
+			input:     existingDir,
+			expected:  existingDir,
+			expectErr: false,
+		},
+		{
+			name:      "Symlink to file",
+			input:     symlinkToFile,
+			expected:  existingFile,
+			expectErr: false,
+		},
+		{
+			name:      "Symlink to directory",
+			input:     symlinkToDir,
+			expected:  existingDir,
+			expectErr: false,
+		},
+		{
+			name:      "Non-existent path",
+			input:     nonexistentPath,
+			expected:  tempDir,
+			expectErr: false,
+		},
+		{
+			name:      "Non-existent intermediate path",
+			input:     filepath.Join(tempDir, "nonexistent", "file.txt"),
+			expected:  tempDir,
+			expectErr: false,
+		},
+		{
+			name:  "Root path",
+			input: "/",
+			expected: func() string {
+				root, _ := filepath.Abs("/")
+				return root
+			}(),
+			expectErr: false,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result, err := evaluateToExistingPath(tt.input)
+
+			if tt.expectErr {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+				require.Equal(t, tt.expected, result)
+			}
+		})
+	}
+}
+
+func TestDedupePaths(t *testing.T) {
+	wd := osutil.GetWd()
+	tcases := []struct {
+		in  map[string]struct{}
+		out map[string]struct{}
+	}{
+		{
+			in: map[string]struct{}{
+				"/a/b/c": {},
+				"/a/b/d": {},
+				"/a/b/e": {},
+			},
+			out: map[string]struct{}{
+				"/a/b/c": {},
+				"/a/b/d": {},
+				"/a/b/e": {},
+			},
+		},
+		{
+			in: map[string]struct{}{
+				"/a/b/c":      {},
+				"/a/b/c/d":    {},
+				"/a/b/c/d/e":  {},
+				"/a/b/../b/c": {},
+			},
+			out: map[string]struct{}{
+				"/a/b/c": {},
+			},
+		},
+		{
+			in: map[string]struct{}{
+				filepath.Join(wd, "a/b/c"):    {},
+				filepath.Join(wd, "../aa"):    {},
+				filepath.Join(wd, "a/b"):      {},
+				filepath.Join(wd, "a/b/d"):    {},
+				filepath.Join(wd, "../aa/b"):  {},
+				filepath.Join(wd, "../../bb"): {},
+			},
+			out: map[string]struct{}{
+				"a/b":                         {},
+				"../aa":                       {},
+				filepath.Join(wd, "../../bb"): {},
+			},
+		},
+	}
+
+	for i, tc := range tcases {
+		t.Run(fmt.Sprintf("case%d", i), func(t *testing.T) {
+			out, err := dedupPaths(tc.in)
+			if err != nil {
+				require.NoError(t, err)
+			}
+			// convert to relative paths as that is shown to user
+			arr := make([]string, 0, len(out))
+			for k := range out {
+				arr = append(arr, k)
+			}
+			require.NoError(t, err)
+			arr = toRelativePaths(arr, wd)
+			m := make(map[string]struct{})
+			for _, v := range arr {
+				m[filepath.ToSlash(v)] = struct{}{}
+			}
+			o := make(map[string]struct{}, len(tc.out))
+			for k := range tc.out {
+				o[filepath.ToSlash(k)] = struct{}{}
+			}
+			require.Equal(t, o, m)
+		})
+	}
+}
+
+func TestValidateEntitlements(t *testing.T) {
+	dir1 := t.TempDir()
+	dir2 := t.TempDir()
+
+	// the paths returned by entitlements validation will have symlinks resolved
+	expDir1, err := filepath.EvalSymlinks(dir1)
+	require.NoError(t, err)
+	expDir2, err := filepath.EvalSymlinks(dir2)
+	require.NoError(t, err)
+
+	escapeLink := filepath.Join(dir1, "escape_link")
+	require.NoError(t, os.Symlink("../../aa", escapeLink))
+
+	wd, err := os.Getwd()
+	require.NoError(t, err)
+	expWd, err := filepath.EvalSymlinks(wd)
+	require.NoError(t, err)
+
+	tcases := []struct {
+		name     string
+		conf     EntitlementConf
+		opt      build.Options
+		expected EntitlementConf
+	}{
+		{
+			name: "No entitlements",
+			opt: build.Options{
+				Inputs: build.Inputs{
+					ContextState: &llb.State{},
+				},
+			},
+		},
+		{
+			name: "NetworkHostMissing",
+			opt: build.Options{
+				Allow: []entitlements.Entitlement{
+					entitlements.EntitlementNetworkHost,
+				},
+			},
+			expected: EntitlementConf{
+				NetworkHost: true,
+				FSRead:      []string{expWd},
+			},
+		},
+		{
+			name: "NetworkHostSet",
+			conf: EntitlementConf{
+				NetworkHost: true,
+			},
+			opt: build.Options{
+				Allow: []entitlements.Entitlement{
+					entitlements.EntitlementNetworkHost,
+				},
+			},
+			expected: EntitlementConf{
+				FSRead: []string{expWd},
+			},
+		},
+		{
+			name: "SecurityAndNetworkHostMissing",
+			opt: build.Options{
+				Allow: []entitlements.Entitlement{
+					entitlements.EntitlementNetworkHost,
+					entitlements.EntitlementSecurityInsecure,
+				},
+			},
+			expected: EntitlementConf{
+				NetworkHost:      true,
+				SecurityInsecure: true,
+				FSRead:           []string{expWd},
+			},
+		},
+		{
+			name: "SecurityMissingAndNetworkHostSet",
+			conf: EntitlementConf{
+				NetworkHost: true,
+			},
+			opt: build.Options{
+				Allow: []entitlements.Entitlement{
+					entitlements.EntitlementNetworkHost,
+					entitlements.EntitlementSecurityInsecure,
+				},
+			},
+			expected: EntitlementConf{
+				SecurityInsecure: true,
+				FSRead:           []string{expWd},
+			},
+		},
+		{
+			name: "SSHMissing",
+			opt: build.Options{
+				SSHSpecs: []*pb.SSH{
+					{
+						ID: "test",
+					},
+				},
+			},
+			expected: EntitlementConf{
+				SSH:    true,
+				FSRead: []string{expWd},
+			},
+		},
+		{
+			name: "ExportLocal",
+			opt: build.Options{
+				Exports: []client.ExportEntry{
+					{
+						Type: "local",
+						Attrs: map[string]string{
+							"dest": dir1,
+						},
+					},
+					{
+						Type: "local",
+						Attrs: map[string]string{
+							"dest": filepath.Join(dir1, "subdir"),
+						},
+					},
+					{
+						Type: "local",
+						Attrs: map[string]string{
+							"dest": dir2,
+						},
+					},
+				},
+			},
+			expected: EntitlementConf{
+				FSWrite: func() []string {
+					exp := []string{expDir1, expDir2}
+					slices.Sort(exp)
+					return exp
+				}(),
+				FSRead: []string{expWd},
+			},
+		},
+		{
+			name: "SecretFromSubFile",
+			opt: build.Options{
+				SecretSpecs: []*pb.Secret{
+					{
+						FilePath: filepath.Join(dir1, "subfile"),
+					},
+				},
+			},
+			conf: EntitlementConf{
+				FSRead: []string{wd, dir1},
+			},
+		},
+		{
+			name: "SecretFromEscapeLink",
+			opt: build.Options{
+				SecretSpecs: []*pb.Secret{
+					{
+						FilePath: escapeLink,
+					},
+				},
+			},
+			conf: EntitlementConf{
+				FSRead: []string{wd, dir1},
+			},
+			expected: EntitlementConf{
+				FSRead: []string{filepath.Join(expDir1, "../..")},
+			},
+		},
+		{
+			name: "SecretFromEscapeLinkAllowRoot",
+			opt: build.Options{
+				SecretSpecs: []*pb.Secret{
+					{
+						FilePath: escapeLink,
+					},
+				},
+			},
+			conf: EntitlementConf{
+				FSRead: []string{"/"},
+			},
+			expected: EntitlementConf{
+				FSRead: func() []string {
+					// on windows root (/) is only allowed if it is the same volume as wd
+					if filepath.VolumeName(wd) == filepath.VolumeName(escapeLink) {
+						return nil
+					}
+					// if not, then escapeLink is not allowed
+					exp, err := evaluateToExistingPath(escapeLink)
+					require.NoError(t, err)
+					exp, err = filepath.EvalSymlinks(exp)
+					require.NoError(t, err)
+					return []string{exp}
+				}(),
+			},
+		},
+		{
+			name: "SecretFromEscapeLinkAllowAny",
+			opt: build.Options{
+				SecretSpecs: []*pb.Secret{
+					{
+						FilePath: escapeLink,
+					},
+				},
+			},
+			conf: EntitlementConf{
+				FSRead: []string{"*"},
+			},
+			expected: EntitlementConf{},
+		},
+	}
+
+	for _, tc := range tcases {
+		t.Run(tc.name, func(t *testing.T) {
+			expected, err := tc.conf.Validate(map[string]build.Options{"test": tc.opt})
+			require.NoError(t, err)
+			require.Equal(t, tc.expected, expected)
+		})
+	}
+}
+
+func TestGroupSamePaths(t *testing.T) {
+	tests := []struct {
+		name      string
+		in1       []string
+		in2       []string
+		expected1 []string
+		expected2 []string
+		expectedC []string
+	}{
+		{
+			name:      "All common paths",
+			in1:       []string{"/path/a", "/path/b", "/path/c"},
+			in2:       []string{"/path/a", "/path/b", "/path/c"},
+			expected1: []string{},
+			expected2: []string{},
+			expectedC: []string{"/path/a", "/path/b", "/path/c"},
+		},
+		{
+			name:      "No common paths",
+			in1:       []string{"/path/a", "/path/b"},
+			in2:       []string{"/path/c", "/path/d"},
+			expected1: []string{"/path/a", "/path/b"},
+			expected2: []string{"/path/c", "/path/d"},
+			expectedC: []string{},
+		},
+		{
+			name:      "Some common paths",
+			in1:       []string{"/path/a", "/path/b", "/path/c"},
+			in2:       []string{"/path/b", "/path/c", "/path/d"},
+			expected1: []string{"/path/a"},
+			expected2: []string{"/path/d"},
+			expectedC: []string{"/path/b", "/path/c"},
+		},
+		{
+			name:      "Empty inputs",
+			in1:       []string{},
+			in2:       []string{},
+			expected1: []string{},
+			expected2: []string{},
+			expectedC: []string{},
+		},
+		{
+			name:      "One empty input",
+			in1:       []string{"/path/a", "/path/b"},
+			in2:       []string{},
+			expected1: []string{"/path/a", "/path/b"},
+			expected2: []string{},
+			expectedC: []string{},
+		},
+		{
+			name:      "Unsorted inputs with common paths",
+			in1:       []string{"/path/c", "/path/a", "/path/b"},
+			in2:       []string{"/path/b", "/path/c", "/path/a"},
+			expected1: []string{},
+			expected2: []string{},
+			expectedC: []string{"/path/a", "/path/b", "/path/c"},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			out1, out2, common := groupSamePaths(tt.in1, tt.in2)
+			require.Equal(t, tt.expected1, out1, "in1 should match expected1")
+			require.Equal(t, tt.expected2, out2, "in2 should match expected2")
+			require.Equal(t, tt.expectedC, common, "common should match expectedC")
+		})
+	}
+}
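One more orientation sketch: how groupSamePaths splits read-only, write-only, and shared paths, which Prompt then reports as fs.read, fs.write, and combined fs entitlements. The paths are illustrative.

    package bake

    import (
    	"testing"

    	"github.com/stretchr/testify/require"
    )

    // Sketch: the intersection of the two sorted inputs is factored out
    // into the third return value; the remainders keep sorted order.
    func TestGroupSamePathsSketch(t *testing.T) {
    	ro, rw, common := groupSamePaths(
    		[]string{"/ctx", "/cache"},
    		[]string{"/cache", "/out"},
    	)
    	require.Equal(t, []string{"/ctx"}, ro)
    	require.Equal(t, []string{"/out"}, rw)
    	require.Equal(t, []string{"/cache"}, common)
    }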
@@ -56,7 +56,7 @@ func formatHCLError(err error, files []File) error {
 				break
 			}
 		}
-		src := errdefs.Source{
+		src := &errdefs.Source{
 			Info: &pb.SourceInfo{
 				Filename: d.Subject.Filename,
 				Data:     dt,
@@ -72,7 +72,7 @@ func formatHCLError(err error, files []File) error {
 
 func toErrRange(in *hcl.Range) *pb.Range {
 	return &pb.Range{
-		Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
-		End:   pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
+		Start: &pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
+		End:   &pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
 	}
 }

bake/hcl_test.go (157 changes)
@@ -49,18 +49,18 @@ func TestHCLBasic(t *testing.T) {
 	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 4, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "db")
+	require.Equal(t, "db", c.Targets[0].Name)
 	require.Equal(t, "./db", *c.Targets[0].Context)
 
-	require.Equal(t, c.Targets[1].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[1].Name)
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
 
-	require.Equal(t, c.Targets[2].Name, "cross")
+	require.Equal(t, "cross", c.Targets[2].Name)
 	require.Equal(t, 2, len(c.Targets[2].Platforms))
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
 
-	require.Equal(t, c.Targets[3].Name, "webapp-plus")
+	require.Equal(t, "webapp-plus", c.Targets[3].Name)
 	require.Equal(t, 1, len(c.Targets[3].Args))
 	require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
 }
@@ -109,18 +109,18 @@ func TestHCLBasicInJSON(t *testing.T) {
 	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 4, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "db")
+	require.Equal(t, "db", c.Targets[0].Name)
 	require.Equal(t, "./db", *c.Targets[0].Context)
 
-	require.Equal(t, c.Targets[1].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[1].Name)
 	require.Equal(t, 1, len(c.Targets[1].Args))
 	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
 
-	require.Equal(t, c.Targets[2].Name, "cross")
+	require.Equal(t, "cross", c.Targets[2].Name)
 	require.Equal(t, 2, len(c.Targets[2].Platforms))
 	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
 
-	require.Equal(t, c.Targets[3].Name, "webapp-plus")
+	require.Equal(t, "webapp-plus", c.Targets[3].Name)
 	require.Equal(t, 1, len(c.Targets[3].Args))
 	require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
 }
@@ -146,7 +146,7 @@ func TestHCLWithFunctions(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
 }
 
@@ -176,7 +176,7 @@ func TestHCLWithUserDefinedFunctions(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
 }
 
@@ -205,7 +205,7 @@ func TestHCLWithVariables(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])
 
 	t.Setenv("BUILD_NUMBER", "456")
@@ -218,7 +218,7 @@ func TestHCLWithVariables(t *testing.T) {
 	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
 }
 
@@ -241,7 +241,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)
 
 	t.Setenv("REPO", "docker/buildx")
@@ -250,7 +250,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[0].Name)
 	require.Equal(t, []string{"docker/buildx:v1"}, c.Targets[0].Tags)
 }
 
@@ -273,26 +273,26 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
 	}
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])
 
 	t.Setenv("FOO", "def")
 
-	c, err = ParseFiles([]File{
+	c, _, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
 }
@@ -322,26 +322,26 @@ func TestHCLVarsWithVars(t *testing.T) {
 	}
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])
 
 	t.Setenv("BASE", "new")
 
-	c, err = ParseFiles([]File{
+	c, _, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
 }
@@ -366,7 +366,7 @@ func TestHCLTypedVariables(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])
 
@@ -377,7 +377,7 @@ func TestHCLTypedVariables(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])
 
@@ -475,7 +475,7 @@ func TestHCLAttrs(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
 
 	// env does not apply if no variable
@@ -484,7 +484,7 @@ func TestHCLAttrs(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
 	// attr-multifile
 }
@@ -592,7 +592,7 @@ func TestHCLAttrsCustomType(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
 	require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
 }
@@ -612,25 +612,25 @@ func TestHCLMultiFileAttrs(t *testing.T) {
 	FOO="def"
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
 
 	t.Setenv("FOO", "ghi")
 
-	c, err = ParseFiles([]File{
+	c, _, err = ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
 }
 
@@ -647,13 +647,13 @@ func TestHCLMultiFileGlobalAttrs(t *testing.T) {
 	FOO = "def"
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 	}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, "pre-def", *c.Targets[0].Args["v1"])
 }
 
@@ -830,7 +830,7 @@ func TestHCLRenameMultiFile(t *testing.T) {
 	}
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.hcl"},
 		{Data: dt3, Name: "c3.hcl"},
@@ -839,12 +839,12 @@ func TestHCLRenameMultiFile(t *testing.T) {
 
 	require.Equal(t, 2, len(c.Targets))
 
-	require.Equal(t, c.Targets[0].Name, "bar")
-	require.Equal(t, *c.Targets[0].Dockerfile, "x")
-	require.Equal(t, *c.Targets[0].Target, "z")
+	require.Equal(t, "bar", c.Targets[0].Name)
+	require.Equal(t, "x", *c.Targets[0].Dockerfile)
+	require.Equal(t, "z", *c.Targets[0].Target)
 
-	require.Equal(t, c.Targets[1].Name, "foo")
-	require.Equal(t, *c.Targets[1].Context, "y")
+	require.Equal(t, "foo", c.Targets[1].Name)
+	require.Equal(t, "y", *c.Targets[1].Context)
 }
 
 func TestHCLMatrixBasic(t *testing.T) {
@@ -862,10 +862,10 @@ func TestHCLMatrixBasic(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "x")
-	require.Equal(t, c.Targets[1].Name, "y")
-	require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
-	require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
+	require.Equal(t, "x", c.Targets[0].Name)
+	require.Equal(t, "y", c.Targets[1].Name)
+	require.Equal(t, "x.Dockerfile", *c.Targets[0].Dockerfile)
+	require.Equal(t, "y.Dockerfile", *c.Targets[1].Dockerfile)
 
 	require.Equal(t, 1, len(c.Groups))
 	require.Equal(t, "default", c.Groups[0].Name)
@@ -948,9 +948,9 @@ func TestHCLMatrixMaps(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 2, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "aa")
+	require.Equal(t, "aa", c.Targets[0].Name)
 	require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
-	require.Equal(t, c.Targets[1].Name, "cc")
+	require.Equal(t, "cc", c.Targets[1].Name)
 	require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
 }
 
@@ -1050,7 +1050,7 @@ func TestHCLMatrixArgsOverride(t *testing.T) {
 	}
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "docker-bake.hcl"},
 	}, map[string]string{"ABC": "11,22,33"})
 	require.NoError(t, err)
@@ -1141,7 +1141,7 @@ func TestJSONAttributes(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
 }
 
@@ -1166,7 +1166,7 @@ func TestJSONFunctions(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
 }
 
@@ -1184,7 +1184,7 @@ func TestJSONInvalidFunctions(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
 }
 
@@ -1212,7 +1212,7 @@ func TestHCLFunctionInAttr(t *testing.T) {
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
 }
 
@@ -1236,14 +1236,14 @@ services:
       v2: "bar"
 `)
 
-	c, err := ParseFiles([]File{
+	c, _, err := ParseFiles([]File{
 		{Data: dt, Name: "c1.hcl"},
 		{Data: dt2, Name: "c2.yml"},
 	}, nil)
 	require.NoError(t, err)
 
 	require.Equal(t, 1, len(c.Targets))
-	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, "app", c.Targets[0].Name)
 	require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
 	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
 	require.Equal(t, "dir", *c.Targets[0].Context)
|
||||||
@@ -1258,7 +1258,7 @@ func TestHCLBuiltinVars(t *testing.T) {
|
|||||||
}
|
}
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseFiles([]File{
|
c, _, err := ParseFiles([]File{
|
||||||
{Data: dt, Name: "c1.hcl"},
|
{Data: dt, Name: "c1.hcl"},
|
||||||
}, map[string]string{
|
}, map[string]string{
|
||||||
"BAKE_CMD_CONTEXT": "foo",
|
"BAKE_CMD_CONTEXT": "foo",
|
||||||
@@ -1266,13 +1266,13 @@ func TestHCLBuiltinVars(t *testing.T) {
|
|||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, "app", c.Targets[0].Name)
|
||||||
require.Equal(t, "foo", *c.Targets[0].Context)
|
require.Equal(t, "foo", *c.Targets[0].Context)
|
||||||
require.Equal(t, "test", *c.Targets[0].Dockerfile)
|
require.Equal(t, "test", *c.Targets[0].Dockerfile)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestCombineHCLAndJSONTargets(t *testing.T) {
|
func TestCombineHCLAndJSONTargets(t *testing.T) {
|
||||||
c, err := ParseFiles([]File{
|
c, _, err := ParseFiles([]File{
|
||||||
{
|
{
|
||||||
Name: "docker-bake.hcl",
|
Name: "docker-bake.hcl",
|
||||||
Data: []byte(`
|
Data: []byte(`
|
||||||
@@ -1332,23 +1332,23 @@ target "b" {
|
|||||||
|
|
||||||
require.Equal(t, 4, len(c.Targets))
|
require.Equal(t, 4, len(c.Targets))
|
||||||
|
|
||||||
require.Equal(t, c.Targets[0].Name, "metadata-a")
|
require.Equal(t, "metadata-a", c.Targets[0].Name)
|
||||||
require.Equal(t, []string{"app/a:1.0.0", "app/a:latest"}, c.Targets[0].Tags)
|
require.Equal(t, []string{"app/a:1.0.0", "app/a:latest"}, c.Targets[0].Tags)
|
||||||
|
|
||||||
require.Equal(t, c.Targets[1].Name, "metadata-b")
|
require.Equal(t, "metadata-b", c.Targets[1].Name)
|
||||||
require.Equal(t, []string{"app/b:1.0.0", "app/b:latest"}, c.Targets[1].Tags)
|
require.Equal(t, []string{"app/b:1.0.0", "app/b:latest"}, c.Targets[1].Tags)
|
||||||
|
|
||||||
require.Equal(t, c.Targets[2].Name, "a")
|
require.Equal(t, "a", c.Targets[2].Name)
|
||||||
require.Equal(t, ".", *c.Targets[2].Context)
|
require.Equal(t, ".", *c.Targets[2].Context)
|
||||||
require.Equal(t, "a", *c.Targets[2].Target)
|
require.Equal(t, "a", *c.Targets[2].Target)
|
||||||
|
|
||||||
require.Equal(t, c.Targets[3].Name, "b")
|
require.Equal(t, "b", c.Targets[3].Name)
|
||||||
require.Equal(t, ".", *c.Targets[3].Context)
|
require.Equal(t, ".", *c.Targets[3].Context)
|
||||||
require.Equal(t, "b", *c.Targets[3].Target)
|
require.Equal(t, "b", *c.Targets[3].Target)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestCombineHCLAndJSONVars(t *testing.T) {
|
func TestCombineHCLAndJSONVars(t *testing.T) {
|
||||||
c, err := ParseFiles([]File{
|
c, _, err := ParseFiles([]File{
|
||||||
{
|
{
|
||||||
Name: "docker-bake.hcl",
|
Name: "docker-bake.hcl",
|
||||||
Data: []byte(`
|
Data: []byte(`
|
||||||
@@ -1389,10 +1389,10 @@ target "two" {
|
|||||||
|
|
||||||
require.Equal(t, 2, len(c.Targets))
|
require.Equal(t, 2, len(c.Targets))
|
||||||
|
|
||||||
require.Equal(t, c.Targets[0].Name, "one")
|
require.Equal(t, "one", c.Targets[0].Name)
|
||||||
require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)
|
require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)
|
||||||
|
|
||||||
require.Equal(t, c.Targets[1].Name, "two")
|
require.Equal(t, "two", c.Targets[1].Name)
|
||||||
require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
|
require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1445,6 +1445,39 @@ func TestVarUnsupportedType(t *testing.T) {
|
|||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestHCLIndexOfFunc(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
variable "APP_VERSIONS" {
|
||||||
|
default = [
|
||||||
|
"1.42.4",
|
||||||
|
"1.42.3"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
target "default" {
|
||||||
|
args = {
|
||||||
|
APP_VERSION = app_version
|
||||||
|
}
|
||||||
|
matrix = {
|
||||||
|
app_version = APP_VERSIONS
|
||||||
|
}
|
||||||
|
name="app-${replace(app_version, ".", "-")}"
|
||||||
|
tags = [
|
||||||
|
"app:${app_version}",
|
||||||
|
indexof(APP_VERSIONS, app_version) == 0 ? "app:latest" : "",
|
||||||
|
]
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 2, len(c.Targets))
|
||||||
|
require.Equal(t, "app-1-42-4", c.Targets[0].Name)
|
||||||
|
require.Equal(t, "app:latest", c.Targets[0].Tags[1])
|
||||||
|
require.Equal(t, "app-1-42-3", c.Targets[1].Name)
|
||||||
|
require.Empty(t, c.Targets[1].Tags[1])
|
||||||
|
}
|
||||||
|
|
||||||
func ptrstr(s interface{}) *string {
|
func ptrstr(s interface{}) *string {
|
||||||
var n *string
|
var n *string
|
||||||
if reflect.ValueOf(s).Kind() == reflect.String {
|
if reflect.ValueOf(s).Kind() == reflect.String {
|
||||||
---
@@ -27,7 +27,15 @@ type Opt struct {
 type variable struct {
    Name    string         `json:"-" hcl:"name,label"`
    Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
+    Description string                `json:"description,omitempty" hcl:"description,optional"`
+    Validations []*variableValidation `json:"validation,omitempty" hcl:"validation,block"`
    Body    hcl.Body       `json:"-" hcl:",body"`
+    Remain  hcl.Body       `json:"-" hcl:",remain"`
+}
+
+type variableValidation struct {
+    Condition    hcl.Expression `json:"condition" hcl:"condition"`
+    ErrorMessage hcl.Expression `json:"error_message" hcl:"error_message"`
 }
 
 type functionDef struct {
@@ -73,7 +81,12 @@ type WithGetName interface {
    GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
 }
 
-var errUndefined = errors.New("undefined")
+// errUndefined is returned when a variable or function is not defined.
+type errUndefined struct{}
+
+func (errUndefined) Error() string {
+    return "undefined"
+}
 
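The switch from an `errors.New` sentinel to an empty struct type is worth a note: `errors.Is` compares struct values by equality, so any freshly constructed `errUndefined{}` matches any other, whereas the old variable had to be the exact same instance. A minimal, self-contained sketch of the pattern (standard library only; not code from this diff):

```go
package main

import (
	"errors"
	"fmt"
)

// errUndefined mirrors the pattern above: a zero-size comparable type,
// so any two errUndefined{} values compare equal.
type errUndefined struct{}

func (errUndefined) Error() string { return "undefined" }

func main() {
	err := fmt.Errorf("variable %q does not exist: %w", "FOO", errUndefined{})
	// errors.Is unwraps err and compares with ==, which succeeds for a
	// newly constructed errUndefined{}.
	fmt.Println(errors.Is(err, errUndefined{})) // true
}
```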
 func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
    fns, hcldiags := funcCalls(exp)
@@ -83,7 +96,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
 
    for _, fn := range fns {
        if err := p.resolveFunction(ectx, fn); err != nil {
-            if allowMissing && errors.Is(err, errUndefined) {
+            if allowMissing && errors.Is(err, errUndefined{}) {
                continue
            }
            return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -137,7 +150,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
            }
            for _, block := range blocks {
                if err := p.resolveBlock(block, target); err != nil {
-                    if allowMissing && errors.Is(err, errUndefined) {
+                    if allowMissing && errors.Is(err, errUndefined{}) {
                        continue
                    }
                    return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -145,7 +158,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
            }
        } else {
            if err := p.resolveValue(ectx, v.RootName()); err != nil {
-                if allowMissing && errors.Is(err, errUndefined) {
+                if allowMissing && errors.Is(err, errUndefined{}) {
                    continue
                }
                return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@@ -167,7 +180,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
    }
    f, ok := p.funcs[name]
    if !ok {
-        return errors.Wrapf(errUndefined, "function %q does not exist", name)
+        return errors.Wrapf(errUndefined{}, "function %q does not exist", name)
    }
    if _, ok := p.progressF[key(ectx, name)]; ok {
        return errors.Errorf("function cycle not allowed for %s", name)
@@ -257,7 +270,7 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
    if _, builtin := p.opt.Vars[name]; !ok && !builtin {
        vr, ok := p.vars[name]
        if !ok {
-            return errors.Wrapf(errUndefined, "variable %q does not exist", name)
+            return errors.Wrapf(errUndefined{}, "variable %q does not exist", name)
        }
        def = vr.Default
        ectx = p.ectx
@@ -534,7 +547,45 @@ func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
    return names, nil
 }
 
-func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
+func (p *parser) validateVariables(vars map[string]*variable, ectx *hcl.EvalContext) hcl.Diagnostics {
+    var diags hcl.Diagnostics
+    for _, v := range vars {
+        for _, validation := range v.Validations {
+            condition, condDiags := validation.Condition.Value(ectx)
+            if condDiags.HasErrors() {
+                diags = append(diags, condDiags...)
+                continue
+            }
+            if !condition.True() {
+                message, msgDiags := validation.ErrorMessage.Value(ectx)
+                if msgDiags.HasErrors() {
+                    diags = append(diags, msgDiags...)
+                    continue
+                }
+                diags = append(diags, &hcl.Diagnostic{
+                    Severity: hcl.DiagError,
+                    Summary:  "Validation failed",
+                    Detail:   message.AsString(),
+                    Subject:  validation.Condition.Range().Ptr(),
+                })
+            }
+        }
+    }
+    return diags
+}
+
+type Variable struct {
+    Name        string
+    Description string
+    Value       *string
+}
+
+type ParseMeta struct {
+    Renamed      map[string]map[string][]string
+    AllVariables []*Variable
+}
+
+func Parse(b hcl.Body, opt Opt, val interface{}) (*ParseMeta, hcl.Diagnostics) {
    reserved := map[string]struct{}{}
    schema, _ := gohcl.ImpliedBodySchema(val)
 
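With the new signature, `Parse` returns a `*ParseMeta` carrying both the rename map it used to return directly and the variables it collected. A hedged sketch of how a caller might consume it, assuming `fmt`, `github.com/hashicorp/hcl/v2`, and the `hclparser` package are imported; `listVariables` and its arguments are hypothetical placeholders, not code from the diff:

```go
// listVariables prints every variable Parse collected; body, opt, and val
// stand in for whatever the caller already prepared.
func listVariables(body hcl.Body, opt hclparser.Opt, val interface{}) hcl.Diagnostics {
	meta, diags := hclparser.Parse(body, opt, val)
	if diags.HasErrors() {
		return diags
	}
	for _, v := range meta.AllVariables {
		value := "<unset>" // Value stays nil when the variable resolved to null
		if v.Value != nil {
			value = *v.Value
		}
		fmt.Printf("%s (%s) = %s\n", v.Name, v.Description, value)
	}
	return nil
}
```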
@@ -643,6 +694,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
        }
    }
 
+    vars := make([]*Variable, 0, len(p.vars))
    for k := range p.vars {
        if err := p.resolveValue(p.ectx, k); err != nil {
            if diags, ok := err.(hcl.Diagnostics); ok {
@@ -651,6 +703,24 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
            r := p.vars[k].Body.MissingItemRange()
            return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
        }
+        v := &Variable{
+            Name:        p.vars[k].Name,
+            Description: p.vars[k].Description,
+        }
+        if vv := p.ectx.Variables[k]; !vv.IsNull() {
+            var s string
+            switch vv.Type() {
+            case cty.String:
+                s = vv.AsString()
+            case cty.Bool:
+                s = strconv.FormatBool(vv.True())
+            }
+            v.Value = &s
+        }
+        vars = append(vars, v)
+    }
+    if diags := p.validateVariables(p.vars, p.ectx); diags.HasErrors() {
+        return nil, diags
    }
 
    for k := range p.funcs {
@@ -795,7 +865,10 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
        }
    }
 
-    return renamed, nil
+    return &ParseMeta{
+        Renamed:      renamed,
+        AllVariables: vars,
+    }, nil
 }
 
 // wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.

---
@@ -111,7 +111,6 @@ func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
            diags = append(diags, thisDiags...)
        }
 
-        if thisAttrs != nil {
        for name, attr := range thisAttrs {
            if existing := attrs[name]; existing != nil {
                diags = diags.Append(&hcl.Diagnostic{
@@ -127,7 +126,6 @@ func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
            attrs[name] = attr
        }
    }
-    }
 
    return attrs, diags
 }
---
@@ -1,6 +1,9 @@
 package hclparser
 
 import (
+    "errors"
+    "path"
+    "strings"
    "time"
 
    "github.com/hashicorp/go-cty-funcs/cidr"
@@ -14,122 +17,245 @@ import (
    "github.com/zclconf/go-cty/cty/function/stdlib"
 )
 
-var stdlibFunctions = map[string]function.Function{
-    "absolute":               stdlib.AbsoluteFunc,
-    "add":                    stdlib.AddFunc,
-    "and":                    stdlib.AndFunc,
-    "base64decode":           encoding.Base64DecodeFunc,
-    "base64encode":           encoding.Base64EncodeFunc,
-    "bcrypt":                 crypto.BcryptFunc,
-    "byteslen":               stdlib.BytesLenFunc,
-    "bytesslice":             stdlib.BytesSliceFunc,
-    "can":                    tryfunc.CanFunc,
-    "ceil":                   stdlib.CeilFunc,
-    "chomp":                  stdlib.ChompFunc,
-    "chunklist":              stdlib.ChunklistFunc,
-    "cidrhost":               cidr.HostFunc,
-    "cidrnetmask":            cidr.NetmaskFunc,
-    "cidrsubnet":             cidr.SubnetFunc,
-    "cidrsubnets":            cidr.SubnetsFunc,
-    "coalesce":               stdlib.CoalesceFunc,
-    "coalescelist":           stdlib.CoalesceListFunc,
-    "compact":                stdlib.CompactFunc,
-    "concat":                 stdlib.ConcatFunc,
-    "contains":               stdlib.ContainsFunc,
-    "convert":                typeexpr.ConvertFunc,
-    "csvdecode":              stdlib.CSVDecodeFunc,
-    "distinct":               stdlib.DistinctFunc,
-    "divide":                 stdlib.DivideFunc,
-    "element":                stdlib.ElementFunc,
-    "equal":                  stdlib.EqualFunc,
-    "flatten":                stdlib.FlattenFunc,
-    "floor":                  stdlib.FloorFunc,
-    "format":                 stdlib.FormatFunc,
-    "formatdate":             stdlib.FormatDateFunc,
-    "formatlist":             stdlib.FormatListFunc,
-    "greaterthan":            stdlib.GreaterThanFunc,
-    "greaterthanorequalto":   stdlib.GreaterThanOrEqualToFunc,
-    "hasindex":               stdlib.HasIndexFunc,
-    "indent":                 stdlib.IndentFunc,
-    "index":                  stdlib.IndexFunc,
-    "int":                    stdlib.IntFunc,
-    "join":                   stdlib.JoinFunc,
-    "jsondecode":             stdlib.JSONDecodeFunc,
-    "jsonencode":             stdlib.JSONEncodeFunc,
-    "keys":                   stdlib.KeysFunc,
-    "length":                 stdlib.LengthFunc,
-    "lessthan":               stdlib.LessThanFunc,
-    "lessthanorequalto":      stdlib.LessThanOrEqualToFunc,
-    "log":                    stdlib.LogFunc,
-    "lookup":                 stdlib.LookupFunc,
-    "lower":                  stdlib.LowerFunc,
-    "max":                    stdlib.MaxFunc,
-    "md5":                    crypto.Md5Func,
-    "merge":                  stdlib.MergeFunc,
-    "min":                    stdlib.MinFunc,
-    "modulo":                 stdlib.ModuloFunc,
-    "multiply":               stdlib.MultiplyFunc,
-    "negate":                 stdlib.NegateFunc,
-    "not":                    stdlib.NotFunc,
-    "notequal":               stdlib.NotEqualFunc,
-    "or":                     stdlib.OrFunc,
-    "parseint":               stdlib.ParseIntFunc,
-    "pow":                    stdlib.PowFunc,
-    "range":                  stdlib.RangeFunc,
-    "regex_replace":          stdlib.RegexReplaceFunc,
-    "regex":                  stdlib.RegexFunc,
-    "regexall":               stdlib.RegexAllFunc,
-    "replace":                stdlib.ReplaceFunc,
-    "reverse":                stdlib.ReverseFunc,
-    "reverselist":            stdlib.ReverseListFunc,
-    "rsadecrypt":             crypto.RsaDecryptFunc,
-    "sethaselement":          stdlib.SetHasElementFunc,
-    "setintersection":        stdlib.SetIntersectionFunc,
-    "setproduct":             stdlib.SetProductFunc,
-    "setsubtract":            stdlib.SetSubtractFunc,
-    "setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
-    "setunion":               stdlib.SetUnionFunc,
-    "sha1":                   crypto.Sha1Func,
-    "sha256":                 crypto.Sha256Func,
-    "sha512":                 crypto.Sha512Func,
-    "signum":                 stdlib.SignumFunc,
-    "slice":                  stdlib.SliceFunc,
-    "sort":                   stdlib.SortFunc,
-    "split":                  stdlib.SplitFunc,
-    "strlen":                 stdlib.StrlenFunc,
-    "substr":                 stdlib.SubstrFunc,
-    "subtract":               stdlib.SubtractFunc,
-    "timeadd":                stdlib.TimeAddFunc,
-    "timestamp":              timestampFunc,
-    "title":                  stdlib.TitleFunc,
-    "trim":                   stdlib.TrimFunc,
-    "trimprefix":             stdlib.TrimPrefixFunc,
-    "trimspace":              stdlib.TrimSpaceFunc,
-    "trimsuffix":             stdlib.TrimSuffixFunc,
-    "try":                    tryfunc.TryFunc,
-    "upper":                  stdlib.UpperFunc,
-    "urlencode":              encoding.URLEncodeFunc,
-    "uuidv4":                 uuid.V4Func,
-    "uuidv5":                 uuid.V5Func,
-    "values":                 stdlib.ValuesFunc,
-    "zipmap":                 stdlib.ZipmapFunc,
+type funcDef struct {
+    name    string
+    fn      function.Function
+    factory func() function.Function
+}
+
+var stdlibFunctions = []funcDef{
+    {name: "absolute", fn: stdlib.AbsoluteFunc},
+    {name: "add", fn: stdlib.AddFunc},
+    {name: "and", fn: stdlib.AndFunc},
+    {name: "base64decode", fn: encoding.Base64DecodeFunc},
+    {name: "base64encode", fn: encoding.Base64EncodeFunc},
+    {name: "basename", factory: basenameFunc},
+    {name: "bcrypt", fn: crypto.BcryptFunc},
+    {name: "byteslen", fn: stdlib.BytesLenFunc},
+    {name: "bytesslice", fn: stdlib.BytesSliceFunc},
+    {name: "can", fn: tryfunc.CanFunc},
+    {name: "ceil", fn: stdlib.CeilFunc},
+    {name: "chomp", fn: stdlib.ChompFunc},
+    {name: "chunklist", fn: stdlib.ChunklistFunc},
+    {name: "cidrhost", fn: cidr.HostFunc},
+    {name: "cidrnetmask", fn: cidr.NetmaskFunc},
+    {name: "cidrsubnet", fn: cidr.SubnetFunc},
+    {name: "cidrsubnets", fn: cidr.SubnetsFunc},
+    {name: "coalesce", fn: stdlib.CoalesceFunc},
+    {name: "coalescelist", fn: stdlib.CoalesceListFunc},
+    {name: "compact", fn: stdlib.CompactFunc},
+    {name: "concat", fn: stdlib.ConcatFunc},
+    {name: "contains", fn: stdlib.ContainsFunc},
+    {name: "convert", fn: typeexpr.ConvertFunc},
+    {name: "csvdecode", fn: stdlib.CSVDecodeFunc},
+    {name: "dirname", factory: dirnameFunc},
+    {name: "distinct", fn: stdlib.DistinctFunc},
+    {name: "divide", fn: stdlib.DivideFunc},
+    {name: "element", fn: stdlib.ElementFunc},
+    {name: "equal", fn: stdlib.EqualFunc},
+    {name: "flatten", fn: stdlib.FlattenFunc},
+    {name: "floor", fn: stdlib.FloorFunc},
+    {name: "format", fn: stdlib.FormatFunc},
+    {name: "formatdate", fn: stdlib.FormatDateFunc},
+    {name: "formatlist", fn: stdlib.FormatListFunc},
+    {name: "greaterthan", fn: stdlib.GreaterThanFunc},
+    {name: "greaterthanorequalto", fn: stdlib.GreaterThanOrEqualToFunc},
+    {name: "hasindex", fn: stdlib.HasIndexFunc},
+    {name: "indent", fn: stdlib.IndentFunc},
+    {name: "index", fn: stdlib.IndexFunc},
+    {name: "indexof", factory: indexOfFunc},
+    {name: "int", fn: stdlib.IntFunc},
+    {name: "join", fn: stdlib.JoinFunc},
+    {name: "jsondecode", fn: stdlib.JSONDecodeFunc},
+    {name: "jsonencode", fn: stdlib.JSONEncodeFunc},
+    {name: "keys", fn: stdlib.KeysFunc},
+    {name: "length", fn: stdlib.LengthFunc},
+    {name: "lessthan", fn: stdlib.LessThanFunc},
+    {name: "lessthanorequalto", fn: stdlib.LessThanOrEqualToFunc},
+    {name: "log", fn: stdlib.LogFunc},
+    {name: "lookup", fn: stdlib.LookupFunc},
+    {name: "lower", fn: stdlib.LowerFunc},
+    {name: "max", fn: stdlib.MaxFunc},
+    {name: "md5", fn: crypto.Md5Func},
+    {name: "merge", fn: stdlib.MergeFunc},
+    {name: "min", fn: stdlib.MinFunc},
+    {name: "modulo", fn: stdlib.ModuloFunc},
+    {name: "multiply", fn: stdlib.MultiplyFunc},
+    {name: "negate", fn: stdlib.NegateFunc},
+    {name: "not", fn: stdlib.NotFunc},
+    {name: "notequal", fn: stdlib.NotEqualFunc},
+    {name: "or", fn: stdlib.OrFunc},
+    {name: "parseint", fn: stdlib.ParseIntFunc},
+    {name: "pow", fn: stdlib.PowFunc},
+    {name: "range", fn: stdlib.RangeFunc},
+    {name: "regex_replace", fn: stdlib.RegexReplaceFunc},
+    {name: "regex", fn: stdlib.RegexFunc},
+    {name: "regexall", fn: stdlib.RegexAllFunc},
+    {name: "replace", fn: stdlib.ReplaceFunc},
+    {name: "reverse", fn: stdlib.ReverseFunc},
+    {name: "reverselist", fn: stdlib.ReverseListFunc},
+    {name: "rsadecrypt", fn: crypto.RsaDecryptFunc},
+    {name: "sanitize", factory: sanitizeFunc},
+    {name: "sethaselement", fn: stdlib.SetHasElementFunc},
+    {name: "setintersection", fn: stdlib.SetIntersectionFunc},
+    {name: "setproduct", fn: stdlib.SetProductFunc},
+    {name: "setsubtract", fn: stdlib.SetSubtractFunc},
+    {name: "setsymmetricdifference", fn: stdlib.SetSymmetricDifferenceFunc},
+    {name: "setunion", fn: stdlib.SetUnionFunc},
+    {name: "sha1", fn: crypto.Sha1Func},
+    {name: "sha256", fn: crypto.Sha256Func},
+    {name: "sha512", fn: crypto.Sha512Func},
+    {name: "signum", fn: stdlib.SignumFunc},
+    {name: "slice", fn: stdlib.SliceFunc},
+    {name: "sort", fn: stdlib.SortFunc},
+    {name: "split", fn: stdlib.SplitFunc},
+    {name: "strlen", fn: stdlib.StrlenFunc},
+    {name: "substr", fn: stdlib.SubstrFunc},
+    {name: "subtract", fn: stdlib.SubtractFunc},
+    {name: "timeadd", fn: stdlib.TimeAddFunc},
+    {name: "timestamp", factory: timestampFunc},
+    {name: "title", fn: stdlib.TitleFunc},
+    {name: "trim", fn: stdlib.TrimFunc},
+    {name: "trimprefix", fn: stdlib.TrimPrefixFunc},
+    {name: "trimspace", fn: stdlib.TrimSpaceFunc},
+    {name: "trimsuffix", fn: stdlib.TrimSuffixFunc},
+    {name: "try", fn: tryfunc.TryFunc},
+    {name: "upper", fn: stdlib.UpperFunc},
+    {name: "urlencode", fn: encoding.URLEncodeFunc},
+    {name: "uuidv4", fn: uuid.V4Func},
+    {name: "uuidv5", fn: uuid.V5Func},
+    {name: "values", fn: stdlib.ValuesFunc},
+    {name: "zipmap", fn: stdlib.ZipmapFunc},
+}
+
+// indexOfFunc constructs a function that finds the element index for a given
+// value in a list.
+func indexOfFunc() function.Function {
+    return function.New(&function.Spec{
+        Params: []function.Parameter{
+            {
+                Name: "list",
+                Type: cty.DynamicPseudoType,
+            },
+            {
+                Name: "value",
+                Type: cty.DynamicPseudoType,
+            },
+        },
+        Type: function.StaticReturnType(cty.Number),
+        Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
+            if !(args[0].Type().IsListType() || args[0].Type().IsTupleType()) {
+                return cty.NilVal, errors.New("argument must be a list or tuple")
+            }
+
+            if !args[0].IsKnown() {
+                return cty.UnknownVal(cty.Number), nil
+            }
+
+            if args[0].LengthInt() == 0 { // Easy path
+                return cty.NilVal, errors.New("cannot search an empty list")
+            }
+
+            for it := args[0].ElementIterator(); it.Next(); {
+                i, v := it.Element()
+                eq, err := stdlib.Equal(v, args[1])
+                if err != nil {
+                    return cty.NilVal, err
+                }
+                if !eq.IsKnown() {
+                    return cty.UnknownVal(cty.Number), nil
+                }
+                if eq.True() {
+                    return i, nil
+                }
+            }
+            return cty.NilVal, errors.New("item not found")
+        },
+    })
+}
+
+// basenameFunc constructs a function that returns the last element of a path.
+func basenameFunc() function.Function {
+    return function.New(&function.Spec{
+        Params: []function.Parameter{
+            {
+                Name: "path",
+                Type: cty.String,
+            },
+        },
+        Type: function.StaticReturnType(cty.String),
+        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+            in := args[0].AsString()
+            return cty.StringVal(path.Base(in)), nil
+        },
+    })
+}
+
+// dirnameFunc constructs a function that returns the directory of a path.
+func dirnameFunc() function.Function {
+    return function.New(&function.Spec{
+        Params: []function.Parameter{
+            {
+                Name: "path",
+                Type: cty.String,
+            },
+        },
+        Type: function.StaticReturnType(cty.String),
+        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+            in := args[0].AsString()
+            return cty.StringVal(path.Dir(in)), nil
+        },
+    })
+}
+
+// sanitizeFunc constructs a function that replaces all non-alphanumeric characters with an underscore,
+// leaving only characters that are valid for a Bake target name.
+func sanitizeFunc() function.Function {
+    return function.New(&function.Spec{
+        Params: []function.Parameter{
+            {
+                Name: "name",
+                Type: cty.String,
+            },
+        },
+        Type: function.StaticReturnType(cty.String),
+        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+            in := args[0].AsString()
+            // only [a-zA-Z0-9_-]+ is allowed
+            var b strings.Builder
+            for _, r := range in {
+                if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '_' || r == '-' {
+                    b.WriteRune(r)
+                } else {
+                    b.WriteRune('_')
+                }
+            }
+            return cty.StringVal(b.String()), nil
+        },
+    })
 }
 
 // timestampFunc constructs a function that returns a string representation of the current date and time.
 //
 // This function was imported from terraform's datetime utilities.
-var timestampFunc = function.New(&function.Spec{
+func timestampFunc() function.Function {
+    return function.New(&function.Spec{
    Params: []function.Parameter{},
    Type:   function.StaticReturnType(cty.String),
    Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
        return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
    },
 })
+}
 
 func Stdlib() map[string]function.Function {
    funcs := make(map[string]function.Function, len(stdlibFunctions))
-    for k, v := range stdlibFunctions {
-        funcs[k] = v
+    for _, v := range stdlibFunctions {
+        if v.factory != nil {
+            funcs[v.name] = v.factory()
+        } else {
+            funcs[v.name] = v.fn
+        }
    }
    return funcs
 }
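Two design points fall out of this hunk: the registry is now an ordered slice rather than a map, and entries may carry a factory so the function value is constructed on each `Stdlib()` call instead of once at package init. A small sketch of exercising the result through go-cty's `Call`, mirroring what the new tests below do:

```go
package main

import (
	"fmt"

	"github.com/docker/buildx/bake/hclparser"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	funcs := hclparser.Stdlib()
	// indexof returns the position of a value in a list or tuple.
	idx, err := funcs["indexof"].Call([]cty.Value{
		cty.TupleVal([]cty.Value{cty.StringVal("1.42.4"), cty.StringVal("1.42.3")}),
		cty.StringVal("1.42.3"),
	})
	n, _ := idx.AsBigFloat().Int64() // idx is cty.NumberIntVal(1)
	fmt.Println(n, err)              // 1 <nil>
}
```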
---
bake/hclparser/stdlib_test.go (new file, 199 lines)
@@ -0,0 +1,199 @@
+package hclparser
+
+import (
+    "testing"
+
+    "github.com/stretchr/testify/require"
+    "github.com/zclconf/go-cty/cty"
+)
+
+func TestIndexOf(t *testing.T) {
+    type testCase struct {
+        input   cty.Value
+        key     cty.Value
+        want    cty.Value
+        wantErr bool
+    }
+    tests := map[string]testCase{
+        "index 0": {
+            input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
+            key:   cty.StringVal("one"),
+            want:  cty.NumberIntVal(0),
+        },
+        "index 3": {
+            input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
+            key:   cty.StringVal("four"),
+            want:  cty.NumberIntVal(3),
+        },
+        "index -1": {
+            input:   cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
+            key:     cty.StringVal("3"),
+            wantErr: true,
+        },
+    }
+
+    for name, test := range tests {
+        name, test := name, test
+        t.Run(name, func(t *testing.T) {
+            got, err := indexOfFunc().Call([]cty.Value{test.input, test.key})
+            if test.wantErr {
+                require.Error(t, err)
+            } else {
+                require.NoError(t, err)
+                require.Equal(t, test.want, got)
+            }
+        })
+    }
+}
+
+func TestBasename(t *testing.T) {
+    type testCase struct {
+        input   cty.Value
+        want    cty.Value
+        wantErr bool
+    }
+    tests := map[string]testCase{
+        "empty": {
+            input: cty.StringVal(""),
+            want:  cty.StringVal("."),
+        },
+        "slash": {
+            input: cty.StringVal("/"),
+            want:  cty.StringVal("/"),
+        },
+        "simple": {
+            input: cty.StringVal("/foo/bar"),
+            want:  cty.StringVal("bar"),
+        },
+        "simple no slash": {
+            input: cty.StringVal("foo/bar"),
+            want:  cty.StringVal("bar"),
+        },
+        "dot": {
+            input: cty.StringVal("/foo/bar."),
+            want:  cty.StringVal("bar."),
+        },
+        "dotdot": {
+            input: cty.StringVal("/foo/bar.."),
+            want:  cty.StringVal("bar.."),
+        },
+        "dotdotdot": {
+            input: cty.StringVal("/foo/bar..."),
+            want:  cty.StringVal("bar..."),
+        },
+    }
+
+    for name, test := range tests {
+        name, test := name, test
+        t.Run(name, func(t *testing.T) {
+            got, err := basenameFunc().Call([]cty.Value{test.input})
+            if test.wantErr {
+                require.Error(t, err)
+            } else {
+                require.NoError(t, err)
+                require.Equal(t, test.want, got)
+            }
+        })
+    }
+}
+
+func TestDirname(t *testing.T) {
+    type testCase struct {
+        input   cty.Value
+        want    cty.Value
+        wantErr bool
+    }
+    tests := map[string]testCase{
+        "empty": {
+            input: cty.StringVal(""),
+            want:  cty.StringVal("."),
+        },
+        "slash": {
+            input: cty.StringVal("/"),
+            want:  cty.StringVal("/"),
+        },
+        "simple": {
+            input: cty.StringVal("/foo/bar"),
+            want:  cty.StringVal("/foo"),
+        },
+        "simple no slash": {
+            input: cty.StringVal("foo/bar"),
+            want:  cty.StringVal("foo"),
+        },
+        "dot": {
+            input: cty.StringVal("/foo/bar."),
+            want:  cty.StringVal("/foo"),
+        },
+        "dotdot": {
+            input: cty.StringVal("/foo/bar.."),
+            want:  cty.StringVal("/foo"),
+        },
+        "dotdotdot": {
+            input: cty.StringVal("/foo/bar..."),
+            want:  cty.StringVal("/foo"),
+        },
+    }
+
+    for name, test := range tests {
+        name, test := name, test
+        t.Run(name, func(t *testing.T) {
+            got, err := dirnameFunc().Call([]cty.Value{test.input})
+            if test.wantErr {
+                require.Error(t, err)
+            } else {
+                require.NoError(t, err)
+                require.Equal(t, test.want, got)
+            }
+        })
+    }
+}
+
+func TestSanitize(t *testing.T) {
+    type testCase struct {
+        input cty.Value
+        want  cty.Value
+    }
+    tests := map[string]testCase{
+        "empty": {
+            input: cty.StringVal(""),
+            want:  cty.StringVal(""),
+        },
+        "simple": {
+            input: cty.StringVal("foo/bar"),
+            want:  cty.StringVal("foo_bar"),
+        },
+        "simple no slash": {
+            input: cty.StringVal("foobar"),
+            want:  cty.StringVal("foobar"),
+        },
+        "dot": {
+            input: cty.StringVal("foo/bar."),
+            want:  cty.StringVal("foo_bar_"),
+        },
+        "dotdot": {
+            input: cty.StringVal("foo/bar.."),
+            want:  cty.StringVal("foo_bar__"),
+        },
+        "dotdotdot": {
+            input: cty.StringVal("foo/bar..."),
+            want:  cty.StringVal("foo_bar___"),
+        },
+        "utf8": {
+            input: cty.StringVal("foo/🍕bar"),
+            want:  cty.StringVal("foo__bar"),
+        },
+        "symbols": {
+            input: cty.StringVal("foo/bar!@(ba+z)"),
+            want:  cty.StringVal("foo_bar___ba_z_"),
+        },
+    }
+
+    for name, test := range tests {
+        name, test := name, test
+        t.Run(name, func(t *testing.T) {
+            got, err := sanitizeFunc().Call([]cty.Value{test.input})
+            require.NoError(t, err)
+            require.Equal(t, test.want, got)
+        })
+    }
+}
---
@@ -4,11 +4,14 @@ import (
    "archive/tar"
    "bytes"
    "context"
+    "os"
+    "strings"
 
    "github.com/docker/buildx/builder"
    controllerapi "github.com/docker/buildx/controller/pb"
    "github.com/docker/buildx/driver"
    "github.com/docker/buildx/util/progress"
+    "github.com/docker/go-units"
    "github.com/moby/buildkit/client"
    "github.com/moby/buildkit/client/llb"
    "github.com/moby/buildkit/frontend/dockerui"
@@ -17,19 +20,42 @@ import (
    "github.com/pkg/errors"
 )
 
+const maxBakeDefinitionSize = 2 * 1024 * 1024 // 2 MB
+
 type Input struct {
    State *llb.State
    URL   string
 }
 
 func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
-    var session []session.Attachable
+    var sessions []session.Attachable
    var filename string
 
    st, ok := dockerui.DetectGitContext(url, false)
    if ok {
-        ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
-        if err == nil {
-            session = append(session, ssh)
+        if ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{
+            ID:    "default",
+            Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
+        }}); err == nil {
+            sessions = append(sessions, ssh)
+        }
+        var gitAuthSecrets []*controllerapi.Secret
+        if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
+            gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
+                ID:  llb.GitAuthTokenKey,
+                Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
+            })
+        }
+        if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
+            gitAuthSecrets = append(gitAuthSecrets, &controllerapi.Secret{
+                ID:  llb.GitAuthHeaderKey,
+                Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
+            })
+        }
+        if len(gitAuthSecrets) > 0 {
+            if secrets, err := controllerapi.CreateSecrets(gitAuthSecrets); err == nil {
+                sessions = append(sessions, secrets)
+            }
        }
    } else {
        st, filename, ok = dockerui.DetectHTTPContext(url)
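The practical effect of this hunk is that three environment variables now influence remote bake definitions fetched from git URLs: BUILDX_BAKE_GIT_SSH (comma-separated SSH socket paths), plus BUILDX_BAKE_GIT_AUTH_TOKEN and BUILDX_BAKE_GIT_AUTH_HEADER, which are forwarded as env-sourced secrets keyed by llb.GitAuthTokenKey and llb.GitAuthHeaderKey. A minimal sketch of the same lookup-gated wiring with the buildx types stubbed out (the `secret` type and ID strings here are placeholders, not the real constants):

```go
package main

import (
	"fmt"
	"os"
)

// secret stands in for controllerapi.Secret: only the env var *name* is
// recorded; the value is read from the environment when the build runs.
type secret struct{ ID, Env string }

func envSecrets() []secret {
	var out []secret
	for env, id := range map[string]string{
		"BUILDX_BAKE_GIT_AUTH_TOKEN":  "git-auth-token",  // llb.GitAuthTokenKey in the diff
		"BUILDX_BAKE_GIT_AUTH_HEADER": "git-auth-header", // llb.GitAuthHeaderKey in the diff
	} {
		// Register the secret only when the variable is actually set.
		if _, ok := os.LookupEnv(env); ok {
			out = append(out, secret{ID: id, Env: env})
		}
	}
	return out
}

func main() {
	fmt.Println(envSecrets())
}
```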
@@ -59,7 +85,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
 
    ch, done := progress.NewChannel(pw)
    defer func() { <-done }()
-    _, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
+    _, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
        def, err := st.Marshal(ctx)
        if err != nil {
            return nil, err
@@ -83,7 +109,6 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
        }
        return nil, err
    }, ch)
-
    if err != nil {
        return nil, nil, err
    }
@@ -155,9 +180,9 @@ func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Refere
    name := inp.URL
    inp.URL = ""
 
-    if len(dt) > stat.Size() {
-        if stat.Size() > 1024*512 {
-            return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
+    if int64(len(dt)) > stat.Size {
+        if stat.Size > maxBakeDefinitionSize {
+            return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size (%s)", units.HumanSize(maxBakeDefinitionSize))
        }
 
        dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{
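The size guard now compares against a named constant and reports it human-readably via github.com/docker/go-units. One detail worth knowing: HumanSize formats in decimal (SI) units, so the 2 MB binary cap (2*1024*1024 bytes) prints as roughly "2.097MB". A quick check:

```go
package main

import (
	"fmt"

	"github.com/docker/go-units"
)

const maxBakeDefinitionSize = 2 * 1024 * 1024 // same value as the new constant

func main() {
	// go-units.HumanSize uses decimal multiples (1 MB = 1,000,000 bytes).
	fmt.Println(units.HumanSize(maxBakeDefinitionSize)) // 2.097MB
}
```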
---
build/build.go (986 lines changed; diff suppressed because it is too large)
---
@@ -5,7 +5,7 @@ import (
    stderrors "errors"
    "net"
 
-    "github.com/containerd/containerd/platforms"
+    "github.com/containerd/platforms"
    "github.com/docker/buildx/builder"
    "github.com/docker/buildx/util/progress"
    v1 "github.com/opencontainers/image-spec/specs-go/v1"
---
@@ -3,8 +3,9 @@ package build
 import (
    "context"
    "fmt"
+    "sync"
 
-    "github.com/containerd/containerd/platforms"
+    "github.com/containerd/platforms"
    "github.com/docker/buildx/builder"
    "github.com/docker/buildx/driver"
    "github.com/docker/buildx/util/progress"
@@ -46,10 +47,22 @@ func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error)
 
 type matchMaker func(specs.Platform) platforms.MatchComparer
 
+type cachedGroup[T any] struct {
+    g       flightcontrol.Group[T]
+    cache   map[int]T
+    cacheMu sync.Mutex
+}
+
+func newCachedGroup[T any]() cachedGroup[T] {
+    return cachedGroup[T]{
+        cache: map[int]T{},
+    }
+}
+
 type nodeResolver struct {
    nodes   []builder.Node
-    clients flightcontrol.Group[*client.Client]
-    opt     flightcontrol.Group[gateway.BuildOpts]
+    clients   cachedGroup[*client.Client]
+    buildOpts cachedGroup[gateway.BuildOpts]
 }
 
 func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
@@ -64,6 +77,8 @@ func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Op
 func newDriverResolver(nodes []builder.Node) *nodeResolver {
    r := &nodeResolver{
        nodes: nodes,
+        clients:   newCachedGroup[*client.Client](),
+        buildOpts: newCachedGroup[gateway.BuildOpts](),
    }
    return r
 }
@@ -179,6 +194,7 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
            resolver:    r,
            driverIndex: 0,
        })
+        nodeIdxs = append(nodeIdxs, 0)
    } else {
        for i, idx := range nodeIdxs {
            node := &resolvedNode{
@@ -237,11 +253,24 @@ func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer)
    for i, idx := range idxs {
        i, idx := i, idx
        eg.Go(func() error {
-            c, err := r.clients.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
+            c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
                if r.nodes[idx].Driver == nil {
                    return nil, nil
                }
-                return driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
+                r.clients.cacheMu.Lock()
+                c, ok := r.clients.cache[idx]
+                r.clients.cacheMu.Unlock()
+                if ok {
+                    return c, nil
+                }
+                c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
+                if err != nil {
+                    return nil, err
+                }
+                r.clients.cacheMu.Lock()
+                r.clients.cache[idx] = c
+                r.clients.cacheMu.Unlock()
+                return c, nil
            })
            if err != nil {
                return err
@@ -272,14 +301,25 @@ func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer)
            continue
        }
        eg.Go(func() error {
-            opt, err := r.opt.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
-                opt := gateway.BuildOpts{}
+            opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
+                r.buildOpts.cacheMu.Lock()
+                opt, ok := r.buildOpts.cache[idx]
+                r.buildOpts.cacheMu.Unlock()
+                if ok {
+                    return opt, nil
+                }
                _, err := c.Build(ctx, client.SolveOpt{
                    Internal: true,
                }, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
                    opt = c.BuildOpts()
                    return nil, nil
                }, nil)
+                if err != nil {
+                    return gateway.BuildOpts{}, err
+                }
+                r.buildOpts.cacheMu.Lock()
+                r.buildOpts.cache[idx] = opt
+                r.buildOpts.cacheMu.Unlock()
                return opt, err
            })
            if err != nil {
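The pattern added here layers a persistent cache under flightcontrol: the Group still collapses concurrent callers onto one computation, while the mutex-guarded map keeps the result for later calls (flightcontrol alone forgets results once the in-flight call returns). A self-contained sketch of the same idea without the buildx types:

```go
package main

import (
	"fmt"
	"sync"
)

// cached memoizes results by integer key; buildx pairs this with a
// flightcontrol.Group so concurrent callers share one computation and
// later callers reuse the stored result.
type cached[T any] struct {
	mu    sync.Mutex
	cache map[int]T
}

func (c *cached[T]) do(key int, compute func() (T, error)) (T, error) {
	c.mu.Lock()
	if v, ok := c.cache[key]; ok {
		c.mu.Unlock()
		return v, nil // cache hit: skip the expensive computation
	}
	c.mu.Unlock()
	v, err := compute()
	if err != nil {
		var zero T
		return zero, err // errors are not cached; the next call retries
	}
	c.mu.Lock()
	c.cache[key] = v
	c.mu.Unlock()
	return v, nil
}

func main() {
	c := &cached[string]{cache: map[int]string{}}
	v, _ := c.do(1, func() (string, error) { return "booted", nil })
	fmt.Println(v) // booted
}
```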
---
@@ -5,7 +5,7 @@ import (
    "sort"
    "testing"
 
-    "github.com/containerd/containerd/platforms"
+    "github.com/containerd/platforms"
    "github.com/docker/buildx/builder"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/stretchr/testify/require"

---
build/git.go (51 lines changed)
@@ -17,10 +17,19 @@ import (
 
 const DockerfileLabel = "com.docker.image.source.entrypoint"
 
-func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(*client.SolveOpt), error) {
-    res := make(map[string]string)
+type gitAttrsAppendFunc func(so *client.SolveOpt)
+
+func gitAppendNoneFunc(_ *client.SolveOpt) {}
+
+func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (f gitAttrsAppendFunc, err error) {
+    defer func() {
+        if f == nil {
+            f = gitAppendNoneFunc
+        }
+    }()
+
    if contextPath == "" {
-        return nil, nil, nil
+        return nil, nil
    }
 
    setGitLabels := false
@@ -39,7 +48,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
    }
 
    if !setGitLabels && !setGitInfo {
-        return nil, nil, nil
+        return nil, nil
    }
 
    // figure out in which directory the git command needs to run in
@@ -54,25 +63,27 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
    gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
    if err != nil {
        if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
-            return res, nil, errors.Wrap(err, "git was not found in the system")
+            return nil, errors.Wrap(err, "git was not found in the system")
        }
-        return nil, nil, nil
+        return nil, nil
    }
 
    if !gitc.IsInsideWorkTree() {
        if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
-            return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
+            return nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
        }
-        return nil, nil, nil
+        return nil, nil
    }
 
    root, err := gitc.RootDir()
    if err != nil {
-        return res, nil, errors.Wrap(err, "failed to get git root dir")
+        return nil, errors.Wrap(err, "failed to get git root dir")
    }
 
+    res := make(map[string]string)
+
    if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
-        return res, nil, errors.Wrap(err, "failed to get git commit")
+        return nil, errors.Wrap(err, "failed to get git commit")
    } else if sha != "" {
        checkDirty := false
        if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@@ -112,12 +123,24 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
        }
    }
 
-    return res, func(so *client.SolveOpt) {
+    return func(so *client.SolveOpt) {
+        if so.FrontendAttrs == nil {
+            so.FrontendAttrs = make(map[string]string)
+        }
+        for k, v := range res {
+            so.FrontendAttrs[k] = v
+        }
+
        if !setGitInfo || root == "" {
            return
        }
-        for k, dir := range so.LocalDirs {
-            dir, err = filepath.EvalSymlinks(dir)
+
+        for key, mount := range so.LocalMounts {
+            fs, ok := mount.(*fs)
+            if !ok {
+                continue
+            }
+            dir, err := filepath.EvalSymlinks(fs.dir) // keep same behavior as fsutil.NewFS
            if err != nil {
                continue
            }
@@ -130,7 +153,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
            }
            dir = osutil.SanitizePath(dir)
            if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
-                so.FrontendAttrs["vcs:localdir:"+k] = r
+                so.FrontendAttrs["vcs:localdir:"+key] = r
            }
        }
    }, nil
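getGitAttributes now returns a single append function instead of a map plus a closure, and the deferred assignment guarantees callers always receive a callable (a no-op when git metadata is unavailable), so call sites need no nil checks. A small sketch of that shape with stand-in types (nothing here is buildx API):

```go
package main

import "fmt"

type opts struct{ attrs map[string]string }

type appendFunc func(*opts)

func noop(*opts) {}

// getAttrs mirrors the shape of getGitAttributes after the change: the
// deferred block swaps any nil result for a no-op before returning.
func getAttrs(enabled bool) (f appendFunc, err error) {
	defer func() {
		if f == nil {
			f = noop
		}
	}()
	if !enabled {
		return nil, nil // caller still gets a callable no-op
	}
	return func(o *opts) {
		if o.attrs == nil {
			o.attrs = map[string]string{}
		}
		o.attrs["vcs:source"] = "example"
	}, nil
}

func main() {
	f, _ := getAttrs(false)
	var o opts
	f(&o) // safe even when disabled
	fmt.Println(o.attrs) // map[]
}
```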
|
|||||||
@@ -23,7 +23,7 @@ func setupTest(tb testing.TB) {
 	gitutil.GitInit(c, tb)

 	df := []byte("FROM alpine:latest\n")
-	assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
+	require.NoError(tb, os.WriteFile("Dockerfile", df, 0644))

 	gitutil.GitAdd(c, tb, "Dockerfile")
 	gitutil.GitCommit(c, tb, "initial commit")
@@ -31,24 +31,26 @@ func setupTest(tb testing.TB) {
 }

 func TestGetGitAttributesNotGitRepo(t *testing.T) {
-	_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
-	assert.NoError(t, err)
+	_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
+	require.NoError(t, err)
 }

 func TestGetGitAttributesBadGitRepo(t *testing.T) {
 	tmp := t.TempDir()
 	require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))

-	_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
+	_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
 	assert.Error(t, err)
 }

 func TestGetGitAttributesNoContext(t *testing.T) {
 	setupTest(t)

-	gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
-	assert.NoError(t, err)
-	assert.Empty(t, gitattrs)
+	addGitAttrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
+	require.NoError(t, err)
+	var so client.SolveOpt
+	addGitAttrs(&so)
+	assert.Empty(t, so.FrontendAttrs)
 }

 func TestGetGitAttributes(t *testing.T) {
@@ -115,15 +117,17 @@ func TestGetGitAttributes(t *testing.T) {
 			if tt.envGitInfo != "" {
 				t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
 			}
-			gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
+			addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
 			require.NoError(t, err)
+			var so client.SolveOpt
+			addGitAttrs(&so)
 			for _, e := range tt.expected {
-				assert.Contains(t, gitattrs, e)
-				assert.NotEmpty(t, gitattrs[e])
+				assert.Contains(t, so.FrontendAttrs, e)
+				assert.NotEmpty(t, so.FrontendAttrs[e])
 				if e == "label:"+DockerfileLabel {
-					assert.Equal(t, "Dockerfile", gitattrs[e])
+					assert.Equal(t, "Dockerfile", so.FrontendAttrs[e])
 				} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
-					assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
+					assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs[e])
 				}
 			}
 		})
@@ -140,20 +144,25 @@ func TestGetGitAttributesDirty(t *testing.T) {
 	require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))

 	t.Setenv("BUILDX_GIT_LABELS", "true")
-	gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
-	assert.Equal(t, 5, len(gitattrs))
+	addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
+	require.NoError(t, err)

-	assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
-	assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
-	assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
-	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
-	assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
-	assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))
+	var so client.SolveOpt
+	addGitAttrs(&so)

-	assert.Contains(t, gitattrs, "vcs:source")
-	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
-	assert.Contains(t, gitattrs, "vcs:revision")
-	assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
+	assert.Equal(t, 5, len(so.FrontendAttrs))
+	assert.Contains(t, so.FrontendAttrs, "label:"+DockerfileLabel)
+	assert.Equal(t, "Dockerfile", so.FrontendAttrs["label:"+DockerfileLabel])
+	assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationSource)
+	assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+specs.AnnotationSource])
+	assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationRevision)
+	assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+specs.AnnotationRevision], "-dirty"))

+	assert.Contains(t, so.FrontendAttrs, "vcs:source")
+	assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["vcs:source"])
+	assert.Contains(t, so.FrontendAttrs, "vcs:revision")
+	assert.True(t, strings.HasSuffix(so.FrontendAttrs["vcs:revision"], "-dirty"))
 }

 func TestLocalDirs(t *testing.T) {
@@ -161,19 +170,19 @@ func TestLocalDirs(t *testing.T) {

 	so := &client.SolveOpt{
 		FrontendAttrs: map[string]string{},
-		LocalDirs: map[string]string{
-			"context":    ".",
-			"dockerfile": ".",
-		},
 	}

-	_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
+	addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
 	require.NoError(t, err)
-	require.NotNil(t, addVCSLocalDir)

-	addVCSLocalDir(so)
+	require.NoError(t, setLocalMount("context", ".", so))
+	require.NoError(t, setLocalMount("dockerfile", ".", so))
+
+	addGitAttrs(so)
+
 	require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
 	assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])

 	require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
 	assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
 }
@@ -186,8 +195,8 @@ func TestLocalDirsSub(t *testing.T) {
 	gitutil.GitInit(c, t)

 	df := []byte("FROM alpine:latest\n")
-	assert.NoError(t, os.MkdirAll("app", 0755))
-	assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
+	require.NoError(t, os.MkdirAll("app", 0755))
+	require.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))

 	gitutil.GitAdd(c, t, "app/Dockerfile")
 	gitutil.GitCommit(c, t, "initial commit")
@@ -195,19 +204,18 @@ func TestLocalDirsSub(t *testing.T) {

 	so := &client.SolveOpt{
 		FrontendAttrs: map[string]string{},
-		LocalDirs: map[string]string{
-			"context":    ".",
-			"dockerfile": "app",
-		},
 	}
+	require.NoError(t, setLocalMount("context", ".", so))
+	require.NoError(t, setLocalMount("dockerfile", "app", so))

-	_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
+	addGitAttrs, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
 	require.NoError(t, err)
-	require.NotNil(t, addVCSLocalDir)

-	addVCSLocalDir(so)
+	addGitAttrs(so)

 	require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
 	assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])

 	require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
 	assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
 }
@@ -16,7 +16,7 @@ import (

 type Container struct {
 	cancelOnce      sync.Once
-	containerCancel func()
+	containerCancel func(error)
 	isUnavailable   atomic.Bool
 	initStarted     atomic.Bool
 	container       gateway.Container
@@ -31,18 +31,18 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
 	errCh := make(chan error)
 	go func() {
 		err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
-			ctx, cancel := context.WithCancel(ctx)
+			ctx, cancel := context.WithCancelCause(ctx)
 			go func() {
 				<-mainCtx.Done()
-				cancel()
+				cancel(errors.WithStack(context.Canceled))
 			}()

-			containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
+			containerCfg, err := resultCtx.getContainerConfig(cfg)
 			if err != nil {
 				return nil, err
 			}
-			containerCtx, containerCancel := context.WithCancel(ctx)
-			defer containerCancel()
+			containerCtx, containerCancel := context.WithCancelCause(ctx)
+			defer containerCancel(errors.WithStack(context.Canceled))
 			bkContainer, err := c.NewContainer(containerCtx, containerCfg)
 			if err != nil {
 				return nil, err
@@ -83,7 +83,7 @@ func (c *Container) Cancel() {
 	c.markUnavailable()
 	c.cancelOnce.Do(func() {
 		if c.containerCancel != nil {
-			c.containerCancel()
+			c.containerCancel(errors.WithStack(context.Canceled))
 		}
 		close(c.releaseCh)
 	})
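Note: the switch from context.WithCancel to context.WithCancelCause is what lets the cancellation reason survive; a later hunk in this series replaces ctx.Err() with context.Cause(ctx) to read it back. A self-contained sketch of the stdlib semantics (Go 1.20+):

package main

import (
	"context"
	"errors"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.New("container build aborted"))

	fmt.Println(ctx.Err())          // context canceled
	fmt.Println(context.Cause(ctx)) // container build aborted
}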
@@ -5,30 +5,33 @@ import (

 	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/localstate"
+	"github.com/docker/buildx/util/confutil"
 	"github.com/moby/buildkit/client"
 )

-func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
+func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, cfg *confutil.Config) error {
 	var err error
-	if so.Ref == "" {
+	if so.Ref == "" || opts.CallFunc != nil {
 		return nil
 	}
 	lp := opts.Inputs.ContextPath
 	dp := opts.Inputs.DockerfilePath
-	if lp != "" || dp != "" {
-		if lp != "" {
-			lp, err = filepath.Abs(lp)
-			if err != nil {
-				return err
-			}
-		}
-		if dp != "" {
+	if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" {
 		dp, err = filepath.Abs(dp)
 		if err != nil {
 			return err
 		}
 	}
-		l, err := localstate.New(configDir)
+	if lp != "" && !IsRemoteURL(lp) && lp != "-" {
+		lp, err = filepath.Abs(lp)
+		if err != nil {
+			return err
+		}
+	}
+	if lp == "" && dp == "" {
+		return nil
+	}
+	l, err := localstate.New(cfg)
 	if err != nil {
 		return err
 	}
@@ -38,6 +41,4 @@ func saveLocalState(so *client.SolveOpt, target string, opts Options, node build
 		DockerfilePath: dp,
 		GroupRef:       opts.GroupRef,
 	})
-	}
-	return nil
 }
build/opt.go (new file, 657 lines)
@@ -0,0 +1,657 @@
package build

import (
	"bytes"
	"context"
	"io"
	"os"
	"path/filepath"
	"slices"
	"strconv"
	"strings"
	"syscall"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/content/local"
	"github.com/containerd/platforms"
	"github.com/distribution/reference"
	"github.com/docker/buildx/builder"
	"github.com/docker/buildx/driver"
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/dockerutil"
	"github.com/docker/buildx/util/osutil"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/client/llb"
	"github.com/moby/buildkit/client/ociindex"
	gateway "github.com/moby/buildkit/frontend/gateway/client"
	"github.com/moby/buildkit/identity"
	"github.com/moby/buildkit/session/upload/uploadprovider"
	"github.com/moby/buildkit/solver/pb"
	"github.com/moby/buildkit/util/apicaps"
	"github.com/moby/buildkit/util/entitlements"
	"github.com/opencontainers/go-digest"
	"github.com/pkg/errors"
	"github.com/tonistiigi/fsutil"
)

func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *Options, bopts gateway.BuildOpts, cfg *confutil.Config, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
	nodeDriver := node.Driver
	defers := make([]func(), 0, 2)
	releaseF := func() {
		for _, f := range defers {
			f()
		}
	}

	defer func() {
		if err != nil {
			releaseF()
		}
	}()

	// inline cache from build arg
	if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
		if v, _ := strconv.ParseBool(v); v {
			opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
				Type:  "inline",
				Attrs: map[string]string{},
			})
		}
	}

	for _, e := range opt.CacheTo {
		if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
			return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
		}
	}

	cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
	for _, e := range opt.CacheTo {
		if e.Type == "gha" {
			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
				continue
			}
		} else if e.Type == "s3" {
			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
				continue
			}
		}
		cacheTo = append(cacheTo, e)
	}

	cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
	for _, e := range opt.CacheFrom {
		if e.Type == "gha" {
			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
				continue
			}
		} else if e.Type == "s3" {
			if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
				continue
			}
		}
		cacheFrom = append(cacheFrom, e)
	}

	so := client.SolveOpt{
		Ref:                 opt.Ref,
		Frontend:            "dockerfile.v0",
		FrontendAttrs:       map[string]string{},
		LocalMounts:         map[string]fsutil.FS{},
		CacheExports:        cacheTo,
		CacheImports:        cacheFrom,
		AllowedEntitlements: opt.Allow,
		SourcePolicy:        opt.SourcePolicy,
	}

	if opt.CgroupParent != "" {
		so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
	}

	if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
		if v, _ := strconv.ParseBool(v); v {
			so.FrontendAttrs["multi-platform"] = "true"
		}
	}

	if multiDriver {
		// force creation of manifest list
		so.FrontendAttrs["multi-platform"] = "true"
	}

	attests := make(map[string]string)
	for k, v := range opt.Attests {
		if v != nil {
			attests[k] = *v
		}
	}

	supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
	if len(attests) > 0 {
		if !supportAttestations {
			if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
				return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
			}
			return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
		}
		for k, v := range attests {
			so.FrontendAttrs["attest:"+k] = v
		}
	}

	if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
		const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
		var noProv bool
		if v, ok := os.LookupEnv(noAttestEnv); ok {
			noProv, err = strconv.ParseBool(v)
			if err != nil {
				return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
			}
		}
		if !noProv {
			so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
		}
	}

	switch len(opt.Exports) {
	case 1:
		// valid
	case 0:
		if !noDefaultLoad() && opt.CallFunc == nil {
			if nodeDriver.IsMobyDriver() {
				// backwards compat for docker driver only:
				// this ensures the build results in a docker image.
				opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
			} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
				opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
			}
		}
	default:
		if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
			return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
		}
	}

	// fill in image exporter names from tags
	if len(opt.Tags) > 0 {
		tags := make([]string, len(opt.Tags))
		for i, tag := range opt.Tags {
			ref, err := reference.Parse(tag)
			if err != nil {
				return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
			}
			tags[i] = ref.String()
		}
		for i, e := range opt.Exports {
			switch e.Type {
			case "image", "oci", "docker":
				opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
			}
		}
	} else {
		for _, e := range opt.Exports {
			if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
				if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
					return nil, nil, errors.Errorf("tag is needed when pushing to registry")
				}
			}
		}
	}

	// cacheonly is a fake exporter to opt out of default behaviors
	exports := make([]client.ExportEntry, 0, len(opt.Exports))
	for _, e := range opt.Exports {
		if e.Type != "cacheonly" {
			exports = append(exports, e)
		}
	}
	opt.Exports = exports

	// set up exporters
	for i, e := range opt.Exports {
		if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
			return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
		}
		if e.Type == "docker" {
			features := docker.Features(ctx, e.Attrs["context"])
			if features[dockerutil.OCIImporter] && e.Output == nil {
				// rely on oci importer if available (which supports
				// multi-platform images), otherwise fall back to docker
				opt.Exports[i].Type = "oci"
			} else if len(opt.Platforms) > 1 || len(attests) > 0 {
				if e.Output != nil {
					return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
				}
				return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
			}
			if e.Output == nil {
				if nodeDriver.IsMobyDriver() {
					e.Type = "image"
				} else {
					w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
					if err != nil {
						return nil, nil, err
					}
					defers = append(defers, cancel)
					opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
						return w, nil
					}
				}
			} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
				return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
			}
		}
		if e.Type == "image" && nodeDriver.IsMobyDriver() {
			opt.Exports[i].Type = "moby"
			if e.Attrs["push"] != "" {
				if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
					if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
						return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
					}
				}
			}
		}
		if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
			// inline buildinfo attrs from build arg
			if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
				opt.Exports[i].Attrs["buildinfo-attrs"] = v
			}
		}
	}

	so.Exports = opt.Exports
	so.Session = slices.Clone(opt.Session)

	releaseLoad, err := loadInputs(ctx, nodeDriver, &opt.Inputs, pw, &so)
	if err != nil {
		return nil, nil, err
	}
	defers = append(defers, releaseLoad)

	// add node identifier to shared key if one was specified
	if so.SharedKey != "" {
		so.SharedKey += ":" + cfg.TryNodeIdentifier()
	}

	if opt.Pull {
		so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
	} else if nodeDriver.IsMobyDriver() {
		// moby driver always resolves local images by default
		so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
	}
	if opt.Target != "" {
		so.FrontendAttrs["target"] = opt.Target
	}
	if len(opt.NoCacheFilter) > 0 {
		so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
	}
	if opt.NoCache {
		so.FrontendAttrs["no-cache"] = ""
	}
	for k, v := range opt.BuildArgs {
		so.FrontendAttrs["build-arg:"+k] = v
	}
	for k, v := range opt.Labels {
		so.FrontendAttrs["label:"+k] = v
	}

	for k, v := range node.ProxyConfig {
		if _, ok := opt.BuildArgs[k]; !ok {
			so.FrontendAttrs["build-arg:"+k] = v
		}
	}

	// set platforms
	if len(opt.Platforms) != 0 {
		pp := make([]string, len(opt.Platforms))
		for i, p := range opt.Platforms {
			pp[i] = platforms.Format(p)
		}
		if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
			return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
		}
		so.FrontendAttrs["platform"] = strings.Join(pp, ",")
	}

	// setup networkmode
	switch opt.NetworkMode {
	case "host":
		so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
		so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
	case "none":
		so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
	case "", "default":
	default:
		return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
	}

	// setup extrahosts
	extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
	if err != nil {
		return nil, nil, err
	}
	if len(extraHosts) > 0 {
		so.FrontendAttrs["add-hosts"] = extraHosts
	}

	// setup shm size
	if opt.ShmSize.Value() > 0 {
		so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
	}

	// setup ulimits
	ulimits, err := toBuildkitUlimits(opt.Ulimits)
	if err != nil {
		return nil, nil, err
	} else if len(ulimits) > 0 {
		so.FrontendAttrs["ulimit"] = ulimits
	}

	// mark call request as internal
	if opt.CallFunc != nil {
		so.Internal = true
	}

	return &so, releaseF, nil
}

func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
	if inp.ContextPath == "" {
		return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
	}

	// TODO: handle stdin, symlinks, remote contexts, check files exist

	var (
		err               error
		dockerfileReader  io.ReadCloser
		dockerfileDir     string
		dockerfileName    = inp.DockerfilePath
		dockerfileSrcName = inp.DockerfilePath
		toRemove          []string
	)

	switch {
	case inp.ContextState != nil:
		if target.FrontendInputs == nil {
			target.FrontendInputs = make(map[string]llb.State)
		}
		target.FrontendInputs["context"] = *inp.ContextState
		target.FrontendInputs["dockerfile"] = *inp.ContextState
	case inp.ContextPath == "-":
		if inp.DockerfilePath == "-" {
			return nil, errors.Errorf("invalid argument: can't use stdin for both build context and dockerfile")
		}

		rc := inp.InStream.NewReadCloser()
		magic, err := inp.InStream.Peek(archiveHeaderSize * 2)
		if err != nil && err != io.EOF {
			return nil, errors.Wrap(err, "failed to peek context header from STDIN")
		}
		if !(err == io.EOF && len(magic) == 0) {
			if isArchive(magic) {
				// stdin is context
				up := uploadprovider.New()
				target.FrontendAttrs["context"] = up.Add(rc)
				target.Session = append(target.Session, up)
			} else {
				if inp.DockerfilePath != "" {
					return nil, errors.Errorf("ambiguous Dockerfile source: both stdin and flag correspond to Dockerfiles")
				}
				// stdin is dockerfile
				dockerfileReader = rc
				inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
				toRemove = append(toRemove, inp.ContextPath)
				if err := setLocalMount("context", inp.ContextPath, target); err != nil {
					return nil, err
				}
			}
		}
	case osutil.IsLocalDir(inp.ContextPath):
		if err := setLocalMount("context", inp.ContextPath, target); err != nil {
			return nil, err
		}
		sharedKey := inp.ContextPath
		if p, err := filepath.Abs(sharedKey); err == nil {
			sharedKey = filepath.Base(p)
		}
		target.SharedKey = sharedKey
		switch inp.DockerfilePath {
		case "-":
			dockerfileReader = inp.InStream.NewReadCloser()
		case "":
			dockerfileDir = inp.ContextPath
		default:
			dockerfileDir = filepath.Dir(inp.DockerfilePath)
			dockerfileName = filepath.Base(inp.DockerfilePath)
		}
	case IsRemoteURL(inp.ContextPath):
		if inp.DockerfilePath == "-" {
			dockerfileReader = inp.InStream.NewReadCloser()
		} else if filepath.IsAbs(inp.DockerfilePath) {
			dockerfileDir = filepath.Dir(inp.DockerfilePath)
			dockerfileName = filepath.Base(inp.DockerfilePath)
			target.FrontendAttrs["dockerfilekey"] = "dockerfile"
		}
		target.FrontendAttrs["context"] = inp.ContextPath
	default:
		return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
	}

	if inp.DockerfileInline != "" {
		dockerfileReader = io.NopCloser(strings.NewReader(inp.DockerfileInline))
		dockerfileSrcName = "inline"
	} else if inp.DockerfilePath == "-" {
		dockerfileSrcName = "stdin"
	} else if inp.DockerfilePath == "" {
		dockerfileSrcName = filepath.Join(inp.ContextPath, "Dockerfile")
	}

	if dockerfileReader != nil {
		dockerfileDir, err = createTempDockerfile(dockerfileReader, inp.InStream)
		if err != nil {
			return nil, err
		}
		toRemove = append(toRemove, dockerfileDir)
		dockerfileName = "Dockerfile"
		target.FrontendAttrs["dockerfilekey"] = "dockerfile"
	}
	if isHTTPURL(inp.DockerfilePath) {
		dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
		if err != nil {
			return nil, err
		}
		toRemove = append(toRemove, dockerfileDir)
		dockerfileName = "Dockerfile"
		target.FrontendAttrs["dockerfilekey"] = "dockerfile"
		delete(target.FrontendInputs, "dockerfile")
	}

	if dockerfileName == "" {
		dockerfileName = "Dockerfile"
	}

	if dockerfileDir != "" {
		if err := setLocalMount("dockerfile", dockerfileDir, target); err != nil {
			return nil, err
		}
		dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
	}

	target.FrontendAttrs["filename"] = dockerfileName

	for k, v := range inp.NamedContexts {
		target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
		if v.State != nil {
			target.FrontendAttrs["context:"+k] = "input:" + k
			if target.FrontendInputs == nil {
				target.FrontendInputs = make(map[string]llb.State)
			}
			target.FrontendInputs[k] = *v.State
			continue
		}

		if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
			target.FrontendAttrs["context:"+k] = v.Path
			continue
		}

		// handle OCI layout
		if strings.HasPrefix(v.Path, "oci-layout://") {
			localPath := strings.TrimPrefix(v.Path, "oci-layout://")
			localPath, dig, hasDigest := strings.Cut(localPath, "@")
			localPath, tag, hasTag := strings.Cut(localPath, ":")
			if !hasTag {
				tag = "latest"
			}
			if !hasDigest {
				dig, err = resolveDigest(localPath, tag)
				if err != nil {
					return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
				}
			}
			store, err := local.NewStore(localPath)
			if err != nil {
				return nil, errors.Wrapf(err, "invalid store at %s", localPath)
			}
			storeName := identity.NewID()
			if target.OCIStores == nil {
				target.OCIStores = map[string]content.Store{}
			}
			target.OCIStores[storeName] = store

			target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
			continue
		}
		st, err := os.Stat(v.Path)
		if err != nil {
			return nil, errors.Wrapf(err, "failed to get build context %v", k)
		}
		if !st.IsDir() {
			return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
		}
		localName := k
		if k == "context" || k == "dockerfile" {
			localName = "_" + k // underscore to avoid collisions
		}
		if err := setLocalMount(localName, v.Path, target); err != nil {
			return nil, err
		}
		target.FrontendAttrs["context:"+k] = "local:" + localName
	}

	release := func() {
		for _, dir := range toRemove {
			_ = os.RemoveAll(dir)
		}
	}

	inp.DockerfileMappingSrc = dockerfileSrcName
	inp.DockerfileMappingDst = dockerfileName
	return release, nil
}

func resolveDigest(localPath, tag string) (dig string, _ error) {
	idx := ociindex.NewStoreIndex(localPath)

	// lookup by name
	desc, err := idx.Get(tag)
	if err != nil {
		return "", err
	}
	if desc == nil {
		// lookup single
		desc, err = idx.GetSingle()
		if err != nil {
			return "", err
		}
	}
	if desc == nil {
		return "", errors.New("failed to resolve digest")
	}

	dig = string(desc.Digest)
	_, err = digest.Parse(dig)
	if err != nil {
		return "", errors.Wrapf(err, "invalid digest %s", dig)
	}

	return dig, nil
}

func setLocalMount(name, dir string, so *client.SolveOpt) error {
	lm, err := fsutil.NewFS(dir)
	if err != nil {
		return err
	}
	if so.LocalMounts == nil {
		so.LocalMounts = map[string]fsutil.FS{}
	}
	so.LocalMounts[name] = &fs{FS: lm, dir: dir}
	return nil
}

func createTempDockerfile(r io.Reader, multiReader *SyncMultiReader) (string, error) {
	dir, err := os.MkdirTemp("", "dockerfile")
	if err != nil {
		return "", err
	}
	f, err := os.Create(filepath.Join(dir, "Dockerfile"))
	if err != nil {
		return "", err
	}
	defer f.Close()

	if multiReader != nil {
		dt, err := io.ReadAll(r)
		if err != nil {
			return "", err
		}
		multiReader.Reset(dt)
		r = bytes.NewReader(dt)
	}

	if _, err := io.Copy(f, r); err != nil {
		return "", err
	}
	return dir, err
}

// handle https://github.com/moby/moby/pull/10858
func handleLowercaseDockerfile(dir, p string) string {
	if filepath.Base(p) != "Dockerfile" {
		return p
	}

	f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
	if err != nil {
		return p
	}

	names, err := f.Readdirnames(-1)
	if err != nil {
		return p
	}

	foundLowerCase := false
	for _, n := range names {
		if n == "Dockerfile" {
			return p
		}
		if n == "dockerfile" {
			foundLowerCase = true
		}
	}
	if foundLowerCase {
		return filepath.Join(filepath.Dir(p), "dockerfile")
	}
	return p
}

type fs struct {
	fsutil.FS
	dir string
}

var _ fsutil.FS = &fs{}
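Note: setLocalMount wraps each local directory in the *fs type above, which keeps the original on-disk path alongside the fsutil.FS; getGitAttributes later unwraps it to derive the vcs:localdir:* attributes. A minimal sketch, same package (mountLocalDirs is a hypothetical helper):

func mountLocalDirs(so *client.SolveOpt) error {
	if err := setLocalMount("context", ".", so); err != nil {
		return err
	}
	// the *fs wrapper remembers dir, so paths relative to the git root
	// can still be computed after the mount is registered
	return setLocalMount("dockerfile", ".", so)
}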
build/provenance.go (new file, 157 lines)
@@ -0,0 +1,157 @@
package build

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"io"
	"strings"
	"sync"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/content/proxy"
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/progress"
	controlapi "github.com/moby/buildkit/api/services/control"
	"github.com/moby/buildkit/client"
	provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
	digest "github.com/opencontainers/go-digest"
	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

type provenancePredicate struct {
	Builder *provenanceBuilder `json:"builder,omitempty"`
	provenancetypes.ProvenancePredicate
}

type provenanceBuilder struct {
	ID string `json:"id,omitempty"`
}

func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
	if mode == confutil.MetadataProvenanceModeDisabled {
		return nil
	}
	pw = progress.ResetTime(pw)
	return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
		res, err := fetchProvenance(ctx, c, ref, mode)
		if err != nil {
			return err
		}
		for k, v := range res {
			sr.ExporterResponse[k] = v
		}
		return nil
	})
}

func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
	cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
		Ref:       ref,
		EarlyExit: true,
	})
	if err != nil {
		return nil, err
	}

	var mu sync.Mutex
	eg, ctx := errgroup.WithContext(ctx)
	store := proxy.NewContentStore(c.ContentClient())
	for {
		ev, err := cl.Recv()
		if errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			return nil, err
		}
		if ev.Record == nil {
			continue
		}
		if ev.Record.Result != nil {
			desc := lookupProvenance(ev.Record.Result)
			if desc == nil {
				continue
			}
			eg.Go(func() error {
				dt, err := content.ReadBlob(ctx, store, *desc)
				if err != nil {
					return errors.Wrapf(err, "failed to load provenance blob from build record")
				}
				prv, err := encodeProvenance(dt, mode)
				if err != nil {
					return err
				}
				mu.Lock()
				if out == nil {
					out = make(map[string]string)
				}
				out["buildx.build.provenance"] = prv
				mu.Unlock()
				return nil
			})
		} else if ev.Record.Results != nil {
			for platform, res := range ev.Record.Results {
				platform := platform
				desc := lookupProvenance(res)
				if desc == nil {
					continue
				}
				eg.Go(func() error {
					dt, err := content.ReadBlob(ctx, store, *desc)
					if err != nil {
						return errors.Wrapf(err, "failed to load provenance blob from build record")
					}
					prv, err := encodeProvenance(dt, mode)
					if err != nil {
						return err
					}
					mu.Lock()
					if out == nil {
						out = make(map[string]string)
					}
					out["buildx.build.provenance/"+platform] = prv
					mu.Unlock()
					return nil
				})
			}
		}
	}
	return out, eg.Wait()
}

func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
	for _, a := range res.Attestations {
		if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
			return &ocispecs.Descriptor{
				Digest:      digest.Digest(a.Digest),
				Size:        a.Size,
				MediaType:   a.MediaType,
				Annotations: a.Annotations,
			}
		}
	}
	return nil
}

func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
	var prv provenancePredicate
	if err := json.Unmarshal(dt, &prv); err != nil {
		return "", errors.Wrapf(err, "failed to unmarshal provenance")
	}
	if prv.Builder != nil && prv.Builder.ID == "" {
		// reset builder if id is empty
		prv.Builder = nil
	}
	if mode == confutil.MetadataProvenanceModeMin {
		// reset fields for minimal provenance
		prv.BuildConfig = nil
		prv.Metadata = nil
	}
	dtprv, err := json.Marshal(prv)
	if err != nil {
		return "", errors.Wrapf(err, "failed to marshal provenance")
	}
	return base64.StdEncoding.EncodeToString(dtprv), nil
}
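Note: the provenance fetched here ends up base64-encoded under the buildx.build.provenance key of the exporter response (and, for multi-platform builds, under buildx.build.provenance/<platform>). A standalone sketch of decoding it back out of a metadata file written with --metadata-file; the file name is an assumption, the key and encoding follow the code above:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("metadata.json") // assumed --metadata-file output
	if err != nil {
		panic(err)
	}
	var md map[string]json.RawMessage
	if err := json.Unmarshal(raw, &md); err != nil {
		panic(err)
	}
	var b64 string
	if err := json.Unmarshal(md["buildx.build.provenance"], &b64); err != nil {
		panic(err)
	}
	dt, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err)
	}
	var pred map[string]any
	if err := json.Unmarshal(dt, &pred); err != nil {
		panic(err)
	}
	fmt.Println(pred["buildType"]) // one of the SLSA predicate fields
}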
build/replicatedstream.go (new file, 164 lines)
@@ -0,0 +1,164 @@
package build

import (
	"bufio"
	"bytes"
	"io"
	"sync"
)

type SyncMultiReader struct {
	source  *bufio.Reader
	buffer  []byte
	static  []byte
	mu      sync.Mutex
	cond    *sync.Cond
	readers []*syncReader
	err     error
	offset  int
}

type syncReader struct {
	mr     *SyncMultiReader
	offset int
	closed bool
}

func NewSyncMultiReader(source io.Reader) *SyncMultiReader {
	mr := &SyncMultiReader{
		source: bufio.NewReader(source),
		buffer: make([]byte, 0, 32*1024),
	}
	mr.cond = sync.NewCond(&mr.mu)
	return mr
}

func (mr *SyncMultiReader) Peek(n int) ([]byte, error) {
	mr.mu.Lock()
	defer mr.mu.Unlock()

	if mr.static != nil {
		return mr.static[min(n, len(mr.static)):], nil
	}

	return mr.source.Peek(n)
}

func (mr *SyncMultiReader) Reset(dt []byte) {
	mr.mu.Lock()
	defer mr.mu.Unlock()

	mr.static = dt
}

func (mr *SyncMultiReader) NewReadCloser() io.ReadCloser {
	mr.mu.Lock()
	defer mr.mu.Unlock()

	if mr.static != nil {
		return io.NopCloser(bytes.NewReader(mr.static))
	}

	reader := &syncReader{
		mr: mr,
	}
	mr.readers = append(mr.readers, reader)
	return reader
}

func (sr *syncReader) Read(p []byte) (int, error) {
	sr.mr.mu.Lock()
	defer sr.mr.mu.Unlock()

	return sr.read(p)
}

func (sr *syncReader) read(p []byte) (int, error) {
	end := sr.mr.offset + len(sr.mr.buffer)

loop0:
	for {
		if sr.closed {
			return 0, io.EOF
		}

		end := sr.mr.offset + len(sr.mr.buffer)

		if sr.mr.err != nil && sr.offset == end {
			return 0, sr.mr.err
		}

		start := sr.offset - sr.mr.offset

		dt := sr.mr.buffer[start:]

		if len(dt) > 0 {
			n := copy(p, dt)
			sr.offset += n
			sr.mr.cond.Broadcast()
			return n, nil
		}

		// check for readers that have not caught up
		hasOpen := false
		for _, r := range sr.mr.readers {
			if !r.closed {
				hasOpen = true
			} else {
				continue
			}
			if r.offset < end {
				sr.mr.cond.Wait()
				continue loop0
			}
		}

		if !hasOpen {
			return 0, io.EOF
		}
		break
	}

	last := sr.mr.offset + len(sr.mr.buffer)
	// another reader has already updated the buffer
	if last > end || sr.mr.err != nil {
		return sr.read(p)
	}

	sr.mr.offset += len(sr.mr.buffer)

	sr.mr.buffer = sr.mr.buffer[:cap(sr.mr.buffer)]
	n, err := sr.mr.source.Read(sr.mr.buffer)
	if n >= 0 {
		sr.mr.buffer = sr.mr.buffer[:n]
	} else {
		sr.mr.buffer = sr.mr.buffer[:0]
	}

	sr.mr.cond.Broadcast()

	if err != nil {
		sr.mr.err = err
		return 0, err
	}

	nn := copy(p, sr.mr.buffer)
	sr.offset += nn

	return nn, nil
}

func (sr *syncReader) Close() error {
	sr.mr.mu.Lock()
	defer sr.mr.mu.Unlock()

	if sr.closed {
		return nil
	}

	sr.closed = true

	sr.mr.cond.Broadcast()

	return nil
}
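Note: SyncMultiReader lets several consumers replay one single-pass source (here, a build context arriving on stdin) while sharing a single bounded buffer; a lagging reader blocks refills, so memory use stays at one buffer regardless of reader count. A sketch in the same package; all readers must be created before reading starts, otherwise a late reader would miss already-consumed data:

func ExampleSyncMultiReader() {
	mr := NewSyncMultiReader(strings.NewReader("FROM alpine\n"))
	r1 := mr.NewReadCloser()
	r2 := mr.NewReadCloser() // create every reader up front

	done := make(chan string, 2)
	for _, r := range []io.ReadCloser{r1, r2} {
		go func(r io.ReadCloser) {
			defer r.Close()
			dt, _ := io.ReadAll(r)
			done <- string(dt)
		}(r)
	}
	fmt.Println(<-done == <-done) // both readers saw identical bytes
	// Output: true
}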
build/replicatedstream_test.go (new file, 76 lines)
@@ -0,0 +1,76 @@
package build

import (
	"bytes"
	"crypto/rand"
	"io"
	mathrand "math/rand"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func generateRandomData(size int) []byte {
	data := make([]byte, size)
	rand.Read(data)
	return data
}
func TestSyncMultiReaderParallel(t *testing.T) {
	data := generateRandomData(1024 * 1024)
	source := bytes.NewReader(data)
	mr := NewSyncMultiReader(source)

	var wg sync.WaitGroup
	numReaders := 10
	bufferSize := 4096 * 4

	readers := make([]io.ReadCloser, numReaders)

	for i := 0; i < numReaders; i++ {
		readers[i] = mr.NewReadCloser()
	}

	for i := 0; i < numReaders; i++ {
		wg.Add(1)
		go func(readerId int) {
			defer wg.Done()
			reader := readers[readerId]
			defer reader.Close()

			totalRead := 0
			buf := make([]byte, bufferSize)
			for totalRead < len(data) {
				// Simulate random read sizes
				readSize := mathrand.Intn(bufferSize) //nolint:gosec
				n, err := reader.Read(buf[:readSize])

				if n > 0 {
					assert.Equal(t, data[totalRead:totalRead+n], buf[:n], "Reader %d mismatch", readerId)
					totalRead += n
				}

				if err == io.EOF {
					assert.Equal(t, len(data), totalRead, "Reader %d EOF mismatch", readerId)
					return
				}

				assert.NoError(t, err, "Reader %d error", readerId)

				if mathrand.Intn(1000) == 0 { //nolint:gosec
					t.Logf("Reader %d closing", readerId)
					// Simulate random close
					return
				}

				// Simulate random timing between reads
				time.Sleep(time.Millisecond * time.Duration(mathrand.Intn(5))) //nolint:gosec
			}

			assert.Equal(t, len(data), totalRead, "Reader %d total read mismatch", readerId)
		}(i)
	}

	wg.Wait()
}
@@ -82,7 +82,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
 	var respHandle *ResultHandle

 	go func() {
-		defer cancel(context.Canceled) // ensure no dangling processes
+		defer func() { cancel(errors.WithStack(context.Canceled)) }() // ensure no dangling processes

 		var res *gateway.Result
 		var err error
@@ -181,7 +181,7 @@ func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt
 			case <-respHandle.done:
 			case <-ctx.Done():
 			}
-			return nil, ctx.Err()
+			return nil, context.Cause(ctx)
 		}, nil)
 		if respHandle != nil {
 			return
@@ -292,17 +292,17 @@ func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
 	return err
 }

-func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
+func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
 	if r.res != nil && r.solveErr == nil {
 		logrus.Debugf("creating container from successful build")
-		ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
+		ccfg, err := containerConfigFromResult(r.res, cfg)
 		if err != nil {
 			return containerCfg, err
 		}
 		containerCfg = *ccfg
 	} else {
 		logrus.Debugf("creating container from failed build %+v", cfg)
-		ccfg, err := containerConfigFromError(r.solveErr, *cfg)
+		ccfg, err := containerConfigFromError(r.solveErr, cfg)
 		if err != nil {
 			return containerCfg, errors.Wrapf(err, "no result nor error is available")
 		}
@@ -315,19 +315,19 @@ func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin i
 	processCfg := newStartRequest(stdin, stdout, stderr)
 	if r.res != nil && r.solveErr == nil {
 		logrus.Debugf("creating container from successful build")
-		if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
+		if err := populateProcessConfigFromResult(&processCfg, r.res, cfg); err != nil {
 			return processCfg, err
 		}
 	} else {
 		logrus.Debugf("creating container from failed build %+v", cfg)
-		if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
+		if err := populateProcessConfigFromError(&processCfg, r.solveErr, cfg); err != nil {
 			return processCfg, err
 		}
 	}
 	return processCfg, nil
 }

-func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
 	if cfg.Initial {
 		return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
 	}
@@ -352,7 +352,7 @@ func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gatew
 	}, nil
 }

-func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
+func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *controllerapi.InvokeConfig) error {
 	imgData := res.Metadata[exptypes.ExporterImageConfigKey]
 	var img *specs.Image
 	if len(imgData) > 0 {
@@ -403,7 +403,7 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
 	return nil
 }

-func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
 	exec, err := execOpFromError(solveErr)
 	if err != nil {
 		return nil, err
@@ -431,7 +431,7 @@ func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.In
 	}, nil
 }

-func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
+func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) error {
 	exec, err := execOpFromError(solveErr)
 	if err != nil {
 		return err
@@ -7,12 +7,15 @@ import (

     "github.com/docker/buildx/driver"
     "github.com/docker/buildx/util/progress"
+    "github.com/docker/go-units"
     "github.com/moby/buildkit/client"
     "github.com/moby/buildkit/client/llb"
     gwclient "github.com/moby/buildkit/frontend/gateway/client"
     "github.com/pkg/errors"
 )

+const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB

 func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
     c, err := driver.Boot(ctx, ctx, d, pw)
     if err != nil {
@@ -43,8 +46,8 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
     if err != nil {
         return nil, err
     }
-    if stat.Size() > 512*1024 {
-        return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
+    if stat.Size > maxDockerfileSize {
+        return nil, errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", url, units.HumanSize(maxDockerfileSize))
     }

     dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
@@ -63,7 +66,6 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
         out = dir
         return nil, nil
     }, ch)

     if err != nil {
         return "", err
     }
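The hunk above swaps a hard-coded 512 KB limit for a named constant and reports the limit in human-readable form via go-units. A minimal standalone sketch of that technique (checkSize is a hypothetical helper, not from the diff):

package main

import (
    "fmt"

    "github.com/docker/go-units"
    "github.com/pkg/errors"
)

const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB, mirrors the constant added above

// checkSize compares a size against the named limit and, on failure, embeds
// the limit in human-readable form (units.HumanSize renders "2.097MB").
func checkSize(name string, size int64) error {
    if size > maxDockerfileSize {
        return errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", name, units.HumanSize(maxDockerfileSize))
    }
    return nil
}

func main() {
    fmt.Println(checkSize("https://example.com/Dockerfile", 3*1024*1024))
}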
@@ -5,13 +5,15 @@ import (
     "bytes"
     "context"
     "net"
+    "os"
+    "strconv"
     "strings"

     "github.com/docker/buildx/driver"
     "github.com/docker/cli/opts"
-    "github.com/docker/docker/builder/remotecontext/urlutil"
     "github.com/moby/buildkit/util/gitutil"
     "github.com/pkg/errors"
+    "github.com/sirupsen/logrus"
 )

 const (
@@ -23,8 +25,15 @@ const (
     mobyHostGatewayName = "host-gateway"
 )

+// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
+// has a http:// or https:// scheme. No validation is performed to verify if the
+// URL is well-formed.
+func isHTTPURL(str string) bool {
+    return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
+}

 func IsRemoteURL(c string) bool {
-    if urlutil.IsURL(c) {
+    if isHTTPURL(c) {
         return true
     }
     if _, err := gitutil.ParseGitRef(c); err == nil {
@@ -101,3 +110,21 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
     }
     return strings.Join(ulimits, ","), nil
 }

+func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
+    return errors.Errorf(`%s is not supported for the %s driver.
+Switch to a different driver, or turn on the containerd image store, and try again.
+Learn more at %s`, f, d.Factory().Name(), docs)
+}

+func noDefaultLoad() bool {
+    v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
+    if !ok {
+        return false
+    }
+    b, err := strconv.ParseBool(v)
+    if err != nil {
+        logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
+    }
+    return b
+}
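The two helpers added above are small and self-contained; this standalone re-implementation (for illustration only, not buildx's internal package) shows how they behave:

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
)

func isHTTPURL(s string) bool {
    return strings.HasPrefix(s, "https://") || strings.HasPrefix(s, "http://")
}

func noDefaultLoad() bool {
    v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
    if !ok {
        return false
    }
    // ParseBool returns false on error, so invalid values effectively
    // disable the option; the diff above additionally logs a warning.
    b, _ := strconv.ParseBool(v)
    return b
}

func main() {
    fmt.Println(isHTTPURL("https://example.com/repo.git")) // true
    fmt.Println(isHTTPURL("git@example.com:repo.git"))     // false: no http(s) scheme
    os.Setenv("BUILDX_NO_DEFAULT_LOAD", "1")
    fmt.Println(noDefaultLoad()) // true
}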
@@ -138,7 +138,7 @@ func TestToBuildkitExtraHosts(t *testing.T) {
     actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
     if tc.expectedErr == "" {
         require.Equal(t, tc.expectedOut, actualOut)
-        require.Nil(t, actualErr)
+        require.NoError(t, actualErr)
     } else {
         require.Zero(t, actualOut)
         require.Error(t, actualErr, tc.expectedErr)
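The assertion swap above matters for diagnostics: require.NoError prints the error's message on failure, while require.Nil only reports a non-nil value. A tiny illustration (hypothetical test, not from the diff):

package example

import (
    "strconv"
    "testing"

    "github.com/stretchr/testify/require"
)

func TestNoErrorReporting(t *testing.T) {
    _, err := strconv.Atoi("42") // succeeds here; on a failure,
    require.NoError(t, err)      // the output would include the parse error text
}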
@@ -2,7 +2,6 @@ package builder

 import (
     "context"
-    "encoding/csv"
     "encoding/json"
     "net/url"
     "os"
@@ -27,6 +26,7 @@ import (
     "github.com/moby/buildkit/util/progress/progressui"
     "github.com/pkg/errors"
     "github.com/spf13/pflag"
+    "github.com/tonistiigi/go-csvvalue"
     "golang.org/x/sync/errgroup"
 )

@@ -288,7 +288,15 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
     return nil, err
 }

-builders := make([]*Builder, len(storeng))
+contexts, err := dockerCli.ContextStore().List()
+if err != nil {
+    return nil, err
+}
+sort.Slice(contexts, func(i, j int) bool {
+    return contexts[i].Name < contexts[j].Name
+})
+
+builders := make([]*Builder, len(storeng), len(storeng)+len(contexts))
 seen := make(map[string]struct{})
 for i, ng := range storeng {
     b, err := New(dockerCli,
@@ -303,14 +311,6 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
     seen[b.NodeGroup.Name] = struct{}{}
 }

-contexts, err := dockerCli.ContextStore().List()
-if err != nil {
-    return nil, err
-}
-sort.Slice(contexts, func(i, j int) bool {
-    return contexts[i].Name < contexts[j].Name
-})
-
 for _, c := range contexts {
     // if a context has the same name as an instance from the store, do not
     // add it to the builders list. An instance from the store takes
@@ -435,7 +435,16 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
     return nil, err
 }

-buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
+buildkitdConfigFile := opts.BuildkitdConfigFile
+if buildkitdConfigFile == "" {
+    // if buildkit daemon config is not provided, check if the default one
+    // is available and use it
+    if f, ok := confutil.NewConfig(dockerCli).BuildKitConfigFile(); ok {
+        buildkitdConfigFile = f
+    }
+}
+
+buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts, buildkitdConfigFile)
 if err != nil {
     return nil, err
 }
@@ -496,15 +505,6 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
     setEp = false
 }

-buildkitdConfigFile := opts.BuildkitdConfigFile
-if buildkitdConfigFile == "" {
-    // if buildkit daemon config is not provided, check if the default one
-    // is available and use it
-    if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
-        buildkitdConfigFile = f
-    }
-}
-
 if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
     return nil, err
 }
@@ -522,8 +522,9 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
     return nil, err
 }

-timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
-defer cancel()
+cancelCtx, cancel := context.WithCancelCause(ctx)
+timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
+defer func() { cancel(errors.WithStack(context.Canceled)) }()

 nodes, err := b.LoadNodes(timeoutCtx, WithData())
 if err != nil {
@@ -584,7 +585,7 @@ func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Leav
     return err
 }

-ls, err := localstate.New(confutil.ConfigDir(dockerCli))
+ls, err := localstate.New(confutil.NewConfig(dockerCli))
 if err != nil {
     return err
 }
@@ -601,8 +602,7 @@ func csvToMap(in []string) (map[string]string, error) {
     }
     m := make(map[string]string, len(in))
     for _, s := range in {
-        csvReader := csv.NewReader(strings.NewReader(s))
-        fields, err := csvReader.Read()
+        fields, err := csvvalue.Fields(s, nil)
         if err != nil {
             return nil, err
         }
@@ -642,7 +642,7 @@ func validateBuildkitEndpoint(ep string) (string, error) {
 }

 // parseBuildkitdFlags parses buildkit flags
-func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
+func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string, buildkitdConfigFile string) (res []string, err error) {
     if inp != "" {
         res, err = shlex.Split(inp)
         if err != nil {
@@ -664,10 +664,27 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
         }
     }

+    var hasNetworkHostEntitlementInConf bool
+    if buildkitdConfigFile != "" {
+        btoml, err := confutil.LoadConfigTree(buildkitdConfigFile)
+        if err != nil {
+            return nil, err
+        } else if btoml != nil {
+            if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
+                for _, e := range ies.([]string) {
+                    if e == "network.host" {
+                        hasNetworkHostEntitlementInConf = true
+                        break
+                    }
+                }
+            }
+        }
+    }
+
     if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
         // always set network.host entitlement if user has set network=host
         res = append(res, "--allow-insecure-entitlement=network.host")
-    } else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
+    } else if len(allowInsecureEntitlements) == 0 && !hasNetworkHostEntitlementInConf && (driver == "kubernetes" || driver == "docker-container") {
         // set network.host entitlement if user does not provide any as
         // network is isolated for container drivers.
         res = append(res, "--allow-insecure-entitlement=network.host")
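The new branch in parseBuildkitdFlags reads the buildkitd TOML config and skips the implicit network.host entitlement when the config already grants it. A minimal sketch of the same scan, assuming the BurntSushi/toml library for illustration (buildx itself uses its internal confutil.LoadConfigTree):

package main

import (
    "fmt"
    "slices"

    "github.com/BurntSushi/toml"
)

// buildkitdConf models only the key the new code inspects; the field name
// matches BuildKit's buildkitd.toml.
type buildkitdConf struct {
    InsecureEntitlements []string `toml:"insecure-entitlements"`
}

func hasNetworkHostEntitlement(path string) (bool, error) {
    var conf buildkitdConf
    if _, err := toml.DecodeFile(path, &conf); err != nil {
        return false, err
    }
    return slices.Contains(conf.InsecureEntitlements, "network.host"), nil
}

func main() {
    ok, err := hasNetworkHostEntitlement("/etc/buildkit/buildkitd.toml")
    fmt.Println(ok, err)
}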
@@ -1,6 +1,8 @@
 package builder

 import (
+    "os"
+    "path"
     "testing"

     "github.com/stretchr/testify/assert"
@@ -17,21 +19,35 @@ func TestCsvToMap(t *testing.T) {
     require.NoError(t, err)

     require.Contains(t, r, "tolerations")
-    require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
+    require.Equal(t, "key=foo,value=bar;key=foo2,value=bar2", r["tolerations"])

     require.Contains(t, r, "replicas")
-    require.Equal(t, r["replicas"], "1")
+    require.Equal(t, "1", r["replicas"])

     require.Contains(t, r, "namespace")
-    require.Equal(t, r["namespace"], "default")
+    require.Equal(t, "default", r["namespace"])
 }

 func TestParseBuildkitdFlags(t *testing.T) {
+    buildkitdConf := `
+# debug enables additional debug logging
+debug = true
+# insecure-entitlements allows insecure entitlements, disabled by default.
+insecure-entitlements = [ "network.host", "security.insecure" ]
+[log]
+  # log formatter: json or text
+  format = "text"
+`
+    dirConf := t.TempDir()
+    buildkitdConfPath := path.Join(dirConf, "buildkitd-conf.toml")
+    require.NoError(t, os.WriteFile(buildkitdConfPath, []byte(buildkitdConf), 0644))
+
     testCases := []struct {
         name       string
         flags      string
         driver     string
         driverOpts map[string]string
+        buildkitdConfigFile string
         expected   []string
         wantErr    bool
     }{
@@ -40,6 +56,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "",
         "docker-container",
         nil,
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
         },
@@ -50,6 +67,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "",
         "kubernetes",
         nil,
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
         },
@@ -60,6 +78,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "",
         "remote",
         nil,
+        "",
         nil,
         false,
     },
@@ -68,6 +87,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "--allow-insecure-entitlement=security.insecure",
         "docker-container",
         nil,
+        "",
         []string{
             "--allow-insecure-entitlement=security.insecure",
         },
@@ -78,6 +98,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
         "docker-container",
         nil,
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
             "--allow-insecure-entitlement=security.insecure",
@@ -89,6 +110,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "",
         "docker-container",
         map[string]string{"network": "host"},
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
         },
@@ -99,6 +121,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "--allow-insecure-entitlement=network.host",
         "docker-container",
         map[string]string{"network": "host"},
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
         },
@@ -109,17 +132,28 @@ func TestParseBuildkitdFlags(t *testing.T) {
         "--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
         "docker-container",
         map[string]string{"network": "host"},
+        "",
         []string{
             "--allow-insecure-entitlement=network.host",
             "--allow-insecure-entitlement=security.insecure",
         },
         false,
     },
+    {
+        "docker-container with buildkitd conf setting network.host entitlement",
+        "",
+        "docker-container",
+        nil,
+        buildkitdConfPath,
+        nil,
+        false,
+    },
     {
         "error parsing flags",
         "foo'",
         "docker-container",
         nil,
+        "",
         nil,
         true,
     },
@@ -127,7 +161,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
     for _, tt := range testCases {
         tt := tt
         t.Run(tt.name, func(t *testing.T) {
-            flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
+            flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
             if tt.wantErr {
                 require.Error(t, err)
                 return
@@ -6,9 +6,8 @@ import (
     "sort"
     "strings"

-    "github.com/containerd/containerd/platforms"
+    "github.com/containerd/platforms"
     "github.com/docker/buildx/driver"
-    ctxkube "github.com/docker/buildx/driver/kubernetes/context"
     "github.com/docker/buildx/store"
     "github.com/docker/buildx/store/storeutil"
     "github.com/docker/buildx/util/dockerutil"
@@ -18,7 +17,6 @@ import (
     "github.com/moby/buildkit/util/grpcerrors"
     ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
     "github.com/pkg/errors"
-    "github.com/sirupsen/logrus"
     "golang.org/x/sync/errgroup"
     "google.golang.org/grpc/codes"
 )
@@ -50,6 +48,7 @@ type LoadNodesOption func(*loadNodesOptions)
 type loadNodesOptions struct {
     data      bool
     dialMeta  map[string][]string
+    clientOpt []client.ClientOpt
 }

 func WithData() LoadNodesOption {
@@ -64,6 +63,12 @@ func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
     }
 }

+func WithClientOpt(clientOpt ...client.ClientOpt) LoadNodesOption {
+    return func(o *loadNodesOptions) {
+        o.clientOpt = clientOpt
+    }
+}

 // LoadNodes loads and returns nodes for this builder.
 // TODO: this should be a method on a Node object and lazy load data for each driver.
 func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
@@ -112,37 +117,19 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
     return nil
 }

-contextStore := b.opts.dockerCli.ContextStore()
-
-var kcc driver.KubeClientConfig
-kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
-if err != nil {
-    // err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
-    // try again with name="default".
-    // FIXME(@AkihiroSuda): n should retain real context name.
-    kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
-    if err != nil {
-        logrus.Error(err)
-    }
-}
-
-tryToUseKubeConfigInCluster := false
-if kcc == nil {
-    tryToUseKubeConfigInCluster = true
-} else {
-    if _, err := kcc.ClientConfig(); err != nil {
-        tryToUseKubeConfigInCluster = true
-    }
-}
-if tryToUseKubeConfigInCluster {
-    kccInCluster := driver.KubeClientConfigInCluster{}
-    if _, err := kccInCluster.ClientConfig(); err == nil {
-        logrus.Debug("using kube config in cluster")
-        kcc = kccInCluster
-    }
-}
-
-d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
+d, err := driver.GetDriver(ctx, factory, driver.InitConfig{
+    Name:            driver.BuilderName(n.Name),
+    EndpointAddr:    n.Endpoint,
+    DockerAPI:       dockerapi,
+    ContextStore:    b.opts.dockerCli.ContextStore(),
+    BuildkitdFlags:  n.BuildkitdFlags,
+    Files:           n.Files,
+    DriverOpts:      n.DriverOpts,
+    Auth:            imageopt.Auth,
+    Platforms:       n.Platforms,
+    ContextPathHash: b.opts.contextPathHash,
+    DialMeta:        lno.dialMeta,
+})
 if err != nil {
     node.Err = err
     return nil
@@ -151,7 +138,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 node.ImageOpt = imageopt

 if lno.data {
-    if err := node.loadData(ctx); err != nil {
+    if err := node.loadData(ctx, lno.clientOpt...); err != nil {
         node.Err = err
     }
 }
@@ -186,7 +173,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
     if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
         diClone.Platforms = pl
     }
-    nodes = append(nodes, di)
+    nodes = append(nodes, diClone)
 }
 dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
 }
@@ -247,7 +234,7 @@ func (n *Node) MarshalJSON() ([]byte, error) {
     })
 }

-func (n *Node) loadData(ctx context.Context) error {
+func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) error {
     if n.Driver == nil {
         return nil
     }
@@ -257,7 +244,7 @@ func (n *Node) loadData(ctx context.Context) error {
     }
     n.DriverInfo = info
     if n.DriverInfo.Status == driver.Running {
-        driverClient, err := n.Driver.Client(ctx)
+        driverClient, err := n.Driver.Client(ctx, clientOpt...)
         if err != nil {
             return err
         }
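WithClientOpt follows the same functional-options pattern as the existing WithData and WithDialMeta, threading buildkit client options through LoadNodes down to Driver.Client. A hedged usage sketch (hypothetical caller code, not from the diff):

package example

import (
    "context"

    "github.com/docker/buildx/builder"
    "github.com/moby/buildkit/client"
)

// loadWithOpts shows the new option being threaded through: the
// client.ClientOpt values end up in loadData's call to Driver.Client.
func loadWithOpts(ctx context.Context, b *builder.Builder, opts ...client.ClientOpt) ([]builder.Node, error) {
    return b.LoadNodes(ctx, builder.WithData(), builder.WithClientOpt(opts...))
}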
cmd/buildx/debug.go (new file, 75 lines)
@@ -0,0 +1,75 @@
+package main
+
+import (
+    "context"
+    "os"
+    "runtime"
+    "runtime/pprof"
+
+    "github.com/moby/buildkit/util/bklog"
+    "github.com/sirupsen/logrus"
+)
+
+func setupDebugProfiles(ctx context.Context) (stop func()) {
+    var stopFuncs []func()
+    if fn := setupCPUProfile(ctx); fn != nil {
+        stopFuncs = append(stopFuncs, fn)
+    }
+    if fn := setupHeapProfile(ctx); fn != nil {
+        stopFuncs = append(stopFuncs, fn)
+    }
+    return func() {
+        for _, fn := range stopFuncs {
+            fn()
+        }
+    }
+}
+
+func setupCPUProfile(ctx context.Context) (stop func()) {
+    if cpuProfile := os.Getenv("BUILDX_CPU_PROFILE"); cpuProfile != "" {
+        f, err := os.Create(cpuProfile)
+        if err != nil {
+            bklog.G(ctx).Warn("could not create cpu profile", logrus.WithError(err))
+            return nil
+        }
+
+        if err := pprof.StartCPUProfile(f); err != nil {
+            bklog.G(ctx).Warn("could not start cpu profile", logrus.WithError(err))
+            _ = f.Close()
+            return nil
+        }
+
+        return func() {
+            pprof.StopCPUProfile()
+            if err := f.Close(); err != nil {
+                bklog.G(ctx).Warn("could not close file for cpu profile", logrus.WithError(err))
+            }
+        }
+    }
+    return nil
+}
+
+func setupHeapProfile(ctx context.Context) (stop func()) {
+    if heapProfile := os.Getenv("BUILDX_MEM_PROFILE"); heapProfile != "" {
+        // Memory profile is only created on stop.
+        return func() {
+            f, err := os.Create(heapProfile)
+            if err != nil {
+                bklog.G(ctx).Warn("could not create memory profile", logrus.WithError(err))
+                return
+            }
+
+            // get up-to-date statistics
+            runtime.GC()
+
+            if err := pprof.WriteHeapProfile(f); err != nil {
+                bklog.G(ctx).Warn("could not write memory profile", logrus.WithError(err))
+            }
+
+            if err := f.Close(); err != nil {
+                bklog.G(ctx).Warn("could not close file for memory profile", logrus.WithError(err))
+            }
+        }
+    }
+    return nil
+}
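The new file gates profiling on environment variables so it costs nothing unless requested; the resulting files can be inspected with `go tool pprof`. A trimmed standalone sketch of the same pattern (env var names here are placeholders; buildx uses BUILDX_CPU_PROFILE and BUILDX_MEM_PROFILE):

package main

import (
    "os"
    "runtime"
    "runtime/pprof"
)

func main() {
    if path := os.Getenv("CPU_PROFILE"); path != "" {
        if f, err := os.Create(path); err == nil {
            pprof.StartCPUProfile(f)
            defer func() { pprof.StopCPUProfile(); f.Close() }()
        }
    }
    defer func() {
        // heap profile is only written on exit, mirroring setupHeapProfile
        if path := os.Getenv("MEM_PROFILE"); path != "" {
            f, err := os.Create(path)
            if err != nil {
                return
            }
            runtime.GC() // get up-to-date allocation statistics first
            pprof.WriteHeapProfile(f)
            f.Close()
        }
    }()

    // ... program work ...
}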
@@ -1,10 +1,12 @@
 package main

 import (
+    "context"
     "fmt"
     "os"

     "github.com/docker/buildx/commands"
+    controllererrors "github.com/docker/buildx/controller/errdefs"
     "github.com/docker/buildx/util/desktop"
     "github.com/docker/buildx/version"
     "github.com/docker/cli/cli"
@@ -15,6 +17,8 @@ import (
     cliflags "github.com/docker/cli/cli/flags"
     "github.com/moby/buildkit/solver/errdefs"
     "github.com/moby/buildkit/util/stack"
+    "github.com/pkg/errors"
+    "go.opentelemetry.io/otel"

     //nolint:staticcheck // vendored dependencies may still use this
     "github.com/containerd/containerd/pkg/seed"
@@ -25,6 +29,9 @@ import (
     _ "github.com/docker/buildx/driver/docker-container"
     _ "github.com/docker/buildx/driver/kubernetes"
     _ "github.com/docker/buildx/driver/remote"
+
+    // Use custom grpc codec to utilize vtprotobuf
+    _ "github.com/moby/buildkit/util/grpcutil/encoding/proto"
 )

 func init() {
@@ -38,10 +45,27 @@ func runStandalone(cmd *command.DockerCli) error {
     if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
         return err
     }
+    defer flushMetrics(cmd)

     rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
     return rootCmd.Execute()
 }

+// flushMetrics will manually flush metrics from the configured
+// meter provider. This is needed when running in standalone mode
+// because the meter provider is initialized by the cli library,
+// but the mechanism for forcing it to report is not presently
+// exposed and not invoked when run in standalone mode.
+// There are plans to fix that in the next release, but this is
+// needed temporarily until the API for this is more thorough.
+func flushMetrics(cmd *command.DockerCli) {
+    if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
+        if err := mp.ForceFlush(context.Background()); err != nil {
+            otel.Handle(err)
+        }
+    }
+}

 func runPlugin(cmd *command.DockerCli) error {
     rootCmd := commands.NewRootCmd("buildx", true, cmd)
     return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
@@ -51,6 +75,16 @@ func runPlugin(cmd *command.DockerCli) error {
     })
 }

+func run(cmd *command.DockerCli) error {
+    stopProfiles := setupDebugProfiles(context.TODO())
+    defer stopProfiles()
+
+    if plugin.RunningStandalone() {
+        return runStandalone(cmd)
+    }
+    return runPlugin(cmd)
+}

 func main() {
     cmd, err := command.NewDockerCli()
     if err != nil {
@@ -58,15 +92,11 @@ func main() {
     os.Exit(1)
 }

-if plugin.RunningStandalone() {
-    err = runStandalone(cmd)
-} else {
-    err = runPlugin(cmd)
-}
-if err == nil {
+if err = run(cmd); err == nil {
     return
 }

+// Check the error from the run function above.
 if sterr, ok := err.(cli.StatusError); ok {
     if sterr.Status != "" {
         fmt.Fprintln(cmd.Err(), sterr.Status)
@@ -87,8 +117,15 @@ func main() {
 } else {
     fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
 }
-if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
+
+var ebr *desktop.ErrorWithBuildRef
+if errors.As(err, &ebr) {
     ebr.Print(cmd.Err())
+} else {
+    var be *controllererrors.BuildError
+    if errors.As(err, &be) {
+        be.PrintBuildDetails(cmd.Err())
+    }
 }

 os.Exit(1)
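The last hunk replaces a direct type assertion with errors.As, which matters once errors are wrapped. A minimal illustration (hypothetical error type, not from the diff) of why the assertion stops matching while errors.As still does:

package main

import (
    "errors"
    "fmt"
)

type buildRefError struct{ ref string }

func (e *buildRefError) Error() string { return "build failed: " + e.ref }

func main() {
    var err error = fmt.Errorf("running build: %w", &buildRefError{ref: "abc123"})

    if _, ok := err.(*buildRefError); ok {
        fmt.Println("type assertion matched") // never prints: err is a wrapper
    }

    var bre *buildRefError
    if errors.As(err, &bre) {
        fmt.Println("errors.As matched:", bre.ref) // prints: walks the wrap chain
    }
}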
@@ -4,7 +4,6 @@ import (
     "github.com/moby/buildkit/util/tracing/detect"
     "go.opentelemetry.io/otel"

-    _ "github.com/moby/buildkit/util/tracing/detect/delegated"
     _ "github.com/moby/buildkit/util/tracing/env"
 )

@@ -1 +1,4 @@
 comment: false
+
+ignore:
+  - "**/*.pb.go"
commands/bake.go (525 lines changed)
@@ -1,24 +1,36 @@
 package commands

 import (
+    "bytes"
+    "cmp"
     "context"
+    "crypto/sha256"
+    "encoding/hex"
     "encoding/json"
     "fmt"
     "io"
     "os"
+    "slices"
+    "sort"
     "strings"
+    "sync"
+    "text/tabwriter"

     "github.com/containerd/console"
-    "github.com/containerd/containerd/platforms"
+    "github.com/containerd/platforms"
     "github.com/docker/buildx/bake"
+    "github.com/docker/buildx/bake/hclparser"
     "github.com/docker/buildx/build"
     "github.com/docker/buildx/builder"
+    "github.com/docker/buildx/controller/pb"
     "github.com/docker/buildx/localstate"
     "github.com/docker/buildx/util/buildflags"
+    "github.com/docker/buildx/util/cobrautil"
     "github.com/docker/buildx/util/cobrautil/completion"
     "github.com/docker/buildx/util/confutil"
     "github.com/docker/buildx/util/desktop"
     "github.com/docker/buildx/util/dockerutil"
+    "github.com/docker/buildx/util/osutil"
     "github.com/docker/buildx/util/progress"
     "github.com/docker/buildx/util/tracing"
     "github.com/docker/cli/cli/command"
@@ -26,22 +38,29 @@ import (
     "github.com/moby/buildkit/util/progress/progressui"
     "github.com/pkg/errors"
     "github.com/spf13/cobra"
+    "go.opentelemetry.io/otel/attribute"
 )

 type bakeOptions struct {
     files     []string
     overrides []string
     printOnly bool
+    listTargets bool
+    listVars    bool
     sbom       string
     provenance string
+    allow      []string

     builder      string
     metadataFile string
     exportPush   bool
     exportLoad   bool
+    callFunc     string
 }

 func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
+    mp := dockerCli.MeterProvider()
+
     ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
     if err != nil {
         return err
@@ -50,34 +69,25 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
     end(err)
 }()

-var url string
-cmdContext := "cwd://"
-
-if len(targets) > 0 {
-    if build.IsRemoteURL(targets[0]) {
-        url = targets[0]
-        targets = targets[1:]
-        if len(targets) > 0 {
-            if build.IsRemoteURL(targets[0]) {
-                cmdContext = targets[0]
-                targets = targets[1:]
-            }
-        }
-    }
-}
+url, cmdContext, targets := bakeArgs(targets)

 if len(targets) == 0 {
     targets = []string{"default"}
 }

+callFunc, err := buildflags.ParseCallFunc(in.callFunc)
+if err != nil {
+    return err
+}
+
 overrides := in.overrides
 if in.exportPush {
-    if in.exportLoad {
-        return errors.Errorf("push and load may not be set together at the moment")
-    }
     overrides = append(overrides, "*.push=true")
-} else if in.exportLoad {
-    overrides = append(overrides, "*.output=type=docker")
+}
+if in.exportLoad {
+    overrides = append(overrides, "*.load=true")
+}
+if callFunc != nil {
+    overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
 }
 if cFlags.noCache != nil {
     overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
@@ -93,14 +103,27 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 }
 contextPathHash, _ := os.Getwd()

-ctx2, cancel := context.WithCancel(context.TODO())
-defer cancel()
+ent, err := bake.ParseEntitlements(in.allow)
+if err != nil {
+    return err
+}
+wd, err := os.Getwd()
+if err != nil {
+    return errors.Wrapf(err, "failed to get current working directory")
+}
+// filesystem access under the current working directory is allowed by default
+ent.FSRead = append(ent.FSRead, wd)
+ent.FSWrite = append(ent.FSWrite, wd)
+
+ctx2, cancel := context.WithCancelCause(context.TODO())
+defer cancel(errors.WithStack(context.Canceled))

 var nodes []builder.Node
 var progressConsoleDesc, progressTextDesc string

 // instance only needed for reading remote bake files or building
-if url != "" || !in.printOnly {
+var driverType string
+if url != "" || !(in.printOnly || in.listTargets || in.listVars) {
     b, err := builder.New(dockerCli,
         builder.WithName(in.builder),
         builder.WithContextPathHash(contextPathHash),
@@ -117,32 +140,33 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
     }
     progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
     progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
+    driverType = b.Driver
 }

 var term bool
 if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
     term = true
 }
+attributes := bakeMetricAttributes(dockerCli, driverType, url, cmdContext, targets, &in)

 progressMode := progressui.DisplayMode(cFlags.progress)
-printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
-    progress.WithDesc(progressTextDesc, progressConsoleDesc),
-)
-if err != nil {
-    return err
-}
-
-defer func() {
-    if printer != nil {
-        err1 := printer.Wait()
-        if err == nil {
-            err = err1
-        }
-        if err == nil && progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
-            desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
-        }
-    }
-}()
+var printer *progress.Printer
+
+makePrinter := func() error {
+    var err error
+    printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
+        progress.WithDesc(progressTextDesc, progressConsoleDesc),
+        progress.WithMetrics(mp, attributes),
+        progress.WithOnClose(func() {
+            printWarnings(os.Stderr, printer.Warnings(), progressMode)
+        }),
+    )
+    return err
+}
+
+if err := makePrinter(); err != nil {
+    return err
+}

 files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
 if err != nil {
@@ -153,12 +177,29 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
     return errors.New("couldn't find a bake definition")
 }

-tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
+defaults := map[string]string{
     // don't forget to update documentation if you add a new
     // built-in variable: docs/bake-reference.md#built-in-variables
     "BAKE_CMD_CONTEXT":    cmdContext,
-    "BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
-})
+    "BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
+}
+
+if in.listTargets || in.listVars {
+    cfg, pm, err := bake.ParseFiles(files, defaults)
+    if err != nil {
+        return err
+    }
+    if err = printer.Wait(); err != nil {
+        return err
+    }
+    if in.listTargets {
+        return printTargetList(dockerCli.Out(), cfg)
+    } else if in.listVars {
+        return printVars(dockerCli.Out(), pm.AllVariables)
+    }
+}
+
+tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults)
 if err != nil {
     return err
 }
@@ -191,57 +232,183 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
 }

 if in.printOnly {
-    dt, err := json.MarshalIndent(def, "", "  ")
-    if err != nil {
-        return err
-    }
-    err = printer.Wait()
-    printer = nil
-    if err != nil {
-        return err
-    }
-    fmt.Fprintln(dockerCli.Out(), string(dt))
-    return nil
-}
-
-// local state group
-groupRef := identity.NewID()
-var refs []string
-for k, b := range bo {
-    b.Ref = identity.NewID()
-    b.GroupRef = groupRef
-    refs = append(refs, b.Ref)
-    bo[k] = b
-}
-dt, err := json.Marshal(def)
-if err != nil {
-    return err
-}
-if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
-    Definition: dt,
-    Targets:    targets,
-    Inputs:     overrides,
-    Refs:       refs,
-}); err != nil {
-    return err
-}
-
-resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
-if err != nil {
-    return wrapBuildError(err, true)
-}
+    if err = printer.Wait(); err != nil {
+        return err
+    }
+    dtdef, err := json.MarshalIndent(def, "", "  ")
+    if err != nil {
+        return err
+    }
+    _, err = fmt.Fprintln(dockerCli.Out(), string(dtdef))
+    return err
+}
+
+for _, opt := range bo {
+    if opt.CallFunc != nil {
+        cf, err := buildflags.ParseCallFunc(opt.CallFunc.Name)
+        if err != nil {
+            return err
+        }
+        opt.CallFunc.Name = cf.Name
+    }
+}
+
+exp, err := ent.Validate(bo)
+if err != nil {
+    return err
+}
+if err := exp.Prompt(ctx, url != "", &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
+    return err
+}
+if printer.IsDone() {
+    // init new printer as old one was stopped to show the prompt
+    if err := makePrinter(); err != nil {
+        return err
+    }
+}
+
+if err := saveLocalStateGroup(dockerCli, in, targets, bo, overrides, def); err != nil {
+    return err
+}
+
+done := timeBuildCommand(mp, attributes)
+resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), printer)
+if err := printer.Wait(); retErr == nil {
+    retErr = err
+}
+if retErr != nil {
+    err = wrapBuildError(retErr, true)
+}
+done(err)
+
+if err != nil {
+    return err
+}
+
+if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
+    desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
+}
 if len(in.metadataFile) > 0 {
     dt := make(map[string]interface{})
     for t, r := range resp {
         dt[t] = decodeExporterResponse(r.ExporterResponse)
     }
+    if callFunc == nil {
+        if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
+            dt["buildx.build.warnings"] = warnings
+        }
+    }
     if err := writeMetadataFile(in.metadataFile, dt); err != nil {
         return err
     }
 }
+
+var callFormatJSON bool
+jsonResults := map[string]map[string]any{}
+if callFunc != nil {
+    callFormatJSON = callFunc.Format == "json"
+}
+var sep bool
+var exitCode int
+
+names := make([]string, 0, len(bo))
+for name := range bo {
+    names = append(names, name)
+}
+slices.Sort(names)
+
+for _, name := range names {
+    req := bo[name]
+    if req.CallFunc == nil {
+        continue
+    }
+
+    pf := &pb.CallFunc{
+        Name:         req.CallFunc.Name,
+        Format:       req.CallFunc.Format,
+        IgnoreStatus: req.CallFunc.IgnoreStatus,
+    }
+
+    if callFunc != nil {
+        pf.Format = callFunc.Format
+        pf.IgnoreStatus = callFunc.IgnoreStatus
+    }
+
+    var res map[string]string
+    if sp, ok := resp[name]; ok {
+        res = sp.ExporterResponse
+    }
+
+    if callFormatJSON {
+        jsonResults[name] = map[string]any{}
+        buf := &bytes.Buffer{}
+        if code, err := printResult(buf, pf, res, name, &req.Inputs); err != nil {
+            jsonResults[name]["error"] = err.Error()
+            exitCode = 1
+        } else if code != 0 && exitCode == 0 {
+            exitCode = code
+        }
+        m := map[string]*json.RawMessage{}
+        if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
+            for k, v := range m {
+                jsonResults[name][k] = v
+            }
+        } else {
+            jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
+        }
+    } else {
+        if sep {
+            fmt.Fprintln(dockerCli.Out())
+        } else {
+            sep = true
+        }
+        fmt.Fprintf(dockerCli.Out(), "%s\n", name)
+        if descr := tgts[name].Description; descr != "" {
+            fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
+        }
+
+        fmt.Fprintln(dockerCli.Out())
+        if code, err := printResult(dockerCli.Out(), pf, res, name, &req.Inputs); err != nil {
+            fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
+            exitCode = 1
+        } else if code != 0 && exitCode == 0 {
+            exitCode = code
+        }
+    }
+}
+if callFormatJSON {
+    out := struct {
+        Group  map[string]*bake.Group    `json:"group,omitempty"`
+        Target map[string]map[string]any `json:"target"`
+    }{
+        Group:  grps,
+        Target: map[string]map[string]any{},
+    }
+
+    for name, def := range tgts {
+        out.Target[name] = map[string]any{
+            "build": def,
+        }
+        if res, ok := jsonResults[name]; ok {
+            printName := bo[name].CallFunc.Name
+            if printName == "lint" {
+                printName = "check"
+            }
+            out.Target[name][printName] = res
+        }
+    }
+    dt, err := json.MarshalIndent(out, "", "  ")
+    if err != nil {
+        return err
+    }
+    fmt.Fprintln(dockerCli.Out(), string(dt))
+}
+
+if exitCode != 0 {
+    os.Exit(exitCode)
+}
+
+return nil
 }

 func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -277,18 +444,74 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
     flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
     flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
     flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
+    flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
+    flags.StringArrayVar(&options.allow, "allow", nil, "Allow build to access specified resources")
+
+    flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
+    flags.Lookup("check").NoOptDefVal = "true"
+
+    flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
+    cobrautil.MarkFlagsExperimental(flags, "list-targets")
+    flags.MarkHidden("list-targets")
+
+    flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
+    cobrautil.MarkFlagsExperimental(flags, "list-variables")
+    flags.MarkHidden("list-variables")
+
    commonBuildFlags(&cFlags, flags)

     return cmd
 }

-func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
-    l, err := localstate.New(confutil.ConfigDir(dockerCli))
+func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options, overrides []string, def any) error {
+    prm := confutil.MetadataProvenance()
+    if len(in.metadataFile) == 0 {
+        prm = confutil.MetadataProvenanceModeDisabled
+    }
+    groupRef := identity.NewID()
+    refs := make([]string, 0, len(bo))
+    for k, b := range bo {
+        if b.CallFunc != nil {
+            continue
+        }
+        b.Ref = identity.NewID()
+        b.GroupRef = groupRef
+        b.ProvenanceResponseMode = prm
+        refs = append(refs, b.Ref)
+        bo[k] = b
+    }
+    if len(refs) == 0 {
+        return nil
+    }
+    l, err := localstate.New(confutil.NewConfig(dockerCli))
     if err != nil {
         return err
     }
-    return l.SaveGroup(ref, lsg)
+    dtdef, err := json.MarshalIndent(def, "", "  ")
+    if err != nil {
+        return err
+    }
+    return l.SaveGroup(groupRef, localstate.StateGroup{
+        Definition: dtdef,
+        Targets:    targets,
+        Inputs:     overrides,
+        Refs:       refs,
+    })
+}
+
+// bakeArgs will retrieve the remote url, command context, and targets
+// from the command line arguments.
+func bakeArgs(args []string) (url, cmdContext string, targets []string) {
+    cmdContext, targets = "cwd://", args
+    if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
+        return url, cmdContext, targets
+    }
+    url, targets = targets[0], targets[1:]
+    if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
+        return url, cmdContext, targets
+    }
+    cmdContext, targets = targets[0], targets[1:]
+    return url, cmdContext, targets
 }

 func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
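The new bakeArgs helper peels at most two leading remote URLs off the argument list: the first becomes the bake file source, an optional second becomes the command context, and the rest are targets. A hypothetical in-package test (not from the diff) pinning that down, assuming both URLs satisfy build.IsRemoteURL:

package commands

import (
    "testing"

    "github.com/stretchr/testify/require"
)

func TestBakeArgsSketch(t *testing.T) {
    url, cmdContext, targets := bakeArgs([]string{
        "https://github.com/acme/repo.git", // first remote URL -> bake file source
        "https://github.com/acme/ctx.git",  // second remote URL -> command context
        "app",                              // remainder -> targets
    })
    require.Equal(t, "https://github.com/acme/repo.git", url)
    require.Equal(t, "https://github.com/acme/ctx.git", cmdContext)
    require.Equal(t, []string{"app"}, targets)

    // with no leading URLs, the context defaults to cwd://
    url, cmdContext, targets = bakeArgs([]string{"app", "db"})
    require.Equal(t, "", url)
    require.Equal(t, "cwd://", cmdContext)
    require.Equal(t, []string{"app", "db"}, targets)
}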
@@ -333,3 +556,157 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
|
|||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func printVars(w io.Writer, vars []*hclparser.Variable) error {
|
||||||
|
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
|
||||||
|
return cmp.Compare(a.Name, b.Name)
|
||||||
|
})
|
||||||
|
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
|
||||||
|
defer tw.Flush()
|
||||||
|
|
||||||
|
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
|
||||||
|
|
||||||
|
for _, v := range vars {
|
||||||
|
var value string
|
||||||
|
if v.Value != nil {
|
||||||
|
value = *v.Value
|
||||||
|
} else {
|
||||||
|
value = "<null>"
|
||||||
|
}
|
||||||
|
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func printTargetList(w io.Writer, cfg *bake.Config) error {
	tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
	defer tw.Flush()

	tw.Write([]byte("TARGET\tDESCRIPTION\n"))

	type targetOrGroup struct {
		name   string
		target *bake.Target
		group  *bake.Group
	}

	list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
	for _, tgt := range cfg.Targets {
		list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
	}
	for _, grp := range cfg.Groups {
		list = append(list, targetOrGroup{name: grp.Name, group: grp})
	}

	slices.SortFunc(list, func(a, b targetOrGroup) int {
		return cmp.Compare(a.name, b.name)
	})

	for _, tgt := range list {
		if strings.HasPrefix(tgt.name, "_") {
			// convention for a private target
			continue
		}
		var descr string
		if tgt.target != nil {
			descr = tgt.target.Description
		} else if tgt.group != nil {
			descr = tgt.group.Description

			if len(tgt.group.Targets) > 0 {
				slices.Sort(tgt.group.Targets)
				names := strings.Join(tgt.group.Targets, ", ")
				if descr != "" {
					descr += " (" + names + ")"
				} else {
					descr = names
				}
			}
		}
		fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
	}

	return nil
}
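
Both printers lean on text/tabwriter for the column layout; a minimal standalone illustration of the same parameters:

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Same tabwriter settings as printVars and printTargetList above.
	tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
	defer tw.Flush()

	fmt.Fprintln(tw, "TARGET\tDESCRIPTION")
	fmt.Fprintln(tw, "default\tbuild the main image")
	fmt.Fprintln(tw, "lint\trun Dockerfile checks")
}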

func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext string, targets []string, options *bakeOptions) attribute.Set {
	return attribute.NewSet(
		commandNameAttribute.String("bake"),
		attribute.Stringer(string(commandOptionsHash), &bakeOptionsHash{
			bakeOptions: options,
			cfg:         confutil.NewConfig(dockerCli),
			url:         url,
			cmdContext:  cmdContext,
			targets:     targets,
		}),
		driverNameAttribute.String(options.builder),
		driverTypeAttribute.String(driverType),
	)
}

type bakeOptionsHash struct {
	*bakeOptions
	cfg        *confutil.Config
	url        string
	cmdContext string
	targets    []string
	result     string
	resultOnce sync.Once
}

func (o *bakeOptionsHash) String() string {
	o.resultOnce.Do(func() {
		url := o.url
		cmdContext := o.cmdContext
		if cmdContext == "cwd://" {
			// Resolve the directory if the cmdContext is the current working directory.
			cmdContext = osutil.GetWd()
		}

		// Sort the inputs for files and targets since the ordering
		// doesn't matter, but avoid modifying the original slice.
		files := immutableSort(o.files)
		targets := immutableSort(o.targets)

		joinedFiles := strings.Join(files, ",")
		joinedTargets := strings.Join(targets, ",")
		salt := o.cfg.TryNodeIdentifier()

		h := sha256.New()
		for _, s := range []string{url, cmdContext, joinedFiles, joinedTargets, salt} {
			_, _ = io.WriteString(h, s)
			h.Write([]byte{0})
		}
		o.result = hex.EncodeToString(h.Sum(nil))
	})
	return o.result
}
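
The NUL byte written after every field acts as a separator so that adjacent fields cannot collide; a standalone sketch of the scheme:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

func hashFields(fields ...string) string {
	h := sha256.New()
	for _, s := range fields {
		_, _ = io.WriteString(h, s)
		h.Write([]byte{0}) // separator: ("ab","c") must not hash like ("a","bc")
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(hashFields("ab", "c") == hashFields("a", "bc")) // false
}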

// immutableSort will sort the entries in s without modifying the original slice.
func immutableSort(s []string) []string {
	if !sort.StringsAreSorted(s) {
		cpy := make([]string, len(s))
		copy(cpy, s)
		sort.Strings(cpy)
		return cpy
	}
	return s
}
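
Copy-on-sort keeps the caller's slice (typically the raw flag values) untouched; for example:

package main

import (
	"fmt"
	"sort"
)

// immutableSort, as defined above.
func immutableSort(s []string) []string {
	if !sort.StringsAreSorted(s) {
		cpy := make([]string, len(s))
		copy(cpy, s)
		sort.Strings(cpy)
		return cpy
	}
	return s
}

func main() {
	files := []string{"docker-bake.hcl", "compose.yml"}
	sorted := immutableSort(files)
	fmt.Println(files)  // [docker-bake.hcl compose.yml] (original order intact)
	fmt.Println(sorted) // [compose.yml docker-bake.hcl]
}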

type syncWriter struct {
	w    io.Writer
	once sync.Once
	wait func() error
}

func (w *syncWriter) Write(p []byte) (n int, err error) {
	w.once.Do(func() {
		if w.wait != nil {
			err = w.wait()
		}
	})
	if err != nil {
		return 0, err
	}
	return w.w.Write(p)
}
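
syncWriter runs its wait callback exactly once, on the first Write, so output can be gated on some readiness condition; a hypothetical usage sketch:

package main

import (
	"fmt"
	"io"
	"os"
	"sync"
)

// syncWriter, as defined above.
type syncWriter struct {
	w    io.Writer
	once sync.Once
	wait func() error
}

func (w *syncWriter) Write(p []byte) (n int, err error) {
	w.once.Do(func() {
		if w.wait != nil {
			err = w.wait()
		}
	})
	if err != nil {
		return 0, err
	}
	return w.w.Write(p)
}

func main() {
	ready := make(chan error, 1)
	ready <- nil // pretend whatever we wait on has finished
	w := &syncWriter{w: os.Stdout, wait: func() error { return <-ready }}
	fmt.Fprintln(w, "first write blocks until the wait func returns; later writes do not")
}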

commands/build.go
@@ -5,12 +5,10 @@ import (
	"context"
	"crypto/sha256"
	"encoding/base64"
-	"encoding/csv"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
-	"log"
	"os"
	"path/filepath"
	"strconv"
@@ -39,7 +37,6 @@ import (
	"github.com/docker/buildx/util/osutil"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/buildx/util/tracing"
-	"github.com/docker/cli-docs-tool/annotation"
	"github.com/docker/cli/cli"
	"github.com/docker/cli/cli/command"
	dockeropts "github.com/docker/cli/opts"
@@ -48,9 +45,11 @@ import (
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/exporter/containerimage/exptypes"
	"github.com/moby/buildkit/frontend/subrequests"
+	"github.com/moby/buildkit/frontend/subrequests/lint"
	"github.com/moby/buildkit/frontend/subrequests/outline"
	"github.com/moby/buildkit/frontend/subrequests/targets"
	"github.com/moby/buildkit/solver/errdefs"
+	solverpb "github.com/moby/buildkit/solver/pb"
	"github.com/moby/buildkit/util/grpcerrors"
	"github.com/moby/buildkit/util/progress/progressui"
	"github.com/morikuni/aec"
@@ -58,9 +57,11 @@ import (
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
+	"github.com/tonistiigi/go-csvvalue"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
	"google.golang.org/grpc/codes"
+	"google.golang.org/protobuf/proto"
)

type buildOptions struct {
@@ -80,7 +81,7 @@ type buildOptions struct {
	noCacheFilter []string
	outputs       []string
	platforms     []string
-	printFunc     string
+	callFunc      string
	secrets       []string
	shmSize       dockeropts.MemBytes
	ssh           []string
@@ -200,11 +201,17 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
		return nil, err
	}

-	opts.PrintFunc, err = buildflags.ParsePrintFunc(o.printFunc)
+	opts.CallFunc, err = buildflags.ParseCallFunc(o.callFunc)
	if err != nil {
		return nil, err
	}

+	prm := confutil.MetadataProvenance()
+	if opts.CallFunc != nil || len(o.metadataFile) == 0 {
+		prm = confutil.MetadataProvenanceModeDisabled
+	}
+	opts.ProvenanceResponseMode = string(prm)
+
	return &opts, nil
}

@@ -219,15 +226,22 @@ func (o *buildOptions) toDisplayMode() (progressui.DisplayMode, error) {
	return progress, nil
}

-func buildMetricAttributes(dockerCli command.Cli, b *builder.Builder, options *buildOptions) attribute.Set {
+const (
+	commandNameAttribute = attribute.Key("command.name")
+	commandOptionsHash   = attribute.Key("command.options.hash")
+	driverNameAttribute  = attribute.Key("driver.name")
+	driverTypeAttribute  = attribute.Key("driver.type")
+)
+
+func buildMetricAttributes(dockerCli command.Cli, driverType string, options *buildOptions) attribute.Set {
	return attribute.NewSet(
-		attribute.String("command.name", "build"),
+		commandNameAttribute.String("build"),
-		attribute.Stringer("command.options.hash", &buildOptionsHash{
+		attribute.Stringer(string(commandOptionsHash), &buildOptionsHash{
			buildOptions: options,
-			configDir:    confutil.ConfigDir(dockerCli),
+			cfg:          confutil.NewConfig(dockerCli),
		}),
-		attribute.String("driver.name", options.builder),
+		driverNameAttribute.String(options.builder),
-		attribute.String("driver.type", b.Driver),
+		driverTypeAttribute.String(driverType),
	)
}
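
The attribute.Key constants make the attribute names reusable across the build and bake commands, and attribute.Stringer records the result of String(); since buildOptionsHash guards its sha256 with sync.Once, the hash is computed at most once however often the set is used. A minimal illustration (lazyHash is invented for the example):

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

const commandName = attribute.Key("command.name")

// lazyHash is a made-up fmt.Stringer; the real code uses buildOptionsHash,
// whose String() computes its sha256 behind a sync.Once.
type lazyHash struct{ value string }

func (h lazyHash) String() string { return h.value }

func main() {
	set := attribute.NewSet(
		commandName.String("build"),
		attribute.Stringer("command.options.hash", lazyHash{value: "deadbeef"}),
	)
	fmt.Println(set.Encoded(attribute.DefaultEncoder()))
}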

@@ -236,7 +250,7 @@ func buildMetricAttributes(dockerCli command.Cli, b *builder.Builder, options *b
// the fmt.Stringer interface.
type buildOptionsHash struct {
	*buildOptions
-	configDir  string
+	cfg        *confutil.Config
	result     string
	resultOnce sync.Once
}
@@ -253,7 +267,7 @@ func (o *buildOptionsHash) String() string {
	if contextPath != "-" && osutil.IsLocalDir(contextPath) {
		contextPath = osutil.ToAbs(contextPath)
	}
-	salt := confutil.TryNodeIdentifier(o.configDir)
+	salt := o.cfg.TryNodeIdentifier()

	h := sha256.New()
	for _, s := range []string{target, contextPath, dockerfile, salt} {
@@ -266,11 +280,7 @@ func (o *buildOptionsHash) String() string {
}

func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions) (err error) {
-	mp, err := metricutil.NewMeterProvider(ctx, dockerCli)
-	if err != nil {
-		return err
-	}
-	defer mp.Report(context.Background())
+	mp := dockerCli.MeterProvider()

	ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
	if err != nil {
@@ -307,15 +317,16 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
	if err != nil {
		return err
	}
+	driverType := b.Driver

	var term bool
	if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
		term = true
	}
-	attributes := buildMetricAttributes(dockerCli, b, &options)
+	attributes := buildMetricAttributes(dockerCli, driverType, &options)

-	ctx2, cancel := context.WithCancel(context.TODO())
-	defer cancel()
+	ctx2, cancel := context.WithCancelCause(context.TODO())
+	defer func() { cancel(errors.WithStack(context.Canceled)) }()
	progressMode, err := options.toDisplayMode()
	if err != nil {
		return err
@@ -337,11 +348,12 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)

	done := timeBuildCommand(mp, attributes)
	var resp *client.SolveResponse
+	var inputs *build.Inputs
	var retErr error
-	if isExperimental() {
+	if confutil.IsExperimental() {
-		resp, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
+		resp, inputs, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
	} else {
-		resp, retErr = runBasicBuild(ctx, dockerCli, opts, options, printer)
+		resp, inputs, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
	}

	if err := printer.Wait(); retErr == nil {
@@ -367,13 +379,21 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
		}
	}
	if options.metadataFile != "" {
-		if err := writeMetadataFile(options.metadataFile, decodeExporterResponse(resp.ExporterResponse)); err != nil {
+		dt := decodeExporterResponse(resp.ExporterResponse)
+		if opts.CallFunc == nil {
+			if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
+				dt["buildx.build.warnings"] = warnings
+			}
+		}
+		if err := writeMetadataFile(options.metadataFile, dt); err != nil {
			return err
		}
	}
-	if opts.PrintFunc != nil {
+	if opts.CallFunc != nil {
-		if err := printResult(opts.PrintFunc, resp.ExporterResponse); err != nil {
+		if exitcode, err := printResult(dockerCli.Out(), opts.CallFunc, resp.ExporterResponse, options.target, inputs); err != nil {
			return err
+		} else if exitcode != 0 {
+			os.Exit(exitcode)
		}
	}
	return nil
@@ -388,22 +408,22 @@ func getImageID(resp map[string]string) string {
	return dgst
}

-func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
+func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
-	resp, res, err := cbuild.RunBuild(ctx, dockerCli, *opts, dockerCli.In(), printer, false)
+	resp, res, dfmap, err := cbuild.RunBuild(ctx, dockerCli, opts, dockerCli.In(), printer, false)
	if res != nil {
		res.Done()
	}
-	return resp, err
+	return resp, dfmap, err
}

-func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, error) {
+func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
	if options.invokeConfig != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
		// stdin must be usable for monitor
-		return nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
+		return nil, nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
	}
	c, err := controller.NewController(ctx, options.ControlOptions, dockerCli, printer)
	if err != nil {
-		return nil, err
+		return nil, nil, err
	}
	defer func() {
		if err := c.Close(); err != nil {
@@ -415,22 +435,31 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
	// so we need to resolve paths to absolute ones in the client.
	opts, err = controllerapi.ResolveOptionPaths(opts)
	if err != nil {
-		return nil, err
+		return nil, nil, err
	}

	var ref string
	var retErr error
	var resp *client.SolveResponse
-	f := ioset.NewSingleForwarder()
-	f.SetReader(dockerCli.In())
-	pr, pw := io.Pipe()
-	f.SetWriter(pw, func() io.WriteCloser {
-		pw.Close() // propagate EOF
-		logrus.Debug("propagating stdin close")
-		return nil
-	})
+	var inputs *build.Inputs
+
+	var f *ioset.SingleForwarder
+	var pr io.ReadCloser
+	var pw io.WriteCloser
+	if options.invokeConfig == nil {
+		pr = dockerCli.In()
+	} else {
+		f = ioset.NewSingleForwarder()
+		f.SetReader(dockerCli.In())
+		pr, pw = io.Pipe()
+		f.SetWriter(pw, func() io.WriteCloser {
+			pw.Close() // propagate EOF
+			logrus.Debug("propagating stdin close")
+			return nil
+		})
+	}

-	ref, resp, err = c.Build(ctx, *opts, pr, printer)
+	ref, resp, inputs, err = c.Build(ctx, opts, pr, printer)
	if err != nil {
		var be *controllererrors.BuildError
		if errors.As(err, &be) {
@@ -438,16 +467,18 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
			retErr = err
			// We can proceed to monitor
		} else {
-			return nil, errors.Wrapf(err, "failed to build")
+			return nil, nil, errors.Wrapf(err, "failed to build")
		}
	}

+	if options.invokeConfig != nil {
		if err := pw.Close(); err != nil {
			logrus.Debug("failed to close stdin pipe writer")
		}
		if err := pr.Close(); err != nil {
			logrus.Debug("failed to close stdin pipe reader")
		}
+	}

	if options.invokeConfig != nil && options.invokeConfig.needsDebug(retErr) {
		// Print errors before launching monitor
@@ -477,7 +508,7 @@ func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *contro
		}
	}

-	return resp, retErr
+	return resp, inputs, retErr
}

func printError(err error, printer *progress.Printer) error {
@@ -514,9 +545,12 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D

	cmd := &cobra.Command{
		Use:   "build [OPTIONS] PATH | URL | -",
-		Aliases: []string{"b"},
		Short: "Start a build",
		Args:  cli.ExactArgs(1),
+		Aliases: []string{"b"},
+		Annotations: map[string]string{
+			"aliases": "docker build, docker builder build, docker image build, docker buildx b",
+		},
		RunE: func(cmd *cobra.Command, args []string) error {
			options.contextPath = args[0]
			options.builder = rootOpts.builder
@@ -555,7 +589,6 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
	flags := cmd.Flags()

	flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
-	flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#add-host"})

	flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)

@@ -568,14 +601,12 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
	flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, `Cache export destinations (e.g., "user/app:cache", "type=local,dest=path/to/dir")`)

	flags.StringVar(&options.cgroupParent, "cgroup-parent", "", `Set the parent cgroup for the "RUN" instructions during build`)
-	flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#cgroup-parent"})

	flags.StringArrayVar(&options.contexts, "build-context", []string{}, "Additional build contexts (e.g., name=path)")

	flags.StringVarP(&options.dockerfileName, "file", "f", "", `Name of the Dockerfile (default: "PATH/Dockerfile")`)
-	flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#file"})

-	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
+	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to a file")

	flags.StringArrayVar(&options.labels, "label", []string{}, "Set metadata for an image")

@@ -589,11 +620,6 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D

	flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build")

-	if isExperimental() {
-		flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets)")
-		cobrautil.MarkFlagsExperimental(flags, "print")
-	}
-
	flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`)

	flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success")
@@ -605,10 +631,8 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
	flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`)

	flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`)
-	flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#tag"})

	flags.StringVar(&options.target, "target", "", "Set the target build stage to build")
-	flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/reference/cli/docker/image/build/#target"})

	options.ulimits = dockeropts.NewUlimitOpt(nil)
	flags.Var(options.ulimits, "ulimit", "Ulimit options")
@@ -617,7 +641,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
	flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--attest=type=sbom"`)
	flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--attest=type=provenance"`)

-	if isExperimental() {
+	if confutil.IsExperimental() {
		// TODO: move this to debug command if needed
		flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
		flags.BoolVar(&options.Detach, "detach", false, "Detach buildx server (supported only on linux)")
@@ -625,12 +649,20 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
		cobrautil.MarkFlagsExperimental(flags, "root", "detach", "server-config")
	}

+	flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
+	flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
+	flags.Lookup("check").NoOptDefVal = "true"
+
	// hidden flags
	var ignore string
	var ignoreSlice []string
	var ignoreBool bool
	var ignoreInt int64

+	flags.StringVar(&options.callFunc, "print", "", "Print result of information request (e.g., outline, targets)")
+	cobrautil.MarkFlagsExperimental(flags, "print")
+	flags.MarkHidden("print")
+
	flags.BoolVar(&ignoreBool, "compress", false, "Compress the build context using gzip")
	flags.MarkHidden("compress")

@@ -688,9 +720,9 @@ type commonFlags struct {

func commonBuildFlags(options *commonFlags, flags *pflag.FlagSet) {
	options.noCache = flags.Bool("no-cache", false, "Do not use cache when building the image")
-	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
+	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
	options.pull = flags.Bool("pull", false, "Always attempt to pull all referenced images")
-	flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to the file")
+	flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to a file")
}

func checkWarnedFlags(f *pflag.Flag) {
@@ -714,18 +746,29 @@ func writeMetadataFile(filename string, dt interface{}) error {
}

func decodeExporterResponse(exporterResponse map[string]string) map[string]interface{} {
+	decFunc := func(k, v string) ([]byte, error) {
+		if k == "result.json" {
+			// result.json is part of metadata response for subrequests which
+			// is already a JSON object: https://github.com/moby/buildkit/blob/f6eb72f2f5db07ddab89ac5e2bd3939a6444f4be/frontend/dockerui/requests.go#L100-L102
+			return []byte(v), nil
+		}
+		return base64.StdEncoding.DecodeString(v)
+	}
	out := make(map[string]interface{})
	for k, v := range exporterResponse {
-		dt, err := base64.StdEncoding.DecodeString(v)
+		dt, err := decFunc(k, v)
		if err != nil {
			out[k] = v
			continue
		}
		var raw map[string]interface{}
		if err = json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
+			var rawList []map[string]interface{}
+			if err = json.Unmarshal(dt, &rawList); err != nil || len(rawList) == 0 {
				out[k] = v
				continue
			}
+		}
		out[k] = json.RawMessage(dt)
	}
	return out
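
A simplified, runnable sketch of the decoding rules (it omits the JSON-array fallback the real function also handles): values are base64-encoded JSON, except result.json, which is already plain JSON, and anything that fails to decode or parse is passed through verbatim.

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	resp := map[string]string{
		"containerimage.digest": "sha256:abc123",                                      // not base64: passed through
		"result.json":           `{"warnings":[]}`,                                    // already JSON: taken as-is
		"image.name":            base64.StdEncoding.EncodeToString([]byte(`{"x":1}`)), // base64 JSON: decoded
	}
	out := make(map[string]interface{})
	for k, v := range resp {
		var dt []byte
		if k == "result.json" {
			dt = []byte(v)
		} else if d, err := base64.StdEncoding.DecodeString(v); err == nil {
			dt = d
		} else {
			out[k] = v
			continue
		}
		var raw map[string]interface{}
		if err := json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
			out[k] = v
			continue
		}
		out[k] = json.RawMessage(dt)
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}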

@@ -762,14 +805,6 @@ func (w *wrapped) Unwrap() error {
	return w.err
}

-func isExperimental() bool {
-	if v, ok := os.LookupEnv("BUILDX_EXPERIMENTAL"); ok {
-		vv, _ := strconv.ParseBool(v)
-		return vv
-	}
-	return false
-}
-
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
	txn, release, err := storeutil.GetStore(dockerCli)
	if err != nil {
@@ -826,7 +861,7 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
		fmt.Fprintf(sb, "%d warnings found", len(warnings))
	}
	if logrus.GetLevel() < logrus.DebugLevel {
-		fmt.Fprintf(sb, " (use --debug to expand)")
+		fmt.Fprintf(sb, " (use docker --debug to expand)")
	}
	fmt.Fprintf(sb, ":\n")
	fmt.Fprint(w, aec.Apply(sb.String(), aec.YellowF))
@@ -850,42 +885,107 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
			src.Print(w)
		}
		fmt.Fprintf(w, "\n")

	}
}

-func printResult(f *controllerapi.PrintFunc, res map[string]string) error {
+func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string, target string, inp *build.Inputs) (int, error) {
	switch f.Name {
	case "outline":
-		return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
+		return 0, printValue(w, outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
	case "targets":
-		return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
+		return 0, printValue(w, targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
	case "subrequests.describe":
-		return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
+		return 0, printValue(w, subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
+	case "lint":
+		lintResults := lint.LintResults{}
+		if result, ok := res["result.json"]; ok {
+			if err := json.Unmarshal([]byte(result), &lintResults); err != nil {
+				return 0, err
+			}
+		}
+
+		warningCount := len(lintResults.Warnings)
+		if f.Format != "json" && warningCount > 0 {
+			var warningCountMsg string
+			if warningCount == 1 {
+				warningCountMsg = "1 warning has been found!"
+			} else if warningCount > 1 {
+				warningCountMsg = fmt.Sprintf("%d warnings have been found!", warningCount)
+			}
+			fmt.Fprintf(w, "Check complete, %s\n", warningCountMsg)
+		}
+		sourceInfoMap := func(sourceInfo *solverpb.SourceInfo) *solverpb.SourceInfo {
+			if sourceInfo == nil || inp == nil {
+				return sourceInfo
+			}
+			if target == "" {
+				target = "default"
+			}
+
+			if inp.DockerfileMappingSrc != "" {
+				newSourceInfo := proto.Clone(sourceInfo).(*solverpb.SourceInfo)
+				newSourceInfo.Filename = inp.DockerfileMappingSrc
+				return newSourceInfo
+			}
+			return sourceInfo
+		}
+
+		printLintWarnings := func(dt []byte, w io.Writer) error {
+			return lintResults.PrintTo(w, sourceInfoMap)
+		}
+
+		err := printValue(w, printLintWarnings, lint.SubrequestLintDefinition.Version, f.Format, res)
+		if err != nil {
+			return 0, err
+		}
+
+		if lintResults.Error != nil {
+			// Print the error message and the source
+			// Normally, we would use `errdefs.WithSource` to attach the source to the
+			// error and let the error be printed by the handling that's already in place,
+			// but here we want to print the error in a way that's consistent with how
+			// the lint warnings are printed via the `lint.PrintLintViolations` function,
+			// which differs from the default error printing.
+			if f.Format != "json" && len(lintResults.Warnings) > 0 {
+				fmt.Fprintln(w)
+			}
+			lintBuf := bytes.NewBuffer(nil)
+			lintResults.PrintErrorTo(lintBuf, sourceInfoMap)
+			return 0, errors.New(lintBuf.String())
+		} else if len(lintResults.Warnings) == 0 && f.Format != "json" {
+			fmt.Fprintln(w, "Check complete, no warnings found.")
+		}
	default:
-		if dt, ok := res["result.txt"]; ok {
-			fmt.Print(dt)
+		if dt, ok := res["result.json"]; ok && f.Format == "json" {
+			fmt.Fprintln(w, dt)
+		} else if dt, ok := res["result.txt"]; ok {
+			fmt.Fprint(w, dt)
		} else {
-			log.Printf("%s %+v", f, res)
+			fmt.Fprintf(w, "%s %+v\n", f, res)
		}
	}
-	return nil
+	if v, ok := res["result.statuscode"]; !f.IgnoreStatus && ok {
+		if n, err := strconv.Atoi(v); err == nil && n != 0 {
+			return n, nil
+		}
+	}
+	return 0, nil
}
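
Note the new exit-code path: printResult now surfaces the frontend's result.statuscode, so a --call=check run with violations can fail the command. Unless the call was configured with IgnoreStatus, the non-zero code is returned as-is and, as shown earlier in runBuild, handed to os.Exit.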

-type printFunc func([]byte, io.Writer) error
+type callFunc func([]byte, io.Writer) error

-func printValue(printer printFunc, version string, format string, res map[string]string) error {
+func printValue(w io.Writer, printer callFunc, version string, format string, res map[string]string) error {
	if format == "json" {
-		fmt.Fprintln(os.Stdout, res["result.json"])
+		fmt.Fprintln(w, res["result.json"])
		return nil
	}

	if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
		// structure is too new and we don't know how to print it
-		fmt.Fprint(os.Stdout, res["result.txt"])
+		fmt.Fprint(w, res["result.txt"])
		return nil
	}
-	return printer([]byte(res["result.json"]), os.Stdout)
+	return printer([]byte(res["result.json"]), w)
}

type invokeConfig struct {
@@ -915,7 +1015,7 @@ func (cfg *invokeConfig) runDebug(ctx context.Context, ref string, options *cont
		return nil, errors.Errorf("failed to configure terminal: %v", err)
	}
	defer con.Reset()
-	return monitor.RunMonitor(ctx, ref, options, cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
+	return monitor.RunMonitor(ctx, ref, options, &cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
}

func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
@@ -935,9 +1035,9 @@ func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
		return nil
	}

-	csvReader := csv.NewReader(strings.NewReader(invoke))
-	csvReader.LazyQuotes = true
-	fields, err := csvReader.Read()
+	csvParser := csvvalue.NewParser()
+	csvParser.LazyQuotes = true
+	fields, err := csvParser.Fields(invoke, nil)
	if err != nil {
		return err
	}
@@ -993,6 +1093,20 @@ func maybeJSONArray(v string) []string {
	return []string{v}
}

+func callAlias(target *string, value string) cobrautil.BoolFuncValue {
+	return func(s string) error {
+		v, err := strconv.ParseBool(s)
+		if err != nil {
+			return err
+		}
+
+		if v {
+			*target = value
+		}
+		return nil
+	}
+}
+
// timeBuildCommand will start a timer for timing the build command. It records the time when the returned
// function is invoked into a metric.
func timeBuildCommand(mp metric.MeterProvider, attrs attribute.Set) func(err error) {
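
How the alias behaves at the flag level, as a sketch using pflag directly (boolFuncValue mirrors the cobrautil.BoolFuncValue helper referenced above):

package main

import (
	"fmt"
	"strconv"

	"github.com/spf13/pflag"
)

// boolFuncValue mirrors cobrautil.BoolFuncValue: a pflag.Value whose Set
// simply invokes the wrapped function.
type boolFuncValue func(string) error

func (f boolFuncValue) Set(s string) error { return f(s) }
func (f boolFuncValue) String() string     { return "" }
func (f boolFuncValue) Type() string       { return "bool" }

func callAlias(target *string, value string) boolFuncValue {
	return func(s string) error {
		v, err := strconv.ParseBool(s)
		if err != nil {
			return err
		}
		if v {
			*target = value
		}
		return nil
	}
}

func main() {
	callFunc := "build"
	flags := pflag.NewFlagSet("build", pflag.ContinueOnError)
	flags.StringVar(&callFunc, "call", "build", "evaluation method")
	flags.VarPF(callAlias(&callFunc, "check"), "check", "", `shorthand for "--call=check"`)
	flags.Lookup("check").NoOptDefVal = "true" // a bare --check means --check=true

	_ = flags.Parse([]string{"--check"})
	fmt.Println(callFunc) // check
}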

commands/debug/root.go
@@ -64,7 +64,7 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
		return errors.Errorf("failed to configure terminal: %v", err)
	}

-	_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
+	_, err = monitor.RunMonitor(ctx, "", nil, &controllerapi.InvokeConfig{
		Tty: true,
	}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
	con.Reset()
@@ -80,7 +80,7 @@ func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
	flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
	flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
	flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
-	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
+	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)

	cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")

commands/dial-stdio.go
@@ -5,7 +5,7 @@ import (
	"net"
	"os"

-	"github.com/containerd/containerd/platforms"
+	"github.com/containerd/platforms"
	"github.com/docker/buildx/build"
	"github.com/docker/buildx/builder"
	"github.com/docker/buildx/util/progress"
@@ -125,8 +125,7 @@ func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
	}

	flags := cmd.Flags()
-	cmd.Flags()
	flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
-	flags.StringVar(&opts.progress, "progress", "quiet", "Set type of progress output (auto, plain, tty).")
+	flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
	return cmd
}

commands/imagetools/create.go
@@ -9,6 +9,7 @@ import (

	"github.com/distribution/reference"
	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/buildflags"
	"github.com/docker/buildx/util/cobrautil/completion"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/progress"
@@ -29,6 +30,7 @@ type createOptions struct {
	dryrun       bool
	actionAppend bool
	progress     string
+	preferIndex  bool
}

func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
@@ -40,7 +42,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
		return errors.Errorf("can't push with no tags specified, please set --tag or --dry-run")
	}

-	fileArgs := make([]string, len(in.files))
+	fileArgs := make([]string, len(in.files), len(in.files)+len(args))
	for i, f := range in.files {
		dt, err := os.ReadFile(f)
		if err != nil {
@@ -153,7 +155,12 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
		}
	}

-	dt, desc, err := r.Combine(ctx, srcs, in.annotations)
+	annotations, err := buildflags.ParseAnnotations(in.annotations)
+	if err != nil {
+		return errors.Wrapf(err, "failed to parse annotations")
+	}
+
+	dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
	if err != nil {
		return err
	}
@@ -166,8 +173,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
	// new resolver cause need new auth
	r = imagetools.New(imageopt)

-	ctx2, cancel := context.WithCancel(context.TODO())
-	defer cancel()
+	ctx2, cancel := context.WithCancelCause(context.TODO())
+	defer func() { cancel(errors.WithStack(context.Canceled)) }()
	printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
	if err != nil {
		return err
@@ -281,8 +288,9 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
	flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
	flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
	flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
-	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
+	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
	flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
+	flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")

	return cmd
}

commands/imagetools/root.go
@@ -10,11 +10,12 @@ type RootOptions struct {
	Builder *string
}

-func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
+func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
	cmd := &cobra.Command{
		Use:               "imagetools",
		Short:             "Commands to work on images in registry",
		ValidArgsFunction: completion.Disable,
+		RunE:              rootcmd.RunE,
	}

	cmd.AddCommand(

commands/inspect.go
@@ -17,6 +17,7 @@ import (
	"github.com/docker/cli/cli/command"
	"github.com/docker/cli/cli/debug"
	"github.com/docker/go-units"
+	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

@@ -34,8 +35,9 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
		return err
	}

-	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
-	defer cancel()
+	timeoutCtx, cancel := context.WithCancelCause(ctx)
+	timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
+	defer func() { cancel(errors.WithStack(context.Canceled)) }()

	nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
	if in.bootstrap {
@@ -122,8 +124,20 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
			if rule.KeepDuration > 0 {
				fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
			}
-			if rule.KeepBytes > 0 {
-				fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
+			if rule.ReservedSpace > 0 {
+				fmt.Fprintf(w, "\tReserved Space:\t%s\n", units.BytesSize(float64(rule.ReservedSpace)))
+			}
+			if rule.MaxUsedSpace > 0 {
+				fmt.Fprintf(w, "\tMax Used Space:\t%s\n", units.BytesSize(float64(rule.MaxUsedSpace)))
+			}
+			if rule.MinFreeSpace > 0 {
+				fmt.Fprintf(w, "\tMin Free Space:\t%s\n", units.BytesSize(float64(rule.MinFreeSpace)))
+			}
+		}
+		for f, dt := range nodes[i].Files {
+			fmt.Fprintf(w, "File#%s:\n", f)
+			for _, line := range strings.Split(string(dt), "\n") {
+				fmt.Fprintf(w, "\t> %s\n", line)
			}
		}
	}
}
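
The timeout rework here (and in ls.go below) swaps context.WithTimeout for the cause-aware variants, so whoever inspects the resulting error can tell a deadline from a parent cancellation; a minimal sketch of the pattern:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	// The deadline carries its own cause, distinct from a parent cancel.
	ctx, _ = context.WithTimeoutCause(ctx, 50*time.Millisecond, errors.New("inspect timed out"))
	defer cancel(context.Canceled)

	<-ctx.Done()
	fmt.Println(context.Cause(ctx)) // inspect timed out
}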

commands/install.go
@@ -15,7 +15,7 @@ import (
type installOptions struct {
}

-func runInstall(dockerCli command.Cli, in installOptions) error {
+func runInstall(_ command.Cli, _ installOptions) error {
	dir := config.Dir()
	if err := os.MkdirAll(dir, 0755); err != nil {
		return errors.Wrap(err, "could not create docker config")
|
||||||
|
|||||||
161
commands/ls.go
161
commands/ls.go
@@ -8,6 +8,7 @@ import (
|
|||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
"github.com/containerd/platforms"
|
||||||
"github.com/docker/buildx/builder"
|
"github.com/docker/buildx/builder"
|
||||||
"github.com/docker/buildx/store"
|
"github.com/docker/buildx/store"
|
||||||
"github.com/docker/buildx/store/storeutil"
|
"github.com/docker/buildx/store/storeutil"
|
||||||
@@ -17,6 +18,7 @@ import (
|
|||||||
"github.com/docker/cli/cli"
|
"github.com/docker/cli/cli"
|
||||||
"github.com/docker/cli/cli/command"
|
"github.com/docker/cli/cli/command"
|
||||||
"github.com/docker/cli/cli/command/formatter"
|
"github.com/docker/cli/cli/command/formatter"
|
||||||
|
"github.com/pkg/errors"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"golang.org/x/sync/errgroup"
|
"golang.org/x/sync/errgroup"
|
||||||
)
|
)
|
||||||
@@ -36,6 +38,7 @@ const (
|
|||||||
|
|
||||||
type lsOptions struct {
|
type lsOptions struct {
|
||||||
format string
|
format string
|
||||||
|
noTrunc bool
|
||||||
}
|
}
|
||||||
|
|
||||||
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
|
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
|
||||||
@@ -55,8 +58,9 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
|
timeoutCtx, cancel := context.WithCancelCause(ctx)
|
||||||
defer cancel()
|
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
|
||||||
|
defer func() { cancel(errors.WithStack(context.Canceled)) }()
|
||||||
|
|
||||||
eg, _ := errgroup.WithContext(timeoutCtx)
|
eg, _ := errgroup.WithContext(timeoutCtx)
|
||||||
for _, b := range builders {
|
for _, b := range builders {
|
||||||
@@ -72,7 +76,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
|
if hasErrors, err := lsPrint(dockerCli, current, builders, in); err != nil {
|
||||||
return err
|
return err
|
||||||
} else if hasErrors {
|
} else if hasErrors {
|
||||||
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
|
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
|
||||||
@@ -107,6 +111,7 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
|
|||||||
|
|
||||||
flags := cmd.Flags()
|
flags := cmd.Flags()
|
||||||
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
|
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
|
||||||
|
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
|
||||||
|
|
||||||
// hide builder persistent flag for this command
|
// hide builder persistent flag for this command
|
||||||
cobrautil.HideInheritedFlags(cmd, "builder")
|
cobrautil.HideInheritedFlags(cmd, "builder")
|
||||||
@@ -114,14 +119,15 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
|
|||||||
return cmd
|
return cmd
|
||||||
}
|
}
|
||||||
|
|
||||||
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
|
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, in lsOptions) (hasErrors bool, _ error) {
|
||||||
if format == formatter.TableFormatKey {
|
if in.format == formatter.TableFormatKey {
|
||||||
format = lsDefaultTableFormat
|
in.format = lsDefaultTableFormat
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx := formatter.Context{
|
ctx := formatter.Context{
|
||||||
Output: dockerCli.Out(),
|
Output: dockerCli.Out(),
|
||||||
Format: formatter.Format(format),
|
Format: formatter.Format(in.format),
|
||||||
|
Trunc: !in.noTrunc,
|
||||||
}
|
}
|
||||||
|
|
||||||
sort.SliceStable(builders, func(i, j int) bool {
|
sort.SliceStable(builders, func(i, j int) bool {
|
||||||
@@ -138,11 +144,12 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
|
|||||||
render := func(format func(subContext formatter.SubContext) error) error {
|
render := func(format func(subContext formatter.SubContext) error) error {
|
||||||
for _, b := range builders {
|
for _, b := range builders {
|
||||||
if err := format(&lsContext{
|
if err := format(&lsContext{
|
||||||
|
format: ctx.Format,
|
||||||
|
trunc: ctx.Trunc,
|
||||||
Builder: &lsBuilder{
|
Builder: &lsBuilder{
|
||||||
Builder: b,
|
Builder: b,
|
||||||
Current: b.Name == current.Name,
|
Current: b.Name == current.Name,
|
||||||
},
|
},
|
||||||
format: ctx.Format,
|
|
||||||
}); err != nil {
|
}); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -160,6 +167,7 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
 			}
 			if err := format(&lsContext{
 				format: ctx.Format,
+				trunc:  ctx.Trunc,
 				Builder: &lsBuilder{
 					Builder: b,
 					Current: b.Name == current.Name,
@@ -196,6 +204,7 @@ type lsContext struct {
 	Builder *lsBuilder

 	format formatter.Format
+	trunc  bool
 	node   builder.Node
 }

@@ -261,7 +270,11 @@ func (c *lsContext) Platforms() string {
 	if c.node.Name == "" {
 		return ""
 	}
-	return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
+	pfs := platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms)
+	if c.trunc && c.format.IsTable() {
+		return truncPlatforms(pfs, 4).String()
+	}
+	return strings.Join(pfs, ", ")
 }

 func (c *lsContext) Error() string {
@@ -272,3 +285,133 @@ func (c *lsContext) Error() string {
 	}
 	return ""
 }
+
+var truncMajorPlatforms = []string{
+	"linux/amd64",
+	"linux/arm64",
+	"linux/arm",
+	"linux/ppc64le",
+	"linux/s390x",
+	"linux/riscv64",
+	"linux/mips64",
+}
+
+type truncatedPlatforms struct {
+	res   map[string][]string
+	input []string
+	max   int
+}
+
+func (tp truncatedPlatforms) List() map[string][]string {
+	return tp.res
+}
+
+func (tp truncatedPlatforms) String() string {
+	var out []string
+	var count int
+
+	var keys []string
+	for k := range tp.res {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+
+	seen := make(map[string]struct{})
+	for _, mpf := range truncMajorPlatforms {
+		if tpf, ok := tp.res[mpf]; ok {
+			seen[mpf] = struct{}{}
+			if len(tpf) == 1 {
+				out = append(out, tpf[0])
+				count++
+			} else {
+				hasPreferredPlatform := false
+				for _, pf := range tpf {
+					if strings.HasSuffix(pf, "*") {
+						hasPreferredPlatform = true
+						break
+					}
+				}
+				mainpf := mpf
+				if hasPreferredPlatform {
+					mainpf += "*"
+				}
+				out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tpf)))
+				count += len(tpf)
+			}
+		}
+	}
+
+	for _, mpf := range keys {
+		if len(out) >= tp.max {
+			break
+		}
+		if _, ok := seen[mpf]; ok {
+			continue
+		}
+		if len(tp.res[mpf]) == 1 {
+			out = append(out, tp.res[mpf][0])
+			count++
+		} else {
+			hasPreferredPlatform := false
+			for _, pf := range tp.res[mpf] {
+				if strings.HasSuffix(pf, "*") {
+					hasPreferredPlatform = true
+					break
+				}
+			}
+			mainpf := mpf
+			if hasPreferredPlatform {
+				mainpf += "*"
+			}
+			out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tp.res[mpf])))
+			count += len(tp.res[mpf])
+		}
+	}
+
+	left := len(tp.input) - count
+	if left > 0 {
+		out = append(out, fmt.Sprintf("(%d more)", left))
+	}
+
+	return strings.Join(out, ", ")
+}
+
+func truncPlatforms(pfs []string, max int) truncatedPlatforms {
+	res := make(map[string][]string)
+	for _, mpf := range truncMajorPlatforms {
+		for _, pf := range pfs {
+			if len(res) >= max {
+				break
+			}
+			pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
+			if err != nil {
+				continue
+			}
+			if pp.OS+"/"+pp.Architecture == mpf {
+				res[mpf] = append(res[mpf], pf)
+			}
+		}
+	}
+	left := make(map[string][]string)
+	for _, pf := range pfs {
+		if len(res) >= max {
+			break
+		}
+		pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
+		if err != nil {
+			continue
+		}
+		ppf := strings.TrimSuffix(pp.OS+"/"+pp.Architecture, "*")
+		if _, ok := res[ppf]; !ok {
+			left[ppf] = append(left[ppf], pf)
+		}
+	}
+	for k, v := range left {
+		res[k] = v
+	}
+	return truncatedPlatforms{
+		res:   res,
+		input: pfs,
+		max:   max,
+	}
+}
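The new `--no-trunc` flag above toggles this truncation: by default the `ls` table now caps the PLATFORMS column at four major platform groups. A minimal sketch of how the helper renders, written in the same package as the code above (the input values are illustrative; the test file below exercises many more cases):

func ExampleTruncPlatforms() {
	// "linux/amd64" and "linux/amd64/v2" collapse into one group with a
	// count; the preferred platform keeps its "*" marker.
	fmt.Println(truncPlatforms([]string{"linux/amd64", "linux/amd64/v2", "linux/arm64*"}, 4).String())
	// Output: linux/amd64 (+2), linux/arm64*
}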
commands/ls_test.go (new file, 174 lines)
@@ -0,0 +1,174 @@
+package commands
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func TestTruncPlatforms(t *testing.T) {
+	tests := []struct {
+		name         string
+		platforms    []string
+		max          int
+		expectedList map[string][]string
+		expectedOut  string
+	}{
+		{
+			name:      "arm64 preferred and emulated",
+			platforms: []string{"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/amd64": {
+					"linux/amd64",
+					"linux/amd64/v2",
+				},
+				"linux/arm": {
+					"linux/arm/v7",
+					"linux/arm/v6",
+				},
+				"linux/arm64": {
+					"linux/arm64*",
+				},
+				"linux/ppc64le": {
+					"linux/ppc64le",
+				},
+			},
+			expectedOut: "linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)",
+		},
+		{
+			name:      "riscv64 preferred only",
+			platforms: []string{"linux/riscv64*"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/riscv64": {
+					"linux/riscv64*",
+				},
+			},
+			expectedOut: "linux/riscv64*",
+		},
+		{
+			name:      "amd64 no preferred and emulated",
+			platforms: []string{"linux/amd64", "linux/amd64/v2", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/amd64": {
+					"linux/amd64",
+					"linux/amd64/v2",
+					"linux/amd64/v3",
+				},
+				"linux/arm": {
+					"linux/arm/v7",
+					"linux/arm/v6",
+				},
+				"linux/arm64": {
+					"linux/arm64",
+				},
+				"linux/ppc64le": {
+					"linux/ppc64le",
+				}},
+			expectedOut: "linux/amd64 (+3), linux/arm64, linux/arm (+2), linux/ppc64le, (5 more)",
+		},
+		{
+			name:      "amd64 no preferred",
+			platforms: []string{"linux/amd64", "linux/386"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/386": {
+					"linux/386",
+				},
+				"linux/amd64": {
+					"linux/amd64",
+				},
+			},
+			expectedOut: "linux/amd64, linux/386",
+		},
+		{
+			name:      "arm64 no preferred",
+			platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/arm": {
+					"linux/arm/v7",
+					"linux/arm/v6",
+				},
+				"linux/arm64": {
+					"linux/arm64",
+				},
+			},
+			expectedOut: "linux/arm64, linux/arm (+2)",
+		},
+		{
+			name:      "all preferred",
+			platforms: []string{"darwin/arm64*", "linux/arm64*", "linux/arm/v5*", "linux/arm/v6*", "linux/arm/v7*", "windows/arm64*"},
+			max:       4,
+			expectedList: map[string][]string{
+				"darwin/arm64": {
+					"darwin/arm64*",
+				},
+				"linux/arm": {
+					"linux/arm/v5*",
+					"linux/arm/v6*",
+					"linux/arm/v7*",
+				},
+				"linux/arm64": {
+					"linux/arm64*",
+				},
+				"windows/arm64": {
+					"windows/arm64*",
+				},
+			},
+			expectedOut: "linux/arm64*, linux/arm* (+3), darwin/arm64*, windows/arm64*",
+		},
+		{
+			name:      "no major preferred",
+			platforms: []string{"linux/amd64/v2*", "linux/arm/v6*", "linux/mips64le*", "linux/amd64", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64", "linux/arm/v7"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/amd64": {
+					"linux/amd64/v2*",
+					"linux/amd64",
+					"linux/amd64/v3",
+				},
+				"linux/arm": {
+					"linux/arm/v6*",
+					"linux/arm/v7",
+				},
+				"linux/arm64": {
+					"linux/arm64",
+				},
+				"linux/ppc64le": {
+					"linux/ppc64le",
+				},
+			},
+			expectedOut: "linux/amd64* (+3), linux/arm64, linux/arm* (+2), linux/ppc64le, (5 more)",
+		},
+		{
+			name:      "no major with multiple variants",
+			platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6", "linux/mips64le/softfloat", "linux/mips64le/hardfloat"},
+			max:       4,
+			expectedList: map[string][]string{
+				"linux/arm": {
+					"linux/arm/v7",
+					"linux/arm/v6",
+				},
+				"linux/arm64": {
+					"linux/arm64",
+				},
+				"linux/mips64le": {
+					"linux/mips64le/softfloat",
+					"linux/mips64le/hardfloat",
+				},
+			},
+			expectedOut: "linux/arm64, linux/arm (+2), linux/mips64le (+2)",
+		},
+	}
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			tpfs := truncPlatforms(tt.platforms, tt.max)
+			assert.Equal(t, tt.expectedList, tpfs.List())
+			assert.Equal(t, tt.expectedOut, tpfs.String())
+		})
+	}
+}
@@ -16,6 +16,9 @@ import (
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
+	gateway "github.com/moby/buildkit/frontend/gateway/client"
+	pb "github.com/moby/buildkit/solver/pb"
+	"github.com/moby/buildkit/util/apicaps"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
@@ -25,7 +28,9 @@ type pruneOptions struct {
 	builder string
 	all     bool
 	filter  opts.FilterOpt
-	keepStorage opts.MemBytes
+	reservedSpace opts.MemBytes
+	maxUsedSpace  opts.MemBytes
+	minFreeSpace  opts.MemBytes
 	force   bool
 	verbose bool
 }
@@ -105,8 +110,19 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
 	if err != nil {
 		return err
 	}
+	// check if the client supports newer prune options
+	if opts.maxUsedSpace.Value() != 0 || opts.minFreeSpace.Value() != 0 {
+		caps, err := loadLLBCaps(ctx, c)
+		if err != nil {
+			return errors.Wrap(err, "failed to load buildkit capabilities for prune")
+		}
+		if caps.Supports(pb.CapGCFreeSpaceFilter) != nil {
+			return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
+		}
+	}
+
 	popts := []client.PruneOption{
-		client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
+		client.WithKeepOpt(pi.KeepDuration, opts.reservedSpace.Value(), opts.maxUsedSpace.Value(), opts.minFreeSpace.Value()),
 		client.WithFilter(pi.Filter),
 	}
 	if opts.all {
@@ -131,6 +147,17 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
 	return nil
 }

+func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
+	var caps apicaps.CapSet
+	_, err := c.Build(ctx, client.SolveOpt{
+		Internal: true,
+	}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
+		caps = c.BuildOpts().LLBCaps
+		return nil, nil
+	}, nil)
+	return caps, err
+}
+
 func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	options := pruneOptions{filter: opts.NewFilterOpt()}

@@ -148,10 +175,15 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	flags := cmd.Flags()
 	flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
 	flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
-	flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
+	flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache")
+	flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
+	flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
 	flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
 	flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")

+	flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
+	flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to max-storage")
+
 	return cmd
 }

@@ -195,6 +227,8 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 	case 1:
 		if filterKey == "id" {
 			filters = append(filters, filterKey+"~="+values[0])
+		} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
+			filters = append(filters, filterKey+"="+values[0])
 		} else {
 			filters = append(filters, filterKey+"=="+values[0])
 		}
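The three new flags replace the deprecated `keep-storage` and map onto BuildKit's reserved-space/max-used-space/min-free-space GC settings, gated on `CapGCFreeSpaceFilter` as shown above. A small stand-alone sketch of how such flag values parse (it uses docker/cli's `opts.MemBytes`, the same type the flags above bind to; the sizes are illustrative):

package main

import (
	"fmt"

	"github.com/docker/cli/opts"
)

func main() {
	var reserved, maxUsed, minFree opts.MemBytes
	// Human-readable sizes parse to byte counts, matching what
	// --reserved-space, --max-used-space and --min-free-space accept.
	_ = reserved.Set("10GB")
	_ = maxUsed.Set("50GB")
	_ = minFree.Set("20GB")
	fmt.Println(reserved.Value(), maxUsed.Value(), minFree.Value())
}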
@@ -150,8 +150,9 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
 		return err
 	}

-	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
-	defer cancel()
+	timeoutCtx, cancel := context.WithCancelCause(ctx)
+	timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
+	defer func() { cancel(errors.WithStack(context.Canceled)) }()

 	eg, _ := errgroup.WithContext(timeoutCtx)
 	for _, b := range builders {
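The cause-aware timeout pattern in this hunk (also used near the top of this section) is plain Go 1.21+ standard library. A self-contained sketch; the cancel func returned by `WithTimeoutCause` can be dropped because cancelling the parent, or hitting the deadline, releases the derived context:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	defer func() { cancel(errors.New("caller finished")) }()

	// The deadline carries an explicit cause instead of the generic
	// context.DeadlineExceeded message.
	ctx, _ = context.WithTimeoutCause(ctx, 50*time.Millisecond, errors.New("timed out waiting for builders"))

	<-ctx.Done()
	fmt.Println(context.Cause(ctx)) // timed out waiting for builders
}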
@@ -1,12 +1,14 @@
 package commands

 import (
+	"fmt"
 	"os"

 	debugcmd "github.com/docker/buildx/commands/debug"
 	imagetoolscmd "github.com/docker/buildx/commands/imagetools"
 	"github.com/docker/buildx/controller/remote"
 	"github.com/docker/buildx/util/cobrautil/completion"
+	"github.com/docker/buildx/util/confutil"
 	"github.com/docker/buildx/util/logutil"
 	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
@@ -20,6 +22,7 @@ import (
 )

 func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command {
+	var opt rootOptions
 	cmd := &cobra.Command{
 		Short: "Docker Buildx",
 		Long:  `Extended build capabilities with BuildKit`,
@@ -31,12 +34,25 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 			HiddenDefaultCmd: true,
 		},
 		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
+			if opt.debug {
+				debug.Enable()
+			}
 			cmd.SetContext(appcontext.Context())
 			if !isPlugin {
 				return nil
 			}
 			return plugin.PersistentPreRunE(cmd, args)
 		},
+		RunE: func(cmd *cobra.Command, args []string) error {
+			if len(args) == 0 {
+				return cmd.Help()
+			}
+			_ = cmd.Help()
+			return cli.StatusError{
+				StatusCode: 1,
+				Status:     fmt.Sprintf("ERROR: unknown command: %q", args[0]),
+			}
+		},
 	}
 	if !isPlugin {
 		// match plugin behavior for standalone mode
@@ -46,11 +62,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		cmd.TraverseChildren = true
 		cmd.DisableFlagsInUseLine = true
 		cli.DisableFlagsInUseLine(cmd)
-
-		// DEBUG=1 should perform the same as --debug at the docker root level
-		if debug.IsEnabled() {
-			debug.Enable()
-		}
 	}

 	logrus.SetFormatter(&logutil.Formatter{})
@@ -63,20 +74,20 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		"using default config store",
 	))

-	if !isExperimental() {
+	if !confutil.IsExperimental() {
 		cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
 	}

-	addCommands(cmd, dockerCli)
+	addCommands(cmd, &opt, dockerCli)
 	return cmd
 }

 type rootOptions struct {
 	builder string
+	debug   bool
 }

-func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
-	opts := &rootOptions{}
+func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
 	rootFlags(opts, cmd.PersistentFlags())

 	cmd.AddCommand(
@@ -94,9 +105,9 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
 		versionCmd(dockerCli),
 		pruneCmd(dockerCli, opts),
 		duCmd(dockerCli, opts),
-		imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
+		imagetoolscmd.RootCmd(cmd, dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
 	)
-	if isExperimental() {
+	if confutil.IsExperimental() {
 		cmd.AddCommand(debugcmd.RootCmd(dockerCli,
 			newDebuggableBuild(dockerCli, opts),
 		))
@@ -111,4 +122,5 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {

 func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
 	flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
+	flags.BoolVarP(&options.debug, "debug", "D", debug.IsEnabled(), "Enable debug logging")
 }
@@ -15,7 +15,7 @@ import (
 type uninstallOptions struct {
 }

-func runUninstall(dockerCli command.Cli, in uninstallOptions) error {
+func runUninstall(_ command.Cli, _ uninstallOptions) error {
 	dir := config.Dir()
 	cfg, err := config.Load(dir)
 	if err != nil {
@@ -46,7 +46,6 @@ func runUse(dockerCli command.Cli, in useOptions) error {
 				return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder)
 			}
 		}
-
 	}
 	return errors.Wrapf(err, "failed to find instance %q", in.builder)
 }
@@ -1,17 +1,22 @@
 package commands

 import (
+	"bufio"
 	"context"
+	"fmt"
 	"io"
+	"os"
+	"runtime"
+	"strings"

-	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/streams"
 )

 func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool, error) {
 	done := make(chan struct{})
 	var ok bool
 	go func() {
-		ok = command.PromptForConfirmation(ins, out, msg)
+		ok = promptForConfirmation(ins, out, msg)
 		close(done)
 	}()
 	select {
@@ -21,3 +26,32 @@ func prompt(ctx context.Context, ins io.Reader, out io.Writer, msg string) (bool
 		return ok, nil
 	}
 }
+
+// promptForConfirmation requests and checks confirmation from user.
+// This will display the provided message followed by ' [y/N] '. If
+// the user input 'y' or 'Y' it returns true other false. If no
+// message is provided "Are you sure you want to proceed? [y/N] "
+// will be used instead.
+//
+// Copied from github.com/docker/cli since the upstream version changed
+// recently with an incompatible change.
+//
+// See https://github.com/docker/buildx/pull/2359#discussion_r1544736494
+// for discussion on the issue.
+func promptForConfirmation(ins io.Reader, outs io.Writer, message string) bool {
+	if message == "" {
+		message = "Are you sure you want to proceed?"
+	}
+	message += " [y/N] "
+
+	_, _ = fmt.Fprint(outs, message)
+
+	// On Windows, force the use of the regular OS stdin stream.
+	if runtime.GOOS == "windows" {
+		ins = streams.NewIn(os.Stdin)
+	}
+
+	reader := bufio.NewReader(ins)
+	answer, _, _ := reader.ReadLine()
+	return strings.ToLower(string(answer)) == "y"
+}
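A hedged usage sketch for the context-aware prompt above (same package; `confirmRemoval` is a hypothetical helper, not part of the diff). The goroutine inside `prompt` keeps a blocked stdin read from wedging the caller once the context is cancelled:

// confirmRemoval is illustrative only: ask the user before a destructive
// operation, honoring cancellation of the surrounding context.
func confirmRemoval(ctx context.Context, dockerCli command.Cli) (bool, error) {
	return prompt(ctx, dockerCli.In(), dockerCli.Out(), "Remove all inactive builders?")
}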
@@ -11,7 +11,7 @@ import (
 	"github.com/spf13/cobra"
 )

-func runVersion(dockerCli command.Cli) error {
+func runVersion(_ command.Cli) error {
 	fmt.Println(version.Package, version.Version, version.Revision)
 	return nil
 }
@@ -3,7 +3,6 @@ package build
 import (
 	"context"
 	"io"
-	"os"
 	"path/filepath"
 	"strings"
 	"sync"
@@ -19,9 +18,8 @@ import (
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/config"
 	dockeropts "github.com/docker/cli/opts"
-	"github.com/docker/go-units"
+	"github.com/docker/docker/api/types/container"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/session/auth/authprovider"
 	"github.com/moby/buildkit/util/grpcerrors"
@@ -36,9 +34,9 @@ const defaultTargetName = "default"
 // NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
 // this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
 // inspect the result and debug the cause of that error.
-func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
+func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
 	if in.NoCache && len(in.NoCacheFilter) > 0 {
-		return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
+		return nil, nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
 	}

 	contexts := map[string]build.NamedContext{}
@@ -50,7 +48,7 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
 		Inputs: build.Inputs{
 			ContextPath:    in.ContextPath,
 			DockerfilePath: in.DockerfileName,
-			InStream:       inStream,
+			InStream:       build.NewSyncMultiReader(inStream),
 			NamedContexts:  contexts,
 		},
 		Ref: in.Ref,
@@ -67,20 +65,21 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
 		Target:   in.Target,
 		Ulimits:  controllerUlimitOpt2DockerUlimit(in.Ulimits),
 		GroupRef: in.GroupRef,
+		ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
 	}

 	platforms, err := platformutil.Parse(in.Platforms)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	opts.Platforms = platforms

-	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
+	dockerConfig := dockerCli.ConfigFile()
 	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig, nil))

 	secrets, err := controllerapi.CreateSecrets(in.Secrets)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	opts.Session = append(opts.Session, secrets)

@@ -90,53 +89,54 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
 	}
 	ssh, err := controllerapi.CreateSSH(sshSpecs)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	opts.Session = append(opts.Session, ssh)

 	outputs, err := controllerapi.CreateExports(in.Exports)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	if in.ExportPush {
-		if in.ExportLoad {
-			return nil, nil, errors.Errorf("push and load may not be set together at the moment")
-		}
-		if len(outputs) == 0 {
-			outputs = []client.ExportEntry{{
-				Type: "image",
-				Attrs: map[string]string{
-					"push": "true",
-				},
-			}}
-		} else {
-			switch outputs[0].Type {
-			case "image":
-				outputs[0].Attrs["push"] = "true"
-			default:
-				return nil, nil, errors.Errorf("push and %q output can't be used together", outputs[0].Type)
-			}
-		}
+		var pushUsed bool
+		for i := range outputs {
+			if outputs[i].Type == client.ExporterImage {
+				outputs[i].Attrs["push"] = "true"
+				pushUsed = true
+			}
+		}
+		if !pushUsed {
+			outputs = append(outputs, client.ExportEntry{
+				Type: client.ExporterImage,
+				Attrs: map[string]string{
+					"push": "true",
+				},
+			})
+		}
 	}
 	if in.ExportLoad {
-		if len(outputs) == 0 {
-			outputs = []client.ExportEntry{{
-				Type:  "docker",
-				Attrs: map[string]string{},
-			}}
-		} else {
-			switch outputs[0].Type {
-			case "docker":
-			default:
-				return nil, nil, errors.Errorf("load and %q output can't be used together", outputs[0].Type)
-			}
-		}
+		var loadUsed bool
+		for i := range outputs {
+			if outputs[i].Type == client.ExporterDocker {
+				if _, ok := outputs[i].Attrs["dest"]; !ok {
+					loadUsed = true
+					break
+				}
+			}
+		}
+		if !loadUsed {
+			outputs = append(outputs, client.ExportEntry{
+				Type:  client.ExporterDocker,
+				Attrs: map[string]string{},
+			})
+		}
 	}

 	annotations, err := buildflags.ParseAnnotations(in.Annotations)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, errors.Wrap(err, "parse annotations")
 	}

 	for _, o := range outputs {
 		for k, v := range annotations {
 			o.Attrs[k.String()] = v
@@ -154,14 +154,15 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build

 	allow, err := buildflags.ParseEntitlements(in.Allow)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	opts.Allow = allow

-	if in.PrintFunc != nil {
-		opts.PrintFunc = &build.PrintFunc{
-			Name:   in.PrintFunc.Name,
-			Format: in.PrintFunc.Format,
+	if in.CallFunc != nil {
+		opts.CallFunc = &build.CallFunc{
+			Name:         in.CallFunc.Name,
+			Format:       in.CallFunc.Format,
+			IgnoreStatus: in.CallFunc.IgnoreStatus,
 		}
 	}

@@ -177,23 +178,28 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
 		builder.WithContextPathHash(contextPathHash),
 	)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}
 	if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
-		return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
+		return nil, nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
 	}
 	nodes, err := b.LoadNodes(ctx)
 	if err != nil {
-		return nil, nil, err
+		return nil, nil, nil, err
 	}

-	resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
+	var inputs *build.Inputs
+	buildOptions := map[string]build.Options{defaultTargetName: opts}
+	resp, res, err := buildTargets(ctx, dockerCli, nodes, buildOptions, progress, generateResult)
 	err = wrapBuildError(err, false)
 	if err != nil {
 		// NOTE: buildTargets can return *build.ResultHandle even on error.
-		return nil, res, err
+		return nil, res, nil, err
 	}
-	return resp, res, nil
+	if i, ok := buildOptions[defaultTargetName]; ok {
+		inputs = &i.Inputs
+	}
+	return resp, res, inputs, nil
 }

 // buildTargets runs the specified build and returns the result.
@@ -201,14 +207,14 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.Build
 // NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
 // this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
 // inspect the result and debug the cause of that error.
-func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
+func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
 	var res *build.ResultHandle
 	var resp map[string]*client.SolveResponse
 	var err error
 	if generateResult {
 		var mu sync.Mutex
 		var idx int
-		resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
+		resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
 			mu.Lock()
 			defer mu.Unlock()
 			if res == nil || driverIndex < idx {
@@ -216,7 +222,7 @@ func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGrou
 		}
 		})
 	} else {
-		resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
+		resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress)
 	}
 	if err != nil {
 		return nil, res, err
@@ -268,9 +274,9 @@ func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.Ul
 	if u == nil {
 		return nil
 	}
-	values := make(map[string]*units.Ulimit)
+	values := make(map[string]*container.Ulimit)
 	for k, v := range u.Values {
-		values[k] = &units.Ulimit{
+		values[k] = &container.Ulimit{
 			Name: v.Name,
 			Hard: v.Hard,
 			Soft: v.Soft,
@@ -4,18 +4,19 @@ import (
 	"context"
 	"io"

+	"github.com/docker/buildx/build"
 	controllerapi "github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/util/progress"
 	"github.com/moby/buildkit/client"
 )

 type BuildxController interface {
-	Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
+	Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error)
 	// Invoke starts an IO session into the specified process.
 	// If pid doesn't matche to any running processes, it starts a new process with the specified config.
 	// If there is no container running or InvokeConfig.Rollback is speicfied, the process will start in a newly created container.
 	// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
-	Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
+	Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
 	Kill(ctx context.Context) error
 	Close() error
 	List(ctx context.Context) (refs []string, _ error)
@@ -1,7 +1,10 @@
 package errdefs

 import (
+	"io"
+
 	"github.com/containerd/typeurl/v2"
+	"github.com/docker/buildx/util/desktop"
 	"github.com/moby/buildkit/util/grpcerrors"
 )

@@ -10,7 +13,7 @@ func init() {
 }

 type BuildError struct {
-	Build
+	*Build
 	error
 }

@@ -19,16 +22,27 @@ func (e *BuildError) Unwrap() error {
 }

 func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
-	return &e.Build
+	return e.Build
 }

+func (e *BuildError) PrintBuildDetails(w io.Writer) error {
+	if e.Ref == "" {
+		return nil
+	}
+	ebr := &desktop.ErrorWithBuildRef{
+		Ref: e.Ref,
+		Err: e.error,
+	}
+	return ebr.Print(w)
+}
+
-func WrapBuild(err error, ref string) error {
+func WrapBuild(err error, sessionID string, ref string) error {
 	if err == nil {
 		return nil
 	}
-	return &BuildError{Build: Build{Ref: ref}, error: err}
+	return &BuildError{Build: &Build{SessionID: sessionID, Ref: ref}, error: err}
 }

 func (b *Build) WrapError(err error) error {
-	return &BuildError{error: err, Build: *b}
+	return &BuildError{error: err, Build: b}
 }
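A short sketch of how a caller can recover the new SessionID alongside the build ref from a wrapped error (standard-library `errors`; the wrapped error value is illustrative):

// WrapBuild attaches session and ref; errors.As digs the typed
// *BuildError back out of a wrapped chain.
err := WrapBuild(errors.New("build failed"), "sess-123", "ref-abc")
var be *BuildError
if errors.As(err, &be) {
	fmt.Println(be.SessionID, be.Ref)
}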
@@ -1,77 +1,157 @@
-// Code generated by protoc-gen-gogo. DO NOT EDIT.
-// source: errdefs.proto
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// 	protoc-gen-go v1.34.1
+// 	protoc        v3.11.4
+// source: github.com/docker/buildx/controller/errdefs/errdefs.proto

 package errdefs

 import (
-	fmt "fmt"
-	proto "github.com/gogo/protobuf/proto"
-	_ "github.com/moby/buildkit/solver/pb"
-	math "math"
+	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+	reflect "reflect"
+	sync "sync"
 )

-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the proto package it is being compiled against.
-// A compilation error at this line likely means your copy of the
-// proto package needs to be updated.
-const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
+const (
+	// Verify that this generated code is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+	// Verify that runtime/protoimpl is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)

 type Build struct {
-	Ref                  string   `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
-	XXX_NoUnkeyedLiteral struct{} `json:"-"`
-	XXX_unrecognized     []byte   `json:"-"`
-	XXX_sizecache        int32    `json:"-"`
+	state         protoimpl.MessageState
+	sizeCache     protoimpl.SizeCache
+	unknownFields protoimpl.UnknownFields
+
+	SessionID string `protobuf:"bytes,1,opt,name=SessionID,proto3" json:"SessionID,omitempty"`
+	Ref       string `protobuf:"bytes,2,opt,name=Ref,proto3" json:"Ref,omitempty"`
+}
+
+func (x *Build) Reset() {
+	*x = Build{}
+	if protoimpl.UnsafeEnabled {
+		mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		ms.StoreMessageInfo(mi)
+	}
+}
+
+func (x *Build) String() string {
+	return protoimpl.X.MessageStringOf(x)
 }

-func (m *Build) Reset()         { *m = Build{} }
-func (m *Build) String() string { return proto.CompactTextString(m) }
 func (*Build) ProtoMessage() {}

+func (x *Build) ProtoReflect() protoreflect.Message {
+	mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
+	if protoimpl.UnsafeEnabled && x != nil {
+		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+		if ms.LoadMessageInfo() == nil {
+			ms.StoreMessageInfo(mi)
+		}
+		return ms
+	}
+	return mi.MessageOf(x)
+}
+
+// Deprecated: Use Build.ProtoReflect.Descriptor instead.
 func (*Build) Descriptor() ([]byte, []int) {
-	return fileDescriptor_689dc58a5060aff5, []int{0}
-}
-func (m *Build) XXX_Unmarshal(b []byte) error {
-	return xxx_messageInfo_Build.Unmarshal(m, b)
-}
-func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
-	return xxx_messageInfo_Build.Marshal(b, m, deterministic)
-}
-func (m *Build) XXX_Merge(src proto.Message) {
-	xxx_messageInfo_Build.Merge(m, src)
-}
-func (m *Build) XXX_Size() int {
-	return xxx_messageInfo_Build.Size(m)
-}
-func (m *Build) XXX_DiscardUnknown() {
-	xxx_messageInfo_Build.DiscardUnknown(m)
+	return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP(), []int{0}
 }

-var xxx_messageInfo_Build proto.InternalMessageInfo
+func (x *Build) GetSessionID() string {
+	if x != nil {
+		return x.SessionID
+	}
+	return ""
+}

-func (m *Build) GetRef() string {
-	if m != nil {
-		return m.Ref
+func (x *Build) GetRef() string {
+	if x != nil {
+		return x.Ref
 	}
 	return ""
 }

-func init() {
-	proto.RegisterType((*Build)(nil), "errdefs.Build")
-}
+var File_github_com_docker_buildx_controller_errdefs_errdefs_proto protoreflect.FileDescriptor

-func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
-
-var fileDescriptor_689dc58a5060aff5 = []byte{
-	// 111 bytes of a gzipped FileDescriptorProto
-	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
-	0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
-	0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
-	0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
-	0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
-	0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
-	0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
-}
+var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = []byte{
+	0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63,
+	0x6b, 0x65, 0x72, 0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72,
+	0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x2f, 0x65, 0x72,
+	0x72, 0x64, 0x65, 0x66, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x15, 0x64, 0x6f, 0x63,
+	0x6b, 0x65, 0x72, 0x2e, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2e, 0x65, 0x72, 0x72, 0x64, 0x65,
+	0x66, 0x73, 0x22, 0x37, 0x0a, 0x05, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x53,
+	0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09,
+	0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x10, 0x0a, 0x03, 0x52, 0x65, 0x66,
+	0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x52, 0x65, 0x66, 0x42, 0x2d, 0x5a, 0x2b, 0x67,
+	0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, 0x6b, 0x65, 0x72,
+	0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c,
+	0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
+	0x6f, 0x33,
+}
+
+var (
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce sync.Once
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc
+)
+
+func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP() []byte {
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce.Do(func() {
+		file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData)
+	})
+	return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData
+}
+
+var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = []interface{}{
+	(*Build)(nil), // 0: docker.buildx.errdefs.Build
+}
+var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = []int32{
+	0, // [0:0] is the sub-list for method output_type
+	0, // [0:0] is the sub-list for method input_type
+	0, // [0:0] is the sub-list for extension type_name
+	0, // [0:0] is the sub-list for extension extendee
+	0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() }
+func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() {
+	if File_github_com_docker_buildx_controller_errdefs_errdefs_proto != nil {
+		return
+	}
+	if !protoimpl.UnsafeEnabled {
+		file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+			switch v := v.(*Build); i {
+			case 0:
+				return &v.state
+			case 1:
+				return &v.sizeCache
+			case 2:
+				return &v.unknownFields
+			default:
+				return nil
+			}
+		}
+	}
+	type x struct{}
+	out := protoimpl.TypeBuilder{
+		File: protoimpl.DescBuilder{
+			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+			RawDescriptor: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc,
+			NumEnums:      0,
+			NumMessages:   1,
+			NumExtensions: 0,
+			NumServices:   0,
+		},
+		GoTypes:           file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes,
+		DependencyIndexes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs,
+		MessageInfos:      file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes,
+	}.Build()
+	File_github_com_docker_buildx_controller_errdefs_errdefs_proto = out.File
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = nil
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = nil
+	file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = nil
+}
@@ -1,9 +1,10 @@
 syntax = "proto3";

-package errdefs;
+package docker.buildx.errdefs;

-import "github.com/moby/buildkit/solver/pb/ops.proto";
+option go_package = "github.com/docker/buildx/controller/errdefs";

 message Build {
-    string Ref = 1;
+    string SessionID = 1;
+    string Ref = 2;
 }
241
controller/errdefs/errdefs_vtproto.pb.go
Normal file
241
controller/errdefs/errdefs_vtproto.pb.go
Normal file
@@ -0,0 +1,241 @@
+// Code generated by protoc-gen-go-vtproto. DO NOT EDIT.
+// protoc-gen-go-vtproto version: v0.6.1-0.20240319094008-0393e58bdf10
+// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
+
+package errdefs
+
+import (
+	fmt "fmt"
+	protohelpers "github.com/planetscale/vtprotobuf/protohelpers"
+	proto "google.golang.org/protobuf/proto"
+	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+	io "io"
+)
+
+const (
+	// Verify that this generated code is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+	// Verify that runtime/protoimpl is sufficiently up-to-date.
+	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+func (m *Build) CloneVT() *Build {
+	if m == nil {
+		return (*Build)(nil)
+	}
+	r := new(Build)
+	r.SessionID = m.SessionID
+	r.Ref = m.Ref
+	if len(m.unknownFields) > 0 {
+		r.unknownFields = make([]byte, len(m.unknownFields))
+		copy(r.unknownFields, m.unknownFields)
+	}
+	return r
+}
+
+func (m *Build) CloneMessageVT() proto.Message {
+	return m.CloneVT()
+}
+
+func (this *Build) EqualVT(that *Build) bool {
+	if this == that {
+		return true
+	} else if this == nil || that == nil {
+		return false
+	}
+	if this.SessionID != that.SessionID {
+		return false
+	}
+	if this.Ref != that.Ref {
+		return false
+	}
+	return string(this.unknownFields) == string(that.unknownFields)
+}
+
+func (this *Build) EqualMessageVT(thatMsg proto.Message) bool {
+	that, ok := thatMsg.(*Build)
+	if !ok {
+		return false
+	}
+	return this.EqualVT(that)
+}
+func (m *Build) MarshalVT() (dAtA []byte, err error) {
+	if m == nil {
+		return nil, nil
+	}
+	size := m.SizeVT()
+	dAtA = make([]byte, size)
+	n, err := m.MarshalToSizedBufferVT(dAtA[:size])
+	if err != nil {
+		return nil, err
+	}
+	return dAtA[:n], nil
+}
+
+func (m *Build) MarshalToVT(dAtA []byte) (int, error) {
+	size := m.SizeVT()
+	return m.MarshalToSizedBufferVT(dAtA[:size])
+}
+
+func (m *Build) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
+	if m == nil {
+		return 0, nil
+	}
+	i := len(dAtA)
+	_ = i
+	var l int
+	_ = l
+	if m.unknownFields != nil {
+		i -= len(m.unknownFields)
+		copy(dAtA[i:], m.unknownFields)
+	}
+	if len(m.Ref) > 0 {
+		i -= len(m.Ref)
+		copy(dAtA[i:], m.Ref)
+		i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ref)))
+		i--
+		dAtA[i] = 0x12
+	}
+	if len(m.SessionID) > 0 {
+		i -= len(m.SessionID)
+		copy(dAtA[i:], m.SessionID)
+		i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SessionID)))
+		i--
+		dAtA[i] = 0xa
+	}
+	return len(dAtA) - i, nil
+}
+
+func (m *Build) SizeVT() (n int) {
+	if m == nil {
+		return 0
+	}
+	var l int
+	_ = l
+	l = len(m.SessionID)
+	if l > 0 {
+		n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+	}
+	l = len(m.Ref)
+	if l > 0 {
+		n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
+	}
+	n += len(m.unknownFields)
+	return n
+}
+
+func (m *Build) UnmarshalVT(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return protohelpers.ErrIntOverflow
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= uint64(b&0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: Build: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: Build: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return protohelpers.ErrIntOverflow
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return protohelpers.ErrInvalidLength
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return protohelpers.ErrInvalidLength
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.SessionID = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 2:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Ref", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return protohelpers.ErrIntOverflow
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= uint64(b&0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return protohelpers.ErrInvalidLength
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex < 0 {
+				return protohelpers.ErrInvalidLength
+			}
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Ref = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		default:
+			iNdEx = preIndex
+			skippy, err := protohelpers.Skip(dAtA[iNdEx:])
+			if err != nil {
+				return err
+			}
+			if (skippy < 0) || (iNdEx+skippy) < 0 {
+				return protohelpers.ErrInvalidLength
+			}
+			if (iNdEx + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
+			iNdEx += skippy
+		}
+	}
+
+	if iNdEx > l {
+		return io.ErrUnexpectedEOF
+	}
+	return nil
+}
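The generated vtproto fast paths above avoid reflection on the hot path. A small round-trip sketch using only methods defined in this file (values illustrative):

	m := &errdefs.Build{SessionID: "local", Ref: "build-ref-1"}
	data, err := m.MarshalVT() // size-exact buffer, written back-to-front
	if err != nil {
		panic(err)
	}
	var out errdefs.Build
	if err := out.UnmarshalVT(data); err != nil {
		panic(err)
	}
	fmt.Println(out.EqualVT(m)) // true: compares SessionID, Ref and unknown fields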
@@ -1,3 +0,0 @@
-package errdefs
-
-//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto
@@ -11,6 +11,7 @@ import (
 	controllererrors "github.com/docker/buildx/controller/errdefs"
 	controllerapi "github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/controller/processes"
+	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/util/ioset"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
@@ -21,7 +22,7 @@ import (
 func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
 	return &localController{
 		dockerCli: dockerCli,
-		ref:       "local",
+		sessionID: "local",
 		processes: processes.NewManager(),
 	}
 }
@@ -35,46 +36,51 @@ type buildConfig struct {

 type localController struct {
 	dockerCli command.Cli
-	ref       string
+	sessionID string
 	buildConfig buildConfig
 	processes *processes.Manager

 	buildOnGoing atomic.Bool
 }

-func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
+func (b *localController) Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
 	if !b.buildOnGoing.CompareAndSwap(false, true) {
-		return "", nil, errors.New("build ongoing")
+		return "", nil, nil, errors.New("build ongoing")
 	}
 	defer b.buildOnGoing.Store(false)

-	resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
+	resp, res, dockerfileMappings, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
 	// NOTE: RunBuild can return *build.ResultHandle even on error.
 	if res != nil {
 		b.buildConfig = buildConfig{
 			resultCtx:    res,
-			buildOptions: &options,
+			buildOptions: options,
 		}
 		if buildErr != nil {
-			buildErr = controllererrors.WrapBuild(buildErr, b.ref)
+			var ref string
+			var ebr *desktop.ErrorWithBuildRef
+			if errors.As(buildErr, &ebr) {
+				ref = ebr.Ref
+			}
+			buildErr = controllererrors.WrapBuild(buildErr, b.sessionID, ref)
 		}
 	}
 	if buildErr != nil {
-		return "", nil, buildErr
+		return "", nil, nil, buildErr
 	}
-	return b.ref, resp, nil
+	return b.sessionID, resp, dockerfileMappings, nil
 }

-func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
-	if ref != b.ref {
-		return nil, errors.Errorf("unknown ref %q", ref)
+func (b *localController) ListProcesses(ctx context.Context, sessionID string) (infos []*controllerapi.ProcessInfo, retErr error) {
+	if sessionID != b.sessionID {
+		return nil, errors.Errorf("unknown session ID %q", sessionID)
 	}
 	return b.processes.ListProcesses(), nil
 }

-func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
-	if ref != b.ref {
-		return errors.Errorf("unknown ref %q", ref)
+func (b *localController) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
-	if sessionID != b.sessionID {
+	if sessionID != b.sessionID {
+		return errors.Errorf("unknown session ID %q", sessionID)
 	}
 	return b.processes.DeleteProcess(pid)
 }
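For callers, the Build signature change is mechanical but breaking: options is now passed by pointer and a fourth return value carries the Dockerfile input mappings. A sketch of an updated call site, assuming a constructed controller c and populated options (variable names illustrative):

	sessionID, resp, inputs, err := c.Build(ctx, options, nil, pw)
	if err != nil {
		return err // may be wrapped with session ID and build ref via controllererrors.WrapBuild
	}
	_ = inputs // *build.Inputs: Dockerfile mapping info returned by cbuild.RunBuild
	fmt.Println(sessionID, resp.ExporterResponse)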
@@ -83,9 +89,9 @@ func (b *localController) cancelRunningProcesses() {
 	b.processes.CancelRunningProcesses()
 }

-func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
-	if ref != b.ref {
-		return errors.Errorf("unknown ref %q", ref)
+func (b *localController) Invoke(ctx context.Context, sessionID string, pid string, cfg *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
+	if sessionID != b.sessionID {
+		return errors.Errorf("unknown session ID %q", sessionID)
 	}

 	proc, ok := b.processes.Get(pid)
@@ -95,7 +101,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf
 		return errors.New("no build result is registered")
 	}
 	var err error
-	proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
+	proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, cfg)
 	if err != nil {
 		return err
 	}
@@ -103,7 +109,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf

 	// Attach containerIn to this process
 	ioCancelledCh := make(chan struct{})
-	proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })
+	proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func(error) { close(ioCancelledCh) })

 	select {
 	case <-ioCancelledCh:
@@ -111,7 +117,7 @@ func (b *localController) Invoke(ctx context.Context, ref string, pid string, cf
 	case err := <-proc.Done():
 		return err
 	case <-ctx.Done():
-		return ctx.Err()
+		return context.Cause(ctx)
 	}
 }

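Switching from ctx.Err() to context.Cause(ctx) preserves the reason a context was cancelled whenever the canceller used context.WithCancelCause (Go 1.20+); for plain cancellation the two are equivalent. A standalone illustration:

	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.New("monitor detached")) // illustrative cause
	<-ctx.Done()
	fmt.Println(ctx.Err())          // context canceled
	fmt.Println(context.Cause(ctx)) // monitor detached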
@@ -130,7 +136,7 @@ func (b *localController) Close() error {
 }

 func (b *localController) List(ctx context.Context) (res []string, _ error) {
-	return []string{b.ref}, nil
+	return []string{b.sessionID}, nil
 }

 func (b *localController) Disconnect(ctx context.Context, key string) error {
@@ -138,9 +144,9 @@ func (b *localController) Disconnect(ctx context.Context, key string) error {
 	return nil
 }

-func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
-	if ref != b.ref {
-		return nil, errors.Errorf("unknown ref %q", ref)
+func (b *localController) Inspect(ctx context.Context, sessionID string) (*controllerapi.InspectResponse, error) {
+	if sessionID != b.sessionID {
+		return nil, errors.Errorf("unknown session ID %q", sessionID)
 	}
 	return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
 }
(File diff suppressed because it is too large)
controller/pb/controller.proto
@@ -5,7 +5,7 @@ package buildx.controller.v1;
 import "github.com/moby/buildkit/api/services/control/control.proto";
 import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";

-option go_package = "pb";
+option go_package = "github.com/docker/buildx/controller/pb";

 service Controller {
 	rpc Build(BuildRequest) returns (BuildResponse);
@@ -21,7 +21,7 @@ service Controller {
 }

 message ListProcessesRequest {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message ListProcessesResponse {
@@ -34,7 +34,7 @@ message ProcessInfo {
 }

 message DisconnectProcessRequest {
-	string Ref = 1;
+	string SessionID = 1;
 	string ProcessID = 2;
 }

@@ -42,14 +42,14 @@ message DisconnectProcessResponse {
 }

 message BuildRequest {
-	string Ref = 1;
+	string SessionID = 1;
 	BuildOptions Options = 2;
 }

 message BuildOptions {
 	string ContextPath = 1;
 	string DockerfileName = 2;
-	PrintFunc PrintFunc = 3;
+	CallFunc CallFunc = 3;
 	map<string, string> NamedContexts = 4;

 	repeated string Allow = 5;
@@ -80,6 +80,7 @@ message BuildOptions {
 	string Ref = 29;
 	string GroupRef = 30;
 	repeated string Annotations = 31;
+	string ProvenanceResponseMode = 32;
 }

 message ExportEntry {
@@ -110,13 +111,14 @@ message Secret {
 	string Env = 3;
 }

-message PrintFunc {
+message CallFunc {
 	string Name = 1;
 	string Format = 2;
+	bool IgnoreStatus = 3;
 }

 message InspectRequest {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message InspectResponse {
@@ -138,13 +140,13 @@ message BuildResponse {
 }

 message DisconnectRequest {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message DisconnectResponse {}

 message ListRequest {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message ListResponse {
@@ -159,7 +161,7 @@ message InputMessage {
 }

 message InputInitMessage {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message DataMessage {
@@ -184,7 +186,7 @@ message Message {
 }

 message InitMessage {
-	string Ref = 1;
+	string SessionID = 1;

 	// If ProcessID already exists in the server, it tries to connect to it
 	// instead of invoking the new one. In this case, InvokeConfig will be ignored.
@@ -225,7 +227,7 @@ message SignalMessage {
 }

 message StatusRequest {
-	string Ref = 1;
+	string SessionID = 1;
 }

 message StatusResponse {
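The field renames above (Ref to SessionID) keep the original tag numbers, so the change is wire-compatible: a peer built against the old schema still reads the same bytes, and only the generated Go identifiers change. A sketch of the updated request construction (values illustrative; CallFunc was formerly PrintFunc, and IgnoreStatus is new):

	req := &pb.BuildRequest{
		SessionID: "local", // tag 1, previously named Ref
		Options: &pb.BuildOptions{
			ContextPath: ".",
			CallFunc: &pb.CallFunc{
				Name:   "check",
				Format: "json",
			},
		},
	}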
controller/pb/controller_grpc.pb.go (new file, 452 lines)
@@ -0,0 +1,452 @@
+// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
+// versions:
+// - protoc-gen-go-grpc v1.5.1
+// - protoc             v3.11.4
+// source: github.com/docker/buildx/controller/pb/controller.proto
+
+package pb
+
+import (
+	context "context"
+	grpc "google.golang.org/grpc"
+	codes "google.golang.org/grpc/codes"
+	status "google.golang.org/grpc/status"
+)
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+// Requires gRPC-Go v1.64.0 or later.
+const _ = grpc.SupportPackageIsVersion9
+
+const (
+	Controller_Build_FullMethodName             = "/buildx.controller.v1.Controller/Build"
+	Controller_Inspect_FullMethodName           = "/buildx.controller.v1.Controller/Inspect"
+	Controller_Status_FullMethodName            = "/buildx.controller.v1.Controller/Status"
+	Controller_Input_FullMethodName             = "/buildx.controller.v1.Controller/Input"
+	Controller_Invoke_FullMethodName            = "/buildx.controller.v1.Controller/Invoke"
+	Controller_List_FullMethodName              = "/buildx.controller.v1.Controller/List"
+	Controller_Disconnect_FullMethodName        = "/buildx.controller.v1.Controller/Disconnect"
+	Controller_Info_FullMethodName              = "/buildx.controller.v1.Controller/Info"
+	Controller_ListProcesses_FullMethodName     = "/buildx.controller.v1.Controller/ListProcesses"
+	Controller_DisconnectProcess_FullMethodName = "/buildx.controller.v1.Controller/DisconnectProcess"
+)
+
+// ControllerClient is the client API for Controller service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
+type ControllerClient interface {
+	Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error)
+	Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error)
+	Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error)
+	Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error)
+	Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error)
+	List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
+	Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error)
+	Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
+	ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error)
+	DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error)
+}
+
+type controllerClient struct {
+	cc grpc.ClientConnInterface
+}
+
+func NewControllerClient(cc grpc.ClientConnInterface) ControllerClient {
+	return &controllerClient{cc}
+}
+
+func (c *controllerClient) Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(BuildResponse)
+	err := c.cc.Invoke(ctx, Controller_Build_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(InspectResponse)
+	err := c.cc.Invoke(ctx, Controller_Inspect_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[0], Controller_Status_FullMethodName, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &grpc.GenericClientStream[StatusRequest, StatusResponse]{ClientStream: stream}
+	if err := x.ClientStream.SendMsg(in); err != nil {
+		return nil, err
+	}
+	if err := x.ClientStream.CloseSend(); err != nil {
+		return nil, err
+	}
+	return x, nil
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_StatusClient = grpc.ServerStreamingClient[StatusResponse]
+
+func (c *controllerClient) Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[1], Controller_Input_FullMethodName, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &grpc.GenericClientStream[InputMessage, InputResponse]{ClientStream: stream}
+	return x, nil
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_InputClient = grpc.ClientStreamingClient[InputMessage, InputResponse]
+
+func (c *controllerClient) Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[2], Controller_Invoke_FullMethodName, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	x := &grpc.GenericClientStream[Message, Message]{ClientStream: stream}
+	return x, nil
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_InvokeClient = grpc.BidiStreamingClient[Message, Message]
+
+func (c *controllerClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(ListResponse)
+	err := c.cc.Invoke(ctx, Controller_List_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(DisconnectResponse)
+	err := c.cc.Invoke(ctx, Controller_Disconnect_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(InfoResponse)
+	err := c.cc.Invoke(ctx, Controller_Info_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(ListProcessesResponse)
+	err := c.cc.Invoke(ctx, Controller_ListProcesses_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (c *controllerClient) DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error) {
+	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
+	out := new(DisconnectProcessResponse)
+	err := c.cc.Invoke(ctx, Controller_DisconnectProcess_FullMethodName, in, out, cOpts...)
+	if err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+// ControllerServer is the server API for Controller service.
+// All implementations should embed UnimplementedControllerServer
+// for forward compatibility.
+type ControllerServer interface {
+	Build(context.Context, *BuildRequest) (*BuildResponse, error)
+	Inspect(context.Context, *InspectRequest) (*InspectResponse, error)
+	Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error
+	Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error
+	Invoke(grpc.BidiStreamingServer[Message, Message]) error
+	List(context.Context, *ListRequest) (*ListResponse, error)
+	Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error)
+	Info(context.Context, *InfoRequest) (*InfoResponse, error)
+	ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error)
+	DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error)
+}
+
+// UnimplementedControllerServer should be embedded to have
+// forward compatible implementations.
+//
+// NOTE: this should be embedded by value instead of pointer to avoid a nil
+// pointer dereference when methods are called.
+type UnimplementedControllerServer struct{}
+
+func (UnimplementedControllerServer) Build(context.Context, *BuildRequest) (*BuildResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Build not implemented")
+}
+func (UnimplementedControllerServer) Inspect(context.Context, *InspectRequest) (*InspectResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Inspect not implemented")
+}
+func (UnimplementedControllerServer) Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error {
+	return status.Errorf(codes.Unimplemented, "method Status not implemented")
+}
+func (UnimplementedControllerServer) Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error {
+	return status.Errorf(codes.Unimplemented, "method Input not implemented")
+}
+func (UnimplementedControllerServer) Invoke(grpc.BidiStreamingServer[Message, Message]) error {
+	return status.Errorf(codes.Unimplemented, "method Invoke not implemented")
+}
+func (UnimplementedControllerServer) List(context.Context, *ListRequest) (*ListResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method List not implemented")
+}
+func (UnimplementedControllerServer) Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Disconnect not implemented")
+}
+func (UnimplementedControllerServer) Info(context.Context, *InfoRequest) (*InfoResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
+}
+func (UnimplementedControllerServer) ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method ListProcesses not implemented")
+}
+func (UnimplementedControllerServer) DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error) {
+	return nil, status.Errorf(codes.Unimplemented, "method DisconnectProcess not implemented")
+}
+func (UnimplementedControllerServer) testEmbeddedByValue() {}
+
+// UnsafeControllerServer may be embedded to opt out of forward compatibility for this service.
+// Use of this interface is not recommended, as added methods to ControllerServer will
+// result in compilation errors.
+type UnsafeControllerServer interface {
+	mustEmbedUnimplementedControllerServer()
+}
+
+func RegisterControllerServer(s grpc.ServiceRegistrar, srv ControllerServer) {
+	// If the following call pancis, it indicates UnimplementedControllerServer was
+	// embedded by pointer and is nil. This will cause panics if an
+	// unimplemented method is ever invoked, so we test this at initialization
+	// time to prevent it from happening at runtime later due to I/O.
+	if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
+		t.testEmbeddedByValue()
+	}
+	s.RegisterService(&Controller_ServiceDesc, srv)
+}
+
+func _Controller_Build_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(BuildRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).Build(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_Build_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).Build(ctx, req.(*BuildRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_Inspect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(InspectRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).Inspect(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_Inspect_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).Inspect(ctx, req.(*InspectRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_Status_Handler(srv interface{}, stream grpc.ServerStream) error {
+	m := new(StatusRequest)
+	if err := stream.RecvMsg(m); err != nil {
+		return err
+	}
+	return srv.(ControllerServer).Status(m, &grpc.GenericServerStream[StatusRequest, StatusResponse]{ServerStream: stream})
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_StatusServer = grpc.ServerStreamingServer[StatusResponse]
+
+func _Controller_Input_Handler(srv interface{}, stream grpc.ServerStream) error {
+	return srv.(ControllerServer).Input(&grpc.GenericServerStream[InputMessage, InputResponse]{ServerStream: stream})
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_InputServer = grpc.ClientStreamingServer[InputMessage, InputResponse]
+
+func _Controller_Invoke_Handler(srv interface{}, stream grpc.ServerStream) error {
+	return srv.(ControllerServer).Invoke(&grpc.GenericServerStream[Message, Message]{ServerStream: stream})
+}
+
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Controller_InvokeServer = grpc.BidiStreamingServer[Message, Message]
+
+func _Controller_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).List(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_List_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).List(ctx, req.(*ListRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_Disconnect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(DisconnectRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).Disconnect(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_Disconnect_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).Disconnect(ctx, req.(*DisconnectRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(InfoRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).Info(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_Info_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).Info(ctx, req.(*InfoRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_ListProcesses_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(ListProcessesRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).ListProcesses(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_ListProcesses_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).ListProcesses(ctx, req.(*ListProcessesRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+func _Controller_DisconnectProcess_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+	in := new(DisconnectProcessRequest)
+	if err := dec(in); err != nil {
+		return nil, err
+	}
+	if interceptor == nil {
+		return srv.(ControllerServer).DisconnectProcess(ctx, in)
+	}
+	info := &grpc.UnaryServerInfo{
+		Server:     srv,
+		FullMethod: Controller_DisconnectProcess_FullMethodName,
+	}
+	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+		return srv.(ControllerServer).DisconnectProcess(ctx, req.(*DisconnectProcessRequest))
+	}
+	return interceptor(ctx, in, info, handler)
+}
+
+// Controller_ServiceDesc is the grpc.ServiceDesc for Controller service.
+// It's only intended for direct use with grpc.RegisterService,
+// and not to be introspected or modified (even as a copy)
+var Controller_ServiceDesc = grpc.ServiceDesc{
+	ServiceName: "buildx.controller.v1.Controller",
+	HandlerType: (*ControllerServer)(nil),
+	Methods: []grpc.MethodDesc{
+		{
+			MethodName: "Build",
+			Handler:    _Controller_Build_Handler,
+		},
+		{
+			MethodName: "Inspect",
+			Handler:    _Controller_Inspect_Handler,
+		},
+		{
+			MethodName: "List",
+			Handler:    _Controller_List_Handler,
+		},
+		{
+			MethodName: "Disconnect",
+			Handler:    _Controller_Disconnect_Handler,
+		},
+		{
+			MethodName: "Info",
+			Handler:    _Controller_Info_Handler,
+		},
+		{
+			MethodName: "ListProcesses",
+			Handler:    _Controller_ListProcesses_Handler,
+		},
+		{
+			MethodName: "DisconnectProcess",
+			Handler:    _Controller_DisconnectProcess_Handler,
+		},
+	},
+	Streams: []grpc.StreamDesc{
+		{
+			StreamName:    "Status",
+			Handler:       _Controller_Status_Handler,
+			ServerStreams: true,
+		},
+		{
+			StreamName:    "Input",
+			Handler:       _Controller_Input_Handler,
+			ClientStreams: true,
+		},
+		{
+			StreamName:    "Invoke",
+			Handler:       _Controller_Invoke_Handler,
+			ServerStreams: true,
+			ClientStreams: true,
+		},
+	},
+	Metadata: "github.com/docker/buildx/controller/pb/controller.proto",
+}
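The generated bindings are wired up like any grpc-go service. A minimal sketch, with an illustrative Unix socket address and a hypothetical myController type embedding pb.UnimplementedControllerServer (imports assumed: net, log, context, fmt, google.golang.org/grpc, google.golang.org/grpc/credentials/insecure):

	lis, err := net.Listen("unix", "/tmp/buildx-controller.sock") // illustrative endpoint
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterControllerServer(s, &myController{}) // hypothetical server implementation
	go s.Serve(lis)

	conn, err := grpc.NewClient("unix:///tmp/buildx-controller.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	cl := pb.NewControllerClient(conn)
	res, err := cl.List(context.Background(), &pb.ListRequest{SessionID: "local"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.String())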
controller/pb/controller_vtproto.pb.go (new file, 11430 lines)
(File diff suppressed because it is too large)
@@ -1,3 +0,0 @@
-package pb
-
-//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto
@@ -4,7 +4,6 @@ import (
 	"path/filepath"
 	"strings"

-	"github.com/docker/docker/builder/remotecontext/urlutil"
 	"github.com/moby/buildkit/util/gitutil"
 )

@@ -22,7 +21,7 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
 		}
 	}
 	if options.DockerfileName != "" && options.DockerfileName != "-" {
-		if localContext && !urlutil.IsURL(options.DockerfileName) {
+		if localContext && !isHTTPURL(options.DockerfileName) {
 			options.DockerfileName, err = filepath.Abs(options.DockerfileName)
 			if err != nil {
 				return nil, err
@@ -154,7 +153,6 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
 			}
 		}
 		ps = append(ps, p)
-
 	}
 	s.Paths = ps
 	ssh = append(ssh, s)
@@ -164,8 +162,15 @@ func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
 	return options, nil
 }

+// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
+// has a http:// or https:// scheme. No validation is performed to verify if the
+// URL is well-formed.
+func isHTTPURL(str string) bool {
+	return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
+}
+
 func isRemoteURL(c string) bool {
-	if urlutil.IsURL(c) {
+	if isHTTPURL(c) {
 		return true
 	}
 	if _, err := gitutil.ParseGitRef(c); err == nil {
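Replacing urlutil.IsURL with the local prefix check drops a docker/docker dependency and narrows matching to http/https only. Spot checks against the isHTTPURL shown above:

	isHTTPURL("https://example.com/Dockerfile")   // true
	isHTTPURL("http://example.com")               // true
	isHTTPURL("git@github.com:docker/buildx.git") // false: SSH remotes fall through to gitutil.ParseGitRef
	isHTTPURL("ftp://example.com/file")           // false: only http/https prefixes count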
@@ -3,10 +3,10 @@ package pb
 import (
 	"os"
 	"path/filepath"
-	"reflect"
 	"testing"

 	"github.com/stretchr/testify/require"
+	"google.golang.org/protobuf/proto"
 )

 func TestResolvePaths(t *testing.T) {
@@ -16,54 +16,58 @@ func TestResolvePaths(t *testing.T) {
 	require.NoError(t, os.Chdir(tmpwd))
 	tests := []struct {
 		name    string
-		options BuildOptions
-		want    BuildOptions
+		options *BuildOptions
+		want    *BuildOptions
 	}{
 		{
 			name:    "contextpath",
-			options: BuildOptions{ContextPath: "test"},
-			want:    BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
+			options: &BuildOptions{ContextPath: "test"},
+			want:    &BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
 		},
 		{
 			name:    "contextpath-cwd",
-			options: BuildOptions{ContextPath: "."},
-			want:    BuildOptions{ContextPath: tmpwd},
+			options: &BuildOptions{ContextPath: "."},
+			want:    &BuildOptions{ContextPath: tmpwd},
 		},
 		{
 			name:    "contextpath-dash",
-			options: BuildOptions{ContextPath: "-"},
-			want:    BuildOptions{ContextPath: "-"},
+			options: &BuildOptions{ContextPath: "-"},
+			want:    &BuildOptions{ContextPath: "-"},
 		},
 		{
 			name:    "contextpath-ssh",
-			options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
-			want:    BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
+			options: &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
+			want:    &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
 		},
 		{
 			name:    "dockerfilename",
-			options: BuildOptions{DockerfileName: "test", ContextPath: "."},
-			want:    BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
+			options: &BuildOptions{DockerfileName: "test", ContextPath: "."},
+			want:    &BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
 		},
 		{
 			name:    "dockerfilename-dash",
-			options: BuildOptions{DockerfileName: "-", ContextPath: "."},
-			want:    BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
+			options: &BuildOptions{DockerfileName: "-", ContextPath: "."},
+			want:    &BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
 		},
 		{
 			name:    "dockerfilename-remote",
-			options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
-			want:    BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
+			options: &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
+			want:    &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
 		},
 		{
 			name: "contexts",
-			options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
-				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
-			want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
-				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
+			options: &BuildOptions{NamedContexts: map[string]string{
+				"a": "test1", "b": "test2",
+				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
+			}},
+			want: &BuildOptions{NamedContexts: map[string]string{
+				"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
+				"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
+			}},
 		},
 		{
 			name: "cache-from",
-			options: BuildOptions{
+			options: &BuildOptions{
 				CacheFrom: []*CacheOptionsEntry{
 					{
 						Type: "local",
@@ -75,7 +79,7 @@ func TestResolvePaths(t *testing.T) {
 					},
 				},
 			},
-			want: BuildOptions{
+			want: &BuildOptions{
 				CacheFrom: []*CacheOptionsEntry{
 					{
 						Type: "local",
@@ -90,7 +94,7 @@ func TestResolvePaths(t *testing.T) {
 		},
 		{
 			name: "cache-to",
-			options: BuildOptions{
+			options: &BuildOptions{
 				CacheTo: []*CacheOptionsEntry{
 					{
 						Type: "local",
@@ -102,7 +106,7 @@ func TestResolvePaths(t *testing.T) {
 					},
 				},
 			},
-			want: BuildOptions{
+			want: &BuildOptions{
 				CacheTo: []*CacheOptionsEntry{
 					{
 						Type: "local",
@@ -117,7 +121,7 @@ func TestResolvePaths(t *testing.T) {
 		},
 		{
 			name: "exports",
-			options: BuildOptions{
+			options: &BuildOptions{
 				Exports: []*ExportEntry{
 					{
 						Type: "local",
@@ -145,7 +149,7 @@ func TestResolvePaths(t *testing.T) {
 					},
 				},
 			},
-			want: BuildOptions{
+			want: &BuildOptions{
 				Exports: []*ExportEntry{
 					{
 						Type: "local",
@@ -176,7 +180,7 @@ func TestResolvePaths(t *testing.T) {
 		},
 		{
 			name: "secrets",
-			options: BuildOptions{
+			options: &BuildOptions{
 				Secrets: []*Secret{
 					{
 						FilePath: "test1",
@@ -191,7 +195,7 @@ func TestResolvePaths(t *testing.T) {
 					},
 				},
 			},
-			want: BuildOptions{
+			want: &BuildOptions{
 				Secrets: []*Secret{
 					{
 						FilePath: filepath.Join(tmpwd, "test1"),
@@ -209,7 +213,7 @@ func TestResolvePaths(t *testing.T) {
 		},
 		{
 			name: "ssh",
-			options: BuildOptions{
+			options: &BuildOptions{
 				SSH: []*SSH{
 					{
 						ID: "default",
@@ -221,7 +225,7 @@ func TestResolvePaths(t *testing.T) {
 					},
 				},
 			},
-			want: BuildOptions{
+			want: &BuildOptions{
 				SSH: []*SSH{
 					{
 						ID: "default",
@@ -238,10 +242,10 @@ func TestResolvePaths(t *testing.T) {
 	for _, tt := range tests {
 		tt := tt
 		t.Run(tt.name, func(t *testing.T) {
-			got, err := ResolveOptionPaths(&tt.options)
+			got, err := ResolveOptionPaths(tt.options)
 			require.NoError(t, err)
-			if !reflect.DeepEqual(tt.want, *got) {
-				t.Fatalf("expected %#v, got %#v", tt.want, *got)
+			if !proto.Equal(tt.want, got) {
+				t.Fatalf("expected %#v, got %#v", tt.want, got)
 			}
 		})
 	}
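The switch to proto.Equal is not cosmetic: protobuf-generated structs carry unexported bookkeeping (state, size cache, unknown fields), so reflect.DeepEqual can disagree for semantically identical messages. A small illustration, assuming the generated BuildOptions type:

	a := &BuildOptions{ContextPath: "."}
	b := proto.Clone(a).(*BuildOptions)
	fmt.Println(proto.Equal(a, b)) // true: compares message fields only
	_, _ = proto.Marshal(a)        // may touch internal state on a...
	// ...after which reflect.DeepEqual(a, b) can report false despite equal contents.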
@@ -1,10 +1,13 @@
 package pb

 import (
+	"time"
+
 	"github.com/docker/buildx/util/progress"
 	control "github.com/moby/buildkit/api/services/control"
 	"github.com/moby/buildkit/client"
 	"github.com/opencontainers/go-digest"
+	"google.golang.org/protobuf/types/known/timestamppb"
 )

 type writer struct {
@@ -19,9 +22,7 @@ func (w *writer) Write(status *client.SolveStatus) {
 	w.ch <- ToControlStatus(status)
 }

-func (w *writer) WriteBuildRef(target string, ref string) {
-	return
-}
+func (w *writer) WriteBuildRef(target string, ref string) {}

 func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
 	return true
@@ -33,11 +34,11 @@ func ToControlStatus(s *client.SolveStatus) *StatusResponse {
 	resp := StatusResponse{}
 	for _, v := range s.Vertexes {
 		resp.Vertexes = append(resp.Vertexes, &control.Vertex{
-			Digest:        v.Digest,
-			Inputs:        v.Inputs,
+			Digest:        string(v.Digest),
+			Inputs:        digestSliceToPB(v.Inputs),
 			Name:          v.Name,
-			Started:       v.Started,
-			Completed:     v.Completed,
+			Started:       timestampToPB(v.Started),
+			Completed:     timestampToPB(v.Completed),
 			Error:         v.Error,
 			Cached:        v.Cached,
 			ProgressGroup: v.ProgressGroup,
@@ -46,26 +47,26 @@ func ToControlStatus(s *client.SolveStatus) *StatusResponse {
 	for _, v := range s.Statuses {
 		resp.Statuses = append(resp.Statuses, &control.VertexStatus{
 			ID:        v.ID,
-			Vertex:    v.Vertex,
+			Vertex:    string(v.Vertex),
 			Name:      v.Name,
 			Total:     v.Total,
 			Current:   v.Current,
-			Timestamp: v.Timestamp,
-			Started:   v.Started,
-			Completed: v.Completed,
+			Timestamp: timestamppb.New(v.Timestamp),
+			Started:   timestampToPB(v.Started),
+			Completed: timestampToPB(v.Completed),
 		})
 	}
 	for _, v := range s.Logs {
 		resp.Logs = append(resp.Logs, &control.VertexLog{
-			Vertex:    v.Vertex,
+			Vertex:    string(v.Vertex),
 			Stream:    int64(v.Stream),
 			Msg:       v.Data,
-			Timestamp: v.Timestamp,
+			Timestamp: timestamppb.New(v.Timestamp),
 		})
 	}
 	for _, v := range s.Warnings {
 		resp.Warnings = append(resp.Warnings, &control.VertexWarning{
-			Vertex: v.Vertex,
+			Vertex: string(v.Vertex),
 			Level:  int64(v.Level),
 			Short:  v.Short,
 			Detail: v.Detail,
@@ -81,11 +82,11 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
 	s := client.SolveStatus{}
 	for _, v := range resp.Vertexes {
 		s.Vertexes = append(s.Vertexes, &client.Vertex{
-			Digest:        v.Digest,
-			Inputs:        v.Inputs,
+			Digest:        digest.Digest(v.Digest),
+			Inputs:        digestSliceFromPB(v.Inputs),
 			Name:          v.Name,
-			Started:       v.Started,
-			Completed:     v.Completed,
+			Started:       timestampFromPB(v.Started),
+			Completed:     timestampFromPB(v.Completed),
 			Error:         v.Error,
 			Cached:        v.Cached,
 			ProgressGroup: v.ProgressGroup,
@@ -94,26 +95,26 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
 	for _, v := range resp.Statuses {
 		s.Statuses = append(s.Statuses, &client.VertexStatus{
 			ID:        v.ID,
-			Vertex:    v.Vertex,
+			Vertex:    digest.Digest(v.Vertex),
 			Name:      v.Name,
 			Total:     v.Total,
 			Current:   v.Current,
-			Timestamp: v.Timestamp,
-			Started:   v.Started,
-			Completed: v.Completed,
+			Timestamp: v.Timestamp.AsTime(),
+			Started:   timestampFromPB(v.Started),
+			Completed: timestampFromPB(v.Completed),
 		})
 	}
 	for _, v := range resp.Logs {
 		s.Logs = append(s.Logs, &client.VertexLog{
-			Vertex:    v.Vertex,
+			Vertex:    digest.Digest(v.Vertex),
 			Stream:    int(v.Stream),
 			Data:      v.Msg,
-			Timestamp: v.Timestamp,
+			Timestamp: v.Timestamp.AsTime(),
 		})
 	}
 	for _, v := range resp.Warnings {
 		s.Warnings = append(s.Warnings, &client.VertexWarning{
-			Vertex: v.Vertex,
+			Vertex: digest.Digest(v.Vertex),
 			Level:  int(v.Level),
 			Short:  v.Short,
 			Detail: v.Detail,
@@ -124,3 +125,38 @@ func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
 	}
 	return &s
 }
+
+func timestampFromPB(ts *timestamppb.Timestamp) *time.Time {
+	if ts == nil {
+		return nil
+	}
+
+	t := ts.AsTime()
+	if t.IsZero() {
+		return nil
+	}
+	return &t
+}
+
+func timestampToPB(ts *time.Time) *timestamppb.Timestamp {
+	if ts == nil {
+		return nil
+	}
+	return timestamppb.New(*ts)
+}
+
+func digestSliceFromPB(elems []string) []digest.Digest {
+	clone := make([]digest.Digest, len(elems))
+	for i, e := range elems {
+		clone[i] = digest.Digest(e)
+	}
+	return clone
+}
+
+func digestSliceToPB(elems []digest.Digest) []string {
+	clone := make([]string, len(elems))
+	for i, e := range elems {
+		clone[i] = string(e)
+	}
+	return clone
+}
|
|||||||
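The helpers above convert between Go-native and protobuf timestamp representations; a nil `*time.Time` must survive the round trip, which is why `timestampFromPB` also maps a zero timestamp back to nil. A minimal standalone sketch of that round-trip property (names mirror the diff, but this is illustrative, not the repository's test code):

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/types/known/timestamppb"
)

// Mirrors timestampToPB from the diff: nil stays nil.
func toPB(ts *time.Time) *timestamppb.Timestamp {
	if ts == nil {
		return nil
	}
	return timestamppb.New(*ts)
}

// Mirrors timestampFromPB: nil or zero timestamps come back as nil,
// so an unset optional field does not turn into time.Time{}.
func fromPB(ts *timestamppb.Timestamp) *time.Time {
	if ts == nil {
		return nil
	}
	t := ts.AsTime()
	if t.IsZero() {
		return nil
	}
	return &t
}

func main() {
	now := time.Now()
	fmt.Println(fromPB(toPB(&now)))       // round-trips to a non-nil time
	fmt.Println(fromPB(toPB(nil)) == nil) // true: nil survives the trip
}
```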
@@ -18,16 +18,16 @@ type Process struct {
 	invokeConfig  *pb.InvokeConfig
 	errCh         chan error
 	processCancel func()
-	serveIOCancel func()
+	serveIOCancel func(error)
 }
 
 // ForwardIO forwards process's io to the specified reader/writer.
 // Optionally specify ioCancelCallback which will be called when
 // the process closes the specified IO. This will be useful for additional cleanup.
-func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
+func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func(error)) {
 	p.inEnd.SetIn(in)
 	if f := p.serveIOCancel; f != nil {
-		f()
+		f(errors.WithStack(context.Canceled))
 	}
 	p.serveIOCancel = ioCancelCallback
 }
@@ -124,9 +124,16 @@ func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *p
 	f.SetOut(&out)
 
 	// Register process
-	ctx, cancel := context.WithCancel(context.TODO())
+	ctx, cancel := context.WithCancelCause(context.TODO())
 	var cancelOnce sync.Once
-	processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
+	processCancelFunc := func() {
+		cancelOnce.Do(func() {
+			cancel(errors.WithStack(context.Canceled))
+			f.Close()
+			in.Close()
+			out.Close()
+		})
+	}
 	p := &Process{
 		inEnd:        f,
 		invokeConfig: cfg,
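The reworked `processCancelFunc` combines `context.WithCancelCause` with `sync.Once` so that repeated cancellation is safe and later readers can tell an explicit cancel apart from other failures. A minimal standalone sketch of the pattern (illustrative names, not the repository's code):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())

	var once sync.Once
	// Release resources exactly once, recording why the context ended.
	stop := func(reason error) {
		once.Do(func() {
			cancel(reason)
			// f.Close(); in.Close(); out.Close() would go here.
		})
	}

	stop(errors.New("user detached"))
	stop(errors.New("second call is ignored")) // no-op thanks to sync.Once

	<-ctx.Done()
	fmt.Println(context.Cause(ctx)) // user detached
}
```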
@@ -8,6 +8,7 @@ import (
 
 	"github.com/containerd/containerd/defaults"
 	"github.com/containerd/containerd/pkg/dialer"
+	"github.com/docker/buildx/build"
 	"github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/util/progress"
 	"github.com/moby/buildkit/client"
@@ -27,6 +28,7 @@ func NewClient(ctx context.Context, addr string) (*Client, error) {
 		Backoff: backoffConfig,
 	}
 	gopts := []grpc.DialOption{
+		//nolint:staticcheck // ignore SA1019: WithBlock is deprecated and does not work with NewClient.
 		grpc.WithBlock(),
 		grpc.WithTransportCredentials(insecure.NewCredentials()),
 		grpc.WithConnectParams(connParams),
@@ -36,6 +38,7 @@ func NewClient(ctx context.Context, addr string) (*Client, error) {
 		grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
 		grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
 	}
+	//nolint:staticcheck // ignore SA1019: Recommended NewClient has different behavior from DialContext.
 	conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
 	if err != nil {
 		return nil, err
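The two `nolint` annotations deliberately keep the deprecated blocking dial rather than migrating to `grpc.NewClient`, whose connections are established lazily. A minimal sketch of what waiting for readiness looks like with the newer API, assuming grpc-go v1.63+ and a hypothetical socket path (this is not the repository's code):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// NewClient returns immediately without dialing anything.
	conn, err := grpc.NewClient("unix:///tmp/buildx.sock", // hypothetical address
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn.Connect() // kick off the connection; otherwise it stays idle
	for state := conn.GetState(); state != connectivity.Ready; state = conn.GetState() {
		if !conn.WaitForStateChange(ctx, state) {
			log.Fatal("timed out waiting for connection readiness")
		}
	}
	log.Println("connected")
}
```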
@@ -72,36 +75,36 @@ func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
 	return res.Keys, nil
 }
 
-func (c *Client) Disconnect(ctx context.Context, key string) error {
-	if key == "" {
+func (c *Client) Disconnect(ctx context.Context, sessionID string) error {
+	if sessionID == "" {
 		return nil
 	}
-	_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
+	_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{SessionID: sessionID})
 	return err
 }
 
-func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
-	res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
+func (c *Client) ListProcesses(ctx context.Context, sessionID string) (infos []*pb.ProcessInfo, retErr error) {
+	res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{SessionID: sessionID})
 	if err != nil {
 		return nil, err
 	}
 	return res.Infos, nil
 }
 
-func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
-	_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
+func (c *Client) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
+	_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{SessionID: sessionID, ProcessID: pid})
 	return err
 }
 
-func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
-	if ref == "" || pid == "" {
-		return errors.New("build reference must be specified")
+func (c *Client) Invoke(ctx context.Context, sessionID string, pid string, invokeConfig *pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
+	if sessionID == "" || pid == "" {
+		return errors.New("build session ID must be specified")
 	}
 	stream, err := c.client().Invoke(ctx)
 	if err != nil {
 		return err
 	}
-	return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
+	return attachIO(ctx, stream, &pb.InitMessage{SessionID: sessionID, ProcessID: pid, InvokeConfig: invokeConfig}, ioAttachConfig{
 		stdin:  in,
 		stdout: stdout,
 		stderr: stderr,
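Beyond the `Ref` to `SessionID` rename, note that the `pb.InvokeConfig` parameter becomes a pointer. Generated protobuf structs embed internal state that must not be copied, and `go vet`'s copylocks check flags by-value message passing. A minimal sketch of the convention, using a well-known message type for illustration (not buildx's own types):

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// Pass generated messages by pointer; copy explicitly with proto.Clone
// when an independent value is really needed.
func describe(v *wrapperspb.StringValue) string {
	return v.GetValue() // getters are nil-safe on pointer receivers
}

func main() {
	msg := wrapperspb.String("session-123")
	fmt.Println(describe(msg))

	clone := proto.Clone(msg).(*wrapperspb.StringValue) // deep copy, no lock copying
	fmt.Println(describe(clone))
}
```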
@@ -109,11 +112,11 @@ func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfi
 	})
 }
 
-func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
-	return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
+func (c *Client) Inspect(ctx context.Context, sessionID string) (*pb.InspectResponse, error) {
+	return c.client().Inspect(ctx, &pb.InspectRequest{SessionID: sessionID})
 }
 
-func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
+func (c *Client) Build(ctx context.Context, options *pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
 	ref := identity.NewID()
 	statusChan := make(chan *client.SolveStatus)
 	eg, egCtx := errgroup.WithContext(ctx)
@@ -131,10 +134,10 @@ func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadC
 		}
 		return nil
 	})
-	return ref, resp, eg.Wait()
+	return ref, resp, nil, eg.Wait()
 }
 
-func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
+func (c *Client) build(ctx context.Context, sessionID string, options *pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
 	eg, egCtx := errgroup.WithContext(ctx)
 	done := make(chan struct{})
 
@@ -143,8 +146,8 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
 	eg.Go(func() error {
 		defer close(done)
 		pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
-			Ref:     ref,
-			Options: &options,
+			SessionID: sessionID,
+			Options:   options,
 		})
 		if err != nil {
 			return err
@@ -156,7 +159,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
 	})
 	eg.Go(func() error {
 		stream, err := c.client().Status(egCtx, &pb.StatusRequest{
-			Ref: ref,
+			SessionID: sessionID,
 		})
 		if err != nil {
 			return err
@@ -181,7 +184,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
 		if err := stream.Send(&pb.InputMessage{
 			Input: &pb.InputMessage_Init{
 				Init: &pb.InputInitMessage{
-					Ref: ref,
+					SessionID: sessionID,
 				},
 			},
 		}); err != nil {
@@ -210,7 +213,7 @@ func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions,
 			}
 			return err
 		} else if n > 0 {
-			if stream.Send(&pb.InputMessage{
+			if err := stream.Send(&pb.InputMessage{
 				Input: &pb.InputMessage_Data{
 					Data: &pb.DataMessage{
 						Data: buf[:n],
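The last hunk fixes a subtle bug: `if stream.Send(msg); err != nil` runs `Send` only for its side effect and then tests a stale outer `err`, silently dropping send failures. A standalone sketch of the pitfall (illustrative, not the repository's code):

```go
package main

import (
	"errors"
	"fmt"
)

func send() error { return errors.New("stream closed") }

func main() {
	var err error // some earlier error value, currently nil

	// Buggy: send() runs as the init statement; the condition tests the
	// old err, so the failure from send() is ignored.
	if send(); err != nil {
		fmt.Println("never reached")
	}

	// Fixed: capture the result of send() in the init statement.
	if err := send(); err != nil {
		fmt.Println("caught:", err) // caught: stream closed
	}
}
```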
@@ -62,9 +62,10 @@ func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts
 	serverRoot := filepath.Join(rootDir, "shared")
 
 	// connect to buildx server if it is already running
-	ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
+	ctx2, cancel := context.WithCancelCause(ctx)
+	ctx2, _ = context.WithTimeoutCause(ctx2, 1*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
 	c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
-	cancel()
+	cancel(errors.WithStack(context.Canceled))
 	if err != nil {
 		if !errors.Is(err, context.DeadlineExceeded) {
 			return nil, errors.Wrap(err, "cannot connect to the buildx server")
@@ -90,9 +91,10 @@ func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts
 	go wait()
 
 	// wait for buildx server to be ready
-	ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
+	ctx2, cancel = context.WithCancelCause(ctx)
+	ctx2, _ = context.WithTimeoutCause(ctx2, 10*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
 	c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
-	cancel()
+	cancel(errors.WithStack(context.Canceled))
 	if err != nil {
 		return errors.Wrap(err, "cannot connect to the buildx server")
 	}
@@ -148,8 +150,8 @@ func serveCmd(dockerCli command.Cli) *cobra.Command {
 	}()
 
 	// prepare server
-	b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
-		return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
+	b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
+		return cbuild.RunBuild(ctx, dockerCli, options, stdin, progress, true)
 	})
 	defer b.Close()
 
@@ -258,7 +260,7 @@ func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error)
 	}
 
 func rootDataDir(dockerCli command.Cli) string {
-	return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
+	return filepath.Join(confutil.NewConfig(dockerCli).Dir(), "controller")
 }
 
 func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
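`context.WithTimeoutCause` (Go 1.21+) lets the controller attach a specific error as the cancellation cause, so `context.Cause` can distinguish a deliberate timeout from a parent cancellation. A minimal standalone sketch, with an illustrative cause value:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errServerNotReady = errors.New("buildx server did not become ready") // illustrative cause

func main() {
	ctx, cancel := context.WithTimeoutCause(context.Background(), 50*time.Millisecond, errServerNotReady)
	defer cancel()

	<-ctx.Done()
	fmt.Println(ctx.Err())          // context deadline exceeded (generic)
	fmt.Println(context.Cause(ctx)) // buildx server did not become ready
}
```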
@@ -43,9 +43,9 @@ func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessa
 	if init == nil {
 		return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
 	}
-	ref := init.Ref
-	if ref == "" {
-		return errors.New("no ref is provided")
+	sessionID := init.SessionID
+	if sessionID == "" {
+		return errors.New("no session ID is provided")
 	}
 	if err := initFn(init); err != nil {
 		return errors.Wrap(err, "failed to initialize IO server")
@@ -207,6 +207,7 @@ func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage
 
 	if cfg.signal != nil {
 		eg.Go(func() error {
+			names := signalNames()
 			for {
 				var sig syscall.Signal
 				select {
@@ -216,7 +217,7 @@ func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage
 				case <-ctx.Done():
 					return nil
 				}
-				name := sigToName[sig]
+				name := names[sig]
 				if name == "" {
 					continue
 				}
@@ -301,7 +302,6 @@ func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage
 				out = cfg.stderr
 			default:
 				return errors.Errorf("unsupported fd %d", file.Fd)
-
 			}
 			if out == nil {
 				logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
@@ -344,7 +344,7 @@ func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
 	case err := <-errCh:
 		return nil, err
 	case <-ctx.Done():
-		return nil, ctx.Err()
+		return nil, context.Cause(ctx)
 	}
 }
 
@@ -358,7 +358,7 @@ func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
 		}
 		return err
 	} else if n > 0 {
-		if snd.Send(&pb.Message{
+		if err := snd.Send(&pb.Message{
 			Input: &pb.Message_File{
 				File: &pb.FdMessage{
 					Fd: fd,
@@ -380,12 +380,12 @@ func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
 	})
 }
 
-var sigToName = map[syscall.Signal]string{}
-
-func init() {
+func signalNames() map[syscall.Signal]string {
+	m := make(map[syscall.Signal]string, len(signal.SignalMap))
 	for name, value := range signal.SignalMap {
-		sigToName[value] = name
+		m[value] = name
 	}
+	return m
 }
 
 type debugStream struct {
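Replacing the package-level `sigToName` map and its `init()` with a `signalNames()` helper removes mutable global state; each caller builds its own lookup table. A minimal sketch, assuming the `github.com/moby/sys/signal` package (whose `SignalMap` maps names like "INT" to syscall values) is the one in use here:

```go
package main

import (
	"fmt"
	"syscall"

	"github.com/moby/sys/signal"
)

// Invert signal.SignalMap (name -> signal) into signal -> name,
// mirroring the signalNames helper introduced in the diff.
func signalNames() map[syscall.Signal]string {
	m := make(map[syscall.Signal]string, len(signal.SignalMap))
	for name, value := range signal.SignalMap {
		m[value] = name
	}
	return m
}

func main() {
	names := signalNames()
	fmt.Println(names[syscall.SIGINT]) // INT
}
```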
@@ -11,6 +11,7 @@ import (
 	controllererrors "github.com/docker/buildx/controller/errdefs"
 	"github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/controller/processes"
+	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/util/ioset"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/buildx/version"
@@ -19,7 +20,7 @@ import (
 	"golang.org/x/sync/errgroup"
 )
 
-type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)
+type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, inp *build.Inputs, err error)
 
 func NewServer(buildFunc BuildFunc) *Server {
 	return &Server{
@@ -36,7 +37,7 @@ type Server struct {
 type session struct {
 	buildOnGoing atomic.Bool
 	statusChan   chan *pb.StatusResponse
-	cancelBuild  func()
+	cancelBuild  func(error)
 	buildOptions *pb.BuildOptions
 	inputPipe    *io.PipeWriter
 
@@ -52,9 +53,9 @@ func (s *session) cancelRunningProcesses() {
 func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
 	m.sessionMu.Lock()
 	defer m.sessionMu.Unlock()
-	s, ok := m.session[req.Ref]
+	s, ok := m.session[req.SessionID]
 	if !ok {
-		return nil, errors.Errorf("unknown ref %q", req.Ref)
+		return nil, errors.Errorf("unknown session ID %q", req.SessionID)
 	}
 	res = new(pb.ListProcessesResponse)
 	res.Infos = append(res.Infos, s.processes.ListProcesses()...)
@@ -64,9 +65,9 @@ func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest
 func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
 	m.sessionMu.Lock()
 	defer m.sessionMu.Unlock()
-	s, ok := m.session[req.Ref]
+	s, ok := m.session[req.SessionID]
 	if !ok {
-		return nil, errors.Errorf("unknown ref %q", req.Ref)
+		return nil, errors.Errorf("unknown session ID %q", req.SessionID)
 	}
 	return res, s.processes.DeleteProcess(req.ProcessID)
 }
@@ -100,22 +101,22 @@ func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListRes
 }
 
 func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
-	key := req.Ref
-	if key == "" {
-		return nil, errors.New("disconnect: empty key")
+	sessionID := req.SessionID
+	if sessionID == "" {
+		return nil, errors.New("disconnect: empty session ID")
 	}
 
 	m.sessionMu.Lock()
-	if s, ok := m.session[key]; ok {
+	if s, ok := m.session[sessionID]; ok {
 		if s.cancelBuild != nil {
-			s.cancelBuild()
+			s.cancelBuild(errors.WithStack(context.Canceled))
 		}
 		s.cancelRunningProcesses()
 		if s.result != nil {
 			s.result.Done()
 		}
 	}
-	delete(m.session, key)
+	delete(m.session, sessionID)
 	m.sessionMu.Unlock()
 
 	return &pb.DisconnectResponse{}, nil
@@ -126,7 +127,7 @@ func (m *Server) Close() error {
 	for k := range m.session {
 		if s, ok := m.session[k]; ok {
 			if s.cancelBuild != nil {
-				s.cancelBuild()
+				s.cancelBuild(errors.WithStack(context.Canceled))
 			}
 			s.cancelRunningProcesses()
 		}
@@ -136,26 +137,26 @@ func (m *Server) Close() error {
 }
 
 func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
-	ref := req.Ref
-	if ref == "" {
-		return nil, errors.New("inspect: empty key")
+	sessionID := req.SessionID
+	if sessionID == "" {
+		return nil, errors.New("inspect: empty session ID")
 	}
 	var bo *pb.BuildOptions
 	m.sessionMu.Lock()
-	if s, ok := m.session[ref]; ok {
+	if s, ok := m.session[sessionID]; ok {
 		bo = s.buildOptions
 	} else {
 		m.sessionMu.Unlock()
-		return nil, errors.Errorf("inspect: unknown key %v", ref)
+		return nil, errors.Errorf("inspect: unknown key %v", sessionID)
 	}
 	m.sessionMu.Unlock()
 	return &pb.InspectResponse{Options: bo}, nil
 }
 
 func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
-	ref := req.Ref
-	if ref == "" {
-		return nil, errors.New("build: empty key")
+	sessionID := req.SessionID
+	if sessionID == "" {
+		return nil, errors.New("build: empty session ID")
 	}
 
 	// Prepare status channel and session
@@ -163,7 +164,7 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
 	if m.session == nil {
 		m.session = make(map[string]*session)
 	}
-	s, ok := m.session[ref]
+	s, ok := m.session[sessionID]
 	if ok {
 		if !s.buildOnGoing.CompareAndSwap(false, true) {
 			m.sessionMu.Unlock()
@@ -182,12 +183,12 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
 	inR, inW := io.Pipe()
 	defer inR.Close()
 	s.inputPipe = inW
-	m.session[ref] = s
+	m.session[sessionID] = s
 	m.sessionMu.Unlock()
 	defer func() {
 		close(statusChan)
 		m.sessionMu.Lock()
-		s, ok := m.session[ref]
+		s, ok := m.session[sessionID]
 		if ok {
 			s.statusChan = nil
 			s.buildOnGoing.Store(false)
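The `buildOnGoing` guard uses `atomic.Bool.CompareAndSwap` so that only one `Build` call per session proceeds at a time, without holding the session mutex across the whole build. A deterministic standalone sketch of the guard (illustrative, not the repository's code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type session struct{ buildOnGoing atomic.Bool }

func main() {
	var s session
	// Flip false -> true atomically; a concurrent second build is rejected.
	fmt.Println(s.buildOnGoing.CompareAndSwap(false, true)) // true: we got the slot
	fmt.Println(s.buildOnGoing.CompareAndSwap(false, true)) // false: build ongoing
	s.buildOnGoing.Store(false)                             // done; the next build may start
}
```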
@@ -198,24 +199,29 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
 	pw := pb.NewProgressWriter(statusChan)
 
 	// Build the specified request
-	ctx, cancel := context.WithCancel(ctx)
-	defer cancel()
-	resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
+	ctx, cancel := context.WithCancelCause(ctx)
+	defer func() { cancel(errors.WithStack(context.Canceled)) }()
+	resp, res, _, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
 	m.sessionMu.Lock()
-	if s, ok := m.session[ref]; ok {
+	if s, ok := m.session[sessionID]; ok {
 		// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
 		if res != nil {
 			s.result = res
 			s.cancelBuild = cancel
 			s.buildOptions = req.Options
-			m.session[ref] = s
+			m.session[sessionID] = s
 			if buildErr != nil {
-				buildErr = controllererrors.WrapBuild(buildErr, ref)
+				var ref string
+				var ebr *desktop.ErrorWithBuildRef
+				if errors.As(buildErr, &ebr) {
+					ref = ebr.Ref
+				}
+				buildErr = controllererrors.WrapBuild(buildErr, sessionID, ref)
 			}
 		}
 	} else {
 		m.sessionMu.Unlock()
-		return nil, errors.Errorf("build: unknown key %v", ref)
+		return nil, errors.Errorf("build: unknown session ID %v", sessionID)
 	}
 	m.sessionMu.Unlock()
 
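`errors.As` walks the wrapped-error chain to recover a typed error; here it pulls the build ref out of a `desktop.ErrorWithBuildRef` before re-wrapping. A self-contained sketch of the pattern with a stand-in error type (hypothetical, to avoid depending on buildx internals):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for a typed error carrying extra context, like desktop.ErrorWithBuildRef.
type buildRefError struct {
	Ref string
	Err error
}

func (e *buildRefError) Error() string { return e.Err.Error() }
func (e *buildRefError) Unwrap() error { return e.Err }

func main() {
	err := fmt.Errorf("solve: %w", &buildRefError{Ref: "abc123", Err: errors.New("process exited")})

	var bre *buildRefError
	if errors.As(err, &bre) { // finds the typed error anywhere in the chain
		fmt.Println("build ref:", bre.Ref) // build ref: abc123
	}
}
```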
@@ -232,9 +238,9 @@ func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResp
 }
 
 func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
-	ref := req.Ref
-	if ref == "" {
-		return errors.New("status: empty key")
+	sessionID := req.SessionID
+	if sessionID == "" {
+		return errors.New("status: empty session ID")
 	}
 
 	// Wait and get status channel prepared by Build()
@@ -242,12 +248,12 @@ func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer
 	for {
 		// TODO: timeout?
 		m.sessionMu.Lock()
-		if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
+		if _, ok := m.session[sessionID]; !ok || m.session[sessionID].statusChan == nil {
 			m.sessionMu.Unlock()
 			time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
 			continue
 		}
-		statusChan = m.session[ref].statusChan
+		statusChan = m.session[sessionID].statusChan
 		m.sessionMu.Unlock()
 		break
 	}
@@ -278,9 +284,9 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
 	if init == nil {
 		return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
 	}
-	ref := init.Ref
-	if ref == "" {
-		return errors.New("input: no ref is provided")
+	sessionID := init.SessionID
+	if sessionID == "" {
+		return errors.New("input: no session ID is provided")
 	}
 
 	// Wait and get input stream pipe prepared by Build()
@@ -288,12 +294,12 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
 	for {
 		// TODO: timeout?
 		m.sessionMu.Lock()
-		if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
+		if _, ok := m.session[sessionID]; !ok || m.session[sessionID].inputPipe == nil {
 			m.sessionMu.Unlock()
 			time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
 			continue
 		}
-		inputPipeW = m.session[ref].inputPipe
+		inputPipeW = m.session[sessionID].inputPipe
 		m.sessionMu.Unlock()
 		break
 	}
@@ -335,7 +341,7 @@ func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
 		select {
 		case msg = <-msgCh:
 		case <-ctx.Done():
-			return errors.Wrap(ctx.Err(), "canceled")
+			return context.Cause(ctx)
 		}
 		if msg == nil {
 			return nil
@@ -364,23 +370,23 @@ func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
 	initDoneCh := make(chan *processes.Process)
 	initErrCh := make(chan error)
 	eg, egCtx := errgroup.WithContext(context.TODO())
-	srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
+	srvIOCtx, srvIOCancel := context.WithCancelCause(egCtx)
 	eg.Go(func() error {
-		defer srvIOCancel()
+		defer srvIOCancel(errors.WithStack(context.Canceled))
 		return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
 			defer func() {
 				if retErr != nil {
 					initErrCh <- retErr
 				}
 			}()
-			ref := initMessage.Ref
+			sessionID := initMessage.SessionID
 			cfg := initMessage.InvokeConfig
 
 			m.sessionMu.Lock()
-			s, ok := m.session[ref]
+			s, ok := m.session[sessionID]
 			if !ok {
 				m.sessionMu.Unlock()
-				return errors.Errorf("invoke: unknown key %v", ref)
+				return errors.Errorf("invoke: unknown session ID %v", sessionID)
 			}
 			m.sessionMu.Unlock()
 
@@ -412,7 +418,7 @@ func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
 		})
 	})
 	eg.Go(func() (rErr error) {
-		defer srvIOCancel()
+		defer srvIOCancel(errors.WithStack(context.Canceled))
 		// Wait for init done
 		var proc *processes.Process
 		select {
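Several hunks replace `ctx.Err()` with `context.Cause(ctx)` (Go 1.20+), which returns the specific error passed to a cancel-cause function instead of the generic `context.Canceled`. A minimal sketch:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.New("client disconnected")) // record why we canceled

	<-ctx.Done()
	fmt.Println(ctx.Err())          // context canceled (generic)
	fmt.Println(context.Cause(ctx)) // client disconnected (specific)
}
```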
@@ -7,9 +7,12 @@ variable "DOCS_FORMATS" {
|
|||||||
variable "DESTDIR" {
|
variable "DESTDIR" {
|
||||||
default = "./bin"
|
default = "./bin"
|
||||||
}
|
}
|
||||||
variable "GOLANGCI_LINT_MULTIPLATFORM" {
|
variable "TEST_COVERAGE" {
|
||||||
default = null
|
default = null
|
||||||
}
|
}
|
||||||
|
variable "GOLANGCI_LINT_MULTIPLATFORM" {
|
||||||
|
default = ""
|
||||||
|
}
|
||||||
|
|
||||||
# Special target: https://github.com/docker/metadata-action#bake-definition
|
# Special target: https://github.com/docker/metadata-action#bake-definition
|
||||||
target "meta-helper" {
|
target "meta-helper" {
|
||||||
@@ -28,26 +31,43 @@ group "default" {
|
|||||||
}
|
}
|
||||||
|
|
||||||
group "validate" {
|
group "validate" {
|
||||||
targets = ["lint", "validate-vendor", "validate-docs"]
|
targets = ["lint", "lint-gopls", "validate-golangci", "validate-vendor", "validate-docs"]
|
||||||
}
|
}
|
||||||
|
|
||||||
target "lint" {
|
target "lint" {
|
||||||
inherits = ["_common"]
|
inherits = ["_common"]
|
||||||
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
|
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
|
||||||
output = ["type=cacheonly"]
|
output = ["type=cacheonly"]
|
||||||
platforms = GOLANGCI_LINT_MULTIPLATFORM != null ? [
|
platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
|
||||||
"darwin/amd64",
|
"darwin/amd64",
|
||||||
"darwin/arm64",
|
"darwin/arm64",
|
||||||
|
"freebsd/amd64",
|
||||||
|
"freebsd/arm64",
|
||||||
"linux/amd64",
|
"linux/amd64",
|
||||||
"linux/arm64",
|
"linux/arm64",
|
||||||
"linux/s390x",
|
"linux/s390x",
|
||||||
"linux/ppc64le",
|
"linux/ppc64le",
|
||||||
"linux/riscv64",
|
"linux/riscv64",
|
||||||
|
"openbsd/amd64",
|
||||||
|
"openbsd/arm64",
|
||||||
"windows/amd64",
|
"windows/amd64",
|
||||||
"windows/arm64"
|
"windows/arm64"
|
||||||
] : []
|
] : []
|
||||||
}
|
}
|
||||||
|
|
||||||
|
target "validate-golangci" {
|
||||||
|
description = "Validate .golangci.yml schema (does not run Go linter)"
|
||||||
|
inherits = ["_common"]
|
||||||
|
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
|
||||||
|
target = "validate-golangci"
|
||||||
|
output = ["type=cacheonly"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "lint-gopls" {
|
||||||
|
inherits = ["lint"]
|
||||||
|
target = "gopls-analyze"
|
||||||
|
}
|
||||||
|
|
||||||
target "validate-vendor" {
|
target "validate-vendor" {
|
||||||
inherits = ["_common"]
|
inherits = ["_common"]
|
||||||
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
|
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
|
||||||
@@ -138,6 +158,8 @@ target "binaries-cross" {
|
|||||||
platforms = [
|
platforms = [
|
||||||
"darwin/amd64",
|
"darwin/amd64",
|
||||||
"darwin/arm64",
|
"darwin/arm64",
|
||||||
|
"freebsd/amd64",
|
||||||
|
"freebsd/arm64",
|
||||||
"linux/amd64",
|
"linux/amd64",
|
||||||
"linux/arm/v6",
|
"linux/arm/v6",
|
||||||
"linux/arm/v7",
|
"linux/arm/v7",
|
||||||
@@ -145,6 +167,8 @@ target "binaries-cross" {
|
|||||||
"linux/ppc64le",
|
"linux/ppc64le",
|
||||||
"linux/riscv64",
|
"linux/riscv64",
|
||||||
"linux/s390x",
|
"linux/s390x",
|
||||||
|
"openbsd/amd64",
|
||||||
|
"openbsd/arm64",
|
||||||
"windows/amd64",
|
"windows/amd64",
|
||||||
"windows/arm64"
|
"windows/arm64"
|
||||||
]
|
]
|
||||||
@@ -180,13 +204,18 @@ variable "HTTPS_PROXY" {
|
|||||||
variable "NO_PROXY" {
|
variable "NO_PROXY" {
|
||||||
default = ""
|
default = ""
|
||||||
}
|
}
|
||||||
|
variable "TEST_BUILDKIT_TAG" {
|
||||||
|
default = null
|
||||||
|
}
|
||||||
|
|
||||||
target "integration-test-base" {
|
target "integration-test-base" {
|
||||||
inherits = ["_common"]
|
inherits = ["_common"]
|
||||||
args = {
|
args = {
|
||||||
|
GO_EXTRA_FLAGS = TEST_COVERAGE == "1" ? "-cover" : null
|
||||||
HTTP_PROXY = HTTP_PROXY
|
HTTP_PROXY = HTTP_PROXY
|
||||||
HTTPS_PROXY = HTTPS_PROXY
|
HTTPS_PROXY = HTTPS_PROXY
|
||||||
NO_PROXY = NO_PROXY
|
NO_PROXY = NO_PROXY
|
||||||
|
BUILDKIT_VERSION = TEST_BUILDKIT_TAG
|
||||||
}
|
}
|
||||||
target = "integration-test-base"
|
target = "integration-test-base"
|
||||||
output = ["type=cacheonly"]
|
output = ["type=cacheonly"]
|
||||||
@@ -196,3 +225,18 @@ target "integration-test" {
|
|||||||
inherits = ["integration-test-base"]
|
inherits = ["integration-test-base"]
|
||||||
target = "integration-test"
|
target = "integration-test"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
variable "GOVULNCHECK_FORMAT" {
|
||||||
|
default = null
|
||||||
|
}
|
||||||
|
|
||||||
|
target "govulncheck" {
|
||||||
|
inherits = ["_common"]
|
||||||
|
dockerfile = "./hack/dockerfiles/govulncheck.Dockerfile"
|
||||||
|
target = "output"
|
||||||
|
args = {
|
||||||
|
FORMAT = GOVULNCHECK_FORMAT
|
||||||
|
}
|
||||||
|
no-cache-filter = ["run"]
|
||||||
|
output = ["${DESTDIR}"]
|
||||||
|
}
|
||||||
|
|||||||
@@ -1,4 +1,6 @@
-# Bake file reference
+---
+title: Bake file reference
+---
 
 The Bake file is a file for defining workflows that you run using `docker buildx bake`.
 
@@ -357,6 +359,21 @@ target "app" {
 }
 ```
 
+### `target.call`
+
+Specifies the frontend method to use. Frontend methods let you, for example,
+execute build checks only, instead of running a build. This is the same as the
+`--call` flag.
+
+```hcl
+target "app" {
+  call = "check"
+}
+```
+
+For more information about frontend methods, refer to the CLI reference for
+[`docker buildx build --call`](https://docs.docker.com/reference/cli/docker/buildx/build/#call).
+
 ### `target.context`
 
 Specifies the location of the build context to use for this target.
@@ -441,8 +458,7 @@ COPY --from=src . .
 
 #### Use another target as base
 
-> **Note**
->
+> [!NOTE]
 > You should prefer to use regular multi-stage builds over this option. You can
 > use this feature when you have multiple Dockerfiles that can't be easily
 > merged into one.
@@ -504,6 +520,25 @@ $ docker buildx bake --print -f - <<< 'target "default" {}'
 }
 ```
 
+### `target.entitlements`
+
+Entitlements are permissions that the build process requires to run.
+
+Currently supported entitlements are:
+
+- `network.host`: Allows the build to use commands that access the host network. In Dockerfile, use [`RUN --network=host`](https://docs.docker.com/reference/dockerfile/#run---networkhost) to run a command with host network enabled.
+
+- `security.insecure`: Allows the build to run commands in privileged containers that are not limited by the default security sandbox. Such container may potentially access and modify system resources. In Dockerfile, use [`RUN --security=insecure`](https://docs.docker.com/reference/dockerfile/#run---security) to run a command in a privileged container.
+
+```hcl
+target "integration-tests" {
+  # this target requires privileged containers to run nested containers
+  entitlements = ["security.insecure"]
+}
+```
+
+Entitlements are enabled with a two-step process. First, a target must declare the entitlements it requires. Secondly, when invoking the `bake` command, the user must grant the entitlements by passing the `--allow` flag or confirming the entitlements when prompted in an interactive terminal. This is to ensure that the user is aware of the possibly insecure permissions they are granting to the build process.
+
 ### `target.inherits`
 
 A target can inherit attributes from other targets.
@@ -748,6 +783,27 @@ target "app" {
 }
 ```
 
+### `target.network`
+
+Specify the network mode for the whole build request. This will override the default network mode
+for all the `RUN` instructions in the Dockerfile. Accepted values are `default`, `host`, and `none`.
+
+Usually, a better approach to set the network mode for your build steps is to instead use `RUN --network=<value>`
+in your Dockerfile. This way, you can set the network mode for individual build steps and everyone building
+the Dockerfile gets consistent behavior without needing to pass additional flags to the build command.
+
+If you set network mode to `host` in your Bake file, you must also grant `network.host` entitlement when
+invoking the `bake` command. This is because `host` network mode requires elevated privileges and can be a security risk.
+You can pass `--allow=network.host` to the `docker buildx bake` command to grant the entitlement, or you can
+confirm the entitlement when prompted if you are using an interactive terminal.
+
+```hcl
+target "app" {
+  # make sure this build does not access internet
+  network = "none"
+}
+```
+
 ### `target.no-cache-filter`
 
 Don't use build cache for the specified stages.
@@ -803,7 +859,7 @@ The following example forces the builder to always pull all images referenced in
 
 ```hcl
 target "default" {
-  pull = "always"
+  pull = true
 }
 ```
 
@@ -830,8 +886,8 @@ This lets you [mount the secret][run_mount_secret] in your Dockerfile.
 ```dockerfile
 RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
     aws cloudfront create-invalidation ...
-RUN --mount=type=secret,id=KUBECONFIG \
-    KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
+RUN --mount=type=secret,id=KUBECONFIG,env=KUBECONFIG \
+    helm upgrade --install
 ```
 
 ### `target.shm-size`
@@ -851,8 +907,7 @@ target "default" {
 }
 ```
 
-> **Note**
->
+> [!NOTE]
 > In most cases, it is recommended to let the builder automatically determine
 > the appropriate configurations. Manual adjustments should only be considered
 > when specific performance tuning is required for complex build scenarios.
@@ -917,14 +972,12 @@ target "app" {
 }
 ```
 
-> **Note**
->
+> [!NOTE]
 > If you do not provide a `hard limit`, the `soft limit` is used
 > for both values. If no `ulimits` are set, they are inherited from
 > the default `ulimits` set on the daemon.
 
-> **Note**
->
+> [!NOTE]
 > In most cases, it is recommended to let the builder automatically determine
 > the appropriate configurations. Manual adjustments should only be considered
 > when specific performance tuning is required for complex build scenarios.
@@ -1112,8 +1165,7 @@ target "webapp-dev" {
 }
 ```
 
-> **Note**
->
+> [!NOTE]
> See [User defined HCL functions][hcl-funcs] page for more details.
 
 <!-- external links -->
@@ -4,8 +4,7 @@ To assist with creating and debugging complex builds, Buildx provides a
 debugger to help you step through the build process and easily inspect the
 state of the build environment at any point.
 
-> **Note**
->
+> [!NOTE]
 > The debug monitor is a new experimental feature in recent versions of Buildx.
 > There are rough edges, known bugs, and missing features. Please try it out
 > and let us know what you think!
@@ -1,3 +0,0 @@
-# CI/CD
-
-This page has moved to [Docker Docs website](https://docs.docker.com/build/ci/)
@@ -1,3 +0,0 @@
-# CNI networking
-
-This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#cni-networking)
@@ -1,3 +0,0 @@
-# Color output controls
-
-This page has moved to [Docker Docs website](https://docs.docker.com/build/building/env-vars/#buildkit_colors)
@@ -1,3 +0,0 @@
-# Using a custom network
-
-This page has moved to [Docker Docs website](https://docs.docker.com/build/drivers/docker-container/#custom-network)
Some files were not shown because too many files have changed in this diff.