Mirror of https://gitea.com/Lydanne/buildx.git (synced 2025-08-29 23:19:10 +08:00)
Compare commits: v0.9.0-rc1...v0.11 (769 commits)
| SHA1 |
|---|
| 4e547752af |
| 95eee3e747 |
| d5bfd8334f |
| 2083f24938 |
| 84da4ec603 |
| 35dac12ae5 |
| 27f332f135 |
| 9872040b66 |
| d8c6c3fc30 |
| 69f929077b |
| 87ce701fe0 |
| 6faf7e5688 |
| d21e9fa8c6 |
| 5657006c1f |
| 0424ae14c0 |
| 66fd2bbdee |
| 3305f18ce5 |
| a8790788d1 |
| 0f6513a29a |
| 44f5946a66 |
| ea610d8f14 |
| d78c75947d |
| 7dddd3a7d3 |
| 54de900931 |
| 50e414f82a |
| a24b6dd4f5 |
| 66600be6ab |
| b4df08551f |
| f581942d7d |
| 5159571dfc |
| 86a5c77c2b |
| 1602b491f9 |
| 94baaf3c90 |
| c5e279f295 |
| a0f91eb87e |
| cb1812ec6a |
| 47e4c2576b |
| 3702e17ed5 |
| 8b85dbea72 |
| afcb118e10 |
| cb4fea66e0 |
| 74fa66b496 |
| ff87dd183a |
| 9f844df9f7 |
| bc597e6b5e |
| 687feca9e8 |
| d4a2c8d0c3 |
| bef42b2441 |
| 2de333fdd3 |
| 1138789f20 |
| 1f4ac09ffb |
| 26a8ffb393 |
| 9b7aada99b |
| fd6207695b |
| def96d2bf4 |
| f5f00e68ef |
| 14aebe713e |
| 9d2388e6f5 |
| 75e2c46295 |
| 2c02db8db4 |
| e304a05d2a |
| 14c1ea0e11 |
| c30bcade2c |
| 62bfb19db4 |
| 47e34f2684 |
| 3d981be4ad |
| 5d94b0fcc7 |
| 569c66fb62 |
| 93f7fbdd78 |
| ea06685c11 |
| eaba4fa9e6 |
| 99e3882e2a |
| 0a2f35970c |
| ab5f5e4169 |
| 696770d29c |
| b47b4e5957 |
| 9a125afba0 |
| d34103b0d9 |
| c820350b5e |
| 61a7854659 |
| e859ebc12e |
| ef997fd6d0 |
| 76c96347ff |
| 48d7dafbd5 |
| d03e93f6f1 |
| fcb7810a38 |
| 459d94bdf1 |
| 7cef021a8a |
| c6db4cf342 |
| 6c9436fbd5 |
| a906149930 |
| af328fe413 |
| 183a73abae |
| b7f0b3d763 |
| 5b27d5a9f6 |
| 8f24c58f4d |
| cd1648192e |
| 8d822fb06c |
| 0758a9b75d |
| f8fa526678 |
| 4abff3ce12 |
| e7034f66bc |
| 8c65e4fc1d |
| d196ac347e |
| 9b723ece46 |
| 5e2f8bd64a |
| 5788ab33d2 |
| f1788002e1 |
| 6c62225d1b |
| 38b4eef5c6 |
| a4db138c5e |
| 55377b2b0f |
| 98dedd3225 |
| 74b121be66 |
| b9cf46785b |
| ecf8dd0a26 |
| 73c17ef4d2 |
| e762e46b4b |
| cafeedba79 |
| 17bdbbd3c3 |
| 2dae553d18 |
| 91c17f25fb |
| 63fc01e08a |
| 354ccc9469 |
| 68ae67720a |
| b273db20c3 |
| 0ae88ecc4d |
| 341fb65f6f |
| 69a9c6609a |
| 1c96fdaf03 |
| c77bd8a578 |
| e5f701351c |
| 09798cdebd |
| 0dfc35d558 |
| 8085f57a3a |
| d582a21acd |
| 580820a4de |
| b7e8afc61b |
| a8a637e19d |
| 79632a4c4c |
| a6b0959276 |
| 6d7142b057 |
| 7e39644f69 |
| adc6349b28 |
| f558fd8b22 |
| 432e16ef70 |
| 8c86c2242a |
| 75ad5d732b |
| 9bd0202312 |
| 367f114cc7 |
| 2959ce205e |
| 75b5c6560f |
| 4429ccbcc2 |
| c59fc18325 |
| 4ce80856b3 |
| af3feec4ea |
| 90c849f5ef |
| 6024212ac8 |
| 2d124e0ce9 |
| e61a8cf637 |
| 167cd16acb |
| 1dd31fefcb |
| 5a12b25bab |
| b702188b65 |
| 060ac842bb |
| 31d1b778ff |
| 1cd4b54810 |
| c54926c5b2 |
| 10aea8e970 |
| be6542911f |
| 9b07f6510a |
| 9ee19520dd |
| 878faae332 |
| eaf38570e7 |
| 167340df17 |
| e61a1da7fc |
| f8483d7243 |
| 2c8a9aad76 |
| 32009a701c |
| 0cbc316f76 |
| 45fccef3f3 |
| fdcb4e2fb9 |
| 4a0a67d7a2 |
| 855d49ff58 |
| 384e873db0 |
| 60e72ba989 |
| 45a2ae6762 |
| 2eeef180ea |
| 8fd81f5cfd |
| 1eb9ad979e |
| 77e0e860f8 |
| e228c398f4 |
| 5d06406f26 |
| cb061b684c |
| 29b427ce13 |
| 4fa7cd1fc2 |
| 12b6a3ad9a |
| 22e1901581 |
| e23c37fa96 |
| e5a0ed1149 |
| c9c1303e31 |
| ae3299d9d4 |
| a948cc14c5 |
| 621b07c799 |
| 7ad970f93a |
| 437fe55104 |
| bebd244e33 |
| 9f2143e3df |
| 98efe7af10 |
| c7c37c3591 |
| a43837d26c |
| f115abb509 |
| 43a07f3997 |
| 41e1693be0 |
| 9d5af461b2 |
| b38c9c7db4 |
| 9f884edbbf |
| 0a7a2b1882 |
| 6bec8f6e00 |
| 65037e4611 |
| ba92989a94 |
| 2bf996d9ad |
| 75ed3e296b |
| e14e0521cf |
| 28e6995f7c |
| 8f72fb353c |
| 14f5d490ef |
| c9095e8eab |
| 0589f69206 |
| b724a173a9 |
| e5ccb64617 |
| 08d114195f |
| caf7d2ec9b |
| 2dffed3f3a |
| 784dc2223d |
| c3fd1e8b79 |
| 6f0c550ee9 |
| 5d551dbbc1 |
| 043cb3a0db |
| 16d5b38f2b |
| 956a1be656 |
| afcaa8df5f |
| 12885c01ad |
| 2ab8749052 |
| e826141af4 |
| 0c1fd31226 |
| 0e9804901b |
| 2402607846 |
| 3d49bbd23a |
| 33b1fdbf39 |
| de4cdab411 |
| a7e471b7b3 |
| ba6e5cddb0 |
| e4ff82f864 |
| 48b733d6da |
| 0b432cc5f2 |
| f6cccefffc |
| fd5d90c699 |
| 06399630a2 |
| 20693aa808 |
| f373b91cc3 |
| ce48b1ae84 |
| b3340cc7ba |
| 1303715aba |
| b716e48926 |
| 7d35a3b8d8 |
| 200058b505 |
| 566f41b598 |
| 6c0547e7e6 |
| 871f865ac8 |
| 62a21520ea |
| a597266a52 |
| 14b66817fb |
| af011d6ca3 |
| 8a02cf8717 |
| 672eeed9a6 |
| 1b816ff838 |
| 10365ddf22 |
| a28cb1491d |
| 1e149bb84f |
| 9827abbf76 |
| a3293cdaaa |
| f7d8bd2055 |
| 5d33a3af22 |
| 87f900ce77 |
| bb5c93cafc |
| c6ce0964b9 |
| 5c21e80a83 |
| 498cc9ba0a |
| 805f3a199d |
| 91fdb0423d |
| 8ba8659496 |
| 16e41ba297 |
| 387ce5be7c |
| 87a120e8e3 |
| 589d4e4cf5 |
| 6535f16aec |
| a1520ea1b2 |
| 0844213897 |
| 989ba55d9a |
| 33388d6ede |
| bfadbecb96 |
| f815f4acf7 |
| 81d7decd13 |
| d699d08399 |
| 9541457c54 |
| c6cdcb02cf |
| 799715ea24 |
| b5c6b3f10b |
| 3f59b27cf4 |
| 00b18558dd |
| 948414e1b2 |
| 56876ab825 |
| 0806870261 |
| fd8eaab2df |
| 77252f161c |
| 4437802e63 |
| 1613fde55c |
| 624bc064d8 |
| 0c4a68555e |
| 476ac18d2c |
| 780531425b |
| 92d2dc8263 |
| cfa6b4f7c8 |
| 5d4223e4f8 |
| 4a73abfd64 |
| 6f722da04d |
| 527d57540e |
| b65f49622e |
| c5ce08bf3c |
| 71b35ae42e |
| 15eb6418e8 |
| 2a83723d57 |
| e8f55a3cf7 |
| b5ea989eee |
| 17105bfc50 |
| eefe27ff42 |
| 1ea71e358a |
| 14d8f95ec9 |
| b0728c96d3 |
| 5e685c0e04 |
| f2ac30f431 |
| 6808c0e585 |
| 9de12bb9c8 |
| 0645acfd79 |
| 439d58ddbd |
| c0a9274d64 |
| f3a4cd5176 |
| c2e11196dd |
| 0b8f0264b0 |
| 5c31d855fd |
| 90d7fb5e77 |
| c4ad930e2a |
| 3d0c88695e |
| 7332140fdf |
| 132fababb0 |
| 71507c0b58 |
| 7888fdee58 |
| fb61fde581 |
| 5258e44030 |
| e16c1b289b |
| 376b73f078 |
| 1c6060f27d |
| ed4fd965ff |
| bc9cb2c66a |
| aa05f4c207 |
| 62fbef22d0 |
| 2563685d27 |
| 598f1f0a62 |
| 8311b0963a |
| b1949b7388 |
| 3341bd1740 |
| 74f64f88a7 |
| d4a4aaf509 |
| 1f73f4fd5d |
| 77f83d4171 |
| 642f28f439 |
| 54f4dc8f6e |
| 89d99b1694 |
| 9753f63f57 |
| 04804ff355 |
| ed9ea2476d |
| d0d29168a5 |
| abda257763 |
| 1b91bc2e02 |
| 56b9e785e5 |
| 081447c9b1 |
| 260117289b |
| 73dca749ca |
| 8ac380bfb3 |
| aeac7e08f9 |
| 7c9cdc4353 |
| 67572785cf |
| 8a70e7634d |
| 6dd5589a9c |
| 78058ce5f3 |
| fd5884189c |
| ab7a9f008d |
| a8eb2a7fbe |
| fbb4f4dec8 |
| 46fd0a61ba |
| 6444c813dc |
| dc8a2b0398 |
| d9780e27cd |
| ab44d03771 |
| b53cb256e5 |
| c3075923f4 |
| a32881313b |
| 07548bc898 |
| 0e544fe835 |
| 21ac4c34fb |
| d2fa4a5724 |
| 4bdf98cf20 |
| 5da09f0c23 |
| 48357ee0c6 |
| 6506166f02 |
| 5f130b25ad |
| a9fd128910 |
| cb94298a02 |
| 046084c0b8 |
| 18760253b9 |
| ded6376ece |
| a4d60a451d |
| 0f4030de5d |
| f1a5a3ec50 |
| 87beaefbb8 |
| 451847183d |
| 7625a3a4b0 |
| 6db696748b |
| 14f9ae679d |
| 4789d2219c |
| eacecf657c |
| 1de0be240f |
| ea4bec2bad |
| 36d95bd3b9 |
| c33b310b48 |
| 8af76c68a8 |
| 1f56f51740 |
| 49b3c0dba5 |
| a718d07f64 |
| f6da7ee135 |
| 7eb266de69 |
| 9f821dabeb |
| a27b8395b1 |
| b1b4e64c97 |
| c1058c17aa |
| 059c347fc2 |
| 7145e021f9 |
| 9723f4f76c |
| db72d0cc05 |
| 00b7d5b858 |
| 6cd0c11ab1 |
| c1ab55a3f2 |
| c756e3ba96 |
| 566f37b65b |
| 6d1ff27410 |
| be55b41427 |
| a4f01b41a4 |
| 01e1c28dd9 |
| 51e41b16db |
| 9e9cdc2e6d |
| bc1d590ca7 |
| 900d9c294d |
| 65aac16139 |
| 4903f462f6 |
| 44b5a19c13 |
| ba8fa6c403 |
| 5b3083e9e1 |
| 523a16aa35 |
| 43a748fd15 |
| 15a80b56b5 |
| b14bfb9fa2 |
| 56950ece69 |
| 1d2ac78443 |
| 8b7aa1a168 |
| 1180d919f5 |
| 347417ee12 |
| fb27e3f919 |
| edb16f8aab |
| 5c56e947fe |
| 571871b084 |
| 8340c40647 |
| 9818055b0e |
| 484823c97d |
| 3ce17b01dc |
| e68c566c1c |
| 19d16aa941 |
| 6852713121 |
| c97500b117 |
| 85040a9067 |
| b8285c17e6 |
| 332dfb4b92 |
| cb279bb14b |
| 60c9cf74ce |
| ff6754eb04 |
| e6b9aba997 |
| 0302894bfb |
| e46394c3be |
| 1885e41789 |
| 2fb9db994b |
| 287aaf1696 |
| 0e6f5a155e |
| 88852e2330 |
| 6369c50614 |
| a22d0a35a4 |
| c93c02df85 |
| e584c6e1a7 |
| 64e4c19971 |
| 551b8f6785 |
| fbbe1c1b91 |
| 1a85745bf1 |
| 0d1fea8134 |
| 19417e76e7 |
| 53d88a79ef |
| 4c21b7e680 |
| a8f689c223 |
| ba8e3f9bc5 |
| 477200d1f9 |
| 662738a7e5 |
| f992b77535 |
| 21b2f135b5 |
| 71e6be5d99 |
| df8e7d0a9a |
| 64422a48d9 |
| 04f9c62772 |
| 2185d07f05 |
| a49d28e00e |
| 629128c497 |
| 70682b043e |
| b741d26eb5 |
| cf8fa4a404 |
| fe76a1b179 |
| df4957307f |
| e21f56e801 |
| e51b55e03c |
| 296b8249cb |
| 7c6b840199 |
| 2a6ff4cbfc |
| 6ad5e2fcf3 |
| 37811320ef |
| 99ac7f5f9e |
| 96aca741a2 |
| 12ec931237 |
| 0e293a4ec9 |
| 163712a23b |
| 5f4d463780 |
| abc8121aa8 |
| 8c47277141 |
| 36b5cd18e8 |
| 1e72e32ec3 |
| 8e5e5a563d |
| 98049e7eda |
| 25aa893bad |
| b270a20274 |
| f0262dd10e |
| f8b673eccd |
| 0c0c9a0030 |
| d1f79317cf |
| fa58522242 |
| aa6fd3d888 |
| ebdd8834a9 |
| fe8d5627e0 |
| b242e3280b |
| cc01caaecb |
| e7b5ee7518 |
| 63073b65c0 |
| 47cf72b8ba |
| af24d72dd8 |
| f451b455c4 |
| 16f4dfafb1 |
| 5b4e8b9d71 |
| b06eaffeeb |
| 3d55540db1 |
| 3c2b9aab96 |
| 49d46e71de |
| 6c5168e1ec |
| e91d5326fe |
| 48b573e835 |
| 4788eb24ab |
| 3ed2783f34 |
| c0e8a41a6f |
| 23b217af24 |
| 3dab19f933 |
| 05efb6291f |
| eba49fdefd |
| 29f2c49374 |
| 2245371696 |
| 74631d5808 |
| 9264b0ca09 |
| a96fb92939 |
| ae59e1f72e |
| 47167a4e6f |
| 23cabd67fb |
| e66410b932 |
| c3bba05770 |
| 69b91f2760 |
| e6b09580b4 |
| 36e663edda |
| 60e2029e70 |
| 5e1db43e34 |
| 6e9b743296 |
| ef9710d8e2 |
| 468b3b9c8c |
| 0d8c853917 |
| df3b868fe7 |
| 3f6a5ab6ba |
| aa1f4389b1 |
| 246cd2aee9 |
| 0b6f8149d1 |
| 4dda2ad58b |
| 15bb14fcf9 |
| b68114375f |
| 83a09b3cf2 |
| 3690cb12e6 |
| b4de4826c4 |
| b06df637c7 |
| 9bb9ae43f9 |
| 35e7172b89 |
| abebf4d955 |
| 1c826d253b |
| d1b454232d |
| be3b41acc6 |
| 2a3e51ebfe |
| 1382fda1c9 |
| c658096c17 |
| 6097919958 |
| 330bdde0a3 |
| a55404fa2e |
| c8c7c9f376 |
| df34c1ce45 |
| da1d66c938 |
| d32926a7e5 |
| 7f008a7d1e |
| eab3f704f5 |
| a50e89c38e |
| 85723a138f |
| 9c69ba6f6f |
| e84ed65525 |
| 4060abd3aa |
| c924a0428d |
| 33ef1b3a30 |
| a6caf4b948 |
| cc7e11da99 |
| a4c3efe783 |
| 4e22846e95 |
| ddbd0cd095 |
| 255a3ec82c |
| 167c77baec |
| ca2718366e |
| 58d3a643b9 |
| 718b8085fa |
| 64930d7440 |
| 4d2f948869 |
| 19c224cbe1 |
| efd1581c01 |
| ac85f590ba |
| b0d3162875 |
| 4715a7e9e1 |
| c5aec243c9 |
| c76f3d3dba |
| 7add6e48b6 |
| 1267e0c076 |
| 361c093a35 |
| 9ad39a29f7 |
| f5a1d8bff9 |
| 8c86afbd57 |
| 4d6e36df99 |
| f51884e893 |
| 4afd9ecf16 |
| ed3b311de4 |
| d030fcc076 |
| 398da1f916 |
| 3a5741f534 |
| c53b0b8a12 |
| 8fd34669ed |
| be7e91899b |
| 74a822568e |
| 105c214d15 |
| 2b6a51ed34 |
| e98c252490 |
| 17f5d6309f |
| 6a46ea04ab |
| 7bd97f6717 |
| 2a9c98ae40 |
| 1adf80c613 |
| f823d3c73c |
| 91f0ed3fc3 |
| 04b56c7331 |
| 3c1a20097f |
| 966c4d4e14 |
| 6b8289d68e |
| 294421db9c |
| 9fdf991c27 |
| 77b33260f8 |
| 33e5f47c6c |
| 25ceb90678 |
| 27e29055cb |
| 810ce31f4b |
| e3c91c9d29 |
| 2f47838ea1 |
| 0566e62995 |
| aeac42be47 |
| aa21ff7efd |
| 57d22a7bd1 |
| 6804bcbf12 |
| 6d34cc0b60 |
| 1bb375fe5c |
| ed00243a0c |
| 1223e759a4 |
| 4fd3ec1a50 |
| 7f9cad1e4e |
| 437b8b140f |
| 8f0d9bd71f |
| 1378c616d6 |
| 3b5dfb3fb4 |
| 9c22be5d9c |
| 42dea89247 |
| 982a332679 |
| 441853f189 |
| 611329fc7f |
| f3c135e583 |
| 7f84582b37 |
| 297526c49d |
| d01d394a2b |
| 17d4369866 |
| fb5e1393a4 |
| 18dbde9ed6 |
| 2a13491919 |
| 3509a1a7ff |
| da1f4b8496 |
| 5b2e1d3ce4 |
| 7d8a6bc1d7 |
| a378f8095e |
| 005bc009e8 |
| 3bc7d4bec6 |
| 96c1b05238 |
| 98f9f806f3 |
| c834ba1389 |
| cab437adef |
| eefa8188e1 |
| 1d8db8a738 |
| 75ddc5b811 |
| 17dc0e1108 |
| 64ac6c9621 |
| a7753ea781 |
| 12a6eb5b22 |
| 74b21258b6 |
| 2f9d46ce27 |
| 7b660c4e30 |
| 406799eb1c |
| ef0cbf20f4 |
| 7f572eb044 |
| 0defb614a4 |
| 18023d7f32 |
| 4983b98005 |
| 8675e02cea |
| 45fc3bf842 |
| cf809aec47 |
| cceb1acca8 |
| e620c40a14 |
| e1590bf68b |
| bad07943b5 |
| 603595559f |
| febcc25d1a |
| e3c0e34b33 |
| 3f5974b7f9 |
| 7ab3dc080b |
| 0883beac30 |
| f9102a3295 |
.gitignore
@@ -1,3 +1 @@
 bin/
-cross-out/
-release-out/
54  .github/CONTRIBUTING.md vendored
@@ -116,6 +116,60 @@ commit automatically with `git commit -s`.
 
 ### Run the unit- and integration-tests
 
+Running tests:
+
+```bash
+make test
+```
+
+This runs all unit and integration tests, in a containerized environment.
+Locally, every package can be tested separately with standard Go tools, but
+integration tests are skipped if local user doesn't have enough permissions or
+worker binaries are not installed.
+
+```bash
+# run unit tests only
+make test-unit
+
+# run integration tests only
+make test-integration
+
+# test a specific package
+TESTPKGS=./bake make test
+
+# run all integration tests with a specific worker
+TESTFLAGS="--run=//worker=remote -v" make test-integration
+
+# run a specific integration test
+TESTFLAGS="--run /TestBuild/worker=remote/ -v" make test-integration
+
+# run a selection of integration tests using a regexp
+TESTFLAGS="--run /TestBuild.*/worker=remote/ -v" make test-integration
+```
+
+> **Note**
+>
+> Set `TEST_KEEP_CACHE=1` for the test framework to keep external dependant
+> images in a docker volume if you are repeatedly calling `make test`. This
+> helps to avoid rate limiting on the remote registry side.
+
+> **Note**
+>
+> Set `TEST_DOCKERD=1` for the test framework to enable the docker workers,
+> specifically the `docker` and `docker-container` drivers.
+>
+> The docker tests cannot be run in parallel, so require passing `--parallel=1`
+> in `TESTFLAGS`.
+
+> **Note**
+>
+> If you are working behind a proxy, you can set some of or all
+> `HTTP_PROXY=http://ip:port`, `HTTPS_PROXY=http://ip:port`, `NO_PROXY=http://ip:port`
+> for the test framework to specify the proxy build args.
+
+
+### Run the helper commands
+
 To enter a demo container environment and experiment, you may run:
 
 ```
124  .github/ISSUE_TEMPLATE/bug.yml vendored Normal file
@@ -0,0 +1,124 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage

body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).

  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true

  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem

  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true

  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true

  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text

  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true

  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproducing your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        echo hello
        ```

        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```

        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true

  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.
12  .github/ISSUE_TEMPLATE/config.yml vendored Normal file
@@ -0,0 +1,12 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use Github Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.
15  .github/ISSUE_TEMPLATE/feature.yml vendored Normal file
@@ -0,0 +1,15 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage

body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true
12  .github/SECURITY.md vendored Normal file
@@ -0,0 +1,12 @@
# Reporting security issues

The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!

**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).

Security reports are greatly appreciated, and we will publicly thank you for it.
We also like to send gifts—if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
735  .github/releases.json vendored Normal file
@@ -0,0 +1,735 @@
{
  "latest": {
    "id": 90741208,
    "tag_name": "v0.10.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
    ]
  },
  "v0.10.2": {
    "id": 90741208,
    "tag_name": "v0.10.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
    ]
  },
  "v0.10.1": {
    "id": 90346950,
    "tag_name": "v0.10.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.10.0": {
|
||||||
|
"id": 88388110,
|
||||||
|
"tag_name": "v0.10.0",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.10.0-rc3": {
|
||||||
|
"id": 88191592,
|
||||||
|
"tag_name": "v0.10.0-rc3",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.10.0-rc2": {
|
||||||
|
"id": 86248476,
|
||||||
|
"tag_name": "v0.10.0-rc2",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.10.0-rc1": {
|
||||||
|
"id": 85963900,
|
||||||
|
"tag_name": "v0.10.0-rc1",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.9.1": {
|
||||||
|
"id": 74760068,
|
||||||
|
"tag_name": "v0.9.1",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.9.0": {
|
||||||
|
"id": 74546589,
|
||||||
|
"tag_name": "v0.9.0",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.9.0-rc2": {
|
||||||
|
"id": 74052235,
|
||||||
|
"tag_name": "v0.9.0-rc2",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.9.0-rc1": {
|
||||||
|
"id": 73389692,
|
||||||
|
"tag_name": "v0.9.0-rc1",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.8.2": {
|
||||||
|
"id": 63479740,
|
||||||
|
"tag_name": "v0.8.2",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.8.1": {
|
||||||
|
"id": 62289050,
|
||||||
|
"tag_name": "v0.8.1",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.8.0": {
|
||||||
|
"id": 61423774,
|
||||||
|
"tag_name": "v0.8.0",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"v0.8.0-rc1": {
|
||||||
|
"id": 60513568,
|
||||||
|
"tag_name": "v0.8.0-rc1",
|
||||||
|
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
|
||||||
|
"assets": [
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
|
||||||
|
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
|
    ]
  },
  "v0.7.1": {
    "id": 54098347,
    "tag_name": "v0.7.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
    ]
  },
  "v0.7.0": {
    "id": 53109422,
    "tag_name": "v0.7.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
    ]
  },
  "v0.7.0-rc1": {
    "id": 52726324,
    "tag_name": "v0.7.0-rc1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
    ]
  },
  "v0.6.3": {
    "id": 48691641,
    "tag_name": "v0.6.3",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
    ]
  },
  "v0.6.2": {
    "id": 48207405,
    "tag_name": "v0.6.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
    ]
  },
  "v0.6.1": {
    "id": 47064772,
    "tag_name": "v0.6.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
    ]
  },
  "v0.6.0": {
    "id": 46343260,
    "tag_name": "v0.6.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
    ]
  },
  "v0.6.0-rc1": {
    "id": 46230351,
    "tag_name": "v0.6.0-rc1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
    ]
  },
  "v0.5.1": {
    "id": 35276550,
    "tag_name": "v0.5.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
    ]
  },
  "v0.5.0": {
    "id": 35268960,
    "tag_name": "v0.5.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
    ]
  },
  "v0.5.0-rc1": {
    "id": 35015334,
    "tag_name": "v0.5.0-rc1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
    ]
  },
  "v0.4.2": {
    "id": 30007794,
    "tag_name": "v0.4.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
    ]
  },
  "v0.4.1": {
    "id": 26067509,
    "tag_name": "v0.4.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
    ]
  },
  "v0.4.0": {
    "id": 26028174,
    "tag_name": "v0.4.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
    ]
  },
  "v0.3.1": {
    "id": 20316235,
    "tag_name": "v0.3.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
    ]
  },
  "v0.3.0": {
    "id": 19029664,
    "tag_name": "v0.3.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
    ]
  },
  "v0.2.2": {
    "id": 17671545,
    "tag_name": "v0.2.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
    ]
  },
  "v0.2.1": {
    "id": 17582885,
    "tag_name": "v0.2.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
    ]
  },
  "v0.2.0": {
    "id": 16965310,
    "tag_name": "v0.2.0",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
    ]
  }
}
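The manifest above maps each release tag to its asset download URLs; a consumer picks the asset whose filename matches the target OS/architecture pair. A minimal sketch of that selection (the `asset_for` helper is hypothetical, not part of buildx, and uses a naive substring match):

```python
import json

# Minimal manifest in the same shape as the releases.json fragment above.
manifest = json.loads("""
{
  "v0.7.1": {
    "id": 54098347,
    "tag_name": "v0.7.1",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
    ]
  }
}
""")

def asset_for(tag, platform):
    # Naive substring match on the asset filename, e.g. "linux-amd64"
    # selects buildx-v0.7.1.linux-amd64; returns None if nothing matches.
    for url in manifest[tag]["assets"]:
        if platform in url.rsplit("/", 1)[-1]:
            return url
    return None

print(asset_for("v0.7.1", "linux-amd64"))
```

Note that releases from v0.7.0 onward also ship a `checksums.txt` asset, so a downloader can verify the binary after fetching it.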
218 .github/workflows/build.yml vendored
@@ -13,17 +13,21 @@ on:
     tags:
       - 'v*'
   pull_request:
-    branches:
-      - 'master'
-      - 'v[0-9]*'
+    paths-ignore:
+      - '.github/releases.json'
+      - 'README.md'
+      - 'docs/**'
 
 env:
+  BUILDX_VERSION: "latest"
+  BUILDKIT_IMAGE: "moby/buildkit:latest"
   REPO_SLUG: "docker/buildx-bin"
-  RELEASE_OUT: "./release-out"
+  DESTDIR: "./bin"
+  TEST_CACHE_SCOPE: "test"
 
 jobs:
-  build:
-    runs-on: ubuntu-latest
+  prepare-test:
+    runs-on: ubuntu-22.04
     steps:
       -
         name: Checkout
@@ -35,33 +39,172 @@ jobs:
         name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
         with:
-          version: latest
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
+      -
+        name: Build
+        uses: docker/bake-action@v3
+        with:
+          targets: integration-test-base
+          set: |
+            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
+            *.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
+
+  test:
+    runs-on: ubuntu-22.04
+    needs:
+      - prepare-test
+    env:
+      TESTFLAGS: "-v --parallel=6 --timeout=30m"
+      TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
+      GOTESTSUM_FORMAT: "standard-verbose"
+      TEST_IMAGE_BUILD: "0"
+      TEST_IMAGE_ID: "buildx-tests"
+    strategy:
+      fail-fast: false
+      matrix:
+        worker:
+          - docker
+          - docker-container
+          - remote
+        pkg:
+          - ./tests
+        include:
+          - pkg: ./...
+            skip-integration-tests: 1
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
+      -
+        name: Build test image
+        uses: docker/bake-action@v3
+        with:
+          targets: integration-test
+          set: |
+            *.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
+            *.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
       -
         name: Test
         run: |
-          make test
+          export TEST_REPORT_SUFFIX=-${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.skip-integration-tests }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')
+          ./hack/test
+        env:
+          TEST_DOCKERD: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker-container') && '1' || '0' }}"
+          TESTFLAGS: "${{ (matrix.worker == 'docker') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
+          TESTPKGS: "${{ matrix.pkg }}"
+          SKIP_INTEGRATION_TESTS: "${{ matrix.skip-integration-tests }}"
       -
         name: Send to Codecov
+        if: always()
        uses: codecov/codecov-action@v3
         with:
-          file: ./coverage/coverage.txt
+          directory: ./bin/testreports
       -
-        name: Expose GitHub Runtime
-        uses: crazy-max/ghaction-github-runtime@906832f62b7baa936e3fbef72b029308af505ee7
+        name: Generate annotations
+        if: always()
+        uses: crazy-max/.github/.github/actions/gotest-annotations@1a64ea6d01db9a48aa61954cb20e265782c167d9
+        with:
+          directory: ./bin/testreports
       -
-        name: Build binaries
+        name: Upload test reports
+        if: always()
+        uses: actions/upload-artifact@v3
+        with:
+          name: test-reports
+          path: ./bin/testreports
+
+  prepare-binaries:
+    runs-on: ubuntu-22.04
+    outputs:
+      matrix: ${{ steps.platforms.outputs.matrix }}
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Create matrix
+        id: platforms
+        run: |
+          echo "matrix=$(docker buildx bake binaries-cross --print | jq -cr '.target."binaries-cross".platforms')" >>${GITHUB_OUTPUT}
+      -
+        name: Show matrix
+        run: |
+          echo ${{ steps.platforms.outputs.matrix }}
+
+  binaries:
+    runs-on: ubuntu-22.04
+    needs:
+      - prepare-binaries
+    strategy:
+      fail-fast: false
+      matrix:
+        platform: ${{ fromJson(needs.prepare-binaries.outputs.matrix) }}
+    steps:
+      -
+        name: Prepare
+        run: |
+          platform=${{ matrix.platform }}
+          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
+      -
+        name: Build
         run: |
           make release
         env:
-          CACHE_FROM: type=gha,scope=release
-          CACHE_TO: type=gha,scope=release
+          PLATFORMS: ${{ matrix.platform }}
+          CACHE_FROM: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }}
+          CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
       -
         name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           name: buildx
-          path: ${{ env.RELEASE_OUT }}/*
+          path: ${{ env.DESTDIR }}/*
           if-no-files-found: error
+
+  bin-image:
+    runs-on: ubuntu-22.04
+    needs:
+      - test
+    if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+        with:
+          version: ${{ env.BUILDX_VERSION }}
+          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
+          buildkitd-flags: --debug
       -
         name: Docker meta
         id: meta
@@ -83,25 +226,56 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       -
         name: Build and push image
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v3
         with:
           files: |
             ./docker-bake.hcl
             ${{ steps.meta.outputs.bake-file }}
           targets: image-cross
           push: ${{ github.event_name != 'pull_request' }}
+          sbom: true
+          set: |
+            *.cache-from=type=gha,scope=bin-image
+            *.cache-to=type=gha,scope=bin-image,mode=max
+
+  release:
+    runs-on: ubuntu-22.04
+    needs:
+      - test
+      - binaries
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Download binaries
+        uses: actions/download-artifact@v3
+        with:
+          name: buildx
+          path: ${{ env.DESTDIR }}
+      -
+        name: Create checksums
+        run: ./hack/hash-files
+      -
+        name: List artifacts
+        run: |
+          tree -nh ${{ env.DESTDIR }}
+      -
+        name: Check artifacts
+        run: |
+          find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
       -
         name: GitHub Release
         if: startsWith(github.ref, 'refs/tags/v')
-        uses: softprops/action-gh-release@1e07f4398721186383de40550babbdf2b84acfc5
+        uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           draft: true
-          files: ${{ env.RELEASE_OUT }}/*
+          files: ${{ env.DESTDIR }}/*
 
   buildkit-edge:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     continue-on-error: true
     steps:
       -
@@ -114,12 +288,12 @@ jobs:
         name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
         with:
-          version: latest
+          version: ${{ env.BUILDX_VERSION }}
           driver-opts: image=moby/buildkit:master
           buildkitd-flags: --debug
       -
         # Just run a bake target to check eveything runs fine
         name: Build
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v3
         with:
-          targets: binaries-cross
+          targets: binaries
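Two small shell transformations in the diff above are easy to miss: the `binaries` job derives `PLATFORM_PAIR` with the bash substitution `${platform//\//-}` (a platform string like `linux/arm/v7` contains slashes, which must not appear in a cache scope name), and the `test` job builds `TEST_REPORT_SUFFIX` by piping through `tr` to keep only alphanumerics and dashes, lowercased. Sketched in Python for illustration (both helper names are hypothetical, not part of the repo):

```python
import re

def platform_pair(platform):
    # Mirrors the workflow's bash substitution: replace every "/" with "-"
    # so the result is safe in cache scope names and artifact filenames.
    return platform.replace("/", "-")

def report_suffix(pkg, skip, worker):
    # Mirrors: echo "$pkg-$skip-$worker" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]'
    # i.e. drop everything except alphanumerics and dashes, then lowercase.
    return re.sub(r"[^A-Za-z0-9\-]", "", f"{pkg}-{skip}-{worker}").lower()

print(platform_pair("linux/arm/v7"))            # linux-arm-v7
print(report_suffix("./tests", "1", "docker"))  # tests-1-docker
```

This keeps each matrix leg's cache scope and test-report artifact name unique and filesystem-safe.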
@@ -1,20 +1,22 @@
-name: docs
+name: docs-release
 
 on:
   release:
-    types: [ published ]
+    types:
+      - released
 
 jobs:
   open-pr:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
+    if: ${{ github.event.release.prerelease != true && github.repository == 'docker/buildx' }}
     steps:
       -
         name: Checkout docs repo
         uses: actions/checkout@v3
         with:
           token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
-          repository: docker/docker.github.io
+          repository: docker/docs
-          ref: master
+          ref: main
       -
         name: Prepare
         run: |
@@ -24,7 +26,7 @@ jobs:
         uses: docker/setup-buildx-action@v2
       -
         name: Build docs
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v3
         with:
           source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
           targets: update-docs
@@ -42,7 +44,7 @@ jobs:
           git add -A .
       -
         name: Create PR on docs repo
-        uses: peter-evans/create-pull-request@923ad837f191474af6b1721408744feb989a4c27 # v4.0.4
+        uses: peter-evans/create-pull-request@284f54f989303d2699d373481a0cfa13ad5a6666
         with:
           token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
           push-to-fork: docker-tools-robot/docker.github.io
.github/workflows/docs-upstream.yml (vendored, new file, 62 lines)
@@ -0,0 +1,62 @@
+# this workflow runs the remote validate bake target from docker/docker.github.io
+# to check if yaml reference docs and markdown files used in this repo are still valid
+# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
+name: docs-upstream
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+on:
+  push:
+    branches:
+      - 'master'
+      - 'v[0-9]*'
+    paths:
+      - '.github/workflows/docs-upstream.yml'
+      - 'docs/**'
+  pull_request:
+    paths:
+      - '.github/workflows/docs-upstream.yml'
+      - 'docs/**'
+
+jobs:
+  docs-yaml:
+    runs-on: ubuntu-22.04
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v3
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+        with:
+          version: latest
+      -
+        name: Build reference YAML docs
+        uses: docker/bake-action@v3
+        with:
+          targets: update-docs
+          set: |
+            *.output=/tmp/buildx-docs
+            *.cache-from=type=gha,scope=docs-yaml
+            *.cache-to=type=gha,scope=docs-yaml,mode=max
+        env:
+          DOCS_FORMATS: yaml
+      -
+        name: Upload reference YAML docs
+        uses: actions/upload-artifact@v3
+        with:
+          name: docs-yaml
+          path: /tmp/buildx-docs/out/reference
+          retention-days: 1
+
+  validate:
+    uses: docker/docs/.github/workflows/validate-upstream.yml@main
+    needs:
+      - docs-yaml
+    with:
+      repo: https://github.com/${{ github.repository }}
+      data-files-id: docs-yaml
+      data-files-folder: buildx
+      data-files-placeholder-folder: engine/reference/commandline
.github/workflows/e2e.yml (vendored, 88 lines changed)
@@ -11,15 +11,18 @@ on:
       - 'master'
       - 'v[0-9]*'
   pull_request:
-    branches:
-      - 'master'
-      - 'v[0-9]*'
+    paths-ignore:
+      - '.github/releases.json'
+      - 'README.md'
+      - 'docs/**'
+
+env:
+  DESTDIR: "./bin"
+  K3S_VERSION: "v1.21.2-k3s1"

 jobs:
   build:
-    runs-on: ubuntu-20.04
-    env:
-      BIN_OUT: ./bin
+    runs-on: ubuntu-22.04
     steps:
       - name: Checkout
         uses: actions/checkout@v3
@@ -30,7 +33,7 @@ jobs:
           version: latest
       -
         name: Build
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v3
         with:
           targets: binaries
           set: |
@@ -40,13 +43,13 @@ jobs:
       -
         name: Rename binary
         run: |
-          mv ${{ env.BIN_OUT }}/buildx ${{ env.BIN_OUT }}/docker-buildx
+          mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
       -
         name: Upload artifacts
         uses: actions/upload-artifact@v3
         with:
           name: binary
-          path: ${{ env.BIN_OUT }}
+          path: ${{ env.DESTDIR }}/build
           if-no-files-found: error
           retention-days: 7
@@ -129,20 +132,67 @@ jobs:
       -
         name: Install k3s
         if: matrix.driver == 'kubernetes'
-        uses: debianmaster/actions-k3s@b9cf3f599fd118699a3c8a0d18a2f2bda6cf4ce4
-        id: k3s
+        uses: actions/github-script@v6
         with:
-          version: v1.21.2-k3s1
+          script: |
+            const fs = require('fs');
+
+            let wait = function(milliseconds) {
+              return new Promise((resolve, reject) => {
+                if (typeof(milliseconds) !== 'number') {
+                  throw new Error('milleseconds not a number');
+                }
+                setTimeout(() => resolve("done!"), milliseconds)
+              });
+            }
+
+            try {
+              const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
+              core.info(`storing kubeconfig in ${kubeconfig}`);
+
+              await exec.exec('docker', ["run", "-d",
+                "--privileged",
+                "--name=buildkit-k3s",
+                "-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
+                "-e", "K3S_KUBECONFIG_MODE=666",
+                "-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
+                "-p", "6443:6443",
+                "-p", "80:80",
+                "-p", "443:443",
+                "-p", "8080:8080",
+                "rancher/k3s:${{ env.K3S_VERSION }}", "server"
+              ]);
+              await wait(10000);
+
+              core.exportVariable('KUBECONFIG', kubeconfig);
+
+              let nodeName;
+              for (let count = 1; count <= 5; count++) {
+                try {
+                  const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
+                  nodeName = nodeNameOutput.stdout
+                } catch (error) {
+                  core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
+                } finally {
+                  if (nodeName) {
+                    break;
+                  }
+                  await wait(5000);
+                }
+              }
+              if (!nodeName) {
+                throw new Error(`Unable to resolve node name after 5 attempts.`);
+              }
+
+              await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
+            } catch (error) {
+              core.setFailed(error.message);
+            }
       -
-        name: Config k3s
+        name: Print KUBECONFIG
         if: matrix.driver == 'kubernetes'
         run: |
-          (set -x ; cat ${{ steps.k3s.outputs.kubeconfig }})
-      -
-        name: Check k3s nodes
-        if: matrix.driver == 'kubernetes'
-        run: |
-          kubectl get nodes
+          yq ${{ env.KUBECONFIG }}
       -
         name: Launch remote buildkitd
         if: matrix.driver == 'remote'
.github/workflows/validate.yml (vendored, 28 lines changed)
@@ -13,13 +13,12 @@ on:
     tags:
       - 'v*'
   pull_request:
-    branches:
-      - 'master'
-      - 'v[0-9]*'
+    paths-ignore:
+      - '.github/releases.json'

 jobs:
   validate:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     strategy:
       fail-fast: false
       matrix:
@@ -27,6 +26,7 @@ jobs:
         - lint
         - validate-vendor
         - validate-docs
+        - validate-generated-files
     steps:
       -
         name: Checkout
@@ -40,23 +40,3 @@ jobs:
         name: Run
         run: |
           make ${{ matrix.target }}
-
-  validate-docs-yaml:
-    runs-on: ubuntu-latest
-    needs:
-      - validate
-    steps:
-      -
-        name: Checkout
-        uses: actions/checkout@v3
-      -
-        name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
-        with:
-          version: latest
-      -
-        name: Run
-        run: |
-          make docs
-        env:
-          FORMATS: yaml
.gitignore (vendored, 5 lines changed)
@@ -1,4 +1 @@
-bin
-coverage
-cross-out
-release-out
+/bin
@@ -11,17 +11,17 @@ linters:
   enable:
     - gofmt
    - govet
-    - deadcode
     - depguard
     - goimports
     - ineffassign
     - misspell
     - unused
-    - varcheck
     - revive
     - staticcheck
     - typecheck
-    - structcheck
+    - nolintlint
+    - gosec
+    - forbidigo
   disable-all: true

 linters-settings:
@@ -32,6 +32,15 @@ linters-settings:
       # The io/ioutil package has been deprecated.
       # https://go.dev/doc/go1.16#ioutil
       - io/ioutil
+  forbidigo:
+    forbid:
+      - '^fmt\.Errorf(# use errors\.Errorf instead)?$'
+  gosec:
+    excludes:
+      - G204 # Audit use of command execution
+      - G402 # TLS MinVersion too low
+    config:
+      G306: "0644"

 issues:
   exclude-rules:
Dockerfile (97 lines changed)
@@ -1,10 +1,12 @@
-# syntax=docker/dockerfile:1.4
+# syntax=docker/dockerfile:1

-ARG GO_VERSION=1.18
-ARG XX_VERSION=1.1.2
-ARG DOCKERD_VERSION=20.10.14
-
-FROM docker:$DOCKERD_VERSION AS dockerd-release
+ARG GO_VERSION=1.20.6
+ARG XX_VERSION=1.2.1
+ARG DOCKER_VERSION=24.0.2
+ARG GOTESTSUM_VERSION=v1.9.0
+ARG REGISTRY_VERSION=2.8.0
+ARG BUILDKIT_VERSION=v0.11.6

 # xx is a helper for cross-compilation
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
@@ -18,23 +20,55 @@ ENV GOFLAGS=-mod=vendor
 ENV CGO_ENABLED=0
 WORKDIR /src

+FROM registry:$REGISTRY_VERSION AS registry
+
+FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
+
+FROM gobase AS docker
+ARG TARGETPLATFORM
+ARG DOCKER_VERSION
+WORKDIR /opt/docker
+RUN DOCKER_ARCH=$(case ${TARGETPLATFORM:-linux/amd64} in \
+    "linux/amd64")   echo "x86_64"  ;; \
+    "linux/arm/v6")  echo "armel"   ;; \
+    "linux/arm/v7")  echo "armhf"   ;; \
+    "linux/arm64")   echo "aarch64" ;; \
+    "linux/ppc64le") echo "ppc64le" ;; \
+    "linux/s390x")   echo "s390x"   ;; \
+    *)               echo ""        ;; esac) \
+  && echo "DOCKER_ARCH=$DOCKER_ARCH" \
+  && wget -qO- "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz" | tar xvz --strip 1
+RUN ./dockerd --version && ./containerd --version && ./ctr --version && ./runc --version
+
+FROM gobase AS gotestsum
+ARG GOTESTSUM_VERSION
+ENV GOFLAGS=
+RUN --mount=target=/root/.cache,type=cache \
+    GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
+    /out/gotestsum --version
+
 FROM gobase AS buildx-version
-RUN --mount=target=. \
-  PKG=github.com/docker/buildx VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
-  echo "-X ${PKG}/version.Version=${VERSION} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
-  echo -n "${VERSION}" | tee /tmp/.version;
+RUN --mount=type=bind,target=. <<EOT
+  set -e
+  mkdir /buildx-version
+  echo -n "$(./hack/git-meta version)" | tee /buildx-version/version
+  echo -n "$(./hack/git-meta revision)" | tee /buildx-version/revision
+EOT

 FROM gobase AS buildx-build
-ARG LDFLAGS="-w -s"
 ARG TARGETPLATFORM
 RUN --mount=type=bind,target=. \
     --mount=type=cache,target=/root/.cache \
     --mount=type=cache,target=/go/pkg/mod \
-    --mount=type=bind,source=/tmp/.ldflags,target=/tmp/.ldflags,from=buildx-version \
-  set -x; xx-go build -ldflags "$(cat /tmp/.ldflags) ${LDFLAGS}" -o /usr/bin/buildx ./cmd/buildx && \
-  xx-verify --static /usr/bin/buildx
+    --mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
+  set -e
+  xx-go --wrap
+  DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
+  xx-verify --static /usr/bin/docker-buildx
+EOT

 FROM gobase AS test
+ENV SKIP_INTEGRATION_TESTS=1
 RUN --mount=type=bind,target=. \
     --mount=type=cache,target=/root/.cache \
     --mount=type=cache,target=/go/pkg/mod \
@@ -45,29 +79,56 @@ FROM scratch AS test-coverage
 COPY --from=test /tmp/coverage.txt /coverage.txt

 FROM scratch AS binaries-unix
-COPY --link --from=buildx-build /usr/bin/buildx /
+COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx

 FROM binaries-unix AS binaries-darwin
 FROM binaries-unix AS binaries-linux

 FROM scratch AS binaries-windows
-COPY --link --from=buildx-build /usr/bin/buildx /buildx.exe
+COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe

 FROM binaries-$TARGETOS AS binaries
+# enable scanning for this stage
+ARG BUILDKIT_SBOM_SCAN_STAGE=true
+
+FROM gobase AS integration-test-base
+# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
+RUN apk add --no-cache \
+    btrfs-progs \
+    e2fsprogs \
+    e2fsprogs-extra \
+    ip6tables \
+    iptables \
+    openssl \
+    shadow-uidmap \
+    xfsprogs \
+    xz
+COPY --link --from=gotestsum /out/gotestsum /usr/bin/
+COPY --link --from=registry /bin/registry /usr/bin/
+COPY --link --from=docker /opt/docker/* /usr/bin/
+COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
+COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
+COPY --link --from=binaries /buildx /usr/bin/
+
+FROM integration-test-base AS integration-test
+COPY . .

 # Release
 FROM --platform=$BUILDPLATFORM alpine AS releaser
 WORKDIR /work
 ARG TARGETPLATFORM
 RUN --mount=from=binaries \
-    --mount=type=bind,source=/tmp/.version,target=/tmp/.version,from=buildx-version \
-  mkdir -p /out && cp buildx* "/out/buildx-$(cat /tmp/.version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
+    --mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
+  set -e
+  mkdir -p /out
+  cp buildx* "/out/buildx-$(cat /buildx-version/version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
+EOT

 FROM scratch AS release
 COPY --from=releaser /out/ /

 # Shell
-FROM docker:$DOCKERD_VERSION AS dockerd-release
+FROM docker:$DOCKER_VERSION AS dockerd-release
 FROM alpine AS shell
 RUN apk add --no-cache iptables tmux git vim less openssh
 RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
@@ -152,6 +152,7 @@ made through a pull request.
 people = [
   "akihirosuda",
   "crazy-max",
+  "jedevc",
   "tiborvass",
   "tonistiigi",
 ]
@@ -188,6 +189,11 @@ made through a pull request.
   Email = "contact@crazymax.dev"
   GitHub = "crazy-max"

+[people.jedevc]
+  Name = "Justin Chadwell"
+  Email = "me@jedevc.com"
+  GitHub = "jedevc"
+
 [people.thajeztah]
   Name = "Sebastiaan van Stijn"
   Email = "github@gone.nl"
Makefile (50 lines changed)
@@ -4,59 +4,93 @@ else ifneq (, $(shell docker buildx version))
 export BUILDX_CMD = docker buildx
 else ifneq (, $(shell which buildx))
 export BUILDX_CMD = $(which buildx)
-else
-$(error "Buildx is required: https://github.com/docker/buildx#installing")
 endif

-export BIN_OUT = ./bin
-export RELEASE_OUT = ./release-out
+export BUILDX_CMD ?= docker buildx
+
+.PHONY: all
+all: binaries
+
+.PHONY: build
+build:
+	./hack/build

+.PHONY: shell
 shell:
 	./hack/shell

+.PHONY: binaries
 binaries:
 	$(BUILDX_CMD) bake binaries

+.PHONY: binaries-cross
 binaries-cross:
 	$(BUILDX_CMD) bake binaries-cross

+.PHONY: install
 install: binaries
 	mkdir -p ~/.docker/cli-plugins
-	install bin/buildx ~/.docker/cli-plugins/docker-buildx
+	install bin/build/buildx ~/.docker/cli-plugins/docker-buildx

+.PHONY: release
 release:
 	./hack/release

-validate-all: lint test validate-vendor validate-docs
+.PHONY: validate-all
+validate-all: lint test validate-vendor validate-docs validate-generated-files

+.PHONY: lint
 lint:
 	$(BUILDX_CMD) bake lint

+.PHONY: test
 test:
-	$(BUILDX_CMD) bake test
+	./hack/test
+
+.PHONY: test-unit
+test-unit:
+	TESTPKGS=./... SKIP_INTEGRATION_TESTS=1 ./hack/test
+
+.PHONY: test
+test-integration:
+	TESTPKGS=./tests ./hack/test

+.PHONY: validate-vendor
 validate-vendor:
 	$(BUILDX_CMD) bake validate-vendor

+.PHONY: validate-docs
 validate-docs:
 	$(BUILDX_CMD) bake validate-docs

+.PHONY: validate-authors
 validate-authors:
 	$(BUILDX_CMD) bake validate-authors

+.PHONY: validate-generated-files
+validate-generated-files:
+	$(BUILDX_CMD) bake validate-generated-files
+
+.PHONY: test-driver
 test-driver:
 	./hack/test-driver

+.PHONY: vendor
 vendor:
 	./hack/update-vendor

+.PHONY: docs
 docs:
 	./hack/update-docs

+.PHONY: authors
 authors:
 	$(BUILDX_CMD) bake update-authors

+.PHONY: mod-outdated
 mod-outdated:
 	$(BUILDX_CMD) bake mod-outdated

-.PHONY: shell binaries binaries-cross install release validate-all lint validate-vendor validate-docs validate-authors vendor docs authors
+.PHONY: generated-files
+generated-files:
+	$(BUILDX_CMD) bake update-generated-files
README.md (43 lines changed)
@@ -2,7 +2,7 @@
 [](https://github.com/docker/buildx/releases/latest)
 [](https://pkg.go.dev/github.com/docker/buildx)
 [](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
 [](https://goreportcard.com/report/github.com/docker/buildx)
 [](https://codecov.io/gh/docker/buildx)
@@ -32,16 +32,6 @@ Key features:
 - [Building with buildx](#building-with-buildx)
 - [Working with builder instances](#working-with-builder-instances)
 - [Building multi-platform images](#building-multi-platform-images)
-- [Guides](docs/guides)
-  - [High-level build options with Bake](docs/guides/bake/index.md)
-  - [CI/CD](docs/guides/cicd.md)
-  - [CNI networking](docs/guides/cni-networking.md)
-  - [Using a custom network](docs/guides/custom-network.md)
-  - [Using a custom registry configuration](docs/guides/custom-registry-config.md)
-  - [OpenTelemetry support](docs/guides/opentelemetry.md)
-  - [Registry mirror](docs/guides/registry-mirror.md)
-  - [Drivers](docs/guides/drivers/index.md)
-  - [Resource limiting](docs/guides/resource-limiting.md)
 - [Reference](docs/reference/buildx.md)
   - [`buildx bake`](docs/reference/buildx_bake.md)
   - [`buildx build`](docs/reference/buildx_build.md)
@@ -61,11 +51,18 @@ Key features:
 - [`buildx version`](docs/reference/buildx_version.md)
 - [Contributing](#contributing)

+For more information on how to use Buildx, see
+[Docker Build docs](https://docs.docker.com/build/).
+
 # Installing

-Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer.
-A limited set of functionality works with older versions of Docker when
-invoking the binary directly.
+Using `buildx` with Docker requires Docker engine 19.03 or newer.
+
+> **Warning**
+>
+> Using an incompatible version of Docker may result in unexpected behavior,
+> and will likely cause issues, especially when using Buildx builders with more
+> recent versions of BuildKit.

 ## Windows and macOS
@@ -123,7 +120,8 @@ On Windows:
 Here is how to install and use Buildx inside a Dockerfile through the
 [`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:

-```Dockerfile
+```dockerfile
+# syntax=docker/dockerfile:1
 FROM docker
 COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
 RUN docker buildx version
@@ -143,7 +141,7 @@ To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_unin
 # Buildx 0.6+
 $ docker buildx bake "https://github.com/docker/buildx.git"
 $ mkdir -p ~/.docker/cli-plugins
-$ mv ./bin/buildx ~/.docker/cli-plugins/docker-buildx
+$ mv ./bin/build/buildx ~/.docker/cli-plugins/docker-buildx

 # Docker 19.03+
 $ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
@@ -190,12 +188,12 @@ through various "drivers". Each driver defines how and where a build should
 run, and have different feature sets.

 We currently support the following drivers:
-- The `docker` driver ([guide](docs/guides/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `docker-container` driver ([guide](docs/guides/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `kubernetes` driver ([guide](docs/guides/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
-- The `remote` driver ([guide](docs/guides/drivers/remote.md))
+- The `remote` driver ([guide](docs/manuals/drivers/remote.md))

-For more information on drivers, see the [drivers guide](docs/guides/drivers/index.md).
+For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).

 ## Working with builder instances
@@ -298,6 +296,7 @@ inside your Dockerfile and can be leveraged by the processes running as part
 of your build.

 ```dockerfile
+# syntax=docker/dockerfile:1
 FROM --platform=$BUILDPLATFORM golang:alpine AS build
 ARG TARGETPLATFORM
 ARG BUILDPLATFORM
@@ -311,7 +310,7 @@ cross-compilation helpers for more advanced use-cases.

 ## High-level build options

-See [`docs/guides/bake/index.md`](docs/guides/bake/index.md) for more details.
+See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.

 # Contributing
559
bake/bake.go
559
bake/bake.go
@@ -3,7 +3,6 @@ package bake
|
|||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"encoding/csv"
|
"encoding/csv"
|
||||||
"fmt"
|
|
||||||
"io"
|
"io"
|
||||||
"os"
|
"os"
|
||||||
"path"
|
"path"
|
||||||
@@ -13,22 +12,23 @@ import (
|
|||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
|
composecli "github.com/compose-spec/compose-go/cli"
|
||||||
"github.com/docker/buildx/bake/hclparser"
|
"github.com/docker/buildx/bake/hclparser"
|
||||||
"github.com/docker/buildx/build"
|
"github.com/docker/buildx/build"
|
||||||
|
controllerapi "github.com/docker/buildx/controller/pb"
|
||||||
"github.com/docker/buildx/util/buildflags"
|
"github.com/docker/buildx/util/buildflags"
|
||||||
"github.com/docker/buildx/util/platformutil"
|
"github.com/docker/buildx/util/platformutil"
|
||||||
|
|
||||||
"github.com/docker/cli/cli/config"
|
"github.com/docker/cli/cli/config"
|
||||||
"github.com/docker/docker/builder/remotecontext/urlutil"
|
|
||||||
hcl "github.com/hashicorp/hcl/v2"
|
hcl "github.com/hashicorp/hcl/v2"
|
||||||
"github.com/moby/buildkit/client/llb"
|
"github.com/moby/buildkit/client/llb"
|
||||||
"github.com/moby/buildkit/session/auth/authprovider"
|
"github.com/moby/buildkit/session/auth/authprovider"
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
|
"github.com/zclconf/go-cty/cty"
|
||||||
|
"github.com/zclconf/go-cty/cty/convert"
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
httpPrefix = regexp.MustCompile(`^https?://`)
|
|
||||||
gitURLPathWithFragmentSuffix = regexp.MustCompile(`\.git(?:#.+)?$`)
|
|
||||||
|
|
||||||
validTargetNameChars = `[a-zA-Z0-9_-]+`
|
validTargetNameChars = `[a-zA-Z0-9_-]+`
|
||||||
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
|
targetNamePattern = regexp.MustCompile(`^` + validTargetNameChars + `$`)
|
||||||
)
|
)
|
||||||
@@ -44,17 +44,18 @@ type Override struct {
 }
 
 func defaultFilenames() []string {
-	return []string{
-		"docker-compose.yml", // support app
-		"docker-compose.yaml", // support app
+	names := []string{}
+	names = append(names, composecli.DefaultFileNames...)
+	names = append(names, []string{
 		"docker-bake.json",
 		"docker-bake.override.json",
 		"docker-bake.hcl",
 		"docker-bake.override.hcl",
-	}
+	}...)
+	return names
 }
 
-func ReadLocalFiles(names []string) ([]File, error) {
+func ReadLocalFiles(names []string, stdin io.Reader) ([]File, error) {
 	isDefault := false
 	if len(names) == 0 {
 		isDefault = true
@@ -66,7 +67,7 @@ func ReadLocalFiles(names []string) ([]File, error) {
 		var dt []byte
 		var err error
 		if n == "-" {
-			dt, err = io.ReadAll(os.Stdin)
+			dt, err = io.ReadAll(stdin)
 			if err != nil {
 				return nil, err
 			}
@@ -84,7 +85,22 @@ func ReadLocalFiles(names []string) ([]File, error) {
 	return out, nil
 }
 
-func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, []*Group, error) {
+func ListTargets(files []File) ([]string, error) {
+	c, err := ParseFiles(files, nil)
+	if err != nil {
+		return nil, err
+	}
+	var targets []string
+	for _, g := range c.Groups {
+		targets = append(targets, g.Name)
+	}
+	for _, t := range c.Targets {
+		targets = append(targets, t.Name)
+	}
+	return dedupSlice(targets), nil
+}
+
+func ReadTargets(ctx context.Context, files []File, targets, overrides []string, defaults map[string]string) (map[string]*Target, map[string]*Group, error) {
 	c, err := ParseFiles(files, defaults)
 	if err != nil {
 		return nil, nil, err
@@ -99,42 +115,39 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
 		return nil, nil, err
 	}
 	m := map[string]*Target{}
-	for _, n := range targets {
-		for _, n := range c.ResolveGroup(n) {
-			t, err := c.ResolveTarget(n, o)
+	n := map[string]*Group{}
+	for _, target := range targets {
+		ts, gs := c.ResolveGroup(target)
+		for _, tname := range ts {
+			t, err := c.ResolveTarget(tname, o)
 			if err != nil {
 				return nil, nil, err
 			}
 			if t != nil {
-				m[n] = t
+				m[tname] = t
+			}
+		}
+		for _, gname := range gs {
+			for _, group := range c.Groups {
+				if group.Name == gname {
+					n[gname] = group
+					break
+				}
 			}
 		}
 	}
 
-	var g []*Group
-	if len(targets) == 0 || (len(targets) == 1 && targets[0] == "default") {
-		for _, group := range c.Groups {
-			if group.Name != "default" {
-				continue
-			}
-			g = []*Group{{Targets: group.Targets}}
+	for _, target := range targets {
+		if target == "default" {
+			continue
 		}
-	} else {
-		var gt []string
-		for _, target := range targets {
-			isGroup := false
-			for _, group := range c.Groups {
-				if target == group.Name {
-					gt = append(gt, group.Targets...)
-					isGroup = true
-					break
-				}
-			}
-			if !isGroup {
-				gt = append(gt, target)
-			}
+		if _, ok := n["default"]; !ok {
+			n["default"] = &Group{Name: "default"}
 		}
-		g = []*Group{{Targets: dedupString(gt)}}
+		n["default"].Targets = append(n["default"].Targets, target)
+	}
+	if g, ok := n["default"]; ok {
+		g.Targets = dedupSlice(g.Targets)
 	}
 
 	for name, t := range m {
@@ -143,10 +156,10 @@ func ReadTargets(ctx context.Context, files []File, targets, overrides []string,
 		}
 	}
 
-	return m, g, nil
+	return m, n, nil
 }
 
-func dedupString(s []string) []string {
+func dedupSlice(s []string) []string {
 	if len(s) == 0 {
 		return s
 	}
@@ -161,21 +174,54 @@ func dedupString(s []string) []string {
 	return res
 }
 
+func dedupMap(ms ...map[string]string) map[string]string {
+	if len(ms) == 0 {
+		return nil
+	}
+	res := map[string]string{}
+	for _, m := range ms {
+		if len(m) == 0 {
+			continue
+		}
+		for k, v := range m {
+			if _, ok := res[k]; !ok {
+				res[k] = v
+			}
+		}
+	}
+	return res
+}
+
+func sliceToMap(env []string) (res map[string]string) {
+	res = make(map[string]string)
+	for _, s := range env {
+		kv := strings.SplitN(s, "=", 2)
+		key := kv[0]
+		switch {
+		case len(kv) == 1:
+			res[key] = ""
+		default:
+			res[key] = kv[1]
+		}
+	}
+	return
+}
+
 func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error) {
 	defer func() {
 		err = formatHCLError(err, files)
 	}()
 
 	var c Config
-	var fs []*hcl.File
+	var composeFiles []File
+	var hclFiles []*hcl.File
 	for _, f := range files {
-		cfg, isCompose, composeErr := ParseComposeFile(f.Data, f.Name)
+		isCompose, composeErr := validateComposeFile(f.Data, f.Name)
 		if isCompose {
 			if composeErr != nil {
 				return nil, composeErr
 			}
-			c = mergeConfig(c, *cfg)
-			c = dedupeConfig(c)
+			composeFiles = append(composeFiles, f)
 		}
 		if !isCompose {
 			hf, isHCL, err := ParseHCLFile(f.Data, f.Name)
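The two helpers added in this hunk have semantics worth noting: `dedupMap` gives precedence to *earlier* maps (the first value for a key wins), and `sliceToMap` splits `KEY=VAL` on the first `=` only, mapping a bare `KEY` to the empty string. A minimal standalone sketch mirroring those semantics (the `*Sketch` names are illustrative, not part of the buildx API):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceToMapSketch mirrors the diff's sliceToMap: "KEY=VAL" splits on the
// first '=' only, and a bare "KEY" maps to the empty string.
func sliceToMapSketch(env []string) map[string]string {
	res := make(map[string]string)
	for _, s := range env {
		kv := strings.SplitN(s, "=", 2)
		if len(kv) == 1 {
			res[kv[0]] = ""
		} else {
			res[kv[0]] = kv[1]
		}
	}
	return res
}

// dedupMapSketch mirrors the diff's dedupMap: earlier maps win on conflict.
func dedupMapSketch(ms ...map[string]string) map[string]string {
	res := map[string]string{}
	for _, m := range ms {
		for k, v := range m {
			if _, ok := res[k]; !ok {
				res[k] = v
			}
		}
	}
	return res
}

func main() {
	env := sliceToMapSketch([]string{"FOO=bar", "EMPTY", "EQ=a=b"})
	fmt.Println(env["FOO"], env["EMPTY"] == "", env["EQ"]) // bar true a=b
	merged := dedupMapSketch(
		map[string]string{"k": "first"},
		map[string]string{"k": "second"},
	)
	fmt.Println(merged["k"]) // first
}
```

The first-wins rule matters when these helpers merge override sources: values supplied earlier (e.g. explicit flags) are not clobbered by later defaults.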
@@ -183,36 +229,67 @@ func ParseFiles(files []File, defaults map[string]string) (_ *Config, err error)
 			if err != nil {
 				return nil, err
 			}
-			fs = append(fs, hf)
+			hclFiles = append(hclFiles, hf)
 			} else if composeErr != nil {
-				return nil, fmt.Errorf("failed to parse %s: parsing yaml: %v, parsing hcl: %w", f.Name, composeErr, err)
+				return nil, errors.Wrapf(err, "failed to parse %s: parsing yaml: %v, parsing hcl", f.Name, composeErr)
 			} else {
 				return nil, err
 			}
 		}
 	}
 
-	if len(fs) > 0 {
-		if err := hclparser.Parse(hcl.MergeFiles(fs), hclparser.Opt{
+	if len(composeFiles) > 0 {
+		cfg, cmperr := ParseComposeFiles(composeFiles)
+		if cmperr != nil {
+			return nil, errors.Wrap(cmperr, "failed to parse compose file")
+		}
+		c = mergeConfig(c, *cfg)
+		c = dedupeConfig(c)
+	}
+
+	if len(hclFiles) > 0 {
+		renamed, err := hclparser.Parse(hcl.MergeFiles(hclFiles), hclparser.Opt{
 			LookupVar: os.LookupEnv,
 			Vars: defaults,
 			ValidateLabel: validateTargetName,
-		}, &c); err.HasErrors() {
+		}, &c)
+		if err.HasErrors() {
 			return nil, err
 		}
 
+		for _, renamed := range renamed {
+			for oldName, newNames := range renamed {
+				newNames = dedupSlice(newNames)
+				if len(newNames) == 1 && oldName == newNames[0] {
+					continue
+				}
+				c.Groups = append(c.Groups, &Group{
+					Name: oldName,
+					Targets: newNames,
+				})
+			}
+		}
+		c = dedupeConfig(c)
 	}
 
 	return &c, nil
 }
 
 func dedupeConfig(c Config) Config {
 	c2 := c
+	c2.Groups = make([]*Group, 0, len(c2.Groups))
+	for _, g := range c.Groups {
+		g1 := *g
+		g1.Targets = dedupSlice(g1.Targets)
+		c2.Groups = append(c2.Groups, &g1)
+	}
 	c2.Targets = make([]*Target, 0, len(c2.Targets))
-	m := map[string]*Target{}
+	mt := map[string]*Target{}
 	for _, t := range c.Targets {
-		if t2, ok := m[t.Name]; ok {
+		if t2, ok := mt[t.Name]; ok {
 			t2.Merge(t)
 		} else {
-			m[t.Name] = t
+			mt[t.Name] = t
 			c2.Targets = append(c2.Targets, t)
 		}
 	}
@@ -223,22 +300,9 @@ func ParseFile(dt []byte, fn string) (*Config, error) {
 	return ParseFiles([]File{{Data: dt, Name: fn}}, nil)
 }
 
-func ParseComposeFile(dt []byte, fn string) (*Config, bool, error) {
-	fnl := strings.ToLower(fn)
-	if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
-		cfg, err := ParseCompose(dt)
-		return cfg, true, err
-	}
-	if strings.HasSuffix(fnl, ".json") || strings.HasSuffix(fnl, ".hcl") {
-		return nil, false, nil
-	}
-	cfg, err := ParseCompose(dt)
-	return cfg, err == nil, err
-}
-
 type Config struct {
-	Groups []*Group `json:"group" hcl:"group,block"`
-	Targets []*Target `json:"target" hcl:"target,block"`
+	Groups []*Group `json:"group" hcl:"group,block" cty:"group"`
+	Targets []*Target `json:"target" hcl:"target,block" cty:"target"`
 }
 
 func mergeConfig(c1, c2 Config) Config {
@@ -384,7 +448,7 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 			o := t[kk[1]]
 
 			switch keys[1] {
-			case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh":
+			case "output", "cache-to", "cache-from", "tags", "platform", "secrets", "ssh", "attest":
 				if len(parts) == 2 {
 					o.ArrValue = append(o.ArrValue, parts[1])
 				}
@@ -417,13 +481,19 @@ func (c Config) newOverrides(v []string) (map[string]map[string]Override, error)
 	return m, nil
 }
 
-func (c Config) ResolveGroup(name string) []string {
-	return dedupString(c.group(name, map[string][]string{}))
+func (c Config) ResolveGroup(name string) ([]string, []string) {
+	targets, groups := c.group(name, map[string]visit{})
+	return dedupSlice(targets), dedupSlice(groups)
 }
 
-func (c Config) group(name string, visited map[string][]string) []string {
-	if _, ok := visited[name]; ok {
-		return visited[name]
+type visit struct {
+	target []string
+	group []string
+}
+
+func (c Config) group(name string, visited map[string]visit) ([]string, []string) {
+	if v, ok := visited[name]; ok {
+		return v.target, v.group
 	}
 	var g *Group
 	for _, group := range c.Groups {
@@ -433,20 +503,24 @@ func (c Config) group(name string, visited map[string][]string) []string {
 		}
 	}
 	if g == nil {
-		return []string{name}
+		return []string{name}, nil
 	}
-	visited[name] = []string{}
+	visited[name] = visit{}
 	targets := make([]string, 0, len(g.Targets))
+	groups := []string{name}
 	for _, t := range g.Targets {
-		tgroup := c.group(t, visited)
-		if len(tgroup) > 0 {
-			targets = append(targets, tgroup...)
+		ttarget, tgroup := c.group(t, visited)
+		if len(ttarget) > 0 {
+			targets = append(targets, ttarget...)
 		} else {
 			targets = append(targets, t)
 		}
+		if len(tgroup) > 0 {
+			groups = append(groups, tgroup...)
+		}
 	}
-	visited[name] = targets
-	return targets
+	visited[name] = visit{target: targets, group: groups}
+	return targets, groups
 }
 
 func (c Config) ResolveTarget(name string, overrides map[string]map[string]Override) (*Target, error) {
@@ -504,42 +578,49 @@ func (c Config) target(name string, visited map[string]*Target, overrides map[st
 }
 
 type Group struct {
-	Name string `json:"-" hcl:"name,label"`
-	Targets []string `json:"targets" hcl:"targets"`
+	Name string `json:"-" hcl:"name,label" cty:"name"`
+	Targets []string `json:"targets" hcl:"targets" cty:"targets"`
 	// Target // TODO?
 }
 
 type Target struct {
-	Name string `json:"-" hcl:"name,label"`
+	Name string `json:"-" hcl:"name,label" cty:"name"`
 
 	// Inherits is the only field that cannot be overridden with --set
-	Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional"`
+	Attest []string `json:"attest,omitempty" hcl:"attest,optional" cty:"attest"`
+	Inherits []string `json:"inherits,omitempty" hcl:"inherits,optional" cty:"inherits"`
 
-	Context *string `json:"context,omitempty" hcl:"context,optional"`
-	Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional"`
-	Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional"`
-	DockerfileInline *string `json:"dockerfile-inline,omitempty" hcl:"dockerfile-inline,optional"`
-	Args map[string]string `json:"args,omitempty" hcl:"args,optional"`
-	Labels map[string]string `json:"labels,omitempty" hcl:"labels,optional"`
-	Tags []string `json:"tags,omitempty" hcl:"tags,optional"`
-	CacheFrom []string `json:"cache-from,omitempty" hcl:"cache-from,optional"`
-	CacheTo []string `json:"cache-to,omitempty" hcl:"cache-to,optional"`
-	Target *string `json:"target,omitempty" hcl:"target,optional"`
-	Secrets []string `json:"secret,omitempty" hcl:"secret,optional"`
-	SSH []string `json:"ssh,omitempty" hcl:"ssh,optional"`
-	Platforms []string `json:"platforms,omitempty" hcl:"platforms,optional"`
-	Outputs []string `json:"output,omitempty" hcl:"output,optional"`
-	Pull *bool `json:"pull,omitempty" hcl:"pull,optional"`
-	NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional"`
-	NetworkMode *string `json:"-" hcl:"-"`
-	NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional"`
-	// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/guides/bake/file-definition.md.
+	Context *string `json:"context,omitempty" hcl:"context,optional" cty:"context"`
+	Contexts map[string]string `json:"contexts,omitempty" hcl:"contexts,optional" cty:"contexts"`
+	Dockerfile *string `json:"dockerfile,omitempty" hcl:"dockerfile,optional" cty:"dockerfile"`
+	DockerfileInline *string `json:"dockerfile-inline,omitempty" hcl:"dockerfile-inline,optional" cty:"dockerfile-inline"`
+	Args map[string]*string `json:"args,omitempty" hcl:"args,optional" cty:"args"`
+	Labels map[string]*string `json:"labels,omitempty" hcl:"labels,optional" cty:"labels"`
+	Tags []string `json:"tags,omitempty" hcl:"tags,optional" cty:"tags"`
+	CacheFrom []string `json:"cache-from,omitempty" hcl:"cache-from,optional" cty:"cache-from"`
+	CacheTo []string `json:"cache-to,omitempty" hcl:"cache-to,optional" cty:"cache-to"`
+	Target *string `json:"target,omitempty" hcl:"target,optional" cty:"target"`
+	Secrets []string `json:"secret,omitempty" hcl:"secret,optional" cty:"secret"`
+	SSH []string `json:"ssh,omitempty" hcl:"ssh,optional" cty:"ssh"`
+	Platforms []string `json:"platforms,omitempty" hcl:"platforms,optional" cty:"platforms"`
+	Outputs []string `json:"output,omitempty" hcl:"output,optional" cty:"output"`
+	Pull *bool `json:"pull,omitempty" hcl:"pull,optional" cty:"pull"`
+	NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional" cty:"no-cache"`
+	NetworkMode *string `json:"-" hcl:"-" cty:"-"`
+	NoCacheFilter []string `json:"no-cache-filter,omitempty" hcl:"no-cache-filter,optional" cty:"no-cache-filter"`
+	// IMPORTANT: if you add more fields here, do not forget to update newOverrides and docs/bake-reference.md.
 
 	// linked is a private field to mark a target used as a linked one
 	linked bool
 }
 
+var _ hclparser.WithEvalContexts = &Target{}
+var _ hclparser.WithGetName = &Target{}
+var _ hclparser.WithEvalContexts = &Group{}
+var _ hclparser.WithGetName = &Group{}
+
 func (t *Target) normalize() {
+	t.Attest = removeAttestDupes(t.Attest)
 	t.Tags = removeDupes(t.Tags)
 	t.Secrets = removeDupes(t.Secrets)
 	t.SSH = removeDupes(t.SSH)
@@ -570,8 +651,11 @@ func (t *Target) Merge(t2 *Target) {
 		t.DockerfileInline = t2.DockerfileInline
 	}
 	for k, v := range t2.Args {
+		if v == nil {
+			continue
+		}
 		if t.Args == nil {
-			t.Args = map[string]string{}
+			t.Args = map[string]*string{}
 		}
 		t.Args[k] = v
 	}
@@ -582,8 +666,11 @@ func (t *Target) Merge(t2 *Target) {
 		t.Contexts[k] = v
 	}
 	for k, v := range t2.Labels {
+		if v == nil {
+			continue
+		}
 		if t.Labels == nil {
-			t.Labels = map[string]string{}
+			t.Labels = map[string]*string{}
 		}
 		t.Labels[k] = v
 	}
@@ -593,6 +680,10 @@ func (t *Target) Merge(t2 *Target) {
 	if t2.Target != nil {
 		t.Target = t2.Target
 	}
+	if t2.Attest != nil { // merge
+		t.Attest = append(t.Attest, t2.Attest...)
+		t.Attest = removeAttestDupes(t.Attest)
+	}
 	if t2.Secrets != nil { // merge
 		t.Secrets = append(t.Secrets, t2.Secrets...)
 	}
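The hunks above change `Args` and `Labels` from `map[string]string` to `map[string]*string`, so a nil pointer can now represent a build arg that is declared but unset, and `Merge` skips nils instead of letting them clobber a concrete value from the base target. A minimal sketch of that merge rule (`mergeArgsSketch` and `strptr` are illustrative names, not buildx API):

```go
package main

import "fmt"

// mergeArgsSketch mirrors the merge semantics the diff gives Target.Args:
// a nil value (declared but unset) in the override does not replace a
// concrete value already set on the base target.
func mergeArgsSketch(base, override map[string]*string) map[string]*string {
	for k, v := range override {
		if v == nil {
			continue // unset in the override: keep the base value
		}
		if base == nil {
			base = map[string]*string{}
		}
		base[k] = v
	}
	return base
}

func strptr(s string) *string { return &s }

func main() {
	base := map[string]*string{"GO_VERSION": strptr("1.20")}
	merged := mergeArgsSketch(base, map[string]*string{
		"GO_VERSION": nil,            // unset: must not clobber "1.20"
		"DEBUG":      strptr("true"), // concrete: added
	})
	fmt.Println(*merged["GO_VERSION"], *merged["DEBUG"]) // 1.20 true
}
```

This is also why `toBuildOpt` later in the diff flattens the pointer maps back to `map[string]string`, dropping nil entries before handing them to the builder.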
@@ -640,9 +731,9 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 				return errors.Errorf("args require name")
 			}
 			if t.Args == nil {
-				t.Args = map[string]string{}
+				t.Args = map[string]*string{}
 			}
-			t.Args[keys[1]] = value
+			t.Args[keys[1]] = &value
 		case "contexts":
 			if len(keys) != 2 {
 				return errors.Errorf("contexts require name")
@@ -656,9 +747,9 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 				return errors.Errorf("labels require name")
 			}
 			if t.Labels == nil {
-				t.Labels = map[string]string{}
+				t.Labels = map[string]*string{}
 			}
-			t.Labels[keys[1]] = value
+			t.Labels[keys[1]] = &value
 		case "tags":
 			t.Tags = o.ArrValue
 		case "cache-from":
@@ -675,6 +766,8 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 			t.Platforms = o.ArrValue
 		case "output":
 			t.Outputs = o.ArrValue
+		case "attest":
+			t.Attest = append(t.Attest, o.ArrValue...)
 		case "no-cache":
 			noCache, err := strconv.ParseBool(value)
 			if err != nil {
@@ -710,6 +803,114 @@ func (t *Target) AddOverrides(overrides map[string]Override) error {
 	return nil
 }
 
+func (g *Group) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
+	content, _, err := block.Body.PartialContent(&hcl.BodySchema{
+		Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
+	})
+	if err != nil {
+		return nil, err
+	}
+	if _, ok := content.Attributes["matrix"]; ok {
+		return nil, errors.Errorf("matrix is not supported for groups")
+	}
+	return []*hcl.EvalContext{ectx}, nil
+}
+
+func (t *Target) GetEvalContexts(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error) {
+	content, _, err := block.Body.PartialContent(&hcl.BodySchema{
+		Attributes: []hcl.AttributeSchema{{Name: "matrix"}},
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	attr, ok := content.Attributes["matrix"]
+	if !ok {
+		return []*hcl.EvalContext{ectx}, nil
+	}
+	if diags := loadDeps(attr.Expr); diags.HasErrors() {
+		return nil, diags
+	}
+	value, err := attr.Expr.Value(ectx)
+	if err != nil {
+		return nil, err
+	}
+
+	if !value.Type().IsMapType() && !value.Type().IsObjectType() {
+		return nil, errors.Errorf("matrix must be a map")
+	}
+	matrix := value.AsValueMap()
+
+	ectxs := []*hcl.EvalContext{ectx}
+	for k, expr := range matrix {
+		if !expr.CanIterateElements() {
+			return nil, errors.Errorf("matrix values must be a list")
+		}
+
+		ectxs2 := []*hcl.EvalContext{}
+		for _, v := range expr.AsValueSlice() {
+			for _, e := range ectxs {
+				e2 := ectx.NewChild()
+				e2.Variables = make(map[string]cty.Value)
+				for k, v := range e.Variables {
+					e2.Variables[k] = v
+				}
+				e2.Variables[k] = v
+				ectxs2 = append(ectxs2, e2)
+			}
+		}
+		ectxs = ectxs2
+	}
+	return ectxs, nil
+}
+
+func (g *Group) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
+	content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
+		Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
+	})
+	if diags != nil {
+		return "", diags
+	}
+
+	if _, ok := content.Attributes["name"]; ok {
+		return "", errors.Errorf("name is not supported for groups")
+	}
+	if _, ok := content.Attributes["matrix"]; ok {
+		return "", errors.Errorf("matrix is not supported for groups")
+	}
+	return block.Labels[0], nil
+}
+
+func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error) {
+	content, _, diags := block.Body.PartialContent(&hcl.BodySchema{
+		Attributes: []hcl.AttributeSchema{{Name: "name"}, {Name: "matrix"}},
+	})
+	if diags != nil {
+		return "", diags
+	}
+
+	attr, ok := content.Attributes["name"]
+	if !ok {
+		return block.Labels[0], nil
+	}
+	if _, ok := content.Attributes["matrix"]; !ok {
+		return "", errors.Errorf("name requires matrix")
+	}
+	if diags := loadDeps(attr.Expr); diags.HasErrors() {
+		return "", diags
+	}
+	value, diags := attr.Expr.Value(ectx)
+	if diags != nil {
+		return "", diags
+	}
+
+	value, err := convert.Convert(value, cty.String)
+	if err != nil {
+		return "", err
+	}
+	return value.AsString(), nil
+}
+
 func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
 	m2 := make(map[string]build.Options, len(m))
 	for k, v := range m {
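The core of `Target.GetEvalContexts` above is a cartesian-product expansion: starting from a single evaluation context, each matrix key multiplies the set of contexts by its list of values, so a target with `platform` x `mode` yields one context per combination. A self-contained sketch of that expansion using plain string maps instead of `hcl.EvalContext`/`cty.Value` (`expandMatrixSketch` is an illustrative name; the real code ranges over the map directly, while this sketch sorts keys only to make output reproducible):

```go
package main

import (
	"fmt"
	"sort"
)

// expandMatrixSketch mirrors the shape of Target.GetEvalContexts: each
// matrix key crosses the current set of binding maps with its values,
// producing the cartesian product of all keys.
func expandMatrixSketch(matrix map[string][]string) []map[string]string {
	ctxs := []map[string]string{{}} // start from one empty binding set
	keys := make([]string, 0, len(matrix))
	for k := range matrix {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable order for reproducible output (sketch only)
	for _, k := range keys {
		next := []map[string]string{}
		for _, v := range matrix[k] {
			for _, c := range ctxs {
				c2 := map[string]string{}
				for kk, vv := range c { // copy existing bindings
					c2[kk] = vv
				}
				c2[k] = v
				next = append(next, c2)
			}
		}
		ctxs = next
	}
	return ctxs
}

func main() {
	ctxs := expandMatrixSketch(map[string][]string{
		"platform": {"linux/amd64", "linux/arm64"},
		"mode":     {"debug", "release"},
	})
	fmt.Println(len(ctxs)) // 4: every platform x mode combination
}
```

This is also why `Target.GetName` requires a `name` attribute once `matrix` is set: each expanded context needs a distinct target name.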
@@ -734,7 +935,7 @@ func updateContext(t *build.Inputs, inp *Input) {
 		if strings.HasPrefix(v.Path, "cwd://") || strings.HasPrefix(v.Path, "target:") || strings.HasPrefix(v.Path, "docker-image:") {
 			continue
 		}
-		if IsRemoteURL(v.Path) {
+		if build.IsRemoteURL(v.Path) {
 			continue
 		}
 		st := llb.Scratch().File(llb.Copy(*inp.State, v.Path, "/"), llb.WithCustomNamef("set context %s to %s", k, v.Path))
@@ -748,10 +949,15 @@ func updateContext(t *build.Inputs, inp *Input) {
 	if strings.HasPrefix(t.ContextPath, "cwd://") {
 		return
 	}
-	if IsRemoteURL(t.ContextPath) {
+	if build.IsRemoteURL(t.ContextPath) {
 		return
 	}
-	st := llb.Scratch().File(llb.Copy(*inp.State, t.ContextPath, "/"), llb.WithCustomNamef("set context to %s", t.ContextPath))
+	st := llb.Scratch().File(
+		llb.Copy(*inp.State, t.ContextPath, "/", &llb.CopyInfo{
+			CopyDirContentsOnly: true,
+		}),
+		llb.WithCustomNamef("set context to %s", t.ContextPath),
+	)
 	t.ContextState = &st
 }
 
@@ -784,7 +990,7 @@ func validateContextsEntitlements(t build.Inputs, inp *Input) error {
 }
 
 func checkPath(p string) error {
-	if IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
+	if build.IsRemoteURL(p) || strings.HasPrefix(p, "target:") || strings.HasPrefix(p, "docker-image:") {
 		return nil
 	}
 	p, err := filepath.EvalSymlinks(p)
@@ -794,6 +1000,10 @@ func checkPath(p string) error {
 		}
 		return err
 	}
+	p, err = filepath.Abs(p)
+	if err != nil {
+		return err
+	}
 	wd, err := os.Getwd()
 	if err != nil {
 		return err
@@ -802,7 +1012,8 @@ func checkPath(p string) error {
 	if err != nil {
 		return err
 	}
-	if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
+	parts := strings.Split(rel, string(os.PathSeparator))
+	if parts[0] == ".." {
 		return errors.Errorf("path %s is outside of the working directory, please set BAKE_ALLOW_REMOTE_FS_ACCESS=1", p)
 	}
 	return nil
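The `checkPath` change above is a containment fix: after `filepath.Rel`, a path escapes the working directory exactly when the first component of the relative path is `..`. The old `strings.HasPrefix(rel, ".."+sep)` test missed the case where `rel` is exactly `".."` with no trailing separator. A sketch of the corrected test in isolation (`outsideSketch` is an illustrative name):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// outsideSketch mirrors the containment test in checkPath: a path is
// outside wd exactly when the first component of the relative path is "..".
func outsideSketch(wd, p string) (bool, error) {
	rel, err := filepath.Rel(wd, p)
	if err != nil {
		return false, err
	}
	parts := strings.Split(rel, string(filepath.Separator))
	return parts[0] == "..", nil
}

func main() {
	out, _ := outsideSketch("/work", "/work/sub/file")
	fmt.Println(out) // false: stays inside
	out, _ = outsideSketch("/work", "/etc/passwd")
	fmt.Println(out) // true: rel is "../etc/passwd"
	// rel here is exactly "..", which a HasPrefix("../") check would miss:
	out, _ = outsideSketch("/work", "/")
	fmt.Println(out) // true
}
```

The added `filepath.Abs` call earlier in the hunk makes the `Rel` computation meaningful even when the input path was relative to begin with.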
@@ -820,7 +1031,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if t.Context != nil {
 		contextPath = *t.Context
 	}
-	if !strings.HasPrefix(contextPath, "cwd://") && !IsRemoteURL(contextPath) {
+	if !strings.HasPrefix(contextPath, "cwd://") && !build.IsRemoteURL(contextPath) {
 		contextPath = path.Clean(contextPath)
 	}
 	dockerfilePath := "Dockerfile"
@@ -828,23 +1039,6 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		dockerfilePath = *t.Dockerfile
 	}
 
-	if !isRemoteResource(contextPath) && !path.IsAbs(dockerfilePath) {
-		dockerfilePath = path.Join(contextPath, dockerfilePath)
-	}
-
-	noCache := false
-	if t.NoCache != nil {
-		noCache = *t.NoCache
-	}
-	pull := false
-	if t.Pull != nil {
-		pull = *t.Pull
-	}
-	networkMode := ""
-	if t.NetworkMode != nil {
-		networkMode = *t.NetworkMode
-	}
-
 	bi := build.Inputs{
 		ContextPath: contextPath,
 		DockerfilePath: dockerfilePath,
@@ -854,6 +1048,9 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 		bi.DockerfileInline = *t.DockerfileInline
 	}
 	updateContext(&bi, inp)
+	if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
+		bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
+	}
 	if strings.HasPrefix(bi.ContextPath, "cwd://") {
 		bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
 	}
@@ -869,11 +1066,40 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 
 	t.Context = &bi.ContextPath
 
+	args := map[string]string{}
+	for k, v := range t.Args {
+		if v == nil {
+			continue
+		}
+		args[k] = *v
+	}
+
+	labels := map[string]string{}
+	for k, v := range t.Labels {
+		if v == nil {
+			continue
+		}
+		labels[k] = *v
+	}
+
+	noCache := false
+	if t.NoCache != nil {
+		noCache = *t.NoCache
+	}
+	pull := false
+	if t.Pull != nil {
+		pull = *t.Pull
+	}
+	networkMode := ""
+	if t.NetworkMode != nil {
+		networkMode = *t.NetworkMode
+	}
+
 	bo := &build.Options{
 		Inputs: bi,
 		Tags: t.Tags,
-		BuildArgs: t.Args,
-		Labels: t.Labels,
+		BuildArgs: args,
+		Labels: labels,
 		NoCache: noCache,
 		NoCacheFilter: t.NoCacheFilter,
 		Pull: pull,
@@ -894,17 +1120,24 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if err != nil {
 		return nil, err
 	}
-	bo.Session = append(bo.Session, secrets)
-
-	sshSpecs := t.SSH
-	if len(sshSpecs) == 0 && buildflags.IsGitSSH(contextPath) {
-		sshSpecs = []string{"default"}
-	}
-	ssh, err := buildflags.ParseSSHSpecs(sshSpecs)
+	secretAttachment, err := controllerapi.CreateSecrets(secrets)
 	if err != nil {
 		return nil, err
 	}
-	bo.Session = append(bo.Session, ssh)
+	bo.Session = append(bo.Session, secretAttachment)
+
+	sshSpecs, err := buildflags.ParseSSHSpecs(t.SSH)
+	if err != nil {
+		return nil, err
+	}
+	if len(sshSpecs) == 0 && (buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL))) {
+		sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
+	}
+	sshAttachment, err := controllerapi.CreateSSH(sshSpecs)
+	if err != nil {
+		return nil, err
+	}
+	bo.Session = append(bo.Session, sshAttachment)

 	if t.Target != nil {
 		bo.Target = *t.Target
@@ -914,19 +1147,33 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 	if err != nil {
 		return nil, err
 	}
-	bo.CacheFrom = cacheImports
+	bo.CacheFrom = controllerapi.CreateCaches(cacheImports)

 	cacheExports, err := buildflags.ParseCacheEntry(t.CacheTo)
 	if err != nil {
 		return nil, err
 	}
-	bo.CacheTo = cacheExports
+	bo.CacheTo = controllerapi.CreateCaches(cacheExports)

-	outputs, err := buildflags.ParseOutputs(t.Outputs)
+	outputs, err := buildflags.ParseExports(t.Outputs)
+	if err != nil {
+		return nil, err
+	}
+	bo.Exports, err = controllerapi.CreateExports(outputs)
+	if err != nil {
+		return nil, err
+	}
+
+	attests, err := buildflags.ParseAttests(t.Attest)
+	if err != nil {
+		return nil, err
+	}
+	bo.Attests = controllerapi.CreateAttestations(attests)
+
+	bo.SourcePolicy, err = build.ReadSourcePolicy()
 	if err != nil {
 		return nil, err
 	}
-	bo.Exports = outputs

 	return bo, nil
 }
@@ -952,8 +1199,24 @@ func removeDupes(s []string) []string {
 	return s[:i]
 }

-func isRemoteResource(str string) bool {
-	return urlutil.IsGitURL(str) || urlutil.IsURL(str)
+func removeAttestDupes(s []string) []string {
+	res := []string{}
+	m := map[string]int{}
+	for _, v := range s {
+		att, err := buildflags.ParseAttest(v)
+		if err != nil {
+			res = append(res, v)
+			continue
+		}
+
+		if i, ok := m[att.Type]; ok {
+			res[i] = v
+		} else {
+			m[att.Type] = len(res)
+			res = append(res, v)
+		}
+	}
+	return res
 }

 func parseOutputType(str string) string {
@@ -4,14 +4,13 @@ import (
 	"context"
 	"os"
 	"sort"
+	"strings"
 	"testing"

 	"github.com/stretchr/testify/require"
 )

 func TestReadTargets(t *testing.T) {
-	t.Parallel()
-
 	fp := File{
 		Name: "config.hcl",
 		Data: []byte(`
@@ -35,21 +34,23 @@ target "webapp" {
 	ctx := context.TODO()

 	t.Run("NoOverrides", func(t *testing.T) {
+		t.Parallel()
 		m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, nil, nil)
 		require.NoError(t, err)
 		require.Equal(t, 1, len(m))

 		require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
 		require.Equal(t, ".", *m["webapp"].Context)
-		require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
+		require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
 		require.Equal(t, true, *m["webapp"].NoCache)
 		require.Nil(t, m["webapp"].Pull)

 		require.Equal(t, 1, len(g))
-		require.Equal(t, []string{"webapp"}, g[0].Targets)
+		require.Equal(t, []string{"webapp"}, g["default"].Targets)
 	})

 	t.Run("InvalidTargetOverrides", func(t *testing.T) {
+		t.Parallel()
 		_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"nosuchtarget.context=foo"}, nil)
 		require.NotNil(t, err)
 		require.Equal(t, err.Error(), "could not find any target matching 'nosuchtarget'")
@@ -57,8 +58,7 @@ target "webapp" {

 	t.Run("ArgsOverrides", func(t *testing.T) {
 		t.Run("leaf", func(t *testing.T) {
-			os.Setenv("VAR_FROMENV"+t.Name(), "fromEnv")
-			defer os.Unsetenv("VAR_FROM_ENV" + t.Name())
+			t.Setenv("VAR_FROMENV"+t.Name(), "fromEnv")

 			m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
 				"webapp.args.VAR_UNSET",
@@ -79,33 +79,35 @@ target "webapp" {
 			_, isSet = m["webapp"].Args["VAR_EMPTY"]
 			require.True(t, isSet, m["webapp"].Args["VAR_EMPTY"])

-			require.Equal(t, m["webapp"].Args["VAR_SET"], "bananas")
+			require.Equal(t, ptrstr("bananas"), m["webapp"].Args["VAR_SET"])

-			require.Equal(t, m["webapp"].Args["VAR_FROMENV"+t.Name()], "fromEnv")
+			require.Equal(t, ptrstr("fromEnv"), m["webapp"].Args["VAR_FROMENV"+t.Name()])

-			require.Equal(t, m["webapp"].Args["VAR_BOTH"], "webapp")
-			require.Equal(t, m["webapp"].Args["VAR_INHERITED"], "override")
+			require.Equal(t, ptrstr("webapp"), m["webapp"].Args["VAR_BOTH"])
+			require.Equal(t, ptrstr("override"), m["webapp"].Args["VAR_INHERITED"])

 			require.Equal(t, 1, len(g))
-			require.Equal(t, []string{"webapp"}, g[0].Targets)
+			require.Equal(t, []string{"webapp"}, g["default"].Targets)
 		})

 		// building leaf but overriding parent fields
 		t.Run("parent", func(t *testing.T) {
+			t.Parallel()
 			m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{
 				"webDEP.args.VAR_INHERITED=override",
 				"webDEP.args.VAR_BOTH=override",
 			}, nil)

 			require.NoError(t, err)
-			require.Equal(t, m["webapp"].Args["VAR_INHERITED"], "override")
-			require.Equal(t, m["webapp"].Args["VAR_BOTH"], "webapp")
+			require.Equal(t, ptrstr("override"), m["webapp"].Args["VAR_INHERITED"])
+			require.Equal(t, ptrstr("webapp"), m["webapp"].Args["VAR_BOTH"])
 			require.Equal(t, 1, len(g))
-			require.Equal(t, []string{"webapp"}, g[0].Targets)
+			require.Equal(t, []string{"webapp"}, g["default"].Targets)
 		})
 	})

 	t.Run("ContextOverride", func(t *testing.T) {
+		t.Parallel()
 		_, _, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.context"}, nil)
 		require.NotNil(t, err)

@@ -113,44 +115,47 @@ target "webapp" {
 		require.NoError(t, err)
 		require.Equal(t, "foo", *m["webapp"].Context)
 		require.Equal(t, 1, len(g))
-		require.Equal(t, []string{"webapp"}, g[0].Targets)
+		require.Equal(t, []string{"webapp"}, g["default"].Targets)
 	})

 	t.Run("NoCacheOverride", func(t *testing.T) {
+		t.Parallel()
 		m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"}, nil)
 		require.NoError(t, err)
 		require.Equal(t, false, *m["webapp"].NoCache)
 		require.Equal(t, 1, len(g))
-		require.Equal(t, []string{"webapp"}, g[0].Targets)
+		require.Equal(t, []string{"webapp"}, g["default"].Targets)
 	})

 	t.Run("PullOverride", func(t *testing.T) {
+		t.Parallel()
 		m, g, err := ReadTargets(ctx, []File{fp}, []string{"webapp"}, []string{"webapp.pull=false"}, nil)
 		require.NoError(t, err)
 		require.Equal(t, false, *m["webapp"].Pull)
 		require.Equal(t, 1, len(g))
-		require.Equal(t, []string{"webapp"}, g[0].Targets)
+		require.Equal(t, []string{"webapp"}, g["default"].Targets)
 	})

 	t.Run("PatternOverride", func(t *testing.T) {
+		t.Parallel()
 		// same check for two cases
-		multiTargetCheck := func(t *testing.T, m map[string]*Target, g []*Group, err error) {
+		multiTargetCheck := func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
 			require.NoError(t, err)
 			require.Equal(t, 2, len(m))
 			require.Equal(t, "foo", *m["webapp"].Dockerfile)
-			require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
+			require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
 			require.Equal(t, "foo", *m["webDEP"].Dockerfile)
-			require.Equal(t, "webDEP", m["webDEP"].Args["VAR_INHERITED"])
+			require.Equal(t, ptrstr("webDEP"), m["webDEP"].Args["VAR_INHERITED"])
 			require.Equal(t, 1, len(g))
-			sort.Strings(g[0].Targets)
-			require.Equal(t, []string{"webDEP", "webapp"}, g[0].Targets)
+			sort.Strings(g["default"].Targets)
+			require.Equal(t, []string{"webDEP", "webapp"}, g["default"].Targets)
 		}

 		cases := []struct {
 			name      string
 			targets   []string
 			overrides []string
-			check     func(*testing.T, map[string]*Target, []*Group, error)
+			check     func(*testing.T, map[string]*Target, map[string]*Group, error)
 		}{
 			{
 				name:      "multi target single pattern",
@@ -168,20 +173,20 @@ target "webapp" {
 				name:      "single target",
 				targets:   []string{"webapp"},
 				overrides: []string{"web*.dockerfile=foo"},
-				check: func(t *testing.T, m map[string]*Target, g []*Group, err error) {
+				check: func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
 					require.NoError(t, err)
 					require.Equal(t, 1, len(m))
 					require.Equal(t, "foo", *m["webapp"].Dockerfile)
-					require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
+					require.Equal(t, ptrstr("webDEP"), m["webapp"].Args["VAR_INHERITED"])
 					require.Equal(t, 1, len(g))
-					require.Equal(t, []string{"webapp"}, g[0].Targets)
+					require.Equal(t, []string{"webapp"}, g["default"].Targets)
 				},
 			},
 			{
 				name:      "nomatch",
 				targets:   []string{"webapp"},
 				overrides: []string{"nomatch*.dockerfile=foo"},
-				check: func(t *testing.T, m map[string]*Target, g []*Group, err error) {
+				check: func(t *testing.T, m map[string]*Target, g map[string]*Group, err error) {
 					// NOTE: I am unsure whether failing to match should always error out
 					// instead of simply skipping that override.
 					// Let's enforce the error and we can relax it later if users complain.
@@ -299,12 +304,12 @@ services:
 	require.True(t, ok)
 	require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
 	require.Equal(t, ".", *m["webapp"].Context)
-	require.Equal(t, "1", m["webapp"].Args["buildno"])
-	require.Equal(t, "12", m["webapp"].Args["buildno2"])
+	require.Equal(t, ptrstr("1"), m["webapp"].Args["buildno"])
+	require.Equal(t, ptrstr("12"), m["webapp"].Args["buildno2"])

 	require.Equal(t, 1, len(g))
-	sort.Strings(g[0].Targets)
-	require.Equal(t, []string{"db", "newservice", "webapp"}, g[0].Targets)
+	sort.Strings(g["default"].Targets)
+	require.Equal(t, []string{"db", "newservice", "webapp"}, g["default"].Targets)
 }

 func TestReadTargetsWithDotCompose(t *testing.T) {
@@ -343,7 +348,7 @@ services:
 	_, ok := m["web_app"]
 	require.True(t, ok)
 	require.Equal(t, "Dockerfile.webapp", *m["web_app"].Dockerfile)
-	require.Equal(t, "1", m["web_app"].Args["buildno"])
+	require.Equal(t, ptrstr("1"), m["web_app"].Args["buildno"])

 	m, _, err = ReadTargets(ctx, []File{fp2}, []string{"web_app"}, nil, nil)
 	require.NoError(t, err)
@@ -351,7 +356,7 @@ services:
 	_, ok = m["web_app"]
 	require.True(t, ok)
 	require.Equal(t, "Dockerfile", *m["web_app"].Dockerfile)
-	require.Equal(t, "12", m["web_app"].Args["buildno2"])
+	require.Equal(t, ptrstr("12"), m["web_app"].Args["buildno2"])

 	m, g, err := ReadTargets(ctx, []File{fp, fp2}, []string{"default"}, nil, nil)
 	require.NoError(t, err)
@@ -360,12 +365,12 @@ services:
 	require.True(t, ok)
 	require.Equal(t, "Dockerfile.webapp", *m["web_app"].Dockerfile)
 	require.Equal(t, ".", *m["web_app"].Context)
-	require.Equal(t, "1", m["web_app"].Args["buildno"])
-	require.Equal(t, "12", m["web_app"].Args["buildno2"])
+	require.Equal(t, ptrstr("1"), m["web_app"].Args["buildno"])
+	require.Equal(t, ptrstr("12"), m["web_app"].Args["buildno2"])

 	require.Equal(t, 1, len(g))
-	sort.Strings(g[0].Targets)
-	require.Equal(t, []string{"web_app"}, g[0].Targets)
+	sort.Strings(g["default"].Targets)
+	require.Equal(t, []string{"web_app"}, g["default"].Targets)
 }

 func TestHCLCwdPrefix(t *testing.T) {
@@ -392,7 +397,7 @@ func TestHCLCwdPrefix(t *testing.T) {
 	require.Equal(t, "foo", *m["app"].Context)

 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"app"}, g[0].Targets)
+	require.Equal(t, []string{"app"}, g["default"].Targets)
 }

 func TestOverrideMerge(t *testing.T) {
@@ -530,7 +535,8 @@ func TestReadEmptyTargets(t *testing.T) {
 		Name: "docker-compose.yml",
 		Data: []byte(`
 services:
-  app2: {}
+  app2:
+    build: {}
 `),
 	}

@@ -695,7 +701,7 @@ target "image" {
 	m, g, err := ReadTargets(ctx, []File{f}, []string{"image"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, []string{"image"}, g["default"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "test", *m["image"].Dockerfile)
 }
@@ -716,8 +722,9 @@ target "image" {

 	m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"image"}, g["foo"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "test", *m["image"].Dockerfile)
 }
@@ -741,15 +748,17 @@ target "image" {

 	m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"image"}, g["foo"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "test", *m["image"].Dockerfile)

 	m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"image"}, g["foo"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "test", *m["image"].Dockerfile)
 }
@@ -828,7 +837,7 @@ services:
 	m, g, err := ReadTargets(ctx, []File{fhcl}, []string{"default"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, []string{"image"}, g["default"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, 1, len(m["image"].Outputs))
 	require.Equal(t, "type=docker", m["image"].Outputs[0])
@@ -836,7 +845,7 @@ services:
 	m, g, err = ReadTargets(ctx, []File{fhcl}, []string{"image-release"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image-release"}, g[0].Targets)
+	require.Equal(t, []string{"image-release"}, g["default"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, 1, len(m["image-release"].Outputs))
 	require.Equal(t, "type=image,push=true", m["image-release"].Outputs[0])
@@ -844,7 +853,7 @@ services:
 	m, g, err = ReadTargets(ctx, []File{fhcl}, []string{"image", "image-release"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image", "image-release"}, g[0].Targets)
+	require.Equal(t, []string{"image", "image-release"}, g["default"].Targets)
 	require.Equal(t, 2, len(m))
 	require.Equal(t, ".", *m["image"].Context)
 	require.Equal(t, 1, len(m["image-release"].Outputs))
@@ -853,22 +862,22 @@ services:
 	m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"default"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, []string{"image"}, g["default"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, ".", *m["image"].Context)

 	m, g, err = ReadTargets(ctx, []File{fjson}, []string{"default"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"image"}, g[0].Targets)
+	require.Equal(t, []string{"image"}, g["default"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, ".", *m["image"].Context)

 	m, g, err = ReadTargets(ctx, []File{fyml}, []string{"default"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	sort.Strings(g[0].Targets)
-	require.Equal(t, []string{"addon", "aws"}, g[0].Targets)
+	sort.Strings(g["default"].Targets)
+	require.Equal(t, []string{"addon", "aws"}, g["default"].Targets)
 	require.Equal(t, 2, len(m))
 	require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
 	require.Equal(t, "./aws.Dockerfile", *m["aws"].Dockerfile)
@@ -876,8 +885,8 @@ services:
 	m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"addon", "aws"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	sort.Strings(g[0].Targets)
-	require.Equal(t, []string{"addon", "aws"}, g[0].Targets)
+	sort.Strings(g["default"].Targets)
+	require.Equal(t, []string{"addon", "aws"}, g["default"].Targets)
 	require.Equal(t, 2, len(m))
 	require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
 	require.Equal(t, "./aws.Dockerfile", *m["aws"].Dockerfile)
@@ -885,8 +894,8 @@ services:
 	m, g, err = ReadTargets(ctx, []File{fyml, fhcl}, []string{"addon", "aws", "image"}, nil, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(g))
-	sort.Strings(g[0].Targets)
-	require.Equal(t, []string{"addon", "aws", "image"}, g[0].Targets)
+	sort.Strings(g["default"].Targets)
+	require.Equal(t, []string{"addon", "aws", "image"}, g["default"].Targets)
 	require.Equal(t, 3, len(m))
 	require.Equal(t, ".", *m["image"].Context)
 	require.Equal(t, "./Dockerfile", *m["addon"].Dockerfile)
@@ -912,15 +921,17 @@ target "image" {

 	m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"foo"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"foo"}, g["foo"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "bar", *m["foo"].Dockerfile)

 	m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"foo"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"foo"}, g["foo"].Targets)
 	require.Equal(t, 1, len(m))
 	require.Equal(t, "bar", *m["foo"].Dockerfile)
 }
@@ -944,16 +955,18 @@ target "image" {

 	m, g, err := ReadTargets(ctx, []File{f}, []string{"foo"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"foo", "image"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo"}, g["default"].Targets)
+	require.Equal(t, []string{"foo", "image"}, g["foo"].Targets)
 	require.Equal(t, 2, len(m))
 	require.Equal(t, "bar", *m["foo"].Dockerfile)
 	require.Equal(t, "type=docker", m["image"].Outputs[0])

 	m, g, err = ReadTargets(ctx, []File{f}, []string{"foo", "image"}, nil, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(g))
-	require.Equal(t, []string{"foo", "image"}, g[0].Targets)
+	require.Equal(t, 2, len(g))
+	require.Equal(t, []string{"foo", "image"}, g["default"].Targets)
+	require.Equal(t, []string{"foo", "image"}, g["foo"].Targets)
 	require.Equal(t, 2, len(m))
 	require.Equal(t, "bar", *m["foo"].Dockerfile)
 	require.Equal(t, "type=docker", m["image"].Outputs[0])
@@ -990,22 +1003,22 @@ target "d" {
 	cases := []struct {
 		name      string
 		overrides []string
-		want      map[string]string
+		want      map[string]*string
 	}{
 		{
 			name:      "nested simple",
 			overrides: nil,
-			want:      map[string]string{"bar": "234", "baz": "890", "foo": "123"},
+			want:      map[string]*string{"bar": ptrstr("234"), "baz": ptrstr("890"), "foo": ptrstr("123")},
 		},
 		{
 			name:      "nested with overrides first",
 			overrides: []string{"a.args.foo=321", "b.args.bar=432"},
-			want:      map[string]string{"bar": "234", "baz": "890", "foo": "321"},
+			want:      map[string]*string{"bar": ptrstr("234"), "baz": ptrstr("890"), "foo": ptrstr("321")},
 		},
 		{
 			name:      "nested with overrides last",
 			overrides: []string{"a.args.foo=321", "c.args.bar=432"},
-			want:      map[string]string{"bar": "432", "baz": "890", "foo": "321"},
+			want:      map[string]*string{"bar": ptrstr("432"), "baz": ptrstr("890"), "foo": ptrstr("321")},
 		},
 	}
 	for _, tt := range cases {
@@ -1014,7 +1027,7 @@ target "d" {
 			m, g, err := ReadTargets(ctx, []File{f}, []string{"d"}, tt.overrides, nil)
 			require.NoError(t, err)
 			require.Equal(t, 1, len(g))
-			require.Equal(t, []string{"d"}, g[0].Targets)
+			require.Equal(t, []string{"d"}, g["default"].Targets)
 			require.Equal(t, 1, len(m))
 			require.Equal(t, tt.want, m["d"].Args)
 		})
@@ -1058,26 +1071,26 @@ group "default" {
 	cases := []struct {
 		name      string
 		overrides []string
-		wantch1   map[string]string
-		wantch2   map[string]string
+		wantch1   map[string]*string
+		wantch2   map[string]*string
 	}{
 		{
 			name:      "nested simple",
 			overrides: nil,
-			wantch1:   map[string]string{"BAR": "fuu", "FOO": "bar"},
-			wantch2:   map[string]string{"BAR": "fuu", "FOO": "bar", "FOO2": "bar2"},
+			wantch1:   map[string]*string{"BAR": ptrstr("fuu"), "FOO": ptrstr("bar")},
+			wantch2:   map[string]*string{"BAR": ptrstr("fuu"), "FOO": ptrstr("bar"), "FOO2": ptrstr("bar2")},
 		},
 		{
 			name:      "nested with overrides first",
 			overrides: []string{"grandparent.args.BAR=fii", "child1.args.FOO=baaar"},
-			wantch1:   map[string]string{"BAR": "fii", "FOO": "baaar"},
-			wantch2:   map[string]string{"BAR": "fii", "FOO": "bar", "FOO2": "bar2"},
+			wantch1:   map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("baaar")},
+			wantch2:   map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("bar"), "FOO2": ptrstr("bar2")},
 		},
 		{
 			name:      "nested with overrides last",
 			overrides: []string{"grandparent.args.BAR=fii", "child2.args.FOO=baaar"},
-			wantch1:   map[string]string{"BAR": "fii", "FOO": "bar"},
-			wantch2:   map[string]string{"BAR": "fii", "FOO": "baaar", "FOO2": "bar2"},
+			wantch1:   map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("bar")},
+			wantch2:   map[string]*string{"BAR": ptrstr("fii"), "FOO": ptrstr("baaar"), "FOO2": ptrstr("bar2")},
 		},
 	}
 	for _, tt := range cases {
@@ -1086,7 +1099,7 @@ group "default" {
 			m, g, err := ReadTargets(ctx, []File{f}, []string{"default"}, tt.overrides, nil)
 			require.NoError(t, err)
 			require.Equal(t, 1, len(g))
-			require.Equal(t, []string{"child1", "child2"}, g[0].Targets)
+			require.Equal(t, []string{"child1", "child2"}, g["default"].Targets)
 			require.Equal(t, 2, len(m))
 			require.Equal(t, tt.wantch1, m["child1"].Args)
 			require.Equal(t, []string{"type=docker"}, m["child1"].Outputs)
@@ -1183,46 +1196,257 @@ target "f" {
 }`)}

 	cases := []struct {
-		name     string
+		names    []string
 		targets  []string
-		ntargets int
+		groups   []string
+		count    int
 	}{
 		{
-			name:     "a",
-			targets:  []string{"b", "c"},
-			ntargets: 1,
+			names:   []string{"a"},
+			targets: []string{"a"},
+			groups:  []string{"default", "a", "b", "c"},
+			count:   1,
 		},
 		{
-			name:     "b",
-			targets:  []string{"d"},
-			ntargets: 1,
+			names:   []string{"b"},
+			targets: []string{"b"},
+			groups:  []string{"default", "b"},
+			count:   1,
 		},
 		{
-			name:     "c",
-			targets:  []string{"b"},
-			ntargets: 1,
+			names:   []string{"c"},
+			targets: []string{"c"},
+			groups:  []string{"default", "b", "c"},
+			count:   1,
 		},
 		{
-			name:     "d",
+			names:   []string{"d"},
 			targets: []string{"d"},
-			ntargets: 1,
+			groups:  []string{"default"},
+			count:   1,
 		},
 		{
-			name:     "e",
-			targets:  []string{"a", "f"},
-			ntargets: 2,
+			names:   []string{"e"},
+			targets: []string{"e"},
+			groups:  []string{"default", "a", "b", "c", "e"},
+			count:   2,
+		},
+		{
+			names:   []string{"a", "e"},
+			targets: []string{"a", "e"},
+			groups:  []string{"default", "a", "b", "c", "e"},
+			count:   2,
 		},
 	}
 	for _, tt := range cases {
 		tt := tt
-		t.Run(tt.name, func(t *testing.T) {
-			m, g, err := ReadTargets(ctx, []File{f}, []string{tt.name}, nil, nil)
+		t.Run(strings.Join(tt.names, "+"), func(t *testing.T) {
+			m, g, err := ReadTargets(ctx, []File{f}, tt.names, nil, nil)
 			require.NoError(t, err)
-			require.Equal(t, 1, len(g))
-			require.Equal(t, tt.targets, g[0].Targets)
-			require.Equal(t, tt.ntargets, len(m))
+
+			var gnames []string
+			for _, g := range g {
+				gnames = append(gnames, g.Name)
+			}
+			sort.Strings(gnames)
+			sort.Strings(tt.groups)
+			require.Equal(t, tt.groups, gnames)
+
+			sort.Strings(g["default"].Targets)
+			sort.Strings(tt.targets)
+			require.Equal(t, tt.targets, g["default"].Targets)
+
+			require.Equal(t, tt.count, len(m))
 			require.Equal(t, ".", *m["d"].Context)
 			require.Equal(t, "./testdockerfile", *m["d"].Dockerfile)
 		})
 	}
 }

+func TestUnknownExt(t *testing.T) {
+	dt := []byte(`
+target "app" {
+	context = "dir"
+	args = {
+		v1 = "foo"
+	}
+}
+`)
+	dt2 := []byte(`
+services:
+  app:
+    build:
+      dockerfile: Dockerfile-alternate
+      args:
+        v2: "bar"
+`)
+
+	c, err := ParseFiles([]File{
+		{Data: dt, Name: "c1.foo"},
+		{Data: dt2, Name: "c2.bar"},
+	}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
+	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
+	require.Equal(t, "dir", *c.Targets[0].Context)
+	require.Equal(t, "Dockerfile-alternate", *c.Targets[0].Dockerfile)
+}
+
+func TestHCLNullVars(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`variable "FOO" {
+				default = null
+			}
+			variable "BAR" {
+				default = null
+			}
+			target "default" {
+				args = {
+					foo = FOO
+					bar = "baz"
+				}
+				labels = {
+					"com.docker.app.bar" = BAR
+					"com.docker.app.baz" = "foo"
+				}
+			}`),
+	}
+
+	ctx := context.TODO()
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(m))
+	_, ok := m["default"]
+	require.True(t, ok)
+
+	_, err = TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
+	require.Equal(t, map[string]*string{"com.docker.app.baz": ptrstr("foo")}, m["default"].Labels)
+}
+
+func TestJSONNullVars(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.json",
+		Data: []byte(
+			`{
+				"variable": {
+					"FOO": {
+						"default": null
+					}
+				},
+				"target": {
+					"default": {
+						"args": {
+							"foo": "${FOO}",
+							"bar": "baz"
+						}
+					}
+				}
+			}`),
+	}
+
+	ctx := context.TODO()
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(m))
+	_, ok := m["default"]
+	require.True(t, ok)
+
+	_, err = TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, m["default"].Args)
+}
+
+func TestReadLocalFilesDefault(t *testing.T) {
+	tests := []struct {
+		filenames []string
+		expected  []string
+	}{
+		{
+			filenames: []string{"abc.yml", "docker-compose.yml"},
+			expected:  []string{"docker-compose.yml"},
+		},
+		{
+			filenames: []string{"test.foo", "compose.yml", "docker-bake.hcl"},
+			expected:  []string{"compose.yml", "docker-bake.hcl"},
+		},
+		{
+			filenames: []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
+			expected:  []string{"compose.yaml", "docker-compose.yml", "docker-bake.hcl"},
+		},
+		{
+			filenames: []string{"test.txt", "compsoe.yaml"}, // intentional misspell
+			expected:  []string{},
+		},
+	}
+	pwd, err := os.Getwd()
+	require.NoError(t, err)
+
+	for _, tt := range tests {
+		t.Run(strings.Join(tt.filenames, "-"), func(t *testing.T) {
+			dir := t.TempDir()
+			t.Cleanup(func() { _ = os.Chdir(pwd) })
+			require.NoError(t, os.Chdir(dir))
+			for _, tf := range tt.filenames {
+				require.NoError(t, os.WriteFile(tf, []byte(tf), 0644))
+			}
+			files, err := ReadLocalFiles(nil, nil)
+			require.NoError(t, err)
+			if len(files) == 0 {
+				require.Equal(t, len(tt.expected), len(files))
+			} else {
+				found := false
+				for _, exp := range tt.expected {
+					for _, f := range files {
+						if f.Name == exp {
+							found = true
+							break
+						}
+					}
+					require.True(t, found, exp)
+				}
+			}
+		})
+	}
+}
+
+func TestAttestDuplicates(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(
+			`target "default" {
+				attest = ["type=sbom", "type=sbom,generator=custom", "type=sbom,foo=bar", "type=provenance,mode=max"]
+			}`),
+	}
+	ctx := context.TODO()
+
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"default"}, nil, nil)
+	require.Equal(t, []string{"type=sbom,foo=bar", "type=provenance,mode=max"}, m["default"].Attest)
+	require.NoError(t, err)
+
+	opts, err := TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{
+		"sbom":       ptrstr("type=sbom,foo=bar"),
+		"provenance": ptrstr("type=provenance,mode=max"),
+	}, opts["default"].Attests)
+
+	m, _, err = ReadTargets(ctx, []File{fp}, []string{"default"}, []string{"*.attest=type=sbom,disabled=true"}, nil)
+	require.Equal(t, []string{"type=sbom,disabled=true", "type=provenance,mode=max"}, m["default"].Attest)
+	require.NoError(t, err)
+
+	opts, err = TargetsToBuildOpt(m, &Input{})
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{
+		"sbom":       nil,
+		"provenance": ptrstr("type=provenance,mode=max"),
+	}, opts["default"].Attests)
+}
226 bake/compose.go
@@ -1,51 +1,45 @@
 package bake

 import (
-	"fmt"
 	"os"
+	"path/filepath"
 	"strings"

+	"github.com/compose-spec/compose-go/dotenv"
 	"github.com/compose-spec/compose-go/loader"
 	compose "github.com/compose-spec/compose-go/types"
 	"github.com/pkg/errors"
 	"gopkg.in/yaml.v3"
 )

-// errComposeInvalid is returned when a compose file is invalid
-var errComposeInvalid = errors.New("invalid compose file")
-
-func parseCompose(dt []byte) (*compose.Project, error) {
-	return loader.Load(compose.ConfigDetails{
-		ConfigFiles: []compose.ConfigFile{
-			{
-				Content: dt,
-			},
-		},
-		Environment: envMap(os.Environ()),
-	}, func(options *loader.Options) {
-		options.SkipNormalization = true
-		options.SkipConsistencyCheck = true
-	})
-}
-
-func envMap(env []string) map[string]string {
-	result := make(map[string]string, len(env))
-	for _, s := range env {
-		kv := strings.SplitN(s, "=", 2)
-		if len(kv) != 2 {
-			continue
-		}
-		result[kv[0]] = kv[1]
-	}
-	return result
-}
-
-func ParseCompose(dt []byte) (*Config, error) {
-	cfg, err := parseCompose(dt)
+func ParseComposeFiles(fs []File) (*Config, error) {
+	envs, err := composeEnv()
 	if err != nil {
 		return nil, err
 	}
-	if err = composeValidate(cfg); err != nil {
+	var cfgs []compose.ConfigFile
+	for _, f := range fs {
+		cfgs = append(cfgs, compose.ConfigFile{
+			Filename: f.Name,
+			Content:  f.Data,
+		})
+	}
+	return ParseCompose(cfgs, envs)
+}
+
+func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
+	if envs == nil {
+		envs = make(map[string]string)
+	}
+	cfg, err := loader.Load(compose.ConfigDetails{
+		ConfigFiles: cfgs,
+		Environment: envs,
+	}, func(options *loader.Options) {
+		options.SetProjectName("bake", false)
+		options.SkipNormalization = true
+		options.Profiles = []string{"*"}
+	})
+	if err != nil {
 		return nil, err
 	}

@@ -58,7 +52,7 @@ func ParseCompose(dt []byte) (*Config, error) {

 	for _, s := range cfg.Services {
 		if s.Build == nil {
-			s.Build = &compose.BuildConfig{}
+			continue
 		}

 		targetName := sanitizeTargetName(s.Name)
@@ -76,6 +70,19 @@ func ParseCompose(dt []byte) (*Config, error) {
 			dockerfilePath := s.Build.Dockerfile
 			dockerfilePathP = &dockerfilePath
 		}
+		var dockerfileInlineP *string
+		if s.Build.DockerfileInline != "" {
+			dockerfileInline := s.Build.DockerfileInline
+			dockerfileInlineP = &dockerfileInline
+		}
+
+		var additionalContexts map[string]string
+		if s.Build.AdditionalContexts != nil {
+			additionalContexts = map[string]string{}
+			for k, v := range s.Build.AdditionalContexts {
+				additionalContexts[k] = v
+			}
+		}
+
 		var secrets []string
 		for _, bs := range s.Build.Secrets {
@@ -86,13 +93,22 @@ func ParseCompose(dt []byte) (*Config, error) {
 			secrets = append(secrets, secret)
 		}

+		// compose does not support nil values for labels
+		labels := map[string]*string{}
+		for k, v := range s.Build.Labels {
+			v := v
+			labels[k] = &v
+		}
+
 		g.Targets = append(g.Targets, targetName)
 		t := &Target{
 			Name:       targetName,
 			Context:    contextPathP,
-			Dockerfile: dockerfilePathP,
-			Tags:       s.Build.Tags,
-			Labels:     s.Build.Labels,
+			Contexts:         additionalContexts,
+			Dockerfile:       dockerfilePathP,
+			DockerfileInline: dockerfileInlineP,
+			Tags:             s.Build.Tags,
+			Labels:           labels,
 			Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
 				if val, ok := s.Environment[val]; ok && val != nil {
 					return *val, true
@@ -124,16 +140,97 @@ func ParseCompose(dt []byte) (*Config, error) {
 	return &c, nil
 }

-func flatten(in compose.MappingWithEquals) compose.Mapping {
+func validateComposeFile(dt []byte, fn string) (bool, error) {
+	envs, err := composeEnv()
+	if err != nil {
+		return true, err
+	}
+	fnl := strings.ToLower(fn)
+	if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
+		return true, validateCompose(dt, envs)
+	}
+	if strings.HasSuffix(fnl, ".json") || strings.HasSuffix(fnl, ".hcl") {
+		return false, nil
+	}
+	err = validateCompose(dt, envs)
+	return err == nil, err
+}
+
+func validateCompose(dt []byte, envs map[string]string) error {
+	_, err := loader.Load(compose.ConfigDetails{
+		ConfigFiles: []compose.ConfigFile{
+			{
+				Content: dt,
+			},
+		},
+		Environment: envs,
+	}, func(options *loader.Options) {
+		options.SetProjectName("bake", false)
+		options.SkipNormalization = true
+		// consistency is checked later in ParseCompose to ensure multiple
+		// compose files can be merged together
+		options.SkipConsistencyCheck = true
+	})
+	return err
+}
+
+func composeEnv() (map[string]string, error) {
+	envs := sliceToMap(os.Environ())
+	if wd, err := os.Getwd(); err == nil {
+		envs, err = loadDotEnv(envs, wd)
+		if err != nil {
+			return nil, err
+		}
+	}
+	return envs, nil
+}
+
+func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string, error) {
+	if curenv == nil {
+		curenv = make(map[string]string)
+	}
+
+	ef, err := filepath.Abs(filepath.Join(workingDir, ".env"))
+	if err != nil {
+		return nil, err
+	}
+
+	if _, err = os.Stat(ef); os.IsNotExist(err) {
+		return curenv, nil
+	} else if err != nil {
+		return nil, err
+	}
+
+	dt, err := os.ReadFile(ef)
+	if err != nil {
+		return nil, err
+	}
+
+	envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
+	if err != nil {
+		return nil, err
+	}
+
+	for k, v := range envs {
+		if _, set := curenv[k]; set {
+			continue
+		}
+		curenv[k] = v
+	}
+
+	return curenv, nil
+}
+
+func flatten(in compose.MappingWithEquals) map[string]*string {
 	if len(in) == 0 {
 		return nil
 	}
-	out := compose.Mapping{}
+	out := map[string]*string{}
 	for k, v := range in {
 		if v == nil {
 			continue
 		}
-		out[k] = *v
+		out[k] = v
 	}
 	return out
 }
@@ -151,10 +248,12 @@ type xbake struct {
 	Pull          *bool       `yaml:"pull,omitempty"`
 	NoCache       *bool       `yaml:"no-cache,omitempty"`
 	NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
+	Contexts      stringMap   `yaml:"contexts,omitempty"`
 	// don't forget to update documentation if you add a new field:
-	// docs/guides/bake/compose-file.md#extension-field-with-x-bake
+	// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
 }

+type stringMap map[string]string
 type stringArray []string

 func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
@@ -188,25 +287,25 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 	}

 	if len(xb.Tags) > 0 {
-		t.Tags = dedupString(append(t.Tags, xb.Tags...))
+		t.Tags = dedupSlice(append(t.Tags, xb.Tags...))
 	}
 	if len(xb.CacheFrom) > 0 {
-		t.CacheFrom = dedupString(append(t.CacheFrom, xb.CacheFrom...))
+		t.CacheFrom = dedupSlice(append(t.CacheFrom, xb.CacheFrom...))
 	}
 	if len(xb.CacheTo) > 0 {
-		t.CacheTo = dedupString(append(t.CacheTo, xb.CacheTo...))
+		t.CacheTo = dedupSlice(append(t.CacheTo, xb.CacheTo...))
 	}
 	if len(xb.Secrets) > 0 {
-		t.Secrets = dedupString(append(t.Secrets, xb.Secrets...))
+		t.Secrets = dedupSlice(append(t.Secrets, xb.Secrets...))
 	}
 	if len(xb.SSH) > 0 {
-		t.SSH = dedupString(append(t.SSH, xb.SSH...))
+		t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
 	}
 	if len(xb.Platforms) > 0 {
-		t.Platforms = dedupString(append(t.Platforms, xb.Platforms...))
+		t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
 	}
 	if len(xb.Outputs) > 0 {
-		t.Outputs = dedupString(append(t.Outputs, xb.Outputs...))
+		t.Outputs = dedupSlice(append(t.Outputs, xb.Outputs...))
 	}
 	if xb.Pull != nil {
 		t.Pull = xb.Pull
@@ -215,34 +314,15 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
 		t.NoCache = xb.NoCache
 	}
 	if len(xb.NoCacheFilter) > 0 {
-		t.NoCacheFilter = dedupString(append(t.NoCacheFilter, xb.NoCacheFilter...))
+		t.NoCacheFilter = dedupSlice(append(t.NoCacheFilter, xb.NoCacheFilter...))
+	}
+	if len(xb.Contexts) > 0 {
+		t.Contexts = dedupMap(t.Contexts, xb.Contexts)
 	}

 	return nil
 }

-// composeValidate validates a compose file
-func composeValidate(project *compose.Project) error {
-	for _, s := range project.Services {
-		if s.Build != nil {
-			for _, secret := range s.Build.Secrets {
-				if _, ok := project.Secrets[secret.Source]; !ok {
-					return errors.Wrap(errComposeInvalid, fmt.Sprintf("service %q refers to undefined build secret %s", sanitizeTargetName(s.Name), secret.Source))
-				}
-			}
-		}
-	}
-	for name, secret := range project.Secrets {
-		if secret.External.External {
-			continue
-		}
-		if secret.File == "" && secret.Environment == "" {
-			return errors.Wrap(errComposeInvalid, fmt.Sprintf("secret %q must declare either `file` or `environment`", name))
-		}
-	}
-	return nil
-}
-
 // composeToBuildkitSecret converts secret from compose format to buildkit's
 // csv format.
 func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
@@ -2,9 +2,12 @@ package bake

 import (
 	"os"
+	"path/filepath"
 	"sort"
 	"testing"

+	compose "github.com/compose-spec/compose-go/types"
+	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

@@ -18,6 +21,8 @@ services:
   webapp:
     build:
       context: ./dir
+      additional_contexts:
+        foo: /bar
       dockerfile: Dockerfile-alternate
       network:
         none
@@ -30,6 +35,13 @@ services:
     secrets:
       - token
       - aws
+  webapp2:
+    profiles:
+      - test
+    build:
+      context: ./dir
+      dockerfile_inline: |
+        FROM alpine
 secrets:
   token:
     environment: ENV_TOKEN
@@ -37,15 +49,15 @@ secrets:
     file: /root/.aws/credentials
 `)

-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 1, len(c.Groups))
-	require.Equal(t, c.Groups[0].Name, "default")
+	require.Equal(t, "default", c.Groups[0].Name)
 	sort.Strings(c.Groups[0].Targets)
-	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
+	require.Equal(t, []string{"db", "webapp", "webapp2"}, c.Groups[0].Targets)

-	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, 3, len(c.Targets))
 	sort.Slice(c.Targets, func(i, j int) bool {
 		return c.Targets[i].Name < c.Targets[j].Name
 	})
@@ -55,16 +67,21 @@ secrets:

 	require.Equal(t, "webapp", c.Targets[1].Name)
 	require.Equal(t, "./dir", *c.Targets[1].Context)
+	require.Equal(t, map[string]string{"foo": "/bar"}, c.Targets[1].Contexts)
 	require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
 	require.Equal(t, 1, len(c.Targets[1].Args))
-	require.Equal(t, "123", c.Targets[1].Args["buildno"])
+	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
-	require.Equal(t, c.Targets[1].CacheFrom, []string{"type=local,src=path/to/cache"})
+	require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
-	require.Equal(t, c.Targets[1].CacheTo, []string{"type=local,dest=path/to/cache"})
+	require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
 	require.Equal(t, "none", *c.Targets[1].NetworkMode)
 	require.Equal(t, []string{
 		"id=token,env=ENV_TOKEN",
 		"id=aws,src=/root/.aws/credentials",
 	}, c.Targets[1].Secrets)

+	require.Equal(t, "webapp2", c.Targets[2].Name)
+	require.Equal(t, "./dir", *c.Targets[2].Context)
+	require.Equal(t, "FROM alpine\n", *c.Targets[2].DockerfileInline)
 }

 func TestNoBuildOutOfTreeService(t *testing.T) {
@@ -75,9 +92,10 @@ services:
   webapp:
     build: ./db
 `)
-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, 1, len(c.Targets))
 }

 func TestParseComposeTarget(t *testing.T) {
@@ -93,7 +111,7 @@ services:
         target: webapp
 `)

-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)

 	require.Equal(t, 2, len(c.Targets))
@@ -118,15 +136,15 @@ services:
         target: webapp
 `)

-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 	require.Equal(t, 2, len(c.Targets))
 	sort.Slice(c.Targets, func(i, j int) bool {
 		return c.Targets[i].Name < c.Targets[j].Name
 	})
-	require.Equal(t, c.Targets[0].Name, "db")
+	require.Equal(t, "db", c.Targets[0].Name)
 	require.Equal(t, "db", *c.Targets[0].Target)
-	require.Equal(t, c.Targets[1].Name, "webapp")
+	require.Equal(t, "webapp", c.Targets[1].Name)
 	require.Equal(t, "webapp", *c.Targets[1].Target)
 }

@@ -145,18 +163,15 @@ services:
         BRB: FOO
 `)

-	os.Setenv("FOO", "bar")
-	defer os.Unsetenv("FOO")
-	os.Setenv("BAR", "foo")
-	defer os.Unsetenv("BAR")
-	os.Setenv("ZZZ_BAR", "zzz_foo")
-	defer os.Unsetenv("ZZZ_BAR")
+	t.Setenv("FOO", "bar")
+	t.Setenv("BAR", "foo")
+	t.Setenv("ZZZ_BAR", "zzz_foo")

-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
 	require.NoError(t, err)
-	require.Equal(t, c.Targets[0].Args["FOO"], "bar")
+	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
-	require.Equal(t, c.Targets[0].Args["BAR"], "zzz_foo")
+	require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
-	require.Equal(t, c.Targets[0].Args["BRB"], "FOO")
+	require.Equal(t, ptrstr("FOO"), c.Targets[0].Args["BRB"])
 }

 func TestInconsistentComposeFile(t *testing.T) {
@@ -166,8 +181,8 @@ services:
     entrypoint: echo 1
 `)

-	_, err := ParseCompose(dt)
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
-	require.NoError(t, err)
+	require.Error(t, err)
 }

 func TestAdvancedNetwork(t *testing.T) {
@@ -191,7 +206,7 @@ networks:
         gateway: 10.5.0.254
gateway: 10.5.0.254
|
gateway: 10.5.0.254
|
||||||
`)
|
`)
|
||||||
|
|
||||||
_, err := ParseCompose(dt)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -208,9 +223,9 @@ services:
|
|||||||
- bar
|
- bar
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose(dt)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, c.Targets[0].Tags, []string{"foo", "bar"})
|
require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestDependsOnList(t *testing.T) {
|
func TestDependsOnList(t *testing.T) {
|
||||||
@@ -245,7 +260,7 @@ networks:
|
|||||||
name: test-net
|
name: test-net
|
||||||
`)
|
`)
|
||||||
|
|
||||||
_, err := ParseCompose(dt)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -267,6 +282,8 @@ services:
|
|||||||
CT_ECR: foo
|
CT_ECR: foo
|
||||||
CT_TAG: bar
|
CT_TAG: bar
|
||||||
x-bake:
|
x-bake:
|
||||||
|
contexts:
|
||||||
|
alpine: docker-image://alpine:3.13
|
||||||
tags:
|
tags:
|
||||||
- ct-addon:foo
|
- ct-addon:foo
|
||||||
- ct-addon:alp
|
- ct-addon:alp
|
||||||
@@ -296,24 +313,25 @@ services:
|
|||||||
no-cache: true
|
no-cache: true
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose(dt)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 2, len(c.Targets))
|
require.Equal(t, 2, len(c.Targets))
|
||||||
sort.Slice(c.Targets, func(i, j int) bool {
|
sort.Slice(c.Targets, func(i, j int) bool {
|
||||||
return c.Targets[i].Name < c.Targets[j].Name
|
return c.Targets[i].Name < c.Targets[j].Name
|
||||||
})
|
})
|
||||||
require.Equal(t, c.Targets[0].Args, map[string]string{"CT_ECR": "foo", "CT_TAG": "bar"})
|
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
|
||||||
require.Equal(t, c.Targets[0].Tags, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"})
|
require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
|
||||||
require.Equal(t, c.Targets[0].Platforms, []string{"linux/amd64", "linux/arm64"})
|
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
|
||||||
require.Equal(t, c.Targets[0].CacheFrom, []string{"user/app:cache", "type=local,src=path/to/cache"})
|
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
||||||
require.Equal(t, c.Targets[0].CacheTo, []string{"user/app:cache", "type=local,dest=path/to/cache"})
|
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
||||||
require.Equal(t, c.Targets[0].Pull, newBool(true))
|
require.Equal(t, newBool(true), c.Targets[0].Pull)
|
||||||
require.Equal(t, c.Targets[1].Tags, []string{"ct-fake-aws:bar"})
|
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
|
||||||
require.Equal(t, c.Targets[1].Secrets, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"})
|
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
|
||||||
require.Equal(t, c.Targets[1].SSH, []string{"default"})
|
require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, c.Targets[1].Secrets)
|
||||||
require.Equal(t, c.Targets[1].Platforms, []string{"linux/arm64"})
|
require.Equal(t, []string{"default"}, c.Targets[1].SSH)
|
||||||
require.Equal(t, c.Targets[1].Outputs, []string{"type=docker"})
|
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
|
||||||
require.Equal(t, c.Targets[1].NoCache, newBool(true))
|
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
|
||||||
|
require.Equal(t, newBool(true), c.Targets[1].NoCache)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestComposeExtDedup(t *testing.T) {
|
func TestComposeExtDedup(t *testing.T) {
|
||||||
@@ -339,12 +357,12 @@ services:
|
|||||||
- type=local,dest=path/to/cache
|
- type=local,dest=path/to/cache
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose(dt)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Tags, []string{"ct-addon:foo", "ct-addon:baz"})
|
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
|
||||||
require.Equal(t, c.Targets[0].CacheFrom, []string{"user/app:cache", "type=local,src=path/to/cache"})
|
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
|
||||||
require.Equal(t, c.Targets[0].CacheTo, []string{"user/app:cache", "type=local,dest=path/to/cache"})
|
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestEnv(t *testing.T) {
|
func TestEnv(t *testing.T) {
|
||||||
@@ -372,9 +390,33 @@ services:
|
|||||||
- ` + envf.Name() + `
|
- ` + envf.Name() + `
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseCompose(dt)
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, c.Targets[0].Args, map[string]string{"CT_ECR": "foo", "FOO": "bsdf -csdf", "NODE_ENV": "test"})
|
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDotEnv(t *testing.T) {
|
||||||
|
tmpdir := t.TempDir()
|
||||||
|
|
||||||
|
err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte("FOO=bar"), 0644)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
var dt = []byte(`
|
||||||
|
services:
|
||||||
|
scratch:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
args:
|
||||||
|
FOO:
|
||||||
|
`)
|
||||||
|
|
||||||
|
chdir(t, tmpdir)
|
||||||
|
c, err := ParseComposeFiles([]File{{
|
||||||
|
Name: "docker-compose.yml",
|
||||||
|
Data: dt,
|
||||||
|
}})
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, map[string]*string{"FOO": ptrstr("bar")}, c.Targets[0].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestPorts(t *testing.T) {
|
func TestPorts(t *testing.T) {
|
||||||
@@ -394,7 +436,7 @@ services:
|
|||||||
published: "3306"
|
published: "3306"
|
||||||
protocol: tcp
|
protocol: tcp
|
||||||
`)
|
`)
|
||||||
_, err := ParseCompose(dt)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -440,12 +482,12 @@ func TestServiceName(t *testing.T) {
|
|||||||
for _, tt := range cases {
|
for _, tt := range cases {
|
||||||
tt := tt
|
tt := tt
|
||||||
t.Run(tt.svc, func(t *testing.T) {
|
t.Run(tt.svc, func(t *testing.T) {
|
||||||
_, err := ParseCompose([]byte(`
|
_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
|
||||||
services:
|
services:
|
||||||
` + tt.svc + `:
|
` + tt.svc + `:
|
||||||
build:
|
build:
|
||||||
context: .
|
context: .
|
||||||
`))
|
`)}}, nil)
|
||||||
if tt.wantErr {
|
if tt.wantErr {
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
} else {
|
} else {
|
||||||
@@ -511,7 +553,7 @@ services:
|
|||||||
for _, tt := range cases {
|
for _, tt := range cases {
|
||||||
tt := tt
|
tt := tt
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
_, err := ParseCompose(tt.dt)
|
_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
|
||||||
if tt.wantErr {
|
if tt.wantErr {
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
} else {
|
} else {
|
||||||
@@ -520,3 +562,114 @@ services:
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestValidateComposeFile(t *testing.T) {
|
||||||
|
cases := []struct {
|
||||||
|
name string
|
||||||
|
fn string
|
||||||
|
dt []byte
|
||||||
|
isCompose bool
|
||||||
|
wantErr bool
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "empty service",
|
||||||
|
fn: "docker-compose.yml",
|
||||||
|
dt: []byte(`
|
||||||
|
services:
|
||||||
|
foo:
|
||||||
|
`),
|
||||||
|
isCompose: true,
|
||||||
|
wantErr: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "build",
|
||||||
|
fn: "docker-compose.yml",
|
||||||
|
dt: []byte(`
|
||||||
|
services:
|
||||||
|
foo:
|
||||||
|
build: .
|
||||||
|
`),
|
||||||
|
isCompose: true,
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "image",
|
||||||
|
fn: "docker-compose.yml",
|
||||||
|
dt: []byte(`
|
||||||
|
services:
|
||||||
|
simple:
|
||||||
|
image: nginx
|
||||||
|
`),
|
||||||
|
isCompose: true,
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "unknown ext",
|
||||||
|
fn: "docker-compose.foo",
|
||||||
|
dt: []byte(`
|
||||||
|
services:
|
||||||
|
simple:
|
||||||
|
image: nginx
|
||||||
|
`),
|
||||||
|
isCompose: true,
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "hcl",
|
||||||
|
fn: "docker-bake.hcl",
|
||||||
|
dt: []byte(`
|
||||||
|
target "default" {
|
||||||
|
dockerfile = "test"
|
||||||
|
}
|
||||||
|
`),
|
||||||
|
isCompose: false,
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, tt := range cases {
|
||||||
|
tt := tt
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
isCompose, err := validateComposeFile(tt.dt, tt.fn)
|
||||||
|
assert.Equal(t, tt.isCompose, isCompose)
|
||||||
|
if tt.wantErr {
|
||||||
|
require.Error(t, err)
|
||||||
|
} else {
|
||||||
|
require.NoError(t, err)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestComposeNullArgs(t *testing.T) {
|
||||||
|
var dt = []byte(`
|
||||||
|
services:
|
||||||
|
scratch:
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
args:
|
||||||
|
FOO: null
|
||||||
|
bar: "baz"
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
|
||||||
|
}
|
||||||
|
|
||||||
|
// chdir changes the current working directory to the named directory,
|
||||||
|
// and then restore the original working directory at the end of the test.
|
||||||
|
func chdir(t *testing.T, dir string) {
|
||||||
|
olddir, err := os.Getwd()
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("chdir: %v", err)
|
||||||
|
}
|
||||||
|
if err := os.Chdir(dir); err != nil {
|
||||||
|
t.Fatalf("chdir %s: %v", dir, err)
|
||||||
|
}
|
||||||
|
t.Cleanup(func() {
|
||||||
|
if err := os.Chdir(olddir); err != nil {
|
||||||
|
t.Errorf("chdir to original working directory %s: %v", olddir, err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
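Many of the assertion changes in this diff replace plain strings with `ptrstr(...)` because build args moved from `map[string]string` to `map[string]*string`, so tests can distinguish an arg set to a value from one that is declared but unset. A minimal sketch of such a helper, assuming the signature the tests call it with (the actual helper lives in the bake test files and may differ):

```go
package main

import "fmt"

// ptrstr boxes a string into a *string; for any non-string input
// (e.g. nil) it returns a nil pointer, modelling an "unset" build arg.
// This is an illustrative sketch, not the upstream implementation.
func ptrstr(s interface{}) *string {
	if v, ok := s.(string); ok {
		return &v
	}
	return nil
}

func main() {
	args := map[string]*string{
		"FOO": ptrstr("bar"), // explicitly set
		"BAZ": ptrstr(nil),   // declared but unset (e.g. `BAZ:` with no value)
	}
	for _, k := range []string{"FOO", "BAZ"} {
		if v := args[k]; v != nil {
			fmt.Printf("%s=%s\n", k, *v)
		} else {
			fmt.Printf("%s is unset\n", k)
		}
	}
}
```

With pointers, `require.Equal(t, ptrstr(nil), args["BAZ"])` can assert "present but unset", which a plain string map cannot express.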
|||||||
721
bake/hcl_test.go
721
bake/hcl_test.go
@@ -1,7 +1,7 @@
|
|||||||
package bake
|
package bake
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"os"
|
"reflect"
|
||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
@@ -54,7 +54,7 @@ func TestHCLBasic(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, c.Targets[1].Name, "webapp")
|
require.Equal(t, c.Targets[1].Name, "webapp")
|
||||||
require.Equal(t, 1, len(c.Targets[1].Args))
|
require.Equal(t, 1, len(c.Targets[1].Args))
|
||||||
require.Equal(t, "123", c.Targets[1].Args["buildno"])
|
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
|
||||||
|
|
||||||
require.Equal(t, c.Targets[2].Name, "cross")
|
require.Equal(t, c.Targets[2].Name, "cross")
|
||||||
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
||||||
@@ -62,7 +62,7 @@ func TestHCLBasic(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
||||||
require.Equal(t, 1, len(c.Targets[3].Args))
|
require.Equal(t, 1, len(c.Targets[3].Args))
|
||||||
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
|
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLBasicInJSON(t *testing.T) {
|
func TestHCLBasicInJSON(t *testing.T) {
|
||||||
@@ -114,7 +114,7 @@ func TestHCLBasicInJSON(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, c.Targets[1].Name, "webapp")
|
require.Equal(t, c.Targets[1].Name, "webapp")
|
||||||
require.Equal(t, 1, len(c.Targets[1].Args))
|
require.Equal(t, 1, len(c.Targets[1].Args))
|
||||||
require.Equal(t, "123", c.Targets[1].Args["buildno"])
|
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
|
||||||
|
|
||||||
require.Equal(t, c.Targets[2].Name, "cross")
|
require.Equal(t, c.Targets[2].Name, "cross")
|
||||||
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
||||||
@@ -122,7 +122,7 @@ func TestHCLBasicInJSON(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
||||||
require.Equal(t, 1, len(c.Targets[3].Args))
|
require.Equal(t, 1, len(c.Targets[3].Args))
|
||||||
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
|
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLWithFunctions(t *testing.T) {
|
func TestHCLWithFunctions(t *testing.T) {
|
||||||
@@ -147,7 +147,7 @@ func TestHCLWithFunctions(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "webapp")
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
require.Equal(t, "124", c.Targets[0].Args["buildno"])
|
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLWithUserDefinedFunctions(t *testing.T) {
|
func TestHCLWithUserDefinedFunctions(t *testing.T) {
|
||||||
@@ -177,7 +177,7 @@ func TestHCLWithUserDefinedFunctions(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "webapp")
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
require.Equal(t, "124", c.Targets[0].Args["buildno"])
|
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLWithVariables(t *testing.T) {
|
func TestHCLWithVariables(t *testing.T) {
|
||||||
@@ -206,9 +206,9 @@ func TestHCLWithVariables(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "webapp")
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
require.Equal(t, "123", c.Targets[0].Args["buildno"])
|
require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])
|
||||||
|
|
||||||
os.Setenv("BUILD_NUMBER", "456")
|
t.Setenv("BUILD_NUMBER", "456")
|
||||||
|
|
||||||
c, err = ParseFile(dt, "docker-bake.hcl")
|
c, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
@@ -219,7 +219,7 @@ func TestHCLWithVariables(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "webapp")
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
require.Equal(t, "456", c.Targets[0].Args["buildno"])
|
require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLWithVariablesInFunctions(t *testing.T) {
|
func TestHCLWithVariablesInFunctions(t *testing.T) {
|
||||||
@@ -244,7 +244,7 @@ func TestHCLWithVariablesInFunctions(t *testing.T) {
|
|||||||
require.Equal(t, c.Targets[0].Name, "webapp")
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)
|
require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)
|
||||||
|
|
||||||
os.Setenv("REPO", "docker/buildx")
|
t.Setenv("REPO", "docker/buildx")
|
||||||
|
|
||||||
c, err = ParseFile(dt, "docker-bake.hcl")
|
c, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
@@ -280,10 +280,10 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
|
|||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre-abc", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "abc-post", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])
|
||||||
|
|
||||||
os.Setenv("FOO", "def")
|
t.Setenv("FOO", "def")
|
||||||
|
|
||||||
c, err = ParseFiles([]File{
|
c, err = ParseFiles([]File{
|
||||||
{Data: dt, Name: "c1.hcl"},
|
{Data: dt, Name: "c1.hcl"},
|
||||||
@@ -293,12 +293,11 @@ func TestHCLMultiFileSharedVariables(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre-def", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "def-post", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLVarsWithVars(t *testing.T) {
|
func TestHCLVarsWithVars(t *testing.T) {
|
||||||
os.Unsetenv("FOO")
|
|
||||||
dt := []byte(`
|
dt := []byte(`
|
||||||
variable "FOO" {
|
variable "FOO" {
|
||||||
default = upper("${BASE}def")
|
default = upper("${BASE}def")
|
||||||
@@ -330,10 +329,10 @@ func TestHCLVarsWithVars(t *testing.T) {
|
|||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre--ABCDEF-", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "ABCDEF-post", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])
|
||||||
|
|
||||||
os.Setenv("BASE", "new")
|
t.Setenv("BASE", "new")
|
||||||
|
|
||||||
c, err = ParseFiles([]File{
|
c, err = ParseFiles([]File{
|
||||||
{Data: dt, Name: "c1.hcl"},
|
{Data: dt, Name: "c1.hcl"},
|
||||||
@@ -343,12 +342,11 @@ func TestHCLVarsWithVars(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre--NEWDEF-", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "NEWDEF-post", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLTypedVariables(t *testing.T) {
|
func TestHCLTypedVariables(t *testing.T) {
|
||||||
os.Unsetenv("FOO")
|
|
||||||
dt := []byte(`
|
dt := []byte(`
|
||||||
variable "FOO" {
|
variable "FOO" {
|
||||||
default = 3
|
default = 3
|
||||||
@@ -369,33 +367,80 @@ func TestHCLTypedVariables(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "lower", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "yes", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])
|
||||||
|
|
||||||
os.Setenv("FOO", "5.1")
|
t.Setenv("FOO", "5.1")
|
||||||
os.Setenv("IS_FOO", "0")
|
t.Setenv("IS_FOO", "0")
|
||||||
|
|
||||||
c, err = ParseFile(dt, "docker-bake.hcl")
|
c, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "higher", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
|
||||||
require.Equal(t, "no", c.Targets[0].Args["v2"])
|
require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])
|
||||||
|
|
||||||
os.Setenv("FOO", "NaN")
|
t.Setenv("FOO", "NaN")
|
||||||
_, err = ParseFile(dt, "docker-bake.hcl")
|
_, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
require.Contains(t, err.Error(), "failed to parse FOO as number")
|
require.Contains(t, err.Error(), "failed to parse FOO as number")
|
||||||
|
|
||||||
os.Setenv("FOO", "0")
|
t.Setenv("FOO", "0")
|
||||||
os.Setenv("IS_FOO", "maybe")
|
t.Setenv("IS_FOO", "maybe")
|
||||||
|
|
||||||
_, err = ParseFile(dt, "docker-bake.hcl")
|
_, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
require.Contains(t, err.Error(), "failed to parse IS_FOO as bool")
|
require.Contains(t, err.Error(), "failed to parse IS_FOO as bool")
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestHCLNullVariables(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
variable "FOO" {
|
||||||
|
default = null
|
||||||
|
}
|
||||||
|
target "default" {
|
||||||
|
args = {
|
||||||
|
foo = FOO
|
||||||
|
}
|
||||||
|
}`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])
|
||||||
|
|
||||||
|
t.Setenv("FOO", "bar")
|
||||||
|
c, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestJSONNullVariables(t *testing.T) {
|
||||||
|
dt := []byte(`{
|
||||||
|
"variable": {
|
||||||
|
"FOO": {
|
||||||
|
"default": null
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"default": {
|
||||||
|
"args": {
|
||||||
|
"foo": "${FOO}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.json")
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])
|
||||||
|
|
||||||
|
t.Setenv("FOO", "bar")
|
||||||
|
c, err = ParseFile(dt, "docker-bake.json")
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
|
||||||
|
}
|
||||||
|
|
||||||
func TestHCLVariableCycle(t *testing.T) {
|
func TestHCLVariableCycle(t *testing.T) {
|
||||||
dt := []byte(`
|
dt := []byte(`
|
||||||
variable "FOO" {
|
variable "FOO" {
|
||||||
@@ -431,19 +476,107 @@ func TestHCLAttrs(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "attr-abcdef", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
|
||||||
|
|
||||||
// env does not apply if no variable
|
// env does not apply if no variable
|
||||||
os.Setenv("FOO", "bar")
|
t.Setenv("FOO", "bar")
|
||||||
c, err = ParseFile(dt, "docker-bake.hcl")
|
c, err = ParseFile(dt, "docker-bake.hcl")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "attr-abcdef", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
|
||||||
// attr-multifile
|
// attr-multifile
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestHCLTargetAttrs(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
target "foo" {
|
||||||
|
dockerfile = "xxx"
|
||||||
|
context = target.bar.context
|
||||||
|
target = target.foo.dockerfile
|
||||||
|
}
|
||||||
|
|
||||||
|
target "bar" {
|
||||||
|
dockerfile = target.foo.dockerfile
|
||||||
|
context = "yyy"
|
||||||
|
target = target.bar.context
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 2, len(c.Targets))
|
||||||
|
require.Equal(t, "foo", c.Targets[0].Name)
|
||||||
|
require.Equal(t, "bar", c.Targets[1].Name)
|
||||||
|
|
||||||
|
require.Equal(t, "xxx", *c.Targets[0].Dockerfile)
|
||||||
|
require.Equal(t, "yyy", *c.Targets[0].Context)
|
||||||
|
require.Equal(t, "xxx", *c.Targets[0].Target)
|
||||||
|
|
||||||
|
require.Equal(t, "xxx", *c.Targets[1].Dockerfile)
|
||||||
|
require.Equal(t, "yyy", *c.Targets[1].Context)
|
||||||
|
require.Equal(t, "yyy", *c.Targets[1].Target)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLTargetGlobal(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
target "foo" {
|
||||||
|
dockerfile = "x"
|
||||||
|
}
|
||||||
|
x = target.foo.dockerfile
|
||||||
|
y = x
|
||||||
|
target "bar" {
|
||||||
|
dockerfile = y
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 2, len(c.Targets))
|
||||||
|
require.Equal(t, "foo", c.Targets[0].Name)
|
||||||
|
require.Equal(t, "bar", c.Targets[1].Name)
|
||||||
|
|
||||||
|
require.Equal(t, "x", *c.Targets[0].Dockerfile)
|
||||||
|
require.Equal(t, "x", *c.Targets[1].Dockerfile)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLTargetAttrName(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
target "foo" {
|
||||||
|
dockerfile = target.foo.name
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Targets))
|
||||||
|
require.Equal(t, "foo", c.Targets[0].Name)
|
||||||
|
require.Equal(t, "foo", *c.Targets[0].Dockerfile)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLTargetAttrEmptyChain(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
target "foo" {
|
||||||
|
# dockerfile = Dockerfile
|
||||||
|
context = target.foo.dockerfile
|
||||||
|
target = target.foo.context
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Targets))
|
||||||
|
require.Equal(t, "foo", c.Targets[0].Name)
|
||||||
|
require.Nil(t, c.Targets[0].Dockerfile)
|
||||||
|
require.Nil(t, c.Targets[0].Context)
|
||||||
|
require.Nil(t, c.Targets[0].Target)
|
||||||
|
}
|
||||||
|
|
||||||
func TestHCLAttrsCustomType(t *testing.T) {
|
func TestHCLAttrsCustomType(t *testing.T) {
|
||||||
dt := []byte(`
|
dt := []byte(`
|
||||||
platforms=["linux/arm64", "linux/amd64"]
|
platforms=["linux/arm64", "linux/amd64"]
|
||||||
@@ -461,11 +594,10 @@ func TestHCLAttrsCustomType(t *testing.T) {
|
|||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
|
require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
|
||||||
require.Equal(t, "linux/arm64", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestHCLMultiFileAttrs(t *testing.T) {
|
func TestHCLMultiFileAttrs(t *testing.T) {
|
||||||
os.Unsetenv("FOO")
|
|
||||||
dt := []byte(`
|
dt := []byte(`
|
||||||
variable "FOO" {
|
variable "FOO" {
|
||||||
default = "abc"
|
default = "abc"
|
||||||
@@ -487,9 +619,9 @@ func TestHCLMultiFileAttrs(t *testing.T) {
|
|||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre-def", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
|
||||||
|
|
||||||
os.Setenv("FOO", "ghi")
|
t.Setenv("FOO", "ghi")
|
||||||
|
|
||||||
c, err = ParseFiles([]File{
|
c, err = ParseFiles([]File{
|
||||||
{Data: dt, Name: "c1.hcl"},
|
{Data: dt, Name: "c1.hcl"},
|
||||||
@@ -499,7 +631,463 @@ func TestHCLMultiFileAttrs(t *testing.T) {
|
|||||||
|
|
||||||
require.Equal(t, 1, len(c.Targets))
|
require.Equal(t, 1, len(c.Targets))
|
||||||
require.Equal(t, c.Targets[0].Name, "app")
|
require.Equal(t, c.Targets[0].Name, "app")
|
||||||
require.Equal(t, "pre-ghi", c.Targets[0].Args["v1"])
|
require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
|
||||||
|
}
|
||||||
|
|
||||||
+
+func TestHCLDuplicateTarget(t *testing.T) {
+	dt := []byte(`
+	target "app" {
+		dockerfile = "x"
+	}
+	target "app" {
+		dockerfile = "y"
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	require.Equal(t, "app", c.Targets[0].Name)
+	require.Equal(t, "y", *c.Targets[0].Dockerfile)
+}
+
+func TestHCLRenameTarget(t *testing.T) {
+	dt := []byte(`
+	target "abc" {
+		name = "xyz"
+		dockerfile = "foo"
+	}
+	`)
+
+	_, err := ParseFile(dt, "docker-bake.hcl")
+	require.ErrorContains(t, err, "requires matrix")
+}
+
+func TestHCLRenameGroup(t *testing.T) {
+	dt := []byte(`
+	group "foo" {
+		name = "bar"
+		targets = ["x", "y"]
+	}
+	`)
+
+	_, err := ParseFile(dt, "docker-bake.hcl")
+	require.ErrorContains(t, err, "not supported")
+
+	dt = []byte(`
+	group "foo" {
+		matrix = {
+			name = ["x", "y"]
+		}
+	}
+	`)
+
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.ErrorContains(t, err, "not supported")
+}
+
+func TestHCLRenameTargetAttrs(t *testing.T) {
+	dt := []byte(`
+	target "abc" {
+		name = "xyz"
+		matrix = {}
+		dockerfile = "foo"
+	}
+
+	target "def" {
+		dockerfile = target.xyz.dockerfile
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, "xyz", c.Targets[0].Name)
+	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
+	require.Equal(t, "def", c.Targets[1].Name)
+	require.Equal(t, "foo", *c.Targets[1].Dockerfile)
+
+	dt = []byte(`
+	target "def" {
+		dockerfile = target.xyz.dockerfile
+	}
+
+	target "abc" {
+		name = "xyz"
+		matrix = {}
+		dockerfile = "foo"
+	}
+	`)
+
+	c, err = ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, "def", c.Targets[0].Name)
+	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
+	require.Equal(t, "xyz", c.Targets[1].Name)
+	require.Equal(t, "foo", *c.Targets[1].Dockerfile)
+
+	dt = []byte(`
+	target "abc" {
+		name = "xyz"
+		matrix = {}
+		dockerfile = "foo"
+	}
+
+	target "def" {
+		dockerfile = target.abc.dockerfile
+	}
+	`)
+
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.ErrorContains(t, err, "abc")
+
+	dt = []byte(`
+	target "def" {
+		dockerfile = target.abc.dockerfile
+	}
+
+	target "abc" {
+		name = "xyz"
+		matrix = {}
+		dockerfile = "foo"
+	}
+	`)
+
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.ErrorContains(t, err, "abc")
+}
+
+func TestHCLRenameSplit(t *testing.T) {
+	dt := []byte(`
+	target "x" {
+		name = "y"
+		matrix = {}
+		dockerfile = "foo"
+	}
+
+	target "x" {
+		name = "z"
+		matrix = {}
+		dockerfile = "bar"
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, "y", c.Targets[0].Name)
+	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
+	require.Equal(t, "z", c.Targets[1].Name)
+	require.Equal(t, "bar", *c.Targets[1].Dockerfile)
+
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, "x", c.Groups[0].Name)
+	require.Equal(t, []string{"y", "z"}, c.Groups[0].Targets)
+}
+
+func TestHCLRenameMultiFile(t *testing.T) {
+	dt := []byte(`
+	target "foo" {
+		name = "bar"
+		matrix = {}
+		dockerfile = "x"
+	}
+	`)
+	dt2 := []byte(`
+	target "foo" {
+		context = "y"
+	}
+	`)
+	dt3 := []byte(`
+	target "bar" {
+		target = "z"
+	}
+	`)
+
+	c, err := ParseFiles([]File{
+		{Data: dt, Name: "c1.hcl"},
+		{Data: dt2, Name: "c2.hcl"},
+		{Data: dt3, Name: "c3.hcl"},
+	}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+
+	require.Equal(t, c.Targets[0].Name, "bar")
+	require.Equal(t, *c.Targets[0].Dockerfile, "x")
+	require.Equal(t, *c.Targets[0].Target, "z")
+
+	require.Equal(t, c.Targets[1].Name, "foo")
+	require.Equal(t, *c.Targets[1].Context, "y")
+}
+
+func TestHCLMatrixBasic(t *testing.T) {
+	dt := []byte(`
+	target "default" {
+		matrix = {
+			foo = ["x", "y"]
+		}
+		name = foo
+		dockerfile = "${foo}.Dockerfile"
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, c.Targets[0].Name, "x")
+	require.Equal(t, c.Targets[1].Name, "y")
+	require.Equal(t, *c.Targets[0].Dockerfile, "x.Dockerfile")
+	require.Equal(t, *c.Targets[1].Dockerfile, "y.Dockerfile")
+
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, "default", c.Groups[0].Name)
+	require.Equal(t, []string{"x", "y"}, c.Groups[0].Targets)
+}
+
+func TestHCLMatrixMultipleKeys(t *testing.T) {
+	dt := []byte(`
+	target "default" {
+		matrix = {
+			foo = ["a"]
+			bar = ["b", "c"]
+			baz = ["d", "e", "f"]
+		}
+		name = "${foo}-${bar}-${baz}"
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 6, len(c.Targets))
+	names := make([]string, len(c.Targets))
+	for i, t := range c.Targets {
+		names[i] = t.Name
+	}
+	require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, names)
+
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, "default", c.Groups[0].Name)
+	require.ElementsMatch(t, []string{"a-b-d", "a-b-e", "a-b-f", "a-c-d", "a-c-e", "a-c-f"}, c.Groups[0].Targets)
+}
+
+func TestHCLMatrixLists(t *testing.T) {
+	dt := []byte(`
+	target "foo" {
+		matrix = {
+			aa = [["aa", "bb"], ["cc", "dd"]]
+		}
+		name = aa[0]
+		args = {
+			target = "val${aa[1]}"
+		}
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, "aa", c.Targets[0].Name)
+	require.Equal(t, ptrstr("valbb"), c.Targets[0].Args["target"])
+	require.Equal(t, "cc", c.Targets[1].Name)
+	require.Equal(t, ptrstr("valdd"), c.Targets[1].Args["target"])
+}
+
+func TestHCLMatrixMaps(t *testing.T) {
+	dt := []byte(`
+	target "foo" {
+		matrix = {
+			aa = [
+				{
+					foo = "aa"
+					bar = "bb"
+				},
+				{
+					foo = "cc"
+					bar = "dd"
+				}
+			]
+		}
+		name = aa.foo
+		args = {
+			target = "val${aa.bar}"
+		}
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, c.Targets[0].Name, "aa")
+	require.Equal(t, c.Targets[0].Args["target"], ptrstr("valbb"))
+	require.Equal(t, c.Targets[1].Name, "cc")
+	require.Equal(t, c.Targets[1].Args["target"], ptrstr("valdd"))
+}
+
+func TestHCLMatrixMultipleTargets(t *testing.T) {
+	dt := []byte(`
+	target "x" {
+		matrix = {
+			foo = ["a", "b"]
+		}
+		name = foo
+	}
+	target "y" {
+		matrix = {
+			bar = ["c", "d"]
+		}
+		name = bar
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 4, len(c.Targets))
+	names := make([]string, len(c.Targets))
+	for i, t := range c.Targets {
+		names[i] = t.Name
+	}
+	require.ElementsMatch(t, []string{"a", "b", "c", "d"}, names)
+
+	require.Equal(t, 2, len(c.Groups))
+	names = make([]string, len(c.Groups))
+	for i, c := range c.Groups {
+		names[i] = c.Name
+	}
+	require.ElementsMatch(t, []string{"x", "y"}, names)
+
+	for _, g := range c.Groups {
+		switch g.Name {
+		case "x":
+			require.Equal(t, []string{"a", "b"}, g.Targets)
+		case "y":
+			require.Equal(t, []string{"c", "d"}, g.Targets)
+		}
+	}
+}
+
+func TestHCLMatrixDuplicateNames(t *testing.T) {
+	dt := []byte(`
+	target "default" {
+		matrix = {
+			foo = ["a", "b"]
+		}
+		name = "c"
+	}
+	`)
+
+	_, err := ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+}
+
+func TestHCLMatrixArgs(t *testing.T) {
+	dt := []byte(`
+	a = 1
+	variable "b" {
+		default = 2
+	}
+	target "default" {
+		matrix = {
+			foo = [a, b]
+		}
+		name = foo
+	}
+	`)
+
+	c, err := ParseFile(dt, "docker-bake.hcl")
+	require.NoError(t, err)
+
+	require.Equal(t, 2, len(c.Targets))
+	require.Equal(t, "1", c.Targets[0].Name)
+	require.Equal(t, "2", c.Targets[1].Name)
+}
+
+func TestHCLMatrixArgsOverride(t *testing.T) {
+	dt := []byte(`
+	variable "ABC" {
+		default = "def"
+	}
+
+	target "bar" {
+		matrix = {
+			aa = split(",", ABC)
+		}
+		name = "bar-${aa}"
+		args = {
+			foo = aa
+		}
+	}
+	`)
+
+	c, err := ParseFiles([]File{
+		{Data: dt, Name: "docker-bake.hcl"},
+	}, map[string]string{"ABC": "11,22,33"})
+	require.NoError(t, err)
+
+	require.Equal(t, 3, len(c.Targets))
+	require.Equal(t, "bar-11", c.Targets[0].Name)
+	require.Equal(t, "bar-22", c.Targets[1].Name)
+	require.Equal(t, "bar-33", c.Targets[2].Name)
+
+	require.Equal(t, ptrstr("11"), c.Targets[0].Args["foo"])
+	require.Equal(t, ptrstr("22"), c.Targets[1].Args["foo"])
+	require.Equal(t, ptrstr("33"), c.Targets[2].Args["foo"])
+}
+
+func TestHCLMatrixBadTypes(t *testing.T) {
+	dt := []byte(`
+	target "default" {
+		matrix = "test"
+	}
+	`)
+	_, err := ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+
+	dt = []byte(`
+	target "default" {
+		matrix = ["test"]
+	}
+	`)
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+
+	dt = []byte(`
+	target "default" {
+		matrix = {
+			["a"] = ["b"]
+		}
+	}
+	`)
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+
+	dt = []byte(`
+	target "default" {
+		matrix = {
+			1 = 2
+		}
+	}
+	`)
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+
+	dt = []byte(`
+	target "default" {
+		matrix = {
+			a = "b"
+		}
+	}
+	`)
+	_, err = ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+}
 func TestJSONAttributes(t *testing.T) {
@@ -510,7 +1098,7 @@ func TestJSONAttributes(t *testing.T) {

 	require.Equal(t, 1, len(c.Targets))
 	require.Equal(t, c.Targets[0].Name, "app")
-	require.Equal(t, "pre-abc-def", c.Targets[0].Args["v1"])
+	require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
 }

 func TestJSONFunctions(t *testing.T) {
@@ -535,7 +1123,25 @@ func TestJSONFunctions(t *testing.T) {

 	require.Equal(t, 1, len(c.Targets))
 	require.Equal(t, c.Targets[0].Name, "app")
-	require.Equal(t, "pre-<FOO-abc>", c.Targets[0].Args["v1"])
+	require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
+}
+
+func TestJSONInvalidFunctions(t *testing.T) {
+	dt := []byte(`{
+	"target": {
+		"app": {
+			"args": {
+				"v1": "myfunc(\"foo\")"
+			}
+		}
+	}}`)
+
+	c, err := ParseFile(dt, "docker-bake.json")
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	require.Equal(t, c.Targets[0].Name, "app")
+	require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
 }

 func TestHCLFunctionInAttr(t *testing.T) {
@@ -563,7 +1169,7 @@ func TestHCLFunctionInAttr(t *testing.T) {

 	require.Equal(t, 1, len(c.Targets))
 	require.Equal(t, c.Targets[0].Name, "app")
-	require.Equal(t, "FOO <> [baz]", c.Targets[0].Args["v1"])
+	require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
 }

 func TestHCLCombineCompose(t *testing.T) {
@@ -594,8 +1200,8 @@ services:

 	require.Equal(t, 1, len(c.Targets))
 	require.Equal(t, c.Targets[0].Name, "app")
-	require.Equal(t, "foo", c.Targets[0].Args["v1"])
-	require.Equal(t, "bar", c.Targets[0].Args["v2"])
+	require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
+	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
 	require.Equal(t, "dir", *c.Targets[0].Context)
 	require.Equal(t, "Dockerfile-alternate", *c.Targets[0].Dockerfile)
 }
@@ -740,10 +1346,10 @@ target "two" {
 	require.Equal(t, 2, len(c.Targets))

 	require.Equal(t, c.Targets[0].Name, "one")
-	require.Equal(t, map[string]string{"a": "pre-ghi-jkl"}, c.Targets[0].Args)
+	require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)

 	require.Equal(t, c.Targets[1].Name, "two")
-	require.Equal(t, map[string]string{"b": "pre-jkl"}, c.Targets[1].Args)
+	require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
 }

 func TestEmptyVariableJSON(t *testing.T) {
@@ -782,3 +1388,24 @@ func TestFunctionNoResult(t *testing.T) {
 	_, err := ParseFile(dt, "docker-bake.hcl")
 	require.Error(t, err)
 }
+
+func TestVarUnsupportedType(t *testing.T) {
+	dt := []byte(`
+	variable "FOO" {
+		default = []
+	}
+	target "default" {}`)
+
+	t.Setenv("FOO", "bar")
+	_, err := ParseFile(dt, "docker-bake.hcl")
+	require.Error(t, err)
+}
+
+func ptrstr(s interface{}) *string {
+	var n *string
+	if reflect.ValueOf(s).Kind() == reflect.String {
+		ss := s.(string)
+		n = &ss
+	}
+	return n
+}
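The `ptrstr` test helper is what lets the updated assertions compare `map[string]*string` args, where a nil pointer marks an unset arg. A standalone, stdlib-only sketch of its behavior (runnable outside the test file; the surrounding `main` scaffolding is illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// ptrstr returns a *string for string inputs and nil for anything else,
// mirroring the helper added in the diff above.
func ptrstr(s interface{}) *string {
	var n *string
	if reflect.ValueOf(s).Kind() == reflect.String {
		ss := s.(string)
		n = &ss
	}
	return n
}

func main() {
	fmt.Println(*ptrstr("pre-abc")) // pre-abc
	fmt.Println(ptrstr(42) == nil)  // true
}
```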
bake/hclparser/body.go (new file, 103 lines)
@@ -0,0 +1,103 @@
+package hclparser
+
+import (
+	"github.com/hashicorp/hcl/v2"
+)
+
+type filterBody struct {
+	body    hcl.Body
+	schema  *hcl.BodySchema
+	exclude bool
+}
+
+func FilterIncludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
+	return &filterBody{
+		body:   body,
+		schema: schema,
+	}
+}
+
+func FilterExcludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
+	return &filterBody{
+		body:    body,
+		schema:  schema,
+		exclude: true,
+	}
+}
+
+func (b *filterBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
+	if b.exclude {
+		schema = subtractSchemas(schema, b.schema)
+	} else {
+		schema = intersectSchemas(schema, b.schema)
+	}
+	content, _, diag := b.body.PartialContent(schema)
+	return content, diag
+}
+
+func (b *filterBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
+	if b.exclude {
+		schema = subtractSchemas(schema, b.schema)
+	} else {
+		schema = intersectSchemas(schema, b.schema)
+	}
+	return b.body.PartialContent(schema)
+}
+
+func (b *filterBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
+	return b.body.JustAttributes()
+}
+
+func (b *filterBody) MissingItemRange() hcl.Range {
+	return b.body.MissingItemRange()
+}
+
+func intersectSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
+	result := &hcl.BodySchema{}
+	for _, blockA := range a.Blocks {
+		for _, blockB := range b.Blocks {
+			if blockA.Type == blockB.Type {
+				result.Blocks = append(result.Blocks, blockA)
+				break
+			}
+		}
+	}
+	for _, attrA := range a.Attributes {
+		for _, attrB := range b.Attributes {
+			if attrA.Name == attrB.Name {
+				result.Attributes = append(result.Attributes, attrA)
+				break
+			}
+		}
+	}
+	return result
+}
+
+func subtractSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
+	result := &hcl.BodySchema{}
+	for _, blockA := range a.Blocks {
+		found := false
+		for _, blockB := range b.Blocks {
+			if blockA.Type == blockB.Type {
+				found = true
+				break
+			}
+		}
+		if !found {
+			result.Blocks = append(result.Blocks, blockA)
+		}
+	}
+	for _, attrA := range a.Attributes {
+		found := false
+		for _, attrB := range b.Attributes {
+			if attrA.Name == attrB.Name {
+				found = true
+				break
+			}
+		}
+		if !found {
+			result.Attributes = append(result.Attributes, attrA)
+		}
+	}
+	return result
+}
@@ -14,15 +14,7 @@ func funcCalls(exp hcl.Expression) ([]string, hcl.Diagnostics) {
 	if !ok {
 		fns, err := jsonFuncCallsRecursive(exp)
 		if err != nil {
-			return nil, hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid expression",
-					Detail:   err.Error(),
-					Subject:  exp.Range().Ptr(),
-					Context:  exp.Range().Ptr(),
-				},
-			}
+			return nil, wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
 		}
 		return fns, nil
 	}
@@ -83,11 +75,11 @@ func appendJSONFuncCalls(exp hcl.Expression, m map[string]struct{}) error {

 	// hcl/v2/json/ast#stringVal
 	val := src.FieldByName("Value")
-	if val.IsZero() {
+	if !val.IsValid() || val.IsZero() {
 		return nil
 	}
 	rng := src.FieldByName("SrcRange")
-	if val.IsZero() {
+	if rng.IsZero() {
 		return nil
 	}
 	var stringVal struct {
@@ -1,7 +1,9 @@
 package hclparser

 import (
+	"encoding/binary"
 	"fmt"
+	"hash/fnv"
 	"math"
 	"math/big"
 	"reflect"
@@ -13,6 +15,7 @@ import (
 	"github.com/hashicorp/hcl/v2/gohcl"
 	"github.com/pkg/errors"
 	"github.com/zclconf/go-cty/cty"
+	"github.com/zclconf/go-cty/cty/gocty"
 )

 type Opt struct {
@@ -48,30 +51,42 @@ type parser struct {
 	attrs map[string]*hcl.Attribute
 	funcs map[string]*functionDef

+	blocks       map[string]map[string][]*hcl.Block
+	blockValues  map[*hcl.Block][]reflect.Value
+	blockEvalCtx map[*hcl.Block][]*hcl.EvalContext
+	blockNames   map[*hcl.Block][]string
+	blockTypes   map[string]reflect.Type
+
 	ectx *hcl.EvalContext

-	progress  map[string]struct{}
-	progressF map[string]struct{}
-	doneF     map[string]struct{}
+	progressV map[uint64]struct{}
+	progressF map[uint64]struct{}
+	progressB map[uint64]map[string]struct{}
+	doneB     map[uint64]map[string]struct{}
 }

-func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}) hcl.Diagnostics {
+type WithEvalContexts interface {
+	GetEvalContexts(base *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) ([]*hcl.EvalContext, error)
+}
+
+type WithGetName interface {
+	GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
+}
+
+var errUndefined = errors.New("undefined")
+
+func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
 	fns, hcldiags := funcCalls(exp)
 	if hcldiags.HasErrors() {
 		return hcldiags
 	}

 	for _, fn := range fns {
-		if err := p.resolveFunction(fn); err != nil {
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid expression",
-					Detail:   err.Error(),
-					Subject:  exp.Range().Ptr(),
-					Context:  exp.Range().Ptr(),
-				},
-			}
+		if err := p.resolveFunction(ectx, fn); err != nil {
+			if allowMissing && errors.Is(err, errUndefined) {
+				continue
+			}
+			return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
 		}
 	}
@@ -79,15 +94,61 @@ func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}) hcl.D
 		if _, ok := exclude[v.RootName()]; ok {
 			continue
 		}
-		if err := p.resolveValue(v.RootName()); err != nil {
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid expression",
-					Detail:   err.Error(),
-					Subject:  v.SourceRange().Ptr(),
-					Context:  v.SourceRange().Ptr(),
-				},
+		if _, ok := p.blockTypes[v.RootName()]; ok {
+			blockType := v.RootName()
+
+			split := v.SimpleSplit().Rel
+			if len(split) == 0 {
+				return hcl.Diagnostics{
+					&hcl.Diagnostic{
+						Severity: hcl.DiagError,
+						Summary:  "Invalid expression",
+						Detail:   fmt.Sprintf("cannot access %s as a variable", blockType),
+						Subject:  exp.Range().Ptr(),
+						Context:  exp.Range().Ptr(),
+					},
+				}
+			}
+			blockName, ok := split[0].(hcl.TraverseAttr)
+			if !ok {
+				return hcl.Diagnostics{
+					&hcl.Diagnostic{
+						Severity: hcl.DiagError,
+						Summary:  "Invalid expression",
+						Detail:   fmt.Sprintf("cannot traverse %s without attribute", blockType),
+						Subject:  exp.Range().Ptr(),
+						Context:  exp.Range().Ptr(),
+					},
+				}
+			}
+			blocks := p.blocks[blockType][blockName.Name]
+			if len(blocks) == 0 {
+				continue
+			}
+
+			var target *hcl.BodySchema
+			if len(split) > 1 {
+				if attr, ok := split[1].(hcl.TraverseAttr); ok {
+					target = &hcl.BodySchema{
+						Attributes: []hcl.AttributeSchema{{Name: attr.Name}},
+						Blocks:     []hcl.BlockHeaderSchema{{Type: attr.Name}},
+					}
+				}
+			}
+			for _, block := range blocks {
+				if err := p.resolveBlock(block, target); err != nil {
+					if allowMissing && errors.Is(err, errUndefined) {
+						continue
+					}
+					return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
+				}
+			}
+		} else {
+			if err := p.resolveValue(ectx, v.RootName()); err != nil {
+				if allowMissing && errors.Is(err, errUndefined) {
+					continue
+				}
+				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
+			}
 		}
 	}
@@ -95,21 +156,23 @@ func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}) hcl.D
 	return nil
 }

-func (p *parser) resolveFunction(name string) error {
-	if _, ok := p.doneF[name]; ok {
+// resolveFunction forces evaluation of a function, storing the result into the
+// parser.
+func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
+	if _, ok := p.ectx.Functions[name]; ok {
+		return nil
+	}
+	if _, ok := ectx.Functions[name]; ok {
 		return nil
 	}
 	f, ok := p.funcs[name]
 	if !ok {
-		if _, ok := p.ectx.Functions[name]; ok {
-			return nil
-		}
-		return errors.Errorf("undefined function %s", name)
+		return errors.Wrapf(errUndefined, "function %q does not exist", name)
 	}
-	if _, ok := p.progressF[name]; ok {
+	if _, ok := p.progressF[key(ectx, name)]; ok {
 		return errors.Errorf("function cycle not allowed for %s", name)
 	}
-	p.progressF[name] = struct{}{}
+	p.progressF[key(ectx, name)] = struct{}{}

 	if f.Result == nil {
 		return errors.Errorf("empty result not allowed for %s", name)
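The progress maps are now keyed by `uint64` rather than by plain name, and the new `encoding/binary` and `hash/fnv` imports suggest a `key(ectx, name)` helper that hashes the evaluation context's identity together with the name. `key` itself is not shown in this chunk, so the following is only a plausible sketch of that shape (the `ctxID` parameter stands in for however the real helper identifies an `*hcl.EvalContext`):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// key combines a context identity with a name into one uint64 map key,
// so the same name can be tracked per evaluation context. Hypothetical
// reconstruction, not the buildx implementation.
func key(ctxID uint64, name string) uint64 {
	h := fnv.New64a()
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], ctxID)
	h.Write(buf[:])
	h.Write([]byte(name))
	return h.Sum64()
}

func main() {
	progressF := map[uint64]struct{}{}
	progressF[key(1, "foo")] = struct{}{} // mark "foo" in-progress in context 1
	_, cycle := progressF[key(1, "foo")]
	_, other := progressF[key(2, "foo")]
	fmt.Println(cycle, other) // same context detects the cycle; another does not
}
```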
@@ -154,7 +217,7 @@ func (p *parser) resolveFunction(name string) error {
 		return diags
 	}

-	if diags := p.loadDeps(f.Result.Expr, params); diags.HasErrors() {
+	if diags := p.loadDeps(p.ectx, f.Result.Expr, params, false); diags.HasErrors() {
 		return diags
 	}
@@ -164,20 +227,24 @@ func (p *parser) resolveFunction(name string) error {
 	if diags.HasErrors() {
 		return diags
 	}
-	p.doneF[name] = struct{}{}
 	p.ectx.Functions[name] = v

 	return nil
 }

-func (p *parser) resolveValue(name string) (err error) {
+// resolveValue forces evaluation of a named value, storing the result into the
+// parser.
+func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
 	if _, ok := p.ectx.Variables[name]; ok {
 		return nil
 	}
-	if _, ok := p.progress[name]; ok {
+	if _, ok := ectx.Variables[name]; ok {
+		return nil
+	}
+	if _, ok := p.progressV[key(ectx, name)]; ok {
 		return errors.Errorf("variable cycle not allowed for %s", name)
 	}
-	p.progress[name] = struct{}{}
+	p.progressV[key(ectx, name)] = struct{}{}

 	var v *cty.Value
 	defer func() {
@@ -190,9 +257,10 @@ func (p *parser) resolveValue(name string) (err error) {
 	if _, builtin := p.opt.Vars[name]; !ok && !builtin {
 		vr, ok := p.vars[name]
 		if !ok {
-			return errors.Errorf("undefined variable %q", name)
+			return errors.Wrapf(errUndefined, "variable %q does not exist", name)
 		}
 		def = vr.Default
+		ectx = p.ectx
 	}

 	if def == nil {
@@ -205,10 +273,10 @@ func (p *parser) resolveValue(name string) (err error) {
 		return
 	}

-	if diags := p.loadDeps(def.Expr, nil); diags.HasErrors() {
+	if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
 		return diags
 	}
-	vv, diags := def.Expr.Value(p.ectx)
+	vv, diags := def.Expr.Value(ectx)
 	if diags.HasErrors() {
 		return diags
 	}
@@ -216,19 +284,16 @@ func (p *parser) resolveValue(name string) (err error) {
 	_, isVar := p.vars[name]

 	if envv, ok := p.opt.LookupVar(name); ok && isVar {
-		if vv.Type().Equals(cty.Bool) {
+		switch {
+		case vv.Type().Equals(cty.Bool):
 			b, err := strconv.ParseBool(envv)
 			if err != nil {
 				return errors.Wrapf(err, "failed to parse %s as bool", name)
 			}
-			vv := cty.BoolVal(b)
-			v = &vv
-			return nil
-		} else if vv.Type().Equals(cty.String) {
-			vv := cty.StringVal(envv)
-			v = &vv
-			return nil
-		} else if vv.Type().Equals(cty.Number) {
+			vv = cty.BoolVal(b)
+		case vv.Type().Equals(cty.String), vv.Type().Equals(cty.DynamicPseudoType):
+			vv = cty.StringVal(envv)
+		case vv.Type().Equals(cty.Number):
 			n, err := strconv.ParseFloat(envv, 64)
 			if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
 				err = errors.Errorf("invalid number value")
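The `switch` introduced above coerces an environment-variable override to the declared variable type, and now also accepts `cty.DynamicPseudoType` as a string. The coercion rules can be sketched with plain stdlib calls (a hypothetical `coerce` helper, not the buildx code path):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// coerce mimics the override rules: bools via ParseBool, numbers via
// ParseFloat with NaN/Inf rejected, everything else kept as a string.
func coerce(envv, typ string) (any, error) {
	switch typ {
	case "bool":
		return strconv.ParseBool(envv)
	case "number":
		n, err := strconv.ParseFloat(envv, 64)
		if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
			err = fmt.Errorf("invalid number value")
		}
		if err != nil {
			return nil, err
		}
		return n, nil
	default:
		return envv, nil
	}
}

func main() {
	v, _ := coerce("true", "bool")
	fmt.Println(v)
	_, err := coerce("NaN", "number")
	fmt.Println(err)
}
```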
@@ -236,19 +301,240 @@ func (p *parser) resolveValue(name string) (err error) {
 			if err != nil {
 				return errors.Wrapf(err, "failed to parse %s as number", name)
 			}
-			vv := cty.NumberVal(big.NewFloat(n))
-			v = &vv
-			return nil
-		} else {
+			vv = cty.NumberVal(big.NewFloat(n))
+		default:
 			// TODO: support lists with csv values
-			return errors.Errorf("unsupported type %s for variable %s", v.Type(), name)
+			return errors.Errorf("unsupported type %s for variable %s", vv.Type().FriendlyName(), name)
 		}
 	}
 	v = &vv
 	return nil
 }

-func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
+// resolveBlock force evaluates a block, storing the result in the parser. If a
+// target schema is provided, only the attributes and blocks present in the
+// schema will be evaluated.
+func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err error) {
+	// prepare the variable map for this type
+	if _, ok := p.ectx.Variables[block.Type]; !ok {
+		p.ectx.Variables[block.Type] = cty.MapValEmpty(cty.Map(cty.String))
+	}
+
+	// prepare the output destination and evaluation context
+	t, ok := p.blockTypes[block.Type]
+	if !ok {
+		return nil
+	}
+	var outputs []reflect.Value
+	var ectxs []*hcl.EvalContext
+	if prev, ok := p.blockValues[block]; ok {
+		outputs = prev
+		ectxs = p.blockEvalCtx[block]
+	} else {
+		if v, ok := reflect.New(t).Interface().(WithEvalContexts); ok {
+			ectxs, err = v.GetEvalContexts(p.ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
+				return p.loadDeps(p.ectx, expr, nil, true)
+			})
+			if err != nil {
+				return err
+			}
+			for _, ectx := range ectxs {
+				if ectx != p.ectx && ectx.Parent() != p.ectx {
+					return errors.Errorf("EvalContext must return a context with the correct parent")
+				}
+			}
+		} else {
+			ectxs = append([]*hcl.EvalContext{}, p.ectx)
+		}
+		for range ectxs {
+			outputs = append(outputs, reflect.New(t))
+		}
+	}
+	p.blockValues[block] = outputs
+	p.blockEvalCtx[block] = ectxs
+
+	for i, output := range outputs {
+		target := target
+		ectx := ectxs[i]
+		name := block.Labels[0]
+		if names, ok := p.blockNames[block]; ok {
+			name = names[i]
+		}
+
+		if _, ok := p.doneB[key(block, ectx)]; !ok {
+			p.doneB[key(block, ectx)] = map[string]struct{}{}
+		}
+		if _, ok := p.progressB[key(block, ectx)]; !ok {
+			p.progressB[key(block, ectx)] = map[string]struct{}{}
+		}
+
+		if target != nil {
+			// filter out attributes and blocks that are already evaluated
+			original := target
+			target = &hcl.BodySchema{}
+			for _, a := range original.Attributes {
+				if _, ok := p.doneB[key(block, ectx)][a.Name]; !ok {
+					target.Attributes = append(target.Attributes, a)
+				}
+			}
+			for _, b := range original.Blocks {
+				if _, ok := p.doneB[key(block, ectx)][b.Type]; !ok {
+					target.Blocks = append(target.Blocks, b)
+				}
+			}
+			if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
+				return nil
+			}
+		}
+
+		if target != nil {
+			// detect reference cycles
+			for _, a := range target.Attributes {
+				if _, ok := p.progressB[key(block, ectx)][a.Name]; ok {
+					return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
+				}
+			}
+			for _, b := range target.Blocks {
+				if _, ok := p.progressB[key(block, ectx)][b.Type]; ok {
+					return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
+				}
+			}
+			for _, a := range target.Attributes {
+				p.progressB[key(block, ectx)][a.Name] = struct{}{}
+			}
+			for _, b := range target.Blocks {
+				p.progressB[key(block, ectx)][b.Type] = struct{}{}
+			}
+		}
+
+		// create a filtered body that contains only the target properties
+		body := func() hcl.Body {
+			if target != nil {
+				return FilterIncludeBody(block.Body, target)
+			}
+
+			filter := &hcl.BodySchema{}
+			for k := range p.doneB[key(block, ectx)] {
+				filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
+				filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
+			}
+			return FilterExcludeBody(block.Body, filter)
+		}
+
+		// load dependencies from all targeted properties
+		schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
+		content, _, diag := body().PartialContent(schema)
+		if diag.HasErrors() {
+			return diag
+		}
+		for _, a := range content.Attributes {
+			diag := p.loadDeps(ectx, a.Expr, nil, true)
+			if diag.HasErrors() {
+				return diag
+			}
+		}
+		for _, b := range content.Blocks {
+			err := p.resolveBlock(b, nil)
+			if err != nil {
+				return err
+			}
+		}
+
+		// decode!
+		diag = gohcl.DecodeBody(body(), ectx, output.Interface())
+		if diag.HasErrors() {
+			return diag
+		}
+
+		// mark all targeted properties as done
+		for _, a := range content.Attributes {
+			p.doneB[key(block, ectx)][a.Name] = struct{}{}
+		}
+		for _, b := range content.Blocks {
+			p.doneB[key(block, ectx)][b.Type] = struct{}{}
+		}
+		if target != nil {
+			for _, a := range target.Attributes {
+				p.doneB[key(block, ectx)][a.Name] = struct{}{}
+			}
+			for _, b := range target.Blocks {
+				p.doneB[key(block, ectx)][b.Type] = struct{}{}
+			}
+		}
+
+		// store the result into the evaluation context (so it can be referenced)
+		outputType, err := gocty.ImpliedType(output.Interface())
+		if err != nil {
+			return err
+		}
+		outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
+		if err != nil {
+			return err
+		}
+		var m map[string]cty.Value
+		if m2, ok := p.ectx.Variables[block.Type]; ok {
+			m = m2.AsValueMap()
+		}
+		if m == nil {
+			m = map[string]cty.Value{}
+		}
+		m[name] = outputValue
+		p.ectx.Variables[block.Type] = cty.MapVal(m)
+	}
+
+	return nil
+}
+
+// resolveBlockNames returns the names of the block, calling resolveBlock to
+// evaluate any label fields to correctly resolve the name.
+func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
+	if names, ok := p.blockNames[block]; ok {
+		return names, nil
+	}
+
+	if err := p.resolveBlock(block, &hcl.BodySchema{}); err != nil {
+		return nil, err
+	}
+
+	names := make([]string, 0, len(p.blockValues[block]))
+	for i, val := range p.blockValues[block] {
+		ectx := p.blockEvalCtx[block][i]
+
+		name := block.Labels[0]
+		if err := p.opt.ValidateLabel(name); err != nil {
+			return nil, err
+		}
+
+		if v, ok := val.Interface().(WithGetName); ok {
+			var err error
+			name, err = v.GetName(ectx, block, func(expr hcl.Expression) hcl.Diagnostics {
+				return p.loadDeps(ectx, expr, nil, true)
+			})
+			if err != nil {
+				return nil, err
+			}
+			if err := p.opt.ValidateLabel(name); err != nil {
+				return nil, err
+			}
+		}
+
+		setName(val, name)
+		names = append(names, name)
+	}
+
+	found := map[string]struct{}{}
+	for _, name := range names {
+		if _, ok := found[name]; ok {
+			return nil, errors.Errorf("duplicate name %q", name)
+		}
+		found[name] = struct{}{}
+	}
+
+	p.blockNames[block] = names
+	return names, nil
+}
+
+func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
 	reserved := map[string]struct{}{}
 	schema, _ := gohcl.ImpliedBodySchema(val)

@@ -261,7 +547,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {

 	var defs inputs
 	if err := gohcl.DecodeBody(b, nil, &defs); err != nil {
-		return err
+		return nil, err
 	}
 	defsSchema, _ := gohcl.ImpliedBodySchema(defs)

@@ -284,13 +570,20 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
 		attrs: map[string]*hcl.Attribute{},
 		funcs: map[string]*functionDef{},

-		progress:  map[string]struct{}{},
-		progressF: map[string]struct{}{},
-		doneF:     map[string]struct{}{},
+		blocks:       map[string]map[string][]*hcl.Block{},
+		blockValues:  map[*hcl.Block][]reflect.Value{},
+		blockEvalCtx: map[*hcl.Block][]*hcl.EvalContext{},
+		blockNames:   map[*hcl.Block][]string{},
+		blockTypes:   map[string]reflect.Type{},
 		ectx: &hcl.EvalContext{
 			Variables: map[string]cty.Value{},
-			Functions: stdlibFunctions,
+			Functions: Stdlib(),
 		},
+
+		progressV: map[uint64]struct{}{},
+		progressF: map[uint64]struct{}{},
+		progressB: map[uint64]map[string]struct{}{},
+		doneB:     map[uint64]map[string]struct{}{},
 	}

 	for _, v := range defs.Variables {
@@ -310,18 +603,18 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {

 	content, b, diags := b.PartialContent(schema)
 	if diags.HasErrors() {
-		return diags
+		return nil, diags
 	}

 	blocks, b, diags := b.PartialContent(defsSchema)
 	if diags.HasErrors() {
-		return diags
+		return nil, diags
 	}

 	attrs, diags := b.JustAttributes()
 	if diags.HasErrors() {
 		if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
-			return d
+			return nil, d
 		}
 	}

@@ -334,48 +627,35 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
 	delete(p.attrs, "function")

 	for k := range p.opt.Vars {
-		_ = p.resolveValue(k)
+		_ = p.resolveValue(p.ectx, k)
 	}

-	for k := range p.attrs {
-		if err := p.resolveValue(k); err != nil {
-			if diags, ok := err.(hcl.Diagnostics); ok {
-				return diags
-			}
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid attribute",
-					Detail:   err.Error(),
-					Subject:  &p.attrs[k].Range,
-					Context:  &p.attrs[k].Range,
-				},
-			}
+	for _, a := range content.Attributes {
+		return nil, hcl.Diagnostics{
+			&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid attribute",
+				Detail:   "global attributes currently not supported",
+				Subject:  &a.Range,
+				Context:  &a.Range,
+			},
 		}
 	}

 	for k := range p.vars {
-		if err := p.resolveValue(k); err != nil {
+		if err := p.resolveValue(p.ectx, k); err != nil {
 			if diags, ok := err.(hcl.Diagnostics); ok {
-				return diags
+				return nil, diags
 			}
 			r := p.vars[k].Body.MissingItemRange()
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid value",
-					Detail:   err.Error(),
-					Subject:  &r,
-					Context:  &r,
-				},
-			}
+			return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
 		}
 	}

 	for k := range p.funcs {
-		if err := p.resolveFunction(k); err != nil {
+		if err := p.resolveFunction(p.ectx, k); err != nil {
 			if diags, ok := err.(hcl.Diagnostics); ok {
-				return diags
+				return nil, diags
 			}
 			var subject *hcl.Range
 			var context *hcl.Range
@@ -391,56 +671,10 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
 					}
 				}
 			}
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid function",
-					Detail:   err.Error(),
-					Subject:  subject,
-					Context:  context,
-				},
-			}
+			return nil, wrapErrorDiagnostic("Invalid function", err, subject, context)
 		}
 	}

-	for _, a := range content.Attributes {
-		return hcl.Diagnostics{
-			&hcl.Diagnostic{
-				Severity: hcl.DiagError,
-				Summary:  "Invalid attribute",
-				Detail:   "global attributes currently not supported",
-				Subject:  &a.Range,
-				Context:  &a.Range,
-			},
-		}
-	}
-
-	m := map[string]map[string][]*hcl.Block{}
-	for _, b := range content.Blocks {
-		if len(b.Labels) == 0 || len(b.Labels) > 1 {
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid block",
-					Detail:   fmt.Sprintf("invalid block label: %v", b.Labels),
-					Subject:  &b.LabelRanges[0],
-					Context:  &b.LabelRanges[0],
-				},
-			}
-		}
-		bm, ok := m[b.Type]
-		if !ok {
-			bm = map[string][]*hcl.Block{}
-			m[b.Type] = bm
-		}
-
-		lbl := b.Labels[0]
-		bm[lbl] = append(bm[lbl], b)
-	}
-
-	vt := reflect.ValueOf(val).Elem().Type()
-	numFields := vt.NumField()
-
 	type value struct {
 		reflect.Value
 		idx int
@@ -451,93 +685,173 @@ func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
 		values map[string]value
 	}
 	types := map[string]field{}
-	for i := 0; i < numFields; i++ {
+	renamed := map[string]map[string][]string{}
+	vt := reflect.ValueOf(val).Elem().Type()
+	for i := 0; i < vt.NumField(); i++ {
 		tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")

+		p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
 		types[tags[0]] = field{
 			idx:    i,
 			typ:    vt.Field(i).Type,
 			values: make(map[string]value),
 		}
+		renamed[tags[0]] = map[string][]string{}
 	}

+	tmpBlocks := map[string]map[string][]*hcl.Block{}
+	for _, b := range content.Blocks {
+		if len(b.Labels) == 0 || len(b.Labels) > 1 {
+			return nil, hcl.Diagnostics{
+				&hcl.Diagnostic{
+					Severity: hcl.DiagError,
+					Summary:  "Invalid block",
+					Detail:   fmt.Sprintf("invalid block label: %v", b.Labels),
+					Subject:  &b.LabelRanges[0],
+					Context:  &b.LabelRanges[0],
+				},
+			}
+		}
+
+		bm, ok := tmpBlocks[b.Type]
+		if !ok {
+			bm = map[string][]*hcl.Block{}
+			tmpBlocks[b.Type] = bm
+		}
+
+		names, err := p.resolveBlockNames(b)
+		if err != nil {
+			return nil, wrapErrorDiagnostic("Invalid name", err, &b.LabelRanges[0], &b.LabelRanges[0])
+		}
+		for _, name := range names {
+			bm[name] = append(bm[name], b)
+			renamed[b.Type][b.Labels[0]] = append(renamed[b.Type][b.Labels[0]], name)
+		}
+	}
+	p.blocks = tmpBlocks
+
 	diags = hcl.Diagnostics{}
 	for _, b := range content.Blocks {
 		v := reflect.ValueOf(val)

-		t, ok := types[b.Type]
-		if !ok {
-			continue
-		}
-
-		vv := reflect.New(t.typ.Elem().Elem())
-		diag := gohcl.DecodeBody(b.Body, p.ectx, vv.Interface())
-		if diag.HasErrors() {
-			diags = append(diags, diag...)
-			continue
-		}
-
-		if err := opt.ValidateLabel(b.Labels[0]); err != nil {
-			return hcl.Diagnostics{
-				&hcl.Diagnostic{
-					Severity: hcl.DiagError,
-					Summary:  "Invalid name",
-					Detail:   err.Error(),
-					Subject:  &b.LabelRanges[0],
-				},
+		err := p.resolveBlock(b, nil)
+		if err != nil {
+			if diag, ok := err.(hcl.Diagnostics); ok {
+				if diag.HasErrors() {
+					diags = append(diags, diag...)
+					continue
+				}
+			} else {
+				return nil, wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
 			}
 		}

-		lblIndex := setLabel(vv, b.Labels[0])
-		oldValue, exists := t.values[b.Labels[0]]
-		if !exists && lblIndex != -1 {
-			if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
-				for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
-					if b.Labels[0] == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
-						exists = true
-						oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
-						break
+		vvs := p.blockValues[b]
+		for _, vv := range vvs {
+			t := types[b.Type]
+			lblIndex, lblExists := getNameIndex(vv)
+			lblName, _ := getName(vv)
+			oldValue, exists := t.values[lblName]
+			if !exists && lblExists {
+				if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
+					for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
+						if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
+							exists = true
+							oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
+							break
+						}
 					}
 				}
 			}
-		}
-		if exists {
-			if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
-				m.Call([]reflect.Value{vv})
+			if exists {
+				if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
+					m.Call([]reflect.Value{vv})
+				} else {
+					v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
+				}
 			} else {
-				v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
+				slice := v.Elem().Field(t.idx)
+				if slice.IsNil() {
+					slice = reflect.New(t.typ).Elem()
+				}
+				t.values[lblName] = value{Value: vv, idx: slice.Len()}
+				v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
 			}
-		} else {
-			slice := v.Elem().Field(t.idx)
-			if slice.IsNil() {
-				slice = reflect.New(t.typ).Elem()
-			}
-			t.values[b.Labels[0]] = value{Value: vv, idx: slice.Len()}
-			v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
 		}
 	}
 	if diags.HasErrors() {
-		return diags
+		return nil, diags
 	}

-	return nil
+	for k := range p.attrs {
+		if err := p.resolveValue(p.ectx, k); err != nil {
+			if diags, ok := err.(hcl.Diagnostics); ok {
+				return nil, diags
+			}
+			return nil, wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
+		}
+	}
+
+	return renamed, nil
 }

-func setLabel(v reflect.Value, lbl string) int {
-	// cache field index?
+// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
+// If the error is already an hcl.Diagnostics object, it is returned as is.
+func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context *hcl.Range) hcl.Diagnostics {
+	switch err := err.(type) {
+	case *hcl.Diagnostic:
+		return hcl.Diagnostics{err}
+	case hcl.Diagnostics:
+		return err
+	default:
+		return hcl.Diagnostics{
+			&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  message,
+				Detail:   err.Error(),
+				Subject:  subject,
+				Context:  context,
+			},
+		}
+	}
+}
+
+func setName(v reflect.Value, name string) {
 	numFields := v.Elem().Type().NumField()
 	for i := 0; i < numFields; i++ {
-		for _, t := range strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",") {
+		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
+		for _, t := range parts[1:] {
 			if t == "label" {
-				v.Elem().Field(i).Set(reflect.ValueOf(lbl))
-				return i
+				v.Elem().Field(i).Set(reflect.ValueOf(name))
 			}
 		}
 	}
-	return -1
+}
+
+func getName(v reflect.Value) (string, bool) {
+	numFields := v.Elem().Type().NumField()
+	for i := 0; i < numFields; i++ {
+		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
+		for _, t := range parts[1:] {
+			if t == "label" {
+				return v.Elem().Field(i).String(), true
+			}
+		}
+	}
+	return "", false
+}
+
+func getNameIndex(v reflect.Value) (int, bool) {
+	numFields := v.Elem().Type().NumField()
+	for i := 0; i < numFields; i++ {
+		parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
+		for _, t := range parts[1:] {
+			if t == "label" {
+				return i, true
+			}
+		}
+	}
+	return 0, false
 }

 func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
@@ -569,3 +883,21 @@ func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{},
 	}
 	return fdiags
 }
+
+// key returns a unique hash for the given values
+func key(ks ...any) uint64 {
+	hash := fnv.New64a()
+	for _, k := range ks {
+		v := reflect.ValueOf(k)
+		switch v.Kind() {
+		case reflect.String:
+			hash.Write([]byte(v.String()))
+		case reflect.Pointer:
+			ptr := reflect.ValueOf(k).Pointer()
+			binary.Write(hash, binary.LittleEndian, uint64(ptr))
+		default:
+			panic(fmt.Sprintf("unknown key kind %s", v.Kind().String()))
+		}
+	}
+	return hash.Sum64()
+}
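The new `key` helper hashes strings and pointer identities with FNV-1a, so maps such as `progressB` and `doneB` can be keyed by (block, evaluation context) pairs. It can be exercised in isolation with only the standard library:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
	"reflect"
)

// key mirrors the helper added in the hunk above: strings contribute their
// bytes, pointers contribute their address, anything else is rejected.
func key(ks ...any) uint64 {
	h := fnv.New64a()
	for _, k := range ks {
		v := reflect.ValueOf(k)
		switch v.Kind() {
		case reflect.String:
			h.Write([]byte(v.String()))
		case reflect.Pointer:
			binary.Write(h, binary.LittleEndian, uint64(v.Pointer()))
		default:
			panic(fmt.Sprintf("unknown key kind %s", v.Kind().String()))
		}
	}
	return h.Sum64()
}

func main() {
	type ectx struct{ depth int }
	a, b := &ectx{}, &ectx{}
	fmt.Println(key(a, "name") == key(a, "name")) // deterministic for equal inputs
	fmt.Println(key(a, "name") == key(b, "name")) // distinct contexts give distinct keys
}
```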
@@ -31,21 +31,21 @@ var stdlibFunctions = map[string]function.Function{
 	"cidrnetmask":          cidr.NetmaskFunc,
 	"cidrsubnet":           cidr.SubnetFunc,
 	"cidrsubnets":          cidr.SubnetsFunc,
-	"csvdecode":            stdlib.CSVDecodeFunc,
 	"coalesce":             stdlib.CoalesceFunc,
 	"coalescelist":         stdlib.CoalesceListFunc,
 	"compact":              stdlib.CompactFunc,
 	"concat":               stdlib.ConcatFunc,
 	"contains":             stdlib.ContainsFunc,
 	"convert":              typeexpr.ConvertFunc,
+	"csvdecode":            stdlib.CSVDecodeFunc,
 	"distinct":             stdlib.DistinctFunc,
 	"divide":               stdlib.DivideFunc,
 	"element":              stdlib.ElementFunc,
 	"equal":                stdlib.EqualFunc,
 	"flatten":              stdlib.FlattenFunc,
 	"floor":                stdlib.FloorFunc,
-	"formatdate":           stdlib.FormatDateFunc,
 	"format":               stdlib.FormatFunc,
+	"formatdate":           stdlib.FormatDateFunc,
 	"formatlist":           stdlib.FormatListFunc,
 	"greaterthan":          stdlib.GreaterThanFunc,
 	"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
@@ -53,10 +53,10 @@ var stdlibFunctions = map[string]function.Function{
 	"indent":               stdlib.IndentFunc,
 	"index":                stdlib.IndexFunc,
 	"int":                  stdlib.IntFunc,
+	"join":                 stdlib.JoinFunc,
 	"jsondecode":           stdlib.JSONDecodeFunc,
 	"jsonencode":           stdlib.JSONEncodeFunc,
 	"keys":                 stdlib.KeysFunc,
-	"join":                 stdlib.JoinFunc,
 	"length":               stdlib.LengthFunc,
 	"lessthan":             stdlib.LessThanFunc,
 	"lessthanorequalto":    stdlib.LessThanOrEqualToFunc,
@@ -70,15 +70,16 @@ var stdlibFunctions = map[string]function.Function{
 	"modulo":               stdlib.ModuloFunc,
 	"multiply":             stdlib.MultiplyFunc,
 	"negate":               stdlib.NegateFunc,
-	"notequal":             stdlib.NotEqualFunc,
 	"not":                  stdlib.NotFunc,
+	"notequal":             stdlib.NotEqualFunc,
 	"or":                   stdlib.OrFunc,
 	"parseint":             stdlib.ParseIntFunc,
 	"pow":                  stdlib.PowFunc,
 	"range":                stdlib.RangeFunc,
-	"regexall":             stdlib.RegexAllFunc,
-	"regex":                stdlib.RegexFunc,
 	"regex_replace":        stdlib.RegexReplaceFunc,
+	"regex":                stdlib.RegexFunc,
+	"regexall":             stdlib.RegexAllFunc,
+	"replace":              stdlib.ReplaceFunc,
 	"reverse":              stdlib.ReverseFunc,
 	"reverselist":          stdlib.ReverseListFunc,
 	"rsadecrypt":           crypto.RsaDecryptFunc,
@@ -124,3 +125,11 @@ var timestampFunc = function.New(&function.Spec{
 		return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
 	},
 })
+
+func Stdlib() map[string]function.Function {
+	funcs := make(map[string]function.Function, len(stdlibFunctions))
+	for k, v := range stdlibFunctions {
+		funcs[k] = v
+	}
+	return funcs
+}
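`Stdlib()` replaces direct use of the package-level `stdlibFunctions` map and returns a fresh copy on each call, so one parser can add or override functions without affecting others. The defensive-copy pattern in isolation (illustrative types, not the cty `function.Function` map):

```go
package main

import "fmt"

var base = map[string]func(int) int{
	"double": func(n int) int { return n * 2 },
}

// funcs is a Stdlib-style accessor: it returns a fresh map so callers can
// add or override entries without touching the shared default set.
func funcs() map[string]func(int) int {
	out := make(map[string]func(int) int, len(base))
	for k, v := range base {
		out[k] = v
	}
	return out
}

func main() {
	f := funcs()
	f["triple"] = func(n int) int { return n * 3 }
	fmt.Println(len(base), len(f)) // the shared base map is unchanged
}
```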
@@ -4,14 +4,16 @@ import (
 	"archive/tar"
 	"bytes"
 	"context"
-	"strings"

-	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	controllerapi "github.com/docker/buildx/controller/pb"
 	"github.com/docker/buildx/driver"
 	"github.com/docker/buildx/util/progress"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
+	"github.com/moby/buildkit/frontend/dockerui"
 	gwclient "github.com/moby/buildkit/frontend/gateway/client"
+	"github.com/moby/buildkit/session"
 	"github.com/pkg/errors"
 )

@@ -20,11 +22,17 @@ type Input struct {
 	URL   string
 }

-func ReadRemoteFiles(ctx context.Context, dis []build.DriverInfo, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
+func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
+	var session []session.Attachable
 	var filename string
-	st, ok := detectGitContext(url)
-	if !ok {
-		st, filename, ok = detectHTTPContext(url)
+	st, ok := dockerui.DetectGitContext(url, false)
+	if ok {
+		ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
+		if err == nil {
+			session = append(session, ssh)
+		}
+	} else {
+		st, filename, ok = dockerui.DetectHTTPContext(url)
 		if !ok {
 			return nil, nil, errors.Errorf("not url context")
 		}
@@ -33,25 +41,25 @@ func ReadRemoteFiles(ctx context.Context, dis []build.DriverInfo, url string, na
|
|||||||
inp := &Input{State: st, URL: url}
|
inp := &Input{State: st, URL: url}
|
||||||
var files []File
|
var files []File
|
||||||
|
|
||||||
var di *build.DriverInfo
|
var node *builder.Node
|
||||||
for _, d := range dis {
|
for i, n := range nodes {
|
||||||
if d.Err == nil {
|
if n.Err == nil {
|
||||||
di = &d
|
node = &nodes[i]
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if di == nil {
|
if node == nil {
|
||||||
return nil, nil, nil
|
return nil, nil, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
c, err := driver.Boot(ctx, ctx, di.Driver, pw)
|
c, err := driver.Boot(ctx, ctx, node.Driver, pw)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, nil, err
|
return nil, nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
ch, done := progress.NewChannel(pw)
|
ch, done := progress.NewChannel(pw)
|
||||||
defer func() { <-done }()
|
defer func() { <-done }()
|
||||||
_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
|
_, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
|
||||||
def, err := st.Marshal(ctx)
|
def, err := st.Marshal(ctx)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -83,51 +91,6 @@ func ReadRemoteFiles(ctx context.Context, dis []build.DriverInfo, url string, na
|
|||||||
return files, inp, nil
|
return files, inp, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func IsRemoteURL(url string) bool {
|
|
||||||
if _, _, ok := detectHTTPContext(url); ok {
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
if _, ok := detectGitContext(url); ok {
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
|
|
||||||
func detectHTTPContext(url string) (*llb.State, string, bool) {
|
|
||||||
if httpPrefix.MatchString(url) {
|
|
||||||
httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
|
|
||||||
return &httpContext, "context", true
|
|
||||||
}
|
|
||||||
return nil, "", false
|
|
||||||
}
|
|
||||||
|
|
||||||
func detectGitContext(ref string) (*llb.State, bool) {
|
|
||||||
found := false
|
|
||||||
if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
|
|
||||||
found = true
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, prefix := range []string{"git://", "github.com/", "git@"} {
|
|
||||||
if strings.HasPrefix(ref, prefix) {
|
|
||||||
found = true
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if !found {
|
|
||||||
return nil, false
|
|
||||||
}
|
|
||||||
|
|
||||||
parts := strings.SplitN(ref, "#", 2)
|
|
||||||
branch := ""
|
|
||||||
if len(parts) > 1 {
|
|
||||||
branch = parts[1]
|
|
||||||
}
|
|
||||||
gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
|
|
||||||
|
|
||||||
st := llb.Git(parts[0], branch, gitOpts...)
|
|
||||||
return &st, true
|
|
||||||
}
|
|
||||||
|
|
||||||
func isArchive(header []byte) bool {
|
func isArchive(header []byte) bool {
|
||||||
for _, m := range [][]byte{
|
for _, m := range [][]byte{
|
||||||
{0x42, 0x5A, 0x68}, // bzip2
|
{0x42, 0x5A, 0x68}, // bzip2
|
||||||
|
|||||||
 build/build.go | 871 (file diff suppressed because it is too large)
 build/git.go | 115 (new file)
@@ -0,0 +1,115 @@
+package build
+
+import (
+	"context"
+	"os"
+	"path"
+	"path/filepath"
+	"strconv"
+	"strings"
+
+	"github.com/docker/buildx/util/gitutil"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/pkg/errors"
+)
+
+const DockerfileLabel = "com.docker.image.source.entrypoint"
+
+func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
+	res = make(map[string]string)
+	if contextPath == "" {
+		return
+	}
+
+	setGitLabels := false
+	if v, ok := os.LookupEnv("BUILDX_GIT_LABELS"); ok {
+		if v == "full" { // backward compatibility with old "full" mode
+			setGitLabels = true
+		} else if v, err := strconv.ParseBool(v); err == nil {
+			setGitLabels = v
+		}
+	}
+	setGitInfo := true
+	if v, ok := os.LookupEnv("BUILDX_GIT_INFO"); ok {
+		if v, err := strconv.ParseBool(v); err == nil {
+			setGitInfo = v
+		}
+	}
+
+	if !setGitLabels && !setGitInfo {
+		return
+	}
+
+	// figure out in which directory the git command needs to run in
+	var wd string
+	if filepath.IsAbs(contextPath) {
+		wd = contextPath
+	} else {
+		cwd, _ := os.Getwd()
+		wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
+	}
+
+	gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
+	if err != nil {
+		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
+			return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
+		}
+		return
+	}
+
+	if !gitc.IsInsideWorkTree() {
+		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
+			return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
+		}
+		return res, nil
+	}
+
+	if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
+		return res, errors.Wrapf(err, "buildx: failed to get git commit")
+	} else if sha != "" {
+		checkDirty := false
+		if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
+			if v, err := strconv.ParseBool(v); err == nil {
+				checkDirty = v
+			}
+		}
+		if checkDirty && gitc.IsDirty() {
+			sha += "-dirty"
+		}
+		if setGitLabels {
+			res["label:"+specs.AnnotationRevision] = sha
+		}
+		if setGitInfo {
+			res["vcs:revision"] = sha
+		}
+	}
+
+	if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
+		if setGitLabels {
+			res["label:"+specs.AnnotationSource] = rurl
+		}
+		if setGitInfo {
+			res["vcs:source"] = rurl
+		}
+	}
+
+	if setGitLabels {
+		if root, err := gitc.RootDir(); err != nil {
+			return res, errors.Wrapf(err, "buildx: failed to get git root dir")
+		} else if root != "" {
+			if dockerfilePath == "" {
+				dockerfilePath = filepath.Join(wd, "Dockerfile")
+			}
+			if !filepath.IsAbs(dockerfilePath) {
+				cwd, _ := os.Getwd()
+				dockerfilePath = filepath.Join(cwd, dockerfilePath)
+			}
+			dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
+			if !strings.HasPrefix(dockerfilePath, "..") {
+				res["label:"+DockerfileLabel] = dockerfilePath
+			}
+		}
+	}
+
+	return
+}
 build/git_test.go | 156 (new file)
@@ -0,0 +1,156 @@
+package build
+
+import (
+	"context"
+	"os"
+	"path"
+	"path/filepath"
+	"strings"
+	"testing"
+
+	"github.com/docker/buildx/util/gitutil"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func setupTest(tb testing.TB) {
+	gitutil.Mktmp(tb)
+
+	c, err := gitutil.New()
+	require.NoError(tb, err)
+	gitutil.GitInit(c, tb)
+
+	df := []byte("FROM alpine:latest\n")
+	assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
+
+	gitutil.GitAdd(c, tb, "Dockerfile")
+	gitutil.GitCommit(c, tb, "initial commit")
+	gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
+}
+
+func TestGetGitAttributesNotGitRepo(t *testing.T) {
+	_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
+	assert.NoError(t, err)
+}
+
+func TestGetGitAttributesBadGitRepo(t *testing.T) {
+	tmp := t.TempDir()
+	require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
+
+	_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
+	assert.Error(t, err)
+}
+
+func TestGetGitAttributesNoContext(t *testing.T) {
+	setupTest(t)
+
+	gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
+	assert.NoError(t, err)
+	assert.Empty(t, gitattrs)
+}
+
+func TestGetGitAttributes(t *testing.T) {
+	cases := []struct {
+		name         string
+		envGitLabels string
+		envGitInfo   string
+		expected     []string
+	}{
+		{
+			name:         "default",
+			envGitLabels: "",
+			envGitInfo:   "",
+			expected: []string{
+				"vcs:revision",
+				"vcs:source",
+			},
+		},
+		{
+			name:         "none",
+			envGitLabels: "false",
+			envGitInfo:   "false",
+			expected:     []string{},
+		},
+		{
+			name:         "gitinfo",
+			envGitLabels: "false",
+			envGitInfo:   "true",
+			expected: []string{
+				"vcs:revision",
+				"vcs:source",
+			},
+		},
+		{
+			name:         "gitlabels",
+			envGitLabels: "true",
+			envGitInfo:   "false",
+			expected: []string{
+				"label:" + DockerfileLabel,
+				"label:" + specs.AnnotationRevision,
+				"label:" + specs.AnnotationSource,
+			},
+		},
+		{
+			name:         "both",
+			envGitLabels: "true",
+			envGitInfo:   "",
+			expected: []string{
+				"label:" + DockerfileLabel,
+				"label:" + specs.AnnotationRevision,
+				"label:" + specs.AnnotationSource,
+				"vcs:revision",
+				"vcs:source",
+			},
+		},
+	}
+	for _, tt := range cases {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			setupTest(t)
+			if tt.envGitLabels != "" {
+				t.Setenv("BUILDX_GIT_LABELS", tt.envGitLabels)
+			}
+			if tt.envGitInfo != "" {
+				t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
+			}
+			gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
+			require.NoError(t, err)
+			for _, e := range tt.expected {
+				assert.Contains(t, gitattrs, e)
+				assert.NotEmpty(t, gitattrs[e])
+				if e == "label:"+DockerfileLabel {
+					assert.Equal(t, "Dockerfile", gitattrs[e])
+				} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
+					assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
+				}
+			}
+		})
+	}
+}
+
+func TestGetGitAttributesDirty(t *testing.T) {
+	setupTest(t)
+	t.Setenv("BUILDX_GIT_CHECK_DIRTY", "true")
+
+	// make a change to test dirty flag
+	df := []byte("FROM alpine:edge\n")
+	require.NoError(t, os.Mkdir("dir", 0755))
+	require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
+
+	t.Setenv("BUILDX_GIT_LABELS", "true")
+	gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
+	assert.Equal(t, 5, len(gitattrs))
+
+	assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
+	assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
+	assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
+	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
+	assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
+	assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))
+
+	assert.Contains(t, gitattrs, "vcs:source")
+	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
+	assert.Contains(t, gitattrs, "vcs:revision")
+	assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
+}
 build/invoke.go | 138 (new file)
@@ -0,0 +1,138 @@
+package build
+
+import (
+	"context"
+	_ "crypto/sha256" // ensure digests can be computed
+	"io"
+	"sync"
+	"sync/atomic"
+	"syscall"
+
+	controllerapi "github.com/docker/buildx/controller/pb"
+	gateway "github.com/moby/buildkit/frontend/gateway/client"
+	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
+)
+
+type Container struct {
+	cancelOnce      sync.Once
+	containerCancel func()
+	isUnavailable   atomic.Bool
+	initStarted     atomic.Bool
+	container       gateway.Container
+	releaseCh       chan struct{}
+	resultCtx       *ResultHandle
+}
+
+func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
+	mainCtx := ctx
+
+	ctrCh := make(chan *Container)
+	errCh := make(chan error)
+	go func() {
+		err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
+			ctx, cancel := context.WithCancel(ctx)
+			go func() {
+				<-mainCtx.Done()
+				cancel()
+			}()
+
+			containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
+			if err != nil {
+				return nil, err
+			}
+			containerCtx, containerCancel := context.WithCancel(ctx)
+			defer containerCancel()
+			bkContainer, err := c.NewContainer(containerCtx, containerCfg)
+			if err != nil {
+				return nil, err
+			}
+			releaseCh := make(chan struct{})
+			container := &Container{
+				containerCancel: containerCancel,
+				container:       bkContainer,
+				releaseCh:       releaseCh,
+				resultCtx:       resultCtx,
+			}
+			doneCh := make(chan struct{})
+			defer close(doneCh)
+			resultCtx.registerCleanup(func() {
+				container.Cancel()
+				<-doneCh
+			})
+			ctrCh <- container
+			<-container.releaseCh
+
+			return nil, bkContainer.Release(ctx)
+		})
+		if err != nil {
+			errCh <- err
+		}
+	}()
+	select {
+	case ctr := <-ctrCh:
+		return ctr, nil
+	case err := <-errCh:
+		return nil, err
+	case <-mainCtx.Done():
+		return nil, mainCtx.Err()
+	}
+}
+
+func (c *Container) Cancel() {
+	c.markUnavailable()
+	c.cancelOnce.Do(func() {
+		if c.containerCancel != nil {
+			c.containerCancel()
+		}
+		close(c.releaseCh)
+	})
+}
+
+func (c *Container) IsUnavailable() bool {
+	return c.isUnavailable.Load()
+}
+
+func (c *Container) markUnavailable() {
+	c.isUnavailable.Store(true)
+}
+
+func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
+	if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
+		defer func() {
+			// container can't be used after init exits
+			c.markUnavailable()
+		}()
+	}
+	err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
+	if err != nil {
+		// Container becomes unavailable if one of the processes fails in it.
+		c.markUnavailable()
+	}
+	return err
+}
+
+func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
+	processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
+	if err != nil {
+		return err
+	}
+	proc, err := ctr.Start(ctx, processCfg)
+	if err != nil {
+		return errors.Errorf("failed to start container: %v", err)
+	}
+
+	doneCh := make(chan struct{})
+	defer close(doneCh)
+	go func() {
+		select {
+		case <-ctx.Done():
+			if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
+				logrus.Warnf("failed to kill process: %v", err)
+			}
+		case <-doneCh:
+		}
+	}()
+
+	return proc.Wait()
+}
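`Container.Exec` uses `initStarted.CompareAndSwap(false, true)` so that exactly one caller is treated as the init process; when that process exits, the container is marked unavailable. A small self-contained sketch of the first-caller guard (the `guard` type is invented for the sketch):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// guard mirrors the initStarted field of Container.
type guard struct{ initStarted atomic.Bool }

// firstCall returns true for exactly one caller: CompareAndSwap flips the
// flag from false to true atomically, so only the first invocation wins,
// even under concurrent use.
func (g *guard) firstCall() bool {
	return g.initStarted.CompareAndSwap(false, true)
}

func main() {
	var g guard
	fmt.Println(g.firstCall()) // first caller: runs as the init process
	fmt.Println(g.firstCall()) // later callers: side exec processes
}
```

This is cheaper than a mutex for a one-shot boolean transition and is safe to call from multiple goroutines.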
 build/result.go | 495 (new file)
@@ -0,0 +1,495 @@
+package build
+
+import (
+	"context"
+	_ "crypto/sha256" // ensure digests can be computed
+	"encoding/json"
+	"io"
+	"sync"
+
+	controllerapi "github.com/docker/buildx/controller/pb"
+	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/exporter/containerimage/exptypes"
+	gateway "github.com/moby/buildkit/frontend/gateway/client"
+	"github.com/moby/buildkit/solver/errdefs"
+	"github.com/moby/buildkit/solver/pb"
+	"github.com/moby/buildkit/solver/result"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
+	"golang.org/x/sync/errgroup"
+)
+
+// NewResultHandle makes a call to client.Build, additionally returning an
+// opaque ResultHandle alongside the standard response and error.
+//
+// This ResultHandle can be used to execute additional build steps in the same
+// context as the build occurred, which can allow easy debugging of build
+// failures and successes.
+//
+// If the returned ResultHandle is not nil, the caller must call Done() on it.
+func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
+	// Create a new context to wrap the original, and cancel it when the
+	// caller-provided context is cancelled.
+	//
+	// We derive the context from the background context so that we can forbid
+	// cancellation of the build request after <-done is closed (which we do
+	// before returning the ResultHandle).
+	baseCtx := ctx
+	ctx, cancel := context.WithCancelCause(context.Background())
+	done := make(chan struct{})
+	go func() {
+		select {
+		case <-baseCtx.Done():
+			cancel(baseCtx.Err())
+		case <-done:
+			// Once done is closed, we've recorded a ResultHandle, so we
+			// shouldn't allow cancelling the underlying build request anymore.
+		}
+	}()
+
+	// Create a new channel to forward status messages to the original.
+	//
+	// We do this so that we can discard status messages after the main portion
+	// of the build is complete. This is necessary for the solve error case,
+	// where the original gateway is kept open until the ResultHandle is
+	// closed - we don't want progress messages from operations in that
+	// ResultHandle to display after this function exits.
+	//
+	// Additionally, callers should wait for the progress channel to be closed.
+	// If we keep the session open and never close the progress channel, the
+	// caller will likely hang.
+	baseCh := ch
+	ch = make(chan *client.SolveStatus)
+	go func() {
+		for {
+			s, ok := <-ch
+			if !ok {
+				return
+			}
+			select {
+			case <-baseCh:
+				// base channel is closed, discard status messages
+			default:
+				baseCh <- s
+			}
+		}
+	}()
+	defer close(baseCh)
+
+	var resp *client.SolveResponse
+	var respErr error
+	var respHandle *ResultHandle
+
+	go func() {
+		defer cancel(context.Canceled) // ensure no dangling processes
+
+		var res *gateway.Result
+		var err error
+		resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
+			var err error
+			res, err = buildFunc(ctx, c)
+
+			if res != nil && err == nil {
+				// Force evaluation of the build result (otherwise, we likely
+				// won't get a solve error)
+				def, err2 := getDefinition(ctx, res)
+				if err2 != nil {
+					return nil, err2
+				}
+				res, err = evalDefinition(ctx, c, def)
+			}
+
+			if err != nil {
+				// Scenario 1: we failed to evaluate a node somewhere in the
+				// build graph.
+				//
+				// In this case, we construct a ResultHandle from this
+				// original Build session, and return it alongside the original
+				// build error. We then need to keep the gateway session open
+				// until the caller explicitly closes the ResultHandle.
+
+				var se *errdefs.SolveError
+				if errors.As(err, &se) {
+					respHandle = &ResultHandle{
+						done:     make(chan struct{}),
+						solveErr: se,
+						gwClient: c,
+						gwCtx:    ctx,
+					}
+					respErr = se
+					close(done)
+
+					// Block until the caller closes the ResultHandle.
+					select {
+					case <-respHandle.done:
+					case <-ctx.Done():
+					}
+				}
+			}
+			return res, err
+		}, ch)
+		if respHandle != nil {
+			return
+		}
+		if err != nil {
+			// Something unexpected failed during the build, we didn't succeed,
+			// but we also didn't make it far enough to create a ResultHandle.
+			respErr = err
+			close(done)
+			return
+		}
+
+		// Scenario 2: we successfully built the image with no errors.
+		//
+		// In this case, the original gateway session has now been closed
+		// since the Build has been completed. So, we need to create a new
+		// gateway session to populate the ResultHandle. To do this, we
+		// need to re-evaluate the target result, in this new session. This
+		// should be instantaneous since the result should be cached.
+
+		def, err := getDefinition(ctx, res)
+		if err != nil {
+			respErr = err
+			close(done)
+			return
+		}
+
+		// NOTE: ideally this second connection should be lazily opened
+		opt := opt
+		opt.Ref = ""
+		opt.Exports = nil
+		opt.CacheExports = nil
+		opt.Internal = true
+		_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
+			res, err := evalDefinition(ctx, c, def)
+			if err != nil {
+				// This should probably not happen, since we've previously
+				// successfully evaluated the same result with no issues.
+				return nil, errors.Wrap(err, "inconsistent solve result")
+			}
+			respHandle = &ResultHandle{
+				done:     make(chan struct{}),
+				res:      res,
+				gwClient: c,
+				gwCtx:    ctx,
+			}
+			close(done)
+
+			// Block until the caller closes the ResultHandle.
+			select {
+			case <-respHandle.done:
+			case <-ctx.Done():
+			}
+			return nil, ctx.Err()
+		}, nil)
+		if respHandle != nil {
+			return
+		}
+		close(done)
+	}()
+
+	// Block until the other thread signals that it's completed the build.
+	select {
+	case <-done:
+	case <-baseCtx.Done():
+		if respErr == nil {
+			respErr = baseCtx.Err()
+		}
+	}
+	return respHandle, resp, respErr
+}
+
+// getDefinition converts a gateway result into a collection of definitions for
+// each ref in the result.
+func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
+	return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
+		st, err := ref.ToState()
+		if err != nil {
+			return nil, err
+		}
+		def, err := st.Marshal(ctx)
+		if err != nil {
+			return nil, err
+		}
+		return def.ToPB(), nil
+	})
+}
+
+// evalDefinition performs the reverse of getDefinition, converting a
+// collection of definitions into a gateway result.
+func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
+	// force evaluation of all targets in parallel
+	results := make(map[*pb.Definition]*gateway.Result)
+	resultsMu := sync.Mutex{}
+	eg, egCtx := errgroup.WithContext(ctx)
+	defs.EachRef(func(def *pb.Definition) error {
+		eg.Go(func() error {
+			res, err := c.Solve(egCtx, gateway.SolveRequest{
+				Evaluate:   true,
+				Definition: def,
+			})
+			if err != nil {
+				return err
+			}
+			resultsMu.Lock()
+			results[def] = res
+			resultsMu.Unlock()
+			return nil
+		})
+		return nil
+	})
+	if err := eg.Wait(); err != nil {
+		return nil, err
+	}
+	res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
+		if res, ok := results[def]; ok {
+			return res.Ref, nil
+		}
+		return nil, nil
+	})
+	return res, nil
+}
+
+// ResultHandle is a build result with the client that built it.
+type ResultHandle struct {
+	res      *gateway.Result
+	solveErr *errdefs.SolveError
+
+	done     chan struct{}
+	doneOnce sync.Once
+
+	gwClient gateway.Client
+	gwCtx    context.Context
+
+	cleanups   []func()
+	cleanupsMu sync.Mutex
+}
+
+func (r *ResultHandle) Done() {
+	r.doneOnce.Do(func() {
+		r.cleanupsMu.Lock()
+		cleanups := r.cleanups
+		r.cleanups = nil
+		r.cleanupsMu.Unlock()
+		for _, f := range cleanups {
+			f()
+		}
+
+		close(r.done)
+		<-r.gwCtx.Done()
+	})
+}
+
+func (r *ResultHandle) registerCleanup(f func()) {
+	r.cleanupsMu.Lock()
+	r.cleanups = append(r.cleanups, f)
+	r.cleanupsMu.Unlock()
+}
+
+func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
+	_, err = buildFunc(r.gwCtx, r.gwClient)
+	return err
+}
+
+func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
+	if r.res != nil && r.solveErr == nil {
+		logrus.Debugf("creating container from successful build")
+		ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
+		if err != nil {
+			return containerCfg, err
+		}
+		containerCfg = *ccfg
+	} else {
+		logrus.Debugf("creating container from failed build %+v", cfg)
+		ccfg, err := containerConfigFromError(r.solveErr, *cfg)
+		if err != nil {
+			return containerCfg, errors.Wrapf(err, "no result nor error is available")
+		}
+		containerCfg = *ccfg
+	}
+	return containerCfg, nil
+}
+
+func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
+	processCfg := newStartRequest(stdin, stdout, stderr)
+	if r.res != nil && r.solveErr == nil {
+		logrus.Debugf("creating container from successful build")
+		if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
+			return processCfg, err
+		}
+	} else {
+		logrus.Debugf("creating container from failed build %+v", cfg)
+		if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
+			return processCfg, err
+		}
+	}
+	return processCfg, nil
+}
+
+func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+	if cfg.Initial {
+		return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
+	}
+
+	ps, err := exptypes.ParsePlatforms(res.Metadata)
+	if err != nil {
+		return nil, err
+	}
+	ref, ok := res.FindRef(ps.Platforms[0].ID)
+	if !ok {
+		return nil, errors.Errorf("no reference found")
+	}
+
+	return &gateway.NewContainerRequest{
+		Mounts: []gateway.Mount{
+			{
+				Dest:      "/",
+				MountType: pb.MountType_BIND,
+				Ref:       ref,
+			},
+		},
+	}, nil
+}
+
+func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
+	imgData := res.Metadata[exptypes.ExporterImageConfigKey]
+	var img *specs.Image
+	if len(imgData) > 0 {
+		img = &specs.Image{}
+		if err := json.Unmarshal(imgData, img); err != nil {
+			return err
+		}
+	}
+
+	user := ""
+	if !cfg.NoUser {
+		user = cfg.User
+	} else if img != nil {
+		user = img.Config.User
+	}
+
+	cwd := ""
+	if !cfg.NoCwd {
+		cwd = cfg.Cwd
+	} else if img != nil {
+		cwd = img.Config.WorkingDir
+	}
+
+	env := []string{}
+	if img != nil {
+		env = append(env, img.Config.Env...)
+	}
+	env = append(env, cfg.Env...)
+
+	args := []string{}
+	if cfg.Entrypoint != nil {
+		args = append(args, cfg.Entrypoint...)
+	} else if img != nil {
+		args = append(args, img.Config.Entrypoint...)
+	}
+	if cfg.Cmd != nil {
+		args = append(args, cfg.Cmd...)
+	} else if img != nil {
+		args = append(args, img.Config.Cmd...)
+	}
+
+	req.Args = args
+	req.Env = env
+	req.User = user
+	req.Cwd = cwd
+	req.Tty = cfg.Tty
+
+	return nil
+}
+
+func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+	exec, err := execOpFromError(solveErr)
+	if err != nil {
+		return nil, err
+	}
+	var mounts []gateway.Mount
+	for i, mnt := range exec.Mounts {
+		rid := solveErr.Solve.MountIDs[i]
|
||||||
|
if cfg.Initial {
|
||||||
|
rid = solveErr.Solve.InputIDs[i]
|
||||||
|
}
|
||||||
|
mounts = append(mounts, gateway.Mount{
|
||||||
|
Selector: mnt.Selector,
|
||||||
|
Dest: mnt.Dest,
|
||||||
|
ResultID: rid,
|
||||||
|
Readonly: mnt.Readonly,
|
||||||
|
MountType: mnt.MountType,
|
||||||
|
CacheOpt: mnt.CacheOpt,
|
||||||
|
SecretOpt: mnt.SecretOpt,
|
||||||
|
SSHOpt: mnt.SSHOpt,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
return &gateway.NewContainerRequest{
|
||||||
|
Mounts: mounts,
|
||||||
|
NetMode: exec.Network,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
|
||||||
|
exec, err := execOpFromError(solveErr)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
meta := exec.Meta
|
||||||
|
user := ""
|
||||||
|
if !cfg.NoUser {
|
||||||
|
user = cfg.User
|
||||||
|
} else {
|
||||||
|
user = meta.User
|
||||||
|
}
|
||||||
|
|
||||||
|
cwd := ""
|
||||||
|
if !cfg.NoCwd {
|
||||||
|
cwd = cfg.Cwd
|
||||||
|
} else {
|
||||||
|
cwd = meta.Cwd
|
||||||
|
}
|
||||||
|
|
||||||
|
env := append(meta.Env, cfg.Env...)
|
||||||
|
|
||||||
|
args := []string{}
|
||||||
|
if cfg.Entrypoint != nil {
|
||||||
|
args = append(args, cfg.Entrypoint...)
|
||||||
|
}
|
||||||
|
if cfg.Cmd != nil {
|
||||||
|
args = append(args, cfg.Cmd...)
|
||||||
|
}
|
||||||
|
if len(args) == 0 {
|
||||||
|
args = meta.Args
|
||||||
|
}
|
||||||
|
|
||||||
|
req.Args = args
|
||||||
|
req.Env = env
|
||||||
|
req.User = user
|
||||||
|
req.Cwd = cwd
|
||||||
|
req.Tty = cfg.Tty
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
|
||||||
|
if solveErr == nil {
|
||||||
|
return nil, errors.Errorf("no error is available")
|
||||||
|
}
|
||||||
|
switch op := solveErr.Solve.Op.GetOp().(type) {
|
||||||
|
case *pb.Op_Exec:
|
||||||
|
return op.Exec, nil
|
||||||
|
default:
|
||||||
|
return nil, errors.Errorf("invoke: unsupported error type")
|
||||||
|
}
|
||||||
|
// TODO: support other ops
|
||||||
|
}
|
||||||
|
|
||||||
|
func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
|
||||||
|
return gateway.StartRequest{
|
||||||
|
Stdin: stdin,
|
||||||
|
Stdout: stdout,
|
||||||
|
Stderr: stderr,
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -13,7 +13,7 @@ import (
 	"github.com/pkg/errors"
 )
 
-func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
+func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
 	c, err := driver.Boot(ctx, ctx, d, pw)
 	if err != nil {
 		return "", err
@@ -21,7 +21,7 @@ func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url strin
 	var out string
 	ch, done := progress.NewChannel(pw)
 	defer func() { <-done }()
-	_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
+	_, err = c.Build(ctx, client.SolveOpt{Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
 		def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
 		if err != nil {
 			return nil, err
@@ -3,16 +3,36 @@ package build
 import (
 	"archive/tar"
 	"bytes"
+	"context"
 	"net"
 	"os"
 	"strings"
 
+	"github.com/docker/buildx/driver"
 	"github.com/docker/cli/opts"
+	"github.com/docker/docker/builder/remotecontext/urlutil"
+	"github.com/moby/buildkit/util/gitutil"
 	"github.com/pkg/errors"
 )
 
-// archiveHeaderSize is the number of bytes in an archive header
-const archiveHeaderSize = 512
+const (
+	// archiveHeaderSize is the number of bytes in an archive header
+	archiveHeaderSize = 512
+
+	// mobyHostGatewayName defines a special string which users can append to
+	// --add-host to add an extra entry in /etc/hosts that maps
+	// host.docker.internal to the host IP
+	mobyHostGatewayName = "host-gateway"
+)
+
+func IsRemoteURL(c string) bool {
+	if urlutil.IsURL(c) {
+		return true
+	}
+	if _, err := gitutil.ParseGitRef(c); err == nil {
+		return true
+	}
+	return false
+}
 
 func isLocalDir(c string) bool {
 	st, err := os.Stat(c)
@@ -39,18 +59,28 @@ func isArchive(header []byte) bool {
 }
 
 // toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
-func toBuildkitExtraHosts(inp []string) (string, error) {
+func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.DriverHandle) (string, error) {
 	if len(inp) == 0 {
 		return "", nil
 	}
 	hosts := make([]string, 0, len(inp))
 	for _, h := range inp {
-		parts := strings.Split(h, ":")
-		if len(parts) != 2 || parts[0] == "" || net.ParseIP(parts[1]) == nil {
+		host, ip, ok := strings.Cut(h, ":")
+		if !ok || host == "" || ip == "" {
 			return "", errors.Errorf("invalid host %s", h)
 		}
-		hosts = append(hosts, parts[0]+"="+parts[1])
+		// If the IP Address is a "host-gateway", replace this value with the
+		// IP address provided by the worker's label.
+		if ip == mobyHostGatewayName {
+			hgip, err := nodeDriver.HostGatewayIP(ctx)
+			if err != nil {
+				return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
+			}
+			ip = hgip.String()
+		} else if net.ParseIP(ip) == nil {
+			return "", errors.Errorf("invalid host %s", h)
+		}
+		hosts = append(hosts, host+"="+ip)
 	}
 	return strings.Join(hosts, ","), nil
 }
292	builder/builder.go	Normal file
@@ -0,0 +1,292 @@
package builder

import (
	"context"
	"os"
	"sort"
	"sync"

	"github.com/docker/buildx/driver"
	"github.com/docker/buildx/store"
	"github.com/docker/buildx/store/storeutil"
	"github.com/docker/buildx/util/dockerutil"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

// Builder represents an active builder object
type Builder struct {
	*store.NodeGroup
	driverFactory driverFactory
	nodes         []Node
	opts          builderOpts
	err           error
}

type builderOpts struct {
	dockerCli       command.Cli
	name            string
	txn             *store.Txn
	contextPathHash string
	validate        bool
}

// Option provides a variadic option for configuring the builder.
type Option func(b *Builder)

// WithName sets builder name.
func WithName(name string) Option {
	return func(b *Builder) {
		b.opts.name = name
	}
}

// WithStore sets a store instance used at init.
func WithStore(txn *store.Txn) Option {
	return func(b *Builder) {
		b.opts.txn = txn
	}
}

// WithContextPathHash is used for determining pods in k8s driver instance.
func WithContextPathHash(contextPathHash string) Option {
	return func(b *Builder) {
		b.opts.contextPathHash = contextPathHash
	}
}

// WithSkippedValidation skips builder context validation.
func WithSkippedValidation() Option {
	return func(b *Builder) {
		b.opts.validate = false
	}
}

// New initializes a new builder client
func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
	b := &Builder{
		opts: builderOpts{
			dockerCli: dockerCli,
			validate:  true,
		},
	}
	for _, opt := range opts {
		opt(b)
	}

	if b.opts.txn == nil {
		// if store instance is nil we create a short-lived one using the
		// default store and ensure we release it on completion
		var release func()
		b.opts.txn, release, err = storeutil.GetStore(dockerCli)
		if err != nil {
			return nil, err
		}
		defer release()
	}

	if b.opts.name != "" {
		if b.NodeGroup, err = storeutil.GetNodeGroup(b.opts.txn, dockerCli, b.opts.name); err != nil {
			return nil, err
		}
	} else {
		if b.NodeGroup, err = storeutil.GetCurrentInstance(b.opts.txn, dockerCli); err != nil {
			return nil, err
		}
	}
	if b.opts.validate {
		if err = b.Validate(); err != nil {
			return nil, err
		}
	}

	return b, nil
}

// Validate validates builder context
func (b *Builder) Validate() error {
	if b.NodeGroup != nil && b.NodeGroup.DockerContext {
		list, err := b.opts.dockerCli.ContextStore().List()
		if err != nil {
			return err
		}
		currentContext := b.opts.dockerCli.CurrentContext()
		for _, l := range list {
			if l.Name == b.Name && l.Name != currentContext {
				return errors.Errorf("use `docker --context=%s buildx` to switch to context %q", l.Name, l.Name)
			}
		}
	}
	return nil
}

// ContextName returns builder context name if available.
func (b *Builder) ContextName() string {
	ctxbuilders, err := b.opts.dockerCli.ContextStore().List()
	if err != nil {
		return ""
	}
	for _, cb := range ctxbuilders {
		if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
			return cb.Name
		}
	}
	return ""
}

// ImageOpt returns registry auth configuration
func (b *Builder) ImageOpt() (imagetools.Opt, error) {
	return storeutil.GetImageConfig(b.opts.dockerCli, b.NodeGroup)
}

// Boot bootstraps a builder
func (b *Builder) Boot(ctx context.Context) (bool, error) {
	toBoot := make([]int, 0, len(b.nodes))
	for idx, d := range b.nodes {
		if d.Err != nil || d.Driver == nil || d.DriverInfo == nil {
			continue
		}
		if d.DriverInfo.Status != driver.Running {
			toBoot = append(toBoot, idx)
		}
	}
	if len(toBoot) == 0 {
		return false, nil
	}

	printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
	if err != nil {
		return false, err
	}

	baseCtx := ctx
	eg, _ := errgroup.WithContext(ctx)
	for _, idx := range toBoot {
		func(idx int) {
			eg.Go(func() error {
				pw := progress.WithPrefix(printer, b.NodeGroup.Nodes[idx].Name, len(toBoot) > 1)
				_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
				if err != nil {
					b.nodes[idx].Err = err
				}
				return nil
			})
		}(idx)
	}

	err = eg.Wait()
	err1 := printer.Wait()
	if err == nil {
		err = err1
	}

	return true, err
}

// Inactive checks if all nodes are inactive for this builder.
func (b *Builder) Inactive() bool {
	for _, d := range b.nodes {
		if d.DriverInfo != nil && d.DriverInfo.Status == driver.Running {
			return false
		}
	}
	return true
}

// Err returns error if any.
func (b *Builder) Err() error {
	return b.err
}

type driverFactory struct {
	driver.Factory
	once sync.Once
}

// Factory returns the driver factory.
func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
	b.driverFactory.once.Do(func() {
		if b.Driver != "" {
			b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
			if err != nil {
				return
			}
		} else {
			// empty driver means nodegroup was implicitly created as a default
			// driver for a docker context and allows falling back to a
			// docker-container driver for older daemon that doesn't support
			// buildkit (< 18.06).
			ep := b.NodeGroup.Nodes[0].Endpoint
			var dockerapi *dockerutil.ClientAPI
			dockerapi, err = dockerutil.NewClientAPI(b.opts.dockerCli, b.NodeGroup.Nodes[0].Endpoint)
			if err != nil {
				return
			}
			// check if endpoint is healthy is needed to determine the driver type.
			// if this fails then can't continue with driver selection.
			if _, err = dockerapi.Ping(ctx); err != nil {
				return
			}
			b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
			if err != nil {
				return
			}
			b.Driver = b.driverFactory.Factory.Name()
		}
	})
	return b.driverFactory.Factory, err
}

// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
	storeng, err := txn.List()
	if err != nil {
		return nil, err
	}

	builders := make([]*Builder, len(storeng))
	seen := make(map[string]struct{})
	for i, ng := range storeng {
		b, err := New(dockerCli,
			WithName(ng.Name),
			WithStore(txn),
			WithSkippedValidation(),
		)
		if err != nil {
			return nil, err
		}
		builders[i] = b
		seen[b.NodeGroup.Name] = struct{}{}
	}

	contexts, err := dockerCli.ContextStore().List()
	if err != nil {
		return nil, err
	}
	sort.Slice(contexts, func(i, j int) bool {
		return contexts[i].Name < contexts[j].Name
	})

	for _, c := range contexts {
		// if a context has the same name as an instance from the store, do not
		// add it to the builders list. An instance from the store takes
		// precedence over context builders.
		if _, ok := seen[c.Name]; ok {
			continue
		}
		b, err := New(dockerCli,
			WithName(c.Name),
			WithStore(txn),
			WithSkippedValidation(),
		)
		if err != nil {
			return nil, err
		}
		builders = append(builders, b)
	}

	return builders, nil
}
211	builder/node.go	Normal file
@@ -0,0 +1,211 @@
package builder

import (
	"context"

	"github.com/docker/buildx/driver"
	ctxkube "github.com/docker/buildx/driver/kubernetes/context"
	"github.com/docker/buildx/store"
	"github.com/docker/buildx/store/storeutil"
	"github.com/docker/buildx/util/dockerutil"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/platformutil"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/util/grpcerrors"
	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
	"google.golang.org/grpc/codes"
)

type Node struct {
	store.Node
	Builder     string
	Driver      *driver.DriverHandle
	DriverInfo  *driver.Info
	Platforms   []ocispecs.Platform
	GCPolicy    []client.PruneInfo
	Labels      map[string]string
	ImageOpt    imagetools.Opt
	ProxyConfig map[string]string
	Version     string
	Err         error
}

// Nodes returns nodes for this builder.
func (b *Builder) Nodes() []Node {
	return b.nodes
}

// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
	eg, _ := errgroup.WithContext(ctx)
	b.nodes = make([]Node, len(b.NodeGroup.Nodes))

	defer func() {
		if b.err == nil && err != nil {
			b.err = err
		}
	}()

	factory, err := b.Factory(ctx)
	if err != nil {
		return nil, err
	}

	imageopt, err := b.ImageOpt()
	if err != nil {
		return nil, err
	}

	for i, n := range b.NodeGroup.Nodes {
		func(i int, n store.Node) {
			eg.Go(func() error {
				node := Node{
					Node:        n,
					ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
					Platforms:   n.Platforms,
					Builder:     b.Name,
				}
				defer func() {
					b.nodes[i] = node
				}()

				dockerapi, err := dockerutil.NewClientAPI(b.opts.dockerCli, n.Endpoint)
				if err != nil {
					node.Err = err
					return nil
				}

				contextStore := b.opts.dockerCli.ContextStore()

				var kcc driver.KubeClientConfig
				kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
				if err != nil {
					// err is returned if n.Endpoint is a non-context name like "unix:///var/run/docker.sock".
					// try again with name="default".
					// FIXME(@AkihiroSuda): n should retain real context name.
					kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
					if err != nil {
						logrus.Error(err)
					}
				}

				tryToUseKubeConfigInCluster := false
				if kcc == nil {
					tryToUseKubeConfigInCluster = true
				} else {
					if _, err := kcc.ClientConfig(); err != nil {
						tryToUseKubeConfigInCluster = true
					}
				}
				if tryToUseKubeConfigInCluster {
					kccInCluster := driver.KubeClientConfigInCluster{}
					if _, err := kccInCluster.ClientConfig(); err == nil {
						logrus.Debug("using kube config in cluster")
						kcc = kccInCluster
					}
				}

				d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
				if err != nil {
					node.Err = err
					return nil
				}
				node.Driver = d
				node.ImageOpt = imageopt

				if withData {
					if err := node.loadData(ctx); err != nil {
						node.Err = err
					}
				}
				return nil
			})
		}(i, n)
	}

	if err := eg.Wait(); err != nil {
		return nil, err
	}

	// TODO: This should be done in the routine loading driver data
	if withData {
		kubernetesDriverCount := 0
		for _, d := range b.nodes {
			if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
				kubernetesDriverCount++
			}
		}

		isAllKubernetesDrivers := len(b.nodes) == kubernetesDriverCount
		if isAllKubernetesDrivers {
			var nodes []Node
			var dynamicNodes []store.Node
			for _, di := range b.nodes {
				// dynamic nodes are used in Kubernetes driver.
				// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
				if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
					for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
						diClone := di
						if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
							diClone.Platforms = pl
						}
						nodes = append(nodes, diClone)
					}
					dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
				}
			}

			// not append (remove the static nodes in the store)
			b.NodeGroup.Nodes = dynamicNodes
			b.nodes = nodes
			b.NodeGroup.Dynamic = true
		}
	}

	return b.nodes, nil
}

func (n *Node) loadData(ctx context.Context) error {
	if n.Driver == nil {
		return nil
	}
	info, err := n.Driver.Info(ctx)
	if err != nil {
		return err
	}
	n.DriverInfo = info
	if n.DriverInfo.Status == driver.Running {
		driverClient, err := n.Driver.Client(ctx)
		if err != nil {
			return err
		}
		workers, err := driverClient.ListWorkers(ctx)
		if err != nil {
			return errors.Wrap(err, "listing workers")
		}
		for idx, w := range workers {
			n.Platforms = append(n.Platforms, w.Platforms...)
			if idx == 0 {
				n.GCPolicy = w.GCPolicy
				n.Labels = w.Labels
			}
		}
		n.Platforms = platformutil.Dedupe(n.Platforms)
		inf, err := driverClient.Info(ctx)
		if err != nil {
			if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
				n.Version, err = n.Driver.Version(ctx)
				if err != nil {
					return errors.Wrap(err, "getting version")
				}
			}
		} else {
			n.Version = inf.BuildkitVersion.Version
		}
	}
	return nil
}
@@ -4,8 +4,8 @@ import (
 	"fmt"
 	"os"
 
-	"github.com/containerd/containerd/pkg/seed"
 	"github.com/docker/buildx/commands"
+	"github.com/docker/buildx/util/desktop"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli-plugins/manager"
@@ -16,10 +16,10 @@ import (
 	"github.com/moby/buildkit/solver/errdefs"
 	"github.com/moby/buildkit/util/stack"
 
-	_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
-	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
+	//nolint:staticcheck // vendored dependencies may still use this
+	"github.com/containerd/containerd/pkg/seed"
+
 	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
-	_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
 
 	_ "github.com/docker/buildx/driver/docker"
 	_ "github.com/docker/buildx/driver/docker-container"
@@ -28,7 +28,9 @@ import (
 )
 
 func init() {
+	//nolint:staticcheck
 	seed.WithTimeAndRand()
+
 	stack.SetVersionInfo(version.Version, version.Revision)
 }
 
@@ -85,6 +87,9 @@ func main() {
 		} else {
 			fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
 		}
+		if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
+			ebr.Print(cmd.Err())
+		}
 		os.Exit(1)
 	}

134	commands/bake.go
@@ -6,10 +6,16 @@ import (
 	"fmt"
 	"os"
 
+	"github.com/containerd/console"
 	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/bake"
 	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/buildflags"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/desktop"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/buildx/util/tracing"
 	"github.com/docker/cli/cli/command"
@@ -19,13 +25,19 @@ import (
 )
 
 type bakeOptions struct {
 	files     []string
 	overrides []string
 	printOnly bool
-	commonOptions
+	sbom       string
+	provenance string
+
+	builder      string
+	metadataFile string
+	exportPush   bool
+	exportLoad   bool
 }
 
-func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
+func runBake(dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
 	ctx := appcontext.Context()
 
 	ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
@@ -40,11 +52,11 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
 	cmdContext := "cwd://"
 
 	if len(targets) > 0 {
-		if bake.IsRemoteURL(targets[0]) {
+		if build.IsRemoteURL(targets[0]) {
 			url = targets[0]
 			targets = targets[1:]
 			if len(targets) > 0 {
-				if bake.IsRemoteURL(targets[0]) {
+				if build.IsRemoteURL(targets[0]) {
 					cmdContext = targets[0]
 					targets = targets[1:]
 				}
@@ -65,17 +77,59 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
 	} else if in.exportLoad {
 		overrides = append(overrides, "*.output=type=docker")
 	}
-	if in.noCache != nil {
-		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
+	if cFlags.noCache != nil {
+		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
 	}
-	if in.pull != nil {
-		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
+	if cFlags.pull != nil {
+		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *cFlags.pull))
+	}
+	if in.sbom != "" {
+		overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
+	}
+	if in.provenance != "" {
+		overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("provenance", in.provenance)))
 	}
 	contextPathHash, _ := os.Getwd()
 
 	ctx2, cancel := context.WithCancel(context.TODO())
 	defer cancel()
-	printer := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+
+	var nodes []builder.Node
+	var files []bake.File
+	var inp *bake.Input
+	var progressConsoleDesc, progressTextDesc string
+
+	// instance only needed for reading remote bake files or building
+	if url != "" || !in.printOnly {
+		b, err := builder.New(dockerCli,
+			builder.WithName(in.builder),
+			builder.WithContextPathHash(contextPathHash),
+		)
+		if err != nil {
+			return err
+		}
+		if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
+			return errors.Wrapf(err, "failed to update builder last activity time")
+		}
+		nodes, err = b.LoadNodes(ctx, false)
+		if err != nil {
+			return err
+		}
+		progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
+		progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
+	}
+
+	var term bool
+	if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
+		term = true
+	}
+
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, cFlags.progress,
+		progress.WithDesc(progressTextDesc, progressConsoleDesc),
+	)
+	if err != nil {
+		return err
+	}
+
 	defer func() {
 		if printer != nil {
@@ -83,21 +137,16 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
if err == nil {
|
if err == nil {
|
||||||
err = err1
|
err = err1
|
||||||
}
|
}
|
||||||
|
if err == nil && cFlags.progress != progress.PrinterModeQuiet {
|
||||||
|
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}()
|
}()
|
||||||
|
|
||||||
dis, err := getInstanceOrDefault(ctx, dockerCli, in.builder, contextPathHash)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
var files []bake.File
|
|
||||||
var inp *bake.Input
|
|
||||||
|
|
||||||
if url != "" {
|
if url != "" {
|
||||||
files, inp, err = bake.ReadRemoteFiles(ctx, dis, url, in.files, printer)
|
files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
|
||||||
} else {
|
} else {
|
||||||
files, err = bake.ReadLocalFiles(in.files)
|
files, err = bake.ReadLocalFiles(in.files, dockerCli.In())
|
||||||
}
|
}
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -105,7 +154,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
|
|
||||||
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
|
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
|
||||||
// don't forget to update documentation if you add a new
|
// don't forget to update documentation if you add a new
|
||||||
// built-in variable: docs/guides/bake/file-definition.md#built-in-variables
|
// built-in variable: docs/bake-reference.md#built-in-variables
|
||||||
"BAKE_CMD_CONTEXT": cmdContext,
|
"BAKE_CMD_CONTEXT": cmdContext,
|
||||||
"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
|
"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
|
||||||
})
|
})
|
||||||
@@ -113,6 +162,19 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if v := os.Getenv("SOURCE_DATE_EPOCH"); v != "" {
|
||||||
|
// TODO: extract env var parsing to a method easily usable by library consumers
|
||||||
|
for _, t := range tgts {
|
||||||
|
if _, ok := t.Args["SOURCE_DATE_EPOCH"]; ok {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if t.Args == nil {
|
||||||
|
t.Args = map[string]*string{}
|
||||||
|
}
|
||||||
|
t.Args["SOURCE_DATE_EPOCH"] = &v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// this function can update target context string from the input so call before printOnly check
|
// this function can update target context string from the input so call before printOnly check
|
||||||
bo, err := bake.TargetsToBuildOpt(tgts, inp)
|
bo, err := bake.TargetsToBuildOpt(tgts, inp)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -120,17 +182,11 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
}
|
}
|
||||||
|
|
||||||
if in.printOnly {
|
if in.printOnly {
|
||||||
var defg map[string]*bake.Group
|
|
||||||
if len(grps) == 1 {
|
|
||||||
defg = map[string]*bake.Group{
|
|
||||||
"default": grps[0],
|
|
||||||
}
|
|
||||||
}
|
|
||||||
dt, err := json.MarshalIndent(struct {
|
dt, err := json.MarshalIndent(struct {
|
||||||
Group map[string]*bake.Group `json:"group,omitempty"`
|
Group map[string]*bake.Group `json:"group,omitempty"`
|
||||||
Target map[string]*bake.Target `json:"target"`
|
Target map[string]*bake.Target `json:"target"`
|
||||||
}{
|
}{
|
||||||
defg,
|
grps,
|
||||||
tgts,
|
tgts,
|
||||||
}, "", " ")
|
}, "", " ")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -145,7 +201,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
resp, err := build.Build(ctx, dis, bo, dockerAPI(dockerCli), confutil.ConfigDir(dockerCli), printer)
|
resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return wrapBuildError(err, true)
|
return wrapBuildError(err, true)
|
||||||
}
|
}
|
||||||
@@ -165,6 +221,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error
|
|||||||
|
|
||||||
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
|
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
|
||||||
var options bakeOptions
|
var options bakeOptions
|
||||||
|
var cFlags commonFlags
|
||||||
|
|
||||||
cmd := &cobra.Command{
|
cmd := &cobra.Command{
|
||||||
Use: "bake [OPTIONS] [TARGET...]",
|
Use: "bake [OPTIONS] [TARGET...]",
|
||||||
@@ -173,14 +230,17 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
|
|||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
RunE: func(cmd *cobra.Command, args []string) error {
|
||||||
// reset to nil to avoid override is unset
|
// reset to nil to avoid override is unset
|
||||||
if !cmd.Flags().Lookup("no-cache").Changed {
|
if !cmd.Flags().Lookup("no-cache").Changed {
|
||||||
options.noCache = nil
|
cFlags.noCache = nil
|
||||||
}
|
}
|
||||||
if !cmd.Flags().Lookup("pull").Changed {
|
if !cmd.Flags().Lookup("pull").Changed {
|
||||||
options.pull = nil
|
cFlags.pull = nil
|
||||||
}
|
}
|
||||||
options.commonOptions.builder = rootOpts.builder
|
options.builder = rootOpts.builder
|
||||||
return runBake(dockerCli, args, options)
|
options.metadataFile = cFlags.metadataFile
|
||||||
|
// Other common flags (noCache, pull and progress) are processed in runBake function.
|
||||||
|
return runBake(dockerCli, args, options, cFlags)
|
||||||
},
|
},
|
||||||
|
ValidArgsFunction: completion.BakeTargets(options.files),
|
||||||
}
|
}
|
||||||
|
|
||||||
flags := cmd.Flags()
|
flags := cmd.Flags()
|
||||||
@@ -189,9 +249,11 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
|
|||||||
flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
|
flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
|
||||||
flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
|
flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
|
||||||
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
|
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
|
||||||
|
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
|
||||||
|
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
|
||||||
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
|
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
|
||||||
|
|
||||||
commonBuildFlags(&options.commonOptions, flags)
|
commonBuildFlags(&cFlags, flags)
|
||||||
|
|
||||||
return cmd
|
return cmd
|
||||||
}
|
}
|
||||||
|
|||||||
File diff suppressed because it is too large
@@ -10,13 +10,20 @@ import (
 	"strings"
 	"time"

+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/driver"
+	k8sutil "github.com/docker/buildx/driver/kubernetes/util"
+	remoteutil "github.com/docker/buildx/driver/remote/util"
+	"github.com/docker/buildx/localstate"
 	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	dopts "github.com/docker/cli/opts"
 	"github.com/google/shlex"
 	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
@@ -61,32 +68,6 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		}
 	}

-	buildkitHost := os.Getenv("BUILDKIT_HOST")
-
-	driverName := in.driver
-	if driverName == "" {
-		if len(args) == 0 && buildkitHost != "" {
-			driverName = "remote"
-		} else {
-			var arg string
-			if len(args) > 0 {
-				arg = args[0]
-			}
-			f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
-			if err != nil {
-				return err
-			}
-			if f == nil {
-				return errors.Errorf("no valid drivers found")
-			}
-			driverName = f.Name()
-		}
-	}
-
-	if driver.GetFactory(driverName, true) == nil {
-		return errors.Errorf("failed to find driver %q", in.driver)
-	}
-
 	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
@@ -121,17 +102,48 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 			logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
 		}
 		if in.actionLeave {
-			return errors.Errorf("failed to find instance %q for leave", name)
+			return errors.Errorf("failed to find instance %q for leave", in.name)
 		}
 	} else {
 			return err
 		}
 	}

+	buildkitHost := os.Getenv("BUILDKIT_HOST")
+
+	driverName := in.driver
+	if driverName == "" {
+		if ng != nil {
+			driverName = ng.Driver
+		} else if len(args) == 0 && buildkitHost != "" {
+			driverName = "remote"
+		} else {
+			var arg string
+			if len(args) > 0 {
+				arg = args[0]
+			}
+			f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
+			if err != nil {
+				return err
+			}
+			if f == nil {
+				return errors.Errorf("no valid drivers found")
+			}
+			driverName = f.Name()
+		}
+	}
+
 	if ng != nil {
 		if in.nodeName == "" && !in.actionAppend {
-			return errors.Errorf("existing instance for %s but no append mode, specify --node to make changes for existing instances", name)
+			return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
 		}
+		if driverName != ng.Driver {
+			return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
+		}
+	}
+
+	if _, err := driver.GetFactory(driverName, true); err != nil {
+		return err
 	}

 	ngOriginal := ng
@@ -141,14 +153,11 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {

 	if ng == nil {
 		ng = &store.NodeGroup{
 			Name: name,
+			Driver: driverName,
 		}
 	}

-	if ng.Driver == "" || in.driver != "" {
-		ng.Driver = driverName
-	}
-
 	var flags []string
 	if in.flags != "" {
 		flags, err = shlex.Split(in.flags)
@@ -163,15 +172,34 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		if err := ng.Leave(in.nodeName); err != nil {
 			return err
 		}
+		ls, err := localstate.New(confutil.ConfigDir(dockerCli))
+		if err != nil {
+			return err
+		}
+		if err := ls.RemoveBuilderNode(ng.Name, in.nodeName); err != nil {
+			return err
+		}
 	} else {
 		switch {
 		case driverName == "kubernetes":
+			if len(args) > 0 {
+				logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
+			}
+			// generate node name if not provided to avoid duplicated endpoint
+			// error: https://github.com/docker/setup-buildx-action/issues/215
+			nodeName := in.nodeName
+			if nodeName == "" {
+				nodeName, err = k8sutil.GenerateNodeName(name, txn)
+				if err != nil {
+					return err
+				}
+			}
 			// naming endpoint to make --append works
 			ep = (&url.URL{
 				Scheme: driverName,
-				Path:   "/" + in.name,
+				Path:   "/" + name,
 				RawQuery: (&url.Values{
-					"deployment": {in.nodeName},
+					"deployment": {nodeName},
 					"kubeconfig": {os.Getenv("KUBECONFIG")},
 				}).Encode(),
 			}).String()
@@ -199,7 +227,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
 			return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
 		}
-		ep, err = storeutil.GetCurrentEndpoint(dockerCli)
+		ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
 		if err != nil {
 			return err
 		}
@@ -229,17 +257,26 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		return err
 	}

-	ngi := &nginfo{ng: ng}
+	b, err := builder.New(dockerCli,
+		builder.WithName(ng.Name),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
+	if err != nil {
+		return err
+	}

 	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()

-	if err = loadNodeGroupData(timeoutCtx, dockerCli, ngi); err != nil {
+	nodes, err := b.LoadNodes(timeoutCtx, true)
+	if err != nil {
 		return err
 	}
-	for _, info := range ngi.drivers {
-		if err := info.di.Err; err != nil {
-			err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, info.di.Name, err)
+	for _, node := range nodes {
+		if err := node.Err; err != nil {
+			err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
 			var err2 error
 			if ngOriginal == nil {
 				err2 = txn.Remove(ng.Name)
@@ -254,7 +291,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	}

 	if in.use && ep != "" {
-		current, err := storeutil.GetCurrentEndpoint(dockerCli)
+		current, err := dockerutil.GetCurrentEndpoint(dockerCli)
 		if err != nil {
 			return err
 		}
@@ -264,7 +301,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	}

 	if in.bootstrap {
-		if _, err = boot(ctx, ngi); err != nil {
+		if _, err = b.Boot(ctx); err != nil {
 			return err
 		}
 	}
@@ -277,7 +314,7 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 	var options createOptions

 	var drivers bytes.Buffer
-	for _, d := range driver.GetFactories() {
+	for _, d := range driver.GetFactories(true) {
 		if len(drivers.String()) > 0 {
 			drivers.WriteString(", ")
 		}
@@ -291,6 +328,7 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runCreate(dockerCli, options, args)
 		},
+		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
@@ -315,6 +353,9 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 }

 func csvToMap(in []string) (map[string]string, error) {
+	if len(in) == 0 {
+		return nil, nil
+	}
 	m := make(map[string]string, len(in))
 	for _, s := range in {
 		csvReader := csv.NewReader(strings.NewReader(s))
@@ -332,3 +373,27 @@ func csvToMap(in []string) (map[string]string, error) {
 	}
 	return m, nil
 }
+
+// validateEndpoint validates that endpoint is either a context or a docker host
+func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
+	dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
+	if err == nil && dem != nil {
+		if ep == "default" {
+			return dem.Host, nil
+		}
+		return ep, nil
+	}
+	h, err := dopts.ParseHost(true, ep)
+	if err != nil {
+		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
+	}
+	return h, nil
+}
+
+// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
+func validateBuildkitEndpoint(ep string) (string, error) {
+	if err := remoteutil.IsValidEndpoint(ep); err != nil {
+		return "", err
+	}
+	return ep, nil
+}
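Only fragments of the `csvToMap` helper are visible in the hunks above (the new empty-input guard, the `csv.NewReader` line, and the final `return m, nil`). A self-contained sketch of how such a helper behaves; the parsing body between those fragments is reconstructed from context and may differ from the real implementation, and `fmt.Errorf` stands in for the `github.com/pkg/errors` calls the file actually uses:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// csvToMap parses each input string as one CSV record of key=value
// pairs, e.g. "a=1,b=2", and merges all records into a single map.
func csvToMap(in []string) (map[string]string, error) {
	if len(in) == 0 {
		return nil, nil
	}
	m := make(map[string]string, len(in))
	for _, s := range in {
		fields, err := csv.NewReader(strings.NewReader(s)).Read()
		if err != nil {
			return nil, err
		}
		for _, v := range fields {
			p := strings.SplitN(v, "=", 2)
			if len(p) != 2 {
				return nil, fmt.Errorf("invalid value %q", v)
			}
			m[p[0]] = p[1]
		}
	}
	return m, nil
}

func main() {
	// SplitN with n=2 keeps '=' inside values intact, which matters for
	// driver options like nodeselector=kubernetes.io/arch=arm64.
	m, err := csvToMap([]string{"nodeselector=kubernetes.io/arch=arm64,qemu.install=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m["qemu.install"])
}
```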
79
commands/debug-shell.go
Normal file
@@ -0,0 +1,79 @@
+package commands
+
+import (
+	"context"
+	"os"
+	"runtime"
+
+	"github.com/containerd/console"
+	"github.com/docker/buildx/controller"
+	"github.com/docker/buildx/controller/control"
+	controllerapi "github.com/docker/buildx/controller/pb"
+	"github.com/docker/buildx/monitor"
+	"github.com/docker/buildx/util/progress"
+	"github.com/docker/cli/cli/command"
+	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
+	"github.com/spf13/cobra"
+)
+
+func debugShellCmd(dockerCli command.Cli) *cobra.Command {
+	var options control.ControlOptions
+	var progressMode string
+
+	cmd := &cobra.Command{
+		Use:   "debug-shell",
+		Short: "Start a monitor",
+		Annotations: map[string]string{
+			"experimentalCLI": "",
+		},
+		RunE: func(cmd *cobra.Command, args []string) error {
+			printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progressMode)
+			if err != nil {
+				return err
+			}
+
+			ctx := context.TODO()
+			c, err := controller.NewController(ctx, options, dockerCli, printer)
+			if err != nil {
+				return err
+			}
+			defer func() {
+				if err := c.Close(); err != nil {
+					logrus.Warnf("failed to close server connection %v", err)
+				}
+			}()
+			con := console.Current()
+			if err := con.SetRaw(); err != nil {
+				return errors.Errorf("failed to configure terminal: %v", err)
+			}
+
+			err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
+				Tty: true,
+			}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
+			con.Reset()
+			return err
+		},
+	}
+
+	flags := cmd.Flags()
+
+	flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
+	flags.SetAnnotation("root", "experimentalCLI", nil)
+
+	flags.BoolVar(&options.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server (supported only on linux)")
+	flags.SetAnnotation("detach", "experimentalCLI", nil)
+
+	flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
+	flags.SetAnnotation("server-config", "experimentalCLI", nil)
+
+	flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
+
+	return cmd
+}
+
+func addDebugShellCommand(cmd *cobra.Command, dockerCli command.Cli) {
+	cmd.AddCommand(
+		debugShellCmd(dockerCli),
+	)
+}
@@ -8,7 +8,8 @@ import (
 	"text/tabwriter"
 	"time"

-	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/opts"
@@ -33,25 +34,29 @@ func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
 		return err
 	}

-	dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
+	b, err := builder.New(dockerCli, builder.WithName(opts.builder))
 	if err != nil {
 		return err
 	}

-	for _, di := range dis {
-		if di.Err != nil {
-			return err
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+
+	for _, node := range nodes {
+		if node.Err != nil {
+			return node.Err
 		}
 	}

-	out := make([][]*client.UsageInfo, len(dis))
+	out := make([][]*client.UsageInfo, len(nodes))

 	eg, ctx := errgroup.WithContext(ctx)
-	for i, di := range dis {
-		func(i int, di build.DriverInfo) {
+	for i, node := range nodes {
+		func(i int, node builder.Node) {
 			eg.Go(func() error {
-				if di.Driver != nil {
-					c, err := di.Driver.Client(ctx)
+				if node.Driver != nil {
+					c, err := node.Driver.Client(ctx)
 					if err != nil {
 						return err
 					}
@@ -64,7 +69,7 @@ func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
 				}
 				return nil
 			})
-		}(i, di)
+		}(i, node)
 	}

 	if err := eg.Wait(); err != nil {
@@ -111,6 +116,7 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			options.builder = rootOpts.builder
 			return runDiskUsage(dockerCli, options)
 		},
+		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
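The disk-usage loop above fans out one goroutine per node under an errgroup and keeps results ordered by writing into `out[i]`. The same index-preserving pattern, sketched with only the standard library (`sync.WaitGroup` instead of `golang.org/x/sync/errgroup`, and a toy `fn` standing in for the BuildKit client call):

```go
package main

import (
	"fmt"
	"sync"
)

// collect runs fn once per input concurrently. Writing each result into a
// pre-sized slice at its own index preserves input order without locking,
// since no two goroutines touch the same element.
func collect(inputs []string, fn func(string) int) []int {
	out := make([]int, len(inputs))
	var wg sync.WaitGroup
	for i, in := range inputs {
		wg.Add(1)
		go func(i int, in string) {
			defer wg.Done()
			out[i] = fn(in)
		}(i, in)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(collect([]string{"a", "bb", "ccc"}, func(s string) int { return len(s) }))
}
```

errgroup adds what this sketch lacks: first-error propagation and context cancellation, which is why the real code uses it.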
@@ -7,8 +7,8 @@ import (
 	"os"
 	"strings"

-	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
@@ -90,47 +90,34 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	}

 	for i, s := range srcs {
-		if s.Ref == nil && s.Desc.MediaType == "" && s.Desc.Digest != "" {
+		if s.Ref == nil {
 			if defaultRepo == nil {
 				return errors.Errorf("multiple repositories specified, cannot infer repository for %q", args[i])
 			}

 			n, err := reference.ParseNormalizedNamed(*defaultRepo)
 			if err != nil {
 				return err
 			}
-			r, err := reference.WithDigest(n, s.Desc.Digest)
-			if err != nil {
-				return err
+			if s.Desc.MediaType == "" && s.Desc.Digest != "" {
+				r, err := reference.WithDigest(n, s.Desc.Digest)
+				if err != nil {
+					return err
+				}
+				srcs[i].Ref = r
+				sourceRefs = true
+			} else {
+				srcs[i].Ref = reference.TagNameOnly(n)
 			}
-			srcs[i].Ref = r
-			sourceRefs = true
 		}
 	}

 	ctx := appcontext.Context()

-	txn, release, err := storeutil.GetStore(dockerCli)
+	b, err := builder.New(dockerCli, builder.WithName(in.builder))
 	if err != nil {
 		return err
 	}
-	defer release()
-
-	var ng *store.NodeGroup
-
-	if in.builder != "" {
-		ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
-		if err != nil {
-			return err
-		}
-	} else {
-		ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return err
-		}
-	}
-
-	imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
+	imageopt, err := b.ImageOpt()
 	if err != nil {
 		return err
 	}
@@ -182,7 +169,10 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {

 	ctx2, cancel := context.WithCancel(context.TODO())
 	defer cancel()
-	printer := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+	if err != nil {
+		return err
+	}

 	eg, _ := errgroup.WithContext(ctx)
 	pw := progress.WithPrefix(printer, "internal", true)
@@ -284,6 +274,7 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 			options.builder = *opts.Builder
 			return runCreate(dockerCli, options, args)
 		},
+		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
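The new branch above falls back to `reference.TagNameOnly(n)` when a source has no digest. `tagNameOnly` below is a hypothetical, stdlib-only simplification of that defaulting rule (append `:latest` when a reference carries neither tag nor digest), not the distribution library's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// tagNameOnly appends the default tag to a reference that has neither a
// tag nor a digest. Simplified stand-in for reference.TagNameOnly; the
// real helper works on typed references, not strings.
func tagNameOnly(ref string) string {
	if strings.Contains(ref, "@") {
		return ref // digest reference, leave untouched
	}
	// only a ':' after the last '/' counts as a tag separator, so a
	// registry port like localhost:5000/img is not mistaken for a tag
	slash := strings.LastIndex(ref, "/")
	if strings.Contains(ref[slash+1:], ":") {
		return ref
	}
	return ref + ":latest"
}

func main() {
	fmt.Println(tagNameOnly("docker.io/library/alpine"))
	fmt.Println(tagNameOnly("localhost:5000/img"))
}
```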
@@ -1,8 +1,8 @@
 package commands

 import (
-	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/builder"
-	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/imagetools"
 	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
@@ -25,27 +25,11 @@ func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
 		return errors.Errorf("format and raw cannot be used together")
 	}

-	txn, release, err := storeutil.GetStore(dockerCli)
+	b, err := builder.New(dockerCli, builder.WithName(in.builder))
 	if err != nil {
 		return err
 	}
-	defer release()
-
-	var ng *store.NodeGroup
-
-	if in.builder != "" {
-		ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
-		if err != nil {
-			return err
-		}
-	} else {
-		ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return err
-		}
-	}
-
-	imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
+	imageopt, err := b.ImageOpt()
 	if err != nil {
 		return err
 	}
@@ -69,6 +53,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runInspect(dockerCli, options, args[0])
 		},
+		ValidArgsFunction: completion.Disable,
 	}

 	flags := cmd.Flags()
@@ -1,6 +1,7 @@
 package commands
 
 import (
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli/command"
 	"github.com/spf13/cobra"
 )
@@ -11,8 +12,9 @@ type RootOptions struct {
 
 func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "imagetools",
 		Short: "Commands to work on images in registry",
+		ValidArgsFunction: completion.Disable,
 	}
 
 	cmd.AddCommand(
@@ -4,15 +4,19 @@ import (
 	"context"
 	"fmt"
 	"os"
+	"sort"
 	"strings"
 	"text/tabwriter"
 	"time"
 
-	"github.com/docker/buildx/store"
-	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/driver"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/debug"
+	"github.com/docker/go-units"
 	"github.com/moby/buildkit/util/appcontext"
 	"github.com/spf13/cobra"
 )
@@ -25,71 +29,46 @@ type inspectOptions struct {
 func runInspect(dockerCli command.Cli, in inspectOptions) error {
 	ctx := appcontext.Context()
 
-	txn, release, err := storeutil.GetStore(dockerCli)
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	defer release()
-
-	var ng *store.NodeGroup
-
-	if in.builder != "" {
-		ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
-		if err != nil {
-			return err
-		}
-	} else {
-		ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return err
-		}
-	}
-
-	if ng == nil {
-		ng = &store.NodeGroup{
-			Name: "default",
-			Nodes: []store.Node{{
-				Name:     "default",
-				Endpoint: "default",
-			}},
-		}
-	}
-
-	ngi := &nginfo{ng: ng}
 
 	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()
 
-	err = loadNodeGroupData(timeoutCtx, dockerCli, ngi)
+	nodes, err := b.LoadNodes(timeoutCtx, true)
 
-	var bootNgi *nginfo
 	if in.bootstrap {
 		var ok bool
-		ok, err = boot(ctx, ngi)
+		ok, err = b.Boot(ctx)
 		if err != nil {
 			return err
 		}
-		bootNgi = ngi
 		if ok {
-			ngi = &nginfo{ng: ng}
-			err = loadNodeGroupData(ctx, dockerCli, ngi)
+			nodes, err = b.LoadNodes(timeoutCtx, true)
 		}
 	}
 
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
-	fmt.Fprintf(w, "Name:\t%s\n", ngi.ng.Name)
-	fmt.Fprintf(w, "Driver:\t%s\n", ngi.ng.Driver)
+	fmt.Fprintf(w, "Name:\t%s\n", b.Name)
+	fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
+	if !b.NodeGroup.LastActivity.IsZero() {
+		fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
+	}
 
 	if err != nil {
 		fmt.Fprintf(w, "Error:\t%s\n", err.Error())
-	} else if ngi.err != nil {
-		fmt.Fprintf(w, "Error:\t%s\n", ngi.err.Error())
+	} else if b.Err() != nil {
+		fmt.Fprintf(w, "Error:\t%s\n", b.Err().Error())
 	}
 	if err == nil {
 		fmt.Fprintln(w, "")
 		fmt.Fprintln(w, "Nodes:")
 
-		for i, n := range ngi.ng.Nodes {
+		for i, n := range nodes {
 			if i != 0 {
 				fmt.Fprintln(w, "")
 			}
@@ -104,18 +83,49 @@ func runInspect(dockerCli command.Cli, in inspectOptions) error {
 				fmt.Fprintf(w, "Driver Options:\t%s\n", strings.Join(driverOpts, " "))
 			}
 
-			if err := ngi.drivers[i].di.Err; err != nil {
+			if err := n.Err; err != nil {
 				fmt.Fprintf(w, "Error:\t%s\n", err.Error())
-			} else if err := ngi.drivers[i].err; err != nil {
-				fmt.Fprintf(w, "Error:\t%s\n", err.Error())
-			} else if bootNgi != nil && len(bootNgi.drivers) > i && bootNgi.drivers[i].err != nil {
-				fmt.Fprintf(w, "Error:\t%s\n", bootNgi.drivers[i].err.Error())
 			} else {
-				fmt.Fprintf(w, "Status:\t%s\n", ngi.drivers[i].info.Status)
+				fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
 				if len(n.Flags) > 0 {
 					fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
 				}
-				fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Platforms, ngi.drivers[i].platforms), ", "))
+				if nodes[i].Version != "" {
+					fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
+				}
+				fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
+				if debug.IsEnabled() {
+					fmt.Fprintf(w, "Features:\n")
+					features := nodes[i].Driver.Features(ctx)
+					featKeys := make([]string, 0, len(features))
+					for k := range features {
+						featKeys = append(featKeys, string(k))
+					}
+					sort.Strings(featKeys)
+					for _, k := range featKeys {
+						fmt.Fprintf(w, "\t%s:\t%t\n", k, features[driver.Feature(k)])
+					}
+				}
+				if len(nodes[i].Labels) > 0 {
+					fmt.Fprintf(w, "Labels:\n")
+					for _, k := range sortedKeys(nodes[i].Labels) {
+						v := nodes[i].Labels[k]
+						fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
+					}
+				}
+				for ri, rule := range nodes[i].GCPolicy {
+					fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
+					fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
+					if len(rule.Filter) > 0 {
+						fmt.Fprintf(w, "\tFilters:\t%s\n", strings.Join(rule.Filter, " "))
+					}
+					if rule.KeepDuration > 0 {
+						fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
+					}
+					if rule.KeepBytes > 0 {
+						fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
+					}
+				}
 			}
 		}
 	}
@@ -139,6 +149,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runInspect(dockerCli, options)
 		},
+		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	flags := cmd.Flags()
@@ -146,3 +157,14 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 
 	return cmd
 }
+
+func sortedKeys(m map[string]string) []string {
+	s := make([]string, len(m))
+	i := 0
+	for k := range m {
+		s[i] = k
+		i++
+	}
+	sort.Strings(s)
+	return s
+}
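The hunk above adds a `sortedKeys` helper so that label output from `buildx inspect` is deterministic: Go randomizes map iteration order, so printing a map directly would reorder lines between runs. A self-contained sketch reproducing that helper outside the buildx tree:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of m in lexicographic order, matching the
// helper added in the diff above, so map-backed output is stable.
func sortedKeys(m map[string]string) []string {
	s := make([]string, len(m))
	i := 0
	for k := range m {
		s[i] = k
		i++
	}
	sort.Strings(s)
	return s
}

func main() {
	labels := map[string]string{"b": "2", "a": "1"}
	for _, k := range sortedKeys(labels) {
		// always prints a=1 before b=2, regardless of map iteration order
		fmt.Printf("%s=%s\n", k, labels[k])
	}
}
```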
@@ -4,6 +4,7 @@ import (
 	"os"
 
 	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -46,7 +47,8 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runInstall(dockerCli, options)
 		},
 		Hidden: true,
+		ValidArgsFunction: completion.Disable,
 	}
 
 	// hide builder persistent flag for this command
commands/ls.go
@@ -4,14 +4,14 @@ import (
 	"context"
 	"fmt"
 	"io"
-	"sort"
 	"strings"
 	"text/tabwriter"
 	"time"
 
-	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -32,52 +32,24 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
 	}
 	defer release()
 
-	ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	current, err := storeutil.GetCurrentInstance(txn, dockerCli)
+	if err != nil {
+		return err
+	}
+
+	builders, err := builder.GetBuilders(dockerCli, txn)
+	if err != nil {
+		return err
+	}
+
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()
 
-	ll, err := txn.List()
-	if err != nil {
-		return err
-	}
-
-	builders := make([]*nginfo, len(ll))
-	for i, ng := range ll {
-		builders[i] = &nginfo{ng: ng}
-	}
-
-	contexts, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return err
-	}
-	sort.Slice(contexts, func(i, j int) bool {
-		return contexts[i].Name < contexts[j].Name
-	})
-	for _, c := range contexts {
-		ngi := &nginfo{ng: &store.NodeGroup{
-			Name: c.Name,
-			Nodes: []store.Node{{
-				Name:     c.Name,
-				Endpoint: c.Name,
-			}},
-		}}
-		// if a context has the same name as an instance from the store, do not
-		// add it to the builders list. An instance from the store takes
-		// precedence over context builders.
-		if hasNodeGroup(builders, ngi) {
-			continue
-		}
-		builders = append(builders, ngi)
-	}
-
-	eg, _ := errgroup.WithContext(ctx)
+	eg, _ := errgroup.WithContext(timeoutCtx)
 
 	for _, b := range builders {
-		func(b *nginfo) {
+		func(b *builder.Builder) {
 			eg.Go(func() error {
-				err = loadNodeGroupData(ctx, dockerCli, b)
-				if b.err == nil && err != nil {
-					b.err = err
-				}
+				_, _ = b.LoadNodes(timeoutCtx, true)
 				return nil
 			})
 		}(b)
@@ -87,29 +59,15 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
 		return err
 	}
 
-	currentName := "default"
-	current, err := storeutil.GetCurrentInstance(txn, dockerCli)
-	if err != nil {
-		return err
-	}
-	if current != nil {
-		currentName = current.Name
-		if current.Name == "default" {
-			currentName = current.Nodes[0].Endpoint
-		}
-	}
-
 	w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
 	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
 
-	currentSet := false
 	printErr := false
 	for _, b := range builders {
-		if !currentSet && b.ng.Name == currentName {
-			b.ng.Name += " *"
-			currentSet = true
+		if current.Name == b.Name {
+			b.Name += " *"
 		}
-		if ok := printngi(w, b); !ok {
+		if ok := printBuilder(w, b); !ok {
 			printErr = true
 		}
 	}
@@ -119,19 +77,12 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
 	if printErr {
 		_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
 		for _, b := range builders {
-			if b.err != nil {
-				_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.ng.Name, strings.TrimSpace(b.err.Error()))
+			if b.Err() != nil {
+				_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.Name, strings.TrimSpace(b.Err().Error()))
 			} else {
-				for idx, n := range b.ng.Nodes {
-					d := b.drivers[idx]
-					var nodeErr string
-					if d.err != nil {
-						nodeErr = d.err.Error()
-					} else if d.di.Err != nil {
-						nodeErr = d.di.Err.Error()
-					}
-					if nodeErr != "" {
-						_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.ng.Name, n.Name, strings.TrimSpace(nodeErr))
+				for _, d := range b.Nodes() {
+					if d.Err != nil {
+						_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.Name, d.Name, strings.TrimSpace(d.Err.Error()))
 					}
 				}
 			}
@@ -141,26 +92,25 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
 	return nil
 }
 
-func printngi(w io.Writer, ngi *nginfo) (ok bool) {
+func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
 	ok = true
 	var err string
-	if ngi.err != nil {
+	if b.Err() != nil {
 		ok = false
 		err = "error"
 	}
-	fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", ngi.ng.Name, ngi.ng.Driver, err)
-	if ngi.err == nil {
-		for idx, n := range ngi.ng.Nodes {
-			d := ngi.drivers[idx]
+	fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
+	if b.Err() == nil {
+		for _, n := range b.Nodes() {
 			var status string
-			if d.info != nil {
-				status = d.info.Status.String()
+			if n.DriverInfo != nil {
+				status = n.DriverInfo.Status.String()
 			}
-			if d.err != nil || d.di.Err != nil {
+			if n.Err != nil {
 				ok = false
 				fmt.Fprintf(w, "  %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
 			} else {
-				fmt.Fprintf(w, "  %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, d.version, strings.Join(platformutil.FormatInGroups(n.Platforms, d.platforms), ", "))
+				fmt.Fprintf(w, "  %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
 			}
 		}
 	}
@@ -177,6 +127,7 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runLs(dockerCli, options)
 		},
+		ValidArgsFunction: completion.Disable,
 	}
 
 	// hide builder persistent flag for this command
@@ -7,7 +7,8 @@ import (
 	"text/tabwriter"
 	"time"
 
-	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/opts"
@@ -54,14 +55,18 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
 		return nil
 	}
 
-	dis, err := getInstanceOrDefault(ctx, dockerCli, opts.builder, "")
+	b, err := builder.New(dockerCli, builder.WithName(opts.builder))
 	if err != nil {
 		return err
 	}
 
-	for _, di := range dis {
-		if di.Err != nil {
-			return err
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+	for _, node := range nodes {
+		if node.Err != nil {
+			return node.Err
 		}
 	}
 
@@ -90,11 +95,11 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
 	}()
 
 	eg, ctx := errgroup.WithContext(ctx)
-	for _, di := range dis {
-		func(di build.DriverInfo) {
+	for _, node := range nodes {
+		func(node builder.Node) {
 			eg.Go(func() error {
-				if di.Driver != nil {
-					c, err := di.Driver.Client(ctx)
+				if node.Driver != nil {
+					c, err := node.Driver.Client(ctx)
 					if err != nil {
 						return err
 					}
@@ -109,7 +114,7 @@ func runPrune(dockerCli command.Cli, opts pruneOptions) error {
 				}
 				return nil
 			})
-		}(di)
+		}(node)
 	}
 
 	if err := eg.Wait(); err != nil {
@@ -135,10 +140,11 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			options.builder = rootOpts.builder
 			return runPrune(dockerCli, options)
 		},
+		ValidArgsFunction: completion.Disable,
 	}
 
 	flags := cmd.Flags()
-	flags.BoolVarP(&options.all, "all", "a", false, "Remove all unused images, not just dangling ones")
+	flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
 	flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
 	flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
 	flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
@@ -155,9 +161,9 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 	if len(untilValues) > 0 && len(unusedForValues) > 0 {
 		return nil, errors.Errorf("conflicting filters %q and %q", "until", "unused-for")
 	}
-	filterKey := "until"
+	untilKey := "until"
 	if len(unusedForValues) > 0 {
-		filterKey = "unused-for"
+		untilKey = "unused-for"
 	}
 	untilValues = append(untilValues, unusedForValues...)
 
@@ -168,23 +174,27 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 		var err error
 		until, err = time.ParseDuration(untilValues[0])
 		if err != nil {
-			return nil, errors.Wrapf(err, "%q filter expects a duration (e.g., '24h')", filterKey)
+			return nil, errors.Wrapf(err, "%q filter expects a duration (e.g., '24h')", untilKey)
 		}
 	default:
 		return nil, errors.Errorf("filters expect only one value")
 	}
 
-	bkFilter := make([]string, 0, f.Len())
-	for _, field := range f.Keys() {
-		values := f.Get(field)
+	filters := make([]string, 0, f.Len())
+	for _, filterKey := range f.Keys() {
+		if filterKey == untilKey {
+			continue
+		}
+
+		values := f.Get(filterKey)
 		switch len(values) {
 		case 0:
-			bkFilter = append(bkFilter, field)
+			filters = append(filters, filterKey)
 		case 1:
-			if field == "id" {
-				bkFilter = append(bkFilter, field+"~="+values[0])
+			if filterKey == "id" {
+				filters = append(filters, filterKey+"~="+values[0])
 			} else {
-				bkFilter = append(bkFilter, field+"=="+values[0])
+				filters = append(filters, filterKey+"=="+values[0])
 			}
 		default:
 			return nil, errors.Errorf("filters expect only one value")
@@ -192,6 +202,6 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 	}
 	return &client.PruneInfo{
 		KeepDuration: until,
-		Filter:       []string{strings.Join(bkFilter, ",")},
+		Filter:       []string{strings.Join(filters, ",")},
 	}, nil
 }
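The `toBuildkitPruneInfo` rewrite above now skips the until/unused-for key when building the BuildKit filter string, matches `id` with a regex operator (`~=`), and everything else exactly (`==`). A self-contained sketch of that key translation, where a plain sorted map stands in for docker's `filters.Args` (an assumption for illustration only):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// toBuildkitFilter mirrors the loop in toBuildkitPruneInfo: the duration key
// is handled elsewhere and skipped here, "id" becomes a regex match (~=),
// and any other key becomes an exact match (==).
func toBuildkitFilter(fields map[string]string, untilKey string) string {
	keys := make([]string, 0, len(fields))
	for k := range fields {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for the example
	out := make([]string, 0, len(fields))
	for _, k := range keys {
		if k == untilKey {
			continue
		}
		if k == "id" {
			out = append(out, k+"~="+fields[k])
		} else {
			out = append(out, k+"=="+fields[k])
		}
	}
	return strings.Join(out, ",")
}

func main() {
	f := map[string]string{"until": "24h", "id": "abc", "type": "regular"}
	fmt.Println(toBuildkitFilter(f, "until")) // id~=abc,type==regular
}
```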
@@ -5,8 +5,10 @@ import (
 	"fmt"
 	"time"
 
+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/store"
 	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
@@ -44,41 +46,33 @@ func runRm(dockerCli command.Cli, in rmOptions) error {
 		return rmAllInactive(ctx, txn, dockerCli, in)
 	}
 
-	var ng *store.NodeGroup
-	if in.builder != "" {
-		ng, err = storeutil.GetNodeGroup(txn, dockerCli, in.builder)
-		if err != nil {
-			return err
-		}
-	} else {
-		ng, err = storeutil.GetCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return err
-		}
-	}
-	if ng == nil {
-		return nil
-	}
-
-	ctxbuilders, err := dockerCli.ContextStore().List()
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	for _, cb := range ctxbuilders {
-		if ng.Driver == "docker" && len(ng.Nodes) == 1 && ng.Nodes[0].Endpoint == cb.Name {
-			return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb.Name)
-		}
+
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
 	}
 
-	err1 := rm(ctx, dockerCli, in, ng)
-	if err := txn.Remove(ng.Name); err != nil {
+	if cb := b.ContextName(); cb != "" {
+		return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
+	}
+
+	err1 := rm(ctx, nodes, in)
+	if err := txn.Remove(b.Name); err != nil {
 		return err
 	}
 	if err1 != nil {
 		return err1
 	}
 
-	_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", ng.Name)
+	_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
 	return nil
 }
 
@@ -99,6 +93,7 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runRm(dockerCli, options)
 		},
+		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}
 
 	flags := cmd.Flags()
@@ -110,61 +105,53 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	return cmd
 }
 
-func rm(ctx context.Context, dockerCli command.Cli, in rmOptions, ng *store.NodeGroup) error {
-	dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
-	if err != nil {
-		return err
-	}
-	for _, di := range dis {
-		if di.Driver == nil {
+func rm(ctx context.Context, nodes []builder.Node, in rmOptions) (err error) {
+	for _, node := range nodes {
+		if node.Driver == nil {
 			continue
 		}
 		// Do not stop the buildkitd daemon when --keep-daemon is provided
 		if !in.keepDaemon {
-			if err := di.Driver.Stop(ctx, true); err != nil {
+			if err := node.Driver.Stop(ctx, true); err != nil {
 				return err
 			}
 		}
-		if err := di.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
+		if err := node.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
 			return err
 		}
-		if di.Err != nil {
-			err = di.Err
+		if node.Err != nil {
			err = node.Err
 		}
 	}
 	return err
 }
 
 func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, in rmOptions) error {
-	ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
-	defer cancel()
-
-	ll, err := txn.List()
+	builders, err := builder.GetBuilders(dockerCli, txn)
 	if err != nil {
 		return err
 	}
 
-	builders := make([]*nginfo, len(ll))
-	for i, ng := range ll {
-		builders[i] = &nginfo{ng: ng}
-	}
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	defer cancel()
 
-	eg, _ := errgroup.WithContext(ctx)
+	eg, _ := errgroup.WithContext(timeoutCtx)
 	for _, b := range builders {
-		func(b *nginfo) {
+		func(b *builder.Builder) {
 			eg.Go(func() error {
-				if err := loadNodeGroupData(ctx, dockerCli, b); err != nil {
-					return errors.Wrapf(err, "cannot load %s", b.ng.Name)
+				nodes, err := b.LoadNodes(timeoutCtx, true)
+				if err != nil {
+					return errors.Wrapf(err, "cannot load %s", b.Name)
 				}
-				if b.ng.Dynamic {
+				if b.Dynamic {
 					return nil
 				}
-				if b.inactive() {
-					rmerr := rm(ctx, dockerCli, in, b.ng)
-					if err := txn.Remove(b.ng.Name); err != nil {
+				if b.Inactive() {
+					rmerr := rm(ctx, nodes, in)
+					if err := txn.Remove(b.Name); err != nil {
 						return err
 					}
-					_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.ng.Name)
+					_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
 					return rmerr
 				}
 				return nil
@@ -4,6 +4,8 @@ import (
 	"os"

 	imagetoolscmd "github.com/docker/buildx/commands/imagetools"
+	"github.com/docker/buildx/controller/remote"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/util/logutil"
 	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
@@ -22,6 +24,9 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 		Annotations: map[string]string{
 			annotation.CodeDelimiter: `"`,
 		},
+		CompletionOptions: cobra.CompletionOptions{
+			HiddenDefaultCmd: true,
+		},
 	}
 	if isPlugin {
 		cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
@@ -47,17 +52,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
 			"using default config store",
 		))

-	// filter out useless commandConn.CloseWrite warning message that can occur
-	// when listing builder instances with "buildx ls" for those that are
-	// unreachable: "commandConn.CloseWrite: commandconn: failed to wait: signal: killed"
-	// https://github.com/docker/cli/blob/3fb4fb83dfb5db0c0753a8316f21aea54dab32c5/cli/connhelper/commandconn/commandconn.go#L203-L214
-	logrus.AddHook(logutil.NewFilter([]logrus.Level{
-		logrus.WarnLevel,
-	},
-		"commandConn.CloseWrite:",
-		"commandConn.CloseRead:",
-	))
-
 	addCommands(cmd, dockerCli)
 	return cmd
 }
@@ -86,6 +80,15 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
 		duCmd(dockerCli, opts),
 		imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
 	)
+	if isExperimental() {
+		remote.AddControllerCommands(cmd, dockerCli)
+		addDebugShellCommand(cmd, dockerCli)
+	}
+
+	cmd.RegisterFlagCompletionFunc( //nolint:errcheck
+		"builder",
+		completion.BuilderNames(dockerCli),
+	)
 }

 func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
@@ -3,8 +3,8 @@ package commands
 import (
 	"context"

-	"github.com/docker/buildx/store"
-	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
@@ -18,32 +18,19 @@ type stopOptions struct {
 func runStop(dockerCli command.Cli, in stopOptions) error {
 	ctx := appcontext.Context()

-	txn, release, err := storeutil.GetStore(dockerCli)
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	defer release()
-
-	if in.builder != "" {
-		ng, err := storeutil.GetNodeGroup(txn, dockerCli, in.builder)
-		if err != nil {
-			return err
-		}
-		if err := stop(ctx, dockerCli, ng); err != nil {
-			return err
-		}
-		return nil
-	}
-
-	ng, err := storeutil.GetCurrentInstance(txn, dockerCli)
+	nodes, err := b.LoadNodes(ctx, false)
 	if err != nil {
 		return err
 	}
-	if ng != nil {
-		return stop(ctx, dockerCli, ng)
-	}

-	return stopCurrent(ctx, dockerCli)
+	return stop(ctx, nodes)
 }

 func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -60,42 +47,21 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			return runStop(dockerCli, options)
 		},
+		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}

 	return cmd
 }

-func stop(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup) error {
-	dis, err := driversForNodeGroup(ctx, dockerCli, ng, "")
-	if err != nil {
-		return err
-	}
-	for _, di := range dis {
-		if di.Driver != nil {
-			if err := di.Driver.Stop(ctx, true); err != nil {
+func stop(ctx context.Context, nodes []builder.Node) (err error) {
+	for _, node := range nodes {
+		if node.Driver != nil {
+			if err := node.Driver.Stop(ctx, true); err != nil {
 				return err
 			}
 		}
-		if di.Err != nil {
-			err = di.Err
-		}
-	}
-	return err
-}
-
-func stopCurrent(ctx context.Context, dockerCli command.Cli) error {
-	dis, err := getDefaultDrivers(ctx, dockerCli, false, "")
-	if err != nil {
-		return err
-	}
-	for _, di := range dis {
-		if di.Driver != nil {
-			if err := di.Driver.Stop(ctx, true); err != nil {
-				return err
-			}
-		}
-		if di.Err != nil {
-			err = di.Err
+		if node.Err != nil {
+			err = node.Err
 		}
 	}
 	return err
@@ -4,6 +4,7 @@ import (
 	"os"

 	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -52,7 +53,8 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runUninstall(dockerCli, options)
 		},
 		Hidden: true,
+		ValidArgsFunction: completion.Disable,
 	}

 	// hide builder persistent flag for this command
@@ -4,6 +4,8 @@ import (
 	"os"

 	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil/completion"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/pkg/errors"
@@ -29,7 +31,7 @@ func runUse(dockerCli command.Cli, in useOptions) error {
 		return errors.Errorf("run `docker context use default` to switch to default context")
 	}
 	if in.builder == "default" || in.builder == dockerCli.CurrentContext() {
-		ep, err := storeutil.GetCurrentEndpoint(dockerCli)
+		ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
 		if err != nil {
 			return err
 		}
@@ -52,7 +54,7 @@ func runUse(dockerCli command.Cli, in useOptions) error {
 		return errors.Wrapf(err, "failed to find instance %q", in.builder)
 	}

-	ep, err := storeutil.GetCurrentEndpoint(dockerCli)
+	ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
 	if err != nil {
 		return err
 	}
@@ -77,6 +79,7 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runUse(dockerCli, options)
 		},
+		ValidArgsFunction: completion.BuilderNames(dockerCli),
 	}

 	flags := cmd.Flags()
 486 commands/util.go
@@ -1,486 +0,0 @@
-package commands
-
-import (
-	"context"
-	"net/url"
-	"os"
-	"strings"
-
-	"github.com/docker/buildx/build"
-	"github.com/docker/buildx/driver"
-	ctxkube "github.com/docker/buildx/driver/kubernetes/context"
-	remoteutil "github.com/docker/buildx/driver/remote/util"
-	"github.com/docker/buildx/store"
-	"github.com/docker/buildx/store/storeutil"
-	"github.com/docker/buildx/util/platformutil"
-	"github.com/docker/buildx/util/progress"
-	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/context/docker"
-	ctxstore "github.com/docker/cli/cli/context/store"
-	dopts "github.com/docker/cli/opts"
-	dockerclient "github.com/docker/docker/client"
-	"github.com/moby/buildkit/util/grpcerrors"
-	specs "github.com/opencontainers/image-spec/specs-go/v1"
-	"github.com/pkg/errors"
-	"github.com/sirupsen/logrus"
-	"golang.org/x/sync/errgroup"
-	"google.golang.org/grpc/codes"
-	"k8s.io/client-go/tools/clientcmd"
-)
-
-// validateEndpoint validates that endpoint is either a context or a docker host
-func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
-	de, err := storeutil.GetDockerEndpoint(dockerCli, ep)
-	if err == nil && de != "" {
-		if ep == "default" {
-			return de, nil
-		}
-		return ep, nil
-	}
-	h, err := dopts.ParseHost(true, ep)
-	if err != nil {
-		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
-	}
-	return h, nil
-}
-
-// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
-func validateBuildkitEndpoint(ep string) (string, error) {
-	if err := remoteutil.IsValidEndpoint(ep); err != nil {
-		return "", err
-	}
-	return ep, nil
-}
-
-// driversForNodeGroup returns drivers for a nodegroup instance
-func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, contextPathHash string) ([]build.DriverInfo, error) {
-	eg, _ := errgroup.WithContext(ctx)
-
-	dis := make([]build.DriverInfo, len(ng.Nodes))
-
-	var f driver.Factory
-	if ng.Driver != "" {
-		f = driver.GetFactory(ng.Driver, true)
-		if f == nil {
-			return nil, errors.Errorf("failed to find driver %q", f)
-		}
-	} else {
-		// empty driver means nodegroup was implicitly created as a default
-		// driver for a docker context and allows falling back to a
-		// docker-container driver for older daemon that doesn't support
-		// buildkit (< 18.06).
-		ep := ng.Nodes[0].Endpoint
-		dockerapi, err := clientForEndpoint(dockerCli, ep)
-		if err != nil {
-			return nil, err
-		}
-		// check if endpoint is healthy is needed to determine the driver type.
-		// if this fails then can't continue with driver selection.
-		if _, err = dockerapi.Ping(ctx); err != nil {
-			return nil, err
-		}
-		f, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
-		if err != nil {
-			return nil, err
-		}
-		ng.Driver = f.Name()
-	}
-	imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
-	if err != nil {
-		return nil, err
-	}
-
-	for i, n := range ng.Nodes {
-		func(i int, n store.Node) {
-			eg.Go(func() error {
-				di := build.DriverInfo{
-					Name:        n.Name,
-					Platform:    n.Platforms,
-					ProxyConfig: storeutil.GetProxyConfig(dockerCli),
-				}
-				defer func() {
-					dis[i] = di
-				}()
-
-				dockerapi, err := clientForEndpoint(dockerCli, n.Endpoint)
-				if err != nil {
-					di.Err = err
-					return nil
-				}
-				// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
-				dockerapi.NegotiateAPIVersion(ctx)
-
-				contextStore := dockerCli.ContextStore()
-
-				var kcc driver.KubeClientConfig
-				kcc, err = configFromContext(n.Endpoint, contextStore)
-				if err != nil {
-					// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
-					// try again with name="default".
-					// FIXME: n should retain real context name.
-					kcc, err = configFromContext("default", contextStore)
-					if err != nil {
-						logrus.Error(err)
-					}
-				}
-
-				tryToUseKubeConfigInCluster := false
-				if kcc == nil {
-					tryToUseKubeConfigInCluster = true
-				} else {
-					if _, err := kcc.ClientConfig(); err != nil {
-						tryToUseKubeConfigInCluster = true
-					}
-				}
-				if tryToUseKubeConfigInCluster {
-					kccInCluster := driver.KubeClientConfigInCluster{}
-					if _, err := kccInCluster.ClientConfig(); err == nil {
-						logrus.Debug("using kube config in cluster")
-						kcc = kccInCluster
-					}
-				}
-
-				d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, contextPathHash)
-				if err != nil {
-					di.Err = err
-					return nil
-				}
-				di.Driver = d
-				di.ImageOpt = imageopt
-				return nil
-			})
-		}(i, n)
-	}
-
-	if err := eg.Wait(); err != nil {
-		return nil, err
-	}
-
-	return dis, nil
-}
-
-func configFromContext(endpointName string, s ctxstore.Reader) (clientcmd.ClientConfig, error) {
-	if strings.HasPrefix(endpointName, "kubernetes://") {
-		u, _ := url.Parse(endpointName)
-		if kubeconfig := u.Query().Get("kubeconfig"); kubeconfig != "" {
-			_ = os.Setenv(clientcmd.RecommendedConfigPathEnvVar, kubeconfig)
-		}
-		rules := clientcmd.NewDefaultClientConfigLoadingRules()
-		apiConfig, err := rules.Load()
-		if err != nil {
-			return nil, err
-		}
-		return clientcmd.NewDefaultClientConfig(*apiConfig, &clientcmd.ConfigOverrides{}), nil
-	}
-	return ctxkube.ConfigFromContext(endpointName, s)
-}
-
-// clientForEndpoint returns a docker client for an endpoint
-func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return nil, err
-	}
-	for _, l := range list {
-		if l.Name == name {
-			dep, ok := l.Endpoints["docker"]
-			if !ok {
-				return nil, errors.Errorf("context %q does not have a Docker endpoint", name)
-			}
-			epm, ok := dep.(docker.EndpointMeta)
-			if !ok {
-				return nil, errors.Errorf("endpoint %q is not of type EndpointMeta, %T", dep, dep)
-			}
-			ep, err := docker.WithTLSData(dockerCli.ContextStore(), name, epm)
-			if err != nil {
-				return nil, err
-			}
-			clientOpts, err := ep.ClientOpts()
-			if err != nil {
-				return nil, err
-			}
-			return dockerclient.NewClientWithOpts(clientOpts...)
-		}
-	}
-
-	ep := docker.Endpoint{
-		EndpointMeta: docker.EndpointMeta{
-			Host: name,
-		},
-	}
-
-	clientOpts, err := ep.ClientOpts()
-	if err != nil {
-		return nil, err
-	}
-
-	return dockerclient.NewClientWithOpts(clientOpts...)
-}
-
-func getInstanceOrDefault(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
-	var defaultOnly bool
-
-	if instance == "default" && instance != dockerCli.CurrentContext() {
-		return nil, errors.Errorf("use `docker --context=default buildx` to switch to default context")
-	}
-	if instance == "default" || instance == dockerCli.CurrentContext() {
-		instance = ""
-		defaultOnly = true
-	}
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return nil, err
-	}
-	for _, l := range list {
-		if l.Name == instance {
-			return nil, errors.Errorf("use `docker --context=%s buildx` to switch to context %s", instance, instance)
-		}
-	}
-
-	if instance != "" {
-		return getInstanceByName(ctx, dockerCli, instance, contextPathHash)
-	}
-	return getDefaultDrivers(ctx, dockerCli, defaultOnly, contextPathHash)
-}
-
-func getInstanceByName(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
-	txn, release, err := storeutil.GetStore(dockerCli)
-	if err != nil {
-		return nil, err
-	}
-	defer release()
-
-	ng, err := txn.NodeGroupByName(instance)
-	if err != nil {
-		return nil, err
-	}
-	return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
-}
-
-// getDefaultDrivers returns drivers based on current cli config
-func getDefaultDrivers(ctx context.Context, dockerCli command.Cli, defaultOnly bool, contextPathHash string) ([]build.DriverInfo, error) {
-	txn, release, err := storeutil.GetStore(dockerCli)
-	if err != nil {
-		return nil, err
-	}
-	defer release()
-
-	if !defaultOnly {
-		ng, err := storeutil.GetCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return nil, err
-		}
-
-		if ng != nil {
-			return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
-		}
-	}
-
-	imageopt, err := storeutil.GetImageConfig(dockerCli, nil)
-	if err != nil {
-		return nil, err
-	}
-
-	d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, "", dockerCli.Client(), imageopt.Auth, nil, nil, nil, nil, nil, contextPathHash)
-	if err != nil {
-		return nil, err
-	}
-	return []build.DriverInfo{
-		{
-			Name:        "default",
-			Driver:      d,
-			ImageOpt:    imageopt,
-			ProxyConfig: storeutil.GetProxyConfig(dockerCli),
-		},
-	}, nil
-}
-
-func loadInfoData(ctx context.Context, d *dinfo) error {
-	if d.di.Driver == nil {
-		return nil
-	}
-	info, err := d.di.Driver.Info(ctx)
-	if err != nil {
-		return err
-	}
-	d.info = info
-	if info.Status == driver.Running {
-		c, err := d.di.Driver.Client(ctx)
-		if err != nil {
-			return err
-		}
-		workers, err := c.ListWorkers(ctx)
-		if err != nil {
-			return errors.Wrap(err, "listing workers")
-		}
-		for _, w := range workers {
-			d.platforms = append(d.platforms, w.Platforms...)
-		}
-		d.platforms = platformutil.Dedupe(d.platforms)
-		inf, err := c.Info(ctx)
-		if err != nil {
-			if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
-				d.version, err = d.di.Driver.Version(ctx)
-				if err != nil {
-					return errors.Wrap(err, "getting version")
-				}
-			}
-		} else {
-			d.version = inf.BuildkitVersion.Version
-		}
-	}
-	return nil
-}
-
-func loadNodeGroupData(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
-	eg, _ := errgroup.WithContext(ctx)
-
-	dis, err := driversForNodeGroup(ctx, dockerCli, ngi.ng, "")
-	if err != nil {
-		return err
-	}
-	ngi.drivers = make([]dinfo, len(dis))
-	for i, di := range dis {
-		d := di
-		ngi.drivers[i].di = &d
-		func(d *dinfo) {
-			eg.Go(func() error {
-				if err := loadInfoData(ctx, d); err != nil {
-					d.err = err
-				}
-				return nil
-			})
-		}(&ngi.drivers[i])
-	}
-
-	if eg.Wait(); err != nil {
-		return err
-	}
-
-	kubernetesDriverCount := 0
-
-	for _, di := range ngi.drivers {
-		if di.info != nil && len(di.info.DynamicNodes) > 0 {
-			kubernetesDriverCount++
-		}
-	}
-
-	isAllKubernetesDrivers := len(ngi.drivers) == kubernetesDriverCount
-
-	if isAllKubernetesDrivers {
-		var drivers []dinfo
-		var dynamicNodes []store.Node
-
-		for _, di := range ngi.drivers {
-			// dynamic nodes are used in Kubernetes driver.
-			// Kubernetes pods are dynamically mapped to BuildKit Nodes.
-			if di.info != nil && len(di.info.DynamicNodes) > 0 {
-				for i := 0; i < len(di.info.DynamicNodes); i++ {
-					// all []dinfo share *build.DriverInfo and *driver.Info
-					diClone := di
-					if pl := di.info.DynamicNodes[i].Platforms; len(pl) > 0 {
-						diClone.platforms = pl
-					}
-					drivers = append(drivers, di)
-				}
-				dynamicNodes = append(dynamicNodes, di.info.DynamicNodes...)
-			}
-		}
-
-		// not append (remove the static nodes in the store)
-		ngi.ng.Nodes = dynamicNodes
-		ngi.drivers = drivers
-		ngi.ng.Dynamic = true
-	}
-
-	return nil
-}
-
-func hasNodeGroup(list []*nginfo, ngi *nginfo) bool {
-	for _, l := range list {
-		if ngi.ng.Name == l.ng.Name {
-			return true
-		}
-	}
-	return false
-}
-
-func dockerAPI(dockerCli command.Cli) *api {
-	return &api{dockerCli: dockerCli}
-}
-
-type api struct {
-	dockerCli command.Cli
-}
-
-func (a *api) DockerAPI(name string) (dockerclient.APIClient, error) {
-	if name == "" {
-		name = a.dockerCli.CurrentContext()
-	}
-	return clientForEndpoint(a.dockerCli, name)
-}
-
-type dinfo struct {
-	di        *build.DriverInfo
-	info      *driver.Info
-	platforms []specs.Platform
-	version   string
-	err       error
-}
-
-type nginfo struct {
-	ng      *store.NodeGroup
-	drivers []dinfo
-	err     error
-}
-
-// inactive checks if all nodes are inactive for this builder
-func (n *nginfo) inactive() bool {
-	for idx := range n.ng.Nodes {
-		d := n.drivers[idx]
-		if d.info != nil && d.info.Status == driver.Running {
-			return false
-		}
-	}
-	return true
-}
-
-func boot(ctx context.Context, ngi *nginfo) (bool, error) {
-	toBoot := make([]int, 0, len(ngi.drivers))
-	for i, d := range ngi.drivers {
-		if d.err != nil || d.di.Err != nil || d.di.Driver == nil || d.info == nil {
-			continue
-		}
-		if d.info.Status != driver.Running {
-			toBoot = append(toBoot, i)
-		}
-	}
-	if len(toBoot) == 0 {
-		return false, nil
-	}
-
-	printer := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, "auto")
-
-	baseCtx := ctx
-	eg, _ := errgroup.WithContext(ctx)
-	for _, idx := range toBoot {
-		func(idx int) {
-			eg.Go(func() error {
-				pw := progress.WithPrefix(printer, ngi.ng.Nodes[idx].Name, len(toBoot) > 1)
-				_, err := driver.Boot(ctx, baseCtx, ngi.drivers[idx].di.Driver, pw)
-				if err != nil {
-					ngi.drivers[idx].err = err
-				}
-				return nil
-			})
-		}(idx)
-	}
-
-	err := eg.Wait()
-	err1 := printer.Wait()
-	if err == nil {
-		err = err1
-	}
-
-	return true, err
-}
@@ -4,6 +4,7 @@ import (
 	"fmt"

 	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -23,6 +24,7 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runVersion(dockerCli)
 		},
+		ValidArgsFunction: completion.Disable,
 	}

 	// hide builder persistent flag for this command
 267 controller/build/build.go Normal file
@@ -0,0 +1,267 @@
+package build
+
+import (
+	"context"
+	"io"
+	"os"
+	"path/filepath"
+	"strings"
+	"sync"
+
+	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	controllerapi "github.com/docker/buildx/controller/pb"
+	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/buildflags"
+	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
+	"github.com/docker/buildx/util/platformutil"
+	"github.com/docker/buildx/util/progress"
+	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/config"
+	dockeropts "github.com/docker/cli/opts"
+	"github.com/docker/go-units"
+	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/session/auth/authprovider"
+	"github.com/moby/buildkit/util/grpcerrors"
+	"github.com/pkg/errors"
+	"google.golang.org/grpc/codes"
+)
+
+const defaultTargetName = "default"
+
+// RunBuild runs the specified build and returns the result.
+//
+// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
+// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
+// inspect the result and debug the cause of that error.
+func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
+	if in.NoCache && len(in.NoCacheFilter) > 0 {
+		return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
+	}
+
+	contexts := map[string]build.NamedContext{}
+	for name, path := range in.NamedContexts {
+		contexts[name] = build.NamedContext{Path: path}
+	}
+
+	opts := build.Options{
+		Inputs: build.Inputs{
+			ContextPath:    in.ContextPath,
+			DockerfilePath: in.DockerfileName,
+			InStream:       inStream,
+			NamedContexts:  contexts,
+		},
+		BuildArgs:     in.BuildArgs,
+		CgroupParent:  in.CgroupParent,
+		ExtraHosts:    in.ExtraHosts,
+		Labels:        in.Labels,
+		NetworkMode:   in.NetworkMode,
+		NoCache:       in.NoCache,
+		NoCacheFilter: in.NoCacheFilter,
+		Pull:          in.Pull,
+		ShmSize:       dockeropts.MemBytes(in.ShmSize),
+		Tags:          in.Tags,
+		Target:        in.Target,
+		Ulimits:       controllerUlimitOpt2DockerUlimit(in.Ulimits),
+	}
+
+	platforms, err := platformutil.Parse(in.Platforms)
+	if err != nil {
+		return nil, nil, err
+	}
+	opts.Platforms = platforms
+
+	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
+	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig))
+
+	secrets, err := controllerapi.CreateSecrets(in.Secrets)
+	if err != nil {
+		return nil, nil, err
+	}
+	opts.Session = append(opts.Session, secrets)
+
+	sshSpecs := in.SSH
+	if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
+		sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
+	}
+	ssh, err := controllerapi.CreateSSH(sshSpecs)
+	if err != nil {
+		return nil, nil, err
+	}
+	opts.Session = append(opts.Session, ssh)
+
+	outputs, err := controllerapi.CreateExports(in.Exports)
+	if err != nil {
+		return nil, nil, err
+	}
+	if in.ExportPush {
+		if in.ExportLoad {
+			return nil, nil, errors.Errorf("push and load may not be set together at the moment")
+		}
+		if len(outputs) == 0 {
+			outputs = []client.ExportEntry{{
+				Type: "image",
+				Attrs: map[string]string{
+					"push": "true",
+				},
+			}}
+		} else {
+			switch outputs[0].Type {
+			case "image":
+				outputs[0].Attrs["push"] = "true"
+			default:
+				return nil, nil, errors.Errorf("push and %q output can't be used together", outputs[0].Type)
+			}
+		}
+	}
+	if in.ExportLoad {
+		if len(outputs) == 0 {
+			outputs = []client.ExportEntry{{
+				Type:  "docker",
+				Attrs: map[string]string{},
+			}}
+		} else {
+			switch outputs[0].Type {
+			case "docker":
+			default:
+				return nil, nil, errors.Errorf("load and %q output can't be used together", outputs[0].Type)
+			}
+		}
+	}
+	opts.Exports = outputs
+
+	opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
+	opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
+
+	opts.Attests = controllerapi.CreateAttestations(in.Attests)
+
+	opts.SourcePolicy = in.SourcePolicy
+
+	allow, err := buildflags.ParseEntitlements(in.Allow)
+	if err != nil {
+		return nil, nil, err
+	}
+	opts.Allow = allow
+
+	if in.PrintFunc != nil {
+		opts.PrintFunc = &build.PrintFunc{
+			Name:   in.PrintFunc.Name,
+			Format: in.PrintFunc.Format,
+		}
+	}
+
+	// key string used for kubernetes "sticky" mode
+	contextPathHash, err := filepath.Abs(in.ContextPath)
+	if err != nil {
+		contextPathHash = in.ContextPath
+	}
+
+	// TODO: this should not be loaded this side of the controller api
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.Builder),
+		builder.WithContextPathHash(contextPathHash),
+	)
+	if err != nil {
+		return nil, nil, err
+	}
+	if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
+		return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
+	}
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
+	err = wrapBuildError(err, false)
+	if err != nil {
+		// NOTE: buildTargets can return *build.ResultHandle even on error.
+		return nil, res, err
+	}
+	return resp, res, nil
+}
+
+// buildTargets runs the specified build and returns the result.
+//
|
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
|
||||||
|
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
|
||||||
|
// inspect the result and debug the cause of that error.
|
||||||
|
func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
|
||||||
|
var res *build.ResultHandle
|
||||||
|
var resp map[string]*client.SolveResponse
|
||||||
|
var err error
|
||||||
|
if generateResult {
|
||||||
|
var mu sync.Mutex
|
||||||
|
var idx int
|
||||||
|
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
|
||||||
|
mu.Lock()
|
||||||
|
defer mu.Unlock()
|
||||||
|
if res == nil || driverIndex < idx {
|
||||||
|
idx, res = driverIndex, gotRes
|
||||||
|
}
|
||||||
|
})
|
||||||
|
} else {
|
||||||
|
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
|
||||||
|
}
|
||||||
|
if err != nil {
|
||||||
|
return nil, res, err
|
||||||
|
}
|
||||||
|
return resp[defaultTargetName], res, err
|
||||||
|
}
|
||||||
|
|
||||||
|
func wrapBuildError(err error, bake bool) error {
|
||||||
|
if err == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
st, ok := grpcerrors.AsGRPCStatus(err)
|
||||||
|
if ok {
|
||||||
|
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
|
||||||
|
msg := "current frontend does not support --build-context."
|
||||||
|
if bake {
|
||||||
|
msg = "current frontend does not support defining additional contexts for targets."
|
||||||
|
}
|
||||||
|
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
|
||||||
|
return &wrapped{err, msg}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
type wrapped struct {
|
||||||
|
err error
|
||||||
|
msg string
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *wrapped) Error() string {
|
||||||
|
return w.msg
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *wrapped) Unwrap() error {
|
||||||
|
return w.err
|
||||||
|
}
|
||||||
|
|
||||||
|
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
|
||||||
|
txn, release, err := storeutil.GetStore(dockerCli)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer release()
|
||||||
|
return txn.UpdateLastActivity(ng)
|
||||||
|
}
|
||||||
|
|
||||||
|
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
|
||||||
|
if u == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
values := make(map[string]*units.Ulimit)
|
||||||
|
for k, v := range u.Values {
|
||||||
|
values[k] = &units.Ulimit{
|
||||||
|
Name: v.Name,
|
||||||
|
Hard: v.Hard,
|
||||||
|
Soft: v.Soft,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return dockeropts.NewUlimitOpt(&values)
|
||||||
|
}
|
||||||
controller/control/controller.go (new file, 32 lines)
@@ -0,0 +1,32 @@
package control

import (
	"context"
	"io"

	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
)

type BuildxController interface {
	Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
	// Invoke starts an IO session into the specified process.
	// If pid doesn't match any running process, it starts a new process with the specified config.
	// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
	// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
	Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
	Kill(ctx context.Context) error
	Close() error
	List(ctx context.Context) (refs []string, _ error)
	Disconnect(ctx context.Context, ref string) error
	ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
	DisconnectProcess(ctx context.Context, ref, pid string) error
	Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
}

type ControlOptions struct {
	ServerConfig string
	Root         string
	Detach       bool
}
controller/controller.go (new file, 36 lines)
@@ -0,0 +1,36 @@
package controller

import (
	"context"
	"fmt"

	"github.com/docker/buildx/controller/control"
	"github.com/docker/buildx/controller/local"
	"github.com/docker/buildx/controller/remote"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
)

func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
	var name string
	if opts.Detach {
		name = "remote"
	} else {
		name = "local"
	}

	var c control.BuildxController
	err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
		if opts.Detach {
			c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
		} else {
			c = local.NewLocalBuildxController(ctx, dockerCli, l)
		}
		return err
	})
	if err != nil {
		return nil, errors.Wrap(err, "failed to start buildx controller")
	}
	return c, nil
}
controller/errdefs/build.go (new file, 34 lines)
@@ -0,0 +1,34 @@
package errdefs

import (
	"github.com/containerd/typeurl/v2"
	"github.com/moby/buildkit/util/grpcerrors"
)

func init() {
	typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
}

type BuildError struct {
	Build
	error
}

func (e *BuildError) Unwrap() error {
	return e.error
}

func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
	return &e.Build
}

func WrapBuild(err error, ref string) error {
	if err == nil {
		return nil
	}
	return &BuildError{Build: Build{Ref: ref}, error: err}
}

func (b *Build) WrapError(err error) error {
	return &BuildError{error: err, Build: *b}
}
controller/errdefs/errdefs.pb.go (new file, 77 lines)
@@ -0,0 +1,77 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: errdefs.proto

package errdefs

import (
	fmt "fmt"
	proto "github.com/gogo/protobuf/proto"
	_ "github.com/moby/buildkit/solver/pb"
	math "math"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

type Build struct {
	Ref                  string   `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}

func (m *Build) Reset()         { *m = Build{} }
func (m *Build) String() string { return proto.CompactTextString(m) }
func (*Build) ProtoMessage()    {}
func (*Build) Descriptor() ([]byte, []int) {
	return fileDescriptor_689dc58a5060aff5, []int{0}
}
func (m *Build) XXX_Unmarshal(b []byte) error {
	return xxx_messageInfo_Build.Unmarshal(m, b)
}
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	return xxx_messageInfo_Build.Marshal(b, m, deterministic)
}
func (m *Build) XXX_Merge(src proto.Message) {
	xxx_messageInfo_Build.Merge(m, src)
}
func (m *Build) XXX_Size() int {
	return xxx_messageInfo_Build.Size(m)
}
func (m *Build) XXX_DiscardUnknown() {
	xxx_messageInfo_Build.DiscardUnknown(m)
}

var xxx_messageInfo_Build proto.InternalMessageInfo

func (m *Build) GetRef() string {
	if m != nil {
		return m.Ref
	}
	return ""
}

func init() {
	proto.RegisterType((*Build)(nil), "errdefs.Build")
}

func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }

var fileDescriptor_689dc58a5060aff5 = []byte{
	// 111 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
	0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
	0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
	0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
	0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
	0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
	0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
}
controller/errdefs/errdefs.proto (new file, 9 lines)
@@ -0,0 +1,9 @@
syntax = "proto3";

package errdefs;

import "github.com/moby/buildkit/solver/pb/ops.proto";

message Build {
  string Ref = 1;
}
controller/errdefs/generate.go (new file, 3 lines)
@@ -0,0 +1,3 @@
package errdefs

//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto
controller/local/controller.go (new file, 146 lines)
@@ -0,0 +1,146 @@
package local

import (
	"context"
	"io"
	"sync/atomic"

	"github.com/docker/buildx/build"
	cbuild "github.com/docker/buildx/controller/build"
	"github.com/docker/buildx/controller/control"
	controllererrors "github.com/docker/buildx/controller/errdefs"
	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/processes"
	"github.com/docker/buildx/util/ioset"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
	return &localController{
		dockerCli: dockerCli,
		ref:       "local",
		processes: processes.NewManager(),
	}
}

type buildConfig struct {
	// TODO: these two structs should be merged
	// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
	resultCtx    *build.ResultHandle
	buildOptions *controllerapi.BuildOptions
}

type localController struct {
	dockerCli   command.Cli
	ref         string
	buildConfig buildConfig
	processes   *processes.Manager

	buildOnGoing atomic.Bool
}

func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
	if !b.buildOnGoing.CompareAndSwap(false, true) {
		return "", nil, errors.New("build ongoing")
	}
	defer b.buildOnGoing.Store(false)

	resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
	// NOTE: RunBuild can return *build.ResultHandle even on error.
	if res != nil {
		b.buildConfig = buildConfig{
			resultCtx:    res,
			buildOptions: &options,
		}
		if buildErr != nil {
			buildErr = controllererrors.WrapBuild(buildErr, b.ref)
		}
	}
	if buildErr != nil {
		return "", nil, buildErr
	}
	return b.ref, resp, nil
}

func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
	if ref != b.ref {
		return nil, errors.Errorf("unknown ref %q", ref)
	}
	return b.processes.ListProcesses(), nil
}

func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
	if ref != b.ref {
		return errors.Errorf("unknown ref %q", ref)
	}
	return b.processes.DeleteProcess(pid)
}

func (b *localController) cancelRunningProcesses() {
	b.processes.CancelRunningProcesses()
}

func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
	if ref != b.ref {
		return errors.Errorf("unknown ref %q", ref)
	}

	proc, ok := b.processes.Get(pid)
	if !ok {
		// Start a new process.
		if b.buildConfig.resultCtx == nil {
			return errors.New("no build result is registered")
		}
		var err error
		proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
		if err != nil {
			return err
		}
	}

	// Attach containerIn to this process
	ioCancelledCh := make(chan struct{})
	proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })

	select {
	case <-ioCancelledCh:
		return errors.Errorf("io cancelled")
	case err := <-proc.Done():
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (b *localController) Kill(context.Context) error {
	b.Close()
	return nil
}

func (b *localController) Close() error {
	b.cancelRunningProcesses()
	if b.buildConfig.resultCtx != nil {
		b.buildConfig.resultCtx.Done()
	}
	// TODO: cancel ongoing builds?
	return nil
}

func (b *localController) List(ctx context.Context) (res []string, _ error) {
	return []string{b.ref}, nil
}

func (b *localController) Disconnect(ctx context.Context, key string) error {
	b.Close()
	return nil
}

func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
	if ref != b.ref {
		return nil, errors.Errorf("unknown ref %q", ref)
	}
	return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}
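`localController` rejects a second concurrent build with a lock-free `atomic.Bool` `CompareAndSwap` rather than a mutex: the first caller flips the flag, everyone else gets "build ongoing", and a `defer` resets it. A self-contained sketch of that guard (`buildGuard` and `run` are names invented for this example):

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// buildGuard mirrors localController's buildOnGoing flag:
// CompareAndSwap admits exactly one build at a time.
type buildGuard struct {
	ongoing atomic.Bool
}

func (g *buildGuard) run(build func()) error {
	if !g.ongoing.CompareAndSwap(false, true) {
		return errors.New("build ongoing")
	}
	defer g.ongoing.Store(false) // always release the flag
	build()
	return nil
}

func main() {
	var g buildGuard
	err := g.run(func() {
		// A nested build attempted while one is in flight is rejected.
		fmt.Println(g.run(func() {}))
	})
	fmt.Println(err)
}
```

The `defer` is what makes this safe against early returns or panics inside the build: the flag is released on every exit path.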
controller/pb/attest.go (new file, 20 lines)
@@ -0,0 +1,20 @@
package pb

func CreateAttestations(attests []*Attest) map[string]*string {
	result := map[string]*string{}
	for _, attest := range attests {
		// ignore duplicates
		if _, ok := result[attest.Type]; ok {
			continue
		}

		if attest.Disabled {
			result[attest.Type] = nil
			continue
		}

		attrs := attest.Attrs
		result[attest.Type] = &attrs
	}
	return result
}
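`CreateAttestations` keeps the first entry per attestation type and maps a disabled entry to a nil value (so "explicitly off" is distinguishable from "absent"). A standalone reproduction of that logic, with a local `attest` struct standing in for the `pb.Attest` message:

```go
package main

import "fmt"

// attest is a local stand-in for the pb.Attest message above.
type attest struct {
	Type     string
	Disabled bool
	Attrs    string
}

// createAttestations reproduces the logic above: the first entry
// per type wins, and a disabled entry maps to a nil value.
func createAttestations(attests []attest) map[string]*string {
	result := map[string]*string{}
	for _, a := range attests {
		if _, ok := result[a.Type]; ok {
			continue // ignore duplicates
		}
		if a.Disabled {
			result[a.Type] = nil
			continue
		}
		attrs := a.Attrs // copy before taking the address
		result[a.Type] = &attrs
	}
	return result
}

func main() {
	m := createAttestations([]attest{
		{Type: "sbom", Attrs: "generator=scanner"},
		{Type: "sbom", Attrs: "ignored"},     // duplicate type: skipped
		{Type: "provenance", Disabled: true}, // disabled: nil value
	})
	fmt.Println(*m["sbom"], m["provenance"])
}
```

Note the `attrs := attest.Attrs` copy in the original: taking the address of a fresh local rather than the loop variable is what keeps each map entry pointing at its own string.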
controller/pb/cache.go (new file, 21 lines)
@@ -0,0 +1,21 @@
package pb

import "github.com/moby/buildkit/client"

func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
	var outs []client.CacheOptionsEntry
	if len(entries) == 0 {
		return nil
	}
	for _, entry := range entries {
		out := client.CacheOptionsEntry{
			Type:  entry.Type,
			Attrs: map[string]string{},
		}
		for k, v := range entry.Attrs {
			out.Attrs[k] = v
		}
		outs = append(outs, out)
	}
	return outs
}
controller/pb/controller.pb.go (new file, 2666 lines)
File diff suppressed because it is too large
controller/pb/controller.proto (new file, 244 lines)
@@ -0,0 +1,244 @@
syntax = "proto3";

package buildx.controller.v1;

import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";

option go_package = "pb";

service Controller {
  rpc Build(BuildRequest) returns (BuildResponse);
  rpc Inspect(InspectRequest) returns (InspectResponse);
  rpc Status(StatusRequest) returns (stream StatusResponse);
  rpc Input(stream InputMessage) returns (InputResponse);
  rpc Invoke(stream Message) returns (stream Message);
  rpc List(ListRequest) returns (ListResponse);
  rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
  rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}

message ListProcessesRequest {
  string Ref = 1;
}

message ListProcessesResponse {
  repeated ProcessInfo Infos = 1;
}

message ProcessInfo {
  string ProcessID = 1;
  InvokeConfig InvokeConfig = 2;
}

message DisconnectProcessRequest {
  string Ref = 1;
  string ProcessID = 2;
}

message DisconnectProcessResponse {
}

message BuildRequest {
  string Ref = 1;
  BuildOptions Options = 2;
}

message BuildOptions {
  string ContextPath = 1;
  string DockerfileName = 2;
  PrintFunc PrintFunc = 3;
  map<string, string> NamedContexts = 4;

  repeated string Allow = 5;
  repeated Attest Attests = 6;
  map<string, string> BuildArgs = 7;
  repeated CacheOptionsEntry CacheFrom = 8;
  repeated CacheOptionsEntry CacheTo = 9;
  string CgroupParent = 10;
  repeated ExportEntry Exports = 11;
  repeated string ExtraHosts = 12;
  map<string, string> Labels = 13;
  string NetworkMode = 14;
  repeated string NoCacheFilter = 15;
  repeated string Platforms = 16;
  repeated Secret Secrets = 17;
  int64 ShmSize = 18;
  repeated SSH SSH = 19;
  repeated string Tags = 20;
  string Target = 21;
  UlimitOpt Ulimits = 22;

  string Builder = 23;
  bool NoCache = 24;
  bool Pull = 25;
  bool ExportPush = 26;
  bool ExportLoad = 27;
  moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
}

message ExportEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
  string Destination = 3;
}

message CacheOptionsEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
}

message Attest {
  string Type = 1;
  bool Disabled = 2;
  string Attrs = 3;
}

message SSH {
  string ID = 1;
  repeated string Paths = 2;
}

message Secret {
  string ID = 1;
  string FilePath = 2;
  string Env = 3;
}

message PrintFunc {
  string Name = 1;
  string Format = 2;
}

message InspectRequest {
  string Ref = 1;
}

message InspectResponse {
  BuildOptions Options = 1;
}

message UlimitOpt {
  map<string, Ulimit> values = 1;
}

message Ulimit {
  string Name = 1;
  int64 Hard = 2;
  int64 Soft = 3;
}

message BuildResponse {
  map<string, string> ExporterResponse = 1;
}

message DisconnectRequest {
  string Ref = 1;
}

message DisconnectResponse {}

message ListRequest {
  string Ref = 1;
}

message ListResponse {
  repeated string keys = 1;
}

message InputMessage {
  oneof Input {
    InputInitMessage Init = 1;
    DataMessage Data = 2;
  }
}

message InputInitMessage {
  string Ref = 1;
}

message DataMessage {
  bool EOF = 1;   // true if eof was reached
  bytes Data = 2; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message InputResponse {}

message Message {
  oneof Input {
    InitMessage Init = 1;
    // FdMessage used from client to server for input (stdin) and
    // from server to client for output (stdout, stderr)
    FdMessage File = 2;
    // ResizeMessage used from client to server for terminal resize events
    ResizeMessage Resize = 3;
    // SignalMessage is used from client to server to send signal events
    SignalMessage Signal = 4;
  }
}

message InitMessage {
  string Ref = 1;

  // If ProcessID already exists in the server, it tries to connect to it
  // instead of invoking the new one. In this case, InvokeConfig will be ignored.
  string ProcessID = 2;
  InvokeConfig InvokeConfig = 3;
}

message InvokeConfig {
  repeated string Entrypoint = 1;
  repeated string Cmd = 2;
  repeated string Env = 3;
  string User = 4;
  bool NoUser = 5; // Do not set user but use the image's default
  string Cwd = 6;
  bool NoCwd = 7; // Do not set cwd but use the image's default
  bool Tty = 8;
  bool Rollback = 9; // Kill all processes in the container and recreate it.
  bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}

message FdMessage {
  uint32 Fd = 1;  // what fd the data was from
  bool EOF = 2;   // true if eof was reached
  bytes Data = 3; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message ResizeMessage {
  uint32 Rows = 1;
  uint32 Cols = 2;
}

message SignalMessage {
  // we only send name (ie HUP, INT) because the int values
  // are platform dependent.
  string Name = 1;
}

message StatusRequest {
  string Ref = 1;
}

message StatusResponse {
  repeated moby.buildkit.v1.Vertex vertexes = 1;
  repeated moby.buildkit.v1.VertexStatus statuses = 2;
  repeated moby.buildkit.v1.VertexLog logs = 3;
  repeated moby.buildkit.v1.VertexWarning warnings = 4;
}

message InfoRequest {}

message InfoResponse {
  BuildxVersion buildxVersion = 1;
}

message BuildxVersion {
  string package = 1;
  string version = 2;
  string revision = 3;
}
controller/pb/export.go (new file, 100 lines)
@@ -0,0 +1,100 @@
package pb

import (
	"io"
	"os"
	"strconv"

	"github.com/containerd/console"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, error) {
	var outs []client.ExportEntry
	if len(entries) == 0 {
		return nil, nil
	}
	for _, entry := range entries {
		if entry.Type == "" {
			return nil, errors.Errorf("type is required for output")
		}

		out := client.ExportEntry{
			Type:  entry.Type,
			Attrs: map[string]string{},
		}
		for k, v := range entry.Attrs {
			out.Attrs[k] = v
		}

		supportFile := false
		supportDir := false
		switch out.Type {
		case client.ExporterLocal:
			supportDir = true
		case client.ExporterTar:
			supportFile = true
		case client.ExporterOCI, client.ExporterDocker:
			tar, err := strconv.ParseBool(out.Attrs["tar"])
			if err != nil {
				tar = true
			}
			supportFile = tar
			supportDir = !tar
		case "registry":
			out.Type = client.ExporterImage
		}

		if supportDir {
			if entry.Destination == "" {
				return nil, errors.Errorf("dest is required for %s exporter", out.Type)
			}
			if entry.Destination == "-" {
				return nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
			}

			fi, err := os.Stat(entry.Destination)
			if err != nil && !os.IsNotExist(err) {
				return nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
			}
			if err == nil && !fi.IsDir() {
				return nil, errors.Errorf("destination directory %s is a file", entry.Destination)
			}
			out.OutputDir = entry.Destination
		}
		if supportFile {
			if entry.Destination == "" && out.Type != client.ExporterDocker {
				entry.Destination = "-"
			}
			if entry.Destination == "-" {
				if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
					return nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
				}
				out.Output = wrapWriteCloser(os.Stdout)
			} else if entry.Destination != "" {
				fi, err := os.Stat(entry.Destination)
				if err != nil && !os.IsNotExist(err) {
					return nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
				}
				if err == nil && fi.IsDir() {
					return nil, errors.Errorf("destination file %s is a directory", entry.Destination)
				}
				f, err := os.Create(entry.Destination)
				if err != nil {
					return nil, errors.Errorf("failed to open %s", err)
				}
				out.Output = wrapWriteCloser(f)
			}
		}

		outs = append(outs, out)
	}
	return outs, nil
}

func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
	return func(map[string]string) (io.WriteCloser, error) {
		return wc, nil
	}
}
3	controller/pb/generate.go	Normal file
@@ -0,0 +1,3 @@
package pb

//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto
175	controller/pb/path.go	Normal file
@@ -0,0 +1,175 @@
package pb

import (
    "path/filepath"
    "strings"

    "github.com/docker/docker/builder/remotecontext/urlutil"
    "github.com/moby/buildkit/util/gitutil"
)

// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them with absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
    localContext := false
    if options.ContextPath != "" && options.ContextPath != "-" {
        if !isRemoteURL(options.ContextPath) {
            localContext = true
            options.ContextPath, err = filepath.Abs(options.ContextPath)
            if err != nil {
                return nil, err
            }
        }
    }
    if options.DockerfileName != "" && options.DockerfileName != "-" {
        if localContext && !urlutil.IsURL(options.DockerfileName) {
            options.DockerfileName, err = filepath.Abs(options.DockerfileName)
            if err != nil {
                return nil, err
            }
        }
    }

    var contexts map[string]string
    for k, v := range options.NamedContexts {
        if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
            // url prefix, this is a remote path
        } else if strings.HasPrefix(v, "oci-layout://") {
            // oci layout prefix, this is a local path
            p := strings.TrimPrefix(v, "oci-layout://")
            p, err = filepath.Abs(p)
            if err != nil {
                return nil, err
            }
            v = "oci-layout://" + p
        } else {
            // no prefix, assume local path
            v, err = filepath.Abs(v)
            if err != nil {
                return nil, err
            }
        }

        if contexts == nil {
            contexts = make(map[string]string)
        }
        contexts[k] = v
    }
    options.NamedContexts = contexts

    var cacheFrom []*CacheOptionsEntry
    for _, co := range options.CacheFrom {
        switch co.Type {
        case "local":
            var attrs map[string]string
            for k, v := range co.Attrs {
                if attrs == nil {
                    attrs = make(map[string]string)
                }
                switch k {
                case "src":
                    p := v
                    if p != "" {
                        p, err = filepath.Abs(p)
                        if err != nil {
                            return nil, err
                        }
                    }
                    attrs[k] = p
                default:
                    attrs[k] = v
                }
            }
            co.Attrs = attrs
            cacheFrom = append(cacheFrom, co)
        default:
            cacheFrom = append(cacheFrom, co)
        }
    }
    options.CacheFrom = cacheFrom

    var cacheTo []*CacheOptionsEntry
    for _, co := range options.CacheTo {
        switch co.Type {
        case "local":
            var attrs map[string]string
            for k, v := range co.Attrs {
                if attrs == nil {
                    attrs = make(map[string]string)
                }
                switch k {
                case "dest":
                    p := v
                    if p != "" {
                        p, err = filepath.Abs(p)
                        if err != nil {
                            return nil, err
                        }
                    }
                    attrs[k] = p
                default:
                    attrs[k] = v
                }
            }
            co.Attrs = attrs
            cacheTo = append(cacheTo, co)
        default:
            cacheTo = append(cacheTo, co)
        }
    }
    options.CacheTo = cacheTo
    var exports []*ExportEntry
    for _, e := range options.Exports {
        if e.Destination != "" && e.Destination != "-" {
            e.Destination, err = filepath.Abs(e.Destination)
            if err != nil {
                return nil, err
            }
        }
        exports = append(exports, e)
    }
    options.Exports = exports

    var secrets []*Secret
    for _, s := range options.Secrets {
        if s.FilePath != "" {
            s.FilePath, err = filepath.Abs(s.FilePath)
            if err != nil {
                return nil, err
            }
        }
        secrets = append(secrets, s)
    }
    options.Secrets = secrets

    var ssh []*SSH
    for _, s := range options.SSH {
        var ps []string
        for _, pt := range s.Paths {
            p := pt
            if p != "" {
                p, err = filepath.Abs(p)
                if err != nil {
                    return nil, err
                }
            }
            ps = append(ps, p)
        }
        s.Paths = ps
        ssh = append(ssh, s)
    }
    options.SSH = ssh

    return options, nil
}

func isRemoteURL(c string) bool {
    if urlutil.IsURL(c) {
        return true
    }
    if _, err := gitutil.ParseGitRef(c); err == nil {
        return true
    }
    return false
}
247	controller/pb/path_test.go	Normal file
@@ -0,0 +1,247 @@
package pb

import (
    "os"
    "path/filepath"
    "reflect"
    "testing"

    "github.com/stretchr/testify/require"
)

func TestResolvePaths(t *testing.T) {
    tmpwd, err := os.MkdirTemp("", "testresolvepaths")
    require.NoError(t, err)
    defer os.Remove(tmpwd)
    require.NoError(t, os.Chdir(tmpwd))
    tests := []struct {
        name    string
        options BuildOptions
        want    BuildOptions
    }{
        {
            name:    "contextpath",
            options: BuildOptions{ContextPath: "test"},
            want:    BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
        },
        {
            name:    "contextpath-cwd",
            options: BuildOptions{ContextPath: "."},
            want:    BuildOptions{ContextPath: tmpwd},
        },
        {
            name:    "contextpath-dash",
            options: BuildOptions{ContextPath: "-"},
            want:    BuildOptions{ContextPath: "-"},
        },
        {
            name:    "contextpath-ssh",
            options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
            want:    BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
        },
        {
            name:    "dockerfilename",
            options: BuildOptions{DockerfileName: "test", ContextPath: "."},
            want:    BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
        },
        {
            name:    "dockerfilename-dash",
            options: BuildOptions{DockerfileName: "-", ContextPath: "."},
            want:    BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
        },
        {
            name:    "dockerfilename-remote",
            options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
            want:    BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
        },
        {
            name: "contexts",
            options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
                "alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
            want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
                "alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
        },
        {
            name: "cache-from",
            options: BuildOptions{
                CacheFrom: []*CacheOptionsEntry{
                    {
                        Type:  "local",
                        Attrs: map[string]string{"src": "test"},
                    },
                    {
                        Type:  "registry",
                        Attrs: map[string]string{"ref": "user/app"},
                    },
                },
            },
            want: BuildOptions{
                CacheFrom: []*CacheOptionsEntry{
                    {
                        Type:  "local",
                        Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
                    },
                    {
                        Type:  "registry",
                        Attrs: map[string]string{"ref": "user/app"},
                    },
                },
            },
        },
        {
            name: "cache-to",
            options: BuildOptions{
                CacheTo: []*CacheOptionsEntry{
                    {
                        Type:  "local",
                        Attrs: map[string]string{"dest": "test"},
                    },
                    {
                        Type:  "registry",
                        Attrs: map[string]string{"ref": "user/app"},
                    },
                },
            },
            want: BuildOptions{
                CacheTo: []*CacheOptionsEntry{
                    {
                        Type:  "local",
                        Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
                    },
                    {
                        Type:  "registry",
                        Attrs: map[string]string{"ref": "user/app"},
                    },
                },
            },
        },
        {
            name: "exports",
            options: BuildOptions{
                Exports: []*ExportEntry{
                    {
                        Type:        "local",
                        Destination: "-",
                    },
                    {
                        Type:        "local",
                        Destination: "test1",
                    },
                    {
                        Type:        "tar",
                        Destination: "test3",
                    },
                    {
                        Type:        "oci",
                        Destination: "-",
                    },
                    {
                        Type:        "docker",
                        Destination: "test4",
                    },
                    {
                        Type:  "image",
                        Attrs: map[string]string{"push": "true"},
                    },
                },
            },
            want: BuildOptions{
                Exports: []*ExportEntry{
                    {
                        Type:        "local",
                        Destination: "-",
                    },
                    {
                        Type:        "local",
                        Destination: filepath.Join(tmpwd, "test1"),
                    },
                    {
                        Type:        "tar",
                        Destination: filepath.Join(tmpwd, "test3"),
                    },
                    {
                        Type:        "oci",
                        Destination: "-",
                    },
                    {
                        Type:        "docker",
                        Destination: filepath.Join(tmpwd, "test4"),
                    },
                    {
                        Type:  "image",
                        Attrs: map[string]string{"push": "true"},
                    },
                },
            },
        },
        {
            name: "secrets",
            options: BuildOptions{
                Secrets: []*Secret{
                    {
                        FilePath: "test1",
                    },
                    {
                        ID:  "val",
                        Env: "a",
                    },
                    {
                        ID:       "test",
                        FilePath: "test3",
                    },
                },
            },
            want: BuildOptions{
                Secrets: []*Secret{
                    {
                        FilePath: filepath.Join(tmpwd, "test1"),
                    },
                    {
                        ID:  "val",
                        Env: "a",
                    },
                    {
                        ID:       "test",
                        FilePath: filepath.Join(tmpwd, "test3"),
                    },
                },
            },
        },
        {
            name: "ssh",
            options: BuildOptions{
                SSH: []*SSH{
                    {
                        ID:    "default",
                        Paths: []string{"test1", "test2"},
                    },
                    {
                        ID:    "a",
                        Paths: []string{"test3"},
                    },
                },
            },
            want: BuildOptions{
                SSH: []*SSH{
                    {
                        ID:    "default",
                        Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
                    },
                    {
                        ID:    "a",
                        Paths: []string{filepath.Join(tmpwd, "test3")},
                    },
                },
            },
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := ResolveOptionPaths(&tt.options)
            require.NoError(t, err)
            if !reflect.DeepEqual(tt.want, *got) {
                t.Fatalf("expected %#v, got %#v", tt.want, *got)
            }
        })
    }
}
126	controller/pb/progress.go	Normal file
@@ -0,0 +1,126 @@
package pb

import (
    "github.com/docker/buildx/util/progress"
    control "github.com/moby/buildkit/api/services/control"
    "github.com/moby/buildkit/client"
    "github.com/opencontainers/go-digest"
)

type writer struct {
    ch chan<- *StatusResponse
}

func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
    return &writer{ch: ch}
}

func (w *writer) Write(status *client.SolveStatus) {
    w.ch <- ToControlStatus(status)
}

func (w *writer) WriteBuildRef(target string, ref string) {
    return
}

func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
    return true
}

func (w *writer) ClearLogSource(interface{}) {}

func ToControlStatus(s *client.SolveStatus) *StatusResponse {
    resp := StatusResponse{}
    for _, v := range s.Vertexes {
        resp.Vertexes = append(resp.Vertexes, &control.Vertex{
            Digest:        v.Digest,
            Inputs:        v.Inputs,
            Name:          v.Name,
            Started:       v.Started,
            Completed:     v.Completed,
            Error:         v.Error,
            Cached:        v.Cached,
            ProgressGroup: v.ProgressGroup,
        })
    }
    for _, v := range s.Statuses {
        resp.Statuses = append(resp.Statuses, &control.VertexStatus{
            ID:        v.ID,
            Vertex:    v.Vertex,
            Name:      v.Name,
            Total:     v.Total,
            Current:   v.Current,
            Timestamp: v.Timestamp,
            Started:   v.Started,
            Completed: v.Completed,
        })
    }
    for _, v := range s.Logs {
        resp.Logs = append(resp.Logs, &control.VertexLog{
            Vertex:    v.Vertex,
            Stream:    int64(v.Stream),
            Msg:       v.Data,
            Timestamp: v.Timestamp,
        })
    }
    for _, v := range s.Warnings {
        resp.Warnings = append(resp.Warnings, &control.VertexWarning{
            Vertex: v.Vertex,
            Level:  int64(v.Level),
            Short:  v.Short,
            Detail: v.Detail,
            Url:    v.URL,
            Info:   v.SourceInfo,
            Ranges: v.Range,
        })
    }
    return &resp
}

func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
    s := client.SolveStatus{}
    for _, v := range resp.Vertexes {
        s.Vertexes = append(s.Vertexes, &client.Vertex{
            Digest:        v.Digest,
            Inputs:        v.Inputs,
            Name:          v.Name,
            Started:       v.Started,
            Completed:     v.Completed,
            Error:         v.Error,
            Cached:        v.Cached,
            ProgressGroup: v.ProgressGroup,
        })
    }
    for _, v := range resp.Statuses {
        s.Statuses = append(s.Statuses, &client.VertexStatus{
            ID:        v.ID,
            Vertex:    v.Vertex,
            Name:      v.Name,
            Total:     v.Total,
            Current:   v.Current,
            Timestamp: v.Timestamp,
            Started:   v.Started,
            Completed: v.Completed,
        })
    }
    for _, v := range resp.Logs {
        s.Logs = append(s.Logs, &client.VertexLog{
            Vertex:    v.Vertex,
            Stream:    int(v.Stream),
            Data:      v.Msg,
            Timestamp: v.Timestamp,
        })
    }
    for _, v := range resp.Warnings {
        s.Warnings = append(s.Warnings, &client.VertexWarning{
            Vertex:     v.Vertex,
            Level:      int(v.Level),
            Short:      v.Short,
            Detail:     v.Detail,
            URL:        v.Url,
            SourceInfo: v.Info,
            Range:      v.Ranges,
        })
    }
    return &s
}
22	controller/pb/secrets.go	Normal file
@@ -0,0 +1,22 @@
package pb

import (
    "github.com/moby/buildkit/session"
    "github.com/moby/buildkit/session/secrets/secretsprovider"
)

func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
    fs := make([]secretsprovider.Source, 0, len(secrets))
    for _, secret := range secrets {
        fs = append(fs, secretsprovider.Source{
            ID:       secret.ID,
            FilePath: secret.FilePath,
            Env:      secret.Env,
        })
    }
    store, err := secretsprovider.NewStore(fs)
    if err != nil {
        return nil, err
    }
    return secretsprovider.NewSecretProvider(store), nil
}
18	controller/pb/ssh.go	Normal file
@@ -0,0 +1,18 @@
package pb

import (
    "github.com/moby/buildkit/session"
    "github.com/moby/buildkit/session/sshforward/sshprovider"
)

func CreateSSH(ssh []*SSH) (session.Attachable, error) {
    configs := make([]sshprovider.AgentConfig, 0, len(ssh))
    for _, ssh := range ssh {
        cfg := sshprovider.AgentConfig{
            ID:    ssh.ID,
            Paths: append([]string{}, ssh.Paths...),
        }
        configs = append(configs, cfg)
    }
    return sshprovider.NewSSHAgentProvider(configs)
}
149	controller/processes/processes.go	Normal file
@@ -0,0 +1,149 @@
package processes

import (
    "context"
    "sync"
    "sync/atomic"

    "github.com/docker/buildx/build"
    "github.com/docker/buildx/controller/pb"
    "github.com/docker/buildx/util/ioset"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
)

// Process provides methods to control a process.
type Process struct {
    inEnd         *ioset.Forwarder
    invokeConfig  *pb.InvokeConfig
    errCh         chan error
    processCancel func()
    serveIOCancel func()
}

// ForwardIO forwards process's io to the specified reader/writer.
// Optionally specify ioCancelCallback which will be called when
// the process closes the specified IO. This will be useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
    p.inEnd.SetIn(in)
    if f := p.serveIOCancel; f != nil {
        f()
    }
    p.serveIOCancel = ioCancelCallback
}

// Done returns a channel where error or nil will be sent
// when the process exits.
// TODO: change this to Wait()
func (p *Process) Done() <-chan error {
    return p.errCh
}

// Manager manages a set of processes.
type Manager struct {
    container atomic.Value
    processes sync.Map
}

// NewManager creates and returns a Manager.
func NewManager() *Manager {
    return &Manager{}
}

// Get returns the specified process.
func (m *Manager) Get(id string) (*Process, bool) {
    v, ok := m.processes.Load(id)
    if !ok {
        return nil, false
    }
    return v.(*Process), true
}

// CancelRunningProcesses cancels execution of all running processes.
func (m *Manager) CancelRunningProcesses() {
    var funcs []func()
    m.processes.Range(func(key, value any) bool {
        funcs = append(funcs, value.(*Process).processCancel)
        m.processes.Delete(key)
        return true
    })
    for _, f := range funcs {
        f()
    }
}

// ListProcesses lists all running processes.
func (m *Manager) ListProcesses() (res []*pb.ProcessInfo) {
    m.processes.Range(func(key, value any) bool {
        res = append(res, &pb.ProcessInfo{
            ProcessID:    key.(string),
            InvokeConfig: value.(*Process).invokeConfig,
        })
        return true
    })
    return res
}

// DeleteProcess deletes the specified process.
func (m *Manager) DeleteProcess(id string) error {
    p, ok := m.processes.LoadAndDelete(id)
    if !ok {
        return errors.Errorf("unknown process %q", id)
    }
    p.(*Process).processCancel()
    return nil
}

// StartProcess starts a process in the container.
// When a container isn't available (i.e. first time invoking or the container has exited) or cfg.Rollback is set,
// this method will start a new container and run the process in it. Otherwise, this method starts a new process in the
// existing container.
func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *pb.InvokeConfig) (*Process, error) {
    // Get the target result to invoke a container from
    var ctr *build.Container
    if a := m.container.Load(); a != nil {
        ctr = a.(*build.Container)
    }
    if cfg.Rollback || ctr == nil || ctr.IsUnavailable() {
        go m.CancelRunningProcesses()
        // (Re)create a new container if this is rollback or first time to invoke a process.
        if ctr != nil {
            go ctr.Cancel() // Finish the existing container
        }
        var err error
        ctr, err = build.NewContainer(context.TODO(), resultCtx, cfg)
        if err != nil {
            return nil, errors.Errorf("failed to create container %v", err)
        }
        m.container.Store(ctr)
    }
    // [client(ForwardIO)] <-forwarder(switchable)-> [out] <-pipe-> [in] <- [process]
    in, out := ioset.Pipe()
    f := ioset.NewForwarder()
    f.PropagateStdinClose = false
    f.SetOut(&out)

    // Register process
    ctx, cancel := context.WithCancel(context.TODO())
    var cancelOnce sync.Once
    processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
    p := &Process{
        inEnd:         f,
        invokeConfig:  cfg,
        processCancel: processCancelFunc,
        errCh:         make(chan error),
    }
    m.processes.Store(pid, p)
    go func() {
        var err error
        if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
            logrus.Errorf("failed to exec process: %v", err)
        }
        logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
        m.processes.Delete(pid)
        processCancelFunc()
        p.errCh <- err
    }()

    return p, nil
}
240	controller/remote/client.go	Normal file
@@ -0,0 +1,240 @@
package remote

import (
    "context"
    "io"
    "sync"
    "time"

    "github.com/containerd/containerd/defaults"
    "github.com/containerd/containerd/pkg/dialer"
    "github.com/docker/buildx/controller/pb"
    "github.com/docker/buildx/util/progress"
    "github.com/moby/buildkit/client"
    "github.com/moby/buildkit/identity"
    "github.com/moby/buildkit/util/grpcerrors"
    "github.com/pkg/errors"
    "golang.org/x/sync/errgroup"
    "google.golang.org/grpc"
    "google.golang.org/grpc/backoff"
    "google.golang.org/grpc/credentials/insecure"
)

func NewClient(ctx context.Context, addr string) (*Client, error) {
    backoffConfig := backoff.DefaultConfig
    backoffConfig.MaxDelay = 3 * time.Second
    connParams := grpc.ConnectParams{
        Backoff: backoffConfig,
    }
    gopts := []grpc.DialOption{
        grpc.WithBlock(),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithConnectParams(connParams),
        grpc.WithContextDialer(dialer.ContextDialer),
        grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
        grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
        grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
        grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
    }
    conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
    if err != nil {
        return nil, err
    }
    return &Client{conn: conn}, nil
}

type Client struct {
    conn      *grpc.ClientConn
    closeOnce sync.Once
}

func (c *Client) Close() (err error) {
    c.closeOnce.Do(func() {
        err = c.conn.Close()
    })
    return
}

func (c *Client) Version(ctx context.Context) (string, string, string, error) {
    res, err := c.client().Info(ctx, &pb.InfoRequest{})
    if err != nil {
        return "", "", "", err
    }
    v := res.BuildxVersion
    return v.Package, v.Version, v.Revision, nil
}

func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
    res, err := c.client().List(ctx, &pb.ListRequest{})
    if err != nil {
        return nil, err
    }
    return res.Keys, nil
}

func (c *Client) Disconnect(ctx context.Context, key string) error {
    if key == "" {
        return nil
    }
    _, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
    return err
}

func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
    res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
    if err != nil {
        return nil, err
    }
    return res.Infos, nil
}

func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
    _, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
    return err
}

func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
    if ref == "" || pid == "" {
        return errors.New("build reference must be specified")
    }
    stream, err := c.client().Invoke(ctx)
    if err != nil {
        return err
    }
    return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
        stdin:  in,
        stdout: stdout,
        stderr: stderr,
        // TODO: Signal, Resize
    })
}

func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
    return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
}

func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
    ref := identity.NewID()
    statusChan := make(chan *client.SolveStatus)
    eg, egCtx := errgroup.WithContext(ctx)
    var resp *client.SolveResponse
    eg.Go(func() error {
        defer close(statusChan)
        var err error
        resp, err = c.build(egCtx, ref, options, in, statusChan)
        return err
    })
    eg.Go(func() error {
        for s := range statusChan {
            st := s
            progress.Write(st)
        }
        return nil
    })
    return ref, resp, eg.Wait()
}

func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
    eg, egCtx := errgroup.WithContext(ctx)
    done := make(chan struct{})

    var resp *client.SolveResponse

    eg.Go(func() error {
        defer close(done)
        pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
            Ref:     ref,
            Options: &options,
        })
        if err != nil {
            return err
        }
        resp = &client.SolveResponse{
            ExporterResponse: pbResp.ExporterResponse,
        }
        return nil
    })
    eg.Go(func() error {
        stream, err := c.client().Status(egCtx, &pb.StatusRequest{
            Ref: ref,
        })
        if err != nil {
            return err
        }
        for {
            resp, err := stream.Recv()
            if err != nil {
                if err == io.EOF {
                    return nil
                }
                return errors.Wrap(err, "failed to receive status")
            }
            statusChan <- pb.FromControlStatus(resp)
        }
    })
    if in != nil {
        eg.Go(func() error {
            stream, err := c.client().Input(egCtx)
            if err != nil {
                return err
            }
            if err := stream.Send(&pb.InputMessage{
                Input: &pb.InputMessage_Init{
                    Init: &pb.InputInitMessage{
                        Ref: ref,
                    },
                },
            }); err != nil {
                return errors.Wrap(err, "failed to init input")
            }

            inReader, inWriter := io.Pipe()
            eg2, _ := errgroup.WithContext(ctx)
            eg2.Go(func() error {
                <-done
                return inWriter.Close()
            })
            go func() {
                // do not wait for read completion but return here and let the caller send EOF
                // this allows us to return on ctx.Done() without being blocked by this reader.
                io.Copy(inWriter, in)
                inWriter.Close()
            }()
            eg2.Go(func() error {
                for {
                    buf := make([]byte, 32*1024)
                    n, err := inReader.Read(buf)
                    if err != nil {
                        if err == io.EOF {
                            break // break loop and send EOF
                        }
                        return err
                    } else if n > 0 {
                        if stream.Send(&pb.InputMessage{
                            Input: &pb.InputMessage_Data{
                                Data: &pb.DataMessage{
                                    Data: buf[:n],
                                },
                            },
                        }); err != nil {
                            return err
                        }
                    }
                }
                return stream.Send(&pb.InputMessage{
                    Input: &pb.InputMessage_Data{
                        Data: &pb.DataMessage{
                            EOF: true,
                        },
                    },
                })
            })
            return eg2.Wait()
        })
    }
    return resp, eg.Wait()
}

func (c *Client) client() pb.ControllerClient {
    return pb.NewControllerClient(c.conn)
}
333 controller/remote/controller.go Normal file
@@ -0,0 +1,333 @@
//go:build linux

package remote

import (
	"context"
	"fmt"
	"io"
	"net"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"strconv"
	"syscall"
	"time"

	"github.com/containerd/containerd/log"
	"github.com/docker/buildx/build"
	cbuild "github.com/docker/buildx/controller/build"
	"github.com/docker/buildx/controller/control"
	controllerapi "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/buildx/version"
	"github.com/docker/cli/cli/command"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/util/grpcerrors"
	"github.com/pelletier/go-toml"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"google.golang.org/grpc"
)

const (
	serveCommandName = "_INTERNAL_SERVE"
)

var (
	defaultLogFilename    = fmt.Sprintf("buildx.%s.log", version.Revision)
	defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
	defaultPIDFilename    = fmt.Sprintf("buildx.%s.pid", version.Revision)
)

type serverConfig struct {
	// Root specifies the buildx server root directory.
	Root string `toml:"root"`

	// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
	LogLevel string `toml:"log_level"`

	// LogFile specifies the file to which the buildx server log is written.
	LogFile string `toml:"log_file"`
}

func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
	rootDir := opts.Root
	if rootDir == "" {
		rootDir = rootDataDir(dockerCli)
	}
	serverRoot := filepath.Join(rootDir, "shared")

	// connect to the buildx server if it is already running
	ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
	c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
	cancel()
	if err != nil {
		if !errors.Is(err, context.DeadlineExceeded) {
			return nil, errors.Wrap(err, "cannot connect to the buildx server")
		}
	} else {
		return &buildxController{c, serverRoot}, nil
	}

	// start the buildx server via a subcommand
	err = logger.Wrap("no buildx server found; launching...", func() error {
		launchFlags := []string{}
		if opts.ServerConfig != "" {
			launchFlags = append(launchFlags, "--config", opts.ServerConfig)
		}
		logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
		if err != nil {
			return err
		}
		wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
		if err != nil {
			return err
		}
		go wait()

		// wait for the buildx server to be ready
		ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
		c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
		cancel()
		if err != nil {
			return errors.Wrap(err, "cannot connect to the buildx server")
		}
		return nil
	})
	if err != nil {
		return nil, err
	}
	return &buildxController{c, serverRoot}, nil
}

func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
	cmd.AddCommand(
		serveCmd(dockerCli),
	)
}

func serveCmd(dockerCli command.Cli) *cobra.Command {
	var serverConfigPath string
	cmd := &cobra.Command{
		Use:    fmt.Sprintf("%s [OPTIONS]", serveCommandName),
		Hidden: true,
		RunE: func(cmd *cobra.Command, args []string) error {
			// parse config
			config, err := getConfig(dockerCli, serverConfigPath)
			if err != nil {
				return err
			}
			if config.LogLevel == "" {
				logrus.SetLevel(logrus.InfoLevel)
			} else {
				lvl, err := logrus.ParseLevel(config.LogLevel)
				if err != nil {
					return errors.Wrap(err, "failed to prepare logger")
				}
				logrus.SetLevel(lvl)
			}
			logrus.SetFormatter(&logrus.JSONFormatter{
				TimestampFormat: log.RFC3339NanoFixed,
			})
			root, err := prepareRootDir(dockerCli, config)
			if err != nil {
				return err
			}
			pidF := filepath.Join(root, defaultPIDFilename)
			if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
				return err
			}
			defer func() {
				if err := os.Remove(pidF); err != nil {
					logrus.Errorf("failed to clean up PID file %q: %v", pidF, err)
				}
			}()

			// prepare the server
			b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
				return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
			})
			defer b.Close()

			// serve the server
			addr := filepath.Join(root, defaultSocketFilename)
			if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
				return err
			}
			defer func() {
				if err := os.Remove(addr); err != nil {
					logrus.Errorf("failed to clean up socket %q: %v", addr, err)
				}
			}()
			logrus.Infof("starting server at %q", addr)
			l, err := net.Listen("unix", addr)
			if err != nil {
				return err
			}
			rpc := grpc.NewServer(
				grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
				grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
			)
			controllerapi.RegisterControllerServer(rpc, b)
			doneCh := make(chan struct{})
			errCh := make(chan error, 1)
			go func() {
				defer close(doneCh)
				if err := rpc.Serve(l); err != nil {
					errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
				}
			}()

			var s os.Signal
			sigCh := make(chan os.Signal, 1)
			signal.Notify(sigCh, syscall.SIGINT)
			signal.Notify(sigCh, syscall.SIGTERM)
			select {
			case err := <-errCh:
				logrus.Errorf("got error %s, exiting", err)
				return err
			case s = <-sigCh:
				logrus.Infof("got signal %s, exiting", s)
				return nil
			case <-doneCh:
				logrus.Infof("rpc server done, exiting")
				return nil
			}
		},
	}

	flags := cmd.Flags()
	flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
	return cmd
}
func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
	config, err := getConfig(dockerCli, configPath)
	if err != nil {
		return "", err
	}
	if config.LogFile == "" {
		root, err := prepareRootDir(dockerCli, config)
		if err != nil {
			return "", err
		}
		return filepath.Join(root, defaultLogFilename), nil
	}
	return config.LogFile, nil
}

func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
	var defaultConfigPath bool
	if configPath == "" {
		defaultRoot := rootDataDir(dockerCli)
		configPath = filepath.Join(defaultRoot, "config.toml")
		defaultConfigPath = true
	}
	var config serverConfig
	tree, err := toml.LoadFile(configPath)
	if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
		return nil, errors.Wrapf(err, "failed to read config %q", configPath)
	} else if err == nil {
		if err := tree.Unmarshal(&config); err != nil {
			return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
		}
	}
	return &config, nil
}

func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
	rootDir := config.Root
	if rootDir == "" {
		rootDir = rootDataDir(dockerCli)
	}
	if rootDir == "" {
		return "", errors.New("buildx root dir must be determined")
	}
	if err := os.MkdirAll(rootDir, 0700); err != nil {
		return "", err
	}
	serverRoot := filepath.Join(rootDir, "shared")
	if err := os.MkdirAll(serverRoot, 0700); err != nil {
		return "", err
	}
	return serverRoot, nil
}

func rootDataDir(dockerCli command.Cli) string {
	return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
}

func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
	c, err := NewClient(ctx, addr)
	if err != nil {
		return nil, err
	}
	p, v, r, err := c.Version(ctx)
	if err != nil {
		return nil, err
	}
	logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
	if !(p == version.Package && v == version.Version && r == version.Revision) {
		return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
	}
	return c, nil
}

type buildxController struct {
	*Client
	serverRoot string
}

func (c *buildxController) Kill(ctx context.Context) error {
	pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
	if err != nil {
		return err
	}
	pid, err := strconv.ParseInt(string(pidB), 10, 64)
	if err != nil {
		return err
	}
	if pid <= 0 {
		return errors.New("no PID is recorded for buildx server")
	}
	p, err := os.FindProcess(int(pid))
	if err != nil {
		return err
	}
	if err := p.Signal(syscall.SIGINT); err != nil {
		return err
	}
	// TODO: Should we send SIGKILL if process doesn't finish?
	return nil
}

func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
	// set absolute path of binary, since we set the working directory to the root
	pathname, err := os.Executable()
	if err != nil {
		return nil, err
	}
	bCmd := exec.CommandContext(ctx, pathname, args...)
	if logFile != "" {
		f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		bCmd.Stdout = f
		bCmd.Stderr = f
	}
	bCmd.Stdin = nil
	bCmd.Dir = "/"
	bCmd.SysProcAttr = &syscall.SysProcAttr{
		Setsid: true,
	}
	if err := bCmd.Start(); err != nil {
		return nil, err
	}
	return bCmd.Wait, nil
}
19 controller/remote/controller_nolinux.go Normal file
@@ -0,0 +1,19 @@
//go:build !linux

package remote

import (
	"context"

	"github.com/docker/buildx/controller/control"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
	return nil, errors.New("remote buildx unsupported")
}

func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}
430 controller/remote/io.go Normal file
@@ -0,0 +1,430 @@
package remote

import (
	"context"
	"io"
	"syscall"
	"time"

	"github.com/docker/buildx/controller/pb"
	"github.com/moby/sys/signal"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
)

type msgStream interface {
	Send(*pb.Message) error
	Recv() (*pb.Message, error)
}

type ioServerConfig struct {
	stdin          io.WriteCloser
	stdout, stderr io.ReadCloser

	// signalFn is a callback function called when a signal reaches the client.
	signalFn func(context.Context, syscall.Signal) error

	// resizeFn is a callback function called when a resize event reaches the client.
	resizeFn func(context.Context, winSize) error
}

func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
	stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
	stream := &debugStream{srv, "server=" + time.Now().String()}
	eg, ctx := errgroup.WithContext(attachCtx)
	done := make(chan struct{})

	msg, err := receive(ctx, stream)
	if err != nil {
		return err
	}
	init := msg.GetInit()
	if init == nil {
		return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
	}
	ref := init.Ref
	if ref == "" {
		return errors.New("no ref is provided")
	}
	if err := initFn(init); err != nil {
		return errors.Wrap(err, "failed to initialize IO server")
	}

	if stdout != nil {
		stdoutReader, stdoutWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stdoutWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stdoutWriter, stdout)
			stdoutWriter.Close()
		}()

		eg.Go(func() error {
			defer stdoutReader.Close()
			return copyToStream(1, stream, stdoutReader)
		})
	}

	if stderr != nil {
		stderrReader, stderrWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stderrWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stderrWriter, stderr)
			stderrWriter.Close()
		}()

		eg.Go(func() error {
			defer stderrReader.Close()
			return copyToStream(2, stream, stderrReader)
		})
	}

	msgCh := make(chan *pb.Message)
	eg.Go(func() error {
		defer close(msgCh)
		for {
			msg, err := receive(ctx, stream)
			if err != nil {
				return err
			}
			select {
			case msgCh <- msg:
			case <-done:
				return nil
			case <-ctx.Done():
				return nil
			}
		}
	})

	eg.Go(func() error {
		defer close(done)
		for {
			var msg *pb.Message
			select {
			case msg = <-msgCh:
			case <-ctx.Done():
				return nil
			}
			if msg == nil {
				return nil
			}
			if file := msg.GetFile(); file != nil {
				if file.Fd != 0 {
					return errors.Errorf("unexpected fd: %v", file.Fd)
				}
				if stdin == nil {
					continue // no stdin destination is specified so ignore the data
				}
				if len(file.Data) > 0 {
					_, err := stdin.Write(file.Data)
					if err != nil {
						return err
					}
				}
				if file.EOF {
					stdin.Close()
				}
			} else if resize := msg.GetResize(); resize != nil {
				if ioConfig.resizeFn != nil {
					ioConfig.resizeFn(ctx, winSize{
						cols: resize.Cols,
						rows: resize.Rows,
					})
				}
			} else if sig := msg.GetSignal(); sig != nil {
				if ioConfig.signalFn != nil {
					syscallSignal, ok := signal.SignalMap[sig.Name]
					if !ok {
						continue
					}
					ioConfig.signalFn(ctx, syscallSignal)
				}
			} else {
				return errors.Errorf("unexpected message: %T", msg.GetInput())
			}
		}
	})

	return eg.Wait()
}

type ioAttachConfig struct {
	stdin          io.ReadCloser
	stdout, stderr io.WriteCloser
	signal         <-chan syscall.Signal
	resize         <-chan winSize
}

type winSize struct {
	rows uint32
	cols uint32
}

func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
	eg, ctx := errgroup.WithContext(ctx)
	done := make(chan struct{})

	if err := stream.Send(&pb.Message{
		Input: &pb.Message_Init{
			Init: initMessage,
		},
	}); err != nil {
		return errors.Wrap(err, "failed to init")
	}

	if cfg.stdin != nil {
		stdinReader, stdinWriter := io.Pipe()
		eg.Go(func() error {
			<-done
			return stdinWriter.Close()
		})

		go func() {
			// do not wait for read completion but return here and let the caller send EOF
			// this allows us to return on ctx.Done() without being blocked by this reader.
			io.Copy(stdinWriter, cfg.stdin)
			stdinWriter.Close()
		}()

		eg.Go(func() error {
			defer stdinReader.Close()
			return copyToStream(0, stream, stdinReader)
		})
	}

	if cfg.signal != nil {
		eg.Go(func() error {
			for {
				var sig syscall.Signal
				select {
				case sig = <-cfg.signal:
				case <-done:
					return nil
				case <-ctx.Done():
					return nil
				}
				name := sigToName[sig]
				if name == "" {
					continue
				}
				if err := stream.Send(&pb.Message{
					Input: &pb.Message_Signal{
						Signal: &pb.SignalMessage{
							Name: name,
						},
					},
				}); err != nil {
					return errors.Wrap(err, "failed to send signal")
				}
			}
		})
	}

	if cfg.resize != nil {
		eg.Go(func() error {
			for {
				var win winSize
				select {
				case win = <-cfg.resize:
				case <-done:
					return nil
				case <-ctx.Done():
					return nil
				}
				if err := stream.Send(&pb.Message{
					Input: &pb.Message_Resize{
						Resize: &pb.ResizeMessage{
							Rows: win.rows,
							Cols: win.cols,
						},
					},
				}); err != nil {
					return errors.Wrap(err, "failed to send resize")
				}
			}
		})
	}

	msgCh := make(chan *pb.Message)
	eg.Go(func() error {
		defer close(msgCh)
		for {
			msg, err := receive(ctx, stream)
			if err != nil {
				return err
			}
			select {
			case msgCh <- msg:
			case <-done:
				return nil
			case <-ctx.Done():
				return nil
			}
		}
	})

	eg.Go(func() error {
		eofs := make(map[uint32]struct{})
		defer close(done)
		for {
			var msg *pb.Message
			select {
			case msg = <-msgCh:
			case <-ctx.Done():
				return nil
			}
			if msg == nil {
				return nil
			}
			if file := msg.GetFile(); file != nil {
				if _, ok := eofs[file.Fd]; ok {
					continue
				}
				var out io.WriteCloser
				switch file.Fd {
				case 1:
					out = cfg.stdout
				case 2:
					out = cfg.stderr
				default:
					return errors.Errorf("unsupported fd %d", file.Fd)
				}
				if out == nil {
					logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
					continue
				}
				if len(file.Data) > 0 {
					if _, err := out.Write(file.Data); err != nil {
						return err
					}
				}
				if file.EOF {
					eofs[file.Fd] = struct{}{}
				}
			} else {
				return errors.Errorf("unexpected message: %T", msg.GetInput())
			}
		}
	})

	return eg.Wait()
}

func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
	msgCh := make(chan *pb.Message)
	errCh := make(chan error)
	go func() {
		msg, err := stream.Recv()
		if err != nil {
			if errors.Is(err, io.EOF) {
				return
			}
			errCh <- err
			return
		}
		msgCh <- msg
	}()
	select {
	case msg := <-msgCh:
		return msg, nil
	case err := <-errCh:
		return nil, err
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
	for {
		buf := make([]byte, 32*1024)
		n, err := r.Read(buf)
		if err != nil {
			if err == io.EOF {
				break // break loop and send EOF
			}
			return err
		} else if n > 0 {
			if err := snd.Send(&pb.Message{
				Input: &pb.Message_File{
					File: &pb.FdMessage{
						Fd:   fd,
						Data: buf[:n],
					},
				},
			}); err != nil {
				return err
			}
		}
	}
	return snd.Send(&pb.Message{
		Input: &pb.Message_File{
			File: &pb.FdMessage{
				Fd:  fd,
				EOF: true,
			},
		},
	})
}

var sigToName = map[syscall.Signal]string{}

func init() {
	for name, value := range signal.SignalMap {
		sigToName[value] = name
	}
}

type debugStream struct {
	msgStream
	prefix string
}

func (s *debugStream) Send(msg *pb.Message) error {
	switch m := msg.GetInput().(type) {
	case *pb.Message_File:
		if m.File.EOF {
			logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
		} else {
			logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
		}
	case *pb.Message_Resize:
		logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
	case *pb.Message_Signal:
		logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
	}
	return s.msgStream.Send(msg)
}

func (s *debugStream) Recv() (*pb.Message, error) {
	msg, err := s.msgStream.Recv()
	if err != nil {
		return nil, err
	}
	switch m := msg.GetInput().(type) {
	case *pb.Message_File:
		if m.File.EOF {
			logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
		} else {
			logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
		}
	case *pb.Message_Resize:
		logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
	case *pb.Message_Signal:
		logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
	}
	return msg, nil
}
441 controller/remote/server.go Normal file
@@ -0,0 +1,441 @@
package remote

import (
	"context"
	"io"
	"sync"
	"sync/atomic"
	"time"

	"github.com/docker/buildx/build"
	controllererrors "github.com/docker/buildx/controller/errdefs"
	"github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/processes"
	"github.com/docker/buildx/util/ioset"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/buildx/version"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)

func NewServer(buildFunc BuildFunc) *Server {
	return &Server{
		buildFunc: buildFunc,
	}
}

type Server struct {
	buildFunc BuildFunc
	session   map[string]*session
	sessionMu sync.Mutex
}

type session struct {
	buildOnGoing atomic.Bool
	statusChan   chan *pb.StatusResponse
	cancelBuild  func()
	buildOptions *pb.BuildOptions
	inputPipe    *io.PipeWriter

	result *build.ResultHandle

	processes *processes.Manager
}

func (s *session) cancelRunningProcesses() {
	s.processes.CancelRunningProcesses()
}

func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
	m.sessionMu.Lock()
	defer m.sessionMu.Unlock()
	s, ok := m.session[req.Ref]
	if !ok {
		return nil, errors.Errorf("unknown ref %q", req.Ref)
	}
	res = new(pb.ListProcessesResponse)
	for _, p := range s.processes.ListProcesses() {
		res.Infos = append(res.Infos, p)
	}
	return res, nil
}

func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
	m.sessionMu.Lock()
	defer m.sessionMu.Unlock()
	s, ok := m.session[req.Ref]
	if !ok {
		return nil, errors.Errorf("unknown ref %q", req.Ref)
	}
	return res, s.processes.DeleteProcess(req.ProcessID)
}

func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
	return &pb.InfoResponse{
		BuildxVersion: &pb.BuildxVersion{
			Package:  version.Package,
			Version:  version.Version,
			Revision: version.Revision,
		},
	}, nil
}

func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
	keys := make(map[string]struct{})

	m.sessionMu.Lock()
	for k := range m.session {
		keys[k] = struct{}{}
	}
	m.sessionMu.Unlock()

	var keysL []string
	for k := range keys {
		keysL = append(keysL, k)
	}
	return &pb.ListResponse{
		Keys: keysL,
	}, nil
}

func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
	key := req.Ref
	if key == "" {
		return nil, errors.New("disconnect: empty key")
	}

	m.sessionMu.Lock()
	if s, ok := m.session[key]; ok {
		if s.cancelBuild != nil {
			s.cancelBuild()
		}
		s.cancelRunningProcesses()
		if s.result != nil {
			s.result.Done()
		}
	}
	delete(m.session, key)
	m.sessionMu.Unlock()

	return &pb.DisconnectResponse{}, nil
}

func (m *Server) Close() error {
	m.sessionMu.Lock()
	for k := range m.session {
		if s, ok := m.session[k]; ok {
			if s.cancelBuild != nil {
				s.cancelBuild()
			}
			s.cancelRunningProcesses()
		}
	}
	m.sessionMu.Unlock()
	return nil
}

func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
	ref := req.Ref
	if ref == "" {
		return nil, errors.New("inspect: empty key")
	}
	var bo *pb.BuildOptions
	m.sessionMu.Lock()
	if s, ok := m.session[ref]; ok {
		bo = s.buildOptions
	} else {
		m.sessionMu.Unlock()
		return nil, errors.Errorf("inspect: unknown key %v", ref)
	}
	m.sessionMu.Unlock()
	return &pb.InspectResponse{Options: bo}, nil
}

func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
	ref := req.Ref
	if ref == "" {
		return nil, errors.New("build: empty key")
	}

	// Prepare status channel and session
	m.sessionMu.Lock()
	if m.session == nil {
		m.session = make(map[string]*session)
	}
	s, ok := m.session[ref]
	if ok {
		if !s.buildOnGoing.CompareAndSwap(false, true) {
			m.sessionMu.Unlock()
			return &pb.BuildResponse{}, errors.New("build ongoing")
		}
		s.cancelRunningProcesses()
		s.result = nil
	} else {
		s = &session{}
		s.buildOnGoing.Store(true)
	}

	s.processes = processes.NewManager()
	statusChan := make(chan *pb.StatusResponse)
	s.statusChan = statusChan
	inR, inW := io.Pipe()
	defer inR.Close()
	s.inputPipe = inW
	m.session[ref] = s
	m.sessionMu.Unlock()
	defer func() {
|
||||||
|
close(statusChan)
|
||||||
|
m.sessionMu.Lock()
|
||||||
|
s, ok := m.session[ref]
|
||||||
|
if ok {
|
||||||
|
s.statusChan = nil
|
||||||
|
s.buildOnGoing.Store(false)
|
||||||
|
}
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
}()
|
||||||
|
|
||||||
|
pw := pb.NewProgressWriter(statusChan)
|
||||||
|
|
||||||
|
// Build the specified request
|
||||||
|
ctx, cancel := context.WithCancel(ctx)
|
||||||
|
defer cancel()
|
||||||
|
resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
|
||||||
|
m.sessionMu.Lock()
|
||||||
|
if s, ok := m.session[ref]; ok {
|
||||||
|
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
|
||||||
|
if res != nil {
|
||||||
|
s.result = res
|
||||||
|
s.cancelBuild = cancel
|
||||||
|
s.buildOptions = req.Options
|
||||||
|
m.session[ref] = s
|
||||||
|
if buildErr != nil {
|
||||||
|
buildErr = controllererrors.WrapBuild(buildErr, ref)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
return nil, errors.Errorf("build: unknown key %v", ref)
|
||||||
|
}
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
|
||||||
|
if buildErr != nil {
|
||||||
|
return nil, buildErr
|
||||||
|
}
|
||||||
|
|
||||||
|
if resp == nil {
|
||||||
|
resp = &client.SolveResponse{}
|
||||||
|
}
|
||||||
|
return &pb.BuildResponse{
|
||||||
|
ExporterResponse: resp.ExporterResponse,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
|
||||||
|
ref := req.Ref
|
||||||
|
if ref == "" {
|
||||||
|
return errors.New("status: empty key")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Wait and get status channel prepared by Build()
|
||||||
|
var statusChan <-chan *pb.StatusResponse
|
||||||
|
for {
|
||||||
|
// TODO: timeout?
|
||||||
|
m.sessionMu.Lock()
|
||||||
|
if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
statusChan = m.session[ref].statusChan
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
break
|
||||||
|
}
|
||||||
|
|
||||||
|
// forward status
|
||||||
|
for ss := range statusChan {
|
||||||
|
if ss == nil {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
if err := stream.Send(ss); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
|
||||||
|
// Get the target ref from init message
|
||||||
|
msg, err := stream.Recv()
|
||||||
|
if err != nil {
|
||||||
|
if !errors.Is(err, io.EOF) {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
init := msg.GetInit()
|
||||||
|
if init == nil {
|
||||||
|
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
|
||||||
|
}
|
||||||
|
ref := init.Ref
|
||||||
|
if ref == "" {
|
||||||
|
return errors.New("input: no ref is provided")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Wait and get input stream pipe prepared by Build()
|
||||||
|
var inputPipeW *io.PipeWriter
|
||||||
|
for {
|
||||||
|
// TODO: timeout?
|
||||||
|
m.sessionMu.Lock()
|
||||||
|
if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
inputPipeW = m.session[ref].inputPipe
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
break
|
||||||
|
}
|
||||||
|
|
||||||
|
// Forward input stream
|
||||||
|
eg, ctx := errgroup.WithContext(context.TODO())
|
||||||
|
done := make(chan struct{})
|
||||||
|
msgCh := make(chan *pb.InputMessage)
|
||||||
|
eg.Go(func() error {
|
||||||
|
defer close(msgCh)
|
||||||
|
for {
|
||||||
|
msg, err := stream.Recv()
|
||||||
|
if err != nil {
|
||||||
|
if !errors.Is(err, io.EOF) {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
select {
|
||||||
|
case msgCh <- msg:
|
||||||
|
case <-done:
|
||||||
|
return nil
|
||||||
|
case <-ctx.Done():
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
eg.Go(func() (retErr error) {
|
||||||
|
defer close(done)
|
||||||
|
defer func() {
|
||||||
|
if retErr != nil {
|
||||||
|
inputPipeW.CloseWithError(retErr)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
inputPipeW.Close()
|
||||||
|
}()
|
||||||
|
for {
|
||||||
|
var msg *pb.InputMessage
|
||||||
|
select {
|
||||||
|
case msg = <-msgCh:
|
||||||
|
case <-ctx.Done():
|
||||||
|
return errors.Wrap(ctx.Err(), "canceled")
|
||||||
|
}
|
||||||
|
if msg == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
if data := msg.GetData(); data != nil {
|
||||||
|
if len(data.Data) > 0 {
|
||||||
|
_, err := inputPipeW.Write(data.Data)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if data.EOF {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
return eg.Wait()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
|
||||||
|
containerIn, containerOut := ioset.Pipe()
|
||||||
|
defer func() { containerOut.Close(); containerIn.Close() }()
|
||||||
|
|
||||||
|
initDoneCh := make(chan *processes.Process)
|
||||||
|
initErrCh := make(chan error)
|
||||||
|
eg, egCtx := errgroup.WithContext(context.TODO())
|
||||||
|
srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
|
||||||
|
eg.Go(func() error {
|
||||||
|
defer srvIOCancel()
|
||||||
|
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
|
||||||
|
defer func() {
|
||||||
|
if retErr != nil {
|
||||||
|
initErrCh <- retErr
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
ref := initMessage.Ref
|
||||||
|
cfg := initMessage.InvokeConfig
|
||||||
|
|
||||||
|
m.sessionMu.Lock()
|
||||||
|
s, ok := m.session[ref]
|
||||||
|
if !ok {
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
return errors.Errorf("invoke: unknown key %v", ref)
|
||||||
|
}
|
||||||
|
m.sessionMu.Unlock()
|
||||||
|
|
||||||
|
pid := initMessage.ProcessID
|
||||||
|
if pid == "" {
|
||||||
|
return errors.Errorf("invoke: specify process ID")
|
||||||
|
}
|
||||||
|
proc, ok := s.processes.Get(pid)
|
||||||
|
if !ok {
|
||||||
|
// Start a new process.
|
||||||
|
if cfg == nil {
|
||||||
|
return errors.New("no container config is provided")
|
||||||
|
}
|
||||||
|
var err error
|
||||||
|
proc, err = s.processes.StartProcess(pid, s.result, cfg)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Attach containerIn to this process
|
||||||
|
proc.ForwardIO(&containerIn, srvIOCancel)
|
||||||
|
initDoneCh <- proc
|
||||||
|
return nil
|
||||||
|
}, &ioServerConfig{
|
||||||
|
stdin: containerOut.Stdin,
|
||||||
|
stdout: containerOut.Stdout,
|
||||||
|
stderr: containerOut.Stderr,
|
||||||
|
// TODO: signal, resize
|
||||||
|
})
|
||||||
|
})
|
||||||
|
eg.Go(func() (rErr error) {
|
||||||
|
defer srvIOCancel()
|
||||||
|
// Wait for init done
|
||||||
|
var proc *processes.Process
|
||||||
|
select {
|
||||||
|
case p := <-initDoneCh:
|
||||||
|
proc = p
|
||||||
|
case err := <-initErrCh:
|
||||||
|
return err
|
||||||
|
case <-egCtx.Done():
|
||||||
|
return egCtx.Err()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Wait for IO done
|
||||||
|
select {
|
||||||
|
case <-srvIOCtx.Done():
|
||||||
|
return srvIOCtx.Err()
|
||||||
|
case err := <-proc.Done():
|
||||||
|
return err
|
||||||
|
case <-egCtx.Done():
|
||||||
|
return egCtx.Err()
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
return eg.Wait()
|
||||||
|
}
|
||||||
@@ -1,17 +1,14 @@
 variable "GO_VERSION" {
-  default = "1.18"
+  default = null
 }
-variable "BIN_OUT" {
-  default = "./bin"
-}
-variable "RELEASE_OUT" {
-  default = "./release-out"
-}
 variable "DOCS_FORMATS" {
   default = "md"
 }
+variable "DESTDIR" {
+  default = "./bin"
+}
 
-// Special target: https://github.com/docker/metadata-action#bake-definition
+# Special target: https://github.com/docker/metadata-action#bake-definition
 target "meta-helper" {
   tags = ["docker/buildx-bin:local"]
 }
@@ -48,6 +45,7 @@ target "validate-docs" {
   inherits = ["_common"]
   args = {
     FORMATS = DOCS_FORMATS
+    BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
   }
   dockerfile = "./hack/dockerfiles/docs.Dockerfile"
   target = "validate"
@@ -61,6 +59,13 @@ target "validate-authors" {
   output = ["type=cacheonly"]
 }
 
+target "validate-generated-files" {
+  inherits = ["_common"]
+  dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
+  target = "validate"
+  output = ["type=cacheonly"]
+}
+
 target "update-vendor" {
   inherits = ["_common"]
   dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -72,6 +77,7 @@ target "update-docs" {
   inherits = ["_common"]
   args = {
     FORMATS = DOCS_FORMATS
+    BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
   }
   dockerfile = "./hack/dockerfiles/docs.Dockerfile"
   target = "update"
@@ -85,6 +91,13 @@ target "update-authors" {
   output = ["."]
 }
 
+target "update-generated-files" {
+  inherits = ["_common"]
+  dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
+  target = "update"
+  output = ["."]
+}
+
 target "mod-outdated" {
   inherits = ["_common"]
   dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@@ -96,13 +109,13 @@ target "mod-outdated" {
 target "test" {
   inherits = ["_common"]
   target = "test-coverage"
-  output = ["./coverage"]
+  output = ["${DESTDIR}/coverage"]
 }
 
 target "binaries" {
   inherits = ["_common"]
   target = "binaries"
-  output = [BIN_OUT]
+  output = ["${DESTDIR}/build"]
   platforms = ["local"]
 }
 
@@ -126,7 +139,7 @@ target "binaries-cross" {
 target "release" {
   inherits = ["binaries-cross"]
   target = "release"
-  output = [RELEASE_OUT]
+  output = ["${DESTDIR}/release"]
 }
 
 target "image" {
@@ -143,3 +156,29 @@ target "image-local" {
   inherits = ["image"]
   output = ["type=docker"]
 }
+
+variable "HTTP_PROXY" {
+  default = ""
+}
+variable "HTTPS_PROXY" {
+  default = ""
+}
+variable "NO_PROXY" {
+  default = ""
+}
+
+target "integration-test-base" {
+  inherits = ["_common"]
+  args = {
+    HTTP_PROXY = HTTP_PROXY
+    HTTPS_PROXY = HTTPS_PROXY
+    NO_PROXY = NO_PROXY
+  }
+  target = "integration-test-base"
+  output = ["type=cacheonly"]
+}
+
+target "integration-test" {
+  inherits = ["integration-test-base"]
+  target = "integration-test"
+}
952
docs/bake-reference.md
Normal file
@@ -0,0 +1,952 @@
# Bake file reference

The Bake file is a file for defining workflows that you run using `docker buildx bake`.

## File format

You can define your Bake file in the following file formats:

- HashiCorp Configuration Language (HCL)
- JSON
- YAML (Compose file)

By default, Bake uses the following lookup order to find the configuration file:

1. `docker-bake.override.hcl`
2. `docker-bake.hcl`
3. `docker-bake.override.json`
4. `docker-bake.json`
5. `docker-compose.yaml`
6. `docker-compose.yml`

Bake searches for the file in the current working directory.
You can specify the file location explicitly using the `--file` flag:

```console
$ docker buildx bake --file=../docker/bake.hcl --print
```

## Syntax

The Bake file supports the following property types:

- `target`: build targets
- `group`: collections of build targets
- `variable`: build arguments and variables
- `function`: custom Bake functions

You define properties as hierarchical blocks in the Bake file.
You can assign one or more attributes to a property.

The following snippet shows a JSON representation of a simple Bake file.
This Bake file defines three properties: a variable, a group, and a target.

```json
{
  "variable": {
    "TAG": {
      "default": "latest"
    }
  },
  "group": {
    "default": {
      "targets": ["webapp"]
    }
  },
  "target": {
    "webapp": {
      "dockerfile": "Dockerfile",
      "tags": ["docker.io/username/webapp:${TAG}"]
    }
  }
}
```

In the JSON representation of a Bake file, properties are objects,
and attributes are values assigned to those objects.

The following example shows the same Bake file in the HCL format:

```hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["webapp"]
}

target "webapp" {
  dockerfile = "Dockerfile"
  tags = ["docker.io/username/webapp:${TAG}"]
}
```

HCL is the preferred format for Bake files.
Aside from syntactic differences,
HCL lets you use features that the JSON and YAML formats don't support.

The examples in this document use the HCL format.
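One such HCL-only feature is user-defined functions. As a minimal sketch (the `tag` helper and the names it uses are invented for illustration, not taken from this reference):

```hcl
# Hypothetical helper that builds a fully qualified image tag
# from a repository name and a version.
function "tag" {
  params = [repo, version]
  result = ["docker.io/username/${repo}:${version}"]
}

target "webapp" {
  dockerfile = "Dockerfile"
  tags = tag("webapp", "latest")
}
```

Functions like this keep repeated string interpolation in one place across many targets.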

## Target

A target reflects a single `docker build` invocation.
Consider the following build command:

```console
$ docker build \
  --file=Dockerfile.webapp \
  --tag=docker.io/username/webapp:latest \
  https://github.com/username/webapp
```

You can express this command in a Bake file as follows:

```hcl
target "webapp" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
  context = "https://github.com/username/webapp"
}
```

The following table shows the complete list of attributes that you can assign to a target:

| Name                                            | Type    | Description                                                          |
| ----------------------------------------------- | ------- | -------------------------------------------------------------------- |
| [`args`](#targetargs)                           | Map     | Build arguments                                                      |
| [`attest`](#targetattest)                       | List    | Build attestations                                                   |
| [`cache-from`](#targetcache-from)               | List    | External cache sources                                               |
| [`cache-to`](#targetcache-to)                   | List    | External cache destinations                                          |
| [`context`](#targetcontext)                     | String  | Set of files located in the specified path or URL                    |
| [`contexts`](#targetcontexts)                   | Map     | Additional build contexts                                            |
| [`dockerfile-inline`](#targetdockerfile-inline) | String  | Inline Dockerfile string                                             |
| [`dockerfile`](#targetdockerfile)               | String  | Dockerfile location                                                  |
| [`inherits`](#targetinherits)                   | List    | Inherit attributes from other targets                                |
| [`labels`](#targetlabels)                       | Map     | Metadata for images                                                  |
| [`matrix`](#targetmatrix)                       | Map     | Define a set of variables that forks a target into multiple targets. |
| [`name`](#targetname)                           | String  | Override the target name when using a matrix.                        |
| [`no-cache-filter`](#targetno-cache-filter)     | List    | Disable build cache for specific stages                              |
| [`no-cache`](#targetno-cache)                   | Boolean | Disable build cache completely                                       |
| [`output`](#targetoutput)                       | List    | Output destinations                                                  |
| [`platforms`](#targetplatforms)                 | List    | Target platforms                                                     |
| [`pull`](#targetpull)                           | Boolean | Always pull images                                                   |
| [`secret`](#targetsecret)                       | List    | Secrets to expose to the build                                       |
| [`ssh`](#targetssh)                             | List    | SSH agent sockets or keys to expose to the build                     |
| [`tags`](#targettags)                           | List    | Image names and tags                                                 |
| [`target`](#targettarget)                       | String  | Target build stage                                                   |

### `target.args`

Use the `args` attribute to define build arguments for the target.
This has the same effect as passing a [`--build-arg`][build-arg] flag to the build command.

```hcl
target "default" {
  args = {
    VERSION = "0.0.0+unknown"
  }
}
```

You can set `args` attributes to use `null` values.
Doing so forces the `target` to use the `ARG` value specified in the Dockerfile.

```hcl
variable "GO_VERSION" {
  default = "1.20.3"
}

target "webapp" {
  dockerfile = "webapp.Dockerfile"
  tags = ["docker.io/username/webapp"]
}

target "db" {
  args = {
    GO_VERSION = null
  }
  dockerfile = "db.Dockerfile"
  tags = ["docker.io/username/db"]
}
```

### `target.attest`

The `attest` attribute lets you apply [build attestations][attestations] to the target.
This attribute accepts the long-form CSV version of attestation parameters.

```hcl
target "default" {
  attest = [
    "type=provenance,mode=min",
    "type=sbom"
  ]
}
```

### `target.cache-from`

Build cache sources.
The builder imports cache from the locations you specify.
It uses the [Buildx cache storage backends][cache-backends],
and it works the same way as the [`--cache-from`][cache-from] flag.
This takes a list value, so you can specify multiple cache sources.

```hcl
target "app" {
  cache-from = [
    "type=s3,region=eu-west-1,bucket=mybucket",
    "user/repo:cache",
  ]
}
```

### `target.cache-to`

Build cache export destinations.
The builder exports its build cache to the locations you specify.
It uses the [Buildx cache storage backends][cache-backends],
and it works the same way as the [`--cache-to` flag][cache-to].
This takes a list value, so you can specify multiple cache export targets.

```hcl
target "app" {
  cache-to = [
    "type=s3,region=eu-west-1,bucket=mybucket",
    "type=inline"
  ]
}
```

### `target.context`

Specifies the location of the build context to use for this target.
Accepts a URL or a directory path.
This is the same as the [build context][context] positional argument
that you pass to the build command.

```hcl
target "app" {
  context = "./src/www"
}
```

This resolves to the current working directory (`"."`) by default.

```console
$ docker buildx bake --print -f - <<< 'target "default" {}'
[+] Building 0.0s (0/0)
{
  "target": {
    "default": {
      "context": ".",
      "dockerfile": "Dockerfile"
    }
  }
}
```

### `target.contexts`

Additional build contexts.
This is the same as the [`--build-context` flag][build-context].
This attribute takes a map, where keys result in named contexts that you can
reference in your builds.

You can specify different types of contexts, such as local directories, Git URLs,
and even other Bake targets. Bake automatically determines the type of
a context based on the pattern of the context value.

| Context type    | Example                                   |
| --------------- | ----------------------------------------- |
| Container image | `docker-image://alpine@sha256:0123456789` |
| Git URL         | `https://github.com/user/proj.git`        |
| HTTP URL        | `https://example.com/files`               |
| Local directory | `../path/to/src`                          |
| Bake target     | `target:base`                             |

#### Pin an image version

```hcl
# docker-bake.hcl
target "app" {
  contexts = {
    alpine = "docker-image://alpine:3.13"
  }
}
```

```Dockerfile
# Dockerfile
FROM alpine
RUN echo "Hello world"
```

#### Use a local directory

```hcl
# docker-bake.hcl
target "app" {
  contexts = {
    src = "../path/to/source"
  }
}
```

```Dockerfile
# Dockerfile
FROM scratch AS src
FROM golang
COPY --from=src . .
```

#### Use another target as base

> **Note**
>
> You should prefer to use regular multi-stage builds over this option. You can
> use this feature when you have multiple Dockerfiles that can't be easily
> merged into one.

```hcl
# docker-bake.hcl
target "base" {
  dockerfile = "baseapp.Dockerfile"
}
target "app" {
  contexts = {
    baseapp = "target:base"
  }
}
```

```Dockerfile
# Dockerfile
FROM baseapp
RUN echo "Hello world"
```

### `target.dockerfile-inline`

Uses the string value as an inline Dockerfile for the build target.

```hcl
target "default" {
  dockerfile-inline = "FROM alpine\nENTRYPOINT [\"echo\", \"hello\"]"
}
```

The `dockerfile-inline` attribute takes precedence over the `dockerfile` attribute.
If you specify both, Bake uses the inline version.

### `target.dockerfile`

Name of the Dockerfile to use for the build.
This is the same as the [`--file` flag][file] for the `docker build` command.

```hcl
target "default" {
  dockerfile = "./src/www/Dockerfile"
}
```

Resolves to `"Dockerfile"` by default.

```console
$ docker buildx bake --print -f - <<< 'target "default" {}'
[+] Building 0.0s (0/0)
{
  "target": {
    "default": {
      "context": ".",
      "dockerfile": "Dockerfile"
    }
  }
}
```

### `target.inherits`

A target can inherit attributes from other targets.
Use `inherits` to reference from one target to another.

In the following example,
the `app-dev` target specifies an image name and tag.
The `app-release` target uses `inherits` to reuse the tag name.

```hcl
variable "TAG" {
  default = "latest"
}

target "app-dev" {
  tags = ["docker.io/username/myapp:${TAG}"]
}

target "app-release" {
  inherits = ["app-dev"]
  platforms = ["linux/amd64", "linux/arm64"]
}
```

The `inherits` attribute is a list,
meaning you can reuse attributes from multiple other targets.
In the following example, the `app-release` target reuses attributes
from both the `app-dev` and `_release` targets.

```hcl
target "app-dev" {
  args = {
    GO_VERSION = "1.20"
    BUILDX_EXPERIMENTAL = 1
  }
  tags = ["docker.io/username/myapp"]
  dockerfile = "app.Dockerfile"
  labels = {
    "org.opencontainers.image.source" = "https://github.com/username/myapp"
  }
}

target "_release" {
  args = {
    BUILDKIT_CONTEXT_KEEP_GIT_DIR = 1
    BUILDX_EXPERIMENTAL = 0
  }
}

target "app-release" {
  inherits = ["app-dev", "_release"]
  platforms = ["linux/amd64", "linux/arm64"]
}
```

When inheriting attributes from multiple targets and there's a conflict,
the target that appears last in the `inherits` list takes precedence.
The previous example defines the `BUILDX_EXPERIMENTAL` argument twice for the `app-release` target.
It resolves to `0` because the `_release` target appears last in the inheritance chain:

```console
$ docker buildx bake --print app-release
[+] Building 0.0s (0/0)
{
  "group": {
    "default": {
      "targets": [
        "app-release"
      ]
    }
  },
  "target": {
    "app-release": {
      "context": ".",
      "dockerfile": "app.Dockerfile",
      "args": {
        "BUILDKIT_CONTEXT_KEEP_GIT_DIR": "1",
        "BUILDX_EXPERIMENTAL": "0",
        "GO_VERSION": "1.20"
      },
      "labels": {
        "org.opencontainers.image.source": "https://github.com/username/myapp"
      },
      "tags": [
        "docker.io/username/myapp"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ]
    }
  }
}
```

### `target.labels`

Assigns image labels to the build.
This is the same as the `--label` flag for `docker build`.

```hcl
target "default" {
  labels = {
    "org.opencontainers.image.source" = "https://github.com/username/myapp"
    "com.docker.image.source.entrypoint" = "Dockerfile"
  }
}
```

It's possible to use a `null` value for labels.
If you do, the builder uses the label value specified in the Dockerfile.
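As a sketch of what a `null` label looks like (the `com.example.release` key here is invented for illustration, not taken from this reference):

```hcl
target "default" {
  labels = {
    # Hypothetical label: setting it to null defers to the value
    # assigned by a LABEL instruction in the Dockerfile, if any.
    "com.example.release" = null
  }
}
```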
|
||||||
|
### `target.matrix`

A matrix strategy lets you fork a single target into multiple different
variants, based on parameters that you specify.
This works in a similar way to [Matrix strategies for GitHub Actions].
You can use this to reduce duplication in your bake definition.

The `matrix` attribute is a map of parameter names to lists of values.
Bake builds each possible combination of values as a separate target.

Each generated target **must** have a unique name.
To specify how target names should resolve, use the `name` attribute.

The following example resolves the `app` target to `app-foo` and `app-bar`.
It also uses the matrix value to define the [target build stage](#targettarget).

```hcl
target "app" {
  name = "app-${tgt}"
  matrix = {
    tgt = ["foo", "bar"]
  }
  target = tgt
}
```

```console
$ docker buildx bake --print app
[+] Building 0.0s (0/0)
{
  "group": {
    "app": {
      "targets": [
        "app-foo",
        "app-bar"
      ]
    },
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app-bar": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "target": "bar"
    },
    "app-foo": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "target": "foo"
    }
  }
}
```

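Since each generated variant appears as a regular target under its resolved name, you can also build just one of them, for example:

```console
$ docker buildx bake app-foo
```
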
#### Multiple axes

You can specify multiple keys in your matrix to fork a target on multiple axes.
When using multiple matrix keys, Bake builds every possible variant.

The following example builds four targets:

- `app-foo-1-0`
- `app-foo-2-0`
- `app-bar-1-0`
- `app-bar-2-0`

```hcl
target "app" {
  name = "app-${tgt}-${replace(version, ".", "-")}"
  matrix = {
    tgt = ["foo", "bar"]
    version = ["1.0", "2.0"]
  }
  target = tgt
  args = {
    VERSION = version
  }
}
```

#### Multiple values per matrix target

If you want to differentiate the matrix on more than just a single value,
you can use maps as matrix values. Bake creates a target for each map,
and you can access the nested values using dot notation.

The following example builds two targets:

- `app-foo-1-0`
- `app-bar-2-0`

```hcl
target "app" {
  name = "app-${item.tgt}-${replace(item.version, ".", "-")}"
  matrix = {
    item = [
      {
        tgt = "foo"
        version = "1.0"
      },
      {
        tgt = "bar"
        version = "2.0"
      }
    ]
  }
  target = item.tgt
  args = {
    VERSION = item.version
  }
}
```

### `target.name`

Specify name resolution for targets that use a matrix strategy.
The following example resolves the `app` target to `app-foo` and `app-bar`.

```hcl
target "app" {
  name = "app-${tgt}"
  matrix = {
    tgt = ["foo", "bar"]
  }
  target = tgt
}
```

### `target.no-cache-filter`

Don't use build cache for the specified stages.
This is the same as the `--no-cache-filter` flag for `docker build`.
The following example avoids build cache for the `foo` build stage.

```hcl
target "default" {
  no-cache-filter = ["foo"]
}
```

### `target.no-cache`

Don't use cache when building the image.
This is the same as the `--no-cache` flag for `docker build`.

```hcl
target "default" {
  no-cache = 1
}
```

### `target.output`

Configuration for exporting the build output.
This is the same as the [`--output` flag][output].
The following example configures the target to use a cache-only output:

```hcl
target "default" {
  output = ["type=cacheonly"]
}
```

### `target.platforms`

Set target platforms for the build target.
This is the same as the [`--platform` flag][platform].
The following example creates a multi-platform build for three architectures.

```hcl
target "default" {
  platforms = ["linux/amd64", "linux/arm64", "linux/arm/v7"]
}
```

### `target.pull`

Configures whether the builder should attempt to pull images when building the target.
This is the same as the `--pull` flag for `docker build`.
The following example forces the builder to always pull all images referenced in the build target.

```hcl
target "default" {
  pull = "always"
}
```

### `target.secret`

Defines secrets to expose to the build target.
This is the same as the [`--secret` flag][secret].

```hcl
variable "HOME" {
  default = null
}

target "default" {
  secret = [
    "type=env,id=KUBECONFIG",
    "type=file,id=aws,src=${HOME}/.aws/credentials"
  ]
}
```

This lets you [mount the secret][run_mount_secret] in your Dockerfile.

```dockerfile
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws cloudfront create-invalidation ...
RUN --mount=type=secret,id=KUBECONFIG \
    KUBECONFIG=$(cat /run/secrets/KUBECONFIG) helm upgrade --install
```

### `target.ssh`

Defines SSH agent sockets or keys to expose to the build.
This is the same as the [`--ssh` flag][ssh].
This can be useful if you need to access private repositories during a build.

```hcl
target "default" {
  ssh = ["default"]
}
```

```dockerfile
FROM alpine
RUN --mount=type=ssh \
    apk add git openssh-client \
    && install -m 0700 -d ~/.ssh \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts \
    && git clone git@github.com:user/my-private-repo.git
```

### `target.tags`

Image names and tags to use for the build target.
This is the same as the [`--tag` flag][tag].

```hcl
target "default" {
  tags = [
    "org/repo:latest",
    "myregistry.azurecr.io/team/image:v1"
  ]
}
```

### `target.target`

Set the target build stage to build.
This is the same as the [`--target` flag][target].

```hcl
target "default" {
  target = "binaries"
}
```

## Group

Groups allow you to invoke multiple builds (targets) at once.

```hcl
group "default" {
  targets = ["db", "webapp-dev"]
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
}

target "db" {
  dockerfile = "Dockerfile.db"
  tags = ["docker.io/username/db"]
}
```

Groups take precedence over targets if both exist with the same name.
The following bake file builds the `default` group.
Bake ignores the `default` target.

```hcl
target "default" {
  dockerfile-inline = "FROM ubuntu"
}

group "default" {
  targets = ["alpine", "debian"]
}
target "alpine" {
  dockerfile-inline = "FROM alpine"
}
target "debian" {
  dockerfile-inline = "FROM debian"
}
```

## Variable

The HCL file format supports variable block definitions.
You can use variables as build arguments in your Dockerfile,
or interpolate them in attribute values in your Bake file.

```hcl
variable "TAG" {
  default = "latest"
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:${TAG}"]
}
```

You can assign a default value for a variable in the Bake file,
or assign a `null` value to it. If you assign a `null` value,
Buildx uses the default value from the Dockerfile instead.

You can override variable defaults set in the Bake file using environment variables.
The following example sets the `TAG` variable to `dev`,
overriding the default `latest` value shown in the previous example.

```console
$ TAG=dev docker buildx bake webapp-dev
```

### Built-in variables

The following variables are built-ins that you can use with Bake without having
to define them.

| Variable              | Description                                                                         |
| --------------------- | ----------------------------------------------------------------------------------- |
| `BAKE_CMD_CONTEXT`    | Holds the main context when building using a remote Bake file.                      |
| `BAKE_LOCAL_PLATFORM` | Returns the current platform’s default platform specification (e.g. `linux/amd64`). |

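As a minimal sketch of interpolating a built-in (no `variable` block is needed for these), assuming you want the build pinned to the host's platform:

```hcl
target "default" {
  # BAKE_LOCAL_PLATFORM resolves to the host platform, e.g. "linux/amd64"
  platforms = [BAKE_LOCAL_PLATFORM]
}
```
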
### Use environment variable as default

You can set a Bake variable to use the value of an environment variable as a default value:

```hcl
variable "HOME" {
  default = "$HOME"
}
```

### Interpolate variables into attributes

To interpolate a variable into an attribute string value,
you must use curly brackets.
The following doesn't work:

```hcl
variable "HOME" {
  default = "$HOME"
}

target "default" {
  ssh = ["default=$HOME/.ssh/id_rsa"]
}
```

Wrap the variable in curly brackets where you want to insert it:

```diff
  variable "HOME" {
    default = "$HOME"
  }

  target "default" {
-   ssh = ["default=$HOME/.ssh/id_rsa"]
+   ssh = ["default=${HOME}/.ssh/id_rsa"]
  }
```

Before you can interpolate a variable into an attribute,
you must declare it in the bake file,
as demonstrated in the following example.

```console
$ cat docker-bake.hcl
target "default" {
  dockerfile-inline = "FROM ${BASE_IMAGE}"
}
$ docker buildx bake
[+] Building 0.0s (0/0)
docker-bake.hcl:2
--------------------
   1 |     target "default" {
   2 | >>>   dockerfile-inline = "FROM ${BASE_IMAGE}"
   3 |     }
   4 |
--------------------
ERROR: docker-bake.hcl:2,31-41: Unknown variable; There is no variable named "BASE_IMAGE"., and 1 other diagnostic(s)
$ cat >> docker-bake.hcl

variable "BASE_IMAGE" {
  default = "alpine"
}

$ docker buildx bake
[+] Building 0.6s (5/5) FINISHED
```

## Function

A [set of general-purpose functions][bake_stdlib]
provided by [go-cty][go-cty]
is available for use in HCL files:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
  args = {
    buildno = "${add(123, 1)}"
  }
}
```

In addition, [user defined functions][userfunc]
are also supported:

```hcl
# docker-bake.hcl
function "increment" {
  params = [number]
  result = number + 1
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
  args = {
    buildno = "${increment(123)}"
  }
}
```

> **Note**
>
> See the [User defined HCL functions][hcl-funcs] page for more details.

<!-- external links -->

[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/engine/reference/commandline/build/#build-arg
[build-context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[cache-backends]: https://docs.docker.com/build/cache/backends/
[cache-from]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from
[cache-to]: https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to
[context]: https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context
[file]: https://docs.docker.com/engine/reference/commandline/build/#file
[go-cty]: https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib
[hcl-funcs]: https://docs.docker.com/build/bake/hcl-funcs/
[output]: https://docs.docker.com/engine/reference/commandline/buildx_build/#output
[platform]: https://docs.docker.com/engine/reference/commandline/buildx_build/#platform
[run_mount_secret]: https://docs.docker.com/engine/reference/builder/#run---mounttypesecret
[secret]: https://docs.docker.com/engine/reference/commandline/buildx_build/#secret
[ssh]: https://docs.docker.com/engine/reference/commandline/buildx_build/#ssh
[tag]: https://docs.docker.com/engine/reference/commandline/build/#tag
[target]: https://docs.docker.com/engine/reference/commandline/build/#target
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc

# Defining additional build contexts and linking targets

In addition to the main `context` key that defines the build context, each target
can also define additional named contexts with a map defined with key `contexts`.
These values map to the `--build-context` flag in the [build command](https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context).

Inside the Dockerfile these contexts can be used with the `FROM` instruction or `--from` flag.

The value can be a local source directory, container image (with `docker-image://` prefix),
Git URL, HTTP URL, or the name of another target in the Bake file (with `target:` prefix).

## Pinning alpine image

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello world"
```

```hcl
# docker-bake.hcl
target "app" {
  contexts = {
    alpine = "docker-image://alpine:3.13"
  }
}
```

## Using a secondary source directory

```dockerfile
# syntax=docker/dockerfile:1
FROM scratch AS src

FROM golang
COPY --from=src . .
```

```hcl
# docker-bake.hcl
target "app" {
  contexts = {
    src = "../path/to/source"
  }
}
```

## Using a result of one target as a base image in another target

To use a result of one target as a build context of another, specify the target
name with `target:` prefix.

```dockerfile
# syntax=docker/dockerfile:1
FROM baseapp
RUN echo "Hello world"
```

```hcl
# docker-bake.hcl
target "base" {
  dockerfile = "baseapp.Dockerfile"
}

target "app" {
  contexts = {
    baseapp = "target:base"
  }
}
```

Note that in most cases you should just use a single multi-stage Dockerfile with
multiple targets for similar behavior. This approach is recommended only when you
have multiple Dockerfiles that can't easily be merged into one.

# Building from Compose file

## Specification

Bake uses the [compose-spec](https://docs.docker.com/compose/compose-file/) to
parse a compose file and translate each service to a [target](file-definition.md#target).

```yaml
# docker-compose.yml
services:
  webapp-dev:
    build: &build-dev
      dockerfile: Dockerfile.webapp
      tags:
        - docker.io/username/webapp:latest
      cache_from:
        - docker.io/username/webapp:cache
      cache_to:
        - docker.io/username/webapp:cache

  webapp-release:
    build:
      <<: *build-dev
      x-bake:
        platforms:
          - linux/amd64
          - linux/arm64

  db:
    image: docker.io/username/db
    build:
      dockerfile: Dockerfile.db
```

```console
$ docker buildx bake --print
```
```json
{
  "group": {
    "default": {
      "targets": [
        "db",
        "webapp-dev",
        "webapp-release"
      ]
    }
  },
  "target": {
    "db": {
      "context": ".",
      "dockerfile": "Dockerfile.db",
      "tags": [
        "docker.io/username/db"
      ]
    },
    "webapp-dev": {
      "context": ".",
      "dockerfile": "Dockerfile.webapp",
      "tags": [
        "docker.io/username/webapp:latest"
      ],
      "cache-from": [
        "docker.io/username/webapp:cache"
      ],
      "cache-to": [
        "docker.io/username/webapp:cache"
      ]
    },
    "webapp-release": {
      "context": ".",
      "dockerfile": "Dockerfile.webapp",
      "tags": [
        "docker.io/username/webapp:latest"
      ],
      "cache-from": [
        "docker.io/username/webapp:cache"
      ],
      "cache-to": [
        "docker.io/username/webapp:cache"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ]
    }
  }
}
```

Unlike the [HCL format](file-definition.md#hcl-definition), there are some
limitations with the compose format:

* Specifying variables or global scope attributes is not yet supported
* The `inherits` service field is not supported, but you can use [YAML anchors](https://docs.docker.com/compose/compose-file/#fragments) to reference other services, as in the example above

## Extension field with `x-bake`

Even if some fields are not (yet) available in the compose specification, you
can use the [special extension](https://docs.docker.com/compose/compose-file/#extension)
field `x-bake` in your compose file to evaluate extra fields:

```yaml
# docker-compose.yml
services:
  addon:
    image: ct-addon:bar
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        tags:
          - ct-addon:foo
          - ct-addon:alp
        platforms:
          - linux/amd64
          - linux/arm64
        cache-from:
          - user/app:cache
          - type=local,src=path/to/cache
        cache-to:
          - type=local,dest=path/to/cache
        pull: true

  aws:
    image: ct-fake-aws:bar
    build:
      dockerfile: ./aws.Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        secret:
          - id=mysecret,src=./secret
          - id=mysecret2,src=./secret2
        platforms: linux/arm64
        output: type=docker
        no-cache: true
```

```console
$ docker buildx bake --print
```
```json
{
  "group": {
    "default": {
      "targets": [
        "aws",
        "addon"
      ]
    }
  },
  "target": {
    "addon": {
      "context": ".",
      "dockerfile": "./Dockerfile",
      "args": {
        "CT_ECR": "foo",
        "CT_TAG": "bar"
      },
      "tags": [
        "ct-addon:foo",
        "ct-addon:alp"
      ],
      "cache-from": [
        "user/app:cache",
        "type=local,src=path/to/cache"
      ],
      "cache-to": [
        "type=local,dest=path/to/cache"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ],
      "pull": true
    },
    "aws": {
      "context": ".",
      "dockerfile": "./aws.Dockerfile",
      "args": {
        "CT_ECR": "foo",
        "CT_TAG": "bar"
      },
      "tags": [
        "ct-fake-aws:bar"
      ],
      "secret": [
        "id=mysecret,src=./secret",
        "id=mysecret2,src=./secret2"
      ],
      "platforms": [
        "linux/arm64"
      ],
      "output": [
        "type=docker"
      ],
      "no-cache": true
    }
  }
}
```

Complete list of valid fields for `x-bake`:

* `cache-from`
* `cache-to`
* `no-cache`
* `no-cache-filter`
* `output`
* `platforms`
* `pull`
* `secret`
* `ssh`
* `tags`

# Configuring builds

Bake supports loading build definition from files, but sometimes you need even
more flexibility to configure this definition.

For this use case, you can define variables inside the bake files that can be
set by the user with environment variables or by [attribute definitions](#global-scope-attributes)
in other bake files. If you wish to change a specific value for a single
invocation you can use the `--set` flag [from the command line](#from-command-line).

## Global scope attributes

You can define global scope attributes in HCL/JSON and use them for code reuse
and setting values for variables. This means you can do a "data-only" HCL file
with the values you want to set/override and use it in the list of regular
output files.

```hcl
# docker-bake.hcl
variable "FOO" {
  default = "abc"
}

target "app" {
  args = {
    v1 = "pre-${FOO}"
  }
}
```

You can use this file directly:

```console
$ docker buildx bake --print app
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "v1": "pre-abc"
      }
    }
  }
}
```

Or create an override configuration file:

```hcl
# env.hcl
WHOAMI="myuser"
FOO="def-${WHOAMI}"
```

And invoke bake together with both of the files:

```console
$ docker buildx bake -f docker-bake.hcl -f env.hcl --print app
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "v1": "pre-def-myuser"
      }
    }
  }
}
```

## From command line

You can also override target configurations from the command line with the
[`--set` flag](https://docs.docker.com/engine/reference/commandline/buildx_bake/#set):

```hcl
# docker-bake.hcl
target "app" {
  args = {
    mybuildarg = "foo"
  }
}
```

```console
$ docker buildx bake --set app.args.mybuildarg=bar --set app.platform=linux/arm64 app --print
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "mybuildarg": "bar"
      },
      "platforms": [
        "linux/arm64"
      ]
    }
  }
}
```

Pattern matching syntax defined in [https://golang.org/pkg/path/#Match](https://golang.org/pkg/path/#Match)
is also supported:

```console
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with "foo"
$ docker buildx bake --set *.platform=linux/arm64     # overrides platform for all targets
$ docker buildx bake --set foo*.no-cache              # bypass caching only for targets starting with "foo"
```

Complete list of overridable fields:

* `args`
* `cache-from`
* `cache-to`
* `context`
* `dockerfile`
* `labels`
* `no-cache`
* `output`
* `platform`
* `pull`
* `secrets`
* `ssh`
* `tags`
* `target`

## Using variables in variables across files

When multiple files are specified, one file can use variables defined in
another file.

```hcl
# docker-bake1.hcl
variable "FOO" {
  default = upper("${BASE}def")
}

variable "BAR" {
  default = "-${FOO}-"
}

target "app" {
  args = {
    v1 = "pre-${BAR}"
  }
}
```

```hcl
# docker-bake2.hcl
variable "BASE" {
  default = "abc"
}

target "app" {
  args = {
    v2 = "${FOO}-post"
  }
}
```

```console
$ docker buildx bake -f docker-bake1.hcl -f docker-bake2.hcl --print app
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "v1": "pre--ABCDEF-",
        "v2": "ABCDEF-post"
      }
    }
  }
}
```

# Bake file definition
|
|
||||||
|
|
||||||
`buildx bake` supports HCL, JSON and Compose file format for defining build
|
|
||||||
[groups](#group), [targets](#target) as well as [variables](#variable) and
|
|
||||||
[functions](#functions). It looks for build definition files in the current
|
|
||||||
directory in the following order:
|
|
||||||
|
|
||||||
* `docker-compose.yml`
|
|
||||||
* `docker-compose.yaml`
|
|
||||||
* `docker-bake.json`
|
|
||||||
* `docker-bake.override.json`
|
|
||||||
* `docker-bake.hcl`
|
|
||||||
* `docker-bake.override.hcl`
|
|
||||||
|
|
||||||
## Specification
|
|
||||||
|
|
||||||
Inside a bake file you can declare group, target and variable blocks to define
|
|
||||||
project specific reusable build flows.
|
|
||||||
|
|
||||||
### Target
|
|
||||||
|
|
||||||
A target reflects a single docker build invocation with the same options that
|
|
||||||
you would specify for `docker build`:
|
|
||||||
|
|
||||||
```hcl
|
|
||||||
# docker-bake.hcl
|
|
||||||
target "webapp-dev" {
|
|
||||||
dockerfile = "Dockerfile.webapp"
|
|
||||||
tags = ["docker.io/username/webapp:latest"]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
```console
|
|
||||||
$ docker buildx bake webapp-dev
|
|
||||||
```
|
|
||||||
|
|
||||||
> **Note**
|
|
||||||
>
|
|
||||||
> In the case of compose files, each service corresponds to a target.
|
|
||||||
> If compose service name contains a dot it will be replaced with an underscore.
|
|
||||||
|
|
||||||
Complete list of valid target fields available for [HCL](#hcl-definition) and
|
|
||||||
[JSON](#json-definition) definitions:
|
|
||||||
|
|
||||||
| Name                | Type   | Description                                                                                                                                     |
|---------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `inherits`          | List   | [Inherit build options](#merging-and-inheritance) from other targets                                                                            |
| `args`              | Map    | Set build-time variables (same as [`--build-arg` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                     |
| `cache-from`        | List   | External cache sources (same as [`--cache-from` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                      |
| `cache-to`          | List   | Cache export destinations (same as [`--cache-to` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                     |
| `context`           | String | Set of files located in the specified path or URL                                                                                               |
| `contexts`          | Map    | Additional build contexts (same as [`--build-context` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                |
| `dockerfile`        | String | Name of the Dockerfile (same as [`--file` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                            |
| `dockerfile-inline` | String | Inline Dockerfile content                                                                                                                       |
| `labels`            | Map    | Set metadata for an image (same as [`--label` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                        |
| `no-cache`          | Bool   | Do not use cache when building the image (same as [`--no-cache` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))      |
| `no-cache-filter`   | List   | Do not cache specified stages (same as [`--no-cache-filter` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))          |
| `output`            | List   | Output destination (same as [`--output` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                              |
| `platforms`         | List   | Set target platforms for build (same as [`--platform` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                |
| `pull`              | Bool   | Always attempt to pull all referenced images (same as [`--pull` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))      |
| `secret`            | List   | Secret to expose to the build (same as [`--secret` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))                   |
| `ssh`               | List   | SSH agent socket or keys to expose to the build (same as [`--ssh` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))    |
| `tags`              | List   | Name and optionally a tag in the format `name:tag` (same as [`--tag` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `target`            | String | Set the target build stage to build (same as [`--target` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/))             |

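For illustration, several of these fields can be combined on a single target. This is a minimal sketch, not a definitive recipe; the `GO_VERSION` build arg and the `:buildcache` registry reference below are hypothetical values:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags       = ["docker.io/username/webapp:latest"]
  # hypothetical build arg consumed by the Dockerfile
  args = {
    GO_VERSION = "1.20"
  }
  # hypothetical registry cache reference
  cache-from = ["type=registry,ref=docker.io/username/webapp:buildcache"]
  cache-to   = ["type=registry,ref=docker.io/username/webapp:buildcache,mode=max"]
  platforms  = ["linux/amd64", "linux/arm64"]
}
```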
### Group

A group is a grouping of targets:

```hcl
# docker-bake.hcl
group "build" {
  targets = ["db", "webapp-dev"]
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
}

target "db" {
  dockerfile = "Dockerfile.db"
  tags = ["docker.io/username/db"]
}
```

```console
$ docker buildx bake build
```

### Variable

Similar to how Terraform provides a way to [define variables](https://www.terraform.io/docs/configuration/variables.html#declaring-an-input-variable),
the HCL file format also supports variable block definitions. These can be used
to define variables with values provided by the current environment, or a
default value when unset:

```hcl
# docker-bake.hcl
variable "TAG" {
  default = "latest"
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:${TAG}"]
}
```

```console
$ docker buildx bake webapp-dev          # will use the default value "latest"
$ TAG=dev docker buildx bake webapp-dev  # will use the TAG environment variable value
```

> **Tip**
>
> See also the [Configuring builds](configuring-build.md) page for advanced usage.

### Functions

A [set of generally useful functions](https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go)
provided by [go-cty](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib)
are available for use in HCL files:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
  args = {
    buildno = "${add(123, 1)}"
  }
}
```

In addition, [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc)
are also supported:

```hcl
# docker-bake.hcl
function "increment" {
  params = [number]
  result = number + 1
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
  args = {
    buildno = "${increment(123)}"
  }
}
```

> **Note**
>
> See the [User defined HCL functions](hcl-funcs.md) page for more details.

## Built-in variables

* `BAKE_CMD_CONTEXT` can be used to access the main `context` for the bake command
  from a bake file that has been [imported remotely](file-definition.md#remote-definition).
* `BAKE_LOCAL_PLATFORM` returns the current platform's default platform
  specification (e.g. `linux/amd64`).

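These built-ins can be interpolated like any other variable. As a minimal sketch (the target below is illustrative, not part of the examples above), `BAKE_LOCAL_PLATFORM` can pin a target to the platform of the machine running bake:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  # build only for the host's default platform, e.g. linux/amd64
  platforms = [BAKE_LOCAL_PLATFORM]
}
```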
## Merging and inheritance

Multiple files can include the same target, and the final build options will be
determined by merging them together:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
}
```
```hcl
# docker-bake2.hcl
target "webapp-dev" {
  tags = ["docker.io/username/webapp:dev"]
}
```
```console
$ docker buildx bake -f docker-bake.hcl -f docker-bake2.hcl webapp-dev
```

A group can specify its list of targets with the `targets` option. A target can
inherit build options by setting the `inherits` option to the list of targets or
groups to inherit from:

```hcl
# docker-bake.hcl
target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:${TAG}"]
}

target "webapp-release" {
  inherits = ["webapp-dev"]
  platforms = ["linux/amd64", "linux/arm64"]
}
```

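Fields set directly on the inheriting target take precedence over inherited ones. As an illustrative sketch (the `:release` tag below is a hypothetical value), a release target can both inherit from and override `webapp-dev`:

```hcl
# docker-bake.hcl
target "webapp-release" {
  inherits = ["webapp-dev"]
  # overrides the tags inherited from webapp-dev (hypothetical release tag)
  tags = ["docker.io/username/webapp:release"]
  platforms = ["linux/amd64", "linux/arm64"]
}
```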
## `default` target/group

When you invoke `bake` you specify what targets/groups you want to build. If no
arguments are specified, the group/target named `default` will be built:

```hcl
# docker-bake.hcl
target "default" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:latest"]
}
```

```console
$ docker buildx bake
```

## Definitions

### HCL definition

The HCL definition file is recommended, as its experience is more aligned with the
buildx UX and it also allows better code reuse, different target groups, and
extended features.

```hcl
# docker-bake.hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["db", "webapp-dev"]
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp:${TAG}"]
}

target "webapp-release" {
  inherits = ["webapp-dev"]
  platforms = ["linux/amd64", "linux/arm64"]
}

target "db" {
  dockerfile = "Dockerfile.db"
  tags = ["docker.io/username/db"]
}
```

### JSON definition

```json
{
  "variable": {
    "TAG": {
      "default": "latest"
    }
  },
  "group": {
    "default": {
      "targets": [
        "db",
        "webapp-dev"
      ]
    }
  },
  "target": {
    "webapp-dev": {
      "dockerfile": "Dockerfile.webapp",
      "tags": [
        "docker.io/username/webapp:${TAG}"
      ]
    },
    "webapp-release": {
      "inherits": [
        "webapp-dev"
      ],
      "platforms": [
        "linux/amd64",
        "linux/arm64"
      ]
    },
    "db": {
      "dockerfile": "Dockerfile.db",
      "tags": [
        "docker.io/username/db"
      ]
    }
  }
}
```

### Compose file

```yaml
# docker-compose.yml
services:
  webapp:
    image: docker.io/username/webapp:latest
    build:
      dockerfile: Dockerfile.webapp

  db:
    image: docker.io/username/db
    build:
      dockerfile: Dockerfile.db
```

> **Note**
>
> See the [Building from Compose file](compose-file.md) page for more details.

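Bake picks up a `docker-compose.yml` file in the working directory by default, so with the file above each service can be built as a target (invocation shown for illustration):

```console
$ docker buildx bake webapp db
```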
## Remote definition

You can also build bake files directly from a remote Git repository or HTTPS URL:

```console
$ docker buildx bake "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/docker/cli.git#v20.10.11
#1 0.745 e8f1871b077b64bcb4a13334b7146492773769f7 refs/tags/v20.10.11
#1 2.022 From https://github.com/docker/cli
#1 2.022  * [new tag] v20.10.11 -> v20.10.11
#1 DONE 2.9s
```
```json
{
  "group": {
    "default": {
      "targets": [
        "binary"
      ]
    }
  },
  "target": {
    "binary": {
      "context": "https://github.com/docker/cli.git#v20.10.11",
      "dockerfile": "Dockerfile",
      "args": {
        "BASE_VARIANT": "alpine",
        "GO_STRIP": "",
        "VERSION": ""
      },
      "target": "binary",
      "platforms": [
        "local"
      ],
      "output": [
        "build"
      ]
    }
  }
}
```

As you can see, the context is fixed to `https://github.com/docker/cli.git` even if
[no context is actually defined](https://github.com/docker/cli/blob/2776a6d694f988c0c1df61cad4bfac0f54e481c8/docker-bake.hcl#L17-L26)
in the definition.

If you want to access the main context for the bake command from a bake file
that has been imported remotely, you can use the [`BAKE_CMD_CONTEXT` built-in var](#built-in-variables).

```console
$ curl -s https://raw.githubusercontent.com/tonistiigi/buildx/remote-test/docker-bake.hcl
```
```hcl
target "default" {
  context = BAKE_CMD_CONTEXT
  dockerfile-inline = <<EOT
FROM alpine
WORKDIR /src
COPY . .
RUN ls -l && stop
EOT
}
```

```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" --print
```
```json
{
  "target": {
    "default": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
    }
  }
}
```

```console
$ touch foo bar
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test"
```
```text
...
 > [4/4] RUN ls -l && stop:
#8 0.101 total 0
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 bar
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 foo
#8 0.102 /bin/sh: stop: not found
```

```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/tonistiigi/buildx.git#remote-test
#1 0.429 577303add004dd7efeb13434d69ea030d35f7888 refs/heads/remote-test
#1 CACHED
```
```json
{
  "target": {
    "default": {
      "context": "https://github.com/docker/cli.git#v20.10.11",
      "dockerfile": "Dockerfile",
      "dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
    }
  }
}
```

```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11"
```
```text
...
 > [4/4] RUN ls -l && stop:
#8 0.136 drwxrwxrwx 5 root root 4096 Jul 27 18:31 kubernetes
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 man
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 opts
#8 0.136 -rw-rw-rw- 1 root root 1893 Jul 27 18:31 poule.yml
#8 0.136 drwxrwxrwx 7 root root 4096 Jul 27 18:31 scripts
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 service
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 templates
#8 0.136 drwxrwxrwx 10 root root 4096 Jul 27 18:31 vendor
#8 0.136 -rwxrwxrwx 1 root root 9620 Jul 27 18:31 vendor.conf
#8 0.136 /bin/sh: stop: not found
```

# User defined HCL functions

## Using interpolation to tag an image with the git sha

As shown in the [File definition](file-definition.md#variable) page, `bake`
supports variable blocks which are assigned to matching environment variables
or default values:

```hcl
# docker-bake.hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["webapp"]
}

target "webapp" {
  tags = ["docker.io/username/webapp:${TAG}"]
}
```

Alternatively, in JSON format:

```json
{
  "variable": {
    "TAG": {
      "default": "latest"
    }
  },
  "group": {
    "default": {
      "targets": ["webapp"]
    }
  },
  "target": {
    "webapp": {
      "tags": ["docker.io/username/webapp:${TAG}"]
    }
  }
}
```

```console
$ docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": [
        "docker.io/username/webapp:latest"
      ]
    }
  }
}
```

```console
$ TAG=$(git rev-parse --short HEAD) docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": [
        "docker.io/username/webapp:985e9e9"
      ]
    }
  }
}
```

## Using the `add` function

You can use [`go-cty` stdlib functions](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib).
Here we are using the `add` function.

```hcl
# docker-bake.hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["webapp"]
}

target "webapp" {
  args = {
    buildno = "${add(123, 1)}"
  }
}
```

```console
$ docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "buildno": "124"
      }
    }
  }
}
```

## Defining an `increment` function

It also supports [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc).
The following example defines a simple `increment` function.

```hcl
# docker-bake.hcl
function "increment" {
  params = [number]
  result = number + 1
}

group "default" {
  targets = ["webapp"]
}

target "webapp" {
  args = {
    buildno = "${increment(123)}"
  }
}
```

```console
$ docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "buildno": "124"
      }
    }
  }
}
```

## Only adding tags if a variable is not empty using `notequal`

Here we are using the conditional `notequal` function, which exists for
symmetry with the `equal` one.

```hcl
# docker-bake.hcl
variable "TAG" {
  default = ""
}

group "default" {
  targets = [
    "webapp",
  ]
}

target "webapp" {
  context = "."
  dockerfile = "Dockerfile"
  tags = [
    "my-image:latest",
    notequal("", TAG) ? "my-image:${TAG}" : "",
  ]
}
```

```console
$ docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": [
        "my-image:latest"
      ]
    }
  }
}
```

## Using variables in functions

Variables can reference other variables, just as target blocks can. Stdlib
functions can also be called, but user-defined functions can't at the moment.

```hcl
# docker-bake.hcl
variable "REPO" {
  default = "user/repo"
}

function "tag" {
  params = [tag]
  result = ["${REPO}:${tag}"]
}

target "webapp" {
  tags = tag("v1")
}
```

```console
$ docker buildx bake --print webapp
```
```json
{
  "group": {
    "default": {
      "targets": [
        "webapp"
      ]
    }
  },
  "target": {
    "webapp": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "tags": [
        "user/repo:v1"
      ]
    }
  }
}
```

## Using typed variables

Non-string variables are also accepted. A value passed via the environment is
first parsed into the suitable type.

```hcl
# docker-bake.hcl
variable "FOO" {
  default = 3
}

variable "IS_FOO" {
  default = true
}

target "app" {
  args = {
    v1 = FOO > 5 ? "higher" : "lower"
    v2 = IS_FOO ? "yes" : "no"
  }
}
```

```console
$ docker buildx bake --print app
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "v1": "lower",
        "v2": "yes"
      }
    }
  }
}
```

# High-level build options with Bake

> This command is experimental.
>
> The design of bake is in early stages, and we are looking for [feedback from users](https://github.com/docker/buildx/issues).
{: .experimental }

Buildx also aims to provide support for high-level build concepts that go beyond
invoking a single build command. We want to support building all the images in
your application together and let users define project-specific, reusable
build flows that can then be easily invoked by anyone.

[BuildKit](https://github.com/moby/buildkit) efficiently handles multiple
concurrent build requests and de-duplicates work. Build commands can be
combined with general-purpose command runners (for example, `make`). However,
these tools generally invoke builds in sequence and therefore cannot leverage
the full potential of BuildKit parallelization, or combine BuildKit's output
for the user. For this use case, we have added a command called
[`docker buildx bake`](https://docs.docker.com/engine/reference/commandline/buildx_bake/).

The `bake` command supports building images from HCL, JSON and Compose files.
This is similar to [`docker compose build`](https://docs.docker.com/compose/reference/build/),
but allows all the services to be built concurrently as part of a single
request. If multiple files are specified, they are all read and the
configurations are combined.

We recommend using HCL files, as their experience is more aligned with the buildx UX
and they also allow better code reuse, different target groups and extended features.

## Next steps

* [File definition](file-definition.md)
* [Configuring builds](configuring-build.md)
* [User defined HCL functions](hcl-funcs.md)
* [Defining additional build contexts and linking targets](build-contexts.md)
* [Building from Compose file](compose-file.md)

# CI/CD

This page has moved to [Docker Docs website](https://docs.docker.com/build/ci/)

## GitHub Actions

Docker provides a [GitHub Action that will build and push your image](https://github.com/docker/build-push-action/#about)
using Buildx. Here is a simple workflow:

```yaml
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
```

In this example we are also using 3 other actions:

* The [`setup-buildx`](https://github.com/docker/setup-buildx-action) action creates and boots a builder, using
  the `docker-container` [builder driver](../reference/buildx_create.md#driver) by default.
  This is **not required but recommended** to be able to build multi-platform images, export cache, etc.
* The [`setup-qemu`](https://github.com/docker/setup-qemu-action) action can be useful if you want
  to add emulation support with QEMU to be able to build against more platforms.
* The [`login`](https://github.com/docker/login-action) action takes care of logging
  in against a Docker registry.

# CNI networking

This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#cni-networking)

It can be useful to use a bridge network for your builder if, for example, you
encounter network port contention during multiple builds. If you're using
the BuildKit image, CNI is not yet available in it, but you can create
[a custom BuildKit image with CNI support](https://github.com/moby/buildkit/blob/master/docs/cni-networking.md).

Now build this image:

```console
$ docker buildx build --tag buildkit-cni:local --load .
```

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) that
will use this image:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "image=buildkit-cni:local" \
  --buildkitd-flags "--oci-worker-net=cni"
```

# Color output controls

This page has moved to [Docker Docs website](https://docs.docker.com/build/building/env-vars/#buildkit_colors)

Buildx has support for modifying the colors that are used to output information
to the terminal. You can set the environment variable `BUILDKIT_COLORS` to
something like `run=123,20,245:error=yellow:cancel=blue:warning=white` to set
the colors that you would like to use:



Setting `NO_COLOR` to anything will disable any colorized output as recommended
by [no-color.org](https://no-color.org/):



> **Note**
>
> Parsing errors will be reported but ignored. This will result in default
> color values being used where needed.

See also [the list of pre-defined colors](https://github.com/moby/buildkit/blob/master/util/progress/progressui/colors.go).

# Using a custom network

This page has moved to [Docker Docs website](https://docs.docker.com/build/drivers/docker-container/#custom-network)

[Create a network](https://docs.docker.com/engine/reference/commandline/network_create/)
named `foonet`:

```console
$ docker network create foonet
```

[Create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
named `mybuilder` that will use this network:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=foonet"
```

Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):

```console
$ docker buildx inspect --bootstrap
```

[Inspect the builder container](https://docs.docker.com/engine/reference/commandline/inspect/)
and see what network is being used:

{% raw %}
```console
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
```
{% endraw %}

# Using a custom registry configuration
|
# Using a custom registry configuration
|
||||||
|
|
||||||
If you [create a `docker-container` or `kubernetes` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) and
|
This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#setting-registry-certificates)
|
||||||
have specified certificates for registries in the [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md),
|
|
||||||
the files will be copied into the container under `/etc/buildkit/certs` and
the configuration will be updated to reflect that.

Take the following `buildkitd.toml` configuration that will be used for
pushing an image to this registry using self-signed certificates:

```toml
# /etc/buildkitd.toml
debug = true
[registry."myregistry.com"]
  ca=["/etc/certs/myregistry.pem"]
  [[registry."myregistry.com".keypair]]
    key="/etc/certs/myregistry_key.pem"
    cert="/etc/certs/myregistry_cert.pem"
```

Here we have configured a self-signed certificate for the `myregistry.com`
registry.

Now [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml
```

Inspecting the builder container, you can see that the buildkitd configuration
has changed:

```console
$ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
```
```toml
debug = true

[registry]

  [registry."myregistry.com"]
    ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]

    [[registry."myregistry.com".keypair]]
      cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
      key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
```

And the certificates have been copied inside the container:

```console
$ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
myregistry.pem       myregistry_cert.pem  myregistry_key.pem
```

Now you should be able to push to the registry with this builder:

```console
$ docker buildx build --push --tag myregistry.com/myimage:latest .
```
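
If the registry in a local test setup is only reachable over plain HTTP, the
registry section of `buildkitd.toml` can instead be told to skip TLS entirely.
A minimal sketch, assuming a registry at `myregistry.com:5000` (the host and
port are illustrative):

```toml
# /etc/buildkitd.toml -- plain-HTTP registry, for local testing only
debug = true
[registry."myregistry.com:5000"]
  # connect over HTTP instead of HTTPS
  http = true
  # skip TLS certificate verification
  insecure = true
```

Only use this for local testing; for anything shared, prefer the certificate
setup shown above.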
# Debug monitor

To assist with creating and debugging complex builds, Buildx provides a
debugger to help you step through the build process and easily inspect the
state of the build environment at any point.

> **Note**
>
> The debug monitor is a new experimental feature in recent versions of Buildx.
> There are rough edges, known bugs, and missing features. Please try it out
> and let us know what you think!

## Starting the debugger

To start the debugger, first ensure that `BUILDX_EXPERIMENTAL=1` is set in
your environment.

```console
$ export BUILDX_EXPERIMENTAL=1
```

To start a debug session for a build, you can use the `--invoke` flag with the
build command to specify a command to launch in the resulting image.

```console
$ docker buildx build --invoke /bin/sh .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                              0.0s
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 32B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 34B                                           0.0s
...
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "dzz7pjb4pk1mj29xqrx0ac3oj". Press Ctrl-a-c to switch to the new container
Switched IO
/ #
```

This launches a `/bin/sh` process in the final stage of the image, and allows
you to explore the contents of the image without needing to export or load it
outside of the builder.

For example, you can use `ls` to see the contents of the image:

```console
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
```

The optional long form allows you to specify a detailed configuration for the
process. It must be a CSV-style list of comma-separated key-value pairs.
Supported keys are `args` (can be in JSON array format), `entrypoint` (can be
in JSON array format), `env` (can be in JSON array format), `user`, `cwd`, and
`tty` (bool).

Example:

```console
$ docker buildx build --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' .
```
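
The quoting in this example is dense: the outer single quotes protect the
whole value from the shell, each CSV field that itself contains commas is
wrapped in double quotes, and a literal double quote inside a field is escaped
by doubling it (`""`). You can sanity-check what the shell actually passes to
buildx by printing the argument first (this snippet only illustrates the
quoting; it is not part of buildx):

```shell
# Echo the argument exactly as the shell would hand it to `docker buildx build`.
# The outer single quotes keep the shell from interpreting anything inside;
# the doubled double-quotes ("") are CSV escapes, resolved later by buildx.
printf '%s\n' 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]"'
# prints: entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]"
```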

#### `on-error`

If you want to start a debug session only when a build fails, you can use
`--invoke=on-error`.

```console
$ docker buildx build --invoke on-error .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                              0.0s
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 32B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 34B                                           0.0s
...
 => ERROR [shell 10/10] RUN bad-command
------
 > [shell 10/10] RUN bad-command:
#0 0.049 /bin/sh: bad-command: not found
------
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Press Ctrl-a-c to switch to the new container
/ #
```

This allows you to explore the state of the image when the build failed.

#### `debug-shell`

If you want to drop into a debug session without first starting the build, you
can use `--invoke=debug-shell`.

```console
$ docker buildx build --invoke debug-shell .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                              0.0s
(buildx)
```

You can then use the commands available in [monitor mode](#monitor-mode) to
start and observe the build.

## Monitor mode

By default, when debugging, you'll be dropped into a shell in the final stage.

When you're in a debug shell, you can use the `Ctrl-a-c` key combination (press
`Ctrl`+`a` together, lift, then press `c`) to toggle between the debug shell
and the monitor mode. In monitor mode, you can run commands that control the
debug environment.

```console
(buildx) help
Available commands are:
  attach       attach to a buildx server or a process in the container
  disconnect   disconnect a client from a buildx server. Specific session ID can be specified as an arg
  exec         execute a process in the interactive container
  exit         exits monitor
  help         shows this message
  kill         kill buildx server
  list         list buildx sessions
  ps           list processes invoked by "exec". Use "attach" to attach IO to that process
  reload       reloads the context and build it
  rollback     re-runs the interactive container with initial rootfs contents
```

## Build controllers

Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller is used for a
more stable experience, which runs all builds in-process. However, you can also
use the remote controller to detach the build process from the CLI.

To detach the build process from the CLI, you can use the `--detach=true` flag
with the build command.

```console
$ docker buildx build --detach=true --invoke /bin/sh .
```

If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug-shell` subcommand to
immediately enter monitor mode.

```console
$ docker buildx debug-shell
[+] Building 0.0s (1/1) FINISHED
 => [internal] connecting to remote controller
(buildx) list
ID                          CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t   false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID                         CURRENT_SESSION   COMMAND
3ug8iqaufiwwnukimhqqt06jz   false             [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
/ #
```
# Docker container driver

The buildx docker-container driver allows creation of a managed and
customizable BuildKit environment inside a dedicated Docker container.

Using the docker-container driver has a couple of advantages over the basic
docker driver. Firstly, we can manually override the version of buildkit to
use, meaning that we can access the latest and greatest features as soon as
they're released, instead of waiting to upgrade to a newer version of Docker.
Additionally, we can access more complex features like multi-architecture
builds and the more advanced cache exporters, which are currently unsupported
in the default docker driver.

We can easily create a new builder that uses the docker-container driver:

```console
$ docker buildx create --name container --driver docker-container
container
```
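
To pin the buildkit version mentioned above, you can pass a custom BuildKit
image via the `image` driver option when creating the builder. A sketch, where
the tag is illustrative:

```console
$ docker buildx create --name container \
    --driver docker-container \
    --driver-opt image=moby/buildkit:v0.11.6
```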

We should then be able to see it on our list of available builders:

```console
$ docker buildx ls
NAME/NODE     DRIVER/ENDPOINT    STATUS     BUILDKIT   PLATFORMS
container     docker-container
  container0  desktop-linux      inactive
default       docker
  default     default            running    20.10.17   linux/amd64, linux/386
```

If we trigger a build, the appropriate `moby/buildkit` image will be pulled
from [Docker Hub](https://hub.docker.com/u/moby/buildkit), the image started,
and our build submitted to our containerized build server.

```console
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
```

Note the warning "Build result will only remain in the build cache" - unlike
the `docker` driver, the built image must be explicitly loaded into the local
image store. We can use the `--load` flag for this:

```console
$ docker buildx build --load -t <image> --builder=container .
...
 => exporting to oci image format                                          7.7s
 => => exporting layers                                                    4.9s
 => => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3  0.0s
 => => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f    0.0s
 => => sending tarball                                                     2.8s
 => importing to docker
```

The image should then be available in the image store:

```console
$ docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<image>      latest    adf3eec768a1   2 minutes ago    197MB
```

## Further reading

For more information on the docker-container driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

<!--- FIXME: for 0.9, make reference link relative --->
# Docker driver

The buildx docker driver is the default builtin driver, which uses the BuildKit
server components built directly into the docker engine.

No setup should be required for the docker driver - it should already be
configured for you:

```console
$ docker buildx ls
NAME/NODE   DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
default     docker
  default   default           running   20.10.17   linux/amd64, linux/386
```

This builder is ready to build out-of-the-box, requiring no extra setup,
so you can get going with a `docker buildx build` as soon as you like.

Depending on your personal setup, you may find multiple builders in your list
that use the docker driver. For example, on a system that runs both a package
managed version of dockerd, as well as Docker Desktop, you might have the
following:

```console
$ docker buildx ls
NAME/NODE         DRIVER/ENDPOINT   STATUS    BUILDKIT   PLATFORMS
default           docker
  default         default           running   20.10.17   linux/amd64, linux/386
desktop-linux *   docker
  desktop-linux   desktop-linux     running   20.10.17   linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```

This is because the docker driver builders are automatically pulled from the
available [Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.

Unlike the [other drivers](../index.md), builders using the docker driver
cannot be manually created, and can only be automatically created from the
docker context. Additionally, they cannot be configured to a specific BuildKit
version, and cannot take any extra parameters, as these are both preset by the
Docker engine internally.

If you want the extra configuration and flexibility without too much more
overhead, then see the help page for the [docker-container driver](./docker-container.md).

## Further reading

For more information on the docker driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

<!--- FIXME: for 0.9, make reference link relative --->
# Buildx drivers overview

The buildx client connects out to the BuildKit backend to execute builds -
Buildx drivers allow fine-grained control over management of the backend, and
support several different options for where and how BuildKit should run.

Currently, we support the following drivers:

- The `docker` driver, which uses the BuildKit library bundled into the Docker
  daemon.
  ([guide](./docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver, which launches a dedicated BuildKit container
  using Docker, for access to advanced features.
  ([guide](./docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver, which launches dedicated BuildKit pods in a
  remote Kubernetes cluster, for scalable builds.
  ([guide](./kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver, which allows directly connecting to a manually managed
  BuildKit daemon, for more custom setups.
  ([guide](./remote.md))

<!--- FIXME: for 0.9, make links relative, and add reference link for remote --->

To create a new builder that uses one of the above drivers, you can use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/) command:

```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```

The build experience is very similar across drivers; however, there are some
features that are not evenly supported across the board - notably, the `docker`
driver does not include support for certain output/caching types.

| Feature                       |     `docker`     | `docker-container` | `kubernetes` |        `remote`        |
| :---------------------------- | :--------------: | :----------------: | :----------: | :--------------------: |
| **Automatic `--load`**        |        ✅        |         ❌         |      ❌      |           ❌           |
| **Cache export**              | ❔ (inline only) |         ✅         |      ✅      |           ✅           |
| **Docker/OCI tarball output** |        ❌        |         ✅         |      ✅      |           ✅           |
| **Multi-arch images**         |        ❌        |         ✅         |      ✅      |           ✅           |
| **BuildKit configuration**    |        ❌        |         ✅         |      ✅      | ❔ (managed externally) |
# Kubernetes driver

The buildx kubernetes driver allows connecting your local development or CI
environments to your kubernetes cluster, giving you access to more powerful
and varied compute resources.

This guide assumes you already have an existing kubernetes cluster - if you
don't, you can easily follow along by installing
[minikube](https://minikube.sigs.k8s.io/docs/).

Before connecting buildx to your cluster, you may want to create a dedicated
namespace using `kubectl` to keep your buildx-managed resources separate. You
can call your namespace anything you want, or use the existing `default`
namespace, but we'll create a `buildkit` namespace for now:

```console
$ kubectl create namespace buildkit
```

Then create a new buildx builder:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit
```

This assumes that the kubernetes cluster you want to connect to is currently
accessible via the kubectl command, with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
if necessary.

You should now be able to see the builder in the list of buildx builders:

```console
$ docker buildx ls
NAME/NODE                  DRIVER/ENDPOINT   STATUS    PLATFORMS
kube                       kubernetes
  kube0-6977cdcb75-k9h9m                     running   linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default *                  docker
  default                  default           running   linux/amd64, linux/386
```

The buildx driver creates the necessary resources on your cluster in the
specified namespace (in this case, `buildkit`), while keeping your driver
configuration locally. You can see the running pods with:

```console
$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   1/1     1            1           32s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-k9h9m   1/1     Running   0          32s
```

You can use your new builder by including the `--builder` flag when running
buildx commands. For example (replacing `<user>` and `<image>` with your Docker
Hub username and desired image output respectively):

```console
$ docker buildx build . \
  --builder=kube \
  -t <user>/<image> \
  --push
```

## Scaling Buildkit

One of the main advantages of the kubernetes builder is that you can easily
scale your builder up and down to handle increased build load. These controls
are exposed via the following options:

- `replicas=N`
  - This scales the number of buildkit pods to the desired size. By default,
    only a single pod will be created, but increasing this allows taking
    advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`
  - These options allow requesting and limiting the resources available to
    each buildkit pod according to the official kubernetes documentation
    [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
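
The resource options above are set at builder creation time via `--driver-opt`.
A sketch, where the requested values are purely illustrative:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=500m,requests.memory=512Mi,limits.cpu=2,limits.memory=4Gi
```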

For example, to create 4 replica buildkit pods:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4
```

Listing the pods, we get:

```console
$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   4/4     4            4           8s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-48ld2   1/1     Running   0          8s
kube0-6977cdcb75-rkc6b   1/1     Running   0          8s
kube0-6977cdcb75-vb4ks   1/1     Running   0          8s
kube0-6977cdcb75-z4fzs   1/1     Running   0          8s
```

Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
a random node from the available pool, which should provide better balancing
across all replicas, while `sticky` (the default) attempts to connect the same
build performed multiple times to the same node each time, ensuring better
local cache utilization.

For more information on scalability, see the options for [buildx create](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).

## Multi-platform builds

The kubernetes buildx driver has support for creating [multi-platform images](https://docs.docker.com/build/buildx/multiplatform-images/),
for easily building for multiple platforms at once.

### QEMU

Like the other containerized driver `docker-container`, the kubernetes driver
also supports using [QEMU](https://www.qemu.org/) (user mode) to build
non-native platforms. If using a default setup like above, no extra setup
should be needed; you should just be able to start building for other
architectures by including the `--platform` flag.

For example, to build a Linux image for `amd64` and `arm64`:

```console
$ docker buildx build . \
  --builder=kube \
  --platform=linux/amd64,linux/arm64 \
  -t <user>/<image> \
  --push
```

> **Warning**
> QEMU emulates the full instruction set of non-native platforms, which is
> *much* slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.

Note, if you're using a custom buildkit image using the `image=<image>` driver
option, or invoking non-native binaries from within your build, you may need to
explicitly enable QEMU using the `qemu.install` option during driver creation:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,qemu.install=true
```

### Native

If you have access to cluster nodes of different architectures, you can
configure the kubernetes driver to take advantage of these for native builds.
To do this, use the `--append` feature of `docker buildx create`.

To start, we can create our builder with explicit support for a single
architecture, `amd64`:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/amd64 \
  --node=builder-amd64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```

This creates a buildx builder `kube` containing a single builder node `builder-amd64`.
Note that the buildx concept of a node is not the same as the kubernetes
concept of a node - the buildx node in this case could connect multiple
kubernetes nodes of the same architecture together.

With our `kube` driver created, we can now introduce another architecture into
the mix, for example, like before we can use `arm64`:

```console
$ docker buildx create \
  --append \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/arm64 \
  --node=builder-arm64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```

If you list builders now, you should be able to see both nodes present:

```console
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT                                           STATUS    PLATFORMS
kube            kubernetes
  builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig=   running   linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
  builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig=   running   linux/arm64*
```

You should now be able to build multi-arch images with `amd64` and `arm64`
combined, by specifying those platforms together in your buildx command:

```console
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
```

You can repeat the `buildx create --append` command for as many different
architectures as you want to support.

## Rootless mode

The kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see [here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).

To enable it in your cluster, you can use the `rootless=true` driver option:

```console
$ docker buildx create \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,rootless=true
```

This will create your pods without `securityContext.privileged`.

## Further reading

For more information on the kubernetes driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

<!--- FIXME: for 0.9, make reference link relative --->