Mirror of https://gitea.com/Lydanne/buildx.git (synced 2025-09-16 07:49:08 +08:00)
Compare commits
1116 Commits
| Author | SHA1 | Date |
|---|---|---|
| | c513d34049 | |
| | … | |
| | 427c19d65c | |
```diff
@@ -1,2 +1 @@
 bin/
-cross-out/
```
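The hunk above drops `cross-out/` from the ignore rules, leaving only `bin/`. Its effect can be checked with `git check-ignore` in a throwaway repository; this is a sketch under the assumption that `git` is on `PATH`, and the file names are made up for illustration:

```shell
# Scratch repo reproducing the post-change .gitignore (only "bin/").
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'bin/\n' > "$repo/.gitignore"
mkdir -p "$repo/bin" "$repo/cross-out"
touch "$repo/bin/buildx" "$repo/cross-out/buildx"
# bin/ contents still match an ignore rule...
git -C "$repo" check-ignore -q bin/buildx && echo "bin/ still ignored"
# ...while cross-out/ contents no longer do.
git -C "$repo" check-ignore -q cross-out/buildx || echo "cross-out/ no longer ignored"
```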
14 .fossa.yml
@@ -1,14 +0,0 @@
```yaml
# Generated by FOSSA CLI (https://github.com/fossas/fossa-cli)
# Visit https://fossa.com to learn more

version: 2
cli:
  server: https://app.fossa.io
  fetcher: custom
  project: git@github.com:docker/buildx
analyze:
  modules:
  - name: github.com/docker/buildx/cmd/buildx
    type: go
    target: github.com/docker/buildx/cmd/buildx
    path: cmd/buildx
```
10 .github/dependabot.yml vendored Normal file
@@ -0,0 +1,10 @@
```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    open-pull-requests-limit: 10
    directory: "/"
    schedule:
      interval: "daily"
    labels:
      - "dependencies"
      - "bot"
```
226 .github/workflows/build.yml vendored Normal file
@@ -0,0 +1,226 @@
```yaml
name: build

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  workflow_dispatch:
  push:
    branches:
      - 'master'
      - 'v[0-9]*'
    tags:
      - 'v*'
  pull_request:
    branches:
      - 'master'
      - 'v[0-9]*'
    paths-ignore:
      - 'README.md'
      - 'docs/**'

env:
  BUILDX_VERSION: "latest"
  BUILDKIT_IMAGE: "moby/buildkit:latest"
  REPO_SLUG: "docker/buildx-bin"
  DESTDIR: "./bin"

jobs:
  test:
    runs-on: ubuntu-22.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: ${{ env.BUILDX_VERSION }}
          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Test
        uses: docker/bake-action@v2
        with:
          targets: test
          set: |
            *.cache-from=type=gha,scope=test
            *.cache-to=type=gha,scope=test
      -
        name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          directory: ${{ env.DESTDIR }}/coverage

  prepare:
    runs-on: ubuntu-22.04
    outputs:
      matrix: ${{ steps.platforms.outputs.matrix }}
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Create matrix
        id: platforms
        run: |
          echo "matrix=$(docker buildx bake binaries-cross --print | jq -cr '.target."binaries-cross".platforms')" >>${GITHUB_OUTPUT}
      -
        name: Show matrix
        run: |
          echo ${{ steps.platforms.outputs.matrix }}

  binaries:
    runs-on: ubuntu-22.04
    needs:
      - prepare
    strategy:
      fail-fast: false
      matrix:
        platform: ${{ fromJson(needs.prepare.outputs.matrix) }}
    steps:
      -
        name: Prepare
        run: |
          platform=${{ matrix.platform }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: ${{ env.BUILDX_VERSION }}
          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Build
        run: |
          make release
        env:
          PLATFORMS: ${{ matrix.platform }}
          CACHE_FROM: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }}
          CACHE_TO: type=gha,scope=binaries-${{ env.PLATFORM_PAIR }},mode=max
      -
        name: Upload artifacts
        uses: actions/upload-artifact@v3
        with:
          name: buildx
          path: ${{ env.DESTDIR }}/*
          if-no-files-found: error

  bin-image:
    runs-on: ubuntu-22.04
    if: ${{ github.event_name != 'pull_request' }}
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: ${{ env.BUILDX_VERSION }}
          driver-opts: image=${{ env.BUILDKIT_IMAGE }}
          buildkitd-flags: --debug
      -
        name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: |
            ${{ env.REPO_SLUG }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
          bake-target: meta-helper
      -
        name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push image
        uses: docker/bake-action@v2
        with:
          files: |
            ./docker-bake.hcl
            ${{ steps.meta.outputs.bake-file }}
          targets: image-cross
          push: ${{ github.event_name != 'pull_request' }}
          set: |
            *.cache-from=type=gha,scope=bin-image
            *.cache-to=type=gha,scope=bin-image,mode=max
            *.attest=type=sbom
            *.attest=type=provenance,mode=max,builder-id=https://github.com/${{ env.GITHUB_REPOSITORY }}/actions/runs/${{ env.GITHUB_RUN_ID }}

  release:
    runs-on: ubuntu-22.04
    needs:
      - binaries
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Download binaries
        uses: actions/download-artifact@v3
        with:
          name: buildx
          path: ${{ env.DESTDIR }}
      -
        name: Create checksums
        run: ./hack/hash-files
      -
        name: List artifacts
        run: |
          tree -nh ${{ env.DESTDIR }}
      -
        name: Check artifacts
        run: |
          find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
      -
        name: GitHub Release
        if: startsWith(github.ref, 'refs/tags/v')
        uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          draft: true
          files: ${{ env.DESTDIR }}/*

  buildkit-edge:
    runs-on: ubuntu-22.04
    continue-on-error: true
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: ${{ env.BUILDX_VERSION }}
          driver-opts: image=moby/buildkit:master
          buildkitd-flags: --debug
      -
        # Just run a bake target to check everything runs fine
        name: Build
        uses: docker/bake-action@v2
        with:
          targets: binaries
```
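In the `binaries` job above, the Prepare step turns a matrix platform such as `linux/arm/v7` into a slug safe for use in cache scope names, using bash pattern substitution to replace every `/` with `-`. A standalone sketch (the platform value is an assumed sample, not read from a real bake matrix):

```shell
# Same substitution as the workflow's Prepare step: ${var//pattern/replacement}
# replaces ALL occurrences of "/" (a single "/" would replace only the first).
platform="linux/arm/v7"
pair="${platform//\//-}"
echo "PLATFORM_PAIR=${pair}"   # → PLATFORM_PAIR=linux-arm-v7
```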
.github/workflows/docs-release.yml (vendored, new file)
@@ -0,0 +1,58 @@
name: docs-release

on:
  release:
    types:
      - released

jobs:
  open-pr:
    runs-on: ubuntu-22.04
    if: "!github.event.release.prerelease"
    steps:
      -
        name: Checkout docs repo
        uses: actions/checkout@v3
        with:
          token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
          repository: docker/docs
          ref: main
      -
        name: Prepare
        run: |
          rm -rf ./_data/buildx/*
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build docs
        uses: docker/bake-action@v2
        with:
          source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
          targets: update-docs
          set: |
            *.output=/tmp/buildx-docs
        env:
          DOCS_FORMATS: yaml
      -
        name: Copy files
        run: |
          cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
      -
        name: Commit changes
        run: |
          git add -A .
      -
        name: Create PR on docs repo
        uses: peter-evans/create-pull-request@2b011faafdcbc9ceb11414d64d0573f37c774b04
        with:
          token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
          push-to-fork: docker-tools-robot/docker.github.io
          commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
          signoff: true
          branch: dispatch/buildx-ref-${{ github.event.release.name }}
          delete-branch: true
          title: Update buildx reference to ${{ github.event.release.name }}
          body: |
            Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
          draft: false
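Note that the workflow pins the first-party actions by major tag but pins the third-party `peter-evans/create-pull-request` action to a full commit SHA, which makes the reference immutable. A short sketch contrasting the two pinning styles (the SHA is the one used in this workflow):

```yaml
# Tag pinning: floats with new releases of the action.
- uses: actions/checkout@v3
# Commit-SHA pinning: immutable, commonly preferred for third-party actions.
- uses: peter-evans/create-pull-request@2b011faafdcbc9ceb11414d64d0573f37c774b04
```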
.github/workflows/docs-upstream.yml (vendored, new file)
@@ -0,0 +1,61 @@
# this workflow runs the remote validate bake target from docker/docker.github.io
# to check if yaml reference docs and markdown files used in this repo are still valid
# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
name: docs-upstream

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  push:
    branches:
      - 'master'
      - 'v[0-9]*'
    paths:
      - '.github/workflows/docs-upstream.yml'
      - 'docs/**'
  pull_request:
    paths:
      - '.github/workflows/docs-upstream.yml'
      - 'docs/**'

jobs:
  docs-yaml:
    runs-on: ubuntu-22.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: latest
      -
        name: Build reference YAML docs
        uses: docker/bake-action@v2
        with:
          targets: update-docs
          set: |
            *.output=/tmp/buildx-docs
            *.cache-from=type=gha,scope=docs-yaml
            *.cache-to=type=gha,scope=docs-yaml,mode=max
        env:
          DOCS_FORMATS: yaml
      -
        name: Upload reference YAML docs
        uses: actions/upload-artifact@v3
        with:
          name: docs-yaml
          path: /tmp/buildx-docs/out/reference
          retention-days: 1

  validate:
    uses: docker/docs/.github/workflows/validate-upstream.yml@main
    needs:
      - docs-yaml
    with:
      repo: https://github.com/${{ github.repository }}
      data-files-id: docs-yaml
      data-files-folder: buildx
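The `validate` job calls a reusable workflow in docker/docs rather than running steps of its own. For that job-level `uses:`/`with:` call to work, the target workflow has to declare matching `workflow_call` inputs; a sketch of the shape `validate-upstream.yml` presumably declares (input names inferred from the `with:` block above, not verified against docker/docs):

```yaml
# Hypothetical shape of docker/docs .github/workflows/validate-upstream.yml
on:
  workflow_call:
    inputs:
      repo:
        type: string
        required: true
      data-files-id:
        type: string
        required: true
      data-files-folder:
        type: string
        required: true
```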
.github/workflows/e2e.yml (vendored, new file)
@@ -0,0 +1,218 @@
name: e2e

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  workflow_dispatch:
  push:
    branches:
      - 'master'
      - 'v[0-9]*'
  pull_request:
    branches:
      - 'master'
      - 'v[0-9]*'
    paths-ignore:
      - 'README.md'
      - 'docs/**'

env:
  DESTDIR: "./bin"
  K3S_VERSION: "v1.21.2-k3s1"

jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: latest
      -
        name: Build
        uses: docker/bake-action@v2
        with:
          targets: binaries
          set: |
            *.cache-from=type=gha,scope=release
            *.cache-from=type=gha,scope=binaries
            *.cache-to=type=gha,scope=binaries
      -
        name: Rename binary
        run: |
          mv ${{ env.DESTDIR }}/build/buildx ${{ env.DESTDIR }}/build/docker-buildx
      -
        name: Upload artifacts
        uses: actions/upload-artifact@v3
        with:
          name: binary
          path: ${{ env.DESTDIR }}/build
          if-no-files-found: error
          retention-days: 7

  driver:
    runs-on: ubuntu-20.04
    needs:
      - build
    strategy:
      fail-fast: false
      matrix:
        driver:
          - docker
          - docker-container
          - kubernetes
          - remote
        buildkit:
          - moby/buildkit:buildx-stable-1
          - moby/buildkit:master
        buildkit-cfg:
          - bkcfg-false
          - bkcfg-true
        multi-node:
          - mnode-false
          - mnode-true
        platforms:
          - linux/amd64
          - linux/amd64,linux/arm64
        include:
          - driver: kubernetes
            driver-opt: qemu.install=true
          - driver: remote
            endpoint: tcp://localhost:1234
        exclude:
          - driver: docker
            multi-node: mnode-true
          - driver: docker
            buildkit-cfg: bkcfg-true
          - driver: docker-container
            multi-node: mnode-true
          - driver: remote
            multi-node: mnode-true
          - driver: remote
            buildkit-cfg: bkcfg-true
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
        if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
      -
        name: Install buildx
        uses: actions/download-artifact@v3
        with:
          name: binary
          path: /home/runner/.docker/cli-plugins
      -
        name: Fix perms and check
        run: |
          chmod +x /home/runner/.docker/cli-plugins/docker-buildx
          docker buildx version
      -
        name: Init env vars
        run: |
          # BuildKit cfg
          if [ "${{ matrix.buildkit-cfg }}" = "bkcfg-true" ]; then
            cat > "/tmp/buildkitd.toml" <<EOL
          [worker.oci]
            max-parallelism = 2
          EOL
            echo "BUILDKIT_CFG=/tmp/buildkitd.toml" >> $GITHUB_ENV
          fi
          # Multi node
          if [ "${{ matrix.multi-node }}" = "mnode-true" ]; then
            echo "MULTI_NODE=1" >> $GITHUB_ENV
          else
            echo "MULTI_NODE=0" >> $GITHUB_ENV
          fi
      -
        name: Install k3s
        if: matrix.driver == 'kubernetes'
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');

            let wait = function(milliseconds) {
              return new Promise((resolve, reject) => {
                if (typeof(milliseconds) !== 'number') {
                  throw new Error('milliseconds not a number');
                }
                setTimeout(() => resolve("done!"), milliseconds)
              });
            }

            try {
              const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
              core.info(`storing kubeconfig in ${kubeconfig}`);

              await exec.exec('docker', ["run", "-d",
                "--privileged",
                "--name=buildkit-k3s",
                "-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
                "-e", "K3S_KUBECONFIG_MODE=666",
                "-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
                "-p", "6443:6443",
                "-p", "80:80",
                "-p", "443:443",
                "-p", "8080:8080",
                "rancher/k3s:${{ env.K3S_VERSION }}", "server"
              ]);
              await wait(10000);

              core.exportVariable('KUBECONFIG', kubeconfig);

              let nodeName;
              for (let count = 1; count <= 5; count++) {
                try {
                  const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
                  nodeName = nodeNameOutput.stdout
                } catch (error) {
                  core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
                } finally {
                  if (nodeName) {
                    break;
                  }
                  await wait(5000);
                }
              }
              if (!nodeName) {
                throw new Error(`Unable to resolve node name after 5 attempts.`);
              }

              await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
            } catch (error) {
              core.setFailed(error.message);
            }
      -
        name: Print KUBECONFIG
        if: matrix.driver == 'kubernetes'
        run: |
          yq ${{ env.KUBECONFIG }}
      -
        name: Launch remote buildkitd
        if: matrix.driver == 'remote'
        run: |
          docker run -d \
            --privileged \
            --name=remote-buildkit \
            -p 1234:1234 \
            ${{ matrix.buildkit }} \
            --addr unix:///run/buildkit/buildkitd.sock \
            --addr tcp://0.0.0.0:1234
      -
        name: Test
        run: |
          make test-driver
        env:
          BUILDKIT_IMAGE: ${{ matrix.buildkit }}
          DRIVER: ${{ matrix.driver }}
          DRIVER_OPT: ${{ matrix.driver-opt }}
          ENDPOINT: ${{ matrix.endpoint }}
          PLATFORMS: ${{ matrix.platforms }}
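The `driver` matrix fans out over driver, BuildKit image, BuildKit config, node count, and platform set, minus the `exclude` rules (the docker driver, for example, never runs multi-node or with a custom BuildKit config). One resolved combination, written out as the environment the `Test` step would receive, may help; a sketch with values taken from this matrix (the empty values reflect that `driver-opt` and `endpoint` are only set via `include` for the kubernetes and remote drivers):

```yaml
# One resolved matrix combination for the Test step (illustrative only).
env:
  BUILDKIT_IMAGE: moby/buildkit:master
  DRIVER: docker-container
  DRIVER_OPT: ""    # only set for driver: kubernetes (qemu.install=true)
  ENDPOINT: ""      # only set for driver: remote (tcp://localhost:1234)
  PLATFORMS: linux/amd64,linux/arm64
```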
.github/workflows/validate.yml (vendored, new file)
@@ -0,0 +1,42 @@
name: validate

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  workflow_dispatch:
  push:
    branches:
      - 'master'
      - 'v[0-9]*'
    tags:
      - 'v*'
  pull_request:
    branches:
      - 'master'
      - 'v[0-9]*'

jobs:
  validate:
    runs-on: ubuntu-22.04
    strategy:
      fail-fast: false
      matrix:
        target:
          - lint
          - validate-vendor
          - validate-docs
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          version: latest
      -
        name: Run
        run: |
          make ${{ matrix.target }}
.gitignore (vendored)
@@ -1,2 +1 @@
-bin
+/bin
-cross-out
.golangci.yml (new file)
@@ -0,0 +1,40 @@
run:
  timeout: 10m
  skip-files:
    - ".*\\.pb\\.go$"

  modules-download-mode: vendor

  build-tags:

linters:
  enable:
    - gofmt
    - govet
    - deadcode
    - depguard
    - goimports
    - ineffassign
    - misspell
    - unused
    - varcheck
    - revive
    - staticcheck
    - typecheck
    - nolintlint
  disable-all: true

linters-settings:
  depguard:
    list-type: blacklist
    include-go-root: true
    packages:
      # The io/ioutil package has been deprecated.
      # https://go.dev/doc/go1.16#ioutil
      - io/ioutil

issues:
  exclude-rules:
    - linters:
        - revive
      text: "stutters"
.mailmap
@@ -1,6 +1,13 @@
 # This file lists all individuals having contributed content to the repository.
-# For how it is generated, see `hack/generate-authors`.
+# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
+
+CrazyMax <github@crazymax.dev>
+CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
+CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
+Sebastiaan van Stijn <github@gone.nl>
+Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
 Tibor Vass <tibor@docker.com>
 Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
 Tõnis Tiigi <tonistiigi@gmail.com>
+Ulysses Souza <ulyssessouza@gmail.com>
+Wang Jinglei <morlay.null@gmail.com>
.travis.yml (deleted)
@@ -1,35 +0,0 @@
dist: trusty
sudo: required

install:
  - docker run --name buildkit --rm -d --privileged -p 1234:1234 $REPO_SLUG_ORIGIN --addr tcp://0.0.0.0:1234
  - sudo docker cp buildkit:/usr/bin/buildctl /usr/bin/
  - export BUILDKIT_HOST=tcp://0.0.0.0:1234

env:
  global:
    - PLATFORMS="linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64,linux/s390x,linux/ppc64le"
    - CROSS_PLATFORMS="${PLATFORMS},darwin/amd64,windows/amd64"
    - PREFER_BUILDCTL="1"

script:
  - make binaries validate-all && TARGETPLATFORM="${CROSS_PLATFORMS}" ./hack/cross

deploy:
  - provider: script
    script: PLATFORMS="${CROSS_PLATFORMS}" ./hack/release $TRAVIS_TAG release-out
    on:
      repo: docker/buildx
      tags: true
      condition: $TRAVIS_TAG =~ ^v[0-9]
  - provider: releases
    api_key:
      secure: "VKVL+tyS3BfqjM4VMGHoHJbcKY4mqq4AGrclVEvBnt0gm1LkGeKxSheCZgF1EC4oSV8rCy6dkoRWL0PLkl895MIl20Z4v53o1NOQ4Fn0A+eptnrld8jYUkL5PcD+kdEqv2GkBn7vO6E/fwYY/wH9FYlE+fXUa0c/YQGqNGS+XVDtgkftqBV+F2EzaIwk+D+QClFBRmKvIbXrUQASi1K6K2eT3gvzR4zh679TSdI2nbnTKtE06xG1PBFVmb1Ux3/Jz4yHFvf2d3M1mOyqIBsozKoyxisiFQxnm3FjhPrdlZJ9oy/nsQM3ahQKJ3DF8hiLI1LxcxRa6wo//t3uu2eJSYl/c5nu0T7gVw4sChQNy52fUhEGoDTDwYoAxsLSDXcpj1jevRsKvxt/dh2e2De1a9HYj5oM+z2O+pcyiY98cKDbhe2miUqUdiYMBy24xUunB46zVcJF3pIqCYtw5ts8ES6Ixn3u+4OGV/hMDrVdiG2bOZtNVkdbKMEkOEBGa3parPJ69jh6og639kdAD3DFxyZn3YKYuJlcNShn3tj6iPokBYhlLwwf8vuEV7gK7G0rDS9yxuF03jgkwpBBF2wy+u1AbJv241T7v2ZB8H8VlYyHA0E5pnoWbw+lIOTy4IAc8gIesMvDuFFi4r1okhiAt/24U0p4aAohjh1nPuU3spY="
    file: release-out/**/*
    skip_cleanup: true
    file_glob: true
    on:
      repo: docker/buildx
      tags: true
      condition: $TRAVIS_TAG =~ ^v[0-9]
AUTHORS
@@ -1,7 +1,45 @@
 # This file lists all individuals having contributed content to the repository.
-# For how it is generated, see `scripts/generate-authors.sh`.
+# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
+
+Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
+Alex Couture-Beil <alex@earthly.dev>
+Andrew Haines <andrew.haines@zencargo.com>
+Andy MacKinlay <admackin@users.noreply.github.com>
+Anthony Poschen <zanven42@gmail.com>
+Artur Klauser <Artur.Klauser@computer.org>
+Batuhan Apaydın <developerguy2@gmail.com>
 Bin Du <bindu@microsoft.com>
+Brandon Philips <brandon@ifup.org>
 Brian Goff <cpuguy83@gmail.com>
+CrazyMax <github@crazymax.dev>
+dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
+Devin Bayer <dev@doubly.so>
+Djordje Lukic <djordje.lukic@docker.com>
+Dmytro Makovey <dmytro.makovey@docker.com>
+Donghui Wang <977675308@qq.com>
+faust <faustin@fala.red>
+Felipe Santos <felipecassiors@gmail.com>
+Fernando Miguel <github@FernandoMiguel.net>
+gfrancesco <gfrancesco@users.noreply.github.com>
+gracenoah <gracenoahgh@gmail.com>
+Hollow Man <hollowman@hollowman.ml>
+Ilya Dmitrichenko <errordeveloper@gmail.com>
+Jack Laxson <jackjrabbit@gmail.com>
+Jean-Yves Gastaud <jygastaud@gmail.com>
+khs1994 <khs1994@khs1994.com>
+Kotaro Adachi <k33asby@gmail.com>
+l00397676 <lujingxiao@huawei.com>
+Michal Augustyn <michal.augustyn@mail.com>
+Patrick Van Stee <patrick@vanstee.me>
+Saul Shanabrook <s.shanabrook@gmail.com>
+Sebastiaan van Stijn <github@gone.nl>
+SHIMA Tatsuya <ts1s1andn@gmail.com>
+Silvin Lubecki <silvin.lubecki@docker.com>
+Solomon Hykes <sh.github.6811@hykes.org>
+Sune Keller <absukl@almbrand.dk>
 Tibor Vass <tibor@docker.com>
 Tõnis Tiigi <tonistiigi@gmail.com>
+Ulysses Souza <ulyssessouza@gmail.com>
+Wang Jinglei <morlay.null@gmail.com>
+Xiang Dai <764524258@qq.com>
+zelahi <elahi.zuhayr@gmail.com>
Dockerfile
@@ -1,76 +1,88 @@
-# syntax=docker/dockerfile:1.1-experimental
+# syntax=docker/dockerfile-upstream:1.5.0

-ARG DOCKERD_VERSION=19.03-rc
+ARG GO_VERSION=1.19
-ARG CLI_VERSION=19.03
+ARG XX_VERSION=1.1.2
+ARG DOCKERD_VERSION=20.10.14

 FROM docker:$DOCKERD_VERSION AS dockerd-release

-# xgo is a helper for golang cross-compilation
+# xx is a helper for cross-compilation
-FROM --platform=$BUILDPLATFORM tonistiigi/xx:golang@sha256:6f7d999551dd471b58f70716754290495690efa8421e0a1fcf18eb11d0c0a537 AS xgo
+FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx

-FROM --platform=$BUILDPLATFORM golang:1.12-alpine AS gobase
+FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
-COPY --from=xgo / /
+
+FROM golatest AS gobase
+COPY --from=xx / /
 RUN apk add --no-cache file git
 ENV GOFLAGS=-mod=vendor
+ENV CGO_ENABLED=0
 WORKDIR /src

 FROM gobase AS buildx-version
-RUN --mount=target=. \
+RUN --mount=type=bind,target=. <<EOT
-  PKG=github.com/docker/buildx VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
+  set -e
-  echo "-X ${PKG}/version.Version=${VERSION} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
+  mkdir /buildx-version
-  echo -n "${VERSION}" | tee /tmp/.version;
+  echo -n "$(./hack/git-meta version)" | tee /buildx-version/version
+  echo -n "$(./hack/git-meta revision)" | tee /buildx-version/revision
+EOT

 FROM gobase AS buildx-build
-ENV CGO_ENABLED=0
 ARG TARGETPLATFORM
-RUN --mount=target=. --mount=target=/root/.cache,type=cache \
+RUN --mount=type=bind,target=. \
-    --mount=target=/go/pkg/mod,type=cache \
+    --mount=type=cache,target=/root/.cache \
-    --mount=source=/tmp/.ldflags,target=/tmp/.ldflags,from=buildx-version \
+    --mount=type=cache,target=/go/pkg/mod \
-    set -x; go build -ldflags "$(cat /tmp/.ldflags)" -o /usr/bin/buildx ./cmd/buildx && \
+    --mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
-    file /usr/bin/buildx && file /usr/bin/buildx | egrep "statically linked|Mach-O|Windows"
+  set -e
+  xx-go --wrap
+  DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
+  xx-verify --static /usr/bin/docker-buildx
+EOT

-FROM buildx-build AS integration-tests
+FROM gobase AS test
-COPY . .
+RUN --mount=type=bind,target=. \
+    --mount=type=cache,target=/root/.cache \
+    --mount=type=cache,target=/go/pkg/mod \
+  go test -v -coverprofile=/tmp/coverage.txt -covermode=atomic ./... && \
+  go tool cover -func=/tmp/coverage.txt

-# FROM golang:1.12-alpine AS docker-cli-build
+FROM scratch AS test-coverage
-# RUN apk add -U git bash coreutils gcc musl-dev
+COPY --from=test /tmp/coverage.txt /coverage.txt
-# ENV CGO_ENABLED=0
-# ARG REPO=github.com/tiborvass/cli
-# ARG BRANCH=cli-plugin-aliases
-# ARG CLI_VERSION
-# WORKDIR /go/src/github.com/docker/cli
-# RUN git clone git://$REPO . && git checkout $BRANCH
-# RUN ./scripts/build/binary

 FROM scratch AS binaries-unix
-COPY --from=buildx-build /usr/bin/buildx /
+COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx

 FROM binaries-unix AS binaries-darwin
 FROM binaries-unix AS binaries-linux

 FROM scratch AS binaries-windows
-COPY --from=buildx-build /usr/bin/buildx /buildx.exe
+COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe

 FROM binaries-$TARGETOS AS binaries
+# enable scanning for this stage
+ARG BUILDKIT_SBOM_SCAN_STAGE=true

+# Release
 FROM --platform=$BUILDPLATFORM alpine AS releaser
 WORKDIR /work
 ARG TARGETPLATFORM
 RUN --mount=from=binaries \
-    --mount=source=/tmp/.version,target=/tmp/.version,from=buildx-version \
+    --mount=type=bind,from=buildx-version,source=/buildx-version,target=/buildx-version <<EOT
-  mkdir -p /out && cp buildx* "/out/buildx-$(cat /tmp/.version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
+  set -e
+  mkdir -p /out
+  cp buildx* "/out/buildx-$(cat /buildx-version/version).$(echo $TARGETPLATFORM | sed 's/\//-/g')$(ls buildx* | sed -e 's/^buildx//')"
+EOT

 FROM scratch AS release
 COPY --from=releaser /out/ /

-FROM alpine AS demo-env
+# Shell
+FROM docker:$DOCKERD_VERSION AS dockerd-release
+FROM alpine AS shell
 RUN apk add --no-cache iptables tmux git vim less openssh
 RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
 COPY ./hack/demo-env/entrypoint.sh /usr/local/bin
 COPY ./hack/demo-env/tmux.conf /root/.tmux.conf
 COPY --from=dockerd-release /usr/local/bin /usr/local/bin
-#COPY --from=docker-cli-build /go/src/github.com/docker/cli/build/docker /usr/local/bin

 WORKDIR /work
 COPY ./hack/demo-env/examples .
 COPY --from=binaries / /usr/local/bin/
Jenkinsfile (vendored, deleted)
@@ -1,29 +0,0 @@
@Library('jps')
_

pipeline {
  agent {
    node {
      label 'ubuntu-1804-overlay2'
    }
  }
  options {
    disableConcurrentBuilds()
  }
  stages {
    stage("FOSSA Analyze") {
      steps {
        withCredentials([string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')]) {
          withGithubStatus('FOSSA.scan') {
            labelledShell returnStatus: false, returnStdout: true, label: "make fossa-analyze",
              script:'make -f Makefile.fossa BRANCH_NAME=${BRANCH_NAME} fossa-analyze'
            labelledShell returnStatus: false, returnStdout: true, label: "make fossa-test",
              script: 'make -f Makefile.fossa BRANCH_NAME=${BRANCH_NAME} fossa-test'
          }
        }
      }
    }
  }
}
MAINTAINERS
@@ -150,6 +150,9 @@ made through a pull request.
 [Org.Maintainers]

 people = [
+  "akihirosuda",
+  "crazy-max",
+  "jedevc",
   "tiborvass",
   "tonistiigi",
 ]
@@ -176,6 +179,21 @@ made through a pull request.
 # All other sections should refer to people by their canonical key
 # in the people section.

+[people.akihirosuda]
+  Name = "Akihiro Suda"
+  Email = "akihiro.suda.cz@hco.ntt.co.jp"
+  GitHub = "AkihiroSuda"
+
+[people.crazy-max]
+  Name = "Kevin Alvarez"
+  Email = "contact@crazymax.dev"
+  GitHub = "crazy-max"
+
+[people.jedevc]
+  Name = "Justin Chadwell"
+  Email = "me@jedevc.com"
+  GitHub = "jedevc"
+
 [people.thajeztah]
   Name = "Sebastiaan van Stijn"
   Email = "github@gone.nl"
Makefile
@@ -1,31 +1,80 @@
+ifneq (, $(BUILDX_BIN))
+	export BUILDX_CMD = $(BUILDX_BIN)
+else ifneq (, $(shell docker buildx version))
+	export BUILDX_CMD = docker buildx
+else ifneq (, $(shell which buildx))
+	export BUILDX_CMD = $(which buildx)
+endif
+
+export BUILDX_CMD ?= docker buildx
+
+.PHONY: all
+all: binaries
+
+.PHONY: build
+build:
+	./hack/build
+
+.PHONY: shell
 shell:
 	./hack/shell

+.PHONY: binaries
 binaries:
-	./hack/binaries
+	$(BUILDX_CMD) bake binaries

+.PHONY: binaries-cross
 binaries-cross:
-	EXPORT_LOCAL=cross-out ./hack/cross
+	$(BUILDX_CMD) bake binaries-cross

+.PHONY: install
 install: binaries
 	mkdir -p ~/.docker/cli-plugins
-	cp bin/buildx ~/.docker/cli-plugins/docker-buildx
+	install bin/build/buildx ~/.docker/cli-plugins/docker-buildx

+.PHONY: release
+release:
+	./hack/release
+
+.PHONY: validate-all
+validate-all: lint test validate-vendor validate-docs
+
+.PHONY: lint
 lint:
-	./hack/lint
+	$(BUILDX_CMD) bake lint

+.PHONY: test
 test:
-	./hack/test
+	$(BUILDX_CMD) bake test

+.PHONY: validate-vendor
 validate-vendor:
-	./hack/validate-vendor
+	$(BUILDX_CMD) bake validate-vendor

-validate-all: lint test validate-vendor
+.PHONY: validate-docs
+validate-docs:
+	$(BUILDX_CMD) bake validate-docs
+
+.PHONY: validate-authors
+validate-authors:
+	$(BUILDX_CMD) bake validate-authors
+
+.PHONY: test-driver
+test-driver:
+	./hack/test-driver

+.PHONY: vendor
 vendor:
 	./hack/update-vendor

-generate-authors:
-	./hack/generate-authors
+.PHONY: docs
+docs:
+	./hack/update-docs

-.PHONY: vendor lint shell binaries install binaries-cross validate-all generate-authors
+.PHONY: authors
+authors:
+	$(BUILDX_CMD) bake update-authors
+
+.PHONY: mod-outdated
+mod-outdated:
+	$(BUILDX_CMD) bake mod-outdated
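The new Makefile resolves `BUILDX_CMD` automatically, but `BUILDX_BIN` lets a caller pin a specific buildx binary. A hedged sketch of how a CI step might use that override (the path and step name are hypothetical):

```yaml
# Hypothetical CI step: point the Makefile at a specific buildx binary.
- name: Build with pinned buildx
  run: make binaries
  env:
    BUILDX_BIN: /usr/local/bin/buildx   # picked up by the ifneq block above
```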
Makefile.fossa (deleted)
@@ -1,18 +0,0 @@
REPO_PATH?=docker/buildx
BUILD_ANALYZER?=docker/fossa-analyzer
FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true --no-ansi

fossa-analyze:
	docker run -i --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
		-v $(CURDIR)/$*:/go/src/github.com/$(REPO_PATH) \
		-w /go/src/github.com/$(REPO_PATH) \
		-e GO111MODULE=on \
		$(BUILD_ANALYZER) analyze $(FOSSA_OPTS) --branch $(BRANCH_NAME)

# This command is used to run the fossa test command
fossa-test:
	docker run -i --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
		-v $(CURDIR)/$*:/go/src/github.com/$(REPO_PATH) \
		-w /go/src/github.com/$(REPO_PATH) \
		-e GO111MODULE=on \
		$(BUILD_ANALYZER) test --debug
829
README.md
829
README.md
@@ -1,138 +1,308 @@
|
|||||||
# buildx
|
# buildx
|
||||||
### Docker CLI plugin for extended build capabilities with BuildKit
|
|
||||||
|
|
||||||
_buildx is Tech Preview_
|
[](https://github.com/docker/buildx/releases/latest)
|
||||||
|
[](https://pkg.go.dev/github.com/docker/buildx)
|
||||||
|
[](https://github.com/docker/buildx/actions?query=workflow%3Abuild)
|
||||||
|
[](https://goreportcard.com/report/github.com/docker/buildx)
|
||||||
|
[](https://codecov.io/gh/docker/buildx)
|
||||||
|
|
||||||
### TL;DR
|
`buildx` is a Docker CLI plugin for extended build capabilities with
|
||||||
|
[BuildKit](https://github.com/moby/buildkit).
|
||||||
|
|
||||||
|
Key features:
|
||||||
|
|
||||||
- Familiar UI from `docker build`
|
- Familiar UI from `docker build`
|
||||||
- Full BuildKit capabilities with container driver
|
- Full BuildKit capabilities with container driver
|
||||||
- Multiple builder instance support
|
- Multiple builder instance support
|
||||||
- Multi-node builds for cross-platform images
|
- Multi-node builds for cross-platform images
|
||||||
- Compose build support
|
- Compose build support
|
||||||
- WIP: High-level build constructs (`bake`)
|
- High-level build constructs (`bake`)
|
||||||
- TODO: In-container driver support
|
- In-container driver support (both Docker and Kubernetes)
|
||||||
|
|
||||||
 # Table of Contents
 
 - [Installing](#installing)
+  - [Windows and macOS](#windows-and-macos)
+  - [Linux packages](#linux-packages)
+  - [Manual download](#manual-download)
+  - [Dockerfile](#dockerfile)
+  - [Set buildx as the default builder](#set-buildx-as-the-default-builder)
 - [Building](#building)
-  + [with Docker 18.09+](#with-docker-1809)
-  + [with buildx or Docker 19.03](#with-buildx-or-docker-1903)
 - [Getting started](#getting-started)
-  * [Building with buildx](#building-with-buildx)
-  * [Working with builder instances](#working-with-builder-instances)
-  * [Building multi-platform images](#building-multi-platform-images)
-  * [High-level build options](#high-level-build-options)
-- [Documentation](#documentation)
-  + [`buildx build [OPTIONS] PATH | URL | -`](#buildx-build-options-path--url---)
-  + [`buildx create [OPTIONS] [CONTEXT|ENDPOINT]`](#buildx-create-options-contextendpoint)
-  + [`buildx use NAME`](#buildx-use-name)
-  + [`buildx inspect [NAME]`](#buildx-inspect-name)
-  + [`buildx ls`](#buildx-ls)
-  + [`buildx stop [NAME]`](#buildx-stop-name)
-  + [`buildx rm [NAME]`](#buildx-rm-name)
-  + [`buildx bake [OPTIONS] [TARGET...]`](#buildx-bake-options-target)
-  + [`buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]`](#buildx-imagetools-create-options-source-source)
-  + [`buildx imagetools inspect NAME`](#buildx-imagetools-inspect-name)
-- [Setting buildx as default builder in Docker 19.03+](#setting-buildx-as-default-builder-in-docker-1903)
+  - [Building with buildx](#building-with-buildx)
+  - [Working with builder instances](#working-with-builder-instances)
+  - [Building multi-platform images](#building-multi-platform-images)
+- [Manuals](docs/manuals)
+  - [High-level build options with Bake](docs/manuals/bake/index.md)
+  - [Drivers](docs/manuals/drivers/index.md)
+  - [Exporters](docs/manuals/exporters/index.md)
+  - [Cache backends](docs/manuals/cache/backends/index.md)
+- [Guides](docs/guides)
+  - [CI/CD](docs/guides/cicd.md)
+  - [CNI networking](docs/guides/cni-networking.md)
+  - [Using a custom network](docs/guides/custom-network.md)
+  - [Using a custom registry configuration](docs/guides/custom-registry-config.md)
+  - [OpenTelemetry support](docs/guides/opentelemetry.md)
+  - [Registry mirror](docs/guides/registry-mirror.md)
+  - [Resource limiting](docs/guides/resource-limiting.md)
+- [Reference](docs/reference/buildx.md)
+  - [`buildx bake`](docs/reference/buildx_bake.md)
+  - [`buildx build`](docs/reference/buildx_build.md)
+  - [`buildx create`](docs/reference/buildx_create.md)
+  - [`buildx du`](docs/reference/buildx_du.md)
+  - [`buildx imagetools`](docs/reference/buildx_imagetools.md)
+  - [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
+  - [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
+  - [`buildx inspect`](docs/reference/buildx_inspect.md)
+  - [`buildx install`](docs/reference/buildx_install.md)
+  - [`buildx ls`](docs/reference/buildx_ls.md)
+  - [`buildx prune`](docs/reference/buildx_prune.md)
+  - [`buildx rm`](docs/reference/buildx_rm.md)
+  - [`buildx stop`](docs/reference/buildx_stop.md)
+  - [`buildx uninstall`](docs/reference/buildx_uninstall.md)
+  - [`buildx use`](docs/reference/buildx_use.md)
+  - [`buildx version`](docs/reference/buildx_version.md)
 - [Contributing](#contributing)
 
 
 # Installing
 
-Using `buildx` as a docker CLI plugin requires using Docker 19.03. A limited set of functionality works with older versions of Docker when invoking the binary directly.
+Using `buildx` as a docker CLI plugin requires using Docker 19.03 or newer.
+A limited set of functionality works with older versions of Docker when
+invoking the binary directly.
 
-### Docker CE
+## Windows and macOS
 
-`buildx` comes bundled with Docker CE starting with 19.03, but requires experimental mode to be enabled on the Docker CLI.
-To enable it, `"experimental": "enabled"` can be added to the CLI configuration file `~/.docker/config.json`. An alternative is to set the `DOCKER_CLI_EXPERIMENTAL=enabled` environment variable.
+Docker Buildx is included in [Docker Desktop](https://docs.docker.com/desktop/)
+for Windows and macOS.
 
-### Binary release
+## Linux packages
 
-Download the latest binary release from https://github.com/docker/buildx/releases/latest and copy it to `~/.docker/cli-plugins` folder with name `docker-buildx`.
+Docker Linux packages also include Docker Buildx when installed using the
+[DEB or RPM packages](https://docs.docker.com/engine/install/).
 
-Change the permission to execute:
+## Manual download
-```sh
-chmod a+x ~/.docker/cli-plugins/docker-buildx
+> **Important**
+>
+> This section is for unattended installation of the buildx component. These
+> instructions are mostly suitable for testing purposes. We do not recommend
+> installing buildx using manual download in production environments as they
+> will not be updated automatically with security updates.
+>
+> On Windows and macOS, we recommend that you install [Docker Desktop](https://docs.docker.com/desktop/)
+> instead. For Linux, we recommend that you follow the [instructions specific for your distribution](#linux-packages).
+
+You can also download the latest binary from the [GitHub releases page](https://github.com/docker/buildx/releases/latest).
+
+Rename the relevant binary and copy it to the destination matching your OS:
+
+| OS      | Binary name         | Destination folder                  |
+| ------- | ------------------- | ----------------------------------- |
+| Linux   | `docker-buildx`     | `$HOME/.docker/cli-plugins`         |
+| macOS   | `docker-buildx`     | `$HOME/.docker/cli-plugins`         |
+| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
+
+Or copy it into one of these folders for installing it system-wide.
+
+On Unix environments:
+
+* `/usr/local/lib/docker/cli-plugins` OR `/usr/local/libexec/docker/cli-plugins`
+* `/usr/lib/docker/cli-plugins` OR `/usr/libexec/docker/cli-plugins`
+
+On Windows:
+
+* `C:\ProgramData\Docker\cli-plugins`
+* `C:\Program Files\Docker\cli-plugins`
+
+> **Note**
+>
+> On Unix environments, it may also be necessary to make it executable with `chmod +x`:
+> ```shell
+> $ chmod +x ~/.docker/cli-plugins/docker-buildx
+> ```
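A minimal end-to-end sketch of the manual install steps above (create the plugin folder, drop in a `docker-buildx` binary, make it executable). It uses a temporary directory standing in for `~/.docker/cli-plugins` and a stub script standing in for the real downloaded binary, so it can run without Docker installed:

```shell
#!/bin/sh
# Sketch only: a temp dir stands in for ~/.docker/cli-plugins, and a stub
# script stands in for the downloaded buildx binary.
set -eu
PLUGIN_DIR="$(mktemp -d)/cli-plugins"        # stand-in for ~/.docker/cli-plugins
mkdir -p "$PLUGIN_DIR"
printf '#!/bin/sh\necho buildx-stub\n' > "$PLUGIN_DIR/docker-buildx"
chmod +x "$PLUGIN_DIR/docker-buildx"         # CLI plugins must be executable
"$PLUGIN_DIR/docker-buildx"                  # runs the stub
```

With the real binary, the only differences are the download step and using `$HOME/.docker/cli-plugins` itself as the destination.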
+## Dockerfile
+
+Here is how to install and use Buildx inside a Dockerfile through the
+[`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:
+
+```dockerfile
+# syntax=docker/dockerfile:1
+FROM docker
+COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
+RUN docker buildx version
 ```
 
-After installing you can run `docker buildx` to see the new commands.
+# Set buildx as the default builder
 
+Running the command [`docker buildx install`](docs/reference/buildx_install.md)
+sets up the `docker build` command as an alias to `docker buildx build`. This
+results in the ability to have `docker build` use the current buildx builder.
 
+To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_uninstall.md).
 
 # Building
 
-### with Docker 18.09+
-```
-$ git clone git://github.com/docker/buildx && cd buildx
-$ make install
-```
+```console
+# Buildx 0.6+
+$ docker buildx bake "https://github.com/docker/buildx.git"
+$ mkdir -p ~/.docker/cli-plugins
+$ mv ./bin/build/buildx ~/.docker/cli-plugins/docker-buildx
 
-### with buildx or Docker 19.03
-```
-$ export DOCKER_BUILDKIT=1
-$ docker build --platform=local -o . git://github.com/docker/buildx
+# Docker 19.03+
+$ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
+$ mkdir -p ~/.docker/cli-plugins
 $ mv buildx ~/.docker/cli-plugins/docker-buildx
 
+# Local
+$ git clone https://github.com/docker/buildx.git && cd buildx
+$ make install
 ```
 
 # Getting started
 
 ## Building with buildx
 
-Buildx is a Docker CLI plugin that extends the `docker build` command with the full support of the features provided by [Moby BuildKit](https://github.com/moby/buildkit) builder toolkit. It provides the same user experience as `docker build` with many new features like creating scoped builder instances and building against multiple nodes concurrently.
+Buildx is a Docker CLI plugin that extends the `docker build` command with the
+full support of the features provided by the [Moby BuildKit](https://github.com/moby/buildkit)
+builder toolkit. It provides the same user experience as `docker build` with
+many new features like creating scoped builder instances and building against
+multiple nodes concurrently.
 
-After installation, buildx can be accessed through the `docker buildx` command. `docker buildx build` is the command for starting a new build.
+After installation, buildx can be accessed through the `docker buildx` command
+with Docker 19.03. `docker buildx build` is the command for starting a new
+build. With Docker versions older than 19.03, the buildx binary can be called
+directly to access the `docker buildx` subcommands.
 
-```
+```console
 $ docker buildx build .
 [+] Building 8.4s (23/32)
 => ...
 ```
 
+Buildx will always build using the BuildKit engine and does not require
+`DOCKER_BUILDKIT=1` environment variable for starting builds.
 
-Buildx will always build using the BuildKit engine and does not require `DOCKER_BUILDKIT=1` environment variable for starting builds.
+The `docker buildx build` command supports features available for `docker build`,
+including features such as outputs configuration, inline build caching, and
+specifying target platform. In addition, Buildx also supports new features that
+are not yet available for regular `docker build` like building manifest lists,
+distributed caching, and exporting build results to OCI image tarballs.
 
-Buildx build command supports the features available for `docker build` including the new features in Docker 19.03 such as outputs configuration, inline build caching or specifying target platform. In addition, buildx supports new features not yet available for regular `docker build` like building manifest lists, distributed caching, exporting build results to OCI image tarballs etc.
+Buildx is flexible and can be run in different configurations that are exposed
+through various "drivers". Each driver defines how and where a build should
+run, and has a different feature set.
 
-Buildx is supposed to be flexible and can be run in different configurations that are exposed through a driver concept. Currently, we support a "docker" driver that uses the BuildKit library bundled into the docker daemon binary, and a "docker-container" driver that automatically launches BuildKit inside a Docker container. We plan to add more drivers in the future, for example, one that would allow running buildx inside an (unprivileged) container.
-The user experience of using buildx is very similar across drivers, but there are some features that are not currently supported by the "docker" driver, because the BuildKit library bundled into docker daemon currently uses a different storage component. In contrast, all images built with "docker" driver are automatically added to the "docker images" view by default, whereas when using other drivers the method for outputting an image needs to be selected with `--output`.
+We currently support the following drivers:
+- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
+- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
 
+For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
 
 ## Working with builder instances
 
-By default, buildx will initially use the "docker" driver if it is supported, providing a very similar user experience to the native `docker build`. But using a local shared daemon is only one way to build your applications.
+By default, buildx will initially use the `docker` driver if it is supported,
+providing a very similar user experience to the native `docker build`. Note that
+you must use a local shared daemon to build your applications.
 
-Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.
+Buildx allows you to create new instances of isolated builders. This can be
+used for getting a scoped environment for your CI builds that does not change
+the state of the shared daemon or for isolating the builds for different
+projects. You can create a new instance for a set of remote nodes, forming a
+build farm, and quickly switch between them.
 
-New instances can be created with `docker buildx create` command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the `DOCKER_HOST` or remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the `inspect`, `stop` and `rm` commands and list all available builders with `ls`. After creating a new builder you can also append new nodes to it.
+You can create new instances using the [`docker buildx create`](docs/reference/buildx_create.md)
+command. This creates a new builder instance with a single node based on your
+current configuration.
 
-To switch between different builders use `docker buildx use <name>`. After running this command the build commands would automatically keep using this builder.
+To use a remote node you can specify the `DOCKER_HOST` or the remote context name
+while creating the new builder. After creating a new instance, you can manage its
+lifecycle using the [`docker buildx inspect`](docs/reference/buildx_inspect.md),
+[`docker buildx stop`](docs/reference/buildx_stop.md), and
+[`docker buildx rm`](docs/reference/buildx_rm.md) commands. To list all
+available builders, use [`buildx ls`](docs/reference/buildx_ls.md). After
+creating a new builder you can also append new nodes to it.
 
-Docker 19.03 also features a new `docker context` command that can be used for giving names for remote Docker API endpoints. Buildx integrates with `docker context` so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it you can also set the context name as the target.
+To switch between different builders, use [`docker buildx use <name>`](docs/reference/buildx_use.md).
+After running this command, the build commands will automatically use this
+builder.
 
+Docker also features a [`docker context`](https://docs.docker.com/engine/reference/commandline/context/)
+command that can be used for giving names for remote Docker API endpoints.
+Buildx integrates with `docker context` so that all of your contexts
+automatically get a default builder instance. While creating a new builder
+instance or when adding a node to it you can also set the context name as the
+target.
 
 ## Building multi-platform images
 
-BuildKit is designed to work well for building for multiple platforms and not only for the architecture and operating system that the user invoking the build happens to run.
+BuildKit is designed to work well for building for multiple platforms and not
+only for the architecture and operating system that the user invoking the build
+happens to run.
 
-When invoking a build, the `--platform` flag can be used to specify the target platform for the build output, (e.g. linux/amd64, linux/arm64, darwin/amd64). When the current builder instance is backed by the "docker-container" driver, multiple platforms can be specified together. In this case, a manifest list will be built, containing images for all of the specified architectures. When this image is used in `docker run` or `docker service`, Docker will pick the correct image based on the node’s platform.
+When you invoke a build, you can set the `--platform` flag to specify the target
+platform for the build output (for example, `linux/amd64`, `linux/arm64`, or
+`darwin/amd64`).
 
-Multi-platform images can be built by mainly three different strategies that are all supported by buildx and Dockerfiles. You can use the QEMU emulation support in the kernel, build on multiple native nodes using the same builder instance or use a stage in Dockerfile to cross-compile to different architectures.
+When the current builder instance is backed by the `docker-container` or
+`kubernetes` driver, you can specify multiple platforms together. In this case,
+it builds a manifest list which contains images for all specified architectures.
+When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
+or [`docker service`](https://docs.docker.com/engine/reference/commandline/service/),
+Docker picks the correct image based on the node's platform.
 
-QEMU is the easiest way to get started if your node already supports it (e.g. if you are using Docker Desktop). It requires no changes to your Dockerfile and BuildKit will automatically detect the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture it will automatically load it through a binary registered in the binfmt_misc handler.
+You can build multi-platform images using three different strategies that are
+supported by Buildx and Dockerfiles:
 
-Using multiple native nodes provides better support for more complicated cases not handled by QEMU and generally have better performance. Additional nodes can be added to the builder instance with `--append` flag.
+1. Using the QEMU emulation support in the kernel
+2. Building on multiple native nodes using the same builder instance
+3. Using a stage in Dockerfile to cross-compile to different architectures
 
+QEMU is the easiest way to get started if your node already supports it (for
+example, if you are using Docker Desktop). It requires no changes to your
+Dockerfile and BuildKit automatically detects the secondary architectures that
+are available. When BuildKit needs to run a binary for a different architecture,
+it automatically loads it through a binary registered in the `binfmt_misc`
+handler.
 
+For QEMU binaries registered with `binfmt_misc` on the host OS to work
+transparently inside containers they must be registered with the `fix_binary`
+flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check
+for proper registration by checking if `F` is among the flags in
+`/proc/sys/fs/binfmt_misc/qemu-*`. While Docker Desktop comes preconfigured
+with `binfmt_misc` support for additional platforms, for other installations
+it likely needs to be installed using the [`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt)
+image.
 
+```console
+$ docker run --privileged --rm tonistiigi/binfmt --install all
 ```
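The `F`-flag check described above can be scripted; `has_fix_binary` simply inspects the `flags:` line of a binfmt_misc status file. Since a live `/proc/sys/fs/binfmt_misc/qemu-*` entry may not exist on the machine running this sketch, the demo feeds it a fabricated status file with the same layout:

```shell
#!/bin/sh
# has_fix_binary FILE: succeed if a binfmt_misc status file lists the F
# (fix_binary) flag. On a real host you would pass a path such as
# /proc/sys/fs/binfmt_misc/qemu-aarch64 instead.
has_fix_binary() {
  grep -q '^flags:.*F' "$1"
}

# Demo on a fabricated status file mimicking the /proc layout.
status="$(mktemp)"
printf 'enabled\ninterpreter /usr/bin/qemu-aarch64\nflags: OCF\n' > "$status"
has_fix_binary "$status" && echo "fix_binary enabled"
```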
-# assuming contexts node-amd64 and node-arm64 exist in "docker context ls"
+Using multiple native nodes provides better support for more complicated cases
+that are not handled by QEMU and generally have better performance. You can
+add additional nodes to the builder instance using the `--append` flag.
 
+Assuming contexts `node-amd64` and `node-arm64` exist in `docker context ls`:
 
+```console
 $ docker buildx create --use --name mybuild node-amd64
 mybuild
 $ docker buildx create --append --name mybuild node-arm64
 $ docker buildx build --platform linux/amd64,linux/arm64 .
 ```
 
-Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. List of build arguments like `BUILDPLATFORM` and `TARGETPLATFORM` are available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.
+Finally, depending on your project, the language that you use may have good
+support for cross-compilation. In that case, multi-stage builds in Dockerfiles
+can be effectively used to build binaries for the platform specified with
+`--platform` using the native architecture of the build node. A list of build
+arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically
+inside your Dockerfile and can be leveraged by the processes running as part
+of your build.
 
-```
+```dockerfile
+# syntax=docker/dockerfile:1
 FROM --platform=$BUILDPLATFORM golang:alpine AS build
 ARG TARGETPLATFORM
 ARG BUILDPLATFORM
@@ -141,537 +311,12 @@ FROM alpine
 COPY --from=build /log /log
 ```
 
+You can also use [`tonistiigi/xx`](https://github.com/tonistiigi/xx) Dockerfile
+cross-compilation helpers for more advanced use-cases.
 
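`TARGETPLATFORM` arrives in a cross-compiling stage as a single `os/arch[/variant]` string; BuildKit also provides it pre-split as the `TARGETOS`, `TARGETARCH`, and `TARGETVARIANT` build arguments, but the split itself is plain string handling. A sketch outside any Dockerfile, with the platform value hardcoded for the demo:

```shell
#!/bin/sh
# Split an os/arch[/variant] platform string the way a cross-compiling RUN
# step might, e.g. to set GOOS/GOARCH for Go. The value is hardcoded here;
# in a real build it would come from ARG TARGETPLATFORM.
TARGETPLATFORM="linux/arm/v7"
TARGETOS="${TARGETPLATFORM%%/*}"        # everything before the first slash
rest="${TARGETPLATFORM#*/}"
TARGETARCH="${rest%%/*}"
case "$rest" in
  */*) TARGETVARIANT="${rest#*/}" ;;    # optional variant, e.g. v7
  *)   TARGETVARIANT="" ;;
esac
echo "$TARGETOS $TARGETARCH $TARGETVARIANT"   # linux arm v7
```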
 ## High-level build options
 
-Buildx also aims to provide support for higher level build concepts that go beyond invoking a single build command. We want to support building all the images in your application together and let the users define project specific reusable build flows that can then be easily invoked by anyone.
+See [`docs/manuals/bake/index.md`](docs/manuals/bake/index.md) for more details.
 
-BuildKit has great support for efficiently handling multiple concurrent build requests and deduplicating work. While build commands can be combined with general-purpose command runners (eg. make), these tools generally invoke builds in sequence and therefore can’t leverage the full potential of BuildKit parallelization or combine BuildKit’s output for the user. For this use case we have added a command called `docker buildx bake`.
-
-Currently, the bake command supports building images from compose files, similar to `compose build` but allowing all the services to be built concurrently as part of a single request.
-
-There is also support for custom build rules from HCL/JSON files allowing better code reuse and different target groups. The design of bake is in very early stages and we are looking for feedback from users.
-
|
|
||||||
|
|
||||||
# Documentation
|
|
||||||
|
|
||||||
### `buildx build [OPTIONS] PATH | URL | -`
|
|
||||||
|
|
||||||
The `buildx build` command starts a build using BuildKit. This command is similar to the UI of `docker build` command and takes the same flags and arguments.
|
|
||||||
|
|
||||||
Options:
|
|
||||||
|
|
||||||
| Flag | Description |
|
|
||||||
| --- | --- |
|
|
||||||
| --add-host [] | Add a custom host-to-IP mapping (host:ip)
|
|
||||||
| --allow [] | Allow extra privileged entitlement, e.g. network.host, security.insecure
|
|
||||||
| --build-arg [] | Set build-time variables
|
|
||||||
| --cache-from [] | External cache sources (eg. user/app:cache, type=local,src=path/to/dir)
|
|
||||||
| --cache-to [] | Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)
|
|
||||||
| --file string | Name of the Dockerfile (Default is 'PATH/Dockerfile')
|
|
||||||
| --iidfile string | Write the image ID to the file
|
|
||||||
| --label [] | Set metadata for an image
|
|
||||||
| --load | Shorthand for --output=type=docker
|
|
||||||
| --network string | Set the networking mode for the RUN instructions during build (default "default")
|
|
||||||
| --no-cache | Do not use cache when building the image
|
|
||||||
| --output [] | Output destination (format: type=local,dest=path)
|
|
||||||
| --platform [] | Set target platform for build
|
|
||||||
| --progress string | Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
|
|
||||||
| --pull | Always attempt to pull a newer version of the image
|
|
||||||
| --push | Shorthand for --output=type=registry
|
|
||||||
| --secret [] | Secret file to expose to the build: id=mysecret,src=/local/secret
|
|
||||||
| --ssh [] | SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])
|
|
||||||
| --tag [] | Name and optionally a tag in the 'name:tag' format
|
|
||||||
| --target string | Set the target build stage to build.
|
|
||||||
|
|
||||||
For documentation on most of these flags refer to `docker build` documentation in https://docs.docker.com/engine/reference/commandline/build/ . In here we’ll document a subset of the new flags.
|
|
||||||
|
|
||||||
#### ` --platform=value[,value]`
|
|
||||||
|
|
||||||
Set the target platform for the build. All `FROM` commands inside the Dockerfile without their own `--platform` flag will pull base images for this platform and this value will also be the platform of the resulting image. The default value will be the current platform of the buildkit daemon.
|
|
||||||
|
|
||||||
When using `docker-container` driver with `buildx`, this flag can accept multiple values as an input separated by a comma. With multiple values the result will be built for all of the specified platforms and joined together into a single manifest list.
|
|
||||||
|
|
||||||
If the`Dockerfile` needs to invoke the `RUN` command, the builder needs runtime support for the specified platform. In a clean setup, you can only execute `RUN` commands for your system architecture. If your kernel supports binfmt_misc https://en.wikipedia.org/wiki/Binfmt_misc launchers for secondary architectures buildx will pick them up automatically. Docker desktop releases come with binfmt_misc automatically configured for `arm64` and `arm` architectures. You can see what runtime platforms your current builder instance supports by running `docker buildx inspect --bootstrap`.
|
|
||||||
|
|
||||||
Inside a `Dockerfile`, you can access the current platform value through `TARGETPLATFORM` build argument. Please refer to `docker build` documentation for the full description of automatic platform argument variants https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope .
|
|
||||||
|
|
||||||
The formatting for the platform specifier is defined in https://github.com/containerd/containerd/blob/v1.2.6/platforms/platforms.go#L63 .
|
|
||||||
|
|
||||||
Examples:
|
|
||||||
```
|
|
||||||
docker buildx build --platform=linux/arm64 .
|
|
||||||
docker buildx build --platform=linux/amd64,linux/arm64,linux/arm/v7 .
|
|
||||||
docker buildx build --platform=darwin .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### `-o, --output=[PATH,-,type=TYPE[,KEY=VALUE]`
|
|
||||||
|
|
||||||
Sets the export action for the build result. In `docker build` all builds finish by creating a container image and exporting it to `docker images`. `buildx` makes this step configurable allowing results to be exported directly to the client, oci image tarballs, registry etc.
|
|
||||||
|
|
||||||
Supported exported types are:
|
|
||||||
|
|
||||||
##### `local`
|
|
||||||
|
|
||||||
The `local` export type writes all result files to a directory on the client. The new files will be owned by the current user. On multi-platform builds, all results will be put in subdirectories by their platform.
|
|
||||||
|
|
||||||
Attribute key:
|
|
||||||
|
|
||||||
- `dest` - destination directory where files will be written
|
|
||||||
|
|
||||||
##### `tar`
|
|
||||||
|
|
||||||
The `tar` export type writes all result files as a single tarball on the client. On multi-platform builds all results will be put in subdirectories by their platform.
|
|
||||||
|
|
||||||
Attribute key:
|
|
||||||
|
|
||||||
- `dest` - destination path where tarball will be written. “-” writes to stdout.
|
|
||||||
|
|
||||||
##### `oci`
|
|
||||||
|
|
||||||
The `oci` export type writes the result image or manifest list as an OCI image layout tarball https://github.com/opencontainers/image-spec/blob/master/image-layout.md on the client.
|
|
||||||
|
|
||||||
Attribute key:
|
|
||||||
|
|
||||||
- `dest` - destination path where tarball will be written. “-” writes to stdout.
|
|
||||||
|
|
||||||
##### `docker`

The `docker` export type writes the single-platform result image as a [Docker image specification](https://github.com/moby/moby/blob/master/image/spec/v1.2.md) tarball on the client. Tarballs created by this exporter are also OCI compatible.

Currently, multi-platform images cannot be exported with the `docker` export type. The most common use case for multi-platform images is to push directly to a registry (see [`registry`](#registry)).

Attribute keys:

- `dest` - destination path where the tarball will be written. If not specified, the tar will be loaded automatically to the current docker instance.
- `context` - name of the docker context to import the result into
##### `image`

The `image` exporter writes the build result as an image or a manifest list. When using the `docker` driver, the image will appear in `docker images`. Optionally, the image can be automatically pushed to a registry by specifying attributes.

Attribute keys:

- `name` - name (references) for the new image.
- `push` - boolean to automatically push the image.
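For example, to name the result and push it to a registry in one step (the image name `user/app` is a placeholder):

```shell
docker buildx build -o type=image,name=user/app,push=true .
```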
##### `registry`

The `registry` exporter is a shortcut for `type=image,push=true`.
Buildx with the `docker` driver currently only supports the local, tarball and image exporters. The `docker-container` driver supports all the exporters.

If just a path is specified as a value, `buildx` will use the local exporter with that path as the destination. If the value is `-`, `buildx` will use the `tar` exporter and write to `stdout`.

Examples:
```
docker buildx build -o . .
docker buildx build -o outdir .
docker buildx build -o - - > out.tar
docker buildx build -o type=docker .
docker buildx build -o type=docker,dest=- . > myimage.tar
docker buildx build -t tonistiigi/foo -o type=registry .
```
#### `--push`

Shorthand for [`--output=type=registry`](#registry). Will automatically push the build result to the registry.

#### `--load`

Shorthand for [`--output=type=docker`](#docker). Will automatically load the single-platform build result to `docker images`.
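For example, to build with the current builder and load the result into the local Docker image store (the tag `myimage` is arbitrary):

```shell
docker buildx build --load -t myimage .
```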
#### `--cache-from=[NAME|type=TYPE[,KEY=VALUE]]`

Use an external cache source for a build. Supported types are `registry` and `local`. The `registry` source can import cache from a cache manifest or (special) image configuration on the registry. The `local` source can import cache from local files previously exported with `--cache-to`.

If no type is specified, the `registry` type is used with the specified reference.

The `docker` driver currently only supports importing build cache from the registry.

Examples:
```
docker buildx build --cache-from=user/app:cache .
docker buildx build --cache-from=user/app .
docker buildx build --cache-from=type=registry,ref=user/app .
docker buildx build --cache-from=type=local,src=path/to/cache .
```
#### `--cache-to=[NAME|type=TYPE[,KEY=VALUE]]`

Export build cache to an external cache destination. Supported types are `registry`, `local` and `inline`. The `registry` type exports build cache to a cache manifest in the registry, `local` exports cache to a local directory on the client, and `inline` writes the cache metadata into the image configuration.

The `docker` driver currently only supports exporting inline cache metadata to the image configuration. Alternatively, `--build-arg BUILDKIT_INLINE_CACHE=1` can be used to trigger the inline cache exporter.

Attribute key:

- `mode` - Specifies how many layers are exported with the cache. `min` only exports layers already in the final build stage, `max` exports layers for all stages. Metadata is always exported for the whole build.
Examples:

```
docker buildx build --cache-to=user/app:cache .
docker buildx build --cache-to=type=inline .
docker buildx build --cache-to=type=registry,ref=user/app .
docker buildx build --cache-to=type=local,dest=path/to/cache .
```
#### `--allow=ENTITLEMENT`

Allow extra privileged entitlements. List of entitlements:

- `network.host` - Allows executions with host networking.
- `security.insecure` - Allows executions without sandbox. See [related Dockerfile extensions](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md#run---securityinsecuresandbox).

For entitlements to be enabled, the `buildkitd` daemon also needs to allow them with `--allow-insecure-entitlement` (see [`create --buildkitd-flags`](#--buildkitd-flags-flags)).
Example:

```
$ docker buildx create --use --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure'
$ docker buildx build --allow security.insecure .
```
### `buildx create [OPTIONS] [CONTEXT|ENDPOINT]`

Create makes a new builder instance pointing to a docker context or endpoint, where context is the name of a context from `docker context ls` and endpoint is the address of the docker socket (e.g. the `DOCKER_HOST` value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Options:
| Flag | Description |
| --- | --- |
| --append | Append a node to builder instead of changing it |
| --buildkitd-flags string | Flags for buildkitd daemon |
| --config string | BuildKit config file |
| --driver string | Driver to use (e.g. docker-container) |
| --driver-opt stringArray | Options for the driver |
| --leave | Remove a node from builder instead of changing it |
| --name string | Builder instance name |
| --node string | Create/modify node with given name |
| --platform stringArray | Fixed platforms for current node |
| --use | Set the current builder instance |
#### `--append`

Changes the action of the command to append a new node to an existing builder specified by `--name`. Buildx will choose an appropriate node for a build based on the platforms it supports.
Example:

```
$ docker buildx create mycontext1
eager_beaver
$ docker buildx create --name eager_beaver --append mycontext2
eager_beaver
```
#### `--buildkitd-flags FLAGS`

Adds flags when starting the buildkitd daemon. They take precedence over the configuration file specified by [`--config`](#--config-file). See `buildkitd --help` for the available flags.

Example:

```
--buildkitd-flags '--debug --debugaddr 0.0.0.0:6666'
```
#### `--config FILE`

Specifies the configuration file for the buildkitd daemon to use. The configuration can be overridden by [`--buildkitd-flags`](#--buildkitd-flags-flags). See an [example buildkitd configuration file](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md).
#### `--driver DRIVER`

Sets the builder driver to be used. There are two available drivers, each with its own specifics.

- `docker` - Uses the builder that is built into the docker daemon. With this driver, the [`--load`](#--load) flag is implied by default on `buildx build`. However, building multi-platform images or exporting cache is not currently supported.

- `docker-container` - Uses a BuildKit container that will be spawned via docker. With this driver, both building multi-platform images and exporting cache are supported. However, images built will not automatically appear in `docker images` (see [`build --load`](#--load)).
#### `--driver-opt OPTIONS`

Passes additional driver-specific options. Details for each driver:

- `docker` - No driver options
- `docker-container`
  - `image` - Sets the container image to be used for running buildkit.
  - `network` - Sets the network mode for running the buildkit container.
  - Example:

    ```
    --driver docker-container --driver-opt image=moby/buildkit:master,network=host
    ```
#### `--leave`

Changes the action of the command to remove a node from a builder. The builder needs to be specified with `--name`, and the node to remove is set with `--node`.

Example:

```
docker buildx create --name mybuilder --node mybuilder0 --leave
```
#### `--name NAME`

Specifies the name of the builder to be created or modified. If none is specified, one will be automatically generated.

#### `--node NODE`

Specifies the name of the node to be created or modified. If none is specified, it is the name of the builder it belongs to, with an index number suffix.
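For example, to add a node with an explicit name to an existing builder (the builder, node, and context names here are placeholders):

```shell
docker buildx create --name mybuilder --node mybuilder1 --append mycontext2
```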
#### `--platform PLATFORMS`

Sets the platforms supported by the node. It expects a comma-separated list of platforms of the form OS/architecture/variant. The node will also automatically detect the platforms it supports, but manual values take priority over the detected ones and can be used when multiple nodes support building for the same platform.
Example:

```
docker buildx create --platform linux/amd64
docker buildx create --platform linux/arm64,linux/arm/v7
```
#### `--use`

Automatically switches the current builder to the newly created one. Equivalent to running `docker buildx use $(docker buildx create ...)`.
### `buildx use NAME`

Switches the current builder instance. Build commands invoked after this command will run on the specified builder. Alternatively, a context name can be used to switch to the default builder of that context.
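For example, to switch to a previously created builder (the name `mybuilder` is a placeholder):

```shell
docker buildx use mybuilder
```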
### `buildx inspect [NAME]`

Shows information about the current or specified builder.
Example:

```
Name:   elated_tesla
Driver: docker-container

Nodes:
Name:      elated_tesla0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64

Name:      elated_tesla1
Endpoint:  ssh://ubuntu@1.2.3.4
Status:    running
Platforms: linux/arm64, linux/arm/v7, linux/arm/v6
```
#### `--bootstrap`

Ensures that the builder is running before inspecting it. If the driver is `docker-container`, then `--bootstrap` starts the BuildKit container and waits until it is operational. Bootstrapping is automatically done during build, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in `buildx ls`).
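For example, to start the current builder's container ahead of the first build:

```shell
docker buildx inspect --bootstrap
```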
### `buildx ls`

Lists all builder instances and the nodes for each instance.
Example:

```
docker buildx ls
NAME/NODE       DRIVER/ENDPOINT             STATUS  PLATFORMS
elated_tesla *  docker-container
  elated_tesla0 unix:///var/run/docker.sock running linux/amd64
  elated_tesla1 ssh://ubuntu@1.2.3.4        running linux/arm64, linux/arm/v7, linux/arm/v6
default         docker
  default       default                     running linux/amd64
```
Each builder has one or more nodes associated with it. The current builder's name is marked with a `*`.
### `buildx stop [NAME]`

Stops the specified or current builder. This will not prevent `buildx build` from restarting the builder. The implementation of stop depends on the driver.
### `buildx rm [NAME]`

Removes the specified or current builder. Attempting to remove the default builder is a no-op.
### `buildx bake [OPTIONS] [TARGET...]`

Bake is a high-level build command.

Each specified target will run in parallel as part of the build.

Options:
| Flag | Description |
| --- | --- |
| -f, --file stringArray | Build definition file |
| --no-cache | Do not use cache when building the image |
| --print | Print the options without building |
| --progress string | Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto") |
| --pull | Always attempt to pull a newer version of the image |
| --set stringArray | Override target value (eg: target.key=value) |
#### `-f, --file FILE`

Specifies the bake definition file. The file can be a Docker Compose, JSON or HCL file. If multiple files are specified they are all read and configurations are combined. By default, if no files are specified, the following are parsed:

- `docker-compose.yml`
- `docker-compose.yaml`
- `docker-bake.json`
- `docker-bake.override.json`
- `docker-bake.hcl`
- `docker-bake.override.hcl`
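For example, to build a specific target from an explicit definition file plus an override (the target name `webapp` is a placeholder):

```shell
docker buildx bake -f docker-bake.hcl -f docker-bake.override.hcl webapp
```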
#### `--no-cache`

Same as `build --no-cache`. Do not use cache when building the image.
#### `--print`

Prints the resulting options of the targets to be built, in JSON format, without starting a build.
```
$ docker buildx bake -f docker-bake.hcl --print db
{
   "target": {
      "db": {
         "context": "./",
         "dockerfile": "Dockerfile",
         "tags": [
            "docker.io/tiborvass/db"
         ]
      }
   }
}
```
#### `--progress`

Same as `build --progress`. Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto").

#### `--pull`

Same as `build --pull`.
#### `--set target.key[.subkey]=value`

Override target configurations from the command line.

Example:

```
docker buildx bake --set target.args.mybuildarg=value
docker buildx bake --set target.platform=linux/arm64
```
#### File definition

In addition to compose files, bake supports a JSON and an equivalent HCL file format for defining build groups and targets.

A target reflects a single docker build invocation with the same options that you would specify for `docker build`. A group is a grouping of targets.

Multiple files can include the same target and the final build options will be determined by merging them together.

In the case of compose files, each service corresponds to a target.

A group can specify its list of targets with the `targets` option. A target can inherit build options by setting the `inherits` option to the list of targets or groups to inherit from.

Note: The design of the bake command is work in progress; the user experience may change based on feedback.

Example HCL definition:
```
group "default" {
  targets = ["db", "webapp-dev"]
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp"]
}

target "webapp-release" {
  inherits = ["webapp-dev"]
  platforms = ["linux/amd64", "linux/arm64"]
}

target "db" {
  dockerfile = "Dockerfile.db"
  tags = ["docker.io/username/db"]
}
```
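The same groups and targets can be written in the JSON format; a sketch of the `default` group and `db` target from the HCL example above:

```
{
  "group": {
    "default": {
      "targets": ["db", "webapp-dev"]
    }
  },
  "target": {
    "db": {
      "dockerfile": "Dockerfile.db",
      "tags": ["docker.io/username/db"]
    }
  }
}
```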
### `buildx imagetools create [OPTIONS] [SOURCE] [SOURCE...]`

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results.

Create creates a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests and must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.

Options:
| Flag | Description |
| --- | --- |
| --append | Append to existing manifest |
| --dry-run | Show final image instead of pushing |
| -f, --file stringArray | Read source descriptor from file |
| -t, --tag stringArray | Set reference for new image |
#### `--append`

Append appends the new sources to an existing manifest list in the destination.

#### `--dry-run`

Do not push the image, just show it.

#### `-f, --file FILE`

Reads sources from files. A source can be a manifest digest, a manifest reference, or a JSON OCI descriptor object.

#### `-t, --tag IMAGE`

Name of the image to be created.
Examples:

```
docker buildx imagetools create --dry-run alpine@sha256:5c40b3c27b9f13c873fefb2139765c56ce97fd50230f1f2d5c91e55dec171907 sha256:c4ba6347b0e4258ce6a6de2401619316f982b7bcc529f73d2a410d0097730204

docker buildx imagetools create -t tonistiigi/myapp -f image1 -f image2
```
### `buildx imagetools inspect NAME`

Shows details of an image in the registry.
Example:

```
$ docker buildx imagetools inspect alpine
Name:      docker.io/library/alpine:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913

Manifests:
  Name:      docker.io/library/alpine:latest@sha256:5c40b3c27b9f13c873fefb2139765c56ce97fd50230f1f2d5c91e55dec171907
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      docker.io/library/alpine:latest@sha256:c4ba6347b0e4258ce6a6de2401619316f982b7bcc529f73d2a410d0097730204
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v6

  ...
```
#### `--raw`

Raw prints the original JSON bytes instead of the formatted output.
# Setting buildx as default builder in Docker 19.03+

Running `docker buildx install` sets up the `docker builder` command as an alias to `docker buildx`. This results in the ability to have `docker build` use the current buildx builder.

To remove this alias, you can run `docker buildx uninstall`.
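For example, after installing the alias, a plain `docker build` goes through the current buildx builder:

```shell
docker buildx install
docker build .          # now uses the current buildx builder
docker buildx uninstall # restore the default build command
```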
# Contributing
|
|||||||
1164
bake/bake.go
1164
bake/bake.go
File diff suppressed because it is too large
Load Diff
1334
bake/bake_test.go
1334
bake/bake_test.go
File diff suppressed because it is too large
Load Diff
321
bake/compose.go
321
bake/compose.go
@@ -1,66 +1,60 @@
|
|||||||
package bake
|
package bake
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"fmt"
|
|
||||||
"os"
|
"os"
|
||||||
"reflect"
|
"path/filepath"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/docker/cli/cli/compose/loader"
|
"github.com/compose-spec/compose-go/dotenv"
|
||||||
composetypes "github.com/docker/cli/cli/compose/types"
|
"github.com/compose-spec/compose-go/loader"
|
||||||
|
compose "github.com/compose-spec/compose-go/types"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
"gopkg.in/yaml.v3"
|
||||||
)
|
)
|
||||||
|
|
||||||
func parseCompose(dt []byte) (*composetypes.Config, error) {
|
func ParseComposeFiles(fs []File) (*Config, error) {
|
||||||
parsed, err := loader.ParseYAML([]byte(dt))
|
envs, err := composeEnv()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
return loader.Load(composetypes.ConfigDetails{
|
var cfgs []compose.ConfigFile
|
||||||
ConfigFiles: []composetypes.ConfigFile{
|
for _, f := range fs {
|
||||||
{
|
cfgs = append(cfgs, compose.ConfigFile{
|
||||||
Config: parsed,
|
Filename: f.Name,
|
||||||
},
|
Content: f.Data,
|
||||||
},
|
|
||||||
Environment: envMap(os.Environ()),
|
|
||||||
})
|
})
|
||||||
|
}
|
||||||
|
return ParseCompose(cfgs, envs)
|
||||||
}
|
}
|
||||||
|
|
||||||
func envMap(env []string) map[string]string {
|
func ParseCompose(cfgs []compose.ConfigFile, envs map[string]string) (*Config, error) {
|
||||||
result := make(map[string]string, len(env))
|
cfg, err := loader.Load(compose.ConfigDetails{
|
||||||
for _, s := range env {
|
ConfigFiles: cfgs,
|
||||||
kv := strings.SplitN(s, "=", 2)
|
Environment: envs,
|
||||||
if len(kv) != 2 {
|
}, func(options *loader.Options) {
|
||||||
continue
|
options.SkipNormalization = true
|
||||||
}
|
})
|
||||||
result[kv[0]] = kv[1]
|
|
||||||
}
|
|
||||||
return result
|
|
||||||
}
|
|
||||||
|
|
||||||
func ParseCompose(dt []byte) (*Config, error) {
|
|
||||||
cfg, err := parseCompose(dt)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
var c Config
|
var c Config
|
||||||
var zeroBuildConfig composetypes.BuildConfig
|
|
||||||
if len(cfg.Services) > 0 {
|
if len(cfg.Services) > 0 {
|
||||||
c.Group = map[string]Group{}
|
c.Groups = []*Group{}
|
||||||
c.Target = map[string]Target{}
|
c.Targets = []*Target{}
|
||||||
|
|
||||||
var g Group
|
g := &Group{Name: "default"}
|
||||||
|
|
||||||
for _, s := range cfg.Services {
|
for _, s := range cfg.Services {
|
||||||
|
if s.Build == nil {
|
||||||
if reflect.DeepEqual(s.Build, zeroBuildConfig) {
|
|
||||||
// if not make sure they're setting an image or it's invalid d-c.yml
|
|
||||||
if s.Image == "" {
|
|
||||||
return nil, fmt.Errorf("compose file invalid: service %s has neither an image nor a build context specified. At least one must be provided.", s.Name)
|
|
||||||
}
|
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
|
|
||||||
|
targetName := sanitizeTargetName(s.Name)
|
||||||
|
if err = validateTargetName(targetName); err != nil {
|
||||||
|
return nil, errors.Wrapf(err, "invalid service name %q", targetName)
|
||||||
|
}
|
||||||
|
|
||||||
var contextPathP *string
|
var contextPathP *string
|
||||||
if s.Build.Context != "" {
|
if s.Build.Context != "" {
|
||||||
contextPath := s.Build.Context
|
contextPath := s.Build.Context
|
||||||
@@ -71,39 +65,260 @@ func ParseCompose(dt []byte) (*Config, error) {
|
|||||||
dockerfilePath := s.Build.Dockerfile
|
dockerfilePath := s.Build.Dockerfile
|
||||||
dockerfilePathP = &dockerfilePath
|
dockerfilePathP = &dockerfilePath
|
||||||
}
|
}
|
||||||
g.Targets = append(g.Targets, s.Name)
|
|
||||||
t := Target{
|
var secrets []string
|
||||||
|
for _, bs := range s.Build.Secrets {
|
||||||
|
secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
secrets = append(secrets, secret)
|
||||||
|
}
|
||||||
|
|
||||||
|
// compose does not support nil values for labels
|
||||||
|
labels := map[string]*string{}
|
||||||
|
for k, v := range s.Build.Labels {
|
||||||
|
v := v
|
||||||
|
labels[k] = &v
|
||||||
|
}
|
||||||
|
|
||||||
|
g.Targets = append(g.Targets, targetName)
|
||||||
|
t := &Target{
|
||||||
|
Name: targetName,
|
||||||
Context: contextPathP,
|
Context: contextPathP,
|
||||||
Dockerfile: dockerfilePathP,
|
Dockerfile: dockerfilePathP,
|
||||||
Labels: s.Build.Labels,
|
Tags: s.Build.Tags,
|
||||||
Args: toMap(s.Build.Args),
|
Labels: labels,
|
||||||
|
Args: flatten(s.Build.Args.Resolve(func(val string) (string, bool) {
|
||||||
|
if val, ok := s.Environment[val]; ok && val != nil {
|
||||||
|
return *val, true
|
||||||
|
}
|
||||||
|
val, ok := cfg.Environment[val]
|
||||||
|
return val, ok
|
||||||
|
})),
|
||||||
CacheFrom: s.Build.CacheFrom,
|
CacheFrom: s.Build.CacheFrom,
|
||||||
// TODO: add platforms
|
CacheTo: s.Build.CacheTo,
|
||||||
|
NetworkMode: &s.Build.Network,
|
||||||
|
Secrets: secrets,
|
||||||
|
}
|
||||||
|
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
|
||||||
|
return nil, err
|
||||||
}
|
}
|
||||||
if s.Build.Target != "" {
|
if s.Build.Target != "" {
|
||||||
target := s.Build.Target
|
target := s.Build.Target
|
||||||
t.Target = &target
|
t.Target = &target
|
||||||
}
|
}
|
||||||
if s.Image != "" {
|
if len(t.Tags) == 0 && s.Image != "" {
|
||||||
t.Tags = []string{s.Image}
|
t.Tags = []string{s.Image}
|
||||||
}
|
}
|
||||||
c.Target[s.Name] = t
|
c.Targets = append(c.Targets, t)
|
||||||
}
|
}
|
||||||
c.Group["default"] = g
|
c.Groups = append(c.Groups, g)
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
return &c, nil
|
return &c, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func toMap(in composetypes.MappingWithEquals) map[string]string {
|
func validateComposeFile(dt []byte, fn string) (bool, error) {
|
||||||
m := map[string]string{}
|
envs, err := composeEnv()
|
||||||
for k, v := range in {
|
if err != nil {
|
||||||
if v != nil {
|
return true, err
|
||||||
m[k] = *v
|
|
||||||
} else {
|
|
||||||
m[k] = os.Getenv(k)
|
|
||||||
}
|
}
|
||||||
|
fnl := strings.ToLower(fn)
|
||||||
|
if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
|
||||||
|
return true, validateCompose(dt, envs)
|
||||||
}
|
}
|
||||||
return m
|
if strings.HasSuffix(fnl, ".json") || strings.HasSuffix(fnl, ".hcl") {
|
||||||
|
return false, nil
|
||||||
|
}
|
||||||
|
err = validateCompose(dt, envs)
|
||||||
|
return err == nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
func validateCompose(dt []byte, envs map[string]string) error {
|
||||||
|
_, err := loader.Load(compose.ConfigDetails{
|
||||||
|
ConfigFiles: []compose.ConfigFile{
|
||||||
|
{
|
||||||
|
Content: dt,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
Environment: envs,
|
||||||
|
}, func(options *loader.Options) {
|
||||||
|
options.SkipNormalization = true
|
||||||
|
// consistency is checked later in ParseCompose to ensure multiple
|
||||||
|
// compose files can be merged together
|
||||||
|
options.SkipConsistencyCheck = true
|
||||||
|
})
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
func composeEnv() (map[string]string, error) {
|
||||||
|
envs := sliceToMap(os.Environ())
|
||||||
|
if wd, err := os.Getwd(); err == nil {
|
||||||
|
envs, err = loadDotEnv(envs, wd)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return envs, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string, error) {
|
||||||
|
if curenv == nil {
|
||||||
|
curenv = make(map[string]string)
|
||||||
|
}
|
||||||
|
|
||||||
|
ef, err := filepath.Abs(filepath.Join(workingDir, ".env"))
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
if _, err = os.Stat(ef); os.IsNotExist(err) {
|
||||||
|
return curenv, nil
|
||||||
|
} else if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
dt, err := os.ReadFile(ef)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
envs, err := dotenv.UnmarshalBytes(dt)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
for k, v := range envs {
|
||||||
|
if _, set := curenv[k]; set {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
curenv[k] = v
|
||||||
|
}
|
||||||
|
|
||||||
|
return curenv, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func flatten(in compose.MappingWithEquals) map[string]*string {
|
||||||
|
if len(in) == 0 {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
out := map[string]*string{}
|
||||||
|
for k, v := range in {
|
||||||
|
if v == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
out[k] = v
|
||||||
|
}
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
// xbake Compose build extension provides fields not (yet) available in
|
||||||
|
// Compose build specification: https://github.com/compose-spec/compose-spec/blob/master/build.md
|
||||||
|
type xbake struct {
|
||||||
|
Tags stringArray `yaml:"tags,omitempty"`
|
||||||
|
CacheFrom stringArray `yaml:"cache-from,omitempty"`
|
||||||
|
CacheTo stringArray `yaml:"cache-to,omitempty"`
|
||||||
|
Secrets stringArray `yaml:"secret,omitempty"`
|
||||||
|
SSH stringArray `yaml:"ssh,omitempty"`
|
||||||
|
Platforms stringArray `yaml:"platforms,omitempty"`
|
||||||
|
Outputs stringArray `yaml:"output,omitempty"`
|
||||||
|
Pull *bool `yaml:"pull,omitempty"`
|
||||||
|
NoCache *bool `yaml:"no-cache,omitempty"`
|
||||||
|
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
|
||||||
|
Contexts stringMap `yaml:"contexts,omitempty"`
|
||||||
|
// don't forget to update documentation if you add a new field:
|
||||||
|
	// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
}

type stringMap map[string]string

type stringArray []string

func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var multi []string
	err := unmarshal(&multi)
	if err != nil {
		var single string
		if err := unmarshal(&single); err != nil {
			return err
		}
		*sa = strings.Fields(single)
	} else {
		*sa = multi
	}
	return nil
}

// composeExtTarget converts Compose build extension x-bake to bake Target
// https://github.com/compose-spec/compose-spec/blob/master/spec.md#extension
func (t *Target) composeExtTarget(exts map[string]interface{}) error {
	var xb xbake

	ext, ok := exts["x-bake"]
	if !ok || ext == nil {
		return nil
	}

	yb, _ := yaml.Marshal(ext)
	if err := yaml.Unmarshal(yb, &xb); err != nil {
		return err
	}

	if len(xb.Tags) > 0 {
		t.Tags = dedupSlice(append(t.Tags, xb.Tags...))
	}
	if len(xb.CacheFrom) > 0 {
		t.CacheFrom = dedupSlice(append(t.CacheFrom, xb.CacheFrom...))
	}
	if len(xb.CacheTo) > 0 {
		t.CacheTo = dedupSlice(append(t.CacheTo, xb.CacheTo...))
	}
	if len(xb.Secrets) > 0 {
		t.Secrets = dedupSlice(append(t.Secrets, xb.Secrets...))
	}
	if len(xb.SSH) > 0 {
		t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
	}
	if len(xb.Platforms) > 0 {
		t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
	}
	if len(xb.Outputs) > 0 {
		t.Outputs = dedupSlice(append(t.Outputs, xb.Outputs...))
	}
	if xb.Pull != nil {
		t.Pull = xb.Pull
	}
	if xb.NoCache != nil {
		t.NoCache = xb.NoCache
	}
	if len(xb.NoCacheFilter) > 0 {
		t.NoCacheFilter = dedupSlice(append(t.NoCacheFilter, xb.NoCacheFilter...))
	}
	if len(xb.Contexts) > 0 {
		t.Contexts = dedupMap(t.Contexts, xb.Contexts)
	}

	return nil
}

// composeToBuildkitSecret converts secret from compose format to buildkit's
// csv format.
func composeToBuildkitSecret(inp compose.ServiceSecretConfig, psecret compose.SecretConfig) (string, error) {
	if psecret.External.External {
		return "", errors.Errorf("unsupported external secret %s", psecret.Name)
	}

	var bkattrs []string
	if inp.Source != "" {
		bkattrs = append(bkattrs, "id="+inp.Source)
	}
	if psecret.File != "" {
		bkattrs = append(bkattrs, "src="+psecret.File)
	}
	if psecret.Environment != "" {
		bkattrs = append(bkattrs, "env="+psecret.Environment)
	}

	return strings.Join(bkattrs, ","), nil
}
@@ -1,16 +1,18 @@
 package bake
 
 import (
+	"os"
+	"path/filepath"
 	"sort"
 	"testing"
 
+	compose "github.com/compose-spec/compose-go/types"
+	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
 func TestParseCompose(t *testing.T) {
 	var dt = []byte(`
-version: "3"
-
 services:
   db:
     build: ./db
@@ -20,45 +22,70 @@ services:
     build:
       context: ./dir
      dockerfile: Dockerfile-alternate
+      network:
+        none
       args:
         buildno: 123
+      cache_from:
+        - type=local,src=path/to/cache
+      cache_to:
+        - type=local,dest=path/to/cache
+      secrets:
+        - token
+        - aws
+secrets:
+  token:
+    environment: ENV_TOKEN
+  aws:
+    file: /root/.aws/credentials
 `)
 
-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 
-	require.Equal(t, 1, len(c.Group))
-	sort.Strings(c.Group["default"].Targets)
-	require.Equal(t, []string{"db", "webapp"}, c.Group["default"].Targets)
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, "default", c.Groups[0].Name)
+	sort.Strings(c.Groups[0].Targets)
+	require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
 
-	require.Equal(t, 2, len(c.Target))
-	require.Equal(t, "./db", *c.Target["db"].Context)
+	require.Equal(t, 2, len(c.Targets))
+	sort.Slice(c.Targets, func(i, j int) bool {
+		return c.Targets[i].Name < c.Targets[j].Name
+	})
+	require.Equal(t, "db", c.Targets[0].Name)
+	require.Equal(t, "./db", *c.Targets[0].Context)
+	require.Equal(t, []string{"docker.io/tonistiigi/db"}, c.Targets[0].Tags)
 
-	require.Equal(t, "./dir", *c.Target["webapp"].Context)
-	require.Equal(t, "Dockerfile-alternate", *c.Target["webapp"].Dockerfile)
-	require.Equal(t, 1, len(c.Target["webapp"].Args))
-	require.Equal(t, "123", c.Target["webapp"].Args["buildno"])
+	require.Equal(t, "webapp", c.Targets[1].Name)
+	require.Equal(t, "./dir", *c.Targets[1].Context)
+	require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
+	require.Equal(t, 1, len(c.Targets[1].Args))
+	require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
+	require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
+	require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
+	require.Equal(t, "none", *c.Targets[1].NetworkMode)
+	require.Equal(t, []string{
+		"id=token,env=ENV_TOKEN",
+		"id=aws,src=/root/.aws/credentials",
+	}, c.Targets[1].Secrets)
 }
 
 func TestNoBuildOutOfTreeService(t *testing.T) {
 	var dt = []byte(`
-version: "3.7"
-
 services:
   external:
     image: "verycooldb:1337"
   webapp:
     build: ./db
 `)
-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
-	require.Equal(t, 1, len(c.Group))
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, 1, len(c.Targets))
 }
 
 func TestParseComposeTarget(t *testing.T) {
 	var dt = []byte(`
-version: "3.7"
-
 services:
   db:
     build:
@@ -70,17 +97,21 @@ services:
       target: webapp
 `)
 
-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
 
-	require.Equal(t, "db", *c.Target["db"].Target)
-	require.Equal(t, "webapp", *c.Target["webapp"].Target)
+	require.Equal(t, 2, len(c.Targets))
+	sort.Slice(c.Targets, func(i, j int) bool {
+		return c.Targets[i].Name < c.Targets[j].Name
+	})
+	require.Equal(t, "db", c.Targets[0].Name)
+	require.Equal(t, "db", *c.Targets[0].Target)
+	require.Equal(t, "webapp", c.Targets[1].Name)
+	require.Equal(t, "webapp", *c.Targets[1].Target)
 }
 
 func TestComposeBuildWithoutContext(t *testing.T) {
 	var dt = []byte(`
-version: "3.7"
-
 services:
   db:
     build:
@@ -91,27 +122,540 @@ services:
       target: webapp
 `)
 
-	c, err := ParseCompose(dt)
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
 	require.NoError(t, err)
-	require.Equal(t, "db", *c.Target["db"].Target)
-	require.Equal(t, "webapp", *c.Target["webapp"].Target)
+	require.Equal(t, 2, len(c.Targets))
+	sort.Slice(c.Targets, func(i, j int) bool {
+		return c.Targets[i].Name < c.Targets[j].Name
+	})
+	require.Equal(t, "db", c.Targets[0].Name)
+	require.Equal(t, "db", *c.Targets[0].Target)
+	require.Equal(t, "webapp", c.Targets[1].Name)
+	require.Equal(t, "webapp", *c.Targets[1].Target)
 }
 
-func TestBogusCompose(t *testing.T) {
+func TestBuildArgEnvCompose(t *testing.T) {
 	var dt = []byte(`
-version: "3.7"
+version: "3.8"
 
 services:
-  db:
-    labels:
-      - "foo"
-  webapp:
+  example:
+    image: example
+
     build:
       context: .
-      target: webapp
+      dockerfile: Dockerfile
+      args:
+        FOO:
+        BAR: $ZZZ_BAR
+        BRB: FOO
 `)
 
-	_, err := ParseCompose(dt)
-	require.Error(t, err)
-	require.Contains(t, err.Error(), "has neither an image nor a build context specified. At least one must be provided")
+	t.Setenv("FOO", "bar")
+	t.Setenv("BAR", "foo")
+	t.Setenv("ZZZ_BAR", "zzz_foo")
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, sliceToMap(os.Environ()))
+	require.NoError(t, err)
+	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["FOO"])
+	require.Equal(t, ptrstr("zzz_foo"), c.Targets[0].Args["BAR"])
+	require.Equal(t, ptrstr("FOO"), c.Targets[0].Args["BRB"])
+}
+
+func TestInconsistentComposeFile(t *testing.T) {
+	var dt = []byte(`
+services:
+  webapp:
+    entrypoint: echo 1
+`)
+
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.Error(t, err)
+}
+
+func TestAdvancedNetwork(t *testing.T) {
+	var dt = []byte(`
+services:
+  db:
+    networks:
+      - example.com
+    build:
+      context: ./db
+      target: db
+
+networks:
+  example.com:
+    name: example.com
+    driver: bridge
+    ipam:
+      config:
+        - subnet: 10.5.0.0/24
+          ip_range: 10.5.0.0/24
+          gateway: 10.5.0.254
+`)
+
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+}
+
+func TestTags(t *testing.T) {
+	var dt = []byte(`
+services:
+  example:
+    image: example
+    build:
+      context: .
+      dockerfile: Dockerfile
+      tags:
+        - foo
+        - bar
+`)
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+	require.Equal(t, []string{"foo", "bar"}, c.Targets[0].Tags)
+}
+
+func TestDependsOnList(t *testing.T) {
+	var dt = []byte(`
+version: "3.8"
+
+services:
+  example-container:
+    image: example/fails:latest
+    build:
+      context: .
+      dockerfile: Dockerfile
+    depends_on:
+      other-container:
+        condition: service_healthy
+    networks:
+      default:
+        aliases:
+          - integration-tests
+
+  other-container:
+    image: example/other:latest
+    healthcheck:
+      test: ["CMD", "echo", "success"]
+      retries: 5
+      interval: 5s
+      timeout: 10s
+      start_period: 5s
+
+networks:
+  default:
+    name: test-net
+`)
+
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+}
+
+func TestComposeExt(t *testing.T) {
+	var dt = []byte(`
+services:
+  addon:
+    image: ct-addon:bar
+    build:
+      context: .
+      dockerfile: ./Dockerfile
+      cache_from:
+        - user/app:cache
+      cache_to:
+        - user/app:cache
+      tags:
+        - ct-addon:baz
+      args:
+        CT_ECR: foo
+        CT_TAG: bar
+    x-bake:
+      contexts:
+        alpine: docker-image://alpine:3.13
+      tags:
+        - ct-addon:foo
+        - ct-addon:alp
+      platforms:
+        - linux/amd64
+        - linux/arm64
+      cache-from:
+        - type=local,src=path/to/cache
+      cache-to:
+        - type=local,dest=path/to/cache
+      pull: true
+
+  aws:
+    image: ct-fake-aws:bar
+    build:
+      dockerfile: ./aws.Dockerfile
+      args:
+        CT_ECR: foo
+        CT_TAG: bar
+    x-bake:
+      secret:
+        - id=mysecret,src=/local/secret
+        - id=mysecret2,src=/local/secret2
+      ssh: default
+      platforms: linux/arm64
+      output: type=docker
+      no-cache: true
+`)
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+	require.Equal(t, 2, len(c.Targets))
+	sort.Slice(c.Targets, func(i, j int) bool {
+		return c.Targets[i].Name < c.Targets[j].Name
+	})
+	require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
+	require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
+	require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
+	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
+	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
+	require.Equal(t, newBool(true), c.Targets[0].Pull)
+	require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
+	require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
+	require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, c.Targets[1].Secrets)
+	require.Equal(t, []string{"default"}, c.Targets[1].SSH)
+	require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
+	require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
+	require.Equal(t, newBool(true), c.Targets[1].NoCache)
+}
+
+func TestComposeExtDedup(t *testing.T) {
+	var dt = []byte(`
+services:
+  webapp:
+    image: app:bar
+    build:
+      cache_from:
+        - user/app:cache
+      cache_to:
+        - user/app:cache
+      tags:
+        - ct-addon:foo
+    x-bake:
+      tags:
+        - ct-addon:foo
+        - ct-addon:baz
+      cache-from:
+        - user/app:cache
+        - type=local,src=path/to/cache
+      cache-to:
+        - type=local,dest=path/to/cache
+`)
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+	require.Equal(t, 1, len(c.Targets))
+	require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
+	require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
+	require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
+}
+
+func TestEnv(t *testing.T) {
+	envf, err := os.CreateTemp("", "env")
+	require.NoError(t, err)
+	defer os.Remove(envf.Name())
+
+	_, err = envf.WriteString("FOO=bsdf -csdf\n")
+	require.NoError(t, err)
+
+	var dt = []byte(`
+services:
+  scratch:
+    build:
+      context: .
+      args:
+        CT_ECR: foo
+        FOO:
+        NODE_ENV:
+    environment:
+      - NODE_ENV=test
+      - AWS_ACCESS_KEY_ID=dummy
+      - AWS_SECRET_ACCESS_KEY=dummy
+    env_file:
+      - ` + envf.Name() + `
+`)
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "FOO": ptrstr("bsdf -csdf"), "NODE_ENV": ptrstr("test")}, c.Targets[0].Args)
+}
+
+func TestDotEnv(t *testing.T) {
+	tmpdir := t.TempDir()
+
+	err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte("FOO=bar"), 0644)
+	require.NoError(t, err)
+
+	var dt = []byte(`
+services:
+  scratch:
+    build:
+      context: .
+      args:
+        FOO:
+`)
+
+	chdir(t, tmpdir)
+	c, err := ParseComposeFiles([]File{{
+		Name: "docker-compose.yml",
+		Data: dt,
+	}})
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{"FOO": ptrstr("bar")}, c.Targets[0].Args)
+}
+
+func TestPorts(t *testing.T) {
+	var dt = []byte(`
+services:
+  foo:
+    build:
+      context: .
+    ports:
+      - 3306:3306
+  bar:
+    build:
+      context: .
+    ports:
+      - mode: ingress
+        target: 3306
+        published: "3306"
+        protocol: tcp
+`)
+	_, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+}
+
+func newBool(val bool) *bool {
+	b := val
+	return &b
+}
+
+func TestServiceName(t *testing.T) {
+	cases := []struct {
+		svc     string
+		wantErr bool
+	}{
+		{
+			svc:     "a",
+			wantErr: false,
+		},
+		{
+			svc:     "abc",
+			wantErr: false,
+		},
+		{
+			svc:     "a.b",
+			wantErr: false,
+		},
+		{
+			svc:     "_a",
+			wantErr: false,
+		},
+		{
+			svc:     "a_b",
+			wantErr: false,
+		},
+		{
+			svc:     "AbC",
+			wantErr: false,
+		},
+		{
+			svc:     "AbC-0123",
+			wantErr: false,
+		},
+	}
+	for _, tt := range cases {
+		tt := tt
+		t.Run(tt.svc, func(t *testing.T) {
+			_, err := ParseCompose([]compose.ConfigFile{{Content: []byte(`
+services:
+  ` + tt.svc + `:
+    build:
+      context: .
+`)}}, nil)
+			if tt.wantErr {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
+		})
+	}
+}
+
+func TestValidateComposeSecret(t *testing.T) {
+	cases := []struct {
+		name    string
+		dt      []byte
+		wantErr bool
+	}{
+		{
+			name: "secret set by file",
+			dt: []byte(`
+secrets:
+  foo:
+    file: .secret
+`),
+			wantErr: false,
+		},
+		{
+			name: "secret set by environment",
+			dt: []byte(`
+secrets:
+  foo:
+    environment: TOKEN
+`),
+			wantErr: false,
+		},
+		{
+			name: "external secret",
+			dt: []byte(`
+secrets:
+  foo:
+    external: true
+`),
+			wantErr: false,
+		},
+		{
+			name: "unset secret",
+			dt: []byte(`
+secrets:
+  foo: {}
+`),
+			wantErr: true,
+		},
+		{
+			name: "undefined secret",
+			dt: []byte(`
+services:
+  foo:
+    build:
+      secrets:
+        - token
+`),
+			wantErr: true,
+		},
+	}
+	for _, tt := range cases {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			_, err := ParseCompose([]compose.ConfigFile{{Content: tt.dt}}, nil)
+			if tt.wantErr {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
+		})
+	}
+}
+
+func TestValidateComposeFile(t *testing.T) {
+	cases := []struct {
+		name      string
+		fn        string
+		dt        []byte
+		isCompose bool
+		wantErr   bool
+	}{
+		{
+			name: "empty service",
+			fn:   "docker-compose.yml",
+			dt: []byte(`
+services:
+  foo:
+`),
+			isCompose: true,
+			wantErr:   true,
+		},
+		{
+			name: "build",
+			fn:   "docker-compose.yml",
+			dt: []byte(`
+services:
+  foo:
+    build: .
+`),
+			isCompose: true,
+			wantErr:   false,
+		},
+		{
+			name: "image",
+			fn:   "docker-compose.yml",
+			dt: []byte(`
+services:
+  simple:
+    image: nginx
+`),
+			isCompose: true,
+			wantErr:   false,
+		},
+		{
+			name: "unknown ext",
+			fn:   "docker-compose.foo",
+			dt: []byte(`
+services:
+  simple:
+    image: nginx
+`),
+			isCompose: true,
+			wantErr:   false,
+		},
+		{
+			name: "hcl",
+			fn:   "docker-bake.hcl",
+			dt: []byte(`
+target "default" {
+  dockerfile = "test"
+}
+`),
+			isCompose: false,
+			wantErr:   false,
+		},
+	}
+	for _, tt := range cases {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			isCompose, err := validateComposeFile(tt.dt, tt.fn)
+			assert.Equal(t, tt.isCompose, isCompose)
+			if tt.wantErr {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
+		})
+	}
+}
+
+func TestComposeNullArgs(t *testing.T) {
+	var dt = []byte(`
+services:
+  scratch:
+    build:
+      context: .
+      args:
+        FOO: null
+        bar: "baz"
+`)
+
+	c, err := ParseCompose([]compose.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+	require.Equal(t, map[string]*string{"bar": ptrstr("baz")}, c.Targets[0].Args)
+}
+
+// chdir changes the current working directory to the named directory,
+// and then restores the original working directory at the end of the test.
+func chdir(t *testing.T, dir string) {
+	olddir, err := os.Getwd()
+	if err != nil {
+		t.Fatalf("chdir: %v", err)
+	}
+	if err := os.Chdir(dir); err != nil {
+		t.Fatalf("chdir %s: %v", dir, err)
+	}
+	t.Cleanup(func() {
+		if err := os.Chdir(olddir); err != nil {
+			t.Errorf("chdir to original working directory %s: %v", olddir, err)
+			os.Exit(1)
+		}
+	})
 }
bake/hcl.go
@@ -1,11 +1,78 @@
 package bake
 
-import "github.com/hashicorp/hcl"
-
-func ParseHCL(dt []byte) (*Config, error) {
-	var c Config
-	if err := hcl.Unmarshal(dt, &c); err != nil {
-		return nil, err
-	}
-	return &c, nil
+import (
+	"strings"
+
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hclparse"
+	"github.com/moby/buildkit/solver/errdefs"
+	"github.com/moby/buildkit/solver/pb"
+)
+
+func ParseHCLFile(dt []byte, fn string) (*hcl.File, bool, error) {
+	var err error
+	if strings.HasSuffix(fn, ".json") {
+		f, diags := hclparse.NewParser().ParseJSON(dt, fn)
+		if diags.HasErrors() {
+			err = diags
+		}
+		return f, true, err
+	}
+	if strings.HasSuffix(fn, ".hcl") {
+		f, diags := hclparse.NewParser().ParseHCL(dt, fn)
+		if diags.HasErrors() {
+			err = diags
+		}
+		return f, true, err
+	}
+	f, diags := hclparse.NewParser().ParseHCL(dt, fn+".hcl")
+	if diags.HasErrors() {
+		f, diags2 := hclparse.NewParser().ParseJSON(dt, fn+".json")
+		if !diags2.HasErrors() {
+			return f, true, nil
+		}
+		return nil, false, diags
+	}
+	return f, true, nil
+}
+
+func formatHCLError(err error, files []File) error {
+	if err == nil {
+		return nil
+	}
+	diags, ok := err.(hcl.Diagnostics)
+	if !ok {
+		return err
+	}
+	for _, d := range diags {
+		if d.Severity != hcl.DiagError {
+			continue
+		}
+		if d.Subject != nil {
+			var dt []byte
+			for _, f := range files {
+				if d.Subject.Filename == f.Name {
+					dt = f.Data
+					break
+				}
+			}
+			src := errdefs.Source{
+				Info: &pb.SourceInfo{
+					Filename: d.Subject.Filename,
+					Data:     dt,
+				},
+				Ranges: []*pb.Range{toErrRange(d.Subject)},
+			}
+			err = errdefs.WithSource(err, src)
+			break
+		}
+	}
+	return err
+}
+
+func toErrRange(in *hcl.Range) *pb.Range {
+	return &pb.Range{
+		Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
+		End:   pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
+	}
 }
|||||||
920
bake/hcl_test.go
920
bake/hcl_test.go
@@ -1,13 +1,15 @@
|
|||||||
package bake
|
package bake
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"reflect"
|
||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
)
|
)
|
||||||
|
|
||||||
func TestParseHCL(t *testing.T) {
|
func TestHCLBasic(t *testing.T) {
|
||||||
var dt = []byte(`
|
t.Parallel()
|
||||||
|
dt := []byte(`
|
||||||
group "default" {
|
group "default" {
|
||||||
targets = ["db", "webapp"]
|
targets = ["db", "webapp"]
|
||||||
}
|
}
|
||||||
@@ -40,18 +42,914 @@ func TestParseHCL(t *testing.T) {
|
|||||||
}
|
}
|
||||||
`)
|
`)
|
||||||
|
|
||||||
c, err := ParseHCL(dt)
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, 1, len(c.Groups))
|
||||||
|
require.Equal(t, "default", c.Groups[0].Name)
|
||||||
|
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
|
||||||
|
|
||||||
|
require.Equal(t, 4, len(c.Targets))
|
||||||
|
require.Equal(t, c.Targets[0].Name, "db")
|
||||||
|
require.Equal(t, "./db", *c.Targets[0].Context)
|
||||||
|
|
||||||
|
require.Equal(t, c.Targets[1].Name, "webapp")
|
||||||
|
require.Equal(t, 1, len(c.Targets[1].Args))
|
||||||
|
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
|
||||||
|
|
||||||
|
require.Equal(t, c.Targets[2].Name, "cross")
|
||||||
|
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
||||||
|
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
|
||||||
|
|
||||||
|
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
||||||
|
require.Equal(t, 1, len(c.Targets[3].Args))
|
||||||
|
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLBasicInJSON(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": ["db", "webapp"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"db": {
|
||||||
|
"context": "./db",
|
||||||
|
"tags": ["docker.io/tonistiigi/db"]
|
||||||
|
},
|
||||||
|
"webapp": {
|
||||||
|
"context": "./dir",
|
||||||
|
"dockerfile": "Dockerfile-alternate",
|
||||||
|
"args": {
|
||||||
|
"buildno": "123"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cross": {
|
||||||
|
"platforms": [
|
||||||
|
"linux/amd64",
|
||||||
|
"linux/arm64"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"webapp-plus": {
|
||||||
|
"inherits": ["webapp", "cross"],
|
||||||
|
"args": {
|
||||||
|
"IAMCROSS": "true"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.json")
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
require.Equal(t, 1, len(c.Group))
|
require.Equal(t, 1, len(c.Groups))
|
||||||
require.Equal(t, []string{"db", "webapp"}, c.Group["default"].Targets)
|
require.Equal(t, "default", c.Groups[0].Name)
|
||||||
|
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
|
||||||
|
|
||||||
require.Equal(t, 4, len(c.Target))
|
require.Equal(t, 4, len(c.Targets))
|
||||||
require.Equal(t, "./db", *c.Target["db"].Context)
|
require.Equal(t, c.Targets[0].Name, "db")
|
||||||
|
require.Equal(t, "./db", *c.Targets[0].Context)
|
||||||
|
|
||||||
require.Equal(t, 1, len(c.Target["webapp"].Args))
|
require.Equal(t, c.Targets[1].Name, "webapp")
|
||||||
require.Equal(t, "123", c.Target["webapp"].Args["buildno"])
|
require.Equal(t, 1, len(c.Targets[1].Args))
|
||||||
|
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
|
||||||
|
|
||||||
require.Equal(t, 2, len(c.Target["cross"].Platforms))
|
require.Equal(t, c.Targets[2].Name, "cross")
|
||||||
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Target["cross"].Platforms)
|
require.Equal(t, 2, len(c.Targets[2].Platforms))
|
||||||
|
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
|
||||||
|
|
||||||
|
require.Equal(t, c.Targets[3].Name, "webapp-plus")
|
||||||
|
require.Equal(t, 1, len(c.Targets[3].Args))
|
||||||
|
require.Equal(t, map[string]*string{"IAMCROSS": ptrstr("true")}, c.Targets[3].Args)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLWithFunctions(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
group "default" {
|
||||||
|
targets = ["webapp"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
args = {
|
||||||
|
buildno = "${add(123, 1)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Groups))
|
||||||
|
require.Equal(t, "default", c.Groups[0].Name)
|
||||||
|
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Targets))
|
||||||
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
|
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLWithUserDefinedFunctions(t *testing.T) {
|
||||||
|
dt := []byte(`
|
||||||
|
function "increment" {
|
||||||
|
params = [number]
|
||||||
|
result = number + 1
|
||||||
|
}
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = ["webapp"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
args = {
|
||||||
|
buildno = "${increment(123)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
`)
|
||||||
|
|
||||||
|
c, err := ParseFile(dt, "docker-bake.hcl")
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Groups))
|
||||||
|
require.Equal(t, "default", c.Groups[0].Name)
|
||||||
|
require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)
|
||||||
|
|
||||||
|
require.Equal(t, 1, len(c.Targets))
|
||||||
|
require.Equal(t, c.Targets[0].Name, "webapp")
|
||||||
|
require.Equal(t, ptrstr("124"), c.Targets[0].Args["buildno"])
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestHCLWithVariables(t *testing.T) {
	dt := []byte(`
		variable "BUILD_NUMBER" {
			default = "123"
		}

		group "default" {
			targets = ["webapp"]
		}

		target "webapp" {
			args = {
				buildno = "${BUILD_NUMBER}"
			}
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Groups))
	require.Equal(t, "default", c.Groups[0].Name)
	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "webapp")
	require.Equal(t, ptrstr("123"), c.Targets[0].Args["buildno"])

	t.Setenv("BUILD_NUMBER", "456")

	c, err = ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Groups))
	require.Equal(t, "default", c.Groups[0].Name)
	require.Equal(t, []string{"webapp"}, c.Groups[0].Targets)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "webapp")
	require.Equal(t, ptrstr("456"), c.Targets[0].Args["buildno"])
}

func TestHCLWithVariablesInFunctions(t *testing.T) {
	dt := []byte(`
		variable "REPO" {
			default = "user/repo"
		}
		function "tag" {
			params = [tag]
			result = ["${REPO}:${tag}"]
		}

		target "webapp" {
			tags = tag("v1")
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "webapp")
	require.Equal(t, []string{"user/repo:v1"}, c.Targets[0].Tags)

	t.Setenv("REPO", "docker/buildx")

	c, err = ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "webapp")
	require.Equal(t, []string{"docker/buildx:v1"}, c.Targets[0].Tags)
}

func TestHCLMultiFileSharedVariables(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = "abc"
		}
		target "app" {
			args = {
				v1 = "pre-${FOO}"
			}
		}
		`)
	dt2 := []byte(`
		target "app" {
			args = {
				v2 = "${FOO}-post"
			}
		}
		`)

	c, err := ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)
	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-abc"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("abc-post"), c.Targets[0].Args["v2"])

	t.Setenv("FOO", "def")

	c, err = ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("def-post"), c.Targets[0].Args["v2"])
}

func TestHCLVarsWithVars(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = upper("${BASE}def")
		}
		variable "BAR" {
			default = "-${FOO}-"
		}
		target "app" {
			args = {
				v1 = "pre-${BAR}"
			}
		}
		`)
	dt2 := []byte(`
		variable "BASE" {
			default = "abc"
		}
		target "app" {
			args = {
				v2 = "${FOO}-post"
			}
		}
		`)

	c, err := ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)
	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre--ABCDEF-"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("ABCDEF-post"), c.Targets[0].Args["v2"])

	t.Setenv("BASE", "new")

	c, err = ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre--NEWDEF-"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("NEWDEF-post"), c.Targets[0].Args["v2"])
}
func TestHCLTypedVariables(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = 3
		}
		variable "IS_FOO" {
			default = true
		}
		target "app" {
			args = {
				v1 = FOO > 5 ? "higher" : "lower"
				v2 = IS_FOO ? "yes" : "no"
			}
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("lower"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("yes"), c.Targets[0].Args["v2"])

	t.Setenv("FOO", "5.1")
	t.Setenv("IS_FOO", "0")

	c, err = ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("higher"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("no"), c.Targets[0].Args["v2"])

	t.Setenv("FOO", "NaN")
	_, err = ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
	require.Contains(t, err.Error(), "failed to parse FOO as number")

	t.Setenv("FOO", "0")
	t.Setenv("IS_FOO", "maybe")

	_, err = ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
	require.Contains(t, err.Error(), "failed to parse IS_FOO as bool")
}

func TestHCLNullVariables(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = null
		}
		target "default" {
			args = {
				foo = FOO
			}
		}`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)
	require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])

	t.Setenv("FOO", "bar")
	c, err = ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)
	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
}

func TestJSONNullVariables(t *testing.T) {
	dt := []byte(`{
		"variable": {
			"FOO": {
				"default": null
			}
		},
		"target": {
			"default": {
				"args": {
					"foo": "${FOO}"
				}
			}
		}
	}`)

	c, err := ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)
	require.Equal(t, ptrstr(nil), c.Targets[0].Args["foo"])

	t.Setenv("FOO", "bar")
	c, err = ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)
	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
}

func TestHCLVariableCycle(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = BAR
		}
		variable "FOO2" {
			default = FOO
		}
		variable "BAR" {
			default = FOO
		}
		target "app" {}
		`)

	_, err := ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
	require.Contains(t, err.Error(), "variable cycle not allowed")
}
func TestHCLAttrs(t *testing.T) {
	dt := []byte(`
		FOO="abc"
		BAR="attr-${FOO}def"
		target "app" {
			args = {
				"v1": BAR
			}
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])

	// env does not apply if no variable
	t.Setenv("FOO", "bar")
	c, err = ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("attr-abcdef"), c.Targets[0].Args["v1"])
	// attr-multifile
}

func TestHCLTargetAttrs(t *testing.T) {
	dt := []byte(`
		target "foo" {
			dockerfile = "xxx"
			context = target.bar.context
			target = target.foo.dockerfile
		}

		target "bar" {
			dockerfile = target.foo.dockerfile
			context = "yyy"
			target = target.bar.context
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 2, len(c.Targets))
	require.Equal(t, "foo", c.Targets[0].Name)
	require.Equal(t, "bar", c.Targets[1].Name)

	require.Equal(t, "xxx", *c.Targets[0].Dockerfile)
	require.Equal(t, "yyy", *c.Targets[0].Context)
	require.Equal(t, "xxx", *c.Targets[0].Target)

	require.Equal(t, "xxx", *c.Targets[1].Dockerfile)
	require.Equal(t, "yyy", *c.Targets[1].Context)
	require.Equal(t, "yyy", *c.Targets[1].Target)
}

func TestHCLTargetGlobal(t *testing.T) {
	dt := []byte(`
		target "foo" {
			dockerfile = "x"
		}
		x = target.foo.dockerfile
		y = x
		target "bar" {
			dockerfile = y
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 2, len(c.Targets))
	require.Equal(t, "foo", c.Targets[0].Name)
	require.Equal(t, "bar", c.Targets[1].Name)

	require.Equal(t, "x", *c.Targets[0].Dockerfile)
	require.Equal(t, "x", *c.Targets[1].Dockerfile)
}

func TestHCLTargetAttrName(t *testing.T) {
	dt := []byte(`
		target "foo" {
			dockerfile = target.foo.name
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, "foo", c.Targets[0].Name)
	require.Equal(t, "foo", *c.Targets[0].Dockerfile)
}

func TestHCLTargetAttrEmptyChain(t *testing.T) {
	dt := []byte(`
		target "foo" {
			# dockerfile = Dockerfile
			context = target.foo.dockerfile
			target = target.foo.context
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, "foo", c.Targets[0].Name)
	require.Nil(t, c.Targets[0].Dockerfile)
	require.Nil(t, c.Targets[0].Context)
	require.Nil(t, c.Targets[0].Target)
}

func TestHCLAttrsCustomType(t *testing.T) {
	dt := []byte(`
		platforms=["linux/arm64", "linux/amd64"]
		target "app" {
			platforms = platforms
			args = {
				"v1": platforms[0]
			}
		}
		`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, []string{"linux/arm64", "linux/amd64"}, c.Targets[0].Platforms)
	require.Equal(t, ptrstr("linux/arm64"), c.Targets[0].Args["v1"])
}
func TestHCLMultiFileAttrs(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = "abc"
		}
		target "app" {
			args = {
				v1 = "pre-${FOO}"
			}
		}
		`)
	dt2 := []byte(`
		FOO="def"
		`)

	c, err := ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)
	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-def"), c.Targets[0].Args["v1"])

	t.Setenv("FOO", "ghi")

	c, err = ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.hcl"},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-ghi"), c.Targets[0].Args["v1"])
}

func TestJSONAttributes(t *testing.T) {
	dt := []byte(`{"FOO": "abc", "variable": {"BAR": {"default": "def"}}, "target": { "app": { "args": {"v1": "pre-${FOO}-${BAR}"}} } }`)

	c, err := ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-abc-def"), c.Targets[0].Args["v1"])
}

func TestJSONFunctions(t *testing.T) {
	dt := []byte(`{
	"FOO": "abc",
	"function": {
		"myfunc": {
			"params": ["inp"],
			"result": "<${upper(inp)}-${FOO}>"
		}
	},
	"target": {
		"app": {
			"args": {
				"v1": "pre-${myfunc(\"foo\")}"
			}
		}
	}}`)

	c, err := ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("pre-<FOO-abc>"), c.Targets[0].Args["v1"])
}

func TestJSONInvalidFunctions(t *testing.T) {
	dt := []byte(`{
	"target": {
		"app": {
			"args": {
				"v1": "myfunc(\"foo\")"
			}
		}
	}}`)

	c, err := ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr(`myfunc("foo")`), c.Targets[0].Args["v1"])
}

func TestHCLFunctionInAttr(t *testing.T) {
	dt := []byte(`
	function "brace" {
		params = [inp]
		result = "[${inp}]"
	}
	function "myupper" {
		params = [val]
		result = "${upper(val)} <> ${brace(v2)}"
	}

	v1=myupper("foo")
	v2=lower("BAZ")
	target "app" {
		args = {
			"v1": v1
		}
	}
	`)

	c, err := ParseFile(dt, "docker-bake.hcl")
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("FOO <> [baz]"), c.Targets[0].Args["v1"])
}

func TestHCLCombineCompose(t *testing.T) {
	dt := []byte(`
		target "app" {
			context = "dir"
			args = {
				v1 = "foo"
			}
		}
		`)
	dt2 := []byte(`
version: "3"

services:
  app:
    build:
      dockerfile: Dockerfile-alternate
      args:
        v2: "bar"
`)

	c, err := ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
		{Data: dt2, Name: "c2.yml"},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, ptrstr("foo"), c.Targets[0].Args["v1"])
	require.Equal(t, ptrstr("bar"), c.Targets[0].Args["v2"])
	require.Equal(t, "dir", *c.Targets[0].Context)
	require.Equal(t, "Dockerfile-alternate", *c.Targets[0].Dockerfile)
}
func TestHCLBuiltinVars(t *testing.T) {
	dt := []byte(`
		target "app" {
			context = BAKE_CMD_CONTEXT
			dockerfile = "test"
		}
		`)

	c, err := ParseFiles([]File{
		{Data: dt, Name: "c1.hcl"},
	}, map[string]string{
		"BAKE_CMD_CONTEXT": "foo",
	})
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Targets))
	require.Equal(t, c.Targets[0].Name, "app")
	require.Equal(t, "foo", *c.Targets[0].Context)
	require.Equal(t, "test", *c.Targets[0].Dockerfile)
}

func TestCombineHCLAndJSONTargets(t *testing.T) {
	c, err := ParseFiles([]File{
		{
			Name: "docker-bake.hcl",
			Data: []byte(`
group "default" {
  targets = ["a"]
}

target "metadata-a" {}
target "metadata-b" {}

target "a" {
  inherits = ["metadata-a"]
  context = "."
  target = "a"
}

target "b" {
  inherits = ["metadata-b"]
  context = "."
  target = "b"
}`),
		},
		{
			Name: "metadata-a.json",
			Data: []byte(`
{
  "target": [{
    "metadata-a": [{
      "tags": [
        "app/a:1.0.0",
        "app/a:latest"
      ]
    }]
  }]
}`),
		},
		{
			Name: "metadata-b.json",
			Data: []byte(`
{
  "target": [{
    "metadata-b": [{
      "tags": [
        "app/b:1.0.0",
        "app/b:latest"
      ]
    }]
  }]
}`),
		},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Groups))
	require.Equal(t, "default", c.Groups[0].Name)
	require.Equal(t, []string{"a"}, c.Groups[0].Targets)

	require.Equal(t, 4, len(c.Targets))

	require.Equal(t, c.Targets[0].Name, "metadata-a")
	require.Equal(t, []string{"app/a:1.0.0", "app/a:latest"}, c.Targets[0].Tags)

	require.Equal(t, c.Targets[1].Name, "metadata-b")
	require.Equal(t, []string{"app/b:1.0.0", "app/b:latest"}, c.Targets[1].Tags)

	require.Equal(t, c.Targets[2].Name, "a")
	require.Equal(t, ".", *c.Targets[2].Context)
	require.Equal(t, "a", *c.Targets[2].Target)

	require.Equal(t, c.Targets[3].Name, "b")
	require.Equal(t, ".", *c.Targets[3].Context)
	require.Equal(t, "b", *c.Targets[3].Target)
}

func TestCombineHCLAndJSONVars(t *testing.T) {
	c, err := ParseFiles([]File{
		{
			Name: "docker-bake.hcl",
			Data: []byte(`
variable "ABC" {
  default = "foo"
}
variable "DEF" {
  default = ""
}
group "default" {
  targets = ["one"]
}
target "one" {
  args = {
    a = "pre-${ABC}"
  }
}
target "two" {
  args = {
    b = "pre-${DEF}"
  }
}`),
		},
		{
			Name: "foo.json",
			Data: []byte(`{"variable": {"DEF": {"default": "bar"}}, "target": { "one": { "args": {"a": "pre-${ABC}-${DEF}"}} } }`),
		},
		{
			Name: "bar.json",
			Data: []byte(`{"ABC": "ghi", "DEF": "jkl"}`),
		},
	}, nil)
	require.NoError(t, err)

	require.Equal(t, 1, len(c.Groups))
	require.Equal(t, "default", c.Groups[0].Name)
	require.Equal(t, []string{"one"}, c.Groups[0].Targets)

	require.Equal(t, 2, len(c.Targets))

	require.Equal(t, c.Targets[0].Name, "one")
	require.Equal(t, map[string]*string{"a": ptrstr("pre-ghi-jkl")}, c.Targets[0].Args)

	require.Equal(t, c.Targets[1].Name, "two")
	require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
}

func TestEmptyVariableJSON(t *testing.T) {
	dt := []byte(`{
	  "variable": {
	    "VAR": {}
	  }
	}`)
	_, err := ParseFile(dt, "docker-bake.json")
	require.NoError(t, err)
}

func TestFunctionNoParams(t *testing.T) {
	dt := []byte(`
		function "foo" {
			result = "bar"
		}
		target "foo_target" {
			args = {
				test = foo()
			}
		}
		`)

	_, err := ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
}

func TestFunctionNoResult(t *testing.T) {
	dt := []byte(`
		function "foo" {
			params = ["a"]
		}
		`)

	_, err := ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
}

func TestVarUnsupportedType(t *testing.T) {
	dt := []byte(`
		variable "FOO" {
			default = []
		}
		target "default" {}`)

	t.Setenv("FOO", "bar")
	_, err := ParseFile(dt, "docker-bake.hcl")
	require.Error(t, err)
}

func ptrstr(s interface{}) *string {
	var n *string = nil
	if reflect.ValueOf(s).Kind() == reflect.String {
		ss := s.(string)
		n = &ss
	}
	return n
}
103 bake/hclparser/body.go Normal file
@@ -0,0 +1,103 @@
package hclparser

import (
	"github.com/hashicorp/hcl/v2"
)

type filterBody struct {
	body    hcl.Body
	schema  *hcl.BodySchema
	exclude bool
}

func FilterIncludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
	return &filterBody{
		body:   body,
		schema: schema,
	}
}

func FilterExcludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
	return &filterBody{
		body:    body,
		schema:  schema,
		exclude: true,
	}
}

func (b *filterBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
	if b.exclude {
		schema = subtractSchemas(schema, b.schema)
	} else {
		schema = intersectSchemas(schema, b.schema)
	}
	content, _, diag := b.body.PartialContent(schema)
	return content, diag
}

func (b *filterBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
	if b.exclude {
		schema = subtractSchemas(schema, b.schema)
	} else {
		schema = intersectSchemas(schema, b.schema)
	}
	return b.body.PartialContent(schema)
}

func (b *filterBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
	return b.body.JustAttributes()
}

func (b *filterBody) MissingItemRange() hcl.Range {
	return b.body.MissingItemRange()
}

func intersectSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
	result := &hcl.BodySchema{}
	for _, blockA := range a.Blocks {
		for _, blockB := range b.Blocks {
			if blockA.Type == blockB.Type {
				result.Blocks = append(result.Blocks, blockA)
				break
			}
		}
	}
	for _, attrA := range a.Attributes {
		for _, attrB := range b.Attributes {
			if attrA.Name == attrB.Name {
				result.Attributes = append(result.Attributes, attrA)
				break
			}
		}
	}
	return result
}

func subtractSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
	result := &hcl.BodySchema{}
	for _, blockA := range a.Blocks {
		found := false
		for _, blockB := range b.Blocks {
			if blockA.Type == blockB.Type {
				found = true
				break
			}
		}
		if !found {
			result.Blocks = append(result.Blocks, blockA)
		}
	}
	for _, attrA := range a.Attributes {
		found := false
		for _, attrB := range b.Attributes {
			if attrA.Name == attrB.Name {
				found = true
				break
			}
		}
		if !found {
			result.Attributes = append(result.Attributes, attrA)
		}
	}
	return result
}
145 bake/hclparser/expr.go Normal file
@@ -0,0 +1,145 @@
package hclparser

import (
	"reflect"
	"unsafe"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/pkg/errors"
)

func funcCalls(exp hcl.Expression) ([]string, hcl.Diagnostics) {
	node, ok := exp.(hclsyntax.Node)
	if !ok {
		fns, err := jsonFuncCallsRecursive(exp)
		if err != nil {
			return nil, wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
		}
		return fns, nil
	}

	var funcnames []string
	hcldiags := hclsyntax.VisitAll(node, func(n hclsyntax.Node) hcl.Diagnostics {
		if fe, ok := n.(*hclsyntax.FunctionCallExpr); ok {
			funcnames = append(funcnames, fe.Name)
		}
		return nil
	})
	if hcldiags.HasErrors() {
		return nil, hcldiags
	}
	return funcnames, nil
}

func jsonFuncCallsRecursive(exp hcl.Expression) ([]string, error) {
	je, ok := exp.(jsonExp)
	if !ok {
		return nil, errors.Errorf("invalid expression type %T", exp)
	}
	m := map[string]struct{}{}
	for _, e := range elementExpressions(je, exp) {
		if err := appendJSONFuncCalls(e, m); err != nil {
			return nil, err
		}
	}
	arr := make([]string, 0, len(m))
	for n := range m {
		arr = append(arr, n)
	}
	return arr, nil
}

func appendJSONFuncCalls(exp hcl.Expression, m map[string]struct{}) error {
	v := reflect.ValueOf(exp)
	if v.Kind() != reflect.Ptr || v.IsNil() {
		return errors.Errorf("invalid json expression kind %T %v", exp, v.Kind())
	}
	src := v.Elem().FieldByName("src")
	if src.IsZero() {
		return errors.Errorf("%v has no property src", v.Elem().Type())
	}
	if src.Kind() != reflect.Interface {
		return errors.Errorf("%v src is not interface: %v", src.Type(), src.Kind())
	}
	src = src.Elem()
	if src.IsNil() {
		return nil
	}
	if src.Kind() == reflect.Ptr {
		src = src.Elem()
	}
	if src.Kind() != reflect.Struct {
		return errors.Errorf("%v is not struct: %v", src.Type(), src.Kind())
	}

	// hcl/v2/json/ast#stringVal
	val := src.FieldByName("Value")
	if !val.IsValid() || val.IsZero() {
		return nil
	}
	rng := src.FieldByName("SrcRange")
	if rng.IsZero() {
		return nil
	}
	var stringVal struct {
		Value    string
		SrcRange hcl.Range
	}

	if !val.Type().AssignableTo(reflect.ValueOf(stringVal.Value).Type()) {
		return nil
	}
	if !rng.Type().AssignableTo(reflect.ValueOf(stringVal.SrcRange).Type()) {
		return nil
	}
	// reflect.Set does not work for unexported fields
	stringVal.Value = *(*string)(unsafe.Pointer(val.UnsafeAddr()))
	stringVal.SrcRange = *(*hcl.Range)(unsafe.Pointer(rng.UnsafeAddr()))

	expr, diags := hclsyntax.ParseExpression([]byte(stringVal.Value), stringVal.SrcRange.Filename, stringVal.SrcRange.Start)
	if diags.HasErrors() {
		return nil
	}

	fns, err := funcCalls(expr)
	if err != nil {
		return err
	}

	for _, fn := range fns {
		m[fn] = struct{}{}
	}

	return nil
}

type jsonExp interface {
	ExprList() []hcl.Expression
	ExprMap() []hcl.KeyValuePair
}

func elementExpressions(je jsonExp, exp hcl.Expression) []hcl.Expression {
	list := je.ExprList()
	if len(list) != 0 {
		exp := make([]hcl.Expression, 0, len(list))
		for _, e := range list {
			if je, ok := e.(jsonExp); ok {
				exp = append(exp, elementExpressions(je, e)...)
			}
		}
		return exp
	}
	kvlist := je.ExprMap()
	if len(kvlist) != 0 {
		exp := make([]hcl.Expression, 0, len(kvlist)*2)
		for _, p := range kvlist {
			exp = append(exp, p.Key)
			if je, ok := p.Value.(jsonExp); ok {
				exp = append(exp, elementExpressions(je, p.Value)...)
			}
		}
		return exp
	}
	return []hcl.Expression{exp}
}
755 bake/hclparser/hclparser.go Normal file
@@ -0,0 +1,755 @@
package hclparser

import (
	"fmt"
	"math"
	"math/big"
	"reflect"
	"strconv"
	"strings"

	"github.com/docker/buildx/util/userfunc"
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/pkg/errors"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/gocty"
)

type Opt struct {
	LookupVar     func(string) (string, bool)
	Vars          map[string]string
	ValidateLabel func(string) error
}

type variable struct {
	Name    string         `json:"-" hcl:"name,label"`
	Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
	Body    hcl.Body       `json:"-" hcl:",body"`
}

type functionDef struct {
	Name     string         `json:"-" hcl:"name,label"`
	Params   *hcl.Attribute `json:"params,omitempty" hcl:"params"`
	Variadic *hcl.Attribute `json:"variadic_param,omitempty" hcl:"variadic_params"`
	Result   *hcl.Attribute `json:"result,omitempty" hcl:"result"`
}

type inputs struct {
	Variables []*variable    `hcl:"variable,block"`
	Functions []*functionDef `hcl:"function,block"`

	Remain hcl.Body `json:"-" hcl:",remain"`
}

type parser struct {
	opt Opt

	vars  map[string]*variable
	attrs map[string]*hcl.Attribute
	funcs map[string]*functionDef

	blocks      map[string]map[string][]*hcl.Block
	blockValues map[*hcl.Block]reflect.Value
	blockTypes  map[string]reflect.Type

	ectx *hcl.EvalContext

	progress  map[string]struct{}
	progressF map[string]struct{}
	progressB map[*hcl.Block]map[string]struct{}
	doneF     map[string]struct{}
	doneB     map[*hcl.Block]map[string]struct{}
}

var errUndefined = errors.New("undefined")

func (p *parser) loadDeps(exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
	fns, hcldiags := funcCalls(exp)
	if hcldiags.HasErrors() {
		return hcldiags
	}

	for _, fn := range fns {
		if err := p.resolveFunction(fn); err != nil {
			if allowMissing && errors.Is(err, errUndefined) {
				continue
			}
			return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
		}
	}

	for _, v := range exp.Variables() {
		if _, ok := exclude[v.RootName()]; ok {
			continue
		}
		if _, ok := p.blockTypes[v.RootName()]; ok {
			blockType := v.RootName()

			split := v.SimpleSplit().Rel
			if len(split) == 0 {
				return hcl.Diagnostics{
					&hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Invalid expression",
						Detail:   fmt.Sprintf("cannot access %s as a variable", blockType),
						Subject:  exp.Range().Ptr(),
						Context:  exp.Range().Ptr(),
					},
				}
			}
			blockName, ok := split[0].(hcl.TraverseAttr)
			if !ok {
				return hcl.Diagnostics{
					&hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Invalid expression",
						Detail:   fmt.Sprintf("cannot traverse %s without attribute", blockType),
						Subject:  exp.Range().Ptr(),
						Context:  exp.Range().Ptr(),
					},
				}
			}
			blocks := p.blocks[blockType][blockName.Name]
			if len(blocks) == 0 {
				continue
			}

			var target *hcl.BodySchema
			if len(split) > 1 {
				if attr, ok := split[1].(hcl.TraverseAttr); ok {
					target = &hcl.BodySchema{
						Attributes: []hcl.AttributeSchema{{Name: attr.Name}},
						Blocks:     []hcl.BlockHeaderSchema{{Type: attr.Name}},
					}
				}
			}
			if err := p.resolveBlock(blocks[0], target); err != nil {
				if allowMissing && errors.Is(err, errUndefined) {
					continue
				}
				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
			}
		} else {
			if err := p.resolveValue(v.RootName()); err != nil {
				if allowMissing && errors.Is(err, errUndefined) {
					continue
				}
				return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
			}
		}
	}

	return nil
}

// resolveFunction forces evaluation of a function, storing the result into the
// parser.
func (p *parser) resolveFunction(name string) error {
	if _, ok := p.doneF[name]; ok {
		return nil
	}
	f, ok := p.funcs[name]
	if !ok {
		if _, ok := p.ectx.Functions[name]; ok {
			return nil
		}
		return errors.Wrapf(errUndefined, "function %q does not exist", name)
	}
	if _, ok := p.progressF[name]; ok {
		return errors.Errorf("function cycle not allowed for %s", name)
	}
	p.progressF[name] = struct{}{}

	if f.Result == nil {
		return errors.Errorf("empty result not allowed for %s", name)
	}
	if f.Params == nil {
		return errors.Errorf("empty params not allowed for %s", name)
	}

	paramExprs, paramsDiags := hcl.ExprList(f.Params.Expr)
	if paramsDiags.HasErrors() {
		return paramsDiags
	}
	var diags hcl.Diagnostics
	params := map[string]struct{}{}
	for _, paramExpr := range paramExprs {
		param := hcl.ExprAsKeyword(paramExpr)
		if param == "" {
			diags = append(diags, &hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid param element",
				Detail:   "Each parameter name must be an identifier.",
				Subject:  paramExpr.Range().Ptr(),
			})
		}
		params[param] = struct{}{}
	}
	var variadic hcl.Expression
	if f.Variadic != nil {
		variadic = f.Variadic.Expr
		param := hcl.ExprAsKeyword(variadic)
		if param == "" {
			diags = append(diags, &hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid param element",
				Detail:   "Each parameter name must be an identifier.",
				Subject:  f.Variadic.Range.Ptr(),
			})
		}
		params[param] = struct{}{}
	}
	if diags.HasErrors() {
		return diags
	}

	if diags := p.loadDeps(f.Result.Expr, params, false); diags.HasErrors() {
		return diags
	}

	v, diags := userfunc.NewFunction(f.Params.Expr, variadic, f.Result.Expr, func() *hcl.EvalContext {
		return p.ectx
	})
	if diags.HasErrors() {
		return diags
	}
	p.doneF[name] = struct{}{}
	p.ectx.Functions[name] = v

	return nil
}

// resolveValue forces evaluation of a named value, storing the result into the
// parser.
func (p *parser) resolveValue(name string) (err error) {
	if _, ok := p.ectx.Variables[name]; ok {
		return nil
	}
	if _, ok := p.progress[name]; ok {
		return errors.Errorf("variable cycle not allowed for %s", name)
	}
	p.progress[name] = struct{}{}

	var v *cty.Value
	defer func() {
		if v != nil {
			p.ectx.Variables[name] = *v
		}
	}()

	def, ok := p.attrs[name]
	if _, builtin := p.opt.Vars[name]; !ok && !builtin {
		vr, ok := p.vars[name]
		if !ok {
			return errors.Wrapf(errUndefined, "variable %q does not exist", name)
		}
		def = vr.Default
	}

	if def == nil {
		val, ok := p.opt.Vars[name]
		if !ok {
			val, _ = p.opt.LookupVar(name)
		}
		vv := cty.StringVal(val)
		v = &vv
		return
	}

	if diags := p.loadDeps(def.Expr, nil, true); diags.HasErrors() {
		return diags
	}
	vv, diags := def.Expr.Value(p.ectx)
	if diags.HasErrors() {
		return diags
	}

	_, isVar := p.vars[name]

	if envv, ok := p.opt.LookupVar(name); ok && isVar {
		switch {
		case vv.Type().Equals(cty.Bool):
			b, err := strconv.ParseBool(envv)
			if err != nil {
				return errors.Wrapf(err, "failed to parse %s as bool", name)
			}
			vv = cty.BoolVal(b)
		case vv.Type().Equals(cty.String), vv.Type().Equals(cty.DynamicPseudoType):
			vv = cty.StringVal(envv)
		case vv.Type().Equals(cty.Number):
			n, err := strconv.ParseFloat(envv, 64)
			if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
				err = errors.Errorf("invalid number value")
			}
			if err != nil {
				return errors.Wrapf(err, "failed to parse %s as number", name)
			}
			vv = cty.NumberVal(big.NewFloat(n))
		default:
			// TODO: support lists with csv values
			return errors.Errorf("unsupported type %s for variable %s", vv.Type().FriendlyName(), name)
		}
	}
	v = &vv
	return nil
}

// resolveBlock force-evaluates a block, storing the result in the parser. If a
// target schema is provided, only the attributes and blocks present in the
// schema will be evaluated.
func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err error) {
	name := block.Labels[0]
	if err := p.opt.ValidateLabel(name); err != nil {
		return wrapErrorDiagnostic("Invalid name", err, &block.LabelRanges[0], &block.LabelRanges[0])
	}

	if _, ok := p.doneB[block]; !ok {
		p.doneB[block] = map[string]struct{}{}
	}
	if _, ok := p.progressB[block]; !ok {
		p.progressB[block] = map[string]struct{}{}
	}

	if target != nil {
		// filter out attributes and blocks that are already evaluated
		original := target
		target = &hcl.BodySchema{}
		for _, a := range original.Attributes {
			if _, ok := p.doneB[block][a.Name]; !ok {
				target.Attributes = append(target.Attributes, a)
			}
		}
		for _, b := range original.Blocks {
			if _, ok := p.doneB[block][b.Type]; !ok {
				target.Blocks = append(target.Blocks, b)
			}
		}
		if len(target.Attributes) == 0 && len(target.Blocks) == 0 {
			return nil
		}
	}

	if target != nil {
		// detect reference cycles
		for _, a := range target.Attributes {
			if _, ok := p.progressB[block][a.Name]; ok {
				return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, a.Name)
			}
		}
		for _, b := range target.Blocks {
			if _, ok := p.progressB[block][b.Type]; ok {
				return errors.Errorf("reference cycle not allowed for %s.%s.%s", block.Type, name, b.Type)
			}
		}
		for _, a := range target.Attributes {
			p.progressB[block][a.Name] = struct{}{}
		}
		for _, b := range target.Blocks {
			p.progressB[block][b.Type] = struct{}{}
		}
	}

	// create a filtered body that contains only the target properties
	body := func() hcl.Body {
		if target != nil {
			return FilterIncludeBody(block.Body, target)
		}

		filter := &hcl.BodySchema{}
		for k := range p.doneB[block] {
			filter.Attributes = append(filter.Attributes, hcl.AttributeSchema{Name: k})
			filter.Blocks = append(filter.Blocks, hcl.BlockHeaderSchema{Type: k})
		}
		return FilterExcludeBody(block.Body, filter)
	}

	// load dependencies from all targeted properties
	t, ok := p.blockTypes[block.Type]
	if !ok {
		return nil
	}
	schema, _ := gohcl.ImpliedBodySchema(reflect.New(t).Interface())
	content, _, diag := body().PartialContent(schema)
	if diag.HasErrors() {
		return diag
	}
	for _, a := range content.Attributes {
		diag := p.loadDeps(a.Expr, nil, true)
		if diag.HasErrors() {
			return diag
		}
	}
	for _, b := range content.Blocks {
		err := p.resolveBlock(b, nil)
		if err != nil {
			return err
		}
	}

	// decode!
	var output reflect.Value
	if prev, ok := p.blockValues[block]; ok {
		output = prev
	} else {
		output = reflect.New(t)
		setLabel(output, block.Labels[0]) // attach labels early, so we can reference them
	}
	diag = gohcl.DecodeBody(body(), p.ectx, output.Interface())
	if diag.HasErrors() {
		return diag
	}
	p.blockValues[block] = output

	// mark all targeted properties as done
	for _, a := range content.Attributes {
		p.doneB[block][a.Name] = struct{}{}
	}
	for _, b := range content.Blocks {
		p.doneB[block][b.Type] = struct{}{}
	}
	if target != nil {
		for _, a := range target.Attributes {
			p.doneB[block][a.Name] = struct{}{}
		}
		for _, b := range target.Blocks {
			p.doneB[block][b.Type] = struct{}{}
		}
	}

	// store the result into the evaluation context (so it can be referenced)
	outputType, err := gocty.ImpliedType(output.Interface())
	if err != nil {
		return err
	}
	outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
	if err != nil {
		return err
	}
	var m map[string]cty.Value
	if m2, ok := p.ectx.Variables[block.Type]; ok {
		m = m2.AsValueMap()
	}
	if m == nil {
		m = map[string]cty.Value{}
	}
	m[name] = outputValue
	p.ectx.Variables[block.Type] = cty.MapVal(m)

	return nil
}

func Parse(b hcl.Body, opt Opt, val interface{}) hcl.Diagnostics {
	reserved := map[string]struct{}{}
	schema, _ := gohcl.ImpliedBodySchema(val)

	for _, bs := range schema.Blocks {
		reserved[bs.Type] = struct{}{}
	}
	for k := range opt.Vars {
		reserved[k] = struct{}{}
	}

	var defs inputs
	if err := gohcl.DecodeBody(b, nil, &defs); err != nil {
		return err
	}
	defsSchema, _ := gohcl.ImpliedBodySchema(defs)

	if opt.LookupVar == nil {
		opt.LookupVar = func(string) (string, bool) {
			return "", false
		}
	}

	if opt.ValidateLabel == nil {
		opt.ValidateLabel = func(string) error {
			return nil
		}
	}

	p := &parser{
		opt: opt,

		vars:  map[string]*variable{},
		attrs: map[string]*hcl.Attribute{},
		funcs: map[string]*functionDef{},

		blocks:      map[string]map[string][]*hcl.Block{},
		blockValues: map[*hcl.Block]reflect.Value{},
		blockTypes:  map[string]reflect.Type{},

		progress:  map[string]struct{}{},
		progressF: map[string]struct{}{},
		progressB: map[*hcl.Block]map[string]struct{}{},

		doneF: map[string]struct{}{},
		doneB: map[*hcl.Block]map[string]struct{}{},
		ectx: &hcl.EvalContext{
			Variables: map[string]cty.Value{},
			Functions: stdlibFunctions,
		},
	}

	for _, v := range defs.Variables {
		// TODO: validate name
		if _, ok := reserved[v.Name]; ok {
			continue
		}
		p.vars[v.Name] = v
	}
	for _, v := range defs.Functions {
		// TODO: validate name
		if _, ok := reserved[v.Name]; ok {
			continue
		}
		p.funcs[v.Name] = v
	}

	content, b, diags := b.PartialContent(schema)
	if diags.HasErrors() {
		return diags
	}

	blocks, b, diags := b.PartialContent(defsSchema)
	if diags.HasErrors() {
		return diags
	}

	attrs, diags := b.JustAttributes()
	if diags.HasErrors() {
		if d := removeAttributesDiags(diags, reserved, p.vars); len(d) > 0 {
			return d
		}
	}

	for _, v := range attrs {
		if _, ok := reserved[v.Name]; ok {
			continue
		}
		p.attrs[v.Name] = v
	}
	delete(p.attrs, "function")

	for k := range p.opt.Vars {
		_ = p.resolveValue(k)
	}

	for _, a := range content.Attributes {
		return hcl.Diagnostics{
			&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid attribute",
				Detail:   "global attributes currently not supported",
				Subject:  &a.Range,
				Context:  &a.Range,
			},
		}
	}

	for k := range p.vars {
		if err := p.resolveValue(k); err != nil {
			if diags, ok := err.(hcl.Diagnostics); ok {
				return diags
			}
			r := p.vars[k].Body.MissingItemRange()
			return wrapErrorDiagnostic("Invalid value", err, &r, &r)
		}
	}

	for k := range p.funcs {
		if err := p.resolveFunction(k); err != nil {
			if diags, ok := err.(hcl.Diagnostics); ok {
				return diags
			}
			var subject *hcl.Range
			var context *hcl.Range
			if p.funcs[k].Params != nil {
				subject = &p.funcs[k].Params.Range
				context = subject
			} else {
				for _, block := range blocks.Blocks {
					if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
						subject = &block.LabelRanges[0]
						context = &block.DefRange
						break
					}
				}
			}
			return wrapErrorDiagnostic("Invalid function", err, subject, context)
		}
	}

	for _, b := range content.Blocks {
		if len(b.Labels) == 0 || len(b.Labels) > 1 {
			return hcl.Diagnostics{
				&hcl.Diagnostic{
					Severity: hcl.DiagError,
					Summary:  "Invalid block",
					Detail:   fmt.Sprintf("invalid block label: %v", b.Labels),
					Subject:  &b.LabelRanges[0],
					Context:  &b.LabelRanges[0],
				},
			}
		}
		bm, ok := p.blocks[b.Type]
		if !ok {
			bm = map[string][]*hcl.Block{}
			p.blocks[b.Type] = bm
		}

		lbl := b.Labels[0]
		bm[lbl] = append(bm[lbl], b)
	}

	type value struct {
		reflect.Value
		idx int
	}
	type field struct {
		idx    int
		typ    reflect.Type
		values map[string]value
	}
	types := map[string]field{}

	vt := reflect.ValueOf(val).Elem().Type()
	for i := 0; i < vt.NumField(); i++ {
		tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")

		p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
		types[tags[0]] = field{
			idx:    i,
			typ:    vt.Field(i).Type,
			values: make(map[string]value),
		}
	}

	diags = hcl.Diagnostics{}
	for _, b := range content.Blocks {
		v := reflect.ValueOf(val)

		err := p.resolveBlock(b, nil)
		if err != nil {
			if diag, ok := err.(hcl.Diagnostics); ok {
				if diag.HasErrors() {
					diags = append(diags, diag...)
					continue
				}
			} else {
				return wrapErrorDiagnostic("Invalid block", err, &b.LabelRanges[0], &b.DefRange)
			}
		}

		vv := p.blockValues[b]

		t := types[b.Type]
		lblIndex := setLabel(vv, b.Labels[0])

		oldValue, exists := t.values[b.Labels[0]]
		if !exists && lblIndex != -1 {
			if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
				for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
					if b.Labels[0] == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
						exists = true
						oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
						break
					}
				}
			}
		}
		if exists {
			if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
				m.Call([]reflect.Value{vv})
			} else {
				v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
			}
		} else {
			slice := v.Elem().Field(t.idx)
			if slice.IsNil() {
				slice = reflect.New(t.typ).Elem()
			}
			t.values[b.Labels[0]] = value{Value: vv, idx: slice.Len()}
			v.Elem().Field(t.idx).Set(reflect.Append(slice, vv))
		}
	}
	if diags.HasErrors() {
		return diags
	}

	for k := range p.attrs {
		if err := p.resolveValue(k); err != nil {
			if diags, ok := err.(hcl.Diagnostics); ok {
				return diags
			}
			return wrapErrorDiagnostic("Invalid attribute", err, &p.attrs[k].Range, &p.attrs[k].Range)
		}
	}

	return nil
}

// wrapErrorDiagnostic wraps an error into an hcl.Diagnostics object.
// If the error is already an hcl.Diagnostics object, it is returned as is.
func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context *hcl.Range) hcl.Diagnostics {
	switch err := err.(type) {
	case *hcl.Diagnostic:
		return hcl.Diagnostics{err}
	case hcl.Diagnostics:
		return err
	default:
		return hcl.Diagnostics{
			&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  message,
				Detail:   err.Error(),
				Subject:  subject,
				Context:  context,
			},
		}
	}
}

func setLabel(v reflect.Value, lbl string) int {
	// cache field index?
	numFields := v.Elem().Type().NumField()
	for i := 0; i < numFields; i++ {
		for _, t := range strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",") {
			if t == "label" {
				v.Elem().Field(i).Set(reflect.ValueOf(lbl))
				return i
			}
		}
	}
	return -1
}

func removeAttributesDiags(diags hcl.Diagnostics, reserved map[string]struct{}, vars map[string]*variable) hcl.Diagnostics {
	var fdiags hcl.Diagnostics
	for _, d := range diags {
		if fout := func(d *hcl.Diagnostic) bool {
			// https://github.com/docker/buildx/pull/541
			if d.Detail == "Blocks are not allowed here." {
				return true
			}
			for r := range reserved {
				// JSON body objects don't handle repeated blocks like HCL, but
				// reserved name attributes should be allowed when multiple bodies are merged.
				// https://github.com/hashicorp/hcl/blob/main/json/spec.md#blocks
				if strings.HasPrefix(d.Detail, fmt.Sprintf(`Argument "%s" was already set at `, r)) {
					return true
				}
			}
			for v := range vars {
				// do the same for global variables
				if strings.HasPrefix(d.Detail, fmt.Sprintf(`Argument "%s" was already set at `, v)) {
					return true
				}
			}
			return false
		}(d); !fout {
			fdiags = append(fdiags, d)
		}
	}
	return fdiags
}
126 bake/hclparser/stdlib.go Normal file
@@ -0,0 +1,126 @@
package hclparser

import (
	"time"

	"github.com/hashicorp/go-cty-funcs/cidr"
	"github.com/hashicorp/go-cty-funcs/crypto"
	"github.com/hashicorp/go-cty-funcs/encoding"
	"github.com/hashicorp/go-cty-funcs/uuid"
	"github.com/hashicorp/hcl/v2/ext/tryfunc"
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

var stdlibFunctions = map[string]function.Function{
	"absolute":               stdlib.AbsoluteFunc,
	"add":                    stdlib.AddFunc,
	"and":                    stdlib.AndFunc,
	"base64decode":           encoding.Base64DecodeFunc,
	"base64encode":           encoding.Base64EncodeFunc,
	"bcrypt":                 crypto.BcryptFunc,
	"byteslen":               stdlib.BytesLenFunc,
	"bytesslice":             stdlib.BytesSliceFunc,
	"can":                    tryfunc.CanFunc,
	"ceil":                   stdlib.CeilFunc,
	"chomp":                  stdlib.ChompFunc,
	"chunklist":              stdlib.ChunklistFunc,
	"cidrhost":               cidr.HostFunc,
	"cidrnetmask":            cidr.NetmaskFunc,
	"cidrsubnet":             cidr.SubnetFunc,
	"cidrsubnets":            cidr.SubnetsFunc,
	"csvdecode":              stdlib.CSVDecodeFunc,
	"coalesce":               stdlib.CoalesceFunc,
	"coalescelist":           stdlib.CoalesceListFunc,
	"compact":                stdlib.CompactFunc,
	"concat":                 stdlib.ConcatFunc,
	"contains":               stdlib.ContainsFunc,
	"convert":                typeexpr.ConvertFunc,
	"distinct":               stdlib.DistinctFunc,
	"divide":                 stdlib.DivideFunc,
	"element":                stdlib.ElementFunc,
	"equal":                  stdlib.EqualFunc,
	"flatten":                stdlib.FlattenFunc,
	"floor":                  stdlib.FloorFunc,
	"formatdate":             stdlib.FormatDateFunc,
	"format":                 stdlib.FormatFunc,
	"formatlist":             stdlib.FormatListFunc,
	"greaterthan":            stdlib.GreaterThanFunc,
	"greaterthanorequalto":   stdlib.GreaterThanOrEqualToFunc,
	"hasindex":               stdlib.HasIndexFunc,
	"indent":                 stdlib.IndentFunc,
	"index":                  stdlib.IndexFunc,
	"int":                    stdlib.IntFunc,
	"jsondecode":             stdlib.JSONDecodeFunc,
	"jsonencode":             stdlib.JSONEncodeFunc,
	"keys":                   stdlib.KeysFunc,
	"join":                   stdlib.JoinFunc,
	"length":                 stdlib.LengthFunc,
	"lessthan":               stdlib.LessThanFunc,
	"lessthanorequalto":      stdlib.LessThanOrEqualToFunc,
	"log":                    stdlib.LogFunc,
	"lookup":                 stdlib.LookupFunc,
	"lower":                  stdlib.LowerFunc,
	"max":                    stdlib.MaxFunc,
	"md5":                    crypto.Md5Func,
	"merge":                  stdlib.MergeFunc,
	"min":                    stdlib.MinFunc,
	"modulo":                 stdlib.ModuloFunc,
	"multiply":               stdlib.MultiplyFunc,
	"negate":                 stdlib.NegateFunc,
	"notequal":               stdlib.NotEqualFunc,
	"not":                    stdlib.NotFunc,
	"or":                     stdlib.OrFunc,
	"parseint":               stdlib.ParseIntFunc,
	"pow":                    stdlib.PowFunc,
	"range":                  stdlib.RangeFunc,
	"regexall":               stdlib.RegexAllFunc,
	"regex":                  stdlib.RegexFunc,
	"regex_replace":          stdlib.RegexReplaceFunc,
	"reverse":                stdlib.ReverseFunc,
	"reverselist":            stdlib.ReverseListFunc,
	"rsadecrypt":             crypto.RsaDecryptFunc,
	"sethaselement":          stdlib.SetHasElementFunc,
	"setintersection":        stdlib.SetIntersectionFunc,
	"setproduct":             stdlib.SetProductFunc,
	"setsubtract":            stdlib.SetSubtractFunc,
	"setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
	"setunion":               stdlib.SetUnionFunc,
	"sha1":                   crypto.Sha1Func,
	"sha256":                 crypto.Sha256Func,
	"sha512":                 crypto.Sha512Func,
	"signum":                 stdlib.SignumFunc,
	"slice":                  stdlib.SliceFunc,
	"sort":                   stdlib.SortFunc,
	"split":                  stdlib.SplitFunc,
	"strlen":                 stdlib.StrlenFunc,
	"substr":                 stdlib.SubstrFunc,
	"subtract":               stdlib.SubtractFunc,
	"timeadd":                stdlib.TimeAddFunc,
	"timestamp":              timestampFunc,
	"title":                  stdlib.TitleFunc,
	"trim":                   stdlib.TrimFunc,
	"trimprefix":             stdlib.TrimPrefixFunc,
	"trimspace":              stdlib.TrimSpaceFunc,
	"trimsuffix":             stdlib.TrimSuffixFunc,
	"try":                    tryfunc.TryFunc,
	"upper":                  stdlib.UpperFunc,
	"urlencode":              encoding.URLEncodeFunc,
	"uuidv4":                 uuid.V4Func,
	"uuidv5":                 uuid.V5Func,
	"values":                 stdlib.ValuesFunc,
	"zipmap":                 stdlib.ZipmapFunc,
}

// timestampFunc constructs a function that returns a string representation of the current date and time.
//
// This function was imported from Terraform's datetime utilities.
var timestampFunc = function.New(&function.Spec{
	Params: []function.Parameter{},
	Type:   function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
	},
})
237 bake/remote.go Normal file
@@ -0,0 +1,237 @@
package bake

import (
	"archive/tar"
	"bytes"
	"context"
	"strings"

	"github.com/docker/buildx/builder"
	"github.com/docker/buildx/driver"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/client/llb"
	gwclient "github.com/moby/buildkit/frontend/gateway/client"
	"github.com/pkg/errors"
)

type Input struct {
	State *llb.State
	URL   string
}

func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
	var filename string
	st, ok := detectGitContext(url)
	if !ok {
		st, filename, ok = detectHTTPContext(url)
		if !ok {
			return nil, nil, errors.Errorf("not url context")
		}
	}

	inp := &Input{State: st, URL: url}
	var files []File

	var node *builder.Node
	for i, n := range nodes {
		if n.Err == nil {
			node = &nodes[i]
			continue
		}
	}
	if node == nil {
		return nil, nil, nil
	}

	c, err := driver.Boot(ctx, ctx, node.Driver, pw)
	if err != nil {
		return nil, nil, err
	}

	ch, done := progress.NewChannel(pw)
	defer func() { <-done }()
	_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
		def, err := st.Marshal(ctx)
		if err != nil {
			return nil, err
		}
		res, err := c.Solve(ctx, gwclient.SolveRequest{
			Definition: def.ToPB(),
		})
		if err != nil {
			return nil, err
		}

		ref, err := res.SingleRef()
		if err != nil {
			return nil, err
		}

		if filename != "" {
			files, err = filesFromURLRef(ctx, c, ref, inp, filename, names)
		} else {
			files, err = filesFromRef(ctx, ref, names)
		}
		return nil, err
	}, ch)

	if err != nil {
		return nil, nil, err
	}

	return files, inp, nil
}

func IsRemoteURL(url string) bool {
	if _, _, ok := detectHTTPContext(url); ok {
		return true
	}
	if _, ok := detectGitContext(url); ok {
		return true
	}
	return false
}
|
||||||
|
|
||||||
|
func detectHTTPContext(url string) (*llb.State, string, bool) {
|
||||||
|
if httpPrefix.MatchString(url) {
|
||||||
|
httpContext := llb.HTTP(url, llb.Filename("context"), llb.WithCustomName("[internal] load remote build context"))
|
||||||
|
return &httpContext, "context", true
|
||||||
|
}
|
||||||
|
return nil, "", false
|
||||||
|
}
|
||||||
|
|
||||||
|
func detectGitContext(ref string) (*llb.State, bool) {
|
||||||
|
found := false
|
||||||
|
if httpPrefix.MatchString(ref) && gitURLPathWithFragmentSuffix.MatchString(ref) {
|
||||||
|
found = true
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, prefix := range []string{"git://", "github.com/", "git@"} {
|
||||||
|
if strings.HasPrefix(ref, prefix) {
|
||||||
|
found = true
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !found {
|
||||||
|
return nil, false
|
||||||
|
}
|
||||||
|
|
||||||
|
parts := strings.SplitN(ref, "#", 2)
|
||||||
|
branch := ""
|
||||||
|
if len(parts) > 1 {
|
||||||
|
branch = parts[1]
|
||||||
|
}
|
||||||
|
gitOpts := []llb.GitOption{llb.WithCustomName("[internal] load git source " + ref)}
|
||||||
|
|
||||||
|
st := llb.Git(parts[0], branch, gitOpts...)
|
||||||
|
return &st, true
|
||||||
|
}
|
||||||
|
|
||||||
|
func isArchive(header []byte) bool {
|
||||||
|
for _, m := range [][]byte{
|
||||||
|
{0x42, 0x5A, 0x68}, // bzip2
|
||||||
|
{0x1F, 0x8B, 0x08}, // gzip
|
||||||
|
{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, // xz
|
||||||
|
} {
|
||||||
|
if len(header) < len(m) {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if bytes.Equal(m, header[:len(m)]) {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
r := tar.NewReader(bytes.NewBuffer(header))
|
||||||
|
_, err := r.Next()
|
||||||
|
return err == nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Reference, inp *Input, filename string, names []string) ([]File, error) {
|
||||||
|
stat, err := ref.StatFile(ctx, gwclient.StatRequest{Path: filename})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
|
||||||
|
Filename: filename,
|
||||||
|
Range: &gwclient.FileRange{
|
||||||
|
Length: 1024,
|
||||||
|
},
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
if isArchive(dt) {
|
||||||
|
bc := llb.Scratch().File(llb.Copy(inp.State, filename, "/", &llb.CopyInfo{
|
||||||
|
AttemptUnpack: true,
|
||||||
|
}))
|
||||||
|
inp.State = &bc
|
||||||
|
inp.URL = ""
|
||||||
|
def, err := bc.Marshal(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
res, err := c.Solve(ctx, gwclient.SolveRequest{
|
||||||
|
Definition: def.ToPB(),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
ref, err := res.SingleRef()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return filesFromRef(ctx, ref, names)
|
||||||
|
}
|
||||||
|
|
||||||
|
inp.State = nil
|
||||||
|
name := inp.URL
|
||||||
|
inp.URL = ""
|
||||||
|
|
||||||
|
if len(dt) > stat.Size() {
|
||||||
|
if stat.Size() > 1024*512 {
|
||||||
|
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
|
||||||
|
}
|
||||||
|
|
||||||
|
dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{
|
||||||
|
Filename: filename,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return []File{{Name: name, Data: dt}}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func filesFromRef(ctx context.Context, ref gwclient.Reference, names []string) ([]File, error) {
|
||||||
|
// TODO: auto-remove parent dir in needed
|
||||||
|
var files []File
|
||||||
|
|
||||||
|
isDefault := false
|
||||||
|
if len(names) == 0 {
|
||||||
|
isDefault = true
|
||||||
|
names = defaultFilenames()
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, name := range names {
|
||||||
|
_, err := ref.StatFile(ctx, gwclient.StatRequest{Path: name})
|
||||||
|
if err != nil {
|
||||||
|
if isDefault {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{Filename: name})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
files = append(files, File{Name: name, Data: dt})
|
||||||
|
}
|
||||||
|
|
||||||
|
return files, nil
|
||||||
|
}
|
||||||
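The `isArchive` helper in bake/remote.go decides whether a remote payload is a compressed archive (or tar stream) by checking magic bytes before falling back to a tar-header probe. The same idea can be exercised standalone; this is an illustrative sketch, not the buildx code itself, and the `looksLikeArchive` name is invented here:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// magicNumbers mirrors the bzip2/gzip/xz signatures checked in isArchive.
var magicNumbers = [][]byte{
	{0x42, 0x5A, 0x68},                   // bzip2 "BZh"
	{0x1F, 0x8B, 0x08},                   // gzip
	{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, // xz
}

// looksLikeArchive reports whether header starts with a known compression
// signature, or parses as the start of a tar stream.
func looksLikeArchive(header []byte) bool {
	for _, m := range magicNumbers {
		if len(header) >= len(m) && bytes.Equal(m, header[:len(m)]) {
			return true
		}
	}
	// Fall back to trying the bytes as an uncompressed tar header.
	r := tar.NewReader(bytes.NewBuffer(header))
	_, err := r.Next()
	return err == nil
}

func main() {
	fmt.Println(looksLikeArchive([]byte{0x1F, 0x8B, 0x08, 0x00})) // gzip header
	fmt.Println(looksLikeArchive([]byte("FROM alpine\n")))        // plain Dockerfile
}
```

Only the first 1024 bytes of the remote file need to be fetched for this check, which is why `filesFromURLRef` reads with a bounded `FileRange` first.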
1382 build/build.go (diff suppressed because it is too large)
@@ -1,60 +0,0 @@
package build

import (
	"encoding/csv"
	"strings"

	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func ParseCacheEntry(in []string) ([]client.CacheOptionsEntry, error) {
	imports := make([]client.CacheOptionsEntry, 0, len(in))
	for _, in := range in {
		csvReader := csv.NewReader(strings.NewReader(in))
		fields, err := csvReader.Read()
		if err != nil {
			return nil, err
		}
		if isRefOnlyFormat(fields) {
			for _, field := range fields {
				imports = append(imports, client.CacheOptionsEntry{
					Type:  "registry",
					Attrs: map[string]string{"ref": field},
				})
			}
			continue
		}
		im := client.CacheOptionsEntry{
			Attrs: map[string]string{},
		}
		for _, field := range fields {
			parts := strings.SplitN(field, "=", 2)
			if len(parts) != 2 {
				return nil, errors.Errorf("invalid value %s", field)
			}
			key := strings.ToLower(parts[0])
			value := parts[1]
			switch key {
			case "type":
				im.Type = value
			default:
				im.Attrs[key] = value
			}
		}
		if im.Type == "" {
			return nil, errors.Errorf("type required form %q", in)
		}
		imports = append(imports, im)
	}
	return imports, nil
}

func isRefOnlyFormat(in []string) bool {
	for _, v := range in {
		if strings.Contains(v, "=") {
			return false
		}
	}
	return true
}
115 build/git.go (Normal file)
@@ -0,0 +1,115 @@
package build

import (
	"context"
	"os"
	"path"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/docker/buildx/util/gitutil"
	specs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
)

const DockerfileLabel = "com.docker.image.source.entrypoint"

func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
	res = make(map[string]string)
	if contextPath == "" {
		return
	}

	setGitLabels := false
	if v, ok := os.LookupEnv("BUILDX_GIT_LABELS"); ok {
		if v == "full" { // backward compatibility with old "full" mode
			setGitLabels = true
		} else if v, err := strconv.ParseBool(v); err == nil {
			setGitLabels = v
		}
	}
	setGitInfo := true
	if v, ok := os.LookupEnv("BUILDX_GIT_INFO"); ok {
		if v, err := strconv.ParseBool(v); err == nil {
			setGitInfo = v
		}
	}

	if !setGitLabels && !setGitInfo {
		return
	}

	// figure out in which directory the git command needs to run
	var wd string
	if filepath.IsAbs(contextPath) {
		wd = contextPath
	} else {
		cwd, _ := os.Getwd()
		wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
	}

	gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
	if err != nil {
		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
			return res, errors.New("buildx: git was not found in the system. Current commit information was not captured by the build")
		}
		return
	}

	if !gitc.IsInsideWorkTree() {
		if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
			return res, errors.New("buildx: failed to read current commit information with git rev-parse --is-inside-work-tree")
		}
		return res, nil
	}

	if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
		return res, errors.Wrapf(err, "buildx: failed to get git commit")
	} else if sha != "" {
		checkDirty := false
		if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
			if v, err := strconv.ParseBool(v); err == nil {
				checkDirty = v
			}
		}
		if checkDirty && gitc.IsDirty() {
			sha += "-dirty"
		}
		if setGitLabels {
			res["label:"+specs.AnnotationRevision] = sha
		}
		if setGitInfo {
			res["vcs:revision"] = sha
		}
	}

	if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
		if setGitLabels {
			res["label:"+specs.AnnotationSource] = rurl
		}
		if setGitInfo {
			res["vcs:source"] = rurl
		}
	}

	if setGitLabels {
		if root, err := gitc.RootDir(); err != nil {
			return res, errors.Wrapf(err, "buildx: failed to get git root dir")
		} else if root != "" {
			if dockerfilePath == "" {
				dockerfilePath = filepath.Join(wd, "Dockerfile")
			}
			if !filepath.IsAbs(dockerfilePath) {
				cwd, _ := os.Getwd()
				dockerfilePath = filepath.Join(cwd, dockerfilePath)
			}
			dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
			if !strings.HasPrefix(dockerfilePath, "..") {
				res["label:"+DockerfileLabel] = dockerfilePath
			}
		}
	}

	return
}
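`getGitAttributes` gives `BUILDX_GIT_LABELS` a special case: the legacy value `full` enables labels outright, while any other value goes through `strconv.ParseBool` and is silently ignored if unparsable. A minimal standalone sketch of that decision (the `gitLabelsEnabled` helper is invented here for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// gitLabelsEnabled mirrors the BUILDX_GIT_LABELS handling: "full" is a legacy
// alias for true; anything else is parsed as a bool and ignored on error.
// The (value, ok) pair matches what os.LookupEnv would return.
func gitLabelsEnabled(v string, ok bool) bool {
	enabled := false
	if ok {
		if v == "full" { // backward compatibility with the old "full" mode
			enabled = true
		} else if b, err := strconv.ParseBool(v); err == nil {
			enabled = b
		}
	}
	return enabled
}

func main() {
	fmt.Println(gitLabelsEnabled("full", true))  // legacy mode
	fmt.Println(gitLabelsEnabled("1", true))     // ParseBool path
	fmt.Println(gitLabelsEnabled("bogus", true)) // parse error: stays false
	fmt.Println(gitLabelsEnabled("", false))     // env var unset
}
```

`BUILDX_GIT_INFO` and `BUILDX_GIT_CHECK_DIRTY` use the same `ParseBool`-with-fallback pattern, just without the `full` alias.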
156 build/git_test.go (Normal file)
@@ -0,0 +1,156 @@
package build

import (
	"context"
	"os"
	"path"
	"path/filepath"
	"strings"
	"testing"

	"github.com/docker/buildx/util/gitutil"
	specs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func setupTest(tb testing.TB) {
	gitutil.Mktmp(tb)

	c, err := gitutil.New()
	require.NoError(tb, err)
	gitutil.GitInit(c, tb)

	df := []byte("FROM alpine:latest\n")
	assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))

	gitutil.GitAdd(c, tb, "Dockerfile")
	gitutil.GitCommit(c, tb, "initial commit")
	gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
}

func TestGetGitAttributesNotGitRepo(t *testing.T) {
	_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
	assert.NoError(t, err)
}

func TestGetGitAttributesBadGitRepo(t *testing.T) {
	tmp := t.TempDir()
	require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))

	_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
	assert.Error(t, err)
}

func TestGetGitAttributesNoContext(t *testing.T) {
	setupTest(t)

	gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
	assert.NoError(t, err)
	assert.Empty(t, gitattrs)
}

func TestGetGitAttributes(t *testing.T) {
	cases := []struct {
		name         string
		envGitLabels string
		envGitInfo   string
		expected     []string
	}{
		{
			name:         "default",
			envGitLabels: "",
			envGitInfo:   "",
			expected: []string{
				"vcs:revision",
				"vcs:source",
			},
		},
		{
			name:         "none",
			envGitLabels: "false",
			envGitInfo:   "false",
			expected:     []string{},
		},
		{
			name:         "gitinfo",
			envGitLabels: "false",
			envGitInfo:   "true",
			expected: []string{
				"vcs:revision",
				"vcs:source",
			},
		},
		{
			name:         "gitlabels",
			envGitLabels: "true",
			envGitInfo:   "false",
			expected: []string{
				"label:" + DockerfileLabel,
				"label:" + specs.AnnotationRevision,
				"label:" + specs.AnnotationSource,
			},
		},
		{
			name:         "both",
			envGitLabels: "true",
			envGitInfo:   "",
			expected: []string{
				"label:" + DockerfileLabel,
				"label:" + specs.AnnotationRevision,
				"label:" + specs.AnnotationSource,
				"vcs:revision",
				"vcs:source",
			},
		},
	}
	for _, tt := range cases {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			setupTest(t)
			if tt.envGitLabels != "" {
				t.Setenv("BUILDX_GIT_LABELS", tt.envGitLabels)
			}
			if tt.envGitInfo != "" {
				t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
			}
			gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
			require.NoError(t, err)
			for _, e := range tt.expected {
				assert.Contains(t, gitattrs, e)
				assert.NotEmpty(t, gitattrs[e])
				if e == "label:"+DockerfileLabel {
					assert.Equal(t, "Dockerfile", gitattrs[e])
				} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
					assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
				}
			}
		})
	}
}

func TestGetGitAttributesDirty(t *testing.T) {
	setupTest(t)
	t.Setenv("BUILDX_GIT_CHECK_DIRTY", "true")

	// make a change to test dirty flag
	df := []byte("FROM alpine:edge\n")
	require.NoError(t, os.Mkdir("dir", 0755))
	require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))

	t.Setenv("BUILDX_GIT_LABELS", "true")
	gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
	assert.Equal(t, 5, len(gitattrs))

	assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
	assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
	assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
	assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
	assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))

	assert.Contains(t, gitattrs, "vcs:source")
	assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
	assert.Contains(t, gitattrs, "vcs:revision")
	assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}
115 build/output.go
@@ -1,115 +0,0 @@
package build

import (
	"encoding/csv"
	"io"
	"os"
	"strings"

	"github.com/containerd/console"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func ParseOutputs(inp []string) ([]client.ExportEntry, error) {
	var outs []client.ExportEntry
	if len(inp) == 0 {
		return nil, nil
	}
	for _, s := range inp {
		csvReader := csv.NewReader(strings.NewReader(s))
		fields, err := csvReader.Read()
		if err != nil {
			return nil, err
		}

		out := client.ExportEntry{
			Attrs: map[string]string{},
		}
		if len(fields) == 1 && fields[0] == s && !strings.HasPrefix(s, "type=") {
			if s != "-" {
				outs = append(outs, client.ExportEntry{
					Type:      client.ExporterLocal,
					OutputDir: s,
				})
				continue
			}
			out = client.ExportEntry{
				Type: client.ExporterTar,
				Attrs: map[string]string{
					"dest": s,
				},
			}
		}

		if out.Type == "" {
			for _, field := range fields {
				parts := strings.SplitN(field, "=", 2)
				if len(parts) != 2 {
					return nil, errors.Errorf("invalid value %s", field)
				}
				key := strings.TrimSpace(strings.ToLower(parts[0]))
				value := parts[1]
				switch key {
				case "type":
					out.Type = value
				default:
					out.Attrs[key] = value
				}
			}
		}
		if out.Type == "" {
			return nil, errors.Errorf("type is required for output")
		}

		// handle client side
		switch out.Type {
		case client.ExporterLocal:
			dest, ok := out.Attrs["dest"]
			if !ok {
				return nil, errors.Errorf("dest is required for local output")
			}
			out.OutputDir = dest
			delete(out.Attrs, "dest")
		case client.ExporterOCI, client.ExporterDocker, client.ExporterTar:
			dest, ok := out.Attrs["dest"]
			if !ok {
				if out.Type != client.ExporterDocker {
					dest = "-"
				}
			}
			if dest == "-" {
				if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
					return nil, errors.Errorf("output file is required for %s exporter. refusing to write to console", out.Type)
				}
				out.Output = wrapWriteCloser(os.Stdout)
			} else if dest != "" {
				fi, err := os.Stat(dest)
				if err != nil && !os.IsNotExist(err) {
					return nil, errors.Wrapf(err, "invalid destination file: %s", dest)
				}
				if err == nil && fi.IsDir() {
					return nil, errors.Errorf("destination file %s is a directory", dest)
				}
				f, err := os.Create(dest)
				if err != nil {
					return nil, errors.Errorf("failed to open %s", err)
				}
				out.Output = wrapWriteCloser(f)
			}
			delete(out.Attrs, "dest")
		case "registry":
			out.Type = client.ExporterImage
			out.Attrs["push"] = "true"
		}

		outs = append(outs, out)
	}
	return outs, nil
}

func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
	return func(map[string]string) (io.WriteCloser, error) {
		return wc, nil
	}
}
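`ParseOutputs` accepts a shorthand before falling back to full `type=...,key=value` csv parsing: a bare value that contains no `=` is treated as a local export directory, and the single dash `-` means a tar stream to stdout. A small sketch of just that shorthand rule (helper name and return shape are illustrative, not the buildx API):

```go
package main

import (
	"fmt"
	"strings"
)

// exportShorthand mirrors ParseOutputs' shorthand rule: a bare value that is
// not key=value csv means a local directory export, and "-" means a tar
// stream to stdout. Empty results mean "full csv form, parse field by field".
func exportShorthand(s string) (typ, dest string) {
	if !strings.HasPrefix(s, "type=") && !strings.Contains(s, "=") {
		if s == "-" {
			return "tar", "-"
		}
		return "local", s
	}
	return "", "" // full csv form, handled by the field parser elsewhere
}

func main() {
	fmt.Println(exportShorthand("out/"))
	fmt.Println(exportShorthand("-"))
	fmt.Println(exportShorthand("type=oci,dest=img.tar"))
}
```

This is why `docker buildx build -o out/` works without spelling out `type=local,dest=out/`.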
@@ -1,60 +0,0 @@
package build

import (
	"encoding/csv"
	"strings"

	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/secrets/secretsprovider"
	"github.com/pkg/errors"
)

func ParseSecretSpecs(sl []string) (session.Attachable, error) {
	fs := make([]secretsprovider.FileSource, 0, len(sl))
	for _, v := range sl {
		s, err := parseSecret(v)
		if err != nil {
			return nil, err
		}
		fs = append(fs, *s)
	}
	store, err := secretsprovider.NewFileStore(fs)
	if err != nil {
		return nil, err
	}
	return secretsprovider.NewSecretProvider(store), nil
}

func parseSecret(value string) (*secretsprovider.FileSource, error) {
	csvReader := csv.NewReader(strings.NewReader(value))
	fields, err := csvReader.Read()
	if err != nil {
		return nil, errors.Wrap(err, "failed to parse csv secret")
	}

	fs := secretsprovider.FileSource{}

	for _, field := range fields {
		parts := strings.SplitN(field, "=", 2)
		key := strings.ToLower(parts[0])

		if len(parts) != 2 {
			return nil, errors.Errorf("invalid field '%s' must be a key=value pair", field)
		}

		value := parts[1]
		switch key {
		case "type":
			if value != "file" {
				return nil, errors.Errorf("unsupported secret type %q", value)
			}
		case "id":
			fs.ID = value
		case "source", "src":
			fs.FilePath = value
		default:
			return nil, errors.Errorf("unexpected key '%s' in '%s'", key, field)
		}
	}
	return &fs, nil
}
31 build/ssh.go
@@ -1,31 +0,0 @@
package build

import (
	"strings"

	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/sshforward/sshprovider"
)

func ParseSSHSpecs(sl []string) (session.Attachable, error) {
	configs := make([]sshprovider.AgentConfig, 0, len(sl))
	for _, v := range sl {
		c, err := parseSSH(v)
		if err != nil {
			return nil, err
		}
		configs = append(configs, *c)
	}
	return sshprovider.NewSSHAgentProvider(configs)
}

func parseSSH(value string) (*sshprovider.AgentConfig, error) {
	parts := strings.SplitN(value, "=", 2)
	cfg := sshprovider.AgentConfig{
		ID: parts[0],
	}
	if len(parts) > 1 {
		cfg.Paths = strings.Split(parts[1], ",")
	}
	return &cfg, nil
}
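`parseSSH` splits an `--ssh` flag value into an agent ID and an optional comma-separated list of socket or key paths. The same split can be shown without the buildkit types; `sshSpec` and `parseSSHSpec` below are illustrative names, not the sshprovider API:

```go
package main

import (
	"fmt"
	"strings"
)

// sshSpec mirrors sshprovider.AgentConfig's shape for this sketch: everything
// before the first "=" is the agent ID, the remainder is a comma-separated
// list of socket or key paths.
type sshSpec struct {
	ID    string
	Paths []string
}

func parseSSHSpec(value string) sshSpec {
	parts := strings.SplitN(value, "=", 2)
	s := sshSpec{ID: parts[0]}
	if len(parts) > 1 {
		s.Paths = strings.Split(parts[1], ",")
	}
	return s
}

func main() {
	fmt.Printf("%+v\n", parseSSHSpec("default"))
	fmt.Printf("%+v\n", parseSSHSpec("mykey=~/.ssh/id_ed25519,~/.ssh/id_rsa"))
}
```

A bare ID like `default` yields no paths, which tells the provider to use the running SSH agent instead of explicit keys.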
71 build/url.go (Normal file)
@@ -0,0 +1,71 @@
package build

import (
	"context"
	"os"
	"path/filepath"

	"github.com/docker/buildx/driver"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
	"github.com/moby/buildkit/client/llb"
	gwclient "github.com/moby/buildkit/frontend/gateway/client"
	"github.com/pkg/errors"
)

func createTempDockerfileFromURL(ctx context.Context, d driver.Driver, url string, pw progress.Writer) (string, error) {
	c, err := driver.Boot(ctx, ctx, d, pw)
	if err != nil {
		return "", err
	}
	var out string
	ch, done := progress.NewChannel(pw)
	defer func() { <-done }()
	_, err = c.Build(ctx, client.SolveOpt{}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
		def, err := llb.HTTP(url, llb.Filename("Dockerfile"), llb.WithCustomNamef("[internal] load %s", url)).Marshal(ctx)
		if err != nil {
			return nil, err
		}

		res, err := c.Solve(ctx, gwclient.SolveRequest{
			Definition: def.ToPB(),
		})
		if err != nil {
			return nil, err
		}
		ref, err := res.SingleRef()
		if err != nil {
			return nil, err
		}
		stat, err := ref.StatFile(ctx, gwclient.StatRequest{
			Path: "Dockerfile",
		})
		if err != nil {
			return nil, err
		}
		if stat.Size() > 512*1024 {
			return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
		}

		dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
			Filename: "Dockerfile",
		})
		if err != nil {
			return nil, err
		}
		dir, err := os.MkdirTemp("", "buildx")
		if err != nil {
			return nil, err
		}
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), dt, 0600); err != nil {
			return nil, err
		}
		out = dir
		return nil, nil
	}, ch)

	if err != nil {
		return "", err
	}
	return out, nil
}
@@ -7,11 +7,18 @@ import (
 	"os"
 	"strings"
 
+	"github.com/docker/cli/opts"
 	"github.com/pkg/errors"
 )
 
-// archiveHeaderSize is the number of bytes in an archive header
-const archiveHeaderSize = 512
+const (
+	// archiveHeaderSize is the number of bytes in an archive header
+	archiveHeaderSize = 512
+	// mobyHostGatewayName defines a special string which users can append to
+	// --add-host to add an extra entry in /etc/hosts that maps
+	// host.docker.internal to the host IP
+	mobyHostGatewayName = "host-gateway"
+)
 
 func isLocalDir(c string) bool {
 	st, err := os.Stat(c)
@@ -38,18 +45,35 @@ func isArchive(header []byte) bool {
 }
 
 // toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
-func toBuildkitExtraHosts(inp []string) (string, error) {
+func toBuildkitExtraHosts(inp []string, mobyDriver bool) (string, error) {
 	if len(inp) == 0 {
 		return "", nil
 	}
 	hosts := make([]string, 0, len(inp))
 	for _, h := range inp {
-		parts := strings.Split(h, ":")
-		if len(parts) != 2 || parts[0] == "" || net.ParseIP(parts[1]) == nil {
+		host, ip, ok := strings.Cut(h, ":")
+		if !ok || host == "" || ip == "" {
 			return "", errors.Errorf("invalid host %s", h)
 		}
-		hosts = append(hosts, parts[0]+"="+parts[1])
+		// Skip IP address validation for "host-gateway" string with moby driver
+		if !mobyDriver || ip != mobyHostGatewayName {
+			if net.ParseIP(ip) == nil {
+				return "", errors.Errorf("invalid host %s", h)
+			}
+		}
+		hosts = append(hosts, host+"="+ip)
 	}
 	return strings.Join(hosts, ","), nil
 }
+
+// toBuildkitUlimits converts ulimits from docker type=soft:hard format to buildkit's csv format
+func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
+	if inp == nil || len(inp.GetList()) == 0 {
+		return "", nil
+	}
+	ulimits := make([]string, 0, len(inp.GetList()))
+	for _, ulimit := range inp.GetList() {
+		ulimits = append(ulimits, ulimit.String())
+	}
+	return strings.Join(ulimits, ","), nil
+}
builder/builder.go
Normal file
292
builder/builder.go
Normal file
@@ -0,0 +1,292 @@
|
package builder

import (
	"context"
	"os"
	"sort"
	"sync"

	"github.com/docker/buildx/driver"
	"github.com/docker/buildx/store"
	"github.com/docker/buildx/store/storeutil"
	"github.com/docker/buildx/util/dockerutil"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"golang.org/x/sync/errgroup"
)

// Builder represents an active builder object
type Builder struct {
	*store.NodeGroup
	driverFactory driverFactory
	nodes         []Node
	opts          builderOpts
	err           error
}

type builderOpts struct {
	dockerCli       command.Cli
	name            string
	txn             *store.Txn
	contextPathHash string
	validate        bool
}

// Option provides a variadic option for configuring the builder.
type Option func(b *Builder)

// WithName sets builder name.
func WithName(name string) Option {
	return func(b *Builder) {
		b.opts.name = name
	}
}

// WithStore sets a store instance used at init.
func WithStore(txn *store.Txn) Option {
	return func(b *Builder) {
		b.opts.txn = txn
	}
}

// WithContextPathHash is used for determining pods in k8s driver instance.
func WithContextPathHash(contextPathHash string) Option {
	return func(b *Builder) {
		b.opts.contextPathHash = contextPathHash
	}
}

// WithSkippedValidation skips builder context validation.
func WithSkippedValidation() Option {
	return func(b *Builder) {
		b.opts.validate = false
	}
}

// New initializes a new builder client
func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
	b := &Builder{
		opts: builderOpts{
			dockerCli: dockerCli,
			validate:  true,
		},
	}
	for _, opt := range opts {
		opt(b)
	}

	if b.opts.txn == nil {
		// if store instance is nil we create a short-lived one using the
		// default store and ensure we release it on completion
		var release func()
		b.opts.txn, release, err = storeutil.GetStore(dockerCli)
		if err != nil {
			return nil, err
		}
		defer release()
	}

	if b.opts.name != "" {
		if b.NodeGroup, err = storeutil.GetNodeGroup(b.opts.txn, dockerCli, b.opts.name); err != nil {
			return nil, err
		}
	} else {
		if b.NodeGroup, err = storeutil.GetCurrentInstance(b.opts.txn, dockerCli); err != nil {
			return nil, err
		}
	}
	if b.opts.validate {
		if err = b.Validate(); err != nil {
			return nil, err
		}
	}

	return b, nil
}

// Validate validates builder context
func (b *Builder) Validate() error {
	if b.NodeGroup.DockerContext {
		list, err := b.opts.dockerCli.ContextStore().List()
		if err != nil {
			return err
		}
		currentContext := b.opts.dockerCli.CurrentContext()
		for _, l := range list {
			if l.Name == b.Name && l.Name != currentContext {
				return errors.Errorf("use `docker --context=%s buildx` to switch to context %q", l.Name, l.Name)
			}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ContextName returns builder context name if available.
|
||||||
|
func (b *Builder) ContextName() string {
|
||||||
|
ctxbuilders, err := b.opts.dockerCli.ContextStore().List()
|
||||||
|
if err != nil {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
for _, cb := range ctxbuilders {
|
||||||
|
if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
|
||||||
|
return cb.Name
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
// ImageOpt returns registry auth configuration
|
||||||
|
func (b *Builder) ImageOpt() (imagetools.Opt, error) {
|
||||||
|
return storeutil.GetImageConfig(b.opts.dockerCli, b.NodeGroup)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Boot bootstrap a builder
|
||||||
|
func (b *Builder) Boot(ctx context.Context) (bool, error) {
|
||||||
|
toBoot := make([]int, 0, len(b.nodes))
|
||||||
|
for idx, d := range b.nodes {
|
||||||
|
if d.Err != nil || d.Driver == nil || d.DriverInfo == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if d.DriverInfo.Status != driver.Running {
|
||||||
|
toBoot = append(toBoot, idx)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(toBoot) == 0 {
|
||||||
|
return false, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
|
||||||
|
if err != nil {
|
||||||
|
return false, err
|
||||||
|
}
|
||||||
|
|
||||||
|
baseCtx := ctx
|
||||||
|
eg, _ := errgroup.WithContext(ctx)
|
||||||
|
for _, idx := range toBoot {
|
||||||
|
func(idx int) {
|
||||||
|
eg.Go(func() error {
|
||||||
|
pw := progress.WithPrefix(printer, b.NodeGroup.Nodes[idx].Name, len(toBoot) > 1)
|
||||||
|
_, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
|
||||||
|
if err != nil {
|
||||||
|
b.nodes[idx].Err = err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
}(idx)
|
||||||
|
}
|
||||||
|
|
||||||
|
err = eg.Wait()
|
||||||
|
err1 := printer.Wait()
|
||||||
|
if err == nil {
|
||||||
|
err = err1
|
||||||
|
}
|
||||||
|
|
||||||
|
return true, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Inactive checks if all nodes are inactive for this builder.
|
||||||
|
func (b *Builder) Inactive() bool {
|
||||||
|
for _, d := range b.nodes {
|
||||||
|
if d.DriverInfo != nil && d.DriverInfo.Status == driver.Running {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
// Err returns error if any.
|
||||||
|
func (b *Builder) Err() error {
|
||||||
|
return b.err
|
||||||
|
}
|
||||||
|
|
||||||
|
type driverFactory struct {
|
||||||
|
driver.Factory
|
||||||
|
once sync.Once
|
||||||
|
}
|
||||||
|
|
||||||
|
// Factory returns the driver factory.
|
||||||
|
func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
|
||||||
|
b.driverFactory.once.Do(func() {
|
||||||
|
if b.Driver != "" {
|
||||||
|
b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// empty driver means nodegroup was implicitly created as a default
|
||||||
|
// driver for a docker context and allows falling back to a
|
||||||
|
// docker-container driver for older daemon that doesn't support
|
||||||
|
// buildkit (< 18.06).
|
||||||
|
ep := b.NodeGroup.Nodes[0].Endpoint
|
||||||
|
var dockerapi *dockerutil.ClientAPI
|
||||||
|
dockerapi, err = dockerutil.NewClientAPI(b.opts.dockerCli, b.NodeGroup.Nodes[0].Endpoint)
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
// check if endpoint is healthy is needed to determine the driver type.
|
||||||
|
// if this fails then can't continue with driver selection.
|
||||||
|
if _, err = dockerapi.Ping(ctx); err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
b.Driver = b.driverFactory.Factory.Name()
|
||||||
|
}
|
||||||
|
})
|
||||||
|
return b.driverFactory.Factory, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetBuilders returns all builders
|
||||||
|
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
|
||||||
|
storeng, err := txn.List()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
builders := make([]*Builder, len(storeng))
|
||||||
|
seen := make(map[string]struct{})
|
||||||
|
for i, ng := range storeng {
|
||||||
|
b, err := New(dockerCli,
|
||||||
|
WithName(ng.Name),
|
||||||
|
WithStore(txn),
|
||||||
|
WithSkippedValidation(),
|
||||||
|
)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
builders[i] = b
|
||||||
|
seen[b.NodeGroup.Name] = struct{}{}
|
||||||
|
}
|
||||||
|
|
||||||
|
contexts, err := dockerCli.ContextStore().List()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
sort.Slice(contexts, func(i, j int) bool {
|
||||||
|
return contexts[i].Name < contexts[j].Name
|
||||||
|
})
|
||||||
|
|
||||||
|
for _, c := range contexts {
|
||||||
|
// if a context has the same name as an instance from the store, do not
|
||||||
|
// add it to the builders list. An instance from the store takes
|
||||||
|
// precedence over context builders.
|
||||||
|
if _, ok := seen[c.Name]; ok {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
b, err := New(dockerCli,
|
||||||
|
WithName(c.Name),
|
||||||
|
WithStore(txn),
|
||||||
|
WithSkippedValidation(),
|
||||||
|
)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
builders = append(builders, b)
|
||||||
|
}
|
||||||
|
|
||||||
|
return builders, nil
|
||||||
|
}
|
||||||
builder/node.go · new file · 202 lines

```go
package builder

import (
	"context"

	"github.com/docker/buildx/driver"
	ctxkube "github.com/docker/buildx/driver/kubernetes/context"
	"github.com/docker/buildx/store"
	"github.com/docker/buildx/store/storeutil"
	"github.com/docker/buildx/util/dockerutil"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/platformutil"
	"github.com/moby/buildkit/util/grpcerrors"
	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
	"google.golang.org/grpc/codes"
)

type Node struct {
	store.Node
	Driver      driver.Driver
	DriverInfo  *driver.Info
	Platforms   []ocispecs.Platform
	ImageOpt    imagetools.Opt
	ProxyConfig map[string]string
	Version     string
	Err         error
}

// Nodes returns nodes for this builder.
func (b *Builder) Nodes() []Node {
	return b.nodes
}

// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
	eg, _ := errgroup.WithContext(ctx)
	b.nodes = make([]Node, len(b.NodeGroup.Nodes))

	defer func() {
		if b.err == nil && err != nil {
			b.err = err
		}
	}()

	factory, err := b.Factory(ctx)
	if err != nil {
		return nil, err
	}

	imageopt, err := b.ImageOpt()
	if err != nil {
		return nil, err
	}

	for i, n := range b.NodeGroup.Nodes {
		func(i int, n store.Node) {
			eg.Go(func() error {
				node := Node{
					Node:        n,
					ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
					Platforms:   n.Platforms,
				}
				defer func() {
					b.nodes[i] = node
				}()

				dockerapi, err := dockerutil.NewClientAPI(b.opts.dockerCli, n.Endpoint)
				if err != nil {
					node.Err = err
					return nil
				}

				contextStore := b.opts.dockerCli.ContextStore()

				var kcc driver.KubeClientConfig
				kcc, err = ctxkube.ConfigFromContext(n.Endpoint, contextStore)
				if err != nil {
					// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
					// try again with name="default".
					// FIXME(@AkihiroSuda): n should retain real context name.
					kcc, err = ctxkube.ConfigFromContext("default", contextStore)
					if err != nil {
						logrus.Error(err)
					}
				}

				tryToUseKubeConfigInCluster := false
				if kcc == nil {
					tryToUseKubeConfigInCluster = true
				} else {
					if _, err := kcc.ClientConfig(); err != nil {
						tryToUseKubeConfigInCluster = true
					}
				}
				if tryToUseKubeConfigInCluster {
					kccInCluster := driver.KubeClientConfigInCluster{}
					if _, err := kccInCluster.ClientConfig(); err == nil {
						logrus.Debug("using kube config in cluster")
						kcc = kccInCluster
					}
				}

				d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash)
				if err != nil {
					node.Err = err
					return nil
				}
				node.Driver = d
				node.ImageOpt = imageopt

				if withData {
					if err := node.loadData(ctx); err != nil {
						node.Err = err
					}
				}
				return nil
			})
		}(i, n)
	}

	if err := eg.Wait(); err != nil {
		return nil, err
	}

	// TODO: This should be done in the routine loading driver data
	if withData {
		kubernetesDriverCount := 0
		for _, d := range b.nodes {
			if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
				kubernetesDriverCount++
			}
		}

		isAllKubernetesDrivers := len(b.nodes) == kubernetesDriverCount
		if isAllKubernetesDrivers {
			var nodes []Node
			var dynamicNodes []store.Node
			for _, di := range b.nodes {
				// dynamic nodes are used in Kubernetes driver.
				// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
				if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
					for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
						diClone := di
						if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
							diClone.Platforms = pl
						}
						// append the clone, not di, so per-dynamic-node
						// platforms are not discarded
						nodes = append(nodes, diClone)
					}
					dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
				}
			}

			// not append (remove the static nodes in the store)
			b.NodeGroup.Nodes = dynamicNodes
			b.nodes = nodes
			b.NodeGroup.Dynamic = true
		}
	}

	return b.nodes, nil
}

func (n *Node) loadData(ctx context.Context) error {
	if n.Driver == nil {
		return nil
	}
	info, err := n.Driver.Info(ctx)
	if err != nil {
		return err
	}
	n.DriverInfo = info
	if n.DriverInfo.Status == driver.Running {
		driverClient, err := n.Driver.Client(ctx)
		if err != nil {
			return err
		}
		workers, err := driverClient.ListWorkers(ctx)
		if err != nil {
			return errors.Wrap(err, "listing workers")
		}
		for _, w := range workers {
			n.Platforms = append(n.Platforms, w.Platforms...)
		}
		n.Platforms = platformutil.Dedupe(n.Platforms)
		inf, err := driverClient.Info(ctx)
		if err != nil {
			if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
				n.Version, err = n.Driver.Version(ctx)
				if err != nil {
					return errors.Wrap(err, "getting version")
				}
			}
		} else {
			n.Version = inf.BuildkitVersion.Version
		}
	}
	return nil
}
```
```diff
@@ -4,45 +4,87 @@ import (
 	"fmt"
 	"os"
 
+	"github.com/containerd/containerd/pkg/seed"
 	"github.com/docker/buildx/commands"
 	"github.com/docker/buildx/version"
+	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli-plugins/manager"
 	"github.com/docker/cli/cli-plugins/plugin"
 	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/debug"
 	cliflags "github.com/docker/cli/cli/flags"
-	"github.com/spf13/cobra"
+	"github.com/moby/buildkit/solver/errdefs"
+	"github.com/moby/buildkit/util/stack"
+
+	_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
+	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
+	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
+	_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
 
 	_ "github.com/docker/buildx/driver/docker"
 	_ "github.com/docker/buildx/driver/docker-container"
+	_ "github.com/docker/buildx/driver/kubernetes"
+	_ "github.com/docker/buildx/driver/remote"
 )
 
-var experimental string
+func init() {
+	seed.WithTimeAndRand()
+	stack.SetVersionInfo(version.Version, version.Revision)
+}
+
+func runStandalone(cmd *command.DockerCli) error {
+	if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
+		return err
+	}
+	rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
+	return rootCmd.Execute()
+}
+
+func runPlugin(cmd *command.DockerCli) error {
+	rootCmd := commands.NewRootCmd("buildx", true, cmd)
+	return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
+		SchemaVersion: "0.1.0",
+		Vendor:        "Docker Inc.",
+		Version:       version.Version,
+	})
+}
 
 func main() {
-	if os.Getenv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND") == "" {
-		if len(os.Args) < 2 || os.Args[1] != manager.MetadataSubcommandName {
-			dockerCli, err := command.NewDockerCli()
-			if err != nil {
-				fmt.Fprintln(os.Stderr, err)
-				os.Exit(1)
-			}
-			opts := cliflags.NewClientOptions()
-			dockerCli.Initialize(opts)
-			rootCmd := commands.NewRootCmd(os.Args[0], false, dockerCli)
-			if err := rootCmd.Execute(); err != nil {
-				os.Exit(1)
-			}
-			os.Exit(0)
-		}
-	}
-
-	plugin.Run(func(dockerCli command.Cli) *cobra.Command {
-		return commands.NewRootCmd("buildx", true, dockerCli)
-	},
-		manager.Metadata{
-			SchemaVersion: "0.1.0",
-			Vendor:        "Docker Inc.",
-			Version:       version.Version,
-			Experimental:  experimental != "",
-		})
+	cmd, err := command.NewDockerCli()
+	if err != nil {
+		fmt.Fprintln(os.Stderr, err)
+		os.Exit(1)
+	}
+
+	if plugin.RunningStandalone() {
+		err = runStandalone(cmd)
+	} else {
+		err = runPlugin(cmd)
+	}
+	if err == nil {
+		return
+	}
+
+	if sterr, ok := err.(cli.StatusError); ok {
+		if sterr.Status != "" {
+			fmt.Fprintln(cmd.Err(), sterr.Status)
+		}
+		// StatusError should only be used for errors, and all errors should
+		// have a non-zero exit status, so never exit with 0
+		if sterr.StatusCode == 0 {
+			os.Exit(1)
+		}
+		os.Exit(sterr.StatusCode)
+	}
+
+	for _, s := range errdefs.Sources(err) {
+		s.Print(cmd.Err())
+	}
+	if debug.IsEnabled() {
+		fmt.Fprintf(cmd.Err(), "ERROR: %+v", stack.Formatter(err))
+	} else {
+		fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
+	}
+
+	os.Exit(1)
 }
```
cmd/buildx/tracing.go · new file · 19 lines

```go
package main

import (
	"github.com/moby/buildkit/util/tracing/detect"
	"go.opentelemetry.io/otel"

	_ "github.com/moby/buildkit/util/tracing/detect/delegated"
	_ "github.com/moby/buildkit/util/tracing/env"
)

func init() {
	detect.ServiceName = "buildx"
	// do not log tracing errors to stdio
	otel.SetErrorHandler(skipErrors{})
}

type skipErrors struct{}

func (skipErrors) Handle(err error) {}
```
codecov.yml · new file · 1 line

```yaml
comment: false
```
commands/bake.go · 186 lines changed

```diff
@@ -1,11 +1,20 @@
 package commands
 
 import (
+	"context"
 	"encoding/json"
 	"fmt"
 	"os"
 
+	"github.com/containerd/containerd/platforms"
 	"github.com/docker/buildx/bake"
+	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/util/buildflags"
+	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
+	"github.com/docker/buildx/util/progress"
+	"github.com/docker/buildx/util/tracing"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
@@ -14,36 +23,141 @@ import (
 
 type bakeOptions struct {
 	files     []string
-	printOnly bool
 	overrides []string
+	printOnly bool
 	commonOptions
 }
 
-func runBake(dockerCli command.Cli, targets []string, in bakeOptions) error {
+func runBake(dockerCli command.Cli, targets []string, in bakeOptions) (err error) {
 	ctx := appcontext.Context()
 
-	if len(in.files) == 0 {
-		files, err := defaultFiles()
-		if err != nil {
-			return err
-		}
-		if len(files) == 0 {
-			return errors.Errorf("no docker-compose.yml or docker-bake.hcl found, specify build file with -f/--file")
-		}
-		in.files = files
-	}
+	ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
+	if err != nil {
+		return err
+	}
+	defer func() {
+		end(err)
+	}()
+
+	var url string
+	cmdContext := "cwd://"
+
+	if len(targets) > 0 {
+		if bake.IsRemoteURL(targets[0]) {
+			url = targets[0]
+			targets = targets[1:]
+			if len(targets) > 0 {
+				if bake.IsRemoteURL(targets[0]) {
+					cmdContext = targets[0]
+					targets = targets[1:]
+				}
+			}
+		}
+	}
 
 	if len(targets) == 0 {
 		targets = []string{"default"}
 	}
 
-	m, err := bake.ReadTargets(ctx, in.files, targets, in.overrides)
-	if err != nil {
-		return err
-	}
+	overrides := in.overrides
+	if in.exportPush {
+		if in.exportLoad {
+			return errors.Errorf("push and load may not be set together at the moment")
+		}
+		overrides = append(overrides, "*.push=true")
+	} else if in.exportLoad {
+		overrides = append(overrides, "*.output=type=docker")
+	}
+	if in.noCache != nil {
+		overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *in.noCache))
+	}
+	if in.pull != nil {
+		overrides = append(overrides, fmt.Sprintf("*.pull=%t", *in.pull))
+	}
+	if in.sbom != "" {
+		overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("sbom", in.sbom)))
+	}
+	if in.provenance != "" {
+		overrides = append(overrides, fmt.Sprintf("*.attest=%s", buildflags.CanonicalizeAttest("provenance", in.provenance)))
+	}
+	contextPathHash, _ := os.Getwd()
+
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+	if err != nil {
+		return err
+	}
+
+	defer func() {
+		if printer != nil {
+			err1 := printer.Wait()
+			if err == nil {
+				err = err1
+			}
+		}
+	}()
+
+	var nodes []builder.Node
+	var files []bake.File
+	var inp *bake.Input
+
+	// instance only needed for reading remote bake files or building
+	if url != "" || !in.printOnly {
+		b, err := builder.New(dockerCli,
+			builder.WithName(in.builder),
+			builder.WithContextPathHash(contextPathHash),
+		)
+		if err != nil {
+			return err
+		}
+		if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
+			return errors.Wrapf(err, "failed to update builder last activity time")
+		}
+		nodes, err = b.LoadNodes(ctx, false)
+		if err != nil {
+			return err
+		}
+	}
+
+	if url != "" {
+		files, inp, err = bake.ReadRemoteFiles(ctx, nodes, url, in.files, printer)
+	} else {
+		files, err = bake.ReadLocalFiles(in.files)
+	}
+	if err != nil {
+		return err
+	}
+
+	tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
+		// don't forget to update documentation if you add a new
+		// built-in variable: docs/manuals/bake/file-definition.md#built-in-variables
+		"BAKE_CMD_CONTEXT":    cmdContext,
+		"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
+	})
+	if err != nil {
+		return err
+	}
+
+	// this function can update target context string from the input so call before printOnly check
+	bo, err := bake.TargetsToBuildOpt(tgts, inp)
+	if err != nil {
+		return err
+	}
 
 	if in.printOnly {
-		dt, err := json.MarshalIndent(map[string]map[string]bake.Target{"target": m}, "", "  ")
+		dt, err := json.MarshalIndent(struct {
+			Group  map[string]*bake.Group  `json:"group,omitempty"`
+			Target map[string]*bake.Target `json:"target"`
+		}{
+			grps,
+			tgts,
+		}, "", "  ")
+		if err != nil {
+			return err
+		}
+		err = printer.Wait()
+		printer = nil
 		if err != nil {
 			return err
 		}
@@ -51,37 +165,25 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) error {
 		return nil
 	}
 
-	bo, err := bake.TargetsToBuildOpt(m, in.noCache, in.pull)
+	resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
 	if err != nil {
+		return wrapBuildError(err, true)
+	}
+
+	if len(in.metadataFile) > 0 {
+		dt := make(map[string]interface{})
+		for t, r := range resp {
+			dt[t] = decodeExporterResponse(r.ExporterResponse)
+		}
+		if err := writeMetadataFile(in.metadataFile, dt); err != nil {
 			return err
 		}
+	}
 
-	return buildTargets(ctx, dockerCli, bo, in.progress)
+	return err
 }
 
-func defaultFiles() ([]string, error) {
-	fns := []string{
-		"docker-compose.yml",  // support app
-		"docker-compose.yaml", // support app
-		"docker-bake.json",
-		"docker-bake.override.json",
-		"docker-bake.hcl",
-		"docker-bake.override.hcl",
-	}
-	out := make([]string, 0, len(fns))
-	for _, f := range fns {
-		if _, err := os.Stat(f); err != nil {
-			if os.IsNotExist(errors.Cause(err)) {
-				continue
-			}
-			return nil, err
-		}
-		out = append(out, f)
-	}
-	return out, nil
-}
-
-func bakeCmd(dockerCli command.Cli) *cobra.Command {
+func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options bakeOptions
 
 	cmd := &cobra.Command{
@@ -89,6 +191,14 @@ func bakeCmd(dockerCli command.Cli) *cobra.Command {
 		Aliases: []string{"f"},
 		Short:   "Build from a file",
 		RunE: func(cmd *cobra.Command, args []string) error {
+			// reset to nil to avoid override is unset
+			if !cmd.Flags().Lookup("no-cache").Changed {
+				options.noCache = nil
+			}
+			if !cmd.Flags().Lookup("pull").Changed {
+				options.pull = nil
+			}
+			options.commonOptions.builder = rootOpts.builder
 			return runBake(dockerCli, args, options)
 		},
 	}
@@ -96,10 +206,14 @@ func bakeCmd(dockerCli command.Cli) *cobra.Command {
 	flags := cmd.Flags()
 
 	flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Build definition file")
+	flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
 	flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
-	flags.StringArrayVar(&options.overrides, "set", nil, "Override target value (eg: target.key=value)")
+	flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
+	flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
+	flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
+	flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
 
-	commonFlags(&options.commonOptions, flags)
+	commonBuildFlags(&options.commonOptions, flags)
 
 	return cmd
 }
```
```diff
@@ -1,99 +1,158 @@
 package commands
 
 import (
+	"bytes"
 	"context"
+	"encoding/base64"
+	"encoding/csv"
+	"encoding/json"
+	"fmt"
+	"io"
 	"os"
+	"path/filepath"
+	"strconv"
 	"strings"
+	"sync"
 
+	"github.com/containerd/console"
 	"github.com/docker/buildx/build"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/monitor"
+	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/buildflags"
+	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/buildx/util/progress"
+	"github.com/docker/buildx/util/tracing"
+	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/config"
+	dockeropts "github.com/docker/cli/opts"
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/pkg/ioutils"
+	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/session/auth/authprovider"
+	"github.com/moby/buildkit/solver/errdefs"
 	"github.com/moby/buildkit/util/appcontext"
+	"github.com/moby/buildkit/util/grpcerrors"
+	"github.com/morikuni/aec"
 	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
 	"github.com/spf13/pflag"
+	"google.golang.org/grpc/codes"
 )
 
+const defaultTargetName = "default"
+
 type buildOptions struct {
-	commonOptions
 	contextPath    string
 	dockerfileName string
-	tags           []string
-	labels         []string
-	buildArgs      []string
+	printFunc      string
 
+	allow         []string
+	attests       []string
+	buildArgs     []string
 	cacheFrom     []string
 	cacheTo       []string
-	target        string
-	platforms     []string
-	secrets       []string
-	ssh           []string
-	outputs       []string
-	imageIDFile   string
+	cgroupParent  string
+	contexts      []string
 	extraHosts    []string
+	imageIDFile   string
+	invoke        string
+	labels        []string
 	networkMode   string
+	noCacheFilter []string
+	outputs       []string
+	platforms     []string
+	quiet         bool
+	secrets       []string
+	shmSize       dockeropts.MemBytes
+	ssh           []string
+	tags          []string
+	target        string
+	ulimits       *dockeropts.UlimitOpt
+	commonOptions
+}
+
+type commonOptions struct {
+	builder      string
+	metadataFile string
+	noCache      *bool
+	progress     string
+	pull         *bool
 
 	exportPush bool
```
|
exportPush bool
|
||||||
exportLoad bool
|
exportLoad bool
|
||||||
|
|
||||||
// unimplemented
|
sbom string
|
||||||
squash bool
|
provenance string
|
||||||
quiet bool
|
|
||||||
|
|
||||||
allow []string
|
|
||||||
|
|
||||||
// hidden
|
|
||||||
// untrusted bool
|
|
||||||
// ulimits *opts.UlimitOpt
|
|
||||||
// memory opts.MemBytes
|
|
||||||
// memorySwap opts.MemSwapBytes
|
|
||||||
// shmSize opts.MemBytes
|
|
||||||
// cpuShares int64
|
|
||||||
// cpuPeriod int64
|
|
||||||
// cpuQuota int64
|
|
||||||
// cpuSetCpus string
|
|
||||||
// cpuSetMems string
|
|
||||||
// cgroupParent string
|
|
||||||
// isolation string
|
|
||||||
// compress bool
|
|
||||||
// securityOpt []string
|
|
||||||
}
|
}
|
||||||
|
|
||||||
type commonOptions struct {
|
func runBuild(dockerCli command.Cli, in buildOptions) (err error) {
|
||||||
noCache bool
|
|
||||||
progress string
|
|
||||||
pull bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func runBuild(dockerCli command.Cli, in buildOptions) error {
|
|
||||||
if in.squash {
|
|
||||||
return errors.Errorf("squash currently not implemented")
|
|
||||||
}
|
|
||||||
if in.quiet {
|
|
||||||
return errors.Errorf("quiet currently not implemented")
|
|
||||||
}
|
|
||||||
|
|
||||||
ctx := appcontext.Context()
|
ctx := appcontext.Context()
|
||||||
|
|
||||||
|
ctx, end, err := tracing.TraceCurrentCommand(ctx, "build")
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer func() {
|
||||||
|
end(err)
|
||||||
|
}()
|
||||||
|
|
||||||
|
noCache := false
|
||||||
|
if in.noCache != nil {
|
||||||
|
noCache = *in.noCache
|
||||||
|
}
|
||||||
|
pull := false
|
||||||
|
if in.pull != nil {
|
||||||
|
pull = *in.pull
|
||||||
|
}
|
||||||
|
|
||||||
|
if noCache && len(in.noCacheFilter) > 0 {
|
||||||
|
return errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
|
||||||
|
}
|
||||||
|
|
||||||
|
if in.quiet && in.progress != progress.PrinterModeAuto && in.progress != progress.PrinterModeQuiet {
|
||||||
|
return errors.Errorf("progress=%s and quiet cannot be used together", in.progress)
|
||||||
|
} else if in.quiet {
|
||||||
|
in.progress = "quiet"
|
||||||
|
}
|
||||||
|
|
||||||
|
contexts, err := parseContextNames(in.contexts)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
printFunc, err := parsePrintFunc(in.printFunc)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
opts := build.Options{
|
opts := build.Options{
|
||||||
Inputs: build.Inputs{
|
Inputs: build.Inputs{
|
||||||
ContextPath: in.contextPath,
|
ContextPath: in.contextPath,
|
||||||
DockerfilePath: in.dockerfileName,
|
DockerfilePath: in.dockerfileName,
|
||||||
InStream: os.Stdin,
|
InStream: os.Stdin,
|
||||||
|
NamedContexts: contexts,
|
||||||
},
|
},
|
||||||
Tags: in.tags,
|
|
||||||
Labels: listToMap(in.labels, false),
|
|
||||||
BuildArgs: listToMap(in.buildArgs, true),
|
BuildArgs: listToMap(in.buildArgs, true),
|
||||||
Pull: in.pull,
|
|
||||||
NoCache: in.noCache,
|
|
||||||
Target: in.target,
|
|
||||||
ImageIDFile: in.imageIDFile,
|
|
||||||
ExtraHosts: in.extraHosts,
|
ExtraHosts: in.extraHosts,
|
||||||
|
ImageIDFile: in.imageIDFile,
|
||||||
|
Labels: listToMap(in.labels, false),
|
||||||
NetworkMode: in.networkMode,
|
NetworkMode: in.networkMode,
|
||||||
|
NoCache: noCache,
|
||||||
|
NoCacheFilter: in.noCacheFilter,
|
||||||
|
Pull: pull,
|
||||||
|
ShmSize: in.shmSize,
|
||||||
|
Tags: in.tags,
|
||||||
|
Target: in.target,
|
||||||
|
Ulimits: in.ulimits,
|
||||||
|
PrintFunc: printFunc,
|
||||||
}
|
}
|
||||||
|
|
||||||
platforms, err := platformutil.Parse(in.platforms)
|
platforms, err := platformutil.Parse(in.platforms)
|
||||||
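In the hunk above, `commonOptions.noCache` and `pull` change from plain `bool` fields filled by `BoolVar` to `*bool` fields, so `runBuild` can tell "flag never registered/populated" (nil) apart from an explicit false. As a hedged, standalone sketch of that pattern using the standard library `flag` package (not buildx's pflag wiring):

```go
package main

import (
	"flag"
	"fmt"
)

// deref mirrors the nil-guarded dereference runBuild performs on noCache/pull:
// a nil pointer means the flag was never registered, so use the default.
func deref(p *bool, def bool) bool {
	if p == nil {
		return def
	}
	return *p
}

func main() {
	fs := flag.NewFlagSet("build", flag.ContinueOnError)
	// flag.Bool returns a *bool that is filled in when the flag set is parsed.
	noCache := fs.Bool("no-cache", false, "Do not use cache when building the image")
	fs.Parse([]string{"--no-cache"})

	fmt.Println(deref(noCache, false))
	var unset *bool // e.g. a buildOptions constructed programmatically, as bake does
	fmt.Println(deref(unset, false))
}
```

Callers that build the options struct by hand (rather than through the flag set) simply leave the pointer nil and get the default.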
@@ -102,21 +161,26 @@ func runBuild(dockerCli command.Cli, in buildOptions) error {
 	}
 	opts.Platforms = platforms
 
-	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(os.Stderr))
+	dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
+	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig))
 
-	secrets, err := build.ParseSecretSpecs(in.secrets)
+	secrets, err := buildflags.ParseSecretSpecs(in.secrets)
 	if err != nil {
 		return err
 	}
 	opts.Session = append(opts.Session, secrets)
 
-	ssh, err := build.ParseSSHSpecs(in.ssh)
+	sshSpecs := in.ssh
+	if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.contextPath) {
+		sshSpecs = []string{"default"}
+	}
+	ssh, err := buildflags.ParseSSHSpecs(sshSpecs)
 	if err != nil {
 		return err
 	}
 	opts.Session = append(opts.Session, ssh)
 
-	outputs, err := build.ParseOutputs(in.outputs)
+	outputs, err := buildflags.ParseOutputs(in.outputs)
 	if err != nil {
 		return err
 	}
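The SSH hunk above forwards the `default` SSH agent socket when no `--ssh` flag was given but the build context itself is fetched over SSH. A rough standalone sketch of that defaulting logic (`isGitSSH` here is a simplified stand-in for `buildflags.IsGitSSH`, which is more thorough; only the two common spellings are checked):

```go
package main

import (
	"fmt"
	"strings"
)

// isGitSSH is a hypothetical, simplified detector for SSH-based git contexts.
func isGitSSH(contextPath string) bool {
	return strings.HasPrefix(contextPath, "ssh://") || strings.HasPrefix(contextPath, "git@")
}

// defaultSSHSpecs mirrors the defaulting in the hunk: inject "default" only
// when the user passed no --ssh specs and the context needs SSH to fetch.
func defaultSSHSpecs(specs []string, contextPath string) []string {
	if len(specs) == 0 && isGitSSH(contextPath) {
		return []string{"default"}
	}
	return specs
}

func main() {
	fmt.Println(defaultSSHSpecs(nil, "git@github.com:docker/buildx.git"))
	fmt.Println(defaultSSHSpecs([]string{"mykey=/tmp/key"}, "git@github.com:docker/buildx.git"))
	fmt.Println(defaultSSHSpecs(nil, "."))
}
```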
@@ -154,46 +218,236 @@ func runBuild(dockerCli command.Cli, in buildOptions) error {
 			}
 		}
 	}
 
 	opts.Exports = outputs
 
+	inAttests := append([]string{}, in.attests...)
+	if in.provenance != "" {
+		inAttests = append(inAttests, buildflags.CanonicalizeAttest("provenance", in.provenance))
+	}
+	if in.sbom != "" {
+		inAttests = append(inAttests, buildflags.CanonicalizeAttest("sbom", in.sbom))
+	}
+	opts.Attests, err = buildflags.ParseAttests(inAttests)
+	if err != nil {
+		return err
+	}
+
-	cacheImports, err := build.ParseCacheEntry(in.cacheFrom)
+	cacheImports, err := buildflags.ParseCacheEntry(in.cacheFrom)
 	if err != nil {
 		return err
 	}
 	opts.CacheFrom = cacheImports
 
-	cacheExports, err := build.ParseCacheEntry(in.cacheTo)
+	cacheExports, err := buildflags.ParseCacheEntry(in.cacheTo)
 	if err != nil {
 		return err
 	}
 	opts.CacheTo = cacheExports
 
-	allow, err := build.ParseEntitlements(in.allow)
+	allow, err := buildflags.ParseEntitlements(in.allow)
 	if err != nil {
 		return err
 	}
 	opts.Allow = allow
 
-	return buildTargets(ctx, dockerCli, map[string]build.Options{"default": opts}, in.progress)
-}
+	// key string used for kubernetes "sticky" mode
+	contextPathHash, err := filepath.Abs(in.contextPath)
+	if err != nil {
+		contextPathHash = in.contextPath
+	}
 
-func buildTargets(ctx context.Context, dockerCli command.Cli, opts map[string]build.Options, progressMode string) error {
-	dis, err := getDefaultDrivers(ctx, dockerCli)
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithContextPathHash(contextPathHash),
+	)
+	if err != nil {
+		return err
+	}
+	if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
+		return errors.Wrapf(err, "failed to update builder last activity time")
+	}
+	nodes, err := b.LoadNodes(ctx, false)
 	if err != nil {
 		return err
 	}
 
-	ctx2, cancel := context.WithCancel(context.TODO())
-	defer cancel()
-	pw := progress.NewPrinter(ctx2, os.Stderr, progressMode)
-
-	_, err = build.Build(ctx, dis, opts, dockerAPI(dockerCli), dockerCli.ConfigFile(), pw)
-	return err
+	imageID, res, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, in.progress, in.metadataFile, in.invoke != "")
+	err = wrapBuildError(err, false)
+	if err != nil {
+		return err
+	}
+
+	if in.invoke != "" {
+		cfg, err := parseInvokeConfig(in.invoke)
+		if err != nil {
+			return err
+		}
+		cfg.ResultCtx = res
+		con := console.Current()
+		if err := con.SetRaw(); err != nil {
+			return errors.Errorf("failed to configure terminal: %v", err)
+		}
+		err = monitor.RunMonitor(ctx, cfg, func(ctx context.Context) (*build.ResultContext, error) {
+			_, rr, err := buildTargets(ctx, dockerCli, nodes, map[string]build.Options{defaultTargetName: opts}, in.progress, in.metadataFile, true)
+			return rr, err
+		}, io.NopCloser(os.Stdin), nopCloser{os.Stdout}, nopCloser{os.Stderr})
+		if err != nil {
+			logrus.Warnf("failed to run monitor: %v", err)
+		}
+		con.Reset()
+	}
+
+	if in.quiet {
+		fmt.Println(imageID)
+	}
+	return nil
 }
 
-func buildCmd(dockerCli command.Cli) *cobra.Command {
-	var options buildOptions
+type nopCloser struct {
+	io.WriteCloser
+}
+
+func (c nopCloser) Close() error { return nil }
+
+func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progressMode string, metadataFile string, allowNoOutput bool) (imageID string, res *build.ResultContext, err error) {
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
+
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, progressMode)
+	if err != nil {
+		return "", nil, err
+	}
+
+	var mu sync.Mutex
+	var idx int
+	resp, err := build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer, func(driverIndex int, gotRes *build.ResultContext) {
+		mu.Lock()
+		defer mu.Unlock()
+		if res == nil || driverIndex < idx {
+			idx, res = driverIndex, gotRes
+		}
+	}, allowNoOutput)
+	err1 := printer.Wait()
+	if err == nil {
+		err = err1
+	}
+	if err != nil {
+		return "", nil, err
+	}
+
+	if len(metadataFile) > 0 && resp != nil {
+		if err := writeMetadataFile(metadataFile, decodeExporterResponse(resp[defaultTargetName].ExporterResponse)); err != nil {
+			return "", nil, err
+		}
+	}
+
+	printWarnings(os.Stderr, printer.Warnings(), progressMode)
+
+	for k := range resp {
+		if opts[k].PrintFunc != nil {
+			if err := printResult(opts[k].PrintFunc, resp[k].ExporterResponse); err != nil {
+				return "", nil, err
+			}
+		}
+	}
+
+	return resp[defaultTargetName].ExporterResponse["containerimage.digest"], res, err
+}
+
+func parseInvokeConfig(invoke string) (cfg build.ContainerConfig, err error) {
+	cfg.Tty = true
+	if invoke == "default" {
+		return cfg, nil
+	}
+
+	csvReader := csv.NewReader(strings.NewReader(invoke))
+	fields, err := csvReader.Read()
+	if err != nil {
+		return cfg, err
+	}
+	if len(fields) == 1 && !strings.Contains(fields[0], "=") {
+		cfg.Cmd = []string{fields[0]}
+		return cfg, nil
+	}
+	for _, field := range fields {
+		parts := strings.SplitN(field, "=", 2)
+		if len(parts) != 2 {
+			return cfg, errors.Errorf("invalid value %s", field)
+		}
+		key := strings.ToLower(parts[0])
+		value := parts[1]
+		switch key {
+		case "args":
+			cfg.Cmd = append(cfg.Cmd, value) // TODO: support JSON
+		case "entrypoint":
+			cfg.Entrypoint = append(cfg.Entrypoint, value) // TODO: support JSON
+		case "env":
+			cfg.Env = append(cfg.Env, value)
+		case "user":
+			cfg.User = &value
+		case "cwd":
+			cfg.Cwd = &value
+		case "tty":
+			cfg.Tty, err = strconv.ParseBool(value)
+			if err != nil {
+				return cfg, errors.Errorf("failed to parse tty: %v", err)
+			}
+		default:
+			return cfg, errors.Errorf("unknown key %q", key)
+		}
+	}
+	return cfg, nil
+}
+
+func printWarnings(w io.Writer, warnings []client.VertexWarning, mode string) {
+	if len(warnings) == 0 || mode == progress.PrinterModeQuiet {
+		return
+	}
+	fmt.Fprintf(w, "\n ")
+	sb := &bytes.Buffer{}
+	if len(warnings) == 1 {
+		fmt.Fprintf(sb, "1 warning found")
+	} else {
+		fmt.Fprintf(sb, "%d warnings found", len(warnings))
+	}
+	if logrus.GetLevel() < logrus.DebugLevel {
+		fmt.Fprintf(sb, " (use --debug to expand)")
+	}
+	fmt.Fprintf(sb, ":\n")
+	fmt.Fprint(w, aec.Apply(sb.String(), aec.YellowF))
+
+	for _, warn := range warnings {
+		fmt.Fprintf(w, " - %s\n", warn.Short)
+		if logrus.GetLevel() < logrus.DebugLevel {
+			continue
+		}
+		for _, d := range warn.Detail {
+			fmt.Fprintf(w, "%s\n", d)
+		}
+		if warn.URL != "" {
+			fmt.Fprintf(w, "More info: %s\n", warn.URL)
+		}
+		if warn.SourceInfo != nil && warn.Range != nil {
+			src := errdefs.Source{
+				Info:   warn.SourceInfo,
+				Ranges: warn.Range,
+			}
+			src.Print(w)
+		}
+		fmt.Fprintf(w, "\n")
+	}
+}
+
+func newBuildOptions() buildOptions {
+	ulimits := make(map[string]*units.Ulimit)
+	return buildOptions{
+		ulimits: dockeropts.NewUlimitOpt(&ulimits),
+	}
+}
+
+func buildCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
+	options := newBuildOptions()
 
 	cmd := &cobra.Command{
 		Use:   "build [OPTIONS] PATH | URL | -",
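The new `parseInvokeConfig` above accepts the experimental `--invoke` value either as a bare command or as CSV `key=value` pairs, relying on `encoding/csv` for the field split and `strings.SplitN` with a limit of 2 so values may themselves contain `=`. A simplified, standalone sketch of that parsing (it collects pairs into a map rather than filling buildx's `build.ContainerConfig`):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseInvoke is a hypothetical, reduced version of the --invoke parser:
// a bare word becomes the command; otherwise CSV fields become key=value pairs.
func parseInvoke(invoke string) (map[string][]string, error) {
	out := map[string][]string{}
	if !strings.Contains(invoke, "=") {
		out["args"] = []string{invoke}
		return out, nil
	}
	fields, err := csv.NewReader(strings.NewReader(invoke)).Read()
	if err != nil {
		return nil, err
	}
	for _, field := range fields {
		// SplitN with limit 2 keeps "env=FOO=bar" intact as FOO=bar.
		parts := strings.SplitN(field, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid value %s", field)
		}
		key := strings.ToLower(parts[0])
		out[key] = append(out[key], parts[1])
	}
	return out, nil
}

func main() {
	cfg, err := parseInvoke("entrypoint=sh,args=-c,env=FOO=bar")
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg["entrypoint"], cfg["args"], cfg["env"])
}
```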
@@ -202,94 +456,151 @@ func buildCmd(dockerCli command.Cli) *cobra.Command {
 		Args:  cli.ExactArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
 			options.contextPath = args[0]
+			options.builder = rootOpts.builder
+			cmd.Flags().VisitAll(checkWarnedFlags)
 			return runBuild(dockerCli, options)
 		},
 	}
 
+	var platformsDefault []string
+	if v := os.Getenv("DOCKER_DEFAULT_PLATFORM"); v != "" {
+		platformsDefault = []string{v}
+	}
+
 	flags := cmd.Flags()
 
-	flags.BoolVar(&options.exportPush, "push", false, "Shorthand for --output=type=registry")
-	flags.BoolVar(&options.exportLoad, "load", false, "Shorthand for --output=type=docker")
+	flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, `Add a custom host-to-IP mapping (format: "host:ip")`)
+	flags.SetAnnotation("add-host", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#add-host"})
 
-	flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Name and optionally a tag in the 'name:tag' format")
+	flags.StringSliceVar(&options.allow, "allow", []string{}, `Allow extra privileged entitlement (e.g., "network.host", "security.insecure")`)
+
 	flags.StringArrayVar(&options.buildArgs, "build-arg", []string{}, "Set build-time variables")
-	flags.StringVarP(&options.dockerfileName, "file", "f", "", "Name of the Dockerfile (Default is 'PATH/Dockerfile')")
+
+	flags.StringArrayVar(&options.cacheFrom, "cache-from", []string{}, `External cache sources (e.g., "user/app:cache", "type=local,src=path/to/dir")`)
+
+	flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, `Cache export destinations (e.g., "user/app:cache", "type=local,dest=path/to/dir")`)
+
+	flags.StringVar(&options.cgroupParent, "cgroup-parent", "", "Optional parent cgroup for the container")
+	flags.SetAnnotation("cgroup-parent", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#cgroup-parent"})
+
+	flags.StringArrayVar(&options.contexts, "build-context", []string{}, "Additional build contexts (e.g., name=path)")
+
+	flags.StringVarP(&options.dockerfileName, "file", "f", "", `Name of the Dockerfile (default: "PATH/Dockerfile")`)
+	flags.SetAnnotation("file", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#file"})
+
+	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
 
 	flags.StringArrayVar(&options.labels, "label", []string{}, "Set metadata for an image")
 
-	flags.StringArrayVar(&options.cacheFrom, "cache-from", []string{}, "External cache sources (eg. user/app:cache, type=local,src=path/to/dir)")
-	flags.StringArrayVar(&options.cacheTo, "cache-to", []string{}, "Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)")
+	flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--output=type=docker"`)
 
-	flags.StringVar(&options.target, "target", "", "Set the target build stage to build.")
+	flags.StringVar(&options.networkMode, "network", "default", `Set the networking mode for the "RUN" instructions during build`)
 
-	flags.StringSliceVar(&options.allow, "allow", []string{}, "Allow extra privileged entitlement, e.g. network.host, security.insecure")
+	flags.StringArrayVar(&options.noCacheFilter, "no-cache-filter", []string{}, "Do not cache specified stages")
+
+	flags.StringArrayVarP(&options.outputs, "output", "o", []string{}, `Output destination (format: "type=local,dest=path")`)
+
+	flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build")
+
+	if isExperimental() {
+		flags.StringVar(&options.printFunc, "print", "", "Print result of information request (e.g., outline, targets) [experimental]")
+	}
+
+	flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--output=type=registry"`)
 
-	// not implemented
 	flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success")
-	flags.StringVar(&options.networkMode, "network", "default", "Set the networking mode for the RUN instructions during build")
-	flags.StringSliceVar(&options.extraHosts, "add-host", []string{}, "Add a custom host-to-IP mapping (host:ip)")
-	flags.StringVar(&options.imageIDFile, "iidfile", "", "Write the image ID to the file")
-	flags.BoolVar(&options.squash, "squash", false, "Squash newly built layers into a single new layer")
-	flags.MarkHidden("quiet")
-	flags.MarkHidden("squash")
+
+	flags.StringArrayVar(&options.secrets, "secret", []string{}, `Secret to expose to the build (format: "id=mysecret[,src=/local/secret]")`)
+
+	flags.Var(&options.shmSize, "shm-size", `Size of "/dev/shm"`)
+
+	flags.StringArrayVar(&options.ssh, "ssh", []string{}, `SSH agent socket or keys to expose to the build (format: "default|<id>[=<socket>|<key>[,<key>]]")`)
+
+	flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, `Name and optionally a tag (format: "name:tag")`)
+	flags.SetAnnotation("tag", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#tag"})
+
+	flags.StringVar(&options.target, "target", "", "Set the target build stage to build")
+	flags.SetAnnotation("target", annotation.ExternalURL, []string{"https://docs.docker.com/engine/reference/commandline/build/#target"})
+
+	flags.Var(options.ulimits, "ulimit", "Ulimit options")
+
+	flags.StringArrayVar(&options.attests, "attest", []string{}, `Attestation parameters (format: "type=sbom,generator=image")`)
+	flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--attest=type=sbom"`)
+	flags.StringVar(&options.provenance, "provenance", "", `Shortand for "--attest=type=provenance"`)
+
+	if isExperimental() {
+		flags.StringVar(&options.invoke, "invoke", "", "Invoke a command after the build [experimental]")
+	}
 
 	// hidden flags
 	var ignore string
 	var ignoreSlice []string
 	var ignoreBool bool
 	var ignoreInt int64
-	flags.StringVar(&ignore, "ulimit", "", "Ulimit options")
-	flags.MarkHidden("ulimit")
-	flags.StringSliceVar(&ignoreSlice, "security-opt", []string{}, "Security options")
-	flags.MarkHidden("security-opt")
 	flags.BoolVar(&ignoreBool, "compress", false, "Compress the build context using gzip")
 	flags.MarkHidden("compress")
-	flags.StringVarP(&ignore, "memory", "m", "", "Memory limit")
-	flags.MarkHidden("memory")
-	flags.StringVar(&ignore, "memory-swap", "", "Swap limit equal to memory plus swap: '-1' to enable unlimited swap")
-	flags.MarkHidden("memory-swap")
-	flags.StringVar(&ignore, "shm-size", "", "Size of /dev/shm")
-	flags.MarkHidden("shm-size")
-	flags.Int64VarP(&ignoreInt, "cpu-shares", "c", 0, "CPU shares (relative weight)")
-	flags.MarkHidden("cpu-shares")
-	flags.Int64Var(&ignoreInt, "cpu-period", 0, "Limit the CPU CFS (Completely Fair Scheduler) period")
-	flags.MarkHidden("cpu-period")
-	flags.Int64Var(&ignoreInt, "cpu-quota", 0, "Limit the CPU CFS (Completely Fair Scheduler) quota")
-	flags.MarkHidden("cpu-quota")
-	flags.StringVar(&ignore, "cpuset-cpus", "", "CPUs in which to allow execution (0-3, 0,1)")
-	flags.MarkHidden("cpuset-cpus")
-	flags.StringVar(&ignore, "cpuset-mems", "", "MEMs in which to allow execution (0-3, 0,1)")
-	flags.MarkHidden("cpuset-mems")
-	flags.StringVar(&ignore, "cgroup-parent", "", "Optional parent cgroup for the container")
-	flags.MarkHidden("cgroup-parent")
 	flags.StringVar(&ignore, "isolation", "", "Container isolation technology")
 	flags.MarkHidden("isolation")
+	flags.SetAnnotation("isolation", "flag-warn", []string{"isolation flag is deprecated with BuildKit."})
+
+	flags.StringSliceVar(&ignoreSlice, "security-opt", []string{}, "Security options")
+	flags.MarkHidden("security-opt")
+	flags.SetAnnotation("security-opt", "flag-warn", []string{`security-opt flag is deprecated. "RUN --security=insecure" should be used with BuildKit.`})
+
+	flags.BoolVar(&ignoreBool, "squash", false, "Squash newly built layers into a single new layer")
+	flags.MarkHidden("squash")
+	flags.SetAnnotation("squash", "flag-warn", []string{"experimental flag squash is removed with BuildKit. You should squash inside build using a multi-stage Dockerfile for efficiency."})
+
+	flags.StringVarP(&ignore, "memory", "m", "", "Memory limit")
+	flags.MarkHidden("memory")
+
+	flags.StringVar(&ignore, "memory-swap", "", `Swap limit equal to memory plus swap: "-1" to enable unlimited swap`)
+	flags.MarkHidden("memory-swap")
+
+	flags.Int64VarP(&ignoreInt, "cpu-shares", "c", 0, "CPU shares (relative weight)")
+	flags.MarkHidden("cpu-shares")
+
+	flags.Int64Var(&ignoreInt, "cpu-period", 0, "Limit the CPU CFS (Completely Fair Scheduler) period")
+	flags.MarkHidden("cpu-period")
+
+	flags.Int64Var(&ignoreInt, "cpu-quota", 0, "Limit the CPU CFS (Completely Fair Scheduler) quota")
+	flags.MarkHidden("cpu-quota")
+
+	flags.StringVar(&ignore, "cpuset-cpus", "", `CPUs in which to allow execution ("0-3", "0,1")`)
+	flags.MarkHidden("cpuset-cpus")
+
+	flags.StringVar(&ignore, "cpuset-mems", "", `MEMs in which to allow execution ("0-3", "0,1")`)
+	flags.MarkHidden("cpuset-mems")
+
 	flags.BoolVar(&ignoreBool, "rm", true, "Remove intermediate containers after a successful build")
 	flags.MarkHidden("rm")
+
 	flags.BoolVar(&ignoreBool, "force-rm", false, "Always remove intermediate containers")
 	flags.MarkHidden("force-rm")
 
-	platformsDefault := []string{}
-	if v := os.Getenv("DOCKER_DEFAULT_PLATFORM"); v != "" {
-		platformsDefault = []string{v}
-	}
-	flags.StringArrayVar(&options.platforms, "platform", platformsDefault, "Set target platform for build")
-
-	flags.StringArrayVar(&options.secrets, "secret", []string{}, "Secret file to expose to the build: id=mysecret,src=/local/secret")
-
-	flags.StringArrayVar(&options.ssh, "ssh", []string{}, "SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])")
-
-	flags.StringArrayVarP(&options.outputs, "output", "o", []string{}, "Output destination (format: type=local,dest=path)")
-
-	commonFlags(&options.commonOptions, flags)
+	commonBuildFlags(&options.commonOptions, flags)
 
 	return cmd
 }
 
-func commonFlags(options *commonOptions, flags *pflag.FlagSet) {
-	flags.BoolVar(&options.noCache, "no-cache", false, "Do not use cache when building the image")
-	flags.StringVar(&options.progress, "progress", "auto", "Set type of progress output (auto, plain, tty). Use plain to show container output")
-	flags.BoolVar(&options.pull, "pull", false, "Always attempt to pull a newer version of the image")
+func commonBuildFlags(options *commonOptions, flags *pflag.FlagSet) {
+	options.noCache = flags.Bool("no-cache", false, "Do not use cache when building the image")
+	flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
+	options.pull = flags.Bool("pull", false, "Always attempt to pull all referenced images")
+	flags.StringVar(&options.metadataFile, "metadata-file", "", "Write build result metadata to the file")
+}
+
+func checkWarnedFlags(f *pflag.Flag) {
+	if !f.Changed {
+		return
+	}
+	for t, m := range f.Annotations {
+		switch t {
+		case "flag-warn":
+			logrus.Warn(m[0])
+		}
+	}
 }
 
 func listToMap(values []string, defaultEnv bool) map[string]string {
@@ -298,7 +609,10 @@ func listToMap(values []string, defaultEnv bool) map[string]string {
 		kv := strings.SplitN(value, "=", 2)
 		if len(kv) == 1 {
 			if defaultEnv {
-				result[kv[0]] = os.Getenv(kv[0])
+				v, ok := os.LookupEnv(kv[0])
+				if ok {
+					result[kv[0]] = v
+				}
 			} else {
 				result[kv[0]] = ""
 			}
@@ -308,3 +622,125 @@ func listToMap(values []string, defaultEnv bool) map[string]string {
 	}
 	return result
 }
+
+func parseContextNames(values []string) (map[string]build.NamedContext, error) {
+	if len(values) == 0 {
+		return nil, nil
+	}
+	result := make(map[string]build.NamedContext, len(values))
+	for _, value := range values {
+		kv := strings.SplitN(value, "=", 2)
+		if len(kv) != 2 {
+			return nil, errors.Errorf("invalid context value: %s, expected key=value", value)
+		}
+		named, err := reference.ParseNormalizedNamed(kv[0])
+		if err != nil {
+			return nil, errors.Wrapf(err, "invalid context name %s", kv[0])
+		}
+		name := strings.TrimSuffix(reference.FamiliarString(named), ":latest")
+		result[name] = build.NamedContext{Path: kv[1]}
+	}
+	return result, nil
+}
+
+func parsePrintFunc(str string) (*build.PrintFunc, error) {
+	if str == "" {
+		return nil, nil
+	}
+	csvReader := csv.NewReader(strings.NewReader(str))
+	fields, err := csvReader.Read()
+	if err != nil {
+		return nil, err
+	}
+	f := &build.PrintFunc{}
+	for _, field := range fields {
+		parts := strings.SplitN(field, "=", 2)
+		if len(parts) == 2 {
+			if parts[0] == "format" {
+				f.Format = parts[1]
+			} else {
+				return nil, errors.Errorf("invalid print field: %s", field)
+			}
+		} else {
+			if f.Name != "" {
+				return nil, errors.Errorf("invalid print value: %s", str)
+			}
+			f.Name = field
+		}
+	}
+	return f, nil
+}
+
+func writeMetadataFile(filename string, dt interface{}) error {
+	b, err := json.MarshalIndent(dt, "", "  ")
+	if err != nil {
+		return err
+	}
+	return ioutils.AtomicWriteFile(filename, b, 0644)
+}
+
+func decodeExporterResponse(exporterResponse map[string]string) map[string]interface{} {
+	out := make(map[string]interface{})
+	for k, v := range exporterResponse {
+		dt, err := base64.StdEncoding.DecodeString(v)
+		if err != nil {
+			out[k] = v
+			continue
+		}
+		var raw map[string]interface{}
+		if err = json.Unmarshal(dt, &raw); err != nil || len(raw) == 0 {
+			out[k] = v
+			continue
+		}
+		out[k] = json.RawMessage(dt)
+	}
+	return out
+}
+
+func wrapBuildError(err error, bake bool) error {
|
||||||
|
if err == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
st, ok := grpcerrors.AsGRPCStatus(err)
|
||||||
|
if ok {
|
||||||
|
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
|
||||||
|
msg := "current frontend does not support --build-context."
|
||||||
|
if bake {
|
||||||
|
msg = "current frontend does not support defining additional contexts for targets."
|
||||||
|
}
|
||||||
|
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
|
||||||
|
return &wrapped{err, msg}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
type wrapped struct {
|
||||||
|
err error
|
||||||
|
msg string
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *wrapped) Error() string {
|
||||||
|
return w.msg
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *wrapped) Unwrap() error {
|
||||||
|
return w.err
|
||||||
|
}
|
||||||
|
|
||||||
|
func isExperimental() bool {
|
||||||
|
if v, ok := os.LookupEnv("BUILDX_EXPERIMENTAL"); ok {
|
||||||
|
vv, _ := strconv.ParseBool(v)
|
||||||
|
return vv
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
|
||||||
|
txn, release, err := storeutil.GetStore(dockerCli)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer release()
|
||||||
|
return txn.UpdateLastActivity(ng)
|
||||||
|
}
|
||||||
|
@@ -1,15 +1,26 @@
 package commands

 import (
+	"bytes"
+	"context"
 	"encoding/csv"
 	"fmt"
+	"net/url"
 	"os"
 	"strings"
+	"time"

+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/driver"
+	remoteutil "github.com/docker/buildx/driver/remote/util"
 	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil"
+	"github.com/docker/buildx/util/confutil"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	dopts "github.com/docker/cli/opts"
 	"github.com/google/shlex"
 	"github.com/moby/buildkit/util/appcontext"
 	"github.com/pkg/errors"
@@ -28,6 +39,7 @@ type createOptions struct {
 	flags        string
 	configFile   string
 	driverOpts   []string
+	bootstrap    bool
 	// upgrade      bool // perform upgrade of the driver
 }

@@ -53,23 +65,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		}
 	}

-	driverName := in.driver
-	if driverName == "" {
-		f, err := driver.GetDefaultFactory(ctx, dockerCli.Client(), true)
-		if err != nil {
-			return err
-		}
-		if f == nil {
-			return errors.Errorf("no valid drivers found")
-		}
-		driverName = f.Name()
-	}
-
-	if driver.GetFactory(driverName, true) == nil {
-		return errors.Errorf("failed to find driver %q", in.driver)
-	}
-
-	txn, release, err := getStore(dockerCli)
+	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
 	}
@@ -83,6 +79,19 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		}
 	}

+	if !in.actionLeave && !in.actionAppend {
+		contexts, err := dockerCli.ContextStore().List()
+		if err != nil {
+			return err
+		}
+		for _, c := range contexts {
+			if c.Name == name {
+				logrus.Warnf("instance name %q already exists as context builder", name)
+				break
+			}
+		}
+	}
+
 	ng, err := txn.NodeGroupByName(name)
 	if err != nil {
 		if os.IsNotExist(errors.Cause(err)) {
@@ -90,29 +99,62 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 			logrus.Warnf("failed to find %q for append, creating a new instance instead", in.name)
 			}
 			if in.actionLeave {
-				return errors.Errorf("failed to find instance %q for leave", name)
+				return errors.Errorf("failed to find instance %q for leave", in.name)
 			}
 		} else {
 			return err
 		}
 	}

+	buildkitHost := os.Getenv("BUILDKIT_HOST")
+
+	driverName := in.driver
+	if driverName == "" {
+		if ng != nil {
+			driverName = ng.Driver
+		} else if len(args) == 0 && buildkitHost != "" {
+			driverName = "remote"
+		} else {
+			var arg string
+			if len(args) > 0 {
+				arg = args[0]
+			}
+			f, err := driver.GetDefaultFactory(ctx, arg, dockerCli.Client(), true)
+			if err != nil {
+				return err
+			}
+			if f == nil {
+				return errors.Errorf("no valid drivers found")
+			}
+			driverName = f.Name()
+		}
+	}
+
 	if ng != nil {
 		if in.nodeName == "" && !in.actionAppend {
-			return errors.Errorf("existing instance for %s but no append mode, specify --node to make changes for existing instances", name)
+			return errors.Errorf("existing instance for %q but no append mode, specify --node to make changes for existing instances", name)
 		}
+		if driverName != ng.Driver {
+			return errors.Errorf("existing instance for %q but has mismatched driver %q", name, ng.Driver)
+		}
+	}
+
+	if _, err := driver.GetFactory(driverName, true); err != nil {
+		return err
+	}
+
+	ngOriginal := ng
+	if ngOriginal != nil {
+		ngOriginal = ngOriginal.Copy()
 	}

 	if ng == nil {
 		ng = &store.NodeGroup{
 			Name: name,
+			Driver: driverName,
 		}
 	}

-	if ng.Driver == "" || in.driver != "" {
-		ng.Driver = driverName
-	}
-
 	var flags []string
 	if in.flags != "" {
 		flags, err = shlex.Split(in.flags)
@@ -122,31 +164,72 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	}

 	var ep string
+	var setEp bool
 	if in.actionLeave {
 		if err := ng.Leave(in.nodeName); err != nil {
 			return err
 		}
 	} else {
-		if len(args) > 0 {
+		switch {
+		case driverName == "kubernetes":
+			if len(args) > 0 {
+				logrus.Warnf("kubernetes driver does not support endpoint args %q", args[0])
+			}
+			// naming endpoint to make --append works
+			ep = (&url.URL{
+				Scheme: driverName,
+				Path:   "/" + in.name,
+				RawQuery: (&url.Values{
+					"deployment": {in.nodeName},
+					"kubeconfig": {os.Getenv("KUBECONFIG")},
+				}).Encode(),
+			}).String()
+			setEp = false
+		case driverName == "remote":
+			if len(args) > 0 {
+				ep = args[0]
+			} else if buildkitHost != "" {
+				ep = buildkitHost
+			} else {
+				return errors.Errorf("no remote endpoint provided")
+			}
+			ep, err = validateBuildkitEndpoint(ep)
+			if err != nil {
+				return err
+			}
+			setEp = true
+		case len(args) > 0:
 			ep, err = validateEndpoint(dockerCli, args[0])
 			if err != nil {
 				return err
 			}
-		} else {
+			setEp = true
+		default:
 			if dockerCli.CurrentContext() == "default" && dockerCli.DockerEndpoint().TLSData != nil {
 				return errors.Errorf("could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`")
 			}
-			ep, err = getCurrentEndpoint(dockerCli)
+			ep, err = dockerutil.GetCurrentEndpoint(dockerCli)
 			if err != nil {
 				return err
 			}
+			setEp = false
 		}
 	}

 	m, err := csvToMap(in.driverOpts)
 	if err != nil {
 		return err
 	}
-	if err := ng.Update(in.nodeName, ep, in.platform, len(args) > 0, in.actionAppend, flags, in.configFile, m); err != nil {
+
+	if in.configFile == "" {
+		// if buildkit config is not provided, check if the default one is
+		// available and use it
+		if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
+			logrus.Warnf("Using default BuildKit config in %s", f)
+			in.configFile = f
+		}
+	}
+
+	if err := ng.Update(in.nodeName, ep, in.platform, setEp, in.actionAppend, flags, in.configFile, m); err != nil {
 		return err
 	}
 	}
@@ -155,8 +238,41 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		return err
 	}

+	b, err := builder.New(dockerCli,
+		builder.WithName(ng.Name),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
+	if err != nil {
+		return err
+	}
+
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	defer cancel()
+
+	nodes, err := b.LoadNodes(timeoutCtx, true)
+	if err != nil {
+		return err
+	}
+
+	for _, node := range nodes {
+		if err := node.Err; err != nil {
+			err := errors.Errorf("failed to initialize builder %s (%s): %s", ng.Name, node.Name, err)
+			var err2 error
+			if ngOriginal == nil {
+				err2 = txn.Remove(ng.Name)
+			} else {
+				err2 = txn.Save(ngOriginal)
+			}
+			if err2 != nil {
+				logrus.Warnf("Could not rollback to previous state: %s", err2)
+			}
+			return err
+		}
+	}
+
 	if in.use && ep != "" {
-		current, err := getCurrentEndpoint(dockerCli)
+		current, err := dockerutil.GetCurrentEndpoint(dockerCli)
 		if err != nil {
 			return err
 		}
@@ -165,6 +281,12 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		}
 	}

+	if in.bootstrap {
+		if _, err = b.Boot(ctx); err != nil {
+			return err
+		}
+	}
+
 	fmt.Printf("%s\n", ng.Name)
 	return nil
 }
@@ -172,9 +294,12 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 func createCmd(dockerCli command.Cli) *cobra.Command {
 	var options createOptions

-	var drivers []string
-	for s := range driver.GetFactories() {
-		drivers = append(drivers, s)
+	var drivers bytes.Buffer
+	for _, d := range driver.GetFactories(true) {
+		if len(drivers.String()) > 0 {
+			drivers.WriteString(", ")
+		}
+		drivers.WriteString(fmt.Sprintf(`"%s"`, d.Name()))
 	}

 	cmd := &cobra.Command{
@@ -189,23 +314,28 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 	flags := cmd.Flags()

 	flags.StringVar(&options.name, "name", "", "Builder instance name")
-	flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %v)", drivers))
+	flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %s)", drivers.String()))
 	flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
 	flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
 	flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
 	flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
 	flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
+	flags.BoolVar(&options.bootstrap, "bootstrap", false, "Boot builder after creation")

 	flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
 	flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
 	flags.BoolVar(&options.use, "use", false, "Set the current builder instance")

-	_ = flags
+	// hide builder persistent flag for this command
+	cobrautil.HideInheritedFlags(cmd, "builder")

 	return cmd
 }

 func csvToMap(in []string) (map[string]string, error) {
+	if len(in) == 0 {
+		return nil, nil
+	}
 	m := make(map[string]string, len(in))
 	for _, s := range in {
 		csvReader := csv.NewReader(strings.NewReader(s))
@@ -223,3 +353,27 @@ func csvToMap(in []string) (map[string]string, error) {
 	}
 	return m, nil
 }
+
+// validateEndpoint validates that endpoint is either a context or a docker host
+func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
+	dem, err := dockerutil.GetDockerEndpoint(dockerCli, ep)
+	if err == nil && dem != nil {
+		if ep == "default" {
+			return dem.Host, nil
+		}
+		return ep, nil
+	}
+	h, err := dopts.ParseHost(true, ep)
+	if err != nil {
+		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
+	}
+	return h, nil
+}
+
+// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
+func validateBuildkitEndpoint(ep string) (string, error) {
+	if err := remoteutil.IsValidEndpoint(ep); err != nil {
+		return "", err
+	}
+	return ep, nil
+}
26 commands/create_test.go Normal file
@@ -0,0 +1,26 @@
+package commands
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestCsvToMap(t *testing.T) {
+	d := []string{
+		"\"tolerations=key=foo,value=bar;key=foo2,value=bar2\",replicas=1",
+		"namespace=default",
+	}
+	r, err := csvToMap(d)
+
+	require.NoError(t, err)
+
+	require.Contains(t, r, "tolerations")
+	require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
+
+	require.Contains(t, r, "replicas")
+	require.Equal(t, r["replicas"], "1")
+
+	require.Contains(t, r, "namespace")
+	require.Equal(t, r["namespace"], "default")
+}
206 commands/diskusage.go Normal file
@@ -0,0 +1,206 @@
+package commands
+
+import (
+	"fmt"
+	"io"
+	"os"
+	"strings"
+	"text/tabwriter"
+	"time"
+
+	"github.com/docker/buildx/builder"
+	"github.com/docker/cli/cli"
+	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/opts"
+	"github.com/docker/go-units"
+	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/util/appcontext"
+	"github.com/spf13/cobra"
+	"golang.org/x/sync/errgroup"
+)
+
+type duOptions struct {
+	builder string
+	filter  opts.FilterOpt
+	verbose bool
+}
+
+func runDiskUsage(dockerCli command.Cli, opts duOptions) error {
+	ctx := appcontext.Context()
+
+	pi, err := toBuildkitPruneInfo(opts.filter.Value())
+	if err != nil {
+		return err
+	}
+
+	b, err := builder.New(dockerCli, builder.WithName(opts.builder))
+	if err != nil {
+		return err
+	}
+
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+	for _, node := range nodes {
+		if node.Err != nil {
+			return node.Err
+		}
+	}
+
+	out := make([][]*client.UsageInfo, len(nodes))
+
+	eg, ctx := errgroup.WithContext(ctx)
+	for i, node := range nodes {
+		func(i int, node builder.Node) {
+			eg.Go(func() error {
+				if node.Driver != nil {
+					c, err := node.Driver.Client(ctx)
+					if err != nil {
+						return err
+					}
+					du, err := c.DiskUsage(ctx, client.WithFilter(pi.Filter))
+					if err != nil {
+						return err
+					}
+					out[i] = du
+					return nil
+				}
+				return nil
+			})
+		}(i, node)
+	}
+
+	if err := eg.Wait(); err != nil {
+		return err
+	}
+
+	tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
+	first := true
+	for _, du := range out {
+		if du == nil {
+			continue
+		}
+		if opts.verbose {
+			printVerbose(tw, du)
+		} else {
+			if first {
+				printTableHeader(tw)
+				first = false
+			}
+			for _, di := range du {
+				printTableRow(tw, di)
+			}
+
+			tw.Flush()
+		}
+	}
+
+	if opts.filter.Value().Len() == 0 {
+		printSummary(tw, out)
+	}
+
+	tw.Flush()
+	return nil
+}
+
+func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
+	options := duOptions{filter: opts.NewFilterOpt()}
+
+	cmd := &cobra.Command{
+		Use:   "du",
+		Short: "Disk usage",
+		Args:  cli.NoArgs,
+		RunE: func(cmd *cobra.Command, args []string) error {
+			options.builder = rootOpts.builder
+			return runDiskUsage(dockerCli, options)
+		},
+	}
+
+	flags := cmd.Flags()
+	flags.Var(&options.filter, "filter", "Provide filter values")
+	flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
+
+	return cmd
+}
+
+func printKV(w io.Writer, k string, v interface{}) {
+	fmt.Fprintf(w, "%s:\t%v\n", k, v)
+}
+
+func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
+	for _, di := range du {
+		printKV(tw, "ID", di.ID)
+		if len(di.Parents) != 0 {
+			printKV(tw, "Parent", strings.Join(di.Parents, ","))
+		}
+		printKV(tw, "Created at", di.CreatedAt)
+		printKV(tw, "Mutable", di.Mutable)
+		printKV(tw, "Reclaimable", !di.InUse)
+		printKV(tw, "Shared", di.Shared)
+		printKV(tw, "Size", units.HumanSize(float64(di.Size)))
+		if di.Description != "" {
+			printKV(tw, "Description", di.Description)
+		}
+		printKV(tw, "Usage count", di.UsageCount)
+		if di.LastUsedAt != nil {
+			printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
+		}
+		if di.RecordType != "" {
+			printKV(tw, "Type", di.RecordType)
+		}
+
+		fmt.Fprintf(tw, "\n")
+	}
+
+	tw.Flush()
+}
+
+func printTableHeader(tw *tabwriter.Writer) {
+	fmt.Fprintln(tw, "ID\tRECLAIMABLE\tSIZE\tLAST ACCESSED")
+}
+
+func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
+	id := di.ID
+	if di.Mutable {
+		id += "*"
+	}
+	size := units.HumanSize(float64(di.Size))
+	if di.Shared {
+		size += "*"
+	}
+	lastAccessed := ""
+	if di.LastUsedAt != nil {
+		lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
+	}
+	fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
+}
+
+func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
+	total := int64(0)
+	reclaimable := int64(0)
+	shared := int64(0)
+
+	for _, du := range dus {
+		for _, di := range du {
+			if di.Size > 0 {
+				total += di.Size
+				if !di.InUse {
+					reclaimable += di.Size
+				}
+			}
+			if di.Shared {
+				shared += di.Size
+			}
+		}
+	}
+
+	if shared > 0 {
+		fmt.Fprintf(tw, "Shared:\t%s\n", units.HumanSize(float64(shared)))
+		fmt.Fprintf(tw, "Private:\t%s\n", units.HumanSize(float64(total-shared)))
+	}
+
+	fmt.Fprintf(tw, "Reclaimable:\t%s\n", units.HumanSize(float64(reclaimable)))
+	fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
+	tw.Flush()
+}
@@ -1,12 +1,15 @@
 package commands

 import (
+	"context"
 	"encoding/json"
 	"fmt"
-	"io/ioutil"
+	"os"
 	"strings"

+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/imagetools"
+	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/distribution/reference"
 	"github.com/moby/buildkit/util/appcontext"
@@ -18,10 +21,12 @@ import (
 )

 type createOptions struct {
+	builder      string
 	files        []string
 	tags         []string
 	dryrun       bool
 	actionAppend bool
+	progress     string
 }

 func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
@@ -35,7 +40,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	fileArgs := make([]string, len(in.files))
 	for i, f := range in.files {
-		dt, err := ioutil.ReadFile(f)
+		dt, err := os.ReadFile(f)
 		if err != nil {
 			return err
 		}
@@ -75,35 +80,48 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	if len(repos) == 0 {
 		return errors.Errorf("no repositories specified, please set a reference in tag or source")
 	}
-	if len(repos) > 1 {
-		return errors.Errorf("multiple repositories currently not supported, found %v", repos)
-	}

-	var repo string
-	for r := range repos {
-		repo = r
+	var defaultRepo *string
+	if len(repos) == 1 {
+		for repo := range repos {
+			defaultRepo = &repo
+		}
 	}

 	for i, s := range srcs {
-		if s.Ref == nil && s.Desc.MediaType == "" && s.Desc.Digest != "" {
-			n, err := reference.ParseNormalizedNamed(repo)
+		if s.Ref == nil {
+			if defaultRepo == nil {
+				return errors.Errorf("multiple repositories specified, cannot infer repository for %q", args[i])
+			}
+			n, err := reference.ParseNormalizedNamed(*defaultRepo)
 			if err != nil {
 				return err
 			}
+			if s.Desc.MediaType == "" && s.Desc.Digest != "" {
 				r, err := reference.WithDigest(n, s.Desc.Digest)
 				if err != nil {
 					return err
 				}
 				srcs[i].Ref = r
 				sourceRefs = true
+			} else {
+				srcs[i].Ref = reference.TagNameOnly(n)
+			}
 		}
 	}

 	ctx := appcontext.Context()

-	r := imagetools.New(imagetools.Opt{
-		Auth: dockerCli.ConfigFile(),
-	})
+	b, err := builder.New(dockerCli, builder.WithName(in.builder))
+	if err != nil {
+		return err
+	}
+	imageopt, err := b.ImageOpt()
+	if err != nil {
+		return err
+	}
+
+	r := imagetools.New(imageopt)

 	if sourceRefs {
 		eg, ctx2 := errgroup.WithContext(ctx)
@@ -117,8 +135,15 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 				if err != nil {
 					return err
 				}
-				srcs[i].Ref = nil
+				if srcs[i].Desc.Digest == "" {
 					srcs[i].Desc = desc
+				} else {
+					var err error
+					srcs[i].Desc, err = mergeDesc(desc, srcs[i].Desc)
+					if err != nil {
+						return err
+					}
+				}
 				return nil
 			})
 		}(i)
@@ -128,12 +153,7 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 		}
 	}

-	descs := make([]ocispec.Descriptor, len(srcs))
-	for i := range descs {
-		descs[i] = srcs[i].Desc
-	}
-
-	dt, desc, err := r.Combine(ctx, repo, descs)
+	dt, desc, err := r.Combine(ctx, srcs)
 	if err != nil {
 		return err
 	}
@@ -144,31 +164,58 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
 	}

 	// new resolver cause need new auth
-	r = imagetools.New(imagetools.Opt{
-		Auth: dockerCli.ConfigFile(),
-	})
+	r = imagetools.New(imageopt)

-	for _, t := range tags {
-		if err := r.Push(ctx, t, desc, dt); err != nil {
-			return err
-		}
-		fmt.Println(t.String())
+	ctx2, cancel := context.WithCancel(context.TODO())
+	defer cancel()
+	printer, err := progress.NewPrinter(ctx2, os.Stderr, os.Stderr, in.progress)
+	if err != nil {
+		return err
 	}

+	eg, _ := errgroup.WithContext(ctx)
+	pw := progress.WithPrefix(printer, "internal", true)
+
+	for _, t := range tags {
+		t := t
+		eg.Go(func() error {
+			return progress.Wrap(fmt.Sprintf("pushing %s", t.String()), pw.Write, func(sub progress.SubLogger) error {
+				eg2, _ := errgroup.WithContext(ctx)
+				for _, s := range srcs {
+					if reference.Domain(s.Ref) == reference.Domain(t) && reference.Path(s.Ref) == reference.Path(t) {
+						continue
+					}
+					s := s
+					eg2.Go(func() error {
+						sub.Log(1, []byte(fmt.Sprintf("copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String())))
+						return r.Copy(ctx, s, t)
+					})
+				}
+
+				if err := eg2.Wait(); err != nil {
+					return err
+				}
+				sub.Log(1, []byte(fmt.Sprintf("pushing %s to %s\n", desc.Digest.String(), t.String())))
+				return r.Push(ctx, t, desc, dt)
+			})
+		})
+	}
+
+	err = eg.Wait()
+	err1 := printer.Wait()
+	if err == nil {
+		err = err1
+	}
+
+	return err
 }

-type src struct {
+func parseSources(in []string) ([]*imagetools.Source, error) {
 	Desc ocispec.Descriptor
|
out := make([]*imagetools.Source, len(in))
|
||||||
Ref reference.Named
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseSources(in []string) ([]*src, error) {
|
|
||||||
out := make([]*src, len(in))
|
|
||||||
for i, in := range in {
|
for i, in := range in {
|
||||||
s, err := parseSource(in)
|
s, err := parseSource(in)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, errors.Wrapf(err, "failed to parse source %q, valid sources are digests, refereces and descriptors", in)
|
return nil, errors.Wrapf(err, "failed to parse source %q, valid sources are digests, references and descriptors", in)
|
||||||
}
|
}
|
||||||
out[i] = s
|
out[i] = s
|
||||||
}
|
}
|
||||||
@@ -187,11 +234,11 @@ func parseRefs(in []string) ([]reference.Named, error) {
|
|||||||
return refs, nil
|
return refs, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func parseSource(in string) (*src, error) {
|
func parseSource(in string) (*imagetools.Source, error) {
|
||||||
// source can be a digest, reference or a descriptor JSON
|
// source can be a digest, reference or a descriptor JSON
|
||||||
dgst, err := digest.Parse(in)
|
dgst, err := digest.Parse(in)
|
||||||
if err == nil {
|
if err == nil {
|
||||||
return &src{
|
return &imagetools.Source{
|
||||||
Desc: ocispec.Descriptor{
|
Desc: ocispec.Descriptor{
|
||||||
Digest: dgst,
|
Digest: dgst,
|
||||||
},
|
},
|
||||||
@@ -202,39 +249,54 @@ func parseSource(in string) (*src, error) {
|
|||||||
|
|
||||||
ref, err := reference.ParseNormalizedNamed(in)
|
ref, err := reference.ParseNormalizedNamed(in)
|
||||||
if err == nil {
|
if err == nil {
|
||||||
return &src{
|
return &imagetools.Source{
|
||||||
Ref: ref,
|
Ref: ref,
|
||||||
}, nil
|
}, nil
|
||||||
} else if !strings.HasPrefix(in, "{") {
|
} else if !strings.HasPrefix(in, "{") {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
var s src
|
var s imagetools.Source
|
||||||
if err := json.Unmarshal([]byte(in), &s.Desc); err != nil {
|
if err := json.Unmarshal([]byte(in), &s.Desc); err != nil {
|
||||||
return nil, errors.WithStack(err)
|
return nil, errors.WithStack(err)
|
||||||
}
|
}
|
||||||
return &s, nil
|
return &s, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func createCmd(dockerCli command.Cli) *cobra.Command {
|
func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
|
||||||
var options createOptions
|
var options createOptions
|
||||||
|
|
||||||
cmd := &cobra.Command{
|
cmd := &cobra.Command{
|
||||||
Use: "create [OPTIONS] [SOURCE] [SOURCE...]",
|
Use: "create [OPTIONS] [SOURCE] [SOURCE...]",
|
||||||
Short: "Create a new image based on source images",
|
Short: "Create a new image based on source images",
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
RunE: func(cmd *cobra.Command, args []string) error {
|
||||||
|
options.builder = *opts.Builder
|
||||||
return runCreate(dockerCli, options, args)
|
return runCreate(dockerCli, options, args)
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
flags := cmd.Flags()
|
flags := cmd.Flags()
|
||||||
|
|
||||||
flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Read source descriptor from file")
|
flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Read source descriptor from file")
|
||||||
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
|
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
|
||||||
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
|
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
|
||||||
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
|
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
|
||||||
|
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
|
||||||
_ = flags
|
|
||||||
|
|
||||||
return cmd
|
return cmd
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func mergeDesc(d1, d2 ocispec.Descriptor) (ocispec.Descriptor, error) {
|
||||||
|
if d2.Size != 0 && d1.Size != d2.Size {
|
||||||
|
return ocispec.Descriptor{}, errors.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
|
||||||
|
}
|
||||||
|
if d2.MediaType != "" {
|
||||||
|
d1.MediaType = d2.MediaType
|
||||||
|
}
|
||||||
|
if len(d2.Annotations) != 0 {
|
||||||
|
d1.Annotations = d2.Annotations // no merge so support removes
|
||||||
|
}
|
||||||
|
if d2.Platform != nil {
|
||||||
|
d1.Platform = d2.Platform // missing items filled in later from image config
|
||||||
|
}
|
||||||
|
return d1, nil
|
||||||
|
}
|
||||||
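The reworked create path merges a user-supplied descriptor with the one resolved from the registry via the new `mergeDesc` helper: sizes must agree when both are set, and non-empty fields of the override win, with annotations replaced wholesale so removals survive. A minimal runnable sketch of that merge rule, using a simplified local `Descriptor` type instead of `ocispec.Descriptor` (the real function also carries the platform field):

```go
package main

import (
	"fmt"
)

// Descriptor is a simplified stand-in for ocispec.Descriptor, kept local
// so the sketch runs without the image-spec module.
type Descriptor struct {
	MediaType   string
	Digest      string
	Size        int64
	Annotations map[string]string
}

// mergeDesc mirrors the merge rules from the diff: sizes must agree when
// both are set, and non-empty fields of d2 override those of d1.
func mergeDesc(d1, d2 Descriptor) (Descriptor, error) {
	if d2.Size != 0 && d1.Size != d2.Size {
		return Descriptor{}, fmt.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
	}
	if d2.MediaType != "" {
		d1.MediaType = d2.MediaType
	}
	if len(d2.Annotations) != 0 {
		d1.Annotations = d2.Annotations // replaced wholesale so removals survive
	}
	return d1, nil
}

func main() {
	base := Descriptor{MediaType: "application/vnd.oci.image.manifest.v1+json", Digest: "sha256:abc", Size: 1024}
	override := Descriptor{MediaType: "application/vnd.docker.distribution.manifest.v2+json"}
	out, err := mergeDesc(base, override)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.MediaType, out.Size)
}
```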
@@ -1,68 +1,65 @@
 package commands
 
 import (
-	"fmt"
-	"os"
-
-	"github.com/containerd/containerd/images"
+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/imagetools"
+	"github.com/docker/cli-docs-tool/annotation"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
-	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
 
 type inspectOptions struct {
+	builder string
+	format  string
 	raw bool
 }
 
 func runInspect(dockerCli command.Cli, in inspectOptions, name string) error {
 	ctx := appcontext.Context()
 
-	r := imagetools.New(imagetools.Opt{
-		Auth: dockerCli.ConfigFile(),
-	})
+	if in.format != "" && in.raw {
+		return errors.Errorf("format and raw cannot be used together")
+	}
 
-	dt, desc, err := r.Get(ctx, name)
+	b, err := builder.New(dockerCli, builder.WithName(in.builder))
+	if err != nil {
+		return err
+	}
+	imageopt, err := b.ImageOpt()
 	if err != nil {
 		return err
 	}
 
-	if in.raw {
-		fmt.Printf("%s\n", dt)
-		return nil
+	p, err := imagetools.NewPrinter(ctx, imageopt, name, in.format)
+	if err != nil {
+		return err
 	}
 
-	switch desc.MediaType {
-	// case images.MediaTypeDockerSchema2Manifest, specs.MediaTypeImageManifest:
-	// TODO: handle distribution manifest and schema1
-	case images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
-		imagetools.PrintManifestList(dt, desc, name, os.Stdout)
-	default:
-		fmt.Printf("%s\n", dt)
-	}
-
-	return nil
+	return p.Print(in.raw, dockerCli.Out())
 }
 
-func inspectCmd(dockerCli command.Cli) *cobra.Command {
+func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 	var options inspectOptions
 
 	cmd := &cobra.Command{
 		Use:   "inspect [OPTIONS] NAME",
-		Short: "Show details of image in the registry",
+		Short: "Show details of an image in the registry",
 		Args:  cli.ExactArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
+			options.builder = *rootOpts.Builder
 			return runInspect(dockerCli, options, args[0])
 		},
 	}
 
 	flags := cmd.Flags()
 
-	flags.BoolVar(&options.raw, "raw", false, "Show original JSON manifest")
-
-	_ = flags
+	flags.StringVar(&options.format, "format", "", "Format the output using the given Go template")
+	flags.SetAnnotation("format", annotation.DefaultValue, []string{`"{{.Manifest}}"`})
+
+	flags.BoolVar(&options.raw, "raw", false, "Show original, unformatted JSON manifest")
 
 	return cmd
 }
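The new `--format` flag hands a user-supplied Go template to the imagetools printer. A small runnable sketch of the underlying `text/template` mechanism, with a hypothetical `manifestInfo` type standing in for whatever data the real printer exposes:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// manifestInfo is a hypothetical stand-in for the fields a printer template
// might expose; the real imagetools printer defines its own type.
type manifestInfo struct {
	Name      string
	MediaType string
	Digest    string
}

// render applies a user-supplied Go template, the same mechanism the
// --format flag relies on.
func render(format string, m manifestInfo) (string, error) {
	tmpl, err := template.New("format").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, m); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	m := manifestInfo{Name: "docker.io/library/alpine:latest", MediaType: "application/vnd.oci.image.index.v1+json", Digest: "sha256:abc"}
	out, err := render("{{.Name}} {{.Digest}}", m)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // docker.io/library/alpine:latest sha256:abc
}
```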
@@ -5,15 +5,19 @@ import (
 	"github.com/spf13/cobra"
 )
 
-func RootCmd(dockerCli command.Cli) *cobra.Command {
+type RootOptions struct {
+	Builder *string
+}
+
+func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "imagetools",
 		Short: "Commands to work on images in registry",
 	}
 
 	cmd.AddCommand(
-		inspectCmd(dockerCli),
-		createCmd(dockerCli),
+		createCmd(dockerCli, opts),
+		inspectCmd(dockerCli, opts),
 	)
 
 	return cmd
@@ -8,116 +8,87 @@ import (
 	"text/tabwriter"
 	"time"
 
-	"github.com/docker/buildx/build"
-	"github.com/docker/buildx/driver"
-	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/util/platformutil"
-	"github.com/docker/buildx/util/progress"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
-	specs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/spf13/cobra"
-	"golang.org/x/sync/errgroup"
 )
 
 type inspectOptions struct {
 	bootstrap bool
+	builder   string
 }
 
-type dinfo struct {
-	di        *build.DriverInfo
-	info      *driver.Info
-	platforms []specs.Platform
-	err       error
-}
-
-type nginfo struct {
-	ng      *store.NodeGroup
-	drivers []dinfo
-	err     error
-}
-
-func runInspect(dockerCli command.Cli, in inspectOptions, args []string) error {
+func runInspect(dockerCli command.Cli, in inspectOptions) error {
 	ctx := appcontext.Context()
 
-	txn, release, err := getStore(dockerCli)
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	defer release()
 
-	var ng *store.NodeGroup
-
-	if len(args) > 0 {
-		ng, err = getNodeGroup(txn, dockerCli, args[0])
-		if err != nil {
-			return err
-		}
-	} else {
-		ng, err = getCurrentInstance(txn, dockerCli)
-		if err != nil {
-			return err
-		}
-	}
-
-	if ng == nil {
-		ng = &store.NodeGroup{
-			Name: "default",
-			Nodes: []store.Node{{
-				Name:     "default",
-				Endpoint: "default",
-			}},
-		}
-	}
-
-	ngi := &nginfo{ng: ng}
-
-	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()
 
-	err = loadNodeGroupData(timeoutCtx, dockerCli, ngi)
+	nodes, err := b.LoadNodes(timeoutCtx, true)
 
 	if in.bootstrap {
 		var ok bool
-		ok, err = boot(ctx, ngi)
+		ok, err = b.Boot(ctx)
 		if err != nil {
 			return err
 		}
 		if ok {
-			ngi = &nginfo{ng: ng}
-			err = loadNodeGroupData(ctx, dockerCli, ngi)
+			nodes, err = b.LoadNodes(timeoutCtx, true)
 		}
 	}
 
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
-	fmt.Fprintf(w, "Name:\t%s\n", ngi.ng.Name)
-	fmt.Fprintf(w, "Driver:\t%s\n", ngi.ng.Driver)
+	fmt.Fprintf(w, "Name:\t%s\n", b.Name)
+	fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
+	if !b.NodeGroup.LastActivity.IsZero() {
+		fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
+	}
 
 	if err != nil {
 		fmt.Fprintf(w, "Error:\t%s\n", err.Error())
-	} else if ngi.err != nil {
-		fmt.Fprintf(w, "Error:\t%s\n", ngi.err.Error())
+	} else if b.Err() != nil {
+		fmt.Fprintf(w, "Error:\t%s\n", b.Err().Error())
 	}
 	if err == nil {
 		fmt.Fprintln(w, "")
 		fmt.Fprintln(w, "Nodes:")
 
-		for i, n := range ngi.ng.Nodes {
+		for i, n := range nodes {
 			if i != 0 {
 				fmt.Fprintln(w, "")
 			}
 			fmt.Fprintf(w, "Name:\t%s\n", n.Name)
 			fmt.Fprintf(w, "Endpoint:\t%s\n", n.Endpoint)
-			if err := ngi.drivers[i].di.Err; err != nil {
-				fmt.Fprintf(w, "Error:\t%s\n", err.Error())
-			} else if err := ngi.drivers[i].err; err != nil {
+
+			var driverOpts []string
+			for k, v := range n.DriverOpts {
+				driverOpts = append(driverOpts, fmt.Sprintf("%s=%q", k, v))
+			}
+			if len(driverOpts) > 0 {
+				fmt.Fprintf(w, "Driver Options:\t%s\n", strings.Join(driverOpts, " "))
+			}
+
+			if err := n.Err; err != nil {
 				fmt.Fprintf(w, "Error:\t%s\n", err.Error())
 			} else {
-				fmt.Fprintf(w, "Status:\t%s\n", ngi.drivers[i].info.Status)
+				fmt.Fprintf(w, "Status:\t%s\n", nodes[i].DriverInfo.Status)
 				if len(n.Flags) > 0 {
 					fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
 				}
-				fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.Format(platformutil.Dedupe(append(n.Platforms, ngi.drivers[i].platforms...))), ", "))
+				if nodes[i].Version != "" {
+					fmt.Fprintf(w, "Buildkit:\t%s\n", nodes[i].Version)
+				}
+				fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
 			}
 		}
 	}
@@ -127,7 +98,7 @@ func runInspect(dockerCli command.Cli, in inspectOptions, args []string) error {
 	return nil
 }
 
-func inspectCmd(dockerCli command.Cli) *cobra.Command {
+func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options inspectOptions
 
 	cmd := &cobra.Command{
@@ -135,52 +106,16 @@ func inspectCmd(dockerCli command.Cli) *cobra.Command {
 		Short: "Inspect current builder instance",
 		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runInspect(dockerCli, options, args)
+			options.builder = rootOpts.builder
+			if len(args) > 0 {
+				options.builder = args[0]
+			}
+			return runInspect(dockerCli, options)
 		},
 	}
 
 	flags := cmd.Flags()
 
 	flags.BoolVar(&options.bootstrap, "bootstrap", false, "Ensure builder has booted before inspecting")
 
-	_ = flags
-
 	return cmd
 }
-
-func boot(ctx context.Context, ngi *nginfo) (bool, error) {
-	toBoot := make([]int, 0, len(ngi.drivers))
-	for i, d := range ngi.drivers {
-		if d.err != nil || d.di.Err != nil || d.di.Driver == nil || d.info == nil {
-			continue
-		}
-		if d.info.Status != driver.Running {
-			toBoot = append(toBoot, i)
-		}
-	}
-	if len(toBoot) == 0 {
-		return false, nil
-	}
-
-	pw := progress.NewPrinter(context.TODO(), os.Stderr, "auto")
-
-	mw := progress.NewMultiWriter(pw)
-
-	eg, _ := errgroup.WithContext(ctx)
-	for _, idx := range toBoot {
-		func(idx int) {
-			eg.Go(func() error {
-				pw := mw.WithPrefix(ngi.ng.Nodes[idx].Name, len(toBoot) > 1)
-				_, err := driver.Boot(ctx, ngi.drivers[idx].di.Driver, pw)
-				if err != nil {
-					ngi.drivers[idx].err = err
-				}
-				close(pw.Status())
-				<-pw.Done()
-				return nil
-			})
-		}(idx)
-	}
-
-	return true, eg.Wait()
-}
@@ -3,6 +3,7 @@ package commands
 import (
 	"os"
 
+	"github.com/docker/buildx/util/cobrautil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -48,5 +49,8 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
 		Hidden: true,
 	}
 
+	// hide builder persistent flag for this command
+	cobrautil.HideInheritedFlags(cmd, "builder")
+
 	return cmd
 }
135 commands/ls.go
@@ -4,12 +4,13 @@ import (
 	"context"
 	"fmt"
 	"io"
-	"os"
 	"strings"
 	"text/tabwriter"
 	"time"
 
-	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/builder"
+	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/cobrautil"
 	"github.com/docker/buildx/util/platformutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -24,51 +25,30 @@ type lsOptions struct {
 func runLs(dockerCli command.Cli, in lsOptions) error {
 	ctx := appcontext.Context()
 
-	txn, release, err := getStore(dockerCli)
+	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
 	}
 	defer release()
 
-	ctx, cancel := context.WithTimeout(ctx, 7*time.Second)
+	current, err := storeutil.GetCurrentInstance(txn, dockerCli)
+	if err != nil {
+		return err
+	}
+
+	builders, err := builder.GetBuilders(dockerCli, txn)
+	if err != nil {
+		return err
+	}
+
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
 	defer cancel()
 
-	ll, err := txn.List()
-	if err != nil {
-		return err
-	}
-
-	builders := make([]*nginfo, len(ll))
-	for i, ng := range ll {
-		builders[i] = &nginfo{ng: ng}
-	}
-
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return err
-	}
-	ctxbuilders := make([]*nginfo, len(list))
-	for i, l := range list {
-		ctxbuilders[i] = &nginfo{ng: &store.NodeGroup{
-			Name: l.Name,
-			Nodes: []store.Node{{
-				Name:     l.Name,
-				Endpoint: l.Name,
-			}},
-		}}
-	}
-
-	builders = append(builders, ctxbuilders...)
-
-	eg, _ := errgroup.WithContext(ctx)
-
+	eg, _ := errgroup.WithContext(timeoutCtx)
 	for _, b := range builders {
-		func(b *nginfo) {
+		func(b *builder.Builder) {
 			eg.Go(func() error {
-				err = loadNodeGroupData(ctx, dockerCli, b)
-				if b.err == nil && err != nil {
-					b.err = err
-				}
+				_, _ = b.LoadNodes(timeoutCtx, true)
 				return nil
 			})
 		}(b)
@@ -78,62 +58,62 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
 		return err
 	}
 
-	currentName := "default"
-	current, err := getCurrentInstance(txn, dockerCli)
-	if err != nil {
-		return err
-	}
-	if current != nil {
-		currentName = current.Name
-		if current.Name == "default" {
-			currentName = current.Nodes[0].Endpoint
-		}
-	}
-
-	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
-	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tPLATFORMS\n")
-
-	currentSet := false
+	w := tabwriter.NewWriter(dockerCli.Out(), 0, 0, 1, ' ', 0)
+	fmt.Fprintf(w, "NAME/NODE\tDRIVER/ENDPOINT\tSTATUS\tBUILDKIT\tPLATFORMS\n")
+
+	printErr := false
 	for _, b := range builders {
-		if !currentSet && b.ng.Name == currentName {
-			b.ng.Name += " *"
-			currentSet = true
+		if current.Name == b.Name {
+			b.Name += " *"
+		}
+		if ok := printBuilder(w, b); !ok {
+			printErr = true
 		}
-		printngi(w, b)
 	}
 
 	w.Flush()
 
+	if printErr {
+		_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
+		for _, b := range builders {
+			if b.Err() != nil {
+				_, _ = fmt.Fprintf(dockerCli.Err(), "Cannot load builder %s: %s\n", b.Name, strings.TrimSpace(b.Err().Error()))
+			} else {
+				for _, d := range b.Nodes() {
+					if d.Err != nil {
+						_, _ = fmt.Fprintf(dockerCli.Err(), "Failed to get status for %s (%s): %s\n", b.Name, d.Name, strings.TrimSpace(d.Err.Error()))
+					}
+				}
+			}
+		}
+	}
+
 	return nil
 }
 
-func printngi(w io.Writer, ngi *nginfo) {
+func printBuilder(w io.Writer, b *builder.Builder) (ok bool) {
+	ok = true
 	var err string
-	if ngi.err != nil {
-		err = ngi.err.Error()
-	}
-	fmt.Fprintf(w, "%s\t%s\t%s\t\n", ngi.ng.Name, ngi.ng.Driver, err)
-	if ngi.err == nil {
-		for idx, n := range ngi.ng.Nodes {
-			d := ngi.drivers[idx]
-			var err string
-			if d.err != nil {
-				err = d.err.Error()
-			} else if d.di.Err != nil {
-				err = d.di.Err.Error()
-			}
+	if b.Err() != nil {
+		ok = false
+		err = "error"
+	}
+	fmt.Fprintf(w, "%s\t%s\t%s\t\t\n", b.Name, b.Driver, err)
+	if b.Err() == nil {
+		for _, n := range b.Nodes() {
 			var status string
-			if d.info != nil {
-				status = d.info.Status.String()
+			if n.DriverInfo != nil {
+				status = n.DriverInfo.Status.String()
 			}
-			p := append(n.Platforms, d.platforms...)
-			if err != "" {
-				fmt.Fprintf(w, "  %s\t%s\t%s\n", n.Name, n.Endpoint, err)
+			if n.Err != nil {
+				ok = false
+				fmt.Fprintf(w, "  %s\t%s\t%s\t\t\n", n.Name, n.Endpoint, "error")
 			} else {
-				fmt.Fprintf(w, "  %s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, strings.Join(platformutil.Format(p), ", "))
+				fmt.Fprintf(w, "  %s\t%s\t%s\t%s\t%s\n", n.Name, n.Endpoint, status, n.Version, strings.Join(platformutil.FormatInGroups(n.Node.Platforms, n.Platforms), ", "))
 			}
 		}
 	}
+	return
 }
 
 func lsCmd(dockerCli command.Cli) *cobra.Command {
@@ -148,5 +128,8 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
 		},
 	}
 
+	// hide builder persistent flag for this command
+	cobrautil.HideInheritedFlags(cmd, "builder")
+
 	return cmd
 }
48 commands/print.go (new file)
@@ -0,0 +1,48 @@
+package commands
+
+import (
+	"fmt"
+	"io"
+	"log"
+	"os"
+
+	"github.com/docker/buildx/build"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/moby/buildkit/frontend/subrequests"
+	"github.com/moby/buildkit/frontend/subrequests/outline"
+	"github.com/moby/buildkit/frontend/subrequests/targets"
+)
+
+func printResult(f *build.PrintFunc, res map[string]string) error {
+	switch f.Name {
+	case "outline":
+		return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
+	case "targets":
+		return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
+	case "subrequests.describe":
+		return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
+	default:
+		if dt, ok := res["result.txt"]; ok {
+			fmt.Print(dt)
+		} else {
+			log.Printf("%s %+v", f, res)
+		}
+	}
+	return nil
+}
+
+type printFunc func([]byte, io.Writer) error
+
+func printValue(printer printFunc, version string, format string, res map[string]string) error {
+	if format == "json" {
+		fmt.Fprintln(os.Stdout, res["result.json"])
+		return nil
+	}
+
+	if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
+		// structure is too new and we don't know how to print it
+		fmt.Fprint(os.Stdout, res["result.txt"])
+		return nil
+	}
+	return printer([]byte(res["result.json"]), os.Stdout)
+}
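The `printValue` helper in the new file gates on the subrequest payload version: an explicit `json` format passes the raw JSON through, a payload newer than what this client understands falls back to the server's pre-rendered `result.txt`, and otherwise the structured result is decoded by a format-aware printer. A self-contained sketch of that gating, with a simplified dotted-version compare standing in for `versions.LessThan` from the Docker API package:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// lessThan is a minimal numeric dotted-version compare standing in for
// versions.LessThan; missing components are treated as zero.
func lessThan(v, other string) bool {
	va, vb := strings.Split(v, "."), strings.Split(other, ".")
	for i := 0; i < len(va) || i < len(vb); i++ {
		a, b := 0, 0
		if i < len(va) {
			a, _ = strconv.Atoi(va[i])
		}
		if i < len(vb) {
			b, _ = strconv.Atoi(vb[i])
		}
		if a != b {
			return a < b
		}
	}
	return false
}

// printValue mirrors the gating in commands/print.go: JSON wins if asked
// for, a too-new payload falls back to the pre-rendered text, and otherwise
// the structured result is decoded (modelled here as a "decoded:" prefix).
func printValue(supported, format string, res map[string]string) string {
	if format == "json" {
		return res["result.json"]
	}
	if res["version"] != "" && lessThan(supported, res["version"]) && res["result.txt"] != "" {
		// structure is too new and we don't know how to print it
		return res["result.txt"]
	}
	return "decoded:" + res["result.json"]
}

func main() {
	res := map[string]string{"version": "2.0", "result.txt": "plain", "result.json": `{"a":1}`}
	fmt.Println(printValue("1.0", "", res)) // plain
	fmt.Println(printValue("2.0", "", res)) // decoded:{"a":1}
}
```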
205 commands/prune.go (new file)
@@ -0,0 +1,205 @@
+package commands
+
+import (
+	"fmt"
+	"os"
+	"strings"
+	"text/tabwriter"
+	"time"
+
+	"github.com/docker/buildx/builder"
+	"github.com/docker/cli/cli"
+	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/opts"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/go-units"
+	"github.com/moby/buildkit/client"
+	"github.com/moby/buildkit/util/appcontext"
+	"github.com/pkg/errors"
+	"github.com/spf13/cobra"
+	"golang.org/x/sync/errgroup"
+)
+
+type pruneOptions struct {
+	builder     string
+	all         bool
+	filter      opts.FilterOpt
+	keepStorage opts.MemBytes
+	force       bool
+	verbose     bool
+}
+
+const (
+	normalWarning   = `WARNING! This will remove all dangling build cache. Are you sure you want to continue?`
+	allCacheWarning = `WARNING! This will remove all build cache. Are you sure you want to continue?`
+)
+
+func runPrune(dockerCli command.Cli, opts pruneOptions) error {
+	ctx := appcontext.Context()
+
+	pruneFilters := opts.filter.Value()
+	pruneFilters = command.PruneFilters(dockerCli, pruneFilters)
+
+	pi, err := toBuildkitPruneInfo(pruneFilters)
+	if err != nil {
+		return err
+	}
+
+	warning := normalWarning
+	if opts.all {
+		warning = allCacheWarning
+	}
+
+	if !opts.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), warning) {
+		return nil
+	}
+
+	b, err := builder.New(dockerCli, builder.WithName(opts.builder))
+	if err != nil {
+		return err
+	}
+
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+	for _, node := range nodes {
+		if node.Err != nil {
+			return node.Err
+		}
+	}
+
+	ch := make(chan client.UsageInfo)
+	printed := make(chan struct{})
+
+	tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
+	first := true
+	total := int64(0)
+
+	go func() {
+		defer close(printed)
+		for du := range ch {
+			total += du.Size
+			if opts.verbose {
+				printVerbose(tw, []*client.UsageInfo{&du})
+			} else {
+				if first {
+					printTableHeader(tw)
+					first = false
+				}
+				printTableRow(tw, &du)
+				tw.Flush()
+			}
+		}
+	}()
+
+	eg, ctx := errgroup.WithContext(ctx)
+	for _, node := range nodes {
+		func(node builder.Node) {
+			eg.Go(func() error {
+				if node.Driver != nil {
+					c, err := node.Driver.Client(ctx)
+					if err != nil {
+						return err
+					}
+					popts := []client.PruneOption{
+						client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
+						client.WithFilter(pi.Filter),
+					}
+					if opts.all {
+						popts = append(popts, client.PruneAll)
+					}
+					return c.Prune(ctx, ch, popts...)
+				}
+				return nil
+			})
+		}(node)
+	}
+
+	if err := eg.Wait(); err != nil {
+		return err
+	}
+	close(ch)
+	<-printed
+
+	tw = tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
+	fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
+	tw.Flush()
+	return nil
+}
+
+func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
+	options := pruneOptions{filter: opts.NewFilterOpt()}
+
+	cmd := &cobra.Command{
+		Use:   "prune",
+		Short: "Remove build cache",
+		Args:  cli.NoArgs,
+		RunE: func(cmd *cobra.Command, args []string) error {
+			options.builder = rootOpts.builder
||||||
|
return runPrune(dockerCli, options)
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
flags := cmd.Flags()
|
||||||
|
flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
|
||||||
|
flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
|
||||||
|
flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
|
||||||
|
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
|
||||||
|
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
|
||||||
|
|
||||||
|
return cmd
|
||||||
|
}
|
||||||
|
|
||||||
|
func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
|
||||||
|
var until time.Duration
|
||||||
|
untilValues := f.Get("until") // canonical
|
||||||
|
unusedForValues := f.Get("unused-for") // deprecated synonym for "until" filter
|
||||||
|
|
||||||
|
if len(untilValues) > 0 && len(unusedForValues) > 0 {
|
||||||
|
return nil, errors.Errorf("conflicting filters %q and %q", "until", "unused-for")
|
||||||
|
}
|
||||||
|
untilKey := "until"
|
||||||
|
if len(unusedForValues) > 0 {
|
||||||
|
untilKey = "unused-for"
|
||||||
|
}
|
||||||
|
untilValues = append(untilValues, unusedForValues...)
|
||||||
|
|
||||||
|
switch len(untilValues) {
|
||||||
|
case 0:
|
||||||
|
// nothing to do
|
||||||
|
case 1:
|
||||||
|
var err error
|
||||||
|
until, err = time.ParseDuration(untilValues[0])
|
||||||
|
if err != nil {
|
||||||
|
return nil, errors.Wrapf(err, "%q filter expects a duration (e.g., '24h')", untilKey)
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
return nil, errors.Errorf("filters expect only one value")
|
||||||
|
}
|
||||||
|
|
||||||
|
filters := make([]string, 0, f.Len())
|
||||||
|
for _, filterKey := range f.Keys() {
|
||||||
|
if filterKey == untilKey {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
values := f.Get(filterKey)
|
||||||
|
switch len(values) {
|
||||||
|
case 0:
|
||||||
|
filters = append(filters, filterKey)
|
||||||
|
case 1:
|
||||||
|
if filterKey == "id" {
|
||||||
|
filters = append(filters, filterKey+"~="+values[0])
|
||||||
|
} else {
|
||||||
|
filters = append(filters, filterKey+"=="+values[0])
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
return nil, errors.Errorf("filters expect only one value")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return &client.PruneInfo{
|
||||||
|
KeepDuration: until,
|
||||||
|
Filter: []string{strings.Join(filters, ",")},
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
136 commands/rm.go
@@ -2,54 +2,80 @@ package commands
 
 import (
 	"context"
+	"fmt"
+	"time"
+
+	"github.com/docker/buildx/builder"
 	"github.com/docker/buildx/store"
+	"github.com/docker/buildx/store/storeutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
+	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
+	"golang.org/x/sync/errgroup"
 )
 
 type rmOptions struct {
+	builder     string
+	keepState   bool
+	keepDaemon  bool
+	allInactive bool
+	force       bool
 }
 
-func runRm(dockerCli command.Cli, in rmOptions, args []string) error {
+const (
+	rmInactiveWarning = `WARNING! This will remove all builders that are not in running state. Are you sure you want to continue?`
+)
+
+func runRm(dockerCli command.Cli, in rmOptions) error {
 	ctx := appcontext.Context()
 
-	txn, release, err := getStore(dockerCli)
+	if in.allInactive && !in.force && !command.PromptForConfirmation(dockerCli.In(), dockerCli.Out(), rmInactiveWarning) {
+		return nil
+	}
+
+	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
 	}
 	defer release()
 
-	if len(args) > 0 {
-		ng, err := getNodeGroup(txn, dockerCli, args[0])
-		if err != nil {
-			return err
-		}
-		err1 := stop(ctx, dockerCli, ng, true)
-		if err := txn.Remove(ng.Name); err != nil {
-			return err
-		}
-		return err1
-	}
-
-	ng, err := getCurrentInstance(txn, dockerCli)
+	if in.allInactive {
+		return rmAllInactive(ctx, txn, dockerCli, in)
+	}
+
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithStore(txn),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	if ng != nil {
-		err1 := stop(ctx, dockerCli, ng, true)
-		if err := txn.Remove(ng.Name); err != nil {
-			return err
-		}
+
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+	if cb := b.ContextName(); cb != "" {
+		return errors.Errorf("context builder cannot be removed, run `docker context rm %s` to remove this context", cb)
+	}
+
+	err1 := rm(ctx, nodes, in)
+	if err := txn.Remove(b.Name); err != nil {
+		return err
+	}
+	if err1 != nil {
 		return err1
 	}
+
+	_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
 	return nil
 }
 
-func rmCmd(dockerCli command.Cli) *cobra.Command {
+func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options rmOptions
 
 	cmd := &cobra.Command{
@@ -57,55 +83,79 @@ func rmCmd(dockerCli command.Cli) *cobra.Command {
 		Short: "Remove a builder instance",
 		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runRm(dockerCli, options, args)
+			options.builder = rootOpts.builder
+			if len(args) > 0 {
+				if options.allInactive {
+					return errors.New("cannot specify builder name when --all-inactive is set")
+				}
+				options.builder = args[0]
+			}
+			return runRm(dockerCli, options)
 		},
 	}
 
+	flags := cmd.Flags()
+	flags.BoolVar(&options.keepState, "keep-state", false, "Keep BuildKit state")
+	flags.BoolVar(&options.keepDaemon, "keep-daemon", false, "Keep the buildkitd daemon running")
+	flags.BoolVar(&options.allInactive, "all-inactive", false, "Remove all inactive builders")
+	flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
+
 	return cmd
 }
 
-func stop(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, rm bool) error {
-	dis, err := driversForNodeGroup(ctx, dockerCli, ng)
-	if err != nil {
-		return err
-	}
-	for _, di := range dis {
-		if di.Driver != nil {
-			if err := di.Driver.Stop(ctx, true); err != nil {
-				return err
-			}
-			if rm {
-				if err := di.Driver.Rm(ctx, true); err != nil {
-					return err
-				}
-			}
-		}
-		if di.Err != nil {
-			err = di.Err
-		}
-	}
-	return err
-}
-
-func stopCurrent(ctx context.Context, dockerCli command.Cli, rm bool) error {
-	dis, err := getDefaultDrivers(ctx, dockerCli)
-	if err != nil {
-		return err
-	}
-	for _, di := range dis {
-		if di.Driver != nil {
-			if err := di.Driver.Stop(ctx, true); err != nil {
-				return err
-			}
-			if rm {
-				if err := di.Driver.Rm(ctx, true); err != nil {
-					return err
-				}
-			}
-		}
-		if di.Err != nil {
-			err = di.Err
-		}
-	}
-	return err
-}
+func rm(ctx context.Context, nodes []builder.Node, in rmOptions) (err error) {
+	for _, node := range nodes {
+		if node.Driver == nil {
+			continue
+		}
+		// Do not stop the buildkitd daemon when --keep-daemon is provided
+		if !in.keepDaemon {
+			if err := node.Driver.Stop(ctx, true); err != nil {
+				return err
+			}
+		}
+		if err := node.Driver.Rm(ctx, true, !in.keepState, !in.keepDaemon); err != nil {
+			return err
+		}
+		if node.Err != nil {
+			err = node.Err
+		}
+	}
+	return err
+}
+
+func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, in rmOptions) error {
+	builders, err := builder.GetBuilders(dockerCli, txn)
+	if err != nil {
+		return err
+	}
+
+	timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
+	defer cancel()
+
+	eg, _ := errgroup.WithContext(timeoutCtx)
+	for _, b := range builders {
+		func(b *builder.Builder) {
+			eg.Go(func() error {
+				nodes, err := b.LoadNodes(timeoutCtx, true)
+				if err != nil {
+					return errors.Wrapf(err, "cannot load %s", b.Name)
+				}
+				if b.Dynamic {
+					return nil
+				}
+				if b.Inactive() {
+					rmerr := rm(ctx, nodes, in)
+					if err := txn.Remove(b.Name); err != nil {
+						return err
+					}
+					_, _ = fmt.Fprintf(dockerCli.Err(), "%s removed\n", b.Name)
+					return rmerr
+				}
+				return nil
+			})
+		}(b)
+	}
+
+	return eg.Wait()
+}

@@ -1,40 +1,93 @@
 package commands
 
 import (
+	"os"
+
 	imagetoolscmd "github.com/docker/buildx/commands/imagetools"
+	"github.com/docker/buildx/util/logutil"
+	"github.com/docker/cli-docs-tool/annotation"
+	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli-plugins/plugin"
 	"github.com/docker/cli/cli/command"
+	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
+	"github.com/spf13/pflag"
 )
 
 func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command {
 	cmd := &cobra.Command{
-		Short: "Build with BuildKit",
+		Short: "Docker Buildx",
+		Long:  `Extended build capabilities with BuildKit`,
 		Use:   name,
+		Annotations: map[string]string{
+			annotation.CodeDelimiter: `"`,
+		},
 	}
 	if isPlugin {
 		cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
 			return plugin.PersistentPreRunE(cmd, args)
 		}
+	} else {
+		// match plugin behavior for standalone mode
+		// https://github.com/docker/cli/blob/6c9eb708fa6d17765d71965f90e1c59cea686ee9/cli-plugins/plugin/plugin.go#L117-L127
+		cmd.SilenceUsage = true
+		cmd.SilenceErrors = true
+		cmd.TraverseChildren = true
+		cmd.DisableFlagsInUseLine = true
+		cli.DisableFlagsInUseLine(cmd)
 	}
+
+	logrus.SetFormatter(&logutil.Formatter{})
+
+	logrus.AddHook(logutil.NewFilter([]logrus.Level{
+		logrus.DebugLevel,
+	},
+		"serving grpc connection",
+		"stopping session",
+		"using default config store",
+	))
+
+	// filter out useless commandConn.CloseWrite warning message that can occur
+	// when listing builder instances with "buildx ls" for those that are
+	// unreachable: "commandConn.CloseWrite: commandconn: failed to wait: signal: killed"
+	// https://github.com/docker/cli/blob/3fb4fb83dfb5db0c0753a8316f21aea54dab32c5/cli/connhelper/commandconn/commandconn.go#L203-L214
+	logrus.AddHook(logutil.NewFilter([]logrus.Level{
+		logrus.WarnLevel,
+	},
+		"commandConn.CloseWrite:",
+		"commandConn.CloseRead:",
+	))
+
 	addCommands(cmd, dockerCli)
 	return cmd
 }
 
+type rootOptions struct {
+	builder string
+}
+
 func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
+	opts := &rootOptions{}
+	rootFlags(opts, cmd.PersistentFlags())
+
 	cmd.AddCommand(
-		buildCmd(dockerCli),
-		bakeCmd(dockerCli),
+		buildCmd(dockerCli, opts),
+		bakeCmd(dockerCli, opts),
 		createCmd(dockerCli),
-		rmCmd(dockerCli),
+		rmCmd(dockerCli, opts),
 		lsCmd(dockerCli),
-		useCmd(dockerCli),
-		inspectCmd(dockerCli),
-		stopCmd(dockerCli),
+		useCmd(dockerCli, opts),
+		inspectCmd(dockerCli, opts),
+		stopCmd(dockerCli, opts),
 		installCmd(dockerCli),
 		uninstallCmd(dockerCli),
 		versionCmd(dockerCli),
-		imagetoolscmd.RootCmd(dockerCli),
+		pruneCmd(dockerCli, opts),
+		duCmd(dockerCli, opts),
+		imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
 	)
 }
+
+func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
+	flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
+}

@@ -1,6 +1,9 @@
 package commands
 
 import (
+	"context"
+
+	"github.com/docker/buildx/builder"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/moby/buildkit/util/appcontext"
@@ -8,40 +11,28 @@ import (
 )
 
 type stopOptions struct {
+	builder string
 }
 
-func runStop(dockerCli command.Cli, in stopOptions, args []string) error {
+func runStop(dockerCli command.Cli, in stopOptions) error {
 	ctx := appcontext.Context()
 
-	txn, release, err := getStore(dockerCli)
-	if err != nil {
-		return err
-	}
-	defer release()
-
-	if len(args) > 0 {
-		ng, err := getNodeGroup(txn, dockerCli, args[0])
-		if err != nil {
-			return err
-		}
-		if err := stop(ctx, dockerCli, ng, false); err != nil {
-			return err
-		}
-		return nil
-	}
-
-	ng, err := getCurrentInstance(txn, dockerCli)
+	b, err := builder.New(dockerCli,
+		builder.WithName(in.builder),
+		builder.WithSkippedValidation(),
+	)
 	if err != nil {
 		return err
 	}
-	if ng != nil {
-		return stop(ctx, dockerCli, ng, false)
-	}
 
-	return stopCurrent(ctx, dockerCli, false)
+	nodes, err := b.LoadNodes(ctx, false)
+	if err != nil {
+		return err
+	}
+
+	return stop(ctx, nodes)
 }
 
-func stopCmd(dockerCli command.Cli) *cobra.Command {
+func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options stopOptions
 
 	cmd := &cobra.Command{
@@ -49,15 +40,27 @@ func stopCmd(dockerCli command.Cli) *cobra.Command {
 		Short: "Stop builder instance",
 		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runStop(dockerCli, options, args)
+			options.builder = rootOpts.builder
+			if len(args) > 0 {
+				options.builder = args[0]
+			}
+			return runStop(dockerCli, options)
 		},
 	}
 
-	flags := cmd.Flags()
-
-	// flags.StringArrayVarP(&options.outputs, "output", "o", []string{}, "Output destination (format: type=local,dest=path)")
-
-	_ = flags
-
 	return cmd
 }
+
+func stop(ctx context.Context, nodes []builder.Node) (err error) {
+	for _, node := range nodes {
+		if node.Driver != nil {
+			if err := node.Driver.Stop(ctx, true); err != nil {
+				return err
+			}
+		}
+		if node.Err != nil {
+			err = node.Err
+		}
+	}
+	return err
+}

@@ -3,6 +3,7 @@ package commands
 import (
 	"os"
 
+	"github.com/docker/buildx/util/cobrautil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/docker/cli/cli/config"
@@ -54,5 +55,8 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
 		Hidden: true,
 	}
 
+	// hide builder persistent flag for this command
+	cobrautil.HideInheritedFlags(cmd, "builder")
+
 	return cmd
 }

@@ -3,6 +3,8 @@ package commands
 import (
 	"os"
 
+	"github.com/docker/buildx/store/storeutil"
+	"github.com/docker/buildx/util/dockerutil"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
 	"github.com/pkg/errors"
@@ -12,22 +14,23 @@ import (
 type useOptions struct {
 	isGlobal  bool
 	isDefault bool
+	builder   string
 }
 
-func runUse(dockerCli command.Cli, in useOptions, name string) error {
-	txn, release, err := getStore(dockerCli)
+func runUse(dockerCli command.Cli, in useOptions) error {
+	txn, release, err := storeutil.GetStore(dockerCli)
 	if err != nil {
 		return err
 	}
 	defer release()
 
-	if _, err := txn.NodeGroupByName(name); err != nil {
+	if _, err := txn.NodeGroupByName(in.builder); err != nil {
 		if os.IsNotExist(errors.Cause(err)) {
-			if name == "default" && name != dockerCli.CurrentContext() {
+			if in.builder == "default" && in.builder != dockerCli.CurrentContext() {
 				return errors.Errorf("run `docker context use default` to switch to default context")
 			}
-			if name == "default" || name == dockerCli.CurrentContext() {
-				ep, err := getCurrentEndpoint(dockerCli)
+			if in.builder == "default" || in.builder == dockerCli.CurrentContext() {
+				ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
 				if err != nil {
 					return err
 				}
@@ -41,44 +44,45 @@ func runUse(dockerCli command.Cli, in useOptions, name string) error {
 				return err
 			}
 			for _, l := range list {
-				if l.Name == name {
-					return errors.Errorf("run `docker context use %s` to switch to context %s", name, name)
+				if l.Name == in.builder {
+					return errors.Errorf("run `docker context use %s` to switch to context %s", in.builder, in.builder)
 				}
 			}
 
 		}
-		return errors.Wrapf(err, "failed to find instance %q", name)
+		return errors.Wrapf(err, "failed to find instance %q", in.builder)
 	}
 
-	ep, err := getCurrentEndpoint(dockerCli)
+	ep, err := dockerutil.GetCurrentEndpoint(dockerCli)
 	if err != nil {
 		return err
 	}
-	if err := txn.SetCurrent(ep, name, in.isGlobal, in.isDefault); err != nil {
+	if err := txn.SetCurrent(ep, in.builder, in.isGlobal, in.isDefault); err != nil {
 		return err
 	}
 
 	return nil
 }
 
-func useCmd(dockerCli command.Cli) *cobra.Command {
+func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	var options useOptions
 
 	cmd := &cobra.Command{
 		Use:   "use [OPTIONS] NAME",
 		Short: "Set the current builder instance",
-		Args:  cli.ExactArgs(1),
+		Args:  cli.RequiresMaxArgs(1),
 		RunE: func(cmd *cobra.Command, args []string) error {
-			return runUse(dockerCli, options, args[0])
+			options.builder = rootOpts.builder
+			if len(args) > 0 {
+				options.builder = args[0]
+			}
+			return runUse(dockerCli, options)
 		},
 	}
 
 	flags := cmd.Flags()
 
 	flags.BoolVar(&options.isGlobal, "global", false, "Builder persists context changes")
 	flags.BoolVar(&options.isDefault, "default", false, "Set builder as default for current context")
 
-	_ = flags
-
 	return cmd
 }
331 commands/util.go
@@ -1,331 +0,0 @@
-package commands
-
-import (
-	"context"
-	"os"
-	"path/filepath"
-
-	"github.com/docker/buildx/build"
-	"github.com/docker/buildx/driver"
-	"github.com/docker/buildx/store"
-	"github.com/docker/buildx/util/platformutil"
-	"github.com/docker/cli/cli/command"
-	"github.com/docker/cli/cli/context/docker"
-	dopts "github.com/docker/cli/opts"
-	dockerclient "github.com/docker/docker/client"
-	"github.com/pkg/errors"
-	"golang.org/x/sync/errgroup"
-)
-
-// getStore returns current builder instance store
-func getStore(dockerCli command.Cli) (*store.Txn, func(), error) {
-	dir := filepath.Dir(dockerCli.ConfigFile().Filename)
-	s, err := store.New(dir)
-	if err != nil {
-		return nil, nil, err
-	}
-	return s.Txn()
-}
-
-// getCurrentEndpoint returns the current default endpoint value
-func getCurrentEndpoint(dockerCli command.Cli) (string, error) {
-	name := dockerCli.CurrentContext()
-	if name != "default" {
-		return name, nil
-	}
-	de, err := getDockerEndpoint(dockerCli, name)
-	if err != nil {
-		return "", errors.Errorf("docker endpoint for %q not found", name)
-	}
-	return de, nil
-}
-
-// getDockerEndpoint returns docker endpoint string for given context
-func getDockerEndpoint(dockerCli command.Cli, name string) (string, error) {
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return "", err
-	}
-	for _, l := range list {
-		if l.Name == name {
-			ep, ok := l.Endpoints["docker"]
-			if !ok {
-				return "", errors.Errorf("context %q does not have a Docker endpoint", name)
-			}
-			typed, ok := ep.(docker.EndpointMeta)
-			if !ok {
-				return "", errors.Errorf("endpoint %q is not of type EndpointMeta, %T", ep, ep)
-			}
-			return typed.Host, nil
-		}
-	}
-	return "", nil
-}
-
-// validateEndpoint validates that endpoint is either a context or a docker host
-func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
-	de, err := getDockerEndpoint(dockerCli, ep)
-	if err == nil && de != "" {
-		if ep == "default" {
-			return de, nil
-		}
-		return ep, nil
-	}
-	h, err := dopts.ParseHost(true, ep)
-	if err != nil {
-		return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
-	}
-	return h, nil
-}
-
-// getCurrentInstance finds the current builder instance
-func getCurrentInstance(txn *store.Txn, dockerCli command.Cli) (*store.NodeGroup, error) {
-	ep, err := getCurrentEndpoint(dockerCli)
-	if err != nil {
-		return nil, err
-	}
-	ng, err := txn.Current(ep)
-	if err != nil {
-		return nil, err
-	}
-	if ng == nil {
-		ng, _ = getNodeGroup(txn, dockerCli, dockerCli.CurrentContext())
-	}
-
-	return ng, nil
-}
-
-// getNodeGroup returns nodegroup based on the name
-func getNodeGroup(txn *store.Txn, dockerCli command.Cli, name string) (*store.NodeGroup, error) {
-	ng, err := txn.NodeGroupByName(name)
-	if err != nil {
-		if !os.IsNotExist(errors.Cause(err)) {
-			return nil, err
-		}
-	}
-	if ng != nil {
-		return ng, nil
-	}
-
-	if name == "default" {
-		name = dockerCli.CurrentContext()
-	}
-
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return nil, err
-	}
-	for _, l := range list {
-		if l.Name == name {
-			return &store.NodeGroup{
-				Name: "default",
-				Nodes: []store.Node{
-					{
-						Name:     "default",
-						Endpoint: name,
-					},
-				},
-			}, nil
-		}
-	}
-
-	return nil, errors.Errorf("no builder %q found", name)
-}
-
-// driversForNodeGroup returns drivers for a nodegroup instance
-func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup) ([]build.DriverInfo, error) {
-	eg, _ := errgroup.WithContext(ctx)
-
-	dis := make([]build.DriverInfo, len(ng.Nodes))
-
-	var f driver.Factory
-	if ng.Driver != "" {
-		f = driver.GetFactory(ng.Driver, true)
-		if f == nil {
-			return nil, errors.Errorf("failed to find driver %q", f)
-		}
-	} else {
-		dockerapi, err := clientForEndpoint(dockerCli, ng.Nodes[0].Endpoint)
-		if err != nil {
-			return nil, err
-		}
-		f, err = driver.GetDefaultFactory(ctx, dockerapi, false)
-		if err != nil {
-			return nil, err
-		}
-		ng.Driver = f.Name()
-	}
-
-	for i, n := range ng.Nodes {
-		func(i int, n store.Node) {
-			eg.Go(func() error {
-				di := build.DriverInfo{
-					Name:     n.Name,
-					Platform: n.Platforms,
-				}
-				defer func() {
-					dis[i] = di
-				}()
-				dockerapi, err := clientForEndpoint(dockerCli, n.Endpoint)
-				if err != nil {
-					di.Err = err
-					return nil
-				}
-				// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
-				dockerapi.NegotiateAPIVersion(ctx)
-
-				d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, dockerapi, n.Flags, n.ConfigFile, n.DriverOpts)
-				if err != nil {
-					di.Err = err
-					return nil
-				}
-				di.Driver = d
-				return nil
-			})
-		}(i, n)
-	}
-
-	if err := eg.Wait(); err != nil {
-		return nil, err
-	}
-
-	return dis, nil
-}
-
-// clientForEndpoint returns a docker client for an endpoint
-func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
-	list, err := dockerCli.ContextStore().List()
-	if err != nil {
-		return nil, err
-	}
-	for _, l := range list {
-		if l.Name == name {
-			dep, ok := l.Endpoints["docker"]
-			if !ok {
-				return nil, errors.Errorf("context %q does not have a Docker endpoint", name)
-			}
-			epm, ok := dep.(docker.EndpointMeta)
-			if !ok {
-				return nil, errors.Errorf("endpoint %q is not of type EndpointMeta, %T", dep, dep)
-			}
-			ep, err := docker.WithTLSData(dockerCli.ContextStore(), name, epm)
-			if err != nil {
-				return nil, err
-			}
-			clientOpts, err := ep.ClientOpts()
-			if err != nil {
-				return nil, err
-			}
-			return dockerclient.NewClientWithOpts(clientOpts...)
-		}
-	}
-
-	ep := docker.Endpoint{
-		EndpointMeta: docker.EndpointMeta{
-			Host: name,
-		},
-	}
-
-	clientOpts, err := ep.ClientOpts()
-	if err != nil {
-		return nil, err
-	}
-
-	return dockerclient.NewClientWithOpts(clientOpts...)
-}
-
-// getDefaultDrivers returns drivers based on current cli config
-func getDefaultDrivers(ctx context.Context, dockerCli command.Cli) ([]build.DriverInfo, error) {
-	txn, release, err := getStore(dockerCli)
-	if err != nil {
-		return nil, err
-	}
-	defer release()
-
-	ng, err := getCurrentInstance(txn, dockerCli)
-	if err != nil {
-		return nil, err
-	}
-
-	if ng != nil {
-		return driversForNodeGroup(ctx, dockerCli, ng)
-	}
-
-	d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, dockerCli.Client(), nil, "", nil)
-	if err != nil {
-		return nil, err
-	}
-	return []build.DriverInfo{
-		{
-			Name:   "default",
-			Driver: d,
},
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func loadInfoData(ctx context.Context, d *dinfo) error {
|
|
||||||
if d.di.Driver == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
info, err := d.di.Driver.Info(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
d.info = info
|
|
||||||
if info.Status == driver.Running {
|
|
||||||
c, err := d.di.Driver.Client(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
workers, err := c.ListWorkers(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return errors.Wrap(err, "listing workers")
|
|
||||||
}
|
|
||||||
for _, w := range workers {
|
|
||||||
for _, p := range w.Platforms {
|
|
||||||
d.platforms = append(d.platforms, p)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
d.platforms = platformutil.Dedupe(d.platforms)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func loadNodeGroupData(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
	eg, _ := errgroup.WithContext(ctx)

	dis, err := driversForNodeGroup(ctx, dockerCli, ngi.ng)
	if err != nil {
		return err
	}
	ngi.drivers = make([]dinfo, len(dis))
	for i, di := range dis {
		d := di
		ngi.drivers[i].di = &d
		func(d *dinfo) {
			eg.Go(func() error {
				if err := loadInfoData(ctx, d); err != nil {
					d.err = err
				}
				return nil
			})
		}(&ngi.drivers[i])
	}

	return eg.Wait()
}

func dockerAPI(dockerCli command.Cli) *api {
	return &api{dockerCli: dockerCli}
}

type api struct {
	dockerCli command.Cli
}

func (a *api) DockerAPI(name string) (dockerclient.APIClient, error) {
	if name == "" {
		name = a.dockerCli.CurrentContext()
	}
	return clientForEndpoint(a.dockerCli, name)
}
@@ -3,6 +3,7 @@ package commands
 import (
 	"fmt"
 
+	"github.com/docker/buildx/util/cobrautil"
 	"github.com/docker/buildx/version"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
@@ -17,11 +18,15 @@ func runVersion(dockerCli command.Cli) error {
 func versionCmd(dockerCli command.Cli) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "version",
-		Short: "Show buildx version information ",
+		Short: "Show buildx version information",
 		Args:  cli.ExactArgs(0),
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runVersion(dockerCli)
 		},
 	}
+
+	// hide builder persistent flag for this command
+	cobrautil.HideInheritedFlags(cmd, "builder")
+
 	return cmd
 }
144 docker-bake.hcl Normal file
@@ -0,0 +1,144 @@
variable "GO_VERSION" {
  default = "1.19"
}
variable "DOCS_FORMATS" {
  default = "md"
}
variable "DESTDIR" {
  default = "./bin"
}

# Special target: https://github.com/docker/metadata-action#bake-definition
target "meta-helper" {
  tags = ["docker/buildx-bin:local"]
}

target "_common" {
  args = {
    GO_VERSION = GO_VERSION
    BUILDKIT_CONTEXT_KEEP_GIT_DIR = 1
  }
}

group "default" {
  targets = ["binaries"]
}

group "validate" {
  targets = ["lint", "validate-vendor", "validate-docs"]
}

target "lint" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/lint.Dockerfile"
  output = ["type=cacheonly"]
}

target "validate-vendor" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
  target = "validate"
  output = ["type=cacheonly"]
}

target "validate-docs" {
  inherits = ["_common"]
  args = {
    FORMATS = DOCS_FORMATS
    BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
  }
  dockerfile = "./hack/dockerfiles/docs.Dockerfile"
  target = "validate"
  output = ["type=cacheonly"]
}

target "validate-authors" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/authors.Dockerfile"
  target = "validate"
  output = ["type=cacheonly"]
}

target "update-vendor" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
  target = "update"
  output = ["."]
}

target "update-docs" {
  inherits = ["_common"]
  args = {
    FORMATS = DOCS_FORMATS
    BUILDX_EXPERIMENTAL = 1 // enables experimental cmds/flags for docs generation
  }
  dockerfile = "./hack/dockerfiles/docs.Dockerfile"
  target = "update"
  output = ["./docs/reference"]
}

target "update-authors" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/authors.Dockerfile"
  target = "update"
  output = ["."]
}

target "mod-outdated" {
  inherits = ["_common"]
  dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
  target = "outdated"
  no-cache-filter = ["outdated"]
  output = ["type=cacheonly"]
}

target "test" {
  inherits = ["_common"]
  target = "test-coverage"
  output = ["${DESTDIR}/coverage"]
}

target "binaries" {
  inherits = ["_common"]
  target = "binaries"
  output = ["${DESTDIR}/build"]
  platforms = ["local"]
}

target "binaries-cross" {
  inherits = ["binaries"]
  platforms = [
    "darwin/amd64",
    "darwin/arm64",
    "linux/amd64",
    "linux/arm/v6",
    "linux/arm/v7",
    "linux/arm64",
    "linux/ppc64le",
    "linux/riscv64",
    "linux/s390x",
    "windows/amd64",
    "windows/arm64"
  ]
}

target "release" {
  inherits = ["binaries-cross"]
  target = "release"
  output = ["${DESTDIR}/release"]
}

target "image" {
  inherits = ["meta-helper", "binaries"]
  output = ["type=image"]
}

target "image-cross" {
  inherits = ["meta-helper", "binaries-cross"]
  output = ["type=image"]
}

target "image-local" {
  inherits = ["image"]
  output = ["type=docker"]
}
90 docs/generate.go Normal file
@@ -0,0 +1,90 @@
package main

import (
	"log"
	"os"

	"github.com/docker/buildx/commands"
	clidocstool "github.com/docker/cli-docs-tool"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"

	// import drivers otherwise factories are empty
	// for --driver output flag usage
	_ "github.com/docker/buildx/driver/docker"
	_ "github.com/docker/buildx/driver/docker-container"
	_ "github.com/docker/buildx/driver/kubernetes"
	_ "github.com/docker/buildx/driver/remote"
)

const defaultSourcePath = "docs/reference/"

type options struct {
	source  string
	formats []string
}

func gen(opts *options) error {
	log.SetFlags(0)

	dockerCLI, err := command.NewDockerCli()
	if err != nil {
		return err
	}
	cmd := &cobra.Command{
		Use:               "docker [OPTIONS] COMMAND [ARG...]",
		Short:             "The base command for the Docker CLI.",
		DisableAutoGenTag: true,
	}

	cmd.AddCommand(commands.NewRootCmd("buildx", true, dockerCLI))

	c, err := clidocstool.New(clidocstool.Options{
		Root:      cmd,
		SourceDir: opts.source,
		Plugin:    true,
	})
	if err != nil {
		return err
	}

	for _, format := range opts.formats {
		switch format {
		case "md":
			if err = c.GenMarkdownTree(cmd); err != nil {
				return err
			}
		case "yaml":
			if err = c.GenYamlTree(cmd); err != nil {
				return err
			}
		default:
			return errors.Errorf("unknown format %q", format)
		}
	}

	return nil
}

func run() error {
	opts := &options{}
	flags := pflag.NewFlagSet(os.Args[0], pflag.ContinueOnError)
	flags.StringVar(&opts.source, "source", defaultSourcePath, "Docs source folder")
	flags.StringSliceVar(&opts.formats, "formats", []string{}, "Format (md, yaml)")
	if err := flags.Parse(os.Args[1:]); err != nil {
		return err
	}
	if len(opts.formats) == 0 {
		return errors.New("Docs format required")
	}
	return gen(opts)
}

func main() {
	if err := run(); err != nil {
		log.Printf("ERROR: %+v", err)
		os.Exit(1)
	}
}
48 docs/guides/cicd.md Normal file
@@ -0,0 +1,48 @@
# CI/CD

## GitHub Actions

Docker provides a [GitHub Action that will build and push your image](https://github.com/docker/build-push-action/#about)
using Buildx. Here is a simple workflow:

```yaml
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
```

In this example we are also using 3 other actions:

* The [`setup-buildx`](https://github.com/docker/setup-buildx-action) action creates and boots a builder, by default
  using the `docker-container` [builder driver](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
  This is **not required but recommended** to be able to build multi-platform images, export cache, etc.
* The [`setup-qemu`](https://github.com/docker/setup-qemu-action) action can be useful if you want
  to add emulation support with QEMU to be able to build against more platforms.
* The [`login`](https://github.com/docker/login-action) action takes care of logging
  in to a Docker registry.
23 docs/guides/cni-networking.md Normal file
@@ -0,0 +1,23 @@
# CNI networking

It can be useful to use a bridge network for your builder if, for example, you
encounter network port contention during multiple builds. If you're using
the BuildKit image, CNI is not yet available in it, but you can create
[a custom BuildKit image with CNI support](https://github.com/moby/buildkit/blob/master/docs/cni-networking.md).

Now build this image:

```console
$ docker buildx build --tag buildkit-cni:local --load .
```

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) that
will use this image:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "image=buildkit-cni:local" \
  --buildkitd-flags "--oci-worker-net=cni"
```
20 docs/guides/color-output.md Normal file
@@ -0,0 +1,20 @@
# Color output controls

Buildx has support for modifying the colors that are used to output information
to the terminal. You can set the environment variable `BUILDKIT_COLORS` to
something like `run=123,20,245:error=yellow:cancel=blue:warning=white` to set
the colors that you would like to use:

Setting `NO_COLOR` to anything will disable any colorized output as recommended
by [no-color.org](https://no-color.org/):

> **Note**
>
> Parsing errors will be reported but ignored. This will result in default
> color values being used where needed.

See also [the list of pre-defined colors](https://github.com/moby/buildkit/blob/master/util/progress/progressui/colors.go).
34 docs/guides/custom-network.md Normal file
@@ -0,0 +1,34 @@
# Using a custom network

[Create a network](https://docs.docker.com/engine/reference/commandline/network_create/)
named `foonet`:

```console
$ docker network create foonet
```

[Create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
named `mybuilder` that will use this network:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=foonet"
```

Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):

```console
$ docker buildx inspect --bootstrap
```

[Inspect the builder container](https://docs.docker.com/engine/reference/commandline/inspect/)
and see what network is being used:

{% raw %}
```console
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
```
{% endraw %}
63 docs/guides/custom-registry-config.md Normal file
@@ -0,0 +1,63 @@
# Using a custom registry configuration

If you [create a `docker-container` or `kubernetes` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) and
have specified certificates for registries in the [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md),
the files will be copied into the container under `/etc/buildkit/certs` and
the configuration will be updated to reflect that.

Take the following `buildkitd.toml` configuration that will be used for
pushing an image to this registry using self-signed certificates:

```toml
# /etc/buildkitd.toml
debug = true
[registry."myregistry.com"]
  ca=["/etc/certs/myregistry.pem"]
  [[registry."myregistry.com".keypair]]
    key="/etc/certs/myregistry_key.pem"
    cert="/etc/certs/myregistry_cert.pem"
```

Here we have configured a self-signed certificate for the `myregistry.com` registry.

Now [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml
```

Inspecting the builder container, you can see that the buildkitd configuration
has changed:

```console
$ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
```
```toml
debug = true

[registry]

  [registry."myregistry.com"]
    ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]

    [[registry."myregistry.com".keypair]]
      cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
      key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
```

And the certificates have been copied inside the container:

```console
$ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
myregistry.pem myregistry_cert.pem myregistry_key.pem
```

Now you should be able to push to the registry with this builder:

```console
$ docker buildx build --push --tag myregistry.com/myimage:latest .
```
31 docs/guides/opentelemetry.md Normal file
@@ -0,0 +1,31 @@
# OpenTelemetry support

To capture the trace to [Jaeger](https://github.com/jaegertracing/jaeger), set
the `JAEGER_TRACE` environment variable to the collection address using a `driver-opt`.

First create a Jaeger container:

```console
$ docker run -d --name jaeger -p "6831:6831/udp" -p "16686:16686" jaegertracing/all-in-one
```

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use the Jaeger instance via the `JAEGER_TRACE` env var:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=host" \
  --driver-opt "env.JAEGER_TRACE=localhost:6831"
```

Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):

```console
$ docker buildx inspect --bootstrap
```

Buildx commands should be traced at `http://127.0.0.1:16686/`:
62 docs/guides/registry-mirror.md Normal file
@@ -0,0 +1,62 @@
# Registry mirror

You can define a registry mirror to use for your builds by providing a [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md)
while creating a builder with the [`--config` flags](https://docs.docker.com/engine/reference/commandline/buildx_create/#config).

```toml
# /etc/buildkitd.toml
debug = true
[registry."docker.io"]
  mirrors = ["mirror.gcr.io"]
```

> **Note**
>
> `debug = true` has been added to be able to debug requests
> in the BuildKit daemon and see if the mirror is effectively used.

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml
```

Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):

```console
$ docker buildx inspect --bootstrap
```

Build an image:

```console
$ docker buildx build --load . -f-<<EOF
FROM alpine
RUN echo "hello world"
EOF
```

Now let's check the BuildKit logs in the builder container:

```console
$ docker logs buildx_buildkit_mybuilder0
```
```text
...
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.container.image.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1469 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"774380abda8f4eae9a149e5d5d3efc83\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:57 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788077652182 response.header.x-goog-hash="crc32c=V3DSrg==" response.header.x-goog-hash.1="md5=d0OAq9qPTq6aFJ5dXT78gw==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1469 response.header.x-guploader-uploadid=ADPycduqQipVAXc3tzXmTzKQ2gTT6CV736B2J628smtD1iDytEyiYCgvvdD8zz9BT1J1sASUq9pW_ctUyC4B-v2jvhIxnZTlKg response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=760 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1471 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:35:13 GMT" response.header.etag="\"35d688bd15327daafcdb4d4395e616a8\"" response.header.expires="Sun, 06 Feb 2022 18:35:13 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:12 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788032100793 response.header.x-goog-hash="crc32c=aWgRjA==" response.header.x-goog-hash.1="md5=NdaIvRUyfar8201DleYWqA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1471 response.header.x-guploader-uploadid=ADPycdtR-gJYwC7yHquIkJWFFG8FovDySvtmRnZBqlO3yVDanBXh_VqKYt400yhuf0XbQ3ZMB9IZV2vlcyHezn_Pu3a1SMMtiw response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.image.rootfs.diff.tar.gzip, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=2818413 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"1d55e7be5a77c4a908ad11bc33ebea1c\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:06 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788026431708 response.header.x-goog-hash="crc32c=ZojF+g==" response.header.x-goog-hash.1="md5=HVXnvlp3xKkIrRG8M+vqHA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=2818413 response.header.x-guploader-uploadid=ADPycdsebqxiTBJqZ0bv9zBigjFxgQydD2ESZSkKchpE0ILlN9Ibko3C5r4fJTJ4UR9ddp-UBd-2v_4eRpZ8Yo2llW_j4k8WhQ response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
...
```

As you can see, requests come from the GCR registry mirror (`response.header.x-goog*`).
33 docs/guides/resource-limiting.md Normal file
@@ -0,0 +1,33 @@
# Resource limiting

## Max parallelism

You can limit the parallelism of the BuildKit solver, which is particularly useful
for low-powered machines, using a [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md)
while creating a builder with the [`--config` flags](https://docs.docker.com/engine/reference/commandline/buildx_create/#config).

```toml
# /etc/buildkitd.toml
[worker.oci]
  max-parallelism = 4
```

Now you can [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration to limit parallelism.

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml
```

## Limit on TCP connections

TCP connections are also limited to **4 per registry**, with one additional
connection reserved for metadata requests (image config retrieval) rather than
layer pulls and pushes. This per-host limit keeps builds from getting stuck
while pulling images, and the dedicated metadata connection improves overall
build time.

More info: [moby/buildkit#2259](https://github.com/moby/buildkit/pull/2259)
14 docs/manuals/README.md Normal file
@@ -0,0 +1,14 @@
# Buildx manuals 📚

This directory contains a bunch of useful docs for how to use Buildx features.

> **Note**
>
> The markdown files in this directory (excluding this README) are reused
> downstream by the
> [Docker documentation repository](https://github.com/docker/docs).
>
> If you wish to contribute to these docs, be sure to first review the
> [documentation contribution guidelines](https://docs.docker.com/contribute/overview/).
>
> Thank you!

docs/manuals/bake/build-contexts.md (new file)

# Defining additional build contexts and linking targets

Moved to [docs.docker.com](https://docs.docker.com/build/bake/build-contexts)

docs/manuals/bake/compose-file.md (new file)

# Building from Compose file

Moved to [docs.docker.com](https://docs.docker.com/build/bake/compose-file)

docs/manuals/bake/configuring-build.md (new file)

# Configuring builds

Moved to [docs.docker.com](https://docs.docker.com/build/bake/configuring-build)

docs/manuals/bake/file-definition.md (new file)

# Bake file definition

Moved to [docs.docker.com](https://docs.docker.com/build/bake/file-definition)

docs/manuals/bake/hcl-funcs.md (new file)

# User defined HCL functions

Moved to [docs.docker.com](https://docs.docker.com/build/bake/hcl-funcs)

docs/manuals/bake/index.md (new file)

# High-level build options with Bake

Moved to [docs.docker.com](https://docs.docker.com/build/bake)

docs/manuals/cache/backends/azblob.md (vendored, new file)

# Azure Blob Storage cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/azblob)

docs/manuals/cache/backends/gha.md (vendored, new file)

# GitHub Actions cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/gha)

docs/manuals/cache/backends/index.md (vendored, new file)

# Cache storage backends

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends)

docs/manuals/cache/backends/inline.md (vendored, new file)

# Inline cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/inline)

docs/manuals/cache/backends/local.md (vendored, new file)

# Local cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/local)

docs/manuals/cache/backends/registry.md (vendored, new file)

# Registry cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/registry)

docs/manuals/cache/backends/s3.md (vendored, new file)

# Amazon S3 cache storage

Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/s3)

docs/manuals/drivers/docker-container.md (new file)

# Docker container driver

Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/docker-container)

docs/manuals/drivers/docker.md (new file)

# Docker driver

Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/docker)

docs/manuals/drivers/index.md (new file)

# Buildx drivers overview

Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers)

docs/manuals/drivers/kubernetes.md (new file)

# Kubernetes driver

Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/kubernetes)

docs/manuals/drivers/remote.md (new file)

# Remote driver

Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/remote)

docs/manuals/exporters/image-registry.md (new file)

# Image and registry exporters

Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/image-registry)

docs/manuals/exporters/index.md (new file)

# Exporters overview

Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters)

docs/manuals/exporters/local-tar.md (new file)

# Local and tar exporters

Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/local-tar)

docs/manuals/exporters/oci-docker.md (new file)

# OCI and Docker exporters

Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/oci-docker)

docs/reference/buildx.md (new file, 43 lines)

# buildx

```
docker buildx [OPTIONS] COMMAND
```

<!---MARKER_GEN_START-->
Extended build capabilities with BuildKit

### Subcommands

| Name                                 | Description                                |
|:-------------------------------------|:-------------------------------------------|
| [`bake`](buildx_bake.md)             | Build from a file                          |
| [`build`](buildx_build.md)           | Start a build                              |
| [`create`](buildx_create.md)         | Create a new builder instance              |
| [`du`](buildx_du.md)                 | Disk usage                                 |
| [`imagetools`](buildx_imagetools.md) | Commands to work on images in registry     |
| [`inspect`](buildx_inspect.md)       | Inspect current builder instance           |
| [`install`](buildx_install.md)       | Install buildx as a 'docker builder' alias |
| [`ls`](buildx_ls.md)                 | List builder instances                     |
| [`prune`](buildx_prune.md)           | Remove build cache                         |
| [`rm`](buildx_rm.md)                 | Remove a builder instance                  |
| [`stop`](buildx_stop.md)             | Stop builder instance                      |
| [`uninstall`](buildx_uninstall.md)   | Uninstall the 'docker builder' alias       |
| [`use`](buildx_use.md)               | Set the current builder instance           |
| [`version`](buildx_version.md)       | Show buildx version information            |


### Options

| Name                    | Type     | Default | Description                              |
|:------------------------|:---------|:--------|:-----------------------------------------|
| [`--builder`](#builder) | `string` |         | Override the configured builder instance |


<!---MARKER_GEN_END-->

## Examples

### <a name="builder"></a> Override the configured builder instance (--builder)

You can also use the `BUILDX_BUILDER` environment variable.
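For example, assuming a builder named `mybuilder` has already been created (the name is illustrative), you can select it for a single invocation either with the flag or with the environment variable:

```console
$ docker buildx --builder mybuilder inspect
$ BUILDX_BUILDER=mybuilder docker buildx inspect
```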

docs/reference/buildx_bake.md (new file, 174 lines)

# buildx bake

```
docker buildx bake [OPTIONS] [TARGET...]
```

<!---MARKER_GEN_START-->
Build from a file

### Aliases

`docker buildx bake`, `docker buildx f`

### Options

| Name                             | Type          | Default | Description                                                                              |
|:---------------------------------|:--------------|:--------|:-----------------------------------------------------------------------------------------|
| [`--builder`](#builder)          | `string`      |         | Override the configured builder instance                                                 |
| [`-f`](#file), [`--file`](#file) | `stringArray` |         | Build definition file                                                                    |
| `--load`                         |               |         | Shorthand for `--set=*.output=type=docker`                                               |
| `--metadata-file`                | `string`      |         | Write build result metadata to the file                                                  |
| [`--no-cache`](#no-cache)        |               |         | Do not use cache when building the image                                                 |
| [`--print`](#print)              |               |         | Print the options without building                                                       |
| [`--progress`](#progress)        | `string`      | `auto`  | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| [`--provenance`](#provenance)    | `string`      |         | Shorthand for `--set=*.attest=type=provenance`                                           |
| [`--pull`](#pull)                |               |         | Always attempt to pull all referenced images                                             |
| `--push`                         |               |         | Shorthand for `--set=*.output=type=registry`                                             |
| [`--sbom`](#sbom)                | `string`      |         | Shorthand for `--set=*.attest=type=sbom`                                                 |
| [`--set`](#set)                  | `stringArray` |         | Override target value (e.g., `targetpattern.key=value`)                                  |


<!---MARKER_GEN_END-->

## Description

Bake is a high-level build command. Each specified target will run in parallel
as part of the build.

Read the [High-level build options with Bake](https://docs.docker.com/build/bake/)
guide for an introduction to writing bake files.

> **Note**
>
> The `buildx bake` command may receive backwards-incompatible features in the
> future if needed. We are looking for feedback on improving the command and
> extending the functionality further.

## Examples

### <a name="builder"></a> Override the configured builder instance (--builder)

Same as [`buildx --builder`](buildx.md#builder).

### <a name="file"></a> Specify a build definition file (-f, --file)

Use the `-f` / `--file` option to specify the build definition file to use.
The file can be an HCL, JSON or Compose file. If multiple files are specified,
they are all read and their configurations are combined.

You can pass the names of the targets to build, to build only specific target(s).
The following example builds the `db` and `webapp-release` targets that are
defined in the `docker-bake.dev.hcl` file:

```hcl
# docker-bake.dev.hcl
group "default" {
  targets = ["db", "webapp-dev"]
}

target "webapp-dev" {
  dockerfile = "Dockerfile.webapp"
  tags = ["docker.io/username/webapp"]
}

target "webapp-release" {
  inherits = ["webapp-dev"]
  platforms = ["linux/amd64", "linux/arm64"]
}

target "db" {
  dockerfile = "Dockerfile.db"
  tags = ["docker.io/username/db"]
}
```

```console
$ docker buildx bake -f docker-bake.dev.hcl db webapp-release
```

See our [file definition](https://docs.docker.com/build/bake/file-definition/)
guide for more details.
### <a name="no-cache"></a> Do not use cache when building the image (--no-cache)

Same as `build --no-cache`. Do not use cache when building the image.

### <a name="print"></a> Print the options without building (--print)

Prints the resulting options of the targets desired to be built, in JSON
format, without starting a build.

```console
$ docker buildx bake -f docker-bake.hcl --print db
{
  "group": {
    "default": {
      "targets": [
        "db"
      ]
    }
  },
  "target": {
    "db": {
      "context": "./",
      "dockerfile": "Dockerfile",
      "tags": [
        "docker.io/tiborvass/db"
      ]
    }
  }
}
```

### <a name="progress"></a> Set type of progress output (--progress)

Same as [`build --progress`](buildx_build.md#progress).

### <a name="provenance"></a> Create provenance attestations (--provenance)

Same as [`build --provenance`](buildx_build.md#provenance).

### <a name="pull"></a> Always attempt to pull a newer version of the image (--pull)

Same as `build --pull`.

### <a name="sbom"></a> Create SBOM attestations (--sbom)

Same as [`build --sbom`](buildx_build.md#sbom).
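As a sketch, both attestation shorthands can be enabled for every target in a bake file at once (the `mode=max` value is illustrative):

```console
$ docker buildx bake --provenance=mode=max --sbom=true
```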

### <a name="set"></a> Override target configurations from command line (--set)

```
--set targetpattern.key[.subkey]=value
```

Override target configurations from the command line. The pattern matching
syntax is defined in https://golang.org/pkg/path/#Match.

```console
$ docker buildx bake --set target.args.mybuildarg=value
$ docker buildx bake --set target.platform=linux/arm64
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with 'foo'
$ docker buildx bake --set *.platform=linux/arm64     # overrides platform for all targets
$ docker buildx bake --set foo*.no-cache              # bypass caching only for targets starting with 'foo'
```

Complete list of overridable fields:

* `args`
* `cache-from`
* `cache-to`
* `context`
* `dockerfile`
* `labels`
* `no-cache`
* `no-cache-filter`
* `output`
* `platform`
* `pull`
* `push`
* `secrets`
* `ssh`
* `tags`
* `target`

docs/reference/buildx_build.md (new file, 620 lines)

# buildx build

```
docker buildx build [OPTIONS] PATH | URL | -
```

<!---MARKER_GEN_START-->
Start a build

### Aliases

`docker buildx build`, `docker buildx b`

### Options

| Name | Type | Default | Description |
|:-----|:-----|:--------|:------------|
| [`--add-host`](https://docs.docker.com/engine/reference/commandline/build/#add-host) | `stringSlice` | | Add a custom host-to-IP mapping (format: `host:ip`) |
| [`--allow`](#allow) | `stringSlice` | | Allow extra privileged entitlement (e.g., `network.host`, `security.insecure`) |
| [`--attest`](#attest) | `stringArray` | | Attestation parameters (format: `type=sbom,generator=image`) |
| [`--build-arg`](#build-arg) | `stringArray` | | Set build-time variables |
| [`--build-context`](#build-context) | `stringArray` | | Additional build contexts (e.g., `name=path`) |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`--cache-from`](#cache-from) | `stringArray` | | External cache sources (e.g., `user/app:cache`, `type=local,src=path/to/dir`) |
| [`--cache-to`](#cache-to) | `stringArray` | | Cache export destinations (e.g., `user/app:cache`, `type=local,dest=path/to/dir`) |
| [`--cgroup-parent`](https://docs.docker.com/engine/reference/commandline/build/#cgroup-parent) | `string` | | Optional parent cgroup for the container |
| [`-f`](https://docs.docker.com/engine/reference/commandline/build/#file), [`--file`](https://docs.docker.com/engine/reference/commandline/build/#file) | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to the file |
| `--invoke` | `string` | | Invoke a command after the build [experimental] |
| `--label` | `stringArray` | | Set metadata for an image |
| [`--load`](#load) | | | Shorthand for `--output=type=docker` |
| [`--metadata-file`](#metadata-file) | `string` | | Write build result metadata to the file |
| `--network` | `string` | `default` | Set the networking mode for the `RUN` instructions during build |
| `--no-cache` | | | Do not use cache when building the image |
| `--no-cache-filter` | `stringArray` | | Do not cache specified stages |
| [`-o`](#output), [`--output`](#output) | `stringArray` | | Output destination (format: `type=local,dest=path`) |
| [`--platform`](#platform) | `stringArray` | | Set target platform for build |
| `--print` | `string` | | Print result of information request (e.g., outline, targets) [experimental] |
| [`--progress`](#progress) | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`). Use plain to show container output |
| [`--provenance`](#provenance) | `string` | | Shorthand for `--attest=type=provenance` |
| `--pull` | | | Always attempt to pull all referenced images |
| [`--push`](#push) | | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | | | Suppress the build output and print image ID on success |
| [`--sbom`](#sbom) | `string` | | Shorthand for `--attest=type=sbom` |
| [`--secret`](#secret) | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| [`--shm-size`](#shm-size) | `bytes` | `0` | Size of `/dev/shm` |
| [`--ssh`](#ssh) | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| [`-t`](https://docs.docker.com/engine/reference/commandline/build/#tag), [`--tag`](https://docs.docker.com/engine/reference/commandline/build/#tag) | `stringArray` | | Name and optionally a tag (format: `name:tag`) |
| [`--target`](https://docs.docker.com/engine/reference/commandline/build/#target) | `string` | | Set the target build stage to build |
| [`--ulimit`](#ulimit) | `ulimit` | | Ulimit options |


<!---MARKER_GEN_END-->

Flags marked with `[experimental]` need to be explicitly enabled by setting the
`BUILDX_EXPERIMENTAL=1` environment variable.

## Description

The `buildx build` command starts a build using BuildKit. This command is
similar to the UI of the `docker build` command and takes the same flags and
arguments.

For documentation on most of these flags, refer to the [`docker build`
documentation](https://docs.docker.com/engine/reference/commandline/build/).
Here we'll document a subset of the new flags.

## Examples

### <a name="attest"></a> Create attestations (--attest)

```
--attest=type=sbom,...
--attest=type=provenance,...
```

Create [image attestations](https://docs.docker.com/build/attestations/).
BuildKit currently supports:

- `sbom` - Software Bill of Materials.

  Use `--attest=type=sbom` to generate an SBOM for an image at build-time.
  Alternatively, you can use the [`--sbom` shorthand](#sbom).

  For more information, see [here](https://docs.docker.com/build/attestations/sbom/).

- `provenance` - SLSA Provenance.

  Use `--attest=type=provenance` to generate provenance for an image at
  build-time. Alternatively, you can use the [`--provenance` shorthand](#provenance).

  By default, a minimal provenance attestation will be created for the build
  result, which will only be attached for images pushed to registries.

  For more information, see [here](https://docs.docker.com/build/attestations/slsa-provenance/).
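As a sketch, both attestation types can be requested on the same build; the image name is illustrative, and the build is pushed because provenance is only attached to images pushed to registries:

```console
$ docker buildx build --attest=type=sbom --attest=type=provenance,mode=max \
  -t docker.io/username/webapp:latest --push .
```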
### <a name="allow"></a> Allow extra privileged entitlement (--allow)

```
--allow=ENTITLEMENT
```

Allow extra privileged entitlement. List of entitlements:

- `network.host` - Allows executions with host networking.
- `security.insecure` - Allows executions without sandbox. See
  [related Dockerfile extensions](https://docs.docker.com/engine/reference/builder/#run---securitysandbox).

For entitlements to be enabled, the `buildkitd` daemon also needs to allow them
with `--allow-insecure-entitlement` (see [`create --buildkitd-flags`](buildx_create.md#buildkitd-flags)).

**Examples**

```console
$ docker buildx create --use --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure'
$ docker buildx build --allow security.insecure .
```

### <a name="build-arg"></a> Set build-time variables (--build-arg)

Same as the [`docker build` command](https://docs.docker.com/engine/reference/commandline/build/#build-arg).

There are also useful built-in build args, such as:

* `BUILDKIT_CONTEXT_KEEP_GIT_DIR=<bool>` trigger git context to keep the `.git` directory
* `BUILDKIT_INLINE_BUILDINFO_ATTRS=<bool>` inline build info attributes in image config or not
* `BUILDKIT_INLINE_CACHE=<bool>` inline cache metadata to image config or not
* `BUILDKIT_MULTI_PLATFORM=<bool>` opt into deterministic output regardless of multi-platform output or not

```console
$ docker buildx build --build-arg BUILDKIT_MULTI_PLATFORM=1 .
```

> **Note**
>
> More built-in build args can be found in the [Dockerfile reference docs](https://docs.docker.com/engine/reference/builder/#buildkit-built-in-build-args).

### <a name="build-context"></a> Additional build contexts (--build-context)

```
--build-context=name=VALUE
```

Define an additional build context with the specified contents. In the
Dockerfile, the context can be accessed when `FROM name` or `--from=name` is
used. If the Dockerfile defines a stage with the same name, it is overwritten.

The value can be a local source directory, a [local OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md),
a container image (with a `docker-image://` prefix), or a Git or HTTP URL.

Replace `alpine:latest` with a pinned one:

```console
$ docker buildx build --build-context alpine=docker-image://alpine@sha256:0123456789 .
```

Expose a secondary local source directory:

```console
$ docker buildx build --build-context project=path/to/project/source .
# docker buildx build --build-context project=https://github.com/myuser/project.git .
```

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
COPY --from=project myfile /
```

#### <a name="source-oci-layout"></a> Source image from OCI layout directory

Source an image from a local [OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md),
either by tag, or by digest:

```console
$ docker buildx build --build-context foo=oci-layout:///path/to/local/layout:<tag>
$ docker buildx build --build-context foo=oci-layout:///path/to/local/layout@sha256:<digest>
```

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add git
COPY --from=foo myfile /

FROM foo
```

The OCI layout directory must be compliant with the [OCI layout specification](https://github.com/opencontainers/image-spec/blob/main/image-layout.md).
You can reference an image in the layout using either tags, or the exact digest.

### <a name="builder"></a> Override the configured builder instance (--builder)

Same as [`buildx --builder`](buildx.md#builder).

### <a name="cache-from"></a> Use an external cache source for a build (--cache-from)

```
--cache-from=[NAME|type=TYPE[,KEY=VALUE]]
```

Use an external cache source for a build. Supported types are `registry`,
`local`, `gha` and `s3`.

- [`registry` source](https://github.com/moby/buildkit#registry-push-image-and-cache-separately)
  can import cache from a cache manifest or (special) image configuration on the
  registry.
- [`local` source](https://github.com/moby/buildkit#local-directory-1) can
  import cache from local files previously exported with `--cache-to`.
- [`gha` source](https://github.com/moby/buildkit#github-actions-cache-experimental)
  can import cache from a previously exported cache with `--cache-to` in your
  GitHub repository.
- [`s3` source](https://github.com/moby/buildkit#s3-cache-experimental)
  can import cache from a previously exported cache with `--cache-to` in your
  S3 bucket.

If no type is specified, the `registry` exporter is used with the specified
reference.

The `docker` driver currently only supports importing build cache from the
registry.

```console
$ docker buildx build --cache-from=user/app:cache .
$ docker buildx build --cache-from=user/app .
$ docker buildx build --cache-from=type=registry,ref=user/app .
$ docker buildx build --cache-from=type=local,src=path/to/cache .
$ docker buildx build --cache-from=type=gha .
$ docker buildx build --cache-from=type=s3,region=eu-west-1,bucket=mybucket .
```

More info about cache exporters and available attributes: https://github.com/moby/buildkit#export-cache

### <a name="cache-to"></a> Export build cache to an external cache destination (--cache-to)

```
--cache-to=[NAME|type=TYPE[,KEY=VALUE]]
```

Export build cache to an external cache destination. Supported types are
`registry`, `local`, `inline`, `gha` and `s3`.

- [`registry` type](https://github.com/moby/buildkit#registry-push-image-and-cache-separately) exports build cache to a cache manifest in the registry.
- [`local` type](https://github.com/moby/buildkit#local-directory-1) exports
  cache to a local directory on the client.
- [`inline` type](https://github.com/moby/buildkit#inline-push-image-and-cache-together)
  writes the cache metadata into the image configuration.
- [`gha` type](https://github.com/moby/buildkit#github-actions-cache-experimental)
  exports cache through the [GitHub Actions Cache service API](https://github.com/tonistiigi/go-actions-cache/blob/master/api.md#authentication).
- [`s3` type](https://github.com/moby/buildkit#s3-cache-experimental) exports
  cache to an S3 bucket.

The `docker` driver currently only supports exporting inline cache metadata to
the image configuration. Alternatively, `--build-arg BUILDKIT_INLINE_CACHE=1`
can be used to trigger the inline cache exporter.

Attribute key:

- `mode` - Specifies how many layers are exported with the cache. `min` only
  exports layers already in the final build stage, while `max` exports layers
  for all stages. Metadata is always exported for the whole build.

```console
$ docker buildx build --cache-to=user/app:cache .
$ docker buildx build --cache-to=type=inline .
$ docker buildx build --cache-to=type=registry,ref=user/app .
$ docker buildx build --cache-to=type=local,dest=path/to/cache .
$ docker buildx build --cache-to=type=gha .
$ docker buildx build --cache-to=type=s3,region=eu-west-1,bucket=mybucket .
```

More info about cache exporters and available attributes: https://github.com/moby/buildkit#export-cache

### <a name="load"></a> Load the single-platform build result to `docker images` (--load)
|
||||||
|
|
||||||
|
Shorthand for [`--output=type=docker`](#docker). Will automatically load the
|
||||||
|
single-platform build result to `docker images`.
|
||||||
|
|
||||||
|
### <a name="metadata-file"></a> Write build result metadata to the file (--metadata-file)
|
||||||
|
|
||||||
|
To output build metadata such as the image digest, pass the `--metadata-file` flag.
|
||||||
|
The metadata will be written as a JSON object to the specified file. The
|
||||||
|
directory of the specified file must already exist and be writable.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx build --load --metadata-file metadata.json .
|
||||||
|
$ cat metadata.json
|
||||||
|
```
```json
{
  "containerimage.buildinfo": {
    "frontend": "dockerfile.v0",
    "attrs": {
      "context": "https://github.com/crazy-max/buildkit-buildsources-test.git#master",
      "filename": "Dockerfile",
      "source": "docker/dockerfile:master"
    },
    "sources": [
      {
        "type": "docker-image",
        "ref": "docker.io/docker/buildx-bin:0.6.1@sha256:a652ced4a4141977c7daaed0a074dcd9844a78d7d2615465b12f433ae6dd29f0",
        "pin": "sha256:a652ced4a4141977c7daaed0a074dcd9844a78d7d2615465b12f433ae6dd29f0"
      },
      {
        "type": "docker-image",
        "ref": "docker.io/library/alpine:3.13",
        "pin": "sha256:026f721af4cf2843e07bba648e158fb35ecc876d822130633cc49f707f0fc88c"
      }
    ]
  },
  "containerimage.config.digest": "sha256:2937f66a9722f7f4a2df583de2f8cb97fc9196059a410e7f00072fc918930e66",
  "containerimage.descriptor": {
    "annotations": {
      "config.digest": "sha256:2937f66a9722f7f4a2df583de2f8cb97fc9196059a410e7f00072fc918930e66",
      "org.opencontainers.image.created": "2022-02-08T21:28:03Z"
    },
    "digest": "sha256:19ffeab6f8bc9293ac2c3fdf94ebe28396254c993aea0b5a542cfb02e0883fa3",
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "size": 506
  },
  "containerimage.digest": "sha256:19ffeab6f8bc9293ac2c3fdf94ebe28396254c993aea0b5a542cfb02e0883fa3"
}
```
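
The digests in this file are handy for pinning or signing the exact image that was just built. A rough sketch of extracting one field (assuming a `metadata.json` shaped like the example above; `jq` would be the cleaner tool where available):

```shell
# Create a minimal metadata.json stand-in (in a real build this file is
# written by `docker buildx build --metadata-file metadata.json`).
cat > metadata.json <<'EOF'
{
  "containerimage.digest": "sha256:19ffeab6f8bc9293ac2c3fdf94ebe28396254c993aea0b5a542cfb02e0883fa3"
}
EOF

# Extract the image digest; with jq this would be:
#   jq -r '."containerimage.digest"' metadata.json
digest=$(sed -n 's/.*"containerimage.digest": *"\([^"]*\)".*/\1/p' metadata.json)
echo "$digest"
```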

### <a name="output"></a> Set the export action for the build result (-o, --output)

```
-o, --output=[PATH,-,type=TYPE[,KEY=VALUE]]
```

Sets the export action for the build result. In `docker build` all builds finish
by creating a container image and exporting it to `docker images`. `buildx` makes
this step configurable, allowing results to be exported directly to the client,
OCI image tarballs, a registry, and so on.

Buildx with the `docker` driver currently only supports the local, tarball, and
image exporters. The `docker-container` driver supports all the exporters.

If just the path is specified as a value, `buildx` will use the local exporter
with this path as the destination. If the value is `-`, `buildx` will use the `tar`
exporter and write to `stdout`.

```console
$ docker buildx build -o . .
$ docker buildx build -o outdir .
$ docker buildx build -o - . > out.tar
$ docker buildx build -o type=docker .
$ docker buildx build -o type=docker,dest=- . > myimage.tar
$ docker buildx build -t tonistiigi/foo -o type=registry .
```

Supported export types are:

#### `local`

The `local` export type writes all result files to a directory on the client. The
new files will be owned by the current user. On multi-platform builds, all results
will be put in subdirectories by their platform.

Attribute key:

- `dest` - destination directory where files will be written

#### `tar`

The `tar` export type writes all result files as a single tarball on the client.
On multi-platform builds all results will be put in subdirectories by their platform.

Attribute key:

- `dest` - destination path where tarball will be written. `-` writes to stdout.

#### `oci`

The `oci` export type writes the result image or manifest list as an [OCI image
layout](https://github.com/opencontainers/image-spec/blob/v1.0.1/image-layout.md)
tarball on the client.

Attribute key:

- `dest` - destination path where tarball will be written. `-` writes to stdout.

#### `docker`

The `docker` export type writes the single-platform result image as a [Docker image
specification](https://github.com/docker/docker/blob/v20.10.2/image/spec/v1.2.md)
tarball on the client. Tarballs created by this exporter are also OCI compatible.

Currently, multi-platform images cannot be exported with the `docker` export type.
The most common use case for multi-platform images is to directly push to a registry
(see [`registry`](#registry)).

Attribute keys:

- `dest` - destination path where tarball will be written. If not specified, the
  tar will be loaded automatically to the current docker instance.
- `context` - name for the docker context where to import the result

#### `image`

The `image` exporter writes the build result as an image or a manifest list. When
using the `docker` driver the image will appear in `docker images`. Optionally, the
image can be automatically pushed to a registry by specifying attributes.

Attribute keys:

- `name` - name (references) for the new image.
- `push` - boolean to automatically push the image.

#### `registry`

The `registry` exporter is a shortcut for `type=image,push=true`.
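
The `KEY=VALUE` attributes accepted by `--output` are plain comma-separated pairs. A hypothetical illustration of how such a value breaks down (this is not buildx's actual parser, and `docker.io/username/image` is a made-up reference):

```shell
# Split a buildx-style output spec such as "type=image,name=...,push=true"
# into its individual attributes.
spec="type=image,name=docker.io/username/image,push=true"
type="" name="" push=""
IFS=','
for kv in $spec; do
  key=${kv%%=*}   # text before the first "="
  val=${kv#*=}    # text after the first "="
  case "$key" in
    type) type=$val ;;
    name) name=$val ;;
    push) push=$val ;;
  esac
done
unset IFS
echo "type=$type name=$name push=$push"
```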

### <a name="platform"></a> Set the target platforms for the build (--platform)

```
--platform=value[,value]
```

Set the target platform for the build. All `FROM` commands inside the Dockerfile
without their own `--platform` flag will pull base images for this platform and
this value will also be the platform of the resulting image.

The default value is the platform of the BuildKit daemon where the build runs.
The value takes the form of `os/arch` or `os/arch/variant`. For example,
`linux/amd64` or `linux/arm/v7`. Additionally, the `--platform` flag also supports
a special `local` value, which tells BuildKit to use the platform of the BuildKit
client that invokes the build.

When using the `docker-container` driver with `buildx`, this flag can accept multiple
values as an input separated by a comma. With multiple values the result will be
built for all of the specified platforms and joined together into a single manifest
list.

If the `Dockerfile` needs to invoke the `RUN` command, the builder needs runtime
support for the specified platform. In a clean setup, you can only execute `RUN`
commands for your system architecture.
If your kernel supports [`binfmt_misc`](https://en.wikipedia.org/wiki/Binfmt_misc)
launchers for secondary architectures, buildx will pick them up automatically.
Docker Desktop releases come with `binfmt_misc` automatically configured for `arm64`
and `arm` architectures. You can see what runtime platforms your current builder
instance supports by running `docker buildx inspect --bootstrap`.

Inside a `Dockerfile`, you can access the current platform value through the
`TARGETPLATFORM` build argument. Please refer to the [`docker build`
documentation](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)
for the full description of automatic platform argument variants.

The formatting for the platform specifier is defined in the [containerd source
code](https://github.com/containerd/containerd/blob/v1.4.3/platforms/platforms.go#L63).

```console
$ docker buildx build --platform=linux/arm64 .
$ docker buildx build --platform=linux/amd64,linux/arm64,linux/arm/v7 .
$ docker buildx build --platform=darwin .
```
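
As described above, a platform specifier has two or three slash-separated segments. A small sketch of that split (illustration only; the authoritative parsing lives in the containerd code linked above):

```shell
# Break "os/arch[/variant]" into its components.
platform="linux/arm/v7"
os=${platform%%/*}        # "linux"
rest=${platform#*/}       # "arm/v7"
arch=${rest%%/*}          # "arm"
variant=${rest#"$arch"}   # "/v7", or "" when there is no variant
variant=${variant#/}      # "v7"
echo "os=$os arch=$arch variant=$variant"
```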

### <a name="progress"></a> Set type of progress output (--progress)

```
--progress=VALUE
```

Set type of progress output (`auto`, `plain`, `tty`). Use `plain` to show container
output (default `auto`).

> **Note**
>
> You can also use the `BUILDKIT_PROGRESS` environment variable to set its value.

The following example uses `plain` output during the build:

```console
$ docker buildx build --load --progress=plain .

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 227B 0.0s done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 129B 0.0s done
#2 DONE 0.0s
...
```

> **Note**
>
> Check also our [Color output controls guide](https://github.com/docker/buildx/blob/master/docs/guides/color-output.md)
> for modifying the colors that are used to output information to the terminal.

### <a name="provenance"></a> Create provenance attestations (--provenance)

Shorthand for [`--attest=type=provenance`](#attest), used to configure
provenance attestations for the build result. For example,
`--provenance=mode=max` can be used as an abbreviation for
`--attest=type=provenance,mode=max`.

Additionally, `--provenance` can be used with boolean values to broadly enable
or disable provenance attestations. For example, `--provenance=false` can be
used to disable all provenance attestations, while `--provenance=true` can be
used to enable all provenance attestations.

By default, a minimal provenance attestation will be created for the build
result, which will only be attached for images pushed to registries.

For more information, see [here](https://docs.docker.com/build/attestations/slsa-provenance/).

### <a name="push"></a> Push the build result to a registry (--push)

Shorthand for [`--output=type=registry`](#registry). Will automatically push the
build result to the registry.

### <a name="sbom"></a> Create SBOM attestations (--sbom)

Shorthand for [`--attest=type=sbom`](#attest), used to configure SBOM
attestations for the build result. For example,
`--sbom=generator=<user>/<generator-image>` can be used as an abbreviation for
`--attest=type=sbom,generator=<user>/<generator-image>`.

Additionally, `--sbom` can be used with boolean values to broadly enable or
disable SBOM attestations. For example, `--sbom=false` can be used to disable
all SBOM attestations.

For more information, see [here](https://docs.docker.com/build/attestations/sbom/).

### <a name="secret"></a> Secret to expose to the build (--secret)

```
--secret=[type=TYPE[,KEY=VALUE]]
```

Exposes a secret to the build. The secret can be used by the build using the
[`RUN --mount=type=secret` mount](https://docs.docker.com/engine/reference/builder/#run---mounttypesecret).

If `type` is unset it will be detected. Supported types are:

#### `file`

Attribute keys:

- `id` - ID of the secret. Defaults to basename of the `src` path.
- `src`, `source` - Secret filename. `id` used if unset.

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
  aws s3 cp s3://... ...
```

```console
$ docker buildx build --secret id=aws,src=$HOME/.aws/credentials .
```

#### `env`

Attribute keys:

- `id` - ID of the secret. Defaults to `env` name.
- `env` - Secret environment variable. `id` used if unset, with `src`/`source`
  used when `id` is also unset.

```dockerfile
# syntax=docker/dockerfile:1
FROM node:alpine
RUN --mount=type=bind,target=. \
  --mount=type=secret,id=SECRET_TOKEN \
  SECRET_TOKEN=$(cat /run/secrets/SECRET_TOKEN) yarn run test
```

```console
$ SECRET_TOKEN=token docker buildx build --secret id=SECRET_TOKEN .
```

### <a name="shm-size"></a> Size of /dev/shm (--shm-size)

The format is `<number><unit>`. `number` must be greater than `0`. Unit is
optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g`
(gigabytes). If you omit the unit, the system uses bytes.
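
For instance, `--shm-size=64m` means 64 × 1024 × 1024 bytes. A sketch of that arithmetic (assuming binary units, as Docker uses for these suffixes):

```shell
# Expand a size like "64m" into bytes.
size="64m"
num=${size%[bkmg]}   # numeric part, e.g. "64"
unit=${size#"$num"}  # unit suffix, e.g. "m" (may be empty)
case "$unit" in
  ""|b) mult=1 ;;
  k)    mult=1024 ;;
  m)    mult=$((1024 * 1024)) ;;
  g)    mult=$((1024 * 1024 * 1024)) ;;
esac
bytes=$((num * mult))
echo "$bytes"
```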

### <a name="ssh"></a> SSH agent socket or keys to expose to the build (--ssh)

```
--ssh=default|<id>[=<socket>|<key>[,<key>]]
```

This can be useful when some commands in your Dockerfile need specific SSH
authentication (e.g., cloning a private repository).

`--ssh` exposes the SSH agent socket or keys to the build and can be used with the
[`RUN --mount=type=ssh` mount](https://docs.docker.com/engine/reference/builder/#run---mounttypessh).

Example to access GitLab using an SSH agent socket:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add --no-cache openssh-client
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh ssh -q -T git@gitlab.com 2>&1 | tee /hello
# "Welcome to GitLab, @GITLAB_USERNAME_ASSOCIATED_WITH_SSHKEY" should be printed here
# when the build progress type is set to `plain`.
```

```console
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa
(Input your passphrase here)
$ docker buildx build --ssh default=$SSH_AUTH_SOCK .
```

### <a name="ulimit"></a> Set ulimits (--ulimit)

`--ulimit` is specified with a soft and hard limit as such:
`<type>=<soft limit>[:<hard limit>]`, for example:

```console
$ docker buildx build --ulimit nofile=1024:1024 .
```

> **Note**
>
> If you do not provide a `hard limit`, the `soft limit` is used
> for both values. If no `ulimits` are set, they are inherited from
> the default `ulimits` set on the daemon.
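
The fallback above can be sketched as: parse `<type>=<soft>[:<hard>]`, and reuse the soft value when no hard limit is given (illustration only, not the daemon's actual parsing):

```shell
# Parse a --ulimit spec, falling back to the soft limit for the hard one.
spec="nofile=1024"
type=${spec%%=*}     # "nofile"
limits=${spec#*=}    # "1024" or "1024:2048"
soft=${limits%%:*}
hard=${limits#*:}
# With no ":" present, the stripping above is a no-op, so hard still equals
# the whole value; treat that as "no hard limit given" and reuse soft.
[ "$hard" = "$limits" ] && hard=$soft
echo "$type soft=$soft hard=$hard"
```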
|
||||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user