docs: create dedicated drivers section

Create a dedicated folder for information on drivers, write a new index.md with
content adapted from the README, and add a new feature comparison table.

Signed-off-by: Justin Chadwell <me@jedevc.com>

`docs/guides/drivers/index.md` (new file, +41 lines)

# Buildx drivers overview

The buildx client connects out to the BuildKit backend to execute builds.
Buildx drivers allow fine-grained control over management of the backend, and
support several different options for where and how BuildKit should run.

Currently, we support the following drivers:

- The `docker` driver, which uses the BuildKit library bundled into the Docker
  daemon.
  ([reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver, which launches a dedicated BuildKit container
  using Docker, for access to advanced features.
  ([guide](./docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver, which launches dedicated BuildKit pods in a remote
  Kubernetes cluster, for scalable builds.
  ([guide](./kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver, which allows directly connecting to a manually managed
  BuildKit daemon, for more custom setups.
  ([guide](./remote.md))

<!--- FIXME: for 0.9, make links relative, and add reference link for remote --->

To create a new builder that uses one of the above drivers, you can use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/) command:

```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
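
For instance (the builder name here is illustrative), you could create a
BuildKit container builder and make it the default with:

```console
$ docker buildx create --name=container-builder --driver=docker-container --bootstrap
$ docker buildx use container-builder
```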

The build experience is very similar across drivers; however, some features
are not supported evenly across the board. Notably, the `docker` driver does
not include support for certain output and caching types.

| Feature                       |    `docker`      | `docker-container` | `kubernetes` |        `remote`        |
| :---------------------------- | :--------------: | :----------------: | :----------: | :--------------------: |
| **Automatic `--load`**        | ✅               | ❌                 | ❌           | ❌                     |
| **Cache export**              | ❔ (inline only) | ✅                 | ✅           | ✅                     |
| **Docker/OCI tarball output** | ❌               | ✅                 | ✅           | ✅                     |
| **Multi-arch images**         | ❌               | ✅                 | ✅           | ✅                     |
| **BuildKit configuration**    | ❌               | ✅                 | ✅           | ❔ (managed externally) |
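
For drivers that support BuildKit configuration, the daemon can be configured
with a TOML file passed at builder creation time via the `--config` flag of
`docker buildx create`. A minimal sketch (the registry names and mirror are
illustrative):

```toml
debug = true

# Use a pull-through mirror for Docker Hub images
[registry."docker.io"]
  mirrors = ["mirror.gcr.io"]

# Allow a plain-HTTP local registry
[registry."registry.local:5000"]
  http = true
  insecure = true
```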

`docs/guides/drivers/kubernetes.md` (new file, +232 lines)

# Kubernetes driver

The buildx kubernetes driver allows connecting your local development or CI
environments to your Kubernetes cluster, giving you access to more powerful
and varied compute resources.

This guide assumes you already have an existing Kubernetes cluster; if you
don't, you can easily follow along by installing
[minikube](https://minikube.sigs.k8s.io/docs/).

Before connecting buildx to your cluster, you may want to create a dedicated
namespace using `kubectl` to keep your buildx-managed resources separate. You
can call your namespace anything you want, or use the existing `default`
namespace, but we'll create a `buildkit` namespace for now:

```console
$ kubectl create namespace buildkit
```

Then create a new buildx builder:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit
```

This assumes that the Kubernetes cluster you want to connect to is currently
accessible via the `kubectl` command, with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
if necessary.
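
For example (the path is illustrative), the variable can be pointed at a
specific kubeconfig before creating the builder:

```shell
# Select which kubeconfig kubectl (and therefore the buildx kubernetes
# driver) should use; the path here is illustrative.
export KUBECONFIG=$HOME/.kube/config
```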

You should now be able to see the builder in the list of buildx builders:

```console
$ docker buildx ls
NAME/NODE                DRIVER/ENDPOINT STATUS  PLATFORMS
kube                     kubernetes
  kube0-6977cdcb75-k9h9m                 running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default *                docker
  default                                running linux/amd64, linux/386
```

The buildx driver creates the necessary resources on your cluster in the
specified namespace (in this case, `buildkit`), while keeping your driver
configuration local. You can see the running pods with:

```console
$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   1/1     1            1           32s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-k9h9m   1/1     Running   0          32s
```

You can use your new builder by including the `--builder` flag when running
buildx commands. For example (replacing `<user>` and `<image>` with your Docker
Hub username and desired image output respectively):

```console
$ docker buildx build . \
  --builder=kube \
  -t <user>/<image> \
  --push
```

## Scaling BuildKit

One of the main advantages of the kubernetes builder is that you can easily
scale your builder up and down to handle increased build load. These controls
are exposed via the following options:

- `replicas=N`
  - This scales the number of buildkit pods to the desired size. By default,
    only a single pod will be created, but increasing this allows taking
    advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`
  - These options allow requesting and limiting the resources available to
    each buildkit pod, as described in the official Kubernetes documentation
    [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
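
As a sketch (the values are illustrative, using the usual Kubernetes quantity
syntax), resource options are passed the same way as other driver options:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=1,limits.cpu=4,limits.memory=4G
```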

For example, to create 4 replica buildkit pods:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4
```

Listing the pods, we get:

```console
$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   4/4     4            4           8s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-48ld2   1/1     Running   0          8s
kube0-6977cdcb75-rkc6b   1/1     Running   0          8s
kube0-6977cdcb75-vb4ks   1/1     Running   0          8s
kube0-6977cdcb75-z4fzs   1/1     Running   0          8s
```

Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
random nodes from the available pool, which should provide better balancing
across all replicas, while `sticky` (the default) attempts to connect the same
build performed multiple times to the same node each time, ensuring better
local cache utilization.
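
For instance (the builder name is illustrative), a builder that load-balances
randomly across its replicas could be created with:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube-random \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random
```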

For more information on scalability, see the options for
[`buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).

## Multi-arch builds

The kubernetes buildx driver has support for creating
[multi-architecture images](https://docs.docker.com/build/buildx/#build-multi-platform-images),
making it easy to build for multiple platforms at once.

### QEMU

Like the `docker-container` driver, the kubernetes driver also supports using
[QEMU](https://www.qemu.org/) (user mode) to build non-native platforms. If
you're using a default setup like the one above, no extra setup should be
needed; you should just be able to start building for other architectures by
including the `--platform` flag.

For example, to build a Linux image for `amd64` and `arm64`:

```console
$ docker buildx build . \
  --builder=kube \
  --platform=linux/amd64,linux/arm64 \
  -t <user>/<image> \
  --push
```

> **Warning**
> QEMU emulates non-native instruction sets in software, which is *much*
> slower than native execution. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
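
One common way to sidestep the emulation cost for compiled languages is to
cross-compile from the native build platform instead, using the
`BUILDPLATFORM`, `TARGETOS`, and `TARGETARCH` build arguments that BuildKit
provides automatically. A minimal sketch for a hypothetical Go project:

```dockerfile
# Run the compiler on the builder's native platform...
FROM --platform=$BUILDPLATFORM golang:1.19 AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
# ...and cross-compile for the requested target platform.
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```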

Note that if you're using a custom buildkit image (via the `image=<image>`
driver option), or are invoking non-native binaries from within your build,
you may need to explicitly enable QEMU using the `qemu.install` option during
driver creation:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,qemu.install=true
```

### Native

If you have access to cluster nodes of different architectures, you can
configure the kubernetes driver to take advantage of these for native builds.
To do this, use the `--append` flag of `docker buildx create`.

To start, create the builder with explicit support for a single architecture,
`amd64`:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/amd64 \
  --node=builder-amd64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```

This creates a buildx builder `kube` containing a single builder node
`builder-amd64`. Note that the buildx concept of a node is not the same as the
Kubernetes concept of a node; the buildx node in this case could connect
multiple Kubernetes nodes of the same architecture together.

With our `kube` builder created, we can now introduce another architecture
into the mix, for example, `arm64`:

```console
$ docker buildx create \
  --append \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/arm64 \
  --node=builder-arm64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```

If you list builders now, you should be able to see both nodes present:

```console
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT                                          STATUS  PLATFORMS
kube            kubernetes
  builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig=  running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
  builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig=  running linux/arm64*
```

You should now be able to build multi-arch images with `amd64` and `arm64`
combined, by specifying those platforms together in your buildx command:

```console
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
```

You can repeat the `buildx create --append` command for as many different
architectures as you want to support.

## Rootless mode

The kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).

To enable it in your cluster, you can use the `rootless=true` driver option:

```console
$ docker buildx create \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,rootless=true
```

This will create your pods without `securityContext.privileged`.

`docs/guides/drivers/remote.md` (new file, +176 lines)

# Remote driver

The buildx remote driver allows for more complex custom build workloads,
letting users connect to external buildkit instances. This is useful for
scenarios that require manual management of the buildkit daemon, or where a
buildkit daemon is exposed from another source.

To connect to a running buildkitd instance:

```console
$ docker buildx create \
  --name remote \
  --driver remote \
  tcp://localhost:1234
```

## Remote BuildKit over Unix sockets

In this scenario, we'll create a setup with buildkitd listening on a Unix
socket, and have buildx connect to it.

Firstly, ensure that [buildkit](https://github.com/moby/buildkit) is installed.
For example, you can launch an instance of buildkitd with:

```console
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
```

Alternatively, [see here](https://github.com/moby/buildkit/blob/master/docs/rootless.md)
for running buildkitd in rootless mode or [here](https://github.com/moby/buildkit/tree/master/examples/systemd)
for examples of running it as a systemd service.

You should now have a Unix socket that is accessible to your user and
available to connect to:

```console
$ ls -lh /home/user/buildkitd.sock
srw-rw---- 1 root user 0 May  5 11:04 /home/user/buildkitd.sock
```

You can then connect buildx to it with the remote driver:

```console
$ docker buildx create \
  --name remote-unix \
  --driver remote \
  unix://$HOME/buildkitd.sock
```

If you list available builders, you should then see `remote-unix` among them:

```console
$ docker buildx ls
NAME/NODE      DRIVER/ENDPOINT                 STATUS  PLATFORMS
remote-unix    remote
  remote-unix0 unix:///home/.../buildkitd.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default *      docker
  default                                      running linux/amd64, linux/386
```

We can switch to this new builder as the default using `docker buildx use remote-unix`,
or specify it per build:

```console
$ docker buildx build --builder=remote-unix -t test --load .
```

(Remember that `--load` is necessary when not using the default `docker`
driver, in order to load the build result into the Docker daemon.)

## Remote BuildKit in a Docker container

In this scenario, we'll create a setup similar to the `docker-container`
driver, by manually booting a buildkit Docker container and connecting to it
using the buildx remote driver. In most cases you'd probably just use the
`docker-container` driver, which connects to buildkit through the Docker
daemon, but in this case we'll manually create a container and access it via
its exposed port.

First, we need to generate certificates for buildkit; you can use the
[create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh)
script as a starting point. Note that while it is *possible* to expose
buildkit over TCP without using TLS, it is **not recommended**, since this
will allow arbitrary access to buildkit without credentials.
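
If you'd rather not use the upstream script, a simplified sketch of the same
idea with plain `openssl` (the file names mirror those expected below; the
SAN must match the address buildx will dial):

```shell
mkdir -p .certs

# 1. A throwaway certificate authority
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=buildkit-ca" \
  -keyout .certs/ca-key.pem -out .certs/ca.pem

# 2. Daemon key + certificate, signed by the CA, with a SAN for localhost
printf "subjectAltName=DNS:localhost,IP:127.0.0.1\n" > .certs/san.cnf
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/CN=localhost" \
  -keyout .certs/daemon-key.pem -out .certs/daemon.csr
openssl x509 -req -days 365 -in .certs/daemon.csr \
  -CA .certs/ca.pem -CAkey .certs/ca-key.pem -CAcreateserial \
  -extfile .certs/san.cnf -out .certs/daemon-cert.pem

# 3. Client key + certificate
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/CN=client" \
  -keyout .certs/client-key.pem -out .certs/client.csr
openssl x509 -req -days 365 -in .certs/client.csr \
  -CA .certs/ca.pem -CAkey .certs/ca-key.pem -CAcreateserial \
  -out .certs/client-cert.pem

# Both leaf certificates should verify against the CA
openssl verify -CAfile .certs/ca.pem \
  .certs/daemon-cert.pem .certs/client-cert.pem
```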

With our certificates generated in `.certs/`, we can start up the container:

```console
$ docker run -d --rm \
  --name=remote-buildkitd \
  --privileged \
  -p 1234:1234 \
  -v $PWD/.certs:/etc/buildkit/certs \
  moby/buildkit:latest \
  --addr tcp://0.0.0.0:1234 \
  --tlscacert /etc/buildkit/certs/ca.pem \
  --tlscert /etc/buildkit/certs/daemon-cert.pem \
  --tlskey /etc/buildkit/certs/daemon-key.pem
```

The above command starts a buildkit container and exposes the daemon's port
1234 to localhost.

We can now connect to this running container using buildx:

```console
$ docker buildx create \
  --name remote-container \
  --driver remote \
  --driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem,servername=... \
  tcp://localhost:1234
```

Alternatively, we could use the `docker-container://` URL scheme to connect
to the buildkit container without specifying a port:

```console
$ docker buildx create \
  --name remote-container \
  --driver remote \
  docker-container://remote-buildkitd
```

## Remote BuildKit in Kubernetes

In this scenario, we'll create a setup similar to the `kubernetes` driver by
manually creating a buildkit `Deployment`. While the `kubernetes` driver will
do this under the hood, it might sometimes be desirable to scale buildkit
manually. Additionally, when executing builds from inside Kubernetes pods,
the buildx builder will need to be recreated from within each pod or copied
between them.

Firstly, we can create a Kubernetes deployment of buildkitd, as per the
instructions [here](https://github.com/moby/buildkit/tree/master/examples/kubernetes).
Following the guide, we set up certificates for the buildkit daemon and client
(as above, using [create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh))
and create a `Deployment` of buildkit pods with a service that connects to
them.
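
A trimmed-down sketch of such a manifest (the secret name
`buildkit-daemon-certs` and other details are illustrative; see the upstream
examples for the full version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
  labels:
    app: buildkitd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:latest
          args:
            - --addr
            - tcp://0.0.0.0:1234
            - --tlscacert
            - /certs/ca.pem
            - --tlscert
            - /certs/cert.pem
            - --tlskey
            - /certs/key.pem
          securityContext:
            privileged: true
          ports:
            - containerPort: 1234
          volumeMounts:
            - name: certs
              mountPath: /certs
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: buildkit-daemon-certs
---
apiVersion: v1
kind: Service
metadata:
  name: buildkitd
spec:
  selector:
    app: buildkitd
  ports:
    - port: 1234
      protocol: TCP
```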

Assuming that the service is called `buildkitd`, we can create a remote
builder in buildx, ensuring that the listed certificate files are present:

```console
$ docker buildx create \
  --name remote-kubernetes \
  --driver remote \
  --driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem \
  tcp://buildkitd.default.svc:1234
```

Note that the above will only work in-cluster (since the buildkit setup guide
only creates a `ClusterIP` service). To configure the builder to be accessible
remotely, you can use an appropriately configured Ingress, which is outside
the scope of this guide.

To access the service remotely, we can use the port-forwarding mechanism of
`kubectl`:

```console
$ kubectl port-forward svc/buildkitd 1234:1234
```

Then you can simply point the remote driver at `tcp://localhost:1234`.

Alternatively, we could use the `kube-pod://` URL scheme to connect directly
to a buildkit pod through the Kubernetes API (note that this method will only
connect to a single pod in the deployment):

```console
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx

$ docker buildx create \
  --name remote-container \
  --driver remote \
  kube-pod://buildkitd-XXXXXXXXXX-xxxxx
```