updated prose and structure for driver docs

Signed-off-by: David Karlsson <david.karlsson@docker.com>

This commit is contained in parent e98c252490, commit d030fcc076.

# Docker container driver

The buildx Docker container driver allows creation of a managed and customizable
BuildKit environment in a dedicated Docker container.

Using the Docker container driver has a couple of advantages over the default
Docker driver. For example:

- Specify custom BuildKit versions to use.
- Build multi-arch images, see [QEMU](#qemu)
- Advanced options for
  [cache import and export](https://docs.docker.com/build/building/cache/)

## Synopsis

Run the following command to create a new builder, named `container`, that uses
the Docker container driver:

```console
$ docker buildx create --name container --driver docker-container
container
```

The following table describes the available driver-specific options that you can
pass to `--driver-opt`:

| Parameter       | Value  | Default          | Description                                                                                |
| --------------- | ------ | ---------------- | ------------------------------------------------------------------------------------------ |
| `image`         | string |                  | Sets the image to use for running BuildKit.                                                |
| `network`       | string |                  | Sets the network mode for running the BuildKit container.                                  |
| `cgroup-parent` | string | `/docker/buildx` | Sets the cgroup parent of the BuildKit container if Docker is using the `cgroupfs` driver. |
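
For instance, to pin the BuildKit version and run the BuildKit container on the
host network, you could pass the `image` and `network` options when creating the
builder. This is a sketch; the image tag and network mode are illustrative, so
substitute values that fit your environment:

```console
$ docker buildx create \
  --name container \
  --driver docker-container \
  --driver-opt image=moby/buildkit:master,network=host
```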

## Usage

When you run a build, Buildx pulls the specified `moby/buildkit` image from
[Docker Hub](https://hub.docker.com/u/moby/buildkit). When the container has
started, Buildx submits the build to the containerized build server.

```console
$ docker buildx build . -t <image> --builder=container
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
...
```
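
The warning highlights that, with this driver, the build result stays in the
build cache unless you export it. To push the result straight to a registry, add
`--push`, as the message suggests. The registry and image names are
placeholders:

```console
$ docker buildx build . -t <registry>/<image> --builder=container --push
```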

## Loading to local image store

Unlike when using the default `docker` driver, images built with the
`docker-container` driver must be explicitly loaded into the local image store.
Use the `--load` flag:

```console
$ docker buildx build . --load -t <image> --builder=container
...
 => exporting to oci image format                    7.7s
 => => exporting layers                              4.9s
 => importing to docker
```

The image becomes available in the image store when the build finishes:

```console
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
<image>      latest   adf3eec768a1   2 minutes ago   197MB
```

### QEMU

The `docker-container` driver supports using [QEMU](https://www.qemu.org/) (user
mode) to build non-native platforms. Use the `--platform` flag to specify which
architectures you want to build for.

For example, to build a Linux image for `amd64` and `arm64`:

```console
$ docker buildx build . \
  --builder=container \
  --platform=linux/amd64,linux/arm64 \
  -t <registry>/<image> \
  --push
```

> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.

## Further reading

For more information on the Docker container driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

If you want to explore builders running on a remote server, see the
[Kubernetes driver](./kubernetes.md) and the [Remote driver](./remote.md).

# Docker driver

The Buildx Docker driver is the default driver. It uses the BuildKit server
components built directly into the Docker engine. The Docker driver requires no
configuration.

Unlike the other drivers, builders using the Docker driver can't be manually
created. They're only created automatically from the Docker context.

Images built with the Docker driver are automatically loaded to the local image
store.

## Synopsis

```console
# The Docker driver is used by buildx by default
docker buildx build .
```
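
For example, a built image is immediately available to run, with no separate
load step. The `myapp:latest` tag below is a placeholder:

```console
# No --load needed: the default driver loads the result into the image store
$ docker buildx build -t myapp:latest .
$ docker image ls myapp
```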

It's not possible to configure which BuildKit version to use, or to pass any
additional BuildKit parameters to a builder using the Docker driver. The
BuildKit version and parameters are preset by the Docker engine internally.

If you need additional configuration and flexibility, consider using the
[Docker container driver](./docker-container.md).

## Further reading

For more information on the Docker driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

# Buildx drivers overview

Buildx drivers are configurations for how and where the BuildKit backend runs.
Driver settings are customizable and allow fine-grained control of the builder.
Buildx supports the following drivers:

- `docker`: uses the BuildKit library bundled into the Docker daemon.
- `docker-container`: creates a dedicated BuildKit container using Docker.
- `kubernetes`: creates BuildKit pods in a Kubernetes cluster.
- `remote`: connects directly to a manually managed BuildKit daemon.

Different drivers support different use cases. The default `docker` driver
prioritizes simplicity and ease of use. It has limited support for advanced
features like caching and output formats, and isn't configurable. Other drivers
provide more flexibility and are better at handling advanced scenarios. The
`kubernetes` and `remote` drivers specifically aim to enable remote builders.

The following table outlines some of the differences between drivers.

| Feature                      | `docker`    | `docker-container` | `kubernetes` | `remote`           |
| :--------------------------- | :---------: | :----------------: | :----------: | :----------------: |
| **Automatically load image** | Yes         | No                 | No           | No                 |
| **Cache export**             | Inline only | Yes                | Yes          | Yes                |
| **Remote builders**          | No          | No                 | Yes          | Yes                |
| **Tarball output**           | No          | Yes                | Yes          | Yes                |
| **Multi-arch images**        | No          | Yes                | Yes          | Yes                |
| **BuildKit configuration**   | No          | Yes                | Yes          | Managed externally |
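
As an illustration of the output differences, the non-default drivers can export
a build directly to a tarball. A sketch, assuming a `docker-container` builder
named `container` already exists:

```console
$ docker buildx build . --builder=container --output type=oci,dest=./image.tar
```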

## List available drivers

Use `docker buildx ls` to see builder instances available on your system, and
the drivers they're using.

```console
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT  STATUS   BUILDKIT  PLATFORMS
default         docker
  default       default          running  20.10.17  linux/amd64, linux/386
```

Depending on your setup, you may find multiple builders in your list that use
the Docker driver. For example, on a system that runs both a manually installed
version of dockerd, as well as Docker Desktop, you might see the following
output from `docker buildx ls`:

```console
NAME/NODE       DRIVER/ENDPOINT  STATUS   BUILDKIT  PLATFORMS
default         docker
  default       default          running  20.10.17  linux/amd64, linux/386
desktop-linux * docker
  desktop-linux desktop-linux    running  20.10.17  linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```

This is because the Docker driver builders are automatically pulled from the
available
[Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
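
For example, a context pointing at a remote Docker host surfaces as an
additional Docker-driver builder. A sketch; the context name and SSH address are
placeholders:

```console
$ docker context create my-remote --docker host=ssh://user@remote-host
$ docker buildx ls
```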

## Create a new builder

Use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/)
command to create a builder, and specify the driver using the `--driver` option.

```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
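
For instance, a minimal invocation that creates a `docker-container` builder;
the builder name is arbitrary:

```console
$ docker buildx create --name=mybuilder --driver=docker-container
mybuilder
```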

## What's next

Read about each of the Buildx drivers to learn about how they work and how to
use them:

- [Docker driver](./docker.md)
- [Docker container driver](./docker-container.md)
- [Kubernetes driver](./kubernetes.md)
- [Remote driver](./remote.md)

# Kubernetes driver

The Buildx Kubernetes driver allows connecting your local development or CI
environments to your Kubernetes cluster to allow access to more powerful and
varied compute resources.

## Synopsis

Run the following command to create a new builder, named `kube`, that uses the
Kubernetes driver:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=[key=value,...]
```

The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter         | Value            | Default                                 | Description                                                                                                                            |
| ----------------- | ---------------- | --------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| `image`           | String           |                                         | Sets the image to use for running BuildKit.                                                                                            |
| `namespace`       | String           | Namespace in current Kubernetes context | Sets the Kubernetes namespace.                                                                                                         |
| `replicas`        | Integer          | 1                                       | Sets the number of Pod replicas to create. See [scaling BuildKit][1].                                                                  |
| `requests.cpu`    | CPU units        |                                         | Sets the request CPU value specified in units of Kubernetes CPU. For example `requests.cpu=100m` or `requests.cpu=2`.                  |
| `requests.memory` | Memory size      |                                         | Sets the request memory value specified in bytes or with a valid suffix. For example `requests.memory=500Mi` or `requests.memory=4G`.  |
| `limits.cpu`      | CPU units        |                                         | Sets the limit CPU value specified in units of Kubernetes CPU. For example `limits.cpu=100m` or `limits.cpu=2`.                        |
| `limits.memory`   | Memory size      |                                         | Sets the limit memory value specified in bytes or with a valid suffix. For example `limits.memory=500Mi` or `limits.memory=4G`.        |
| `nodeselector`    | CSV string       |                                         | Sets the pod's `nodeSelector` label(s). See [node assignment][2].                                                                      |
| `tolerations`     | CSV string       |                                         | Configures the pod's taint toleration. See [node assignment][2].                                                                       |
| `rootless`        | `true\|false`    | `false`                                 | Run the container as a non-root user. See [rootless mode][3].                                                                          |
| `loadbalance`     | `sticky\|random` | `sticky`                                | Load-balancing strategy. If set to `sticky`, the pod is chosen using the hash of the context path.                                     |
| `qemu.install`    | `true\|false`    |                                         | Install QEMU emulation for multi-platform support. See [QEMU][4].                                                                      |
| `qemu.image`      | String           | `tonistiigi/binfmt:latest`              | Sets the QEMU emulation image. See [QEMU][4].                                                                                          |

[1]: #scaling-buildkit
[2]: #node-assignment
[3]: #rootless-mode
[4]: #qemu
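
For instance, to cap the resources of each BuildKit pod, you could combine the
request and limit options when creating the builder. A sketch; the values are
illustrative:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=500m,limits.cpu=2,limits.memory=4G
```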

## Scaling BuildKit

One of the main advantages of the Kubernetes driver is that you can scale the
number of builder replicas up and down to handle increased build load. Scaling
is configurable using the following driver options:

- `replicas=N`

  This scales the number of BuildKit pods to the desired size. By default, it
  only creates a single pod. Increasing the number of replicas lets you take
  advantage of multiple nodes in your cluster.

- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`

  These options allow requesting and limiting the resources available to each
  BuildKit pod according to the official Kubernetes documentation
  [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

For example, to create 4 replica BuildKit pods:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4
```

Listing the pods, you get this:

```console
$ kubectl -n buildkit get deployments
...

$ kubectl -n buildkit get pods
...
kube0-6977cdcb75-z4fzs   1/1     Running   0          8s
```

Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
random nodes from the node pool, providing an even workload distribution across
replicas. `sticky` (the default) attempts to connect the same build performed
multiple times to the same node each time, ensuring better use of local cache.

For more information on scalability, see the options for
[buildx create](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).
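
As a sketch, selecting the `random` strategy at creation time, for workloads
that don't benefit from sticky caching, might look like this:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random
```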

## Node assignment

The Kubernetes driver allows you to control the scheduling of BuildKit pods
using the `nodeSelector` and `tolerations` driver options.

The value of the `nodeSelector` parameter is a comma-separated string of
key-value pairs, where the key is the node label and the value is the label
text. For example: `"nodeselector=kubernetes.io/arch=arm64"`

The `tolerations` parameter is a semicolon-separated list of taints. It accepts
the same values as the Kubernetes manifest. Each `tolerations` entry specifies a
taint key and the value, operator, or effect. For example:
`"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"`

The syntax for these parameters is slightly different compared to other driver
options. You must wrap both `nodeSelector` and `tolerations` in double quotes.
For example:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt="nodeselector=label=value","tolerations=key=key1,value=value1"
```

## Multi-platform builds

The Buildx Kubernetes driver has support for creating
[multi-platform images](https://docs.docker.com/build/building/multi-platform/),
either using QEMU or by leveraging the native architecture of nodes.

### QEMU

Like the `docker-container` driver, the Kubernetes driver also supports using
[QEMU](https://www.qemu.org/) (user mode) to build images for non-native
platforms. Include the `--platform` flag and specify which platforms you want to
output to.

For example, to build a Linux image for `amd64` and `arm64`:

```console
$ docker buildx build . \
  --builder=kube \
  --platform=linux/amd64,linux/arm64 \
  -t <registry>/<image> \
  --push
```

> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.

Using a custom BuildKit image or invoking non-native binaries in builds may
require that you explicitly turn on QEMU using the `qemu.install` option when
creating the builder:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,qemu.install=true
```

### Native

If you have access to cluster nodes of different architectures, the Kubernetes
driver can take advantage of these for native builds. To do this, use the
`--append` flag of `docker buildx create`.

First, create your builder with explicit support for a single architecture, for
example `amd64`:

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/amd64 \
  --node=builder-amd64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```

This creates a Buildx builder named `kube`, containing a single builder node
`builder-amd64`. Note that the Buildx concept of a node isn't the same as the
Kubernetes concept of a node. A Buildx node in this case could connect multiple
Kubernetes nodes of the same architecture together.

With the `kube` builder created, you can now introduce another architecture into
the mix using `--append`. For example, to add `arm64`:

```console
$ docker buildx create \
  --append \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/arm64 \
  --node=builder-arm64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```

You can repeat this process for any other architectures that you want to
support.

## Rootless mode

The Kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).

To turn it on in your cluster, you can use the `rootless=true` driver option:

```console
$ docker buildx create \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,rootless=true
```

This will create your pods without `securityContext.privileged`.

Requires Kubernetes version 1.19 or later. Using Ubuntu as the host kernel is
recommended.

## Guide: Creating a Buildx builder in Kubernetes

This guide shows you how to:

- Create a namespace for your Buildx resources
- Create a Kubernetes builder
- List the available builders
- Build an image using your Kubernetes builders

Prerequisites:

- You have an existing Kubernetes cluster. If you don't already have one, you
  can follow along by installing [minikube](https://minikube.sigs.k8s.io/docs/).
- The cluster you want to connect to is accessible via the `kubectl` command,
  with the `KUBECONFIG` environment variable
  [set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
  if necessary.

1. Create a `buildkit` namespace.

   Creating a separate namespace helps keep your Buildx resources separate from
   other resources in the cluster.

   ```console
   $ kubectl create namespace buildkit
   namespace/buildkit created
   ```

2. Create a new Buildx builder with the Kubernetes driver:

   ```console
   # Remember to specify the namespace in driver options
   $ docker buildx create \
     --bootstrap \
     --name=kube \
     --driver=kubernetes \
     --driver-opt=namespace=buildkit
   ```

3. List available Buildx builders using `docker buildx ls`:

   ```console
   $ docker buildx ls
   NAME/NODE                DRIVER/ENDPOINT  STATUS   PLATFORMS
   kube                     kubernetes
     kube0-6977cdcb75-k9h9m                  running  linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
   default *                docker
     default                default          running  linux/amd64, linux/386
   ```

4. Inspect the running pods created by the Buildx driver with `kubectl`.

   ```console
   $ kubectl -n buildkit get deployments
   NAME    READY   UP-TO-DATE   AVAILABLE   AGE
   kube0   1/1     1            1           32s

   $ kubectl -n buildkit get pods
   NAME                     READY   STATUS    RESTARTS   AGE
   kube0-6977cdcb75-k9h9m   1/1     Running   0          32s
   ```

   The buildx driver creates the necessary resources on your cluster in the
   specified namespace (in this case, `buildkit`), while keeping your driver
   configuration locally.

5. Use your new builder by including the `--builder` flag when running buildx
   commands. For example:

   ```console
   # Replace <registry> with your Docker username
   # and <image> with the name of the image you want to build
   docker buildx build . \
     --builder=kube \
     -t <registry>/<image> \
     --push
   ```

That's it! You've now built an image from a Kubernetes pod, using Buildx!

## Further reading

For more information on the Kubernetes driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

# Remote driver

The Buildx remote driver allows for more complex custom build workloads,
allowing you to connect to externally managed BuildKit instances. This is useful
for scenarios that require manual management of the BuildKit daemon, or where a
BuildKit daemon is exposed from another source.

## Synopsis

```console
$ docker buildx create \
  --name remote \
  --driver remote \
  tcp://localhost:1234
```

The following table describes the available driver-specific options that you can
pass to `--driver-opt`:

| Parameter    | Value  | Default            | Description                                                |
| ------------ | ------ | ------------------ | ---------------------------------------------------------- |
| `key`        | String |                    | Sets the TLS client key.                                   |
| `cert`       | String |                    | Sets the TLS client certificate to present to `buildkitd`. |
| `cacert`     | String |                    | Sets the TLS certificate authority used for validation.    |
| `servername` | String | Endpoint hostname. | Sets the TLS server name used in requests.                 |
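
For instance, a sketch of connecting to a TLS-secured `buildkitd` endpoint; the
certificate paths and the hostname are placeholders:

```console
$ docker buildx create \
  --name remote-tls \
  --driver remote \
  --driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem \
  tcp://buildkitd.example.com:1234
```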

## Guide: Remote BuildKit over Unix sockets

This guide shows you how to create a setup with a BuildKit daemon listening on a
Unix socket, and have Buildx connect through it.

1. Ensure that [BuildKit](https://github.com/moby/buildkit) is installed.

   For example, you can launch an instance of buildkitd with:

   ```console
   $ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
   ```

   Alternatively,
   [see here](https://github.com/moby/buildkit/blob/master/docs/rootless.md) for
   running buildkitd in rootless mode or
   [here](https://github.com/moby/buildkit/tree/master/examples/systemd) for
   examples of running it as a systemd service.

2. Check that you have a Unix socket that you can connect to.

   ```console
   $ ls -lh /home/user/buildkitd.sock
   srw-rw---- 1 root user 0 May 5 11:04 /home/user/buildkitd.sock
   ```

3. Connect Buildx to it using the remote driver:

   ```console
   $ docker buildx create \
     --name remote-unix \
     --driver remote \
     unix://$HOME/buildkitd.sock
   ```

4. List available builders with `docker buildx ls`. You should then see
   `remote-unix` among them:

   ```console
   $ docker buildx ls
   NAME/NODE      DRIVER/ENDPOINT                  STATUS   PLATFORMS
   remote-unix    remote
     remote-unix0 unix:///home/.../buildkitd.sock  running  linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
   default *      docker
     default      default                          running  linux/amd64, linux/386
   ```

You can switch to this new builder as the default using
`docker buildx use remote-unix`, or specify it per build using `--builder`:

```console
$ docker buildx build . --builder=remote-unix -t test --load
```

Remember that you need to use the `--load` flag if you want to load the build
result into the Docker daemon.

## Guide: Remote BuildKit in Docker container

This guide will show you how to create a setup similar to the `docker-container`
driver, by manually booting a BuildKit Docker container and connecting to it
using the Buildx remote driver. This procedure will manually create a container
and access it via its exposed port. (You'd probably be better off just using the
`docker-container` driver that connects to BuildKit through the Docker daemon,
but this is for illustration purposes.)

1. Generate certificates for BuildKit.

   You can use the
   [create-certs.sh](https://github.com/moby/buildkit/v0.10.3/master/examples/kubernetes/create-certs.sh)
   script as a starting point. Note that while it's possible to expose BuildKit
   over TCP without using TLS, it's not recommended. Doing so allows arbitrary
   access to BuildKit without credentials.

2. With certificates generated in `.certs/`, start up the container:

   ```console
   $ docker run -d --rm \
     --name=remote-buildkitd \
     --privileged \
     -p 1234:1234 \
     -v $PWD/.certs:/etc/buildkit/certs \
     moby/buildkit:latest \
     --addr tcp://0.0.0.0:1234 \
     --tlscacert /etc/buildkit/certs/ca.pem \
     --tlscert /etc/buildkit/certs/daemon-cert.pem \
     --tlskey /etc/buildkit/certs/daemon-key.pem
   ```

   This command starts a BuildKit container and exposes the daemon's port 1234
   to localhost.

3. Connect to this running container using Buildx:

   ```console
   $ docker buildx create \
     --name remote-container \
     --driver remote \
     --driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem,servername=... \
     tcp://localhost:1234
   ```

   Alternatively, use the `docker-container://` URL scheme to connect to the
   BuildKit container without specifying a port:

   ```console
   $ docker buildx create \
     --name remote-container \
     --driver remote \
     docker-container://remote-container
   ```

## Guide: Remote BuildKit in Kubernetes

This guide will show you how to create a setup similar to the `kubernetes`
driver by manually creating a BuildKit `Deployment`. While the `kubernetes`
driver will do this under-the-hood, it might sometimes be desirable to scale
BuildKit manually. Additionally, when executing builds from inside Kubernetes
pods, the Buildx builder will need to be recreated from within each pod or
copied between them.

1. Create a Kubernetes deployment of `buildkitd`, as per the instructions
   [here](https://github.com/moby/buildkit/tree/master/examples/kubernetes).

   Following the guide, create certificates for the BuildKit daemon and client
   using
   [create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh),
   and create a deployment of BuildKit pods with a service that connects to
   them.

2. Assuming that the service is called `buildkitd`, create a remote builder in
   Buildx, ensuring that the listed certificate files are present:

   ```console
   $ docker buildx create \
     --name remote-kubernetes \
     --driver remote \
     --driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem \
     tcp://buildkitd.default.svc:1234
   ```

Note that this will only work internally, within the cluster, since the BuildKit
setup guide only creates a ClusterIP service. To configure the builder to be
accessible remotely, you can use an appropriately configured ingress, which is
outside the scope of this guide.

To access the service remotely, use the port forwarding mechanism of `kubectl`:

```console
$ kubectl port-forward svc/buildkitd 1234:1234
```

Then you can point the remote driver at `tcp://localhost:1234`.

Alternatively, you can use the `kube-pod://` URL scheme to connect directly to a
BuildKit pod through the Kubernetes API. Note that this method only connects to
a single pod in the deployment:

```console
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
```
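
A builder can then be created against one of the listed pod names. A sketch;
`<pod-name>` is a placeholder for a name returned by the command above:

```console
$ docker buildx create \
  --name remote-kube-pod \
  --driver remote \
  kube-pod://<pod-name>
```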